Andrea Vedaldi
Horst Bischof
Thomas Brox
Jan-Michael Frahm (Eds.)
LNCS 12371

Computer Vision –
ECCV 2020
16th European Conference
Glasgow, UK, August 23–28, 2020
Proceedings, Part XXVI
Lecture Notes in Computer Science 12371

Founding Editors
Gerhard Goos
Karlsruhe Institute of Technology, Karlsruhe, Germany
Juris Hartmanis
Cornell University, Ithaca, NY, USA

Editorial Board Members


Elisa Bertino
Purdue University, West Lafayette, IN, USA
Wen Gao
Peking University, Beijing, China
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Gerhard Woeginger
RWTH Aachen, Aachen, Germany
Moti Yung
Columbia University, New York, NY, USA
More information about this series at http://www.springer.com/series/7412
Andrea Vedaldi · Horst Bischof · Thomas Brox · Jan-Michael Frahm (Eds.)


Computer Vision –
ECCV 2020
16th European Conference
Glasgow, UK, August 23–28, 2020
Proceedings, Part XXVI

Editors

Andrea Vedaldi
University of Oxford
Oxford, UK

Horst Bischof
Graz University of Technology
Graz, Austria

Thomas Brox
University of Freiburg
Freiburg im Breisgau, Germany

Jan-Michael Frahm
University of North Carolina at Chapel Hill
Chapel Hill, NC, USA

ISSN 0302-9743
ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-030-58573-0
ISBN 978-3-030-58574-7 (eBook)
https://doi.org/10.1007/978-3-030-58574-7
LNCS Sublibrary: SL6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics

© Springer Nature Switzerland AG 2020


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, expressed or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword

Hosting the European Conference on Computer Vision (ECCV 2020) was certainly an
exciting journey. From the 2016 plan to hold it at the Edinburgh International
Conference Centre (hosting 1,800 delegates) to the 2018 plan to hold it at Glasgow’s
Scottish Exhibition Centre (up to 6,000 delegates), we finally ended up moving
online because of the COVID-19 outbreak. While the online format may have drawn
fewer delegates than expected, ECCV 2020 still had over 3,100 registered
participants.
Although online, the conference delivered most of the activities expected at a
face-to-face conference: peer-reviewed papers, industrial exhibitors, demonstrations,
and messaging between delegates. In addition to the main technical sessions, the
conference included a strong program of satellite events with 16 tutorials and 44
workshops.
Furthermore, the online conference format enabled new conference features. Every
paper had an associated teaser video and a longer full presentation video. Along with
the papers and slides from the videos, all these materials were available the week before
the conference. This allowed delegates to become familiar with the paper content and
be ready for the live interaction with the authors during the conference week. The live
event consisted of brief presentations by the oral and spotlight authors and industrial
sponsors. Question and answer sessions for all papers were timed to occur twice so
delegates from around the world had convenient access to the authors.
As with ECCV 2018, authors’ draft versions of the papers appeared online with
open access, now on both the Computer Vision Foundation (CVF) and the European
Computer Vision Association (ECVA) websites. An archival publication arrangement
was put in place with the cooperation of Springer. SpringerLink hosts the final version
of the papers with further improvements, such as activating reference links and sup-
plementary materials. These two approaches benefit all potential readers: a version
available freely for all researchers, and an authoritative and citable version with
additional benefits for SpringerLink subscribers. We thank Alfred Hofmann and
Aliaksandr Birukou from Springer for helping to negotiate this agreement, which we
expect will continue for future versions of ECCV.

August 2020

Vittorio Ferrari
Bob Fisher
Cordelia Schmid
Emanuele Trucco
Preface

Welcome to the proceedings of the European Conference on Computer Vision
(ECCV 2020). This is a unique edition of ECCV in many ways. Due to the COVID-19
pandemic, this is the first time the conference was held online, in a virtual format.
It was also the first time the conference relied exclusively on the Open Review
platform to manage the review process. Despite these challenges, ECCV is thriving.
The conference received 5,150 valid paper submissions, of which 1,360 were accepted
for publication (27%) and, of those, 160 were presented as spotlights (3%) and 104
as orals (2%). This amounts to more than twice the number of submissions to ECCV 2018
(2,439). Furthermore, CVPR, the largest conference on computer vision, received
5,850 submissions this year, meaning that ECCV is now 87% the size of CVPR in
terms of submissions. By comparison, in 2018 the size of ECCV was only 73% of
CVPR.
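As a quick sanity check, the headline statistics above are mutually consistent. The following sketch recomputes the quoted rates; every figure is taken verbatim from this preface:

```python
# Recompute the acceptance statistics quoted in the preface.
submissions = 5150   # valid submissions to ECCV 2020
accepted = 1360      # papers accepted for publication
spotlights = 160
orals = 104

accept_rate = accepted / submissions       # ~26.4%, quoted as 27% (rounded)
spotlight_rate = spotlights / submissions  # ~3.1%, quoted as 3%
oral_rate = orals / submissions            # ~2.0%, quoted as 2%
growth = submissions / 2439                # vs. ECCV 2018: more than 2x
cvpr_ratio = submissions / 5850            # ~0.88 of CVPR's submissions

print(f"accept rate: {accept_rate:.1%}, growth vs. 2018: {growth:.2f}x")
```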
The review model was similar to previous editions of ECCV; in particular, it was
double blind in the sense that the authors did not know the name of the reviewers and
vice versa. Furthermore, each conference submission was held confidentially, and was
only publicly revealed if and once accepted for publication. Each paper received at least
three reviews, totalling more than 15,000 reviews. Handling the review process at this
scale was a significant challenge. In order to ensure that each submission received
reviews that were as fair and high quality as possible, we recruited 2,830 reviewers (a 130%
increase with reference to 2018) and 207 area chairs (a 60% increase). The area chairs
were selected based on their technical expertise and reputation, largely among people
that served as area chair in previous top computer vision and machine learning con-
ferences (ECCV, ICCV, CVPR, NeurIPS, etc.). Reviewers were similarly invited from
previous conferences. We also encouraged experienced area chairs to suggest addi-
tional chairs and reviewers in the initial phase of recruiting.
Despite doubling the number of submissions, the reviewer load was slightly reduced
from 2018, from a maximum of 8 papers down to 7 (with some reviewers offering to
handle 6 papers plus an emergency review). The area chair load increased slightly,
from 18 papers on average to 22 papers on average.
Conflicts of interest between authors, area chairs, and reviewers were handled lar-
gely automatically by the Open Review platform via their curated list of user profiles.
Many authors submitting to ECCV already had a profile in Open Review. We set a
paper registration deadline one week before the paper submission deadline in order to
encourage all missing authors to register and create their Open Review profiles in good
time (in practice, we allowed authors to create/change papers arbitrarily until the
submission deadline). Except for minor issues with users creating duplicate profiles,
this allowed us to easily and quickly identify institutional conflicts, and avoid them,
while matching papers to area chairs and reviewers.
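The institutional-conflict check described above can be illustrated with a small sketch. The profile layout and helper functions below are hypothetical, chosen only for illustration; Open Review's real profiles and conflict rules are considerably richer:

```python
# Hypothetical sketch of institutional conflict detection between a paper's
# authors and a candidate reviewer, approximating "same institution" by
# shared email domain. (Illustrative only, not the Open Review schema.)

def domains(profile):
    """Extract institutional email domains from a user profile dict."""
    return {email.split("@")[1] for email in profile["emails"]}

def has_conflict(author_profiles, reviewer_profile):
    """A reviewer conflicts with a paper if any author shares a domain."""
    reviewer_domains = domains(reviewer_profile)
    return any(domains(a) & reviewer_domains for a in author_profiles)

authors = [{"emails": ["alice@robots.ox.ac.uk"]},
           {"emails": ["bob@tugraz.at"]}]
reviewer = {"emails": ["carol@robots.ox.ac.uk", "carol@gmail.com"]}
print(has_conflict(authors, reviewer))  # True: shared robots.ox.ac.uk domain
```

A production system would also normalize subdomains and ignore free webmail providers; the sketch deliberately omits both.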
Papers were matched to area chairs based on: an affinity score computed by the
Open Review platform, which is based on paper titles and abstracts, and an affinity
score computed by the Toronto Paper Matching System (TPMS), which is based on the
paper’s full text, the area chair bids for individual papers, load balancing, and conflict
avoidance. Open Review provides the program chairs a convenient web interface to
experiment with different configurations of the matching algorithm. The chosen
configuration resulted in about 50% of the assigned papers being highly ranked by the area
chair bids, and 50% being ranked in the middle, with very few low bids assigned.
Assignments to reviewers were similar, with two differences. First, there was a
maximum of 7 papers assigned to each reviewer. Second, area chairs recommended up
to seven reviewers per paper, providing another highly weighted term in the affinity
scores used for matching.
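The matching described above combines affinity scores, bids, load limits, and conflict avoidance. As a hedged illustration, a greedy sketch of such an assignment is given below; the actual Open Review and TPMS matchers solve a global optimization rather than this one-pass heuristic, and all names and scores here are invented:

```python
# Greedy sketch of affinity-based paper-to-reviewer assignment with a
# per-reviewer load cap and conflict avoidance. Illustrative only; real
# matchers optimize the assignment globally (e.g. via min-cost flow).

def assign(papers, reviewers, affinity, conflicts, max_load=7):
    """affinity[(p, r)] -> score; conflicts is a set of (p, r) pairs."""
    load = {r: 0 for r in reviewers}
    assignment = {}
    for p in papers:
        candidates = [r for r in reviewers
                      if (p, r) not in conflicts and load[r] < max_load]
        best = max(candidates, key=lambda r: affinity.get((p, r), 0.0))
        assignment[p] = best
        load[best] += 1
    return assignment

affinity = {("p1", "r1"): 0.9, ("p1", "r2"): 0.4,
            ("p2", "r1"): 0.8, ("p2", "r2"): 0.7}
out = assign(["p1", "p2"], ["r1", "r2"], affinity,
             conflicts={("p2", "r1")}, max_load=1)
print(out)  # p1 -> r1 (highest affinity); p2 -> r2 (r1 is conflicted)
```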
The assignment of papers to area chairs was smooth. However, it was more difficult
to find suitable reviewers for all papers. The ratio of 5.6 papers per reviewer, against a
maximum load of 7 (part of which was reserved for emergency reviews), left little
wiggle room to also satisfy conflict and expertise constraints. We
received some complaints from reviewers who did not feel qualified to review specific
papers and we reassigned them wherever possible. However, the large scale of the
conference, the many constraints, and the fact that a large fraction of such complaints
arrived very late in the review process made this process very difficult and not all
complaints could be addressed.
Reviewers had six weeks to complete their assignments. Possibly due to COVID-19
or the fact that the NeurIPS deadline was moved closer to the review deadline, a record
30% of the reviews were still missing after the deadline. By comparison, ECCV 2018
experienced only 10% missing reviews at this stage of the process. In the subsequent
week, area chairs chased the missing reviews intensely, found replacement reviewers in
their own team, and managed to reach 10% missing reviews. Eventually, we could
provide almost all reviews (more than 99.9%) with a delay of only a couple of days on
the initial schedule, through significant use of emergency reviews. If this trend is
confirmed, it may become a major challenge to run a smooth review process in future
editions of ECCV. The community must reconsider how it balances the time spent
writing papers (the number of submissions increased substantially despite COVID-19)
against the time spent reviewing them (the number of reviews delivered on time
decreased substantially, presumably due to COVID-19 or the NeurIPS deadline). With
this imbalance, the peer-review system that ensures the quality of our top conferences
may soon break.
Reviewers submitted their reviews independently. In the reviews, they had the
opportunity to ask questions to the authors to be addressed in the rebuttal. However,
reviewers were told not to request any significant new experiment. Using the Open
Review interface, authors could provide an answer to each individual review, but were
also allowed to cross-reference reviews and responses in their answers. Rather than
PDF files, we allowed the use of formatted text for the rebuttal. The rebuttal and initial
reviews were then made visible to all reviewers and the primary area chair for a given
paper. The area chair encouraged and moderated the reviewer discussion. During the
discussions, reviewers were invited to reach a consensus and possibly adjust their
ratings as a result of the discussion and of the evidence in the rebuttal.
After the discussion period ended, most reviewers entered a final rating and rec-
ommendation, although in many cases this did not differ from their initial recom-
mendation. Based on the updated reviews and discussion, the primary area chair then
made a preliminary decision to accept or reject the paper and wrote a justification for it
(meta-review). Except for cases where the outcome of this process was absolutely clear
(as indicated by the three reviewers and primary area chairs all recommending clear
rejection), the decision was then examined and potentially challenged by a secondary
area chair. This led to further discussion and overturning a small number of preliminary
decisions. Needless to say, there was no in-person area chair meeting, which would
have been impossible due to COVID-19.
Area chairs were invited to observe the consensus of the reviewers whenever
possible and use extreme caution in overturning a clear consensus to accept or reject a
paper. If an area chair still decided to do so, she/he was asked to clearly justify it in the
meta-review and to explicitly obtain the agreement of the secondary area chair. In
practice, very few papers were rejected after being confidently accepted by the
reviewers.
This was the first time Open Review was used as the main platform to run ECCV. In
2018, the program chairs used CMT3 for the user-facing interface and Open Review
internally, for matching and conflict resolution. Since it is clearly preferable to only use
a single platform, this year we switched to using Open Review in full. The experience
was largely positive. The platform is highly-configurable, scalable, and open source.
Being written in Python, it is easy to write scripts to extract data programmatically. The
paper matching and conflict resolution algorithms and interfaces are top-notch, also due
to the excellent author profiles in the platform. Naturally, there were a few kinks along
the way due to the fact that the ECCV Open Review configuration was created from
scratch for this event and it differs in substantial ways from many other Open Review
conferences. However, the Open Review development and support team did a fantastic
job in helping us to get the configuration right and to address issues in a timely manner
as they unavoidably occurred. We cannot thank them enough for the tremendous effort
they put into this project.
Finally, we would like to thank everyone involved in making ECCV 2020 possible
in these very strange and difficult times. This starts with our authors, followed by the
area chairs and reviewers, who ran the review process at an unprecedented scale. The
whole Open Review team (and in particular Melisa Bok, Mohit Unyal, Carlos
Mondragon Chapa, and Celeste Martinez Gomez) worked incredibly hard for the entire
duration of the process. We would also like to thank René Vidal for contributing to the
adoption of Open Review. Our thanks also go to Laurent Charlin for TPMS and to the
program chairs of ICML, ICLR, and NeurIPS for cross-checking double submissions.
We thank the website chair, Giovanni Farinella, and the CPI team (in particular Ashley
Cook, Miriam Verdon, Nicola McGrane, and Sharon Kerr) for promptly adding
material to the website as needed in the various phases of the process. Finally, we thank
the publication chairs, Albert Ali Salah, Hamdi Dibeklioglu, Metehan Doyran, Henry
Howard-Jenkins, Victor Prisacariu, Siyu Tang, and Gul Varol, who managed to
compile these substantial proceedings in an exceedingly compressed schedule. We
express our thanks to the ECVA team, in particular Kristina Scherbaum for allowing
open access of the proceedings. We thank Alfred Hofmann from Springer, who again
served as the publisher. Finally, we thank the other chairs of ECCV 2020, in
particular the general chairs, for very useful feedback on the handling of the program.

August 2020

Andrea Vedaldi
Horst Bischof
Thomas Brox
Jan-Michael Frahm
Organization

General Chairs
Vittorio Ferrari Google Research, Switzerland
Bob Fisher University of Edinburgh, UK
Cordelia Schmid Google and Inria, France
Emanuele Trucco University of Dundee, UK

Program Chairs
Andrea Vedaldi University of Oxford, UK
Horst Bischof Graz University of Technology, Austria
Thomas Brox University of Freiburg, Germany
Jan-Michael Frahm University of North Carolina, USA

Industrial Liaison Chairs


Jim Ashe University of Edinburgh, UK
Helmut Grabner Zurich University of Applied Sciences, Switzerland
Diane Larlus NAVER LABS Europe, France
Cristian Novotny University of Edinburgh, UK

Local Arrangement Chairs


Yvan Petillot Heriot-Watt University, UK
Paul Siebert University of Glasgow, UK

Academic Demonstration Chair


Thomas Mensink Google Research and University of Amsterdam,
The Netherlands

Poster Chair
Stephen Mckenna University of Dundee, UK

Technology Chair
Gerardo Aragon Camarasa University of Glasgow, UK
Tutorial Chairs
Carlo Colombo University of Florence, Italy
Sotirios Tsaftaris University of Edinburgh, UK

Publication Chairs
Albert Ali Salah Utrecht University, The Netherlands
Hamdi Dibeklioglu Bilkent University, Turkey
Metehan Doyran Utrecht University, The Netherlands
Henry Howard-Jenkins University of Oxford, UK
Victor Adrian Prisacariu University of Oxford, UK
Siyu Tang ETH Zurich, Switzerland
Gul Varol University of Oxford, UK

Website Chair
Giovanni Maria Farinella University of Catania, Italy

Workshops Chairs
Adrien Bartoli University of Clermont Auvergne, France
Andrea Fusiello University of Udine, Italy

Area Chairs
Lourdes Agapito University College London, UK
Zeynep Akata University of Tübingen, Germany
Karteek Alahari Inria, France
Antonis Argyros University of Crete, Greece
Hossein Azizpour KTH Royal Institute of Technology, Sweden
Joao P. Barreto Universidade de Coimbra, Portugal
Alexander C. Berg University of North Carolina at Chapel Hill, USA
Matthew B. Blaschko KU Leuven, Belgium
Lubomir D. Bourdev WaveOne, Inc., USA
Edmond Boyer Inria, France
Yuri Boykov University of Waterloo, Canada
Gabriel Brostow University College London, UK
Michael S. Brown National University of Singapore, Singapore
Jianfei Cai Monash University, Australia
Barbara Caputo Politecnico di Torino, Italy
Ayan Chakrabarti Washington University, St. Louis, USA
Tat-Jen Cham Nanyang Technological University, Singapore
Manmohan Chandraker University of California, San Diego, USA
Rama Chellappa Johns Hopkins University, USA
Liang-Chieh Chen Google, USA
Yung-Yu Chuang National Taiwan University, Taiwan


Ondrej Chum Czech Technical University in Prague, Czech Republic
Brian Clipp Kitware, USA
John Collomosse University of Surrey and Adobe Research, UK
Jason J. Corso University of Michigan, USA
David J. Crandall Indiana University, USA
Daniel Cremers Technical University of Munich, Germany
Fabio Cuzzolin Oxford Brookes University, UK
Jifeng Dai SenseTime, SAR China
Kostas Daniilidis University of Pennsylvania, USA
Andrew Davison Imperial College London, UK
Alessio Del Bue Fondazione Istituto Italiano di Tecnologia, Italy
Jia Deng Princeton University, USA
Alexey Dosovitskiy Google, Germany
Matthijs Douze Facebook, France
Enrique Dunn Stevens Institute of Technology, USA
Irfan Essa Georgia Institute of Technology and Google, USA
Giovanni Maria Farinella University of Catania, Italy
Ryan Farrell Brigham Young University, USA
Paolo Favaro University of Bern, Switzerland
Rogerio Feris International Business Machines, USA
Cornelia Fermuller University of Maryland, College Park, USA
David J. Fleet Vector Institute, Canada
Friedrich Fraundorfer DLR, Austria
Mario Fritz CISPA Helmholtz Center for Information Security,
Germany
Pascal Fua EPFL (Swiss Federal Institute of Technology
Lausanne), Switzerland
Yasutaka Furukawa Simon Fraser University, Canada
Li Fuxin Oregon State University, USA
Efstratios Gavves University of Amsterdam, The Netherlands
Peter Vincent Gehler Amazon, USA
Theo Gevers University of Amsterdam, The Netherlands
Ross Girshick Facebook AI Research, USA
Boqing Gong Google, USA
Stephen Gould Australian National University, Australia
Jinwei Gu SenseTime Research, USA
Abhinav Gupta Facebook, USA
Bohyung Han Seoul National University, South Korea
Bharath Hariharan Cornell University, USA
Tal Hassner Facebook AI Research, USA
Xuming He Australian National University, Australia
Joao F. Henriques University of Oxford, UK
Adrian Hilton University of Surrey, UK
Minh Hoai Stony Brook, State University of New York, USA
Derek Hoiem University of Illinois Urbana-Champaign, USA
Timothy Hospedales University of Edinburgh and Samsung, UK


Gang Hua Wormpex AI Research, USA
Slobodan Ilic Siemens AG, Germany
Hiroshi Ishikawa Waseda University, Japan
Jiaya Jia The Chinese University of Hong Kong, SAR China
Hailin Jin Adobe Research, USA
Justin Johnson University of Michigan, USA
Frederic Jurie University of Caen Normandie, France
Fredrik Kahl Chalmers University, Sweden
Sing Bing Kang Zillow, USA
Gunhee Kim Seoul National University, South Korea
Junmo Kim Korea Advanced Institute of Science and Technology,
South Korea
Tae-Kyun Kim Imperial College London, UK
Ron Kimmel Technion-Israel Institute of Technology, Israel
Alexander Kirillov Facebook AI Research, USA
Kris Kitani Carnegie Mellon University, USA
Iasonas Kokkinos Ariel AI, UK
Vladlen Koltun Intel Labs, USA
Nikos Komodakis Ecole des Ponts ParisTech, France
Piotr Koniusz Australian National University, Australia
M. Pawan Kumar University of Oxford, UK
Kyros Kutulakos University of Toronto, Canada
Christoph Lampert IST Austria, Austria
Ivan Laptev Inria, France
Diane Larlus NAVER LABS Europe, France
Laura Leal-Taixe Technical University Munich, Germany
Honglak Lee Google and University of Michigan, USA
Joon-Young Lee Adobe Research, USA
Kyoung Mu Lee Seoul National University, South Korea
Seungyong Lee POSTECH, South Korea
Yong Jae Lee University of California, Davis, USA
Bastian Leibe RWTH Aachen University, Germany
Victor Lempitsky Samsung, Russia
Ales Leonardis University of Birmingham, UK
Marius Leordeanu Institute of Mathematics of the Romanian Academy,
Romania
Vincent Lepetit ENPC ParisTech, France
Hongdong Li The Australian National University, Australia
Xi Li Zhejiang University, China
Yin Li University of Wisconsin-Madison, USA
Zicheng Liao Zhejiang University, China
Jongwoo Lim Hanyang University, South Korea
Stephen Lin Microsoft Research Asia, China
Yen-Yu Lin National Chiao Tung University, Taiwan, China
Zhe Lin Adobe Research, USA
Haibin Ling Stony Brook, State University of New York, USA


Jiaying Liu Peking University, China
Ming-Yu Liu NVIDIA, USA
Si Liu Beihang University, China
Xiaoming Liu Michigan State University, USA
Huchuan Lu Dalian University of Technology, China
Simon Lucey Carnegie Mellon University, USA
Jiebo Luo University of Rochester, USA
Julien Mairal Inria, France
Michael Maire University of Chicago, USA
Subhransu Maji University of Massachusetts, Amherst, USA
Yasushi Makihara Osaka University, Japan
Jiri Matas Czech Technical University in Prague, Czech Republic
Yasuyuki Matsushita Osaka University, Japan
Philippos Mordohai Stevens Institute of Technology, USA
Vittorio Murino University of Verona, Italy
Naila Murray NAVER LABS Europe, France
Hajime Nagahara Osaka University, Japan
P. J. Narayanan International Institute of Information Technology
(IIIT), Hyderabad, India
Nassir Navab Technical University of Munich, Germany
Natalia Neverova Facebook AI Research, France
Matthias Niessner Technical University of Munich, Germany
Jean-Marc Odobez Idiap Research Institute and Swiss Federal Institute
of Technology Lausanne, Switzerland
Francesca Odone Università di Genova, Italy
Takeshi Oishi The University of Tokyo, Tokyo Institute
of Technology, Japan
Vicente Ordonez University of Virginia, USA
Manohar Paluri Facebook AI Research, USA
Maja Pantic Imperial College London, UK
In Kyu Park Inha University, South Korea
Ioannis Patras Queen Mary University of London, UK
Patrick Perez Valeo, France
Bryan A. Plummer Boston University, USA
Thomas Pock Graz University of Technology, Austria
Marc Pollefeys ETH Zurich and Microsoft MR & AI Zurich Lab,
Switzerland
Jean Ponce Inria, France
Gerard Pons-Moll MPII, Saarland Informatics Campus, Germany
Jordi Pont-Tuset Google, Switzerland
James Matthew Rehg Georgia Institute of Technology, USA
Ian Reid University of Adelaide, Australia
Olaf Ronneberger DeepMind London, UK
Stefan Roth TU Darmstadt, Germany
Bryan Russell Adobe Research, USA
Mathieu Salzmann EPFL, Switzerland


Dimitris Samaras Stony Brook University, USA
Imari Sato National Institute of Informatics (NII), Japan
Yoichi Sato The University of Tokyo, Japan
Torsten Sattler Czech Technical University in Prague, Czech Republic
Daniel Scharstein Middlebury College, USA
Bernt Schiele MPII, Saarland Informatics Campus, Germany
Julia A. Schnabel King’s College London, UK
Nicu Sebe University of Trento, Italy
Greg Shakhnarovich Toyota Technological Institute at Chicago, USA
Humphrey Shi University of Oregon, USA
Jianbo Shi University of Pennsylvania, USA
Jianping Shi SenseTime, China
Leonid Sigal University of British Columbia, Canada
Cees Snoek University of Amsterdam, The Netherlands
Richard Souvenir Temple University, USA
Hao Su University of California, San Diego, USA
Akihiro Sugimoto National Institute of Informatics (NII), Japan
Jian Sun Megvii Technology, China
Jian Sun Xi’an Jiaotong University, China
Chris Sweeney Facebook Reality Labs, USA
Yu-wing Tai Kuaishou Technology, China
Chi-Keung Tang The Hong Kong University of Science
and Technology, SAR China
Radu Timofte ETH Zurich, Switzerland
Sinisa Todorovic Oregon State University, USA
Giorgos Tolias Czech Technical University in Prague, Czech Republic
Carlo Tomasi Duke University, USA
Tatiana Tommasi Politecnico di Torino, Italy
Lorenzo Torresani Facebook AI Research and Dartmouth College, USA
Alexander Toshev Google, USA
Zhuowen Tu University of California, San Diego, USA
Tinne Tuytelaars KU Leuven, Belgium
Jasper Uijlings Google, Switzerland
Nuno Vasconcelos University of California, San Diego, USA
Olga Veksler University of Waterloo, Canada
Rene Vidal Johns Hopkins University, USA
Gang Wang Alibaba Group, China
Jingdong Wang Microsoft Research Asia, China
Yizhou Wang Peking University, China
Lior Wolf Facebook AI Research and Tel Aviv University, Israel
Jianxin Wu Nanjing University, China
Tao Xiang University of Surrey, UK
Saining Xie Facebook AI Research, USA
Ming-Hsuan Yang University of California at Merced and Google, USA
Ruigang Yang University of Kentucky, USA
Kwang Moo Yi University of Victoria, Canada


Zhaozheng Yin Stony Brook, State University of New York, USA
Chang D. Yoo Korea Advanced Institute of Science and Technology,
South Korea
Shaodi You University of Amsterdam, The Netherlands
Jingyi Yu ShanghaiTech University, China
Stella Yu University of California, Berkeley, and ICSI, USA
Stefanos Zafeiriou Imperial College London, UK
Hongbin Zha Peking University, China
Tianzhu Zhang University of Science and Technology of China, China
Liang Zheng Australian National University, Australia
Todd E. Zickler Harvard University, USA
Andrew Zisserman University of Oxford, UK

Technical Program Committee

Sathyanarayanan Samuel Albanie Pablo Arbelaez


N. Aakur Shadi Albarqouni Shervin Ardeshir
Wael Abd Almgaeed Cenek Albl Sercan O. Arik
Abdelrahman Hassan Abu Alhaija Anil Armagan
Abdelhamed Daniel Aliaga Anurag Arnab
Abdullah Abuolaim Mohammad Chetan Arora
Supreeth Achar S. Aliakbarian Federica Arrigoni
Hanno Ackermann Rahaf Aljundi Mathieu Aubry
Ehsan Adeli Thiemo Alldieck Shai Avidan
Triantafyllos Afouras Jon Almazan Angelica I. Aviles-Rivero
Sameer Agarwal Jose M. Alvarez Yannis Avrithis
Aishwarya Agrawal Senjian An Ismail Ben Ayed
Harsh Agrawal Saket Anand Shekoofeh Azizi
Pulkit Agrawal Codruta Ancuti Ioan Andrei Bârsan
Antonio Agudo Cosmin Ancuti Artem Babenko
Eirikur Agustsson Peter Anderson Deepak Babu Sam
Karim Ahmed Juan Andrade-Cetto Seung-Hwan Baek
Byeongjoo Ahn Alexander Andreopoulos Seungryul Baek
Unaiza Ahsan Misha Andriluka Andrew D. Bagdanov
Thalaiyasingam Ajanthan Dragomir Anguelov Shai Bagon
Kenan E. Ak Rushil Anirudh Yuval Bahat
Emre Akbas Michel Antunes Junjie Bai
Naveed Akhtar Oisin Mac Aodha Song Bai
Derya Akkaynak Srikar Appalaraju Xiang Bai
Yagiz Aksoy Relja Arandjelovic Yalong Bai
Ziad Al-Halah Nikita Araslanov Yancheng Bai
Xavier Alameda-Pineda Andre Araujo Peter Bajcsy
Jean-Baptiste Alayrac Helder Araujo Slawomir Bak
xviii Organization

Mahsa Baktashmotlagh Florian Bernard Pradeep Buddharaju


Kavita Bala Stefano Berretti Uta Buechler
Yogesh Balaji Marcelo Bertalmio Mai Bui
Guha Balakrishnan Gedas Bertasius Tu Bui
V. N. Balasubramanian Cigdem Beyan Adrian Bulat
Federico Baldassarre Lucas Beyer Giedrius T. Burachas
Vassileios Balntas Vijayakumar Bhagavatula Elena Burceanu
Shurjo Banerjee Arjun Nitin Bhagoji Xavier P. Burgos-Artizzu
Aayush Bansal Apratim Bhattacharyya Kaylee Burns
Ankan Bansal Binod Bhattarai Andrei Bursuc
Jianmin Bao Sai Bi Benjamin Busam
Linchao Bao Jia-Wang Bian Wonmin Byeon
Wenbo Bao Simone Bianco Zoya Bylinskii
Yingze Bao Adel Bibi Sergi Caelles
Akash Bapat Tolga Birdal Jianrui Cai
Md Jawadul Hasan Bappy Tom Bishop Minjie Cai
Fabien Baradel Soma Biswas Yujun Cai
Lorenzo Baraldi Mårten Björkman Zhaowei Cai
Daniel Barath Volker Blanz Zhipeng Cai
Adrian Barbu Vishnu Boddeti Juan C. Caicedo
Kobus Barnard Navaneeth Bodla Simone Calderara
Nick Barnes Simion-Vlad Bogolin Necati Cihan Camgoz
Francisco Barranco Xavier Boix Dylan Campbell
Jonathan T. Barron Piotr Bojanowski Octavia Camps
Arslan Basharat Timo Bolkart Jiale Cao
Chaim Baskin Guido Borghi Kaidi Cao
Anil S. Baslamisli Larbi Boubchir Liangliang Cao
Jorge Batista Guillaume Bourmaud Xiangyong Cao
Kayhan Batmanghelich Adrien Bousseau Xiaochun Cao
Konstantinos Batsos Thierry Bouwmans Yang Cao
David Bau Richard Bowden Yu Cao
Luis Baumela Hakan Boyraz Yue Cao
Christoph Baur Mathieu Brédif Zhangjie Cao
Eduardo Samarth Brahmbhatt Luca Carlone
Bayro-Corrochano Steve Branson Mathilde Caron
Paul Beardsley Nikolas Brasch Dan Casas
Jan Bednařík Biagio Brattoli
Oscar Beijbom Ernesto Brau Umberto Castellani
Philippe Bekaert Toby P. Breckon Lluis Castrejon
Esube Bekele Francois Bremond Jacopo Cavazza
Vasileios Belagiannis Jesus Briales Fabio Cermelli
Ohad Ben-Shahar Sofia Broomé Hakan Cevikalp
Abhijit Bendale Marcus A. Brubaker Menglei Chai
Róger Bermúdez-Chacón Luc Brun Ishani Chakraborty
Maxim Berman Silvia Bucci Rudrasis Chakraborty
Jesus Bermudez-Cameo Shyamal Buch

Kwok-Ping Chan Weifeng Chen Nam Ik Cho


Siddhartha Chandra Weikai Chen Tim Cho
Sharat Chandran Xi Chen Tae Eun Choe
Arjun Chandrasekaran Xiaohan Chen Chiho Choi
Angel X. Chang Xiaozhi Chen Edward Choi
Che-Han Chang Xilin Chen Inchang Choi
Hong Chang Xingyu Chen Jinsoo Choi
Hyun Sung Chang Xinlei Chen Jonghyun Choi
Hyung Jin Chang Xinyun Chen Jongwon Choi
Jianlong Chang Yi-Ting Chen Yukyung Choi
Ju Yong Chang Yilun Chen Hisham Cholakkal
Ming-Ching Chang Ying-Cong Chen Eunji Chong
Simyung Chang Yinpeng Chen Jaegul Choo
Xiaojun Chang Yiran Chen Christopher Choy
Yu-Wei Chao Yu Chen Hang Chu
Devendra S. Chaplot Yu-Sheng Chen Peng Chu
Arslan Chaudhry Yuhua Chen Wen-Sheng Chu
Rizwan A. Chaudhry Yun-Chun Chen Albert Chung
Can Chen Yunpeng Chen Joon Son Chung
Chang Chen Yuntao Chen Hai Ci
Chao Chen Zhuoyuan Chen Safa Cicek
Chen Chen Zitian Chen Ramazan G. Cinbis
Chu-Song Chen Anchieh Cheng Arridhana Ciptadi
Dapeng Chen Bowen Cheng Javier Civera
Dong Chen Erkang Cheng James J. Clark
Dongdong Chen Gong Cheng Ronald Clark
Guanying Chen Guangliang Cheng Felipe Codevilla
Hongge Chen Jingchun Cheng Michael Cogswell
Hsin-yi Chen Jun Cheng Andrea Cohen
Huaijin Chen Li Cheng Maxwell D. Collins
Hwann-Tzong Chen Ming-Ming Cheng Carlo Colombo
Jianbo Chen Yu Cheng Yang Cong
Jianhui Chen Ziang Cheng Adria R. Continente
Jiansheng Chen Anoop Cherian Marcella Cornia
Jiaxin Chen Dmitry Chetverikov John Richard Corring
Jie Chen Ngai-man Cheung Darren Cosker
Jun-Cheng Chen William Cheung Dragos Costea
Kan Chen Ajad Chhatkuli Garrison W. Cottrell
Kevin Chen Naoki Chiba Florent Couzinie-Devy
Lin Chen Benjamin Chidester Marco Cristani
Long Chen Han-pang Chiu Ioana Croitoru
Min-Hung Chen Mang Tik Chiu James L. Crowley
Qifeng Chen Wei-Chen Chiu Jiequan Cui
Shi Chen Donghyeon Cho Zhaopeng Cui
Shixing Chen Hojin Cho Ross Cutler
Tianshui Chen Minsu Cho Antonio D’Innocente

Rozenn Dahyot Mingyu Ding Jan Ernst


Bo Dai Xinghao Ding Sergio Escalera
Dengxin Dai Zhengming Ding Francisco Escolano
Hang Dai Robert DiPietro Victor Escorcia
Longquan Dai Cosimo Distante Carlos Esteves
Shuyang Dai Ajay Divakaran Francisco J. Estrada
Xiyang Dai Mandar Dixit Bin Fan
Yuchao Dai Abdelaziz Djelouah Chenyou Fan
Adrian V. Dalca Thanh-Toan Do Deng-Ping Fan
Dima Damen Jose Dolz Haoqi Fan
Bharath B. Damodaran Bo Dong Hehe Fan
Kristin Dana Chao Dong Heng Fan
Martin Danelljan Jiangxin Dong Kai Fan
Zheng Dang Weiming Dong Lijie Fan
Zachary Alan Daniels Weisheng Dong Linxi Fan
Donald G. Dansereau Xingping Dong Quanfu Fan
Abhishek Das Xuanyi Dong Shaojing Fan
Samyak Datta Yinpeng Dong Xiaochuan Fan
Achal Dave Gianfranco Doretto Xin Fan
Titas De Hazel Doughty Yuchen Fan
Rodrigo de Bem Hassen Drira Sean Fanello
Teo de Campos Bertram Drost Hao-Shu Fang
Raoul de Charette Dawei Du Haoyang Fang
Shalini De Mello Ye Duan Kuan Fang
Joseph DeGol Yueqi Duan Yi Fang
Herve Delingette Abhimanyu Dubey Yuming Fang
Haowen Deng Anastasia Dubrovina Azade Farshad
Jiankang Deng Stefan Duffner Alireza Fathi
Weijian Deng Chi Nhan Duong Raanan Fattal
Zhiwei Deng Thibaut Durand Joao Fayad
Joachim Denzler Zoran Duric Xiaohan Fei
Konstantinos G. Derpanis Iulia Duta Christoph Feichtenhofer
Aditya Deshpande Debidatta Dwibedi Michael Felsberg
Frederic Devernay Benjamin Eckart Chen Feng
Somdip Dey Marc Eder Jiashi Feng
Arturo Deza Marzieh Edraki Junyi Feng
Abhinav Dhall Alexei A. Efros Mengyang Feng
Helisa Dhamo Kiana Ehsani Qianli Feng
Vikas Dhiman Hazm Kemal Ekenel Zhenhua Feng
Fillipe Dias Moreira James H. Elder Michele Fenzi
de Souza Mohamed Elgharib Andras Ferencz
Ali Diba Shireen Elhabian Martin Fergie
Ferran Diego Ehsan Elhamifar Basura Fernando
Guiguang Ding Mohamed Elhoseiny Ethan Fetaya
Henghui Ding Ian Endres Michael Firman
Jian Ding N. Benjamin Erichson John W. Fisher

Matthew Fisher Jin Gao Dong Gong


Boris Flach Jiyang Gao Ke Gong
Corneliu Florea Junbin Gao Mingming Gong
Wolfgang Foerstner Katelyn Gao Abel Gonzalez-Garcia
David Fofi Lin Gao Ariel Gordon
Gian Luca Foresti Mingfei Gao Daniel Gordon
Per-Erik Forssen Ruiqi Gao Paulo Gotardo
David Fouhey Ruohan Gao Venu Madhav Govindu
Katerina Fragkiadaki Shenghua Gao Ankit Goyal
Victor Fragoso Yuan Gao Priya Goyal
Jean-Sébastien Franco Yue Gao Raghav Goyal
Ohad Fried Noa Garcia Benjamin Graham
Iuri Frosio Alberto Garcia-Garcia Douglas Gray
Cheng-Yang Fu Guillermo Brent A. Griffin
Huazhu Fu Garcia-Hernando Etienne Grossmann
Jianlong Fu Jacob R. Gardner David Gu
Jingjing Fu Animesh Garg Jiayuan Gu
Xueyang Fu Kshitiz Garg Jiuxiang Gu
Yanwei Fu Rahul Garg Lin Gu
Ying Fu Ravi Garg Qiao Gu
Yun Fu Philip N. Garner Shuhang Gu
Olac Fuentes Kirill Gavrilyuk Jose J. Guerrero
Kent Fujiwara Paul Gay Paul Guerrero
Takuya Funatomi Shiming Ge Jie Gui
Christopher Funk Weifeng Ge Jean-Yves Guillemaut
Thomas Funkhouser Baris Gecer Riza Alp Guler
Antonino Furnari Xin Geng Erhan Gundogdu
Ryo Furukawa Kyle Genova Fatma Guney
Erik Gärtner Stamatios Georgoulis Guodong Guo
Raghudeep Gadde Bernard Ghanem Kaiwen Guo
Matheus Gadelha Michael Gharbi Qi Guo
Vandit Gajjar Kamran Ghasedi Sheng Guo
Trevor Gale Golnaz Ghiasi Shi Guo
Juergen Gall Arnab Ghosh Tiantong Guo
Mathias Gallardo Partha Ghosh Xiaojie Guo
Guillermo Gallego Silvio Giancola Yijie Guo
Orazio Gallo Andrew Gilbert Yiluan Guo
Chuang Gan Rohit Girdhar Yuanfang Guo
Zhe Gan Xavier Giro-i-Nieto Yulan Guo
Madan Ravi Ganesh Thomas Gittings Agrim Gupta
Aditya Ganeshan Ioannis Gkioulekas Ankush Gupta
Siddha Ganju Clement Godard Mohit Gupta
Bin-Bin Gao Vaibhava Goel Saurabh Gupta
Changxin Gao Bastian Goldluecke Tanmay Gupta
Feng Gao Lluis Gomez Danna Gurari
Hongchang Gao Nuno Gonçalves Abner Guzman-Rivera

JunYoung Gwak Zhihai He Ronghang Hu


Michael Gygli Chinmay Hegde Xiaowei Hu
Jung-Woo Ha Janne Heikkila Yinlin Hu
Simon Hadfield Mattias P. Heinrich Yuan-Ting Hu
Isma Hadji Stéphane Herbin Zhe Hu
Bjoern Haefner Alexander Hermans Binh-Son Hua
Taeyoung Hahn Luis Herranz Yang Hua
Levente Hajder John R. Hershey Bingyao Huang
Peter Hall Aaron Hertzmann Di Huang
Emanuela Haller Roei Herzig Dong Huang
Stefan Haller Anders Heyden Fay Huang
Bumsub Ham Steven Hickson Haibin Huang
Abdullah Hamdi Otmar Hilliges Haozhi Huang
Dongyoon Han Tomas Hodan Heng Huang
Hu Han Judy Hoffman Huaibo Huang
Jungong Han Michael Hofmann Jia-Bin Huang
Junwei Han Yannick Hold-Geoffroy Jing Huang
Kai Han Namdar Homayounfar Jingwei Huang
Tian Han Sina Honari Kaizhu Huang
Xiaoguang Han Richang Hong Lei Huang
Xintong Han Seunghoon Hong Qiangui Huang
Yahong Han Xiaopeng Hong Qiaoying Huang
Ankur Handa Yi Hong Qingqiu Huang
Zekun Hao Hidekata Hontani Qixing Huang
Albert Haque Anthony Hoogs Shaoli Huang
Tatsuya Harada Yedid Hoshen Sheng Huang
Mehrtash Harandi Mir Rayat Imtiaz Hossain Siyuan Huang
Adam W. Harley Junhui Hou Weilin Huang
Mahmudul Hasan Le Hou Wenbing Huang
Atsushi Hashimoto Lu Hou Xiangru Huang
Ali Hatamizadeh Tingbo Hou Xun Huang
Munawar Hayat Wei-Lin Hsiao Yan Huang
Dongliang He Cheng-Chun Hsu Yifei Huang
Jingrui He Gee-Sern Jison Hsu Yue Huang
Junfeng He Kuang-jui Hsu Zhiwu Huang
Kaiming He Changbo Hu Zilong Huang
Kun He Di Hu Minyoung Huh
Lei He Guosheng Hu Zhuo Hui
Pan He Han Hu Matthias B. Hullin
Ran He Hao Hu Martin Humenberger
Shengfeng He Hexiang Hu Wei-Chih Hung
Tong He Hou-Ning Hu Zhouyuan Huo
Weipeng He Jie Hu Junhwa Hur
Xuming He Junlin Hu Noureldien Hussein
Yang He Nan Hu Jyh-Jing Hwang
Yihui He Ping Hu Seong Jae Hwang

Sung Ju Hwang Lai Jiang Christopher Kanan


Ichiro Ide Li Jiang Kenichi Kanatani
Ivo Ihrke Lu Jiang Angjoo Kanazawa
Daiki Ikami Ming Jiang Atsushi Kanehira
Satoshi Ikehata Peng Jiang Takuhiro Kaneko
Nazli Ikizler-Cinbis Shuqiang Jiang Asako Kanezaki
Sunghoon Im Wei Jiang Bingyi Kang
Yani Ioannou Xudong Jiang Di Kang
Radu Tudor Ionescu Zhuolin Jiang Sunghun Kang
Umar Iqbal Jianbo Jiao Zhao Kang
Go Irie Zequn Jie Vadim Kantorov
Ahmet Iscen Dakai Jin Abhishek Kar
Md Amirul Islam Kyong Hwan Jin Amlan Kar
Vamsi Ithapu Lianwen Jin Theofanis Karaletsos
Nathan Jacobs SouYoung Jin Leonid Karlinsky
Arpit Jain Xiaojie Jin Kevin Karsch
Himalaya Jain Xin Jin Angelos Katharopoulos
Suyog Jain Nebojsa Jojic Isinsu Katircioglu
Stuart James Alexis Joly Hiroharu Kato
Won-Dong Jang Michael Jeffrey Jones Zoltan Kato
Yunseok Jang Hanbyul Joo Dotan Kaufman
Ronnachai Jaroensri Jungseock Joo Jan Kautz
Dinesh Jayaraman Kyungdon Joo Rei Kawakami
Sadeep Jayasumana Ajjen Joshi Qiuhong Ke
Suren Jayasuriya Shantanu H. Joshi Wadim Kehl
Herve Jegou Da-Cheng Juan Petr Kellnhofer
Simon Jenni Marco Körner Aniruddha Kembhavi
Hae-Gon Jeon Kevin Köser Cem Keskin
Yunho Jeon Asim Kadav Margret Keuper
Koteswar R. Jerripothula Christine Kaeser-Chen Daniel Keysers
Hueihan Jhuang Kushal Kafle Ashkan Khakzar
I-hong Jhuo Dagmar Kainmueller Fahad Khan
Dinghuang Ji Ioannis A. Kakadiaris Naeemullah Khan
Hui Ji Zdenek Kalal Salman Khan
Jingwei Ji Nima Kalantari Siddhesh Khandelwal
Pan Ji Yannis Kalantidis Rawal Khirodkar
Yanli Ji Mahdi M. Kalayeh Anna Khoreva
Baoxiong Jia Anmol Kalia Tejas Khot
Kui Jia Sinan Kalkan Parmeshwar Khurd
Xu Jia Vicky Kalogeiton Hadi Kiapour
Chiyu Max Jiang Ashwin Kalyan Joe Kileel
Haiyong Jiang Joni-kristian Kamarainen Chanho Kim
Hao Jiang Gerda Kamberova Dahun Kim
Huaizu Jiang Chandra Kambhamettu Edward Kim
Huajie Jiang Martin Kampel Eunwoo Kim
Ke Jiang Meina Kan Han-ul Kim

Hansung Kim Adam Kortylewski Xiangyuan Lan


Heewon Kim Jana Kosecka Xu Lan
Hyo Jin Kim Jean Kossaifi Charis Lanaras
Hyunwoo J. Kim Satwik Kottur Georg Langs
Jinkyu Kim Rigas Kouskouridas Oswald Lanz
Jiwon Kim Adriana Kovashka Dong Lao
Jongmin Kim Rama Kovvuri Yizhen Lao
Junsik Kim Adarsh Kowdle Agata Lapedriza
Junyeong Kim Jedrzej Kozerawski Gustav Larsson
Min H. Kim Mateusz Kozinski Viktor Larsson
Namil Kim Philipp Kraehenbuehl Katrin Lasinger
Pyojin Kim Gregory Kramida Christoph Lassner
Seon Joo Kim Josip Krapac Longin Jan Latecki
Seong Tae Kim Dmitry Kravchenko Stéphane Lathuilière
Seungryong Kim Ranjay Krishna Rynson Lau
Sungwoong Kim Pavel Krsek Hei Law
Tae Hyun Kim Alexander Krull Justin Lazarow
Vladimir Kim Jakob Kruse Svetlana Lazebnik
Won Hwa Kim Hiroyuki Kubo Hieu Le
Yonghyun Kim Hilde Kuehne Huu Le
Benjamin Kimia Jason Kuen Ngan Hoang Le
Akisato Kimura Andreas Kuhn Trung-Nghia Le
Pieter-Jan Kindermans Arjan Kuijper Vuong Le
Zsolt Kira Zuzana Kukelova Colin Lea
Itaru Kitahara Ajay Kumar Erik Learned-Miller
Hedvig Kjellstrom Amit Kumar Chen-Yu Lee
Jan Knopp Avinash Kumar Gim Hee Lee
Takumi Kobayashi Suryansh Kumar Hsin-Ying Lee
Erich Kobler Vijay Kumar Hyungtae Lee
Parker Koch Kaustav Kundu Jae-Han Lee
Reinhard Koch Weicheng Kuo Jimmy Addison Lee
Elyor Kodirov Nojun Kwak Joonseok Lee
Amir Kolaman Suha Kwak Kibok Lee
Nicholas Kolkin Junseok Kwon Kuang-Huei Lee
Dimitrios Kollias Nikolaos Kyriazis Kwonjoon Lee
Stefanos Kollias Zorah Lähner Minsik Lee
Soheil Kolouri Ankit Laddha Sang-chul Lee
Adams Wai-Kin Kong Florent Lafarge Seungkyu Lee
Naejin Kong Jean Lahoud Soochan Lee
Shu Kong Kevin Lai Stefan Lee
Tao Kong Shang-Hong Lai Taehee Lee
Yu Kong Wei-Sheng Lai Andreas Lehrmann
Yoshinori Konishi Yu-Kun Lai Jie Lei
Daniil Kononenko Iro Laina Peng Lei
Theodora Kontogianni Antony Lam Matthew Joseph Leotta
Simon Korman John Wheatley Lambert Wee Kheng Leow

Gil Levi Sheng Li Renjie Liao


Evgeny Levinkov Shiwei Li Shengcai Liao
Aviad Levis Shuang Li Shuai Liao
Jose Lezama Siyang Li Yiyi Liao
Ang Li Stan Z. Li Ser-Nam Lim
Bin Li Tianye Li Chen-Hsuan Lin
Bing Li Wei Li Chung-Ching Lin
Boyi Li Weixin Li Dahua Lin
Changsheng Li Wen Li Ji Lin
Chao Li Wenbo Li Kevin Lin
Chen Li Xiaomeng Li Tianwei Lin
Cheng Li Xin Li Tsung-Yi Lin
Chenglong Li Xiu Li Tsung-Yu Lin
Chi Li Xuelong Li Wei-An Lin
Chun-Guang Li Xueting Li Weiyao Lin
Chun-Liang Li Yan Li Yen-Chen Lin
Chunyuan Li Yandong Li Yuewei Lin
Dong Li Yanghao Li David B. Lindell
Guanbin Li Yehao Li Drew Linsley
Hao Li Yi Li Krzysztof Lis
Haoxiang Li Yijun Li Roee Litman
Hongsheng Li Yikang Li Jim Little
Hongyang Li Yining Li An-An Liu
Houqiang Li Yongjie Li Bo Liu
Huibin Li Yu Li Buyu Liu
Jia Li Yu-Jhe Li Chao Liu
Jianan Li Yunpeng Li Chen Liu
Jianguo Li Yunsheng Li Cheng-lin Liu
Junnan Li Yunzhu Li Chenxi Liu
Junxuan Li Zhe Li Dong Liu
Kai Li Zhen Li Feng Liu
Ke Li Zhengqi Li Guilin Liu
Kejie Li Zhenyang Li Haomiao Liu
Kunpeng Li Zhuwen Li Heshan Liu
Lerenhan Li Dongze Lian Hong Liu
Li Erran Li Xiaochen Lian Ji Liu
Mengtian Li Zhouhui Lian Jingen Liu
Mu Li Chen Liang Jun Liu
Peihua Li Jie Liang Lanlan Liu
Peiyi Li Ming Liang Li Liu
Ping Li Paul Pu Liang Liu Liu
Qi Li Pengpeng Liang Mengyuan Liu
Qing Li Shu Liang Miaomiao Liu
Ruiyu Li Wei Liang Nian Liu
Ruoteng Li Jing Liao Ping Liu
Shaozi Li Minghui Liao Risheng Liu

Sheng Liu Yang Long K. T. Ma


Shu Liu Charles T. Loop Ke Ma
Shuaicheng Liu Antonio Lopez Lin Ma
Sifei Liu Roberto J. Lopez-Sastre Liqian Ma
Siqi Liu Javier Lorenzo-Navarro Shugao Ma
Siying Liu Manolis Lourakis Wei-Chiu Ma
Songtao Liu Boyu Lu Xiaojian Ma
Ting Liu Canyi Lu Xingjun Ma
Tongliang Liu Feng Lu Zhanyu Ma
Tyng-Luh Liu Guoyu Lu Zheng Ma
Wanquan Liu Hongtao Lu Radek Jakob Mackowiak
Wei Liu Jiajun Lu Ludovic Magerand
Weiyang Liu Jiasen Lu Shweta Mahajan
Weizhe Liu Jiwen Lu Siddharth Mahendran
Wenyu Liu Kaiyue Lu Long Mai
Wu Liu Le Lu Ameesh Makadia
Xialei Liu Shao-Ping Lu Oscar Mendez Maldonado
Xianglong Liu Shijian Lu Mateusz Malinowski
Xiaodong Liu Xiankai Lu Yury Malkov
Xiaofeng Liu Xin Lu Arun Mallya
Xihui Liu Yao Lu Dipu Manandhar
Xingyu Liu Yiping Lu Massimiliano Mancini
Xinwang Liu Yongxi Lu Fabian Manhardt
Xuanqing Liu Yongyi Lu Kevis-kokitsi Maninis
Xuebo Liu Zhiwu Lu Varun Manjunatha
Yang Liu Fujun Luan Junhua Mao
Yaojie Liu Benjamin E. Lundell Xudong Mao
Yebin Liu Hao Luo Alina Marcu
Yen-Cheng Liu Jian-Hao Luo Edgar Margffoy-Tuay
Yiming Liu Ruotian Luo Dmitrii Marin
Yu Liu Weixin Luo Manuel J. Marin-Jimenez
Yu-Shen Liu Wenhan Luo Kenneth Marino
Yufan Liu Wenjie Luo Niki Martinel
Yun Liu Yan Luo Julieta Martinez
Zheng Liu Zelun Luo Jonathan Masci
Zhijian Liu Zixin Luo Tomohiro Mashita
Zhuang Liu Khoa Luu Iacopo Masi
Zichuan Liu Zhaoyang Lv David Masip
Ziwei Liu Pengyuan Lyu Daniela Massiceti
Zongyi Liu Thomas Möllenhoff Stefan Mathe
Stephan Liwicki Matthias Müller Yusuke Matsui
Liliana Lo Presti Bingpeng Ma Tetsu Matsukawa
Chengjiang Long Chih-Yao Ma Iain A. Matthews
Fuchen Long Chongyang Ma Kevin James Matzen
Mingsheng Long Huimin Ma Bruce Allen Maxwell
Xiang Long Jiayi Ma Stephen Maybank

Helmut Mayer Pritish Mohapatra Lakshmanan Nataraj


Amir Mazaheri Pavlo Molchanov Neda Nategh
David McAllester Davide Moltisanti Nelson Isao Nauata
Steven McDonagh Pascal Monasse Fernando Navarro
Stephen J. Mckenna Mathew Monfort Shah Nawaz
Roey Mechrez Aron Monszpart Lukas Neumann
Prakhar Mehrotra Sean Moran Ram Nevatia
Christopher Mei Vlad I. Morariu Alejandro Newell
Xue Mei Francesc Moreno-Noguer Shawn Newsam
Paulo R. S. Mendonca Pietro Morerio Joe Yue-Hei Ng
Lili Meng Stylianos Moschoglou Trung Thanh Ngo
Zibo Meng Yael Moses Duc Thanh Nguyen
Thomas Mensink Roozbeh Mottaghi Lam M. Nguyen
Bjoern Menze Pierre Moulon Phuc Xuan Nguyen
Michele Merler Arsalan Mousavian Thuong Nguyen Canh
Kourosh Meshgi Yadong Mu Mihalis Nicolaou
Pascal Mettes Yasuhiro Mukaigawa Andrei Liviu Nicolicioiu
Christopher Metzler Lopamudra Mukherjee Xuecheng Nie
Liang Mi Yusuke Mukuta Michael Niemeyer
Qiguang Miao Ravi Teja Mullapudi Simon Niklaus
Xin Miao Mario Enrique Munich Christophoros Nikou
Tomer Michaeli Zachary Murez David Nilsson
Frank Michel Ana C. Murillo Jifeng Ning
Antoine Miech J. Krishna Murthy Yuval Nirkin
Krystian Mikolajczyk Damien Muselet Li Niu
Peyman Milanfar Armin Mustafa Yuzhen Niu
Ben Mildenhall Siva Karthik Mustikovela Zhenxing Niu
Gregor Miller Carlo Dal Mutto Shohei Nobuhara
Fausto Milletari Moin Nabi Nicoletta Noceti
Dongbo Min Varun K. Nagaraja Hyeonwoo Noh
Kyle Min Tushar Nagarajan Junhyug Noh
Pedro Miraldo Arsha Nagrani Mehdi Noroozi
Dmytro Mishkin Seungjun Nah Sotiris Nousias
Anand Mishra Nikhil Naik Valsamis Ntouskos
Ashish Mishra Yoshikatsu Nakajima Matthew O’Toole
Ishan Misra Yuta Nakashima Peter Ochs
Niluthpol C. Mithun Atsushi Nakazawa Ferda Ofli
Kaushik Mitra Seonghyeon Nam Seong Joon Oh
Niloy Mitra Vinay P. Namboodiri Seoung Wug Oh
Anton Mitrokhin Medhini Narasimhan Iason Oikonomidis
Ikuhisa Mitsugami Srinivasa Narasimhan Utkarsh Ojha
Anurag Mittal Sanath Narayan Takahiro Okabe
Kaichun Mo Erickson Rangel Takayuki Okatani
Zhipeng Mo Nascimento Fumio Okura
Davide Modolo Jacinto Nascimento Aude Oliva
Michael Moeller Tayyab Naseer Kyle Olszewski

Björn Ommer Nikolaos Passalis Daniel Pizarro


Mohamed Omran Vishal Patel Tobias Plötz
Elisabeta Oneata Viorica Patraucean Mirco Planamente
Michael Opitz Badri Narayana Patro Matteo Poggi
Jose Oramas Danda Pani Paudel Moacir A. Ponti
Tribhuvanesh Orekondy Sujoy Paul Parita Pooj
Shaul Oron Georgios Pavlakos Fatih Porikli
Sergio Orts-Escolano Ioannis Pavlidis Horst Possegger
Ivan Oseledets Vladimir Pavlovic Omid Poursaeed
Aljosa Osep Nick Pears Ameya Prabhu
Magnus Oskarsson Kim Steenstrup Pedersen Viraj Uday Prabhu
Anton Osokin Selen Pehlivan Dilip Prasad
Martin R. Oswald Shmuel Peleg Brian L. Price
Wanli Ouyang Chao Peng True Price
Andrew Owens Houwen Peng Maria Priisalu
Mete Ozay Wen-Hsiao Peng Veronique Prinet
Mustafa Ozuysal Xi Peng Victor Adrian Prisacariu
Eduardo Pérez-Pellitero Xiaojiang Peng Jan Prokaj
Gautam Pai Xingchao Peng Sergey Prokudin
Dipan Kumar Pal Yuxin Peng Nicolas Pugeault
P. H. Pamplona Savarese Federico Perazzi Xavier Puig
Jinshan Pan Juan Camilo Perez Albert Pumarola
Junting Pan Vishwanath Peri Pulak Purkait
Xingang Pan Federico Pernici Senthil Purushwalkam
Yingwei Pan Luca Del Pero Charles R. Qi
Yannis Panagakis Florent Perronnin Hang Qi
Rameswar Panda Stavros Petridis Haozhi Qi
Guan Pang Henning Petzka Lu Qi
Jiahao Pang Patrick Peursum Mengshi Qi
Jiangmiao Pang Michael Pfeiffer Siyuan Qi
Tianyu Pang Hanspeter Pfister Xiaojuan Qi
Sharath Pankanti Roman Pflugfelder Yuankai Qi
Nicolas Papadakis Minh Tri Pham Shengju Qian
Dim Papadopoulos Yongri Piao Xuelin Qian
George Papandreou David Picard Siyuan Qiao
Toufiq Parag Tomasz Pieciak Yu Qiao
Shaifali Parashar A. J. Piergiovanni Jie Qin
Sarah Parisot Andrea Pilzer Qiang Qiu
Eunhyeok Park Pedro O. Pinheiro Weichao Qiu
Hyun Soo Park Silvia Laura Pintea Zhaofan Qiu
Jaesik Park Lerrel Pinto Kha Gia Quach
Min-Gyu Park Axel Pinz Yuhui Quan
Taesung Park Robinson Piramuthu Yvain Queau
Alvaro Parra Fiora Pirri Julian Quiroga
C. Alejandro Parraga Leonid Pishchulin Faisal Qureshi
Despoina Paschalidou Francesco Pittaluga Mahdi Rad

Filip Radenovic Zhou Ren Chris Russell


Petia Radeva Vijay Rengarajan Dan Ruta
Venkatesh Md A. Reza Jongbin Ryu
B. Radhakrishnan Farzaneh Rezaeianaran Ömer Sümer
Ilija Radosavovic Hamed R. Tavakoli Alexandre Sablayrolles
Noha Radwan Nicholas Rhinehart Faraz Saeedan
Rahul Raguram Helge Rhodin Ryusuke Sagawa
Tanzila Rahman Elisa Ricci Christos Sagonas
Amit Raj Alexander Richard Tonmoy Saikia
Ajit Rajwade Eitan Richardson Hideo Saito
Kandan Ramakrishnan Elad Richardson Kuniaki Saito
Santhosh Christian Richardt Shunsuke Saito
K. Ramakrishnan Stephan Richter Shunta Saito
Srikumar Ramalingam Gernot Riegler Ken Sakurada
Ravi Ramamoorthi Daniel Ritchie Joaquin Salas
Vasili Ramanishka Tobias Ritschel Fatemeh Sadat Saleh
Ramprasaath R. Selvaraju Samuel Rivera Mahdi Saleh
Francois Rameau Yong Man Ro Pouya Samangouei
Visvanathan Ramesh Richard Roberts Leo Sampaio
Santu Rana Joseph Robinson Ferraz Ribeiro
Rene Ranftl Ignacio Rocco Artsiom Olegovich
Anand Rangarajan Mrigank Rochan Sanakoyeu
Anurag Ranjan Emanuele Rodolà Enrique Sanchez
Viresh Ranjan Mikel D. Rodriguez Patsorn Sangkloy
Yongming Rao Giorgio Roffo Anush Sankaran
Carolina Raposo Grégory Rogez Aswin Sankaranarayanan
Vivek Rathod Gemma Roig Swami Sankaranarayanan
Sathya N. Ravi Javier Romero Rodrigo Santa Cruz
Avinash Ravichandran Xuejian Rong Amartya Sanyal
Tammy Riklin Raviv Yu Rong Archana Sapkota
Daniel Rebain Amir Rosenfeld Nikolaos Sarafianos
Sylvestre-Alvise Rebuffi Bodo Rosenhahn Jun Sato
N. Dinesh Reddy Guy Rosman Shin’ichi Satoh
Timo Rehfeld Arun Ross Hosnieh Sattar
Paolo Remagnino Paolo Rota Arman Savran
Konstantinos Rematas Peter M. Roth Manolis Savva
Edoardo Remelli Anastasios Roussos Alexander Sax
Dongwei Ren Anirban Roy Hanno Scharr
Haibing Ren Sebastien Roy Simone Schaub-Meyer
Jian Ren Aruni RoyChowdhury Konrad Schindler
Jimmy Ren Artem Rozantsev Dmitrij Schlesinger
Mengye Ren Ognjen Rudovic Uwe Schmidt
Weihong Ren Daniel Rueckert Dirk Schnieders
Wenqi Ren Adria Ruiz Björn Schuller
Zhile Ren Javier Ruiz-del-solar Samuel Schulter
Zhongzheng Ren Christian Rupprecht Idan Schwartz

William Robson Schwartz Hailin Shi Roger


Alex Schwing Miaojing Shi D. Soberanis-Mukul
Sinisa Segvic Yemin Shi Kihyuk Sohn
Lorenzo Seidenari Zhenmei Shi Francesco Solera
Pradeep Sen Zhiyuan Shi Eric Sommerlade
Ozan Sener Kevin Jonathan Shih Sanghyun Son
Soumyadip Sengupta Shiliang Shiliang Byung Cheol Song
Arda Senocak Hyunjung Shim Chunfeng Song
Mojtaba Seyedhosseini Atsushi Shimada Dongjin Song
Shishir Shah Nobutaka Shimada Jiaming Song
Shital Shah Daeyun Shin Jie Song
Sohil Atul Shah Young Min Shin Jifei Song
Tamar Rott Shaham Koichi Shinoda Jingkuan Song
Huasong Shan Konstantin Shmelkov Mingli Song
Qi Shan Michael Zheng Shou Shiyu Song
Shiguang Shan Abhinav Shrivastava Shuran Song
Jing Shao Tianmin Shu Xiao Song
Roman Shapovalov Zhixin Shu Yafei Song
Gaurav Sharma Hong-Han Shuai Yale Song
Vivek Sharma Pushkar Shukla Yang Song
Viktoriia Sharmanska Christian Siagian Yi-Zhe Song
Dongyu She Mennatullah M. Siam Yibing Song
Sumit Shekhar Kaleem Siddiqi Humberto Sossa
Evan Shelhamer Karan Sikka Cesar de Souza
Chengyao Shen Jae-Young Sim Adrian Spurr
Chunhua Shen Christian Simon Srinath Sridhar
Falong Shen Martin Simonovsky Suraj Srinivas
Jie Shen Dheeraj Singaraju Pratul P. Srinivasan
Li Shen Bharat Singh Anuj Srivastava
Liyue Shen Gurkirt Singh Tania Stathaki
Shuhan Shen Krishna Kumar Singh Christopher Stauffer
Tianwei Shen Maneesh Kumar Singh Simon Stent
Wei Shen Richa Singh Rainer Stiefelhagen
William B. Shen Saurabh Singh Pierre Stock
Yantao Shen Suriya Singh Julian Straub
Ying Shen Vikas Singh Jonathan C. Stroud
Yiru Shen Sudipta N. Sinha Joerg Stueckler
Yujun Shen Vincent Sitzmann Jan Stuehmer
Yuming Shen Josef Sivic David Stutz
Zhiqiang Shen Gregory Slabaugh Chi Su
Ziyi Shen Miroslava Slavcheva Hang Su
Lu Sheng Ron Slossberg Jong-Chyi Su
Yu Sheng Brandon Smith Shuochen Su
Rakshith Shetty Kevin Smith Yu-Chuan Su
Baoguang Shi Vladimir Smutny Ramanathan Subramanian
Guangming Shi Noah Snavely Yusuke Sugano

Masanori Suganuma Xiaoyang Tan Andrea Torsello


Yumin Suh Kenichiro Tanaka Fabio Tosi
Mohammed Suhail Masayuki Tanaka Du Tran
Yao Sui Chang Tang Luan Tran
Heung-Il Suk Chengzhou Tang Ngoc-Trung Tran
Josephine Sullivan Danhang Tang Quan Hung Tran
Baochen Sun Ming Tang Truyen Tran
Chen Sun Peng Tang Rudolph Triebel
Chong Sun Qingming Tang Martin Trimmel
Deqing Sun Wei Tang Shashank Tripathi
Jin Sun Xu Tang Subarna Tripathi
Liang Sun Yansong Tang Leonardo Trujillo
Lin Sun Youbao Tang Eduard Trulls
Qianru Sun Yuxing Tang Tomasz Trzcinski
Shao-Hua Sun Zhiqiang Tang Sam Tsai
Shuyang Sun Tatsunori Taniai Yi-Hsuan Tsai
Weiwei Sun Junli Tao Hung-Yu Tseng
Wenxiu Sun Xin Tao Stavros Tsogkas
Xiaoshuai Sun Makarand Tapaswi Aggeliki Tsoli
Xiaoxiao Sun Jean-Philippe Tarel Devis Tuia
Xingyuan Sun Lyne Tchapmi Shubham Tulsiani
Yifan Sun Zachary Teed Sergey Tulyakov
Zhun Sun Bugra Tekin Frederick Tung
Sabine Susstrunk Damien Teney Tony Tung
David Suter Ayush Tewari Daniyar Turmukhambetov
Supasorn Suwajanakorn Christian Theobalt Ambrish Tyagi
Tomas Svoboda Christopher Thomas Radim Tylecek
Eran Swears Diego Thomas Christos Tzelepis
Paul Swoboda Jim Thomas Georgios Tzimiropoulos
Attila Szabo Rajat Mani Thomas Dimitrios Tzionas
Richard Szeliski Xinmei Tian Seiichi Uchida
Duy-Nguyen Ta Yapeng Tian Norimichi Ukita
Andrea Tagliasacchi Yingli Tian Dmitry Ulyanov
Yuichi Taguchi Yonglong Tian Martin Urschler
Ying Tai Zhi Tian Yoshitaka Ushiku
Keita Takahashi Zhuotao Tian Ben Usman
Kouske Takahashi Kinh Tieu Alexander Vakhitov
Jun Takamatsu Joseph Tighe Julien P. C. Valentin
Hugues Talbot Massimo Tistarelli Jack Valmadre
Toru Tamaki Matthew Toews Ernest Valveny
Chaowei Tan Carl Toft Joost van de Weijer
Fuwen Tan Pavel Tokmakov Jan van Gemert
Mingkui Tan Federico Tombari Koen Van Leemput
Mingxing Tan Chetan Tonde Gul Varol
Qingyang Tan Yan Tong Sebastiano Vascon
Robby T. Tan Alessio Tonioni M. Alex O. Vasilescu

Subeesh Vasu Hongxing Wang Tao Wang


Mayank Vatsa Hua Wang Tianlu Wang
David Vazquez Jian Wang Tiantian Wang
Javier Vazquez-Corral Jingbo Wang Ting-chun Wang
Ashok Veeraraghavan Jinglu Wang Tingwu Wang
Erik Velasco-Salido Jingya Wang Wei Wang
Raviteja Vemulapalli Jinjun Wang Weiyue Wang
Jonathan Ventura Jinqiao Wang Wenguan Wang
Manisha Verma Jue Wang Wenlin Wang
Roberto Vezzani Ke Wang Wenqi Wang
Ruben Villegas Keze Wang Xiang Wang
Minh Vo Le Wang Xiaobo Wang
MinhDuc Vo Lei Wang Xiaofang Wang
Nam Vo Lezi Wang Xiaoling Wang
Michele Volpi Li Wang Xiaolong Wang
Riccardo Volpi Liang Wang Xiaosong Wang
Carl Vondrick Lijun Wang Xiaoyu Wang
Konstantinos Vougioukas Limin Wang Xin Eric Wang
Tuan-Hung Vu Linwei Wang Xinchao Wang
Sven Wachsmuth Lizhi Wang Xinggang Wang
Neal Wadhwa Mengjiao Wang Xintao Wang
Catherine Wah Mingzhe Wang Yali Wang
Jacob C. Walker Minsi Wang Yan Wang
Thomas S. A. Wallis Naiyan Wang Yang Wang
Chengde Wan Nannan Wang Yangang Wang
Jun Wan Ning Wang Yaxing Wang
Liang Wan Oliver Wang Yi Wang
Renjie Wan Pei Wang Yida Wang
Baoyuan Wang Peng Wang Yilin Wang
Boyu Wang Pichao Wang Yiming Wang
Cheng Wang Qi Wang Yisen Wang
Chu Wang Qian Wang Yongtao Wang
Chuan Wang Qiaosong Wang Yu-Xiong Wang
Chunyu Wang Qifei Wang Yue Wang
Dequan Wang Qilong Wang Yujiang Wang
Di Wang Qing Wang Yunbo Wang
Dilin Wang Qingzhong Wang Yunhe Wang
Dong Wang Quan Wang Zengmao Wang
Fang Wang Rui Wang Zhangyang Wang
Guanzhi Wang Ruiping Wang Zhaowen Wang
Guoyin Wang Ruixing Wang Zhe Wang
Hanzi Wang Shangfei Wang Zhecan Wang
Hao Wang Shenlong Wang Zheng Wang
He Wang Shiyao Wang Zhixiang Wang
Heng Wang Shuhui Wang Zilei Wang
Hongcheng Wang Song Wang Jianqiao Wangni

Anne S. Wannenwetsch Jialin Wu Yang Xiao


Jan Dirk Wegner Jiaxiang Wu Cihang Xie
Scott Wehrwein Jiqing Wu Guosen Xie
Donglai Wei Jonathan Wu Jianwen Xie
Kaixuan Wei Lifang Wu Lingxi Xie
Longhui Wei Qi Wu Sirui Xie
Pengxu Wei Qiang Wu Weidi Xie
Ping Wei Ruizheng Wu Wenxuan Xie
Qi Wei Shangzhe Wu Xiaohua Xie
Shih-En Wei Shun-Cheng Wu Fuyong Xing
Xing Wei Tianfu Wu Jun Xing
Yunchao Wei Wayne Wu Junliang Xing
Zijun Wei Wenxuan Wu Bo Xiong
Jerod Weinman Xiao Wu Peixi Xiong
Michael Weinmann Xiaohe Wu Yu Xiong
Philippe Weinzaepfel Xinxiao Wu Yuanjun Xiong
Yair Weiss Yang Wu Zhiwei Xiong
Bihan Wen Yi Wu Chang Xu
Longyin Wen Yiming Wu Chenliang Xu
Wei Wen Ying Nian Wu Dan Xu
Junwu Weng Yue Wu Danfei Xu
Tsui-Wei Weng Zheng Wu Hang Xu
Xinshuo Weng Zhenyu Wu Hongteng Xu
Eric Wengrowski Zhirong Wu Huijuan Xu
Tomas Werner Zuxuan Wu Jingwei Xu
Gordon Wetzstein Stefanie Wuhrer Jun Xu
Tobias Weyand Jonas Wulff Kai Xu
Patrick Wieschollek Changqun Xia Mengmeng Xu
Maggie Wigness Fangting Xia Mingze Xu
Erik Wijmans Fei Xia Qianqian Xu
Richard Wildes Gui-Song Xia Ran Xu
Olivia Wiles Lu Xia Weijian Xu
Chris Williams Xide Xia Xiangyu Xu
Williem Williem Yin Xia Xiaogang Xu
Kyle Wilson Yingce Xia Xing Xu
Calden Wloka Yongqin Xian Xun Xu
Nicolai Wojke Lei Xiang Yanyu Xu
Christian Wolf Shiming Xiang Yichao Xu
Yongkang Wong Bin Xiao Yong Xu
Sanghyun Woo Fanyi Xiao Yongchao Xu
Scott Workman Guobao Xiao Yuanlu Xu
Baoyuan Wu Huaxin Xiao Zenglin Xu
Bichen Wu Taihong Xiao Zheng Xu
Chao-Yuan Wu Tete Xiao Chuhui Xue
Huikai Wu Tong Xiao Jia Xue
Jiajun Wu Wang Xiao Nan Xue

Tianfan Xue Yanchao Yang Ke Yu


Xiangyang Xue Yee Hong Yang Lequan Yu
Abhay Yadav Yezhou Yang Ning Yu
Yasushi Yagi Zhenheng Yang Qian Yu
I. Zeki Yalniz Anbang Yao Ronald Yu
Kota Yamaguchi Angela Yao Ruichi Yu
Toshihiko Yamasaki Cong Yao Shoou-I Yu
Takayoshi Yamashita Jian Yao Tao Yu
Junchi Yan Li Yao Tianshu Yu
Ke Yan Ting Yao Xiang Yu
Qingan Yan Yao Yao Xin Yu
Sijie Yan Zhewei Yao Xiyu Yu
Xinchen Yan Chengxi Ye Youngjae Yu
Yan Yan Jianbo Ye Yu Yu
Yichao Yan Keren Ye Zhiding Yu
Zhicheng Yan Linwei Ye Chunfeng Yuan
Keiji Yanai Mang Ye Ganzhao Yuan
Bin Yang Mao Ye Jinwei Yuan
Ceyuan Yang Qi Ye Lu Yuan
Dawei Yang Qixiang Ye Quan Yuan
Dong Yang Mei-Chen Yeh Shanxin Yuan
Fan Yang Raymond Yeh Tongtong Yuan
Guandao Yang Yu-Ying Yeh Wenjia Yuan
Guorun Yang Sai-Kit Yeung Ye Yuan
Haichuan Yang Serena Yeung Yuan Yuan
Hao Yang Kwang Moo Yi Yuhui Yuan
Jianwei Yang Li Yi Huanjing Yue
Jiaolong Yang Renjiao Yi Xiangyu Yue
Jie Yang Alper Yilmaz Ersin Yumer
Jing Yang Junho Yim Sergey Zagoruyko
Kaiyu Yang Lijun Yin Egor Zakharov
Linjie Yang Weidong Yin Amir Zamir
Meng Yang Xi Yin Andrei Zanfir
Michael Ying Yang Zhichao Yin Mihai Zanfir
Nan Yang Tatsuya Yokota Pablo Zegers
Shuai Yang Ryo Yonetani Bernhard Zeisl
Shuo Yang Donggeun Yoo John S. Zelek
Tianyu Yang Jae Shin Yoon Niclas Zeller
Tien-Ju Yang Ju Hong Yoon Huayi Zeng
Tsun-Yi Yang Sung-eui Yoon Jiabei Zeng
Wei Yang Laurent Younes Wenjun Zeng
Wenhan Yang Changqian Yu Yu Zeng
Xiao Yang Fisher Yu Xiaohua Zhai
Xiaodong Yang Gang Yu Fangneng Zhan
Xin Yang Jiahui Yu Huangying Zhan
Yan Yang Kaicheng Yu Kun Zhan

Xiaohang Zhan Shuai Zhang Qijun Zhao


Baochang Zhang Songyang Zhang Rui Zhao
Bowen Zhang Tao Zhang Shenglin Zhao
Cecilia Zhang Ting Zhang Sicheng Zhao
Changqing Zhang Tong Zhang Tianyi Zhao
Chao Zhang Wayne Zhang Wenda Zhao
Chengquan Zhang Wei Zhang Xiangyun Zhao
Chi Zhang Weizhong Zhang Xin Zhao
Chongyang Zhang Wenwei Zhang Yang Zhao
Dingwen Zhang Xiangyu Zhang Yue Zhao
Dong Zhang Xiaolin Zhang Zhichen Zhao
Feihu Zhang Xiaopeng Zhang Zijing Zhao
Hang Zhang Xiaoqin Zhang Xiantong Zhen
Hanwang Zhang Xiuming Zhang Chuanxia Zheng
Hao Zhang Ya Zhang Feng Zheng
He Zhang Yang Zhang Haiyong Zheng
Hongguang Zhang Yimin Zhang Jia Zheng
Hua Zhang Yinda Zhang Kang Zheng
Ji Zhang Ying Zhang Shuai Kyle Zheng
Jianguo Zhang Yongfei Zhang Wei-Shi Zheng
Jianming Zhang Yu Zhang Yinqiang Zheng
Jiawei Zhang Yulun Zhang Zerong Zheng
Jie Zhang Yunhua Zhang Zhedong Zheng
Jing Zhang Yuting Zhang Zilong Zheng
Juyong Zhang Zhanpeng Zhang Bineng Zhong
Kai Zhang Zhao Zhang Fangwei Zhong
Kaipeng Zhang Zhaoxiang Zhang Guangyu Zhong
Ke Zhang Zhen Zhang Yiran Zhong
Le Zhang Zheng Zhang Yujie Zhong
Lei Zhang Zhifei Zhang Zhun Zhong
Li Zhang Zhijin Zhang Chunluan Zhou
Lihe Zhang Zhishuai Zhang Huiyu Zhou
Linguang Zhang Ziming Zhang Jiahuan Zhou
Lu Zhang Bo Zhao Jun Zhou
Mi Zhang Chen Zhao Lei Zhou
Mingda Zhang Fang Zhao Luowei Zhou
Peng Zhang Haiyu Zhao Luping Zhou
Pingping Zhang Han Zhao Mo Zhou
Qian Zhang Hang Zhao Ning Zhou
Qilin Zhang Hengshuang Zhao Pan Zhou
Quanshi Zhang Jian Zhao Peng Zhou
Richard Zhang Kai Zhao Qianyi Zhou
Rui Zhang Liang Zhao S. Kevin Zhou
Runze Zhang Long Zhao Sanping Zhou
Shengping Zhang Qian Zhao Wengang Zhou
Shifeng Zhang Qibin Zhao Xingyi Zhou
Yanzhao Zhou Wei Zhu Christian Zimmermann


Yi Zhou Xiangyu Zhu Karel Zimmermann
Yin Zhou Xinge Zhu Larry Zitnick
Yipin Zhou Xizhou Zhu Mohammadreza
Yuyin Zhou Yanjun Zhu Zolfaghari
Zihan Zhou Yi Zhu Maria Zontak
Alex Zihao Zhu Yixin Zhu Daniel Zoran
Chenchen Zhu Yizhe Zhu Changqing Zou
Feng Zhu Yousong Zhu Chuhang Zou
Guangming Zhu Zhe Zhu Danping Zou
Ji Zhu Zhen Zhu Qi Zou
Jun-Yan Zhu Zheng Zhu Yang Zou
Lei Zhu Zhenyao Zhu Yuliang Zou
Linchao Zhu Zhihui Zhu Georgios Zoumpourlis
Rui Zhu Zhuotun Zhu Wangmeng Zuo
Shizhan Zhu Bingbing Zhuang Xinxin Zuo
Tyler Lixuan Zhu Wei Zhuo

Additional Reviewers

Victoria Fernandez Jonathan P. Crall Jaedong Hwang


Abrevaya Kenan Dai Andrey Ignatov
Maya Aghaei Lucas Deecke Muhammad
Allam Allam Karan Desai Abdullah Jamal
Christine Prithviraj Dhar Saumya Jetley
Allen-Blanchette Jing Dong Meiguang Jin
Nicolas Aziere Wei Dong Jeff Johnson
Assia Benbihi Turan Kaan Elgin Minsoo Kang
Neha Bhargava Francis Engelmann Saeed Khorram
Bharat Lal Bhatnagar Erik Englesson Mohammad Rami Koujan
Joanna Bitton Fartash Faghri Nilesh Kulkarni
Judy Borowski Zicong Fan Sudhakar Kumawat
Amine Bourki Yang Fu Abdelhak Lemkhenter
Romain Brégier Risheek Garrepalli Alexander Levine
Tali Brayer Yifan Ge Jiachen Li
Sebastian Bujwid Marco Godi Jing Li
Andrea Burns Helmut Grabner Jun Li
Yun-Hao Cao Shuxuan Guo Yi Li
Yuning Chai Jianfeng He Liang Liao
Xiaojun Chang Zhezhi He Ruochen Liao
Bo Chen Samitha Herath Tzu-Heng Lin
Shuo Chen Chih-Hui Ho Phillip Lippe
Zhixiang Chen Yicong Hong Bao-di Liu
Junsuk Choe Vincent Tao Hu Bo Liu
Hung-Kuo Chu Julio Hurtado Fangchen Liu
Hanxiao Liu Ketul Shah Yunyang Xiong


Hongyu Liu Rajvi Shah An Xu
Huidong Liu Hengcan Shi Chi Xu
Miao Liu Xiangxi Shi Yinghao Xu
Xinxin Liu Yujiao Shi Fei Xue
Yongfei Liu William A. P. Smith Tingyun Yan
Yu-Lun Liu Guoxian Song Zike Yan
Amir Livne Robin Strudel Chao Yang
Tiange Luo Abby Stylianou Heran Yang
Wei Ma Xinwei Sun Ren Yang
Xiaoxuan Ma Reuben Tan Wenfei Yang
Ioannis Marras Qingyi Tao Xu Yang
Georg Martius Kedar S. Tatwawadi Rajeev Yasarla
Effrosyni Mavroudi Anh Tuan Tran Shaokai Ye
Tim Meinhardt Son Dinh Tran Yufei Ye
Givi Meishvili Eleni Triantafillou Kun Yi
Meng Meng Aristeidis Tsitiridis Haichao Yu
Zihang Meng Md Zasim Uddin Hanchao Yu
Zhongqi Miao Andrea Vedaldi Ruixuan Yu
Gyeongsik Moon Evangelos Ververas Liangzhe Yuan
Khoi Nguyen Vidit Vidit Chen-Lin Zhang
Yung-Kyun Noh Paul Voigtlaender Fandong Zhang
Antonio Norelli Bo Wan Tianyi Zhang
Jaeyoo Park Huanyu Wang Yang Zhang
Alexander Pashevich Huiyu Wang Yiyi Zhang
Mandela Patrick Junqiu Wang Yongshun Zhang
Mary Phuong Pengxiao Wang Yu Zhang
Bingqiao Qian Tai Wang Zhiwei Zhang
Yu Qiao Xinyao Wang Jiaojiao Zhao
Zhen Qiao Tomoki Watanabe Yipu Zhao
Sai Saketh Rambhatla Mark Weber Xingjian Zhen
Aniket Roy Xi Wei Haizhong Zheng
Amelie Royer Botong Wu Tiancheng Zhi
Parikshit Vishwas James Wu Chengju Zhou
Sakurikar Jiamin Wu Hao Zhou
Mark Sandler Rujie Wu Hao Zhu
Mert Bülent Sarıyıldız Yu Wu Alexander Zimin
Tanner Schmidt Rongchang Xie
Anshul B. Shah Wei Xiong
Contents – Part XXVI

EfficientFCN: Holistically-Guided Decoding for Semantic Segmentation . . . . 1


Jianbo Liu, Junjun He, Jiawei Zhang, Jimmy S. Ren, and Hongsheng Li

GroSS: Group-Size Series Decomposition for Grouped


Architecture Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Henry Howard-Jenkins, Yiwen Li, and Victor Adrian Prisacariu

Efficient Adversarial Attacks for Visual Object Tracking . . . . . . . . . . . . . . . 34


Siyuan Liang, Xingxing Wei, Siyuan Yao, and Xiaochun Cao

Globally-Optimal Event Camera Motion Estimation. . . . . . . . . . . . . . . . . . . 51


Xin Peng, Yifu Wang, Ling Gao, and Laurent Kneip

Weakly-Supervised Learning of Human Dynamics . . . . . . . . . . . . . . . . . . . 68


Petrissa Zell, Bodo Rosenhahn, and Bastian Wandt

Journey Towards Tiny Perceptual Super-Resolution. . . . . . . . . . . . . . . . . . . 85


Royson Lee, Łukasz Dudziak, Mohamed Abdelfattah,
Stylianos I. Venieris, Hyeji Kim, Hongkai Wen, and Nicholas D. Lane

What Makes Fake Images Detectable? Understanding Properties


that Generalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Lucy Chai, David Bau, Ser-Nam Lim, and Phillip Isola

Embedding Propagation: Smoother Manifold for Few-Shot Classification . . . . 121


Pau Rodríguez, Issam Laradji, Alexandre Drouin,
and Alexandre Lacoste

Category Level Object Pose Estimation via Neural Analysis-by-Synthesis . . . 139


Xu Chen, Zijian Dong, Jie Song, Andreas Geiger, and Otmar Hilliges

High-Fidelity Synthesis with Disentangled Representation . . . . . . . . . . . . . . 157


Wonkwang Lee, Donggyun Kim, Seunghoon Hong, and Honglak Lee

PL1P - Point-Line Minimal Problems Under Partial Visibility in Three


Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Timothy Duff, Kathlén Kohn, Anton Leykin, and Tomas Pajdla

Prediction and Recovery for Adaptive Low-Resolution Person


Re-Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Ke Han, Yan Huang, Zerui Chen, Liang Wang, and Tieniu Tan
Learning Canonical Representations for Scene Graph to Image Generation. . . 210


Roei Herzig, Amir Bar, Huijuan Xu, Gal Chechik, Trevor Darrell,
and Amir Globerson

Adversarial Robustness on In- and Out-Distribution Improves


Explainability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Maximilian Augustin, Alexander Meinke, and Matthias Hein

Deformable Style Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246


Sunnie S. Y. Kim, Nicholas Kolkin, Jason Salavon,
and Gregory Shakhnarovich

Aligning Videos in Space and Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262


Senthil Purushwalkam, Tian Ye, Saurabh Gupta, and Abhinav Gupta

Neural Wireframe Renderer: Learning Wireframe to Image Translations. . . . . 279


Yuan Xue, Zihan Zhou, and Xiaolei Huang

RBF-Softmax: Learning Deep Representative Prototypes with Radial Basis


Function Softmax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Xiao Zhang, Rui Zhao, Yu Qiao, and Hongsheng Li

Testing the Safety of Self-driving Vehicles by Simulating Perception


and Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Kelvin Wong, Qiang Zhang, Ming Liang, Bin Yang, Renjie Liao,
Abbas Sadat, and Raquel Urtasun

Determining the Relevance of Features for Deep Neural Networks . . . . . . . . 330


Christian Reimers, Jakob Runge, and Joachim Denzler

Weakly Supervised Semantic Segmentation with Boundary Exploration . . . . . 347


Liyi Chen, Weiwei Wu, Chenchen Fu, Xiao Han, and Yuntao Zhang

GANHopper: Multi-hop GAN for Unsupervised Image-to-Image


Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Wallace Lira, Johannes Merz, Daniel Ritchie, Daniel Cohen-Or,
and Hao Zhang

DOPE: Distillation of Part Experts for Whole-Body 3D Pose Estimation


in the Wild. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Philippe Weinzaepfel, Romain Brégier, Hadrien Combaluzier,
Vincent Leroy, and Grégory Rogez

Multi-view Adaptive Graph Convolutions for Graph Classification . . . . . . . . 398


Nikolas Adaloglou, Nicholas Vretos, and Petros Daras

Instance Adaptive Self-training for Unsupervised Domain Adaptation . . . . . . 415


Ke Mei, Chuang Zhu, Jiaqi Zou, and Shanghang Zhang
Weight Decay Scheduling and Knowledge Distillation


for Active Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Juseung Yun, Byungjoo Kim, and Junmo Kim

HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs . . . 448
Hai Victor Habi, Roy H. Jennings, and Arnon Netzer

Truncated Inference for Latent Variable Optimization Problems:


Application to Robust Estimation and Learning. . . . . . . . . . . . . . . . . . . . . . 464
Christopher Zach and Huu Le

Geometry Constrained Weakly Supervised Object Localization . . . . . . . . . . . 481


Weizeng Lu, Xi Jia, Weicheng Xie, Linlin Shen, Yicong Zhou,
and Jinming Duan

Duality Diagram Similarity: A Generic Framework for Initialization


Selection in Task Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Kshitij Dwivedi, Jiahui Huang, Radoslaw Martin Cichy,
and Gemma Roig

OneGAN: Simultaneous Unsupervised Learning of Conditional Image


Generation, Foreground Segmentation, and Fine-Grained Clustering . . . . . . . 514
Yaniv Benny and Lior Wolf

Mining Self-similarity: Label Super-Resolution with Epitomic


Representations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Nikolay Malkin, Anthony Ortiz, and Nebojsa Jojic

AE-OT-GAN: Training GANs from Data Specific Latent Distribution . . . . . . 548


Dongsheng An, Yang Guo, Min Zhang, Xin Qi, Na Lei, and Xianfeng Gu

Null-Sampling for Interpretable and Fair Representations . . . . . . . . . . . . . . . 565


Thomas Kehrenberg, Myles Bartlett, Oliver Thomas,
and Novi Quadrianto

Guiding Monocular Depth Estimation Using Depth-Attention Volume . . . . . . 581


Lam Huynh, Phong Nguyen-Ha, Jiri Matas, Esa Rahtu,
and Janne Heikkilä

Tracking Emerges by Looking Around Static Scenes, with Neural


3D Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
Adam W. Harley, Shrinidhi Kowshika Lakshmikanth, Paul Schydlo,
and Katerina Fragkiadaki

Boosting Weakly Supervised Object Detection with Progressive


Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
Yuanyi Zhong, Jianfeng Wang, Jian Peng, and Lei Zhang
BézierSketch: A Generative Model for Scalable Vector Sketches. . . . . . . . . . 632


Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang,
and Yi-Zhe Song

Semantic Relation Preserving Knowledge Distillation


for Image-to-Image Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
Zeqi Li, Ruowei Jiang, and Parham Aarabi

Domain Adaptation Through Task Distillation . . . . . . . . . . . . . . . . . . . . . . 664


Brady Zhou, Nimit Kalra, and Philipp Krähenbühl

PatchAttack: A Black-Box Texture-Based Attack with Reinforcement


Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
Chenglin Yang, Adam Kortylewski, Cihang Xie, Yinzhi Cao,
and Alan Yuille

More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm


for Incremental Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
Yu Liu, Sarah Parisot, Gregory Slabaugh, Xu Jia, Ales Leonardis,
and Tinne Tuytelaars

Extending and Analyzing Self-supervised Learning Across Domains . . . . . . . 717


Bram Wallace and Bharath Hariharan

Multi-source Open-Set Deep Adversarial Domain Adaptation . . . . . . . . . . . . 735


Sayan Rakshit, Dipesh Tamboli, Pragati Shuddhodhan Meshram,
Biplab Banerjee, Gemma Roig, and Subhasis Chaudhuri

Neural Batch Sampling with Reinforcement Learning for Semi-supervised


Anomaly Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
Wen-Hsuan Chu and Kris M. Kitani

LEMMA: A Multi-view Dataset for LEarning Multi-agent Multi-task


Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
Baoxiong Jia, Yixin Chen, Siyuan Huang, Yixin Zhu,
and Song-Chun Zhu

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787


EfficientFCN: Holistically-Guided
Decoding for Semantic Segmentation

Jianbo Liu1, Junjun He2, Jiawei Zhang3, Jimmy S. Ren3, and Hongsheng Li1(B)

1 CUHK-SenseTime Joint Laboratory, The Chinese University of Hong Kong,
Shatin, Hong Kong
liujianbo@link.cuhk.edu.hk, hsli@ee.cuhk.edu.hk
2 Shenzhen Key Lab of Computer Vision and Pattern Recognition,
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences,
Beijing, China
3 SenseTime Research, Beijing, China

Abstract. Both performance and efficiency are important to semantic segmentation. State-of-the-art semantic segmentation algorithms are mostly based on dilated Fully Convolutional Networks (dilatedFCNs), which adopt dilated convolutions in the backbone networks to extract high-resolution feature maps and achieve high segmentation performance. However, because many convolution operations are conducted on these high-resolution feature maps, such dilatedFCN-based methods incur large computational complexity and memory consumption. To balance performance and efficiency, there also exist encoder-decoder structures that gradually recover the spatial information by combining multi-level feature maps from the encoder. However, the performance of existing encoder-decoder methods is far from comparable with that of the dilatedFCN-based methods. In this paper, we propose the EfficientFCN, whose backbone is a common ImageNet pretrained network without any dilated convolution. A holistically-guided decoder is introduced to obtain high-resolution, semantic-rich feature maps from the multi-scale features of the encoder. The decoding task is converted to a novel codebook generation and codeword assembly task, which takes advantage of the high-level and low-level features from the encoder. Such a framework achieves comparable or even better performance than state-of-the-art methods with only 1/3 of the computational cost. Extensive experiments on PASCAL Context, PASCAL VOC, and ADE20K validate the effectiveness of the proposed EfficientFCN.

Keywords: Semantic segmentation · Encoder-decoder · Dilated convolution · Holistic features

1 Introduction
Semantic segmentation or scene parsing is the task of assigning one of the pre-defined class labels to each pixel of an input image. It is a fundamental yet challenging task in computer vision. The Fully Convolutional Network (FCN) [15],
© Springer Nature Switzerland AG 2020
A. Vedaldi et al. (Eds.): ECCV 2020, LNCS 12371, pp. 1–17, 2020.
https://doi.org/10.1007/978-3-030-58574-7_1
2 J. Liu et al.

[Fig. 1: block diagrams of the four architectures, (a)-(d), as described in the caption below.]

Fig. 1. Different architectures for semantic segmentation. (a) The original FCN with output stride (OS) = 32. (b) DilatedFCN-based methods sacrifice efficiency and exploit dilated convolutions with rates 2 and 4 in the last two stages to generate high-resolution feature maps. (c) Encoder-decoder methods employ the U-Net structure to recover high-resolution feature maps. (d) Our proposed EfficientFCN with codebook generation and codeword assembly for high-resolution feature upsampling in semantic segmentation.

as shown in Fig. 1(a), for the first time demonstrates the success of exploiting a fully convolutional network in semantic segmentation: it adopts a DCNN as the feature encoder (i.e., ResNet [9]) to extract high-level semantic feature maps and then applies a convolution layer to generate the dense prediction. For semantic segmentation, high-resolution feature maps are critical for achieving accurate segmentation performance, since they contain the fine-grained structural information needed to delineate the detailed boundaries of various foreground regions. In addition, due to the lack of large-scale training data for semantic segmentation, transferring weights pretrained on ImageNet can greatly improve segmentation performance. Therefore, most state-of-the-art semantic segmentation methods adopt classification networks as the backbone to take full advantage of ImageNet pre-training. The resolution of the feature maps in the original classification model is reduced by consecutive pooling and strided convolution operations to learn high-level feature representations; the output stride of the final feature maps is 32 (OS = 32), at which scale the fine-grained structural information is discarded. Such low-resolution feature maps cannot fully meet the requirements of semantic segmentation, where detailed spatial information is needed. To tackle this problem, many works exploit dilated (atrous) convolution to enlarge the receptive field (RF) while maintaining the resolution of the high-level feature maps. State-of-the-art dilatedFCN-based methods [2,8,24-26] (shown in Fig. 1(b)) have demonstrated that removing the downsampling operations and replacing convolutions with dilated convolutions in the later blocks can achieve superior performance, resulting in final feature maps of output stride 8 (OS = 8). Despite the superior performance, and although dilated convolution introduces no extra parameters, the high-resolution feature representations require high computational complexity and memory consumption. For instance, for a 512 × 512 input image with ResNet-101 as the backbone encoder, the computational complexity of the encoder increases from 44.6 GFLOPs to 223.6 GFLOPs when adopting dilated convolutions with rates 2 and 4 in the last two blocks.
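The quoted jump from 44.6 to 223.6 GFLOPs follows directly from how feature-map area enters a convolution's cost: the FLOP count of a k × k convolution scales linearly with the number of output positions, so a block run at OS = 8 instead of OS = 32 becomes 16× more expensive. A back-of-the-envelope sketch in Python (the layer shape here is illustrative, not the actual ResNet-101 block configuration):

```python
def conv_flops(k, c_in, c_out, h, w):
    """Multiply-accumulate count of a k x k convolution producing an h x w map."""
    return k * k * c_in * c_out * h * w

# A 512 x 512 input. In the plain backbone the last two blocks run at OS = 16
# and OS = 32; the dilatedFCN variant keeps both at OS = 8, so the same layers
# see 4x and 16x as many spatial positions, respectively.
h = w = 512
layer = dict(k=3, c_in=512, c_out=512)  # illustrative shape, not the real block

plain = conv_flops(**layer, h=h // 16, w=w // 16) + conv_flops(**layer, h=h // 32, w=w // 32)
dilated = conv_flops(**layer, h=h // 8, w=w // 8) + conv_flops(**layer, h=h // 8, w=w // 8)

print(dilated / plain)  # 6.4 -- for these two layers alone
```

Applied over every layer of the last two stages, this area scaling accounts for the roughly fivefold overall cost increase reported above.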
Alternatively, as shown in Fig. 1(c), encoder-decoder based methods (e.g., [18]) exploit a decoder to gradually upsample and generate high-resolution feature maps by aggregating multi-level feature representations from the backbone (the encoder). These encoder-decoder based methods can obtain high-resolution feature representations efficiently. However, on the one hand, the fine-grained structural details are already lost in the topmost high-level feature maps of OS = 32; even with skip connections, lower-level high-resolution feature maps cannot provide sufficiently abstract features for high-performance segmentation. On the other hand, existing decoders mainly utilize bilinear upsampling or deconvolution operations to increase the resolution of the high-level feature maps. These operations are conducted in a local manner: the feature vector at each location of the upsampled feature maps is recovered from a limited receptive field. Thus, although encoder-decoder models are generally faster and more memory-friendly than dilatedFCN-based methods, their performance generally cannot compete with that of the dilatedFCN models.
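The locality of bilinear upsampling is easy to make concrete: each output value is interpolated from at most four input neighbors (two in the 1-D case), so the operation cannot inject any context beyond that tiny window, no matter how large the upsampling factor. A minimal 1-D sketch (our illustration, not code from the paper):

```python
def linear_upsample(x, factor):
    """1-D linear interpolation (align_corners=True convention).

    Each output sample depends on at most two input neighbors; 2-D bilinear
    upsampling is the same idea with four neighbors.
    """
    n_out = (len(x) - 1) * factor + 1
    out = []
    for i in range(n_out):
        pos = i / factor              # output index mapped to input coordinates
        lo = int(pos)
        hi = min(lo + 1, len(x) - 1)
        frac = pos - lo
        out.append((1 - frac) * x[lo] + frac * x[hi])
    return out

print(linear_upsample([0.0, 4.0, 8.0], factor=2))  # [0.0, 2.0, 4.0, 6.0, 8.0]
```

No weight reaches farther than the two adjacent samples, which is exactly why such decoders cannot recover semantics that the low-resolution map never encoded.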
To tackle the challenges in both types of models, we propose the EfficientFCN (as shown in Fig. 1(d)) with a Holistically-guided Decoder (HGD) to bridge the gap between the dilatedFCN-based methods and the encoder-decoder based methods. Our network can adopt any widely used classification model without dilated convolution as the encoder (such as the ResNet models) to generate low-resolution high-level feature maps (OS = 32). Such an encoder is both more computationally and more memory efficient than those of dilatedFCN models. Given the multi-level feature maps from the last three blocks of the encoder, the proposed holistically-guided decoder takes advantage of both the high-level but low-resolution feature maps (OS = 32) and the mid-level high-resolution feature maps (OS = 8, OS = 16) to achieve high-resolution feature upsampling with semantic-rich features. Intuitively, the higher-resolution feature maps contain more fine-grained structural information, which is beneficial for spatially guiding the feature upsampling process, while the lower-resolution feature maps contain more high-level semantic information and are therefore more suitable for encoding the global context effectively. Our HGD therefore generates a series of holistic codewords in a codebook that summarize different global, high-level aspects of the input image from the low-resolution feature maps (OS = 32). These codewords can be properly assembled on a high-resolution grid to form upsampled feature maps with rich semantic information. Following this principle, the HGD generates assembly coefficients from the mid-level high-resolution feature maps (OS = 8, OS = 16) to guide the linear assembly of the holistic codewords at each high-resolution spatial location, achieving feature upsampling. Our proposed EfficientFCN with the holistically-guided decoder achieves high segmentation accuracy on three popular public benchmarks, which demonstrates the efficiency and effectiveness of the proposed decoder.
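At its core, the codeword-assembly step described above is a per-position linear combination: a small codebook of n holistic codewords (an n × C matrix) is mixed by per-pixel assembly coefficients (one n-vector per high-resolution position). The sketch below is a schematic reading of that mechanism with toy numbers; in the actual HGD, both the codebook and the coefficients are predicted by learned layers from the OS = 32 and OS = 8/16 features, respectively:

```python
def assemble(codebook, coeffs):
    """Upsample by codeword assembly.

    codebook: n x C list -- n holistic codewords summarizing the OS=32 features
    coeffs:   P x n list -- assembly coefficients, one row per high-resolution
                            spatial position (predicted from OS=8/OS=16 features)
    returns:  P x C list -- the upsampled, semantic-rich features (flattened grid)
    """
    channels = len(codebook[0])
    out = []
    for row in coeffs:                        # one spatial position at a time
        feat = [0.0] * channels
        for weight, codeword in zip(row, codebook):
            for c in range(channels):
                feat[c] += weight * codeword[c]
        out.append(feat)
    return out

# Toy example: two codewords with C = 2 channels, three output positions.
codebook = [[1.0, 0.0],
            [0.0, 1.0]]
coeffs = [[1.0, 0.0],   # position 0: pure codeword 0
          [0.0, 1.0],   # position 1: pure codeword 1
          [0.5, 0.5]]   # position 2: an even blend of both
print(assemble(codebook, coeffs))  # [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
```

Because each codeword summarizes a global aspect of the image, every assembled position receives holistic context, in contrast to the strictly local window of bilinear upsampling.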
In summary, our contributions are as follows.

– We propose a novel holistically-guided decoder, which can efficiently generate high-resolution feature maps while considering the holistic context of the input image.
– Because of the light weight and high performance of the proposed holistically-guided decoder, our EfficientFCN can adopt an encoder without any dilated convolution and still achieve superior performance.
– Our EfficientFCN achieves competitive (or better) results compared with state-of-the-art dilatedFCN-based methods on the PASCAL Context, PASCAL VOC, and ADE20K datasets, with only 1/3 of the FLOPs.

2 Related Work
In this section, we review recent FCN-based methods for semantic segmentation. Since the successful demonstration of FCN [15] on semantic segmentation, many methods have been proposed to improve the performance of FCN-based methods; these mainly fall into two categories: dilatedFCN-based methods and encoder-decoder architectures.
DilatedFCN. DeepLab V2 [2,3] proposed to exploit dilated convolution in the backbone to learn a high-resolution feature map, reducing the output stride from 32 to 8. However, the dilated convolutions in the last two blocks of the backbone add huge extra computation and leave a large memory footprint. Based on the dilated-convolution backbone, many works [5-7,26] continued to apply different strategies as segmentation heads to acquire context-enhanced feature maps. PSPNet [28] utilized the Spatial Pyramid Pooling (SPP) module to increase the receptive field. EncNet [25] proposed an encoding layer to predict a feature re-weighting vector from the global context and selectively highlight class-dependent feature maps. CFNet [26] exploited an aggregated co-occurrent feature (ACF) module to aggregate co-occurrent context via pair-wise similarities in the feature space. Gated-SCNN [20] proposed to use a new gating mechanism to connect the intermediate layers and a new loss function that exploits the duality between the tasks of semantic segmentation and semantic boundary prediction. DANet [5] proposed to use two attention modules with the self-attention mechanism to aggregate features from the spatial and channel dimensions, respectively. ACNet [6] applied a dilated ResNet as the backbone and combined the encoder-decoder strategy for the observation that the global
Another random document with
no related content on Scribd:
good annealing a piece should never be hotter in one part than in
another, and no part should be hotter than necessary, usually the
medium orange color. Annealing, then, is a slow process
comparatively, and sufficient time should be allowed.
There are many ways of annealing steel, and generally the plan
used is well adapted to the result desired; it is necessary, however,
to consider the end aimed at and to adopt means to accomplish it,
because a plan that is excellent in one case may be entirely
inefficient in another.
Probably the greatest amount of annealing is done in the
manufacture of wire, where many tons must be annealed daily.
For annealing wire sunken cylindrical pits built of fire-bricks are
used usually; the coils of wire are piled up in the cylinders, which are
then covered tightly, and heat is applied through flues surrounding
the cylinders, so that no flame comes in contact with the steel. For all
ordinary uses this method of annealing wire is quick, economical,
and satisfactory. The wire comes out with a heavy scale of oxide on
the surface; this is pickled off in hot acid, and the steel should then
be washed in limewater, then in clean water, and finally dried.
If it be desired to make drill-wire for drills, punches, graving-tools,
etc., this plan will not answer, because under the removable scale
there is left a thin film of decarbonized iron which cannot be pickled
off without ruining the steel, and which will not harden. It is plain that
this soft surface must be ruinous to steel intended for cutting-tools,
for it prevents the extreme edge from hardening—the very place that
must be hard if cutting is to be done.
Tools for drills, lathe-tools, reamers, punches, etc., are usually
annealed in iron boxes, filled in the spaces between the tools with
charcoal; the box is then looted and heated in a furnace adapted to
the work. This is a satisfactory method generally, because the tools
are either ground or turned after annealing, removing any
decarbonized film that may be found; the charcoal usually takes up
all of the oxygen and prevents the formation of heavy scale and
decarbonized surfaces, but it does not do so entirely, and so for
annealing drill-wire this plan is not satisfactory. It is a common
practice in annealing in this way to continue the heating for many
hours, sometimes as many as thirty-six hours, in the mistaken notion
that long-continued heating produces greater softness, and some
people adhere to this plan in spite of remonstrances, because they
find that pieces so annealed will turn as easily as soft cast iron. This
last statement is true; the pieces may be turned in a lathe or cut in
any way as easily as soft cast iron, for the reason that that is exactly
what they are practically. When steel is made properly, the carbon is
nearly all in a condition of complete solution; it is in the very best
condition to harden well and to be enduring.
When steel is heated above the recalescence-point into the
plastic condition, the carbon at once begins to separate out of
solution and into what is known as the graphitic condition. If it be
kept hot long enough, the carbon will practically all take the graphitic
form, and then the steel will not harden properly, and it will not hold
its temper. To illustrate: Let a piece of 90-carbon steel be hardened
and drawn to a light brown temper; it will be found to be almost file
hard, very strong, and capable of holding a fine, keen edge for a long
time.
Next let a part of the same bar be buried in charcoal in a box and
be closed up air-tight, then let it be heated to a medium orange, no
hotter, and be kept at that heat for twelve hours, a common practice,
and then cooled slowly. This piece will be easily cut, and it will
harden very hard, but when drawn to the same light brown as the
other tool a file will cut it easily; it will not hold its edge, and it will not
do good work.
Clearly in this case time and money have been spent merely in
spoiling good material. There is nothing to be gained, and there is
everything to be lost, in long-continued heating of any piece of steel
for any purpose. When it is hot enough, and hot through, get it away
from the fire as quickly as possible.
This method of box-annealing is not satisfactory when applied to
drill-wire, or to long thin strands intended for clock-springs, watch-
springs, etc.
The coils or strands do not come out even; they will be harder in
one part than in another; they will not take an even temper. When
hardened and tempered, some parts will be found to be just right,
and others will have a soft surface, or will not hold a good temper.
The reason of this seems to be a want of uniformity in the conditions:
the charcoal does not take up all of the oxygen before the steel is hot
enough to be attacked, and so a decarbonized surface is formed in
some parts; or it may be that some of the carbon dioxide which is
formed comes in contact with the surface of the steel and takes
another equivalent of carbon from it. Whatever the reaction may be,
the fact is that much soft surface is formed. This soft surface may not
be more than .001 of an inch thick, but that is enough to ruin a
watch-spring or a fine drill.
Again, it seems to be impossible to heat such boxes evenly; it is
manifest that it must take a considerable length of time to heat a
mass of charcoal up to the required temperature, and if the whole be
not so heated some of the steel will not be heated sufficiently; this
will show itself in the subsequent drawing of the wire or rolling of the
strands. On the other hand, if the whole mass be brought up to the
required heat, some of the steel will have come up to the heat
quickly, and will then have been subjected to that heat during the
balance of the operation, and in this way the carbon will be thrown
out of solution partly. This is proven by the fact that strands made in
this way and hardened and tempered by the continuous process will
be hard and soft at regular intervals, showing that one side of the coil
has been subjected to too much heat. This trouble is overcome by
open annealing, which will be described presently.
When steel is heated in an open furnace, there is always a scale
of oxide formed on the surface; this scale, being hard, and of the
nature of sand or of sandstone, grinds away the edges of cutting-
tools, so that, although the steel underneath may be soft and in good
cutting condition, this gritty surface is very objectionable. This trouble
is overcome by annealing in closed vessels; when charcoal is used,
the difficulties just mentioned in connection with wire- and strand-
annealing operate to some extent, although not so seriously,
because the steel is to be machined, removing the surface.
The Jones method of annealing in an atmosphere of gas is a
complete cure for these troubles.
Jones uses ordinary gas-pipes or welded tubes of sizes to suit
the class of work. One end of the tube is welded up solid; the other
end is reinforced by a band upon which a screw-thread is cut; a cap
is made to screw on this end when the tube is charged. A gas-pipe
of about ½-inch diameter is screwed into the solid end, and a hole of
¹/₁₆- to ⅛-inch diameter is drilled in the cap.
When the tube is charged and the cap is screwed on, a hose
connected with a gas-main is attached to the piece of gas-pipe in the
solid end of the tube; the gas-pipe is long enough to project out of
the end of the furnace a foot or so through a slot made in the end of
the furnace for that purpose.
The gas is now turned on and a flame is held near the hole in the
cap until the escaping gas ignites; this shows that the air is driven
out and replaced by gas.
The pipe is now rolled into the furnace and the door is closed, the
gas continuing to flow through the pipe. By keeping the pipe down to
a proper annealing-heat it is manifest that the steel will not be any
hotter than the pipe. By heating the pipe evenly by rolling it over
occasionally the steel will be heated evenly. A little experience will
teach the operator how long it takes to heat through a given size of
pipe and its contents, so that he need not expose his steel to heat
any longer than necessary.
There is not a great quantity of gas consumed in the operation,
because the expanding gas in the tube makes a back pressure, the
vent in the cap being small. This seems to be the perfection of
annealing. A tube containing a bushel or more of bright, polished
tacks will deliver them all perfectly bright and as ductile as lead,
showing that there is no oxidation whatever. Experiments with drill-
rods, with the use of natural gas, have shown that they can be
annealed in this way, leaving the surface perfectly bright, and
thoroughly hard when quenched. This Jones process is patented.
Although the Jones process is so perfect, and necessary for
bright surfaces, its detail is not necessary when a tarnished surface
is not objectionable.
The charcoal difficulty can be overcome also. Let a pipe be made
like a Jones pipe without a hole in the cap or a gas-pipe in the end.
To charge it, first throw a handful of resin into the bottom of the pipe,
then put in the steel, then another handful of resin near the open
end, and screw on the cap. The cap is a loose fit. Now roll the whole
into the furnace; the resin will be volatilized at once, fill the pipe with
carbon or hydrocarbon gases, and unite with the air long before the
steel is hot enough to be attacked.
The gas will cause an outward pressure, and may be seen
burning as it leaks through the joint at the cap. This prevents air from
coming in contact with the steel. This method is as efficient as the
Jones plan as far as perfect heating and easy management are
concerned. It reduces the scale on the surfaces of the pieces,
leaving them a dark gray color and covered with fine carbon or soot.
For annealing blocks or bars it is handier and cheaper than the
Jones plan, but it will not do for polished surfaces. This method is not
patented.

OPEN ANNEALING.
Open annealing, or annealing without boxes or pipes, is practised
wherever there are comparatively few pieces to anneal and where a
regular annealing-plant would not pay, or in a specially arranged
annealing-furnace where drill-wire, clock-spring steel, etc., are to be
annealed.
For ordinary work a blacksmith has near his fire a box of dry lime
or of powdered charcoal. He brings his piece up to the right heat and
buries it in the box, where it may cool slowly. In annealing in this way
it is well not to use blast, because it is liable to force all edges up to
too high a heat and to make a very heavy scale all over the surface.
With a little common-sense and by the use of a little care this way of
annealing is admirable.
It is a common practice where there is a furnace in use in
daytime and allowed to go cold at night to charge the furnace in the
evening, after the fire is drawn, with steel to be annealed, close the
doors and damper, and leave the whole until morning. The furnace
does not look too hot when it is closed up, but no one knows how hot
it will make the steel by radiation: the steel is almost always made
too hot, it is kept hot too long, and so converted into cast iron, and
there is an excessively heavy scale on it.
Many thousands of dollars' worth of good steel are ruined
annually in this way, and it is in every way about the worst method of
annealing that was ever devised.
To anneal wire or thin strands in an open furnace the furnace
should be built with vertical walls about two feet high and then
arched to a half circle. The inports for flame should be vertical and
open into the furnace at the top of the vertical wall; the outports for
the gases of combustion should be vertical and at the same level as
the inports and on the opposite side of the furnace from the inports.
These outflues may be carried under the floor of the furnace to keep
it hot.
The bottom of the door should be at the level of the ports to keep
indraught air away from the steel. The annealing-pot is then the
whole size of the furnace—two feet deep—and closed all around.
The draught should be regulated so that the flame will pass
around the roof, or so nearly so as to never touch the steel, not even
in momentary eddies.
In such a furnace clock-spring wire not more than .01 inch in
diameter, or clock-spring strands not more than .006 to .008 inch
thick and several hundred feet long, may be annealed perfectly. The
steel is scaled of course, but the operation is so quick and so
complete that there is no decarbonized surface under the scale.
This plan is better than the Jones method or any closed method,
because the big boxes necessary to hold the strands or coils cannot
be heated up without in some parts overheating the steel; all of
which is avoided in the open furnace, because by means of peep-
holes the operator can see what he is about, and after a little
practice he can anneal large quantities of steel uniformly and
efficiently.
VIII.
HARDENING AND TEMPERING.

For nearly all structural and machinery purposes steel is used in
the condition in which it comes from the rolls or the forge; in
exceptional cases it is annealed, and in some cases such as for wire
in cables or for bearings in machinery, it is hardened and tempered.
For all uses for tools steel must be hardened, or hardened and
tempered. The operations of hardening and tempering, including the
necessary heating, are the most important, the most delicate, and
the most difficult of all of the manipulations to which steel is
subjected; these operations form an art in themselves where skill,
care, good judgment, and experience are required to produce
reliable and satisfactory results. It is a common idea that all that is
necessary is to heat a piece of steel, quench it in water, brine, or
some pet nostrum, and then warm it to a certain color; these are
indeed the only operations that are necessary, but the way in which
they are done is all-important.
An experienced steel-maker is often amazed at the confidence
with which an ignorant person will put a valuable tool in the fire, rush
the heat up to some bright color, or half a dozen colors at once, and
souse it into the cooling-bath without regard to consequences. That
such work does not always result in disastrous fractures shows that
steel does possess marvellous strength to resist even the worst
disregard of rules and facts.
On the other hand, the beautiful work upon the most delicate and
difficult shapes that is done by one skilled in the art cannot but excite
the surprise and admiration of the onlooker who is familiar with the
physics of steel, and who can appreciate the delicacy of handling
required in the operation.
There are a few simple laws to observe and rules to follow which
will lead to success; they will be stated in this chapter as clearly as
may be, in the hope of giving the reader a good starting-point and a
plain path to follow; but he who would become an expert can do so
only by travelling the road carefully step by step. The hair-spring of a
watch, or a little pinion or pivot, so small that it can only be seen
through a magnifying-glass, the exquisitely engraved die costing
hundreds or thousands of dollars, and the huge armor-plate
weighing many tons, must all be hardened and tempered under
precisely the same laws and in exactly the same way; the only
difference is in the means of getting at it in each case.
Referring now to properties mentioned in the previous chapters,
we have first to heat the piece to the right temperature and then to
cool it in the quickest possible way in order to secure the greatest
hardness and the best grain. In doing this we subject the steel to the
greatest shocks or strains, and great care must be used.
The importance of uniformity in heating for forging and for
annealing has been stated, and it has been shown how an error in
this may be rectified by another and a more careful heating; when it
comes to hardening, this uniformity must be insisted upon and
emphasized, for as a rule an error here has no remedy.
There may be cases of bad work that do not cause actual
fracture that can be remedied by re-heating and hardening, but these
are rare, because even if incurable fracture does not occur the error
is not discovered until the piece has been put to work and its failure
develops the errors of the temperer.
If the error is one of merely too low heat, not producing thorough
hardening, it will generally be discovered by the operator, who will
then try again and possibly succeed; but if the error be of uneven
heat, or too much heat, the probabilities are that it will not be
discovered until the piece fails in work, when it will be too late to
apply any remedy.
Referring to Table I, Chap. V, treating of specific gravities, it is
clear that all steel possesses different specific gravities, due to
differences of temperature, and that these differences of specific
gravity increase as the carbon content increases; it follows that if a
piece of steel be heated unevenly, internal strains must be set up in
the mass, and it is certain that if steel be quenched in this condition
violent strains will be set up, even to the causing of fractures.
The theory of this action, as of all hardening, is involved in
discussion which will be considered later; in this chapter the facts will
be dealt with. When a piece of steel is heated, no matter how
unevenly or to what temperature below actual granulation, and is
allowed to cool slowly and without disturbance, it will not break or
crack under the operation. If a piece be heated as unevenly as, say,
medium orange in one part and medium lemon in another, and is
then quenched, it will be almost certain to crack if it contains enough
carbon to harden at all in the common acceptance of the term, that is
to say, file hard or having carbon 40 or higher.
This fact is too well known to be open to discussion; therefore the
quenching of hot steel, the operation of hardening, does set up
violent strains in steel, no matter what the true theory of hardening
may be.
Referring to Chap. V, to the series of squares representing the
apparent sizes of grain due to different temperatures, similar results
follow from hardening, with the exceptions that the different
structures are far more plainly marked, and the squares should be
arranged a little differently; they are shown as continuously larger in
Chap. V, from the grain of the cold bar up to the highest
temperature; this is true if a bar has been rolled or hammered
properly into a fine condition of grain. Of course if a bar be finished
at, say, medium orange it will have a grain due to that heat—No. 3 in
the series of squares. Then if it be heated to dark orange and cooled
from that heat it will take on a grain corresponding to square No. 2,
and No. 1 square will be eliminated.
The series of squares to represent hardened grain will be as
follows:
The heat colors being the same as before, viz.:
1. The natural bar—untreated.
2. Quenched at dark orange or orange red.
3. Quenched at medium orange—refined.
4. Quenched at bright orange.
5. Quenched at dark lemon.
6. Quenched at medium lemon.
7. Quenched at bright lemon.
8. Quenched at very bright lemon or creamy.

Heats 6, 7, 8 will almost invariably produce cracks although the
pieces be evenly heated.
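The quenching series above amounts to a lookup from heat number to color and result; a minimal sketch in Python (the numbering, the colors, and the crack warning for heats 6, 7, 8 are taken from the text; the dictionary encoding itself is my own illustration):

```python
# Quenching-heat series from the text: heat number -> (color, note).
quench_series = {
    1: ("natural bar", "untreated"),
    2: ("dark orange or orange red", "quenched"),
    3: ("medium orange", "refined"),
    4: ("bright orange", "quenched"),
    5: ("dark lemon", "quenched"),
    6: ("medium lemon", "liable to crack"),
    7: ("bright lemon", "liable to crack"),
    8: ("very bright lemon or creamy", "liable to crack"),
}

# Heats 6-8 almost invariably produce cracks even with even heating.
cracking_heats = [n for n, (_, note) in quench_series.items()
                  if note == "liable to crack"]
print(cracking_heats)  # -> [6, 7, 8]
```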
These squares do not represent absolute structures with marked
divisions; they are only the steps on an incline, like the temper
numbers in the carbon series; thus, the carbon-line is continuous,
but the temper divisions represent steps up the incline. So with the
series of squares, the changes of grain or structure are continuous,
as represented by the doubly inclined line; the squares being only
the steps to indicate easily observed divisions. The minuteness of
the changes is illustrated by the fact that in a piece heated
continuously from creamy to dark orange and quenched, differences
of grain have been observed unmistakably on opposite sides of
pieces broken off not more than ⅛ inch thick.
In practice the differences due to the colors given in the list above
are as plain and surely marked as are the differences in the structure
of ingots due to the different temper carbons already described.
In this hardened series each carbon temper gives its own
peculiar grain; in low steel, say 40 carbon compared to 1.00 carbon
or higher, No. 3 will be larger and No. 8 will be smaller in the low
temper than in the high—another illustration of the fact that low steel
is more inert to the action of heat than high steel. All grades and all
tempers go through the same changes, but they are more marked in
the high than in the low steel.
The grain of hardened steel is affected by the presence of silicon,
phosphorus, and manganese, and doubtless by any other
ingredients, these three being the most common.
It is in the grain of hardened steel that the conditions described in
Chap. V as “sappy,” “dry,” and “fiery” are the most easily and
frequently observed, although the same conditions obtain in
unhardened steel in a manner that is useful to an observing steel-
user. But it is in this hardened condition that the excellences or
defects of steel are brought out and emphasized.
When a piece of steel is heated continuously from “creamy,” or
scintillating, down to black, or unheated, and is then quenched, the
grain will be found to be coarsest, hardest, and most brittle at the
hottest end, and with the brightest lustre, even to brilliancy, and to
become finer down to a certain point, noted as No. 3 in the series of
squares, or at a heat which shows about a medium orange color;
here the grain becomes exceedingly fine, and here the steel is found
to be the strongest and to be without lustre. Below this heat the grain
appears coarser and the steel is less hard, until the grain and
condition of the unheated part are reached. This fine condition,
known as the refined condition, is very remarkable. It is the condition
to be aimed at in all hardening operations, with one or two
exceptions which will be noted, because in this state steel is at its
best; it is strongest then, and it would seem to be clear without
argument that the finest and strongest grain will best hold
a fine cutting-edge, and will do the most work with the least wear,
although a coarser grain may be a little harder, the coarser and more
brittle condition of the latter more than counterbalancing its superior
hardness.
The advantages of this refined condition are so great that it is
found to be well to harden and refine mild-steel dies, and battering-
and cutting-tools that are to be used for hot work, although the heat
will draw out all of the temper in the first few minutes, because the
superior strength of the fine grain will enable the tool to do twice to
twenty times more work than an unhardened tool.
The refining-heat, like most other properties, varies with the
carbon; the medium orange given is the proper heat for normal tool-
steel of from about 90 to 110 carbon. Steel of 150 carbon will refine
at about a dark orange, and steel of 50 to 60 carbon will require
about a bright orange to refine it.
This range is small, but it must be observed and worked to if the
best results are desired.
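The shifting of the refining-heat with carbon can be sketched as a rough lookup. The three ranges are those stated above; the cut-off points between them are interpolated assumptions, not figures from the text:

```python
def refining_color(carbon_points):
    """Approximate refining-heat color for a given carbon content.

    Carbon is given in 'points' (hundredths of a per cent), as in the
    text: 90 to 110 refines at medium orange, 150 at dark orange, and
    50 to 60 at bright orange.  The boundaries between the ranges are
    assumed midpoints, not stated values.
    """
    if carbon_points >= 130:   # assumed boundary between 110 and 150
        return "dark orange"
    if carbon_points >= 75:    # assumed boundary between 60 and 90
        return "medium orange"
    return "bright orange"

print(refining_color(100))  # -> medium orange (normal tool-steel)
```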
A color-blind person can never learn to harden steel properly.
In studying this phenomenon of refining, the conclusion was
reached that it occurred at or immediately above the temperature
that broke up the crystalline condition of cold steel and brought it
fairly into the second, the plastic condition. Farther observation led to
the conclusion that the coarser grain and greater hardness caused
by higher heats were due to the gradual change from plastic toward
granular condition that takes place as the heat increases. Later
investigations have given no reason for changing these conclusions.
When the phenomenon of recalescence was observed and
investigated by Osmond and others, different theories were
advanced in explanation.
Langley concluded that if recalescence occurred at the change
from a plastic to a crystalline condition, then the heat absorbed and
again set free during such changes would account for the visible
phenomenon of recalescence.
Again, if it should prove that recalescence occurred at the refining
point, the conjunction of these phenomena would indicate strongly,
first, that refining does occur at the point where this change of
structure is complete in the reverse order, from crystalline to plastic;
and second, the first being true, recalescence would be explained as
stated, as indicating the inevitable absorption and emission of heat
due to such a change.
Langley fitted up an electric apparatus for heating steel, in a box
so placed that the light was practically uniform, that is, so that bright
sunlight, or a cloudy sky, or passing clouds would not affect seriously
the observation of heat-colors.
Pieces of steel were heated far above recalescence, up to bright
lemon, and then allowed to cool slowly; in this way recalescence was
shown clearly.
It was found to occur at the refining heat in every case, shifting
for different carbons just as the refining heat shifts.
Immediately under the pieces being observed was a vessel of
water into which the pieces could be dropped and quenched. After
observing the heating and cooling until the eye was well trained,
pieces were quenched at different heats and the results were noted.
It was found that in the ascending heats no great hardness was
produced until the recalescence heat was reached or passed
slightly; and in the descending heat excessive hardening occurred at
a little below the recalescent heat, although no such hardening
occurred at that color during ascending heats. This apparent
anomaly is due simply to lag. If, in ascending, the piece be held for a
few moments at the recalescent point, no increase being allowed,
and then it be quenched, it will harden thoroughly and be refined. If,
in descending, the cooling be arrested at a little below the
recalescence for a few moments, neither increase nor decrease
being allowed, and then the piece be quenched, it will not harden
any better than if it be quenched immediately upon reaching the
same heat in ascending.
Time must be allowed for the changes to take place, and lag
must be provided for.
These experiments show that refining and recalescence take
place at the same temperature.

AS TO HARDNESS.
Prof. J. W. Langley showed by specific-gravity determinations that
steel quenched from 212° F. in water at 60° F. exhibited the
hardening effect of such quenching, the difference of temperature
being only 152° F.
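The differential in Langley's low-temperature experiment is simple arithmetic, and can be checked directly; a trivial sketch:

```python
# J. W. Langley's test: steel quenched from 212 deg F (boiling water)
# into water at 60 deg F still shows a hardening effect.
steel_temp_f = 212.0
bath_temp_f = 60.0
differential_f = steel_temp_f - bath_temp_f
print(differential_f)  # -> 152.0, the difference of temperature cited
```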
Prof. S. P. Langley, of the Smithsonian, proved the same to be
true by delicate electrical tests, and these again were confirmed by
Prof. J. W. Langley in the laboratory of the Case School of Applied Science.
A piece of refined steel will rarely be hard enough to scratch
glass. A piece of steel quenched from creamy heat will almost
always scratch glass. The maximum hardness is produced by the
highest heat, or when temperature minus cold is a maximum; the
least hardness is found by quenching at the lowest heat above the
cooling medium, or when temperature minus cold is a minimum—the
time required to quench being a minimum in both cases.
What occurs between these limits? Is the curve of hardness a
straight line, or an irregular line?
Let a piece of steel be heated as uniformly as possible from a
creamy heat at one end to black at the other, and then be quenched.
Now take a newly broken hard file and draw its sharp corner
gently and firmly over the piece, beginning at the black-heated end.
The file will take hold, and as it is drawn along it will be felt that the
piece becomes slightly harder as the file advances, until suddenly it
will slip, and no amount of pressure will make it take hold above that
point. The piece has become suddenly file hard.
Next try the same thing with a diamond; the diamond will cut
easily until the point is reached where the file slipped, then there will
be found a great increase of hardness.
From this point it is observed readily by the action of the diamond
that there is a gradual increase of hardness from the hump to the
creamy-heated end of the piece. Attempts were made to measure this
curve of hardness
by putting a load on the diamond and dragging it over the piece; but
no diamond obtainable would bear a load heavy enough to produce
a groove that could be measured accurately by micrometer. An
examination of such a groove through a strong magnifying-glass
revealed the conditions plainly; the groove of hardness may be
illustrated on an exaggerated scale; thus:

The next question was, Where does this hump occur, and what is
the cause of it?
Careful observation showed that it occurred at the point of
recalescence, at the refining-point. This word point must not be
taken as space without dimension in this connection; it is used in the
common sense of at or adjacent to a given place. There is of course
a small allowable range of temperature above any given exact point
of recalescence, such as 655° C. or 1211° F.
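The paired recalescence figures quoted above can be verified with the standard Celsius-to-Fahrenheit conversion:

```python
def celsius_to_fahrenheit(c):
    """Standard conversion: F = C * 9/5 + 32."""
    return c * 9.0 / 5.0 + 32.0

# The recalescence point quoted in the text: 655 deg C.
print(celsius_to_fahrenheit(655))  # -> 1211.0 deg F, as stated
```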
By superimposing Langley’s curves of cooling and of hardening
(see Trans. Am. Soc. Civ. Eng., Vol. XXVII, p. 403), the relation
between recalescence and the hardening-hump is obvious.
It is safe to say that experience proves that the refined condition
is the best for all cutting-tools of every shape and form.
It seems to be obvious; the steel is then in its strongest condition,
and when the grain is finest, the crystals the smallest, a fine edge
should be the most enduring, because there is a more intimate
contact between the particles. That a steel will refine well, and be
strong in that condition is the steel-maker’s final test of quality.
No steel-maker who has a proper regard for the character of his
product will accept raw material upon mere analysis; analysis is of
the utmost importance, for material for steel-making must be of a
quality that will produce a certain quality of steel, or the result will be
an inferior product. This applies to acid Bessemer and open-hearth,
and to crucible-steel especially; the basic processes admit of a
reduction of phosphorus not obtainable in the others.
In making fine-tool steel a bad charge in the pot inevitably means
a bad piece of steel. It may happen also that an iron of apparently
good analysis will not produce a really fine steel; then there must be
a search for unusual elements, such as copper, arsenic, antimony,
etc., or for dirt, left in the iron by careless working. The refining-test
then is as necessary as analysis, for if steel will not refine thoroughly
it will not make good tools. Battering-tools, such as sledges,
hammers, flatters, etc., should be refined carefully, for although their
work is mainly compressive they are liable to receive, and do get,
blows on the corners and edges that would ruin them if they were not
in the strongest condition possible.
The reasons for refining hot-working tools have been stated
already. Engraved dies for use in drop-presses where they are
subjected to heavy blows are undoubtedly in the most durable
condition when they are refined, but they are subjected not only to
impact, but to enormous compression, and therefore they must be
hardened deeply. When a die-block is heated so as to refine, and
then is quenched, it hardens perfectly on the surface and not very
deeply, and it is quite common in such a case to see a die crushed
by a few blows: the hardened part is driven bodily into the soft steel
below it, and the die is ruined; thus:

To avoid this, such a die should be heated to No. 5, or a dark
lemon, and quenched suddenly in a large volume of rushing water.
It will then have the enormous resistance to compression that is
so well known in very hard steel, and it will be hardened so deeply
that the blow of the hammer will not crush through the hard part. This
is the best condition, too, of an armor-plate that is to resist the
impact of a projectile.
It will be brittle, a light blow of a hammer will snip the corners, but
it cannot be crushed by ordinary work. Dies made in this way have
turned out thousands of gross of stamped pieces, showing no
appreciable wear.
To harden a die in this way is a critical operation, because the
strains are so enormous that a very trifling unevenness in the heat
will break the piece, but the skill of expert temperers is so great that
they will harden hundreds of dies in this way and not lose one if the
steel be sound.

HEATING FOR HARDENING.


A smith can heat an occasional piece for hardening in his
ordinary fire by using care and taking a little time. Where there are
many pieces to be hardened, special furnaces should be used.
For thousands of little pieces, such as saw-teeth or little springs,
a large furnace with a brick floor, and so arranged that the flame will
not impinge on the pieces, is good.
The operator can watch the pieces, and as soon as any come to
the right color he can draw them out, letting them drop into the
quenching-tank, which should be right under the door or close at
hand.
For twist-drills, reamers, etc., a lead bath, or a bath of melted salt
and soda, is used. The lead bath is the best if care be taken to draw
off the fumes so as not to poison the heaters. Because a bath of this
kind is of exactly the right color at the top, it is not to be assumed that
pieces can be heated in it and hardened without further attention.
Thousands of tools are ruined, and thousands of dollars are
thrown away annually, by unobserving men who assume that
because a lead bath appears to be exactly the right color at the
surface it is therefore just right.
A surface of dark orange color may have underneath it an
increasingly higher temperature, up to a bright lemon at the bottom,
and tools heated in such a bath will have all of the varying
temperatures of the bath; then cracked tools, twisted tools, brittle
