Andrea Vedaldi
Horst Bischof
Thomas Brox
Jan-Michael Frahm (Eds.)
LNCS 12371
Computer Vision –
ECCV 2020
16th European Conference
Glasgow, UK, August 23–28, 2020
Proceedings, Part XXVI
Lecture Notes in Computer Science 12371
Founding Editors
Gerhard Goos
Karlsruhe Institute of Technology, Karlsruhe, Germany
Juris Hartmanis
Cornell University, Ithaca, NY, USA
Editors
Andrea Vedaldi, University of Oxford, Oxford, UK
Horst Bischof, Graz University of Technology, Graz, Austria
Thomas Brox, University of Freiburg, Freiburg im Breisgau, Germany
Jan-Michael Frahm, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
Hosting the European Conference on Computer Vision (ECCV 2020) was certainly an
exciting journey. From the 2016 plan to hold it at the Edinburgh International
Conference Centre (hosting 1,800 delegates) to the 2018 plan to hold it at Glasgow’s
Scottish Exhibition Centre (up to 6,000 delegates), we finally ended with moving
online because of the COVID-19 outbreak. While possibly having fewer delegates than
expected because of the online format, ECCV 2020 still had over 3,100 registered
participants.
Although online, the conference delivered most of the activities expected at a
face-to-face conference: peer-reviewed papers, industrial exhibitors, demonstrations,
and messaging between delegates. In addition to the main technical sessions, the
conference included a strong program of satellite events with 16 tutorials and 44
workshops.
Furthermore, the online conference format enabled new conference features. Every
paper had an associated teaser video and a longer full presentation video. Along with
the papers and slides from the videos, all these materials were available the week before
the conference. This allowed delegates to become familiar with the paper content and
be ready for the live interaction with the authors during the conference week. The live
event consisted of brief presentations by the oral and spotlight authors and industrial
sponsors. Question and answer sessions for all papers were timed to occur twice so
delegates from around the world had convenient access to the authors.
As with ECCV 2018, authors’ draft versions of the papers appeared online with
open access, now on both the Computer Vision Foundation (CVF) and the European
Computer Vision Association (ECVA) websites. An archival publication arrangement
was put in place with the cooperation of Springer. SpringerLink hosts the final version
of the papers with further improvements, such as activating reference links and sup-
plementary materials. These two approaches benefit all potential readers: a version
available freely for all researchers, and an authoritative and citable version with
additional benefits for SpringerLink subscribers. We thank Alfred Hofmann and
Aliaksandr Birukou from Springer for helping to negotiate this agreement, which we
expect will continue for future versions of ECCV.
Papers were assigned to area chairs based on an affinity score computed by the Toronto Paper Matching System (TPMS), which uses the paper's full text, together with the area chair bids for individual papers, load balancing, and conflict avoidance. Open Review provides the program chairs a convenient web interface to experiment with different configurations of the matching algorithm. The chosen configuration resulted in about 50% of the assigned papers being highly ranked by the area chair bids and 50% being ranked in the middle, with very few low bids assigned.
Assignments to reviewers were similar, with two differences. First, each reviewer was assigned a maximum of 7 papers. Second, area chairs recommended up to seven reviewers per paper, providing another highly-weighted term in the affinity scores used for matching.
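The constraints described above (affinity scores, conflict avoidance, a reviewer load cap) can be illustrated with a small sketch. Everything here is a simplification: the names and scores are hypothetical, and Open Review's actual matcher solves a global optimization rather than assigning greedily.

```python
# Illustrative greedy paper-to-reviewer assignment. This is NOT the
# algorithm Open Review uses (which solves a global optimization); it
# only demonstrates the constraints mentioned in the text: affinity
# scores, conflict avoidance, and a maximum reviewer load.

MAX_LOAD = 7  # maximum number of papers per reviewer, as in the text

def assign_reviewers(papers, reviewers, affinity, conflicts, per_paper=3):
    """affinity: dict mapping (paper, reviewer) -> score.
    conflicts: set of (paper, reviewer) pairs that must be avoided."""
    load = {r: 0 for r in reviewers}
    assignment = {}
    for p in papers:
        eligible = [r for r in reviewers
                    if (p, r) not in conflicts and load[r] < MAX_LOAD]
        # take the highest-affinity eligible reviewers for this paper
        chosen = sorted(eligible, key=lambda r: affinity[(p, r)],
                        reverse=True)[:per_paper]
        for r in chosen:
            load[r] += 1
        assignment[p] = chosen
    return assignment
```

A real matcher would trade off all papers jointly, which is why a greedy pass like this can leave later papers with poorly matched reviewers even when a better global assignment exists.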
The assignment of papers to area chairs was smooth. However, it was more difficult to find suitable reviewers for all papers. The ratio of 5.6 papers per reviewer, against a maximum load of 7 (reduced in practice by emergency reviewer commitments), did not allow much wiggle room to also satisfy conflict and expertise constraints. We
received some complaints from reviewers who did not feel qualified to review specific
papers and we reassigned them wherever possible. However, the large scale of the
conference, the many constraints, and the fact that a large fraction of such complaints
arrived very late in the review process made this process very difficult and not all
complaints could be addressed.
Reviewers had six weeks to complete their assignments. Possibly due to COVID-19
or the fact that the NeurIPS deadline was moved closer to the review deadline, a record
30% of the reviews were still missing after the deadline. By comparison, ECCV 2018
experienced only 10% missing reviews at this stage of the process. In the subsequent
week, area chairs chased the missing reviews intensely, found replacement reviewers in
their own teams, and brought the figure down to 10% missing reviews. Eventually, through significant use of emergency reviews, we could provide almost all reviews (more than 99.9%) with a delay of only a couple of days on the initial schedule. If this trend is confirmed,
it might be a major challenge to run a smooth review process in future editions of
ECCV. The community must reconsider how it prioritizes the time spent on paper writing (the number of submissions increased considerably despite COVID-19) against the time spent on reviewing (the number of reviews delivered on time decreased sharply, presumably due to COVID-19 or the NeurIPS deadline). With this imbalance, the peer-review system that ensures the quality of our top conferences may soon break.
Reviewers submitted their reviews independently. In the reviews, they had the
opportunity to ask questions to the authors to be addressed in the rebuttal. However,
reviewers were told not to request any significant new experiment. Using the Open
Review interface, authors could provide an answer to each individual review, but were
also allowed to cross-reference reviews and responses in their answers. Rather than
PDF files, we allowed the use of formatted text for the rebuttal. The rebuttal and initial
reviews were then made visible to all reviewers and the primary area chair for a given
paper. The area chair encouraged and moderated the reviewer discussion. During the
discussions, reviewers were invited to reach a consensus and possibly adjust their
ratings as a result of the discussion and of the evidence in the rebuttal.
After the discussion period ended, most reviewers entered a final rating and rec-
ommendation, although in many cases this did not differ from their initial recom-
mendation. Based on the updated reviews and discussion, the primary area chair then
made a preliminary decision to accept or reject the paper and wrote a justification for it
(meta-review). Except for cases where the outcome of this process was absolutely clear (as indicated by the three reviewers and the primary area chair all recommending clear rejection), the decision was then examined and potentially challenged by a secondary
area chair. This led to further discussion and overturning a small number of preliminary
decisions. Needless to say, there was no in-person area chair meeting, which would
have been impossible due to COVID-19.
Area chairs were invited to observe the consensus of the reviewers whenever
possible and use extreme caution in overturning a clear consensus to accept or reject a
paper. If an area chair still decided to do so, she/he was asked to clearly justify it in the
meta-review and to explicitly obtain the agreement of the secondary area chair. In
practice, very few papers were rejected after being confidently accepted by the
reviewers.
This was the first time Open Review was used as the main platform to run ECCV. In
2018, the program chairs used CMT3 for the user-facing interface and Open Review
internally, for matching and conflict resolution. Since it is clearly preferable to only use
a single platform, this year we switched to using Open Review in full. The experience
was largely positive. The platform is highly-configurable, scalable, and open source.
Because the platform is written in Python, it is easy to write scripts to extract data programmatically. The
paper matching and conflict resolution algorithms and interfaces are top-notch, also due
to the excellent author profiles in the platform. Naturally, there were a few kinks along
the way due to the fact that the ECCV Open Review configuration was created from
scratch for this event and it differs in substantial ways from many other Open Review
conferences. However, the Open Review development and support team did a fantastic
job in helping us to get the configuration right and to address issues in a timely manner
as they unavoidably occurred. We cannot thank them enough for the tremendous effort
they put into this project.
Finally, we would like to thank everyone involved in making ECCV 2020 possible
in these very strange and difficult times. This starts with our authors, followed by the
area chairs and reviewers, who ran the review process at an unprecedented scale. The
whole Open Review team (and in particular Melisa Bok, Mohit Unyal, Carlos
Mondragon Chapa, and Celeste Martinez Gomez) worked incredibly hard for the entire
duration of the process. We would also like to thank René Vidal for contributing to the
adoption of Open Review. Our thanks also go to Laurent Charlin for TPMS and to the
program chairs of ICML, ICLR, and NeurIPS for cross checking double submissions.
We thank the website chair, Giovanni Farinella, and the CPI team (in particular Ashley
Cook, Miriam Verdon, Nicola McGrane, and Sharon Kerr) for promptly adding
material to the website as needed in the various phases of the process. Finally, we thank
the publication chairs, Albert Ali Salah, Hamdi Dibeklioglu, Metehan Doyran, Henry
Howard-Jenkins, Victor Prisacariu, Siyu Tang, and Gul Varol, who managed to
compile these substantial proceedings in an exceedingly compressed schedule. We
express our thanks to the ECVA team, in particular Kristina Scherbaum for allowing
open access of the proceedings. We thank Alfred Hofmann from Springer, who again served as the publisher. Finally, we thank the other chairs of ECCV 2020, including in
particular the general chairs for very useful feedback with the handling of the program.
General Chairs
Vittorio Ferrari Google Research, Switzerland
Bob Fisher University of Edinburgh, UK
Cordelia Schmid Google and Inria, France
Emanuele Trucco University of Dundee, UK
Program Chairs
Andrea Vedaldi University of Oxford, UK
Horst Bischof Graz University of Technology, Austria
Thomas Brox University of Freiburg, Germany
Jan-Michael Frahm University of North Carolina, USA
Poster Chair
Stephen McKenna University of Dundee, UK
Technology Chair
Gerardo Aragon Camarasa University of Glasgow, UK
Tutorial Chairs
Carlo Colombo University of Florence, Italy
Sotirios Tsaftaris University of Edinburgh, UK
Publication Chairs
Albert Ali Salah Utrecht University, The Netherlands
Hamdi Dibeklioglu Bilkent University, Turkey
Metehan Doyran Utrecht University, The Netherlands
Henry Howard-Jenkins University of Oxford, UK
Victor Adrian Prisacariu University of Oxford, UK
Siyu Tang ETH Zurich, Switzerland
Gul Varol University of Oxford, UK
Website Chair
Giovanni Maria Farinella University of Catania, Italy
Workshops Chairs
Adrien Bartoli University of Clermont Auvergne, France
Andrea Fusiello University of Udine, Italy
Area Chairs
Lourdes Agapito University College London, UK
Zeynep Akata University of Tübingen, Germany
Karteek Alahari Inria, France
Antonis Argyros University of Crete, Greece
Hossein Azizpour KTH Royal Institute of Technology, Sweden
Joao P. Barreto Universidade de Coimbra, Portugal
Alexander C. Berg University of North Carolina at Chapel Hill, USA
Matthew B. Blaschko KU Leuven, Belgium
Lubomir D. Bourdev WaveOne, Inc., USA
Edmond Boyer Inria, France
Yuri Boykov University of Waterloo, Canada
Gabriel Brostow University College London, UK
Michael S. Brown National University of Singapore, Singapore
Jianfei Cai Monash University, Australia
Barbara Caputo Politecnico di Torino, Italy
Ayan Chakrabarti Washington University, St. Louis, USA
Tat-Jen Cham Nanyang Technological University, Singapore
Manmohan Chandraker University of California, San Diego, USA
Rama Chellappa Johns Hopkins University, USA
Liang-Chieh Chen Google, USA
1 Introduction
Semantic segmentation or scene parsing is the task of assigning one of the pre-
defined class labels to each pixel of an input image. It is a fundamental yet chal-
lenging task in computer vision. The Fully Convolutional Network (FCN) [15],
© Springer Nature Switzerland AG 2020
A. Vedaldi et al. (Eds.): ECCV 2020, LNCS 12371, pp. 1–17, 2020.
https://doi.org/10.1007/978-3-030-58574-7_1
J. Liu et al.
[Figure 1: four architecture diagrams, (a)–(d), drawn as stacks of Root/Block1/Block2/Block3 stages (each with stride 2) at output strides from OS 2 down to OS 32, plus bilinear upsampling, codebook generation, codeword assembly, and multiply operations.]
Fig. 1. Different architectures for semantic segmentation. (a) The original FCN with output stride (OS) = 32. (b) DilatedFCN-based methods sacrifice efficiency and exploit dilated convolution with dilation rates 2 and 4 in the last two stages to generate high-resolution feature maps. (c) Encoder-decoder methods employ the U-Net structure to recover high-resolution feature maps. (d) Our proposed EfficientFCN with codebook generation and codeword assembly for high-resolution feature upsampling in semantic segmentation.
as shown in Fig. 1(a), for the first time demonstrates the success of exploiting a fully
convolutional network in semantic segmentation, which adopts a DCNN as the fea-
ture encoder (i.e., ResNet [9]) to extract high-level semantic feature maps and then
applies a convolution layer to generate the dense prediction. For semantic segmentation, high-resolution feature maps are critical for achieving accurate segmentation performance, since they contain the fine-grained structural information needed to delineate the detailed boundaries of various foreground regions. In addition, due to the lack of large-scale training data for semantic segmentation, transferring weights pre-trained on ImageNet can greatly improve segmentation performance. Therefore, most state-of-the-art semantic segmentation methods adopt classification networks as the backbone to take full advantage of ImageNet pre-training. The
resolution of feature maps in the original classification model is reduced with con-
secutive pooling and strided convolution operations to learn high-level feature rep-
resentations. The output stride of the final feature map is 32 (OS = 32), where
the fine-grained structural information is discarded. Such low-resolution feature
maps cannot fully meet the requirements of semantic segmentation, where detailed spatial information is needed. To tackle this problem, many works exploit dilated
convolution (or atrous convolution) to enlarge the receptive field (RF) while main-
taining the resolution of high-level feature maps. State-of-the-art dilatedFCN
based methods [2,8,24–26] (shown in Fig. 1(b)) have demonstrated that removing
the downsampling operation and replacing convolution with the dilated convolu-
tion in the later blocks can achieve superior performance, resulting in final feature
maps of output stride 8 (OS = 8). Despite the superior performance and no extra
parameters introduced by dilated convolution, the high-resolution feature repre-
sentations require high computational complexity and memory consumption. For
instance, for a 512 × 512 input image with ResNet101 as the backbone encoder, the computational complexity of the encoder increases from 44.6 GFlops to 223.6 GFlops when dilated convolutions with dilation rates 2 and 4 are adopted in the last two blocks.
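The output-stride arithmetic behind these numbers is easy to check: the output stride is the product of strides along the downsampling path, and removing a stride (replacing it with dilation) multiplies the spatial area, and hence roughly the cost, of every following layer. The stride layout below is a generic ResNet-style assumption, not the paper's exact configuration:

```python
# Output stride (OS) is the cumulative product of strides. Removing the
# strides of the last two blocks keeps OS = 8, but every convolution in
# those blocks then runs on a 4x or 16x larger grid -- roughly the
# source of the 44.6 -> 223.6 GFlops increase quoted above.
# The stride layout is a generic ResNet-style assumption.

def output_stride(strides):
    os_ = 1
    for s in strides:
        os_ *= s
    return os_

# stem conv (2), max-pool (2), then four residual blocks
classification = [2, 2, 1, 2, 2, 2]   # standard classification backbone
dilated_fcn    = [2, 2, 1, 2, 1, 1]   # last two strides removed

os_cls = output_stride(classification)   # 32
os_dil = output_stride(dilated_fcn)      # 8

# area (and per-layer cost) multiplier for the two dilated blocks
area_block4 = (16 // 8) ** 2   # block 4 runs at OS 8 instead of OS 16
area_block5 = (32 // 8) ** 2   # block 5 runs at OS 8 instead of OS 32
```

Since the deepest blocks also have the widest channels, inflating their spatial area dominates the total cost, which is consistent with the roughly fivefold GFlops increase reported above.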
Alternatively, as shown in Fig. 1(c), encoder-decoder based methods (e.g., [18]) use a decoder to gradually upsample and generate the high-resolution feature maps by aggregating multi-level feature representations from
the backbone (or the encoder). These encoder-decoder based methods can obtain
high-resolution feature representations efficiently. However, on one hand, the fine-
grained structural details are already lost in the topmost high-level feature maps of
OS = 32. Even with skip connections, lower-level high-resolution feature maps cannot provide sufficiently abstract features for achieving high-performance segmentation. On the other hand, existing decoders mainly utilize bilinear upsampling or deconvolution operations to increase the resolution of the high-level feature maps. These operations are conducted in a local manner: the feature vector at each location of the upsampled feature maps is recovered from a limited receptive field. Thus, although the encoder-decoder models are generally faster and more
memory friendly than dilatedFCN based methods, their performances generally
cannot compete with those of the dilatedFCN models.
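The locality argument can be made concrete with a minimal bilinear upsampler: each output value is a weighted average of at most four neighboring input values, so no amount of upsampling injects information from outside that small window. This sketch uses align-corners sampling; frameworks offer both that convention and an alternative.

```python
# Minimal bilinear upsampling of a 2D grid, illustrating the "local"
# nature described in the text: every upsampled value is a weighted
# average of at most 4 neighboring input values (align-corners style).

def bilinear_upsample(grid, factor):
    h, w = len(grid), len(grid[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # map output coordinates back to input coordinates
            y = i * (h - 1) / (H - 1)
            x = j * (w - 1) / (W - 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i][j] = (grid[y0][x0] * (1 - dy) * (1 - dx)
                         + grid[y0][x1] * (1 - dy) * dx
                         + grid[y1][x0] * dy * (1 - dx)
                         + grid[y1][x1] * dy * dx)
    return out

up = bilinear_upsample([[0.0, 1.0], [2.0, 3.0]], 2)
```

Every interpolated value lies within the range of its four neighbors, which is exactly why such decoders cannot reconstruct detail that was discarded by the encoder.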
To tackle the challenges in both types of models, we propose the Efficien-
FCN (as shown in Fig. 1(d)) with the Holistically-guided Decoder (HGD) to
bridge the gap between the dilatedFCN based methods and the encoder-decoder
based methods. Our network can adopt any widely used classification model without dilated convolution as the encoder (such as ResNet models) to generate low-resolution high-level feature maps (OS = 32). Such an encoder is both more computationally and more memory efficient than those in dilatedFCN models. Given the multi-level feature maps from the last three blocks of the encoder, the proposed
holistically-guided decoder takes advantage of both the high-level but low-resolution (OS = 32) feature maps and the mid-level, higher-resolution feature maps (OS = 8, OS = 16) to achieve high-level feature upsampling with semantics-rich features.
Intuitively, the higher-resolution feature maps contain more fine-grained struc-
tural information, which is beneficial for spatially guiding the feature upsam-
pling process; the lower-resolution feature maps contain more high-level seman-
tic information, which are more suitable to encode the global context effectively.
Our HGD therefore generates a series of holistic codewords in a codebook to
summarize different global and high-level aspects of the input image from the
low-resolution feature maps (OS = 32). Those codewords can be properly assem-
bled in a high-resolution grid to form the upsampled feature maps with rich
semantic information. Following this principle, the HGD generates assembly
coefficients from the mid-level high-resolution feature maps (OS = 8, OS = 16)
to guide the linear assembly of the holistic codewords at each high-resolution
spatial location to achieve feature upsampling. Our proposed EfficientFCN with the holistically-guided decoder achieves high segmentation accuracy on three popular public benchmarks, which demonstrates the efficiency and effectiveness of our proposed decoder.
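The two-step decoding just described (summarize globally, then reassemble locally) can be sketched in a few lines. The shapes, the softmax pooling, and the absence of learned projections are my simplifications: the paper's HGD additionally learns the projections that produce the weighting logits and the assembly coefficients.

```python
# Sketch of holistic codeword generation and assembly. Shapes and the
# plain softmax pooling are assumptions for illustration; the actual
# HGD learns the projections that produce logits and coefficients.

import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def generate_codewords(low_res_feats, weight_logits):
    """Pool N low-resolution (OS = 32) feature vectors into a codebook:
    each codeword is a softmax-weighted average over all spatial
    locations, so it summarizes one global aspect of the image."""
    n, d = len(low_res_feats), len(low_res_feats[0])
    codebook = []
    for logits in weight_logits:           # one logit row per codeword
        w = softmax(logits[:n])
        codebook.append([sum(w[i] * low_res_feats[i][j] for i in range(n))
                         for j in range(d)])
    return codebook

def assemble(coefficients, codebook):
    """Each high-resolution location linearly assembles the codewords
    using its own coefficient vector (predicted from mid-level,
    higher-resolution features in the actual model)."""
    d = len(codebook[0])
    return [[sum(c[k] * codebook[k][j] for k in range(len(codebook)))
             for j in range(d)]
            for c in coefficients]
```

Because each output location mixes globally pooled codewords rather than interpolating its local neighbors, the upsampled features can carry semantics from anywhere in the image, which is the key contrast with bilinear or deconvolution decoders.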
In summary, our contributions are as follows.
2 Related Work
In this section, we review recent FCN-based methods for semantic segmentation.
Since the successful demonstration of FCN [15] on semantic segmentation, many methods have been proposed to improve the performance of FCN-based methods, which mainly fall into two categories: dilatedFCN-based methods and encoder-decoder architectures.
DilatedFCN. DeepLab V2 [2,3] proposed to exploit dilated convolution in the backbone to learn a high-resolution feature map, which reduces the output stride from 32 to 8. However, the dilated convolution in the last two layers of the backbone adds huge extra computation and leaves a large memory footprint. Based
on the dilated convolution backbone, many works [5–7,26] continued to apply
different strategies as the segmentation heads to acquire the context-enhanced
feature maps. PSPNet [28] utilized the Spatial Pyramid Pooling (SPP) module
to increase the receptive field. EncNet [25] proposed an encoding layer to predict a feature re-weighting vector from the global context and selectively highlight class-dependent feature maps. CFNet [26] exploited an aggregated co-occurrent
feature (ACF) module to aggregate the co-occurrent context by the pair-wise
similarities in the feature space. Gated-SCNN [20] proposed to use a new gat-
ing mechanism to connect the intermediate layers and a new loss function that
exploits the duality between the tasks of semantic segmentation and semantic
boundary prediction. DANet [5] proposed to use two attention modules with
the self-attention mechanism to aggregate features from spatial and channel
dimensions respectively. ACNet [6] applied a dilated ResNet as the backbone
and combined the encoder-decoder strategy for the observation that the global
good annealing a piece should never be hotter in one part than in
another, and no part should be hotter than necessary, usually the
medium orange color. Annealing, then, is a slow process
comparatively, and sufficient time should be allowed.
There are many ways of annealing steel, and generally the plan
used is well adapted to the result desired; it is necessary, however,
to consider the end aimed at and to adopt means to accomplish it,
because a plan that is excellent in one case may be entirely
inefficient in another.
Probably the greatest amount of annealing is done in the
manufacture of wire, where many tons must be annealed daily.
For annealing wire sunken cylindrical pits built of fire-bricks are
used usually; the coils of wire are piled up in the cylinders, which are
then covered tightly, and heat is applied through flues surrounding
the cylinders, so that no flame comes in contact with the steel. For all
ordinary uses this method of annealing wire is quick, economical,
and satisfactory. The wire comes out with a heavy scale of oxide on
the surface; this is pickled off in hot acid, and the steel should then
be washed in limewater, then in clean water, and finally dried.
If it be desired to make drill-wire for drills, punches, graving-tools,
etc., this plan will not answer, because under the removable scale
there is left a thin film of decarbonized iron which cannot be pickled
off without ruining the steel, and which will not harden. It is plain that
this soft surface must be ruinous to steel intended for cutting-tools,
for it prevents the extreme edge from hardening—the very place that
must be hard if cutting is to be done.
Tools such as drills, lathe-tools, reamers, punches, etc., are usually annealed in iron boxes, with the spaces between the tools filled with charcoal; the box is then luted and heated in a furnace adapted to
the work. This is a satisfactory method generally, because the tools
are either ground or turned after annealing, removing any
decarbonized film that may be found; the charcoal usually takes up
all of the oxygen and prevents the formation of heavy scale and
decarbonized surfaces, but it does not do so entirely, and so for
annealing drill-wire this plan is not satisfactory. It is a common
practice in annealing in this way to continue the heating for many
hours, sometimes as many as thirty-six hours, in the mistaken notion
that long-continued heating produces greater softness, and some
people adhere to this plan in spite of remonstrances, because they
find that pieces so annealed will turn as easily as soft cast iron. This
last statement is true; the pieces may be turned in a lathe or cut in
any way as easily as soft cast iron, for the reason that that is, practically, exactly what they are. When steel is made properly, the carbon is
nearly all in a condition of complete solution; it is in the very best
condition to harden well and to be enduring.
When steel is heated above the recalescence-point into the
plastic condition, the carbon at once begins to separate out of
solution and into what is known as the graphitic condition. If it be
kept hot long enough, the carbon will practically all take the graphitic
form, and then the steel will not harden properly, and it will not hold
its temper. To illustrate: Let a piece of 90-carbon steel be hardened
and drawn to a light brown temper; it will be found to be almost file
hard, very strong, and capable of holding a fine, keen edge for a long
time.
Next let a part of the same bar be buried in charcoal in a box and
be closed up air-tight, then let it be heated to a medium orange, no
hotter, and be kept at that heat for twelve hours, a common practice,
and then cooled slowly. This piece will be easily cut, and it will
harden very hard, but when drawn to the same light brown as the
other tool a file will cut it easily; it will not hold its edge, and it will not
do good work.
Clearly in this case time and money have been spent merely in
spoiling good material. There is nothing to be gained, and there is
everything to be lost, in long-continued heating of any piece of steel
for any purpose. When it is hot enough, and hot through, get it away
from the fire as quickly as possible.
This method of box-annealing is not satisfactory when applied to
drill-wire, or to long thin strands intended for clock-springs, watch-
springs, etc.
The coils or strands do not come out even; they will be harder in
one part than in another; they will not take an even temper. When
hardened and tempered, some parts will be found to be just right,
and others will have a soft surface, or will not hold a good temper.
The reason of this seems to be a want of uniformity in the conditions:
the charcoal does not take up all of the oxygen before the steel is hot
enough to be attacked, and so a decarbonized surface is formed in
some parts; or it may be that some of the carbon dioxide which is
formed comes in contact with the surface of the steel and takes
another equivalent of carbon from it. Whatever the reaction may be,
the fact is that much soft surface is formed. This soft surface may not
be more than .001 of an inch thick, but that is enough to ruin a
watch-spring or a fine drill.
Again, it seems to be impossible to heat such boxes evenly; it is
manifest that it must take a considerable length of time to heat a
mass of charcoal up to the required temperature, and if the whole be
not so heated some of the steel will not be heated sufficiently; this
will show itself in the subsequent drawing of the wire or rolling of the
strands. On the other hand, if the whole mass be brought up to the
required heat, some of the steel will have come up to the heat
quickly, and will then have been subjected to that heat during the
balance of the operation, and in this way the carbon will be thrown
out of solution partly. This is proven by the fact that strands made in
this way and hardened and tempered by the continuous process will
be hard and soft at regular intervals, showing that one side of the coil
has been subjected to too much heat. This trouble is overcome by
open annealing, which will be described presently.
When steel is heated in an open furnace, there is always a scale
of oxide formed on the surface; this scale, being hard, and of the
nature of sand or of sandstone, grinds away the edges of cutting-
tools, so that, although the steel underneath may be soft and in good
cutting condition, this gritty surface is very objectionable. This trouble
is overcome by annealing in closed vessels; when charcoal is used,
the difficulties just mentioned in connection with wire- and strand-
annealing operate to some extent, although not so seriously,
because the steel is to be machined, removing the surface.
The Jones method of annealing in an atmosphere of gas is a
complete cure for these troubles.
Jones uses ordinary gas-pipes or welded tubes of sizes to suit
the class of work. One end of the tube is welded up solid; the other
end is reinforced by a band upon which a screw-thread is cut; a cap
is made to screw on this end when the tube is charged. A gas-pipe
of about ½-inch diameter is screwed into the solid end, and a hole of
¹/₁₆- to ⅛-inch diameter is drilled in the cap.
When the tube is charged and the cap is screwed on, a hose
connected with a gas-main is attached to the piece of gas-pipe in the
solid end of the tube; the gas-pipe is long enough to project out of
the end of the furnace a foot or so through a slot made in the end of
the furnace for that purpose.
The gas is now turned on and a flame is held near the hole in the
cap until the escaping gas ignites; this shows that the air is driven
out and replaced by gas.
The pipe is now rolled into the furnace and the door is closed, the
gas continuing to flow through the pipe. By keeping the pipe down to
a proper annealing-heat it is manifest that the steel will not be any
hotter than the pipe. By heating the pipe evenly by rolling it over
occasionally the steel will be heated evenly. A little experience will
teach the operator how long it takes to heat through a given size of
pipe and its contents, so that he need not expose his steel to heat
any longer than necessary.
There is not a great quantity of gas consumed in the operation,
because the expanding gas in the tube makes a back pressure, the
vent in the cap being small. This seems to be the perfection of
annealing. A tube containing a bushel or more of bright, polished
tacks will deliver them all perfectly bright and as ductile as lead,
showing that there is no oxidation whatever. Experiments with drill-
rods, with the use of natural gas, have shown that they can be
annealed in this way, leaving the surface perfectly bright, and
thoroughly hard when quenched. This Jones process is patented.
Although the Jones process is so perfect, and indispensable for
bright surfaces, its detail need not be followed when a tarnished
surface is not objectionable.
The charcoal difficulty can be overcome also. Let a pipe be made
like a Jones pipe without a hole in the cap or a gas-pipe in the end.
To charge it first throw a handful of resin into the bottom of the pipe,
then put in the steel, then another handful of resin near the open
end, and screw on the cap. The cap is a loose fit. Now roll the whole
into the furnace; the resin will be volatilized at once, fill the pipe with
carbon or hydrocarbon gases, and unite with the air long before the
steel is hot enough to be attacked.
The gas will cause an outward pressure, and may be seen
burning as it leaks through the joint at the cap. This prevents air from
coming in contact with the steel. This method is as efficient as the
Jones plan as far as perfect heating and easy management are
concerned. It reduces the scale on the surfaces of the pieces,
leaving them a dark gray color and covered with fine carbon or soot.
For annealing blocks or bars it is handier and cheaper than the
Jones plan, but it will not do for polished surfaces. This method is not
patented.
OPEN ANNEALING.
Open annealing, or annealing without boxes or pipes, is practised
wherever there are comparatively few pieces to anneal and where a
regular annealing-plant would not pay, or in a specially arranged
annealing-furnace where drill-wire, clock-spring steel, etc., are to be
annealed.
For ordinary work a blacksmith has near his fire a box of dry lime
or of powdered charcoal. He brings his piece up to the right heat and
buries it in the box, where it may cool slowly. In annealing in this way
it is well not to use blast, because it is liable to force all edges up to
too high a heat and to make a very heavy scale all over the surface.
With a little common-sense and by the use of a little care this way of
annealing is admirable.
It is a common practice where there is a furnace in use in
daytime and allowed to go cold at night to charge the furnace in the
evening, after the fire is drawn, with steel to be annealed, close the
doors and damper, and leave the whole until morning. The furnace
does not look too hot when it is closed up, but no one knows how hot
it will make the steel by radiation: the steel is almost always made
too hot; it is kept hot too long, and so converted into cast iron; and
there is an excessively heavy scale on it.
Many thousands of dollars’ worth of good steel are ruined
annually in this way; it is in every way about the worst method of
annealing that was ever devised.
To anneal wire or thin strands in an open furnace the furnace
should be built with vertical walls about two feet high and then
arched to a half circle. The inports for flame should be vertical and
open into the furnace at the top of the vertical wall; the outports for
the gases of combustion should be vertical and at the same level as
the inports and on the opposite side of the furnace from the inports.
These outflues may be carried under the floor of the furnace to keep
it hot.
The bottom of the door should be at the level of the ports to keep
indraught air away from the steel. The annealing-pot is then the
whole size of the furnace—two feet deep—and closed all around.
The draught should be regulated so that the flame will pass
around the roof, or so nearly so as never to touch the steel, not even
in momentary eddies.
In such a furnace clock-spring wire not more than .01 inch in
diameter, or clock-spring strands not more than .006 to .008 inch
thick and several hundred feet long, may be annealed perfectly. The
steel is scaled of course, but the operation is so quick and so
complete that there is no decarbonized surface under the scale.
This plan is better than the Jones method or any closed method,
because the big boxes necessary to hold the strands or coils cannot
be heated up without in some parts overheating the steel; all of
which is avoided in the open furnace, because by means of peep-
holes the operator can see what he is about, and after a little
practice he can anneal large quantities of steel uniformly and
efficiently.
VIII.
HARDENING AND TEMPERING.
AS TO HARDNESS.
Prof. J. W. Langley demonstrated by specific-gravity determinations
that steel quenched from 212° F. in water at 60° F. showed the
hardening effect of such quenching, the difference of temperature
being only 152° F.
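The arithmetic of the experiment is easily checked; the sketch below restates it in modern form, the only given quantities being the two temperatures from the text:

```python
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

quench_from = 212.0  # boiling water, deg F (given in the text)
bath = 60.0          # quenching water, deg F (given in the text)

print(quench_from - bath)          # 152.0 deg F, as stated
print(round(f_to_c(quench_from)))  # 100 deg C
print(round(f_to_c(bath), 1))      # 15.6 deg C
```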
Prof. S. P. Langley, of the Smithsonian, proved the same to be
true by delicate electrical tests, and these again were confirmed by
Prof. J. W. Langley in the laboratory of the Case School of Applied Science.
A piece of refined steel will rarely be hard enough to scratch
glass. A piece of steel quenched from creamy heat will almost
always scratch glass. The maximum hardness is produced by the
highest heat, or when temperature minus cold is a maximum; the
least hardness is found by quenching at the lowest heat above the
cooling medium, or when temperature minus cold is a minimum—the
time required to quench being a minimum in both cases.
What occurs between these limits? Is the curve of hardness a
straight line, or an irregular line?
Let a piece of steel be heated as uniformly as possible from a
creamy heat at one end to black at the other, and then be quenched.
Now take a newly broken hard file and draw its sharp corner
gently and firmly over the piece, beginning at the black-heated end.
The file will take hold, and as it is drawn along it will be felt that the
piece becomes slightly harder as the file advances, until suddenly it
will slip, and no amount of pressure will make it take hold above that
point. The piece has become suddenly file hard.
Next try the same thing with a diamond; the diamond will cut
easily until the point is reached where the file slipped, then there will
be found a great increase of hardness.
From this point on it is observed readily, by the action of the
diamond, that there is a gradual increase of hardness from the hump
to the creamy-heated end of the piece. Attempts were made to
measure this curve of hardness
by putting a load on the diamond and dragging it over the piece; but
no diamond obtainable would bear a load heavy enough to produce
a groove that could be measured accurately by micrometer. An
examination of such a groove through a strong magnifying-glass
revealed the conditions plainly; the curve of hardness may be
illustrated on an exaggerated scale; thus:
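The profile described above, a gentle rise, a sudden jump at the refining-point, then a gradual rise toward the creamy-heated end, can be sketched as a simple piecewise model; the numerical values here are purely illustrative assumptions, not measurements:

```python
RECALESCENCE_C = 655.0  # the refining-point, deg C (given in the text)

def hardness(quench_temp_c):
    """Illustrative hardness along the gradient-heated bar (arbitrary scale).

    Below the recalescence point hardness rises gently (the file still
    cuts); at that point it jumps suddenly (the 'hump' where the file
    slips), then rises gradually toward the creamy-heated end.
    """
    if quench_temp_c < RECALESCENCE_C:
        return 20.0 + 0.02 * quench_temp_c
    return 60.0 + 0.01 * (quench_temp_c - RECALESCENCE_C)

print(hardness(654.0))  # just below the hump
print(hardness(656.0))  # just above it: a sudden increase
```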
The next question was, Where does this hump occur, and what is
the cause of it?
Careful observation showed that it occurred at the point of
recalescence, at the refining-point. This word point must not be
taken as space without dimension in this connection; it is used in the
common sense of at or adjacent to a given place. There is of course
a small allowable range of temperature above any given exact point
of recalescence, such as 655° C. or 1211° F.
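The equivalence of the two figures quoted is a matter of simple conversion:

```python
def c_to_f(deg_c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return deg_c * 9.0 / 5.0 + 32.0

print(c_to_f(655.0))  # 1211.0 -- the two figures in the text agree
```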
By superimposing Langley’s curves of cooling and of hardening
(see Trans. Am. Soc. Civ. Eng., Vol. XXVII, p. 403), the relation
between recalescence and the hardening-hump is obvious.
It is safe to say that experience proves that the refined condition
is the best for all cutting-tools of every shape and form.
The reason seems obvious: the steel is then in its strongest
condition; and when the grain is finest and the crystals are smallest,
a fine edge should be the most enduring, because there is a more
intimate contact between the particles. That a steel will refine well,
and be strong in that condition, is the steel-maker’s final test of quality.
No steel-maker who has a proper regard for the character of his
product will accept raw material upon mere analysis; yet analysis is of
the utmost importance, for material for steel-making must be of a
quality that will produce a certain quality of steel, or the result will be
an inferior product. This applies to acid Bessemer and open-hearth,
and to crucible-steel especially; the basic processes admit of a
reduction of phosphorus not obtainable in the others.
In making fine-tool steel a bad charge in the pot inevitably means
a bad piece of steel. It may happen also that an iron of apparently
good analysis will not produce a really fine steel; then there must be
a search for unusual elements, such as copper, arsenic, antimony,
etc., or for dirt, left in the iron by careless working. The refining-test
then is as necessary as analysis, for if steel will not refine thoroughly
it will not make good tools. Battering-tools, such as sledges,
hammers, flatters, etc., should be refined carefully, for although their
work is mainly compressive they are liable to receive, and do get,
blows on the corners and edges that would ruin them if they were not
in the strongest condition possible.
The reasons for refining hot-working tools have been stated
already. Engraved dies for use in drop-presses where they are
subjected to heavy blows are undoubtedly in the most durable
condition when they are refined, but they are subjected not only to
impact, but to enormous compression, and therefore they must be
hardened deeply. When a die-block is heated so as to refine, and
then is quenched, it hardens perfectly on the surface and not very
deeply, and it is quite common in such a case to see a die crushed
by a few blows: the hardened part is driven bodily into the soft steel
below it, and the die is ruined; thus: