Hardness of Approximation Between P and NP, 1st Edition, by Aviad Rubinstein
Hardness of Approximation
Between P and NP
ACM Books
Editor in Chief
M. Tamer Özsu, University of Waterloo
ACM Books is a series of high-quality books for the computer science community, published
by ACM in collaboration with Morgan & Claypool Publishers. ACM Books
publications are widely distributed in both print and digital formats through booksellers
and to libraries (and library consortia) and individual ACM members via the ACM Digital
Library platform.
Aviad Rubinstein
Stanford University
All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means—electronic, mechanical, photocopy,
recording, or any other except for brief quotations in printed reviews—without the prior
permission of the publisher.
Designations used by companies to distinguish their products are often claimed as trade-
marks or registered trademarks. In all instances in which the Association for Computing
Machinery is aware of a claim, the product names appear in initial capital or all capital
letters. Readers, however, should contact the appropriate companies for more complete
information regarding trademarks and registration.
This book was typeset in Arnhem Pro 10/14 and Flama using ZzTEX.
First Edition
10 9 8 7 6 5 4 3 2 1
Contents
Preface
Acknowledgments

PART I OVERVIEW

Chapter 1 The Frontier of Intractability
1.1 PPAD: Finding a Needle You Know Is in the Haystack
1.2 Quasi-Polynomial Time and the Birthday Paradox
1.3 Approximate Nash Equilibrium

Chapter 2 Preliminaries
Notation
2.1 Nash Equilibrium and Relaxations
2.2 PPAD and End-of-a-Line
2.3 Exponential Time Hypotheses
2.4 PCP Theorems
2.5 Learning Theory
2.6 Information Theory
2.7 Useful Lemmata

3.5 Proofs
3.6 An Open Problem: Correlated Equilibria in 2-Player Games

References
Index
Author Biography
Preface
This book is based on my Ph.D. thesis (by the same title), which I wrote at the Uni-
versity of California, Berkeley, with the wise guidance of Christos Papadimitriou.
In hindsight, it seems only natural that I ended up working on the complexity of
Nash equilibrium, a problem that has been so closely associated with Christos for
three decades. But the true story of how I came to work on equilibrium computa-
tion is completely different. During my first winter break at Berkeley, Abe Othman
contacted me[1] and asked about the complexity of finding an Approximate Competitive
Equilibrium from Equal Incomes (A-CEEI). At the time, Abe was preparing for
the launch of Course Match, a system that implements a heuristic algorithm for
A-CEEI to assign students to courses at the Wharton School (University of Pennsyl-
vania). Chapter 10, based on joint work with Abe and Christos, is devoted to the
complexity of A-CEEI.
A-CEEI, like the rest of the problems in Part III, belongs to a special class
(PPAD) of intractable problems that are total: their solution always exists, but it
is hard to find it. Given our current understanding of computational complexity,
this implies that they are unlikely to be NP-hard. In Part IV we move on to a different
category of intractable problems: those that can be solved in quasi-polynomial
(n^log n) time; this is typically prohibitively slow even on moderate-sized instances,
but it is much faster than what we know or conjecture for problems
like Satisfiability. This suggests that these problems are also not NP-hard. In both
cases, proving intractability for problems that are not NP-hard requires both the
right conditional assumptions (P ≠ PPAD and the “Exponential Time Hypothesis”),
as well as special techniques.
The most difficult result in this book, for approximate Nash equilibrium (Part V),
lies in the intersection of the PPAD problems from Part III and the quasi-polynomial
time problems from Part IV. It combines ideas from both parts, together with
new and old techniques from across theoretical computer science. It resolves an
outstanding open problem that, as a student, I didn’t really mean to solve. Well . . .
I’m glad I tried!
Acknowledgments
This book is based on my Ph.D. dissertation [Rubinstein 2017c], completed at
University of California, Berkeley, with the wise advice of Christos Papadimitriou.
My dissertation, in turn, was based on the following publications:
The writing of this book was supported in part by the MSR Ph.D. Fellowship (and
wonderful MSR summer internships).
This has been an amazing journey. I am most grateful for the opportunities I
had to share it with my family and friends.
Aviad Rubinstein
Cambridge, MA
March 2019
PART I
OVERVIEW

Chapter 1
The Frontier of Intractability
After many attempts by economists for more than six decades[1] (e.g., [Brown
1951, Foster and Young 2006, Hart and Mas-Colell 2003, Kalai and Lehrer 1993,
Lemke and Howson 1964, Robinson 1951, Scarf 1967, Shapley 1964]), we still don’t
have a satisfying explanation.
About ten years ago, the study of this fundamental question found a surprising
answer in computer science: Chen et al. [2009b] and Daskalakis et al. [2009a]
proved that finding a Nash equilibrium is computationally intractable.[2] And if no
centralized, specialized algorithm can find an equilibrium, it is even less likely that
distributed, selfish agents will naturally converge to one. This casts doubt over the
entire solution concept.
For the past decade, the main remaining hope for Nash equilibrium has been
approximation. The central open question in this field has been:
We give a strong negative resolution to this question: our main result rules out
efficient approximation algorithms for finding Nash equilibria.[3]
Our theorem is the latest development in a long and intriguing technical story.
The first question we computer scientists ask when encountering a new algorithmic
challenge is: is it in P, the class of polynomial time (tractable) problems; or is it
NP-hard, like Satisfiability and the Traveling Salesperson Problem (where the best
known algorithms require exponential time)? Approximate Nash equilibrium falls
into neither category; its complexity lies between P and NP-hard—hence the title
of our book. Let us introduce two (completely orthogonal) reasons why we do not
expect it to be NP-hard.
The first obstacle for NP-hardness is the totality of Nash equilibrium. When
we say that Satisfiability or the Traveling Salesperson Problems are NP-hard, we
formally mean that it is NP-hard to determine whether a given formula has a satis-
fying assignment, or whether a given network allows the salesperson to complete
her travels within a certain budget. In contrast, by Nash’s theorem an equilibrium
always exists. Deciding whether an equilibrium exists is trivial, but we still don’t
know how to find one. This is formally captured by an intermediate complexity
class called PPAD.
1. In fact, “more than six decades” is an understatement: Irving Fisher’s thesis from 1891 dealt
with the closely related question of convergence to market equilibria [Brainard and Scarf 2000].
2. Assuming P ≠ PPAD; see the discussion in the next section about this assumption.
3. Under a complexity assumption stronger than P ≠ PPAD that we call the Exponential Time
Hypothesis (ETH) for PPAD; see Section 1.2 for details.
The second obstacle for NP-hardness is an algorithm (the worst kind of obstacle
for intractability). The best known algorithms for solving NP-hard or PPAD-hard
problems require exponential (≈ 2^n) time, and there is a common belief (formulated
a few years ago as the “Exponential Time Hypothesis” [Impagliazzo et al.
2001]) that much faster algorithms do not exist. In contrast, an approximate Nash
equilibrium can be found in quasi-polynomial (≈ n^log n) time. Notice that this again
places the complexity of approximate Nash equilibrium between the polynomial
time solvable problems in P and the exponential time required by NP-hard prob-
lems. Therefore, approximate Nash equilibrium is unlikely to be NP-hard—or even
PPAD-hard, since we know of no quasi-polynomial algorithm for any other PPAD-
hard problem.
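To see concretely where quasi-polynomial time sits between these regimes, it helps to plug in numbers. The following quick computation is our own illustration, not from the source:

```python
import math

n = 64
poly = n ** 3                      # polynomial: n^3
quasi = n ** round(math.log2(n))   # quasi-polynomial: n^(log n)
expo = 2 ** n                      # exponential: 2^n

# For n = 64: n^3 = 262,144; n^(log2 n) = 64^6, about 6.9 * 10^10;
# 2^64 is about 1.8 * 10^19. Quasi-polynomial time is hopeless in
# practice but vastly faster than the exponential time conjectured
# necessary for NP-hard problems.
assert poly < quasi < expo
```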
As illustrated in the last two paragraphs, approximate Nash equilibrium is very
far from our standard notion of intractability, NP-hardness. In some sense, it is one
of the computationally easiest problems that we can still prove to be intractable—
it lies at the frontier of our understanding of intractability. Unsurprisingly, the tech-
niques we had to master to prove the intractability of approximate Nash equilib-
rium are useful for many other problems. In particular, we also prove hardness of
approximation for several other interesting problems that either belong to the class
PPAD or have quasi-polynomial time algorithms.
4. This guarantee is actually without loss of generality [Papadimitriou 1994], but this is not so
important for our purposes.
S and P that return, for each vertex, its successor and predecessor in the graph? The
amount of work[5] required for finding another odd-degree vertex by the exhaustive
algorithm, or the path-following algorithm, is still linear in the number of vertices,
but now this number is exponential in the natural parameter of the problem, the
length of the input to the oracle, which equals the logarithm of the number of nodes.
In the oracle model, it is not hard to prove that there are no better algorithms.
Let us consider the computational analog of the black-box oracle model: the
oracles S and P are implemented as explicit (“white-box”) circuits, which are given
as the input to the algorithm. This is the End-of-a-Line problem, which is the
starting point for most of our reductions. The computational class PPAD (which
stands for Polynomial Parity Argument in Directed graphs) is the class of search
problems reducible to End-of-a-Line.
Definition 1.1 End-of-a-Line [Daskalakis et al. 2009a]. Given two circuits S and P, with m input
bits and m output bits each, such that P(0^m) = 0^m ≠ S(0^m), find an input x ∈ {0, 1}^m
such that P(S(x)) ≠ x or S(P(x)) ≠ x ≠ 0^m.
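The exhaustive search implicit in this definition can be sketched directly. The solver below is a brute-force illustration (the toy instance and all names are ours, not from the source); it makes 2^m oracle calls, which is exactly the exponential behavior the surrounding discussion argues cannot be avoided:

```python
from itertools import product

def end_of_line_solutions(S, P, m):
    """Brute-force search for End-of-a-Line solutions.

    S, P map m-bit tuples to m-bit tuples, with 0^m a known source
    (P(0^m) = 0^m != S(0^m)). A solution is any x with P(S(x)) != x
    (the end of a line) or S(P(x)) != x != 0^m (the start of another).
    Exhaustive search makes 2^m oracle calls.
    """
    zero = (0,) * m
    for x in product((0, 1), repeat=m):
        if P(S(x)) != x or (S(P(x)) != x and x != zero):
            yield x

# Toy instance (ours, for illustration): one directed path
# 00 -> 01 -> 11, with 10 an isolated vertex that is its own
# successor and predecessor, hence not a solution.
succ = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 1), (1, 0): (1, 0)}
pred = {(0, 0): (0, 0), (0, 1): (0, 0), (1, 1): (0, 1), (1, 0): (1, 0)}
sols = list(end_of_line_solutions(succ.get, pred.get, 2))
```

On this toy instance the only solution is (1, 1), the end of the path; the known source 0^m and the self-loop vertex are correctly excluded.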
5. Here work is measured by the number of oracle calls rather than running time; indeed this
model will be the starting point of our reductions for query and communication complexity.
6. There is also a stricter notion that requires x to be close to an x∗ for which f(x∗) = x∗ exactly.
See, e.g., Etessami and Yannakakis [2010].
the input is given as an arithmetic circuit: this gives a local and easy-to-verify guarantee
that the function is indeed continuous.
Admittedly, there is something dissatisfying about proving computational hardness
of “circuit problems”: we are going to prove that this problem is PPAD-hard,
i.e., no easier than End-of-a-Line; but the reason we believe in the first place
that End-of-a-Line is intractable is that we don't know how to gain insight into the
functionality of the circuit by staring at its structure. Nevertheless, we begin with
the complexity of Brouwer’s fixed point because: this is a fundamental question; it
is the starting point of many of our reductions; and, as we discuss later, even the
black-box hardness in this case is highly non-trivial.
We know of two algorithms for computing an approximate Brouwer fixed point:
there is Scarf’s algorithm [Scarf 1967], but its running time may be exponential in
the description of the function; the same holds for brute-force search.
On the complexity side, Daskalakis et al. [2009a] proved that even in three
dimensions, finding an exponentially close approximation (an x such that
‖f(x) − x‖∞ ≤ 2^−n, where n is the size of the input) is PPAD-complete; Chen and Deng [2009]
proved that the same problem remains hard even with only two dimensions.
Notice that with a constant number of dimensions, any weaker approximation
desideratum can be solved by brute-force search. Chen et al. [2009b] showed that
in n dimensions, polynomial approximations are also PPAD-complete. We will later
prove that even constant approximations are PPAD-complete, i.e., there exists some
absolute constant ε > 0 such that it is hard to find an x for which ‖f(x) − x‖∞ ≤ ε;
this is already a significant improvement over the existing state of the art.
We can prove an even stronger form of inapproximability for finding a Brouwer
fixed point: so far, we characterized approximate fixed points via the ℓ∞-norm,
which means that f(x) and x must be close in every coordinate. Our main result in
this regard (Theorem 4.2) is that even if we only require that f(x) and x be close
in most of the coordinates, finding such an x remains PPAD-complete.
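The brute-force algorithm mentioned above can be sketched for the ℓ∞ notion of approximate fixed point. This is our own illustrative sketch (assuming, for the grid-spacing argument, that f is 1-Lipschitz):

```python
import itertools

def approx_fixed_point(f, d, eps):
    """Exhaustive grid search for an eps-approximate Brouwer fixed
    point of f : [0,1]^d -> [0,1]^d in the infinity norm.

    Brouwer's theorem guarantees an exact fixed point exists, so if f
    is 1-Lipschitz, a grid of spacing eps/2 must contain a point with
    ||f(x) - x||_inf <= eps. The grid has roughly (2/eps)^d points:
    polynomial for constant d, exponential in general, which is why
    brute force only settles the constant-dimension case.
    """
    step = eps / 2.0
    ticks = [i * step for i in range(int(round(1.0 / step)) + 1)]
    for x in itertools.product(ticks, repeat=d):
        fx = f(x)
        if max(abs(fx[i] - x[i]) for i in range(d)) <= eps:
            return x
    return None

# Toy 2-D example: a contraction toward (0.5, 0.5), whose exact
# fixed point is (0.5, 0.5) itself.
g = lambda x: tuple(0.5 + 0.5 * (xi - 0.5) for xi in x)
x = approx_fixed_point(g, 2, 0.1)
```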
The supply and demand argument has many weak spots, one of which is Giffen
goods [Marshall 1895]: Consider a poor 19th-century English family consuming
10,000 calories a day from meat and bread. They prefer to eat meat, but the bread
is cheaper—so they spend their daily income on 6 loaves of bread and 4 pounds of
meat (each provides 1,000 calories). What happens if the price of bread increases?
They have less free money to spend on the preferred luxury good (meat), which
increases their demand for the Giffen good (bread). The existence of Giffen goods in
practice has been contentiously debated for over a century [Edgeworth 1909, Stigler
1947], with evidence ranging from econometric analysis of historical market data
to experiments in lab rats [Battalio et al. 1991].
Despite counterintuitive issues like Giffen goods, Arrow and Debreu proved
that, under quite general conditions, a market equilibrium always exists [Arrow
and Debreu 1954]. In particular, in the Arrow-Debreu model, agents sell the goods
they own and use the revenue to buy other goods they want; this is in contrast to
Fisher markets, where agents with a monetary budget purchase from a centralized
seller. Market equilibria in both models exist, but can we find them? As in the
case of Nash equilibrium, this question is of particular importance because if a
centralized, omniscient algorithm cannot compute an equilibrium, it is hard to
expect a distributed market with selfish agents to converge to one. In the words of
Kamal Jain [Jain 2004, Nisan 2009b]:
In the same paper, Jain also gave an algorithm for computing Arrow-Debreu’s equi-
librium when consumers’ utilities are linear. Since then, there has been a long
sequence of algorithms for computing or approximating market equilibria (e.g.,
[Birnbaum et al. 2011, Codenotti et al. 2005, Cole and Fleischer 2008, Devanur et al.
2008, Garg et al. 2015, Garg and Kapoor 2006, Jain 2004, Jain and Vazirani, 2010]),
for certain models and utility functions, as well as intractability results in other
settings [Chen et al. 2009a, Deng and Du 2008, Garg et al. 2017, Vazirani and Yan-
nakakis 2011]. In particular, Chen et al. [2013] consider a setting of markets that
exhibit non-monotonicity: when the price of one product increases, its demand in-
creases. Giffen goods, described above, are one example of non-monotone markets;
Chen et al. [2013] construct examples in Arrow-Debreu markets when the increased
price of a product increases the revenue of the seller, who now has more available
money and demands more of the same good. Chen et al. show that for essentially
any class of utility function that exhibits some non-monotonicity, computing the
Arrow-Debreu market equilibrium is PPAD-hard.
We extend the PPAD-hardness proof of Chen et al. [2013] and show that even ap-
proximate market equilibrium is PPAD-hard (Theorem 9.1). We note that although
our inapproximability factor is stronger than that shown by Chen et al., the results
are incomparable as ours only holds for the stronger notion of “tight” approxi-
mate equilibrium, by which we mean the more standard definition that bounds
the two-sided error of the market equilibrium. Chen et al., in contrast, prove that
even if we allow arbitrary excess supply, finding a (1/n)-approximate equilibrium
is PPAD-hard. Furthermore, for the interesting case of constant elasticity of sub-
stitution (CES) utilities with parameter ρ < 0, they show that there exist markets
where every (1/2)-tight equilibrium requires prices that are doubly exponentially
large (and thus require an exponential-size representation). Indeed, for a general
non-monotone family of utility functions, the problem of computing a (tight or
not) approximate equilibrium may not belong to PPAD. Nevertheless, the impor-
tant family of additively separable, concave piecewise-linear utilities is known to
satisfy the non-monotone condition [Chen et al. 2013], and yet the computation of
(exact) market equilibrium is in PPAD [Vazirani and Yannakakis 2011]. Therefore,
we obtain as a corollary that computing an ε-tight approximate equilibrium for
Arrow-Debreu markets with additively separable, concave piecewise-linear utilities
is PPAD-complete.
Competitive Equilibrium from Equal Incomes (CEEI) [Foley 1967, Thomson and
Varian 1985, Varian 1974] is a venerable mechanism with many attractive prop-
erties: In CEEI all agents are allocated the same amount of “funny money”; next
they declare their preferences, and then a price equilibrium is found that clears
the market. The market clearing guarantees Pareto efficiency and feasibility. The
mechanism has a strong, albeit technical, ex post fairness guarantee that emerges
from the notion that agents who miss out on a valuable, competitive item will
have extra funny money to spend on other items at equilibrium. Truthfulness is
problematic—as usual with market mechanisms—but potential incentives for any
individual agent to deviate are mitigated by the large number of agents. However,
CEEI only works when the resources to be allocated are divisible and the utilities
are relatively benign. This restriction has both benefits and drawbacks. It ensures
computational feasibility, because CEEI can be computed in polynomial time with
a linear or convex program, depending on the utilities involved [Devanur et al. 2008,
Ghodsi et al. 2011, Varian 1974]; on the other hand, it is easy to construct exam-
ples in which a CEEI does not exist when preferences are complex or the resources
being allocated are not divisible. Indeed, both issues arise in practice in a variety
of allocation problems, including shifts to workers, landing slots to airplanes, and
the setting that we focus on here, courses to students [Budish 2011, Varian 1974].
It was shown in Budish [2011] that an approximation to a CEEI solution, called
A-CEEI, exists even when the resources are indivisible and agent preferences are
arbitrarily complex, as required by the course allocation problems one sees in
practice. The approximate solution guaranteed to exist is approximately fair (in
that the students are given almost the same budget), and approximately Pareto
efficient and feasible (in that all courses are filled close to capacity, with the possible
exception of courses with more capacity than popularity). This result seems to
be wonderful news for the course allocation problem. However, there is a catch:
Budish’s proof is non-constructive, as it relies on Kakutani’s fixed point theorem.
A heuristic search algorithm for solving A-CEEI was introduced in Othman et al.
[2010]. The algorithm resembles a traditional tâtonnement process, in which the
prices of courses that are oversubscribed are increased and the prices of courses
that are undersubscribed are decreased. A modified version of this algorithm that
guarantees courses are not oversubscribed is currently used by the Wharton School
(University of Pennsylvania) to assign their MBA students to courses [Budish et al.
2014]. While it has been documented that the heuristic algorithm often produces
much tighter approximations than the theoretical bound, on some instances it fails
to find even the guaranteed approximation [Budish 2011, Section 9].
Thus A-CEEI is a problem where practical interest motivates theoretical inquiry.
We have a theorem that guarantees the existence of an approximate equilibrium—
the issue is finding it. Can the heuristic algorithms currently used to assign
Wharton MBAs to their courses be replaced by a fast and rigorous algorithm for
finding an approximate CEEI? Theorem 10.3 answers this question in the negative,
showing that computing an A-CEEI is PPAD-complete.
For all the applications we consider, once we fix some y∗, finding the best feasible x
that maximizes x⊤Ay∗ is a tractable problem. (Similarly, if we were given a good x∗,
finding a matching y is easy.) But optimizing x and y simultaneously is NP-hard.
What about approximations?
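The claim that fixing one side makes the problem tractable can be made concrete: when y∗ is fixed and x ranges over the probability simplex, the maximum of x⊤Ay∗ is attained at a vertex of the simplex. A minimal sketch with plain Python lists (the function name and instance are ours, for illustration):

```python
def best_response(A, y):
    """With y fixed, maximizing x^T A y over the probability simplex
    is easy: the maximum sits at a vertex, i.e., we put all of x's
    weight on the row i with the largest entry of the vector Ay."""
    Ay = [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(A))]
    i = max(range(len(Ay)), key=Ay.__getitem__)
    x = [0.0] * len(A)
    x[i] = 1.0
    return x, Ay[i]

# Illustrative 2x2 instance: against y* = (0.5, 0.5), the second row
# of A yields the larger payoff, so the best response is x = (0, 1).
x_star, value = best_response([[1.0, 0.0], [0.0, 2.0]], [0.5, 0.5])
```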
Caratheodory’s theorem states that a point v in the convex hull of n points
in R^d can be written as a convex combination of d + 1 points. In general, d + 1
points are necessary, but this number can be reduced drastically if we are willing
to settle for approximation. In particular, Barman [2015][7] proves an approximate
Caratheodory’s theorem that requires only r = O(p/ε^2) points to express a point v̂
such that ‖v̂ − v‖_p < ε, assuming the n points belong to a unit ℓ_p-ball. In particular,
v̂ can be written as an average over a multi-set of r out of the n points.
Viewing the columns of A as n vectors in R^n, Barman observes that (after proper
normalization) the point v = Ay∗ is in their convex hull. If we only want to
approximately solve the bilinear optimization (1.1), we can drastically reduce the
dimension of the problem by enumerating over all choices of v̂, and for each v̂
solving the now-tractable optimization problem against v̂. It turns out that in order
to guarantee a good additive approximation of (1.1), we need to set p ≈ log n. The
dominant term in the running time of this meta-algorithm is the enumeration over
all choices of v̂ (all multi-sets of r of the columns of A), which takes approximately
n^r = n^(O(log n / ε^2)), i.e., quasi-polynomial time.
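Barman's theorem is proved probabilistically, and the same argument yields a concrete procedure: sample r indices independently from the convex combination y∗ and average the corresponding points. A minimal sketch for the ℓ2 case (p = 2); the function name and tolerances below are ours, for illustration:

```python
import random

def approx_caratheodory(points, weights, r, rng=random.Random(0)):
    """Approximate Caratheodory by sampling.

    points: n vectors (lists of floats) in the unit l2-ball;
    weights: a convex combination summing to 1. Returns the average
    of a multi-set of r of the points; by a standard variance
    argument its l2-distance to v = sum_i weights[i] * points[i] is
    O(1/sqrt(r)) in expectation, matching r = O(p/eps^2) for p = 2.
    """
    d = len(points[0])
    sample = rng.choices(points, weights=weights, k=r)
    return [sum(pt[j] for pt in sample) / r for j in range(d)]

# Toy check: v = 0.5*e1 + 0.5*e2. With r = 400 samples the estimate
# typically lands within ~0.1 of v in l2 norm.
pts = [[1.0, 0.0], [0.0, 1.0]]
v_hat = approx_caratheodory(pts, [0.5, 0.5], 400)
```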
The above meta-algorithm provides evidence that the approximate variant of all
those problems is much easier than solving them exactly: in particular we believe
that NP-hard (respectively, PPAD-hard) problems like 3-SAT (resp. End-of-a-Line)
7. Both the approximate Caratheodory’s theorem and the resulting algorithm have been described
by many researchers (e.g., [Arora et al. 2012, Lipton et al. 2003]); however, the presentation in
Barman [2015] is our favorite.
lead encephalitis. Bromides and iodides should be given, and the
patient placed in quiet surroundings, and fed on light, nutritious diet,
and every attempt made to produce elimination of the poison.
In the acute attacks vaso-motor spasm is no doubt partially
accountable for the symptoms, and various dilators, previously noted
in discussing colic, may be made use of, such, for instance, as amyl
nitrite, scopolamine, etc., whilst pyramidon, antipyrin, phenacetin,
and other similar drugs may be given between the attacks. Under no
circumstances should any person who has suffered from
encephalitis or other cerebral symptom of lead poisoning be allowed
to resume work in a lead industry.
The treatment of eye affections in lead poisoning requires little
comment, as the essential treatment must be the same as in other
cases, mainly devoted towards the elimination of the poison.
Attempts may be made to treat paresis of the ocular muscles by
means of mild electric currents, but of this we have had no
experience. About 50 per cent. of cases of lead amaurosis and
amblyopia recover, but a number progress to total and permanent
blindness, and prognosis in such cases must always be guarded.
Prognosis.—The prognosis of the first attacks of lead poisoning
of simple colic or even slight unilateral paresis is good; practically all
cases recover under proper treatment. It is unusual for a person to
succumb to a first attack of simple colic, or paresis.
In most cases the serious forms of poisoning only make their
appearance after three or four previous attacks of colic, but a single
attack of paresis is much more frequently followed by a severe form
of poisoning, such as encephalitis.
A limited number of persons are highly susceptible to lead
poisoning, and these persons rapidly show their susceptibility when
working in a dangerous lead process. Lead poisoning occurring in an
alcoholic subject is more likely to result in paretic and mental
symptoms than in a person who is not addicted to alcohol, and the
prognosis of lead poisoning in an alcoholic is much less favourable
than in the case of a normal person.
Mental symptoms very rarely follow from a single attack of lead
colic, and as a rule do not become established under three or four
attacks at least.
A small number of persons exposed to excessive doses of lead
absorption through the lungs develop mental symptoms, such as
acute encephalitis, without any prodromal stage. The prognosis in
such cases is always exceedingly grave.
Sudden generalized forms of paralysis are not common in the
early stages, but are invariably of grave import. A few cases of
paresis, particularly those of the peroneal type, and affecting the
lower limbs, become progressive, and eventually develop into a
condition resembling progressive muscular atrophy with spinal cord
degeneration.
The prognosis of simple colic in women is about as good as for
males, but if an attack of abortion is associated with lead poisoning,
eclampsia often supervenes and permanent mental derangement
may follow. In the dementia associated with lead poisoning the
prognosis is not so grave as in other forms of dementia, especially
alcoholic, but depression is an unfavourable symptom. The mania of
lead poisoning is not so noisy as that of alcoholic mania, but where
there is suspicion of alcoholic as well as lead poisoning the
prognosis is exceedingly grave.
As a rule the prognosis of cases of lead poisoning occurring in
industrial conditions is more favourable when colic is a marked
feature than when it is absent, and there is no doubt that the
prognosis in cases of industrial lead poisoning at the present time is
more favourable than it was before the introduction of exhaust
ventilation and general medical supervision—a fact no doubt to be
explained by the relative decrease in the amount of lead absorbed.
REFERENCES.
[1] Goadby, K. W.: Journ. of Hygiene, vol. ix., 1909.
[2] Hunter, John: Observations of Diseases of the Army in Jamaica.
London, 1788.
[3] Drissole and Tanquerel: Meillère’s Le Saturnisme, p. 164.
[4] Hoffmann: Journ. de Méd., October, 1750.
[5] Weill and Duplant: Gazette des Hôpitaux, lxxix., 796, 1902.
[6] Briquet: Bull. Thérap., Août, 1857.
[7] Peyrow: Thèse de Paris, 1891.
[8] Stevens: Bulletin of Bureau of Labour, U.S.A., No. 95, p. 138, 1911.
[9] Zinn: Berl. Klin. Woch., Nr. 50, 1899.
[10] Serafini: Le Morgagni, No. 11, 1884.
CHAPTER XII
PREVENTIVE MEASURES AGAINST LEAD POISONING
Were there nothing else to consider but escape of lead fume from
a pot or bath of molten metal, obviously hooding over of the bath and
removal of the fume from the atmosphere of the workroom would be
unnecessary until this temperature was reached. Usually, however,
the bath is kept standing exposed to the air, and the oxide which
forms on the surface has to be skimmed off periodically, and
whenever the ladle is emptied a small cloud of dust arises. Or at
times, in certain processes, chemical interaction takes place in the
bath, as in the dipping of hollow-ware articles previously cleaned in
hydrochloric acid, with evolution of fume of volatile chloride of lead.
Any vessel, therefore, of molten metallic lead in which skimming is
necessary, or in which chemical action gives rise to fume, requires a
hood and exhaust shaft, even although the temperature is little, if at
all, above the melting-point—unless, indeed, a separate exhaust can
be arranged for the removal of the dust immediately above the point
where the skimmings are deposited.
Of many samples of dust collected in workrooms where there are
baths of molten lead, it is impossible to say definitely how much of
the lead present is due to fume, and how much to dust. Thus, a
person tempering the tangs of files was attacked by plumbism, and a
sample of dust collected from an electric pendent directly over the
pot, at a height of 4 feet from the ground, was found to contain 15·6
per cent. of metallic lead. Similarly, a sample taken above a bath for
tempering railway springs contained 48·1 per cent. metallic lead[1].
And, again, a sample collected from the top of the magazine of a
linotype machine contained 8·18 per cent. Such analyses point to
the necessity of enclosing, as far as possible, the sources of danger
—either the fume or the dust, or both. Determination of the melting-
point of the molten mass will often help in deciding whether there is
risk of fume from the pot, and, if there is not (as in the sample of dust
from the linotype machine referred to), will direct attention to the
sources of dust in the room. Proceeding on these lines, S. R.
Bennett[2], using a thermo-electric pyrometer which had been
previously standardized and its rate of error ascertained, and
checking the results in some cases by a mercury-in-glass
thermometer (the bulb of which was protected by metal tubing),
determined the temperature of the various pots and baths of molten
lead used in the Sheffield district. As was anticipated, temporary
cessation of work, stirring up of metal, recoking of furnaces, and
other causes, produced fluctuations of temperatures from minute to
minute in the same pot, and in its different parts. The compensated
pyrometer used gave for file-hardening pots a maximum of 850° C.,
and a minimum of 760° C., the average mean working temperature
being about 800° C. The variations of temperature of lead used for
tempering tangs of files and rasps was found to be high, and largely
unrestricted from a practical standpoint. The maximum was 735° C.,
and the minimum 520° C., the average mean working temperature
being 650° to 700° C., varying more than this within a few hours in
the same pot. Spring tempering is carried out at some comparatively
constant temperature between a maximum of nearly 600° C. and a
minimum of 410° C., depending on the kind of steel and the purpose
for which the steel is to be employed. Generally, the temperature
required rises as the percentage of carbon in the steel is diminished.
As these baths are larger than file-hardening pots, the temperature
range is higher at the bottom than at the top unless well stirred up.
Some lead pots are set in one side of a flue, and the temperature in
the mass is then greater on the furnace side. From further
observation of these pots during experiments, he was inclined to
believe that the lead did not volatilize directly into the atmosphere, as
heated water does, but that the particles of coke, flux, oil, etc.,
which rise from the surface, act as carriers of the rapidly oxidized
lead particles which cling to them.
Similar experiments were carried out in letterpress printing works.
The average temperature was 370° C. in the stereo pots, and in the
linotype pots at work 303° C. Scrap lead melting-pots when hottest
registered 424° C., but registered as low as 310° C., according to the
amount of scrap added, the state of the fire underneath, etc. The
best practical working temperature depends largely on the
composition of the metal used. That at some factories is the same
for stereo drums as for lino pots—viz., 81·6 per cent. lead, 16·3 per
cent. antimony, and 2·0 per cent. tin, added to harden the lead. On
the other hand, some printers use a higher percentage of antimony
in the lino than in the stereo metal. Lead melts at 325° C., and
antimony at 630° C., but by adding antimony to lead up to 14 per
cent. the melting-point is reduced at an almost uniform rate to 247°
C., after which further addition of antimony raises the melting-point.
This explains why temperatures as low as 290° C. are practicable for
linotype pots. The molten eutectic has a specific gravity of about
10·5, whereas the cubic crystals average 6·5 only; therefore in these
pots the latter float on the top, and excess of antimony is to be
expected in the skimmings or on the surface.
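The "almost uniform rate" at which antimony lowers the melting-point of lead, as described above, can be sketched as a simple linear interpolation. This is a rough illustration using only the figures given in the text (325° C. for pure lead, 247° C. at the 14 per cent. eutectic); a real lead-antimony alloy follows a phase diagram, so the approximation is meaningless beyond the eutectic:

```python
def approx_melting_point(pct_antimony):
    """Approximate melting-point (deg C) of a lead-antimony alloy,
    using only the figures quoted in the text: a uniform fall from
    325 deg C (pure lead) to 247 deg C at the 14 per cent eutectic."""
    if not 0 <= pct_antimony <= 14:
        raise ValueError("linear approximation holds only for 0-14 per cent antimony")
    return 325 - (325 - 247) * pct_antimony / 14

# Pure lead and the eutectic reproduce the text's figures;
# beyond 14 per cent antimony the melting-point rises again,
# so a ~16 per cent linotype metal melts only slightly higher,
# which is why pots can be worked near 290 deg C.
print(approx_melting_point(0))    # 325.0
print(approx_melting_point(14))   # 247.0
```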
Administration of certain sections of the Factory and Workshop
Act, 1901, would be simplified were there a ready means available
for determining the extent of contamination of the air—especially of
Section 1, requiring the factory to be ventilated so as to render
harmless, as far as practicable, all gases, vapours, dust, or other
impurities, generated in the course of the manufacturing process,
that may be injurious to health; of Section 74, empowering an
inspector to require a fan or other means if this will minimize
inhalation of injurious fumes or dust; of many regulations having as
their principal object removal of dust and fumes; and of Section 75,
prohibiting meals in rooms where lead or other poisonous substance
is used, so as to give rise to dust or fumes. Unfortunately, owing to
the difficulty hitherto of accurate collection, only a very few
determinations of the actual amount of lead dust and fume present in
the atmosphere breathed have been made. This lends peculiar value
to a series of investigations by G. Elmhirst Duckering, which have
thrown much light on the amount of lead fume present in the air of a
tinning workshop, and the amount of lead dust in the air during
certain pottery processes, and the process of sand-papering after
painting. Incidentally, also, they help to determine the minimal daily
dose of lead which will set up chronic lead poisoning[3]. Aspirating
the air at about the level of the worker’s mouth for varying periods of
time, he determined the amount of lead in the fume, or in the dust,
per 10 cubic metres of air, and from knowledge of the time during
which inhalation took place he calculated the approximate quantity
inhaled per worker daily. We have summarized some of his
conclusions in the table on pp. 204, 205:
Duckering’s experiments as to the presence of fumes containing
compounds of lead in the atmosphere breathed were carried out in a
workshop for the tinning of iron hollow-ware with a mixture consisting
of half lead and half tin. The process of manufacture and the main
sources of lead contamination in the air (knowledge arrived at from
these experiments) are explained on p. 59. As the result of
laboratory experiments designed to show the effect of the violent
escape of vapour produced below the surface of molten metal in
causing contamination of the air, and the nature of the contaminating
substances, he was able to conclude that the chemical action of the
materials (acid and flux) used, and subsequent vaporization of the
products of this action, was a much more important factor than the
mechanical action of escaping vapour. Subsequently, experiments
carried out on factory premises gave the results which are expressed
in the table as to the relative danger, from lead, to (a) a tinner using
an open bath; (b) a tinner working at a bath provided with a hood
and exhaust by means of a furnace flue; and (c) the nature and
extent of air contamination caused by the operation of wiping excess
of metal (while still in a molten state) from the tinned article. In all
three experiments aspiration of air was made slowly: it was
maintained at the rate of 3 to 4 cubic feet an hour in the first
experiment for between seven and eight hours; in the second for
twenty-eight to twenty-nine hours; and in the third for twenty-four to
twenty-five hours. The person engaged in tinning at the open bath
was shown to be exposed to much more danger than one working at
a hooded bath, while the wiper was exposed to even more danger
than the tinner using an open bath, since not only was he inhaling
fume from the hot article, but also fibre to which considerable
quantities of metallic lead and tin adhered.
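Duckering's dose arithmetic, as described above, amounts to multiplying the measured concentration (milligrammes of lead per 10 cubic metres of air) by the volume of air a worker breathes over the working day. A minimal sketch follows; the breathing rate of about 0.5 cubic metre per hour is an assumption inferred from the tinner's figures in Table XII (37·79 mg. per 10 cubic metres over 5¹⁄₂ hours giving 10·70 mg. per day), not a rate stated explicitly in the text:

```python
def daily_lead_dose(mg_per_10_cubic_metres, hours_exposed,
                    breathing_rate_m3_per_hour=0.515):
    """Approximate lead (mg) inhaled per working day.

    The default breathing rate is back-calculated from Table XII
    and is an assumption, not a figure given in the source."""
    air_breathed = breathing_rate_m3_per_hour * hours_exposed  # cubic metres
    return mg_per_10_cubic_metres / 10.0 * air_breathed

# Tinner at the open bath: 37.79 mg per 10 cubic metres, 5.5-hour exposure.
dose = daily_lead_dose(37.79, 5.5)
print(round(dose, 2))  # approximately 10.70, matching column (5) of Table XII
```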
Analysis of samples of dust collected in different parts of the
workroom bore out the conclusions derived from analysis of the
fumes. Thus, samples collected from ledges at varying heights
above the tinning bath containing the mixture of tin and lead
contained percentages of soluble lead (lead chloride) in striking
amount as compared with samples collected at points in the same
room remote from any source of lead fume, while the insoluble lead
present, as was to be expected from the fact that it consisted of lead
attached to particles of tow floating in the air, was less variable.
TABLE XII., SHOWING QUANTITIES OF LEAD (Pb) IN THE
ATMOSPHERE AT BREATHING LEVEL.
(G. E. Duckering's Experiments.)

Columns: (1) Occupation; (2) Dust present in 10 cubic metres of air
(milligrammes); (3) Total lead (Pb) present in 10 cubic metres of air
(milligrammes); (4) Estimated time (in hours) during which inhalation
took place; (5) Approximate quantity of lead (Pb), in milligrammes,
inhaled by worker per day; (6) Percentage of lead in dust.

  Occupation (1)                               Dust (2)   Lead (Pb) (3)   Time (4)   Inhaled per Day (5)   Lead in Dust (6)
  Tinner using open bath                          —           37·79        5¹⁄₂            10·70                 —
  Sand-papering and dusting railway coaches      241         116·10         —               —                  48·10
  Sand-papering coach wheels                    1343        1025·60         —               —                  76·40
  Sand-papering motor-car wheels                  35           4·70         —               —                  13·30