Beyond the Worst-Case Analysis of Algorithms

There are no silver bullets in algorithm design, and no single algorithmic idea is
powerful and flexible enough to solve every computational problem. Nor are there
silver bullets in algorithm analysis, as the most enlightening method for analyzing
an algorithm often depends on the problem and the application. However, typical
algorithms courses rely almost entirely on a single analysis framework, that of worst-
case analysis, wherein an algorithm is assessed by its worst performance on any input
of a given size.
The purpose of this book is to popularize several alternatives to worst-case
analysis and their most notable algorithmic applications, from clustering to linear
programming to neural network training. Forty leading researchers have contributed
introductions to different facets of this field, emphasizing the most important models
and results, many of which can be taught in lectures to beginning graduate students
in theoretical computer science and machine learning.

Tim Roughgarden is a professor of computer science at Columbia University.


For his research, he has been awarded the ACM Grace Murray Hopper Award,
the Presidential Early Career Award for Scientists and Engineers (PECASE), the
Kalai Prize in Computer Science and Game Theory, the Social Choice and Welfare
Prize, the Mathematical Programming Society’s Tucker Prize, and the EATCS-
SIGACT Gödel Prize. He was an invited speaker at the 2006 International Congress
of Mathematicians, the Shapley Lecturer at the 2008 World Congress of the Game
Theory Society, and a Guggenheim Fellow in 2017. His other books include Twenty
Lectures on Algorithmic Game Theory (2016) and the Algorithms Illuminated book
series (2017–2020).
Beyond the Worst-Case Analysis
of Algorithms

Edited by

Tim Roughgarden
Columbia University, New York
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge.


It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108494311
DOI: 10.1017/9781108637435
© Cambridge University Press 2021
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2021
Printed in the United Kingdom by TJ Books Limited, Padstow Cornwall
A catalogue record for this publication is available from the British Library.
ISBN 978-1-108-49431-1 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
Contents

Preface page xiii


List of Contributors xv

1 Introduction 1
Tim Roughgarden
1.1 The Worst-Case Analysis of Algorithms 1
1.2 Famous Failures and the Need for Alternatives 3
1.3 Example: Parameterized Bounds in Online Paging 8
1.4 Overview of the Book 12
1.5 Notes 20

PART ONE REFINEMENTS OF WORST-CASE ANALYSIS

2 Parameterized Algorithms 27
Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi
2.1 Introduction 27
2.2 Randomization 31
2.3 Structural Parameterizations 34
2.4 Kernelization 35
2.5 Hardness and Optimality 39
2.6 Outlook: New Paradigms and Application Domains 42
2.7 The Big Picture 46
2.8 Notes 47

3 From Adaptive Analysis to Instance Optimality 52


Jérémy Barbay
3.1 Case Study 1: Maxima Sets 52
3.2 Case Study 2: Instance-Optimal Aggregation Algorithms 60
3.3 Survey of Additional Results and Techniques 64
3.4 Discussion 65
3.5 Selected Open Problems 66
3.6 Key Takeaways 67
3.7 Notes 68


4 Resource Augmentation 72
Tim Roughgarden
4.1 Online Paging Revisited 72
4.2 Discussion 75
4.3 Selfish Routing 77
4.4 Speed Scaling in Scheduling 81
4.5 Loosely Competitive Algorithms 86
4.6 Notes 89

PART TWO DETERMINISTIC MODELS OF DATA

5 Perturbation Resilience 95
Konstantin Makarychev and Yury Makarychev
5.1 Introduction 95
5.2 Combinatorial Optimization Problems 98
5.3 Designing Certified Algorithms 101
5.4 Examples of Certified Algorithms 106
5.5 Perturbation-Resilient Clustering Problems 108
5.6 Algorithm for 2-Perturbation-Resilient Instances 111
5.7 (3 + ε)-Certified Local Search Algorithm for k-Medians 113
5.8 Notes 115

6 Approximation Stability and Proxy Objectives 120


Avrim Blum
6.1 Introduction and Motivation 120
6.2 Definitions and Discussion 121
6.3 The k-Median Problem 125
6.4 k-Means, Min-Sum, and Other Clustering Objectives 132
6.5 Clustering Applications 133
6.6 Nash Equilibria 134
6.7 The Big Picture 135
6.8 Open Questions 136
6.9 Relaxations 137
6.10 Notes 137

7 Sparse Recovery 140


Eric Price
7.1 Sparse Recovery 140
7.2 A Simple Insertion-Only Streaming Algorithm 142
7.3 Handling Deletions: Linear Sketching Algorithms 143
7.4 Uniform Algorithms 148
7.5 Lower Bound 154
7.6 Different Measurement Models 155
7.7 Matrix Recovery 158
7.8 Notes 160


PART THREE SEMIRANDOM MODELS

8 Distributional Analysis 167


Tim Roughgarden
8.1 Introduction 167
8.2 Average-Case Justifications of Classical Algorithms 171
8.3 Good-on-Average Algorithms for Euclidean Problems 175
8.4 Random Graphs and Planted Models 179
8.5 Robust Distributional Analysis 183
8.6 Notes 184

9 Introduction to Semirandom Models 189


Uriel Feige
9.1 Introduction 189
9.2 Why Study Semirandom Models? 192
9.3 Some Representative Work 196
9.4 Open Problems 209

10 Semirandom Stochastic Block Models 212


Ankur Moitra
10.1 Introduction 212
10.2 Recovery via Semidefinite Programming 215
10.3 Robustness Against a Monotone Adversary 218
10.4 Information Theoretic Limits of Exact Recovery 219
10.5 Partial Recovery and Belief Propagation 221
10.6 Random versus Semirandom Separations 223
10.7 Above Average-Case Analysis 226
10.8 Semirandom Mixture Models 230

11 Random-Order Models 234


Anupam Gupta and Sahil Singla
11.1 Motivation: Picking a Large Element 234
11.2 The Secretary Problem 237
11.3 Multiple-Secretary and Other Maximization Problems 238
11.4 Minimization Problems 247
11.5 Related Models and Extensions 250
11.6 Notes 254

12 Self-Improving Algorithms 259


C. Seshadhri
12.1 Introduction 259
12.2 Information Theory Basics 263
12.3 The Self-Improving Sorter 266
12.4 Self-Improving Algorithms for 2D Maxima 272
12.5 More Self-Improving Algorithms 277
12.6 Critique of the Self-Improving Model 278


PART FOUR SMOOTHED ANALYSIS

13 Smoothed Analysis of Local Search 285


Bodo Manthey
13.1 Introduction 285
13.2 Smoothed Analysis of the Running Time 286
13.3 Smoothed Analysis of the Approximation Ratio 301
13.4 Discussion and Open Problems 304
13.5 Notes 305

14 Smoothed Analysis of the Simplex Method 309


Daniel Dadush and Sophie Huiberts
14.1 Introduction 309
14.2 The Shadow Vertex Simplex Method 310
14.3 The Successive Shortest Path Algorithm 315
14.4 LPs with Gaussian Constraints 319
14.5 Discussion 329
14.6 Notes 330

15 Smoothed Analysis of Pareto Curves in Multiobjective Optimization 334


Heiko Röglin
15.1 Algorithms for Computing Pareto Curves 334
15.2 Number of Pareto-optimal Solutions 342
15.3 Smoothed Complexity of Binary Optimization Problems 352
15.4 Conclusions 354
15.5 Notes 355

PART FIVE APPLICATIONS IN MACHINE LEARNING


AND STATISTICS

16 Noise in Classification 361


Maria-Florina Balcan and Nika Haghtalab
16.1 Introduction 361
16.2 Model 362
16.3 The Best Case and the Worst Case 363
16.4 Benefits of Assumptions on the Marginal Distribution 365
16.5 Benefits of Assumptions on the Noise 374
16.6 Final Remarks and Current Research Directions 378

17 Robust High-Dimensional Statistics 382


Ilias Diakonikolas and Daniel M. Kane
17.1 Introduction 382
17.2 Robust Mean Estimation 384
17.3 Beyond Robust Mean Estimation 396
17.4 Notes 399


18 Nearest Neighbor Classification and Search 403


Sanjoy Dasgupta and Samory Kpotufe
18.1 Introduction 403
18.2 The Algorithmic Problem of Nearest Neighbor Search 403
18.3 Statistical Complexity of k-Nearest Neighbor Classification 411
18.4 Notes 419

19 Efficient Tensor Decompositions 424


Aravindan Vijayaraghavan
19.1 Introduction to Tensors 424
19.2 Applications to Learning Latent Variable Models 426
19.3 Efficient Algorithms in the Full-Rank Setting 430
19.4 Smoothed Analysis and the Overcomplete Setting 433
19.5 Other Algorithms for Tensor Decompositions 440
19.6 Discussion and Open Questions 441

20 Topic Models and Nonnegative Matrix Factorization 445


Rong Ge and Ankur Moitra
20.1 Introduction 445
20.2 Nonnegative Matrix Factorization 448
20.3 Topic Models 454
20.4 Epilogue: Word Embeddings and Beyond 461

21 Why Do Local Methods Solve Nonconvex Problems? 465


Tengyu Ma
21.1 Introduction 465
21.2 Analysis Technique: Characterization of the Landscape 466
21.3 Generalized Linear Models 468
21.4 Matrix Factorization Problems 471
21.5 Landscape of Tensor Decomposition 476
21.6 Survey and Outlook: Optimization of Neural Networks 478
21.7 Notes 482

22 Generalization in Overparameterized Models 486


Moritz Hardt
22.1 Background and Motivation 486
22.2 Tools to Reason About Generalization 488
22.3 Overparameterization: Empirical Phenomena 493
22.4 Generalization Bounds for Overparameterized Models 497
22.5 Empirical Checks and Holdout Estimates 500
22.6 Looking Ahead 502
22.7 Notes 502


23 Instance Optimal Distribution Testing and Learning 506


Gregory Valiant and Paul Valiant
23.1 Testing and Learning Discrete Distributions 506
23.2 Instance Optimal Distribution Learning 507
23.3 Identity Testing 516
23.4 Digression: An Automatic Inequality Prover 519
23.5 Beyond Worst-Case Analysis for Other Testing Problems 522
23.6 Notes 523

PART SIX FURTHER APPLICATIONS

24 Beyond Competitive Analysis 529


Anna R. Karlin and Elias Koutsoupias
24.1 Introduction 529
24.2 The Access Graph Model 530
24.3 The Diffuse Adversary Model 534
24.4 Stochastic Models 537
24.5 Direct Comparison of Online Algorithms 540
24.6 Where Do We Go from Here? 541
24.7 Notes 542

25 On the Unreasonable Effectiveness of SAT Solvers 547


Vijay Ganesh and Moshe Y. Vardi
25.1 Introduction: The Boolean SAT Problem and Solvers 547
25.2 Conflict-Driven Clause Learning SAT Solvers 550
25.3 Proof Complexity of SAT Solvers 554
25.4 Proof Search, Automatizability, and CDCL SAT Solvers 557
25.5 Parametric Understanding of Boolean Formulas 558
25.6 Proof Complexity, Machine Learning, and Solver Design 562
25.7 Conclusions and Future Directions 563

26 When Simple Hash Functions Suffice 567


Kai-Min Chung, Michael Mitzenmacher, and Salil Vadhan
26.1 Introduction 567
26.2 Preliminaries 571
26.3 Hashing Block Sources 575
26.4 Application: Chained Hashing 576
26.5 Optimizing Block Source Extraction 577
26.6 Application: Linear Probing 578
26.7 Other Applications 580
26.8 Notes 581

27 Prior-Independent Auctions 586


Inbal Talgam-Cohen
27.1 Introduction 586
27.2 A Crash Course in Revenue-Maximizing Auctions 587


27.3 Defining Prior-Independence 591


27.4 Sample-Based Approach: Single Item 593
27.5 Competition-Based Approach: Multiple Items 598
27.6 Summary 602
27.7 Notes 603

28 Distribution-Free Models of Social Networks 606


Tim Roughgarden and C. Seshadhri
28.1 Introduction 606
28.2 Cliques of c-Closed Graphs 607
28.3 The Structure of Triangle-Dense Graphs 612
28.4 Power-Law Bounded Networks 615
28.5 The BCT Model 619
28.6 Discussion 621
28.7 Notes 623

29 Data-Driven Algorithm Design 626


Maria-Florina Balcan
29.1 Motivation and Context 626
29.2 Data-Driven Algorithm Design via Statistical Learning 628
29.3 Data-Driven Algorithm Design via Online Learning 639
29.4 Summary and Discussion 644

30 Algorithms with Predictions 646


Michael Mitzenmacher and Sergei Vassilvitskii
30.1 Introduction 646
30.2 Counting Sketches 649
30.3 Learned Bloom Filters 650
30.4 Caching with Predictions 652
30.5 Scheduling with Predictions 655
30.6 Notes 660
Index 663

Preface

There are no silver bullets in algorithm design – no one algorithmic idea is powerful
and flexible enough to solve every computational problem of interest. The emphasis
of an undergraduate algorithms course is accordingly on the next-best thing: a small
number of general algorithm design paradigms (such as dynamic programming,
divide-and-conquer, and greedy algorithms), each applicable to a range of problems
that span multiple application domains.
Nor are there silver bullets in algorithm analysis, as the most enlightening method
for analyzing an algorithm often depends on the details of the problem and moti-
vating application. However, the focus of a typical algorithms course rests almost
entirely on a single analysis framework, that of worst-case analysis, wherein an
algorithm is assessed by its worst performance on any input of a given size. The
goal of this book is to redress the imbalance and popularize several alternatives to
worst-case analysis, developed largely in the theoretical computer science literature
over the past 20 years, and their most notable algorithmic applications. Forty leading
researchers have contributed introductions to different facets of this field, and
the introductory Chapter 1 includes a chapter-by-chapter summary of the book’s
contents.
This book’s roots lie in a graduate course that I developed and taught several
times at Stanford University.1 While the project has expanded in scope far beyond
what can be taught in a one-term (or even one-year) course, subsets of the book
can form the basis of a wide variety of graduate courses. Authors were requested to
avoid comprehensive surveys and focus instead on a small number of key models and
results that could be taught in lectures to second-year graduate students in theoretical
computer science and theoretical machine learning. Most of the chapters conclude
with open research directions as well as exercises suitable for classroom use. A free
electronic copy of this book is available from the URL https://www.cambridge.org/
9781108494311#resources (with the password ‘BWCA_CUP’).
Producing a collection of this size is impossible without the hard work of many
people. First and foremost, I thank the authors for their dedication and timeliness in
writing their own chapters and for providing feedback on preliminary drafts of other
chapters. I thank Avrim Blum, Moses Charikar, Lauren Cowles, Anupam Gupta,

1 Lecture notes and videos from this course, covering several of the topics in this book, are available from
my home page (www.timroughgarden.org).


Ankur Moitra, and Greg Valiant for their enthusiasm and excellent advice when this
project was in its embryonic stages. I am also grateful to all the Stanford students who
took my CS264 and CS369N courses, and especially to my teaching assistants Rishi
Gupta, Joshua Wang, and Qiqi Yan. The cover art is by Max Greenleaf Miller. The
editing of this book was supported in part by NSF award CCF-1813188 and ARO
award W911NF1910294.

Contributors

Maria-Florina Balcan
Carnegie Mellon University

Jérémy Barbay
University of Chile

Avrim Blum
Toyota Technological Institute at Chicago

Kai-Min Chung
Institute of Information Science, Academia Sinica

Daniel Dadush
Centrum Wiskunde & Informatica

Sanjoy Dasgupta
University of California at San Diego

Ilias Diakonikolas
University of Wisconsin-Madison

Uriel Feige
The Weizmann Institute

Fedor Fomin
University of Bergen

Vijay Ganesh
University of Waterloo

Rong Ge
Duke University


Anupam Gupta
Carnegie Mellon University

Nika Haghtalab
Cornell University

Moritz Hardt
University of California at Berkeley

Sophie Huiberts
Centrum Wiskunde & Informatica

Daniel Kane
University of California at San Diego

Anna R. Karlin
University of Washington at Seattle

Elias Koutsoupias
University of Oxford

Samory Kpotufe
Columbia University

Daniel Lokshtanov
University of California at Santa Barbara

Tengyu Ma
Stanford University

Konstantin Makarychev
Northwestern University

Yury Makarychev
Toyota Technological Institute at Chicago

Bodo Manthey
University of Twente

Michael Mitzenmacher
Harvard University

Ankur Moitra
Massachusetts Institute of Technology


Eric Price
The University of Texas at Austin

Heiko Röglin
University of Bonn

Tim Roughgarden
Columbia University

Saket Saurabh
Institute of Mathematical Sciences

C. Seshadhri
University of California at Santa Cruz

Sahil Singla
Princeton University

Inbal Talgam-Cohen
Technion–Israel Institute of Technology

Salil Vadhan
Harvard University

Gregory Valiant
Stanford University

Paul Valiant
Brown University

Moshe Vardi
Rice University

Sergei Vassilvitskii
Google, Inc.

Aravindan Vijayaraghavan
Northwestern University

Meirav Zehavi
Ben-Gurion University of the Negev

CHAPTER ONE

Introduction
Tim Roughgarden

Abstract: One of the primary goals of the mathematical analysis of


algorithms is to provide guidance about which algorithm is the “best”
for solving a given computational problem. Worst-case analysis
summarizes the performance profile of an algorithm by its worst
performance on any input of a given size, implicitly advocating for
the algorithm with the best-possible worst-case performance. Strong
worst-case guarantees are the holy grail of algorithm design, provid-
ing an application-agnostic certification of an algorithm’s robustly
good performance. However, for many fundamental problems and
performance measures, such guarantees are impossible and a more
nuanced analysis approach is called for. This chapter surveys several
alternatives to worst-case analysis that are discussed in detail later in
the book.

1.1 The Worst-Case Analysis of Algorithms


1.1.1 Comparing Incomparable Algorithms
Comparing different algorithms is hard. For almost any pair of algorithms and
measure of algorithm performance, each algorithm will perform better than the other
on some inputs. For example, the MergeSort algorithm takes Θ(n log n) time to sort
length-n arrays, whether the input is already sorted or not, while the running time of
the InsertionSort algorithm is Θ(n) on already-sorted arrays but Θ(n²) in general.1
The difficulty is not specific to running time analysis. In general, consider a
computational problem Π and a performance measure PERF, with PERF(A,z) quantifying
the “performance” of an algorithm A for Π on an input z ∈ Π. For example, Π could
be the Traveling Salesman Problem (TSP), A could be a polynomial-time heuristic for
the problem, and PERF(A,z) could be the approximation ratio of A – i.e., the ratio
of the lengths of A’s output tour and an optimal tour – on the TSP instance z.2
1 A quick reminder about asymptotic notation in the analysis of algorithms: for nonnegative real-valued
functions T(n) and f(n) defined on the natural numbers, we write T(n) = O(f(n)) if there are positive constants c
and n0 such that T(n) ≤ c · f(n) for all n ≥ n0; T(n) = Ω(f(n)) if there exist positive c and n0 with T(n) ≥ c · f(n)
for all n ≥ n0; and T(n) = Θ(f(n)) if T(n) is both O(f(n)) and Ω(f(n)).
2 In the Traveling Salesman Problem, the input is a complete undirected graph (V,E) with a nonnegative
cost c(v,w) for each edge (v,w) ∈ E, and the goal is to compute an ordering v1, v2, . . . , vn of the vertices V that
minimizes the length c(v1,v2) + c(v2,v3) + · · · + c(vn,vn+1) of the corresponding tour (with vn+1 interpreted as v1).


Or Π could be the problem of testing primality, A a randomized polynomial-time
primality-testing algorithm, and PERF(A,z) the probability (over A’s internal
randomness) that the algorithm correctly decides if the positive integer z is prime. In
general, when two algorithms have incomparable performance, how can we deem one
of them “better than” the other?
Worst-case analysis is a specific modeling choice in the analysis of algorithms,
in which the performance profile {PERF(A,z)}z∈Π of an algorithm is summarized
by its worst performance on any input of a given size (i.e., min_{z : |z|=n} PERF(A,z) or
max_{z : |z|=n} PERF(A,z), depending on the measure, where |z| denotes the size of the
input z). The “better” algorithm is then the one with superior worst-case performance.
MergeSort, with its worst-case asymptotic running time of Θ(n log n) for length-n
arrays, is better in this sense than InsertionSort, which has a worst-case running time
of Θ(n²).
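
To make the contrast concrete, here is a minimal Python sketch (our illustration, not from the text; the function name is ours) that counts the element shifts performed by insertion sort on a sorted versus a reversed array, exhibiting the Θ(n)-versus-Θ(n²) behavior described above. Worst-case analysis summarizes this entire performance profile by the reversed-array number alone.

```python
def insertion_sort_shifts(a):
    """Sort a copy of a; return the number of element shifts performed."""
    a, shifts = list(a), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot right
            a[j + 1] = a[j]
            j, shifts = j - 1, shifts + 1
        a[j + 1] = key
    return shifts

n = 1000
print(insertion_sort_shifts(range(n)))         # 0 shifts on a sorted input
print(insertion_sort_shifts(range(n, 0, -1)))  # n(n-1)/2 = 499500 on a reversed one
```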

1.1.2 Benefits of Worst-Case Analysis


While crude, worst-case analysis can be tremendously useful and, for several reasons,
it has been the dominant paradigm for algorithm analysis in theoretical computer
science.

1. A good worst-case guarantee is the best-case scenario for an algorithm, certifying


its general-purpose utility and absolving its users from understanding which inputs
are most relevant to their applications. Thus worst-case analysis is particularly
well suited for “general-purpose” algorithms that are expected to work well
across a range of application domains (such as the default sorting routine of a
programming language).
2. Worst-case analysis is often more analytically tractable to carry out than its
alternatives, such as average-case analysis with respect to a probability distribution
over inputs.
3. For a remarkable number of fundamental computational problems, there are
algorithms with excellent worst-case performance guarantees. For example, the
lion’s share of an undergraduate algorithms course comprises algorithms that run
in linear or near-linear time in the worst case.3

1.1.3 Goals of the Analysis of Algorithms


Before critiquing the worst-case analysis approach, it’s worth taking a step back to
clarify why we want rigorous methods to reason about algorithm performance. There
are at least three possible goals:

1. Performance prediction. The first goal is to explain or predict the empirical perfor-
mance of algorithms. In some cases, the analyst acts as a natural scientist, taking
an observed phenomenon such as “the simplex method for linear programming is
fast” as ground truth, and seeking a transparent mathematical model that explains
it. In others, the analyst plays the role of an engineer, seeking a theory that
3 Worst-case analysis is also the dominant paradigm in complexity theory, where it has led to the develop-
ment of NP-completeness and many other fundamental concepts.


gives accurate advice about whether or not an algorithm will perform well in an
application of interest.
2. Identify optimal algorithms. The second goal is to rank different algorithms accord-
ing to their performance, and ideally to single out one algorithm as “optimal.” At
the very least, given two algorithms A and B for the same problem, a method for
algorithmic analysis should offer an opinion about which one is “better.”
3. Develop new algorithms. The third goal is to provide a well-defined framework in
which to brainstorm new algorithms. Once a measure of algorithm performance
has been declared, the Pavlovian response of most computer scientists is to
seek out new algorithms that improve on the state-of-the-art with respect to
this measure. The focusing effect catalyzed by such yardsticks should not be
underestimated.

When proving or interpreting results in algorithm design and analysis, it’s impor-
tant to be clear in one’s mind about which of these goals the work is trying to
achieve.
What’s the report card for worst-case analysis with respect to these three goals?

1. Worst-case analysis gives an accurate performance prediction only for algorithms


that exhibit little variation in performance across inputs of a given size. This is
the case for many of the greatest hits of algorithms covered in an undergraduate
course, including the running times of near-linear-time algorithms and of many
canonical dynamic programming algorithms. For many more complex prob-
lems, however, the predictions of worst-case analysis are overly pessimistic (see
Section 1.2).
2. For the second goal, worst-case analysis earns a middling grade – it gives good
advice about which algorithm to use for some important problems (such as many
of those in an undergraduate course) and bad advice for others (see Section 1.2).
3. Worst-case analysis has served as a tremendously useful brainstorming organizer.
For more than a half-century, researchers striving to optimize worst-case algo-
rithm performance have been led to thousands of new algorithms, many of them
practically useful.

1.2 Famous Failures and the Need for Alternatives


For many problems a bit beyond the scope of an undergraduate course, the
downside of worst-case analysis rears its ugly head. This section reviews four
famous examples in which worst-case analysis gives misleading or useless advice
about how to solve a problem. These examples motivate the alternatives to worst-
case analysis that are surveyed in Section 1.4 and described in detail in later chapters
of the book.

1.2.1 The Simplex Method for Linear Programming


Perhaps the most famous failure of worst-case analysis concerns linear programming,
the problem of optimizing a linear function subject to linear constraints (Figure 1.1).
Dantzig proposed in the 1940s an algorithm for solving linear programs called
the simplex method. The simplex method solves linear programs using greedy local


Figure 1.1 A two-dimensional linear programming problem.

search on the vertices of the solution set boundary, and variants of it remain
in wide use to this day. The enduring appeal of the simplex method stems from
its consistently superb performance in practice. Its running time typically scales
modestly with the input size, and it routinely solves linear programs with millions of
decision variables and constraints. This robust empirical performance suggested that
the simplex method might well solve every linear program in a polynomial amount
of time.
Klee and Minty (1972) showed by example that there are contrived linear programs
that force the simplex method to run in time exponential in the number of decision
variables (for all of the common “pivot rules” for choosing the next vertex). This
illustrates the first potential pitfall of worst-case analysis: overly pessimistic perfor-
mance predictions that cannot be taken at face value. The running time of the simplex
method is polynomial for all practical purposes, despite the exponential prediction of
worst-case analysis.
To add insult to injury, the first worst-case polynomial-time algorithm for linear
programming, the ellipsoid method, is not competitive with the simplex method in
practice.4 Taken at face value, worst-case analysis recommends the ellipsoid method
over the empirically superior simplex method. One framework for narrowing the gap
between these theoretical predictions and empirical observations is smoothed analysis,
the subject of Part Four of this book; see Section 1.4.4 for an overview.

1.2.2 Clustering and NP-Hard Optimization Problems


Clustering is a form of unsupervised learning (finding patterns in unlabeled data),
where the informal goal is to partition a set of points into “coherent groups”
(Figure 1.2). One popular way to coax this goal into a well-defined computational
problem is to posit a numerical objective function over clusterings of the point set,
and then seek the clustering with the best objective function value. For example, the
goal could be to choose k cluster centers to minimize the sum of the distances between
points and their nearest centers (the k-median objective) or the sum of the squared

4 Interior-point methods, developed five years later, led to algorithms that both run in worst-case polynomial
time and are competitive with the simplex method in practice.


Figure 1.2 A sensible clustering of a set of points.

such distances (the k-means objective). Almost all natural optimization problems that
are defined over clusterings are NP-hard.5
In practice, clustering is not viewed as a particularly difficult problem. Lightweight
clustering algorithms, such as Lloyd’s algorithm for k-means and its variants, regu-
larly return the intuitively “correct” clusterings of real-world point sets. How can
we reconcile the worst-case intractability of clustering problems with the empirical
success of relatively simple algorithms?6
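
To fix ideas, here is a minimal sketch of Lloyd's algorithm for the k-means objective, written by us for illustration (the book does not supply code); it assumes 2D points given as (x, y) tuples and a fixed iteration budget.

```python
import random

def squared_dist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def lloyd_kmeans(points, k, iters=20):
    centers = random.sample(points, k)  # seed with k random input points
    for _ in range(iters):
        # Assignment step: attach each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: squared_dist(p, centers[c]))
            clusters[j].append(p)
        # Update step: move each center to the centroid of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = (sum(x for x, _ in cl) / len(cl),
                              sum(y for _, y in cl) / len(cl))
    return centers

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(sorted(lloyd_kmeans(pts, 2)))  # centers near (0, 0.5) and (10, 10.5)
```

Each iteration never increases the k-means objective, so the method converges, but in the worst case only to a local optimum; the models in Part Three ask when the intuitively correct clustering is nonetheless recovered.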
One possible explanation is that clustering is hard only when it doesn’t matter.
For example, if the difficult instances of an NP-hard clustering problem look like
a bunch of random unstructured points, who cares? The common use case for a
clustering algorithm is for points that represent images, or documents, or proteins, or
some other objects where a “meaningful clustering” is likely to exist. Could instances
with a meaningful clustering be easier than worst-case instances? Part Three of this
book covers recent theoretical developments that support an affirmative answer; see
Section 1.4.2 for an overview.

1.2.3 The Unreasonable Effectiveness of Machine Learning


The unreasonable effectiveness of modern machine learning algorithms has thrown
the gauntlet down to researchers in algorithm analysis, and there is perhaps no other
problem domain that calls out as loudly for a “beyond worst-case” approach.
To illustrate some of the challenges, consider a canonical supervised learning
problem, where a learning algorithm is given a data set of object-label pairs and the
goal is to produce a classifier that accurately predicts the label of as-yet-unseen objects

5 Recall that a polynomial-time algorithm for an NP-hard problem would yield a polynomial-time algorithm
for every problem in NP – for every problem with efficiently verifiable solutions. Assuming the widely believed
P ≠ NP conjecture, every algorithm for an NP-hard problem either returns an incorrect answer for some inputs
or runs in super-polynomial time for some inputs (or both).
6 More generally, optimization problems are more likely to be NP-hard than polynomial-time solvable. In
many cases, even computing an approximately optimal solution is an NP-hard problem. Whenever an efficient
algorithm for such a problem performs better on real-world instances than (worst-case) complexity theory would
suggest, there’s an opportunity for a refined and more accurate theoretical analysis.


(e.g., whether or not an image contains a cat). Over the past decade, aided by massive
data sets and computational power, neural networks have achieved impressive levels
of performance across a range of prediction tasks. Their empirical success flies in
the face of conventional wisdom in multiple ways. First, there is a computational
mystery: Neural network training usually boils down to fitting parameters (weights
and biases) to minimize a nonconvex loss function, for example, to minimize the
number of classification errors the model makes on the training set. In the past such
problems were written off as computationally intractable, but first-order methods
(i.e., variants of gradient descent) often converge quickly to a local optimum or even
to a global optimum. Why?
Second, there is a statistical mystery: Modern neural networks are typically over-
parameterized, meaning that the number of parameters to fit is considerably larger
than the size of the training data set. Overparameterized models are vulnerable
to large generalization error (i.e., overfitting), since they can effectively memorize
the training data without learning anything that helps classify as-yet-unseen data
points. Nevertheless, state-of-the-art neural networks generalize shockingly well –
why? The answer likely hinges on special properties of both real-world data sets and
the optimization algorithms used for neural network training (principally stochastic
gradient descent). Part Five of this book covers the state-of-the-art explanations
of these and other mysteries in the empirical performance of machine learning
algorithms.
The beyond worst-case viewpoint can also contribute to machine learning by
“stress-testing” the existing theory and providing a road map for more robust
guarantees. While work in beyond worst-case analysis makes strong assumptions
relative to the norm in theoretical computer science, these assumptions are usually
weaker than the norm in statistical machine learning. Research in the latter field
often resembles average-case analysis, for example, when data points are modeled
as independent and identically distributed samples from some underlying structured
distribution. The semirandom models described in Parts Three and Four of this book
serve as role models for blending adversarial and average-case modeling to encourage
the design of algorithms with robustly good performance.

1.2.4 Analysis of Online Algorithms


Online algorithms are algorithms that must process their input as it arrives over time.
For example, consider the online paging problem, where there is a system with a small
fast memory (the cache) and a big slow memory. Data are organized into blocks called
pages, with up to k different pages fitting in the cache at once. A page request results
in either a cache hit (if the page is already in the cache) or a cache miss (if not). On a
cache miss, the requested page must be brought into the cache. If the cache is already
full, then some page in it must be evicted. A cache replacement policy is an algorithm
for making these eviction decisions. Any systems textbook will recommend aspiring
to the Least Recently Used (LRU) policy, which evicts the page whose most recent
reference is furthest in the past. The same textbook will explain why: Real-world
page request sequences tend to exhibit locality of reference, meaning that recently
requested pages are likely to be requested again soon. The LRU policy uses the recent
past as a prediction for the near future. Empirically, it typically suffers fewer cache
misses than competing policies like First-In First-Out (FIFO).
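
The mechanics are easy to simulate. The following sketch (ours, not the book's) counts the fault rate of the LRU policy on a request sequence with cache size k, using an ordered dictionary whose front is always the least recently used page.

```python
from collections import OrderedDict

def lru_fault_rate(requests, k):
    cache, faults = OrderedDict(), 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: refresh this page's recency
        else:
            faults += 1                    # miss: the page must be brought in
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults / len(requests)

print(lru_fault_rate([1, 2, 1, 2, 1, 2, 3, 3, 3, 1], k=2))  # 0.4
```

Deleting the move_to_end line turns the simulator into FIFO, which evicts pages in order of arrival; Section 1.3 shows how a parameterized analysis separates the two policies.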


Worst-case analysis, straightforwardly applied, provides no useful insights about


the performance of different cache replacement policies. For every deterministic
policy and cache size k, there is a pathological page request sequence that triggers
a page fault rate of 100%, even though the optimal clairvoyant replacement policy
(known as Bélády’s furthest-in-the-future algorithm) would have a page fault rate of
at most 1/k (Exercise 1.1). This observation is troublesome both for its absurdly pes-
simistic performance prediction and for its failure to differentiate between competing
replacement policies (such as LRU vs. FIFO). One solution, described in Section 1.3,
is to choose an appropriately fine-grained parameterization of the input space and to
assess and compare algorithms using parameterized guarantees.

1.2.5 The Cons of Worst-Case Analysis


We should celebrate the fact that worst-case analysis works so well for so many
fundamental computational problems, while at the same time recognizing that
the cherrypicked successes highlighted in undergraduate algorithms can paint a
potentially misleading picture about the range of its practical relevance. The
preceding four examples highlight the chief weaknesses of the worst-case analysis
framework.

1. Overly pessimistic performance predictions. By design, worst-case analysis gives a


pessimistic estimate of an algorithm’s empirical performance. In the preceding
four examples, the gap between the two is embarrassingly large.
2. Can rank algorithms inaccurately. Overly pessimistic performance summaries can
derail worst-case analysis from identifying the right algorithm to use in practice.
In the online paging problem, it cannot distinguish between the FIFO and LRU
policies; for linear programming, it implicitly suggests that the ellipsoid method is
superior to the simplex method.
3. No data model. If worst-case analysis has an implicit model of data, then it’s the
“Murphy’s Law” data model, where the instance to be solved is an adversarially
selected function of the chosen algorithm.7 Outside of security applications, this
algorithm-dependent model of data is a rather paranoid and incoherent way to
think about a computational problem.
In many applications, the algorithm of choice is superior precisely because
of properties of data in the application domain, such as meaningful solutions
in clustering problems or locality of reference in online paging. Pure worst-case
analysis provides no language for articulating such domain-specific properties of
data. In this sense, the strength of worst-case analysis is also its weakness.

These drawbacks show the importance of alternatives to worst-case analysis, in


the form of models that articulate properties of “relevant” inputs and algorithms
that possess rigorous and meaningful algorithmic guarantees for inputs with these
properties. Research in “beyond worst-case analysis” is a conversation between
models and algorithms, with each informing the development of the other. It has
both a scientific dimension, where the goal is to formulate transparent mathematical

7 Murphy’s Law: If anything can go wrong, it will.


models that explain empirically observed phenomena about algorithm performance,


and an engineering dimension, where the goals are to provide accurate guidance about
which algorithm to use for a problem and to design new algorithms that perform
particularly well on the relevant inputs.
Concretely, what might a result that goes “beyond worst-case analysis” look like?
The next section covers in detail an exemplary result by Albers et al. (2005) for the
online paging problem introduced in Section 1.2.4. The rest of the book offers dozens
of further examples.

1.3 Example: Parameterized Bounds in Online Paging


1.3.1 Parameterizing by Locality of Reference
Returning to the online paging example in Section 1.2.4, perhaps we shouldn’t be
surprised that worst-case analysis fails to advocate LRU over FIFO. The empirical
superiority of LRU is due to the special structure in real-world page request sequences
(locality of reference), which is outside the language of pure worst-case analysis.
The key idea for obtaining meaningful performance guarantees for and compar-
isons between online paging algorithms is to parameterize page request sequences
according to how much locality of reference they exhibit, and then prove param-
eterized worst-case guarantees. Refining worst-case analysis in this way leads to
dramatically more informative results. Part One of the book describes many other
applications of such fine-grained input parameterizations; see Section 1.4.1 for an
overview.
How should we measure locality in a page request sequence? One tried and true
method is the working set model, which is parameterized by a function f from the
positive integers N to N that describes how many different page requests are possible
in a window of a given length. Formally, we say that a page sequence σ conforms to f if
for every positive integer n and every set of n consecutive page requests in σ , there are
requests for at most f (n) distinct pages. For example, the identity function f (n) = n
imposes no restrictions on the page request sequence. A sequence can only conform
to a sublinear function like f(n) = ⌈√n⌉ or f(n) = ⌈1 + log₂ n⌉ if it exhibits locality
of reference.8 We can assume without loss of generality that f (1) = 1, f (2) = 2, and
f (n + 1) ∈ {f (n),f (n) + 1} for all n (Exercise 1.2).
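
The definition is easy to check by brute force. The following quadratic-time sketch (ours, for illustration only) tests whether a sequence conforms to f by examining every window.

```python
import math

def conforms(sigma, f):
    m = len(sigma)
    for n in range(1, m + 1):            # every window length n...
        for start in range(m - n + 1):   # ...and every window of that length
            if len(set(sigma[start:start + n])) > f(n):
                return False
    return True

sqrt_f = lambda n: math.ceil(math.sqrt(n))
print(conforms([1, 1, 2, 2, 2, 1, 1], sqrt_f))  # True: strong locality
print(conforms([1, 2, 3, 4, 5, 6, 7], sqrt_f))  # False: no locality
```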
We adopt as our performance measure PERF(A,z) the fault rate of an online
algorithm A on the page request sequence z – the fraction of requests in z on which A
suffers a page fault. We next state a performance guarantee for the fault rate of the
LRU policy with a cache size of k that is parameterized by a number αf (k) ∈ [0,1].
The parameter αf (k) is defined below in (1.1); intuitively, it will be close to 0 for
slow-growing functions f (i.e., functions that impose strong locality of reference) and
close to 1 for functions f that grow quickly (e.g., near-linearly). This performance
guarantee requires that the function f is approximately concave in the sense that the
number m_y of inputs with value y under f (that is, |f⁻¹(y)|) is nondecreasing in y
(Figure 1.3).

8 The notation ⌈x⌉ means the number x, rounded up to the nearest integer.


n       1  2  3  4  5  6  7  8  · · ·
f (n)   1  2  3  3  4  4  4  5  · · ·

Figure 1.3 An approximately concave function, with m1 = 1, m2 = 1, m3 = 2, m4 = 3, . . .

Theorem 1.1 (Albers et al., 2005) With αf (k) defined as in (1.1) below:

(a) For every approximately concave function f , cache size k ≥ 2, and deterministic
cache replacement policy, there are arbitrarily long page request sequences
conforming to f for which the policy’s page fault rate is at least αf (k).
(b) For every approximately concave function f , cache size k ≥ 2, and page request
sequence that conforms to f , the page fault rate of the LRU policy is at most
αf (k) plus an additive term that goes to 0 with the sequence length.
(c) There exists a choice of an approximately concave function f , a cache size k ≥ 2,
and an arbitrarily long page request sequence that conforms to f , such that the
page fault rate of the FIFO policy is bounded away from αf (k).

Parts (a) and (b) prove the worst-case optimality of the LRU policy in a strong
and fine-grained sense, f -by-f and k-by-k. Part (c) differentiates LRU from FIFO, as
the latter is suboptimal for some (in fact, many) choices of f and k.
The guarantees in Theorem 1.1 are so good that they are meaningful even when
taken at face value – for strongly sublinear f ’s, αf (k) goes to 0 reasonably quickly
with k. The precise definition of αf (k) for k ≥ 2 is

αf (k) = (k − 1)/(f⁻¹(k + 1) − 2),    (1.1)

where we abuse notation and interpret f⁻¹(y) as the smallest value of x such that
f(x) = y. That is, f⁻¹(y) denotes the smallest window length in which page requests
for y distinct pages might appear. As expected, for the function f(n) = n we have
αf (k) = 1 for all k. (With no restriction on the input sequence, an adversary can force
a 100% fault rate.) If f(n) = ⌈√n⌉, however, then αf (k) scales with 1/√k. Thus with
a cache size of 10,000, the page fault rate is always at most 1%. If f(n) = ⌈1 + log₂ n⌉,
then αf (k) goes to 0 even faster with k, roughly as k/2ᵏ.
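
These calculations are mechanical and can be scripted. The sketch below (ours) evaluates definition (1.1), computing f⁻¹(y) by brute-force search as the smallest window length x with f(x) = y; the exact numbers depend on the rounding convention chosen for f.

```python
import math

def f_inverse(f, y, limit=10**6):
    for x in range(1, limit):
        if f(x) == y:
            return x
    raise ValueError("f does not attain y below the search limit")

def alpha(f, k):
    return (k - 1) / (f_inverse(f, k + 1) - 2)

log_f = lambda n: 1 + math.ceil(math.log2(n))  # a slow-growing, approximately concave f
print(alpha(log_f, 10))  # about 0.018, decaying roughly like k / 2**k
```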

1.3.2 Proof of Theorem 1.1


This section proves the first two parts of Theorem 1.1; part (c) is left as Exercise 1.4.

Part (a). To prove the lower bound in part (a), fix an approximately concave function
f and a cache size k ≥ 2. Fix a deterministic cache replacement policy A.
We construct a page sequence σ that uses only k + 1 distinct pages, so at any given
time step there is exactly one page missing from the algorithm’s cache. (Assume that
the algorithm begins with the first k pages in its cache.) The sequence comprises k − 1
blocks, where the jth block consists of m_{j+1} consecutive requests for the same page
pj, where pj is the unique page missing from the algorithm A’s cache at the start of the


Figure 1.4 Blocks of k − 1 faults, for k = 3.

block. (Recall that m_y is the number of values of x such that f(x) = y.) This sequence
conforms to f (Exercise 1.3).
By the choice of the pj's, A incurs a page fault on the first request of a block, and
not on any of the other (duplicate) requests of that block. Thus, algorithm A suffers
exactly k − 1 page faults.
The length of the page request sequence is m2 + m3 + · · · + mk. Because m1 = 1,
this sum equals (m1 + m2 + · · · + mk) − 1, which, using the definition of the mj's,
equals (f⁻¹(k + 1) − 1) − 1 = f⁻¹(k + 1) − 2. The algorithm’s page fault rate on this sequence matches
the definition (1.1) of αf (k), as required. More generally, repeating the construction
over and over again produces arbitrarily long page request sequences for which the
algorithm has page fault rate αf (k).
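
The construction can be replayed in code. Here is a sketch (ours) of the part (a) adversary, instantiated against the LRU policy for concreteness and assuming the cache initially holds pages 0 through k − 1 with page 0 least recently used; each block repeats the currently missing page m_{j+1} times.

```python
import math
from collections import OrderedDict

def m(f, y, limit=10**5):
    # m_y = |f^{-1}(y)|: how many window lengths x have f(x) = y
    return sum(1 for x in range(1, limit) if f(x) == y)

def adversarial_sequence(f, k):
    cache = OrderedDict((p, True) for p in range(k))  # pages 0..k-1 resident
    missing = k                                       # page k starts outside
    sigma = []
    for j in range(1, k):                             # blocks j = 1, ..., k-1
        sigma += [missing] * m(f, j + 1)              # m_{j+1} repeated requests
        evicted = cache.popitem(last=False)[0]        # LRU faults once, evicting
        cache[missing] = True                         # the least recent page...
        missing = evicted                             # ...which the next block requests
    return sigma

log_f = lambda n: 1 + math.ceil(math.log2(n))
sigma = adversarial_sequence(log_f, k=5)
print(len(sigma), (5 - 1) / len(sigma))  # length 15, fault rate 4/15 = αf(5)
```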

Part (b). To prove a matching upper bound for the LRU policy, fix an approximately
concave function f , a cache size k ≥ 2, and a sequence σ that conforms to f . Our
fault rate target αf (k) is a major clue to the proof (recall (1.1)): we should be looking
to partition the sequence σ into blocks of length at least f⁻¹(k + 1) − 2 such that each
block has at most k − 1 faults. So consider groups of k − 1 consecutive faults of the
LRU policy on σ . Each such group defines a block, beginning with the first fault of
the group, and ending with the page request that immediately precedes the beginning
of the next group of faults (see Figure 1.4).

Claim Consider a block other than the first or last. Consider the page requests
in this block, together with the requests immediately before and after this block.
These requests are for at least k + 1 distinct pages.

The claim immediately implies that every block contains at least f⁻¹(k + 1) − 2
requests. Because there are k − 1 faults per block, this shows that the page fault rate is
at most αf (k) (ignoring the vanishing additive error due to the first and last blocks),
proving Theorem 1.1(b).
We proceed to the proof of the claim. Note that, in light of Theorem 1.1(c), it is
essential that the proof uses properties of the LRU policy not shared by FIFO. Fix
a block other than the first or last, and let p be the page requested immediately prior
to this block. This request could have been a page fault, or not (cf., Figure 1.4). In
any case, p is in the cache when this block begins. Consider the k − 1 faults contained
in the block, together with the kth fault that occurs immediately after the block. We
consider three cases.
First, if the k faults occurred on distinct pages that are all different from p, we have
identified our k + 1 distinct requests (p and the k faults). For the second case, suppose
that two of the k faults were for the same page q ≠ p. How could this have happened?
The page q was brought into the cache after the first fault on q and was not evicted
until there were k requests for distinct pages other than q after this page fault. This
gives k + 1 distinct page requests (q and the k other distinct requests between the two


faults on q). Finally, suppose that one of these k faults was on the page p. Because p
was requested just before the first of these faults, the LRU algorithm, subsequent to
this request and prior to evicting p, must have received requests for k distinct pages
other than p. These requests, together with that for p, give the desired k + 1 distinct
page requests.9

1.3.3 Discussion
Theorem 1.1 is an example of a “parameterized analysis” of an algorithm, where
the performance guarantee is expressed as a function of parameters of the input
other than its size. A parameter like αf (k) measures the “easiness” of an input, much
like matrix condition numbers in linear algebra. We will see many more examples of
parameterized analyses later in the book.
There are several reasons to aspire toward parameterized performance guarantees.

1. A parameterized guarantee is a mathematically stronger statement, containing


strictly more information about an algorithm’s performance than a worst-case
guarantee parameterized solely by the input size.
2. A parameterized analysis can explain why an algorithm has good “real-world”
performance even when its worst-case performance is poor. The approach is to
first show that the algorithm performs well for “easy” values of the parameter
(e.g., for f and k such that αf (k) is close to 0), and then make a case that “real-
world” instances are “easy” in this sense (e.g., have enough locality of reference
to conform to a function f with a small value of αf (k)). The latter argument can
be made empirically (e.g., by computing the parameter on representative bench-
marks) or mathematically (e.g., by positing a generative model and proving that
it typically generates easy inputs). Results in smoothed analysis (see Section 1.4.4
and Part Four) typically follow this two-step approach.
3. A parameterized performance guarantee suggests when – for which inputs, and
which application domains – a given algorithm should be used. (Namely, on the
inputs where the performance of the algorithm is good!) Such advice is useful
to someone who has no time or interest in developing their own algorithm from
scratch, and merely wishes to be an educated client of existing algorithms.10
4. Fine-grained performance characterizations can differentiate algorithms when
worst-case analysis cannot (as with LRU vs. FIFO).
5. Formulating a good parameter often forces the analyst to articulate a form of
structure in data, like the “amount of locality” in a page request sequence.
Ideas for new algorithms that explicitly exploit such structure often follow soon
thereafter.11

9 The first two arguments apply also to the FIFO policy, but the third does not. Suppose p was already in
the cache when it was requested just prior to the block. Under FIFO, this request does not “reset p’s clock”; if
it was originally brought into the cache long ago, FIFO might well evict p on the block’s very first fault.
10 For a familiar example, parameterizing the running time of graph algorithms by both the number of
vertices and the number of edges provides guidance about which algorithms should be used for sparse graphs
and which ones for dense graphs.
11 The parameter αf (k) showed up only in our analysis of the LRU policy; in other applications, the chosen
parameter also guides the design of algorithms for the problem.

Another random document with
no related content on Scribd:
“I have a growing doubt of the value and justice of the
system, whether as regards the interests of the public or the
inventors.”

Lord Granville, then Vice-President of the Board of Trade, the


Chairman of the Committee on the Patent Bills, told the House of
Lords, on July 1, 1851—

“The last witness was the Master of the Rolls, who,


notwithstanding the experience he had had as one of the law
officers of the Crown in administering the Patent-Laws, and
although he took charge of the first Bill which the Government
proposed on the subject, was decidedly of opinion that
Patent-Laws were bad in principle, and were of no advantage
either to the public or inventors.... All the evidence that had
been brought before the Committee, both of the gentlemen
who were opposed to the system of Patents and those who
were most strongly in favour of it, had only tended to confirm
his previous opinion that the whole system is unadvisable for
the public, disadvantageous to inventors, and wrong in
principle. The result of the experience acquired by the present
Vice-Chancellor and Lord Chief Justice of the Queen’s Bench
had raised great doubts in their minds as to whether a law of
Patents was advantageous. The Chief Justice of the Common
Pleas likewise had written him a letter, which he authorised
him to make what public use of he pleased, declaring his
concurrence in his opinion that a law of Patents was neither
advantageous to the public nor useful to inventors.... The only
persons, he believed, who derived any advantage from the
Patent-Laws were members of the legal profession. Except
perhaps warranty of horses, there was no subject which
offered so many opportunities for sharp practice as the law of
Patents. As regards scientific men, too, the practice of
summoning them as witnesses on trials respecting Patents
had an injurious, if not a demoralising, effect.... They
sometimes allowed themselves to be betrayed into giving a
more favourable opinion of the merits of an invention than
was strictly accurate.”

Lord Harrowby judiciously said, in reference to the proposition


then for the first time made to exempt the Colonies from the
incidence of British Patents—

“The colonial refiner would be enabled to avail himself of


every new invention in the manufacture of sugar, to the
prejudice of the home refiner, who would have to pay for the
Patent-right.”

Lord Campbell—

“Having been some years a law officer of the Crown, had


some experience as regarded the question at issue, and he
begged to say that he entirely approved of the view of his
noble friend, Earl Granville.”

Sir James Graham, on Aug. 5 of the same year, observed—

“There was also evidently great division of opinion among


Her Majesty’s Ministers upon this subject. The Vice-President
of the Board of Trade, in the House of Lords, when
introducing this Bill, expressed a decided opinion adverse to
the principle of Patents altogether. The noble Secretary for
the Colonies (Earl Grey) agreed with the Vice-President of the
Board of Trade, and now it was found that the advisers of the
Crown had put an end altogether to Patents in the colonies.
Was it right, then, to continue a system in England which had
been condemned in principle by the advisers of the Crown?
And were they to legislate upon a question which the divisions
in Her Majesty’s Council rendered still more doubtful?”

Mr. Cardwell, sensibly and patriotically,


“Would remind the House of the case of the sugar-refiners
of Liverpool, who complained of this part of the Bill.”

I need not quote Mr. Ricardo, whose lamented death prevented


him from urging the present subject as he intended. Allow only the
following observations of Mr. Roche, who on the same occasion—

“Entirely agreed that the Patent-Laws should be abolished


altogether. They might depend on it that nine-tenths of the
Patent inventions, under any law that could be passed, would
be nothing less than so many stumbling-blocks in the way of
improvement.”

Here is an extract from the proceedings of the British Association


at Glasgow:—

“Mr. Archibald Smith was convinced that a majority of


scientific men and the public were in favour of a repeal of the
Patent-Law, and he believed its days were numbered. He
held it was the interest of the public, and not the patentees,
that should be consulted in the matter. This was a growing
opinion amongst lawyers and young men of his
acquaintance.”

I revert to the injurious influence of Patents in incapacitating


manufacturers to compete with their foreign rivals, and am able to
submit Continental testimony that such is the inevitable effect. The
following lengthy quotation will suffice from M. Legrand, Auditor of
the Council of State of France:—

“There is in this institution not only an obstacle to the
development of home trade, but also a shackle on foreign
commerce.
“The doors which we open by our Treaties of Commerce
may by means of Patents be closed.
“Let an invention be freely worked in Belgium; if in France it
be patented, Belgian produce cannot enter there. Let the
contrary be the case; we cannot export to Belgium the
production which is free with us, but patented at Brussels.
“Let us suppose, for example, that a new colour is patented
alone in France, and that the patentee only permits the
manufacture of the colour on payment of a high royalty: this
colour will become dear, to the profit of the patentee alone,
and the detriment of all; its exportation, or the exportation of
articles dyed with this colour, into a country where the
manufacture is free, will become impossible, because in that
country they will begin to fabricate it, and its price will be
diminished to the extent of the royalty exacted for it by the
patentee.
“The French producer will necessarily be placed in such a
situation that he will be unable to sustain any foreign
competition.
“It is of consequence, so far as it depends on legislators, to
place those countries on the same footing who unite in the
peaceful, beneficent struggle of competition.
“But with the sound notions which prevail amongst persons
of intelligence, it is evident that the uniform solution to which
every one would adhere cannot be one which would
recognise Patents.
“The making all discoveries free is the system which alone
would have the chance of being adopted by all nations.
“It would certainly put an end to more injustice than it would
originate.”

I had the pleasure of being present at a numerously-attended
meeting of the Economists of Germany held at Dresden in 1863,
which almost unanimously adopted a resolution against all Patents;
quite in harmony, I may say, with formal resolutions of commercial
and industrial associations in that country and France.
The House must long ago have been prepared for the following
conclusions, which close the Royal Commission’s Report on the Law
relating to Letters Patent for Inventions:—

“That in all Patents hereafter to be granted a proviso shall
be inserted to the effect that the Crown shall have the power
to use any invention therein patented without previous licence
or consent of the patentee, subject to payment of a sum to be
fixed by the Treasury.
“While, in the judgment of the Commissioners, the changes
above suggested will do something to mitigate the
inconveniences now generally complained of by the public as
incident to the working of the Patent-Law, it is their opinion
that these inconveniences cannot be wholly removed. They
are, in their belief, inherent in the nature of a Patent-Law, and
must be considered as the price which the public consents to
pay for the existence of such a law.”

This is signed by Lord Stanley, Lord Overstone, Sir W. Erle, Lord
Hatherley, Lord Cairns, H. Waddington, W. R. Grove, W. E. Forster,
Wm. Fairbairn.
The public understood this to mean that the Commission were by
no means satisfied that there should be any longer any Patent-Law
at all. The Journal of Jurisprudence gives it this interpretation.
But I can adduce a higher and more authoritative exposition with
regard to the views of at least the noble Lord the Chairman of the
Commission. When the question was put as to legislation in
conformity with the Report, Lord Stanley told this House on June 10,
1865:—

“The House ought first to have an opportunity fairly and
deliberately of deciding upon that larger question which had
not been submitted to the Patent-Law Commission—viz.,
whether it was expedient that Patents for invention should
continue to be a part of the law.”

We all know there is in general society, and even among
politicians and men in business, an acquiescence almost amounting
to approval of Patents in the abstract. Its existence I attribute to
unacquaintance with actualities. I acknowledge that when the more
able advocates of the system state their reasons, these look
conclusive enough, and would be so if there were but one side of the
case. What we, their opponents, claim is that our objections be met.
This, I apprehend, cannot be done without, at least, leaving so much
inevitable evil confessed as must turn the scale. Some of these
arguments that we hear are futile and far-fetched enough to deserve
to be repeated. Admitting obstructiveness, a Chancery-lane writer
pleads thus:—

“This very prohibition causes others to exert themselves to
invent different means by which the same or a better result
may be obtained than by the invention which they are
prevented from using, except by payment, and the result is
competition, in the highest degree beneficial to trade, and an
unceasing advancement and striving.”

Really no better is the reasoning of an official witness, who told the
Commission:—

“Three-fourths of the Patents, Inventions of Englishmen.—
Three-fourths of the applications for Patents, or thereabouts,
are for the inventions of Englishmen; the remaining one-fourth
are for the inventions of foreigners, for the most part
Frenchmen and Americans. The country in which inventions
are of the highest value will draw inventions to it from all
others, and so long as any one country protects inventions by
Patent, so long must all countries protect. Were England to
abolish protection of inventions, inventors would carry their
inventions to other countries. Switzerland does not protect,
and consequently the Swiss take their inventions to other
countries.”

Why? What harm though the British inventor should go abroad to
patent or even to work his invention? He must specify it in the
country he goes to; and cannot, will not, our artisans at once avail
themselves, and revel in the free use, of what he there records? Call
our nation’s not rewarding him a piece of doubtful policy, or want of
generosity; but banish the notion that our trade will suffer. It will gain.
But there are defenders of very different calibre: Mr. MacCulloch,[3]
Sir David Brewster, Mr. John Stuart Mill. It is meet I should inform the
House what are their arguments. I find them succinctly stated and
well put in Mr. Mill’s “Political Economy.” I will read the whole of that
gentleman’s observations, interlacing, for brevity’s sake, very short
and unargumentative dissents, if not replies:—

“The condemnation of monopolies ought not to extend to
Patents, by which the originator—”

Does Mr. Mill know that many an invention is patented by some
person who is not the originator, but only the first promulgator in
Britain; still more often, who is not the only originator?

“of an improved process—”

I have already shown that the law, rightly read, can hardly be said
to sanction the patenting of a “process.”

“is allowed to enjoy, for a limited period, the exclusive
privilege of using his own improvement.”

Which means, the privilege of debarring all other people—some of
whom may, after him, or at the same time as he, or even before him,
have invented it—from doing what he is, and they also should be,
allowed to do.

“This is not making the commodity dear for his benefit, but
merely postponing—”

For his benefit, and still more frequently and surely for the benefit
of a multitude of other individuals, who have less claim, or no claim
at all.

“a part of the increased cheapness, which the public owe to
the inventor—”

But not to him only, for he invents often along with others, and
always in consequence of knowledge which he derives from the
common store, and which he ought, as its participant, to let others
share, if doing so does himself no harm.

“in order to compensate and reward him for the service.”

The real service, if it be “service,” is the communicating of his
knowledge.

“That he ought to be both compensated and rewarded for it,
will not be denied;”

But it does not follow, surely, even in Mr. Mill’s logic, that he should
be invested with monopoly powers, which “raise prices” and “hurt
trade,” and cause “general inconvenience.”

“and also, that if all were at once allowed to avail themselves
of his ingenuity, without having shared the labours or the
expenses which he had to incur in bringing his idea into a
practical shape—”

But which, very likely, were trifling, and if heavy, were incurred for
his own sake, and may have produced benefits to himself that
sufficiently compensated all.

“either such expenses and labours would be undergone by
nobody—”

Which is a wild assumption.

“except very opulent and very public-spirited persons.”

The former are numerous; the latter ought to be; and the service is
one the nation may well expect of them. Why should not there be
innumerable Lord Rosses, Sir Francis Crossleys, Sir David Baxters,
and Sir William Browns, promoting beneficent commerce by their
generosity; and why should not manufacturers systematically
combine as an association to procure through science and
experiment every possible improvement?

“Or the State must put a value on the service rendered by an
inventor, and make him a pecuniary reward.”

And why should we not prefer this alternative?

“This has been done in some instances, and may be done
without inconvenience in cases of very conspicuous public
benefit.”

Well: that is a great deal; but why not in cases that are not
conspicuous?

“But in general an exclusive privilege of temporary duration is
preferable—”

Now, mark the only reasons adduced:—

“because it leaves nothing to any one’s discretion—”

That is, I suppose, Mr. Mill, to avoid trusting anybody—the danger
from doing which is imaginary, or at least avoidable—would let the
nation remain subject to proved frightful inconvenience and loss.

“and the greater the usefulness, the greater the reward—”

Which, Mr. Mill rightly thinks, is what ought to be, but it is not and
cannot be what happens under Patents; for, on the contrary, rewards
depend mainly on the extent of use and the facility of levying
royalties.

“and because it is paid by the very persons to whom the
service is rendered, the consumers of the commodity.”

Here Mr. Mill appears to regard, and it is right he should,
manufacturers as mere intermediates. Well: can they shift the
burden which they, in the first instance exclusively bear, from their
own shoulders to those of the consumer? Perhaps they could have
done so before the inauguration of Free Trade; but since that time,
the thing is impossible, and so will it ever be until the day arrive
when either Patents shall apply to all countries, and in all countries
exactly the same royalties shall be charged for their use, or else they
are abolished.

“So decisive, indeed, are these considerations, that if the
system of Patents were abandoned for that of rewards by the
State, the best shape which these could assume would be
that of a small temporary tax imposed for the inventor’s
benefit—”

Would he in general get it? And, let me ask, how collected—how
distributed?

“on all persons making use of the invention.”

A thing impossible, however, even for conspicuous inventions; and
to which there is the further fatal objection that there must be none
but such recognised, which might be unfairness, as it certainly would
be partiality. If, as indicated, a tax on all users and consumers, will
not grants from the Exchequer be in the main fair enough as to
incidence?

“To this, however, or to any other system which would vest in
the State—”

Why the State? Why not let inventors decide?

“the power of deciding whether an inventor should derive any
pecuniary advantage from the public benefit which he confers,
the objections are evidently [!] stronger and more fundamental
than the strongest which can possibly be urged against
Patents. It is generally admitted that the present Patent-Laws
need much improvement.”

It is not admitted that they can be made satisfactory, do what we
will; and I contend that no extent of mere improvement can
overcome the objectionableness of the restraints and burdens
inseparable from the system.

“But in this case, as well as in the closely analogous one of
Copyright, it would be a gross immorality in the law to set
everybody free”—

Why, everybody is naturally free, and would continue free if the
law did not step in and cruelly take their freedom away, doing which
is the real immorality.

“to use a person’s work”—

A fallacy—to use, it may be, his thoughts, which, as soon as they
are communicated, are no longer his only—and not at all to use his
“work” in any proper sense.

“without his consent, and without giving him an equivalent.”

As if consent were needed to use one’s knowledge, and as if there
could or should be any equivalent.

“I have seen with real alarm several recent attempts, in
quarters carrying some authority, to impugn the principle of
Patents altogether; attempts which, if practically successful,
would enthrone free stealing under the prostituted name of
free trade, and make the men of brains, still more than at
present, the needy retainers and dependents of the men of
money-bags.”

As to “free stealing,” hear what the greatest political economist of
France thinks—

“C’est dans une mesure la même question que le free trade.” [It is
in a measure the same question as free trade.]

As to the “money-bags,” Mr. Mill plainly is not aware that the
dependence he deprecates is the invariable, almost the inevitable,
consequence of a Patent system.
I am extremely sorry to differ on a question of political economy
from Mr. Mill. But with all due respect I submit that he has not, when
writing the passage which has now been given in extenso, realised
what a Patent is in practice. It is the price at which the State buys a
specification. The purchase is a compulsory one, with this peculiarity,
that whereas the inventor may or may not offer to sell—for he is left
at perfect liberty, as in a free country he ought to be, whether to
patent and reveal (sell) or not—yet if he do offer, it is the State, the
maker of the law, which, through the Sovereign, voluntarily puts itself
under compulsion to accept the offer, and—with a defiant violation
of sound principle, which the frequency of the deed in my view makes
flagrant—pays not out of public revenues or any funds over which it
has legitimate control, but out of the means of private individuals,
reached and extracted either in the form of exceptional profits on
goods the monopolist makes, or by his levying of a tax called
royalties on any of his fellow-subjects whom he may of grace, if they
comply with his demands, associate with himself as sharers of the
monopoly.
Such opponents’ impulses are excellent, but their plan is
incompatible with actual pre-existent interests. They omit to take into
full account the conditions of the everyday world which the
statesman has to do with, and might not unprofitably call to mind a
story or parable of juvenile days wherein certain wise men were
represented as, after due counsel, placing a favourite bird within high
and close hedges in order to gratify their tastes and enjoy melodious
notes all the year round. The conditions of winged existence had not
been taken into account; theory and sentiment could not be reduced
to practice. Favouritism, constraint, and isolation, being contrary to
nature, failed. The nightingale loved, needed, sought, and found
freedom. To recall another book of youthful days. Think of Robinson
Crusoe, and the many new inventions his peculiar position required
and elicited. Let me suppose the neighbouring islanders saw for the
first time in his hands a cocoa-nut turned into a cup, in his hut
potatoes roasting in the fire, in his garden guano used as manure.
What would they have thought of Christianity and civilization, if he,
anticipating the pretensions of modern inventors, had alleged, on the
ground of first use, exclusive property in these manufactures,
processes, and applications, and had debarred the imitation for
fourteen years? The unsophisticated savages would have said, “We
understand and allow your claims to possess what you yourself
make, but we do not understand, and we dare not allow, your claim
to possess what we make ourselves. You are welcome to learn what
we shall learn, and to do whatever you see us do. We cannot sell for
money the odours that rise from the fruits that sustain our life; should
we forbid to pick up and plant their seeds that we throw away?
Should we grudge the runnings over from the brimming cup of
knowledge which heaven puts into the hand, and the froth at the top
which the wind blows away?” Heathens are pleased even to work at
what is good for all according to opportunity. The fact is, the right of
inventors is too shadowy to have any recognisable existence where
there is not a submissive society to vend to or trample on, and a
complaisant state to compel their submission.
If he were a member this night present with us, I would appeal to
Mr. Mill as a philosopher. Seeing that the world is so framed that
whereas acquisitions of material property or things cannot be
possessed in common without the share or enjoyment of each
person being lessened or lost, it is universally possible that any
number of persons, however many, can possess and use, without
any diminution of individual enjoyment, knowledge or ideas in
common, do not wisdom and humanity justly interpret this as an
indication that to interfere is to oppose the order of nature?
Let me appeal to him as a moralist. Seeing that to so interfere with
the communication and enjoyment of knowledge or ideas by limiting
the power and right to apply inventions to use is to withhold that
whereby one man, without loss to himself, may benefit his fellows,
do not ethics favour the philanthropic course which accords with the
course that Nature indicates?
I appeal to Mr. Mill as a political economist. Seeing that the order
of nature and the promptings of philanthropy are favourable to the
communication of inventions and their free use, is it the part of a
State to provide for the gratification of the selfish principle in man by
legislation framed to endorse, and facilitate, and almost to
necessitate it?
I appeal to Mr. Mill as a statesman, and ask, Is it consistent with
enlightened policy to place manufacturers in such a position, that
they are constantly tempted to conceal improvements they are using,
from fear to discover that they are infringing? Does he know so little
of mankind, that he expects them, the poorest as well as the richest,
to employ (and this would be requisite) suitable agents to search
whether any improvement they mean to adopt is already the subject
of a Patent that renders its adoption illegal, and also to institute
inquiries as to who, and where, in the wide world, is the holder of the
Patent or Patents, whom in that case he must first negotiate with and
sue for a licence? Does Mr. Mill think a manufacturer’s time is so free
from absorbing occupations that he can attend to the daily
transactions of the Patent-office, so as to inquire whether such and
such a mysterious application is an unintended, it may be, but in
result an effectual, ousting him from use of a process that he is
about to introduce or has already in operation? Yet these are the
superhuman efforts and gifts which compliance with, and subjection
to, any Patent system presupposes and requires.
Is it nothing in the eyes of this legislator, whose absence from this
House is so generally regretted, that by means of the Patent-Laws
there are thrown loose on men in trade thousands of individuals
whose interests run counter to those of society, men trusted with
letters of marque to prey, not on foreign commerce, but on British? Is
it a small matter, that, having surrendered the principle of
discriminating duties leviable by the State for national purposes, we
continue to expose those from whom this protection is withdrawn to
an ever-increasing burden of taxes, in favour of individuals, levied
without State control or any regard to equality? Does Mr. Mill
conceive it is short of recklessness to continue to stimulate invention
by rewards which often turn out ruinous to those whom they are
meant to favour, and which bear not the smallest proportion to the
cleverness, the beneficial results, the cost of elaborating, the merits
or the wants of the inventor, and scarcely to the originality and
legitimacy of the claim of whoever is the applicant? Is he aware that
the advantage reaped by inventors, sometimes very large, is
obtained at so frightful a cost that, as some persons believe, for
every pound which actually reaches him the country loses to the
extent of one hundred pounds? Surely we are asked to obtain our
stimulus by a folly (only his was voluntary, and not habitual) like that
of the fabulous sailor who, for the sake of a tumbler of rum,
swallowed the bucketful of salt water amid which the dangerous
stimulant had by accident fallen. I honour the candour of Mr. Mill, and
I hope yet to have his concurrence in my views. He cannot have
reflected on and realised actual facts. One illustration more, and this
of another difficulty which I commend to his attention and that of any
honourable gentlemen who have been carried away along with him, I
give by narrating an incident in my late canvass.
A deputation of the trades of Scotland did the candidates the
honour of submitting to us a very judicious list of questions. One of
these concerned the Patent-Law. They asked, would I support a
motion for reducing the cost of Patents? I answered I would,
because I think the cost too high for the working man; but I added
that I would rather see Patents swept away. One of the deputation
properly animadverted on the hardship this might inflict, and he
instanced the case of his brother, who had invented an improved
apparatus for use on board ship. I rejoined that I accepted the case
as sufficient to confirm the conviction that Patents are on the whole
not good, but bad, for working men or any men. My reasoning was
substantially this: In order to reap his reward, the inventor is required
or expected to visit every ship or shipowner at the port, and
endeavour to get the apparatus understood, believed in, and
adopted; and not at Leith only—at every Scotch port, every English
port, and every Irish port. But not to let British shipowners suffer by
the inequality of paying, while rivals use without paying, and at the
same time to promote his own interests, the inventor must take out
Patents in France, Belgium, Holland, and all maritime countries and
their colonies. After he obtains these many Patents he has to sell his
apparatus at all the ports of those countries. The first thing obvious
is, that to do a tithe of that work the inventor must relinquish his own
business, which is the solid beef in the mouth of the dog in the fable,
for the delusive shadow in the water. But never mind that in the
meantime: after the business is relinquished, there remains the
insuperable difficulty of conducting a business so much beyond the
power of man as that I have sketched. He might of course attempt to
overcome that by appointing agents to manufacture abroad or act
abroad for him; but where is the capital to hazard on so great an
enterprise? If he were as rich as a Rothschild, has he the gift of
tongues to enable him to correspond in all languages? And if he had,
how can all this work, requiring simultaneity, be done at once? The
end, of course, must be, at the very best—the Patents, if, indeed,
actually taken, are sold for a trifle, and the persons who secure
them, which they only do if valuable, in their turn sell, for a trifle too;
so that the lucky inventor gets but little out of the tens of thousands
or hundreds of thousands of pounds which the public are made to
bear the burden of. Ex uno disce omnes.
I am unwilling to leave this part of my theme without adverting to a
point which deserves some attention—I mean the tendency the
Patent system has to lower the tone of men of science. In a
quotation from Lord Granville it is seen to be more than insinuated
that the sacred claims of truth are in danger of being compromised
by the evidence men of science are asked and tempted to give in
courts of law. But the evil of Patents begins in the laboratory and the
closet; for there is felt the impulse to conceal anything new and likely
to be useful, in order to patent; so that a conflict is generated
between, on the one hand, the theory of the academic chair which
supposes in the very name “university” universalism, community of
knowledge, and on the other, law-created personal interests, whose
nature it is to stifle the man of science’s inherent desire to spread
knowledge and exchange thoughts in order to benefit mankind.
But Mr. Mill presents an alternative. I, for one, have no objection to
see it considered. I have long advocated State rewards; they cannot
be condemned on principle; they are sanctioned by another
philosopher. When I say that I had the honour long ago to receive
the following from M. Chevalier, I am sure of this House’s attention.

Extract of a Letter from M. Michel Chevalier to Mr. Macfie.

“The Patent system, as constituted in all countries where it
is established, is a monopoly that outrages liberty and
industry. It has consequences that are disastrous, seeing
there are cases where it may stop trade for exportation and
even for home consumption, because it places manufacturers
who work in a country where Patents are established at a
great disadvantage in competing with others who live in
States, such as Switzerland, where Patents are interdicted by
law. Practice, experience, which is the supreme authority in
the world, shows daily, in France particularly, that the system
is a scourge to industry. What might be substituted is a
system of recompenses, either national or European, as you
have proposed, to be awarded when practical use has
pronounced on the merit of each invention, and when the
originality shall admit of being established. All the friends of
industrial and social progress ought to unite their efforts to
liberate industry from the shackles that have been
bequeathed from the past. That of Patents is one of those
which there will be most urgency to get rid of.”

The Continental Association for Promoting the Progress of Social
Sciences favours such rewards. Allow me to quote from a Report of
M. Tilliere, Avocat of Brussels, which was adopted by that body:—

“It is proper to introduce, in respect to industrial inventions,
the principle of expropriation [or acquisition for behoof of the
public], with a view to general benefit, in order to reconcile the
interests of industry and the requirements of free trade (libre
échange) with the interests of the inventor.
“It is desirable, for the satisfaction of the same interests, to
establish between the different countries by means of
stipulations with reference to Patents in International Treaties,
uniformity of system, and, pursuant thereto, to provide a
depôt where, without the necessity to patent in every
particular country, specifications might be lodged that shall be
recognised and published in all.”

The House will observe that in connexion with the principle of
State rewards, or, what is nearly allied to it, of expropriation, the
Association commended another principle, that of international
arrangements as to inventions. On the occasion when the report I
quote from was adopted, another eminent French economist,
Professor Wolowski, spoke as follows:—

“The free competition which ought to exist between peoples
requires that Patents should be everywhere ruled by uniform
laws. Intellectual property must everywhere have limits within
which there shall be exchange, in order that its products may
everywhere circulate under the same conditions. International
legislation with regard to Patents is an object to be earnestly
pursued. It responds to the demands of free-trade, satisfies
the needs of liberty of manufacture, and provides a
compensation for a shortened term of Patent-right by
extension of area.”

But I come nearer home, and am happy to be able to quote
concurrence in the idea of national rewards on the part of one of our
great staple manufacturers, the sugar refiners. The refiners of
Scotland many years ago petitioned Parliament in the following
terms:—

“That, in the opinion of the petitioners, it is highly desirable
that your honourable House should devise some means
whereby discoverers of valuable inventions (to whom alone
Patents should be granted) might be rewarded by the State,
and trade be relieved from the restrictive operation and
expense of Patents altogether.”

Tending in favour of rewards rather than Patents is the following
evidence, given before the Royal Commission by Sir William
Armstrong:—

“How would you give these rewards in the absence of a
Patent-Law?—I am not prepared to say that. If the country
would expend in direct rewards a tithe of what is paid for
Patent licences and expenses, there would be ample
provision for the purpose. As a matter of opinion, I believe
that if you let the whole thing alone, the position which a man
attains, the introduction and the prestige, and the natural
advantages which result from a successful invention and from
the reputation which he gains as a clever and able man, will
almost always bring with them a sufficient reward.”

A successful inventor writes me:—

“I should be very glad to see a good round sum set apart by
Government for the purpose of being awarded to real
inventors by competent and impartial authority. Then the poor
inventor might have some chance.”

It is not out of place to inform the House that so far back as the
earliest years of the Patent system a precedent can be adduced. In
1625, Sir F. Crane received a grant of £2,000 a-year for introducing a
tapestry manufacture. There are several other precedents for similar
grants of public money.
Of course, to reward is not to purchase. We do not buy any man’s
invention or secret. But if he thinks proper, as a good subject, to
reveal that secret, we mean he shall have a substantial mark of
favour. Something like this was, no doubt, the original intention of
Patents; only the favour took the form of monopoly for introducing
