Cyberdelegation and the Administrative State
Mariano-Florentino Cuéllar
This chapter considers how developments in the use and design of expert com-
puter systems, machine learning mechanisms, and other forms of so-called
“artificial intelligence” are poised to affect debates about what government
agencies do and how they do it. These debates often pivot on questions of why
power is, or should be, delegated to agencies. Questions like these are asso-
ciated with the writings of scholars such as Max Weber and Herbert Simon,
and have been significantly enriched by Jerry L. Mashaw’s scholarship on
the internal norms and practices of government agencies. Weber cogently
observed that the modern nation-state deploys bureaucracy – the public orga-
nizations we now call administrative agencies – as its essential technology.1
We rely on agencies to keep the air clean and mitigate climate change, choose
new citizens, sort individuals by the risk each allegedly poses, protect public
health, and safeguard the integrity of financial markets. Indeed, when schol-
ars undertake to understand how societies are governed, they often do so by
investigating how laws, organizational routines, and administrators affect what
agencies do. Whether those administrators exercise appropriate judgment to
protect the integrity and accountability of agency decisions has been, in turn,
the subject of some classic administrative law cases.2
It is precisely the contested role of agencies and their leaders – as public
problem-solvers yet also targets of perennial concerns about opacity and exces-
sive discretion – that animates my specific interest in this chapter. Agencies
may be affected by ever more sophisticated computer programs.
∗ I appreciate the research assistance of Emily Zhang and Brendan Ballou, thoughtful feedback
from Ed Feigenbaum and Dan Ho, and helpful comments of participants at Yale Law School’s
symposium honoring Jerry L. Mashaw.
1 See 1 Max Weber, Economy and Society 956–965 (Guenther Roth and Claus Wittich eds.,
1978) (1922). See also infra Section I.
2 See, e.g., Heckler v. Campbell, 461 US 458 (1983); Morgan v. United States, 304 US 1 (1938).
3 See Jerry L. Mashaw, Prodelegation: Why Administrators Should Make Political Decisions, 1 J.
L. Econ. & Org. 81 (1985).
4 See generally Jerry L. Mashaw, Bureaucratic Justice: Managing Social Security
Disability Claims (1983); Jerry L. Mashaw & David L. Harfst, The Struggle for
Auto Safety (1990).
5 See infra Section III.
Such programs – sometimes adaptive ones that modify themselves over time –
may complicate public deliberation about administrative decisions, because
few observers may be entirely capable of understanding how a given decision was reached.
to cheat death or taxes.9 That is why Mashaw took on the question of why it is
acceptable to delegate, and answered it with precision and theoretical depth.
But there is also a second, slightly different way to read Mashaw’s work: not
as simply an effort to defend the legitimacy of politically aware and legally
empowered agency administrators, but as a commentary on why humans
and their political judgment – rather than, for example, automated systems
based on analytical tools such as the social security disability grid at issue in
Heckler v. Campbell – deserve a prominent place in the administrative state.
After all, there would be little need to consider whether administrators should
make political decisions if we could protect the environment, choose drugs
to approve, or select targets to freeze assets without involving people in the
implementation of general statutory commands.
So what if we had at our disposal an alternative to the conventional pic-
ture of the agency? Picture an official who labors in an ordinary, hierarchi-
cal agency. Administrators oversee the official and her colleagues. Whatever
their virtues, the people who constitute the agency will vary in capacity and
motivation to advance the agency’s mission. Contrast this image with a new
kind of bureaucrat, perhaps even one that some agency administrators would
be tempted to describe as “perfect.” What if a series of sleek black boxes –
capable of sustaining a cogent conversation with an expert, and networked
to an elaborate structure of machines, data, and coded instruction sets – could
deliver bureaucratic justice? It could begin innocently enough, with anodyne
decision support programs for administrative law judges adjudicating disabil-
ity claims, or EPA hearing examiners. But as the interfaces became more
intuitive and the analytical capacity more sophisticated, the black box might
steadily climb up the bureaucratic ladder, displacing (or at least quite robustly
supplementing and eventually constraining) supervisors, division heads, and
even agency administrators.
All of which could recast or even disrupt legally sanctioned bureaucratic
authority – the technology of our existing administrative state. The black boxes
would not necessarily or immediately displace the whole bureaucratic orga-
nization; institutions sometimes change slowly because their structure is
enshrined in law and etched into the public imagination. But equally plain
is the likelihood that the administrative state would work quite differently as
the black boxes proliferated and became more ensconced within it. At least
two themes from the arguments Mashaw advances in Prodelegation might be
especially relevant as we consider the fate of the current core technology of
the administrative state.
9 See Jerry L. Mashaw, Reinventing Government and Regulatory Reform: Studies in the Neglect
and Abuse of Administrative Law, 57 U. Pitt. L. Rev. 405 (1996).
Suppose we replaced the Under Secretary responsible for food safety with an expert system – one that
could replay in exquisite detail the sequence of decision rules it relied on to
render a judgment. Any reasonable effort to judge the quality of that judgment
depends on more than simply how a statute or regulatory rule defines a domain-
specific metric of success. Because such delegation could affect variables
that cut across domains – such as perceptions of government legitimacy,
cybersecurity risks, and the extent of presidential power – even more important
would be an uncontroversial (and, ideally, transsubstantive) metric of social
welfare, along with certain assumptions to minimize the difficult trade-offs
across domains. We could imagine, for example, the question of whether a
network of expert systems might better allocate resources to minimize the risk
of contamination in imported seafood. Even in such situations, as we’ll see,
subtle, systemic costs may arise that are worth mapping out and considering
carefully; more on this below.
Yet more profound challenges would arise in the myriad situations where
the unambiguous metric is not so easily available. Lawmakers, courts, and
the public would also find it harder to assess the consequences of automated
agency decision-making where the trade-offs are complex. That statutes them-
selves routinely incorporate such ambiguities at their very core is in part why
administrative law is left with so many problems to solve.14 And as I explain
further, we may ultimately find that the choices we make about automation
will be part of a broader conflict about the role of people in an economy that
sheds a large proportion of existing job categories more quickly than expected,
even as it continues to enhance automation technologies that humans find,
like a sweet-smelling artificial strawberry dessert, occasionally more satisfying
than the “natural” alternative.
Consider Herbert Simon’s concept of bounded rationality, which was itself grounded in the prescient realization that
administrative challenges could be understood in part as problems of gathering
expertise and engaging in decision making under uncertainty and subject to
a variety of constraints.16 And indeed, the decisions administrators make every
day about drug approval, transportation regulation, and security illustrate how
core problems in administrative behavior involve not only power and account-
ability. Those problems also raise questions about the proper role of human
judgment and the enormous analytical complexity of making decisions under
uncertainty.
That human cognition fails to reflect the properties of optimal decision-
making, then, is a proposition that inspires virtually no controversy among
scholars today. Social psychologists, economists, political scientists, and even
policymakers routinely acknowledge the limitations of how humans tend
to consider probabilities or otherwise weigh the consequences of particular
decisions.17 Decision-makers may exhibit racial or gender biases, may over- or
under-weigh the importance of a particular piece of information, may naively
assume their own wisdom or truculently insist on the naiveté of rivals. Faced
with an intricate decision that has implications for an individual as well as (in
the aggregate) systemic consequences, an administrative adjudicator consid-
ering whether to impose sanctions on an alleged polluter may fail to consider
complex strategic interactions that could result from a particular application
of government power.
The difficulty of making and grading these choices helps explain why soci-
eties could become interested in making greater use of computer systems
that hold the promise of improving administrative decisions. For different
reasons, greater reliance on artificial intelligence may appeal to government
officials making their case to the public, contractors with new technologies
to sell, lawyers criticizing arbitrary decisions, and civil society groups making
the case for more predictable and analytically sound administrative decisions.
The arguments could leverage the value of relying on data to make regulatory
decisions. They could expand on conceptual critiques of administrative dis-
cretion, such as that articulated by Kenneth Culp Davis, or even incorporate
further insights from cognitive psychology and bounded rationality about the
limits of human cognition.
Given those limits, it is hardly surprising that American agencies have been
using computers since early in the Cold War. As computers continue
to improve in both analytical capacity and user interface, some of the most
intriguing legal and policy questions will arise from the use of computer sys-
tems with certain characteristics: (1) They are able to use data to evaluate
possible outcomes and make recommendations or tentative decisions, either
by assessing the fit between a particular pattern of data and a decision-making
rubric, or by using inductive methods to assess patterns and recommend new
standards for decisions. (2) Their interfaces – whether they involve graphical,
written, or spoken language – allow for explanations that can persuade human
observers, or perhaps even domain-level experts. (3) They are capable of adapt-
ing and changing over time, whether with respect to analytical functions or
explanatory capacity. While not all the examples I discuss here involve systems
that fully embody these characteristics, the systems at issue illustrate how
computer systems will almost certainly reflect these characteristics to an
increasing degree.
Nor is computing architecture as static as it may sometimes appear: genetic
algorithms, neural networks, deep learning, and self-modifying algorithms all
continue to evolve. Even if it proved more difficult to automate the role of an agency
general counsel with a diverse portfolio of employment, litigation, regulatory,
and legislative work (given present trends) than, say, a mid-level analyst screen-
ing licensing applications, predictions about the relative insulation of some
cognitive tasks could be wrong.
But the promise of automation in the administrative state is not entirely
contingent on computer systems that are more articulate, and come perhaps
asymptotically closer to simulating human creativity. Perhaps some travelers
(though certainly not all of them) would prefer to be screened by even a fairly
conventional computer system rather than by an agent whose biases and limi-
tations could color her judgment. After all, human decision-makers get things
wrong. The use of statistical and other predictive techniques by computers
could improve not only individual decisions, but systemic bureaucratic per-
formance as well. As computing technology improves, new possibilities will
emerge to juxtapose two seemingly opposite qualities that could make automa-
tion more difficult to resist – the ability to analyze data and make predictions
in subtle fashion that does not easily track human intuition, coupled with the
capacity to make increasingly persuasive arguments (whether through data
visualization, or written or spoken expressions) to defend a decision.
So what, exactly, could more robust reliance on sophisticated information
technologies accomplish that could enhance social welfare? Consider at least
three different uses: (1) The simplest scenario is one where computer systems
duplicate what a human administrator could do, at a lower cost. The more
feasible this goal becomes, the more pressing the case for saving resources
and using them for socially desirable objectives. (2) The right expert systems
could screen out biases and certain heuristics considered, in the aggregate,
undesirable (e.g., availability heuristic, vividness heuristic). (3) Even more
intriguing, computer programs make it possible to analyze information for the
purpose of predicting outcomes or responding to potential strategic behavior in
a fashion that would be difficult for a human decision-maker to approximate
(e.g., anticipating how and when car companies will use software to fake emissions tests).
One example is the EPA’s approach to analyzing the toxicology of pesticides.
These possibilities make it hard to ignore – and normatively questionable to
ignore – the opportunities for automating some aspects of the administrative
state.
The aforementioned black box and its brethren would rightly command
attention if they could deliver all or most of the benefits of a human decision-
maker while substantially curbing some of the preceding flaws. Optimizing
the allocation of resources for immigration or environmental enforcement, for
example, or determining how best to dispose of nuclear waste – these are the
kinds of problems that arguably call for a degree of analytical sophistication
that may routinely elude even quite carefully chosen and insulated teams
of humans. Presumably such analytical prowess, whether found in machines
or humans working with machines, could conceivably translate into social
welfare gains. And as an added benefit, perhaps the network of black boxes (or
even its clunkier precursors) could conceivably yield greater transparency by
making it easier to follow what precise considerations were used in driving a
particular outcome.
If these feats seem too much to expect from our existing coterie of
computer programs, data, and computing power, it is worth recognizing that
developments in artificial intelligence are neither dependent on a single ana-
lytical architecture, nor static relative to technological and societal changes.
Conventional algorithms can be built around deductive logic, the use of
induction leveraging traditional statistical theory, or Bayesian approaches.
Cheap computing power, cheap storage, greater and easily available band-
width, greater surveillance data, and software innovations are improving
user interfaces and capacity to simulate human interaction. These trends
will almost certainly further hasten use of automation to support or govern
decision-making in private sector bureaucracies. Though the public sector
may pose its own practical challenges to automation (more on this later),
pressures will also exist for the use of these technologies because of some
of the often-described limitations on bureaucracies and human decision-
makers.
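To make one of these approaches concrete, the following minimal sketch (in Python) shows Bayesian updating applied to a hypothetical contamination-screening decision of the sort discussed earlier. The base rate, the test characteristics, and the scenario are all invented for illustration, not drawn from any actual agency system.

    # Bayesian updating for a hypothetical contamination-screening decision.
    prior = 0.02  # assumed base rate of contaminated shipments (illustrative)

    # Assumed inspection-test characteristics (also illustrative).
    p_flag_if_contaminated = 0.90  # sensitivity
    p_flag_if_clean = 0.05         # false-positive rate

    def update(belief, flagged):
        # Bayes' rule: weigh the evidence under each hypothesis.
        p_e_c = p_flag_if_contaminated if flagged else 1 - p_flag_if_contaminated
        p_e_clean = p_flag_if_clean if flagged else 1 - p_flag_if_clean
        return (p_e_c * belief) / (p_e_c * belief + p_e_clean * (1 - belief))

    belief = prior
    for flagged in [True, True, False]:  # three hypothetical inspection results
        belief = update(belief, flagged)
        print(round(belief, 4))

Each new piece of evidence revises the probability mechanically – precisely the kind of reasoning step a human adjudicator might perform inconsistently.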
Neural networks, for example, are computer programs (whether or not using
specialized hardware designs) that, as a very general approximation, simulate
the structure of the brain. Such networks are made up of nodes similar to
neurons.18 Units are connected to one another, and these connections have
different “weights.”19 The higher the weight, the more influence one unit has
on another. In a normal neural network, there are input units, output units,
and “hidden” units that connect the two. Input units are activated, which in
turn activate hidden nodes, which activate different output nodes, depending
on the activated nodes’ weight. (So, for instance, in a handwriting-recognition
program, pixels in a drawing will trigger input units, which will trigger hidden
units, which will weigh the probability of the drawing being a particular letter,
which in turn will trigger a final output letter that guesses the correct letter.)
Importantly, once the neural network is operating and the actual results can
be compared to the ideal “right” results, the weights between nodes can be
adjusted through a process called “backpropagation” to make the outputs of
the network more accurate.20 Because neural networks can appear to become
better over time at clustering objects or recommending outcomes in this
fashion, they are often best suited for pattern recognition.21
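The following sketch (in Python, using only the numpy library) makes these mechanics concrete: input units feed hidden units, hidden units feed an output unit, and backpropagation adjusts the weights toward the ideal “right” results. The toy task and every parameter are hypothetical, chosen for brevity rather than drawn from any real system.

    # Minimal feedforward network with backpropagation (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: two input units, one output unit (the XOR pattern).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # ideal "right" results

    W1 = rng.normal(size=(2, 4))  # weights: input units -> 4 hidden units
    W2 = rng.normal(size=(4, 1))  # weights: hidden units -> output unit

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        # Forward pass: activation flows input -> hidden -> output.
        hidden = sigmoid(X @ W1)
        output = sigmoid(hidden @ W2)

        # Backpropagation: push the output error back through the layers
        # and adjust each weight in proportion to its contribution.
        d_output = (y - output) * output * (1 - output)
        d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
        W2 += 0.5 * hidden.T @ d_output
        W1 += 0.5 * X.T @ d_hidden

    print(output.round(2))  # after training, typically close to the ideal results

After enough passes the weights usually settle so that the network classifies the pattern correctly, which is the sense in which such networks “become better over time.”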
In contrast, genetic algorithms are meant to approximate the evolutionary
process. Such algorithms consider a set of “solutions” called a “population.”22
Perhaps these solutions are optimal routes from home to work, or ideal designs
for the shape of an antenna. In a given population, the best solutions are
chosen and either recombined (in the travel example, perhaps by switching
over from Highway 101 to the 280 at some point) or mutate as part of a stochastic
process (perhaps by taking a local street for a while to avoid gridlock). These
recombined and mutated solutions are compared, and the best are chosen
and recombined or mutated again.23 The process repeats itself until some test
condition (perhaps a minimum driving time) is achieved.24 As the example
suggests, genetic algorithms may be best for solving shortest-distance “traveling
salesman”-type problems.25
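In the same spirit, here is a minimal sketch (again in Python, standard library only) of the loop just described: a population of candidate routes through a handful of hypothetical waypoints is repeatedly scored, selected, recombined, and mutated. Every coordinate and parameter is invented for illustration.

    # Minimal genetic algorithm for a toy routing problem (illustrative only).
    import random

    random.seed(0)

    # Hypothetical waypoint coordinates.
    points = [(0, 0), (3, 1), (6, 0), (7, 5), (4, 7), (1, 5)]

    def route_length(route):
        # Fitness: total round-trip distance of one candidate solution.
        return sum(
            ((points[a][0] - points[b][0]) ** 2 +
             (points[a][1] - points[b][1]) ** 2) ** 0.5
            for a, b in zip(route, route[1:] + route[:1])
        )

    def crossover(p1, p2):
        # Recombine: keep a slice of one parent, fill in from the other.
        i, j = sorted(random.sample(range(len(p1)), 2))
        child = p1[i:j]
        child += [stop for stop in p2 if stop not in child]
        return child

    def mutate(route, rate=0.2):
        # Mutate: occasionally swap two stops (a small stochastic change).
        if random.random() < rate:
            i, j = random.sample(range(len(route)), 2)
            route[i], route[j] = route[j], route[i]
        return route

    population = [random.sample(range(len(points)), len(points))
                  for _ in range(30)]

    for generation in range(200):  # test condition: a fixed generation budget
        population.sort(key=route_length)  # score; shortest routes first
        best = population[:15]             # choose the best half
        children = [mutate(crossover(random.choice(best), random.choice(best)))
                    for _ in range(15)]
        population = best + children

    print(population[0], round(route_length(population[0]), 2))

After two hundred generations the surviving route is very likely the shortest tour – found without anyone specifying the route directly, only the scoring rule.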
18 See, e.g., Raul Rojas Gonzalez, Neural Networks: A Systematic Introduction (2013);
see also Chris Woodford, Neural Networks, Explain That Stuff (Nov. 19, 2014), http://www
.explainthatstuff.com/introduction-to-neural-networks.html.
19 Woodford, supra note 18. 20 Id.
21 Daniel Shiffman, Neural Networks, The Nature of Code, http://natureofcode.com/book/
chapter-10-neural-networks.
22 See Masatoshi Sakawa, Genetic Algorithms and Fuzzy Multiobjective Optimiza-
tion (2002); Marek Obitko, Genetic Algorithms, Genetic Algorithms (1998), http://www
.obitko.com/tutorials/genetic-algorithms/ga-basic-description.php.
23 Obitko, supra note 22. 24 Id.
25 Stack Overflow, When to Use Genetic Algorithms vs. When to Use Neural Networks?
(Sept. 9, 2009), http://stackoverflow.com/questions/1402370/when-to-use-genetic-algorithms-
vs-when-to-use-neural-networks.
Today, most military drones are not autonomous at all. Humans pilot them remotely, and humans make the ultimate decision
on whether to launch an attack. This appears to be changing. Since at least
2011 the military has been testing drones that could both fly and identify targets
for attack automatically.28 And the US military has successfully tested an
experimental missile that, when launched, can autonomously choose which
of several targets to attack,29 a task that several years ago would have had
to be performed by a human.
An example involving federal regulation of pesticides highlights the subtle
ways in which computer-based analysis and legal standards could interact –
as well as the reasons why agencies may embrace new analytical techniques
heavily relying on automation. The Environmental Protection Agency admin-
isters the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), which
requires the registration of pesticides before marketing in interstate or foreign
commerce.30 The current version of the statute requires proof, before a pes-
ticide can be marketed in interstate or foreign commerce, that the substance
does not cause “unreasonable adverse effects” on humans or the environment.
That requirement was largely responsible for the testing strategy that eventu-
ally emerged in the EPA.31 Later, the Food Quality Protection Act of 199632
amended the Food, Drug and Cosmetic Act (FDCA) and FIFRA, leading
the EPA to establish a risk-based standard for pesticide residues on all foods,
the requirement that the EPA “consider all non-occupational sources of expo-
sure,” and the requirement that the EPA set tolerances that would ensure
safety for infants and children.33 In conclusion, “FIFRA, the FDCA, and the
28 Peter Finn, A Future for Drones: Automated Killing, Wash. Post (Sept. 19, 2011), http://www
.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/
09/15/gIQAVy9mgK story.html.
29 John Markoff, Fearing Bombs That Can Pick Whom to Kill, N.Y. Times (Nov.
11, 2014), http://www.nytimes.com/2014/11/12/science/weapons-directed-by-robots-not-humans-
raise-ethical-questions.html.
30 Id.; see also David Krewski et al., Toxicity Testing in the 21st Century: A Vision and a Strategy,
13 J. Toxicology & Envtl Health (Part B: Critical Reviews) 51, 53 (citing J.D. Connor
et al., Pesticide Regulation Handbook (1983)).
31 See Krewski et al., Toxicity Testing in the 21st Century, supra note 30, at 53.
32 Food Quality Protection Act (FQPA) of 1996, Pub. L. No. 104–170, 110 Stat. 1513. Congress
also enacted the Toxic Substances and Control Act in 1976 to control risks from chemicals
in commerce. It requires the government to review most new chemicals while they are being
developed, and it gives the government the power to regulate chemicals already in or entering
commerce if they create an “unreasonable risk” to health or to the environment. The Office
of Pollution Prevention and Toxics was created to apply computational methods in regulating
commercial chemicals. Brinda Mahadevan et al., Genetic Toxicology in the 21st Century:
Reflections and Future Directions, 52 Envtl. & Molecular Mutagenesis 339, 342 (2011).
33 Summary of the Food Quality Protection Act of 1996, US Environmental Protection
Agency, http://www.epa.gov/laws-regulations/summary-food-quality-protection-act (last vis-
ited Jan. 24, 2016).
34 Krewski et al., Toxicity Testing in the 21st Century, supra note 30, at 53. Further, while the EPA
also enforces the Clean Air Act, the Clean Water Act, and the Safe Drinking Water Act, it
does not conduct toxicity testing in those areas. Rather, the EPA relies mainly on information
available in the scientific literature or government laboratory reports. See id.; see also Richard
Denison, Ten Essential Elements in TSCA Reform, 39 Envtl. L. Rep. 10020 (2009).
35 National Research Council, Toxicity Testing in the 21st Century: A Vision and
A Strategy 5 (2007), http://www.nap.edu/catalog/11970/toxicity-testing-in-the-21st-century-a-
vision-and-a.
36 Id. at 3. 37 See Krewski et al., Toxicity Testing in the 21st Century, supra note 30, at 61.
38 Id. at 62. 39 National Research Council, supra note 35, at 5. 40 Id. at 21.
41 See generally id. The recommendations of the National Research Council’s report reflect
a global trend – Europe, Canada, and Japan are similarly facing legislative pressures to
use computational methods to prioritize large lists of chemicals for testing. Robert Kavlock
et al., Computational Toxicology – A State of the Science Mini Review, Toxicology Sci. 1,
9 (2007). See Mahadevan et al., supra note 32, at 343. In the European Union, countries are
beginning to de-emphasize animal testing and have put into place laws for regulating new and
existing substances. Id. at 342. Similarly, Canada has used computational methods to prioritize
their Domestic Substances List, as directed by the Canadian Environmental Protection Act.
Id. Japan is considering the use of computational toxicology under the Law Concerning the
Evaluation of Chemical Substances and Regulation for their Manufacture, whereas Australia
has implemented the National Industrial Chemicals Notification and Assessment Scheme. Id.
at 342–43.
42 Michael Firestone et al., The US Environmental Protection Agency Strategic Plan for Evaluating
the Toxicity of Chemicals, 13 J. Toxicology & Environmental Health (Part B: Critical
Reviews) 139, 147–48 (2010).
43 Id. 44 Id. 45 Id. 46 National Research Council, supra note 35, at 14.
47 Specifically, hierarchical Bayesian models or Monte Carlo methods may be employed. See
generally Robert Kavlock et al., Computational Toxicology – A State of the Science Mini Review,
103 Toxicological Sciences 14, 20 (2007).
48 National Research Council, supra note 35, at 14, 29.
The EPA’s Aggregated Computational Toxicology Resource (ACToR) is a relational data warehouse for chemical and toxicity data from various public
sources to support data mining and modeling.49 ACToR draws from many
sources, including those from the EPA, CDC, FDA, NIH, state agencies, cor-
responding government agencies in Canada, Europe, and Japan, universities,
the WHO, and even NGOs.50 The EPA is poised to start using virtual tissues.
The goal is to be able to computationally create multi-scale models of biological
systems. The agency is currently developing a “virtual liver” at the EPA’s
National Center for Computational Toxicology.51
The EPA’s reliance on computational toxicology underscores how agency
decisions may increasingly implicate not only human choices about research
methods, but also architectural choices in the development of algorithms and
neural networks to analyze data in new ways. Along with the considerable
benefits of this approach come emerging challenges – for example, about the
opacity or transparency of the simplifying assumptions driving the performance
of the virtual human liver, about the risks that subtle tampering arising from
cybersecurity breaches could change the analysis, and about the gap between
how administrators explain the resulting toxicology analysis to congressional
staffers and how the simulation actually functions.
As agencies embrace the tools of automation and machine learning to detect
compounds that should be subject to stricter regulation, military bureaucracies
rely on software to make decisions about lethal force. Britain, Israel, and
Norway, for example, already deploy drones and missiles that can carry out
attacks without human control. In the 2011 air campaign against Muammar
Gaddafi:
49 Id. at 8–9.
50 For more information on what is in ACToR, how the data is structured, and how to use
ACToR, see Richard Judson et al., ACToR – Aggregated Computational Toxicology Resource,
233 Toxicology & Applied Pharmacology 7, 8 (2008).
51 National Research Council, supra note 35, at 29.
52 John Markoff, Fearing Bombs That Can Pick Whom to Kill, N.Y. Times (Nov.
11, 2014), http://www.nytimes.com/2014/11/12/science/weapons-directed-by-robots-not-humans-
raise-ethical-questions.html.
53 See, e.g., Kenneth Anderson and Matthew C. Waxman, Law and Ethics for Autonomous
Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can (Columbia
Law Sch. Pub. Law & Legal Theory Working Paper Grp., Paper No. 13-351, 2013), http://papers
.ssrn.com/sol3/papers.cfm?abstract_id=2250126.
54 Bob Brewin, Goodbye Paper: VA Installs Automated Claims System in All Regional Offices,
NextGov (Jun. 17, 2013), http://www.nextgov.com/health/2013/06/goodbye-paper-va-installs-
automated-claims-system-all-regional-offices/65030/.
55 Daniel Huang, Automated System Often Unjustly Boosts Veterans’ Disability Benefits, Wall
St. J. (May 11, 2015), http://www.wsj.com/articles/automated-system-often-unjustly-boosts-
veterans-disability-benefits-1431387826.
56 Id. 57 Id.
58 Watch List Filtering, Fiserv, http://www.financialcrimerisk.fiserv.com/watchlistfiltering.aspx
(last visited Jan. 24, 2016).
59 A Risk Based Approach, Fiserv, http://www.financialcrimerisk.fiserv.com/customerduediligence
.aspx (last visited Jan. 24, 2016).
60 SAS Anti-Money Laundering, SAS, http://www.sas.com/en_us/industry/banking/anti-money-
laundering (last visited Jan. 24, 2016).
61 Oracle, Oracle Financial Services Anti Money Laundering On the Oracle
Exadata Database Machine (2013), http://www.oracle.com/us/industries/financial-services/
anti-money-exadata-ds-2062792.pdf.
To the extent that these sorts of risk determinations were done at all, they
would have had to be done manually.
Another domain involving a space of shared public and private activity –
the testing of pharmaceutical products – also serves as a harbinger of greater
automation. As part of its review of new drug applications, the FDA often
considers “Population Pharmacokinetics” models: essentially tests to see how
drugs will interact with different bodies, depending on age, weight, and other
factors.62 Traditionally, experts known as “pharmacometricians” would select
several hundred statistical models (not real people) on which to test these
drug interactions. Choosing which models to include was apparently time
consuming and labor intensive.63
Recently, however, the FDA approved a new drug application in which mod-
els were selected, not by a pharmacometrician, but by an algorithm. According
to the developer’s press release announcing the fact, such “automated model
selection provides pharmaceutical and biotech companies results in less than
half the time and at a lower cost compared to the traditional method.”64
Courts sometimes reference population pharmacokinetics reports in discussing
a drug’s safety.65 The reliability of such reports, and the underlying automation
that helped to produce them, may come before a court again in the future.
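What such automated model selection might look like in miniature: the sketch below (in Python, using the numpy and scipy libraries) fits a few candidate models to synthetic concentration-time data and ranks them by the Akaike information criterion. DARWIN’s actual method is not described here, so everything in the sketch – the candidate models, the data, and the scoring rule – is an assumption offered purely for illustration.

    # A toy version of automated model selection (illustrative only).
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)

    # Synthetic concentration-time data for a hypothetical drug.
    t = np.linspace(0.5, 24, 20)
    conc = 10 * np.exp(-0.3 * t) + rng.normal(0, 0.2, t.size)

    # Candidate structural models a pharmacometrician might otherwise
    # screen one by one, each with an assumed starting guess.
    candidates = [
        ("one_compartment", lambda t, a, k: a * np.exp(-k * t), [1.0, 0.1]),
        ("two_compartment",
         lambda t, a, k1, b, k2: a * np.exp(-k1 * t) + b * np.exp(-k2 * t),
         [1.0, 0.1, 1.0, 1.0]),
        ("linear", lambda t, a, b: a + b * t, [1.0, 0.0]),
    ]

    scores = {}
    for name, model, p0 in candidates:
        params, _ = curve_fit(model, t, conc, p0=p0, maxfev=10000)
        rss = float(np.sum((conc - model(t, *params)) ** 2))
        # Akaike information criterion: fit quality penalized by complexity.
        scores[name] = t.size * np.log(rss / t.size) + 2 * len(p0)

    best = min(scores, key=scores.get)
    print(best, {k: round(v, 1) for k, v in scores.items()})

Under these assumptions the simplest adequate model wins automatically – the kind of screening the press release describes compressing from weeks of expert labor into a scored search.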
62 Food & Drug Admin., Guidance for Industry: Population Pharmacokinetics (Feb.
1999), http://www.fda.gov/downloads/Drugs/.../Guidances/UCM072137.pdf.
63 Automated Model Selection for Population PK Analysis–‘DARWIN’ Software Program, Nuven-
tra, http://www.nuventra.com/services/darwin/ (last visited Jan. 24, 2016).
64 Daniel Roy, FDA Approves First NDA Using Nuventra’s Software for Automated Popula-
tion PK Model Selection–DARWIN, Nuventra (Nov. 2, 2014), http://www.nuventra.com/
fda-darwin-population-pk-automated-model-selection/.
65 La. Wholesale Drug Co. v. Sanofi-Aventis, No. 07-CV-7343 (HB), 2008 WL 4580016, at ∗4 n.5
(SDNY, Oct. 14, 2008).
In fact, biases that facilitate an ability to anticipate how the public will
respond and help protect public support for a regulatory program – these may
be considered inappropriate to codify into instruction sets but would be diffi-
cult to screen out if human decision-makers retain their crucial role. Accord-
ingly, at least in some circumstances, quirks of human decision-making often
treated as “biases” to be screened out by technocratically oriented scholars
in the tradition of Sunstein may merit an increasingly important place in
legal decision making as many routine decisions are guided by
algorithms.68
Second, although doctrinal developments may encourage increasing
reliance on automation for decision-making, the incorporation of computer
programs into the decision-making framework of the administrative state could
nonetheless have major path-dependent effects that are difficult to undo, that
carry with them cybersecurity risks, and that will not necessarily be weighed in
a calculus that may encourage reliance on computer programs. Agencies may
face a variety of pressures to use automated technologies. Courts have long
scrutinized agency regulatory or adjudicatory decisions to determine whether
they are arbitrary and capricious (or otherwise unjustified). The availability
of computer programs to analyze data, provide cogent simulations, and apply
a variety of analytical tools could help agencies overcome such scrutiny in
new ways. Changes in common law standards of care could also promote
changes in norms affecting the willingness of agencies to eschew computer
programs that could provide a new kind of justification for administrative
decisions.
Amidst these pressures, it may be tempting to ignore cybersecurity problems
because society faces daunting difficulties in quantifying, let alone eliminat-
ing, the risks. But it would be a serious mistake to consider the benefits of
automation without considering the associated security problems. Cyberse-
curity problems should loom especially large in discussions of automation
because of the many examples of governmental failures involving information
technology. These failures, moreover, reflect not only episodic problems or the
inherent complexity of information technology, but recurring difficulties the
public sector may encounter in contracting, reconciling competing objectives,
and integrating long-term strategy with short-term constraints.
Yet major path-dependent effects would be difficult to undo as we incorpo-
rate algorithms into legal decision-making. Path dependence arises because
infrastructure is costly to replace and habituates people to make decisions
in a particular way. Organized interests tend to defend the resulting status quo.
[t]he agency would wrest primary interpretive authority from the courts if
but only if a particular agency official – the official Congress named in the
relevant delegation – personally assumed responsibility for the decision prior
to issuance. The courts would retain primary interpretive authority (subject
only to Skidmore-style deference) if, alternatively, this named person passed
her decision-making authority to lower-level officials.75
The deference regime Kagan and Barron describe is hardly immune from
prescriptive as well as practical criticism. But it serves to illustrate the kinds of
dilemmas agencies and the public may face with respect to automation, par-
ticularly if the goal is to regulate decision-makers’ reliance on computer pro-
grams. The goal is to channel decision-making toward politically accountable officials.
71 See also US Postal Serv. v. Gregory, 534 US 1, 10 (2001) (“[A] presumption of regularity attaches
to the actions of Government agencies.”).
72 David J. Barron and Elena Kagan, Chevron’s Nondelegation Doctrine, 2001 Sup. Ct. Rev. 201,
201 (2001).
73 Id. at 203–04. 74 Id. at 204. 75 Id. at 235–36.
Conclusion
Software is poised to drive more than cars. The dilemmas associated with the
use of expert systems, neural networks, and other forms of “artificial intelli-
gence” implicate nothing less than how society governs itself. Even leaving
aside the pace of change in information technology, our evolving norms about
the role that technology plays in daily life suggest that the administrative
76 See, e.g., John Markoff, Google Car Exposes Divide on Computers as Drivers, N.Y.
Times (Feb. 10, 2016), http://www.nytimes.com/2016/02/11/technology/nhtsa-blurs-the-line-
between-human-and-computer-drivers.html. In describing deep learning algorithms, Markoff
observes that “the programs are able to perform humanlike tasks, such as . . . understanding
speech, [but] at this point, researchers admit they do not completely understand how the deep
learning networks make decisions.”
state – and, indeed, the nation-state itself – may be entering uncharted ter-
ritory. Although no map exists showing precisely how to navigate it, the
following ideas may garner interest as agencies, policymakers, and courts con-
sider how to increase society’s capacity to make informed choices about the
journey.
Return first to the problems posed by the Morgan line of cases. Instead of
relying on the “presumption of regularity” espoused in those cases, courts may
explore how to better police the extent of human decision-maker engagement
in those crucial government decisions that we still expect someone to be able
to explain, understand, and defend. As reliance on information technology
increases, courts and policymakers should consider taking more seriously the
lines of argument in decisions that require accountability to be lodged in
specific decision-makers. Perhaps it’s time to consider recalibrating the “pre-
sumption of regularity” to ensure that decision-makers recognize the risks of
relying on automated analytical techniques they do not entirely understand.
The presumption of regularity should have its limits. As a practical matter, this
approach raises difficult further questions about the scope of discovery in suits
to review administrative action, but perhaps those questions are worth facing
given the risks that decision-makers will rely on algorithms that they don’t fully
understand.
Second, on a related note, courts may conclude that arbitrary and capricious
review is most meaningful if it encompasses whether there is consistency
between substantive explanations offered in, say, legally critical justifications
for rulemaking and the analytical techniques actually used to make decisions.
It is one thing to justify a program to freeze assets associated with organizations
that meet a specific, statutorily grounded threshold of suspicion; it is quite
another to deploy algorithms that entirely redefine that threshold, dynamically,
in response to new information. Some attention to cybersecurity risks may also
fit within the context of arbitrary and capricious review.
Finally, some agencies may accelerate efforts to engage scholars, civil soci-
ety, and other stakeholders in increasing our understanding of how to harness
the analytical capacity of “artificially intelligent” computer systems without
eroding our sense of how decisions are made. As part of this process, agencies
should consider engaging in medium-to-long-term planning about how they
would address some of the scenarios outlined earlier. The FDA could further
investigate how trends in artificial intelligence could change its use of outside
experts. Labor Department officials may face unexpected challenges arising
from labor market changes driven by automation. Virtually all agencies will
benefit from explicitly experimenting with different models of decision-making
that aim to leverage artificial intelligence technologies while keeping humans
in the loop. These efforts will matter because, increasingly, agencies and entire
governments will face the challenge of how to instruct complex machines so
routinely guiding human decisions that it becomes exceedingly difficult to
tell where an algorithm’s nudge ends and an accountable individual’s choice
begins.
True: government agencies seem to face trouble even when updating con-
ventional information technology infrastructure, so the ability to integrate
artificial intelligence into administrative tasks may seem far-fetched. Ironi-
cally, such weakness could strengthen the case for using systems that adapt
and learn. Such systems may prove crucial to reducing the gap between a
machine’s capacity and that of a person familiar with an agency’s culture
and organizational routines. As a general matter, the more that computer sys-
tems engaged to perform administrative tasks become adaptive, and capable of
modifying themselves, the more they are likely to avoid the problems of efficacy
and cost that sometimes plague government information technology projects.
As software becomes more analytically sophisticated, however, and, in partic-
ular, more adaptive to the point of being able to rewrite much of its own code,
it will be more difficult to predict longer-term consequences ranging from
subtle changes in function to unexpected rapid growth of analytical capac-
ity. As machines become more capable of optimizing to achieve the goals
we articulate, higher stakes attach to how we articulate those goals and what
trade-offs we authorize. Crucial to our ability to navigate these dilemmas will
be a cadre of lawyers and policymakers who understand artificial intelligence,
its possibilities and limits, and particularly its capacity to adapt in unexpected
ways.
At core the administrative state is about reconciling social welfare calcu-
lations with procedural constraints and political pragmatism. When Mashaw
made his case for administrator discretion in the 1980s, the context for it appears
to have been an appreciation of that reconciliation – how it is imperfect even
on rare days when the administrative state is at its best, and how it pivots in
subtle and profound ways on human institutions, assumptions, and aspirations
for deliberation. Now we are on the cusp of being offered a gift – a black box
that promises to make the administrative process eminently more tractable,
technically precise, and less arbitrary by leaning more heavily on algorithms
and neural networks. That gift is alluring because human decisions are as
messy and imperfect as human societies themselves are. What we risk if we
blithely accept it is eliding precisely the sort of deliberation about the nature
of social welfare that justifies the administrative state in the first place.