
5

Cyberdelegation and the Administrative State


Mariano-Florentino Cuéllar∗

This chapter considers how developments in the use and design of expert com-
puter systems, machine learning mechanisms, and other forms of so-called
“artificial intelligence” are poised to affect debates about what government
agencies do and how they do it. These debates often pivot on questions of why
power is, or should be, delegated to agencies. Questions like these are asso-
ciated with the writings of scholars such as Max Weber and Herbert Simon,
and have been significantly enriched by Jerry L. Mashaw’s scholarship on
the internal norms and practices of government agencies. Weber cogently
observed that the modern nation-state deploys bureaucracy – the public orga-
nizations we now call administrative agencies – as its essential technology.1
We rely on agencies to keep the air clean and mitigate climate change, choose
new citizens, sort individuals by the risk each allegedly poses, protect public
health, and safeguard the integrity of financial markets. Indeed, when schol-
ars undertake to understand how societies are governed, they often do so by
investigating how laws, organizational routines, and administrators affect what
agencies do. Whether those administrators exercise appropriate judgment to
protect the integrity and accountability of agency decisions has been, in turn,
the subject of some classic administrative law cases.2
It is precisely the contested role of agencies and their leaders – as public
problem-solvers yet also targets of perennial concerns about opacity and exces-
sive discretion – that animates my specific interest in this chapter. Agencies
may be affected by ever more sophisticated computer programs – sometimes

∗ I appreciate the research assistance of Emily Zhang and Brendan Ballou, thoughtful feedback
from Ed Feigenbaum and Dan Ho, and helpful comments of participants at Yale Law School’s
symposium honoring Jerry L. Mashaw.
1 See 1 Max Weber, Economy and Society 956–965 (Guenther Roth and Claus Wittich eds.,
1978) (1922). See also infra Section I.
2 See, e.g., Heckler v. Campbell, 461 US 458 (1983); Morgan v. United States, 304 US 1 (1938).

described as examples of "artificial intelligence" – that officials could use to
make or support decisions, and that could therefore assume an increasingly
prominent role in the administrative state. In the coming years, computing
power and storage will almost certainly become even cheaper, surveillance
more pervasive, software architecture more flexible and capable of learning
and self-modification, and the limitations of human decision-makers
more salient. Reliance on computer programs to make administrative deci-
sions – whether designed as conventional expert systems, more elaborate
genetic or otherwise self-modifying algorithms, neural or “deep learning”
networks, or other machine learning mechanisms – will likely accelerate.
Although computer programs analyzing vast amounts of data may hold some
promise for making better use of information, enhancing transparency, and
reducing inconsistency in bureaucratic justice, such reliance may bring about
both drastic and more subtle consequences that merit careful scrutiny.
Professor Jerry L. Mashaw provided such scrutiny in the 1980s when he
analyzed the proper role of agency administrators in a democratic political
system.3 In a similar vein, I consider here some of the trade-offs associated
with the delegation of agency decisions to computer programs, whether they
constitute expert systems, neural networks, or other examples of “artificial
intelligence" architectures. The analysis begins from a premise familiar to
readers of Professor Mashaw’s work – that public law emerges from a mix
of routines, ideals, and constraints playing out within organizations.4 I offer
the following preliminary observations to further discussion of the relevant
opportunities and risks.5 First, neither conventional expert systems nor those
machine learning systems on the horizon will be in a position to resolve (with-
out human intervention) highly context-specific debates about how society
should assess the goal of a regulatory rule or an administrative adjudication.
And those goals must also inform whether we assign value to the character-
istics of human cognition that contrast with what a computer program could
accomplish. Second, although doctrinal developments may encourage greater
reliance on computer systems, we must also consider path-dependent conse-
quences that prove difficult to change, and cybersecurity risks. Third, it may
prove difficult to limit the influence of computer programs even if they are, in
principle, mere decision support tools rather than the actual means of making
a decision. Finally, heavy reliance on computer programs – particularly

3 See Jerry L. Mashaw, Prodelegation: Why Administrators Should Make Political Decisions, 1 J.
L. Econ. & Org. 81 (1985).
4 See generally Jerry L. Mashaw, Bureaucratic Justice: Managing Social Security
Disability Claims (1983); Jerry L. Mashaw & David L. Harfst, The Struggle for
Auto Safety (1990).
5 See infra Section III.


adaptive ones that modify themselves over time – may complicate public
deliberation about administrative decisions, because few observers may be
entirely capable of understanding how a given decision was reached.

i context: institutional change and the administrative state
Addressing the recurring question of who should control the organizations that
are at the core of modern government, Professor Mashaw made the case for
giving administrators robust initiative in making major policy – and, by impli-
cation, political – decisions.6 Running somewhat counter to the US Supreme
Court’s occasional penchant for declaring certain policy issues too important
for administrators,7 Mashaw instead made a case that is sometimes reminiscent
of the case for a robust executive in foreign affairs. Still, Mashaw’s argument
was both more subtle (suggesting that broad delegation could advance social
welfare and not just relying on text, structure or history) and more compre-
hensive (applying not only to traditional domains of executive authority but to
agencies across the board).
One could accept the core of Mashaw’s analysis and still acknowledge that
difficult line-drawing problems could arise in reconciling traditional executive
functions such as setting enforcement priorities with administrative adjudi-
cation involving individual claimants. Yet Mashaw’s arguments for political
decision-making by administrators were more than a gentle repudiation of
the nondelegation doctrine. Instead, Mashaw’s ideas could accommodate a
variety of nuances. What he articulated was a compelling political theory of
how the political exercise of bureaucratic power was complementary to some
of the presumed virtues of the bureaucratic administrative state as it further
adapted its pursuits from the narrow goal of amassing capacity to best its mili-
tary rivals. Adjudicating cases involving disability benefits or rationing asylum
opportunities, determining the size of a fine for polluting the water, approving
a new drug – all this happened in a political context that an administrator
needed to navigate, and could be held accountable for navigating.8
Serious discussions in the vein of Prodelegation took as a given that Weber
was right: conventionally understood bureaucratic authority was going to be
with us for the long haul. Expecting that society could harvest some of the
benefits of the nation-state without some bureaucratic authority was like trying

6 Mashaw, Prodelegation, supra note 3.


7 See King v. Burwell, 135 S. Ct. 2480 (2015); FDA v. Brown & Williamson Tobacco Corp., 529 US
120 (2000).
8 Mashaw, Prodelegation, supra note 3, at 95–99.


to cheat death or taxes.9 Which is why Mashaw took on the question of why it's
acceptable to delegate, and answered it with precision and theoretical depth.
But there is also a second, slightly different way to read Mashaw’s work: not
as simply an effort to defend the legitimacy of politically aware and legally
empowered agency administrators, but as a commentary on why humans
and their political judgment – rather than, for example, automated systems
based on analytical tools such as the social security disability grid at issue in
Heckler v. Campbell – deserve a prominent place in the administrative state.
After all, there would be little need to consider whether administrators should
make political decisions if we could protect the environment, choose drugs
to approve, or select targets to freeze assets without involving people in the
implementation of general statutory commands.
So what if we had at our disposal an alternative to the conventional pic-
ture of the agency? Picture an official who labors in an ordinary, hierarchi-
cal agency. Administrators oversee the official and her colleagues. Whatever
their virtues, the people who constitute the agency will vary in capacity and
motivation to advance the agency’s mission. Contrast this image with a new
kind of bureaucrat, perhaps even one that some agency administrators would
be tempted to describe as “perfect.” Imagine a series of sleek black boxes –
capable of sustaining a cogent conversation with an expert, and networked
to an elaborate structure of machines, data, and coded instruction sets –
delivering bureaucratic justice. It could begin innocently enough, with anodyne
decision support programs for administrative law judges adjudicating disabil-
ity claims, or EPA hearing examiners. But as the interfaces became more
intuitive and the analytical capacity more sophisticated, the black box might
steadily climb up the bureaucratic ladder, displacing (or at least quite robustly
supplementing and eventually constraining) supervisors, division heads, and
even agency administrators.
All of which could recast or even disrupt legally sanctioned bureaucratic
authority – the technology of our existing administrative state. The black boxes
would not necessarily or immediately displace the whole bureaucratic orga-
nization; institutions sometimes change slowly because their structure is
enshrined in law and etched into the public imagination. But equally plain
is the likelihood that the administrative state would work quite differently as
the black boxes proliferated and became more ensconced within it. At least
two themes from the arguments Mashaw advances in Prodelegation might be
especially relevant as we consider the fate of the current core technology of

9 See Jerry L. Mashaw, Reinventing Government and Regulatory Reform: Studies in the Neglect
and Abuse of Administrative Law, 57 U. Pitt. L. Rev. 405 (1996).

the administrative state – legally sanctioned bureaucratic authority. The first
is an idea about the world's shortcomings, wherein quirky and admittedly
imperfect political decision-makers are tolerated because bureaucratic power
depends on them.10 Taking this idea seriously, we might consider the (presum-
ably rationalist, welfare-enhancing) goals a bureaucratic agency is supposed
to serve, and how empowering political administrators might advance those
goals better than the alternatives despite administrators’ human imperfections.
But one might explain some of the justifications for empowering political
decision-makers by drawing from a second theme. Political administrators
could prove to be so important because of their capacity (though contingent
and not always realized) to advance deliberation and sound implementa-
tion in a manner that an anodyne bureaucracy adhering to rigid rules could
not. Where the first theme is concerned with how well models of administrative
authority achieve unambiguous goals (relative to the alternatives), the second
focuses more on how we decide our goals, and on what the process of imple-
mentation says about them.11
Both of these themes are implicated in any sound exploration of what
automation could mean for the administrative state. My premise is simply
that programs designed to supplement or even replace human decision-
makers are becoming more sophisticated, cheaper, more ubiquitous, and are
likely to incorporate a greater capacity for self-modification.12 More exten-
sive use of such programs will be commonplace as computing power and
memory become cheaper, data from (online and offline, consensual and
non-consensual) surveillance becomes more pervasive, and competitive eco-
nomic and military pressures drive adoption. Nor are social norms static: as
autonomous and elaborate decision-support programs become more com-
mon, norms may change about the propriety of relying on computers to make
decisions.13
At least conceptually, it may seem simple enough to determine the expected
value, in social welfare terms, of expanding society’s reliance on algorithms
and neural networks to make decisions. As an example, consider the choice
of whether to replace an ALJ working on disability determinations or even an

10 Cf. Daniel Carpenter, The Forging of Bureaucratic Autonomy: Reputations, Networks, and Policy Innovation in Executive Agencies, 1862–1928 (2001).
11 Mashaw, Prodelegation, supra note 3.
12 On self-modification, see Eliezer Yudkowsky, Machine Intel. Res. Inst., Intelligence
Explosion Microeconomics (Sept. 13, 2013).
13 But note that a backlash may also emerge, raising concern about such delegation and resulting
in pressure to regulate the use of computer programs in government decision making. See, e.g.,
Douglas McAdam, Sidney Tarrow, & Charles Tilly, The Dynamics of Contention
(2001).


Under Secretary responsible for food safety with an expert system – one that
could replay in exquisite detail the sequence of decision rules it relied on to
render a judgment. Any reasonable effort to judge the quality of that judgment
depends on more than simply how a statute or regulatory rule defines a domain-
specific metric of success. Because such delegation could affect variables
that cut across domains – such as perceptions of government legitimacy,
cybersecurity risks, and the extent of presidential power – even more important
would be an uncontroversial (and, ideally, transsubstantive) metric of social
welfare, along with certain assumptions to minimize the difficult trade-offs
across domains. We could imagine, for example, the question of whether a
network of expert systems might better allocate resources to minimize the risk
of contamination in imported seafood. Even in such situations, as we’ll see,
subtle, systemic costs may arise that are worth mapping out and considering
carefully; more on this below.
Yet more profound challenges would arise in the myriad situations where
the unambiguous metric is not so easily available. Lawmakers, courts, and
the public would also find it harder to assess the consequences of automated
agency decision-making where the trade-offs are complex. That statutes them-
selves routinely incorporate such ambiguities at their very core is in part why
administrative law is left with so many problems to solve.14 And as I explain
further, we may ultimately find that the choices we make about automation
will be part of a broader conflict about the role of people in an economy that
sheds a large proportion of existing job categories more quickly than expected,
even as it continues to enhance automation technologies that humans find,
like a sweet-smelling artificial strawberry dessert, occasionally more satisfying
than the “natural” alternative.

ii organization, cognition, and the promise of artificial intelligence
By the mid-twentieth century, Herbert Simon and the group of scholars he’d
attracted to Carnegie Mellon University had begun in earnest to pursue a
research agenda foreshadowing the connections between the study of admin-
istrative behavior, cognition, and artificial intelligence. At the core of that
agenda was a recognition that administrative states rely on people – imperfect
individuals and flawed organizations – yet are committed by law to pursuing
staggeringly ambitious goals.15 This led Simon to his influential conception of

14 See, e.g., Whitman v. Am. Trucking Ass’ns, 531 US 457 (2001).


15 See Herbert A. Simon, Administrative Behavior (4th ed. 1997).


bounded rationality, which was itself grounded in the prescient realization that
administrative challenges could be understood in part as problems of gathering
expertise and engaging in decision making under uncertainty and subject to
a variety of constraints.16 And indeed, the decisions administrators make every
day about drug approval, transportation regulation, and security illustrate how
core problems in administrative behavior involve not only power and account-
ability. Those problems also raise questions about the proper role of human
judgment and the enormous analytical complexity of making decisions under
uncertainty.
That human cognition fails to reflect the properties of optimal decision-
making, then, is a proposition that inspires virtually no controversy among
scholars today. Social psychologists, economists, political scientists, and even
policymakers routinely acknowledge the limitations of how humans tend
to consider probabilities or otherwise weigh the consequences of particular
decisions.17 Decision-makers may exhibit racial or gender biases, may over- or
under-weigh the importance of a particular piece of information, may naively
assume their own wisdom or truculently insist on the naiveté of rivals. Faced
with an intricate decision that has implications for an individual as well as (in
the aggregate) systemic consequences, an administrative adjudicator consid-
ering whether to impose sanctions on an alleged polluter may fail to consider
complex strategic interactions that could result from a particular application
of government power.
The difficulty of making and grading these choices helps explain why soci-
eties could become interested in making greater use of computer systems
that hold the promise of improving administrative decisions. For different
reasons, greater reliance on artificial intelligence may appeal to government
officials making their case to the public, contractors with new technologies
to sell, lawyers criticizing arbitrary decisions, and civil society groups making
the case for more predictable and analytically sound administrative decisions.
The arguments could leverage the value of relying on data to make regulatory
decisions. They could expand on conceptual critiques of administrative dis-
cretion, such as that articulated by Kenneth Culp Davis, or even incorporate
further insights from cognitive psychology and bounded rationality about the
limits of human cognition.
Given those limits, it is hardly surprising that American agencies have been
using computers since early in the Cold War. As computers continue
to improve in both analytical capacity and user interface, some of the most

16 See Herbert A. Simon, A Behavioral Model of Rational Choice, 69 Q. J. Econ. 99 (1955).


17 See Mark Kelman, The Heuristics Debate 18-21 (2011).


intriguing legal and policy questions will arise from the use of computer sys-
tems with certain characteristics: (1) They are able to use data to evaluate
possible outcomes and make recommendations or tentative decisions, either
by assessing the fit between a particular pattern of data and a decision-making
rubric, or by using inductive methods to assess patterns and recommend new
standards for decisions. (2) Their interfaces – whether they involve graphical,
written, or spoken language – allow for explanations that can persuade human
observers, or perhaps even domain-level experts. (3) They are capable of adapt-
ing and changing over time, whether with respect to analytical functions or
explanatory capacity. While not all the examples I discuss here involve systems
that fully embody these characteristics, the systems at issue are examples of
how computer systems will almost certainly reflect – to an increasing degree –
these characteristics.
Nor is computing architecture as static as it may sometimes appear: genetic
algorithms, neural networks, and deep learning or otherwise self-modifying
algorithms all continue to evolve. Even if it proved more difficult to automate the role of an agency
general counsel with a diverse portfolio of employment, litigation, regulatory,
and legislative work (given present trends) than, say, a mid-level analyst screen-
ing licensing applications, predictions about the relative insulation of some
cognitive tasks could be wrong.
But the promise of automation in the administrative state is not entirely
contingent on computer systems that are more articulate, and come perhaps
asymptotically closer to simulating human creativity. Perhaps some travelers
(though certainly not all of them) would prefer to be screened by even a fairly
conventional computer system rather than by an agent whose biases and limi-
tations could color her judgment. After all, human decision-makers get things
wrong. The use of statistical and other predictive techniques by computers
could improve not only individual decisions, but systemic bureaucratic per-
formance as well. As computing technology improves, new possibilities will
emerge to juxtapose two seemingly opposite qualities that could make automa-
tion more difficult to resist – the ability to analyze data and make predictions
in subtle fashion that does not easily track human intuition, coupled with the
capacity to make increasingly persuasive arguments (whether through data
visualization, or written or spoken expressions) to defend a decision.
So what, exactly, could more robust reliance on sophisticated information
technologies accomplish that could enhance social welfare? Consider at least
three different uses: (1) The simplest scenario is one where computer systems
duplicate what a human administrator could do, at a lower cost. The more
feasible this goal becomes, the more pressing the case for saving resources
and using them for socially desirable objectives. (2) The right expert systems


could screen out biases and certain heuristics considered, in the aggregate,
undesirable (e.g., availability heuristic, vividness heuristic). (3) Even more
intriguing, computer programs make it possible to analyze information for the
purpose of predicting outcomes or responding to potential strategic behavior in
a fashion that would be difficult for a human decision-maker to approximate
(e.g., anticipating how and when car companies will use software to fake emissions tests).
One example is the EPA’s approach to analyzing the toxicology of pesticides.
These possibilities make it hard to ignore – and normatively questionable to
ignore – the opportunities for automating some aspects of the administrative
state.
The aforementioned black box and its brethren would rightly command
attention if they could deliver all or most of the benefits of a human decision-
maker while substantially curbing some of the preceding flaws. Optimizing
the allocation of resources for immigration or environmental enforcement, for
example, or determining how best to dispose of nuclear waste – these are the
kinds of problems that arguably call for a degree of analytical sophistication
that may routinely elude even quite carefully chosen and insulated teams
of humans. Presumably such analytical prowess, whether found in machines
or humans working with machines, could conceivably translate into social
welfare gains. And as an added benefit, perhaps the network of black boxes (or
even its clunkier precursors) could conceivably yield greater transparency by
making it easier to follow what precise considerations were used in driving a
particular outcome.
If these feats may seem too much to expect from our existing coterie of
computer programs, data, and computing power, it is worth recognizing that
developments in artificial intelligence are neither dependent on a single ana-
lytical architecture, nor static relative to technological and societal changes.
Conventional algorithms can be built around deductive logic, the use of
induction leveraging traditional statistical theory, or Bayesian approaches.
Cheap computing power, cheap storage, greater and easily available band-
width, greater surveillance data, and software innovations are improving
user interfaces and capacity to simulate human interaction. These trends
will almost certainly further hasten use of automation to support or govern
decision-making in private sector bureaucracies. Though the public sector
may pose its own practical challenges to automation (more on this later),
pressures will also exist for the use of these technologies because of some
of the often-described limitations on bureaucracies and human decision-
makers.
Neural networks, for example, are computer programs (whether or not using
specialized hardware designs) that, as a very general approximation, simulate


the structure of the brain. Such networks are made up of nodes similar to
neurons.18 Units are connected to one another, and these connections have
different “weights.”19 The higher the weight, the more influence one unit has
with another. In a normal neural network, there are input units, output units,
and “hidden” units that connect the two. Input units are activated, which in
turn activate hidden nodes, which activate different output nodes, depending
on the activated nodes’ weight. (So, for instance, in a handwriting-recognition
program, pixels in a drawing will trigger input units, which will trigger hidden
units, which will weigh the probability of the drawing being a particular letter,
which in turn will activate the output unit corresponding to the network's best guess at the letter.)
Importantly, once the neural network is operating and the actual results can
be compared to the ideal “right” results, the weights between nodes can be
adjusted through a process called “backpropagation” to make the outputs of
the network more accurate.20 Because neural networks can appear to become
better over time at clustering objects or recommending outcomes in this
fashion, they are often best suited for pattern recognition.21
In contrast, genetic algorithms are meant to approximate the evolutionary
process. Such networks consider a set of “solutions” called a “population.”22
Perhaps these solutions are optimal routes from home to work, or ideal designs
for the shape of an antenna. In a given population, the best solutions are
chosen and either recombined (in the travel example, perhaps by switching
over from Highway 101 to the 280 at some point) or mutate as part of a stochastic
process (perhaps by taking a local street for a while to avoid gridlock). These
recombined and mutated solutions are compared, and the best are chosen
and recombined or mutated again.23 The process repeats itself until some test
condition (perhaps a minimum driving time) is achieved.24 As the example
suggests, genetic algorithms may be best for solving shortest-distance “traveling
salesman”-type problems.25
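A comparably stripped-down sketch of the evolutionary loop follows; the stops, population size, and mutation scheme are invented, and ordered recombination is omitted for brevity, so this is a toy version of the process rather than a production routing algorithm.

import math
import random

random.seed(1)

stops = [(0, 0), (2, 5), (5, 2), (6, 6), (8, 1), (1, 8)]  # hypothetical coordinates

def tour_length(order):
    # Total distance of visiting the stops in this order and returning to the start.
    return sum(math.dist(stops[order[i]], stops[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def mutate(order):
    # Stochastic change: swap two randomly chosen positions in the route.
    a, b = random.sample(range(len(order)), 2)
    child = list(order)
    child[a], child[b] = child[b], child[a]
    return child

# Initial "population" of candidate solutions (random orderings of the stops).
population = [random.sample(range(len(stops)), len(stops)) for _ in range(30)]

for generation in range(200):            # repeat until a test condition; here, a fixed budget
    population.sort(key=tour_length)
    survivors = population[:10]          # choose the best solutions
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best_route = min(population, key=tour_length)
print(best_route, round(tour_length(best_route), 2))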

18 See, e.g., Raul Rojas Gonzalez, Neural Networks: A Systematic Introduction (2013);
see also Chris Woodford, Neural Networks, Explain That Stuff (Nov. 19, 2014), http://www
.explainthatstuff.com/introduction-to-neural-networks.html.
19 Woodford, supra note 18. 20 Id.
21 Daniel Shiffman, Neural Networks, The Nature of Code, http://natureofcode.com/book/
chapter-10-neural-networks.
22 See Masatoshi Sakawa, Genetic Algorithms and Fuzzy Multiobjective Optimiza-
tion (2002); Marek Obitko, Genetic Algorithms, Genetic Algorithms (1998), http://www
.obitko.com/tutorials/genetic-algorithms/ga-basic-description.php.
23 Obitko, supra note 22. 24 Id.
25 Stack Overflow, When to Use Genetic Algorithms vs. When to Use Neural Networks?
(Sept. 9, 2009), http://stackoverflow.com/questions/1402370/when-to-use-genetic-algorithms-
vs-when-to-use-neural-networks.

These approaches are somewhat distinct from more traditional computer
algorithms, in which a computer's instructions must be recorded in advance
and the data must be as precise as possible.26 With both neural networks and
genetic algorithms, the instructions themselves can change over time. With
traditional computing, this adaptation occurs in a somewhat different fashion –
through algorithms empowered to make modifications in certain aspects of
their routines.
One can easily imagine the role of conventional algorithms in social secu-
rity disability determinations, drug approval, and decisions to freeze assets.
And it would be a mistake to assume that expert systems or similar mecha-
nisms will be limited, long term, to conventional deductive or rule-reinforced
inductive reasoning. Despite some of the limitations on their early promise,
neural networks offer a different architecture for analyzing problems com-
pared to conventional algorithms (and even compared to the use of Bayesian
approaches). By simulating some key architectural features of the brain itself,
neural networks reinforce certain combinations of nodes and signals associated
with particular responses, and are ultimately able to mimic certain aspects of
tacit knowledge. One limitation of neural networks is the inability to glean
much of a (tractable or logical) explanation for why a decision was taken,
but the use of this approach nonetheless highlights the potential for stark
and discontinuous improvements in our technological capacity for automated
decision making.

iii cyberdelegation and governmental decisions: examples of a changing context
Whether one considers reliance on expert systems, neural networks, or other
so-called “artificial intelligence” techniques, few corners of the administrative
state will be immune from the effects of these technologies. Agencies may
find that expert systems fill a niche in administrative adjudication. Over time,
neural networks and genetic algorithms will almost certainly inform judgments
about the proper scope of a rule, or the merits of a particular enforcement
strategy. And changes in analytical capacity will exist alongside improvements
in the capacity of computer programs to interact using natural language – with
consequences to follow in the labor market.27
Agencies are indeed beginning to rely more on computer programs to make
decisions. Drones capable of using kinetic force to make lethal strikes, at least as
they have been discussed in the popular media, are not presently autonomous

26 Kevin Gurney, An Introduction to Neural Networks 124 (1997).


27 David H. Autor & David Dorn, The Growth of Low-Skill Service Jobs and the Polarization of
the U.S. Labor Market, 103 Am. Econ. Rev. 1553, 1553 (2013).


at all. Humans pilot them remotely, and humans make the ultimate decision
on whether to launch an attack. This appears to be changing. Since at least
2011 the military has been testing drones that could both fly and identify targets
for attack automatically.28 And the US military has successfully tested an
experimental missile that, when launched, can autonomously choose which
of several targets to attack,29 a task that several years ago would have had to
be performed by a human.
An example involving federal regulation of pesticides highlights the subtle
ways in which computer-based analysis and legal standards could interact –
as well as the reasons why agencies may embrace new analytical techniques
heavily relying on automation. The Environmental Protection Agency admin-
isters the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), which
requires the registration of pesticides before marketing in interstate or foreign
commerce.30 The current version of the statute requires proof, before a pes-
ticide can be marketed in interstate or foreign commerce, that the substance
does not cause “unreasonable adverse effects” on humans or the environment.
That amendment was largely responsible for the testing strategy that eventu-
ally emerged in the EPA.31 Later, the Food Quality Protection Act of 199632
amended the Food, Drug and Cosmetic Act (FDCA) and FIFRA, leading
the EPA to establish a risk-based standard for pesticide residues on all foods,
the requirement that the EPA “consider all non-occupational sources of expo-
sure,” and the requirement that the EPA set tolerances that would ensure
safety for infants and children.33 In sum, "FIFRA, the FDCA, and the

28 Peter Finn, A Future for Drones: Automated Killing, Wash. Post (Sept. 19, 2011), http://www
.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/
09/15/gIQAVy9mgK_story.html.
29 John Markoff, Fearing Bombs That Can Pick Whom to Kill, N.Y. Times (Nov.
11, 2014), http://www.nytimes.com/2014/11/12/science/weapons-directed-by-robots-not-humans-
raise-ethical-questions.html.
30 Id.; see also David Krewski et al., Toxicity Testing in the 21st Century: A Vision and a Strategy,
13 J. Toxicology & Envtl Health (Part B: Critical Reviews) 51, 53 (citing J.D. Connor
et al., Pesticide Regulation Handbook (1983)).
31 See Krewski et al., Toxicity Testing in the 21st Century, supra note 30, at 53.
32 Food Quality Protection Act (FQPA) of 1996, Pub. L. No. 104–170, 110 Stat. 1513. Congress
also enacted the Toxic Substances Control Act in 1976 to control risks from chemicals
in commerce. It requires the government to review most new chemicals while they are being
developed, and it gives the government the power to regulate chemicals already in or entering
commerce if they create an “unreasonable risk” to health or to the environment. The Office
of Pollution Prevention and Toxics was created to apply computational methods in regulating
commercial chemicals. Brinda Mahadevan et al., Genetic Toxicology in the 21st Century:
Reflections and Future Directions, 52 Envtl. & Molecular Mutagenesis 339, 342 (2011).
33 Summary of the Food Quality Protection Act of 1996, US Environmental Protection
Agency, http://www.epa.gov/laws-regulations/summary-food-quality-protection-act (last vis-
ited Jan. 24, 2016).

amendments to them are responsible for the current toxicity-testing strategy
for pesticides, which typically requires extensive testing before a pesticide can
be marketed.”34
Toxicity testing as currently conducted depends heavily on assessing ani-
mals’ reactions to chemicals – a technique that can be easily criticized as
costly, slow, and inhumane. At the most basic level, it limits the number of
chemicals the EPA can test, when it faces strong pressures to test more than
80,000 chemicals.35 Further, it limits the number of toxicity pathways one
can test, the levels of biological organization one can examine, the range of
exposure conditions one can consider, and the life stages, genders and species
one can cover.36
Given these limitations, animal testing proves to be at best a cumbersome
and strikingly imperfect way to study toxicity for humans. In order to gain
statistical power, animal testing uses high doses of toxic chemicals. But these
quantities are much greater than expected human exposure.37 And, ultimately,
extrapolation of animal testing results to humans is at best imperfect and at worst
uninformative.38 Moreover, current methods of predicting where chemicals
will end up, and how they will be transported, often rely on unrealistic simpli-
fying assumptions.39 They do not adequately take into account the complex
ways in which organisms may be exposed to chemicals in the environment –
through water, food, air, skin, and so on. Calculations of environmental exposure to toxic chemicals are
overly simplistic – most only provide information on possible exposure levels
over a population, but the effect on each individual is not known.40
Given the inadequacy of current methods of toxicity research, the National
Academy of Sciences published a report in 2007 calling for a transforma-
tive shift in toxicity and risk assessment.41 In the report, the Academy urged

34 Krewski et al., Toxicity Testing in the 21st Century, supra note 30, at 53. Further, while the EPA
also enforces the Clean Air Act, the Clean Water Act, and the Safe Drinking Water Act, it
does not conduct toxicity testing in those areas. Rather, the EPA relies mainly on information
available in the scientific literature or government laboratory reports. See id.; see also Richard
Denison, Ten Essential Elements in TSCA Reform, 39 Envtl. L. Rep. 10020 (2009).
35 National Research Council, Toxicity Testing in the 21st Century: A Vision and
A Strategy 5 (2007), http://www.nap.edu/catalog/11970/toxicity-testing-in-the-21st-century-a-
vision-and-a.
36 Id. at 3. 37 See Krewski et al., Toxicity Testing in the 21st Century, supra note 30, at 61.
38 Id. at 62. 39 National Research Council, supra note 35, at 5. 40 Id. at 21.
41 See generally id. The recommendations of the National Research Council’s report reflect
a global trend – Europe, Canada, and Japan are similarly facing legislative pressures to
use computational methods to prioritize large lists of chemicals for testing. Robert Kavlock
et al., Computational Toxicology – A State of the Science Mini Review, Toxicology Sci. 1,
9 (2007). See Mahadevan et al., supra note 32, at 343. In the European Union, countries are
beginning to de-emphasize animal testing and have put into place laws for regulating new and

increased use of computational toxicology. In response, the EPA is introducing
many different forms of computational methods in regulating pesticides.
The most ambitious effort aims to radically change the way in which tox-
icity testing occurs. Instead of using animal testing, also known as in vivo
testing, it introduces a new method of testing called “high-throughput assays.”
These assays are efficiently designed experiments that can be automated and
rapidly performed to measure the effect of substances on cells.42 In essence,
these assays automatically expose living cells (as opposed to whole animals)
to isolated chemicals. Then the cells are screened with robotics technology to
determine the effect that the chemicals have had. These assays can therefore
evaluate hundreds to many thousands of chemicals over a wide concentra-
tion range to identify chemical actions on multiple genes, pathways, and cell
functions.43 This approach has a major cost advantage – toxicity pathways can be compre-
hensively assessed at a relatively low cost per chemical.44 Current methods
of animal testing cost about $20,000–25,000 per chemical.45 And computa-
tional methods help agency officials analyze, across a variety of dimensions,
the enormous amount of data produced by the assays that agencies are using.46
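One small, hypothetical example of the kind of automated analysis this volume of data invites – not the EPA's actual pipeline, and with invented numbers – is fitting a concentration-response curve to a single assay's readings to estimate the concentration at which a chemical produces half of its maximal effect.

import numpy as np
from scipy.optimize import curve_fit

concentrations = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # micromolar
responses = np.array([0.02, 0.05, 0.10, 0.22, 0.48, 0.75, 0.90, 0.97])   # fraction of maximal response

def hill(c, ac50, slope):
    # Fraction of maximal response at concentration c (a standard Hill curve).
    return c**slope / (ac50**slope + c**slope)

(ac50, slope), _ = curve_fit(hill, concentrations, responses, p0=[1.0, 1.0])
print(f"estimated AC50 = {ac50:.2f} micromolar, Hill slope = {slope:.2f}")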
Computation is also helping the EPA better calculate and predict envi-
ronmental exposure to chemicals. Modern computational methods can build
complex models that take into account many variables that determine the level
of exposure to toxic chemicals, such as the difference in exposure to animals
versus to humans, variability in exposure to humans, and the overall uncer-
tainty of these predictions.47 Such models can also incorporate risk assessment
methods to help manage risk.48 These are especially useful for ecological risk
assessment, since there is relatively little data on the effect of chemicals on a
wide range of non-human organisms.
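The footnoted literature points to hierarchical Bayesian and Monte Carlo methods; a stripped-down Monte Carlo sketch of the underlying idea – with invented distributions and parameters, and no claim to represent any agency model – simulates variability across a population rather than reporting a single point estimate.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated individuals

water_intake_l = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=n)         # liters per day
concentration_mg_l = rng.lognormal(mean=np.log(0.002), sigma=0.6, size=n)   # mg of chemical per liter
body_weight_kg = rng.normal(loc=70, scale=12, size=n).clip(min=30)          # kilograms

dose_mg_kg_day = water_intake_l * concentration_mg_l / body_weight_kg

print(f"median dose: {np.median(dose_mg_kg_day):.2e} mg/kg-day")
print(f"95th percentile: {np.percentile(dose_mg_kg_day, 95):.2e} mg/kg-day")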
To support these efforts, the EPA is involved in massive data collection.
It created the Aggregated Computational Toxicology Resource (ACToR), a

existing substances. Id. at 342. Similarly, Canada has used computational methods to prioritize
their Domestic Substances List, as directed by the Canadian Environmental Protection Act.
Id. Japan is considering the use of computational toxicology under the Law Concerning the
Evaluation of Chemical Substances and Regulation for their Manufacture, whereas Australia
has implemented the National Industrial Chemicals Notification and Assessment Scheme. Id.
at 342–43.
42 Michael Firestone et al., The US Environmental Protection Agency Strategic Plan for Evaluating
the Toxicity of Chemicals, 13 J. Toxicology & Environmental Health (Part B: Critical
Reviews) 139, 147–48 (2010).
43 Id. 44 Id. 45 Id. 46 National Research Council, supra note 35, at 14.
47 Specifically, Hierarchical Bayesian models or Monte-Carlo methods may be employed. See
generally Robert Kavlock et al., Computational Toxicology – A State of the Science Mini Review,
103 Toxicological Sciences 14, 20 (2007).
48 National Research Council, supra note 35, at 14, 29.


relational data warehouse for chemical and toxicity data from various public
sources to support data mining and modeling.49 ACToR draws from many
sources, including those from the EPA, CDC, FDA, NIH, state agencies, cor-
responding government agencies in Canada, Europe, and Japan, universities,
the WHO, and even NGOs.50 The EPA is also poised to start using virtual tissues.
The goal is to be able to create multi-scale computational models of biological
systems. The agency is currently developing a "virtual liver" at the EPA's
National Center for Computational Toxicology.51
The EPA’s reliance on computational toxicology underscores how agency
decisions may increasingly implicate not only human choices about research
methods, but also architectural choices in the development of algorithms and
neural networks to analyze data in new ways. Along with the considerable
benefits of this approach come emerging challenges – for example, about the
opacity or transparency of the simplifying assumptions driving the performance
of the virtual human liver, about the risks that subtle tampering arising from
cybersecurity breaches could change the analysis, and about the gap between
how administrators explain the resulting toxicology analysis to congressional
staffers and how the simulation actually functions.
As agencies embrace the tools of automation and machine learning to detect
compounds that should be subject to stricter regulation, military bureaucracies
rely on software to make decisions about lethal force. Britain, Israel, and
Norway, for example, already deploy drones and missiles that can carry out
attacks without human control. In the 2011 air campaign against Muammar
Gaddafi:

British warplanes fired two dozen Brimstone missiles at a group of Libyan
tanks that were shelling civilians. Eight or more of the tanks were destroyed
simultaneously, according to a military spokesman, saving the lives of many
civilians. It would have been difficult for human operators to coordinate the
swarm of missiles with similar precision.52

In other words, autonomous weapons systems capable of identifying targets
without human direction are not just in development; they are in active use.53

49 Id. at 8–9.
50 For more information on what is in ACToR, how the data is structured, and how to use
ACToR, see Richard Judson et al., ACToR – Aggregated Computational Toxicology Resource,
233 Toxicology & Applied Pharmacology 7, 8 (2008).
51 National Research Council, supra note 35, at 29.
52 John Markoff, Fearing Bombs That Can Pick Whom to Kill, N.Y. Times (Nov.
11, 2014), http://www.nytimes.com/2014/11/12/science/weapons-directed-by-robots-not-humans-
raise-ethical-questions.html.
53 See e.g., Kenneth Anderson and Matthew C. Waxman, Law and Ethics for Autonomous
Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can, (Columbia

Changes in disability claims, too, may emerge as agencies seek to resolve
logistical problems while compensating for inconsistencies of human judg-
ment. In 2013, in an effort to reduce its reliance on paper records, to increase
consistency across cases, and to automate some of its workflow, the Depart-
ment of Veterans Affairs launched a computerized case management system
for incoming disability claims.54 According to the Wall Street Journal, the
software “ . . . automates [the determination of] veterans’ disability levels for
compensation.”55 To do so, the system “calculates the level of disability –
from zero to 100% – solely on the vet’s symptoms from the [self-reporting]
questionnaire.”56 In essence, the software took over this responsibility for
determining levels of disability from Department “raters” – human beings
charged with determining a claimant’s entitlements. Unsurprisingly, many
raters within the agency were upset that the case management system took
over this part of their work, and claimed that the software considered too few
inputs, including their own, in determining the level of disability.57
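Without access to the VA system itself, one can only gesture at what such a questionnaire-driven calculation might look like. The sketch below is entirely hypothetical – the symptom table, percentages, and combination rule are invented and are not the agency's rating schedule – but it conveys how little room a routine of this kind leaves for a rater's judgment.

SYMPTOM_TABLE = {
    # (condition, reported severity) -> assigned percentage; values are invented
    ("knee", "mild"): 10, ("knee", "moderate"): 20, ("knee", "severe"): 40,
    ("hearing", "mild"): 0, ("hearing", "moderate"): 10, ("hearing", "severe"): 30,
}

def combined_rating(percentages):
    # Combine ratings so each additional condition applies to remaining capacity rather than adding.
    remaining = 100.0
    for p in sorted(percentages, reverse=True):
        remaining -= remaining * p / 100.0
    return round(100.0 - remaining)

answers = [("knee", "severe"), ("hearing", "moderate")]
ratings = [SYMPTOM_TABLE[a] for a in answers]
print(ratings, "->", combined_rating(ratings), "% combined")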
And in another domain of responsibility shared by the public and private
sectors, computer programs are reshaping efforts to detect fraud and money
laundering. New software is coming to replace many of the due diligence
and “Know Your Customer” tasks once performed by human auditors. Soft-
ware structures how banks check new customers’ names against government
watchlists,58 and how they automatically create anti-money laundering risk
profiles for individual clients.59 Some anti-money laundering software prod-
ucts can analyze bulk bank data to identify transactional relationships that
suggest organized crime networks.60 Others claim to be able to detect sus-
pected money laundering or terrorist financing activity automatically, over
hundreds of millions of records, in hours.61 Even a few years ago, to the

Law Sch. Pub. Law & Legal Theory Working Paper Grp., Paper No. 13–351, 2013), http://papers
.ssrn.com/sol3/papers.cfm?abstract_id=2250126.
54 Bob Brewin, Goodbye Paper: VA Installs Automated Claims System in All Regional Offices,
NextGov (Jun. 17, 2013), http://www.nextgov.com/health/2013/06/goodbye-paper-va-installs-
automated-claims-system-all-regional-offices/65030/.
55 Daniel Huang, Automated System Often Unjustly Boosts Veterans’ Disability Benefits, Wall
St. J. (May 11, 2015), http://www.wsj.com/articles/automated-system-often-unjustly-boosts-
veterans-disability-benefits-1431387826.
56 Id. 57 Id.
58 Watch List Filtering, Fiserv, http://www.financialcrimerisk.fiserv.com/watchlistfiltering.aspx
(last visited Jan. 24, 2016).
59 A Risk Based Approach, Fiserv, http://www.financialcrimerisk.fiserv.com/customerduediligence
.aspx (last visited Jan. 24, 2016).
60 SAS Anti-Money Laundering, SAS, http://www.sas.com/en_us/industry/banking/anti-money-
laundering (last visited Jan. 24, 2016).
61 Oracle, Oracle Financial Services Anti Money Laundering On the Oracle
Exadata Database Machine (2013), http://www.oracle.com/us/industries/financial-services/
anti-money-exadata-ds-2062792.pdf.


extent that these sorts of risk determinations were done at all, they would have
had to be done manually.
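What such software does can be sketched, in deliberately simplified form, with two toy routines – one fuzzy-matching customer names against a watchlist, the other flagging a pattern of just-under-threshold deposits. The names, thresholds, and rules below are invented; commercial products are vastly more elaborate and increasingly rely on machine learning rather than fixed rules.

from difflib import SequenceMatcher

WATCHLIST = ["Jonathan Q. Example", "Acme Shell Holdings Ltd."]

def watchlist_hits(customer_name, threshold=0.85):
    # Return watchlist entries whose similarity to the customer name exceeds the threshold.
    return [entry for entry in WATCHLIST
            if SequenceMatcher(None, customer_name.lower(), entry.lower()).ratio() >= threshold]

def flag_structuring(deposits, limit=10_000, window=3):
    # Flag when several cash deposits each fall just below the reporting limit.
    near_limit = [amt for amt in deposits if 0.9 * limit <= amt < limit]
    return len(near_limit) >= window

print(watchlist_hits("Jonathon Q Example"))       # likely a match despite the typo
print(flag_structuring([9500, 9800, 9900, 400]))  # True: a pattern worth human review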
Another domain involving a space of shared public and private activity –
the testing of pharmaceutical products – also serves as a harbinger of greater
automation. As part of its review of new drug applications, the FDA often
considers “Population Pharmacokinetics” models: essentially tests to see how
drugs will interact with different bodies, depending on age, weight, and other
factors.62 Traditionally, experts known as “pharmacometricians” would select
several hundred statistical models (not real people) on which to test these
drug interactions. Choosing which models to include was apparently time
consuming and labor intensive.63
Recently, however, the FDA approved a new drug application in which mod-
els were selected, not by a pharmacometrician, but by an algorithm. According
to the developer’s press release announcing the fact, such “automated model
selection provides pharmaceutical and biotech companies results in less than
half the time and at a lower cost compared to the traditional method.”64
Courts sometimes reference population pharmacokinetics reports in discussing
a drug’s safety.65 The reliability of such reports, and the underlying automation
that helped to produce them, may come before a court again in the future.
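The general flavor of automated model selection can be conveyed with a toy, single-subject example; the data, candidate models, and use of an AIC score here are invented for illustration and are far simpler than population-level selection across covariate models, but the logic – fit each candidate, keep the one with the best score – is the same.

import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)     # hours after dose
conc = np.array([9.1, 7.8, 6.1, 3.9, 1.8, 0.9, 0.15])    # mg/L, invented observations

def one_compartment(t, c0, k):
    return c0 * np.exp(-k * t)

def two_compartment(t, a, alpha, b, beta):
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

candidates = {
    "one-compartment": (one_compartment, [10, 0.2]),
    "two-compartment": (two_compartment, [8, 0.5, 2, 0.1]),
}

def aic(model, p0):
    # Fit the candidate and score it; a lower AIC balances goodness of fit against parameter count.
    try:
        params, _ = curve_fit(model, t, conc, p0=p0, maxfev=10000)
    except RuntimeError:
        return float("inf")  # candidates that fail to converge are effectively skipped
    rss = np.sum((conc - model(t, *params)) ** 2)
    n, k = len(t), len(params)
    return n * np.log(rss / n) + 2 * k

best = min(candidates, key=lambda name: aic(*candidates[name]))
print("selected model:", best)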

iv living with cyberdelegation: implications of emergence and institutional change
Whatever advantages emerge as these trends accelerate, however, will come
with a mix of challenges and subtle complexities that deserve careful scrutiny.
One of the challenges associated with the administrative state most recogniz-
able to lawyers is how society resolves administrative adjudication problems,
such as disability determinations or the freezing of assets that could be linked
to illicit activity. In any such adjudication, policymakers might face a trade-
off about whether to reduce discretion with a rule-based system that more
thoroughly matches different fact-patterns to particular outcomes, or whether
to allow for application of more individualized considerations pursuant to a

62 Food & Drug Admin., Guidance for Industry: Population Pharmacokinetics (Feb.
1999), http://www.fda.gov/downloads/Drugs/.../Guidances/UCM072137.pdf.
63 Automated Model Selection for Population PK Analysis–‘DARWIN’ Software Program, Nuven-
tra, http://www.nuventra.com/services/darwin/ (last visited Jan. 24, 2016).
64 Daniel Roy, FDA Approves First NDA Using Nuventra’s Software for Automated Popula-
tion PK Model Selection–DARWIN, Nuventra (Nov. 2, 2014), http://www.nuventra.com/
fda-darwin-population-pk-automated-model-selection/.
65 La. Wholesale Drug Co. v. Sanofi-Aventis, No. 07-CV-7343 (HB), 2008 WL 4580016, at *4 n.5
(SDNY, Oct. 14, 2008).

broader standard. In Heckler v. Campbell,66 the Supreme Court concluded
that even individuals who are entitled to a hearing may nonetheless face a sit-
uation where a previously specified rule determines much of what drives how
their case is decided. One can readily imagine how a similar principle could
justify the use of an algorithm – understood as nothing more than an elaborate
version of the “grid” at issue in Campbell – to determine case outcomes. From
a social welfare perspective, the question of whether to use such an algorithm
evokes the familiar trade-off between rules and standards.
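A toy version of that "elaborate grid" makes the point. The table below is invented and far cruder than the actual medical-vocational guidelines: once a claimant's profile matches a row, the outcome follows mechanically, and the only residual question is what to do with profiles the table does not cover.

GRID = {
    # (age band, education, prior work skills) -> outcome; entries are invented for illustration
    ("55+", "limited", "unskilled"): "disabled",
    ("55+", "high school", "skilled"): "not disabled",
    ("50-54", "limited", "unskilled"): "disabled",
    ("18-49", "limited", "unskilled"): "not disabled",
}

def decide(age_band, education, skills):
    # The outcome is fixed by the rule if the profile appears in the grid;
    # otherwise the case falls back to individualized assessment.
    return GRID.get((age_band, education, skills), "individualized review")

print(decide("55+", "limited", "unskilled"))      # "disabled"
print(decide("18-49", "high school", "skilled"))  # falls outside the table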
But some of the trade-offs may not fit the conventional choice between rules
and standards that Campbell addressed. Consider, as an example, the challenge
of shaping a computer system’s understanding of conventional administrative
law categories such as rulemaking, adjudication, enforcement, guidance, arbi-
trary and capricious review, and so on. It is one thing for an artificial neural
network to learn to predict how a given individual decision maker or even
an organization would define “arbitrary and capricious.” It is quite another to
decide how such a system should be trained in adjudicating among competing
definitions. And as further progress is made on interfaces for computer-human
interactions, further challenges will arise in mapping algorithms onto con-
ventional doctrine, policing the fit between what a user interface says about
how a decision was made and how it was actually made, and addressing labor
market debates about jobs lost to automation and the value of quirky human
decision-making traits.
Moreover, automation may bring with it dilemmas involving asymmetric
expertise and information. Automation in the private sector to determine
(for example) what financial transactions are considered suspicious could
give rise to public sector information technology challenges, such as how
to conduct “algorithmic” audits that assess the fit between regulatory rules
and algorithms relied on by the private sector. Over time, regulators and
the public may face further difficulties contending with the emergence of
analytical methods rooted in deeper learning and neural networks that prove
more difficult to fully understand (or that depend on the use of still other
computer programs to understand). Eventually, administrators and the public
could find themselves confronting the emergence of difficult-to-anticipate
properties from the interaction of multiple algorithmic decision-makers.
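One rudimentary form such an "algorithmic audit" could take – sketched here with invented rules, thresholds, and transactions – is to replay the same test cases through the regulator's reference rule and the regulated firm's model and examine where they diverge; a real audit would also probe training data, documentation, and error rates across groups.

def reference_rule(txn):
    # Regulator's published rule: flag cross-border transfers at or above a threshold.
    return txn["amount"] >= 10_000 and txn["cross_border"]

def firm_model(txn):
    # Stand-in for the firm's proprietary model (in practice, an opaque classifier).
    return txn["amount"] >= 12_000 and txn["cross_border"]

test_cases = [
    {"amount": 9_000, "cross_border": True},
    {"amount": 11_000, "cross_border": True},
    {"amount": 15_000, "cross_border": False},
    {"amount": 20_000, "cross_border": True},
]

disagreements = [t for t in test_cases if reference_rule(t) != firm_model(t)]
print(f"{len(disagreements)} of {len(test_cases)} test cases diverge:", disagreements)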
Although these scenarios carry with them somewhat different benefits and
risks, virtually all of them suggest that the administrative state will increasingly
face several dilemmas. First, as long as we are dealing with systems that fall well
short of what could be termed “artificial super-intelligence,” the societal value

66 461 US 458 (1983).

of our reliance on computer programs in the administrative state will depend
on often highly contestable assessments of the objectives that the programs
are designed to maximize. Put simply, deciding how to instruct computer
programs on matters of broad public concern – and telling them what to
maximize – can be more difficult in practice than in theory. These difficulties
arise even when there is widespread societal agreement about a given general
goal, such as reducing the risk of contaminated food at a reasonable cost,
in part because agreement at a high level of generality rarely translates into
consensus on how to implement policies through administrative agencies. And
as computer programs maximize a given parameter – such as ensuring that
suspicious financial transactions are flagged and blocked – they can ignore
the consequences for other parameters, such as the resilience of the technical
architecture for cross-border banking transactions in light of cybersecurity
vulnerabilities.
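A stylized example may help make this concrete. Suppose, purely hypothetically, that a program is told to maximize a single number – the share of suspicious transactions it blocks – and nothing else. In the sketch below, all of the data and thresholds are invented; the point is only that an optimizer given one stated objective will pursue it even when doing so imposes costs no one instructed it to measure:

```python
# Illustrative sketch of single-objective optimization and its blind spots.
# All data, parameters, and thresholds are invented for the example.

import random

random.seed(0)

# Simulated transactions: (risk_score, is_actually_suspicious).
transactions = [(random.random(), random.random() < 0.05) for _ in range(10_000)]


def blocked_suspicious_rate(threshold):
    """The only objective the program is told to maximize: the share of
    truly suspicious transactions whose risk score gets them blocked."""
    suspicious = [score for score, bad in transactions if bad]
    return sum(score >= threshold for score in suspicious) / len(suspicious)


def blocked_legitimate_rate(threshold):
    """A side effect nobody told the program to track: the share of
    legitimate transactions blocked at the same threshold."""
    legitimate = [score for score, bad in transactions if not bad]
    return sum(score >= threshold for score in legitimate) / len(legitimate)


# Naive "optimization": pick whichever threshold maximizes the stated objective.
rate, threshold = max(
    (blocked_suspicious_rate(t / 100), t / 100) for t in range(101)
)

print(f"Chosen threshold: {threshold:.2f}")
print(f"Suspicious transactions blocked: {rate:.1%}")
print(f"Legitimate transactions blocked (an unmeasured cost): "
      f"{blocked_legitimate_rate(threshold):.1%}")
```

Run as written, the program drives its threshold toward zero: the stated objective is satisfied, and the cost imposed on legitimate transactions remains invisible to it, because nothing in its instructions told it to count that cost.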
One would expect plenty of debate about the precise mix of goals that
should animate the imposition of economic sanctions, or the implementation
of procedures to adjudicate knife-edge asylum cases with contested
evidence. Regarding economic sanctions, for example, it is easy enough to
suggest that the goal is to change the behavior of the target country. But the
details matter. Trade-offs could arise, concerning the acceptable administra-
tive and efficiency costs of imposing certain sanctions (for example), or the
relative merits of strategies that maximize the goal of galvanizing further inter-
national action at the expense of raising the probability of policy change in the
target country. Often, implementing policy involves political trade-offs that an
expert system or a machine learning mechanism could elide but would still
be making, implicitly, because it is applying a particular analytical technique.
And it is our often-undertheorized goals that must also inform whether we
should try to screen out features of human cognition diverging from conven-
tional and easily systematized accounts of rationality. There is no reason to
think that all heuristics and biases are bad from a social welfare perspective.
Whether a heuristic is valuable depends on what goal one has for society. Mass
neglect of probabilistic information all but certainly makes it more difficult
to fix certain problems. But some features of human cognition that vary from
conventional rationality – such as the tendency to weigh more heavily the sto-
ries of specific individuals rather than aggregate statistical information – may
reflect domain-specific responses to discrete kinds of problems, or indeed, may
prove integral to qualities such as empathy, or to the ability of policymakers to
explain governmental decisions to the public.67

67 See Mark Kelman, The Heuristics Debate 180 (2011).

In fact, biases that help decision-makers anticipate how the public will
respond, and that help protect public support for a regulatory program, may
be considered inappropriate to codify into instruction sets but would be diffi-
cult to screen out if human decision-makers retain their crucial role. Accord-
ingly, at least in some circumstances, quirks of human decision-making often
treated as “biases” to be screened out by technocratically oriented scholars
in the tradition of Sunstein may merit an increasingly important place in legal
decision-making as ever more routine decisions are guided by algorithms.68
Second, although doctrinal developments may encourage increasing
reliance on automation for decision-making, the incorporation of computer
programs into the decision-making framework of the administrative state could
nonetheless have major path-dependent effects that are difficult to undo, that
carry with them cybersecurity risks, and that will not necessarily be weighed in
a calculus that may encourage reliance on computer programs. Agencies may
face a variety of pressures to use automated technologies. Courts have long
scrutinized agency regulatory or adjudicatory decisions to determine whether
they are arbitrary and capricious (or otherwise unjustified). The availability
of computer programs to analyze data, provide cogent simulations, and apply
a variety of analytical tools could help agencies overcome such scrutiny in
new ways. Changes in common law standards of care could also promote
changes in norms affecting the willingness of agencies to eschew computer
programs that could provide a new kind of justification for administrative
decisions.
Amidst these pressures, it may be tempting to ignore cybersecurity problems
because society faces daunting difficulties in quantifying, let alone eliminat-
ing, the risks. But it would be a serious mistake to consider the benefits of
automation without considering the associated security problems. Cyberse-
curity problems should loom especially large in discussions of automation
because of the many examples of governmental failures involving information
technology. These failures, moreover, reflect not only episodic problems or the
inherent complexity of information technology, but recurring difficulties the
public sector may encounter in contracting, reconciling competing objectives,
and integrating long-term strategy with short-term constraints.
Yet major path-dependent effects would be difficult to undo as we incorpo-
rate algorithms into legal decision-making. Path dependence arises because
infrastructure is costly to replace and habituates people to make decisions
in a particular way. Organized interests tend to defend the resulting status

68 See, e.g., Cass R. Sunstein, Simpler: The Future of Government (2013).

quo, and organizations often develop internal political dynamics favoring
continuity over change. Given recent advances in DNA sequencing and
genetic medicine, it is not difficult to envision an ever-greater role for expert
systems in analyzing information relevant to the approval of specialized drugs.
But where computer programs and organizational expertise may be comple-
ments at one point (for instance, when an expert system is acquiring initial
knowledge), they may become substitutes later. Once an agency’s organiza-
tional expertise – embodied in its experienced lawyers and scientists – begins to
erode, at a minimum the agency faces steep costs in recovering it. The agency
may be far better equipped to tweak its system for generating computer-driven
choices than to rebuild its infrastructure of experienced human decision-
makers.
And where those decision-makers are eclipsed by computer programs, an
agency would be forced to shoulder the risks of cybersecurity breaches. Com-
puter programs are all but certain to leverage the Internet or a similar network.
The EPA could deploy a mix of pervasive data gathering about possible pollu-
tion violations, along with computer programs to target enforcement and even
adjust rules or guidelines in real time where necessary. The result could be a
world with less pollution, but one more vulnerable to cybersecurity threats that
could, at a minimum, undermine the integrity of the regulatory process and,
at worst, exploit vulnerabilities to undermine industrial infrastructure more
severely.
Third, policing the line between automated decisions and decision support
is difficult. Yet in how we police that line lies the distinction between essentially
turning a decision over to a computer program and having the program
merely support a human decision-maker. To understand why, recall that peo-
ple underappreciate the influence of certain technologies and information
on their decisions. Moreover, consider the consequences of the so-called pre-
sumption of regularity that emerged over time in light of Morgan v. United
States.69 In rejecting a challenge to an order by the Secretary of Agriculture
fixing maximum rates to be charged by market agencies at the Kansas City
Stockyards, the Supreme Court declined to allow an intrusive analysis of the
Secretary’s actual decision-making process and considerations. “[I]t was not
the function of the court to probe the mental processes of the secretary in
reaching his conclusion,” the Morgan court concluded, “if he gave the hear-
ing which the law required.”70 Courts have been loath to stray from this
presumption over the decades, and it has persisted – and, with it, their
unwillingness to police exactly by whom a decision is

69 304 US 1, 18 (1938). 70 Id.

taken.71 The presumption conserves judicial resources and respects the deliberative process
in executive bureaucracies. Even if this presumption were jettisoned by the
judiciary, it’s far from obvious how to allow human decision-makers access to
computer programs for the kind of extensive decision support that such tools
can provide (given human cognitive limitations) while promoting consistency
with a cogent standard of how much reliance on a computer is too much.
If such a standard were possible to develop, though, it could furnish intrigu-
ing possibilities for institutional design. To wit: such a standard would allow
agencies and judges to implement a version of the approach Barron and Kagan
suggested with respect to reliance on subordinates for decisions. Kagan and
Barron distinguish two kinds of delegation: the delegation of authority from
Congress to an agency official, and the delegation of authority from an agency
official to her subordinates.72 Since Chevron, the Supreme Court has tried to
find workable limits on the deference due to agency decisions. Most impor-
tantly, in Mead the Court suggested that the judiciary should give greater
deference to decisions made through formal rulemaking, and to decisions
that are general, rather than particular.73 To Kagan and Barron, this defer-
ence regime creates perverse incentives for agencies to rely on rulemaking
procedures that are both expansive and expensive. Instead, Kagan and Barron
recommend that the judiciary apply Chevron deference if and only if the con-
gressional delegatee of the statutory authority takes personal responsibility for
the decision.74 As they would have it:

[t]he agency would wrest primary interpretive authority from the courts if
but only if a particular agency official – the official Congress named in the
relevant delegation – personally assumed responsibility for the decision prior
to issuance. The courts would retain primary interpretive authority (subject
only to Skidmore-style deference) if, alternatively, this named person passed
her decision-making authority to lower-level officials.75

The deference regime Kagan and Barron describe is hardly immune from
prescriptive as well as practical criticism. But it serves to illustrate the kinds of
dilemmas agencies and the public may face with respect to automation, par-
ticularly if the aim is to regulate decision-makers’ reliance on computer pro-
grams. The goal is to channel decision-making toward politically accountable

71 See also US Postal Serv. v. Gregory, 534 US 1, 10 (2001) (“[A] presumption of regularity attaches
to the actions of Government agencies.”).
72 David J. Barron and Elena Kagan, Chevron’s Nondelegation Doctrine, 2001 Sup. Ct. Rev. 201,
201 (2001).
73 Id. at 203–04. 74 Id. at 204. 75 Id. at 235–36.

agency “delegatees.” Automated systems – whether software to determine dis-
ability benefits or to identify bombing targets – may be “political” in the
sense that political beliefs are embedded in them (e.g., who should qual-
ify for disability benefits; what constitutes a legitimate bombing target). But
automated systems are not politically responsive – to Congress, to the Pres-
ident, and to the public – in the way that agency leaders are. The oppor-
tunity for greater judicial deference could – as in Kagan and Barron’s ini-
tial application – encourage agencies to push more decision-making toward
politically responsive leaders. Similar pressures could encourage agencies
to claim that humans remain crucially involved in agency decisions that
might otherwise be produced by an array of automated decision-making
procedures.
It may seem easier at first to draw clear distinctions between automated
and human decision-making within an agency than to distinguish – as Bar-
ron and Kagan’s original proposal required – between lower-level and senior
administrators who may be constantly working together. In practice, though, a
simple distinction between the decisions of human administrators immersed
in technology and agency choices that are primarily driven by the persua-
sive recommendations of machines may prove enormously elusive. Courts
may struggle to define automation, as often outcomes depend on the judg-
ment of both people and software. For instance, a pilot may target a missile
within a 100-yard radius, with an automated system ultimately deciding where
within that radius the missile should land. In such a system, what deference
is due to the agency making the decision? And as this example suggests,
many decisions being automated would not otherwise be made by the
statutory delegatee, but rather by a far lower-level figure. In such circumstances,
should a court distinguish the deference due when the decision is made by an
automated system versus a low-level agency official?
Fourth, heavy reliance on computer programs – particularly ones with more
adaptive properties, such as genetic algorithms, self-modifying deep learning
systems, and more sophisticated neural networks – may turn out to burden the
extent of deliberation that still occurs in the administrative state. Human delib-
eration is replete with all the limitations associated with human cognition, but
implicit in the endeavor is an aspiration for dialogue and exchange of reasons
that are capable of being understood, accepted, or rejected by policymakers,
representatives of organized interests, and members of the public. Epistemic
communities depend on a sufficient degree of shared understanding to permit
participants to assess the rationales justifying public action or inaction. When
analytical techniques lack the relatively straightforward, rule-like structure of
expert systems, difficulties can arise in supplying explanations of how deci-
sions were made that could be sufficiently understood by policymakers and

members of the public.76 Suppose an autonomous drone targeting system
used a mix of genetic algorithms and neural networks to make decisions about
whom to target. Or a similarly designed system helped determine how to
allocate scarce inspection resources among processing facilities handling the
increasing proportion of the American food supply that comes from abroad. It
would probably matter, at the margin, that these systems would not be built
to yield carefully reasoned explanations for the choices undertaken. These
examples reflect how, in some sense, the administrative state is about transla-
tion – from expert knowledge and legal authority discourses to a conversation
that allows at least somewhat more public deliberation. A key question is what
happens to that process when automated systems assume an ever more prominent
role in the administrative state – when their capacity to explain their decisions
presents, at best, a substantial principal-agent problem and, at worst, is limited
by their very architecture and by their ability to encourage decisions that
humans would not otherwise have made.
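The contrast can be illustrated with a deliberately toy sketch, not a description of any actual agency system: a rule-based inspection-targeting routine can recite the rule it applied, while a one-neuron stand-in for a learned model can offer only the weighted sum that produced its answer. The facility data, rules, and weights below are all invented:

```python
# Toy contrast between a rule-like expert system, which can state the rule it
# applied, and a minimal "learned" model, whose only explanation is its weights.
# All rules, weights, and facility data are invented for illustration.

def expert_system(facility):
    """Rule-based inspection targeting: the decision arrives with a reason."""
    if facility["prior_violations"] >= 2:
        return True, "Rule 1: two or more prior violations"
    if facility["imports_share"] > 0.5 and facility["days_since_inspection"] > 365:
        return True, "Rule 2: majority-import facility not inspected in a year"
    return False, "No rule triggered"


def one_neuron_model(facility, weights=(1.8, 0.9, 0.004, -2.5)):
    """Stand-in for a learned model: the decision emerges from a weighted sum,
    and the weights are the only 'explanation' available."""
    w_violations, w_imports, w_days, bias = weights
    score = (w_violations * facility["prior_violations"]
             + w_imports * facility["imports_share"]
             + w_days * facility["days_since_inspection"]
             + bias)
    return score > 0, f"score={score:.2f} from weights {weights}"


facility = {
    "prior_violations": 1,
    "imports_share": 0.7,
    "days_since_inspection": 400,
}

print(expert_system(facility))     # a decision plus a rule one can contest
print(one_neuron_model(facility))  # the same decision, explained only by numbers
```

Both routines reach the same answer for this facility, but only the first supplies the kind of reason that can be exchanged, contested, and defended in deliberation; the second offers, at best, raw material for a further translation exercise.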
This is not to say that the status quo is a deliberative panacea. Indeed, it’s not
difficult to critique the current incarnation of the administrative state for its
lack of opportunities allowing the public to participate in decisions – and it’s far
from obvious, at least in some cases, that the extent of public deliberation is in
anything like a linear relationship to advances in social welfare. That said, the
growing reliance on automated computer programs to make (or heavily justify)
sensitive decisions in the administrative state could complicate what (little)
deliberation does occur, by diffusing responsibility for the decision between
what is left of the agency’s leadership and the team or set of machines that
designed the relevant software architecture, and by raising the likelihood that
decisions would be made on a basis phenomenologically different from what
could easily be understood or even explained by human participants.

Conclusion
Software is poised to drive more than cars. The dilemmas associated with the
use of expert systems, neural networks, and other forms of “artificial intelli-
gence” implicate nothing less than how society governs itself. Even leaving
aside the pace of change in information technology, our evolving norms about
the role that technology plays in daily life suggest that the administrative

76 See, e.g., John Markoff, Google Car Exposes Divide on Computers as Drivers, N.Y. Times (Feb. 10, 2016), http://www.nytimes.com/2016/02/11/technology/nhtsa-blurs-the-line-between-human-and-computer-drivers.html. In describing deep learning algorithms, Markoff observes that “the programs are able to perform humanlike tasks, such as . . . understanding speech, [but] at this point, researchers admit they do not completely understand how the deep learning networks make decisions.”

state – and, indeed, the nation-state itself – may be entering uncharted ter-
ritory. Although no map exists to decide precisely how to navigate it, the
following ideas may garner interest as agencies, policymakers, and courts con-
sider how to increase society’s capacity to make informed choices about the
journey.
Return first to the problems posed by the Morgan line of cases. Instead of
relying on the “presumption of regularity” espoused in those cases, courts may
explore how to better police the extent of human decision-maker engagement
in those crucial government decisions that we still expect someone to be able
to explain, understand, and defend. As reliance on information technology
increases, courts and policymakers should consider taking more seriously the
lines of argument in decisions that require accountability to be lodged in
specific decision-makers. Perhaps it’s time to consider recalibrating the “pre-
sumption of regularity” to ensure that decision-makers recognize the risks of
relying on automated analytical techniques they do not entirely understand.
The presumption of regularity should have its limits. As a practical matter, this
approach raises difficult further questions about the scope of discovery in suits
to review administrative action, but perhaps those questions are worth facing
given the risks that decision-makers will rely on algorithms that they don’t fully
understand.
Second, on a related note, courts may conclude that arbitrary and capricious
review is most meaningful if it encompasses whether there is consistency
between substantive explanations offered in, say, legally critical justifications
for rulemaking and the analytical techniques actually used to make decisions.
It is one thing to justify a program to freeze assets associated with organizations
that meet a specific, statutorily grounded threshold of suspicion; it is quite
another to deploy algorithms that entirely redefine that threshold, dynamically,
in response to new information. Some attention to cybersecurity risks may also
fit within the context of arbitrary and capricious review.
Finally, some agencies may accelerate efforts to engage scholars, civil soci-
ety, and other stakeholders in increasing our understanding of how to harness
the analytical capacity of “artificially intelligent” computer systems without
eroding our sense of how decisions are made. As part of this process, agencies
should consider engaging in medium-to-long-term planning about how they
would address some of the scenarios outlined earlier. The FDA could further
investigate how trends in artificial intelligence could change its use of outside
experts. Labor Department officials may face unexpected challenges arising
from labor market changes driven by automation. Virtually all agencies will
benefit from explicitly experimenting with different models of decision-making
that aim to leverage artificial intelligence technologies while keeping humans

in the loop. These efforts will matter because, increasingly, agencies and entire
governments will face the challenge of how to instruct complex machines that
guide human decisions so routinely that it becomes exceedingly difficult to
tell where an algorithm’s nudge ends and an accountable individual’s choice
begins.
True: government agencies seem to face trouble even when updating con-
ventional information technology infrastructure, so the ability to integrate
artificial intelligence into administrative tasks may seem far-fetched. Ironi-
cally, such weakness could strengthen the case for using systems that adapt
and learn. Such systems may prove crucial to reducing the gap between a
machine’s capacity and that of a person familiar with an agency’s culture
and organizational routines. As a general matter, the more that computer sys-
tems engaged to perform administrative tasks become adaptive, and capable of
modifying themselves, the more they are likely to avoid the problems of efficacy
and cost that sometimes plague government information technology projects.
As software becomes more analytically sophisticated, however, and, in partic-
ular, more adaptive to the point of being able to rewrite much of its own code,
it will be more difficult to predict longer-term consequences ranging from
subtle changes in function to unexpected rapid growth of analytical capac-
ity. As machines become more capable of optimizing to achieve the goals
we articulate, higher stakes attach to how we articulate those goals and what
trade-offs we authorize. Crucial to our ability to navigate these dilemmas will
be a cadre of lawyers and policymakers who understand artificial intelligence,
its possibilities and limits, and particularly its capacity to adapt in unexpected
ways.
At core the administrative state is about reconciling social welfare calcu-
lations with procedural constraints and political pragmatism. When Mashaw
made his case for administrator discretion in the 1980s, the context for it appears
to have been an appreciation of that reconciliation – how it is imperfect even
on rare days when the administrative state is at its best, and how it pivots in
subtle and profound ways on human institutions, assumptions, and aspirations
for deliberation. Now we are on the cusp of being offered a gift – a black box
that promises to make the administrative process eminently more tractable,
technically precise, and less arbitrary by leaning more heavily on algorithms
and neural networks. That gift is alluring because human decisions are as
messy and imperfect as human societies themselves are. What we risk if we
blithely accept it is eliding precisely the sort of deliberation about the nature
of social welfare that justifies the administrative state in the first place.
