
Resolved: The United States Federal Government should ban the collection of personal data through biometric recognition technology.

April 2023 Public Forum Brief*

*Published by Victory Briefs, PO Box 803338 #40503, Chicago, IL 60680‑3338. Edited by Nick
Smith. Written by Lawrence Zhou, Jacob Nails, and Amadea Datel. Evidence cut by
Lawrence Zhou. For customer support, please email help@victorybriefs.com.
Contents

1 Topic Analysis by Lawrence Zhou 7


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Pro Thoughts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.1 Is the Pro Position Winnable? . . . . . . . . . . . . . . . . . . . . . 9
1.2.2 Biometrics Bad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.3 Developing Ban Key Warrants . . . . . . . . . . . . . . . . . . . . 10
1.3 Con Thoughts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.1 Two Types of Regulations . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.2 Two Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.3 What Does This Mean for the Con? . . . . . . . . . . . . . . . . . . 13
1.3.4 Why Not Other Regulations? . . . . . . . . . . . . . . . . . . . . . 14
1.3.5 Potential Regulations . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2 Topic Analysis by Jacob Nails 16


2.1 Intro . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 The topic is a “ban” topic first, and a “biometric” topic second. . . . . . . 17
2.2.1 The Con side will have greater control over the direction of the
debate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.2 The Pro case must be robust against different angles of attack. . . 18
2.2.3 The Pro side will benefit from keeping the debate focused on po‑
tential uses, not intended uses. . . . . . . . . . . . . . . . . . . . . 18
2.2.4 Be cognizant of the interrelation between your own offense and
defense on the Con. . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.5 If you define one word, it should be “ban.” . . . . . . . . . . . . . 22
2.3 The Pro should justify why privacy matters, and how much. . . . . . . . 22
2.4 The Con should become skilled in the art of the reversal test. . . . . . . . 23
2.4.1 Prepare to coopt the other side’s primary offense with turns. . . . 24
2.4.2 What about other countries? . . . . . . . . . . . . . . . . . . . . . 25


3 Topic Analysis by Amadea Datel 27


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Background Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Interpreting the Topic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4 Pro Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4.1 Human Rights Violations . . . . . . . . . . . . . . . . . . . . . . . 31
3.4.2 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.5 Con Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.5.1 Solving Crime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.5.2 Economic Harms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.5.3 Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4 Background Evidence 37
4.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1.1 Collection Definition . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1.2 Ban . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1.3 Bans vs. Regulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1.4 Wide Range of Bans . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.5 Personal Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1.6 Biometric Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.1.7 Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.1.8 Biometric Recognition . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.1.9 Verification vs. Identification . . . . . . . . . . . . . . . . . . . . . 51
4.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2.1 Existing Regulations . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2.2 No Federal Law Now . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.2.3 Existing Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

5 Pro Evidence 57
5.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.1.1 Constitutionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.1.2 Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.1.3 Bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.1.4 Discrimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.1.5 Privacy—Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.1.6 Privacy—Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81


5.1.7 Free Expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87


5.1.8 FRT Bad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.1.9 Other BRT is Dangerous . . . . . . . . . . . . . . . . . . . . . . . . 94
5.1.10 Surveillance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.1.11 Dehumanization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.1.12 Right to Obscurity . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.1.13 Due Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.1.14 Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.1.15 Harms Children . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.1.16 Private Companies Bad . . . . . . . . . . . . . . . . . . . . . . . . 121
5.1.17 Security Breaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5.1.18 Scientifically Flawed . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.1.19 Lethal Autonomous Weapons . . . . . . . . . . . . . . . . . . . . . 126
5.1.20 No Accountability . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.1.21 Now Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.1.22 AT: Convenience . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.1.23 AT: Overreaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.1.24 AT: Crime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.1.25 AT: Airports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.2 Ban Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.2.1 Ban Key—General . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.2.2 Ban Key—Creep . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.2.3 Ban Key—Danger . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
5.2.4 Ban Key—Normalization . . . . . . . . . . . . . . . . . . . . . . . 152
5.2.5 Ban Key—Alts Fail . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.2.6 Ban Key—Tradeoff . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.2.7 Ban Key—Human Rights . . . . . . . . . . . . . . . . . . . . . . . 155
5.2.8 Regs Fail—General . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5.2.9 Regs Fail—Creep . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.2.10 Regs Fail—Children . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.2.11 Regs Fail—Freedom . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.2.12 Regs Fail—Chilling . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.2.13 AT: Consent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.2.14 AT: Accuracy Requirements . . . . . . . . . . . . . . . . . . . . . . 175
5.2.15 AT: Data Anonymity . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.2.16 AT: Ban Only for LE . . . . . . . . . . . . . . . . . . . . . . . . . . 177


5.2.17 AT: Moratorium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178


5.2.18 AT: Notice and Choice . . . . . . . . . . . . . . . . . . . . . . . . . 181
5.2.19 AT: FRT Bans Insufficient . . . . . . . . . . . . . . . . . . . . . . . 182
5.2.20 AT: Opt‑Out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.2.21 AT: Targeted Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.2.22 AT: Specific Use Restrictions . . . . . . . . . . . . . . . . . . . . . 185
5.2.23 AT: Enforce Existing Laws . . . . . . . . . . . . . . . . . . . . . . . 187

6 Con Evidence 188


6.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.1.1 Laundry List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.1.2 No Solvency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
6.1.3 Bans Impossible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.1.4 Circumvention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
6.1.5 Rollback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
6.1.6 Biometrics Are More Secure . . . . . . . . . . . . . . . . . . . . . . 199
6.1.7 Border Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6.1.8 Law Enforcement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
6.1.9 Terrorism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
6.1.10 Trafficking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
6.1.11 Airports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6.1.12 Office Buildings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.1.13 Stadiums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.1.14 Medicine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6.1.15 Ukraine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
6.1.16 Distracted Driving . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.1.17 Consumer Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
6.1.18 Innovation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.1.19 AT: Bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.1.20 AT: Moratorium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.1.21 AT: Privacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.1.22 AT: Privacy—Uniqueness . . . . . . . . . . . . . . . . . . . . . . . 247
6.2 Regulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.2.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.2.2 Terrorism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
6.2.3 Accountability Rules . . . . . . . . . . . . . . . . . . . . . . . . . . 262


6.2.4 Transparency Requirements . . . . . . . . . . . . . . . . . . . . . . 264


6.2.5 Privacy Protections . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.2.6 Semi‑Open Public Areas . . . . . . . . . . . . . . . . . . . . . . . . 268
6.2.7 Ban Facial Surveillance . . . . . . . . . . . . . . . . . . . . . . . . . 271

1 Topic Analysis by Lawrence Zhou

Lawrence Zhou is a Fulbright Taiwan Debate Program Coach and Trainer and an
Assistant Debate Coach at Apple Valley High School in Minnesota. He is also
the former Director of Lincoln‑Douglas Debate and Director of Publishing at Vic‑
tory Briefs. He debated at Bartlesville HS in Oklahoma (2010‑2014) in Lincoln‑
Douglas debate where he was the 2014 NSDA Lincoln‑Douglas national champion.
Lawrence graduated from the University of Oklahoma in 2019 with degrees in MIS,
Marketing, and Philosophy. While attending the University of Oklahoma, he placed
as the National Runner Up at the 2018 Intercollegiate Ethics Bowl National Compe‑
tition, advanced to outrounds at the 2016 and 2018 Cross Examination Debate As‑
sociation National Tournament, and championed the Beijing Language and Culture
University in British Parliamentary debate. He received his Master’s in Communi‑
cation and Journalism from the University of Wyoming in 2022. He was formerly
the Debate League Director at the National High School Debate League of China,
a graduate assistant at the University of Wyoming, head coach of Team Wyoming,
and an assistant coach at The Harker School. His students have advanced to late
outrounds at numerous regional and national invitational tournaments, including
finals appearances at the NSDA National Tournament and semifinals appearances
at the Tournament of Champions.

1.1 Introduction

The topic quality this season has varied wildly, from excellent topic areas and clear
topic wordings to the two choices we were presented for this month. The AI regulation
topic area is interesting but the wording afforded the Pro far too much latitude to specify
what those AI standards would be, a death knell for a Public Forum topic that lasts only
a single month (and debating about global AI standards purely in the abstract would
be quite difficult).


However, the chosen topic suffers from what I think are unclear wording issues. For
example, there is substantial ambiguity about what the “collection of personal data”
means. Couple that with what I think to be a very high burden for the Pro to clear—I
struggle to find many sources who think a total and complete ban on biometric
recognition technology is the solution—and I find this topic to be very difficult to debate
in any meaningful sense.

While you’ll find more than enough sources that are staunchly opposed to facial recogni‑
tion technology (Selinger and Hartzog have too much free time on their hands to write
about FRT), those sources usually take care to really focus in on FRT as a unique danger
while downplaying the threat from other forms of biometric recognition technology (I
don’t see people going after fingerprint identification with such vigor in the literature).

Thankfully, most people are not me. I imagine that most debaters will be able to have
debates on this topic mostly by running towards just debating about whether FRT is
good or bad. My intuition is that such a move will ensure that debates can happen but
will be a strategic blunder for most Con teams because I think the literature decisively
favors the Pro on the question of whether biometrics are net good or bad but fairly
comfortably favors the Con on the question of whether the nuclear option of a ban is
the correct solution.

This sets the terrain for the topic—I think the Pro is likely to read arguments about
biometrics (most likely using FRT as their main example) being bad in the abstract while
the Con is likely to read arguments about biometrics being good in some limited sense
(e.g., for law enforcement or monitoring airports) and there will be little contestation
over what I think is the main question of the topic—whether or not a complete and total
ban is the correct solution to the issues raised by biometrics.

In some sense, this topic reminds me of the 2021 November/December Lincoln‑Douglas
topic, Resolved: A just government ought to recognize an unconditional right of work‑
ers to strike. The notion that any right is “unconditional,” let alone a right to strike, was
dubious from the beginning. Barring a skill differential, the affirmative should have
won precisely zero rounds on that topic because there was simply no coherent argu‑
ment that justified the unconditional nature of a right to strike.

However, negative teams rarely exploited this issue and instead devoted most of their
energy towards attacking the right to strike in general (a far greater uphill battle) rather
than the actually controversial part of the resolution—the unconditional nature of that
right. Part of this can be explained away by the general intuition of debaters to try and
maximize the size of their offense. The differential between “strikes good” and “strikes
bad” was large and clear enough that negative debaters could be guaranteed to get a
fairly sizable impact to their disadvantage. By contrast, by largely agreeing that a right
to strike was good, negative debaters would have had far more space to generate offense
against just the “unconditional” part of the topic.

I think something similar will emerge here. In my opinion, Con teams should spend
most of their time contesting whether a complete and total ban on the collection of liter‑
ally any personal data through biometric recognition technology is the right solution to
whatever privacy and security concerns Pro teams mention. The fact that the vast major‑
ity of proposals dealing with biometric recognition technology only ban the collection
of data in certain circumstances should be proof enough that this is the right strategic
angle for the Con to take.

So, instead of developing a standard topic analysis essay that walks through all the
standard controversies of the topic (the other ones will do that), I just want to walk you
through my thought process as I was putting together the evidence for the evidence
packet section of these briefs and how that has informed my thoughts about the argu‑
ments that each side should make.

1.2 Pro Thoughts

1.2.1 Is the Pro Position Winnable?

I came into this topic thinking that it would be near unwinnable for the Pro. A com‑
plete and total ban? On using my iPhone? Seems quite ludicrous to me. However, after
spending a lot of time trying to find good Pro evidence (you’ll notice that the Pro ev‑
idence section of the packet is a bit more developed for this reason), I am now of the
belief that it is merely hard to win for the Pro, but not nearly unwinnable.

Two things have changed my mind on this. First, I am fairly persuaded by many of the
arguments about just how dangerous widespread FRT would be. The fact that most
of the articles out there seem to agree that there are serious issues concerning the ac‑
curacy and misuse of FRT systems swayed me in this direction. This shouldn’t be too
surprising — concerns over FRT have been quite widely discussed in the popular media
for quite some time now. I think it shouldn’t be too hard to win that, on balance, the
unconstrained use of FRT causes more problems than it solves.


Second, I think the few authors that write in favor of a ban have done a pretty good
job elucidating the necessity of a ban and have come up with some clever argu‑
ments for why regulations would be insufficient. The main Pro authors will obviously
be Selinger and Hartzog who have written at least half a dozen strong articles advocat‑
ing for a wholesale ban on FRT.

While I personally think that a complete ban on the collection of biometric data seems
like an overreaction, I think Selinger and Hartzog do advance some strong arguments
against regulations, which is what we’ll talk about below.

1.2.2 Biometrics Bad

I personally don’t think it’s all that hard to win that unregulated FRT or other biometrics
are bad. Not even the companies that manufacture FRT or other biometric recognition
technology argue that it should go unregulated. The accuracy, bias, and abuse concerns seem very well‑
documented and you won’t find too many people arguing otherwise.

The only word of caution I offer here is that you should avoid rushing towards argu‑
ments about specific uses of biometrics being bad. If your only argument for why bio‑
metric recognition technology is bad is that live facial recognition technology is bad,
then the Con will obviously just argue to ban that one specific use case (or that one
specific technology).

Instead, I think you want to develop arguments about why biometric recognition tech‑
nology broadly, left unchecked, poses extreme threats to freedom and privacy. And the
way the Pro gets there is by arguing that allowing biometric recognition technology in
some parts of life leads to both normalization and technology creep (there are lots of
cards in the packet to substantiate these points).

Once the Pro gets there, then the trick is to argue that a ban, rather than targeted regu‑
lation, is a better solution to the problem.

1.2.3 Developing Ban Key Warrants

I think you can loosely separate out ban key warrants into two buckets: arguments for
the necessity of a ban and arguments for the shortcomings of regulation. While there is
obviously quite a lot of overlap between these arguments, it makes sense to distinguish
these in some sense. I think the ban key arguments are going to center more around
the message that a ban sends, while the regulations fail arguments are going to be a bit
more about the details of how regulations fail.

Let’s start with the value of a ban. This is where it’s important to develop, perhaps
with some hyperbolic language, just why biometric recognition technology (and the
collection of personal data from said technology) is so dangerous. The more dangerous
biometric recognition technology is, the stronger the case for a ban. If there are few
upsides to such a technology, then regulation necessarily becomes less attractive.

Similarly, bans might be important because they send an important signal or message to
society that these technologies are deeply harmful and must be stigmatized by society
such that their acceptance never happens.

These arguments are decent (and I think the Pro should take care to develop good argu‑
ments for why biometric recognition technology is really dangerous in order to bolster
the “ban key” part of their case), but I think the smarter arguments for the Pro are going
to be about the shortcomings of regulations.

You have the standard slate of arguments against regulations: that they’re too slow, that
some exception will inevitably be overlooked, that some use case will be accidentally
included, that loopholes will be exploited, and that regulations will inevitably be drafted
to favor corporate interests. But the better arguments, in my
opinion, will be about normalization and creep.

Normalization means that we accept some of the more benign uses of biometric recog‑
nition technology as inevitable and just learn to live with the downsides. Regulations
will inevitably normalize the use of such technology, even if just in some limited way,
which means that future abuse will likely occur. Creep means that the basis for the
regulation can easily be expanded and the technology will get deployed in new, unpre‑
dictable ways. There are lots of good pieces of evidence in the packet to substantiate
these arguments and I would highly recommend spending some time sifting through
them to get a good understanding of them.

I think once the Pro has their favorite “ban key/regulations fail” arguments down, then
the Pro is in a good spot to win some debates, especially if they can muster some clever
spin on these common deficits to regulations coupled with some good examples.


1.3 Con Thoughts

1.3.1 Two Types of Regulations

The purpose of arguing for regulations is to undercut uniqueness for the Pro—arguing
that you can capture the same benefits (i.e., mitigating or eliminating the downsides of
totally unregulated biometric recognition technology) while allowing the use of biomet‑
ric recognition technology for certain beneficial uses.

Yes, arguing in favor of regulations does technically count as reading a counterplan,
and as I’ve written about previously, counterplans are obviously good and should be
included, silly rules aside. Truthfully, without the regulations counterplan, the Con is
hosed—there is simply no one who thinks totally unrestrained biometric recognition
technology is good. But I think there are rhetorical tricks teams can use to disguise the
fact that regulations are, indeed, a “counterplan.” Introduce the idea as “Instead of…”
or “An alternative could be…” Using this language helps avoid guilt by association.

There are so many different types of regulations out there. One could regulate the agent
(who uses biometrics), the type of biometric recognition technology, the use case, the
circumstances in which it is deployed, the location it is deployed, etc. I don’t think it’s
necessary to go into too much depth over every potential regulation because there are an
infinite number of regulatory proposals out there worth exploring yourself. Instead, I’ll
highlight the two types of regulations that I think are worth exploring: very permissive
and very restrictive regulations.

Why only two types of regulations? Because the Con faces a tradeoff when considering
the scope and scale of the regulation. The smaller the exception (or the more restrictive
the regulation), the smaller the size of the net benefit; the larger the exception (or the
more permissive the regulation), the larger the size of the net benefit. However, that
comes at a cost. While the smaller exceptions will be harder to win a deficit against
(e.g., creep, loopholes, etc.), if the Pro manages to win one of the deficits, then it’ll almost
certainly outweigh the net benefit. The larger exceptions will be much easier to win a
deficit against, but the Con will have a larger net benefit to weigh against the deficit.

That was quite jargon laden, so let me illustrate with two examples.


1.3.2 Two Examples

Example 1 (Restrictive): The Con argues we should create an exception just for FRT to
detect certain medical conditions and ban biometrics in all other circumstances. That
is going to be a tiny net benefit. In other words, the value of FRT in detecting very
uncommon medical conditions is quite small.

The value of this? It’s much harder for the Pro to argue that specific use of FRT is bad,
and it’ll be tough for the Pro to win that the use of FRT in detecting certain medical
conditions causes mass surveillance that destroys all privacy. If the Con is ready to
defend this argument, then this seems like a reasonably strategic choice for the Con.

However, if the Pro wins that there is a sizable risk that allowing it in the medical field
sets a precedent for the use of FRT for promoting public safety and could possibly nor‑
malize FRT in other settings which leads to the erosion of privacy writ large, then that
argument is almost certainly going to outweigh the value of FRT in that limited medical
setting.

Example 2 (Permissive): The Con argues we should regulate FRT to maintain a mini‑
mum standard of accuracy and consistency, ensure data transparency, provide oppor‑
tunities for meaningful consent, and require third‑party oversight on the use of FRT and
storage of data. The Con then argues that FRT is necessary for law enforcement to do
their job.

The value of this? The value of FRT is far greater here—the importance of crime fight‑
ing or counter‑terrorism is going to be much larger than the value of detecting some
uncommon medical conditions.

Here, the purpose of the regulations is less to ban FRT in a lot of settings and more to
try and solve some of the specific downsides of FRT, e.g., accuracy or privacy concerns.
This enables the Con to try and win on arguments for why FRT is good in the abstract
while avoiding some of the more common criticisms the Pro is likely to make.

1.3.3 What Does This Mean for the Con?

Consequently, the Con gets to set the stage—if they choose to focus more on restrictive
regulations, then the debate will almost certainly be about whether a ban is key; if they
choose to focus more on more permissive regulations, then the debate will be more
likely to be about the particular use of that biometric recognition technology.


If I were the Con, I’d almost certainly choose the former. I want as many debates as
possible that are less about whether FRT is good or bad and more focused on whether a
ban is necessary.

This also provides insight into how the Pro should deal with these arguments. The
narrower the exception, the more relevant argumentation will be about how regulations
fail (because it’ll be hard to win that specific use of FRT is bad), thus a ban is needed; the
broader the exception, the more relevant argumentation will be about how dangerous
FRT is (because it’ll be irrelevant if you win that regulations fail if the value of FRT
outweighs the downsides).

1.3.4 Why Not Other Regulations?

The pairing of the regulations with the specific arguments about the value of FRT is
important. For example, you cannot have highly restrictive regulations with a very
powerful argument about why FRT is good. If your argument is that FRT is useful
for combating terrorism because it enables the government to pick a single face out of
a huge crowd which requires real‑time monitoring of public spaces, you cannot pair
that with a regulation that restricts real‑time facial recognition. In this case, you cannot
access the value of FRT since your restriction is severe.

On the flip side, if you have very permissive restrictions, you don’t want to pair them with
arguments about how biometrics are useful in only very limited circumstances, e.g.,
when a company wants to use fingerprints as a login for employees. In this case, the
Pro will quickly win that permissive regulations fail and that the harms of biometrics
far outweigh the small benefits of biometrics.

And in the middle, you don’t want medium regulations with middling arguments for
the value of FRT because that just gives the Pro the option of attacking from both
sides (arguing that regulations fail and that the harms of biometrics outweigh).

1.3.5 Potential Regulations

While I personally think that the best Con arguments will be much more along the lines
of restrictive regulations coupled with very narrow use cases, I suspect that strategy
will be less common compared to strategies that only loosely govern biometrics (mostly
because I think debaters naturally gravitate towards trying to generate as much offense
as possible without considering the relative strength of that offense compared
to what their opponent is likely to say).

If I were to take the strategy of having fairly permissive regulations, I’d suggest leaning
all the way in on biometrics good. Just become a hardliner on these issues. FRT stops
terrorism, it captures criminals, it finds kidnapping victims, it is the greatest tool for
public safety ever! While I think that is obviously silly, it is not that hard to win in
the context of a debate round and I could see well prepared Con teams winning a few
rounds on this strategy. And I could see the value of popping a few Pro teams with
this argument if they spend too much of their case winning that a ban is better than
regulations but not enough time explaining why biometrics are so bad.

However, I would much rather take the more restrictive approach. I would want to im‑
pose a set of regulations so strict as to avoid most of the common Pro objections they
could raise. I mentioned the medical example above, but other potential regulations
include limiting it to only when there is a warrant out for an arrest for certain crimes,
limiting it only to finding kidnapping victims, utilizing it only in instances where peo‑
ple have actively consented to use it as a part of their employment and when security is
paramount, or other circumstances when biometric recognition technology would hardly
ever be used.

After you pick a regulation, the only thing left to do is to spend some time going
through and blocking out all the common Pro arguments against regulations, and I think
it shouldn’t be too hard to win some debates as the Con.

1.4 Conclusion

I’m not a fan of this topic, but I guess the value of extreme topic rotation is that even if
the topic is not great, you only have to debate it for a month. I’d spend a while work‑
shopping various regulatory proposals with your team because that’s where I think all
of the interesting debate is going to lie. And even if the topic itself isn’t that great to
debate, it does tap into a fun area of literature and I hope you enjoy reading through
the literature as much as I did!

2 Topic Analysis by Jacob Nails

Jacob Nails is Head Researcher at Victory Briefs. He debated 4 years for Starr’s Mill
High School (GA) in Lincoln Douglas debate, graduating in 2012. As a competitor,
he won the Georgia state tournament, cleared at NSDA nationals, and qualified to
the TOC. In college, he qualified twice to the National Debate Tournament in policy
debate. Jacob has 7 years of experience coaching LD debate, including coaching
debaters to top seed at the TOC, as well as Top Speaker awards at Harvard, Yale,
and Bronx. He has taught at over twenty sessions of the Victory Briefs Institute.

2.1 Intro

The United States Department of Homeland Security holds data on over 260 million
unique identities and processes 350,000 biometric transactions per day.¹ By its stan‑
dards, these biometrics constitute “unique physical characteristics, such as fingerprints,
that can be used for automated recognition.” The uniqueness of individuals’ finger‑
prints, retinal patterns, facial features, and so on creates the theoretical possibility that
a government, corporation, or other entity could collect data on these characteristics
from across the population and then harness this data to identify, monitor, or verify a
person’s identity, potentially without their consent or even knowledge. Technological
advancements increasingly render this possibility not merely theoretical but practical.
Facial recognition technologies in particular have generated considerable debate as of
late, with diverse interest groups calling for bans.² While facial recognition will likely
arise as a central example in many debates, the upcoming topic will ask debaters to
analyze biometric recognition technology as a whole in its many forms and determine
whether a ban of this emerging technology is the appropriate response. In what follows,
I present a list of core strategic suggestions I believe debaters should consider when
determining their approach to this topic.

¹Department of Homeland Security. “Biometrics.” 14 December 2021. JDN.
https://www.dhs.gov/biometrics#:~:text=Biometrics%20are%20unique%20physical%20characteristics,be%20used%20for%20automated%20recognition.
²Fight for the future. “Ban Facial Recognition.” 2019. JDN. https://www.banfacialrecognition.com/

2.2 The topic is a “ban” topic first, and a “biometric” topic second.

Having debated, judged, and coached many prior topics concerning government bans
and prohibitions on numerous technologies, I wager that the single most common over‑
sight of debaters on this topic will be to undervalue the operative verb–“ban.” Many
debaters will mentally shorten the resolution to “Biometrics: Good or Bad?” and this
simplification will be reflected in the cases they construct. While discussing whether
biometric technologies are good or bad in the abstract will roughly reflect the core Pro
and Con arguments on the topic to a first approximation, many of the strongest and
most nuanced arguments will focus not on their generic goodness or badness but more
specifically on whether a federal ban is the appropriate policy response. A case that
is not constructed with these arguments in mind will most likely contain glaring holes
that well‑prepared opponents can exploit. I view this first point as being far and away
the most significant piece of advice I can impart. Accordingly, I have elected to give it
sub‑points to underscore its importance, as any seasoned debater appreciates the value
of sub‑points.

2.2.1 The Con side will have greater control over the direction of the debate.

Because the Pro side is locked into roughly the most radical policy available—the
most severe mechanism (a ban) covering a broad swath of technologies (biometrics)—
flexibility will rest with the Con side. The Con could take the far opposite tack and
defend the goodness of essentially all biometric technologies, but nothing in the topic’s
wording necessitates this. A vast range of middle ground stances exists, and it’s the
Con’s for the choosing. The Con could argue that the dangers of biometrics are real
but will be better dealt with through piecemeal regulations than out‑and‑out bans,
and they could even meet the Pro side most of the way in agreeing that biometrics
are most often harmful, but that they have a few specific uses and thus merit less
than a full ban. If one side will benefit from having multiple or modular cases on this
topic, particularly when speaking second, it will be the Con. Different Pro cases will
have better or worse match‑ups versus cases taking the form of “Biometrics are great,
actually” versus “Tailored regulations are more effective” versus “Some exceptional
use cases exist,” and the Con has the room to adapt accordingly.

2.2.2 The Pro case must be robust against different angles of attack.

A strong Pro case will not be able to be summarized as “Biometrics are bad.” That is
one component of the Pro stance, but far from sufficient. The Pro must answer at least
two additional questions: (1) Why ban? Why not a regulation? Or moratorium? Etc.,
(2) Why biometrics as a whole? Why not just specific problematic technologies?

Isolated examples, for instance, will naturally be much weaker on the Pro side than the
Con. Showing a few positive uses can go a long way toward proving something should
not be entirely prohibited, but showing a few bad abuses of the same thing does not on
its own justify a sweeping ban. A Pro contention focusing entirely on one specific use
or type of biometric recognition and its faults, for instance, will be exceptionally weak
versus a Con that simply grants that regulations may be needed for that particular use
while showing positive benefits in other industries or applications.

A strong Pro contention will not just identify a case where biometrics have harms
but rather identify a systematic harm afflicting biometrics across the board, and an
intractable harm that can’t be fixed short of banning its existence entirely. Short of
doing so, the Pro side will have a weak defense of why the exact stance the topic asks
them to defend is the correct one and risks strategic cooption from the Con.

2.2.3 The Pro side will benefit from keeping the debate focused on potential
uses, not intended uses.

The preceding two points seem to paint Pro ground in a stark light, and that’s true to
an extent, but the Pro side has strategic recourses available. If the Pro side is tasked
with justifying a sweeping conclusion, then it behooves debaters on the Pro to paint
the controversy as being inherently sweeping. If biometrics is by its nature an all‑or‑
nothing matter, then many of the strategic Con tactics aimed at claiming the strategic
middle ground are ruled out.

How can the Pro accomplish this? I believe a strong Pro case will strive to keep the
focus centered on biometrics’ *potential* as having inherent dangers. The mantra of
the Pro side is “Don’t let the genie out of the bottle” because the threats of biometrics
are infectious, and even allowing their development for singular well‑intentioned uses
or in limited circumstances carries the risks of broader abuse. The Con side can find
seemingly noble isolated cases: perhaps the government should only use the data it col‑
lects to surveil terrorists, or catch human traffickers, or deliver cake to people when it’s
their birthday. All of these purposes nonetheless require the government to catalogue
sensitive data on hundreds of millions of people, guaranteeing it has the potential to
surveil and identify targets for other purposes should it choose to, and it’s that poten‑
tial wherein the harm lies. Likewise, restricting the government from using 3 out of
every 4 forms of biometric recognition doesn’t mean one has reduced the threat by 75%.
A single database for a single biometric characteristic will be sufficient for identifying
whomever the government sees fit.

One term worth knowing here is “mission creep,” the idea that once a technology like
biometrics is legitimized in even highly limited circumstances, its use becomes normal‑
ized and the range of applications gradually expands. If biometrics have this naturally
infectious property, and if biometrics once fully realized pose grave dangers, then it may
be that the only solution is to stop them in their infancy.³ The preceding sub‑point noted
that isolated examples will fare poorly in justifying the Pro side, and on their own they
will, but arguments like mission creep can serve to round out otherwise insufficient
arguments that might be strong in their own context but overly narrow. “Biometrics
are sometimes very bad” won’t justify a ban in itself, but coupled with “and we can’t
effectively discriminate the good uses from the bad” it might.

2.2.4 Be cognizant of the interrelation between your own offense and defense on the Con.

I have in mind one common mistake many debaters make on topics in general, but
especially on “ban”‑style topics, and I will outline it by way of example.

Suppose the topic calls on the Pro side to ban nuclear weapons. They cite the standard
risk of escalation to a nuclear war. The Con side presents two main arguments: firstly,
that nuclear weapons help deter conventional wars through risk of escalation to mutu‑
ally assured destruction (MAD), and secondly, that the downsides of nuclear armament
could be resolved through a “No First Use” (NFU) doctrine in which one agrees not to
use nuclear weapons except in retaliation against a nuclear attack. What is the weakness
here? (Now would be the time to ponder before reading further.)

³Adam Schwartz (senior staff attorney). “Mistakes, misuse, mission creep: Biometric screening must
end.” The Hill. 18 July 2017. JDN. https://thehill.com/blogs/pundits-blog/technology/342586-mistakes-misuse-and-mission-creep-biometric-screening-must-end/

The title gives away the answer, sorry, but what’s a sub‑point without a title? The nega‑
tive’s two main arguments butt heads with one another. MAD and NFU are both strong
arguments against banning nuclear weapons in their own right, but not together. If the
main function of a nuclear arsenal is to threaten other countries with nuclear escalation
so they don’t attack, then the last thing one would want to do is to promise to never
use nuclear weapons in response to conventional attack. Any NFU promise credible
enough to achieve its aims would be credible enough to decimate the benefits of MAD.

Here we see the Con side trying to scoop the coveted middle ground between Yay Nukes
and Boo Nukes by pointing out that tailored regulations (an NFU doctrine) can exist for
the technology short of an outright ban. The danger, if one isn’t careful, is that stepping
toward the Pro side to dodge the opponents’ offense can often leave the Con standing
close enough to the Pro to get hit with their own offense. I’d call this the “linking to
your own net benefit” problem.

What might this issue look like on the upcoming topic? Suppose the Con’s primary of‑
fensive argument centers around technological innovation and how bans on emerging
technologies will put the United States behind in the race to be a world leader in them.
And then the Con answers the Pro contentions with arguments like “state‑level govern‑
ments will crack down even if the federal government does nothing,” or “temporary
moratoria will avoid dangerous and shortsighted applications.” These arguments may
severely damage the Pro contentions (whatever they might be), but they cause blow‑
back and undermine the neg contention in turn. If domestic biometrics are inevitably
hamstrung by regulations at other levels or delayed by a moratorium, then leadership
in emerging technology will also be shot.

The Con can easily fall into this pitfall when the topic allots them a plethora of stances to
take and a variety of offensive arguments to choose from, so how do they take care not
to? Cross‑reference your core offense against your own vision of the world. Firstly, ask
yourself, what’s your main problem with a biometrics ban? Is it harming the economy,
preventing counter‑terrorism, restricting law enforcement, etc.? Secondly, what do you
think the state of biometrics ideally looks like if not a ban? Minimal restrictions, state‑
level regulation, case‑by‑case bans, or what? Thirdly, and the key question, will the
problems outlined in your first answer still exist in the world you imagined in your
second answer? If so, either revise your offensive anti‑ban arguments to focus on an
issue that won’t apply to yourself, or revise your vision of the Con’s world so that it’s
one that avoids those arguments.

As often as I’ve seen one side commit this sort of error, the even greater certainty in
my experience has been that the opposing side will let it slip by when it happens, and
so this sub‑point should ultimately be read primarily as a cautionary lesson to the Pro.
For whatever reason, this argument flies under the radar more than almost any other. I
suspect it’s that the problem doesn’t lie with any one singular argument, and so even
a debater meticulously asking of each of their opponent’s arguments one‑by‑one “Does
this seem true? Why might this be wrong?” can overlook problems in their checklist
that arise from the conjunction of two arguments. I recommend that in any scenario where
the Con stance is something less than an enthusiastic “We love biometrics in all its forms,
unmarred by any regulation,” the Pro side should accustom themselves to asking “But
don’t you cause this too?” for every Con argument as the answer will not infrequently
be Yes.

Lastly, when thinking through these muddy middle scenarios, it can be important to
keep in mind the relative size of the effect in each direction. If one thinks of degrees
of regulatory stringency as existing essentially along a spectrum between two compet‑
ing values (say, total privacy versus uninhibited economic efficiency), then it will usu‑
ally be the case that each value is maximized at its respective pole (a ban best achieves
privacy, while minimal regulations best ensure efficiency) and other more moderate
stances achieve some measure of each. However, there is no necessary reason to as‑
sume that “half‑way” measures will achieve half as much privacy and half as much
efficiency. The Con, for example, could accept that regulations might complicate their
own offense by somewhat impeding efficiency but nonetheless argue that well‑tailored
regulations would only reduce efficiency slightly while severely mitigating biometrics’
worst effects, thus striking the best balance of values even if regulation links a little bit
to its own net benefit. Conversely, one Pro‑leaning framing would be to suggest that
regulations severely undercut the effectiveness of biometrics at producing positive out‑
comes while only minorly mitigating the downsides relative to a ban, thus making the
middle ground solution the worst of both worlds. Controlling the debate regarding the
size of the link can be especially relevant when the debate between final impacts is close.
For a judge inclined to think both privacy and security have value, the most strategic
move can be to paint the opponents as trading off a lot of one for only a little of the
other.


2.2.5 If you define one word, it should be “ban.”

Don’t take for granted the good will of your opponents. They’re in a zero‑sum game
with you. You might assume you have a common interpretation of the topic over which
to debate, but your opponents are dirty cheaters who will steal bases on you if you let
them, and you should be prepared. “Ban” jumps out to me as the easiest place on this
topic for clever debaters to legalistically gerrymander the ground in their favor, in part
due to its significance. Might one side argue that a “ban” only need apply to the private
sector, so as to avoid Con arguments about criminal justice and national security or Pro
impacts about government abuse? After all, when one discusses “banning” guns, they
don’t envision disarming the police. Is one “banning” something if it is prohibited for
a period of time? That question will determine which side of the topic moratoria fall on.
Is a “ban” still a ban if it contains some exceptions? If so, just how large an “exception”
could the Pro make for certain cases while still claiming to defend the topic? Are there
even more niche interpretations of “ban” I haven’t conceived of? Probably.

I’m sure *you* would never define the topic in a self‑serving fashion, but your oppo‑
nents will and you shouldn’t trust them. Even if your case relies on fairly mundane
assumptions about what the topic means, those charlatans won’t always accept your
mundane assumptions, and so you should have a qualified definition on hand spelling
them out explicitly. This need not always be in the case itself, but it should be nearby
so that you can break the glass and pull it out if required.

The portion of this analysis devoted to the contours of the “ban” debate hopefully drives
home my thoughts about where this topic’s strategic nuances lie, but I do have a few
other pieces of advice.

2.3 The Pro should justify why privacy matters, and how much.

Most people think privacy is pretty nice. But how nice is it? Dig into that question one
layer deeper than you think you need to. Many of the strongest Pro arguments will
boil down to some flavor of privacy considerations. Keeping in mind point 1C above,
privacy is one of those issues that strikes at the heart of collection of biometric data as
a concept, rather than any particular use or application, and as such will be among the
most strategic of Pro impacts. The “Biometrics: Good or Bad?” portion of this resolution
is in very large part a “Privacy: Important or Not?” discussion.


The Pro should not rest their case on the mere invocation of privacy as a value. Privacy
has an intuitive appeal that can make it seem unnecessary or frivolous to spell out in
detail why privacy matters. Do it anyway. Your goal is not just to establish that privacy
matters, but that it matters so much as to outweigh other pressing concerns, and that
requires talking it up to its fullest extent. Is privacy an intrinsic value that most of us
recognize regardless of its further effects? Is it essential to safeguard other liberties like
freedom of expression? Does it provide a strong check on tyranny? All of the above?

2.4 The Con should become skilled in the art of the reversal
test.

The reversal test⁴ is a cognitive heuristic most strongly associated with philosopher and
Oxford professor Nick Bostrom that attempts to counteract status quo bias. To slightly
oversimplify the test: if someone strongly believes “I don’t want more of that thing,”
reverse the framing and ask, “well, do you want less of that thing?” If the answer is
still no, then their view begins to look less like a consistent opposition to that thing and
more like a psychological bias against change (though in principle, they could have a
good reason for thinking that the status quo level of the thing just happens to be exactly
correct already).

To put the argument in a more concrete context, the test was originally leveraged by
Bostrom against opponents of a different emerging technology—human cognitive en‑
hancements. Many people express opposition to cognitive enhancements and a skep‑
ticism that the average human life would be better if science had the means of im‑
proving brain function—ignorance is bliss and a little knowledge is a dangerous thing,
the sayings go. To test these intuitions, Bostrom proposed reformulating the question:
if cognitive disenhancement technologies were developed, would it be good to widely
adopt them? Naturally, most average people also have a strong opposition to their brain
function being turned down a notch. The seeming conclusion is that most people who
oppose enhancement technologies on the grounds that higher cognitive function would
be worse are engaged in rationalization—their preference isn’t consistently in favor of
lower cognitive function, only consistently resistant to change.

What are the applications to the current resolution? The Pro side will often find them‑
selves in a similar position to the opponents of transhumanism Bostrom was addressing.
When confronted with a new and frightening technology, they claim to reject it on prin‑
cipled grounds (e.g. that privacy should be paramount), but what if their beliefs merely
reflect a fear of change?

⁴Nick Bostrom and Toby Ord. “The Reversal Test: Eliminating Status Quo Bias in Applied Ethics.” Ethics
116 (July 2006): 656–679. JDN. https://nickbostrom.com/ethics/statusquo.pdf

I believe the reversal test offers an effective analytic tool for Con teams to disentangle
privacy concerns from generalized fear of the unknown and examine whether privacy
alone is a strong enough rationale to stand on its own merits. So how would the negative
apply the test in the resolutional context? Many affirmatives will argue, for instance,
that privacy is a fundamental value that should not be traded for safety (most likely,
they also read this topic analysis where I just advised them to do so). As the Con, you
may want to insinuate that they’re less consistent believers in privacy than they let on.
And so, you should take the debate out of contexts where we are dealing with new
technologies that threaten to decrease privacy to increase security, and resituate it in a
context where society has already come to widely accept a decrease in privacy, and then
ask whether we would want to reverse that.

Imagine, for example, that a security‑focused government chose to assign its popu‑
lace state‑issued identification badges so that its officers could demand to check a cit‑
izen’s badge whenever they suspected them of criminal activity. Well, that’s basically
a driver’s license. Or, more closely tailored to bodily identification, does the Pro think
that the Justice Department should end the practice of fingerprinting that has been in
use for over 120 years in this country (and as far back as 300 BC in China)?⁵ Were the
many convictions of criminals using such means unjust?

These examples are not knockdown objections, of course, and the opposition may
choose to bite the bullet, to argue that there exists some relevant distinction, etc.
However, I believe that framing privacy trade‑offs in contexts that are familiar rather
than foreign will make them appear much more palatable for most and may expose the
Pro as being a much more fair‑weather fan of privacy than they seemed.

⁵Jeffery Barnes. The Fingerprint Sourcebook. Office of Justice Programs. July 2011. JDN.
https://www.ojp.gov/pdffiles1/nij/225321.pdf

2.4.1 Prepare to coopt the other side’s primary offense with turns.

I anticipate that many rounds on this topic will be resolved on a broad and deep‑seated
clash between two fundamentally opposing worldviews: privacy versus security, rights
versus welfare, due process versus law and order, and so on. But sometimes, it can be
strategic to sidestep those debates entirely, or at least build yourself an escape hatch that
allows you to, in case they’re not going your way. To that end, one genre of argument
that can have great practical utility is any point that shows the opposing side’s solution
to fail on its own terms. If, for example, the Pro side shows that sacrificing liberty to pur‑
chase safety would result in neither liberty nor safety, then they give the judge a reason
to vote Pro regardless of which of liberty or safety is proven more important. Includ‑
ing an argument that fits this criterion into your speech can help preserve a strategic
back‑up plan and offer a second path to the ballot.

So which arguments can serve this function? For the Pro, the main class of argument
that comes to mind is the one relating to bias. If the data collection and/or analysis is so
flawed that it produces systematically inaccurate results, then it fails on its own terms
as it doesn’t achieve its own purpose. Much literature has been penned on the accuracy
(or lack thereof) of machine learning algorithms. One upside to this argument is that
it can also generate independent impacts of its own—for example, if the bias is along
racial lines, it might also perpetuate inequity as well as inaccuracy. A downside is that
a strong version of this argument will want to show that these biases are so ingrained
and intrinsic to the process that better regulation or more technological advancement is
unlikely to solve them (so as to justify a full “ban”), and of course only a subset of the
literature goes that far.

For the Con, I expect one way to generate turns will be to question what current practices
biometric recognition is likely to replace. Might those be even worse? Does thumbprint
recognition help one protect one’s digital privacy relative to traditional text passwords?
Will police rely more on gut suspicions or eyewitness testimony without biometric ev‑
idence, and might those have biases of their own? And so on.

2.4.2 What about other countries?

The topic is about the United States, but it may still interact with foreign nations in nu‑
merous ways, and exploring the international elements may generate unique forms of
offense or defense. Could the United States position itself as a world leader on privacy
(or perhaps security) via its stance on biometrics? Does it need lax regulations to out‑
compete more unsavory states in this emerging field, or will its own lack of restraint
signal a greenlight to others that they can do the same? What about biometrics’ use in
counterterror? While the primary controversy is domestic, more global concerns also
abound.

And just one last time, because I'm not sure it has been sufficiently emphasized: it's all
about "ban." You heard it here first.

3 Topic Analysis by Amadea Datel

Amadea Datel is a junior at Dartmouth College who debates in college policy and
also previously competed at Columbia University. She has reached the quarterfinals
at the Gonzaga Jesuit Debates and has won the University of Minnesota College
Invitational, the Crowe Warken Debates at USNA, and the Mid America Champi‑
onship, ranking as the 25th team nationally last year. In high school, she built and
coached her school’s LD debate team, won several tournaments in Massachusetts,
and was the top speaker and a semifinalist at the MSDL State Championship and
the first student from her school to qualify for NSDA and NCFL Nationals, clearing
at the former. She is currently an Assistant Coach at Apple Valley High School and
has been an LD instructor for the Victory Briefs Institute over the past two summers.

3.1 Introduction

The April PF topic centers on the conflict between liberty and national security by focusing
on a growing concern: tools that collect identifying information about body measure‑
ments and human characteristics, otherwise known as biometric recognition technolo‑
gies. Despite the timely topic area, the resolution is poorly worded since it requires
the pro to defend a ban on such technologies while failing to specify the terms of that
ban, which will lead to unpredictable definition debates. Nevertheless, I’ll do my best
to break down the background context behind the topic, the wording, and the most
common arguments on both sides.

3.2 Background Context

“Biometric recognition technology” refers to a systematic and scientific basis for human
identification based on two premises about body traits: distinctiveness (meaning that
they must be individual and easily distinguishable from the traits of other people) and

permanence (meaning that they must remain unchanged throughout people’s lives).¹
Interest in such technology is growing because of its increasing ability to easily and
repeatedly recognize individuals, which can streamline business processes in several
economic sectors. Biometrics can reduce error rates and improve accuracy in identity
detection, reduce fraud and opportunities for circumvention, decrease operating costs,
improve scalability, increase physical safety, and improve confidence.² Due to such
potential, the global biometric market is projected to top 50 billion dollars by 2024, with
North America representing over 30% of that share.³

Biometrics first gained prominence in the 19th century with the Bertillon System, which
collected measurements of bony portions of the body – including skull width, foot
length, and left middle finger length – to identify criminals. Human fingerprints later
replaced this system as the accepted method in forensic investigation and have since
remained the most successful and popular of all biometrics. Since the pores and land‑
mark points in ridges of fingerprints are unique to each individual and easy to collect,
all forensic and law enforcement agencies use the system.⁴

Yet increasing concerns over terrorist activities, security breaches, and financial fraud
have motivated law enforcement agencies to explore other human characteristics that
hold promise for identification, despite the technical difficulties accompanying them:

1. Faces: The spatial relationship between different facial features varies between indi‑
viduals, but facial recognition suffers from issues with illumination, gesture, makeup,
and occlusion.

2. Irises: The colored ring that surrounds the pupil contains complex texture patterns
with individual attributes, but sensor costs remain high and the system suffers from
failure‑to‑enroll errors.

3. Palmprints: These are similar to fingerprints but have become available much more
recently and have not yet been deployed in civilian applications due to the large physical
size of the capture devices.

4. Voices: These are highly suitable for apps like telebanking but are sensitive to back‑
ground noise and playback spoofing.

5. DNA samples: These are popular in forensics and law enforcement but require tangi‑
ble hair, fingernail, saliva, or blood samples and cannot be processed in real time.

¹http://biometrics.cse.msu.edu/
²https://dataprivacylab.org/TIP/2011sept/Biometric.pdf
³https://www.thalesgroup.com/
⁴http://biometrics.cse.msu.edu/

6. Hand veins: Vein patterns are stable for most adults, although the system cost and
lack of large‑scale studies on vein individuality and stability have prevented the estab‑
lishment of a national vascular biometric system.⁵

In addition to such physiological characteristics, law enforcement has at times used be‑
havioral traits – such as signature, gait, and keystroke dynamics – but their distinctive‑
ness and permanence remain weak. For example, people’s signatures tend to change
over time, making them an unreliable basis for identification.

Along with considering these types of drawbacks to particular methods, law enforce‑
ment must also take into account the identification application when selecting their
technique of choice – mobile phones might lend themselves to voice biometrics, while
fingerprints are popular for computers.⁶

So far, I have focused on the role of biometric recognition technology in law enforcement
agencies, but debaters should be aware that these technologies span a wide range of
industries, from the military to the commercial sector, and both the pro and con can
read impacts to any one of these industries.

One such area is the military – although we have little knowledge about how defense
agencies use biometric data due to the sensitive nature of that information, the U.S. mil‑
itary has been collecting faces, irises, fingerprints, and DNA data in a biometric identi‑
fication system since January 2009, with the vast majority of identities coming from the
government’s military operations in Iraq and Afghanistan. The Department of Defense
has further reported arresting or killing 1,700 individuals between 2008 and 2017 based
on biometric matches.⁷

Another prominent area for biometric technology use is border control, travel, and mi‑
gration. Debaters might be familiar with the e‑passport, a biometric travel document
that provides authentication by comparing individuals’ faces and fingerprints to those
in their passports at immigration checkpoints. The process can speed up border cross‑
ings and prevent people from attempting to use others’ passports to gain unauthorized
entry into countries.⁸

Additional uses for biometric technologies include healthcare (particularly in various
European, Middle Eastern, and African countries, where citizens carry national cards
to confirm their identity before accessing governmental services or healthcare); civil
⁵Ibid
⁶Ibid
⁷https://www.thalesgroup.com/
⁸Ibid

identity, population registration, and voter registration; physical and logical access
control (which prevents unauthorized individuals from accessing facilities, computer systems, and
networks, with a prominent example being the facial recognition devices introduced on
the iPhone X in 2017); and the commercial sector (which include banks, fintech organiza‑
tions, and telecom operators who use the technology to complete mandatory customer
checks faster and more efficiently).⁹

3.3 Interpreting the Topic

Two terms in the resolution are critical in determining the division of pro and con
ground: “collection of personal data” and “ban.”

The phrase “ban the collection of personal data through biometric recognition technol‑
ogy” seems to imply that biometric recognition technology can – but does not necessar‑
ily have to – collect personal data. However, since “biometrics” refers to “the measure‑
ment of an individual’s unique physical characteristics and the matching of those char‑
acteristics against previously recorded information to determine a person’s identity,”¹⁰
such technology must collect data to serve its purpose. Biometrics would become use‑
less without the ability to collect personal data, meaning that the resolution results in a
de facto ban on biometric recognition technology, creating a steep burden for the pro.

The pro might have a little more recourse when it comes to the term “ban.” Both sides
should interpret this term to their advantage – several dictionaries define “ban” as a
“prohibition…by statute or legal order” but do not specify whether bans can be partial
or conditional. Pros could write cases that ban a specific type of biometric technology or
create a ban under certain conditions (such as over a limited time period), while the con
should find definitions that establish that bans must be complete and unconditional so
that such proposals can become PICs (plan inclusive counterplans): positions that defend
the pro's ban minus one exception and argue that the exception alone justifies a con ballot.¹¹

The term “ban…collection” also raises a question about whether the pro’s ban would
affect previously collected data (and force governments and companies to delete the
information) or only prohibit actors from collecting future data. This question seems to
hinge on whether “collection” refers to the verb form of the word (the act or process of

⁹Ibid
¹⁰https://heinonline.org/
¹¹https://dictionary.findlaw.com/definition/ban.html

collecting, which would mean the resolution would only impact the action of collecting
data) or the noun form (collection as in the accumulation of objects, which would man‑
date that the pro eliminate pre‑existing collections of data). The meaning is ambiguous
given the wording of the resolution, but either way, determining that the ban would
only implicate new data seems to undercut pro solvency and con offense to an equal
extent, because both the benefits and the drawbacks of previously collected data would
remain.

The one scenario where determining that the resolution leaves current data intact would
hurt the pro more than the con would be in the context of a perception‑based argument
that only applies to future data collection. For example, the con could read an economic
contention that claims that banning data collection will crash confidence in the biomet‑
ric technology market. The fact that the pro leaves existing data intact wouldn't answer
these impacts, because the ban would still shut down efforts to expand the market,
which is sufficient to trigger the link. However, it would undercut the pro's sol‑
vency because violations of privacy and surveillance through data collection (the most
common pro argument) would still occur due to the existence of current data.

3.4 Pro Arguments

3.4.1 Human Rights Violations

Almost all pros on the topic will read some variation of the human rights violations
argument – that biometric recognition technology will allow the government to infringe
on its citizens’ constitutional rights, to the particular detriment of minorities.

The irrevocable link between the collection of biometric data and the persistent infor‑
mation record about individuals creates serious privacy issues, which will generate so‑
cietal distrust of the institutions that deploy the technology, preventing the con
from accessing arguments about the benefits of such tools because they become useless
absent public acceptance.¹² The EU has agreed that the implications for privacy rights
could lead to “a feeling of constant surveillance and indirectly dissuade the exercise of
the freedom of assembly and other fundamental rights,” even if the government never
uses the data to violate individual rights.¹³

¹²https://dataprivacylab.org/TIP/2011sept/Biometric.pdf
¹³https://heinonline.org/

Yet biometric data collection also threatens to directly undermine people’s rights
– specifically, their Fourth, Fifth, and Sixth Amendment rights as it is increasingly
integrated into criminal investigations and used as evidence without conforming to
constitutional protections.¹⁴

The Fourth Amendment guarantees that “the right of the people to be secure in their
persons, houses, papers, and effects, against unreasonable searches and seizures, shall
not be violated,” and has expanded to not only protect people from the police but to
restrict all government action. In United States v. Jones (2012), the Court considered
the government’s use of Global Positioning System (GPS) tracking and concluded that
the installation of a GPS device to monitor a vehicle's movements constitutes a search
under the Fourth Amendment, with the concurring opinions repudiating widespread and
warrantless tracking of citizens' movements,¹⁵ demonstrating the threat such technology
poses to privacy rights.

Biometric data possession could also easily lead to “function creep” – even if the govern‑
ment initially restricts its use of the data, it could expand the purposes of such data over
time. Social Security numbers are a case in point. Although the government conceived
them to implement the Social Security System, they slowly crept into other aspects of our
lives to become a universal identifier, as did almost every other government database.¹⁶
By their nature, biometric tools also do not stop with a simple data point like the digital
image of a face – they become integrated with other data sources that support new AI
innovations in criminal enforcement and national security contexts to predict criminal
or terrorist intent.¹⁷

In response to such Fourth Amendment concerns, states have banned certain uses of
biometric technology. For example, Florida was the first state to implement a ban on
the collection of public school students’ biometric data, and Illinois, Louisiana, and Ari‑
zona all proposed or enacted legislation that would require notice and consent before
collecting any biometric information.¹⁸

Biometric recognition technology also threatens the Fifth Amendment, which guar‑
antees citizens due process and the right to refuse to answer questions to avoid
self‑incrimination. In one case, a judge denied an application for a search warrant
that would have compelled an individual to unlock digital devices through biometric
¹⁴Ibid
¹⁵https://heinonline.org/
¹⁶Ibid
¹⁷https://heinonline.org/
¹⁸https://heinonline.org/

identification like facial recognition – but one cannot expect all future rulings to align
with this one.¹⁹

Finally, the Sixth Amendment contains a Confrontation Clause that allows criminal de‑
fendants to confront the witnesses and evidence used against them. Machine sources
of accusation might act as witnesses against defendants under the clause, but "black‑
box" concerns prevent us from understanding how the AI reached its decisions, mak‑
ing it impossible for people to confront the machine. The impacts are significant, as
biometric systems have already led to wrongful arrests and jail time. When used on
Congress members, Amazon’s facial recognition tool falsely matched 28 legislators with
mugshots, demonstrating that AI is still inaccurate – but overconfidence in the technol‑
ogy is dangerous, especially when we cannot interrogate the tools to reach final verdicts
on the innocence or guilt of a defendant.²⁰

Notably, human rights organizations such as Amnesty International and Human Rights
Watch have spoken out against biometric recognition technology, stating that it enables
a litany of human rights abuses by harming people’s rights to privacy and free assembly
and association. It has led to wrongful arrests of innocent individuals and the surveil‑
lance of communities in China, Thailand, and Italy, demonstrating that its potential for
abuse is too great to risk.²¹

Such risk also brings a racial and cultural dimension – biometric tools like facial recogni‑
tion systems are less accurate when attempting to identify racial minorities because they
learn from homogenous sample sets, which leads to greater policing in neighborhoods
with majority‑minority populations and increases the chances of wrongful convictions
among such groups.²² People could also refuse to participate in the data collection sys‑
tems because of religious or cultural norms, creating de facto discrimination against
certain groups whose members cannot travel freely, apply for jobs, or obtain public ser‑
vices.²³

3.4.2 Security

The collection of biometric data could also lead to security issues since cyberattacks on
such systems are not uncommon, especially given that nations are finding themselves
¹⁹https://heinonline.org/
²⁰Ibid
²¹https://www.sciencedirect.com/science/article/abs/pii/S0969476521000849
²²https://heinonline.org/
²³https://dataprivacylab.org/

unprepared to secure rapidly growing biometric data or indicators with existing pro‑
cesses, tools, and technologies. The data is especially vulnerable while being transmit‑
ted across networks to a central storage database, which could allow malicious actors
like terrorists or rogue states to gain access.²⁴

This issue is difficult to solve via advantage counterplans since protective measures that
tighten restrictions (such as requiring encryption for the storage of data in authentica‑
tion systems) risk denying access to legitimate users and cannot guarantee that the un‑
derlying data will not be exposed.²⁵

3.5 Con Arguments

3.5.1 Solving Crime

Biometric data collection holds promise in assisting law enforcement’s efforts to solve
serious crimes, including child trafficking and sexual exploitation.²⁶ One example is the
FBI’s new fingerprint system – Next Generation Identification (NGI) – which recently re‑
placed the Integrated Automated Fingerprint ID System (IAFIS). Since then, the match‑
ing accuracy rate has increased from 92 to 99 percent, and the average response time
has dropped from two hours to 10 minutes. The accuracy rate for latent fingerprints
(those found at the crime scene) also rose from 25 to 80 percent due to the improved
algorithm.²⁷

Other examples exist as well – palm prints are left at 30 percent of crime scenes, so
innovation in that area of biometrics also holds promise in tracking down criminals,
and facial recognition, albeit less accurate, has proven helpful in identifying unknown
subjects who commit identity theft and fraud. DNA evidence can also be analyzed and
compared to offender profiles, with the promise to not only identify the guilty but also
exonerate the innocent.²⁸

²⁴https://www.forbes.com/
²⁵https://dataprivacylab.org/
²⁶https://www.sciencedirect.com/
²⁷https://www.m2sys.com/
²⁸Ibid

3.5.2 Economic Harms

As discussed above, the biometric technologies market is rapidly expanding – and the
pro’s ban would shut down such technologies. Such a shutdown would potentially
create ripple effects across the economy since the biometrics market is intertwined with
other sectors, such as the healthcare, military and defense, and automotive industries.²⁹
In addition to this business confidence style argument, the con could also access more
long‑term economic impacts since biometrics hold promise for increasing economic de‑
velopment: they can improve credit market efficiency, reduce corruption and improve
government welfare payment efficiency, decrease gaming of the system of in‑kind ben‑
efit transfers, and reduce worker absenteeism and data forgery, which could also turn
many of the pro's human rights impacts for marginalized populations.³⁰

3.5.3 Alternatives

The topic tasks the pro with the steep burden of defending that we should ban biometric
recognition systems, rather than merely restrict or regulate them. Naturally, the con should
take advantage of all the available alternatives to a complete ban, especially given that
most authors in the literature prefer to avoid the “nuclear” option of a ban and instead
adopt a more restrained approach.³¹

For example, while the EU's AI Act bans the use of recognition technologies that present
an “unacceptable risk to safety, livelihood, and rights of the people” (defined as those
that manipulate human behavior to circumvent users' free will), it leaves open the po‑
tential to use biometric systems in various situations. It even creates narrow exceptions
for using live facial recognition technology (considered the most intrusive technology)
when such tools are needed to search for a missing child, prevent a specific and immi‑
nent terrorist threat, or identify the suspect of a serious criminal offense. Such situations
must be authorized by a judicial or other independent body which should set appropri‑
ate limits on time, geographic reach, and databases searched.³² A similar model is the
U.S. Congress’s “Facial Recognition and Biometric Technology Moratorium Act,” which
calls for a moratorium on U.S. federal, state, and local use of facial recognition technolo‑
gies pending a Commission study that assesses their impact.³³
²⁹https://www.globenewswire.com/
³⁰https://voxdev.org/
³¹https://www.sciencedirect.com/
³²Ibid
³³https://heinonline.org/

Finally, the EU’s General Data Protection Regulation (GDPR) offers a more moderate
option that could be useful in informing governments how to best adapt Bill of Rights
protections around the new challenges posed by biometrics. Such a model of AI regula‑
tions emphasizes transparency and greater accountability, with Articles 13‑15 address‑
ing a subject’s right to access data and Articles 21‑22 addressing a subject’s right to opt
out of automated decision‑making.³⁴

Con debaters should familiarize themselves with the different alternatives to a ban so
they can select the ones that best avoid the pro’s most common deficit: that any option
short of a complete ban will be subject to loopholes.

3.6 Conclusion

While the topic area is interesting, I imagine that the pro will struggle to both defend
banning all biometric recognition technology (a seemingly untenable position given its
prevalence in our lives) and generate unique deficits to the thousands of potential coun‑
terplans/alternatives that the con could propose. I expect pros will end up relying on
definitions of the word “ban” that allow them to narrow the resolution in their favor,
given the potential subsets of biometric data and conditions on bans that could exist
– which might end up flipping the skew on the topic in the other direction. Despite
the unpredictable nature of the topic, I hope this topic analysis was helpful in breaking
down some of the core topic arguments available to both sides.

Good luck to everyone debating!

³⁴Ibid

4 Background Evidence

4.1 Definitions

4.1.1 Collection Definition

Locally stored biometric data is not "collected"—Illinois court rulings on Apple prove.

Rattigan 23

Kathryn Rattigan (Robinson & Cole LLP), “Court Rules Apple’s Face‑ID Does Not Vio‑
late BIPA,” JD Supra, 13 January 2023, https://www.jdsupra.com/legalnews/court‑rules‑
apple‑s‑face‑id‑does‑not‑8209807/, accessed 28 February 2023

An Illinois appellate court has ruled that Apple’s biometric unlock features, including
Touch ID fingerprint scanning and Face ID facial geometry scanning, do not violate the
state’s Biometric Information Privacy Act (BIPA). The case involved a group of Illinois
residents who alleged that Apple’s Face ID feature impermissibly collects facial geome‑
tries from pictures stored in the Photo app on Apple devices. The plaintiff class claimed
that Apple violated BIPA by collecting, possessing, and profiting from biometric infor‑
mation without the knowledge or consent of users. According to the complaint, Apple
did not have an established retention policy for biometric data and failed to obtain writ‑
ten permission to collect the information.

According to the appellate opinion, Apple never collected, stored, or managed the data
collected by Touch ID and Face ID because the biometric data are stored locally on
the user’s device. The court distinguished this local storage, which Apple contends
is strictly controlled by the user, from cloud‑based storage that takes the data out of
the user’s custody. BIPA doesn’t define “possession,” so this ruling supports a narrow
reading of the law based on the data’s physical storage location.

The court did not address whether technology that stores biometric data locally but
still actively "phones home" for updates would change the calculus. For now, tech
companies have a tested roadmap for BIPA‑compliant security features: store the data
locally and encrypt it.

4.1.2 Ban

Ban means to “prohibit”

Merriam‑Webster n.d.

Merriam‑Webster Dictionary, no date, https://www.merriam‑webster.com/dictionary/ban

Ban: to prohibit especially by legal means

Ban means to “forbid”

Cambridge n.d.

Cambridge Dictionary, no date, https://dictionary.cambridge.org/us/dictionary/english/ban

to forbid (= refuse to allow) something, especially officially:

4.1.3 Bans vs. Regulation

Bans and regulation are distinct.

Spivack and Garvie 20

Jameson Spivack (Georgetown Center on Privacy and Technology) and Clare Garvie
(Georgetown Center on Privacy and Technology), “A Taxonomy of Legislative Ap‑
proaches to Face Recognition in the United States,” In: Regulating Biometrics: Global
Approaches and Urgent Questions, Amba Kak, ed., AI Now Institute, 1 September 2020,
https://ainowinstitute.org/regulatingbiometrics.html

PROPOSED AND ENACTED LEGISLATION

Generally, there have been three legisla‑
tive approaches to regulating face recognition in the United States: complete bans, mora‑
toria, and regulatory bills. Moratoria can be further broken down into two types: time‑
bound moratoria, which “pause” face recognition use for a set amount of time; and direc‑
tive moratoria, which “pause” face recognition use and require legislative action—such
as a task force or express statutory authorization—to supersede the moratoria. Most of
these bills have covered all government use of face recognition, with particular attention
given to limits placed on police use. This section focuses on police use as well.

[TABLE OMITTED]

A. Bans

The strongest legislative response is to ban the use and acquisition of the technology
completely. Bans can focus on state use of face recognition, commercial or private
sector use, or both. To date, only local municipal governments have implemented
bans, concentrated in towns and cities in California and Massachusetts. As of July
2020, the following municipalities had banned face recognition: Alameda, California;
Berkeley, California; Boston, Massachusetts; Brookline, Massachusetts; Cambridge,
Massachusetts; Easthampton, Massachusetts; Northampton, Massachusetts; Oakland,
California; San Francisco, California; and Somerville, Massachusetts.28 A number
of states proposed bans on face recognition during the 2019–2020 legislative session:
Nebraska, New Hampshire, New York, and Vermont.29

City governments have passed bans following robust public dialogue about the risks
and benefits of face recognition technology. They represent what is possible with a
transparent, democratic process, and the power of proactive localities. In the words of
the San Francisco city supervisor who sponsored the ban: "We have an outsize respon‑
sibility to regulate the excesses of technology precisely because they are headquartered
here.”30 It is unclear at this point, however, whether face recognition bans will take
hold at the local, state, or federal level. Some jurisdictions may also find the bans to be
unintentionally overbroad, restricting uses of the technology deemed to be necessary or
uncontroversial.31

B. Moratoria

Another strong measure that a legislature can take is to place a moratorium on the tech‑
nology,32 which has two forms: time‑bound and directive.

1. Time‑bound moratoria

Time‑bound moratoria stop virtually all use of face recognition for a predetermined
amount of time.33 The purpose of this pause is to give elected officials and the public
time to learn about face recognition, reconvening later once the moratorium expires. At
this point, legislators can decide if, and how, to regulate face recognition.
At the municipal level, in early 2020, Springfield, Massachusetts, placed a moratorium
on face recognition until 2025.34 At the state level, a 2020 bill introduced in the Mary‑
land legislature would prohibit all public and private use of face recognition for one
year.35 The bill does not include any other provisions or directions, but rather states
the moratorium “shall remain effective for a period of one year from the date it is en‑
acted and, at the end of the one–year period, this Act, with no further action required
by the General Assembly, shall be abrogated and of no further force and effect.”36
Time‑bound moratoria raise the possibility for public engagement and the future imple‑
mentation of either a permanent ban or strong regulation. These bills prompt discus‑
sion within legislative committees—the members of which are often unfamiliar with
face recognition—about the technology, including its potential harms. There is a risk,
however, that if the legislature fails to act once the moratorium period is over, use of
face recognition will recommence with no safeguards in place.

2. Directive moratoria

Directive moratoria temporarily stop face recognition use while explicitly instructing
the legislature or other government officials to take additional steps. Often this entails
the creation of a task force, working group, or commission organized by either the legis‑
lature or attorney general to study face recognition and recommend policy responses.37
A bill introduced in Washington state in 2019 proposed a moratorium on government
use of face recognition technology while setting up a task force to study the technology.

The task force would be composed of members of historically oversurveilled communi‑
ties, and would deliver a report to the legislature about potential effects. The bill would
also require the attorney general to provide a report certifying the tools in use did not
contain accuracy or bias issues, as tested by an independent third party.38

Directive moratoria can also pause face recognition use until the legislature passes cer‑
tain laws. In contrast to the above example, in which decisions about future policy are
left to the working group, this kind of moratorium sets minimum thresholds that future
legislation must achieve.

For example, a bill introduced in Massachusetts in 2019 would place a moratorium on
government use of biometric surveillance, including face recognition, "[a]bsent express
statutory authorization.” That authorization must provide guidance on who is able to
use biometric surveillance systems, their purposes, and prohibited uses; standards for
data use and management; auditing requirements; and rigorous protections for civil
rights and liberties, including compliance mechanisms.39

At the federal level, the Facial Recognition and Biometric Technology Moratorium Act
of 2020 prohibits federal use of certain biometric technologies such as face recognition
until Congress explicitly allows their use, with certain limitations. It also conditions
federal grant funding to state and local agencies on their adoption of moratoria similar
to that proposed in the federal bill.40

These bills encourage jurisdictions to research the full implications of face recognition
use and engage with members of the public before enacting a more permanent law.
Moratoria also limit the risk of reverting to status quo use once the time period is over.
However, there is a risk that a task force or commission may not be representative of
affected communities; may lack authority; or may be inadequately funded, restricting
its effectiveness.41

C. Regulatory Bills

Regulatory bills seek to place restrictions on face recognition’s use, rather than stop it
altogether. Regulatory bills range along a spectrum from more narrowly focused (reg‑
ulating only specific uses or other elements of face recognition) to broader (regulating
more of these elements).

1. Common elements of regulatory bills

Face recognition bills propose a wide range of measures, including:

• Task force or working group: groups must study face recognition and make policy
recommendations.

• Requirements on companies: face recognition vendors must open up their software to
accuracy and bias testing; commercial users must get consent or provide notice of use,
as well as allow data access, correction, and removal.

• Accountability and transparency reports: implementing agencies must provide de‑
tails on the face recognition tools they use, including how and how often, to elected
officials. Some require reports before implementation, and many require ongoing re‑
ports.42

• Implementing officer process regulations: officers must receive periodic trainings, con‑
duct meaningful reviews of face recognition search results, and disclose to criminal de‑
fendants that face recognition was used in identifying them.

• Explicit civil rights and liberties protections: such as prohibiting the use of face recog‑
nition to surveil people based on characteristics including but not limited to race, immi‑
gration status, sexual orientation, religion, or political affiliation.

• Data and access restrictions: such as prohibiting the sharing of face recognition data
with immigration enforcement authorities, limiting federal access to face recognition
systems, and prohibiting use on state driver’s license databases.

• Targeted bans: prohibiting specific uses, such as live facial recognition, or in conjunc‑
tion with body‑worn cameras or drones. Face recognition use can also be limited by
type of crime—for example, only to investigate violent felonies.

• Court order requirements: law enforcement must obtain a court order backed by prob‑
able cause (or, in some instances, only reasonable suspicion43) to run face recognition
searches. Some bills more narrowly apply this requirement to ongoing surveillance or
real‑time tracking only.44 This can also apply narrowly to law enforcement seeking face
recognition data from private entities that have collected it, rather than law enforcement
searches themselves.

2. Examples of regulatory bills

A narrower bill proposed in Indiana calls for a “surveillance technology impact and use
policy,” but includes no other restrictions.45 In New Jersey, a proposed bill requires
the attorney general to arrange for third‑party accuracy and bias testing.46 In 2019, the
California legislature passed a law prohibiting "a law enforcement agency or law en‑
forcement officer from installing, activating, or using any biometric surveillance system
in connection with an officer camera or data collected by an officer camera.”47

At the other end of the spectrum, broader regulatory bills address multiple elements of
face recognition development and use. Though they address a wider range of concerns,
this does not mean they necessarily address all legitimate areas of concern related to
face recognition, or that the proposed rules are substantive or enforceable.

For example, in March 2020, Washington state passed a law that regulates nu‑
merous elements of face recognition.48 The bill includes provisions like these: a
pre‑implementation accountability report documenting use practices and data man‑
agement policies for any new face recognition systems; “meaningful human review”
when face recognition is used in legal decisions; testing in operational conditions; face
recognition service APIs made available for independent accuracy and bias testing;
periodic training for officers; mandatory disclosure to criminal defendants; warrants
for ongoing, “real‑time” or “near‑real‑time” use; civil rights and liberties protections;
and prohibitions against image tampering in face recognition searches.49

Regulatory bills seek to strike a balance between the benefits and harms of face recogni‑
tion use. For example, while a separate privacy bill introduced in Washington in 2019
garnered industry support for its light‑touch approach to regulating face recognition, it
elicited criticism from privacy advocates for containing loopholes and providing inad‑
equate enforcement mechanisms.50 Narrowly targeted bills have a greater likelihood
of passing through support from well‑resourced law enforcement and company stake‑
holders, yet often fail to meaningfully protect against the true scope of possible harms.51
Some advocates are also critical of regulatory bills, particularly more limited ones, for
using up available political capital and possibly eliminating the chance of stronger reg‑
ulation in the future.

4.1.4 Wide Range of Bans

Bans can be wide or narrow.

Lewis and Crumpler 21

James Andrew Lewis (Senior Vice President; Pritzker Chair; and Director, Strategic
Technologies Program) and William Crumpler, “Facial Recognition Technology:
Responsible Use Principles and the Legislative Landscape,” Center for Strategic
and International Studies, 29 September 2021, https://www.csis.org/analysis/facial‑
recognition‑technology‑responsible‑use‑principles‑and‑legislative‑landscape, accessed
6 March 2023

Bans on Government Facial Recognition Use

Instead of trying to regulate the use of facial recognition, some jurisdictions have instead
decided to ban the technology. Some of these bans are relatively narrow, prohibiting
the use of FRT in schools or residential complexes, outlawing its use in connection with
drones or police body cameras, or preventing authorities from using the technology
to support federal immigration authorities. Other bans have been broader in scope,
outlawing any use of FRT by government agencies.

As of the time of writing, two states and 19 municipalities have enacted bans on the
government’s use of FRT. For example, the King County Council in Washington passed
an ordinance prohibiting FRT usage by county officials, including those in its local po‑
lice agency, the King County Sheriff’s Office. A further seven states, as well as a large
number of municipalities, are currently considering similar bans. No ban has been en‑
acted at the national level, though two pieces of legislation introduced during the 116th
Congress would have created broad prohibitions for federal agencies. The Facial Recog‑
nition and Biometric Technology Moratorium Act of 2021 (S.2052/H.R.3907)—which
was recently reintroduced in the 117th Congress after its initial introduction in 2020
(S.4084/H.R.7356)—would ban the use of FRT by federal officials. This legislation also
reiterates the prohibition first introduced in H.R.3875 against any federal funds from
being used to purchase or use facial recognition systems.

Most of these bans are indefinite, but in some cases, they may be limited to a certain
number of years, or until some condition is met, such as the passage of a more compre‑
hensive set of safeguards. The bans enacted by Vermont and Virginia and proposed in
S.4084/H.R.7356, for example, explicitly state that the bans are to remain in place un‑
til the technology’s use is expressly authorized through new legislation. In ordinances

45
4 Background Evidence

enacted by Springfield and Portland, as well as legislation proposed in New Jersey, re‑
strictions would only be in place until more comprehensive protections can be enacted
by the legislature. Washington’s proposed ban is an example of one that is time limited;
this ban would only last until July 1, 2026.

4.1.5 Personal Data

Personal data doesn’t include publicly available information.

Code of Virginia 23

Code of Virginia, Title 59.1. Trade and Commerce, Chapter 53. Consumer Data Protec‑
tion Act, Effective January 1, 2023, 1 January 2023, https://law.lis.virginia.gov/vacodefull/
title59.1/chapter53/, accessed 27 February 2023

“Personal data” means any information that is linked or reasonably linkable to an iden‑
tified or identifiable natural person. “Personal data” does not include de‑identified data
or publicly available information.

4.1.6 Biometric Data

Biometric data includes the following:

Code of Virginia 23

Code of Virginia, Title 59.1. Trade and Commerce, Chapter 53. Consumer Data Protec‑
tion Act, Effective January 1, 2023, 1 January 2023, https://law.lis.virginia.gov/vacodefull/
title59.1/chapter53/, accessed 27 February 2023

“Biometric data” means data generated by automatic measurements of an individual’s
biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other
unique biological patterns or characteristics that is used to identify a specific individ‑
ual. “Biometric data” does not include a physical or digital photograph, a video or
audio recording or data generated therefrom, or information collected, used, or stored
for health care treatment, payment, or operations under HIPAA.

Biometric data includes both biological and behavioral data:

World Bank Group 19

“Biometric data,” ID4D Practitioner’s Guide, World Bank Group’s Identification for De‑
velopment (ID4D) Initiative, 1 June 2019, https://id4d.worldbank.org/guide/biometric‑
data, accessed 28 February 2023

Countries that plan to use biometric recognition for deduplication and/or authentica‑
tion can choose from a variety of biometric characteristics (i.e., "modes"). In general,
biometrics fall into two major categories:

Biological: fingerprints, face, iris, veins, etc.

Behavioral: keystroke dynamics, gait, signature, voice, etc.

4.1.7 Recognition

Recognition is distinct from identification.

Honovich 13

John Honovich (founder of IPVM), "Detection, Classification, Recognition and
Identification (DCRI)," IPVM, 4 February 2013, https://ipvm.com/reports/detection‑
recognition‑and‑identification, accessed 9 March 2023

Detection, classification, recognition and identification is all very easy until you put it
in practice. Then you realize that there are big practical and subjective problems that
make using them difficult.

Academically, the terms are straightforward and reflect what they mean in real English:

Detection ‑ The ability to detect if there is some ‘thing’ vs nothing.

Recognition ‑ The ability to recognize what type of thing it is (person, animal, car, etc.)

Identification ‑ The ability to identify a specific individual from other people

Obviously, each level is harder and requires more detail, going from detection to recog‑
nition to identification.

4.1.8 Biometric Recognition

Biometric recognition includes both identification and verification.

World Bank Group 19

“Biometric data,” ID4D Practitioner’s Guide, World Bank Group’s Identification for De‑
velopment (ID4D) Initiative, 1 June 2019, https://id4d.worldbank.org/guide/biometric‑
data, accessed 28 February 2023

Biometric recognition encompasses both biometric identification—the process of search‑
ing against a biometric enrollment database to find and return the biometric reference
identifier(s) attributable to a single individual (i.e. 1:n)—and biometric verification—the
process of confirming a biometric claim through biometric comparison (i.e. 1:1) (ISO/IEC
2382‑37). These processes can be used to perform two distinct tasks in foundational ID
systems:

Deduplication of identity records. To ensure that each person in a database is unique,
ID systems can use biometric identification to perform a duplicate biometric enrollment
check. This involves comparing a template generated from a captured biometric against
all or a subset of templates stored in biometric database to detect a duplicate registration
(a 1:N search), after which the new template is added to the database. This process
involves automation as well as manual checks to adjudicate matches.

Authentication of individuals. Some authentication protocols require biometric verifi‑
cation of the user. This involves a one‑to‑one (1:1) comparison of a template generated
from a captured biometric against a single stored template (e.g., one stored on an ID
card or mobile phone, or in a database).

4.1.9 Verification vs. Identification

Difference between verification and identification.

Ringel and Reid 23

Evan Ringel (PhD student at Hussman School of Journalism and Media, University of
North Carolina at Chapel Hill) and Amanda Reid (Associate Professor at Hussman
School of Journalism and Media, University of North Carolina at Chapel Hill), Regu‑
lating Facial Recognition Technology: A Taxonomy of Regulatory Schemata and First
Amendment Challenges. Communication Law and Policy, 15 February 2023, Forthcom‑
ing, http://dx.doi.org/10.2139/ssrn.4360358

The next step in the process depends on the intended use of the technology: verification
or identification. The simplest task for FRT systems is verification, which is used to con‑
firm that an individual is who they claim to be.5 For verification, the biometric template
collected by the FRT software is compared against a single image, seeking what is called
a "one‑to‑one" match.6 This facial image lies within a larger database, but the image is
filtered by an attributed identifier such as a driver’s license number or name to ensure
only one attempted match.7 On the other hand, the more complex and technologically
demanding use of FRT is identification, which is used to ascertain, rather than confirm,
an individual’s identity.8 For identification, the biometric template that is collected by
the facial recognition software is then compared against an entire database or “gallery”
of images in what is called a “one‑to‑many” match.9 While both verification and iden‑
tification uses raise potential concerns, it is the use of FRT for identification purposes
that is especially problematic. When used for identification, a larger database of face im‑
ages is needed, which opens the door to additional concerns about accuracy and data
security.

4.2 Background

4.2.1 Existing Regulations

List of existing US regulations on FRT.

Ringel and Reid 23

Evan Ringel (PhD student at Hussman School of Journalism and Media, University of
North Carolina at Chapel Hill) and Amanda Reid (Associate Professor at Hussman
School of Journalism and Media, University of North Carolina at Chapel Hill), Regu‑
lating Facial Recognition Technology: A Taxonomy of Regulatory Schemata and First
Amendment Challenges. Communication Law and Policy, 15 February 2023, Forthcom‑
ing, http://dx.doi.org/10.2139/ssrn.4360358

Clearview AI and other FRT developers and users operate in a contested regulatory
space. The European Parliament passed a resolution in 2021 calling for a ban on the
use of FRT by law enforcement,16 while Belgium has imposed a national ban on the
technology.17 However, there is no comprehensive U.S. federal legislation either lim‑
iting the use of FRT or protecting the data privacy of U.S. citizens. Instead, FRT use
is governed by a patchwork of state and local legislation, allowing government agen‑
cies and private companies to deploy FRT with relatively few regulatory limits. The
most prominent and stringent of these state‑level regulations is the Illinois Biometric
Information Privacy Act of 2008 (BIPA), which at the time of its passage represented the
first successful attempt by a state legislature to provide protection for biometric data.18
Other states have also passed laws targeting facial recognition technology explicitly19
or as part of a larger regulatory scheme protecting biometric privacy. 20 A handful of
U.S. municipalities, including cities in California, like San Francisco and Oakland, and
cities in Massachusetts, like Boston and Somerville, have passed ordinances banning
the use of the technology by law enforcement and other government officials; while not
imposing bans, other local governments have passed ordinances designed to increase
transparent use of surveillance technologies like FRT.21

4.2.2 No Federal Law Now

There is no federal law now.

Ringel and Reid 23

Evan Ringel (PhD student at Hussman School of Journalism and Media, University of
North Carolina at Chapel Hill) and Amanda Reid (Associate Professor at Hussman
School of Journalism and Media, University of North Carolina at Chapel Hill), Regu‑
lating Facial Recognition Technology: A Taxonomy of Regulatory Schemata and First
Amendment Challenges. Communication Law and Policy, 15 February 2023, Forthcom‑
ing, http://dx.doi.org/10.2139/ssrn.4360358

Though a patchwork of state, local, and foreign laws exert some level of regulatory con‑
trol over the use of facial recognition technology, there is no comprehensive federal
law regulating use of the technology. Few states have passed laws explicitly focused
towards facial recognition technology; at the local level, approximately twenty‑five mu‑
nicipalities scattered across the U.S. have ordinances that limit or ban the use of fa‑
cial recognition technology by government agencies. With a continued lack of action
from Congress—despite pressure from prominent companies, privacy scholars, and
constituents—these state and local laws are the only barrier preventing unregulated
nationwide use of facial recognition technology. If FRT developers mount a successful
First Amendment challenge to limit states and municipalities from regulating the use
of facial recognition technology, the consequences would be significant.

4.2.3 Existing Uses

FRT is used at ports of entry.

Ringel and Reid 23

Evan Ringel (PhD student at Hussman School of Journalism and Media, University of
North Carolina at Chapel Hill) and Amanda Reid (Associate Professor at Hussman
School of Journalism and Media, University of North Carolina at Chapel Hill), Regu‑
lating Facial Recognition Technology: A Taxonomy of Regulatory Schemata and First
Amendment Challenges. Communication Law and Policy, 15 February 2023, Forthcom‑
ing, http://dx.doi.org/10.2139/ssrn.4360358

Federal agencies use facial recognition technology without significant regulatory con‑
straints. The most publicly visible federal use of facial recognition is by Customs and
Border Protection (CBP) and the Transportation Safety Administration (TSA), who have
jointly unveiled their “Biometric Entry‑Exit” program in at least twenty airports and
ports throughout the U.S. 28 The program has reduced wait times for international
air travelers, saving as much as 10 minutes per traveler. 29 Both agencies indicated an
intention to further expand their use of facial recognition. 30

FRT is used for law enforcement.

Ringel and Reid 23

Evan Ringel (PhD student at Hussman School of Journalism and Media, University of
North Carolina at Chapel Hill) and Amanda Reid (Associate Professor at Hussman
School of Journalism and Media, University of North Carolina at Chapel Hill), Regu‑
lating Facial Recognition Technology: A Taxonomy of Regulatory Schemata and First
Amendment Challenges. Communication Law and Policy, 15 February 2023, Forthcom‑
ing, http://dx.doi.org/10.2139/ssrn.4360358

In addition to expediting travel, FRT is used to aid law enforcement efforts. The FBI
has used FRT in the "hunt for suspected criminals."31 The FBI's facial images are com‑
bined with fingerprint data and biographical information to create detailed profiles of
millions of U.S. citizens.32 The FBI’s facial recognition search capacity has access to “lo‑
cal, state and federal databases”—which contain more than 641 million face images.33
However, the FBI’s use of FRT is not without controversy; it may raise due process con‑
cerns.34 As the ACLU has noted, the FBI’s use of a facial recognition database does not

54
4 Background Evidence

require a warrant, probable cause, or even reasonable suspicion.35 In addition, there is


little accountability as the agency has refused to confirm whether they provide notice to
criminal defendants about having been matched (or not matched) via facial recognition
technology.36

The FBI’s use of FRT is not unique. Local police departments throughout the U.S. have
also used FRT. And many of these departments have been criticized for a lack of public
transparency.37 Law enforcement officers have also partnered with private companies
to expand police surveillance networks. Ring, an Amazon‑owned company that sells
doorbell video cameras, has agreements with more than 400 police departments around
the country allowing officers to request footage collected by cameras within a specific
time and area.38

State and local police have also used FRT as an investigative tool in the aftermath of
nationwide civil unrest following the murders of George Floyd and Breonna Taylor by
law enforcement officers. After protests in May‑June 2020, local police departments and
the FBI asked the public to provide photographs and video footage for law enforcement
to run through facial recognition systems.39 While some departments submitted images
to the FBI to use its facial recognition database, other departments have signed their own
contracts with private facial recognition vendors.40

FRT is used extensively by private companies.

Ringel and Reid 23

Evan Ringel (PhD student at Hussman School of Journalism and Media, University of
North Carolina at Chapel Hill) and Amanda Reid (Associate Professor at Hussman
School of Journalism and Media, University of North Carolina at Chapel Hill), Regu‑
lating Facial Recognition Technology: A Taxonomy of Regulatory Schemata and First
Amendment Challenges. Communication Law and Policy, 15 February 2023, Forthcom‑
ing, http://dx.doi.org/10.2139/ssrn.4360358

Use of FRT is not limited to state and federal government agencies. Private use of facial
recognition can take several different forms. Some corporations have used FRT in a law
enforcement‑adjacent fashion, scanning their own workers as they enter the building
to identify “high risk individuals.”41 Pop star Taylor Swift’s security team used facial
recognition at her concerts to identify potential stalkers,42 and other celebrities have
indicated that they use the technology to assist with security.43

Companies have also used FRT for verification purposes.44 Many smartphone owners
use Apple’s FaceID to unlock their phones or authenticate purchases, while some lap‑
tops use facial recognition as a substitute for entering a password. 45 Organizations
used FRT for large gatherings—like the NFL's Super Bowl.46 And other corporations
have used FRT in ways that can be more invasive of individual privacy. Several of
the world’s largest corporations have experimented with FRT for evaluating customer
satisfaction and employee performance47 or making other consequential business de‑
cisions,48 leading to the characterization of the technology as “ubiquitous” in public
spaces.49

5 Pro Evidence

5.1 General

5.1.1 Constitutionality

FRT violates the First and Fourth Amendments by eroding privacy and chilling free
expression via mass surveillance—a complete ban reflects that the potential upsides
of FRT are far outweighed by the downsides.

Waghorne 22

Sylvia Waghorne (J.D., Washington University School of Law), The Price Of Privacy: A
Call For A Blanket Ban On Facial Recognition In The City Of St. Louis, Volume 69, Issue
1, 2022, After the Trump Administration: Lessons and Legacies for the Legal Profession,
pages 421‑447, https://journals.library.wustl.edu/lawpolicy/article/id/8639/

While the potential for increased security and crime‑solving capabilities might make
facial recognition appear desirable, it is necessary to consider what is at stake when pri‑
vate and public entities implement this technology. Increased surveillance capabilities
inhibit privacy, which, for many, can be too high a cost. Timothy Birch, Police Services
Manager for the Oakland Police Department, expressed his discomfort regarding the
implementation of facial recognition in 2018—“we don’t see the benefit of facial recog‑
nition software in terms of the cost, the impact to community privacy . . . . Until we
identify an incredible benefit for facial recognition, the cost is just too high.”99 One year
later in 2019, Oakland would become the third American city to ban facial recognition
in public places.100

The effects of facial recognition on individual privacy implicate important constitutional
concerns, particularly with respect to the First and Fourth Amendments. There are cur‑
rently no Supreme Court cases that indicate how the court might apply the First or
Fourth Amendment to the use of facial recognition by law enforcement or other govern‑
mental agencies,101 but it is not difficult to supply reasons why such technology might
impinge on constitutional rights. Selinger and Hartzog argue that

[t]he technology can be used to create chill that routinely prevents citizens
from engaging in First Amendment protected activities, such as free asso‑
ciation and free expression. They could also gradually erode due process
ideals by facilitating a shift to a world where citizens are not presumed in‑
nocent but are codified as risk profiles with varying potentials to commit a
crime.102

Along that vein, Jeramie Scott remarked that facial recognition signals “a change and
a shift that undermines our democracy, because everyone becomes suspicious.” 103
One can easily imagine how facial recognition might affect freedom of expression or
assembly—police use of facial recognition during the summer 2020 protests in response
to police brutality against Black Americans provides a stark example. Throughout the
tumultuous summer, multiple instances of police using facial recognition to track and ar‑
rest protesters were reported by news stations.104 There has been a long history of using
surveillance on social activists, especially Black activists, going back to the Civil Rights
activists of the 1960s.105 Facial recognition technology makes the tracking of political
activists easier than ever before, meaning that people could become less comfortable
partaking in constitutionally protected activities for fear of retribution from state au‑
thorities. Some facial recognition software companies argue that the First Amendment
protects and justifies their facial recognition business. 106 According to Neil Richards
and Woodrow Hartzog, however, this is a perversion of what the First Amendment is
meant to protect and not an accurate interpretation of the law. The authors argue that:

[t]he core of the First Amendment’s commitment to free speech is protecting
individual speakers like protestors and journalists from government oppres‑
sion, not giving constitutional protection to dangerous business models that
inhibit expression and give new authoritarian tools to governments.107

The increasing use of facial recognition in public and private spaces likewise threatens
a core tenet of the Fourth Amendment. The Fourth Amendment was written largely
in response to the British Crown’s use of Writs of Assistance, general warrants that al‑
lowed generalized, suspicionless searches of Colonists’ homes.108 The Fourth Amend‑
ment is meant to prevent such unreasonable searches, but according to a report from
the Georgetown Law Center on Privacy and Technology, different uses of facial recog‑
nition present a range of threats to these Fourth Amendment rights.109 The types of
practices deemed to pose a “moderate risk” to Fourth Amendment protections involve
circumstances where police employ a targeted search with a targeted database, and thus
are “conducting a targeted search pursuant to a particularized suspicion.” 110 The risk
increases the more generalized a database becomes—for example, a law enforcement
agent attempting to identify a suspect can perform face recognition searches against
the photos of every registered driver in a state database to find a match, thus creating
“a virtual line‑up of millions of law‑abiding Americans” who are often unaware how
their personal data is being used.111 The risk to Fourth Amendment protections is at
its highest level when facial recognition technology is applied to real‑time or historical
surveillance of the general public, enabling law enforcement to conduct indiscriminate
and instantaneous searches of any individual that happens to walk down the street.112

The ability of law enforcement to utilize new technology in the name of stopping crime
has been both protected and prevented by the Supreme Court.113 An example of the
court protecting novel police crime solving methods includes the 1971 case United States
v. White, where the Court held that a police informant using a concealed recording
device to record conversations did not require a warrant for such surveillance nor vio‑
late the Fourth Amendment.114 Even with the limited technology available at the time,
some members of the court recognized the danger this holding could pose to all Ameri‑
cans. In his dissent, Justice Douglas cautioned: “Today no one perhaps notices because
only a small, obscure criminal is the victim. But every person is the victim, for the
technology we exalt today is everyman’s master.” 115 Douglas’ words ring true to‑
day; with facial recognition, it is not only the “obscure criminal” whose privacy and
constitutional rights are at stake. According to the Georgetown Law Center report,
by 2016, half of all U.S. adults—117 million people—were already included in police
facial‑recognition databases, many of whom are law‑abiding citizens.116 Although fa‑
cial recognition comes with the potential benefits of increased security and crime pre‑
vention, the potential detriment to First and Fourth Amendment rights is too high a
price to pay.


5.1.2 Accuracy

FRT systems face serious accuracy problems due to technical challenges that are
unlikely to be fixed anytime soon.

Lynch 20

Jennifer Lynch (Surveillance Litigation Director at the Electronic Frontier Foundation),
Face Off: Law Enforcement Use of Face Recognition Technology, Electronic Frontier
Foundation, 20 April 2020. http://dx.doi.org/10.2139/ssrn.3909038

Accuracy Challenges

Face recognition systems vary in their ability to identify people,
and no system is 100 percent accurate under all conditions. For this reason, every face
recognition system should report its rate of errors, including the number of false posi‑
tives (also known as the “false accept rate” or FAR) and false negatives (also known as
the “false reject rate” or FRR). A “false positive” is generated when the face recognition
system matches a person’s face to an image in a database, but that match is incorrect.
This is when a police officer submits an image of “Joe,” but the system erroneously
tells the officer that the photo is of “Jack.” A “false negative” is generated when the
face recognition system fails to match a person’s face to an image that is, in fact, con‑
tained in a database. In other words, the system will erroneously return zero results
in response to a query. This could happen if, for example, you use face recognition to
unlock your phone but your phone does not recognize you when you try to unlock it.
When researching a face recognition system, it is important to look closely at the “false
positive” rate and the “false negative” rate, because there is almost always a trade‑off.
For example, if you are using face recognition to unlock your phone, it is better if the
system fails to identify you a few times (false negative) than if it misidentifies other peo‑
ple as you and lets those people unlock your phone (false positive). Matching a person’s
face to a mugshot database is another example. In this case, the result of a misidentifica‑
tion could be that an innocent person is treated as a violent fugitive and approached by
the police with weapons drawn or even goes to jail, so the system should be designed
to have as few false positives as possible. Technical issues endemic to all face recogni‑
tion systems mean false positives will continue to be a common problem for the fore‑
seeable future. Face recognition technologies perform well when all the photographs
are taken with similar lighting and from a frontal perspective (like a mug shot). How‑
ever, when photographs that are compared to one another contain different lighting,
shadows, backgrounds, poses, or expressions, the error rates can be significant.5 Face
recognition is also extremely challenging when trying to identify someone in an image
shot at low resolution6 or in a video,7 and performs worse overall as the size of the data
set (the population of images you are checking against) increases, in part because so
many people within a given population look similar to one another. Finally, it is also
less accurate with large age discrepancies (for example, if people are compared against
a photo taken of themselves when they were ten years younger).
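
To make the trade‑off Lynch describes concrete, the short Python sketch below computes a false accept rate (FAR) and false reject rate (FRR) at several thresholds. It is an editorial illustration, not part of the card; every score, label, and threshold in it is an invented assumption rather than data from any real system.

# Minimal sketch (illustrative only): how a single decision threshold trades off
# false accepts against false rejects. Scores and labels below are invented.

def error_rates(scores, same_person, threshold):
    # A pair "matches" when its similarity score clears the threshold.
    false_accepts = sum(1 for s, same in zip(scores, same_person)
                        if s >= threshold and not same)
    false_rejects = sum(1 for s, same in zip(scores, same_person)
                        if s < threshold and same)
    impostor_pairs = sum(1 for same in same_person if not same)
    genuine_pairs = sum(1 for same in same_person if same)
    far = false_accepts / impostor_pairs   # false accept rate (false positives)
    frr = false_rejects / genuine_pairs    # false reject rate (false negatives)
    return far, frr

scores      = [0.91, 0.62, 0.48, 0.85, 0.30, 0.77, 0.55, 0.95]
same_person = [True, False, False, True, False, True, False, True]

for t in (0.5, 0.7, 0.9):
    far, frr = error_rates(scores, same_person, t)
    print(f"threshold={t:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
# Raising the threshold pushes the FAR down and the FRR up, and vice versa.

The only point of the sketch is the one the card makes: tuning the threshold shifts error from one kind of mistake to the other rather than eliminating it.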


5.1.3 Bias

FRT exacerbates bias along racialized and gendered lines. False matches make the
system unreliable and lead to serious infringements of civil rights.

Waghorne 22

Sylvia Waghorne (J.D., Washington University School of Law), The Price Of Privacy: A
Call For A Blanket Ban On Facial Recognition In The City Of St. Louis, Volume 69, Issue
1, 2022, After the Trump Administration: Lessons and Legacies for the Legal Profession,
pages 421‑447, https://journals.library.wustl.edu/lawpolicy/article/id/8639/

A second major concern is the existing bias within facial recognition technology, partic‑
ularly when it comes to people of color. In response to “assertions that demographic de‑
pendencies could lead to accuracy variations and potential bias,”117 the National Insti‑
tute of Standards and Technology (NIST) conducted research to quantify the accuracy of
facial recognition algorithms for demographic groups defined by sex, age, and race.118
The 2019 report confirmed what many had already believed to be true—facial recog‑
nition algorithms are susceptible to both racial and gender bias. The study revealed
that Asian and African American individuals are up to one hundred times more likely
to be misidentified than white men, and that Native Americans had the highest false‑
positive rate of all ethnicities.119 Overall, false positives were higher for women than
for men.120 The compounded bias toward women and people of color leaves women
of color in a particularly vulnerable position when facial recognition technologies are
utilized. According to NIST’s findings, “[t]he faces of African American women were
falsely identified more often in the kinds of searches used by police investigators where
an image is compared to thousands or millions of others in hopes of identifying a sus‑
pect.”121 Rep. Bennie G. Thompson, chairman of the Committee on Homeland Security,
stated that the report revealed that “facial recognition systems are even more unreliable
and racially biased than we feared.”122

False matches can have serious consequences—according to Jay Stanley, a policy ana‑
lyst for the ACLU, “[o]ne false match can lead to missed flights, lengthy interrogations,
tense police encounters, false arrests, or worse.”123 Such a result occurred in January
2020, when Detroit police arrested Robert Julian‑Borchak Williams for a crime he did not
commit.124 Detectives ran grainy footage from security videos through facial recogni‑
tion software after a retail store was robbed; the software incorrectly identified the man
in the footage to be Williams, who was subsequently arrested.125 He spent thirty hours
in a jail cell before being released on bond, and ultimately all charges were dropped
due to insufficient evidence.126 Williams’ experience received media attention due to
its novelty, being potentially the first known account of a wrongful arrest based on a
false match by facial recognition software in the United States.127

Law enforcement is often adamant that facial recognition is just one tool to identify
suspects, but it is difficult to be certain how large of a role facial recognition plays in an
investigation, given that police do not often reveal whether facial recognition technology
was utilized in an investigation.128 According to former Solicitor General of the United
States Paul Clement, who was hired by Clearview AI, law enforcement “don’t have to
tell defendants that they were identified via Clearview’s technology as long as it isn’t
the sole basis for getting a warrant to arrest them.”129

While false matches are particularly a concern for people of color due to the racial bias
present in many of the algorithms used today,130 errors in facial recognition technol‑
ogy are a serious concern for people of all races, ages, and genders. In 2018, civil liber‑
ties groups in the United States tested Amazon’s facial recognition software, Amazon
Rekognition, which had already been utilized by various police departments and orga‑
nizations across the country.131 The test compared the photos of all federal lawmakers
against a database of 25,000 mugshots.132 The software mismatched twenty‑eight mem‑
bers of Congress in total, incorrectly identifying them as individuals featured in the
mugshot photos.133 Furthering concerns about racial bias, Black and Latino members
of Congress were disproportionately identified as the individuals in the mug shot—for
example, the late Representative John Lewis was incorrectly identified.134 Two years
later in June 2020, Amazon would announce a one‑year moratorium on police use of
Rekognition.135 The short blog post did not give an explanation for the reasoning be‑
hind the sudden pause on the use of the software, other than to say that it “might give
Congress enough time to implement appropriate rules” regarding the ethical use of fa‑
cial recognition technology.136

Larger databases, such as Clearview AI’s database with billions of photos, increase the
risk that misidentification will occur due to the doppelgänger effect. 137 Doppelgängers
“usually refer to biologically unrelated lookalikes. Apart from demographic attributes,
doppelgängers also share facial properties such as facial shape.”138 Studies have in‑
dicated that automatic facial recognition algorithms can fail to distinguish lookalikes,
which “may lead to serious risks in various scenarios, e.g. blacklist checks, where in‑
nocent subjects may have a higher chance to match to a lookalike in the list.” 139 The
reality is that technology is not foolproof, and we cannot expect it to be. As Nila Bala,
Associate Director of Criminal Justice Policy and Civil Liberties at R Street Institute, ar‑
ticulates, “[t]echnology is seen as immune to the racial biases that humans possess, and
individuals view artificial intelligence with blind faith. But artificial intelligence is only
as smart as the data used to develop it.”140

Machine learning discriminates based on race and gender—specifically against
darker‑skinned women.

Buolamwini and Gebru 18

Joy Buolamwini (MIT Media Lab) Timnit Gebru (Microsoft Research), “Gender Shades:
Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings
of Machine Learning Research 81:1–15, 2018 Conference on Fairness, Accountability,
and Transparency

Recent studies demonstrate that machine learning algorithms can discriminate based
on classes like race and gender. In this work, we present an approach to evaluate bias
present in automated facial analysis algorithms and datasets with respect to phenotypic
subgroups. Using the dermatologist approved Fitzpatrick Skin Type classification sys‑
tem, we characterize the gender and skin type distribution of two facial analysis bench‑
marks, IJB‑A and Adience. We find that these datasets are overwhelmingly composed
of lighter‑skinned subjects (79.6% for IJB‑A and 86.2% for Adience) and introduce a new
facial analysis dataset which is balanced by gender and skin type. We evaluate 3 com‑
mercial gender classification systems using our dataset and show that darker‑skinned
females are the most misclassified group (with error rates of up to 34.7%). The max‑
imum error rate for lighter‑skinned males is 0.8%. The substantial disparities in the
accuracy of classifying darker females, lighter females, darker males, and lighter males
in gender classification systems require urgent attention if commercial companies are
to build genuinely fair, transparent and accountable facial analysis algorithms.
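
As a rough illustration of the intersectional audit the card describes, the Python sketch below groups hypothetical predictions by skin type and gender and reports an error rate per subgroup. The records and group labels are invented for the example; a real audit would use a benchmark dataset such as the balanced one the authors introduce.

# Minimal sketch (illustrative only): per-subgroup error rates for a classifier.
from collections import defaultdict

# Each record: (skin_type, gender, prediction_was_correct). Data is made up.
records = [
    ("darker",  "female", False), ("darker",  "female", True),
    ("darker",  "male",   True),  ("darker",  "male",   True),
    ("lighter", "female", True),  ("lighter", "female", True),
    ("lighter", "male",   True),  ("lighter", "male",   True),
]

totals, errors = defaultdict(int), defaultdict(int)
for skin, gender, correct in records:
    totals[(skin, gender)] += 1
    if not correct:
        errors[(skin, gender)] += 1

for group in sorted(totals):
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
# A disparity like the 34.7% vs. 0.8% gap quoted above would show up here as a
# large spread between subgroup error rates.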

FRT is discriminatory—it does not accurately identify people with darker skin,
women, transgender, non‑binary individuals, and the elderly.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Facial recognition technologies inflict what can be described as quasi‑universal harms
by virtue of the fact that they are a dragnet surveillance tool—anyone with a picture in
a government database, who posts a picture on a commercial internet service, or ven‑
tures outside in public with their face uncovered is implicated. The term “universal”
risks implying that the harms of facial recognition are equally dispersed when they are
not—populations that were already more vulnerable to surveillance and over‑policing
are much more susceptible. I use it to distinguish the category of harms that, theoreti‑
cally, could affect the vast majority of the population, from facial recognition’s accuracy
problems with different demographic groups, which are not universal in the same way.

Facial recognition technologies are all the more damaging because they do not perform
with equal accuracy for different demographic groups. A range of studies show that fa‑
cial recognition algorithms are less accurate for people with darker skin,125 women,126
transgender and non‑binary individuals,127 the elderly,128 and (as will be discussed in
the next section) children.129 Facial recognition poses specific harms for each of the
demographic groups for whom it performs poorly.

The life cycle of the development and use of a facial recognition system creates a number
of junctures that researchers have posited could be responsible for inaccuracies based
on race, gender, and age, including composition of the datasets of face images used to
train facial recognition algorithms130 and the composition of the datasets used as bench‑
marks.131 A dataset that contains faces that are mostly white, middle‑aged men will es‑
tablish its criteria for how to evaluate faces according to the samples in that set, meaning
that such an algorithm would likely be more accurate for those faces, and less so for oth‑
ers.132 As an example, a recent National Institute of Standards and Technology (NIST)
study that evaluated 189 algorithms from 99 different developers found high
rates of false positives of Asian, Black, and native faces relative to white faces among
U.S.‑developed algorithms, whereas algorithms developed in Asian countries showed
no dramatic difference in accuracy for assessments of Asian faces and white faces.133
While the researchers do not go so far as to claim a causal relationship, they do note
that the disparity supports the idea that the diversity of the training data impacts the
accuracy of the algorithm for the demographic groups it is used to identify.134


The disparity in accuracy leads to false accusations by law enforcement and
exacerbates structural racism.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

In the case of law enforcement searches, this disparity makes those groups of people
more vulnerable to erroneous identification and accusation of crimes they did not com‑
mit.135 The 2019 NIST study found that the faces of Black women showed the highest
rates of false positives in one‑to‑many matching—the kind of search a law enforcement
official would use to attempt to identify a shortlist of suspects.136 In one such case, a
man in Florida named Willie Lynch is appealing an eight‑year prison sentence after the
Jacksonville Sheriff’s Office identified him using facial recognition, despite the fact that
he was never permitted to see the four other “possible matches” that the system identi‑
fied.137 Lynch is Black, supporting the likelihood that the system erroneously identified
him—the system only reported a likelihood of “one star” that the assessment was cor‑
rect, and the analyst running the test was not even aware of how many “stars” were
available.138 Mr. Williams, the man wrongfully arrested on larceny charges due to er‑
roneous identification by the Detroit Police Department’s facial recognition system, is
also Black.139

The opaque use of facial recognition systems that misidentify people of color exacer‑
bates other forms of structural racism in the criminal justice system. Black people are
already the disproportionate targets and victims of unjust policing practices that en‑
danger their lives and liberty more than other demographic groups, and the risks these
technologies pose to their freedom, and to the function of a fair criminal justice system,
are neither remote nor abstract.140 Overpolicing of communities of color means that
facial recognition technology is more likely to be used against them as surveillance or
investigative tool.141 The 2019 NIST study also reported that Native American faces
had the highest rate of false positives,142 putting them at similar risk— like Black peo‑
ple, Native Americans are incarcerated and killed by law enforcement at far higher rates
than other demographic groups.143 The opaque use of a surveillance technology that
has a higher chance of misidentifying them exacerbates these racist and undemocratic
structural failures. Everyone is hypothetically subject to the criminal justice system and
capable of experiencing the kinds of arbitrariness and unfairness that facial recognition
systems can inject—but in reality, these harms will not be dispersed equally among de‑
mographic groups.

FRT is biased against women, transgender, and non‑binary individuals.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Gender‑based inaccuracies also have disturbing implications for the use of facial recog‑
nition technologies. NIST’s most recent study found that false‑positives were between
two to five times more likely for women than men, with a range across the particular
algorithm tested, and the country of origin and age of the subject.144 The study found
particularly high failed match rates in the mugshots of Asian and Black women,145 sim‑
ilarly putting them at risk of wrongful identification in a law enforcement search, or a
search conducted by a private company for security purposes.

Inaccurate results, including higher false positive rates for women than men, are one
issue. Another is flawed gender classification algorithms, namely when an algorithm is
designed to assess a person’s gender by analyzing facial geometry or other metrics like
skin texture.146 They are often not created to assess any option beyond the male‑female
binary.147 As an example, in the December 2019 NIST test of false match rates for cer‑
tain age groups, researchers discarded the images for which sex was not listed as male
or female,148 and none of the tests include a sex option beyond male or female. As Os
Keyes explains in their research on automated gender recognition, most facial recogni‑
tion systems that assess the gender of the subject operationalize a binary conception of
gender, and ignore the existence of transgender or non‑binary people entirely.149 This,
too, can have discriminatory effects: some transgender Uber drivers have been unable
to drive for the company when the app’s verification mechanism incorrectly failed to
verify their identity.150 Incorrect assessments of someone’s gender could have incon‑
venient, discriminatory or dangerous implications for the subject, depending on the
context. The erasure of transgender identity by coding a refusal to acknowledge it into
facial recognition systems is dehumanizing and regressive for the transgender and non‑
binary people whose identities the system ignores.151


Inaccuracy leads to profiling, denial of job opportunities, and harassment.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

The discriminatory effects of commercial uses of facial recognition and analysis are still
alarming, even when they do not immediately threaten the subject’s safety. Consider
HireVue, the company that offers a job candidate screening service that employs facial
analysis technology to assess actions and attributes like “vocal indications of enthusi‑
asm,” facial expression and eye contact from job applicants.152 Aside from the fact that
the service’s claims of being able to deduce enthusiasm and aptitude from facial ex‑
pression, eye contact, and similar attributes is worrisomely reminiscent of 19th century
physiognomy, an algorithm that cannot assess a given demographic group accurately
puts candidates who belong to those groups at risk of being misinterpreted and unwit‑
tingly discriminated against in the hiring process.153 Certain commercial uses, such as
surveillance of retail spaces to identify people who are likely to shoplift, can also con‑
tribute to the racial profiling of Black people that already occurs without any algorith‑
mic help.154 Other commercial uses may pose repeated inconvenience, embarrassment,
hassle, or other significant problems that cannot be accepted as an inevitable side effect.

Being unable to access one’s smartphone, bank account, or apartment is not a trivial
inconvenience that people can brush aside as unfortunate wrinkles insufficient to out‑
weigh the overall value of a service that provides additional convenience for others. The
ways in which facial recognition technologies enable discrimination by poor accuracy
for faces of certain demographic groups threaten the privacy, well‑being, prosperity,
and safety of those people simply by virtue of who they are.

Even if accuracy improves, bias remains.

Castelvecchi 20

Davide Castelvecchi (reports for Nature from London), “Is facial recognition too biased
to be let loose?,” Nature, Vol 587, pgs. 347‑349, 19 November 2020, https://www.nature.
com/articles/d41586‑020‑03186‑4


The accuracy of facial recognition has improved drastically since ‘deep learning’ tech‑
niques were introduced into the field about a decade ago. But whether that means it’s
good enough to be used on lower‑quality, ‘in the wild’ images is a hugely controversial
issue. And questions remain about how to transparently evaluate facial‑recognition
systems.

In 2018, a seminal paper by computer scientists Timnit Gebru, then at Microsoft Re‑
search in New York City and now at Google in Mountain View, California, and Joy Buo‑
lamwini at the Massachusetts Institute of Technology in Cambridge found that leading
facial‑recognition software packages performed much worse at identifying the gender
of women and people of colour than at classifying male, white faces2. Concerns over
demographic bias have since been quoted frequently in calls for moratoriums or bans
of facial‑recognition software.

In June, the world’s largest scientific computing society, the Association for Computing
Machinery in New York City, urged a suspension of private and government use of
facial‑recognition technology, because of “clear bias based on ethnic, racial, gender, and
other human characteristics”, which it said injured the rights of individuals in specific
demographic groups. Axon, a maker of body cameras worn by police officers across
the United States, has said that facial recognition isn’t accurate enough to be deployed
in its products. Some US cities have banned the use of the technology in policing, and
US lawmakers have proposed a federal moratorium.

Companies say they’re working to fix the biases in their facial‑recognition systems, and
some are claiming success. But many researchers and activists are deeply sceptical.
They argue that even if the technology surpasses some benchmark in accuracy, that
won’t assuage deeper concerns that facial‑recognition tools are used in discriminatory
ways.

More accurate but still biased

Facial‑recognition systems are often proprietary and swathed in secrecy, but specialists
say that most involve a multi‑stage process (see ‘How facial recognition works’) using
deep learning to train massive neural networks on large sets of data to recognize pat‑
terns. “Everybody who does face recognition now uses deep learning,” says Anil Jain,
a computer scientist at Michigan State University in East Lansing.

The first stage in a typical system locates one or more faces in an image. Faces in the
feed from a surveillance camera might be viewed in a range of lighting conditions and
from different angles, making them harder to recognize than in a standard passport
photo, for instance. The algorithm will have been trained on millions of photos to locate
‘landmarks’ on a face, such as the eyes, nose and mouth, and it distils the information
into a compact file, ranging from less than 100 bytes to a few kilobytes in size.

The next task is to ‘normalize’ the face, artificially rotating it into a frontal, well‑
illuminated view. This produces a set of facial ‘features’ that can be compared with
those extracted from an existing database of faces. This will typically consist of pictures
taken under controlled conditions, such as police mugshots. Because the feature
representations are compact, structured files, a computer can quickly scan millions of
them to find the closest match.

Matching faces to a large database — called one‑to‑many identification — is one of two
main types of facial‑recognition system. The other is one‑to‑one verification, the rel‑
atively simple task of making sure that a person matches their own photo. It can be
applied to anything from unlocking a smartphone to passport control at national bor‑
ders.
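
To connect the pipeline described above to the two matching modes, the Python sketch below treats each face as a compact feature vector and compares cosine similarity against either a single claimed identity (one‑to‑one verification) or a whole gallery (one‑to‑many identification). The vectors, names, and threshold are assumptions invented for illustration, not output of any real face‑recognition system.

# Minimal sketch (illustrative only): one-to-one verification vs. one-to-many
# identification over compact feature vectors. All vectors are toy data.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe, claimed_template, threshold=0.8):
    # One-to-one: does the probe match the single claimed identity?
    return cosine(probe, claimed_template) >= threshold

def identify(probe, gallery, threshold=0.8):
    # One-to-many: every gallery identity whose similarity clears the threshold.
    return [name for name, tmpl in gallery.items() if cosine(probe, tmpl) >= threshold]

gallery = {"alice": [0.90, 0.10, 0.20],
           "bob":   [0.20, 0.80, 0.10],
           "carol": [0.88, 0.15, 0.25]}   # a lookalike of "alice"
probe = [0.92, 0.12, 0.18]

print(verify(probe, gallery["alice"]))   # True
print(identify(probe, gallery))          # ['alice', 'carol']: the lookalike also matches

The toy lookalike is the point of the sketch: a one‑to‑many search over a large gallery returns candidate lists rather than certainties, which is why the false‑positive concerns discussed above grow with database size.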

One measure of progress is the Face Recognition Vendor Test, an independent bench‑
marking assessment that the US National Institute of Standards and Technology (NIST)
in Gaithersburg, Maryland, has been conducting for two decades. Dozens of laborato‑
ries, both commercial and academic, have voluntarily taken part in the latest round of
testing, which began in 2018 and is ongoing. NIST measures the performance of each
lab’s software package on its own image data sets, which include frontal and profile
police mugshots, and pictures scraped from the Internet. (The US technology giants
Amazon, Apple, Google and Facebook have not taken part in the test.)

In reports released late last year, the NIST team described massive steps forward in
the technology’s performance during 2018, both for one‑to‑many searches3 and for one‑
to‑one verification4 (see also go.nature.com/35pku9q). “We have seen a significant im‑
provement in face‑recognition accuracy,” says Craig Watson, an electrical engineer who
leads NIST’s image group. “We know that’s largely because of convolutional neural net‑
works,” he adds, a type of deep neural network that is especially efficient at recognizing
images.

The best algorithms can now identify people from a profile image taken in the wild —
matching it with a frontal view from the database — about as accurately as the best facial‑
recognition software from a decade ago could recognize frontal images, NIST found.
Recognizing a face in profile “has been a long‑sought milestone in face recognition re‑
search”, the NIST researchers wrote.


But NIST also confirmed what Buolamwini and Gebru’s gender‑classification work sug‑
gested: most packages tended to be more accurate for white, male faces than for people
of colour or for women5. In particular, faces classified in NIST’s database as African
American or Asian were 10–100 times more likely to be misidentified than those classi‑
fied as white. False positives were also more likely for women than for men.

This inaccuracy probably reflects imbalances in the composition of each company’s
training database, Watson says — a scourge that data scientists often describe as
‘garbage in, garbage out’. Still, discrepancies varied between packages, indicating that
some companies might have begun to address the problem, he adds.

NEC, which supplies Scotland Yard’s software, noted that in NIST’s analysis, it was
“among a small group of vendors where false positives based on demographic differen‑
tials were undetectable”, but that match rates could be compromised by outdoor, poorly
lit or grainy images.

False faces

One‑to‑one verification, such as recognizing the rightful owner of a passport or smart‑
phone, has become extremely accurate; here, artificial intelligence is as skilful as the
sharpest‑eyed humans. In this field, cutting‑edge research focuses on detecting malev‑
olent attacks. The first facial‑recognition systems for unlocking phones, for example,
were easily fooled by showing the phone a photo of the owner, Jain says; 3D face recog‑
nition does better. “Now the biggest challenge is very‑high‑quality face masks.” In one
project, Jain and his collaborators are working on detecting such impersonators by look‑
ing for skin texture.

But one‑to‑many verification, as Murray found, isn’t so simple. With a large enough
watch list, the number of false positives flagged up can easily outweigh the true hits.
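
The point about false positives outweighing true hits is a base‑rate effect, and the back‑of‑the‑envelope Python sketch below illustrates it with invented numbers rather than figures from any real deployment.

# Minimal sketch (illustrative only): even a 0.1% false-positive rate can swamp
# true hits when a live system scans mostly innocent passers-by. Numbers invented.
people_scanned = 100_000       # faces seen by a camera over some period
people_on_watchlist = 20       # of those scanned, how many are genuinely listed
false_positive_rate = 0.001    # share of innocent faces wrongly flagged
true_positive_rate = 0.90      # share of listed faces correctly flagged

innocent = people_scanned - people_on_watchlist
false_alerts = innocent * false_positive_rate           # about 100 wrongful flags
true_alerts = people_on_watchlist * true_positive_rate  # about 18 correct flags

print(f"false alerts: {false_alerts:.0f}, true alerts: {true_alerts:.0f}")
print(f"share of alerts that are wrong: "
      f"{false_alerts / (false_alerts + true_alerts):.0%}")
# Roughly 85% of alerts in this toy scenario point at innocent people.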

This is a problem when police must make quick decisions about stopping someone.
But mistakes also occur in slower investigations. In January, Robert Williams was ar‑
rested at his house in Farmington Hills, Michigan, after a police facial‑recognition sys‑
tem misidentified him as a watch thief on the basis of blurry surveillance footage of a
Black man, which it matched to his driving licence. The American Civil Liberties Union
(ACLU), a non‑profit organization in New York City, filed a complaint about the inci‑
dent to Detroit police in June, and produced a video in which Williams recounts what
happened when a detective showed him the surveillance photos on paper. “I picked
that paper up, held it next to my face and said, ‘This is not me. I hope y’all don’t think
all Black people look alike.’ And then he said: ‘The computer says it’s you,’ ” Williams
said. He was released after being detained for 30 hours. ACLU attorney Phil Mayor
says the technology should be banned. “It doesn’t work, and even when it does work,
it remains too dangerous a tool for governments to use to surveil their own citizens for
no compelling return,” he says.

Shortly after the ACLU complaint, Detroit police chief James Craig acknowledged that
the software, if used by itself, would misidentify cases “96% of the time”. Citing con‑
cerns over racial bias and discrimination, at least 11 US cities have banned facial recogni‑
tion by public authorities in the past 18 months. But Detroit police still use the technol‑
ogy. In late 2019, the force adopted policies to ban live camera surveillance and to use
the software only on still images and as part of criminal investigations; Williams was
arrested before the policy went into practice, Craig said in June. (He did not respond to
Nature’s requests for comment.)

Other aspects of facial analysis, such as trying to deduce someone’s personality on the
basis of their facial expressions, are even more controversial. Researchers have shown
this doesn’t work6 — even the best software can only be trained on images tagged with
other people’s guesses. But companies around the world are still buying unproven tech‑
nology that assesses interview candidates’ personalities on the basis of videos of them
talking.

Nuria Oliver, a computer scientist based in Alicante, Spain, says that governments
should regulate the use of facial recognition and other potentially useful technologies to
prevent abuses (see page 350). “Systems are being brought to the wild without a proper
evaluation of their performance, or a process of verification and reproducibility,” says
Oliver, who is co‑founder and vice‑president of a regional network called the European
Laboratory for Learning and Intelligent Systems.

Technical standards cannot solve the underlying bias problem nor do they address
the deeper question of how this technology ought to be used.

Castelvecchi 20

Davide Castelvecchi (reports for Nature from London), “Is facial recognition too biased
to be let loose?,” Nature, Vol 587, pgs. 347‑349, 19 November 2020, https://www.nature.
com/articles/d41586‑020‑03186‑4

Persistent problems


Some proposals for regulation have called for authorities to establish accuracy stan‑
dards and require that humans review any algorithm’s conclusions. But a standard
based on, say, passing NIST benchmarks is much too low a bar on its own to justify
deploying the technology, says Deborah Raji, a technology fellow in Ottawa with the
Internet foundation Mozilla who specializes in auditing facial‑recognition systems.

This year, Raji, Buolamwini, Gebru and others published another paper on the perfor‑
mance of commercial systems, and noted that although some firms had improved at
classifying gender across lighter‑ and darker‑skinned faces, they were still worse at
guessing a person’s age from faces with darker skin7. “The assessment process is in‑
credibly immature. Every time we understand a new dimension to evaluate, we find
out that the industry is not performing at the level that it thought it did,” Raji says. It is
important, she says, that companies disclose more about how they test and train their
facial‑recognition systems, and consult with the communities in which the technology
will be used.

Technical standards cannot stop facial‑recognition systems from being used in discrimi‑
natory ways, says Amba Kak, a legal scholar at New York University’s AI Now Institute.
“Are these systems going to be another tool to propagate endemic discriminatory prac‑
tices in policing?” Human operators often end up confirming a system’s biases rather
than correcting it, Kak adds. Studies such as the Scotland Yard external review show
that humans tend to overestimate the technology’s credibility, even when they see the
computer’s false match next to the real face. “Just putting in a clause ‘make sure there
is a human in the loop’ is not enough,” she says.

Kak and others support a moratorium on any use of facial recognition, not just because
the technology isn’t good enough yet, but also because there needs to be a broader dis‑
cussion of how to prevent it from being misused. The technology will improve, Murray
says, but doubts will remain over the legitimacy of operating a permanent dragnet on
innocent people, and over the criteria by which people are put on a watch list.

Concerns about privacy, ethics and human rights will grow. The world’s largest bio‑
metric programme, in India, involves using facial recognition to build a giant national
ID card system called Aadhaar. Anyone who lives in India can go to an Aadhaar centre
and have their picture taken. The system compares the photo with existing records on
1.3 billion people to make sure the applicant hasn’t already registered under a differ‑
ent name. “It’s a mind‑boggling system,” says Jain, who has been a consultant for it.
“The beauty of it is, it ensures one person has only one ID.” But critics say it turns non‑
card owners into second‑class citizens, and some allege it was used to purge legitimate
citizens from voter rolls ahead of elections.

And the most notorious use of biometric technology is the surveillance state set up by
the Chinese government in the Xinjiang province, where facial‑recognition algorithms
are used to help single out and persecute people from religious minorities (see page
354).

“At this point in history, we need to be a lot more sceptical of claims that you need ever‑
more‑precise forms of public surveillance,” says Kate Crawford, a computer scientist at
New York University and co‑director of the AI Now Institute. In August 2019, Craw‑
ford called for a moratorium on governments’ use of facial‑recognition algorithms (K.
Crawford Nature 572, 565; 2019).


5.1.4 Discrimination

FRT is discriminatory by design—the whole point of the technology is to identify people, which exacerbates inequality.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

Third, facial recognition surveillance is discriminatory by design. A lot of attention
has been given to the fact that the technology has an accuracy problem. It is less accu‑
rate when pointed at women, transgender and non‑binary people and people of colour,
meaning these people have a higher risk of being misidentified. It is unclear if this prob‑
lem can be solved. Although it is completely legitimate to be concerned about the harms
of this technology burdening some more than others ‑ and hardly ever are the ‘some’ the
people designing, engineering and signing off on the deployment of the technology ‑ we
should not forget that as long as technology that is built to identify, profile and analyse
people in order to treat them differently, is deployed by people within societies that suf‑
fer from systemic inequality, the technology will most likely reinforce and exacerbate
those inequalities.

FRT disproportionately impacts people of color.

Lynch 20

Jennifer Lynch (Surveillance Litigation Director at the Electronic Frontier Foundation),
Face Off: Law Enforcement Use of Face Recognition Technology, Electronic Frontier
Foundation, 20 April 2020. http://dx.doi.org/10.2139/ssrn.3909038

Disproportionate Impact on People of Color

The false‑positive risks discussed above will likely disproportionately impact African
Americans and other people of color.15 Research—including research jointly con‑
ducted by one of FBI’s senior photographic technologists—found that face recognition
misidentified African Americans and ethnic minorities, young people, and women
at higher rates than whites, older people, and men, respectively.16 Due to years of
well‑documented racially‑biased police practices, all criminal databases—including
mugshot databases—include a disproportionate number of African Americans, Lati‑
nos, and immigrants.17 These two facts mean people of color will likely shoulder
exponentially more of the burden of face recognition inaccuracies than whites.

False positives can alter the traditional presumption of innocence in criminal cases by
placing more of a burden on suspects and defendants to show they are not who the
system identifies them to be. This is true even if a face recognition system offers several
results for a search instead of one; each of the people identified could be brought in for
questioning, even if there is nothing else linking them to the crime. Former German
Federal Data Protection Commissioner Peter Schaar has noted that false positives in
face recognition systems pose a large problem for democratic societies: “[I]n the event
of a genuine hunt, [they] render innocent people suspects for a time, create a need for
justification on their part and make further checks by the authorities unavoidable.”18

Face recognition accuracy problems also unfairly impact African American and minor‑
ity job seekers who must submit to background checks. Employers regularly rely on
FBI’s data, for example, when conducting background checks. If job seekers’ faces are
matched mistakenly to mug shots in the criminal database, they could be denied em‑
ployment through no fault of their own. Even if job seekers are properly matched to
a criminal mug shot, minority job seekers will be disproportionately impacted due to
the notorious unreliability of FBI records as a whole. At least 50 percent of FBI’s arrest
records fail to include information on the final disposition of the case: whether a person
was convicted, acquitted, or if charges against them were dropped.19 Because at least 30
percent of people arrested are never charged with or convicted of any crime, this means
a high percentage of FBI’s records incorrectly indicate a link to crime. If these arrest
records are not updated with final disposition information, hundreds of thousands of
Americans searching for jobs could be prejudiced and lose work. Due to disproportion‑
ately high arrest rates, this uniquely impacts people of color.


5.1.5 Privacy—Link

FRT use impinges on the right to privacy.

Nakar and Greenbaum 17

Sharon Nakar (Fellow at the Zvi Meitar Institute for Legal Implications of Emerging
Technologies, Radzyner Law School, Interdisciplinary Center Herzliya, Israel) and Dov
Greenbaum (JD, PhD is an Associate Professor of Molecular Biophysics and Biochem‑
istry (adj) at Yale University), Now You See Me. Now You Still Do: Facial Recognition
technology and the Growing Lack of Privacy, Boston University School of Law Jour‑
nal of Science & Technology Law, Vol 23.1, Winter 2017, https://www.bu.edu/jostl/files/
2017/04/Greenbaum‑Online.pdf

In general, the expansive use of FRT raises several universal ethical concerns. Most
prominently, the tension between the technology and the right to privacy highlight the
dialectic between national security and law enforcement, economic efficiency or pub‑
lic health promoted through the application of facial recognition systems, on the one
side, and concerns relating to the potential for disproportionately violating fundamen‑
tal principles of our society such as the right to personal autonomy, anonymity, to be
forgotten, to control one’s own personal identifying information, and the personal right
to protect one’s own body, on the other.

Additionally, there are some less obvious social justice issues that can arise, as those
who can afford plastic surgery procedures to alter a legally problematic profile will do
so, allowing the wealthy to escape some of the trappings of FRT.66

Most importantly, FRT impinges on our privacy. The right to privacy is a fundamental
human right as described in Article 12 of the Universal Declaration of Human Rights.67
Importantly, it shapes the balance of power between the citizen and the government,
between the individual and large business entities and between man and his fellow man.
It is a precondition for the development of democracy and freedom. Without privacy there
is no freedom of speech, freedom of religion or freedom of movement.

In recent years, the right to privacy has been substantially eroded by new technologies
that continually threaten it.68 To some degree this is our own fault. Witness the plethora
of banal, and no so banal information that we readily shout out to the world on social
media. “The rise of social networking online means that people no longer have an expec‑
tation of privacy . . . the privacy was no longer a ‘social norm.’ ”69 However, to some
degree this is not our fault. Granted the new reality of a camera(phone) in every pocket
is a consumer failing, but the sprouting of closed circuit cameras on every street corner
is the fault of the government and the growing reliance on overly‑pervasive surveillance
for preventing crime.70

FRT decimates privacy and makes widespread discrimination a practical possibility.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Facial recognition’s most dangerous attribute—that it enables unavoidable, dragnet
surveillance of law‑abiding activity on a massive scale—helps explain why its harms
to privacy, free expression, and due process are so broadly felt. As one example, more
than 227 million Americans held driver’s licenses in 2019,77 and at least 26 states
permit law enforcement to run facial recognition searches on those databases.78 The
landmark Georgetown report estimated in 2016 that as many as one in two Americans
have had their photos searched in that way.79 As both cameras and facial recognition
technologies grow cheaper and more advanced, and more institutions have begun to
implement various forms of the technology, the surveillance becomes harder to escape.
If you have a drivers’ license, walk around a public place monitored by cameras, or use
social media, your face is most likely in a facial recognition database (with or without
your knowledge).

Other privacy risks are similar to those created by other biometrics, such as fingerprints,
palm prints, or irises, but are still not quite as severe as the implications of facial recog‑
nition. Photographs are used far more widely for identification, allowing the creation of
databases from photographs collected for different purposes. As one example, no one
is posting pictures of their palm prints on Facebook—they’re posting pictures of their
faces. Facial recognition can be used to quickly review faces in large crowds in a way
that iris recognition, palmprint recognition, and fingerprint recognition cannot be.80

Facial recognition technologies are particularly corrosive of practical obscurity, the ef‑
fect of realistic constraints such as cost, feasibility, volume, and even the fallibility of
human memory on the functional availability, and thus privacy implications, of osten‑
sibly available information.81 Facial recognition searches and surveillance erode the
barriers of practical obscurity by enabling the searcher or watcher to connect a physi‑
cal face to a name and list of facts about the subject, and then combine photographic
databases that were compiled under different circumstances. Woodrow Hartzog and
Evan Selinger have written extensively about how facial recognition’s threat to prac‑
tical obscurity makes it a uniquely dangerous surveillance technology,82 and as they
point out,83 the invasions that facial recognition technologies enable make the previ‑
ous assumptions one could make about privacy in public obsolete. The risk assumed
by simply venturing into the public square is categorically different when law enforce‑
ment is able to cheaply, quickly, and quietly identify you on the spot, even when you
have had no previous contact with law enforcement whatsoever.

By relying on photos taken in a range of circumstances, facial recognition systems col‑
lapse contextual distinctions that are a crucial component of privacy.84 Law enforce‑
ment systems run comparison searches on mugshot photos (meaning that the subject
may not have been convicted of a crime), photos from drivers’ licenses, and photos
from social media, while the algorithms they’re relying on have likely been trained on
pictures obtained without the subjects’ consent. Simply put, photographs are collected
and disseminated for all kinds of reasons that may implicate very different parts of a
person’s life.

This context collapse is only likely to accelerate. The race to improve facial recognition
algorithms by training them on larger and more diverse datasets has incentivized re‑
searchers and companies to obtain as many useable photographs as they can through
whatever means they can.85 One notorious company, Clearview AI, enables users to
take a picture of someone, upload it, and quickly access publicly available photos of
that person scraped from millions of websites, along with links to where those pictures
were originally found.86 As facial recognition technologies become even cheaper and
easier to use, and the social norms around their use continue to evolve, the boundaries
between how the pictures were made available and how researchers, companies, and
the government will use them will only crumble faster.

Attempts to infer personal attributes or emotions from someone’s facial expression also
invite privacy invasions and discrimination.87 The long and ugly history of pseudosci‑
entific attempts to connect physical appearance to mental and moral aptitude will not
be improved or corrected by incorporating those methods into algorithms. Systems that
promise to “assess criminality” or assess a job applicant’s candidacy for the position will
only reify existing inequality by providing a supposedly scientific justification for dis‑
crimination.88 Studies have found that not only are the claims made by the companies
selling emotional analysis products unsupported,89 but that these systems introduce
an additional form of racial bias by misinterpreting the facial expressions of Black peo‑
ple,90 generally providing them with more negative scores on average than people of
other ethnicities. As fairness in machine learning expert Meredith Whittaker noted in
testimony before the House of Representatives, emotional analysis technologies are be‑
ing deployed in all sorts of contexts where erroneous assessments could limit the life
opportunities and well‑being of the subject, from assessing job candidates, to attempt‑
ing to gauge a patient’s pain, discerning which shoppers are most likely to shoplift, de‑
termining which students in a classroom are paying attention, or assessing someone’s
sexuality.91


5.1.6 Privacy—Impact

Information is power and privacy is how people control their own lives.

Richards 21

Neil Richards (the Koch Distinguished Professor in Law and co‑director of the
Cordell Institute for Policy in Medicine & Law at Washington University in St. Louis),
interviewed by Neil Schoenherr, “Is privacy dead?,” Source, 6 December 2021,
https://source.wustl.edu/2021/12/is‑privacy‑dead/, accessed 10 March 2023

Simply put, privacy matters because information is power and human information con‑
fers power over human beings like you and me. Privacy rules interrupt, shape and
constrain these uses of our information to monitor, segment, nudge and manipulate us,
and we need better privacy rules to advance important human values like the devel‑
opment of our identities, the exercise of our political freedom, and our protection as
consumers in a digital marketplace. Privacy rules allow us to obtain these undeniably
good things, and we need to fight for better ones.

Privacy is a bulwark against unrestrained power.

Richards 22

Neil Richards (the Koch Distinguished Professor in Law and co‑director of the Cordell
Institute for Policy in Medicine & Law at Washington University in St. Louis), Why Pri‑
vacy Matters, pgs. 208‑209, Oxford University Press, 2022.

That honest and nuanced understanding of privacy is precisely what I have tried to of‑
fer in this book. And now that we have seen the argument in all of its detail, it will be
helpful to put the pieces back together so that we can appreciate it as a coherent (and
hopefully persuasive) whole. To recap: Privacy is the degree to which human infor‑
mation neither is known nor used. Human information confers power over humans.
Because privacy is about power, we should think about it in terms of the rules that gov‑
ern human information, rules that constrain and channel that power. Because human
information rules of some sort are inevitable, we should think about privacy in instru‑
mental terms to promote human values. This focus allows us to dispense with some of
the myths about privacy, like that it’s about hiding dark secrets, that it’s about control
or creepiness, and that it is dying. Privacy isn’t dying, but it is up for grabs. We should
craft our human information rules to promote human values like human identity, politi‑
cal freedom, and consumer protection the way we have crafted our rules for free speech
around the search for truth, self‑government (which is very close to “freedom”), and
autonomy (which is related to “identity”). If we do that, then we might have an infor‑
mation revolution that is closer to the promises of the tech industry and lives up to those
promises of human connection, empowerment, and flourishing with which the internet
broke onto the public stage in the 1990s. That, in a nutshell, is my argument, and it’s
what I hope that you, the reader, will take with you once you close this book. You may
not agree with it, but I hope that you take it seriously and in the good faith with which
it is intended.

Privacy is crucial to identity formation.

Richards 22

Neil Richards (the Koch Distinguished Professor in Law and co‑director of the Cordell
Institute for Policy in Medicine & Law at Washington University in St. Louis), Why Pri‑
vacy Matters, pg. 130, Oxford University Press, 2022.

To sum up, privacy matters if we are interested in developing a diversity of interests,
opinions, and identities as a society. It also matters if we are concerned that social norms
may be stifling or oppressive, or that filter bubbles and echo chambers might divide and
polarize us. And it matters if we agree with Cohen that a critical perspective and dis‑
tance on our social norms is important. That is, privacy matters if we wish to encourage
human flourishing through the development of identities that value difference, indi‑
viduality, and eccentricity rather than in stifling those characteristics in favor of bland
conformity. These processes can be threatened in a variety of ways, such as by forcing,
filtering, and exposure. But privacy supplies spaces in which the personal and social
processes of identity development and experimentation can flourish. In this way, by
shielding and supporting our efforts to develop actually authentic, messy, and change‑
able identities, privacy matters because it enables us to be human.

Surveillance destroys the possibility of authentic freedom.

Richards 22

Neil Richards (the Koch Distinguished Professor in Law and co‑director of the Cordell
Institute for Policy in Medicine & Law at Washington University in St. Louis), Why Pri‑
vacy Matters, pgs. 162‑163, Oxford University Press, 2022.

Surveillance, as we’ve seen, is complicated. It’s traditionally thought of as a govern‑
ment activity, but it’s also widely practiced by the private sector, with at best a blurry
and permeable barrier separating the two. It’s an inextricable part of every modern
political state, a highly profitable business, and a perpetually lurking threat to demo‑
cratic self‑government. These threats can take many forms, from inhibiting dissenting
thought and voices to blackmail, microtargeted persuasion, and discrimination. It can
boost democratic participation and legitimacy when used to get out the vote, or it can
make us less free through targeted voter suppression and manipulation. Privacy dis‑
rupts these practices, whether of sorting, discrimination, or other forms of oppression.
Where surveillance allows microtargeted manipulation, for example, privacy protects
the processes of democratic freedom, whether we think of them in terms of autonomy
or a broader sense of human empowerment that transcends some of the limitations of
traditional liberal theory.

By contrast, rules protecting citizens (and noncitizens) against surveillance can safe‑
guard political freedom and political equality. At the most basic level, data that is never
collected cannot menace intellectual privacy, nor can it be used to blackmail, persuade,
or sort people for discriminatory treatment. Surveillance may flow like liquid between
the public and private spheres, but law can certainly restrict the government’s ability to
purchase or otherwise access private data, as well as its ability to make sensitive infor‑
mation available to the public. American law already restricts these activities in some
areas. Indeed, these rules have constitutional foundations in the Fourth Amendment’s
warrant requirement and in the constitutional right of information privacy recognized
in Whalen v. Roe (1977), which envisions constitutional limits on the government’s abil‑
ity to disclose sensitive data about its citizens, such as their medical records.110 Changes
in technology, changes in corporate business practices, and the increasingly liquid na‑
ture of surveillance have enabled many of the new forms of surveillance discussed in
this chapter, and the arms race between surveillance and law continues, as it must, if
we are to continue to protect democracy from the power effects of human information
technologies.

The right to claim privacy is thus the right to be a citizen entrusted with the power to
keep secrets from the government, the right to exercise the freedom so lauded in ortho‑
dox American political theory and popular discourse. It is a claim to equal citizenship,
with all the responsibility that comes with such a claim, and an essential and fundamen‑
tal right in any meaningful account of what it means to be free. In order to achieve this
promise in practice, privacy protections must be extended as a shield against inequality
of treatment. Privacy enables democracy, and it can be used to secure democracy in the
digital age by limiting access to and regulating the use of new technologies of suppres‑
sion and oppression powered by human information. We saw in chapter 4 how privacy
rules that allow identity development help us to develop as humans. To this list we can
now add that privacy rules can serve the function of allowing us to exist as free and
self‑governing citizens.

Privacy is crucial to being able to participate in our digital society.

Richards 22

Neil Richards (the Koch Distinguished Professor in Law and co‑director of the Cordell
Institute for Policy in Medicine & Law at Washington University in St. Louis), Why Pri‑
vacy Matters, pg. 212, Oxford University Press, 2022.

As this book has explained, privacy protections have become central to our ability to
participate in our digital society as individuals, citizens, and situated consumers. Even
though we might think of privacy as being in tension with being part of a society, I’ve
tried to argue that privacy makes being social possible. By allowing us to manage our
boundaries and by protecting us from information power, privacy has become as neces‑
sary to our ability to participate fully in society as the First Amendment is necessary to
our political freedom. Without privacy, we lack respite, our identities are shaped, our
votes are nudged, we are exposed as consumers, and we cannot trust the digital world
in which we live. Such a future would be a bleak one. We must reject it and work hard
to ensure that it never comes to pass. If we are to learn from the mistakes of the past
as we approach the future, we should recognize that privacy is fundamental to our dig‑
ital lives and that a society without meaningful protections for privacy is inadequate.
Privacy is a fundamental right, and we should recognize it—and broadly protect it—as
such. As a fundamental right, privacy cannot simply be one that we can waive implic‑
itly. Privacy must be built into the structure of our society the way that other protective
structures are built in—like the rule of law, independent judiciary, a free press, and
free expression in pursuit of democratic self‑government and artistic expression.† And
it cannot be protected just against the state. American law has recognized privacy as a
fundamental right against the government since the 1960s, but its inability to articulate
and protect privacy against private actors has a lot to do with the mess that we are in.

Privacy as an idea is under threat, to be sure. But it is not dead, and we must not give
up fighting for it. Meaningful protection of our privacy safeguards all of these values
and more. It is indispensable, and we must fight to preserve it along with our other
hard‑won fundamental rights.

The destruction of privacy will ruin our digital future.

Richards 22

Neil Richards (the Koch Distinguished Professor in Law and co‑director of the Cordell
Institute for Policy in Medicine & Law at Washington University in St. Louis), Why Pri‑
vacy Matters, pgs. 212‑213, Oxford University Press, 2022.

I’ll offer one final thought about privacy. We are living in an information society, which
means that our ability to have a meaningful say in how our human information is used
is everything. Information is power, after all, and this is an information society. Within
a few short years we will likely be surrounded by self‑driving cars, connected smart
homes, ubiquitous tracking of employees, augmented reality, artificial intelligence, and
“precision medicine.” Other marvels of the information age lie before us. As technologi‑
cal marvels do, they bring the promise of innovative and disruptive good and innovative
and disruptive harm. As we confront these disruptions and challenges, human infor‑
mation will only become more important, not less, and privacy rules will continue to
increasingly define how we live our lives as humans, citizens, and situated consumers.
Information—particularly the ability to exploit information—is power. Human infor‑
mation and the technologies that run on it are becoming the foundation of our society.
As more and more human information is collected, and as more and more activities are
fueled by the exploitation of that data, the rules we set (or don’t set) for our human
information will determine what kind of society we live in. Is it one in which people
are empowered to develop their identities, to be free from observation and domination
by the state, and to fairly participate as consumers in the economy without being ma‑
nipulated by the data‑informed social sciences? Or will it be a society ordered from the
top down, in which governments, companies, and other institutions can track and ma‑
nipulate us for their own purposes, in which our data serves their ends rather than our
own? If the past is any guide to the future, the answer will most likely lie somewhere
in between these two poles. Yet where that answer lies is important—at least if we care
about identity development, political freedom, consumer protection, and building trust
in our digital future. Without a doubt, in the evolving information society that we live
in, privacy has become the whole ball game. And that’s why privacy matters.

5.1.7 Free Expression

FRT threatens free expression by chilling behavior—empirical examples show that it severely undermines core democratic values.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Facial recognition technologies also threaten free expression. As Neil Richards has writ‑
ten about at length, intellectual privacy is a necessary condition for the exercise of the
mental autonomy that free expression protections are intended to facilitate.92 Without
the ability to think, write, and communicate in private, we self‑censor and choose not
to experiment with ideas that risk eliciting social, legal, or physical consequences.93 As
Richards puts it, “surveillance inclines us to the mainstream and the boring.” 94

The knowledge that law enforcement is capable of quickly and cheaply identifying peo‑
ple in a crowd can deter political protest, as people may be correctly afraid of reprisals.
People may be concerned about having their photos collected and used to identify them
in real life, which may prevent them from using social media to connect with family,
friends, readers, audiences for products they create, and others. The inability to pre‑
serve anonymity in public and on the internet corrodes the ability of anyone afraid of
having their identity used against them to speak freely.95

Nor are such fears irrational. In the 2016 Georgetown report examining how facial recog‑
nition technologies are used by law enforcement, only one of the fifty‑two law enforce‑
ment agencies they examined had a use policy that expressly prohibited officers from
using them to track people engaged in “political, religious, or other protected speech.”
96 In 2016, the ACLU of Northern California found that the Baltimore Police Department
used facial recognition to identify protestors in real time,97 and in 2018, the Secret Ser‑
vice also began testing a facial recognition system in the public areas around the White
House in order to help it identify “known subjects of interest.” 98 More recently, police
in Delhi used facial recognition software to screen crowds at a rally for Prime Minister
Modi;99 in Hong Kong, law enforcement authorities have access to facial recognition
technology that can identify protestors,100 many of whom took to covering their faces
for that very reason.101 Surveillance of political protestors is a shameful tradition that
is no less harmful via digital methods than it is through analog ones.102 The threats
that facial recognition poses to the democratic rights of free assembly, expression, and
political dissent are concrete, severe, and broadly applicable.

FRT chills expression, which impacts clearly protected Constitutional rights like free assembly and privacy.

Lynch 20

Jennifer Lynch (Surveillance Litigation Director at the Electronic Frontier Foundation),
Face Off: Law Enforcement Use of Face Recognition Technology, Electronic Frontier
Foundation, 20 April 2020. http://dx.doi.org/10.2139/ssrn.3909038

Unique Impact on Civil Liberties

Some proposed uses of face recognition would clearly impact Fourth Amendment rights
and First Amendment‑protected activities and would chill speech. If law enforcement
agencies add crowd, security camera, and DMV photographs into their databases, any‑
one could end up in a database without their knowledge—even if they are not suspected
of a crime—by being in the wrong place at the wrong time, by fitting a stereotype that
some in society have decided is a threat, or by engaging in “suspect” activities such as
political protest in public spaces rife with cameras. Given law enforcement’s history of
misuse of data gathered based on people’s religious beliefs, race, ethnicity, and political
leanings, including during former FBI director J. Edgar Hoover’s long tenure and during
the years following September 11, 2001,8 Americans have good reason to be concerned
about expanding government face recognition databases.

Like other biometrics programs that collect, store, share, and combine sensitive and
unique data, face recognition technology poses critical threats to privacy and civil lib‑
erties. Our biometrics are unique to each of us, can’t be changed, and often are easily
accessible. Face recognition, though, takes the risks inherent in other biometrics to a
new level because it is much more difficult to prevent the collection of an image of your
face. We expose our faces to public view every time we go outside, and many of us share
images of our faces online with almost no restrictions on who may access them. Face
recognition therefore allows for covert, remote, and mass capture and identification of
images.9 The photos that may end up in a database could include not just a person’s
face but also how she is dressed and possibly whom she is with.

Face recognition and the accumulation of easily identifiable photographs implicate free
speech and freedom of association rights and values under the First Amendment, espe‑
cially because face‑identifying photographs of crowds or political protests can be cap‑
tured in public, online, and through public and semipublic social media sites without
individuals’ knowledge.

Law enforcement has already used face recognition technology at political protests.
Marketing materials from the social media monitoring company Geofeedia bragged
that, during the protests surrounding the death of Freddie Gray while in police cus‑
tody, the Baltimore Police Department ran social media photos against a face recogni‑
tion database to identify protesters and arrest them.10

Government surveillance like this can have a real chilling effect on Americans’ willing‑
ness to engage in public debate and to associate with others whose values, religion, or
political views may be considered different from their own. For example, researchers
have long studied the “spiral of silence”— the significant chilling effect on an individ‑
ual’s willingness to publicly disclose political views when they believe their views dif‑
fer from the majority.11 In 2016, research on Facebook users documented the silencing
effect on participants’ dissenting opinions in the wake of widespread knowledge of gov‑
ernment surveillance—participants were far less likely to express negative views of gov‑
ernment surveillance on Facebook when they perceived those views were outside the
norm.12

In 2013, a study involving Muslims in New York and New Jersey found that excessive
police surveillance in Muslim communities had a significant chilling effect on First
Amendment‑protected activities.13 Specifically, people were less inclined to attend
mosques they thought were under government surveillance, to engage in religious
practices in public, or even to dress or grow their hair in ways that might subject them
to surveillance based on their religion. People were also less likely to engage with
others in their community who they did not know for fear any such person could
either be a government informant or a radical. Parents discouraged their children from
participating in Muslim social, religious, or political movements. Business owners took
conscious steps to mute political discussion by turning off Al‑Jazeera in their stores,
and activists self‑censored their comments on Facebook.14

These examples show the real risks to First Amendment‑protected speech and activities
from excessive government surveillance—especially when that speech represents a mi‑
nority or disfavored viewpoint. While we do not yet appear to be at point where face
recognition is being used broadly to monitor the public, we are at a stage where the
government is building the databases to make that monitoring possible. We must place
meaningful checks on government use of face recognition now before we reach a point
of no return.

5.1.8 FRT Bad

FRT is uniquely dangerous—face prints are hard to change, existing legacy databases are massive, data inputs are already everywhere, uses creep past tipping points, and faces are central to identity.

Hartzog and Selinger 18

Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity) and Dr. Evan Selinger (professor of philosophy at Rochester Institute of Tech‑
nology), “Facial Recognition Is the Perfect Tool for Oppression,” Medium, 3 August
2018, https://medium.com/s/story/facial‑recognition‑is‑the‑perfect‑tool‑for‑oppression‑
bc2a08f0fe66, accessed 9 March 2023

Despite the problems our colleagues have documented, you might be skeptical that a
ban is needed. After all, other technologies pose similar threats: geolocation data, social
media data, search history data, and so many other components of our big data trails
can be highly revealing in themselves and downright soul‑searching in the aggregate.
And yet, facial recognition remains uniquely dangerous. Even among biometrics, such
as fingerprints, DNA samples, and iris scans, facial recognition stands apart.

Systems that use face prints have five distinguishing features that justify singling them
out for a ban. First, faces are hard to hide or change. They can’t be encrypted, unlike a
hard drive, email, or text message. They are remotely capturable from distant cameras
and increasingly inexpensive to obtain and store in the cloud — a feature that, itself,
drives surveillance creep.

Second, there is an existing legacy of name and face databases, such as for driver’s
licenses, mugshots, and social media profiles. This makes further exploitation easy
through “plug and play” mechanisms.

Third, unlike traditional surveillance systems, which frequently require new, expensive
hardware or new data sources, the data inputs for facial recognition are widespread and
in the field right now, namely with CCTV and officer‑worn body cams.

Fourth, tipping point creep. Any database of faces created to identify individuals ar‑
rested or caught on camera requires creating matching databases that, with a few lines
of code, can be applied to analyze body cam or CCTV feeds in real time. New York
Governor Andrew Cuomo perfectly expressed the logic of facial recognition creep, in‑
sisting that vehicle license‑plate scanning is insignificant compared to what cameras can
do once enabled with facial recognition tech. “When it reads that license plate, it reads
it for scofflaws…[but] the toll is almost the least significant contribution that this elec‑
tronic equipment can actually perform,” Cuomo said. “We are now moving to facial
recognition technology, which takes it to a whole new level, where it can see the face of
the person in the car and run that technology against databases.” If you build it, they
will surveil.

Finally, it bears noting that faces, unlike fingerprints, gait, or iris patterns, are central
to our identity. Faces are conduits between our on‑ and offline lives, and they can be
the thread that connects all of our real‑name, anonymous, and pseudonymous activities.
It’s easy to think people don’t have a strong privacy interest in faces because many of
us routinely show them in public. Indeed, outside of areas where burkas are common,
hiding our faces often prompts suspicion.

The thing is we actually do have a privacy interest in our faces, and this is because hu‑
mans have historically developed the values and institutions associated with privacy
protections during periods where it’s been difficult to identify most people we don’t
know. Thanks to biological constraints, the human memory is limited; without techno‑
logical augmentation, we can remember only so many faces. And thanks to population
size and distribution, we’ll encounter only so many people over the course of our life‑
times. These limitations create obscurity zones, and because of them, people have had
great success hiding in public.

Recent Supreme Court decisions about the 4th Amendment have shown that fighting
for privacy protections in public spaces isn’t antiquated. Just this summer, in Carpenter
v. United States, our highest court ruled by a 5–4 vote that the Constitution protects
cellphone location data. In the majority opinion, Chief Justice John Roberts wrote, “A
person does not surrender all Fourth Amendment protection by venturing into the pub‑
lic sphere. To the contrary, ‘what [one] seeks to preserve as private, even in an area
accessible to the public, may be constitutionally protected.’ ”

Facial recognition is a unique risk—it allows a different kind of tracking that exceeds anything prior in scope and scale.

Garvie et al. 16

Clare Garvie (senior associate with the Center on Privacy & Technology at Georgetown
Law), Alvaro Bedoya, Jonathan Frankle, “The Perpetual Line‑Up,” Georgetown Law
Center on Privacy & Technology, 18 October 2016, https://www.perpetuallineup.org/,
accessed 9 March 2023

Here, we can begin to see how face recognition creates opportunities for tracking—and
risks—that other biometrics, like fingerprints, do not. Along with names, faces are
the most prominent identifiers in human society—online and offline. Our faces—not
fingerprints—are on our driver’s licenses, passports, social media pages, and online
dating profiles. Except for extreme weather, holidays, and religious restrictions, it is
generally not considered socially acceptable to cover one’s face; often, it’s illegal.14 You
only leave your fingerprints on the things you touch. When you walk outside, your
face is captured by every smartphone and security camera pointed your way, whether
or not you can see them.

Face recognition isn’t just a different biometric; those differences allow for a different
kind of tracking that can occur from far away, in secret, and on large numbers of people.

Professor Laura Donohue explains that up until the 21st century, governments used bio‑
metric identification in a discrete, one‑time manner to identify specific individuals. This
identification has usually required that person’s proximity or cooperation—making the
process transparent to that person. These identifications have typically occurred in the
course of detention or in a secure government facility. Donohue refers to this form of
biometric identification as Immediate Biometric Identification, or IBI. A prime example
of IBI would be the practice of fingerprinting someone during booking for an arrest.

In its most advanced forms, face recognition allows for a different kind of tracking.
Donohue calls it Remote Biometric Identification, or RBI. In RBI, the government uses
biometric technology to identify multiple people in a continuous, ongoing manner. It
can identify them from afar, in public spaces. Because of this, the government does not
need to notify those people or get their consent. Identification can be done in secret.

This is not business as usual: This is a capability that is “significantly different from that
which the government has held at any point in U.S. history.”15

5.1.9 Other BRT is Dangerous

Other biometric recognition technology is just as dangerous as FRT.

Zeng et al. 19

Yi Zeng (Research Center for AI Ethics and Safety, Beijing Academy of Artificial Intel‑
ligence, China), Enmeng Lu (Institute of Automation, Chinese Academy of Sciences,
China), Yinqian Sun, Ruochen Tian, Responsible Facial Recognition and Beyond, Com‑
puter Vision and Pattern Recognition, 2019, https://doi.org/10.48550/arXiv.1909.12935

Similar Risks in Other Biometric Recognition

Facial recognition, as one type of biometric recognition technology, is not all that we
should care about. In fact, similar risks exist in almost every type of biometric recogni‑
tion, including but not limited to gait recognition, iris recognition, fingerprint recogni‑
tion, and voice recognition.

When it comes to gait recognition, we are also faced with similar risks and ethical issues.
According to a study from the University of California and Beihang University (Zhang,
Wang, and Bhanu 2010), ethnics can be detected using human’s gait. They can distin‑
guish people from East Asia and South America with about 80% for accuracy only based
on their gaits. Moreover, even in 2005, gender classification has achieved over 95% for
accuracy using gait, as suggested in the work of Dankook University and Southampton
University(Yoo, Hwang, and Nixon 2005). In summary, gait recognition can also ex‑
tract features like gender and ethnics just as facial recognition does, which could bring
a series of problems related to algorithmic bias.

Iris recognition has similar risks. Evaluations have shown that gender, eye color, race
will have a different impact on the accuracy in iris recognition. Recognition on the
UND iris database shows that the accuracy on male is 96.67%, while only 86% on female
(Tapia, Perez, and Bowyer 2014). An evaluation on different algorithms also shows that
some of them perform better on male data, while some of them perform better on fe‑
male data (Quinn et al. 2018). For eye color, 13 algorithms perform better on dark eyes
(brown and black), while the rest of 27 algorithms perform better on light eyes (blue,
green and grey). Concerning race, the accuracy for white people is the best, while for
Asian people is the worst (Quinn et al. 2018). In (Howard and Etter 2013), similar re‑
sults were presented. The false rejection rates for different races are African American
> Asian > Hispanic > Caucasian. While the false rejection rates for eye colors are Black >
Brown > Blue > Green > Hazel > Blue‑Green (Howard and Etter 2013).

For fingerprint recognition, the accuracy on gender recognition could reach at least the
accuracy of 97% (Gornale, Patil, and Veersheety 2016), which means that one can extract
the gender information, and this information could have similar risks to be used with
gender discrimination. In addition, the recognition accuracy for male and female are
91.69% and 84.69%, and future efforts are needed to make a balance (Wadhwa, Kaur,
and Singh 2013).

5.1.10 Surveillance

FRT enables damaging and oppressive surveillance.

Selinger and Hartzog 18

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Univer‑
sity), “Amazon Needs to Stop Providing Facial Recognition Tech for the Government,”
Medium, 22 June 2018, https://medium.com/s/story/amazon‑needs‑to‑stop‑providing‑
facial‑recognition‑tech‑for‑the‑government‑795741a016a6, accessed 10 March 2023

In sum, there is a general problem (no anonymity in public) with unevenly distributed
consequence (more threatening to minority and other vulnerable groups). Facial recog‑
nition enables surveillance that is oppressive in its own right; it’s also the key to perpetu‑
ating other harms, civil rights violations, and dubious practices. These include rampant,
nontransparent, targeted drone strikes; overreaching social credit systems that exercise
power through blacklisting; and relentless enforcement of even the most trivial of laws,
like jaywalking and failing to properly sort your garbage cans.

Biometric recognition technology enables mass surveillance.

Amnesty International 21

Amnesty International (international non‑governmental organization focused on
human rights), “Press Release: Amnesty International and more than 170 organisations
call for a ban on biometric surveillance,” 7 June 2021, https://www.amnesty.org/en/latest/press‑
release/2021/06/amnesty‑international‑and‑more‑than‑170‑organisations‑call‑for‑a‑
ban‑on‑biometric‑surveillance/, accessed 9 March 2023

Our call for a ban specifically focuses on, but is not limited to, the use of these technolo‑
gies to identify or distinguish a person from a larger set of individuals, also known as
facial or biometric “identification” (i.e. one‑to‑many matching). We are concerned about
the use of these technologies to identify, single out, or track individuals using their face,
gait, voice, personal appearance, or any other biometric identifier in a manner that en‑
ables mass surveillance or discriminatory targeted surveillance, i.e., surveillance that
disproportionately impacts the human rights and civil liberties of religious, ethnic, and
racial minorities, political dissidents, and other marginalized groups. We also acknowl‑
edge that, in certain cases, facial and other biometric “authentication” systems (i.e. one‑
to‑one matching) can be built and used in a manner that equally enables problematic
forms of surveillance, such as by creating large, centralized biometric databases which
can be reused for other purposes.

5.1.11 Dehumanization

FRT reshapes our world to turn us into walking barcodes.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

Finally, with the introduction of certain technologies in society, the underlying assump‑
tions of these technologies are brought along, shaping the way we look at the world.
The word ‘biometrics’ means turning biological characteristics to metrics. However, in
translating our faces into more easily computable data, people are reduced to walking
barcodes.

Some technologies, like emotion detection technology, take it a step further, ‘identify‑
ing’ emotions or personality traits based on facial movements or dimensions. When
‘objective’ value or meaning is attached to these characteristics, we start to tread the wa‑
ters of a discredited pseudo‑science rightfully left behind: physiognomy. The idea is
that it is possible to extract information about a person’s character from the biological
characteristics of their face.

Any of these concerns on their own, should be argument enough for why we should
severely limit the use of facial recognition. Taken together, we believe they convinc‑
ingly lay out why the costs of deploying biometric surveillance technology in the pub‑
lic space are too high, and adding this technology to states’ surveillance capacity would
constitute too big an infringement on our liberties.

FRT is like plutonium—it is toxic at its core because it dehumanizes people by imposing digital epidermalization.

Stark 19

Luke Stark (a postdoctoral researcher in the Fairness, Accountability, Transparency and
Ethics (FATE) Group at Microsoft Research Montreal, and an Affiliate of the Berkman
Klein Center for Internet & Society at Harvard University), “Facial recognition is the plu‑
tonium of AI,” XRDS: Crossroads, The ACM Magazine for Students Volume 25, Num‑
ber 3 (2019), Pages 50‑55, http://dx.doi.org/10.1145/3313129

When, in 1941, Glenn T. Seaborg and his colleagues at the University of California
Berkeley isolated—and subsequently named—plutonium, the radioactive element 93,
Seaborg reportedly suggested the periodic symbol Pu for the discovery. According to
Seaborg, it “sounded like the words a child would exclaim, ‘Pee‑yoo!’ when smelling
something bad” [1]. Plutonium, industrially produced for the American atomic bombs
dropped on the Japanese cities of Hiroshima and Nagasaki in August 1945, was ill fa‑
vored even by its discoverers.

Today, plutonium has very few non‑military uses (its application in nuclear weapons
being, of course, a moral abomination in itself). Plutonium is produced as a byproduct
of uranium‑based nuclear power, and is the chief component of nuclear waste; in minis‑
cule amounts, it is also used as a power source in specialized scientific instruments, such
as aboard space probes. Plutonium has only highly specialized and tightly controlled
uses, and poses such a high risk of toxicity if allowed to proliferate that it is controlled
by international regimes, and not produced at all if possible.

Plutonium, in other words, is an apt material metaphor for digital facial recognition
technologies: Something to be recognized as anathema to the health of human society,
and heavily restricted as a result.

Readers might object that the analogy between plutonium and facial recognition tech‑
nologies is not just alarmist, but nonsensical. Yet in forthcoming work, the University
of Washington’s Anna Lauren Hoffmann and I argue the metaphors we use to make
sense of digital systems can reveal important similarities between a new technology or
practice, and other, older technological problems [2]. By analogizing facial recognition
to plutonium, I want to add two broad points to an increasingly lively debate about the
risks of facial recognition technologies. First, facial recognition technologies, by virtue
of the way they work at a technical level, have insurmountable flaws connected to the
way they schematize human faces. These flaws both create and reinforce discredited
categorizations around gender and race, with socially toxic effects. The second is, in
light of these core flaws, the risks of these technologies vastly outweigh the benefits, in
a way that’s reminiscent of hazardous nuclear technologies. That is why the metaphor
of plutonium is apt. Facial recognition, simply by being designed and built, is intrinsi‑
cally socially toxic, regardless of the intentions of its makers; it needs controls so strict
that it should be banned for almost all practical purposes.

There have been a number of recent warnings about the dangers of facial recognition.
Last August, Woodrow Hartzog and Evan Selinger cautioned that facial recognition
technology “is the most uniquely dangerous surveillance mechanism ever invented”
[3]. Arguing for a total ban, the authors gave several reasons why facial recognition
technologies are uniquely dangerous: images of human faces are hard to hide or change;
there is an existing store of databases, such as of drivers licenses, matching faces to
names; video surveillance mechanisms are cheap and already widespread; and most
crucially, faces, unlike other biometric indicators, are central to our personal identities
and social lives. We can’t escape or permanently hide our faces; as a result, our freedom
to exist outside constant surveillance, Hartzog and Selinger write, is threatened by this
“menace disguised as a gift.”

Hartzog and Selinger are right in their concerns, but the problems with facial recognition
technology go even further. In a recent article, I laid out some of the reasons why facial
recognition is troublesome at the conceptual and technical levels, even if all of the social
use cases laid out by Hartzog and Selinger could be satisfactorily solved [4]. That article,
which explores facial recognition through the lens of digital animation grounded in
systems like Apple’s FaceID, is indebted to a number of brilliant scholars of technology
and race, including Simone Browne, Wendy Hui Kyong Chun, Lisa Nakamura, Sianne
Ngai, and Safiya Umoja Noble. These thinkers (all women of color) are at the forefront
of the critical interrogation of technologies, like facial recognition, both in the role these
systems play in shaping our social lives, as well as how pre‑existing forms of bias, racial
animus, and asymmetries of power are built into novel digital tech.

The fundamental problem with facial recognition technologies is they attach numerical
values to the human face at all. As Browne [5] and other scholars have observed, fa‑
cial recognition technologies and other systems for visually classifying human bodies
through data are inevitably and always means by which “race,” as a constructed cate‑
gory, is defined and made visible. Reducing humans into sets of legible, manipulable
signs has been a hallmark of racializing scientific and administrative techniques going
back several hundred years. The systems used by facial recognition technologies to code
human faces perform an essentializing visual schematization.

These systems thus enact a process Browne terms, “digital epidermalization,” or “the
imposition of race on the body” through the classification and schematization of human
facial features [5]. The imposition of racial categories onto human bodies is of course sci‑
entifically unsound. As a recent op‑ed, authored by more than 60 academics observed,
“a robust body of scholarship recognizes the existence of geographically based genetic
variation in our species, but shows that such variation is not consistent with biological
definitions of race,” and, moreover, that such variation does not “map precisely onto
ever changing socially defined racial groups” [6]. Race, in other words, is a set of cate‑
gories dreamed up by humans and perpetuated by human activity—including through
digital systems of classification.

Marta Maria Maldonado describes this process of racialization as, “the production, re‑
production of and contest over racial meanings and the social structures in which such
meanings become embedded” [7]. “Racial meanings,” she observes, “involve essential‑
izing on the basis of biology or culture.” Essentialization, or abstracting away context
to focus on particular elements, is central to racial animus: Not all essentialization is
racist, but all racism involves some form of essentialization.

Like genetic variation, physiological facial variation is not dispositive of racial cate‑
gories, either biological or sociological. Yet it is precisely because facial recognition
technologies do not “see” in the human sense that they are so dangerous. Facial recog‑
nition involves identifying, extracting, and selecting contrasting patterns in an image,
and then classifying and comparing them to a previously compiled database of other
patterns. Facial recognition technologies assign numerical values to schematic repre‑
sentations of the face, and make comparisons between those values. At a technical level,
it is not possible to separate the work of associating schematically mapped parts of the
face with real humans with quantitative comparison, ordering, and ranking.

Critical race scholars, from W.E.B. DuBois and Frantz Fanon in the early 20th century
to Browne, Achille Mbembe, Eduardo Bonilla‑Silva, and Kimberlé Williams Crenshaw
today, have articulated the connections between systems of racial oppression and quan‑
tification [8]. In the case of facial recognition, the schematization of human facial fea‑
tures is driven by a conceptual logic that these theorists and others, such as the French
philosopher Michel Foucault, have identified as fundamentally racist because it is con‑
cerned with using statistical methods to arbitrarily divide human populations.

This process of biopolitical management is grounded in finding numerical reasons for
construing some groups as subordinate, and then reifying that subordination by wield‑
ing the “charisma of numbers” to claim subordination is a “natural” fact. As such,
racism’s function, as Foucault describes it, is “a way of introducing a break into the do‑
main of life [...] of fragmenting the field of the biological that power controls” [9]. Race
and racism are “the preconditions that make killing acceptable” in societies focused on
making discriminations based on technical norms and standards—the justification in
turning authority’s custodianship of life and living into that of death and dying [10].

Recent work by Joy Buolamwini and Timnit Gebru at MIT [11] documents the existing
bias in facial recognition training sets; the difficulty many commercial facial recognition
systems have in recognizing darker female faces illustrates one aspect of digital epider‑
malization’s privileging of whiteness [12]. As Buolamwini observes, “monitoring phe‑
notypic and demographic accuracy of these systems as well as their use is necessary to
protect citizen rights” [12]. Facial recognition technologies were neither invented for nor
exist in a social vacuum. They are, in Browne’s words, “designed and operated by real
people to sort real people.” As an example of these systems’ discriminatory effects, both
Browne and Buolamwini note how, in Browne’s words, “particular biometric systems
privilege[e] whiteness, or lightness, in the ways in which certain bodies are measured
for enrollment.” By introducing a variety of classifying logics that either reify existing
racial categories or produce new ones, the automated pattern‑generating logics of facial
recognition systems both reproduce systemic inequality and exacerbate it.

Yet even if facial recognition systems were ever able to map each and every human face
with technical perfection, the core conceptual mechanism of facial recognition would
remain, in my view, irredeemably discriminatory. If human societies were not racist,
facial recognition technologies would incline them toward racism; as human societies
are often racist, facial recognition exacerbates that animus. This claim is a diagnostic
one about systemic problems, not a polemic against the designers and makers of these
technologies. The analogy to plutonium is apt: As a radioactive element, plutonium’s
biological toxicity comes from its structure, just as facial recognition’s social toxicity
comes from the very parameters of what its algorithms do.

Like the refining of plutonium, the basic research programs developing facial recogni‑
tion technologies was funded by, but formally separated from, the military [13]. And
like many scientists involved in the Manhattan Project in the 1940s, computer scien‑
tists are also sounding the alarm regarding the technologies they and their colleagues
have made. Yet what is the harm‑to‑benefit ratio around facial recognition technolo‑
gies? Hartzog and Selinger observe, “when technologies become so dangerous, and the
harm‑to‑benefit ratio becomes so imbalanced, categorical bans are worth considering.”
I have laid out the harms inherent in these systems, of which racial categorizing is one
of many. Where, if any, are the benefits?

Proponents of facial recognition point to several arenas in which they claim these tech‑
nologies will bring benefits; these include public safety, consumer convenience, and
the general verification of individual identity online. Yet given the ways facial recog‑
nition systems embed racializing and racist logics into its structure, the potential harm
for these systems’ use in public safety and law enforcement contexts should be obvious;
it is the equivalent of deploying a tactical nuclear weapon to demolish an ordinary of‑
fice building. Reports prepared by the Electronic Frontier Foundation (EFF)’s Jennifer
Lynch [14], and by Clare Garvie, Alvaro Bedoya, and Jonathan Frankle from George‑
town’s Center on Privacy and Technology [15] exhaustively document the dangers to
civil liberties and potentials for racial discrimination posed by facial recognition, as does
AI Now’s recent 2018 report [16].

Racial discrimination by facial recognition is not only a problem in the United States,
as recent reporting on Chinese detention of members of the minority Muslim Uyghurs
in western China make clear. But in the case of the United States, understanding fa‑
cial recognition in its security context as a powerful means toward systemic oppression
against black people highlights just how toxic its use within the broader edifice of securi‑
tization is, and how much that edifice’s other techniques and procedures are structurally
accomplice in racist violences—again, regardless of the individual feelings and beliefs
of the professionals involved in its design and deployment.

Likewise, using facial recognition for more general forms of confirming identity online
raises similar core questions regarding trading off its enormous risks for relatively mea‑
ger gains. Recently, Sebastian Benthall, responding to a question I posed on Twitter of
“When *shouldn’t* you build a machine learning system?” suggested “one should not
build an ML [machine learning] system for making a class of decisions if there is already
a better system for making that decision that does not use ML” [17]. This response high‑
lights how little is to be gained by the widespread deployment of facial recognition to
confirm identity, and how much there is to lose. Why introduce an invasive technol‑
ogy with a wide range of ill effects, when other mechanisms will do as well? In effect,
this move is the equivalent of using plutonium to heat not distant space probes, but
residential homes: other options do the job just as well, and the risk of horrendous con‑
sequences is vastly reduced.

Perhaps the most widely cited use case for facial recognition is as a tool of consumer con‑
venience and playfulness. Yet here, racial logics rear their ugly head too. Masking apps
are one particularly egregious, and obvious, avenue for racial discrimination via digi‑
tal epidermalization. In 2016, Snapchat came under fire for a “Bob Marley” mask filter
described by many commentators as “digital blackface.” In 2017, the FaceApp app de‑
ployed a “Hot” filter that lightened a photograph’s skin tone and applied smoothing to
make a subject’s facial features appear “white.” Apparently immune to the widespread
critiques of racism, the company later released a set of filters explicitly labeled as racial:
“Asian, Black, Caucasian and Indian.”

Likewise, digital animations like Samsung’s memoji and Apple’s animoji improve fa‑
cial recognition technologies’ ability to recognize all human faces, making this logic of
racializing privilege even more pervasive and perverse. These systems serve as facial
privacy loss leaders, getting users accustomed to cute, seemingly harmless applications
of facial recognition tech. Precisely because they enlist multiple different technologies
of classification within the mechanisms through which our digital social and emotional
lives take place, these systems are racialized surveillance disguised as animation. In the
consumer context, there is no reason to allow a technology with such toxic effects.

There are strong arguments for an outright ban on facial recognition systems, but there
have also been increasing calls for regulation from the tech sector itself. In July of 2018,
Brad Smith, President and General Counsel of Microsoft, called for both vigorous reg‑
ulation of and heightened corporate social responsibility toward facial recognition sys‑
tems [18]. [For full disclosure, I am an employee of Microsoft Research, though the
views expressed in this article are my own and do not represent those of Microsoft Re‑
search or Microsoft more broadly.] Smith observed the utility of regulation in areas
such as automobiles, air safety, food, and pharmaceutical products—one could easily
have added hazardous waste to the list. Smith’s position shows how even companies
invested in some applications of facial recognition like Microsoft recognize some of the
dangers these technologies pose.

Recognizing facial recognition as plutonium‑like in its hazardous effects only under‑
scores the need to build on calls for regulation like Smith’s, paying close attention to
how the government regulates a hazardous substance like plutonium. Smith notes one
potential limited use case for facial recognition: As an accessibility tool for the visually
impaired. Under a strong regulatory scheme, devices enabling this kind of functional‑
ity, like other digital accessibility devices and clinical health apps might be regulated
by the Food and Drug Administration. Just as the use of a substance like plutonium
for specialized medical or security applications is highly constrained and closely moni‑
tored, facial recognition technologies could be subject to similar constraints. Plutonium
serves as a useful metaphor for facial recognition because it signals some technologies
are so dangerous if broadly accessible that they should be banned for almost all practical
purposes.

Facial recognition’s racializing effects are so potentially toxic to our lives as social beings
that its widespread use doesn’t outweigh the risks. “The future of human flourishing
depends upon facial recognition technology being banned before the systems become
too entrenched in our lives,” Hartzog and Selinger write. “Otherwise, people won’t
know what it’s like to be in public without being automatically identified, profiled, and
potentially exploited.” To avoid the social toxicity and racial discrimination it will bring,
facial recognition technologies need to be understood for what they are: nuclear‑level
threats to be handled with extraordinary care.

5.1.12 Right to Obscurity

Facial recognition destroys the right to obscurity.

Laperruque 17

Jake Laperruque (Contributor at The Century Foundation), “Preserving the Right to
Obscurity in the Age of Facial Recognition,” The Century Foundation, 20 October
2017, https://tcf.org/content/report/preserving‑right‑obscurity‑age‑facial‑recognition/,
accessed 6 March 2023

Ray Bradbury’s vision of a future of pervasive surveillance in a Fahrenheit 451 was per‑
haps not ambitious enough. Today, the government does not need to call on the pop‑
ulace to act with a million eyes to find a specific person, anywhere, at any time. With
facial recognition technology, the government can do this, on its own, with the push of
a button.

This technology not only exists: it is already in place, and in use, for surveillance efforts
across the country. It operates via massive networks of cameras, together with extensive
government databases of photographs tagged and compiled into profiles. Law enforce‑
ment use powerful computer algorithms to parse stockpiles of camera footage, picking
out every face in a particular scene and rapidly associating them with profiles in the
databases.

The ubiquity of networked surveillance cameras makes targeted facial recognition
surveillance possible at essentially any place and any time. Law enforcement has more
and more cameras every day, including CCTV cameras, police body cameras, and even
cameras on drones and surveillance planes hovering over cities, which have already
been used to record mass protests.2 The FBI’s Next Generation Biometric Identification
Database and its facial recognition unit, FACE Services, can already search for and
identify nearly 64 million Americans, from its own databases or via its access to state
DMV databases of photo IDs.

With the records already in hand, the technology in place, and the system’s capabilities
rapidly growing, it may be soon that government will be able to find out who you are,
where you’ve been and when, and who you’ve associated with simply by putting your
name into a search bar. In this way, facial recognition heralds the end of obscurity.

Despite this imminent danger, hardly any limits have been placed on this use of facial
recognition technology. It was not until March of this year that Congress held a hearing
to discuss the risks of facial recognition surveillance, and no laws—state or federal—
exist that impose restrictions on how facial recognition surveillance can be used, and
who it can target. Because of this absence, facial recognition can be used to circumvent
existing legal protections against location tracking, opening the door to unprecedented
government logging of personal associations and intimate details of citizens’ lives.

Facial recognition could also become a dangerous tool for cataloging participation in
First Amendment activities. Religious and political associations could become known
to the government on an enormous scale, with little effort or cost, enabling disruption,
persecution, and abuse. American history throughout the twentieth century and recent
government activities in the past two decades both demonstrate that fear of such abuse
is quite warranted.

The right to obscurity is a crucial democratic safeguard.

Laperruque 17

Jake Laperruque (Contributor at The Century Foundation), “Preserving the Right to
Obscurity in the Age of Facial Recognition,” The Century Foundation, 20 October
2017, https://tcf.org/content/report/preserving‑right‑obscurity‑age‑facial‑recognition/,
accessed 6 March 2023

Left unchecked, facial recognition will soon bring an end to obscurity. Obscurity may
not seem like a fundamentally important value. It was not lauded in the Federalist Pa‑
pers, inscribed in the Constitution, or mentioned as part of the “Four Freedoms” we
fought to preserve in World War II. And yet, poetically standing unnoticed in the back‑
ground, obscurity has served as a critical pillar to American democracy. Democracy
by its nature relies on anonymity of some activities and interactions, especially in First
Amendment activities like protests and religious worship. For over 200 years, obscurity
has been one of anonymity’s most powerful defenses. These protections are in grave
danger, and their survival requires a swift and innovative policy response.


5.1.13 Due Process

FRT threatens due process by turning everyone into a suspect.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Facial recognition technologies also threaten components of due process that are fun‑
damental to how the American criminal justice system is intended to work.103 Sim‑
ply by having a driver’s license, Americans could be included in a law enforcement
database that an officer or agent could use to search for possible suspects of a crime.104
The possibility that anyone may be subject to suspicion invites what can accurately, if
somewhat dramatically, be described as tyranny: individualized suspicion is a key law
enforcement constraint and pillar of our criminal justice system, and it is undermined
by technology that, in the words of the Georgetown experts who conducted the land‑
mark 2016 study, enrolls you in a “perpetual line‑up.” 105 A false positive can endanger
someone’s freedom or even their life—after facial recognition technology misidentified
Muslim activist Amara K. Majeed as a suspect in the Sri Lanka Easter bombings, she
received death threats and her family members in Sri Lanka were harassed by the po‑
lice, even after the FBI put out a public statement correcting its mistake.106 The South
Wales police department stored the records of 2,297 people after misidentifying them,
reporting a jaw‑dropping false positive rate of 92%.107


5.1.14 Normalization

Normalization and technology creep make any use of FRT an existential threat by
expanding surveillance capitalism.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

“Surveillance” is an ominous word. In the post‑Snowden world, it evokes Orwellian
watchers who observe our every move, as persistent as they are powerful. Given the
strong reactions the term can evoke, why hasn’t greater resistance manifested against
surveillance threats? An important reason is that surveillance technology is deployed
in ways that make us feel comfortable with, not creeped out by, the algorithms and
people observing us.1 Facebook, for example, is designed to be an environment that
feels so intimate that users focus on sharing information with friends without thinking
about “surveillance capitalism” and all of the data the company collects, analyzes, and
monetizes on the back end.2 At airports and concerts, the experience of using facial
recognition technology, a tool that is used for racial profiling and tracking in China and
to scan the streets of Russia for “people of interest,” can feel like a godsend, saving us
and everyone else who socially conforms from waiting in long frustrating lines.3 The
more familiar and beneficial a surveillance technology like facial recognition seems, the
easier it is for technology companies, government agencies, and entrepreneurs to create
conditions for widespread passive acceptance.

Normalization, which involves treating facial recognition technology as a mundane part
of the machinery that is necessary for powering a complex digital society, and func‑
tion creep, which entails incrementally expanding how the technology is used, mask
harms to individual and collective autonomy. They make it easy for surveillers to op‑
erate within a permissive regulatory regime: one that has porous boundaries between
the government and the private sector, and treats consent as the basis for authorizing
permission for watching, tagging, tracking, and sorting.4 Even when our consent is ob‑
tained through questionable means, perhaps nudged by dark patterns and hidden op‑
tions, many of us will say yes when companies ask for it while engaging in surveillance
or surveillance‑related activities.5 With limited alternatives to choose from and barriers
to collective action that impede creating new, less surveillance intensive options, assent‑
ing to surveillance seems like the most rational “choice” for avoiding the penalties that
come from being an opt‑out outlier while accruing whatever take‑it‑or‑leave‑it benefits
are offered by the consent‑seeker, however meager they may be.6

The normalization of FRT will occur via small incremental steps.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

How might this social transformation occur? With the law lagging behind innovation
and an existing legacy of name‑face databases ripe for plug‑and‑play expansion, the
perceived advantages of easily and cheaply analyzing biometric faceprints that link our
on‑ and off‑line lives could drive widespread adoption. As this happens, people could
get used to thinking of facial recognition technology as the go‑to solution for solving all
kinds of problems throughout society. Tired of remembering and entering in a passcode
to unlock your phone? Try facial recognition. Long lines boarding a plane? Maybe
facial recognition could help. Not sure who’s knocking at your door? Facial recognition
could tell you. Missing your child while they’re at summer camp and want to watch
them play? Facial recognition to the rescue! And so on.

Patching social problems with technological solutions is easier than mustering the will
to solve harder issues around inequality, education, and opportunity. The drumbeat
of security stokes fear. And enhancing convenience is a powerful motivating force in
American life. Consequently, it won’t be reasonable to expect most people to grasp
that they should summon the political will to push back against incremental buildup
of negative effects that initially concentrate the worst outcomes on people of color and
activists. Immediate gratification, abstract perceptions of risk, and certain harm is a
recipe for doom.

That leads to a death by a thousand cuts.

Selinger and Hartzog 19


Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

Should facial recognition surveillance be consentable? By appealing to Kim’s frame‑
work to answer this question, we must ask whether it is possible to validly consent to
the proposed activity, and whether social harms caused by the activity outweigh its so‑
cial benefits. It seems unlikely that someone could give valid consent to most forms
of facial surveillance because the context in which such consent would be sought frus‑
trates the pre‑conditions for meaningful decision‑making. In order for consent to data
and surveillance practices to be knowing and voluntary, at least three pre‑conditions
should exist: (1) such a request should be infrequent, (2) the harms to be weighed must
be vivid, and (3) there should be incentives to take each request for consent seriously. 45
If the requests for consent are too frequent people will become overwhelmed and desen‑
sitized. This renders them susceptible to user interfaces and dense, confusing, turgid
privacy policies that are designed to exploit their exhaustion to extract consent. If the
harms are framed in terms of abstract notions of privacy and autonomy or the possibil‑
ity of abuse is too distant to be readily foreseeable, then people’s cost/benefit calculus
may be corrupted by an inability to take adequate stock of the risks. Finally, if the risk
of harm is distributed over the course of many different decisions—as is common with
loss of obscurity through surveillance—people will lack the proper incentive to take
each request for consent seriously. After all, no single decision represents a significant
threat. Instead, society is exposed to death by a thousand cuts, with no particular cut
rising to the threat level where substantive and efficacious dissent occurs.


5.1.15 Harms Children

FRT is uniquely harmful to children; however, there is no practical path towards
banning FRT for only children—that necessitates a complete ban to protect children.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

To my knowledge, there haven’t been many calls for child‑specific facial recognition
protections.15 But children are a sympathetic population, and their privacy protections
often draw a broader consensus than do protections for adults given the general agree‑
ment that children’s immaturity makes them particularly vulnerable in a variety of ways.
The combination of a rising tide of facial recognition regulation with its frequent use on
a population whose privacy rights are more widely agreed upon as necessary raises the
question: What would be the value of a child‑specific ban on the use of these technolo‑
gies?

Children’s developmental immaturity, their even weaker ability to avoid unwanted
surveillance relative to adults, and the fact that facial recognition technologies are gener‑
ally less accurate for them might seem to support a uniquely high standard, like a child‑
specific ban on facial recognition technologies. Some of the harms may be particularly
severe for children, others are shared by other demographic groups or are nearly uni‑
versal. The possible harms ensuing from facial recognition technologies’ limited ability
to accurately recognize young faces are similar to those shared by other demographic
groups, such as people with darker skin, Asian people, non‑binary people, and the el‑
derly. Facial recognition’s erosion of practical obscurity is a broadly shared harm, as is
the erosion of due process protections in law enforcement investigations and the chilling
effect on political protest.

But even if a child‑specific ban were morally defensible or a sufficient response to the
full range of implicated threats, it would also be near‑impossible to meaningfully enact.
Young people are often subjected to general‑audience uses of these technologies, which
will make parsing legal and illegal uses difficult. In this article, I argue that in some
ways, the harms of facial recognition technologies are particularly severe for children,
but only slightly more so than they are for a number of distinct groups, and for the
population as a whole. The damage caused by facial recognition technologies to fun‑
damental freedoms and their fundamental inescapability warrant a comprehensive ban
on their use. The severity of that damage for children simply adds additional support
for the case.

FRT for children is uniquely damaging because they have less control of their own
lives, and the harms of surveillance are worse for children who are less mature and
are still developing their personal identity.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Child‑Specific Harms

Young people are another group for whom facial recognition technologies are more
likely to be inaccurate, and for whom the use of those technologies poses risks distinct
to them on the basis of their physical characteristics.155 Children often have even less
control over their privacy than adults do, and facial recognition surveillance frequently
targets children out of misguided attempts to protect them.156 The chill to free expres‑
sion that results from awareness of surveillance through facial recognition may also
have a particularly significant impact on their emotional and intellectual development.
The fact that facial recognition technologies perform less accurately for children’s faces
puts them at risk when law enforcement or school security systems use the technology.
Many of these harms are similarly applicable to and deeply concerning for adults, but
may be even more severe for children due to their immaturity and the fact that child‑
hood and adolescence are tremendously formative for both identity and opportunities
later in life.

In some cases, facial recognition may be even more unavoidable for young people than
it is for adults. Children, and to some extent adolescents, often do not have control over
their movements or how parents or schools disseminate pictures of them. There is of
course some basis for that—autonomy is tied to the maturity required to safely exercise
it, and the law provides parents with decision‑making rights over their young children
in all kinds of ways.157 But children are being subjected to facial recognition technology
in schools, at summer camp, at daycare, and in places where adults are also surveilled
(like church, their apartment building, or in public).158 Concerns about safety in schools
have helped facial recognition grow in popularity in schools, which children are, with
a few exceptions, legally required to attend.159 Children also have their pictures taken
by adults, some of whom upload them to social media without understanding the full
ramifications.160 My objective is not to blame every parent who has ever uploaded a
picture of their child—sharing pictures of one’s children is a natural instinct. But when
children’s pictures are widely disseminated online, anyone who obtains access to them
may use the photos, including for the purpose of adding pictures of children to facial
recognition databases.

Ironically, children may be at particular risk of having their pictures added to facial
recognition databases without their knowledge or permission out of attempts to im‑
prove facial recognition algorithms’ ability to accurately recognize them.161 As com‑
panies and governments try to improve their datasets and algorithms, they will focus
on obtaining pictures from populations poorly represented in them, such as children
or adults with darker skin.162 In one particularly horrifying example, the National In‑
stitute of Standards and Technology resorted to using images of children who were
exploited for child pornography in order to build a sufficiently robust database of chil‑
dren’s faces as part of an evaluation of popular facial recognition algorithms.163

Moreover, concern over children’s safety, and the broad consensus that measures in‑
tended to keep them safe are desirable, may lead to particular focus on children when it
comes to uses of facial recognition technologies designed to either keep track of children
or find ones in danger.164 Amazon’s announcement that it would prohibit law enforce‑
ment use of its facial recognition service Rekognition for one year provides an example
of how the dangers of facial recognition can be distinguished when it comes to deploy‑
ment on children for their purported protection.165 The company specifically exempted
use by organizations "to help rescue human trafficking victims and reunite missing
children with their families.” 166 Similarly, Clearview AI, which has been heavily crit‑
icized for its privacy violative services, has been quick to tout the use of its product in
cases involving children,167 including investigations into child sexual exploitation.168
The horrendous nature of those crimes may seem to reduce the need for scruples when
it comes to the harms of these technologies, when in fact the sensitivity of the circum‑
stances makes their problems even more concerning. The false identification of a vic‑
tim could be deeply traumatic, and the false identification of an ostensible perpetrator
could lead to tremendously damaging consequences for the wrongly identified. Yet the
gravity of those crimes, and the desire of all relevant stakeholders to prevent, discover,
halt, and deter their occurrence, may lead to facial recognition technologies being de‑
ployed in that context with some frequency or that context being exempted from certain
reforms.

Children have developmental limitations which make the harms to privacy far
worse than to adults.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Developmental Limitations

When it comes to privacy, advertising, and the ability to contract, there is a strong tra‑
dition in the law of recognizing children’s immaturity and vulnerability due to their
age.169 As privacy‑invasive technologies have adopted a larger and more impactful
role in children’s lives, research from a range of disciplines including sociology, media
studies, and engineering has examined children and adolescents’ privacy attitudes and
coping strategies.170 Given the relative newness of facial recognition technologies (and
most likely, the necessary logistical difficulties involved with doing ethical research on
children), only a few studies focus on the impact of facial recognition specifically, and
more research is sorely needed. Until then, research on young people’s attitudes about
online privacy risks are instructive given how companies scrape pictures from the inter‑
net to fuel facial recognition databases, while research about their reactions to realtime
video surveillance can inform how they may react to those uses of facial recognition and
analysis.171

In terms of broader privacy attitudes and perceptions, studies have found that children
have a range of perspectives and capacities to comprehend privacy risks, but tend to
focus more on interpersonal privacy violations, such as if a parent can view something
they’ve posted, rather than on privacy invasions by corporations or the government.172
For example, a 13‑year‑old in one study explained that she considered Facebook to be
public and Twitter private, because she knew that her peers maintained Facebook ac‑
counts and would see what she posted, when that was less likely to be true of Twitter.173


Young people have privacy concerns, but may not be as concerned about corporate ex‑
ploitation, including the collection and use of any photos they post by companies, and
are less able to accurately gauge comparative risks. Young children also struggle with
correctly assessing different types of privacy risks, such as the importance of not disclos‑
ing sensitive information publicly as opposed to correctly evaluating privacy risks.174
A poor understanding of the nuances of various privacy risks makes children vulnera‑
ble to their privacy being exploited online, including by having the pictures they post
collected and used in facial recognition databases. The relative popularity with young
people of social media platforms that focus on user‑posted photos and videos, like Tik‑
Tok, Instagram, and Snapchat, also gives the companies operating those applications
and any third parties they share data with frequent opportunities to collect images of
children’s faces.175

This disproportionate focus on interpersonal privacy and lack of understanding of cor‑
porate (or governmental) surveillance also extends to teenagers.176 A lack of awareness
of how their data is being collected combined with a desire to connect with their peers
on social media can also make young people vulnerable to having their photos collected
for a facial recognition database simply because they wanted to communicate with their
friends. In fact, one study of teenagers’ privacy perceptions and concerns found that
while the teens tended to report a general concern about being identified by data col‑
lected from them, they failed to accurately gauge the risks of disclosing information
themselves, including photographs.177

Children and teens being unaware that using Facebook to bond with their friends puts
them at risk of their pictures being shared with law enforcement178 is precisely why
we have consumer protection, criminal, and other doctrines in various areas of law that
create allowances for immaturity. There are compelling reasons why this kind of com‑
mercial surveillance cannot be fairly attributed to informed assumption of the risk for
adults in many circumstances. But it is particularly unconscionable for companies to
take advantage of young people whose decision‑making capabilities make them even
more poorly equipped to protect themselves from commercial and govern‑
mental intrusion.

Conversely, awareness of surveillance also has repercussions for young people distinct
from ignorance or confusion. When teenagers are aware and concerned about online
surveillance, some choose not to participate in online activity, such as by refraining from
using social media altogether or otherwise modifying their online behavior.179 While
abstention would shield them from privacy invasions, it also prevents them from bond‑
ing with their peers online, which is an increasingly substantial component of many ado‑
lescent social interactions.180 A study by Alice Marwick and danah boyd of teenagers
from low income backgrounds found a range of privacy awareness, savviness, and con‑
cerns, including a general awareness that whatever information they post online can
have reputational or professional ramifications for them later in life. They also observed
a heavy undercurrent of victim‑blaming for privacy violations.181

Similarly, researchers who examined how adolescents' approach to self‑presentation on
social media impacts their identity formation documented scrupulous self‑awareness
in spaces where adolescents knew it was likely their peers could see what they posted,
as opposed to anonymous formats like blogs where they can explore new subjects and
identities without fear of criticism or rejection by peers.182 Young people use social me‑
dia to bond with their peers and discover more about the world at crucial development
stages, and justified fears of surveillance could limit their ability to make those relation‑
ships and seek out important information, or at the very least mold how they approach
those things in ways we do not currently understand.183

Research on CCTV surveillance can be instructive for the impact real‑time facial recog‑
nition surveillance may have on young people, including disparate effects based on
gender and class. A UK study on students’ reactions to CCTV surveillance found that
knowledge of the cameras produced a range of responses from students: for some it has
a chilling effect, as they were concerned the cameras might misinterpret their actions,
while others attempted to avoid the surveillance or obfuscate their conduct so that it
would be misinterpreted.184 Gender and class also appeared to impact the children’s
responses: children from wealthier neighborhoods noted that they did not mind
public CCTV cameras because “they weren’t doing anything wrong,” 185 while girls
frequently reported concerns about voyeurism and that constant surveillance facilitated
a need to look “perfect.” 186

Real‑time monitoring of the spaces children inhabit will likely impact how they behave
and how they think of themselves, and knowledge of their privacy being invaded on‑
line may have similar effects. As Livingstone et al note in their comprehensive litera‑
ture review of existing research on children’s privacy perceptions and literacy, much
work is left to be done on how children’s development is impacted by a lack of privacy,
and the distinctions between how children respond at different ages.187 McCahill and
Finn also note that further work is needed on the impact of surveillance on children’s
identity formation in particular.188 But the research that exists suggests that children’s
understanding and reactions to surveillance warrants careful scrutiny of introducing
surveillance into their lives for their purported benefit.

FRT is particularly inaccurate on young people because their faces are still changing.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Bias and Inaccuracies

Researchers have also found that facial recognition algorithms perform less accurately
on the faces of young people. The National Institute of Standards and Technology,
which runs a series of periodical evaluations of facial recognition algorithms, found
higher rates of false positives for children and the elderly in its most recent study, with
the highest rates among those for the youngest children and the oldest adults.189 People
aged 12‑20 produced high false match rates, and the dataset did not include individuals
below the age of 12.190 The report concluded that aging, as it changes one’s appearance
over the course of decades, “will ultimately undermine automated face recognition.”
191 It's intuitive that an algorithm trained to verify the identity of a 12‑year‑old
might return a false negative on images of the child at, for example, 15, when the shape
of their face that the program learned to recognize has changed.

The NIST results echo what little other work there is on the accuracy of facial recog‑
nition algorithms on young faces. A 2019 study tested eight facial recognition systems
and found that they performed more poorly for children than adults on both one‑to‑one
and one‑to‑many searches.192 An earlier study found that age variation—the age of the
subject in the probe image compared to the age of the subject in the database image—
heavily impacted the accuracy of facial recognition algorithms used on children, partic‑
ularly younger children.193 While much more work is needed, evaluations of how facial
recognition algorithms assess children have generally shown that the existing systems
are often inaccurate for young faces.

That makes FRT dangerous, especially for young children of color.

Barrett 20


Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Inaccurate facial recognition systems that make it more likely for young people to
become erroneously involved in a law enforcement investigation pose severe risks to
young people, particularly young people of color, given how contact with the criminal
justice system can threaten their current and future wellbeing, health, educational and
professional prospects, and freedom. In 2019, the Georgetown Center on Privacy &
Technology obtained records demonstrating that the NYPD had been using thousands
of juvenile mug shots in a facial recognition database in order to compare them to
crime scene images, including children as young as 11.194 The decision to use the
system on mug shot images of children and teens was approved by the NYPD’s legal
department, but was not disclosed to oversight authorities like the City Council, nor
to the public.195 As of August 2019, there were photos of 5,500 individuals in the
database, and the NYPD would not provide statistics on how often their system pro‑
vided false matches.196 Furthermore, it is unclear how many other police departments
are doing the same thing, as few have public‑facing facial recognition policies.197
Putting children into facial recognition systems that are likely to misidentify them puts
them at risk of being wrongly accused of a crime. At the same time, systemic problems
like the failure of procedural protections for young people and the need for children to
make crucial legal decisions without an attorney can make it harder for them to defend
themselves and emerge from the encounter unscathed.198

These risks are far more dire for children of color. Black youth are a cataclysmic 500%
more likely to be detained or committed than their white peers,199 and they are more
likely to be sent to adult prisons and receive longer sentences than their peers, even
when accounting for the type of offense.200 This puts them further at risk, as youth
who are tried as adults and sent to adult prisons are more likely to commit suicide in
jail, exhibit psychiatric symptoms, and re‑offend upon release.201 While there is scant, if
any, research on how facial recognition algorithms assess children of color specifically,
the inability of these systems to assess adults with darker skin and children makes it
more likely that there would be additional errors in evaluating children of color.

These mutually reinforcing risk factors for children of color in particular also illustrate
why the use of facial recognition systems in schools is so corrosive. While the under‑
standing that their movements are being surveilled in real time will likely have chilling
effects on the intellectual and emotional development of all children, inaccurate assess‑
ments of children of color could help exacerbate the school‑to‑prison pipeline, or the
effects of a punitive approach to school discipline that results in higher arrest and in‑
carceration rates for the children subject to it, particularly poor children and children of
color.202


5.1.16 Private Companies Bad

The private use of biometric surveillance enables the creation of dangerous
databases.

Amnesty International 21

Amnesty International (international non‑governmental organization focused on
human rights), “Press Release: Amnesty International and more than 170 organisations
call for a ban on biometric surveillance,” 7 June 2021, https://www.amnesty.org/en/latest/press‑
release/2021/06/amnesty‑international‑and‑more‑than‑170‑organisations‑call‑for‑a‑
ban‑on‑biometric‑surveillance/, accessed 9 March 2023

Our call for a ban covers the use of these technologies when they are used for surveil‑
lance in publicly accessible spaces and in spaces which people cannot avoid. While law
enforcement use of these technologies has attracted attention and criticism, their use by
private actors can pose the same threat to our rights, especially when private actors ef‑
fectively engage in surveillance on behalf of governments and public agencies in public‑
private partnerships or otherwise provide information derived from such surveillance
to the authorities.

We have also seen a worrying development with private facial recognition providers
compiling and amalgamating databases of “suspicious” individuals, and sharing these
databases with multiple clients. This in effect creates “nationwide databases” shared
between private companies which are compiled at the discretion of untrained staff, are
not subject to any oversight, and which can lead to discrimination against individuals
who appear on watchlists in all premises using such databases.

The use of these technologies to surveil people in city parks, schools, libraries, work‑
places, transport hubs, sports stadiums, housing developments, and even in online
spaces such as social media platforms, constitutes an existential threat to our human
rights and civil liberties and must be stopped.


5.1.17 Security Breaches

Security breaches would be catastrophic since biometric data cannot be easily
changed—the risk is high from multiple points of failure.

Lynch 20

Jennifer Lynch (Surveillance Litigation Director at the Electronic Frontier Foundation),
Face Off: Law Enforcement Use of Face Recognition Technology, Electronic Frontier
Foundation, 20 April 2020. http://dx.doi.org/10.2139/ssrn.3909038

Security Risks Posed by the Collection and Retention of Face Recognition Data

All government data is at risk of breach and misuse by insiders and outsiders. However,
the results of a breach of face recognition or other biometric data could be far worse
than other identifying data, because our biometrics are unique to us and cannot easily
be changed.

The many recent security breaches, email hacks, and reports of falsified data—including
biometric data—show that the government needs extremely rigorous security measures
and audit systems in place to protect against data loss. In 2017, hackers took over 123 of
Washington D.C.’s surveillance cameras just before the presidential inauguration, leav‑
ing them unable to record for several days.20 During the 2016 election year, news media
were consumed with stories of hacks into email and government systems, including into
United States political organizations and online voter registration databases in Illinois
and Arizona.21 In 2015, sensitive data stored in Office of Personnel Management (OPM)
databases on more than 25 million people was stolen, including biometric information,
addresses, health and financial history, travel data, and data on people’s friends and
neighbors.22 More than anything, these breaches exposed the vulnerabilities in govern‑
ment systems to the public—vulnerabilities that the United States government appears
to have known for almost two decades might exist.23

The risks of a breach of a government face recognition database could be much worse
than the loss of other data, in part because one vendor—MorphoTrust USA—has de‑
signed the face recognition systems for the majority of state driver’s license databases,
federal and state law enforcement agencies, border control and airports (including TSA
PreCheck), and the State Department. This means that software components and con‑
figuration are likely standardized across all systems, so one successful breach could
threaten the integrity of data in all databases.


Vulnerabilities exist from insider threats as well. Past examples of improper and unlaw‑
ful police use of driver and vehicle data suggest face recognition data will also be mis‑
used. For example, a 2011 state audit of law enforcement access to driver information in
Minnesota revealed “half of all law‑enforcement personnel in Minnesota had misused
driving records.”24 In 2013, the National Security Agency’s Inspector General revealed
NSA workers had misused surveillance records to spy on spouses, boyfriends, and girl‑
friends, including, at times, listening in on phone calls. Another internal NSA audit
revealed the “unauthorized use of data about more than 3,000 Americans and green‑
card holders.”25 Between 2014 and 2015, Florida’s Department of Highway Safety and
Motor Vehicles reported about 400 cases of improper use of its Driver and Vehicle Infor‑
mation Database.26 And a 2016 Associated Press investigation based on public records
requests found that “[p]olice officers across the country misuse confidential law enforce‑
ment databases to get information on romantic partners, business associates, neighbors,
journalists and others for reasons that have nothing to do with daily police work.”27

Many of the recorded examples of database and surveillance misuse involve male of‑
ficers targeting women. For example, the AP study found officers took advantage of
access to confidential information to stalk ex‑girlfriends and look up home addresses
of women they found attractive.28 A study of England’s surveillance camera systems
found the mostly male operators used the cameras to spy on women.29 In 2009, FBI
employees were accused of using surveillance equipment at a charity event at a West
Virginia mall to record teenage girls trying on prom dresses.30 In Florida, an officer
breached the driver and vehicle database to look up a local female bank teller he was
interested in.31 More than 100 other Florida officers accessed driver and vehicle infor‑
mation for a female Florida state trooper after she pulled over a Miami police officer
for speeding.32 In Ohio, officers looked through a law enforcement database to find in‑
formation on an ex‑mayor’s wife, along with council people and spouses. 33 And in
Illinois, a police sergeant suspected of murdering two ex‑wives was found to have used
police databases to check up on one of his wives before she disappeared.34

It is unclear what, if anything federal and state agencies have done to improve the secu‑
rity of their systems and prevent insider abuse. In 2007, the Government Accountabil‑
ity Office (GAO) specifically criticized FBI for its poor security practices. GAO found,
“[c]ertain information security controls over the critical internal network reviewed were
ineffective in protecting the confidentiality, integrity, and availability of information
and information resources.”35 Given all of this—and the fact that agencies often retain
personal data longer than a person’s lifetime36—law enforcement agencies must do
more to explain why they need to collect so much sensitive biometric and biographic
data, why they need to maintain it for so long, and how they will safeguard the data
from the data breaches we know will occur in the future.


5.1.18 Scientifically Flawed

Biometric classifications are scientifically flawed.

Amnesty International 21

Amnesty International (international non‑governmental organization focused on
human rights), “Press Release: Amnesty International and more than 170 organisations
call for a ban on biometric surveillance,” 7 June 2021, https://www.amnesty.org/en/latest/press‑
release/2021/06/amnesty‑international‑and‑more‑than‑170‑organisations‑call‑for‑a‑
ban‑on‑biometric‑surveillance/, accessed 9 March 2023

Furthermore, many applications of facial and biometric classification, which make infer‑
ences and predictions about things such as people’s gender, emotions, or other personal
attributes, suffer from serious, fundamental flaws in their scientific underpinnings. This
means that the inferences they make about us are often invalid, in some cases even oper‑
ationalizing eugenicist theories of phrenology and physiognomy, thereby perpetuating
discrimination and adding an additional layer of harm as we are both surveilled and
mischaracterized.


5.1.19 Lethal Autonomous Weapons

Facial recognition technology can be used for the development of lethal
autonomous weapons.

Meacham and Gak 22

Darian Meacham (philosopher at Maastricht University in the Netherlands) and Martin
Gak (producer for Conflict Zone, a political show on Deutsche Welle), "Does facial
recognition tech in Ukraine’s war bring killer robots nearer?,” openDemocracy, 30
March 2022, https://www.opendemocracy.net/en/technology‑and‑democracy/facial‑
recognition‑ukraine‑clearview‑military‑ai/, accessed 7 March 2023

Technology that can recognise the faces of enemy fighters is the latest thing to be de‑
ployed to the war theatre of Ukraine. This military use of artificial intelligence has all
the markings of a further dystopian turn to what is already a brutal conflict.

The US company Clearview AI has offered the Ukrainian government free use of its
controversial facial recognition technology. It offered to uncover infiltrators – includ‑
ing Russian military personnel – combat misinformation, identify the dead and reunite
refugees with their families.

To date, media reports and statements from Ukrainian government officials have
claimed that the use of Clearview’s tools has been limited to identifying dead Russian
soldiers in order to inform their families as a courtesy. The Ukrainian military is also
reportedly using Clearview to identify its own casualties.

This contribution to the Ukrainian war effort should also afford the company a baptism
of fire for its most important product. Battlefield deployment will offer the company
the ultimate stress test and yield valuable data, instantly turning Clearview AI into a
defence contractor – potentially a major one – and the tool into military technology.

If the technology can be used to identify live as well as dead enemy soldiers, it could also
be incorporated into systems that use automated decision‑making to direct lethal force.
This is not a remote possibility. Last year, the UN reported that an autonomous drone
had killed people in Libya in 2020, and there are unconfirmed reports of autonomous
weapons already being used in the Ukrainian theatre.

Our concern is that hope that Ukraine will emerge victorious from what is a murderous
war of aggression may cloud vision and judgement concerning the dangerous prece‑
dent set by the battlefield testing and refinement of facial‑recognition technology, which
could in the near future be integrated into autonomous killing machines.

To be clear, this use is outside the remit of Clearview’s current support for the Ukrainian
military; and to our knowledge Clearview has never expressed any intention for its
technology to be used in such a manner. Nonetheless, we think there is real reason for
concern when it comes to military and civilian use of privately owned facial‑recognition
technologies.

The promise of facial recognition in law enforcement and on the battlefield is to increase
precision, lifting the proverbial fog of war with automated precise targeting, improving
the efficiency of lethal force while sparing the lives of the ‘innocent’.

But these systems bring their own problems. Misrecognition is an obvious one, and
it remains a serious concern, including when identifying dead or wounded soldiers.
Just as serious, though, is that lifting one fog makes another roll in. We worry that
for the sake of efficiency, battlefield decisions with lethal consequences are likely to
be increasingly ‘blackboxed’ – taken by a machine whose working and decisions are
opaque even to its operator. If autonomous weapons systems incorporated privately
owned technologies and databases, these decisions would inevitably be made, in part,
by proprietary algorithms owned by the company.

Clearview rightly insists that its tool should complement and not replace human
decision‑making. The company’s CEO also said in a statement shared with open‑
Democracy that everyone who has access to its technology “is trained on how to use it
safely and responsibly”. A good sentiment but a quaint one. Prudence and safeguards
such as this are bound to be quickly abandoned in the heat of battle.

Clearview’s systems are already used by police and private security operations – they
are common in US police departments, for instance. Criticism of such use has largely
focused on bias and possible misidentification of targets, as well as over‑reliance on the
algorithm to make identifications – but the risk also runs the other way.

The more precise the tool actually is, the more likely it will be incorporated into au‑
tonomous weapons systems that can be turned not only on invading armies but also on
political opponents, members of specific ethnic groups, and so on. If anything, improv‑
ing the reliability of the technology makes it all the more sinister and dangerous. This
doesn’t just apply to privately owned technology, but also to efforts by states such as
China to develop facial recognition tools for security use.


5.1.20 No Accountability

FRT systems have virtually no accountability or oversight—human review fails
because of automation bias and a lack of transparency.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

The use of facial recognition systems by law enforcement is also subject to little oversight
or quality control.108 Law enforcement officials often do not confine themselves to using
only the high‑quality images that facial recognition algorithms are designed to identify.
A recent report by the Georgetown Center on Privacy & Technology found that at least
a half‑dozen police departments across the country are comparing forensic sketches to
facial recognition databases when the officers don’t have a photograph of the suspect,
despite the fact that sketches are of far too low a quality to provide an accurate result.109
The report also found that police departments have used photos of celebrities, or edited
the photos of suspect photographs, adding additional potential for error and arbitrary
results.110

Human review is a common and popular proposed check on algorithmic decision‑
making systems, the idea being that if the machine’s assessment is always tempered
by a human being’s corrective analysis, the human being should be able to catch the
machine’s mistakes. But automation bias, the tendency of human beings to trust the
judgment of computers over their own without a rational basis to do so, makes this a
less effective check than facial recognition defenders tend to claim.111

Furthermore, while police departments often claim that facial recognition searches are
never relied upon as determinative evidence for an arrest, those claims seem dubious.
In Detroit, Julian‑Borchak Williams was wrongfully arrested for larceny that he did not
commit, based on his erroneous identification by a facial recognition algorithm—the
first known case of its kind.112 While Mr. Williams’ case is the first known example
of facial recognition technology being the direct basis for a wrongful arrest, it is highly
unlikely to be the only example that exists. After Mr. Williams’ story was reported
by the New York Times, the Detroit police chief described the facial recognition sys‑
tem his department has relied on as misidentifying suspects “96% of the time.” 113


In Florida, for example, the New York Times found that in a few cases where officers
used facial recognition to locate a suspect, documents indicated that there was no other
evidence.114 State laws vary as to what methods and materials law enforcement are re‑
quired to disclose to the defendant, and states often refuse to disclose their use of facial
recognition, instead referring vaguely to facial recognition searches as “investigative
means” or “attempt[s] to identify.” 115 In New York, the attorneys for a man arrested
for theft argued that there was no probable cause to arrest their client beyond the facial
recognition search that NYPD officers ran of an image of their client, only months after
the NYPD Commissioner wrote in the New York Times that a match would never be used
as the sole basis for arrest.116 The Bronx DA’s office claimed that the match was not the
sole basis for the probable cause to arrest, and also argued that it was not required to dis‑
close information about how the technology had been used.117 On the day of the trial,
the office lowered the charges from a felony to a misdemeanor with time served, which
an observer might conclude was related to concerns that a judge might agree with the
defense’s arguments and the NYPD’s use of facial recognition could be subjected to un‑
wanted scrutiny.118 This would not be an implausible tactic, as it has previously been
deployed by prosecutors’ offices to obscure their reliance on cell‑site simulators.119

The lack of transparency surrounding the use of facial recognition technologies by law
enforcement makes the lack of standards and oversight all the more concerning for indi‑
vidual civil liberties. The 2016 Georgetown study, Perpetual Line‑Up, found that only
four of the fifty‑two law enforcement agencies surveyed had any kind of publicly avail‑
able policy concerning their use of facial recognition.120 Law enforcement agencies are
cagey about their use of this technology, admitting its existence, but providing limited
information in response to public records requests. Facial recognition technology ven‑
dors like Amazon even use non‑disclosure agreements with law enforcement to keep
its use of the technology from the public.121 Surveys of public defenders have also il‑
lustrated that prosecutors frequently fail to disclose necessary information about law
enforcement use of facial recognition technologies, 122 which the ACLU has argued are
often unconstitutional violations of defendants’ Brady rights.123 There are few qual‑
ity controls or oversight mechanisms over law enforcement uses of facial recognition
databases, meaning that errors may go undetected and uncorrected.124

The damage that facial recognition technologies inflict on privacy, free expression, and
due process affects us all, and should not be taken lightly. Even if facial recognition
technology weren’t fraught with biased accuracy problems or deployed with sloppy
haphazardness that raises the likelihood of errors, it would still pose a severe threat to
democratic values working exactly as intended.


5.1.21 Now Key

Failure to act now risks a dystopian future where the government wields enormous
power over people.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

Consentability contains a passage about technology‑induced change that is so bleak, it
is worth quoting at length.

Technology will continue to push the boundaries of what society thinks is
acceptable. In some cases, the changes will be gradual, occurring first on
the fringes of society and undetected by the public. . . . Sometimes the
changes will go undetected because they are not visible or obvious to most
people. As Lori Andrews observed in the context of genetics policy, “When
technologies are introduced incrementally and policies are adopted in small
units to deal with a few isolated issues, there is less opportunity to stimu‑
late a social debate about whether we are moving in a direction in which we
want to go.” Companies, skilled in the art of marketing and sales, may try
to manipulate the public and intimidate lawmakers into accepting products
and services which degrade, rather than enhance, social relations. Legisla‑
tures will be indifferent or reluctant to act until there is some sort of social
outcry or the impact on society is too great to ignore. The law will arrive too
late, after social norms have already been established and when it is much
more difficult to reverse society’s course.27

Before showing how Kim’s consentability framework can be applied to the facial recog‑
nition technology debates, we will sketch the outline of a dystopian future. The scenario
is a thought experiment about a possible world where the dire risks posed by facial
recognition technology are realized. The transition from the present world to this
hypothetical future could occur due to structural problems like the ones Kim outlines
in the above passage.

Much of the discussion about the immediate and short to medium term problems with
facial recognition technology focuses on the harm that could occur if the technology
continues to produce inaccurate results.28 Law‑abiding people could be put on govern‑
ment watchlists, deprived of due process in court, prevented from accessing places they
should be allowed to enter, and questioned or detained by law enforcement. Govern‑
ment and industry could deny people access to their assets, deprive them of job oppor‑
tunities, and mischaracterize their identities and behaviors. While everyone is vulner‑
able to these harms, false positives and negatives disproportionately affect minorities,
especially people of color.29 These discussions also emphasize that the law poses few
restrictions on facial recognition technology. Furthermore, there is little transparency
about how facial recognition technology is used as we can see from the fact that state
legislatures are not required to openly debate and approve (i.e., consent) using driver’s
license photos for government facial recognition databases.30 Finally, internal policies
for the government using facial recognition technology are not standardized.


5.1.22 AT: Convenience

The allure of convenience is a Trojan Horse—it smuggles in and normalizes a
dangerous tool of surveillance.

Hartzog and Selinger 18

Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity) and Dr. Evan Selinger (professor of philosophy at Rochester Institute of Tech‑
nology), “Facial Recognition Is the Perfect Tool for Oppression,” Medium, 3 August
2018, https://medium.com/s/story/facial‑recognition‑is‑the‑perfect‑tool‑for‑oppression‑
bc2a08f0fe66, accessed 9 March 2023

The Trojans would have loved facial recognition technology.

It’s easy to accept an outwardly compelling but ultimately illusory view about what the
future will look like once the full potential of facial recognition technology is unlocked.
From this perspective, you’ll never have to meet a stranger, fuss with passwords, or
worry about forgetting your wallet. You'll be able to organize your entire video and pic‑
ture collection in seconds — even instantly find photos of your kids running around at
summer camp. More important, missing people will be located, schools will become
safe, and the bad guys won’t get away with hiding in the shadows or under desks.

Total convenience. Absolute justice. Churches completely full on Sundays. At long last,
our tech utopia will be realized.

Tempted by this vision, people will continue to invite facial recognition technology into
their homes and onto their devices, allowing it to play a central role in ever more aspects
of their lives. And that’s how the trap gets sprung and the unfortunate truth becomes
revealed: Facial recognition technology is a menace disguised as a gift. It’s an irresistible
tool for oppression that’s perfectly suited for governments to display unprecedented
authoritarian control and an all‑out privacy‑eviscerating machine.

We should keep this Trojan horse outside of the city.

“Convenience” for only white men is not true convenience.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

An argument for banning facial recognition technologies because of the range of harms
they inflict would be incomplete without at least a brief discussion of the technologies’
perceived benefits. Proponents of commercial applications of facial recognition tend
to highlight the convenience of using one’s face as a biometric identifier.340 People
struggle to remember complex passwords and simple ones are easy for hackers to crack,
whereas, proponents argue, a face is unique and diminishes the possibility of human
fallibility creating a security vulnerability at that particular vector.341 But the benefits
of the identifier hinge on its accuracy, which, as this article has attempted to explain,
varies starkly among demographic groups. A definition of “convenience” that excludes
young people, older people, women, people with darker skin, Asian people, and gender
nonbinary people does not mean much. Moreover, the very fact that a face is a function‑
ally irreplaceable identifier makes the security implications all the more severe when
databases are hacked.

Convenience benefits are vastly overstated and can be solved without creating
massive databases that devastate freedom.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

But many of these touted benefits are meager, incremental improvements that could
likely be approximated through less dangerous means. For example, facial recogni‑
tion is being deployed to streamline the hassle associated with paper boarding passes,
cash and debit cards, and passcodes and fingerprint access.47 But these technologies al‑
ready worked reasonably (or exceptionally) well. The legitimately compelling benefits,
such as finding missing people and keeping people safe, would require large, promis‑
cuous databases working with interconnected and ubiquitous sensors making a mind‑
bogglingly large number of fraught algorithmic decisions. Such an infrastructure would
extract a massive toll on our freedoms, civil liberties, and autonomy. Setting up this in‑
frastructure also intrinsically incentivizes its use due to the sunk cost fallacy, a cognitive
bias emphasized by the cognitive science literature that Kim discusses.48 The sunk cost
fallacy is the tendency for humans to continue down a particular course once they have
made significant investment in it. Spending all the resources required for getting the
infrastructure built and stoking expectations that the infrastructure is required for so‑
cial progress would therefore make it hard to change course and accept the reality that
previous resources could have been better spent.

Benign uses of the technology only serve to normalize the collection of biometric
data.

Simonite 22

Tom Simonite (a senior editor who edits WIRED’s business coverage), “Face Recog‑
nition Is Being Banned—but It’s Still Everywhere,” WIRED, 22 December 2022,
https://www.wired.com/story/face‑recognition‑banned‑but‑everywhere/, accessed 27
February 2023

Caitlin Seeley George, a campaign director at nonprofit Fight for the Future, finds the
spread of face recognition in airports and other areas of daily life concerning. “We need
to ban all facial recognition, because the harms of this technology far outweigh any
benefits,” she says.

George considers seemingly benign or careful uses of the technology dangerous because
they help normalize collection of personal and biometric data that can be hacked or ex‑
ploited. “The more places people see it, the more comfortable people feel,” she says.
“When we do things for convenience we may not be thinking through all the repercus‑
sions.”

At the same time, George is optimistic about containing face recognition. She points to
Facebook’s decision to shut its tagging system, the spread of local bans, and legislation
introduced to both houses of Congress this year by a group of Democratic lawmakers
and Senator Bernie Sanders (I‑Vermont) that would ban use of face recognition by fed‑
eral agencies. Similar bills were introduced in 2020 but did not proceed to a vote.


5.1.23 AT: Overreaction

The dangers of FRT are real—the information will be used to police and control you.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), “What Happens When Employers Can Read Your Facial Expressions?,” The
New York Times, 17 October 2019, https://www.nytimes.com/2019/10/17/opinion/facial‑
recognition‑ban.html, accessed 9 March 2023

A second argument for little or no regulation of facial recognition is that strong fears
about new technologies are overreactions. Apostles of innovation compare people who
are calling for banning facial recognition to the naysayers of yesteryear whose anxi‑
eties about new technologies ranging from the automobile to photography proved un‑
founded. From this perspective, facial recognition technology is the new fingerprint.

But the tangible harms of facial recognition are potentially far more menacing. The
technology is less accurate with people of color and is biased along gender lines. And
things will not get better as it becomes more accurate, because big companies, govern‑
ment agencies and even your next‑door neighbors will seek to deploy it in more places.
They will want to identify and track you. They will want to categorize your emotions
and identity. They will want to infer where you might shop, protest or work — and use
that information to control and manipulate you, or deprive you of opportunities.

It is likely that the technology will be used to police social norms. People who skip
church or jaywalk will be noticed — and potentially ostracized. And you’d better start
practicing your most convincing facial expressions. Otherwise, during your next job
interview, a computer could code you as a liar or malcontent.


5.1.24 AT: Crime

There is little empirical evidence of FRT’s ability to enhance security, bias turns it,
and costs to privacy and due process outweigh.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Perhaps the most fervently touted argument for the value of facial recognition technolo‑
gies is their purported utility in law enforcement and national security contexts. 342
“The ability to more easily identify criminals makes us all safer” is a temptingly simple
argument. But as even many law enforcement professionals have argued,343 technol‑
ogy that is less accurate for a wide range of demographic groups does not bolster col‑
lective safety, it diminishes it by subjecting members of those groups to unwarranted
scrutiny and directing officers to pursue erroneous leads. As discussed above, reports
of shoddy data practices abound, even despite a lack of transparency surrounding po‑
lice uses of these technologies. 344 Little evidence exists to support the notion that facial
recognition technologies actually enable police officers to do their jobs better by help‑
ing them to correctly identify suspects. In fact, evidence that it does the opposite only
continues to mount.345

Ultimately, most of the claims concerning the benefits of facial recognition technologies
are false, like the safety narrative, while others are simply not sufficient to outweigh the
severe costs to privacy, due process, and free expression. Technology that subjects peo‑
ple of color to even more disproportionate police scrutiny imperils their freedom and
safety, rather than bolstering it. The ability to identify protestors doesn’t make anyone
safer, it gives the government license to chill free expression and quash dissent. Subject‑
ing children to surveillance in schools is unlikely to prevent a school shooting, as even
some vendors admit.346 Facial recognition databases used by either government or law
enforcement also create new sources of hackable information that leave people vulner‑
able to identity theft and fraud. Despite the rhetoric of its defenders, facial recognition
technologies imperil the people and values they purportedly protect.


The use of FRT in crime‑fighting leads to false identifications that cannot be easily
challenged in court.

Johnson 22

Khari Johnson (senior writer for WIRED covering artificial intelligence), “The Hid‑
den Role of Facial Recognition Tech in Many Arrests,” WIRED, 7 March 2022,
https://www.wired.com/story/hidden‑role‑facial‑recognition‑tech‑arrests/, accessed 10
March 2023
The prosecutor who told Jackson how her client had been identified was unusual.
Across most of the US, neither police nor prosecutors are required to disclose when
facial recognition is used to identify a criminal suspect. Defense attorneys say that
puts them at a disadvantage: They can’t challenge potential problems with facial
recognition technology if they don’t know it was used. It also raises questions of equity,
since studies have shown that facial recognition systems are more likely to misidentify
people who are not white men, including people with dark skin, women, and young
people.
“Facial recognition technology use shouldn’t be a secret,” says Anton Robinson, a former
public defender now at the Innocence Project, a nonprofit dedicated to getting people
who’ve been wrongly convicted out of prison. “It’s such a big issue in criminal cases.
Attorneys shouldn’t be left to have these epiphany moments.”
Misidentification is historically a huge factor in sending innocent people to prison. The
Innocence Project found that more than two‑thirds of people exonerated through DNA
evidence had been misidentified by witnesses, making it the leading factor in these con‑
victions. Eyewitnesses can struggle to identify people they don’t know, especially when
those individuals are of different racial or ethnic backgrounds.
The rules regulating facial recognition use are gaining importance as more police agen‑
cies adopt the technology. In 2016, the Georgetown Center on Privacy and Technology
said police in most US states had access to the tech and that photos of about half of US
adults were in a facial recognition database. The report also warned that the technology
would disproportionately hurt Black people because of the technology’s higher error
rates for people with dark skin. In a 2019 report, the Georgetown center said New York
police had made more than 2,800 arrests following face recognition searches between
2011 and 2017. Last year, BuzzFeed News reported that law enforcement agencies in 49
states, and more than 20 federal agencies, had at least tested facial recognition technol‑
ogy products from Clearview AI.


Using FRT to solve crime obscures the root cause of crime and slaps a band‑aid on a
bullet wound while also exacerbating other social ills.

Waghorne 22

Sylvia Waghorne (J.D., Washington University School of Law), The Price Of Privacy: A
Call For A Blanket Ban On Facial Recognition In The City Of St. Louis, Volume 69, Issue
1, 2022, After the Trump Administration: Lessons and Legacies for the Legal Profession,
pages 421‑447, https://journals.library.wustl.edu/lawpolicy/article/id/8639/

Given the significant threat that facial recognition technology poses to the constitution‑
ally protected liberties of all St. Louis citizens, and the even greater threat posed specif‑
ically to people of color, the city should ban facial recognition technology. It is true that
there are potential benefits to the use of facial recognition technology, especially when
it comes to law enforcement and security. However, the use of such technology does
not actually solve the underlying issues in society, it merely moves the target. While
some stores may enjoy a decrease in robberies, the fundamental social problems con‑
tributing to crime go ignored. As Jeramie Scott, Senior Counsel at the Electronic Pri‑
vacy Information Center, argues, “[y]ou are implementing a technology that pushes us
closer to a total surveillance state, and it’s a technology that actually doesn’t address the
underlying issues.” 147 As a society, we often look toward technology to provide easy
solutions to the problems that plague us—but quick fixes seldom deliver on their lofty
promises, and can obscure the root cause for why we looked for a fix in the first place.
Indeed, “[p]atching social problems with technological solutions is easier than muster‑
ing the will to solve harder issues around inequality, education, and opportunity. The
drumbeat of security stokes fear.” 148 Increased surveillance through facial recognition
does not ultimately serve our communities, but rather puts a Band‑Aid on some of our
deepest wounds while simultaneously opening up new ones.

Attempting to manage insecurity through technocratic solutions fails and
reproduces hierarchies that damage those at the bottom.

Smyth 19

Sara M. Smyth (an Associate Professor of Law at La Trobe University, Melbourne, Aus‑
tralia), Biometrics, Surveillance and the Law Societies of Restricted Access, Discipline and Con‑
trol, New York: Routledge, 2019, ISBN: 978‑0‑367‑07719‑8


The escalating global insecurity of the last couple of decades has brought with it an
equally precipitous rise in the use of sophisticated technologies. Predictive algorithms,
risk models, and other sorts of automated decision‑making tools are now ubiquitous in
the public service.1 Investments in these systems are often justified by calls for admin‑
istrative efficiency – doing more with less and making decisions on a fairer and more
consistent basis.

Yet, although we devote more and more attention to managing it, risk seems increas‑
ingly to be out of our control. Our society’s belief that risk is everywhere has prompted
ever more determined efforts to control it. And the more we consider and discuss risks,
the more this leads to a climate of fear. This, in turn, leads to the demand for more infor‑
mation about risks, creating a vicious circle that helps to justify even more surveillance
and profiling in the pursuit of security.

Yet security technologies are not neutral. They produce and reinforce understandings
of gender, race, class, authority and criminality (i.e. a larger set of relationships that
are politically determined and that raise complex social, economic, political and legal
issues). These understandings further contribute to the development and use of the
technology. Biometric technologies, thus, categorize individuals with a dangerous logic
that cannot be viewed as “true” or ‘objective’.

Those lower on the social and economic hierarchy – the poor, working‑class and minori‑
ties – have often found themselves targets rather than beneficiaries of these systems.2
Since those who are most vulnerable to risk include those whose actions can contribute
to risk, certain groups of people – for example, particular ethnic minorities or poor peo‑
ple – are often seen as risky themselves.3 Marginalized people are exposed to more risks
and are categorized as bad risks; thus, the very people who need our help the most are
viewed as a threat, which actually inhibits them from getting the help that they might
need.4


5.1.25 AT: Airports

FRT in airports is likely to lead to misidentifications, particularly for minority groups.

Fowler 22

Geoffrey A. Fowler (The Washington Post’s technology columnist based in San Fran‑
cisco), “TSA now wants to scan your face at security. Here are your rights.,” Washington
Post, 2 December 2022, https://www.washingtonpost.com/technology/2022/12/02/tsa‑
security‑face‑recognition/, accessed 10 March 2023

But the TSA hasn’t actually released hard data about how often its system falsely iden‑
tifies people, through incorrect positive or negative matches. Some of that might come
to light next year when the TSA has to make its case to the Department of Homeland
Security to convert airports all over the United States into facial recognition systems.

“I am worried that the TSA will give a green light to technology that is more likely to
falsely accuse Black and Brown and nonbinary travelers and other groups that have
historically faced more facial recognition errors,” said Cahn of STOP.

Research has shown facial recognition algorithms can be less accurate at identifying
people of color. A study published by the federal National Institute of Standards and
Technology in 2019 found that Asian and African American people were up to 100 times
more likely to be misidentified than White men, depending on the particular algorithm
and type of search.

Should travelers be concerned? “No one should worry about being misidentified. That
is not happening, and we work diligently to ensure the technology is performing ac‑
cording to the highest scientific standards,” Lim told me. “Demographic equitability is
a serious issue for us, and it represents a significant element in our testing.”

That doesn’t satisfy critics such as Cahn. “I don’t trust the TSA to evaluate the efficacy
of its own facial recognition systems,” he said.

Biometrics still make errors which disproportionately harm minority groups, the
data can still be misused, and it is vulnerable to cyberattacks.

Glick 21

Molly Glick (Associate Innovation Editor at Inverse), “Airports Are Embracing
Facial Recognition. Should We Be Worried?,” Discover Magazine, 21 November
2021, https://www.discovermagazine.com/technology/airports‑are‑embracing‑facial‑
recognition‑should‑we‑be‑worried, accessed 10 March 2023

And although proponents of biometric security screenings commonly point to their
high degree of accuracy, such percentages can be misleading. In 2017, Senators Edward
Markey and Mike Lee pointed out that, even with a 96 percent accuracy rate, this tech‑
nology will still falsely flag one in 25 travelers. The process currently matches correctly
over 98 percent of the time, according to a CBP spokesperson.

But any errors could disproportionately harm people of color: Facial recognition algo‑
rithms may deliver false positives up to 100 times more frequently for the faces of Asian
and Black people than those of white people, according to a 2019 paper by the National
Institute of Standards and Technology.

It’s also hard to tell where our data goes after we depart. In 2018, no airlines nor airport
authorities had told CBP that they planned to retain the biometric data they indepen‑
dently collect for other purposes. But as of May 2020, CBP had only investigated a
single airline partner, out of over 20, regarding their long‑term data usage. It’s unclear
whether they’ve since conducted any audits, and the agency has not yet responded to
Discover’s question.

As for its own biometric information, all photos are deleted from CBP’s cloud platform
within 12 hours. But non‑citizens’ images are transferred to a threat‑monitoring system
for up to 14 days, and CBP can keep photos in a broader database for up to 75 years.
While the government can already access many foreign nationals’ fingerprints and pho‑
tos, as Kugler points out, improved facial recognition represents a significant advance
in targeting undocumented people.

“Immigration enforcement is run out of Homeland Security, which is also the agency in
charge of securing our airports,” Kugler says. “We’re already in the right agency, and
in a way you could say it’s merely more effectively enforcing the laws we already have
… but it’s perhaps too effective.”

Even if an entity claims to have deleted someone’s photo from a facial recognition sys‑
tem, they could still theoretically access a hash, or an algorithm‑derived number that
could be used to retrieve it, Keenan points out. But DHS claims their numbers created
from travelers’ images can’t be reverse‑engineered to do so.

DHS will soon store its biometric data on Amazon Web Services’ GovCloud, along with
that of agencies such as ICE, the Department of Defense and the Central Intelligence
Agency. The DHS can technically share sensitive biometric information with other gov‑
ernment entities, according to their 2020 report. The agency already works with the de‑
partments of Justice and State on the controversial Automated Targeting System, which
uses facial recognition to single out passengers they perceive as threats.

Law enforcement officials have already abused people’s facial scans to identify them
at a political protest. It’s been well‑documented that police use Clearview AI software,
which scrapes people’s data from social media, to do just that. DHS works with
Clearview on “border and transportation security,” GAO noted in a 2021 paper. But
the software isn’t used specifically for airport entry‑exit programs, a CBP spokesperson
told BuzzFeed last year.

CLEAR, meanwhile, states on its website that the company saves biometric data col‑
lected at airports, stadiums and other venues and utilizes it beyond the purposes of au‑
thenticating over 5 million users’ identities. It may even share such data for marketing
purposes, according to reporting by OneZero, and aims to serve as a personal identi‑
fier when customers use their credit and insurance cards, along with other common
interactions.

Regardless of how they use your data, both public and private forces are vulnerable to
cyber attacks. Government contractors, in particular, have exposed sensitive informa‑
tion in the past: In May 2019, CBP experienced a data breach in which hackers stole
thousands of license‑plate images and ID photos from a subcontractor who wasn’t tech‑
nically authorized to hold onto that information.


5.2 Ban Key

5.2.1 Ban Key—General

Regulations will be insufficient and too slow—only an immediate ban can prevent
FRT from becoming too entrenched in our lives.

Hartzog and Selinger 18

Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity) and Dr. Evan Selinger (professor of philosophy at Rochester Institute of Tech‑
nology), “Facial Recognition Is the Perfect Tool for Oppression,” Medium, 3 August
2018, https://medium.com/s/story/facial‑recognition‑is‑the‑perfect‑tool‑for‑oppression‑
bc2a08f0fe66, accessed 9 March 2023

Corporate leadership is important, and regulation that imposes limits on facial recogni‑
tion technology can be helpful. But partial protections and “well‑articulated guidelines”
will never be enough. Whatever help legislation might provide, the protections likely
won’t be passed until face‑scanning technology becomes much cheaper and easier to
use. Smith actually seems to make this point, albeit unintentionally. He emphasizes that
“Microsoft called for national privacy legislation for the United States in 2005.” Well, it’s
2018, and Congress has yet to pass anything.

If facial recognition technology continues to be further developed and deployed, a
formidable infrastructure will be built, and we’ll be stuck with it. History suggests
that highly publicized successes, the fear of failing to beef up security, and the sheer
intoxicant of power will tempt overreach, motivate mission creep, and ultimately lead
to systematic abuse.

The future of human flourishing depends upon facial recognition technology being
banned before the systems become too entrenched in our lives.

Why a Ban Is Necessary

A call to ban facial recognition systems, full stop, is extreme. Really smart scholars like
Judith Donath argue that it’s the wrong approach. She suggests a more technologically
neutral tactic, built around the larger questions that identify the specific activities to be
prohibited, the harms to be avoided, and the values, rights, and situations we are trying
to protect. For almost every other digital technology, we agree with this approach.


But we believe facial recognition technology is the most uniquely dangerous surveil‑
lance mechanism ever invented. It’s the missing piece in an already dangerous surveil‑
lance infrastructure, built because that infrastructure benefits both the government and
private sectors. And when technologies become so dangerous, and the harm‑to‑benefit
ratio becomes so imbalanced, categorical bans are worth considering. The law already
prohibits certain kinds of dangerous digital technologies, like spyware. Facial recogni‑
tion technology is far more dangerous. It’s worth singling out, with a specific prohibi‑
tion on top of a robust, holistic, value‑based, and largely technology‑neutral regulatory
framework. Such a layered system will help avoid regulatory whack‑a‑mole where law‑
makers are always chasing tech trends.

Surveillance conducted with facial recognition systems is intrinsically oppressive. The
mere existence of facial recognition systems, which are often invisible, harms civil lib‑
erties, because people will act differently if they suspect they’re being surveilled. Even
legislation that holds out the promise of stringent protective procedures won’t prevent
chill from impeding crucial opportunities for human flourishing by dampening expres‑
sive and religious conduct.

Facial recognition technology also enables a host of other abuses and corrosive activities:

Disproportionate impact on people of color and other minority and vulnerable popula‑
tions.

Due process harms, which might include shifting the ideal from “presumed innocent”
to “people who have not been found guilty of a crime, yet.”

Facilitation of harassment and violence.

Denial of fundamental rights and opportunities, such as protection against “arbi‑
trary government tracking of one’s movements, habits, relationships, interests, and
thoughts.”

The suffocating restraint of the relentless, perfect enforcement of law.

The normalized elimination of practical obscurity.

The amplification of surveillance capitalism.

As facial recognition scholar Clare Garvie rightly observes, mistakes with the technol‑
ogy can have deadly consequences:

What happens if a system like this gets it wrong? A mistake by a video‑based surveil‑
lance system may mean an innocent person is followed, investigated, and maybe even
arrested and charged for a crime he or she didn’t commit. A mistake by a face‑scanning
surveillance system on a body camera could be lethal. An officer alerted to a potential
threat to public safety or to himself, must, in an instant, decide whether to draw his
weapon. A false alert places an innocent person in those crosshairs.

Two reports, among others, thoroughly detail many of these problems. There’s the in‑
valuable paper written by Jennifer Lynch, senior staff attorney at the Electronic Frontier
Foundation, “Face Off: Law Enforcement Use of Face Recognition Technology.” And
there’s the indispensable study “The Perpetual Line‑Up,” from Georgetown’s Center
on Privacy and Technology, co‑authored by Clare Garvie, Alvaro Bedoya, and Jonathan
Frankle. Our view is deeply informed by this rigorous scholarship, and we would urge
anyone interested in the topic to carefully read it.

A total ban is key—regulation fails because of capture.

FFTF 23

Fight For The Future, “Ban Facial Recognition,” Ban Facial Recognition, 2023,
https://www.banfacialrecognition.com/, accessed 6 March 2023

Regulation is not enough

Like nuclear or biological weapons, facial recognition poses a threat to human society
and basic liberty that far outweighs any potential benefits. Silicon Valley lobbyists are
disingenuously calling for light “regulation” of facial recognition so they can continue
to profit by rapidly spreading this surveillance dragnet. They’re trying to avoid the real
debate: whether technology this dangerous should even exist. Industry‑friendly and
government‑friendly oversight will not fix the dangers inherent in law enforcement’s
use of facial recognition: we need an all‑out ban.


5.2.2 Ban Key—Creep

Technology creep devastates any attempt at procedural regulation—it sets a
dangerous precedent and creates exploitable loopholes that snowball.

Selinger and Hartzog 18

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Univer‑
sity), “Amazon Needs to Stop Providing Facial Recognition Tech for the Government,”
Medium, 22 June 2018, https://medium.com/s/story/amazon‑needs‑to‑stop‑providing‑
facial‑recognition‑tech‑for‑the‑government‑795741a016a6, accessed 10 March 2023

Facial Recognition Technology Creep

Facial recognition technology is not like a general‑purpose computer. It’s a specific tool
that enables tracking based on our most public‑facing and innate biological feature. It’s
an ideal tool for oppressive surveillance. It poses such a severe threat in the hands of law
enforcement that the problem cannot be contained by imposing procedural safeguards
around how faceprint databases and face recognition systems are constructed and used.

Laws adequately locking down facial recognition technology, especially in today’s po‑
litical climate, seem unlikely. The framing of facial recognition as critical for stopping
criminals and finding missing persons will likely result in rules that yield too many con‑
cessions and leave open too many loopholes. But the best course of action for industry
would be to quit cold turkey. As Frank Pasquale has argued about certain unsalvage‑
able surveillance and data technologies, “Sometimes the best move in a game is not to
play.”

As we see it, our procedural pessimism is rooted in a defensible notion of facial recog‑
nition technology creep. Facial recognition creep is the idea that once the infrastructure
for facial recognition technology grows to a certain point, with advances in machine
learning and A.I. leading the way, its use will become so normalized that a new com‑
mon sense will be formed. People will expect facial recognition technology to be the
go‑to tool for solving more and more problems, and they’ll have a hard time seeing
alternatives as anything but old‑fashioned and outdated. This is how “techno‑social en‑
gineering creep” works — an idea that one of us discussed in detail in Re‑Engineering
Humanity with Brett Frischmann.

To appreciate why facial recognition technology creep is a legitimate way to identify
slopes that are genuinely slippery, you have to be a realist and accept the fact that some,
though not all, technological trajectories can be exceedingly difficult to change. This is
especially so in the case of trajectories formed by infrastructure that grows significantly,
where the growth is propelled by strong interest across sectors; heavy financial invest‑
ments; heightened expectations from consumers, citizens, and politicians; increased so‑
cial, personal, regulatory, and economic dependency; and limited legal speed bumps
that stand in the way.

Unfortunately, facial recognition technology is a potent cocktail that’s made from all
these ingredients. It’s the Long Island iced tea of technology.

Our face is one of our most important markers of identity, and losing control of that is
perhaps the greatest threat to our obscurity. We often recognize others by their faces,
even as people age. Faces are also the easiest biometric for law enforcement to obtain,
because they can be unobtrusively, inexpensively, and instantly scanned and tend to be
hard to hide without taking drastic or conspicuous steps.

While there’s talk of “fighting A.I. surveillance with scarves and face paint,” attempts to
disguise your face are doomed to be temporary countermeasures that institutions with
deep pockets will find ways of neutralizing. Furthermore, it’s unfair for people to bear
the burden of protecting themselves from surveillance.

Being attuned to the profound weight of infrastructure surrounding us isn’t the same
thing as being a technological determinist, which is the idea that technology completely
dictates people’s actions. In short, technological determinism means that technology
has been and will continue to be used to increase human well‑being by minimizing
transaction costs wherever the costs can be cut.

The main difference between what we see as being a realist about the power of infrastruc‑
ture and being a technological determinist comes down to different takes on alternative
pathways. It might sound like a contradiction in terms, but the realist can believe in
the transformative potential of ideals. Ideals like civil rights that should matter more
than worshipping at the altar of efficiency. These ideals are damned hard to champion
for, but they’re not preordained to fail. Such ideals place moral progress ahead of the
technological variety, and they take courage — not mere will — to create and preserve.


5.2.3 Ban Key—Danger

FRTs are inherently dangerous and must be abolished—regulation will fail and
only distracts from debate about whether surveillance technologies like these
should even exist.

Dauvergne 22

Peter Dauvergne (a professor of international relations, specializing in global environ‑
mental politics at the University of British Columbia), Chapter 3: The movement to
oppose facial recognition, In: Identified, Tracked, and Profiled: The Politics of Resisting
Facial Recognition Technology, Elgar Publishing, 2022, https://doi.org/10.4337/9781803925899

Fight for the Future – an internet‑based team advocating for responsible, open, and fair
use of technology – agrees. “This inherently oppressive technology cannot be reformed
or regulated,” argues Evan Greer, the director of Fight for the Future. “It poses such a
threat to the future of human society that any potential benefits are outweighed by the
inevitable harms. It should be abolished.”20

In 2019, Fight for the Future launched a campaign to “Ban Facial Recognition” in the
United States. Today, the campaign comprises 41 groups, ranging from OpenMedia
to the ACLU of New York to the Electronic Privacy Information Center to Greenpeace
USA to the Council on American–Islamic Relations to the Black Alliance for Just Immi‑
gration. Those backing this campaign see the calls for state regulation and corporate
self‑governance as a ruse, designed to create false trust, deflect critics, and ultimately
normalize a racist, biased technology for nefarious uses.

“Silicon Valley lobbyists are disingenuously calling for light ‘regulation’ of facial recog‑
nition so they can continue to profit by rapidly spreading this surveillance dragnet,” the
campaign to Ban Facial Recognition is telling the public. “They’re trying to avoid the real
debate: whether technology this dangerous should even exist. Industry‑friendly and
government‑friendly oversight will not fix the dangers inherent in law enforcement’s
use of facial recognition: we need an all‑out ban.”21

A global coalition of digital rights and human rights groups similarly drafted an open state‑
ment in 2021 calling for a “global” ban on the use of remote biometric technologies in
public spaces, including facial recognition technology. “We call for a ban because, even
though a moratorium could put a temporary stop to the development and use of these
technologies, and buy time to gather evidence and organize democratic discussion, it
is already clear that these investigations and discussions will only further demonstrate
that the use of these technologies in publicly accessible spaces is incompatible with our
human rights and civil liberties and must be banned outright and for good.”22

Regulations will inevitably fail, and the systemic rights violations from maintaining
biometric recognition technology outweigh any benefit.

Amnesty International 21

Amnesty International (international non‑governmental organization focused on
human rights), “Press Release: Amnesty International and more than 170 organisations
call for a ban on biometric surveillance,” 7 June 2021, https://www.amnesty.org/en/latest/press‑
release/2021/06/amnesty‑international‑and‑more‑than‑170‑organisations‑call‑for‑a‑
ban‑on‑biometric‑surveillance/, accessed 9 March 2023

Why a ban?

Facial recognition and remote biometric recognition technologies have significant tech‑
nical flaws in their current forms, including, for example, facial recognition systems
that reflect racial bias and are less accurate for people with darker skin tones. However,
technical improvements to these systems will not eliminate the threat they pose to our
human rights and civil liberties.

While adding more diverse training data or taking other measures to improve accuracy
may address some current issues with these systems, this will ultimately only perfect
them as instruments of surveillance and make them more effective at undermining our
rights.

These technologies pose a threat to our rights in two major ways:

First, the training data — the databases of faces against which input data are compared,
and the biometric data processed by these systems — are usually obtained without one’s
knowledge, consent, or genuinely free choice to be included, meaning that these tech‑
nologies encourage both mass and discriminatory targeted surveillance by design.

Second, as long as people in publicly‑accessible spaces can be instantaneously identified,
singled out, or tracked, their human rights and civil liberties will be undermined. Even
the idea that such technologies could be in operation in publicly accessible spaces creates
a chilling effect which undermines people’s abilities to exercise their rights.


Despite questionable claims that these technologies improve public security, any bene‑
fits will always be vastly outweighed by the systematic violation of our rights. We see
growing evidence of how these technologies are abused and deployed with little to no
transparency.

Any survey and analysis of how policing has historically been conducted shows that
experimental use of surveillance technologies often criminalizes low‑income and
marginalized communities, including communities of color, the same communities
that have traditionally faced structural racism and discrimination. The use of facial
recognition and remote biometric recognition technologies is not an exception to this,
and for that reason it must be stopped before an even more dangerous surveillance
infrastructure is created or made permanent.

The mere existence of these tools, whether in the hands of law enforcement or private
companies (or in public‑private partnerships), will always create incentives for func‑
tion creep and increased surveillance of public spaces, placing a chilling effect on free
expression. Because their very existence undermines our rights, and effective oversight
of these technologies is not possible in a manner that would preclude abuse, there is no
option but to ban their use in publicly accessible spaces entirely.


5.2.4 Ban Key—Normalization

The risks of these tools far exceed their benefits—only banning them prevents them
from becoming normalized.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), “What Happens When Employers Can Read Your Facial Expressions?,” The
New York Times, 17 October 2019, https://www.nytimes.com/2019/10/17/opinion/facial‑
recognition‑ban.html, accessed 9 March 2023

We think the senator is right: Stopping this technology from being procured — and its
attendant databases from being created — is necessary for protecting civil rights and
privacy. But limiting government procurement won’t be enough. We must ban facial
recognition in both public and private sectors, before we grow so dependent on it that
we accept its inevitable harms as necessary for “progress.” Perhaps over time appropri‑
ate policies can be enacted that justify lifting a ban. But we doubt it.

The essential and unavoidable risks of deploying these tools are becoming apparent.
A majority of Americans have functionally been put in a perpetual police lineup sim‑
ply for getting a driver’s license: Their D.M.V. images are turned into faceprints for
government tracking with few limits. Immigration and Customs Enforcement officials
are using facial recognition technology to scan state driver’s license databases without
citizens’ knowing. Detroit aspires to use facial recognition for round‑the‑clock moni‑
toring. Americans are losing due‑process protections, and even law‑abiding citizens
cannot confidently engage in free association, free movement and free speech without
fear of being tracked.

Yet industry officials, lawmakers and even some privacy advocates have been skeptical
about banning the technology, as we and others proposed more than a year ago. Other
than clearly malicious technologies like spyware, the United States has banned very few
technologies, even those used in warfare. Those making a case against a ban on facial
recognition muster three main arguments.


5.2.5 Ban Key—Alts Fail

Only a ban works—alternatives like regulations or moratoria all fail.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

The range of proposals for how to regulate facial recognition is wide, but the existence
of those proposals and the success of local bans and moratoria are significant reasons for
optimism. A comprehensive ban at the federal level is a lofty goal, to be sure— but the
limits of the more modest solutions illustrate why a comprehensive ban is worth striv‑
ing for. A regulatory scheme that permits the use of facial recognition technologies on
the basis that enforcement of violations will be enough to deter and prevent undesirable
behavior will be inadequate, and ignores how privacy law can be co‑opted into proce‑
dural symbolism,347 how privacy plaintiffs struggle to receive judicial redress,348 and
the inertia of privacy regulators. Moratoria, while a far superior option over regulation,
assume a point in time where facial recognition technologies will be sufficiently free of
bias— a moment that may never come. Bans on law enforcement uses correctly recog‑
nize the particular danger of police use of these surveillance technologies. But private
uses can fuel law enforcement ones, and the privacy, free expression, and discrimina‑
tion concerns are all considerable given the size of the role that private companies play
in American lives.


5.2.6 Ban Key—Tradeoff

Failing to ban FRT trades off with investing in community resources.

Dauvergne 22

Peter Dauvergne (a professor of international relations, specializing in global environ‑
mental politics at the University of British Columbia), Chapter 3: The movement to op‑
pose facial recognition, In: Identified, Tracked, and Profiled: The Politics of Resisting Facial
Recognition Technology, Elgar Publishing, 2022, https://doi.org/10.4337/9781803925899

Growing numbers of civil society organizations agree, as interviews with anti‑FRT cam‑
paigners in December 2021 and January 2022 confirm. Civil society organizations in
Europe and North America are some of the most critical. “Face recognition surveillance
presents an unprecedented threat to our privacy and civil liberties,” the ACLU is argu‑
ing as it supports campaigns for local bans across the United States.12 “Law enforce‑
ment use of face recognition technology poses a profound threat to personal privacy,
political and religious expression, and the fundamental freedom to go about our lives
without having our movements and associations covertly monitored and analyzed,”
the Electronic Frontier Foundation is telling its base of supporters.13 “We need to rein‑
vest our resources and priorities into meeting the needs of our community, and not
invest in dangerous surveillance tools like facial recognition,” argues Myaisha Hayes
at MediaJustice in Oakland, California.14 The “technology is riddled with racial and
gender bias,” argues Ibrahim Hooper, the communications director for the Council on
American–Islamic Relations, “and it should not be used by any government agency to
target marginalized communities.”


5.2.7 Ban Key—Human Rights

Only a ban respects basic civil liberties and human rights.

Dauvergne 22

Peter Dauvergne (a professor of international relations, specializing in global environ‑
mental politics at the University of British Columbia), Chapter 3: The movement to op‑
pose facial recognition, In: Identified, Tracked, and Profiled: The Politics of Resisting Facial
Recognition Technology, Elgar Publishing, 2022, https://doi.org/10.4337/9781803925899

Scores of other US‑based NGOs are also chiming in to call for a ban on face surveillance
technology. “The use of face surveillance technology needs to end. Face surveillance
violates Americans’ right to privacy, treats all individuals as suspicious, and threatens
First Amendment‑protected rights,” argues Caitriona Fitzgerald of the Electronic Pri‑
vacy Information Center in Washington, DC.16 Not only do we need “a total ban on
the use, development, production, and sale of facial recognition technology for mass
surveillance purposes by the police and other government agencies in the United States,”
Amnesty International USA is telling its base, we need “a ban on exports of the technol‑
ogy systems to other countries.”17

Civil liberties, social justice, and civil rights groups across Canada have taken a com‑
parable stance. Coordinated by the International Civil Liberties Monitoring Group, in
2020 a coalition of more than 30 groups signed an open letter to the Canadian Minister
of Public Safety and Emergency Preparedness calling on the federal government to ban
the use of the technology by the Royal Canadian Mounted Police (RCMP) and national
intelligence agencies. “Facial recognition technology is highly problematic, given its
lack of accuracy and invasive nature, and poses a threat to the fundamental rights of
people in Canada,” the letter states.18

Liberty, the United Kingdom’s oldest civil liberties and human rights organization
(founded in 1934), takes a similar position. Facial recognition technology “breaches
everyone’s human rights, discriminates against people of colour and is unlawful. It’s
time to ban it,” Liberty argues.19


5.2.8 Regs Fail—General

Regulation assumes there are valuable upsides to FRT—that’s false. Additionally,
procedural rules inevitably favor corporate interests, courts rarely rule in favor of
privacy, and regulation will not constrain the worst behavior.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Enacting procedural rules rather than banning facial recognition is, of course, preferable
to no regulation at all. But the very goal of regulation assumes that there is value to the
use of these technologies that outweighs the harms they wreak. The scale, severity, and
variety of harms in question, along with the limited value of the benefits, belie that con‑
clusion. Procedural protections like notice and opt‑out rights are unlikely to sufficiently
curb the full breadth of harms imposed by these technologies, just as they are insuffi‑
cient to curb other privacy harms.321 The likelihood of mission creep322 is also far too
great. Procedural regulation is a suitable approach to conduct that has socially valuable
benefits worth preserving, and social dangers that are minimal or unlikely enough that
the risk of an insufficient regulatory response is tolerable. That is not the case with facial
recognition technologies. 323

Julie Cohen’s 324 and Ari Waldman’s 325 critiques of the “managerialization” of privacy
law help demonstrate why procedural rules would be inadequate. As they describe it,
the shift of authority over what privacy regulations practically mean from judges and
regulators to corporate compliance officers has reduced the substantive goals of statu‑
tory privacy protections to hollow, symbolic boxchecking exercises. This shift, exac‑
erbated by the proliferation of ambiguous standards that necessitate interpretation by
corporate compliance officers, has diluted the substantive goals of privacy laws from
providing what is necessary to ensure privacy rights are respected to providing what is
sufficient to avert expensive fines.326 In the case of facial recognition technologies, the
most likely outcome is that companies will push the boundaries of profitable products
and services that fail to meaningfully protect people from any number of harms.

Privacy policies and check‑the‑box compliance exercises will not be sufficient to con‑
strain the dangers of facial recognition technologies, particularly given the lack of vig‑
orous regulatory oversight and the difficulty that privacy litigants face in the courts, that
is, when the victims of privacy violations even have a private right of action that will
supply them with a judicial forum.327 The benefits of facial recognition technologies
are far too minimal and the harms far too great to accept what Waldman describes as
“a neoliberal ethos that prioritizes deregulated markets and corporate innovation over
human welfare.” 328

Procedural regulation fails—financial incentives overwhelm, there’s no public will,
and there’s a unique danger that escapes regulation.

Hartzog and Selinger 18

Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity) and Dr. Evan Selinger (professor of philosophy at Rochester Institute of Tech‑
nology), “Facial Recognition Is the Perfect Tool for Oppression,” Medium, 3 August
2018, https://medium.com/s/story/facial‑recognition‑is‑the‑perfect‑tool‑for‑oppression‑
bc2a08f0fe66, accessed 9 March 2023

Why Facial Recognition Technology Can’t Be Procedurally Regulated

Because facial recognition technology poses an extraordinary danger, society can’t af‑
ford to have faith in internal processes of reform like self‑regulation. Financial rewards
will encourage entrepreneurialism that pushes facial recognition technology to its limits,
and corporate lobbying will tilt heavily in this direction.

Society also can’t wait for a populist uprising. Facial recognition technology will con‑
tinue to be marketed as a component of the latest and greatest apps and devices. Apple
is already pitching Face ID as the best new feature of its new iPhone. The same goes
for ideologically charged news coverage of events where facial recognition technology
appears to save the day.

Finally, society shouldn’t place its hopes in conventional approaches to regulation.
Since facial recognition technology poses a unique threat, it can’t be contained by
measures that define appropriate and inappropriate uses and that hope to balance
potential social benefit with a deterrent for bad actors. This is one of the rare situations
that requires an absolute prohibition, something like the Ottawa Treaty on landmines.

Right now, there are a few smart proposals to control facial recognition technology and
even fewer actual laws limiting it. The biometric laws in Illinois and Texas, for example,
are commendable, yet they follow the traditional regulatory strategy of requiring those
who would collect and use facial recognition (and other biometric identifiers) to follow
a basic set of fair information practices and privacy protocols. These include require‑
ments to get informed consent prior to collection, mandated data protection obligations
and retention limits, prohibitions on profiting from biometric data, limited ability to
disclose biometric data to others, and, notably, private causes of action for violations of
the statutes.

Proposed facial recognition laws follow along similar lines. The Federal Trade Commis‑
sion recommends a similar “notice, choice, and fair data limits” approach to facial recog‑
nition. The Electronic Frontier Foundation’s report, which focuses on law enforcement,
contains similar though more robust suggestions. These include placing restrictions
on collecting and storing data; recommending limiting the combination of one or more
biometrics in a single database; defining clear rules for use, sharing, and security; and
providing notice, audit trails, and independent oversight. In its model face recognition
legislation, the Georgetown Law Center on Privacy and Technology’s report proposes
significant restrictions on government access to face‑print databases as well as meaning‑
ful limitations on use of real‑time facial recognition.

Tragically, most of these existing and proposed requirements are procedural, and in our
opinion they won’t ultimately stop surveillance creep and the spread of face‑scanning
infrastructure. For starters, some of the basic assumptions about consent, notice, and
choice that are built into the existing legal frameworks are faulty. Informed consent as a
regulatory mechanism for surveillance and data practices is a spectacular failure. Even
if people were given all the control in the world, they wouldn’t be able to meaningfully
exercise it at scale.

Yet lawmakers and industry trudge on, oblivious to people’s time and resource limita‑
tions. Additionally, these rules, like most privacy rules in the digital age, are riddled
with holes. Some of the statutes apply only to how data is collected or stored but largely
ignore how it is used. Others apply only to commercial actors or to the government and
are so ambiguous as to tolerate all kinds of pernicious activity. And to recognize the
touted benefits of facial recognition would require more cameras, more infrastructure,
and face databases of all‑encompassing breadth.

The Future of Human Faces

Because facial recognition technology holds out the promise of translating who we are
and everywhere we go into trackable information that can be nearly instantly stored,
shared, and analyzed, its future development threatens to leave us constantly compro‑
mised. The future of human flourishing depends upon facial recognition technology
being banned before the systems become too entrenched in our lives. Otherwise, peo‑
ple won’t know what it’s like to be in public without being automatically identified,
profiled, and potentially exploited. In such a world, critics of facial recognition technol‑
ogy will be disempowered, silenced, or cease to exist.

The dangers of FRT mean that no regulatory system can prevent abuse—regulations
only normalize surveillance.

Dauvergne 22

Peter Dauvergne (a professor of international relations, specializing in global environ‑
mental politics at the University of British Columbia), Chapter 3: The movement to op‑
pose facial recognition, In: Identified, Tracked, and Profiled: The Politics of Resisting Facial
Recognition Technology, Elgar Publishing, 2022, https://doi.org/10.4337/9781803925899

At the same time, though, increasing numbers of activists have come to the conclusion
that comprehensive, permanent bans are necessary on the use of FRT for routine polic‑
ing, mass surveillance, and discriminatory profiling, arguing that no regulatory system
is ever going to prevent abuse by security forces, politicians, and corporations. Profes‑
sors Woodrow Hartzog and Evan Selinger capture this sentiment well. Facial recogni‑
tion technology, they argue, is “potently, uniquely dangerous – something so inherently
toxic that it deserves to be completely rejected, banned, and stigmatized. ... The weak
procedural path proposed by industry and government will only ensure facial recogni‑
tion’s ongoing creep into ever more aspects of everyday life.”8

The dangers of FRT go beyond accuracy issues—only a total ban solves.

Dauvergne 22

Peter Dauvergne (a professor of international relations, specializing in global environ‑
mental politics at the University of British Columbia), Chapter 3: The movement to op‑
pose facial recognition, In: Identified, Tracked, and Profiled: The Politics of Resisting Facial
Recognition Technology, Elgar Publishing, 2022, https://doi.org/10.4337/9781803925899

Facial recognition technology “is too dangerous to ever be regulated,” argues Jennifer
Jones at the ACLU of Northern California.9 Trying to do so, Hartzog and Selinger add,
will inevitably end up normalizing the technology within society, which would then
smother any opposition. It must never “become too entrenched in our lives,” they write.
If this were to occur, “critics of facial recognition technology will be disempowered,
silenced, or cease to exist.”10

Luke Stark at the University of Western Ontario essentially concurs, describing facial
recognition as the “plutonium of AI,” and calling for “controls so strict” as to effectively
ban the technology. “It’s dangerous, racializing, and has few legitimate uses,” he main‑
tains. And in his view, this will never change, no matter how accurate it becomes.11

The very nature of biometric recognition technologies ensures that the potential for
abuse is too great for regulations to solve.

Amnesty International 21

Amnesty International (international non‑governmental organization focused on
human rights), “Press Release: Amnesty International and more than 170 organisations
call for a ban on biometric surveillance,” 7 June 2021, https://www.amnesty.org/en/latest/press‑
release/2021/06/amnesty‑international‑and‑more‑than‑170‑organisations‑call‑for‑a‑
ban‑on‑biometric‑surveillance/, accessed 9 March 2023

Open letter calling for a global ban on biometric recognition technologies that enable
mass and discriminatory surveillance

We, the undersigned, call for an outright ban on uses of facial recognition and remote
biometric recognition technologies that enable mass surveillance and discriminatory tar‑
geted surveillance. These tools have the capacity to identify, follow, single out, and
track people everywhere they go, undermining our human rights and civil liberties —
including the rights to privacy and data protection, the right to freedom of expression,
the right to free assembly and association (leading to the criminalization of protest and
causing a chilling effect), and the rights to equality and non‑discrimination.

We have seen facial recognition and remote biometric recognition technologies used
to enable a litany of human rights abuses. In China, the United States, Russia, Eng‑
land, Uganda, Kenya, Slovenia, Myanmar, the United Arab Emirates, Israel, and India,
surveillance of protesters and civilians has harmed people’s right to privacy and right
to free assembly and association. The wrongful arrests of innocent individuals in the
United States, Argentina, and Brazil have undermined people’s right to privacy and
their rights to due process and freedom of movement. The surveillance of ethnic and
religious minorities and other marginalized and oppressed communities in China, Thai‑
land, and Italy has violated people’s right to privacy and their rights to equality and
non‑discrimination.

These technologies, by design, threaten people’s rights and have already caused signif‑
icant harm. No technical or legal safeguards could ever fully eliminate the threat they
pose, and we therefore believe they should never be used in public or publicly accessi‑
ble spaces, either by governments or the private sector. The potential for abuse is too
great, and the consequences too severe.

FRT is so dangerous that procedural protections will be unable to constrain it.

Selinger and Hartzog 18

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Univer‑
sity), “Amazon Needs to Stop Providing Facial Recognition Tech for the Government,”
Medium, 22 June 2018, https://medium.com/s/story/amazon‑needs‑to‑stop‑providing‑
facial‑recognition‑tech‑for‑the‑government‑795741a016a6, accessed 10 March 2023

Imagine a technology that is potently, uniquely dangerous — something so inherently
toxic that it deserves to be completely rejected, banned, and stigmatized. Something so
pernicious that regulation cannot adequately protect citizens from its effects.

That technology is already here. It is facial recognition technology, and its dangers are
so great that it must be rejected entirely.

Society isn’t used to viewing facial recognition technology this way. Instead, we’ve been
led to believe that advances in facial recognition technology will improve everything
from law enforcement to the economy, education, cybersecurity, health care, and our
personal lives. Unfortunately, we’ve been led astray.

Procedural Pessimism

After an outcry from employees and advocates, Google recently announced it will not
renew a controversial project with the Pentagon called Project Maven. It also released
a set of principles that will govern how it develops artificial intelligence. Some prin‑
ciples focus on widely shared ideals, like avoiding bias and incorporating privacy by
design principles. Others are more dramatic, such as staying away from A.I. that can
be weaponized and steering clear of surveillance technologies that are out of sync with
internationally shared norms.

Admittedly, Google’s principles are vague. How the rules get applied will determine if
they’re window dressing or the real deal. But if we take Google’s commitment at face
value, it’s an important gesture. The company could have said that the proper way to
get the government to use drones responsibly is to ensure that the right laws cover con‑
troversial situations like targeted drone strikes. After all, there’s nothing illegal about
tech companies working on drone technology for the government.

Indeed, companies and policymakers often seek refuge in legal compliance procedures,
embracing comforting half‑measures like restrictions on certain kinds of uses of tech‑
nology, requirements for consent to deploy technologies in certain contexts, and vague
pinkie‑swears in vendor contracts to not act illegally or harm others. For some problems
raised by digital and surveillance technologies, this might be enough, and certainly it’s
unwise to choke off the potential of technologies that might change our lives for the bet‑
ter. A litany of technologies, from the automobile to the database to the internet itself,
has contributed immensely to human welfare. Such technologies are worth preserving
with rules that mitigate harm but accept reasonable levels of risk.

Facial recognition systems are not among these technologies. They can’t exist with ben‑
efits flowing and harms adequately curbed. That’s because the most‑touted benefits of
facial recognition would require implementing oppressive, ubiquitous surveillance sys‑
tems and the kind of loose and dangerous data practices that civil rights and privacy
rules aim to prevent. Consent rules, procedural requirements, and boilerplate contracts
are no match for that kind of formidable infrastructure and irresistible incentives for
exploitation.

The weak procedural path proposed by industry and government will only ensure facial
recognition’s ongoing creep into ever more aspects of everyday life. It will place an
even greater burden on people to protect themselves, and it will require accountability
resources from companies that they don’t have. It’s not worth it, no matter what carrots
are dangled. To stop the spread of this uniquely dangerous technology, we’ll need more
than processing rules and consent requirements. In the long run, we’re going to need a
complete ban.

5.2.9 Regs Fail—Creep

Regulations cannot solve function creep of both the scope of technology and data
access.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

A concern we have about calling for new regulation addressing biometric surveillance
in the public space, is that we will not be able to contain the use. The call for regulation
is a call for a limited legal basis for the deployment of this extremely invasive technol‑
ogy. History has taught us never to underestimate a good function creep. There are
several ways the use and effects of facial recognition surveillance might expand over
time. First, the legal basis and/or the scope of the basis can be expanded. We have
seen this before with other surveillance measures being introduced. Restricting the use
of such far‑reaching technology to combating terrorism might sound limited, but the
limitation and therefore protections are dependent on government classifications. Sev‑
eral examples around the world show that even non‑violent citizen interest groups
are classified as ‘extremist’ or ‘terrorist’ when more powers to surveil these groups are
desired.3 A second example of how function creep will take place is with regard to ac‑
cess to the data. Waiving the fraud‑prevention‑flag, and showing a complete distrust of
citizens, government institutions are very keen to share access and combine databases.4
Why would facial recognition databases be exempt from this data hunger?

5.2.10 Regs Fail—Children

FRT for children is uniquely damaging and justifies a complete ban.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

The use of facial recognition technologies on children endangers their wellbeing despite
the fact that their surveillance is often intended for their protection. The developmental
immaturity of young people means that the chilling effects of surveillance may be partic‑
ularly impactful on their emotional and intellectual growth, and arbitrary errors in law
enforcement investigations may have particularly severe consequences for their safety,
freedom, and life trajectories. But even the harms that are uniquely severe for children
are still severely felt by the other demographic groups for whom facial recognition tech‑
nologies tend to perform poorly, and the damage that these services wreak on privacy,
free expression, and due process is essentially universal. Children should be protected
from the destructive effects of facial recognition technologies, so should everyone else,
and the harms that are even more severe for children are damaging enough for adults
alone to necessitate a ban.

The harms of FRT on children and other vulnerable groups justify a wholesale ban.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

A Child‑Specific Ban is Not Sufficient, Defensible, Or Feasible

Many of the harms that are either broadly shared or shared by some demographic
groups may have particularly severe consequences for children, like the potential chill‑
ing effects of surveillance on their emotional and intellectual development early on, or
early exposure to the criminal justice system through inclusion in a law enforcement
database. The harms that facial recognition surveillance creates may have an outsized
impact on the lives of young people by virtue of the fact that their anonymity is eroded
earlier in their lives, and the ramifications of any kinds of fairness implications will af‑
fect them at a time when they’re more vulnerable.332

That vulnerability to facial recognition technologies merits strong privacy protections,
but there is no defensible basis for stronger civil liberties protections for one demo‑
graphic group that is vulnerable to discrimination through the use of the technology
by virtue of who they are, but not the other groups who are similarly vulnerable. Some
of the harms children experience are uniquely severe for them, but relativity is key: the
harms that facial recognition technologies inflict on all groups merit a ban even without
the additional severity of the harms to children. Studies have shown that facial recog‑
nition algorithms are less accurate for children’s faces, but as discussed above, those
studies have also shown the failure of those algorithms to accurately assess the faces of
Asian people, Black people, women, and the elderly—and many aren’t even designed
to account for the existence of non‑binary or trans people at all. The potential for dis‑
crimination extends to those groups as well, and while the forms of discrimination may
vary, they are all concerning enough to warrant intervention.

Children are uniquely vulnerable because of limited autonomy and the
disproportionate surveillance they face.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Further, certain use cases and vulnerabilities for children as compared to other groups
simply aren’t comparable in a meaningful way such that a child‑limited prohibition
would be defensible. Children are subject to disproportionate surveillance beyond their
control given the concerns of adults for their safety, and they often have limited auton‑
omy over their movements. The same is true for the disproportionate surveillance of
residents of public housing—which, of course, includes children and teenagers—whose
ability to protect or reject unwanted surveillance is often limited.333 Children surveilled
in schools and adults in public housing often do not have a meaningful way to avoid
face surveillance, and the privacy and freedom of the latter group should be protected
just as much as the privacy and freedom of the former.

Children cannot consent.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Children’s incapacity to consent to surveillance is another factor that makes it a partic‑
ularly unfair intrusion into their private lives. But ultimately, children and adults alike
have little control over whether facial recognition technologies are used on them.334
Many of the most problematic uses occur without the surveiller even attempting to ob‑
tain consent, and when consent is obtained from adults, it is almost always uninformed,
if not flatly coerced. Consent to Facebook’s terms of service is not a meaningful indica‑
tion of knowing acceptance when the company permits a facial recognition service to
scrape its platform and build tools for commercial335 or law enforcement use.336 Con‑
sent is not a meaningful protection against government‑collected images like driver’s
license photos or visas from being repurposed for facial recognition databases—people
need to be able to drive cars and travel. A robust literature also illustrates just how
poorly adults are situated to make informed privacy decisions, given the complexity and
length of privacy policies and the frequency with which people encounter them.337 As
much as children and teens struggle to accurately assess privacy risks, so too do adults.
Children being exploited in a slightly more egregious way does not justify the exploita‑
tion of everyone else.

There would be no way to enforce a child‑specific ban.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Beyond the lack of normative merits, a child‑specific ban would also be exceedingly dif‑
ficult to coherently design and effectively enforce. COPPA attempts to strike the balance
of limiting data collected from children without unduly limiting the collection practices
of general audience services through a two‑pronged applicability approach: the statute
applies to companies that direct services to children and companies that do not delib‑
erately target them, but do have actual knowledge that they are collecting personal in‑
formation from children anyway. The results have been murky at best. The high bar
of an actual knowledge standard has allowed general audience services to feign igno‑
rance of the children on their platform, while the lack of enforcement has incentivized
companies to ignore it, given that failing to comply with COPPA is unlikely to produce
an investigation or penalty.338 The harms that these technologies inflict on adults are
too significant for an intervention that deliberately excludes them to be desirable or
sufficient. But even if the harms to adults could be defensibly ignored, attempting to
distinguish child‑specific from general uses will be unreliable and under‑inclusive. A
facial recognition law should not replicate COPPA’s mistakes, it should learn from them
by simply setting a higher standard for everyone.

A child‑specific ban would still harm adults.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

Moreover, as this article has attempted to illustrate, the harms that are particularly
heightened for children are still severe for adults, with the same result of eroding their
privacy, free expression, and due process rights. Knowledge of surveillance may have
particular implications for children’s intellectual development and political freedom,
but it also chills free expression for adults. Facial recognition technology’s singularly
pernicious assortment of attributes as a surveillance tool—that hiding or changing
your face is impracticable or impossible, that it weaponizes existing databases of
photographs, and consent is often ill‑informed and coercive on the rare occasions
it’s even sought—generally implicates all age groups. Having a picture or video of
you used in a facial recognition search by law enforcement is more likely to result
in an erroneous result for certain groups, but law enforcement’s surreptitious use of
these technologies in investigations still affects due process rights for everyone. A
child‑specific ban would not address any of these dangers, and would thus be an overly
narrow response to a far broader problem.

Finally, child‑specific privacy protections can still be an acceptable or even welcome
policy approach in other circumstances where they are necessary for children, and in‑
applicable to adults. A regulatory regime that provided strong, comprehensive privacy
protections for all people, with meaningful enforcement by the government and private
plaintiffs and functional redress for violations, might invite additional, heightened pro‑
tections for children in situations that generally don’t exist for adults, such as privacy
in school settings, or even privacy protections for children from their parents.339 Nor
am I arguing that COPPA should be wholly preempted by a general facial recognition
technology ban, given the valuable, if highly imperfect, protections COPPA provides for
children in contexts beyond the use of facial recognition technology. But in the context
of technology or circumstances that implicate both children and adults, the heightened
vulnerability of children should not invite the assumption that protections for adults
are not similarly needed. In the case of facial recognition technologies, children’s auton‑
omy, safety, and freedom are not uniquely under threat such that protections for them
alone are necessary, and protections for adults are unnecessary. On the contrary.

5.2.11 Regs Fail—Freedom

Regulating is insufficient because governments themselves are the greatest threats
and it would still result in massive harm to our freedom.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

To that end, an unexpected shift in governance has begun. U.S. cities have started ban‑
ning government agents from using facial recognition technology.52 Statewide mora‑
toriums on government agents are being considered too.53 Bans, whether temporary
or permanent, are extremely rare in U.S. governance because lawmakers and policy ad‑
vocates often make three core presumptions about regulation. The first is that extreme
fears about new technologies should be viewed as over‑reactions that parallel previous
panics about technologies that society effectively adapted to, such as the automobile, ra‑
dio, and television.54 The second is that all dual‑use technologies should be integrated
into society through policies that aim to appropriately balance costs and benefits.55 The
third is that the best approach to regulating surveillance is through tech‑neutral legis‑
lation that applies to all surveillance technologies and does not single out specific ones
for unique treatment.56

For the reasons that we have provided, we believe that these presumptions do not apply
here and conclude that, at a minimum, moratoriums are justified because the conditions
for consentability for facial recognition technology have not been met. Furthermore,
face surveillance of all kinds presents a panoply of harms, most notably corrosion of
collective autonomy through the chill of increased surveillance and machines indulging
the fatally flawed notion of perfect enforcement of the law. Neither consent nor proce‑
dural frameworks like warrant requirements are sufficient to address these harms. As
such, we argue face surveillance should be banned. Regulating the government with‑
out also imposing restrictions on technology companies is insufficient, but a promising
start because, at present, government agents pose the greatest threats.

As Clare Garvie rightly observes, mistakes with facial recognition technology can have
deadly consequences.57 This means they can trample an individual’s right to be free
from bodily harm, the highest of the individual autonomy rights in Kim’s framework.58

What happens if a system like this gets it wrong? A mistake by a video‑based
surveillance system may mean an innocent person is followed, investigated,
and maybe even arrested and charged for a crime he or she didn’t commit.
A mistake by a face‑scanning surveillance system on a body camera could be
lethal. An officer alerted to a potential threat to public safety or to himself,
must, in an instant, decide whether to draw his weapon. A false alert places
an innocent person in those crosshairs.59

Lawmakers could regulate facial recognition a few different ways, and all but one will
lead to an irrevocable erosion of obscurity and collective autonomy. When considering
how to regulate private commercial use of facial recognition, lawmakers will be tempted
to go back to that old standby regulatory mechanism that they always reach for when
they lack political capital, resources, or imagination: consent. Consent is attractive be‑
cause it pays lip service to the idea that people have diverse preferences, it’s steeped
in the law, and at a glance appears to be a compromise between competing values and
interests. But as Kim demonstrated and we argue, it is fool’s gold for facial recognition
technologies, especially face surveillance. Even highly regulated and constrained use
of facial recognition technology that has been agreed to will lead to an erosion of ob‑
scurity and a harm to our collective autonomy without actually serving our individual
autonomy interests.
The problem is that there aren’t many proven alternatives to consent regimes for com‑
mercial use of facial recognition that go beyond mere procedural frameworks. If the
E.U.’s General Data Protection Regulation is any guide, the most prominent alternative
to legitimize collection and processing of face biometric data is to require companies to
have a “legitimate interest” in doing so.60 But what constitutes a “legitimate interest”
is notoriously slippery and subject to drift. Lawmakers have yet to get serious in using
this concept to significantly rein in the power wielded by data controllers.
So, if facial recognition becomes entrenched in the private sector by procedural frame‑
works, that means that in addition to a warrant framework’s accretion problem, the gov‑
ernment will also have a backdoor to retroactive surveillance via the personal data in‑
dustrial complex. Through public/private cooperation, surveillance infrastructure will
continue to be built, chill will still occur, harms will still happen, norms will still change,
collective autonomy still will suffer, and people’s individual and collective obscurity
will bit by bit continue to diminish.
The end result is that even if advocates of consent and warrant requirements got every‑
thing on their wish list, society would still end up worse off. We would suffer unaccept‑
able harm to our obscurity and collective autonomy through a barrage of I agree buttons
and search warrants powered by government and industry’s unquenchable thirst for
more access to our lives. There is only one way to stop the harms of face surveillance.
Ban it.

5.2.12 Regs Fail—Chilling

Even if rarely used, FRT will cause massive harms by chilling behavior.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

The harms of facial surveillance are legion. The mere existence of facial recognition sys‑
tems, which are often invisible, harms civil liberties because people will act differently
if they suspect they’re being surveilled.49 Even legislation that promises stringent pro‑
tective procedures won’t prevent chill from impeding crucial opportunities for human
flourishing by dampening expressive and religious conduct. Warrant requirements for
facial recognition will merely set the conditions for surveillance to occur, which will
normalize tracking and identification, reorganize and entrench organizational structure
and practices, and drive government and industry investment in facial recognition tools
and infrastructure.

5.2.13 AT: Consent

Consent isn’t possible with regard to FRT because it erodes collective autonomy
and people lack the knowledge to meaningfully consent.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

We argue that valid consent is not possible for face surveillance in many of its current
and proposed applications because of its inevitable corrosion of our collective auton‑
omy, to say nothing of the dubious validity of individual consent in these contexts.8
Additionally, we argue that some forms of characterization are inconsentable due to
collective autonomy problems and are at least vulnerable to defective consent. Even
“1:1 facial identification” features are highly subject to defective consent and should be
highly scrutinized. Only facial detection tools (“is this a face?”) seem entitled to the ben‑
efit of the doubt because they are not used to persistently track, identify, or manipulate
people.

One reason consent to facial recognition is highly suspect is that people do not and
largely cannot possess an appropriate level of knowledge about the substantial threats
that facial recognition technology poses to their own autonomy.9 Additionally, the fram‑
ing of this debate around the amorphous concept of individual “privacy” has hidden
unjustifiable risks to two of the most important values implicated by facial recogni‑
tion: obscurity and collective autonomy. Even if some people withhold consent for
face surveillance, others will inevitably give it. Rules that facilitate this kind of permis‑
sion will normalize behavior, entrench organizational practices, and fuel investment in
technologies that will result in a net increase of surveillance. Expanding a surveillance
infrastructure will increase the number of searches that occur which, in itself, will have
a chilling effect over time as law enforcement and industry slowly but surely erode our
collective and individual obscurity.

Building an infrastructure to facilitate surveillance will also provide more vectors for
abuse and careless errors. No one is perfect, and the more requests for permission to
surveil that are made the more harm from mistakes and malice will exist. Additionally,
the larger and more entrenched facial recognition infrastructure becomes, the more op‑
portunities exist for law enforcement to bypass procedural rules on searches to obtain
information directly from industry. For example, if the government were prohibited
from directly using facial recognition technologies, it could purchase people’s location
data obtained from facial recognition technology (and thus linked to their identities)
from private industry. Procedural rules wouldn’t address the true harm of these tech‑
nologies without further prohibitions to prevent end‑runs around the aims of a restric‑
tion.

5.2.14 AT: Accuracy Requirements

Increasingly accurate FRT is actually more dangerous—it makes surveillance even
more powerful.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), The Inconsentability of Facial Surveillance, 66 Loyola Law Review 101, 2019.
https://scholarship.law.bu.edu/faculty_scholarship/3066

Over time, advances in facial recognition technology might eliminate all kinds of errors.
Unfortunately, more accurate versions of the technology pose even greater dangers be‑
cause the problems with facial surveillance are fundamental and unique. Evan Greer
contends, “Biometric surveillance powered by artificial intelligence is categorically dif‑
ferent than any surveillance we have seen before. It enables real‑time location tracking
and behavior policing of an entire population at a previously impossible scale.”31 The
technology can be used to create chill that routinely prevents citizens from engaging
in First Amendment protected activities, such as free association and free expression.
They could also gradually erode due process ideals by facilitating a shift to a world
where citizens are not presumed innocent but are codified as risk profiles with varying
potentials to commit a crime. In such a world, the government and companies alike
will find it easy to excessively police minor infractions, similar to how law enforcement
already uses minor infractions as pretexts to cover up more invasive motives.32 Surveil‑
lance tools bestow power on the watcher. Abuse of the power that was once localized
and costly could become systematized, super‑charged, and turnkey. Companies could
expand their reach of relentless and manipulative marketing by peddling their wares
over smart signs that display personalized advertisements in public spaces. And as
more emotional states, private thoughts, and behavioral predictions are coded from fa‑
cial data, people will lose more and more control over their identities. They could be
characterized as belonging to groups that they don’t identify with or don’t want every‑
one knowing they belong to. And while schools might monitor students more intensely
and make the educational environment more like a prison, bad actors will have oppor‑
tunities to create even more general security problems through hacking and scraping.

5.2.15 AT: Data Anonymity

Data anonymity doesn’t solve—it can still be linked.

Amnesty International 21

Amnesty International (international non‑governmental organization focused on
human rights), “Press Release: Amnesty International and more than 170 organisations
call for a ban on biometric surveillance,” 7 June 2021, https://www.amnesty.org/en/latest/press‑
release/2021/06/amnesty‑international‑and‑more‑than‑170‑organisations‑call‑for‑a‑
ban‑on‑biometric‑surveillance/, accessed 9 March 2023

Although some applications of facial recognition and remote biometric recognition
claim to protect people’s privacy by not linking to their legal identities, they can
nevertheless be used to single out individuals in public spaces, or to make inferences
about their characteristics and behavior. In all such situations, it does not matter
whether data are anonymized to protect personally identifiable information or only
processed locally (i.e. “on the edge”); the harm to our rights occurs regardless because
these tools are fundamentally designed for, and enable, the surveillance of people in a
manner that is incompatible with our rights.

5.2.16 AT: Ban Only for LE

Banning it only for law enforcement fails because of blurred lines and mission
creep.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

The idea of a temporary moratorium on the technology is preferable to leaky procedural
rules, but tends to rest on the idea that there will ever be a time when they work equally
well for all demographic groups, which may never be possible.329 Even in a magical
world where the full range of bias problems were capable of correction, the end date of
the moratorium would mean that the harms to privacy, free expression, and due process
would return. Bans limited to law enforcement uses of facial recognition technologies
are an excellent start, but are still insufficient given how porous the line between law
enforcement and private uses often is, and the dangers of commercial uses.330 Mission
creep is far too likely and difficult to prevent,331 and abuse under opaque standards
is a significant contributing factor to how these technologies currently put democratic
values, like due process rights, at risk. For a class of services with severe bias problems
designed with the explicit objective of making anonymity impossible, a comprehensive
ban is the most appropriate response. The harms are vast and far‑reaching; the response
must be also.

5.2.17 AT: Moratorium

A moratorium only delays when harms to privacy and due process would occur.

Barrett 20

Lindsey Barrett (Staff Attorney and Teaching Fellow, Communications and Technology
Law Clinic, Georgetown University Law Center), Ban Facial Recognition Technologies
for Children—And for Everyone Else. Boston University Journal of Science and Tech‑
nology Law. Volume 26.2, 24 July 2020, https://ssrn.com/abstract=3660118

The idea of a temporary moratorium on the technology is preferable to leaky procedural
rules, but tends to rest on the idea that there will ever be a time when they work equally
well for all demographic groups, which may never be possible.329 Even in a magical
world where the full range of bias problems were capable of correction, the end date of
the moratorium would mean that the harms to privacy, free expression, and due process
would return. Bans limited to law enforcement uses of facial recognition technologies
are an excellent start, but are still insufficient given how porous the line between law
enforcement and private uses often is, and the dangers of commercial uses.330 Mission
creep is far too likely and difficult to prevent,331 and abuse under opaque standards
is a significant contributing factor to how these technologies currently put democratic
values, like due process rights, at risk. For a class of services with severe bias problems
designed with the explicit objective of making anonymity impossible, a comprehensive
ban is the most appropriate response. The harms are vast and far‑reaching; the response
must be also.

A moratorium doesn’t solve the fact that the technology is more dangerous the more
accurate it is.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

Brought back to its core, the purpose of a moratorium is to postpone the deployment of
face surveillance technology until the most pressing concerns are mitigated. The first
of those concerns is that of inaccuracy and bias. Our worries as regards to arguing for
a moratorium on the basis of this concern, is that the technological deficiencies might
be solvable over time, at least to an extent that brings the percentage of false positive
and negatives within the realms of what our political leaders deem acceptable. More
importantly, however, is that we might just be looking at a technology that becomes
more dangerous, the better it works. Not when it’s giving you access to your phone,
but definitely when applied as a mass surveillance tool.

FRT violates fundamental rights.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

Another aspect that calls for a moratorium often focused on is the demand for a reg‑
ulatory framework. This might imply to some that current legislation is ambiguous
about the acceptability of face surveillance. We need to be very clear that assessing face
surveillance in public space in the light of the European Convention on Human Rights,
the Charter of Fundamental Rights of the European Union, and the principles set out in
the General Data Protection Regulation, the required mass‑scale processing of biometric
data seems to be at odds.

A moratorium only gives space for the technology to become normalized.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

Finally, we’re concerned that for the duration of the moratorium, we will see the tech‑
nology become normalized. We will see industry deploy its lobbyists. We will see the
companies at the forefront of product development, search for and find product‑market‑
fit. We will see civil society again and again mobilise citizens until those citizens become
fatigued and weary, and disbelieving that their voice makes a difference. In other words:
time might prove a threat to our ability to clearly and adequately assess this technology.

Moratorium fails—it doesn’t eliminate the use of dangerous tech fast enough.

Amnesty International 21

Amnesty International (international non‑governmental organization focused on
human rights), “Press Release: Amnesty International and more than 170 organisations
call for a ban on biometric surveillance,” 7 June 2021, https://www.amnesty.org/en/latest/press‑
release/2021/06/amnesty‑international‑and‑more‑than‑170‑organisations‑call‑for‑a‑
ban‑on‑biometric‑surveillance/, accessed 9 March 2023

We call for a ban because, even though a moratorium could put a temporary stop to the
development and use of these technologies, and buy time to gather evidence and orga‑
nize democratic discussion, it is already clear that these investigations and discussions
will only further demonstrate that the use of these technologies in publicly accessible
spaces is incompatible with our human rights and civil liberties and must be banned
outright and for good.

5.2.18 AT: Notice and Choice

Notice and choice fails—it doesn’t adequately convey the threat of the technology to
privacy.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), “What Happens When Employers Can Read Your Facial Expressions?,” The
New York Times, 17 October 2019, https://www.nytimes.com/2019/10/17/opinion/facial‑
recognition‑ban.html, accessed 9 March 2023

The first argument is that the benefits of using a new technology can often outweigh the
harms. Law enforcement officials say that facial recognition helps catch criminals, find
missing people and prevent crimes. Industry officials point to the convenience of being
recognized by your phone, in photos and as you’re boarding a plane.

Instead of proposing a ban, these advocates of regulating facial recognition with a light
touch assert that lawmakers should create laws that incentivize law enforcement to ad‑
dress systemic bias in their procedures and require companies and government agencies
to adopt rules that heighten transparency and accountability. A guiding hope is that re‑
quiring consent forms or search warrants before facial recognition technology is applied
to our images will protect our rights.

We disagree. “Notice and choice” has been an abysmal failure. Social media companies,
airlines and retailers overhype the short‑term benefits of facial recognition while using
unreadable privacy policies and vague disclaimers that make it hard to understand how
the technology endangers users’ privacy and freedom.

And while warrant requirements are important for limiting the power of government
officials, they don’t apply to the private sector, where individuals and companies will
remain largely free to monitor their families, neighbors, co‑workers and rival businesses
as they choose, including by sharing the information with law enforcement.

5.2.19 AT: FRT Bans Insufficient

We cannot wait to update privacy rules—banning a uniquely dangerous technology
now is an urgent priority.

Selinger and Hartzog 19

Dr. Evan Selinger (professor of philosophy at Rochester Institute of Technology) and
Dr. Woodrow Hartzog (professor of law and computer science at Northeastern Uni‑
versity), “What Happens When Employers Can Read Your Facial Expressions?,” The
New York Times, 17 October 2019, https://www.nytimes.com/2019/10/17/opinion/facial‑
recognition‑ban.html, accessed 9 March 2023

A third argument in favor of using facial recognition technology is that privacy and
civil liberties are best protected by creating rules that focus on all surveillance, not
just a particular technology. If we ban facial recognition today, this argument goes,
what happens in the future when gait recognition or devices that read brain patterns go
mainstream? Some argue that banning this technology or some other one will prevent
us from wrestling with larger questions that apply to all emerging technologies, like
whether there is a right to be anonymous in public. Many of those questions are likely
to be taken up in the courts.

But we believe society can’t wait years for institutions like the Supreme Court to update
privacy protections for the digital age. By then, facial recognition infrastructure will
be ubiquitous, and exploiting its full potential will seem like a good use of resources.
The law singles out specific technologies all the time because they are so exceptional.
Automobiles, spyware, medical devices and a host of other technologies have their own
specific rules. Airplanes and telecommunications technologies were given their own
federal regulatory agencies.

Facial recognition is truly a one‑of‑a‑kind technology — and we should treat it as such.


Our faces are central to our identities, online and off, and they are difficult to hide. Peo‑
ple look to our faces for insight into our innermost feelings and dispositions. Our faces
are also easier to capture than biometrics like fingerprints and DNA, which require phys‑
ical contact or samples. And facial recognition technology is easy to use and accessible,
ready to plug into police body cameras and other systems.

We support a wide‑ranging ban on this powerful technology. But even limited prohi‑
bitions on its use in police body cams, D.M.V. databases, public housing and schools
would be an important start.

5.2.20 AT: Opt‑Out

You cannot opt‑out of your face or from public spaces.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

First, there are two distinct ways in which it leaves no room to opt‑out. Although public
space might not be well‑defined in law, and the limits of which spaces are public and
which aren’t, are not agreed upon, there is consensus about the fact that public space
is a place where people wanting to take part in society have no ability to opt out from
entering. In addition to the impossibility to opt‑out from public space, it is impossible to
opt‑out from your face, and difficult to prevent your face from being surveilled once the
technology has been deployed on the streets. The extremely personal characteristics of
your face cannot be changed or left at home in a drawer. In several countries, it is even
forbidden by law to cover your face when in public space. On top of that, it is fairly
easy to gather face information covertly and distantly. This allows others to identify
and follow people through public space without their knowledge.

5.2.21 AT: Targeted Solutions

Targeted solutions fail because they will inevitably be aggregated.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

Second, although the intended purpose of the deployment might be targeted, the real
world effects of face surveillance in public space are in any case untargeted. For instance,
in order to prevent all people on a particular watchlist from entering a specific place,
you need to scan and analyse every person and compare them to your list. Also, thanks
to insights into existing facial recognition law enforcement databases, we’ve seen that
there’s a tendency to collect as many faces as possible. A study from 2016 shows that
half of all United States adults are already included in one,1 and in the Netherlands the
criminal database includes 1.4 million people,2 which translates to 1 in every 12 citizens.

5.2.22 AT: Specific Use Restrictions

Regulations that target a single use will get struck down by the Courts as imposing
content‑based restrictions on free speech.

Ringel and Reid 23

Evan Ringel (PhD student at Hussman School of Journalism and Media, University of
North Carolina at Chapel Hill) and Amanda Reid (Associate Professor at Hussman
School of Journalism and Media, University of North Carolina at Chapel Hill), Regu‑
lating Facial Recognition Technology: A Taxonomy of Regulatory Schemata and First
Amendment Challenges. Communication Law and Policy, 15 February 2023, Forthcom‑
ing, http://dx.doi.org/10.2139/ssrn.4360358

Intuitively, it would make sense that a regulation explicitly designed to target a spe‑
cific use of facial recognition technology may be more likely to survive constitutional
scrutiny as a narrowly tailored regulation based on a compelling government interest.
However, the Supreme Court’s decision in Sorrell shows the potential problems with
such an approach, setting up one of the core tensions in state attempts to regulate facial
recognition technology. In Sorrell, Vermont’s attempted ban on the use of prescriber‑
identifying information for marketing purposes was seen by the Court as a “content‑
and speaker‑based restriction[].”223 In response to the problem of prescription drug
companies using prescriber‑identifying information to affect the prescribing practices of
doctors, the Prescription Confidentiality Act explicitly prevented the use of prescriberi‑
dentifying information for marketing purposes while leaving the door open for other
uses of the information. The Court interpreted this narrow tailoring of Vermont’s regu‑
lation as a unconstitutional content‑based burden on speech, finding that the law effec‑
tively “prevent[ed] detailers–and only detailers–from communicating with physicians
in an effective and informative manner.”224 However, Justice Kennedy suggested that
a broader restriction may have been more likely to survive constitutional scrutiny.225

Essentially, by targeting a specific use of prescriber‑identifying information that Ver‑
mont felt was negatively affecting its citizens rather than passing a broader restriction
on use of the information, the state imposed a content‑based restriction on speech. The
Court acknowledged that determining a particular regulation is content‑based is “all but
dispositive” in deciding a case; if a regulation is content‑based, it will almost certainly
be struck down as unconstitutional.226 In the context of facial recognition technology,
Sorrell establishes a regulatory tension that makes it difficult for states to target the uses
of facial recognition technology they find most problematic. The more granular a regu‑
lation is, the more likely that the court will apply a higher level of constitutional scrutiny
and/or determine that the regulation represents a content‑based burden on speech. Sor‑
rell shows that narrow regulations prohibiting a specific use of information will struggle
to survive even intermediate scrutiny. However, general regulations prohibiting most
uses of information may be more likely to survive under Sorrell; the less targeted a re‑
striction is, the less likely a court may be to apply heightened scrutiny to the restriction
as a content‑based restriction on speech.

5.2.23 AT: Enforce Existing Laws

Enforcing existing legal protections fails because it is nearly impossible to
formulate a coherent legal basis for regulations or to enforce them.

Houwing 20

Lotte Houwing (researcher and policy advisor at the Dutch digital rights NGO Bits of
Freedom), Stop the Creep of Biometric Surveillance Technology, European Data Protec‑
tion Law Review (EDPL), Vol. 6, No. 2, 2020, pp. 174‑177. HeinOnline

It would be preferable to address the problem of biometric surveillance technologies in
the public space with strong enforcement of existing regulation over the creation of a
new legal instrument that bans it. The reason for this is that it shows and strengthens the
potential this framework has in terms of protecting our rights and freedoms, providing
us with a strong and extensive framework in the long run. Unfortunately, a few factors
complicate this.

The Law Enforcement Directive additionally sets the high demand of strict necessity,
suitable safeguards should be in place and the processing must be permitted by Euro‑
pean Union or Member State law.5 It is questionable whether it is possible to formulate
a legal basis that meets these requirements while allowing for the mass‑scale processing
of biometric data inherent to these surveillance technologies. However, the protection of
our fundamental rights and freedoms does not benefit from the possible disagreement
or a long trial‑and‑error process.

Another problem is that these legal frameworks in themselves offer just theoretical pro‑
tection. The actual protection they have to offer is as strong as their enforcement mech‑
anisms. It is exactly these mechanisms that might be the weakest link in the chain.

As we see the deployment of facial recognition in public space in several Member States,
it might be needed that authorities provide some extra clarity and invigorate the existing
framework with an explicit ban on biometric surveillance technologies in public space.
44 digital rights organisations, including Bits of Freedom, called for a ban on biometric
mass surveillance.6

The one thing we know for sure, is that to protect our free societies and our fundamental
rights and freedoms, we cannot let biometric surveillance technology sneak up on us.

6 Con Evidence

6.1 General

6.1.1 Laundry List

Outright bans harm vulnerable populations such as trafficking victims.

Lobel 22
Orly Lobel (Warren‑Distinguished Professor and Director of the Center for Employment
and Labor Policy (CELP) at University of San Diego), “The Problem With Too Much
Data Privacy,” Time, 27 October 2022, https://time.com/6224484/data‑privacy‑problem/,
accessed 27 February 2023
We rightfully fear surveillance when it is designed to use our personal information in
harmful ways. Yet a default assumption that data collection is harmful is simply mis‑
guided. We should focus on regulating misuse rather than banning collection. Take
for example perhaps the most controversial technologies that privacy advocates avidly
seek to ban: facial recognition. 20 cities and counties around the U.S. have passed bans
on government facial recognition. In 2019, California enacted a three‑year moratorium
on the use of facial recognition technology in police body cameras. The two central con‑
cerns about facial recognition technology are its deficiencies in recognizing the faces of
minority groups—leading, for example, to false positive searches and arrests—and its
increase in population surveillance more generally. But the contemporary proposals of
unnuanced bans on the technology will stall improvements to its accuracy and hinder
its safe integration, to the detriment of vulnerable populations.
These outright bans ignore that surveillance cameras can help protect victims of domes‑
tic violence against abuser trespassing, help women create safety networks when trav‑
eling on their own, and reduce instances of abuse of power by law enforcement. Facial
recognition is increasingly aiding the fight against human trafficking and locating miss‑
ing people—and particularly missing children—when the technology is paired with AI
that creates maturation images to bridge the missing years. There are also many benefi‑
cial uses of facial recognition for the disability community, such as assisting people with
impaired vision and supporting the diagnosis of rare genetic disorders. While class ac‑
tion and ACLU lawsuits and reform proposals stack up, we need balanced policies that
allow facial recognition under safe conditions and restrictions.

Bans ignore the beneficial uses of FRT.

Feeney 22

Matthew Feeney (former director of Cato’s Project on Emerging Technologies),
“Regulate Facial Recognition, Don’t Ban It,” Cato Institute, 17 May 2022,
https://www.cato.org/blog/regulate‑facial‑recognition‑dont‑ban‑it, accessed 6 March
2023

Facial recognition is an ideal technology for mass surveillance. Chinese authorities reg‑
ularly provide depressing examples of how facial recognition can be used to surveil
entire communities. However, that facial recognition can be used for mass surveillance
does not warrant outright bans on the technology. It is not hard to imagine beneficial
uses of facial recognition technology. Facial recognition could help police find missing
children and adults with dementia who are lost. It can also be used to identify suspects
in violent crimes.

Bans are overly broad and ignore valuable applications of the technology.

Feeney and Chiu 22

Matthew Feeney (former director of Cato’s Project on Emerging Technologies) and
Rachel Chiu, “Facial Recognition Debate Lessons from Ukraine,” Cato Institute, 17
March 2022, https://www.cato.org/blog/facial‑recognition‑debate‑lessons‑ukraine,
accessed 6 March 2023

Civil liberties and surveillance concerns are important to debates over FRT, but they
should not be the only considerations. As the Reuters article demonstrates, facial recog‑
nition can help defending countries in wartime. But it can also be valuable in peace‑
time. For example, in 2018 police in New Delhi scanned images of 45,000 children in
orphanages and care facilities. Within four days, they were able to identify 2,930 missing children – a feat that would have been exceedingly difficult in the absence of this
technology.

FRT can be applied in many circumstances: It can help refugees locate their families,
strengthen commercial security, and prevent fraud. There are also use cases unrelated
to law enforcement and safety. For example, CaliBurger patrons can pay for their meal
with a face scan. Similar payment systems have been trialed around the world, in coun‑
tries such as Denmark, Nigeria, South Korea, and Finland.

Broad prohibitions and bans often overlook these valuable applications, focusing solely
on misuse by law enforcement.

6.1.2 No Solvency

Banning biometrics misses the point—unregulated data collection is inevitable and bans can’t solve the myriad of other ways we are being surveilled.

Schneier 20

Bruce Schneier (fellow at the Harvard Kennedy School and the author, most recently,
of “Click Here to Kill Everybody: Security and Survival in a Hyper‑Connected World”),
“We’re Banning Facial Recognition. We’re Missing the Point.,” The New York Times,
20 January 2020, https://www.nytimes.com/2020/01/20/opinion/facial‑recognition‑ban‑
privacy.html, accessed 9 March 2023

These efforts are well intentioned, but facial recognition bans are the wrong way to fight
against modern surveillance. Focusing on one particular identification method miscon‑
strues the nature of the surveillance society we’re in the process of building. Ubiquitous
mass surveillance is increasingly the norm. In countries like China, a surveillance infras‑
tructure is being built by the government for social control. In countries like the United
States, it’s being built by corporations in order to influence our buying behavior, and is
incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, cor‑
relation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their
knowledge or consent. It relies on the prevalence of cameras, which are becoming both
more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database
of existing photos.

But that’s just one identification technology among many. People can be identified at
a distance by their heart beat or by their gait, using a laser‑based system. Cameras are
so good that they can read fingerprints and iris patterns from meters away. And even
without any of these technologies, we can always be identified because our smartphones
broadcast unique numbers called MAC addresses. Other things identify us as well: our
phone numbers, our credit card numbers, the license plates on our cars. China, for
example, uses multiple identification technologies to support its surveillance state.

Once we are identified, the data about who we are and what we are doing can be corre‑
lated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, inter‑
net browsing data, or data about who we talk to via email or text. It might be data about
our income, ethnicity, lifestyle, profession and interests. There is an entire industry of
data brokers who make a living analyzing and augmenting data about who we are — using surveil‑
lance data collected by all sorts of companies and then sold without our knowledge or
consent.

There is a huge — and almost entirely unregulated — data broker industry in the United
States that trades on our information. This is how large internet companies like Google
and Facebook make their money. It’s not just that they know who we are, it’s that they
correlate what they know about us to create profiles about who we are and what our
interests are. This is why many companies buy license plate data from states. It’s also
why companies like Google are buying health records, and part of the reason Google
bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies — and governments — to treat
individuals differently. We are shown different ads on the internet and receive different
offers for credit cards. Smart billboards display different advertisements based on who
we are. In the future, we might be treated differently when we walk into a store, just as
we currently are when we visit websites.
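
To make the card’s “correlation” step concrete, here is a minimal sketch (the device IDs, places, and purchases are invented for illustration and are not from the evidence): once two otherwise unrelated data streams share any persistent identifier, they can be joined into a single profile, and the “discrimination” step is simply keying decisions off that profile.

```python
# Toy illustration: two unrelated data streams become a profile once they
# share any persistent identifier (here a hypothetical device MAC address).
from collections import defaultdict

location_pings = [
    {"mac": "a4:5e:60:01", "place": "gym", "hour": 7},
    {"mac": "a4:5e:60:01", "place": "clinic", "hour": 9},
]
purchases = [
    {"mac": "a4:5e:60:01", "item": "prenatal vitamins"},
]

profiles = defaultdict(lambda: {"places": [], "items": []})
for ping in location_pings:
    profiles[ping["mac"]]["places"].append(ping["place"])
for sale in purchases:
    profiles[sale["mac"]]["items"].append(sale["item"])

for mac, profile in profiles.items():
    print(mac, profile)  # the raw material for differential treatment
```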

The point is that it doesn’t matter which technology is used to identify people. That
there currently is no comprehensive database of heart beats or gaits doesn’t make the
technologies that gather them any less effective. And most of the time, it doesn’t matter
if identification isn’t tied to a real name. What’s important is that we can be consistently
identified over time. We might be completely anonymous in a system that uses unique
cookies to track us as we browse the internet, but the same process of correlation and dis‑
crimination still occurs. It’s the same with faces; we can be tracked as we move around
a store or shopping mall, even if that tracking isn’t tied to a specific name. And that
anonymity is fragile: If we ever order something online with a credit card, or purchase
something with a credit card in a store, then suddenly our real names are attached to
what was anonymous tracking information.

Regulating this system means addressing all three steps of the process. A ban on facial
recognition won’t make any difference if, in response, surveillance systems switch to
identifying people by smartphone MAC addresses. The problem is that we are being
identified without our knowledge or consent, and society needs rules about when that is permissible.

6.1.3 Bans Impossible

Outright bans will be impossible because of huge variability in the technology.

Heilweil 20

Rebecca Heilweil (Reporter, Recode by Vox), “How can we ban facial recognition when
it’s already everywhere?,” Vox, 3 July 2020, https://www.vox.com/recode/2020/7/3/
21307873/facial‑recognition‑ban‑law‑enforcement‑apple‑google‑facebook, accessed 7
March 2023

Regulating facial recognition will be piecemeal

The Facial Recognition and Biometric Technology Moratorium Act recently introduced
on Capitol Hill is sweeping. It would prohibit federal use of not only facial recognition
but also other types of biometric technologies, such as voice recognition and gait recog‑
nition, until Congress passes another law regulating the technology. The bill follows
other proposals to limit government use of the technology, including one that would
require a court‑issued warrant to use facial recognition and another that would limit
biometrics in federally assisted housing. Some local governments, like San Francisco,
have also limited their own acquisition of the technology.

So what about facial recognition when it’s used on people’s personal devices or by pri‑
vate companies? Congress has discussed the use of commercial facial recognition and
artificial intelligence more broadly. A bill called the Commercial Facial Recognition Pri‑
vacy Act would require companies collecting people’s biometric information to obtain explicit consent, and the Algorithmic Accountability Act would require large companies to
check their artificial intelligence, including facial recognition systems, for bias.

But the ubiquitous nature of facial recognition means that regulating the technology will
inevitably require piecemeal legislation and attention to detail so that specific use cases
don’t get overlooked. San Francisco, for example, had to amend its facial recognition
ordinance after it accidentally made police‑department‑owned iPhones illegal. When
Boston passed its recent facial recognition ordinance, it created an exclusion for facial
recognition used for logging into personal devices like laptops and phones.

“The mechanisms to regulate are so different,” said Brian Hofer, who helped craft San
Francisco’s facial recognition ban, adding that he’s now looking at creating local laws
modeled after Illinois’ Biometric Information Privacy Act that focus more on consumers.

“The laws are so different it would be probably impossible to write a clean, clearly un‑
derstood bill regulating both consumer and government.”

6.1.4 Circumvention

Bans will be ignored—NY schools prove.

Miguel and Schwarz 22

Juan Miguel (Program Associate, Education Policy Center) and Daniel Schwarz
(Senior Privacy & Technology Strategist, Policy), “NY is Ignoring the Ban on
Facial Recognition in Schools,” New York Civil Liberties Union, 28 June 2022,
https://www.nyclu.org/en/news/ny‑ignoring‑ban‑facial‑recognition‑schools, accessed
27 February 2023

To protect students, New York State adopted a law in 2020 placing a moratorium on the
use of invasive, biased, privacy‑destroying biometric surveillance in schools. The mora‑
torium cannot be lifted until the New York State Education Department (NYSED) issues
a report on the risks and benefits of this technology in schools and the Commissioner of
Education authorizes its use.

Despite this ban, the NYCLU has uncovered evidence that New York officials — in‑
cluding NYSED — are ignoring the law by approving grant applications for schools to
purchase biometric surveillance technologies, including facial recognition.

6.1.5 Rollback

Bans get rolled back—politicians don’t want to be perceived as soft on crime.

Metz 22

Rachel Metz (CNN Business contributor), “First, they banned facial recognition. Now
they’re not so sure,” CNN, 5 August 2022, https://edition.cnn.com/2022/08/05/tech/facial‑
recognition‑bans‑reversed/index.html, accessed 7 March 2023

Roughly two dozen facial‑recognition bans of various types have been enacted in com‑
munities and a few states across the United States since 2019. Many of them came in
2020; as Schwartz pointed out, there was a push in favor of limiting police use of surveil‑
lance technology surrounding the protests that came in the wake of the fatal arrest of
George Floyd by Minneapolis police officers in May of that year. Then, in the past year,
“the pendulum has swung a bit more in the law‑and‑order direction,” he said.

“In American politics there are swings between being afraid of government surveillance
and being afraid of crime. And in the short term there seems to have been a swing in
favor of fear of crime,” he said, adding that the EFF is “optimistic” that the overall trend
is toward limiting government use of such surveillance technologies.

Reversals in New Orleans and Virginia

When New Orleans approved a ban of facial‑recognition technology in late 2020 as part
of a broader ordinance to regulate numerous surveillance technologies in the city, six of
the seven council members at the time voted in favor of it (one was absent). By contrast,
when the votes were tallied in July for the ordinance that would allow police to use
facial‑recognition technology, four council members voted for it and two voted against
it (one council member was absent).

The turnabout less than two years later comes after a rise in homicides, following a
decline from 2016 to 2019.

The new rule lets city police request the use of facial‑recognition software to aid investi‑
gations related to a wide range of violent crimes, including murder, rape, kidnap, and
robbery.

In a statement applauding the city council’s July 21 vote in favor of facial‑recognition technology, New Orleans mayor LaToya Cantrell said, “I am grateful that the women
and men of the NOPD now have this valuable, force multiplying tool that will help take
dangerous criminals off our streets.”

Yet Lesli Harris, a New Orleans city council member who opposed the July ordinance,
is concerned about how the legislation could impact the civil rights of people in the city.
“As a woman of color it’s hard for me to be in favor of facial recognition,” Harris said,
pointing out that studies have shown that the technology can be less accurate at
recognizing people of color, and women of color in particular.

In Virginia, legislation that went into effect last July banned local law enforcement and
campus police from using facial‑recognition technology unless the state legislature first approved it.

6.1.6 Biometrics Are More Secure

Biometrics are far more secure, and hacking them is far more difficult.

Leong 19

Brenda Leong (Senior Counsel and Director of Artificial Intelligence and Ethics at the
Future of Privacy Forum), Facial recognition and the future of privacy: I always feel
like … somebody’s watching me, Bulletin of the Atomic Scientists, 75:3, 109‑115, 2019,
https://doi.org/10.1080/00963402.2019.1604886

One of the greatest areas of angst for those concerned about facial recognition systems
is in the security of the data. There are at least a couple of ways to consider the security.
One is “How close is it to ‘perfect’” – can it be spoofed (access gained without the actual
person’s face present) or hacked (accessing the stored file of templates). High‑quality
facial recognition systems rate very well on such a scale, but no system is perfect, and
critics have argued that even the small percentage of incorrect outputs of such a system
make them unreasonably risky. Perhaps, however, a better way of ranking them is to
compare them to available alternatives, such as passwords.

Biometric data, contained in a database of enrolled individuals, is almost certainly a more secure option than passwords. Passwords can be cracked fairly easily by “brute
force” methods (such as running software that attempts patterned combinations of num‑
bers and letters in alphanumeric code) if they’re not strong – which most are not. And
people tend to re‑use them, so having someone’s password from one account is likely
to provide access to other accounts as well, which results in the password only being as
safe as the weakest system on which it’s stored. So if a server file of passwords and a
server file of biometric templates are each breached, what are the risks?

Access to passwords yields immediately usable information to directly access individual accounts. And, as mentioned, each password might be useful to access other ac‑
counts as well. In contrast, a breach of a database of biometric data will yield only those
binary numbers which cannot easily – if ever – be “back‑engineered” into the template
or the original image. This information cannot be used to access the associated account,
nor is it likely to be the same system as used on any other accounts, since each platform
is probably using a different vendor and the algorithms are not interoperable. Actually
breaching a database of biometric data may or may not be harder, depending on general
network security, but if it were breached, there is a much higher likelihood that the data
would be much harder to exploit in any systematic way. Finally, biometrics are almost always part of a 2‑factor system – meaning they are only one piece of a multiple‑step
access process – and therefore just having the biometric isn’t enough to gain access.
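
As a rough illustration of the “brute force” contrast the card draws (the four-digit PIN, its hash, and the template values are invented; no real system is modeled): a leaked hash of a weak secret can be recovered by simple enumeration and reused directly, while a leaked biometric template is just a vendor-specific vector that cannot be replayed as a credential.

```python
import hashlib
from itertools import product

# Hypothetical breached record: the SHA-256 hash of a weak 4-digit PIN.
leaked_hash = hashlib.sha256(b"7294").hexdigest()

# Brute force: only 10,000 possible 4-digit PINs to try.
for digits in product("0123456789", repeat=4):
    guess = "".join(digits)
    if hashlib.sha256(guess.encode()).hexdigest() == leaked_hash:
        print("PIN recovered:", guess)  # an immediately reusable credential
        break

# A breached face template, by contrast, is a list of numbers tied to one
# vendor's non-interoperable algorithm, e.g. [0.12, -0.87, ...]; it cannot be
# typed into a login form or easily inverted back into the original face image.
```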

6.1.7 Border Controls

Use of FRT at the border is Constitutional and justified.

Santamaria 21

Kelsey Y. Santamaria (Legislative Attorney at the Congressional Research Service), Facial Recognition Technology and Law Enforcement: Select Constitutional Considerations, In: Issues with Facial Recognition Technology, Eds. Warren Lambert, Nova Science Publishers, New York, 2021. ISBN: 978‑1‑53618‑973‑5

The federal government makes use of FRT to identify international travelers coming to
and departing from the United States. But under current jurisprudence, this use seems
unlikely to trigger serious Fourth Amendment concerns.

Congress has broad authority to regulate persons or property entering the United
States—an authority that is rooted in its power to regulate foreign commerce and to
protect the integrity of the nation’s borders.147 Under federal statutes, government
officers may inspect and search individuals, merchandise, vehicles, and vessels that
are attempting to enter the United States or are found further within the interior of
the country shortly after entry.148 Additionally, government officers have statutory
authority to investigate potential violations of federal immigration laws at the border
and surrounding areas.149

Federal law requires DHS to develop and deploy a biometric entry and exit system.150
CBP has used a form of FRT, known as Traveler Verification Service (TVS), to support
biometric entry and exit systems at air, sea, and land environments. 151 CBP also uses
facial recognition and iris‑scanning technology for pedestrian travelers at some land
ports of entry, as well as facial recognition of occupants in moving vehicles entering
and exiting the United States. 152 In addition, CBP collects biometric information of
persons interdicted when illegally crossing the international border.153

The Supreme Court has recognized searches and seizures at international borders as
unique cases for Fourth Amendment purposes.154 Under the border search exception,
searches performed at international borders in relation to an actual or attempted bor‑
der crossing155 do not generally require a warrant, probable cause, or reasonable sus‑
picion.156

But the border search exception has limits. The Supreme Court has stated that rou‑
tine searches at the border “are reasonable simply by virtue of the fact that they occur at the border.”157 That said, not all searches at the border are per se reasonable un‑
der the Fourth Amendment. Some border searches conducted in a particularly intru‑
sive manner—such as a body cavity search—may still be limited by the Fourth Amend‑
ment.158 Simply stated, the reasonableness of a border search depends on the circum‑
stances of the search itself.159

Depending on the level of intrusion, some searches performed at the international border may require reasonable suspicion of unlawful activity.160 When determining
whether a search is reasonable, Fourth Amendment jurisprudence generally catego‑
rizes searches at the border into two categories: routine searches and nonroutine
searches, with the latter requiring a level of particularized suspicion of illegal activity.
Routine searches generally include searches of automobiles, baggage, and other goods
entering the country.161 Additionally, an individual seeking to enter the country may
be required to submit to a search of his or her outer clothing, 162 which may include an
examination of the contents of a purse, wallet, or pockets and a canine sniff.163 While
this is ongoing, the individual may be subject to a brief detention. 164 Nonroutine
border searches—such as prolonged detentions, strip searches, body cavity searches,
or involuntary x‑ray searches—require reasonable suspicion.165

Jurisprudence suggests that minimally intrusive collection of biometric data at an international border does not affront the Fourth Amendment. For example, the Second Cir‑
cuit166 has noted that collecting fingerprints, another biometric identifier, at a land port
of entry was a routine search, meaning that no reasonable suspicion was required.167
A Fourth Amendment challenge to the collection of nonobtrusive personal identifiers,
such as the collection and comparison of facial geometry through FRT at the border,
appears unlikely to succeed in court based on current case law. 168 Furthermore, FRT‑
enhanced surveillance at the international border, for the purpose of monitoring the
entry and exit of persons from the United States, likely would not raise the same pri‑
vacy concerns in cases like Carpenter because the monitoring would not aggregate data
providing “an intimate window into a person’s life” to the extent it did in Carpenter. 169
It therefore seems unlikely that a court would conclude that the use of FRT for the sole
purpose of monitoring the entry and exit of travelers raises meaningful Fourth Amend‑
ment concerns.

6.1.8 Law Enforcement

FRT increases public safety—examples from the NYPD prove.

McClellan 20
Elizabeth McClellan (J.D. Candidate, 2021, University of Maryland Francis King Carey
School of Law), Facial Recognition Technology: Balancing the Benefits and Concerns,
15 Journal of Business & Technology Law 363, 2020, https://digitalcommons.law.
umaryland.edu/jbtl/vol15/iss2/7
Similarly, facial recognition technology has been utilized in America to help increase
public safety. In August of 2019, police in New York used facial recognition technol‑
ogy to track down an accused rapist in less than twenty‑four hours after the alleged
attack.65 The technology, Facial Identification Section, compared video footage from a
nearby food store to mug shots that had previously been taken of the suspect.66 New
York Police Department officers noted that typically, a case such as this wouldn’t be
solved due to the “resources and manpower” it takes to identify a suspect.67 Perpetra‑
tors in crimes of violence such as this one are typically repeat offenders—thus, facial
recognition technology is able to quickly aid law enforcement’s search and prevent fu‑
ture offenses.68

Facial recognition is key to crime‑fighting—judicial oversight and transparency solve criticisms of FRT.

Gikay 22
Asress Adimi Gikay (Senior Lecturer in AI, Disruptive Innovation and Law, Brunel
University London), “Facial recognition: why we shouldn’t ban the police from using
it altogether,” Conversation, 4 November 2022, https://theconversation.com/facial‑
recognition‑why‑we‑shouldnt‑ban‑the‑police‑from‑using‑it‑altogether‑193895, ac‑
cessed 27 February 2023
Nonetheless, facial recognition has its benefits. It can help police to find serious crimi‑
nals, including terrorists, not to mention missing children and people at risk of harming
themselves or others.
Like it or not, we also live under colossal corporate surveillance capitalism already. The
UK and US have among the most installed CCTV cameras in the world. London resi‑
dents are filmed 300 times a day on average, and police can usually use the data without a search warrant. As if that wasn’t bad enough, big tech companies know almost every‑
thing personal about us. Worrying about live facial recognition is inconsistent with our
tolerance of all this surveillance.

A better approach

Instead of an outright ban, even of covert facial recognition, I’m in favour of a statutory
law to clarify when this technology can be deployed. For one thing, police in the UK
can currently use it to track people on their watchlists, but this can include even those
charged with minor crimes. There are also no uniform criteria for deciding who can be
listed.

Under the EU’s proposed law, facial recognition could only be deployed against those
suspected of crimes carrying a maximum sentence of upwards of three years. That
would appear to be a reasonable cut‑off.

Secondly, a court or similar independent body should always have to authorise deploy‑
ment, including assessing whether it would be proportionate to the police objective in
question. In the Met, authorisation currently has to come from a police officer ranked
superintendent or higher, and they do have to make a call on proportionality – but this
should not be a police decision.

We also need clear, auditable ethical standards for what happens during and after the
technology is deployed. Images of wrongly identified people should be deleted imme‑
diately, for instance. Unfortunately, Met policy on this is unclear at present. The Met
is trying to use the technology responsibly in other respects, but this is not enough in
itself.

Last but not least, the potential for discrimination should be tackled by legally requiring
developers to train the AI on a diverse enough range of communities to meet a minimum
threshold. This sort of framework should allow society to enjoy the benefits of live
facial recognition without the harms. Simply banning something that requires a delicate
balancing of competing interests is the wrong move entirely.

Effective identification is key to law enforcement.

Peterson et al. 23

Peterson, Samuel, Brian A. Jackson, Dulani Woods, and Elina Treyger, Finding a Broadly Practical Approach for Regulating the Use of Facial Recognition by Law Enforcement. Santa Monica, CA: RAND Corporation, 2023. https://www.rand.org/pubs/research_reports/RRA2249‑1.html, accessed 27 February 2023

Identification in Law Enforcement

Identification is a critical function of law enforcement. It is important for distinguishing individuals to ensure that the appropriate person has been detained and processed
through the justice system. Identification is also critical for determining whether a par‑
ticular person or object was present at the location of a crime or was present at locations
connected to a crime. Methods of identification have evolved dramatically, often be‑
cause of the availability of new technologies.

Technological Advances to Aid in Identification

Methods to improve identification are constantly evolving. The most‑established methods fall under the forensic identification category. Three primary methods of forensic identification are friction ridge analysis (fingerprints), forensic odontology (teeth), and
DNA analysis (genetic material), a relatively recent addition. A key feature of these
identifiers is that they are reliable indicators of identity when appropriately examined
because they are based on a certified source that is not easily tampered with (e.g., you
cannot readily morph or deepfake DNA at a crime scene; see Ngan et al., 2020; Tolosana
et al., 2022; Vincent, 2022). Further, the comparison is being made to a known source
that was obtained directly—and legally—from that source.

Secondary methods of identification can be useful where primary identifiers are incon‑
clusive or missing. These may include examinations of medical information, pathology,
personal effects (e.g., tattoos, piercings, jewelry, clothing, shoes), or other pieces of evi‑
dence, such as handwriting or firearms that may be used in combination to identify an
individual. Forms of digital forensic evidence, such as cell phone tower pinging, GPS
coordinates, or bank records also serve as secondary methods to link someone to a lo‑
cation at a particular time. It is important to note that each of the above methods of
identification—both the primary and secondary—has accuracy limitations that we will
not discuss further in this report.

FR can be viewed as a key advancement in methods of identification, with characteristics that make it distinct in some important ways from previous identification methods. For
instance, although similar in some ways to primary methods, FR is somewhat different
in that it can be implemented from a distance (i.e., it does not require direct contact).
It is this passive ability to capture all faces in an area that makes FR potentially much
more expansive—and much more intrusive—than other forms of primary identification. Although the potential breadth of FR’s ability to identify people is dependent on how
widely cameras and video surveillance systems are deployed, FR can analyze video data
with much less human effort than a manual review. FR is also different from secondary
identification in that the face is—with few exceptions3—a unique biometric identifier
on its own. Nevertheless, a number of factors can impede the accuracy of facial image
capture and FR systems, including image quality (e.g., illuminance, angle, occlusions).

Constraining the use of FRT allows us to capture the benefits of the technology
while minimizing the risks.

Peterson et al. 23

Peterson, Samuel, Brian A. Jackson, Dulani Woods, and Elina Treyger, Finding a Broadly Practical Approach for Regulating the Use of Facial Recognition by Law Enforcement. Santa Monica, CA: RAND Corporation, 2023. https://www.rand.org/pubs/research_reports/RRA2249‑1.html, accessed 27 February 2023

The Potential Ubiquity of Facial Recognition and Options for Narrowing

Combining FR with pervasive surveillance is possible with current technology and rep‑
resents the most concerning application of this technology because it is overly broad
and pervasive. Much of our discussion about regulating FR use by law enforcement
involves narrowing the use of FR in ways that maximize benefits and minimize risks,
along with careful review to ensure that these regulations were upheld. This narrowing
could consist of limiting FR to investigating serious crimes, increasing the level of autho‑
rization required, reducing the number of people being searched or doing the searching,
restricting the types of images or the quality of images that can be used, setting a thresh‑
old for search results, and deleting data.

Having a requirement to seek warrants from an outside source represents both an over‑
sight and practical check on the power of law enforcement to collect and use information,
and technological advancements often lead to or even require new investigative tools
and techniques. Although other tools and resources also facilitate investigations, they
are generally limited in their ability to identify anyone, at any time. However, FR can
potentially change that, and that change is one source of concern from civil society: that
the technology has the potential to fundamentally shift the balance between the police
and the public.

Constraints on how the technology is used—that it cannot and will not be applied anytime, anywhere, and for any purpose—seek to shift that balance back again, partially,
although this may not alleviate the concerns of those most worried about the introduc‑
tion of this technology. While still allowing FR use, such constraints seek to make FR use
more rare than commonplace and, by doing so, maintain some of that practical check on
police capability at the price of some of the potential efficiencies the technology could
provide.

Transparency about when FR is used is also a critical check. This includes public aware‑
ness of preapproved use cases and FR use in individual cases (e.g., notification to defen‑
dants). If the system is flawed or used inappropriately, full disclosure can potentially
identify issues and reduce or prevent the high cost of an innocent person being con‑
victed.

Mandating fairness and accuracy assessments solves criticisms of bias.

MacCarthy 21

Mark MacCarthy (Nonresident Senior Fellow ‑ Governance Studies, Center for Technol‑
ogy Innovation), “Mandating fairness and accuracy assessments for law enforcement fa‑
cial recognition systems,” Brookings, 26 May 2021, https://www.brookings.edu/blog/techtank/
2021/05/26/mandating‑fairness‑and‑accuracy‑assessments‑for‑law‑enforcement‑facial‑
recognition‑systems/, accessed 8 March 2023

A requirement for prior public assessment of the technology itself is one element of a rea‑
sonable use policy. Thus, it may not be reasonable for a police agency to use FRT unless
it knows the fallibility of the technology, and how often it makes mistakes, especially
when applied to different subgroups defined by gender, race, age, and ethnicity. As part
of a reasonable use policy, developers should be required to submit their FRT systems
to the National Institute of Standards and Technology (NIST) to assess their accuracy
and fairness and NIST must make the results of this assessment publicly available, in‑
cluding to potential purchasers, before these systems can be offered on the market or
put into service for law enforcement purposes.

Federal fairness and accuracy assessments constrain FRT and prevent it from
exacerbating bias.

MacCarthy 21

Mark MacCarthy (Nonresident Senior Fellow ‑ Governance Studies, Center for Technol‑
ogy Innovation), “Mandating fairness and accuracy assessments for law enforcement fa‑
cial recognition systems,” Brookings, 26 May 2021, https://www.brookings.edu/blog/techtank/2021/
05/26/mandating‑fairness‑and‑accuracy‑assessments‑for‑law‑enforcement‑facial‑
recognition‑systems/, accessed 8 March 2023

FAIRNESS AND ACCURACY IN FACIAL RECOGNITION SYSTEMS

NIST has already established criteria for evaluating the accuracy and fairness of facial
recognition systems as part of its ongoing Facial Recognition Vendor Tests. Over the
last several years, the agency has conducted and published independent assessments of
systems that have been voluntarily submitted to them for evaluation and it maintains
an ongoing program of evaluation of vendor systems.

In a typical law enforcement use, an agency would run a photograph of a person of interest through a facial recognition system, which can search enormous databases in
a matter of seconds. The system typically returns a score indicating how similar the
image presented is to one or more of the images in the database.

The law enforcement agency would want the system to indicate a match if there really
was one in the database, that is, it would want a high hit rate, or conversely, a low miss
rate. But the agency also wants the system to be selective and indicate if there is no
match, when in fact there is none.

There’s a mathematical trade‑off between these two goals. Typically a facial recognition
system will be tuned to return a match only if its score is above a certain threshold. This
choice of a threshold represents a balance between the costs of a false negative—missing
a lead—and the costs of a false positive—wasting time pursuing innocent people.

NIST’s tests measure both a facial recognition system’s false positive rate and its false
negative rate at a certain threshold. Another way NIST measures accuracy is to ask
whether the highest match score is a false match, regardless of the threshold, and to
calculate a “rank one miss rate” as the rate at which the pair with the highest returned
similarity score is not a genuine match.
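
As a rough sketch of the metrics described above (the similarity scores, labels, and threshold are invented placeholders, not NIST data), the false negative rate, false positive rate, and a rank-one miss rate can be computed from search results like this:

```python
# Toy data: one record per search. "mated" means the probe person really is in
# the database; "top_correct" means the highest-scoring candidate is that person.
searches = [
    {"mated": True,  "top_correct": True,  "top_score": 0.97},
    {"mated": True,  "top_correct": True,  "top_score": 0.62},  # right person, low score
    {"mated": True,  "top_correct": False, "top_score": 0.55},  # wrong person ranked first
    {"mated": False, "top_correct": False, "top_score": 0.91},  # not enrolled, high score
    {"mated": False, "top_correct": False, "top_score": 0.30},
]
THRESHOLD = 0.80

mated = [s for s in searches if s["mated"]]
nonmated = [s for s in searches if not s["mated"]]

# Miss: the enrolled person is not returned above the operating threshold.
false_negative_rate = sum(
    not (s["top_correct"] and s["top_score"] >= THRESHOLD) for s in mated
) / len(mated)

# False alarm: a probe of someone not enrolled still clears the threshold.
false_positive_rate = sum(s["top_score"] >= THRESHOLD for s in nonmated) / len(nonmated)

# Rank-one miss rate: regardless of threshold, the top candidate is wrong.
rank_one_miss_rate = sum(not s["top_correct"] for s in mated) / len(mated)

print(false_negative_rate, false_positive_rate, rank_one_miss_rate)
```

Raising THRESHOLD pushes the false positive rate down and the false negative rate up, which is the trade-off the card describes.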

According to NIST’s assessments, how accurate are today’s facial recognition al‑
gorithms? The agency reports that using high quality images, the best algorithm
tested has a “rank one miss rate” of 0.1%, but that is only with high quality images
such as those obtained from a cooperating subject in good lighting. With the lower
quality images typically captured in real world settings, error rates climb as high as
20%. Moreover, algorithms vary enormously in their accuracy, with poor performers making mistakes more than half the time. In addition, in many cases, the correct image
in the database receives the highest similarity score, but that score is very low, below
the required operational threshold. This means that in practice the real miss rate will
be higher than indicated by the rank one miss rate.

NIST also assesses facial recognition systems for fairness. In December 2019, NIST pub‑
lished a report on demographic differentials in facial recognition. It assessed the extent
to which the accuracy of facial recognition systems varied across subgroups of people
defined by gender, age, race, or country of origin. It defined fairness as homogeneous
accuracy across groups and unfairness as the extent to which accuracy is not the same
across all subgroups.

How did the tested algorithms do on fairness?

In general, the report found that African American women have higher false positive
rates, that black men invariably give lower false negative identification rates than white
men, and that women invariably give higher false negative rates than men. These dif‑
ferentials were present even when high quality images were used. The report did not
use image data from the internet nor from video surveillance and so did not capture any
additional demographic differentials that might occur in such photographs. The report
also found that the more accurate algorithms tended to be the most equitable. A key
finding from the agency’s research was that different algorithms performed differently
on equitable treatment of different subgroups.

RECOMMENDATION FOR ASSESSMENTS

The recommendation presented here for mandated prior assessments of the accuracy
and fairness of law enforcement uses of facial recognition technology builds on and
includes numerous prior proposals including:

As the NIST fairness report recommended, owners and users of facial recognition sys‑
tems should “know their algorithm” and use “publicly available data from NIST and
elsewhere” to inform themselves.

Facial recognition systems used in policing should “participate in NIST accuracy tests,
and…tests for racially biased error rates,” as proposed in the Georgetown study.

Police procurement officials should “take all reasonable steps to satisfy themselves
either directly or by independent verification” whether facial recognition software
presents a risk of bias before putting the system in use, as recommended by the former
U.K. Surveillance Camera Commissioner.

“Third‑party assessments” should be used to ensure facial recognition systems for law
enforcement meet “very high” mandated accuracy standards as suggested in a previous
Brookings study from my colleague Darrell West.

As recommended by the National Security Commission on Artificial Intelligence, Congress should require prior risk assessments “for privacy and civil liberties impacts”
of AI systems, including facial recognition, used by the Intelligence Community, the
Department of Homeland Security, and the Federal Bureau of Investigation.

A key purpose of these proposals requiring assessments prior to putting facial recog‑
nition systems into use is to allow law enforcement procurement agencies to compare
competing algorithms. For this purpose, standardization of testing criteria and proce‑
dures is essential—otherwise potential purchasers would have no way of comparing ac‑
curacy scores from different vendors. In these circumstances, the best procedure would
be the administration of standardized tests by an independent reviewing agency. NIST
has already demonstrated the capacity for conducting these studies and has developed
a widely accepted methodology for assessing both accuracy and demographic fairness.
It is the natural choice as the agency to perform mandated standardized assessments.

Ideally, a federal facial recognition law would impose a uniform national policy requir‑
ing prior NIST testing of facial recognition systems used in law enforcement anywhere
in the country. Failing that, the federal government has levers it can use, some of which
are under the control of the administration without further authorization from Congress.
For instance, federal financial assistance for facial recognition in law enforcement and
police access to the FBI database could be conditioned on proof that any facial recogni‑
tion tools in use participated in the NIST accuracy and fairness trials.

FURTHER RECOMMENDATIONS

West’s Brookings report expressed concerns that NIST tests might not “translate into
everyday scenarios.” NIST acknowledges that its assessments did not include use of
facial recognition software on images from the internet and from simple video surveil‑
lance cameras.

To remedy this issue, the Georgetown study suggests “accuracy verification testing on
searches that mimic the agency’s actual use of face recognition—such as on probe im‑
ages that are of lower quality or feature a partially obscured face.” These improvements
in NIST testing procedures might make its assessments more reflective of real‑world
conditions.

Another way forward is for developers to take steps to reduce demographic differentials by using more diverse data sets for training. While NIST did not investigate the cause
of the demographic differentials it found, it noted that the differences in false positives
between Asian and Caucasian faces for algorithms developed outside Asia were not
present for algorithms developed in Asia, suggesting that a more diverse training data
might reduce demographic differentials.

Steps to mitigate demographic differentials in use are also possible. NIST investigated
the idea that law enforcement agencies could use different accuracy thresholds for dif‑
ferent subgroups, which would have the effect of reducing demographic differentials.
This is a promising mitigation step. One study found that to achieve equal false pos‑
itive rates, “East Asian faces required higher identification thresholds than Caucasian
faces…”

Law enforcement agencies have an obligation to avoid bias against protected groups
defined by gender, age, race and ethnicity, and this duty includes their use of facial
recognition software. The Georgetown study recommends that the Civil Rights Divi‑
sion of the U.S. Department of Justice should investigate state and local agencies’ use
of face recognition for potential disparate impacts that violate this duty to avoid bias in
policing, and this seems a promising idea.

But how much deviation from statistical parity in facial recognition accuracy should be
a cause of concern?

The rule of thumb used in U.S. employment law is the 80% test and this might provide
some guidance. Applied to facial recognition software, this rule of thumb would require
that differentials in facial recognition accuracy for subgroups defined by gender, age,
race, or ethnicity should be no more than 20%.
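
One possible way to operationalize that rule of thumb, sketched with invented per-subgroup false positive rates (the numbers, group labels, and the choice to apply the 20% cutoff to error rates rather than raw accuracy are all assumptions made for illustration):

```python
# Hypothetical false positive rates per demographic subgroup for one algorithm.
fpr_by_group = {
    "group_a": 0.0010,
    "group_b": 0.0012,
    "group_c": 0.0011,
    "group_d": 0.0019,
}

best = min(fpr_by_group.values())  # lowest error rate observed
for group, fpr in fpr_by_group.items():
    differential = (fpr - best) / best  # relative gap to the best-served group
    verdict = "within 20%" if differential <= 0.20 else "exceeds 20% differential"
    print(f"{group}: FPR {fpr:.4f}, {differential:.0%} above best -> {verdict}")
```

With these invented numbers, group_d would fail the check and trigger the kinds of mitigation steps (different thresholds, more diverse training data, or discontinuing use) discussed in the card.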

Agencies using facial recognition software with unacceptable differentials in error rates
should be required to take mitigation steps, including perhaps the use of different
thresholds before initiating or continuing to use it. If such steps do not produce
satisfactory equity results for the software, however, then as the UK’s former Camera
Surveillance Commissioner recommends, the system should not be used.

6.1.9 Terrorism

FRT is key to counter‑terrorism—wide nets need to be cast to enable successful counter‑terror responses.

Porter 21

Tony Porter (former Surveillance Camera Commissioner and Chief Privacy Officer
at Corsight AI), “Facial Recognition: Facing up to terrorism,” Counter Terror Business, 1
August 2021, https://counterterrorbusiness.com/features/facial‑recognition‑facing‑terrorism,
accessed 11 March 2023

The need for FRT to combat terrorism

In a hostile world, terrorism risks are increasing. These risks pose a significant threat
to not only national security, but to political and social stability and economic develop‑
ment. The utilisation of facial recognition solutions can play a key role in improving
the efficiency of police forces, intelligence agencies and organisations to respond and
prevent major attacks, in a way that minimises intrusiveness for citizens.

In general, FRT is a biometric surveillance aid which uses a camera to capture an image
of an individual’s face, mainly in densely populated places such as streets, shopping
centres, and football arenas. It is then able to provide a similarity score when it recog‑
nises a similarity between a facial image captured, with an image held within a criminal
database. If a match is made, an alarm will inform the security operator to do a visual
comparison. The operator then can verify the match and radio police officers and have
them conduct a stop, if one is needed. It is important to note, the technology does not
establish individual ‘identity’ – that is the job of humans.

Moreover, many terrorists are not known to a database and can move around populated
spaces largely unnoticed. With the advancement of AI, these surveillance systems can
now monitor patterns of irregular behaviour, such as someone leaving a bag unattended
for a long period of time or returning to a site regularly to take photographs. This infor‑
mation can then be used as the basis on which to perform actions, e.g. to notify officers
to conduct a stop search or to record the footage. Security officers need access to this
type of intelligence, to secure the perimeter of their facility and ultimately save lives.
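
The operational flow Porter describes (capture a face, score it against a watchlist, raise an alarm above a threshold, then leave the final call to a human operator) reduces to something like the sketch below; the watchlist entries, threshold, and similarity function are hypothetical placeholders rather than any vendor's actual pipeline.

```python
# Hypothetical watchlist of face-template vectors keyed by an internal ID.
WATCHLIST = {"subject_017": [0.12, -0.44, 0.88], "subject_052": [-0.31, 0.67, 0.05]}
ALERT_THRESHOLD = 0.92  # placeholder operating point

def similarity(a, b):
    # Cosine similarity between two template vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norms = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norms

def process_capture(captured_template):
    """Return an alert for the operator, or None. The system itself does not
    'identify' anyone; a human makes the final visual comparison."""
    best_id, best_score = None, 0.0
    for subject_id, template in WATCHLIST.items():
        score = similarity(captured_template, template)
        if score > best_score:
            best_id, best_score = subject_id, score
    if best_score >= ALERT_THRESHOLD:
        return {"candidate": best_id, "score": round(best_score, 3),
                "action": "operator visual check, then possible stop"}
    return None

print(process_capture([0.11, -0.40, 0.90]))  # sample capture that triggers an alert
```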

6.1.10 Trafficking

FRT is key to fighting child sex trafficking.

Simonite 19

Tom Simonite (senior editor who edits WIRED’s business coverage), “How
Facial Recognition Is Fighting Child Sex Trafficking,” WIRED, 19 June 2019,
https://www.wired.com/story/how‑facial‑recognition‑fighting‑child‑sex‑trafficking/,
accessed 7 March 2023

ONE EVENING IN April, a California law enforcement officer was browsing Facebook
when she saw a post from the National Center for Missing and Exploited Children with a
picture of a missing child. The officer took a screenshot of the image, which she later fed
into a tool created by nonprofit Thorn to help investigators find underage sex‑trafficking
victims. The tool, called Spotlight, uses text‑ and image‑processing algorithms to match
faces and other clues in online sex ads with other evidence.

Using Amazon’s facial recognition technology, Spotlight quickly returned a list of on‑
line sex ads featuring the girl’s photo. She had been sold for weeks. The ads set in
motion some more traditional police work. “Within weeks that child was recovered
and removed from trauma,” Julie Cordua, CEO of Thorn, said, recounting the case at
an Amazon conference in Las Vegas this month.

The rescue illustrates Thorn’s strategy of nurturing new technology to combat child sex‑
trafficking and exploitation online. The nonprofit was cofounded in 2009 by actors Demi
Moore and Ashton Kutcher and has become influential with both law enforcement—
who can use Spotlight and other tools for free—and the tech industry. Thorn’s partners
include Facebook, Amazon, and Dropbox.

Thorn may soon expand its influence. In April, the nonprofit was named one of eight
projects that will share in $280 million from TED’s philanthropic offshoot, the Auda‑
cious Project. Thorn’s exact share has not been disclosed, but it will likely provide a
major boost: The nonprofit’s income totaled $3.2 million in 2017, filings show.

One potential use for the new funding: new technology that would dig deeper into the
online supply chain of child pornography, attempting to control it closer to the source.
Cordua imagines software crawling the dark web, where she says the material often first
appears, to find new imagery. Digital fingerprints for the files could then be added to au‑
tomated blacklists used by companies such as Facebook, preventing it from circulating more broadly. Facebook says it took action on 5.4 million pieces of child pornography
in the first quarter of 2019.

Cordua describes Thorn’s mission as a kind of immune response to an undertreated disease of the internet. Social networks and smartphones have enabled new forms of
commerce and fun—but also made it easier to traffic in children or pornographic ma‑
terial featuring them. Cops lack the tools and expertise needed to fight that, Cordua
says. Tech companies lack the motivation to spend heavily on a problem where progress
doesn’t offer profits.

“It was becoming more and more difficult to address this problem,” Cordua says. She
previously led global marketing at Motorola’s cellphone division during the brand’s
peak, and filled a similar role at RED, the brand Apple and others use to direct funds to
AIDS programs in Africa.

Thorn initially worked to pressure technology companies to do more about online child
exploitation. More recently, it shifted to what Cordua says is a more effective strategy
of producing and operating new technology for use by law enforcement and the private
sector.

Spotlight was Thorn’s first software project and got its first major test during the 2015
Super Bowl, in Arizona. The initial version used text processing technology to highlight
posts likely to be written by, or about, an underage person, and to pull out phone num‑
bers and other data. Investigators could use those details to connect different ads, or
cross‑reference with NCMEC’s list of missing children.

Spotlight had been built on Amazon’s cloud service from an early stage, but in 2018 the
two organizations began to talk about new features that use the company’s image pro‑
cessing technology. Investigators can now use facial recognition algorithms marketed
under Amazon’s Rekognition service to check images against faces on NCMEC’s list.
Spotlight also uses Rekognition to extract text from photos, because some sex ads hide
text in images to escape conventional search tools.

Thorn says Spotlight has been used by law enforcement on almost 40,000 cases in North
America, in which investigators found more than 9,000 children, and over 10,000 traf‑
fickers. For Amazon, Thorn also offers a way to highlight the benefits of facial recogni‑
tion, after accusations that its use by law enforcement endangers privacy, and that the
company’s technology is inaccurate.

Thorn’s second major software product, Safer, is built around different image process‑
ing technology. It helps tech companies detect images of child sexual abuse on their platforms using PhotoDNA, a system developed by Microsoft with Dartmouth College, and used by other companies including Facebook.

PhotoDNA works by checking images against a list of hashes—mathematical fingerprints—of known child‑abuse images. Cordua says Thorn’s implementa‑
tion makes deploying the system and processes needed to support it less costly,
encouraging use by smaller firms. Photo sharing sites Imgur and SmugMug, which
owns Flickr, are among a handful of companies testing Safer.

Cordua says the new investment from TED’s Project Audacity could help make hashing
blacklists more proactive, so images can be added to the system before they have circu‑
lated widely. That requires digging into the dark web—sites protected by anonymity
tools such as Tor.

“The dream scenario is that in real time you could get hashes from newly produced con‑
tent from the dark web, before it goes viral,” Cordua says. The plan is still taking shape,
but one option would be to train machine learning algorithms that could flag potential
material for review by experts. Thorn has previously built language processing tools to
help law enforcement officers find child abuse content on the dark web.

Thorn has been in talks with the Canadian Centre for Child Protection, a fellow non‑
profit, about coordinating to curtail child pornography on the dark web. The Canadian
organization operates software called Project Arachnid that crawls the dark web and
conventional websites to spot known child abuse images and automatically notify site
operators. It also helps investigators find new content. In two and a half years, Arachnid
has crawled 76 billion images, and sent out 3.8 million notices.

Hany Farid, a UC Berkeley professor who codeveloped PhotoDNA with Microsoft while at Dartmouth, says that program has demonstrated an important new model for
tackling child pornography. Previous work has typically waited for companies or users
to report material. “This is the first and only active approach, as far as I know,” Farid
says. He also says Thorn has experience tracking content beyond the openly accessible
web. “Thorn has been effective at diving into the dark web where a lot of child sexual
abuse material has moved,” he says.

Lianna McDonald, executive director at the Canadian organization, says a new gener‑
ation of more proactive technical tools can take the fight to suppress child exploitation
online to a new level. “We’re at a point where we really feel that we’re going to get
ahead of this victimization,” she says.

FRT is key to identifying trafficking victims.

Freethink Team 22
Freethink Team, “How facial recognition is identifying human trafficking victims,”
Freethink, 15 June 2022, https://www.freethink.com/hard‑tech/facial‑recognition‑
finding‑human‑trafficking‑victims, accessed 7 March 2023
When law enforcement seeks to identify the suspects and victims involved in sexual
exploitation cases, sometimes the only lead is an image. Many such cases go unsolved.
But over the past decade, facial recognition technology has helped solve more cases by
enabling law enforcement to quickly connect names to images extracted from all types
of content. Facial recognition technology has existed in various forms since the 1960s.
But only recently has it achieved a level of sophistication where it’s possible to identify a
person through a single facial image — even one that’s grainy or taken from an off angle
— by comparing it to the billions of facial images that exist online. This is the capability
provided by Clearview AI. Founded in 2017, the facial recognition company offers law
enforcement agencies access to a database containing more than 20 billion facial images
collected by publicly available means, such as images people upload to open or public
Facebook, Twitter, and Instagram accounts. “When you’re dealing with child exploita‑
tion, a lot of times the only thing we have is the face of a child,” Kevin Metcalf, Presi‑
dent and Founder of the National Child Protection Task Force told Freethink. “When
Clearview came along, that allowed us to search and identify who they are, where they
are. […] I can attest for hundreds of kids who were identified using facial recognition
technology.”

A new window into human trafficking

A case from Nevada offers an example. Law enforcement had received a tip that a woman featured on an escort website
was underage. In terms of verifying the tip, the main problem was that the woman’s
personal information on the website was fake. But the photos were real. “We were
able to take one of the photographs from one of these online escort websites, upload
it into Clearview, and, within seconds, Clearview was able to identify that individual,”
Chris Johnson, a detective who is part of the Regional Human Exploitation and Traf‑
ficking Unit in Reno, Nevada, told Freethink. “It was a 16‑year‑old female juvenile who
was being sex‑trafficked. We were able to get her immediately recovered, out of the
life, surrounded by resources, and her trafficker is currently sitting in prison.” The US
State Department estimates that there are 24.9 million victims worldwide at any given
moment. The vast majority are women and children, but human traffickers prey on
people of all ages, backgrounds, and nationalities. For law enforcement agencies work‑
ing to thwart human trafficking, the challenge is identifying not only the perpetrators, but also the victims. After all, the dynamics of sex trafficking can be complex. Victims
are often too ashamed, fearful of law enforcement, or psychologically dependent on
their trafficker to try to start a new life. In some cases, victims simply have nowhere
else to go. “I lost my job and I had no place to live,” said one human trafficking vic‑
tim in a report conducted by researchers at the Office of Sex Trafficking Intervention
Research (STIR) at Arizona State University. “I didn’t have money to support myself to
keep myself alive.” Thanks to the proliferation of publicly available online photographs,
facial recognition technology has opened new interventions for victims of human traf‑
ficking. But how does it work?

The next generation of facial recognition technology

When Clearview AI cofounder Hoan Ton‑That began building the company, most fa‑
cial recognition software was limited by a fundamental problem. “The issues with the
previous stuff — you’d have to get a perfectly aligned face,” Ton‑That told Freethink.
“If you don’t have a good shot of someone, it becomes a real problem.” From being un‑
able to process different resolutions and graininess to poor performance in identifying
people of varying demographics, it was clear that facial recognition technology needed
upgrades. Clearview AI’s current system is powered by a neural net that converts im‑
ages into vectors consisting of more than 500 points. These points are based on unique
facial features, such as the particular shape of someone’s mouth or the distance between
their eyes. The neural net then clusters images with similar vectors. After uploading a
probe image in the system, Clearview AI populates similar image results with a link
to where the image is publicly‑available online. “We had this really amazing moment
where we were trying a new neural network,” Ton‑That told Freethink. “Every single
time, it would find the right person — different angles with masks and glasses — and
that’s when I realized we made the huge breakthrough in terms of accuracy. We’re
99 percent accurate across all demographics.” Applying facial recognition in the real
world Clearview AI originally didn’t want to sell access to its system to governments,
Ton‑That told Freethink. It’s easy to envision how facial recognition technology can go
wrong. In addition to concerns over privacy and abuse by governments, facial recogni‑
tion systems that produce false identifications could potentially lead to wrongful arrests.
But after achieving an unprecedentedly high accuracy rate, Clearview AI decided that
equipping law enforcement with access to its technology would produce a social net
good, particularly because it would greatly improve agencies’ ability to identify and
save people involved in human trafficking. After all, the content in which these people
appear often exists with a fake name or no other data attached to the victims and perpe‑
trators. With Clearview AI, law enforcement agencies can shine a light on the darkest
places of the internet, putting names to victims and offering them a chance at a new life.


“Once officers started getting access to the technology, the word spread so fast,” Jessica
Garrison, head of Government Affairs at Clearview AI, told Freethink. “Being able to
take an unidentified child that’s being abused online and determine who they are, that’s
a needle in the haystack.”
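
The “vectors consisting of more than 500 points” described above are face embeddings: numerical summaries of a face that can be compared for similarity. The sketch below illustrates the general embedding-and-nearest-neighbor pattern such systems rely on. It is not Clearview AI’s actual pipeline; the gallery URLs, the tiny four-dimensional vectors, and the probe values are hypothetical stand-ins for a real index of billions of scraped photos.

```python
# Minimal sketch of embedding-based face search (hypothetical data only; the
# real scraping, neural-network embedding, and large-scale indexing are omitted).
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def search(probe, gallery, top_k=3):
    """Rank (url, embedding) gallery entries by similarity to the probe."""
    scored = [(cosine_similarity(probe, emb), url) for url, emb in gallery]
    return sorted(scored, reverse=True)[:top_k]

# Hypothetical 4-dimensional embeddings standing in for 500+ point vectors.
gallery = [
    ("https://example.com/photo_a", [0.90, 0.10, 0.30, 0.20]),
    ("https://example.com/photo_b", [0.10, 0.80, 0.70, 0.40]),
    ("https://example.com/photo_c", [0.85, 0.15, 0.35, 0.25]),
]
probe = [0.88, 0.12, 0.32, 0.22]

for score, url in search(probe, gallery):
    print(f"{score:.3f}  {url}")
```

In a real deployment the probe image would first be converted to an embedding by the trained network, and the ranked links would serve only as investigative leads rather than identifications.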


6.1.11 Airports

Use of FRT in airports is key to national security and increases convenience.

SIA 23

Security Industry Association (the leading trade association for global security solution
providers, with over 1,100 innovative member companies representing thousands of
security leaders and experts who shape the future of the security industry), “Calls to
End Biometrics for Air Traveler Verification Are Misguided, Put Americans at Risk,” 10
February 2023, https://www.securityindustry.org/2023/02/10/calls‑to‑end‑biometrics‑
for‑air‑traveler‑verification‑are‑misguided‑put‑americans‑at‑risk/, accessed 6 March
2023

In a recent letter to the Transportation Security Administration (TSA), several U.S. sena‑
tors have called for ending the successful and popular use of facial recognition technol‑
ogy for traveler verification at TSA screening checkpoints. Unfortunately, the senators’
demand is based on mischaracterizations despite visible information provided by TSA
on how the technology is used.

It’s always important that implementations of advanced technologies like facial recog‑
nition balance privacy concerns; however, TSA should reject demands to end use of the
technology because in this case, facial recognition provides enhanced security, accuracy
and convenience for travelers without impacting existing privacy rights or expectations.

Air travelers must already present valid ID at security checkpoints, which is subject to
inspection for authenticity and checks against flight information. Additionally, TSA
personnel compare the photo on each ID with the person presenting it for visual verifi‑
cation that they match. If one chooses to opt in to this completely voluntary biometric
program, this additional step is automated at a kiosk. No personal passenger or identity
information is retained or shared. The technology is not used to “identify” or poten‑
tially “misidentify” a person – it simply verifies whether (or not) the photo of a person
matches their photo taken at the kiosk. Follow‑up visual inspection by TSA personnel
can address any issues that arise with the automated process.
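
As a concrete illustration of the one-to-one verification step SIA describes, the sketch below checks a kiosk capture against the ID photo and routes any non-match to a human officer. The threshold, the toy similarity measure, and the example vectors are assumptions for illustration, not TSA’s actual parameters.

```python
# Minimal sketch of one-to-one (1:1) verification with a human fallback.
# Nothing is retained in either branch, mirroring the policy described above.

def similarity(a, b):
    """Toy similarity score: 1 minus the mean absolute difference."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def verify_traveler(id_photo_vec, kiosk_vec, threshold=0.95):
    """Return a routing decision for the checkpoint."""
    if similarity(id_photo_vec, kiosk_vec) >= threshold:
        return "cleared"
    return "refer_to_officer"  # follow-up visual inspection by TSA personnel

print(verify_traveler([0.20, 0.40, 0.90], [0.21, 0.39, 0.92]))  # cleared
print(verify_traveler([0.20, 0.40, 0.90], [0.70, 0.10, 0.30]))  # refer_to_officer
```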

It is no surprise that availability is being expanded to more U.S. airports beyond initial
pilots with TSA PreCheck passengers. According to the most comprehensive public
opinion research on facial recognition to date, nearly 70% of Americans support use of
facial recognition for TSA screening. Additionally, the U.S. government is seeking to
leverage the highest‑performing technology available on the market, which, contrary
to some claims, is highly accurate overall and across demographic groups. For exam‑
ple, the top 20 facial recognition algorithms are over 99.7% accurate in matching across
white, Black, male and female demographics according to recent test data from the Na‑
tional Institute of Standards and Technology.

Congressional oversight is critical to ensuring accountability from federal agencies and
programs. We believe members of Congress should carefully and thoroughly consider
the specific, limited and beneficial role of biometrics in air traveler security as they carry
out this important duty.

The benefits of employing FRT at airports outweigh the risks.

Solarova et al. 22

Sara Solarova (Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia),
Juraj Podroužek, Matúš Mesarčík, Adrian Gavornik & Maria Bielikova. Recon‑
sidering the regulation of facial recognition in public spaces. AI Ethics (2022).
https://doi.org/10.1007/s43681‑022‑00194‑0

1.4.1 Checking identities in airports

Facial recognition technologies have been continuously used in airports all over the
world. In Germany, multiple airports have used a system which integrates FRT for iden‑
tity verification [20]. This technology can spot illegal attempts to enter a country at a
more efficient and precise level, thus increasing the levels of public security maintaining
public order and simultaneously increasing the comfort of the passengers to decrease
the waiting time. Moreover, responding to the COVID‑19 threats, FRT can help to ad‑
dress the need for contactless security checks amidst the pandemic crisis, reducing the
transmission of pathogens at the airport. The use of FRT in airports has also one of the
highest levels of support among the public and experts [49].

1.4.1.1 Chilling effects

The expectation to be verified at the airport is apparent for all its visitors. In some cases
of digital onboarding, individuals must be identified to be permitted to enter the air‑
port’s premises. Yet the extent of such biometric identification should still be properly,
and clearly explained and other alternatives should be available for people who exercise
their right to not be processed by facial recognition systems.

1.4.1.2 Forced recognition


Nevertheless, we recognize the concern that a person’s position during an airport check
might become asymmetrical, particularly when she may be distressed that refusing to
undergo the biometric identification control might raise suspicion from the side of the
authorities. In these situations, societal pressure might nudge people to subject them‑
selves to biometric identification even if they preferred the alternative. Therefore, it is
inevitable to ensure that the alternative to biometric identification not only exists but
also it should guarantee equal standing and reliability to the biometric one. Hereby, an
individual can make a free choice without having to be concerned about consequences
thereof.

1.4.1.3 Social exclusion

Even though there is a significant improvement of FRT accuracy considering the demo‑
graphic features in recent years, the presence of unfair biases is still inevitable to address.
The risk that socially biased systems can result in perpetuated social inequalities would
be considerable at the airports when the systems will single out specific individuals or
groups of people for increased harassment or searches [50].

1.4.1.4 False positives, false negatives

The occurrence of false‑positive and false‑negatives and the ethical concerns thereof
can be undertaken by guaranteeing the right to obtain human intervention on the part
of the controller and to express one’s own point of view and to contest the decision of
facial recognition system should there be a suspicion of erroneous automated outcome.
The risk of false positives could increase the discomfort of passengers and can have a
negative impact on their dignity. On the other hand, the occurrence of false negatives
can heavily endanger the security of the whole area and it is one of the biggest issues
to be dealt with. Guaranteeing a human intervention would be also useful as a part
of fallback procedures in case of malfunctioning of FRT or of serious doubts about its
accuracy in specific cases.

1.4.1.5 Data control

Being able to get information on the amount of time for which the video footage and
images will be stored in a system does not only address awareness issues but also the
privacy concerns. Henceforth, the system could be designed to delete personal data
from the airport system after the take‑off. Following this approach, we can mitigate the
risk of ambiguity with tracking the duration of biometric mass surveillance.
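
The retention rule suggested here, deleting records once the flight departs, is straightforward to express in code. The sketch below is a toy illustration with hypothetical field names and an in-memory list; a real system would enforce the rule in its storage layer and log the deletions.

```python
# Minimal sketch of "delete after take-off" retention for biometric records.
from datetime import datetime

records = [
    {"passenger": "A", "departs": datetime(2023, 4, 1, 9, 30), "template": b"..."},
    {"passenger": "B", "departs": datetime(2023, 4, 1, 18, 0), "template": b"..."},
]

def purge_departed(records, now):
    """Keep only records whose flight has not yet departed."""
    return [r for r in records if r["departs"] > now]

records = purge_departed(records, now=datetime(2023, 4, 1, 12, 0))
print([r["passenger"] for r in records])  # ['B']
```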


6.1.12 Office Buildings

Biometric identification drastically enhances security for private firms.

Solarova et al. 22

Sara Solarova (Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia),
Juraj Podroužek, Matúš Mesarčík, Adrian Gavornik & Maria Bielikova. Recon‑
sidering the regulation of facial recognition in public spaces. AI Ethics (2022).
https://doi.org/10.1007/s43681‑022‑00194‑0

1.4.2 Authorisation to enter office buildings

With the use of FRT, owners of these buildings can improve security, identify unautho‑
rised access, and make the entry process seamless and comparatively faster than requir‑
ing identification with an ID card. Security passes or ID cards can be stolen, duplicated,
or borrowed, which might compromise the security of a particular facility. Facial bio‑
metric identification will significantly decrease such a risk as biometric identifiers are
more difficult to obstruct.

1.4.2.1 Chilling effects

Most of the ethical risks and respective countermeasures available for FRT deployed on
entrances of company premises are similar to the use of FRT at the airport checks. As
in the airport scenario, most visitors have an expectation they will be subject to verifica‑
tion checks. But visible and clear information on the use of FRT within given space to
safeguard the awareness should be provided.

1.4.2.2 Forced recognition

For people who choose to not be identified by FRT, a separate entrance for conventional
access should be available to address the concern of autonomy, regardless of whether
they are employees or the general public attending the place.

1.4.2.3 Data control

The set of identities is relatively stable. Companies are expected to have a database of
their employees or people who are allowed to enter their premises beyond the general
public space, such as the reception area, typically accessible to the wider public without
the need for identification. This means that even when a biometric data are taken and
processed for the purposes of identification against a larger database of people, and
the access is denied, the data could be erased automatically and not stored for future
purposes. This way, the concern of intrusion can be addressed, namely by decreasing
the amount of time for which the biometric data are stored.


6.1.13 Stadiums

Utilizing FRT for monitoring stadiums is permissible.

Solarova et al. 22

Sara Solarova (Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia),
Juraj Podroužek, Matúš Mesarčík, Adrian Gavornik & Maria Bielikova. Recon‑
sidering the regulation of facial recognition in public spaces. AI Ethics (2022).
https://doi.org/10.1007/s43681‑022‑00194‑0

1.4.3 Checking visitors on stadiums

Stadiums are known for hosting a larger number of people who are emotionally charged
during a particular event. As a result, violence and fights frequently occur, often result‑
ing in casualties. Facial recognition technology can help to identify the people on the
blacklist thereby preventing them from entering the stadium. The automated process
of entry decreases the waiting time for attendees of an event, improving the overall ex‑
perience. The benefits of using FRT can improve the overall sense of security, such as
by preventing people with previous instances of violent behaviour in stadiums from
entering or enhancing the convenience of getting in.

1.4.3.1 Chilling effects

This use of FRT should also be properly and clearly communicated, so that the individ‑
uals are informed about the processes of FRT within the stadiums.

1.4.3.2 Forced recognition

Separate entrances should be provided for people who exercise their right for their data
to not be processed by facial recognition systems.

1.4.3.3 False positives and false negatives

The ethical concerns addressing the erroneous decision of denying an individual the
entry to the stadium (i.e., false positive) can be mitigated by human oversight. Such an
approach can minimise the potential concern of accuracy.

1.4.3.4 Data control

As in previous use‑cases, we expect the set of people possibly identified to be approxi‑
mately stable, given the capacity of the stadium and the number of bought tickets. Un‑
less the individual is on the blacklist based on their prior violent conduct, the biometric
data of the attendees should be stored for a limited time after the event deemed abso‑
lutely necessary for maintaining public safety and order. The software in place does not
need to store images of anyone besides the blacklisted people and the data stored in the
internal stadium’s system must not be connected to the internet or any other system,
which minimises the possibility of being hacked [51].
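
To make the blacklist-only design concrete, the sketch below stores templates solely for banned individuals, flags possible matches for staff review, and discards every other capture immediately. The watchlist entries, similarity measure, and threshold are hypothetical.

```python
# Minimal sketch of blacklist-only screening at a stadium gate. Only banned
# individuals' templates are stored; all other captures are discarded at once.

def similarity(a, b):
    """Toy similarity score: 1 minus the mean absolute difference."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

BLACKLIST = {  # small, offline watchlist held in the stadium's internal system
    "banned_001": [0.10, 0.90, 0.40],
    "banned_002": [0.70, 0.20, 0.80],
}

def screen_at_gate(capture, threshold=0.9):
    for person_id, template in BLACKLIST.items():
        if similarity(capture, template) >= threshold:
            return f"hold_for_staff_review:{person_id}"  # human oversight on hits
    return "admit"  # capture discarded, nothing retained

print(screen_at_gate([0.11, 0.88, 0.41]))  # flagged for review
print(screen_at_gate([0.50, 0.50, 0.50]))  # admit
```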


6.1.14 Medicine

FRT is key to rapid and accurate medical diagnoses.

McClellan 20

Elizabeth McClellan (J.D. Candidate, 2021, University of Maryland Francis King Carey
School of Law), Facial Recognition Technology: Balancing the Benefits and Concerns,
15 Journal of Business & Technology Law 363, 2020, https://digitalcommons.law.
umaryland.edu/jbtl/vol15/iss2/7

Perhaps one of the most unlikely benefits of facial recognition technology is found in the
medical field. According to a study from June 2014, scientists from Oxford have report‑
edly developed a facial recognition program that is able to diagnose rare genetic con‑
ditions, such as Down Syndrome, through the observation of an ordinary photo.72 For
many rare disorders, there is no genetic test and thus may only be diagnosed through
a specialist’s analysis of facial features, as these rare genetic conditions are often ac‑
companied by abnormal facial features.73 However, these specialists are rare to find—
therefore, with a developed facial recognition technology, more individuals with rare
disorders will have access to a medical diagnosis.74


6.1.15 Ukraine

FRT is being used in the Ukraine war.

Feeney and Chiu 22

Matthew Feeney (was the director of Cato’s Project on Emerging Technologies) and
Rachel Chiu, “Facial Recognition Debate Lessons from Ukraine,” Cato Institute, 17
March 2022, https://www.cato.org/blog/facial‑recognition‑debate‑lessons‑ukraine,
accessed 6 March 2023

According to Reuters, Ukrainian officials are using the facial recognition search engine
Clearview AI to “uncover Russian assailants, combat misinformation and identify the
dead.” In the United States, Clearview AI has made headlines in reporting on law en‑
forcement, with civil liberties experts raising well‑founded concerns about the prolifer‑
ation of facial recognition technology (FRT) in police departments. These concerns have
prompted calls for the outright ban of facial recognition. Yet the Reuters article serves as
a reminder that FRT has many applications beyond policing, and that those concerned
about FRT should focus on regulations guiding deployment rather than seeking a ban
on the technology.

Since the Russian invasion of Ukraine a few weeks ago, the American government has
responded with economic sanctions and military assistance to Ukraine. But pressure
on Russia has come from private companies as well as governments. Apple has sus‑
pended sales in Russia and limited Russian use of its Apple Pay and Apple Maps soft‑
ware. Google has suspended advertising in Russia and halted payment on Google Pay.
A host of other U.S.‑based companies have also taken steps to limit Russian access to
their goods and services. Yet the recent news about Clearview AI shows another way
that private companies can involve themselves in the ongoing war: by assisting the
Ukrainian government.

Clearview AI scrapes billions of images from social media sites such as Twitter, Face‑
book, and Instagram in order to build a search engine for facial images. Someone with
Clearview AI’s technology can upload an image that Clearview AI’s FRT then compares
to the billions of images in the Clearview AI database, thereby confirming identity. In
the U.S., civil liberties activists have protested the police use of Clearview AI, which has
resulted in thousands of queries.

American police use of Clearview AI prompted Google and Facebook to write cease
and desist letters to the company, urging it to stop using photos from their platforms.


Clearview AI responded that its use of publicly available photos was First Amend‑
ment‑protected speech.

Clearview AI made the same First Amendment argument while seeking to dismiss a
lawsuit filed by Illinois residents. Clearview AI argued that Illinois’ biometric privacy
law, the Biometric Information Privacy Act, violated the First Amendment, a claim the
presiding judge found unpersuasive.

Amidst civil liberties controversies at home, Clearview AI is now seeking to help the
Ukrainian government. News from the Ukraine‑Russia war has showcased the use of
weaponized disinformation and misinformation. Russia invaded a country where a ma‑
jority of adults have smartphones. The result is that social media platforms are awash
with videos and photos of the conflict. Predictably, fake content associated with the
conflict has been spread by both those seeking to confuse social media users and those
who incorrectly believe what they are sharing is accurate.

Civil liberties and surveillance concerns are important to debates over FRT, but they
should not be the only considerations. As the Reuters article demonstrates, facial recog‑
nition can help defending countries in wartime. But it can also be valuable in peace‑
time. For example, in 2018 police in New Delhi scanned images of 45,000 children in
orphanages and care facilities. Within four days, they were able to identify 2,930 miss‑
ing children – a feat that would have been exceedingly difficult in the absence of this
technology.

FRT can be applied in many circumstances: It can help refugees locate their families,
strengthen commercial security, and prevent fraud. There are also use cases unrelated
to law enforcement and safety. For example, CaliBurger patrons can pay for their meal
with a face scan. Similar payment systems have been trialed around the world, in coun‑
tries such as Denmark, Nigeria, South Korea, and Finland.

Broad prohibitions and bans often overlook these valuable applications, focusing solely
on misuse by law enforcement.

While the deployment of facial recognition in Ukraine highlights positive potential, it
also underscores the jurisdictional challenges associated with this controversial tech‑
nology. In recent months, Clearview AI has been the subject of international investi‑
gations, with some governments claiming that its proprietary facial recognition system
runs afoul of national data protection laws. Regulators in Sweden, France, and Australia
have ordered the company to delete all data relating to its citizens, while the United
Kingdom and Italy have imposed large fines.


As lawmakers and regulators around the world continue to grapple with FRT policy
they should consider its benefits as well as its costs. It is possible to craft policies that
allow public officials to use FRT while also protecting civil liberties. The recent use of
Clearview AI in Ukraine does not mean that we should ignore the potential for FRT to
be used for surveillance. Rather it should serve as a reminder that FRT policy should
focus on uses of the technology rather than the technology itself.


6.1.16 Distracted Driving

FRT can detect distracted drivers, as demonstrated in Australia.

McClellan 20

Elizabeth McClellan (J.D. Candidate, 2021, University of Maryland Francis King Carey
School of Law), Facial Recognition Technology: Balancing the Benefits and Concerns,
15 Journal of Business & Technology Law 363, 2020, https://digitalcommons.law.
umaryland.edu/jbtl/vol15/iss2/7

Despite the lack of regulations, facial recognition technology has presented a plethora
of benefits to society from traffic safety to medical advancements. Internationally, fa‑
cial recognition technology has been used to prevent distracted driving.59 In Australia,
authorities have begun utilizing the technology of the Australian company Acusensus
to help prevent distracted driving by installing camera systems above and on the side
of roads to help detect “distracted drivers.”60 The cameras capture pictures of all cars
passing by and search through the pictures to find drivers using their phones while driv‑
ing.61 If it is found that the driver is using a phone (and is thus deemed a “distracted
driver”), the system will encrypt the image and send it to authorities.62 However, if a
distracted driver is not detected, the system immediately deletes the picture.63 Acusen‑
sus recently presented the technology at an international conference to countries includ‑
ing Canada, thus posing the possibility that this technology will continue to be utilized
by more countries across the globe.64
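
The workflow McClellan describes, photographing every passing car, forwarding only encrypted images of suspected phone use, and deleting everything else, maps onto a simple pipeline. The detector stand-in, the toy XOR scrambling, and the record format below are placeholders for illustration, not Acusensus’s actual system.

```python
# Minimal sketch of the distracted-driving pipeline: flag, encrypt and forward
# suspected violations; delete all other captures immediately.

def detect_phone_use(capture):
    """Stand-in for the real image classifier; here just a flag on the record."""
    return capture.get("phone_visible", False)

def encrypt(image_bytes, key=b"demo-key"):
    """Toy XOR scrambling purely for illustration; real systems use real crypto."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(image_bytes))

def process_capture(capture):
    if detect_phone_use(capture):
        return ("forward_to_authorities", encrypt(capture["pixels"]))
    return ("deleted", None)  # non-violations are never retained

print(process_capture({"pixels": b"\x01\x02\x03", "phone_visible": True})[0])
print(process_capture({"pixels": b"\x04\x05\x06", "phone_visible": False})[0])
```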


6.1.17 Consumer Benefits

FRT provides many consumer benefits.

McClellan 20

Elizabeth McClellan (J.D. Candidate, 2021, University of Maryland Francis King Carey
School of Law), Facial Recognition Technology: Balancing the Benefits and Concerns,
15 Journal of Business & Technology Law 363, 2020, https://digitalcommons.law.
umaryland.edu/jbtl/vol15/iss2/7

Facial recognition technology also provides numerous benefits to consumers. In an era
where data is so easily accessible, it becomes increasingly important to protect this data.
Facial recognition technology allows users to engage in “multifactor biometrics” to ver‑
ify a user’s identity, such as voice and facial recognition.69 Companies such as Apple
have begun using multifactor biometrics and facial recognition technology as a method
to unlock phones.70 Similarly, companies such as Google have developed technology
that is able to recognize a user’s voice, such that its Google Home responses may be tai‑
lored to the specific user, or may not respond at all to users who it does not recognize.71
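
The “multifactor biometrics” McClellan mentions amounts to requiring more than one factor to clear its own threshold before access is granted. The sketch below is a hedged illustration with made-up scores and thresholds, not Apple’s or Google’s actual matching logic.

```python
# Minimal sketch of multifactor biometric unlocking: every enrolled factor
# must clear its threshold before access is granted.

def unlock(face_score, voice_score, face_threshold=0.97, voice_threshold=0.90):
    """Grant access only when both biometric factors match."""
    return face_score >= face_threshold and voice_score >= voice_threshold

print(unlock(face_score=0.99, voice_score=0.95))  # True
print(unlock(face_score=0.99, voice_score=0.40))  # False
```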


6.1.18 Innovation

Broad bans stifle tech innovation—disincentivizing investment in machine learning and hurting public safety.

Freiburger 19

Kevin Freiburger (Director of Identity Programs at Valid), “Why Bans on Facial Recogni‑
tion Could Stifle Tech Innovation,” Valid, 23 September 2019, https://valid.com/articles/facial‑
recognition‑innovation‑blog/, accessed 7 March 2023

The idea of facial recognition applications in the public sector was once a seemingly
impossible task. Early facial analysis software algorithms didn’t produce accurate or
reliable results, systems were disproportionately inaccurate across certain genders and
races, and the infrastructure required to power the system required a significant IT effort
to maintain.

However, companies that provide facial recognition systems matured the technology
significantly in the last decade — making facial recognition applications faster, more ac‑
curate and easier to operate. Significant investments in machine learning (ML) technol‑
ogy and other technical infrastructure made those gains possible. Now face recognition
software self‑learns because the creator painstakingly trains the algorithm with larger
and more diverse data sets.

It leads to fewer error rates over a broader range of races and genders. And thanks to the
democratization of ML from large tech behemoths like Google, Amazon, and Microsoft:
Even small or medium‑sized facial recognition providers can take advantage of turnkey
toolsets and infrastructure to improve accuracy.

But as significant as advancements in the technology may be, there are still ethical con‑
flicts around privacy and the use of facial recognition applications in the public sector.
While Microsoft will not sell their solution to government buyers, Amazon has affirmed
it will continue contracting with government entities as long as they “follow the law.”
The company asserts a technology shouldn’t be banned simply because the potential for
misuse exists.

The debate isn’t limited to the companies developing the tech. The increased interest in
facial recognition by government agencies has raised the ire of watchdog organizations
and citizens alike, who are concerned about the potential for human rights and privacy
violations.


Clashing ideologies related to public safety applications and the potential for privacy
violations has now come to a head in San Francisco. Earlier this year, city leadership
voted to ban surveillance technology that uses facial recognition from being used by
government agencies or police. The ban covers body cameras, toll readers and video
surveillance devices as well as all iterations of facial recognition software and the in‑
formation that it collects. While mass surveillance seems to be the intended target of
the ban, the ordinance as written restricts any technology that uses or creates biometric
data.

It’s clear that city leaders had the right intentions with this action — the city does, af‑
ter all, represent progressive ideals and is seen as the center of technology innovation.
However, the ban is so broad that it might very well stifle the innovation the city is
known for and restrict opportunities to improve public safety in the process.

Banning facial recognition outright is a mistake

Facial recognition has valuable and life‑saving potential if deployed correctly, and it is
incumbent upon government agencies to take the lead in crafting smart legislation and
ethical frameworks for the use of the technology.

Innovation at Departments of Motor Vehicles (DMVs) across the US are examples of
how this technology can be applied in a non‑invasive manner. Many agencies currently
use the technology to reduce identity theft and prevent the issuance of fraudulent IDs.
However, bans like San Francisco’s could threaten helpful applications like these or at
the very least, pose costly legal challenges.

We have seen the success of facial recognition technology firsthand in the public sector.
At one state government agency, workers identified 173 fraudulent transactions over
12 months using facial recognition at a DMV location, and nearly 30 percent of these
were attempting to steal another resident’s identity using stolen information. Given the
pervasiveness of the threat, all‑out bans on technology like facial recognition represent
a step backward for public safety.

Government leaders need more education to prevent bad legislation

The San Francisco ban caused a surge of interest concerning the way facial recognition
and the technology that powers it works. That’s a positive development. The public
and watchdog agencies deserve clear, honest answers to their questions. If technology
companies can deliver transparency coupled with accurate results, it will start building
trust. Ultimately, a proper balance of privacy and public safety is possible if both sides
are willing to engage.


So, let’s clear up some major misconceptions about how facial recognition works. For
one, the technology is not a binary, definitive system for identifying suspects and crim‑
inals. The words “Facial Recognition” say everything you need to know. It’s a proba‑
bility; it is not a legal identification of a person like a fingerprint. Law enforcement and
judicial courts will use fingerprint biometrics to identify people. You will never hear
that “identification” language in facial recognition.

Instead, it acts as more of a lead for law enforcement, yielding a match probability that
is then analyzed by a trained professional to conduct a broader investigation. At the
end of the day, the technology doesn’t serve as the final adjudication in a criminal case.

Additionally, facial recognition technology may yield inaccurate results if not properly
trained. Realistically, the only way to allow these systems to improve is with more
data. Machine learning can help software and algorithms improve with repetition and
iteration over time. But to reduce the potential for inaccuracy, developers need the
ability to test this technology in the real world in non‑criminal use cases (e.g., point‑of‑
sale authentication in a school cafeteria).

It must be made clear to both government agencies and the public that this technology
is neither a total remedy for public safety nor a completely nefarious tool for privacy in‑
vasion. Rather, the technology as it exists can act as a helpful guide for law enforcement
to gather leads, identify patterns and aid officers. And there are non‑criminal use cases
for the technology and developers are launching those solutions every day.


6.1.19 AT: Bias

Bias claims are outdated and based on shoddy science—rapid technical improvements have drastically increased accuracy and reduced bias.

Baker 22

Stewart A. Baker (a partner in the Washington office of Steptoe & Johnson LLP. He
returned to the firm following 3½ years at the Department of Homeland Security as
its first Assistant Secretary for Policy. He earlier served as general counsel of the
National Security Agency), “The Flawed Claims About Bias in Facial Recognition,”
Lawfare, 2 February 2022, https://www.lawfareblog.com/flawed‑claims‑about‑bias‑
facial‑recognition, accessed 11 March 2023

If you’ve been paying attention to press and academic studies in recent years, you know
one thing about face recognition algorithms. They’re biased against women and racial
minorities. Actually, you’ve probably heard they’re racist. So says everyone from the
MIT Technology Review and Motherboard to the ACLU and congressional Democrats.

There’s just one problem with this consensus. It’s wrong. And wrong in a way that has
dangerous consequences. It’s distorting laws all around the country and handing the
global lead in an important new technology to Chinese and Russian competitors.

That’s not to say that face recognition never had a problem dealing with the faces of
women and minorities. A decade ago, when the technology was younger, it was often
less accurate in identifying minorities and women. A 2012 study published by the IEEE
found that when used on photos of men, whites, or the middle aged, the best commercial
systems matched faces successfully about 94.5 percent of the time, but their success
rates were lower for women (at 89.5 percent), Blacks (88.7 percent), and the young (91.7
percent).

These are the numbers that drove the still widely repeated claim that face recognition is
irretrievably racist. In fact, that claim relies on data from an early stage in the technol‑
ogy’s development. And it has frozen the narrative by invoking a political and moral
context that makes it hard to acknowledge the dramatic improvements in face recogni‑
tion that followed the 2012 study. Racism, after all, is rarely cured by a few technical
tweaks.

But face recognition algorithms are just tools. They may be accurate or not. The inac‑
curacies may be more common for some groups than others. But, like any tool, and
especially like any new technology, improvements are likely. Treating face recognition
differentials as an opportunity to explore society’s inherent racism, in contrast, doesn’t
lead us to expect technical improvements. And that, it turns out, is why the “racism”
framework is wrong. Recent improvements in face recognition show that disparities
previously chalked up to bias are largely the result of a couple of technical issues.

The first is data. To be accurate, machine learning needs a big dataset. The more data
you put in, the more accuracy you get out. Since minorities are by definition less well
represented in the population than the majority, a lack of data may explain much of the
“bias” in face recognition systems. That’s what tests suggest; algorithms developed in
East Asia have done better than Western systems at identifying Asian faces—probably
because they had more Asian faces to learn from. Luckily, this technical problem has
a technical solution. Simply expanding the training set should improve accuracy and
reduce differential error rates.
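
The “differential error rates” Baker refers to are simply error rates computed separately for each demographic group. The sketch below shows that calculation on a handful of hypothetical evaluation records; real audits such as NIST’s use far larger test sets.

```python
# Minimal sketch of a per-group false non-match rate calculation on
# hypothetical evaluation data (each trial is a genuine pair: did it match?).

trials = [
    {"group": "A", "matched": True},
    {"group": "A", "matched": True},
    {"group": "A", "matched": False},
    {"group": "B", "matched": True},
    {"group": "B", "matched": True},
    {"group": "B", "matched": True},
]

def false_non_match_rates(trials):
    """Share of genuine pairs the system failed to match, per group."""
    rates = {}
    for group in sorted({t["group"] for t in trials}):
        group_trials = [t for t in trials if t["group"] == group]
        misses = sum(1 for t in group_trials if not t["matched"])
        rates[group] = misses / len(group_trials)
    return rates

print(false_non_match_rates(trials))  # group A misses a third of the time, B never
```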

A second technical issue is how the images in question are captured. It’s obvious that
good lighting improves face recognition. And, as camera makers already recognize,
the wrong lighting or exposure can easily produce photographs that don’t do justice
to people with darker skin. So simply improving the lighting and exposures used to
capture images should improve accuracy and reduce race and gender differences.

In fact, that’s what more recent studies show. When it examined face recognition in
2018, the National Institute of Standards and Technology (NIST) found “massive gains
in accuracy” since 2012, with error rates that fell below 0.2 percent with good lighting,
exposures, focus and other conditions. In other words, used properly, the best algo‑
rithms got the right answer 99.8 percent of the time, and most of the remaining error
was down not to race or gender but to aging and injuries that occurred between the first
photo and the second.

Real‑life implementations tell the same story. Two agencies that I know well—the Trans‑
portation Security Administration and Customs and Border Protection (CBP)—depend
heavily on identity‑based screening of travelers. As they rolled out algorithmic face
recognition, they reported on the results. And, like NIST, they found “significant im‑
provements” in face recognition tools in just the two years between a 2017 pilot and the
start of operations in 2019. Those improvements seriously undercut the narrative of race
and gender bias in face recognition. While CBP doesn’t collect data on travelers’ race,
it does know a lot about travelers’ country of citizenship, which in turn is often highly
correlated to race; using this proxy, CBP found that race had a “negligible” effect on the
accuracy of its face matches. It did find some continuing performance differences based
on age and gender, but those had declined a lot thanks to improvements in operational
factors like illumination. These changes, the study found, “led to a substantial reduc‑
tion in the initial gaps in matching for ages and genders”: In fact, by 2019 the error rate
for women was 0.2 percent, better than the rate for men and much better than the 1.7
percent error rate for women found in 2017.

Of course, CBP offers face recognition the easiest of tests, a one‑to‑one match in which
the algorithm just has to decide whether my face matches the picture on my passport.
Other tests are harder, particularly the “one‑to‑many” searches that match photos from
a crime to a collection of mug shots. These may have lower accuracy and, with less
control over lighting and exposures, more difficulty with darker skin.

So technical improvements may narrow but not entirely eliminate disparities in face
recognition. Even if that’s true, however, treating those disparities as a moral issue
still leads us astray. To see how, consider pharmaceuticals. The world is full of drugs
that work a bit better or worse in men than in women. Those drugs aren’t banned as
the evil sexist work of pharma bros. If the gender differential is modest, doctors may
simply ignore the difference, or they may recommend a different dose for women. And
even when the differential impact is devastating—such as a drug that helps men but
causes birth defects when taken by pregnant women—no one wastes time condemning
those drugs for their bias. Instead, they’re treated like any other flawed tool, minimizing
their risks by using a variety of protocols from prescription requirements to black box
warnings.

Somehow, the algorithmic bias studies, and the journalists who cover them, have
skipped both of these steps. They do not devote much time to asking whether the
differentials they’ve found can actually cause harm. Nor do they ask whether the risk
of harm can be neutralized when the algorithm’s output is actually used. If they did,
face recognition wouldn’t have the toxic reputation it has today. Because it turns out
that the harms attributed to face recognition bias are by and large both modest and
easy to control.

Let’s start with the kind of harm most people imagine when they hear about bias in
face recognition: an innocent man arrested or convicted because the algorithm falsely
matched his image to a crime video. For all its prominence in popular imagination, there
have been very few such cases in real life, and for good reason. Early bias studies, such
as the 2012 IEEE study, found that certain populations were “more difficult to recog‑
nize.” That is, the systems were less good at finding matches in those populations. This
is a difference, but it is not clear that it would lead to more false arrests of minorities. In
actual use, face recognition software announces a match only if the algorithm assigns a
high probability (often a 95 percent probability) to the match, meaning that weak or du‑
bious matches are ignored. So it’s hard to see why being “difficult to recognize” would
lead to more false arrests.

Even more important, it is stunningly easy to build protocols around face recognition
that largely wash out the risk of discriminatory impacts. In many cases, they already
exist, because false matches in face recognition were a problem long before comput‑
ers entered the scene. The risk of false matches explains why police departments have
procedures for lineups and photo arrays. Similar safeguards would work for machine
matches: A simple policy requiring additional confirmation before relying on algorith‑
mic face matches would probably do the trick. The protocol might simply require the
investigating officer to validate the algorithm’s match using his or her own judgment,
and eyesight. Indeed, this is so simple and obviously a mechanism for controlling error
that one has to wonder why so few researchers who identify bias in artificial intelli‑
gence ever go on to ask whether the bias they’ve found could be controlled with such
measures.

Of course, false matches aren’t the only way that bias in face recognition could cause
harm. There are also false “no match” decisions. Face recognition is used more com‑
monly for what could be called identity screening, which is done at the border, the
airport, or on your iPhone to make sure the photo on your ID matches your face. False
matches in that context don’t discriminate against anyone; if anything, they work in fa‑
vor of individuals who are trying to commit identity theft. From the individual’s point
of view, a risk of discrimination arises only from a false report that the subject and the
photo don’t match, an error that could deny the subject access to his phone or her flight.

But once again, these consequences are vanishingly rare in the real world. Partly that’s
because the government at least can control things like lighting and exposure, making
technical errors less likely. Presumably that’s why the CBP report shows negligible error
differentials for different races (or at least different countries of origin). And even where
error differentials remain for some groups, such as the aged, there are straightforward
protocols for reducing the error’s impact. As a practical matter, agencies that check IDs
do not deny access just because the algorithm says it has found a mismatch. Instead,
that finding generally triggers a set of alternative authentication methods—having a
human double check your photo against your face and ask you questions to verify your
identity. It’s hard to say that someone required to answer a few additional questions
has been seriously harmed by the algorithm’s error, even if that error is a little more
likely for older travelers. After all, face recognition software can also have problems
with eyeglasses, and being required to take them off for the camera could be called
discrimination based on disability, but in both cases, it’s hard to see the inconvenience
as a moral issue, let alone a reason to discredit the technology.

In short, the evidence about bias in facial recognition evokes Peggy Lee’s refrain: “Is
that all there is?” Sadly, the answer is yes; that’s all there is. For all the intense press
and academic focus on the risk of bias in algorithmic face recognition, it turns out to be
a tool that is very good and getting better, with errors attributable to race and gender
that are small and getting smaller—and that can be rendered insignificant by the simple
expedient of having people double check the machine’s results by using their own eyes
and asking a few questions.

One can hope that this means that the furor over face recognition bias will eventually
fade. Unfortunately, the cost of that panic is already high. The efficiencies that face
recognition algorithms make possible are being spurned by governments caught up in
what amounts to a moral panic. A host of cities and at least five states (Maine, Vermont,
Virginia, Massachusetts and New York) have adopted laws banning or restricting state
agencies’ use of face recognition.

Perhaps worse, tying the technology to accusations of racism has made the technology
toxic for large, responsible technology companies, driving them out of the market. IBM
has dropped its research entirely. Facebook has eliminated its most prominent use of
face recognition. And Microsoft and Amazon have both suspended face recognition
sales to law enforcement.

These departures have left the market mainly to Chinese and Russian companies. In fact,
on a 2019 NIST test for one‑to‑one searches, Chinese and Russian companies scored
higher than any Western competitors, occupying the top six positions. In December
2021, NIST again reported that Russian and Chinese companies dominated its rank‑
ings. The top‑ranked U.S. company is Clearview AI, whose business practices have
been widely sanctioned in Western countries.

Given the network effects in this business, the United States may have permanently
ceded the face recognition market to companies it can’t really trust. That’s a heavy
price to pay for indulging journalists and academics eager to prematurely impose a
moral framework on a developing technology.


Technical criticisms of accuracy are irrelevant—they will improve and regulations ensure only FRTs with sufficient accuracy are deployed.

Robbins 21

Scott Robbins (PhD, MSc, Post Doc Research Fellow, University of Bonn, Germany), Fa‑
cial Recognition for Counter‑Terrorism: Neither a Ban Nor a Free‑for‑All. In: Henschke,
A., Reed, A., Robbins, S., Miller, S. (eds.) Counter‑Terrorism, Ethics and Technology. Ad‑
vanced Sciences and Technologies for Security Applications. Springer, Cham. 2021. Pgs.
89‑104, https://doi.org/10.1007/978‑3‑030‑90221‑6_6

Meanwhile, the benefits of FRTs will be disproportionately received by middle‑aged
white males. Not only will they be identified more reliably—meaning that they will get
through security lines without further intrusive surveillance, but they will dispropor‑
tionately feel the benefits of convenience that these technologies promised in the first
place. Joy Buolamwini (mentioned above) started to analyze FRTs precisely because
she couldn’t get FRTs to recognize her face. At one point, she put on a white mask trig‑
gering the program to recognize hers as a face [13]. The point is that the convenience
promised by FRTs is distributed unfairly. This compounds the problem above because
the same group that disproportionately experiences the harms of FRTs also dispropor‑
tionately fail to experience its benefits. This problem must be overcome if FRTs are to
be used anywhere. The main point is that many FRTs don’t work. If a particular tech‑
nology doesn’t work, then we shouldn’t use it. However, this does not mean that the
technology will not work in the future. In this paper, I assume that we will only be using
FRTs that work with an appropriate level of effectiveness for everyone.


6.1.20 AT: Moratorium

A moratorium ignores the valuable uses of FRT and shreds innovation.

SIA 21

Security Industry Association (the leading trade association for global secu‑
rity solution providers, with over 1,100 innovative member companies rep‑
resenting thousands of security leaders and experts who shape the future of
the security industry), “Security Industry Association Opposes Reintroduction
of Facial Recognition & Biometric Technology Moratorium Act,” 16 June 2021,
<https://www.securityindustry.org/2021/06/16/security‑industry‑association‑opposes‑
reintroduction‑of‑facial‑recognition‑biometric‑ technology‑moratorium‑act/>, accessed
6 March 2023

SILVER SPRING, Md. – The Security Industry Association (SIA) – the leading trade
association representing security solutions providers – has announced its strong oppo‑
sition to the Facial Recognition and Biometric Technology Moratorium Act, originally
introduced in 2020 and reintroduced June 15 by Sen. Edward Markey (D‑Mass.). SIA
publicly opposed the bill’s initial introduction, which largely resembles the version put
forth in the 117th Congress.

The legislation would impose a blanket ban on most federal, state and local use of nearly
all biometric and related image analytics technologies, which threatens the legitimate,
documented benefits of facial recognition technologies used by law enforcement, includ‑
ing:

Identifying individuals who stormed the U.S. Capitol on Jan. 6

Reuniting victims of human trafficking with their families and loved ones

Detecting use of fraudulent documentation by non‑citizens at air ports of entry

Aiding counterterrorism investigations in critical situations

Exonerating innocent individuals accused of crimes

“Rather than impose sweeping moratoriums, SIA encourages Congress to propose bal‑
anced legislation that promulgates reasonable safeguards to ensure that facial recogni‑
tion technology is used ethically, responsibly and under appropriate oversight and that
the United States remains the global leader in driving innovation,” said SIA CEO Don
Erickson.


Such approaches and recommendations have been reflected in the following areas sup‑
ported by SIA:

The U.S. Innovation and Competition Act – applauded by SIA earlier this month follow‑
ing its passage in the Senate – which authorizes the National Science Foundation (NSF)
to disburse funds for research and development initiatives on key technology areas, in‑
cluding biometrics

Increased funding to the National Institute of Standards and Technology (NIST) Image
Analysis Unit, which will expand NIST’s testing infrastructure and computing power
necessary to enhance NIST’s Facial Recognition Vendor Test Program

Direct additional NSF funding to historically Black colleges and universities, Hispanic‑
serving institutions and other minority institutions to develop interdisciplinary research
focused on facial recognition algorithmic development to provide students with first‑
hand knowledge about the challenges and opportunities presented by these advanced
technologies, including issues surrounding performance differentials and bias mitiga‑
tion

SIA is committed to promoting policies that support innovation in security and life
safety technologies and supports U.S. leadership in key technology areas, including bio‑
metrics. In May, SIA joined the U.S. Chamber of Commerce and a group of other associ‑
ations in sending a letter to President Biden expressing concern with blanket moratori‑
ums on facial recognition and advocating for a combination of technological safeguards
and policy measures to effectively mitigate any risks associated with the technology and
ensure that it is developed and used responsibly.

SIA also recently sent a letter to President Biden and Vice President Harris urging the
administration and Congress to consider policies that enable American leadership in
developing biometric technologies; issued policy principles that guide the commercial
sector, government agencies and law enforcement on how to use facial recognition in a
responsible and ethical manner; released comprehensive public polling on support for
facial recognition use across specific applications; and published a list of successful uses
of the technology.


6.1.21 AT: Privacy

Privacy must be weighed against other conflicting interests.

Lobel 22

Orly Lobel (Warren‑Distinguished Professor and Director of the Center for Employment
and Labor Policy (CELP) at University of San Diego), “The Problem With Too Much
Data Privacy,” Time, 27 October 2022, https://time.com/6224484/data‑privacy‑problem/,
accessed 27 February 2023

Privacy has long dominated our social and legal debates about technology. The Federal
Trade Commission and other central regulators aim to strengthen protections against
the collection of personal data. Data minimization is the default set in Europe by the
GDPR and a new bill before U.S. Congress, The American Data Privacy and Protection
Act, similarly seeks to further privacy’s primacy.

Privacy is important when it protects people against harmful surveillance and public dis‑
closure of personal information. But privacy is just one of our democratic society’s many
values, and prohibiting safe and equitable data collection can conflict with other equally
valuable social goals. While we have always faced difficult choices between competing
values—safety, health, access, freedom of expression and equality—advances in tech‑
nology make it increasingly possible for data to be anonymized and secured to balance
individual interests with the public good. Privileging privacy, instead of openly ac‑
knowledging the need to balance privacy with fuller and representative data collection,
obscures the many ways in which data is a public good. Too much privacy—just like too
little privacy—can undermine the ways we can use information for progressive change.

Privacy conflicts with developing better algorithms.

Lobel 22

Orly Lobel (Warren‑Distinguished Professor and Director of the Center for Employment
and Labor Policy (CELP) at University of San Diego), “The Problem With Too Much
Data Privacy,” Time, 27 October 2022, https://time.com/6224484/data‑privacy‑problem/,
accessed 27 February 2023

We also need to recognize that privacy can conflict with better, more accurate, and less
biased, automation. In the contemporary techlash, in which algorithms are condemned
as presenting high risks of bias and exclusion, the tension between protecting personal
data and the robustness of datasets must be acknowledged. For an algorithm to be‑
come more accurate and less biased, it needs data that is demographically reflective.
Take health and medicine for example. Historically, clinical trials and health‑data col‑
lection have privileged male and white patients. The irony of privacy regulation as a
solution to exclusion and exploitation is that it fails to address the source of much bias:
partial and skewed data collection. Advances in synthetic data technology, which al‑
lows systems to artificially generate the data that the algorithm needs to train on can
help alleviate some of these tensions between data collection and data protection. Con‑
sider facial recognition again: we need more representative training data to ensure that
the technology becomes equally accurate across identities. And yet, we need to be de‑
liberate and realistic about the need for real data for public and private innovation.

Privacy hampers innovation, specifically medical innovation.

Lobel 22

Orly Lobel (Warren‑Distinguished Professor and Director of the Center for Employment
and Labor Policy (CELP) at University of San Diego), “The Problem With Too Much
Data Privacy,” Time, 27 October 2022, https://time.com/6224484/data‑privacy‑problem/,
accessed 27 February 2023

An overemphasis on privacy can hamper advances in scientific research, medicine, and
public health compliance. Big data collected and mined by artificial intelligence is al‑
lowing earlier and more accurate diagnosis, advanced imaging, increased access to and
reduced costs of quality care, and discovery of new connections between data and dis‑
ease to discover novel treatments and cures. Put simply, if we want to support medical
advances, we need more data samples from diverse populations. AI advances in radi‑
ology have resulted not only in better imaging but also in reduced radiation doses and
faster, safer, and more cost‑effective care. The patients who stand to gain the most are
those who have less access to human medical experts.

In its natural state—to paraphrase the tech activist slogan “Information wants to be free”
(and channeling the title of my own book Talent Wants to Be Free)—data wants to be
free. Unlike finite, tangible resources like water, fuel, land or fish, data doesn’t run out
because it is used. At the same time, data’s advantage stems from its scale. We can find
new proteins for drug development, teach speech‑to‑text bots to understand myriad
accents and dialects, and teach algorithms to screen breast mammograms or lung x‑
rays when we can harness the robustness of big data—millions, sometimes billions, of
data points. During the COVID‑19 pandemic, governments track patterns of the spread
of the disease and fight against those providing false information and selling products
under fraudulent claims about cures and protections. The Human Genome Project is a
dazzling, paradigmatic leap in our collective knowledge and health capabilities enabled
by massive data collection. But there is much more health information that needs to be
collected, and privileging privacy may be bad for your health.

Privacy also hampers efforts to tackle societal ills like pay gaps and discriminatory hiring.

Lobel 22

Orly Lobel (Warren‑Distinguished Professor and Director of the Center for Employment
and Labor Policy (CELP) at University of San Diego), “The Problem With Too Much
Data Privacy,” Time, 27 October 2022, https://time.com/6224484/data‑privacy‑problem/,
accessed 27 February 2023

In health care, this need for data is perhaps intuitive, but the same holds true if we
want to understand—and tackle—the root causes of other societal ills: pay gaps, dis‑
criminatory hiring and promotion, and inequitable credit, lending, and bail decisions.
In my research about gender and racial‑pay gaps, I’ve shown that more widespread in‑
formation about salaries is key. Similarly, freely sharing information online about our
job experiences can improve workplaces, and there are initiatives concerning privacy
that may inadvertently backfire and result in statistical discrimination against more vul‑
nerable populations. For example, empirical studies suggest that ban‑the‑box privacy
policies about criminal background checks for hiring may have led to increased racial
discrimination in some cities.

Privacy serves primarily to shield the powerful and rich.

Lobel 22

Orly Lobel (Warren‑Distinguished Professor and Director of the Center for Employment
and Labor Policy (CELP) at University of San Diego), “The Problem With Too Much
Data Privacy,” Time, 27 October 2022, https://time.com/6224484/data‑privacy‑problem/,
accessed 27 February 2023


Privacy—and its pervasive offshoot, the NDA—has also evolved to shield the power‑
ful and rich against the public’s right to know. Even now, with regard to the right to
abortion, the legal debates around reproductive justice reveal privacy’s weakness. A
more positive discourse about equality, health, bodily integrity, economic rights, and
self‑determination would move us beyond the sticky question of what is and is not in‑
cluded in privacy. As I recently described in a lecture about Dobbs v. Jackson Women’s
Health Organization, abortion rights are far more than privacy rights; they are health
rights, economic rights, equality rights, dignity rights, and human rights. In most cir‑
cumstances, data collection should not be prevented but safeguarded, shared, and em‑
ployed to benefit all.

The dominance of privacy is reverse goldilocks—it both over‑ and under‑includes relevant values.

Lobel 22

Orly Lobel (Warren‑Distinguished Professor and Director of the Center for Employment
and Labor Policy (CELP) at University of San Diego), “The Problem With Too Much
Data Privacy,” Time, 27 October 2022, https://time.com/6224484/data‑privacy‑problem/,
accessed 27 February 2023

While staunch privacy advocates emphasize tools like informed consent and opt‑out
methods, these policies rely on a fallacy of individual consent. Privacy scholars agree
that consent forms—those ubiquitous boilerplate clickwrap policies—are rarely read or
negotiated. Research also reveals that most consumers are quite agnostic to privacy set‑
tings. The behavioral literature calls this the privacy paradox, revealing that in practice
people are regularly willing to engage in a privacy calculus, giving up privacy for per‑
ceived benefits. So privileging privacy is both over and under‑inclusive: It neglects a
fuller array of values and goals we must balance, but it also fails to provide meaningful
assurances for individuals and communities who have an undeniable history of being
oppressed by the state and privileged elite. The dominance of privacy policy can distort
nuanced debates about distributional justice and human rights, as we continue to build
our digital knowledge commons. Collection of important data to tackle our toughest
social issues is a critical mandate of democracy.

6.1.22 AT: Privacy—Uniqueness

Privacy is already dead—NSA surveillance, Cambridge Analytica, and Equifax prove it.

Kerry 18

Cameron F. Kerry (Ann R. and Andrew H. Tisch Distinguished Visiting Fellow ‑ Governance Studies, Center for Technology Innovation), “Why protecting privacy
is a losing game today—and how to change the game,” Brookings, 12 July 2018,
https://www.brookings.edu/research/why‑protecting‑privacy‑is‑a‑losing‑game‑today‑
and‑how‑to‑change‑the‑game/, accessed 27 February 2023

There is a classic episode of the show “I Love Lucy” in which Lucy goes to work wrap‑
ping candies on an assembly line. The line keeps speeding up with the candies coming
closer together and, as they keep getting farther and farther behind, Lucy and her side‑
kick Ethel scramble harder and harder to keep up. “I think we’re fighting a losing game,”
Lucy says.

This is where we are with data privacy in America today. More and more data about
each of us is being generated faster and faster from more and more devices, and we
can’t keep up. It’s a losing game both for individuals and for our legal system. If we
don’t change the rules of the game soon, it will turn into a losing game for our economy
and society.

The Cambridge Analytica drama has been the latest in a series of eruptions that have
caught people’s attention in ways that a steady stream of data breaches and misuses of
data have not.

The first of these shocks was the Snowden revelations in 2013. These made for long‑
running and headline‑grabbing stories that shined light on the amount of information
about us that can end up in unexpected places. The disclosures also raised awareness of
how much can be learned from such data (“we kill people based on metadata,” former
NSA and CIA Director Michael Hayden said).

The aftershocks were felt not only by the government, but also by American companies,
especially those whose names and logos showed up in Snowden news stories. They
faced suspicion from customers at home and market resistance from customers over‑
seas. To rebuild trust, they pushed to disclose more about the volume of surveillance
demands and for changes in surveillance laws. Apple, Microsoft, and Yahoo all engaged
in public legal battles with the U.S. government.

Then came last year’s Equifax breach that compromised identity information of almost
146 million Americans. It was not bigger than some of the lengthy roster of data breaches
that preceded it, but it hit harder because it rippled through the financial system and
affected individual consumers who never did business with Equifax directly but never‑
theless had to deal with the impact of its credit scores on economic life. For these people,
the breach was another demonstration of how much important data about them moves
around without their control, but with an impact on their lives.

Now the Cambridge Analytica stories have unleashed even more intense public atten‑
tion, complete with live network TV cut‑ins to Mark Zuckerberg’s congressional testi‑
mony. Not only were many of the people whose data was collected surprised that a
company they never heard of got so much personal information, but the Cambridge
Analytica story touches on all the controversies roiling around the role of social media
in the cataclysm of the 2016 presidential election. Facebook estimates that Cambridge
Analytica was able to leverage its “academic” research into data on some 87 million
Americans (while before the 2016 election Cambridge Analytica’s CEO Alexander Nix
boasted of having profiles with 5,000 data points on 220 million Americans). With over
two billion Facebook users worldwide, a lot of people have a stake in this issue and, like
the Snowden stories, it is getting intense attention around the globe, as demonstrated
by Mark Zuckerberg taking his legislative testimony on the road to the European Par‑
liament.

Privacy is already dead—no one cares about data privacy.

Sahota 20

Neil Sahota (business advisor and contributor at Forbes), “Privacy Is Dead And Most
People Really Don’t Care,” Forbes, 14 October 2020, https://www.forbes.com/sites/neilsahota/2020/10/14/pr
is‑dead‑and‑most‑people‑really‑dont‑care/?sh=5dfd2ad87b73, accessed 10 March 2023

Have you read the terms and conditions to use Facebook? Your smart phone? Most
people have not, and probably with good reason. They’re hundreds, if not thousands, of
pages long. In fact, even contract lawyers with thirty years of experience have struggled
in trying to understand these agreements. Deep down, though, each of us knows that
we’re signing away our privacy rights to use these platforms and devices. So why do
we do it? We don’t truly value privacy as much as we like to believe we do.

Humans are social animals, so we have a strong need to interact with other people and
belong to something. Guess what social media does for us? It helps us stay connected.
Think about how you often hear about important life events now from friends and family.
Someone got engaged? They post it on Facebook. Good friend wins an award? They
post it on Instagram. Wondering what restaurant to eat at? We search on Yelp. Because
of network externalities, we must be on these platforms to stay connected. Don’t like
Facebook capturing your data? Well, good luck hearing the latest news on your friends
and family. In fact, by not being on the Facebook family of social media, it actually trig‑
gers feelings of isolation and disconnectedness. Are we willing to give up a little privacy
to keep our social bonds in place? Absolutely.

It’s easy for us to turn a blind eye to what is happening as long as people remain bliss‑
fully ignorant. Find this shocking? Consider the Netflix docudrama the Social Dilemma.
Former insiders of these companies explain and show how companies not only mine
and sell your data but also how they have created their systems to make you addicted
to their platforms. They need you to keep returning so they can collect more and more
data from you and monetize it. Many people who have watched the Social Dilemma are
astonished and mortified by what they learn. Yet how many of them really swear off
the platforms and delete their accounts? We have rationalized away what is happening
with essentially a mindset of: “I won’t ask, and you don’t tell…” so long as nothing
bad happens.

The secret compact on data “privacy” is that we all know (at least subconsciously) that
our data is being taken, mined, and sold. We’re often copacetic with that because we
get enough value out of the systems that we don’t mind this cost. However, part of this
implicit agreement is that nefarious people cannot hack their way to our data. (Buying
is apparently perfectly acceptable though for most people.) Consider the data breaches
that SnapChat or Instagram suffered. There was huge public outrage over the lack of
data security (not so much privacy.) People didn’t seem so appalled at the level of data
collected (even SnapChat taking contact information without requesting permission for
it.) The indignation really centered on these companies not preventing cyber‑crime.

This leads to perhaps the biggest challenge most people face: we don’t fully understand
what data these companies know about us. Have you ever requested a copy of your
data that Google or Facebook has? Most people are shell‑shocked at what these companies know
that they have never disclosed on those platforms. The power to aggregate data and
connect hidden dots to reveal more about us is a testament to the advanced analytics
these companies have. Remember when Target was able to piece together that a teenage
girl was pregnant before even her parents knew? We are giving so much more data
away than we realize, and that’s probably the real crux of the situation. Privacy is not
as important as how people are using the data.

We already live in a world where people are used to sharing everything online. You
know those phone phishing scams like the fraudsters pretending to be the IRS? Young
millennials and Generation Z fall victim to them the most of any generation because
they’re used to giving information away. People get important value from these plat‑
forms and devices and accept the trade offs for it. Data security is still paramount, but
the strong belief for data privacy is pretty much dead.

6.2 Regulation

6.2.1 General

Instituting basic regulations such as prohibiting real‑time capability, imposing database restrictions, and setting various requirements on the deployment of such technology allows us to capture the benefits of FRT without a full ban.

Feeney 19

Matthew Feeney (was the director of Cato’s Project on Emerging Technologies), “Should
Police Facial Recognition Be Banned?,” Cato Institute, 13 May 2019, https://www.cato.
org/blog/should‑police‑facial‑recognition‑be‑banned, accessed 6 March 2023

Concerns about surveillance, racial bias, and speech are clearly on the minds of San
Franciscan officials. These concerns look set to result in a ban on law enforcement using
facial recognition in San Francisco. Such a ban may well be justified given the state
of facial recognition technology and the potential for abuse. However, we should ask
whether there are any policies that would allow police to use facial recognition without
putting civil liberties at risk.

My own in‑progress list of necessary conditions for law enforcement facial recognition
deployment are the following:

— A prohibition on real‑time capability: Facial recognition technology should be used as an investigative tool rather than a tool for real‑time identification.

— Database restrictions: Law enforcement facial recognition databases should only in‑
clude data related to those with outstanding warrants for violent crimes. Law enforce‑
ment should only be able to add data related to someone to the database if they have
probable cause that person has committed a violent crime. Relatives or guardians of
missing persons (kidnapped children, those with dementia, potential victims of acci‑
dents or terrorist attacks) should be able to contribute relevant data to these databases
and request their prompt removal.

— Open source/data requirement: The source code for the facial recognition system as
well as the datasets used to build the system should be available to anyone.

— Public hearing requirement: Law enforcement should not be permitted to use facial
recognition technology without first having informed the local community and allowed
ample time for public comment.

— Threshold requirement: Deployment of facial recognition should be delayed until law enforcement can demonstrate at least a 95 percent identity confidence threshold across a wide range of demographic groups (gender, race, age, etc.).*

Such requirements would make law enforcement facial recognition very rare or per‑
haps even nonexistent, which are bullets I’m willing to bite. But these requirements
aren’t a ban. Given the current state of affairs it seems appropriate for San Francisco to
implement a facial recognition ban. Such a ban would be of reassurance to many San
Francisco residents, but we should consider whether bans are optimal facial recognition
policies.

6.2.2 Terrorism

Limiting FRT solely to combatting terrorism is justified.

Robbins 21

Scott Robbins (PhD, MSc, Post Doc Research Fellow, University of Bonn, Germany), Fa‑
cial Recognition for Counter‑Terrorism: Neither a Ban Nor a Free‑for‑All. In: Henschke,
A., Reed, A., Robbins, S., Miller, S. (eds.) Counter‑Terrorism, Ethics and Technology. Ad‑
vanced Sciences and Technologies for Security Applications. Springer, Cham. 2021. Pgs.
89‑104, https://doi.org/10.1007/978‑3‑030‑90221‑6_6

4 Conditions for the Use of Facial Recognition

Given the argument for bans on FRT and the privacy and free speech rights enshrined in
liberal democratic constitutions and human rights declarations, it is clear that the state
must justify the use of FRTs before they can be used to capture terrorists. This is not a
technology that simply improves upon a power that the state already had; instead, it is
an entirely novel power. That is the power to identify anyone that comes into view of
an FRT equipped camera without a human being watching the video feed.

Here I will outline the conditions to which FRT should be subject in order to operate justifiably in a liberal democracy. I expand on each in the sections below. First, the context in which FRT is being used must be one in which the public does not have a reasonable expectation of privacy. Second, the only goal should be to prevent serious crimes like terrorism from taking place. Finally, for FRTs to store and capture biometric facial data in a database, the individual in question must be suspected of committing a serious crime.

Using FRT only in places where people do not have a reasonable expectation of privacy, and ensuring such cameras are clearly marked, respects the right to privacy.

Robbins 21

Scott Robbins (PhD, MSc, Post Doc Research Fellow, University of Bonn, Germany), Fa‑
cial Recognition for Counter‑Terrorism: Neither a Ban Nor a Free‑for‑All. In: Henschke,
A., Reed, A., Robbins, S., Miller, S. (eds.) Counter‑Terrorism, Ethics and Technology. Ad‑
vanced Sciences and Technologies for Security Applications. Springer, Cham. 2021. Pgs.
89‑104, https://doi.org/10.1007/978‑3‑030‑90221‑6_6

4.1 Reasonable Expectation of Privacy

In a famous case in the United States, the supreme court ruled that Charles Katz had
a reasonable expectation of privacy when he closed the phone booth door [4, Chap. 1].
This meant that the evidence collected by the state, which was listening in on his conver‑
sations in that phone booth had to be thrown out. This notion of a ‘reasonable expec‑
tation of privacy’ is fundamental to how the value of privacy is interpreted in liberal
democracies. It is not just a legal notion but a notion which grounds how we act. In
our bedrooms, we have a reasonable expectation of privacy, so we can change clothes
without fear of someone watching. When Charles Katz closed the door to the phone
booth he was using, he enjoyed a reasonable expectation of privacy—he believed that no one should be listening to his conversation.

Facial data captured by FRTs should be at least as protected as voice data. CCTVs in
the public sphere should not be collecting information on individuals—something that
happens when CCTVs are equipped with FRT. When I walk down my street, I have a
reasonable expectation that my comings and goings are not being recorded—whether
it be a police officer following me around or by a smart CCTV camera recognizing my
face. Regular CCTVs do not record individuals’ comings and goings; rather, they record
what happens at a particular location.

The difference is that a CCTV camera does not record a line in a database that includes
my identity and the location that I was ‘seen’ at. CCTV equipped with FRT can record
such a line in a database—significantly empowering the state to perform searches that
tell them much about my comings and goings. Not only should these searches be linked
to clear justifications; but there should be clear justifications for collecting such intimate
data (their comings and goings) on individuals.

This reasonable expectation can be overridden if I have committed a serious crime or plan on committing a serious crime. This is because my right to privacy would be over‑
ridden by the “rights of other individuals…to be protected by the law enforcement agen‑
cies from rights violations, including murder, rape, and terrorist attack” [17, 110]. If one
were to be in the process of planning a terrorist attack, it would not be a surprise to them
that they were being surveilled. Terrorists take active measures to prevent surveillance
that they expect to occur. This may seem to justify the placing of smart CCTVs in public
spaces to identify terrorists.

CCTV cameras are currently placed in many public spaces. If something happens, the
authorities can review the CCTV footage to see who was responsible. In this case, the
place itself is being surveilled. Data on individuals is not ‘captured’ in any sense. There
is no way to search a database of CCTV footage for a particular name. One must look at
the footage. However, if this CCTV camera were to be “smart” and capture biometric fa‑
cial data along with video footage, then each individual who is captured by this camera
is being surveilled. The authorities now know each person that comes into this camera’s
view and what time they were there. This, even though an overwhelming majority of
people coming into any CCTV camera’s view has not committed, and does not plan to commit, a serious crime. Their privacy has been invaded.

This has ethical implications regarding scope creep and chilling behavior discussed in
Sect. 3. If FRT enabled CCTV cameras are in operation, then it is easy for the state to
add new uses for the technology. A simple database search could reveal everyone who
goes into an area with many gay bars. A gay man in a country where homosexuality is
considered unacceptable but not illegal may chill their behavior—that is, not go to gay bars for fear of those visits being documented. While the FRT enabled CCTV cameras were
initially installed to counter terrorism, the ability to easily search for anyone that has
come across it makes it easy to use it for other, illegitimate purposes.

The state could simply state that they will only use FRTs with a warrant targeted against
an individual suspected of a serious crime. For example, the authorities may have good
information regarding the planning of a terrorist attack by a particular person. It is im‑
perative that they find this person before they are able to execute the attack. They obtain
a warrant and then use the city’s network of FRT‑enabled CCTV cameras to ‘look’ for
this person. If this person’s face is captured by one of these cameras, then the authorities
are immediately notified.

If we bracket issues of efficacy and disparate impact, it appears that this would be a
useful power to the state—and subject to restrictions that protect privacy. The issue is
not whether or not to use FRTs, but how they can and should be used. However, these
would be merely institutional and perhaps legal barriers that are subject to interpreta‑
tion. The scope of national security is little understood. Donald Trump used the concept
to justify the use of collecting cell‑phone location data to track suspected illegal immi‑
grants [14]. The power enabled by FRTs is so great, and the justifications to use them
will be so little understood, that it will be near impossible for regular citizens to feel
and act as if they have privacy—even if they do, in principle, have it. Your partner may
promise to never read your journal unless you are either dead or in a coma; however,
the fact that she has a key and knows where it is will probably cause you do self sensor
what you write down—just in case. With a journal, and with your general comings and
goings, you should enjoy a reasonable expectation of privacy.

However, there are some public spaces where individuals do not enjoy a reasonable
expectation of privacy. Airports and border crossings are two such examples. For bet‑
ter or worse, we now expect little privacy in these contexts. Authorities are permitted
to question us, search our bags, search our bodies, submit us to millimeter scans, etc.
It would be rather odd to think that our privacy was invaded more by our faces be‑
ing scanned and checked against a criminal database. On regular public sidewalks, I
would be horrified to find out that the state recorded my comings and goings; however,
I would be shocked to find out the state did not record each time I crossed into and out
of the country. This points to the idea that there may be places where we should have
a reasonable expectation of privacy—whether we do or not.

A recent U.S. supreme court case illustrates this nicely. Timothy Carpenter was arrested
for armed robbery of Radio Shacks and T‑Mobile stores. The police used a court order
(which is subject to a lower standard than a warrant) to obtain GPS data gathered by his
cell phone and collected by the telecommunications companies MetroPCS and Sprint.
In an opinion written by chief justice John Roberts, the supreme court ruled that Timo‑
thy Carpenter should have a reasonable expectation of privacy concerning his constant
whereabouts. The government cannot simply, out of curiosity, obtain this data [24].
This prevents the widespread use of smart CCTV cameras in plain sight to undermine
our ‘reasonable expectation of privacy.’ The state should not use conspicuous surveil‑
lance as a way to claim that no one has a reasonable expectation of privacy where these
cameras exist. The critical point is that there are public spaces where citizens of a liberal
democracy should have a reasonable expectation of privacy.

Therefore, if there are places where citizens should not have a reasonable expectation
of privacy and FRTs are effective (they do not cause unequally distributed false positives
and false negatives across different groups), it may be justifiable to use FRTs in those
places. People expect the state to protect them from terrorism. If FRTs contribute to
keeping citizens safe from terrorists, then there is a good reason to use them. However,
based on the analysis above, they cannot simply be used anywhere as there are places
where citizens should have a reasonable expectation of privacy.

The above points to the allowable use of regular CCTV cameras in public spaces but pre‑
vents FRTs from operating in those same public spaces. The problem now is:
How will the public know the difference? This is a serious problem. After all, the right to
free expression may be ‘chilled’ because people believe that the state is surveilling their
actions. I may worry that because my friend lives above a sex shop, the state’s surveil‑
lance may cause them to believe I frequent the sex shop rather than visit my friend. I
may, therefore, not visit my friend very often. Or I may not join a Black Lives Matter
protest because I believe the state is using FRTs to record that I was there. This is the
“chilling effect” mentioned in Sect. 3.2. This can occur even if the state is not engaging
in such surveillance. The only thing that matters is that I believe it to be occurring.

The ‘chilling effect’ puts the burden on the state to assure the public that such unjusti‑
fied surveillance is not happening. Where it is justified, there are appropriate safeguards
and oversight to prevent misuse, etc. This requires institutional constraints, laws, and
effective messaging. As [20] argue, institutional constraints and laws alone will not as‑
sure the public that the state is not practicing unjustified intrusive surveillance. And
vice versa, effective messaging alone will not ensure that the state is not practicing un‑
justified intrusive surveillance.

For example, if the state creates laws that prevent the use of FRT on regular city streets
but the cameras that are used look the same as the smart CCTV cameras that have FRT
in airports, then the public will not be assured that facial recognition is not taking place.
This sets up the conditions for the chilling effect to occur. However, if the state uses cam‑
eras that are clearly marked for facial recognition in places like airports, and cameras
that are clearly marked ‘no facial recognition’ on city streets but no laws are preventing
them from using FRT on city streets, then the public has a greater chance of being as‑
sured. However, nothing is preventing the state from using the footage of those cameras
and running facial recognition on them after the video has been captured. Therefore, it
takes both institutional constraints (bound by law) and effective messaging to meet the
standards which support liberal democratic values like free expression.

This creates two conditions for the state’s use of FRT. First, the state must create insti‑
tutional constraints that only allow FRTs to be used in places where people do not (and
should not) enjoy a reasonable expectation of privacy (e.g., airports, border crossings).
Second, the cameras equipped with FRT must be marked to assure the public that they
are not being surveilled in places that they should have a reasonable expectation of pri‑
vacy.

Terrorism is a serious threat that justifies using FRT to combat it.

Robbins 21

Scott Robbins (PhD, MSc, Post Doc Research Fellow, University of Bonn, Germany), Fa‑
cial Recognition for Counter‑Terrorism: Neither a Ban Nor a Free‑for‑All. In: Henschke,
A., Reed, A., Robbins, S., Miller, S. (eds.) Counter‑Terrorism, Ethics and Technology. Ad‑
vanced Sciences and Technologies for Security Applications. Springer, Cham. 2021. Pgs.
89‑104, https://doi.org/10.1007/978‑3‑030‑90221‑6_6

4.2 Cause for the State’s Use of FRTs

The state should not simply use new technology because it exists. There must be a
purpose for using technology that is greater than the harms and privacy infringements
that occur due to that technology. It would be odd to use wiretaps to surveil a serial
jaywalker. Wiretaps are used in highly restrictive situations involving serious criminals.
FRTs should be no different. The point is that “justifications matter.” Collecting facial
data by using FRTs for countering terrorism does not mean that the data is now fair game
for any other use. Each use must have its moral justification—and if that justification
no longer obtains, then that data should be destroyed [8, 257].

Terrorism is a serious enough risk (in terms of possible harm—not necessarily in terms
of likelihood) that it features as a justification employed by those advocating the use of
FRTs. In these cases, one does not feel as if the privacy rights of terrorists are so strong
that they should not be surveilled. We expect the government to do what they can to
find people like this. Their privacy rights are overridden by others’ rights not to be
injured or killed in a terrorist attack.

The problem is that FRTs must also surveil everyone that comes into view of one of its
cameras. That is, each face is used as an input to an algorithm that attempts to match that
face to an identity and/or simply check whether that face matches one of the identities
of suspected terrorists. In a technical sense, this technology could only be used for the
legitimate purpose of finding terrorists. However, as argued above—the difficulty in
assuring the public that this is the case will have a chilling effect. Furthermore, the real
possibility of scope creep makes placing these cameras, in places where people should
have a reasonable expectation of privacy, dangerous.

This means that no matter the cause, FRTs should not be employed in places where inno‑
cent people have a reasonable expectation of privacy (as argued above). However, once
we restrict its use to those places where there is no reasonable expectation of privacy,
then finding serious criminals using FRTs poses no ethical problem (providing that it
reaches a threshold of effectiveness). The third condition for the use of FRTs is that FRTs
should be restricted to finding serious criminals (e.g., terrorists).

Forbidding the use of third parties in surveillance solves privacy and data concerns.

Robbins 21

Scott Robbins (PhD, MSc, Post Doc Research Fellow, University of Bonn, Germany), Fa‑
cial Recognition for Counter‑Terrorism: Neither a Ban Nor a Free‑for‑All. In: Henschke,
A., Reed, A., Robbins, S., Miller, S. (eds.) Counter‑Terrorism, Ethics and Technology. Ad‑
vanced Sciences and Technologies for Security Applications. Springer, Cham. 2021. Pgs.
89‑104, https://doi.org/10.1007/978‑3‑030‑90221‑6_6

4.3 Reliance on Third‑Party Technology

The state’s reliance on third‑party technology companies to facilitate surveillance is perhaps the area where the most violations of liberal democratic values occur. For example,
the government cannot simply scrape the entire internet of pictures of people, match the
faces to names, create a detailed record of things you have done, places you have gone,
people you have spent time with, etc. Especially without a just cause. This amounts to
intrusive surveillance of every individual. In liberal democracies, there must be a justi‑
fication (resulting in a warrant approved by a judge) to engage in such surveillance of
an individual. Surveilling a million people should not be considered more acceptable
than the surveillance of one person. However, Clearview A.I. has been scraping images
from the web and creating digital identities for years. Many police departments and
government agencies are now using this third‑party company to aid in using FRTs [9].

This causes significant ethical concern for three reasons: first, some third‑party compa‑
nies do not follow the constraints already mentioned above; second, sensitive data is
being stored and processed by third‑party companies that have institutional aims that
could incentivize the misuse or abuse of this data; and third, the role that these compa‑
nies play in surveillance may reduce the public’s trust in them.

4.3.1 Contracting out the Bad Stuff

When I first encountered FRT at an airport, I was a bit squeamish. It took me some time
to understand why. Indeed, I am not against using such technology to prevent terrorists
from entering the country or detecting people who are wanted in connection with a
serious crime or finding children on missing person lists. I also did not feel that
I had a reasonable expectation of privacy. I expect to be questioned by a border guard
and have my passport checked. I expect that my bag or my body could be searched.
And I expect to be captured on camera continuously throughout the airport. So why
did I have this immediate adverse reaction towards the use of FRT by the state?

The answer lies in my knowledge regarding the contracting out of such work to third‑
party technology companies. I am expected to trust the state and the third party tech‑
nology company that is behind the technology. Are they capturing my biometric face
data and storing it on their third‑party servers? Are there institutional barriers prevent‑
ing them from reusing or selling that data for their benefit? Is the data captured, sent,
stored, and processed in line with best security practices? In short, I fear that even if
the proper laws and constraints regarding the state’s use of FRTs are in place, that third‑
party technology company is not bound by them or does not respect them.

This is wrong. There are laws in place that prevent the United States, for example, from contracting out intrusive surveillance on its citizens to other countries. So the U.S.—not
being able to collect data on its citizens—cannot ask the U.K. to collect data on a U.S.
citizen. The same should be true for FRTs. Suppose the U.S. cannot gather facial data
on the entire U.S. population (practicing bulk surveillance). In that case, the U.S. should
also not contract such work out to a third‑party company—or use a third party company
that has engaged in this practice. If I contract the work of killing an enemy to somebody
else, that does not absolve me of all responsibility regarding the murder of that enemy.

It is not, in principle, unacceptable to use tools created by third‑party companies. Third‑party companies often have the resources and incentives to create far better tools than
the government could create. Silicon valley technology companies attract many creative
and motivated thinkers—and pay them a salary that the government could not afford.
It would be detrimental to say that the government cannot use tools created by these
companies. However, big data and artificial intelligence have made this relationship
much more complicated.

Rather than merely purchasing equipment, the government is now purchasing services
and data. A.I. algorithms created by third‑party companies are driven by the collection
of vast amounts of data. If this algorithm is to be used by the state, the state must ensure
that the data driving it was collected according to laws governing the state’s data col‑
lecting capabilities. Furthermore, the hosting of the data that the government collects
is increasingly being contracted out to cloud services like Amazon Web Services. This
is so because this data processing is extremely resource‑intensive and something that
third‑party companies are more efficient at. This creates a situation where our biometric
facial data may have to be sent to a third‑party company for storage and/or processing.
The company in question must have no ability to see/use this data. This is so for two rea‑
sons. First, these companies have institutional aimsFootnote10 that have nothing to do
with the security of the state. This creates incentives for companies to use this data for

260
6 Con Evidence

their aims—creating an informational injustice [10]. Furthermore, this blurring of institutional aims (e.g., maximizing profits and countering terrorism) could be detrimental
to the company. As a result of NSA programs like PRISM, which purportedly allows
the state to gain access to the company servers of Google and Facebook [5], rival com‑
panies are now advertising that they are outside of U.S. jurisdiction and can therefore
be used without fear of surveillance.

Second, this data is now being entrusted to companies that may not have the same secu‑
rity standards or oversight expected for the storage and processing of sensitive surveil‑
lance data. Recently the Customs and Border Patrol contracted out facial recognition to a
third‑party company which was breached in a cyber‑attack causing the photos of nearly
100,000 people to be stolen. Customs and Border Patrol claimed no responsibility—
saying it was the third‑party company’s fault. The state should be responsible for the
security of surveillance data [19, 35].

This discussion should cause constraints on how the state uses third‑party companies to
facilitate surveillance. Condition number four for the state’s use of FRTs is that the state
should not use third‑party companies that violate the first three conditions during the
creation or use of its service. This means that the state should know about the services
they are using. Furthermore, a fifth condition is that the third‑party company should
not be able to access or read the sensitive data collected by the state. This keeps the state
in control of this sensitive surveillance data.

6.2.3 Accountability Rules

Accountability requirements would help citizens exercise a private right of action regarding FRT.

McClellan 20

Elizabeth McClellan (J.D. Candidate, 2021, University of Maryland Francis King Carey
School of Law), Facial Recognition Technology: Balancing the Benefits and Concerns, 15
Journal of Business & Technology Law 363, 2020, https://digitalcommons.law.umaryland.
edu/jbtl/vol15/iss2/7

A. Accountability

The first step to regulating facial recognition technology should be to ensure that all
users of the technology are held accountable for their uses of the technology. Although
federal statutes would ensure uniformity across the nation for the proper and improper
uses of the technology, it is important that citizens are able to exercise their own rights
to ensure the technology is being used properly. This would most efficiently be accom‑
plished by providing citizens with a private right of action regarding the use of facial
recognition technology.

In line with the Illinois statute,98 violations of biometric privacy acts should provide
a private right of action for citizens. Absent a private right of action, any violations of
biometric privacy would be left in the hands of the government to decide whether or
not to get involved, rather than up to the citizens whose privacy was violated. Including
a provision with a private right of action for citizens would increase accountability for
both law enforcement and private companies utilizing facial recognition technology.

The private right of action is historically important to the American judicial system and
is something that European advocates have attempted to model.99 American reform‑
ers originally pushed for a private right of action for citizens with a desire to create a
more efficient legal system and make it easier for individuals with meritorious claims
to “have their day in court.”100 The importance of a private right of action has been
illustrated most notably through antitrust enforcement. Following the Sherman Act of
1890, courts began to recognize substantive rights of plaintiffs and encouraged private
actions, which in turn increased the awareness of issues in antitrust.101

Similarly, a private right of action for the misuse of biometrics would increase aware‑
ness of the issues surrounding the technology. Although Rosenbach v. Six Flags En‑
tertainment Corporation illuminated the potential weaknesses and flaws of the Illinois
Biometric Information Privacy Act,102 no other state statute allows a private cause of
action making it impossible to challenge these statutes unless the government chooses
to interfere.103 A private cause of action would allow citizens to draw attention to the is‑
sues surrounding facial recognition technology and biometric privacy, and would help
hold both private corporations and the government accountable for their uses of the
technology.

6.2.4 Transparency Requirements

Transparency requirements, similar to those in BIPA, ensure citizens have the information needed to meaningfully consent to FRT.

McClellan 20

Elizabeth McClellan (J.D. Candidate, 2021, University of Maryland Francis King Carey
School of Law), Facial Recognition Technology: Balancing the Benefits and Concerns, 15
Journal of Business & Technology Law 363, 2020, https://digitalcommons.law.umaryland.
edu/jbtl/vol15/iss2/7

B. Transparency

The second step to effectively regulating facial recognition technology is ensuring that
companies and governments utilizing the technology are transparent and honest with
users about the uses of the technology. Following the Illinois Biometric Information Pri‑
vacy Act, federal facial recognition technology regulations should require all companies
utilizing the technology to publicize their privacy policy.104 As noted, most companies
today that provide some type of “privacy policy” are often intentionally vague and ex‑
clude definitions and specifics as to the usage of the data collected.105 To combat these
issues and encourage transparency with users, the federal regulations should also spec‑
ify exactly what information should be required in these privacy policies. For example,
just as the Illinois statute requires private entities to develop a written policy and es‑
tablish a “retention schedule” for the information collected,106 the federal regulation
should specify further what is required in a “retention schedule,” allowing for proper
usage of the technology, but ensuring that companies do not retain the information for
longer than necessary.

The federal regulations should also provide a list of definitions for specific facial recogni‑
tion technology words and phrases to ensure consistency across the country. Although
the few state statutes in place now provide definitions for similar words, such as “bio‑
metric identifier,” their definitions vary greatly and create serious inconsistencies. 107
In order to create transparency for citizens and provide a uniform understanding of
the type of information that is protected from both private usage and government us‑
age, the federal regulation should specify a definition for these words, and other words
commonly used in the facial recognition technology field. The creation of consistent
requirements and definitions for private corporations and governments utilizing facial
recognition technology will ensure uniformity throughout the United States and ensure
that all citizens are equally informed about their rights regarding the technology.

6.2.5 Privacy Protections

Opt‑out provisions and strict limitations on the use of FRT respect privacy.

McClellan 20

Elizabeth McClellan (J.D. Candidate, 2021, University of Maryland Francis King Carey
School of Law), Facial Recognition Technology: Balancing the Benefits and Concerns, 15
Journal of Business & Technology Law 363, 2020, https://digitalcommons.law.umaryland.
edu/jbtl/vol15/iss2/7

C. Privacy

Lastly, federal facial recognition technology regulations should ensure and protect the
privacy of all citizens, as one of facial recognition technology’s most concerning aspects
is the potential violation of citizens’ privacy by both law enforcement and private com‑
panies. Although facial recognition technology provides benefits in the criminal justice
system,108 federal regulations must support these benefits while ensuring the privacy
of citizens and guaranteeing safety from “unreasonable searches and seizures” of the
government.109 This may be accomplished by allowing law enforcement’s use of the
technology, but with certain limitations.

In 1956, Maryland passed the Wiretapping and Electronic Surveillance statute.110 This
statute prohibits the private recording of conversations, exclusive of certain police activ‑
ity, such as engaging in a criminal investigation with reasonable cause, or where an offi‑
cer’s safety may be in jeopardy.111 However, under the statute, officers may not record
private conversations absent these specified circumstances.112 Following the structure
of the Maryland Wiretapping and Electronic Surveillance statute, the use of facial recog‑
nition technology by law enforcement should also be prohibited under the federal reg‑
ulations, exclusive of certain activity. In private places, such as homes and cars, the
government and law enforcement should not be allowed to use facial recognition de‑
vices absent a warrant. However, due to the potential benefits of the technology, the
devices may be used in public places if there is probable cause, similar to the Maryland
Wiretapping and Electronic Surveillance statute.

A regulation structure such as this is also supported by Kyllo. While the Supreme Court
held that thermal imaging devices should not be used to examine a private home since
the devices were not “in general public use” and were used “to explore details of the
home that would previously have been unknowable without physical intrusion,”113
the usage of facial recognition technology to monitor drivers, such as Australia’s usage
of the technology,114 similarly constitutes a physical intrusion. Therefore, any use of the technology to monitor drivers or individuals in their home, without their consent,
should be explicitly prohibited by federal statute in order to ensure uniformity across
the country.

Additionally, in order to ensure privacy and autonomy in private uses of the technology,
citizens should have the right to choose whether to engage or not engage in the usage of
facial recognition technology. Federal regulations should always allow an exception for
citizen consent to engage in law enforcement or private corporations’ use and retention
of the information. However, as of now, most private companies’ policies seem to in‑
dicate a default method of privacy that automatically opts users “in” to the data usage,
and requires users to actively opt “out.”115 Federal regulations should instead require
that the default method of privacy opts users “out” of the data collection and usage
unless users actively consent to engage in the technology. By providing citizens with
the autonomy to engage in facial recognition technology, and by closely regulating law
enforcement’s usage of the technology without citizen consent, all citizens, regardless
of their location or jurisdiction, would have their privacy protected from unwarranted
intrusions.

6.2.6 Semi‑Open Public Areas

FRT should be allowed for semi‑open public areas.

Solarova et al. 22

Sara Solarova (Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia), Juraj Podroužek, Matúš Mesarčík, Adrian Gavornik & Maria Bielikova. Recon‑
sidering the regulation of facial recognition in public spaces. AI Ethics (2022).
https://doi.org/10.1007/s43681‑022‑00194‑0

The establishment of an identity that is secure and convenient has become critical, thus
prompting the need for reliable authentication techniques in the wake of networking,
communication, and mobility [9]. Biometric facial recognition technologies offer such
establishments based on our unique characteristics and many other benefits. Data pro‑
cessed by facial recognition systems have been used for numerous years in the area of
freedom, security and justice safeguarding (supra)national security [10]. The unprece‑
dented advancements in AI‑powered FRT extend beyond the purposes of national se‑
curity hence necessitating a separate discussion to which we have an ambition to con‑
tribute. Following the work of Almeida et al. [11] and Moraes et al. [12] on possible
regulatory and ethical approaches for FRT deployment predominantly in law enforce‑
ment, our aim is to contribute to the debate of effective regulation of FRT by demonstrat‑
ing that a distinction of public spaces between semi‑open public spaces and open public
spaces allows for more clarity and transparency in upcoming discussions. Our addition
also lies in the assessment of various ethical and societal risks and the subsequent illus‑
tration of specific countermeasures including their level of applicability to demonstrate
the differences between use‑cases in which FRT could be deployed in various public
spaces.

We believe that trustworthy facial recognition technology can be achieved only if its de‑
velopment and use is intertwined with ethics at its core. We elaborate on this idea from
several perspectives. First, we support regulation of FRT that weighs the ethical and so‑
cietal risks and identifies appropriate countermeasures based on a risk‑based approach.
Second, we hold that this risk‑based approach should not only be applied to facial recog‑
nition technology in general but should take into account a set of possible use‑cases in
which FRT could be deployed and used. And third, we suggest that analysing the im‑
pacts of FRT for specific use‑cases will allow us not only to better identify and categorise
ethical and societal risks but also to propose more appropriate countermeasures.

For this purpose, we outline and discuss some of the most vocalised concerns regarding
FRT, considering the overarching ethical principles and human values of transparency,
fairness, robustness, privacy, and human agency. We propose a non‑exhaustive set of
use‑cases, from which the risks and its mitigations can be applied in a more general
discussion about effective regulation of FRT, and AI in general.

It should be also mentioned that we shall not refrain from analysing the impacts of fa‑
cial recognition technologies in all kinds of spaces including online spaces, even if pro‑
posed legislation is focusing merely on the physical environment. This topic, however,
requires more elaborated discussion and is beyond the scope of our paper. Nonetheless,
we are convinced that the ethical issues of facial recognition in online spaces are equally
important and deserve further attention.

1.1 Facial recognition in public spaces

The debate about regulation of FRT in Europe is grounded mainly in the context of pub‑
lic spaces. This notion is also crucial in the subsequent debate in favour of or against the
deployment of FRT. According to the AIA [1] which aims to put forward the horizontal
regulation of AI in general, public space is defined as “any physical space accessible to
the public, irrespective of private or public ownership” (Article 3 [13]). This definition
embraces not only all open and freely accessible spaces like streets or town squares but
all spaces that are in principle accessible for the public including spaces like airports or
commercial buildings such as office buildings, retail centres or entertainment centres
[14], which are also under some circumstances accessible, despite belonging to private
entities. The view is also supported by Recital 9 of AIA classifying streets, relevant parts
of government buildings and most transport infrastructure, spaces such as cinemas, the‑
atres, shops, and shopping centres under the notion of public space. From the legal per‑
spective, these areas in which FRT can be deployed are considered indistinguishable
because they fall into the broader category of public spaces. However, from the per‑
spective of ethical and societal risks that stem from deploying FRT and their possible
mitigations, these areas are not equivalent and warrant a more nuanced differentiation.

Differentiating between public spaces is crucial in many aspects. For example, it pro‑
vides supplemental information on the data set under investigation, i.e., the set of peo‑
ple that move across a space. Such differentiation between the sets of people brings
into the forefront the fact that movement and presence in some of the public spaces al‑
ready carries a certain expectation of one’s identity being checked. Also, there are public
spaces with a relatively stable and predictable set of people that need to be recognised
(entrances to stadiums or company premises to an extent). And as we will demonstrate
later, the set of ethical and societal risks alongside effective countermeasures to these
risks may vary between different kinds of public spaces as well.

For this reason, we propose a distinction between open public spaces and semi‑open
public spaces [15, 16], which is more sensitive to different levels of accessibility of pub‑
lic spaces. We regard open public space as a physical space that is publicly accessible
without further specific social selection. This does not mean, however, that the use of
such a space is not subject to certain rules governing its utilisation and that such a space
does not have its owner or administrator. Frequently open public spaces are owned or
managed by state institutions or public administration. On the other hand, semi‑open
public spaces represent a physical area that might be owned by either private or state
entities. Accordingly, and unlike open public spaces, semi‑open public spaces are not
necessarily a world of strangers [16, 17]. Interactions that happen in semi‑open public
spaces are more structural, involving interpersonal networks of people that attend such
spaces, adding certain private character to them particularly lacking in public spaces
[18]. Therefore, this character might impose a certain degree of regulation of social ac‑
cessibility, such as the expectation to be identified in order to be granted a further entry.

We believe that the knowledge of this differentiation has the potential to improve the
transparency and clarity of debate on effective regulation of FRT. It should also increase
the awareness of the public regarding specific purposes and contexts for their biometric
data processing and debunk the universalization of FRT in public spaces as inherently
intrusive. Unfortunately, this differentiation is only barely present in current and forth‑
coming regulation of FRT.

6.2.7 Ban Facial Surveillance

Banning facial surveillance but regulating facial identification prevents abuses while enabling law enforcement to combat crime.

Friedman and Ferguson 19

Barry Friedman (professor and director of the Policing Project at New York University
School of Law and the author of “Unwarranted: Policing Without Permission”) and
Andrew Guthrie Ferguson (a professor at the David A. Clarke School of Law at Uni‑
versity of the District of Columbia, a fellow at the Policing Project and the author of
“The Rise of Big Data Policing”), “Here’s a Way Forward on Facial Recognition,” The
New York Times, 31 October 2019, https://www.nytimes.com/2019/10/31/opinion/facial‑
recognition‑regulation.html, accessed 9 March 2023

It’s no surprise, then, that Congress has been holding hearings about whether to ban or
regulate the technology. We believe there is a way forward that can address the serious
privacy concerns while also acknowledging the argument made recently by James P.
O’Neill, New York City’s police commissioner, in an Op‑Ed article in The Times, that “it
would be an injustice to the people we serve if we policed our 21st‑century city without
using 21st‑century technology.”

The solution is to distinguish between two very different uses of facial recognition tech‑
nology, banning one and allowing but tightly regulating the other.

We should ban “face surveillance,” the use of facial recognition (in real time or from
stored footage) to track people as they pass by public or private surveillance cameras,
allowing their whereabouts to be traced.

On the other hand, we should allow “face identification”— again, with strict rules — so
the police can use facial recognition technology to identify a criminal suspect caught on
camera.

With cameras so pervasive on street poles and buildings, widespread face surveillance
would be Big Brother come to life, allowing for the tracking of our every movement
and the stitching together of intimate portraits of our lives. Yes, banning it could cost
the police the ability to nab a dangerous fugitive on the run. But allowing it could lead
to the mass surveillance that China is deploying. Most Americans aren’t going to be
comfortable with that, nor should they be.

Law enforcement should not have a problem with banning facial surveillance. In his
essay in The Times, Commissioner O’Neill did not argue for face surveillance. And the
official who heads the F.B.I.’s information services branch, Kimberly del Greco, has said,
“We do not perform real‑time surveillance.” The Chicago and Detroit police depart‑
ments acquired the technology to conduct face surveillance, but after objections from
the public, said they would not use it.

Let’s settle this now and adopt a ban on face surveillance.

On the other hand, face identification counsels a different response. Police officers face a laborious and often fruitless task when they try to match photos of crime suspects to mug shots of people who have already been arrested. Why not use facial recognition technology to assist law enforcement in this effort?

We think legislatures should allow but tightly regulate face identification. We would impose five requirements.

First, face identification should not be deployed at all until it can recognize the faces of all races and genders equally effectively. There’s enough racial bias in the criminal justice system without technology making matters worse. This can and will get solved, but until then we ought to declare a pause.

Second, face identification should be available to law enforcement only for the most serious of crimes, like murder, rape, robbery and aggravated assault. No more sniffing through motor vehicle department databases for unlawful immigrants, as I.C.E. has done, or chasing down petty criminals. The country already has overcrowded jails; too many people — disproportionately people of color — are dragged into the criminal justice system. Unless we limit the use of face identification to the most serious offenses, we will worsen this problem.

Third, and perhaps counterintuitively, use of facial recognition technology should not be limited to criminal databases. Commissioner O’Neill sought to reassure readers that his department was using only arrestee databases for searches, not motor vehicle photos. Even some civil liberties advocates assume “mug shot” databases are less problematic than databases of innocent people.

But this is exactly backward. Mug shot databases are the product of decades of discriminatory policing for offenses like drug crimes; using those will continue us on this course. To catch the people who commit serious offenses, the police should search a database that includes all our faces.

Fourth, face identification should not be allowed without a judicial warrant. Without judicial supervision, we can’t be sure face identification is used only as permitted.

Finally, any law allowing face identification ought to come with penalties for misuse.
Courts and legislatures constantly set up guardrails for law enforcement, but fail to
impose penalties for violations. Officials and departments should face serious conse‑
quences if they fail to follow the rules.

This proposal isn’t going to please everyone. But as we read the debates over the last
months, this is a place where compromise might work. It allows law enforcement use of
facial recognition where it makes sense, with serious protections against spillover and
misuse.
