
1NC Shirley Dubs ACTUAL---Dartmouth BC

Innovation Adv
1NC---K
Algorithms are not people. Interpreting AI as a creative agent rather than a data
processing equation cements a computational hierarchy that steals unpaid labor,
seeds an inevitable AI winter, and distracts from the immediate humanitarian
catastrophes of premature technology.
Lanier 20 – Jaron Zepel Lanier is an American computer scientist, visual artist, computer philosophy
writer, technologist, futurist, and composer of contemporary classical music. Considered a founder of
the field of virtual reality, Lanier and Thomas G. Zimmerman left Atari in 1985 to found VPL Research,
Inc., the first company to sell VR goggles and wired gloves. In the late 1990s, Lanier worked on
applications for Internet2, and in the 2000s, he was a visiting scholar at Silicon Graphics and various
universities. In 2006 he began to work at Microsoft, and from 2009 has worked at Microsoft Research as
an Interdisciplinary Scientist (“The Myth Of AI”, Edge, Available online at
https://www.edge.org/conversation/jaron_lanier-the-myth-of-ai, Accessed 10-05-2022)
THE MYTH OF AI

A lot of us were appalled a few years ago when the American Supreme Court decided , out of the blue, to
decide a question it hadn't been asked to decide , and declare that corporations are people. That's a
cover for making it easier for big money to have an influence in politics . But there's another angle to it,
which I don't think has been considered as much : the tech companies, which are becoming the most
profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than
that. They might be people because the Supreme Court said so, but they're essentially algorithms.
If you look at a company like Google or Amazon and many others, they do a little bit of device manufacture, but the only reason they do is to create a channel
between people and algorithms. And the algorithms run on these big cloud computer facilities.

The distinction between a corporation and an algorithm is fading. Does that make an algorithm a
person? Here we have this interesting confluence between two totally different worlds . We have the
world of money and politics and the so-called conservative Supreme Court , with this other world of what
we can call artificial intelligence, which is a movement within the technical culture to find an
equivalence between computers and people. In both cases, there's an intellectual tradition that goes
back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've
been intertwined.

The idea that computers are people has a long and storied history. It goes back to the very origins of
computers, and even from before. There's always been a question about whether a program is
something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be
a program. There has been a domineering subculture—that's been the most wealthy, prolific, and
influential subculture in the technical world —that for a long time has not only promoted the idea that
there's an equivalence between algorithms and life , and certain algorithms and people, but a historical
determinism that we're inevitably making computers that will be smarter and better than us and will
take over from us.

That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what
they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing , because
people had their chance and now we should give it to the machines ." Then you'll have other people say,
"Oh, that's horrible, we must stop these computers ." Most recently, some of the most beloved and respected figures in the tech and
science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."

In the past, all kinds of different figures have proposed that this kind of thing will happen, using different terminology. Some of them like the idea of the computers
taking over, and some of them don't. What I'd like to do here today is propose that the whole basis of the conversation is itself askew, and confuses us, and does
real harm to society and to our skills as engineers and scientists.

A good starting point might be the latest round of anxiety about artificial intelligence, which has been stoked by some figures who I respect tremendously, including
Stephen Hawking and Elon Musk. And the reason it's an interesting starting point is that it's one entry point into a knot of issues that can be understood in a lot of
different ways, but it might be the right entry point for the moment, because it's the one that's resonating with people.

The usual sequence of thoughts you have here is something like: "so-and-so," who's a well-respected
expert, is concerned that the machines will become smart, they'll take over, they'll destroy us, something
terrible will happen. They're an existential threat, whatever scary language there is. My feeling about
that is it's a kind of a non-optimal, silly way of expressing anxiety about where technology is going. The
particular thing about it that isn't optimal is the way it talks about an end of human agency.

But it's a call for increased human agency, so in that sense maybe it's functional, but I want to go a little
deeper in it by proposing that the biggest threat of AI is probably the one that's due to AI not actually
existing, to the idea being a fraud, or at least such a poorly constructed idea that it's phony . In other
words, what I'm proposing is that if AI was a real thing, then it probably would be less of a threat to us
than it is as a fake thing.

What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field. Now, if
we talk about the
particular technical challenges that AI researchers might be interested in , we end up with something
that sounds a little duller and makes a lot more sense.

For instance, we can talk about pattern classification. Can you get programs that recognize faces, that
sort of thing? And that's a field where I've been active. I was the chief scientist of the company Google bought that got them into that particular game some
time ago. And I love that stuff. It's a wonderful field, and it's been wonderfully useful.

But when you add to it this religious narrative that's a version of the Frankenstein myth , where you say
well, but these things are all leading to a creation of life , and this life will be superior to us and will be
dangerous ... when you do all of that, you create a series of negative consequences that undermine engineering
practice, and also undermine scientific method, and also undermine the economy.
The problem I see isn't so much with the particular techniques, which I find fascinating and useful, and am very positive about, and should be explored more and
developed, but the mythology around them which is destructive. I'm going to go through a couple of layers of how the mythology does
harm.

The most obvious one, which everyone in any related field can understand , is that it creates this ripple
every few years of what have sometimes been called AI winters, where there's all this overpromising
that AIs will be about to do this or that . It might be to become fully autonomous driving vehicles instead
of only partially autonomous, or it might be being able to fully have a conversation as opposed to only
having a useful part of a conversation to help you interface with the device.

This kind of overpromise then leads to disappointment because it was premature , and then that leads to
reduced funding and startups crashing and careers destroyed, and this happens periodically, and it's a
shame. It hurt a lot of careers. It has helped other careers, but that has been kind of random; depending on where you fit in the phase of this process as you're
coming up. It's just immature and ridiculous, and I wish that cycle could be shut down. And that's a widely shared criticism. I'm not saying anything at all unusual.

Let's go to another layer of how it's dysfunctional. And this has to do with just clarity of user interface , and then that turns into
an economic effect. People are social creatures. We want to be pleasant, we want to get along. We've all spent many years as children learning how to
adjust ourselves so that we can get along in the world. If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you
should do, we have a tendency to accept that.

Since our economy has shifted to what I call a surveillance economy , but let's say an economy where
algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely
on big data in order to figure out who you should date , who you should sleep with, what music you
should listen to, what books you should read, and on and on and on. And people often accept that because
there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
I'll give you a few examples of what I mean by that. Maybe I'll start with Netflix. The thing about Netflix is that there isn't much on it. There's a paucity of content on
it. If
you think of any particular movie you might want to see , the chances are it's not available for
streaming, that is; that's what I'm talking about. And yet there's this recommendation engine , and the
recommendation engine has the effect of serving as a cover to distract you from the fact that there's
very little available from it. And yet people accept it as being intelligent, because a lot of what's available
is perfectly fine.
The one thing I want to say about this is I'm not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so
this is just another layer of theatrical illusion—more power to them. That's them being a good presenter. What's a theater without a barker on the street? That's
what it is, and that's fine.

But it does contribute, at a macro level, to this overall atmosphere of accepting the
algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is
serving to distract you from the fact that there's not much choice anyway.
There are other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the
recommendations are particularly good. There's
no way to compare them to an alternative , so you don't know what
might have been. If you want to put the work into it, you can play with that; you can try to erase your
history, or have multiple personas on a site to compare them . That's the sort of thing I do, just to get a
sense. I've also had a chance to work on the algorithms themselves , on the back side, and they're interesting,
but they're vastly, vastly overrated.

I want to get to an even deeper problem, which is that there's no way to tell where the border is between measurement
and manipulation in these systems. For instance, if the theory is that you're getting big data by observing a
lot of people who make choices, and then you're doing correlations to make suggestions to yet more
people, if the preponderance of those people have grown up in the system and are responding to
whatever choices it gave them, there's not enough new data coming into it for even the most ideal or
intelligent recommendation engine to do anything meaningful.

In other words, the only way for such a system to be legitimate would be for it to have an observatory that
could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a
system that measures which manipulations work, as opposed to which ones don't work, which is very
different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened.
That's a pretty clear thing. What's not clear is where the boundary is.
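To make the measurement-versus-manipulation point concrete, here is a minimal, hypothetical sketch in Python (ours, not Lanier's): a two-item recommender that recommends whatever its own logs favor. The follow_rate parameter and the 60 percent "true" preference are invented for illustration; the point is only that when users mostly accept the recommendation, the system's measured preference drifts far from the unsteered baseline it never gets to observe.

```python
import random

# Invented for illustration: 60% of an unsteered population would pick item A.
TRUE_PREF_A = 0.6

def choose(recommended, follow_rate):
    """A user accepts the recommendation with probability follow_rate,
    otherwise follows their own underlying preference."""
    if random.random() < follow_rate:
        return recommended
    return "A" if random.random() < TRUE_PREF_A else "B"

def run(n_users=100_000, follow_rate=0.8):
    counts = {"A": 1, "B": 1}  # what the engine "measures" from its own logs
    for _ in range(n_users):
        # The engine recommends whichever item its own history favours so far.
        rec = "A" if counts["A"] >= counts["B"] else "B"
        counts[choose(rec, follow_rate)] += 1
    share_a = counts["A"] / (counts["A"] + counts["B"])
    print(f"follow_rate={follow_rate:.1f}  measured share of A = {share_a:.2f}")

if __name__ == "__main__":
    random.seed(0)
    run(follow_rate=0.0)  # an unsteered 'observatory': recovers roughly 0.60
    run(follow_rate=0.8)  # users mostly accept recommendations: estimate inflates toward 0.9
```

The sketch is the "observatory" argument in miniature: the second run measures which manipulations worked, not what people would have chosen had the system not intervened.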

If you ask: is
a recommendation engine like Amazon more manipulative , or more of a legitimate
measurement device? There's no way to know. At this point there's no way to know, because it's too universal. The same
thing can be said for any other big data system that recommends courses of action to people, whether it's the Google ad business, or social networks like Facebook
deciding what you see, or any of the myriad of dating apps. All of these things, there's no baseline, so we don't know to what degree they're measurement versus
manipulation.

Dating always has an element of manipulation; shopping always has an element of manipulation; in a sense, a lot of the things that people use these things for have
always been a little manipulative. There's always been a little bit of nonsense. And that's not necessarily a terrible thing, or the end of the world.

But it's important to understand it if this is becoming the basis of the whole economy and the whole
civilization. If people are deciding what books to read based on a momentum within the
recommendation engine that isn't going back to a virgin population , that hasn't been manipulated, then
the whole thing is spun out of control and doesn't mean anything anymore . It's not so much a rise of evil
as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies.
That's what this type of AI turns into . But I'm going to get back to that in a second.

To go yet another rung deeper, I'll revive an argument I've made previously, which is that it
turns into an economic problem. The easiest
entry point for understanding the link between the religious way of confusing AI with an economic
problem is through automatic language translation . If somebody has heard me talk about that before, my apologies for repeating
myself, but it has been the most readily clear example.

For three decades, the AI world was trying to create an ideal, little, crystalline algorithm that could take
two dictionaries for two languages and turn out translations between them . Intellectually, this had its origins particularly
around MIT and Stanford. Back in the 50s, because of Chomsky's work, there had been a notion of a very compact and
elegant core to language. It wasn't a bad hypothesis, it was a legitimate, perfectly reasonable hypothesis to test. But over time, the hypothesis failed
because nobody could do it.

Finally, in the 1990s, researchers at IBM and elsewhere figured out that the way to do it was with what we
now call big data, where you get a very large example set, which interestingly, we call a corpus—call it a dead person. That's the term of art for
these things. If you have enough examples, you can correlate examples of real translations phrase by phrase with new documents that need to be translated. You
mash them all up, and you end up with something that's readable. It's not perfect, it's not artful, it's not necessarily correct, but
suddenly it's usable. And you know what? It's fantastic. I love the idea that you can take some memo, and instead of having to find a translator and wait for them to
do the work, you can just have something approximate right away, because that's often all you need. That's a benefit to the world. I'm happy it's been done. It's a
great thing.
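As a rough illustration of the corpus-based approach described above (a toy sketch, not IBM's actual statistical models), the following Python fragment "translates" only by looking up phrase pairs that human translators already supplied; the phrase table and the example sentence are invented.

```python
# Toy sketch of the corpus-based idea described above: every "translation" the
# program produces is stitched together from phrase pairs people supplied.
from typing import Dict, List

# A tiny hypothetical corpus of human-translated phrase pairs (English -> French).
PHRASE_TABLE: Dict[str, str] = {
    "good morning": "bonjour",
    "thank you": "merci",
    "the meeting": "la réunion",
    "is cancelled": "est annulée",
}

def translate(sentence: str) -> str:
    """Greedy longest-match lookup over the human-built phrase table."""
    words: List[str] = sentence.lower().split()
    out: List[str] = []
    i = 0
    while i < len(words):
        for j in range(len(words), i, -1):   # try the longest phrase first
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:                                # unknown phrase: pass it through
            out.append(words[i])
            i += 1
    return " ".join(out)

print(translate("Good morning the meeting is cancelled"))
# -> "bonjour la réunion est annulée", readable only because humans supplied the pairs
```

The design choice matters for the argument that follows: without a continuously refreshed table of human-made translations, the program has nothing to say.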

The thing that we have to notice though is that , because of the mythology about AI, the services are
presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've
created this entity that they call different things at different times—Deep Blue and so forth. The
consumer tech companies, we tend to put a face in front of them , like a Cortana or a Siri. The problem
with that is that these are not freestanding services.

In other words, if
you go back to some of the thought experiments from philosophical debates about AI from
the old days, there are lots of experiments, like if you have some black box that can do something—it can
understand language—why wouldn't you call that a person? There are many, many variations on these
kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up.

This is not one of those. What this is, behind the curtain, is literally millions of human translators who
have to provide the examples. The thing is, they didn't just provide one corpus once way back . Instead,
they're providing a new corpus every day, because the world of references, current events, and slang
does change every day. We have to go and scrape examples from literally millions of translators,
unbeknownst to them, every single day, to help keep those services working .

The problem here should be clear, but just let me state it explicitly: we're
not paying the people who are providing the
examples to the corpora—which is the plural of corpus—that we need in order to make AI algorithms
work. In order to create this illusion of a freestanding autonomous artificial intelligent creature , we have
to ignore the contributions from all the people whose data we're grabbing in order to make it work.
That has a negative economic consequence.
This, to me, is where it becomes serious. Everything up to now, you can say, "Well, look, if people want to have an algorithm tell them who to date, is that any
stupider than how we decided who to sleep with when we were young, before the Internet was working?" Doubtful, because we were pretty stupid back then. I
doubt it could have that much negative consequence.
This is all of a sudden a pretty big deal. If
you talk to translators, they're facing a predicament, which is very similar to
some of the other early victim populations, due to the particular way we digitize things. It's similar to
what's happened with recording musicians, or investigative journalists —which is the one that bothers me the most—or
photographers. What they're seeing is a severe decline in how much they're paid, what opportunities
they have, their long-term prospects. They're seeing certain opportunities for continuing , particularly in
real-time translation… but I should point out that's going away soon too . We're going to have real-time
translation on Skype soon.

The thing is, they're still needed. There's an impulse, a correct impulse , to be skeptical when somebody
bemoans what's been lost because of new technology . For the usual thought experiments that come up, a common point of
reference is the buggy whip: You might say, "Well, you wouldn't want to preserve the buggy whip industry."

But translators
are not buggy whips, because they're still needed for the big data scheme to work.
They're the opposite of a buggy whip. What's happened here is that translators haven't been made
obsolete. What's happened instead is that the structure through which we receive the efforts of real
people in order to make translations happen has been optimized, but those people are still needed.

This pattern—of AI only working when there's what we call big data, but then using big data in order to
not pay large numbers of people who are contributing—is a rising trend in our civilization, which is
totally non-sustainable. Big data systems are useful. There should be more and more of them. If that's
going to mean more and more people not being paid for their actual contributions , then we have a
problem.

The usual counterargument to that is that they are being paid in the sense that they too benefit from all
the free stuff and reduced-cost stuff that comes out of the system . I don't buy that argument, because you
need formal economic benefit to have a civilization, not just informal economic benefit. The difference
between a slum and the city is whether everybody gets by on day-to-day informal benefits or real formal
benefits.

The difference between formal and informal has to do with whether it's strictly real-time or not. If
you're living on informal benefits and you're a musician , you have to play a gig every day. If you get sick,
or if you have a sick kid, or whatever and you can't do it, suddenly you don't get paid that day . Everything's real-time.
If we were all perfect, immortal robots , that would be fine. As real people, we can't do it, so informal
benefits aren't enough. And that's precisely why things, like employment, savings, real estate, and
ownership of property and all these things were invented—to acknowledge the truth of the fragility of
the human condition, and that's what made civilization .

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings
benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of
problems that I've just gone over, which include acceptance of bad user interfaces , where you can't tell if
you're being manipulated or not, and everything is ambiguous. It creates incompetence, because you
don't know whether recommendations are coming from anything real or just self-fulfilling prophecies
from a manipulative system that spun off on its own , and economic negativity, because you're gradually
pulling formal economic benefits away from the people who supply the data that makes the scheme
work.

For all those reasons, the mythology is the problem, not the algorithms. To back up again, I've given two
reasons why the mythology of AI is stupid, even if the actual stuff is great . The first one is that it results in periodic
disappointments that cause damage to careers and startups, and it's a ridiculous, seasonal disappointment and devastation that we shouldn't be randomly imposing
on people according to when they happen to hit the cycle. That's the AI winter problem. The second one is that it causes unnecessary negative benefits to society
for technologies that are useful and good. The mythology brings the problems, not the technology .

Having said all that, let's address directly this problem of whether AI is going to destroy civilization and
people, and take over the planet and everything . Here I want to suggest a simple thought experiment of
my own. There are so many technologies I could use for this , but just for a random one, let's suppose somebody
comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody .
Let's suppose that these are cheap to make.

I'm going to give you two scenarios. In one scenario, there's suddenly a bunch of these, and some disaffected
teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing
people randomly. There's so many of them that it's hard to find all of them to shut it down, and there keep on being more and more of them. That's one
scenario; it's a pretty ugly scenario.

There's another one where there's so-called artificial intelligence, some kind of big data scheme, that's
doing exactly the same thing, that is self-directed and taking over 3-D printers , and sending these things
off to kill people. The question is, does it make any difference which it is?

The truth is that the part that causes the problem is the actuator. It's the interface to physicality. It's the
fact that there's this little killer drone thing that's coming around. It's not so much whether it's a bunch
of teenagers or terrorists behind it or some AI, or even , for that matter, if there's enough of them, it could just
be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would
be. The AI component would be only ambiguously there and of little importance .

This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the
actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society,
we have to do something to not have little killer drones proliferate . And maybe that problem will never
take place anyway. What we don't have to worry about is the AI algorithm running them, because that's
speculative. There isn't an AI algorithm that's good enough to do that for the time being. An equivalent
problem can come about, whether or not the AI algorithm happens. In a sense, it's a massive
misdirection.

This idea that some lab somewhere is making these autonomous algorithms that can take over the
world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some
actuator that can do harm, we have to figure out some way that people don't do harm with it. There are
about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't
perfect anarchy. Nobody in the tech world wants to face that , so we lose ourselves in these fantasies of
AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the
difficult thing we have to face.

I haven't gone through a whole litany of reasons that the mythology of AI does damage. There's a whole other problem area that has
to do with neuroscience, where if we pretend we understand things before we do , we do damage to
science, not just because we raise expectations and then fail to meet them repeatedly , but because we
confuse generations of young scientists. Just to be absolutely clear, we don't know how most kinds of thoughts are
represented in the brain. We're starting to understand a little bit about some narrow things. That doesn't mean we never will, but we have to be
honest about what we understand in the present.

A retort to that caution is that there's some exponential increase in our understanding, so we can
predict that we'll understand everything soon . To me, that's crazy, because we don't know what the goal
is. We don't know what the scale of achieving the goal would be ... So to say, "Well, just because I'm
accelerating, I know I'll reach my goal soon," is absurd if you don't know the basic geography which
you're traversing. As impressive as your acceleration might be, reality can also be impressive in the
obstacles and the challenges it puts up . We just have no idea.
This is something I've called, in the past, "premature mystery reduction," and it's a reflection of poor scientific mental discipline. You have to be able to accept what
your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much
in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that
it does start to bleed over into all kinds of other things. A
great example is the Human Brain Project in Europe , which is a lot
of public money going into science that's very influenced by this point of view , and it has upset some in
the neuroscience community for precisely the reason I described.

There is a social and psychological phenomenon that has been going on for some decades now : A core
of technically proficient, digitally-minded people reject traditional religions and superstitions. They set
out to come up with a better, more scientific framework. But then they re-create versions of those old
religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.
To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world. All of the damages are
essentially mirror images of old damages that religion has brought to science in the past.

There's an anticipation of a threshold, an end of days. This thing we call artificial intelligence, or a new kind of personhood…
If it were to come into existence it would soon gain all power, supreme power, and exceed people.

The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or
all sorts of different terms in different periods—is similar to divinity. Not all ideas about divinity, but a certain kind of superstitious idea about divinity, that there's this entity that will run the world, that maybe you can pray
to, maybe you can influence, but it runs the world, and you should be in terrified awe of it .

That particular idea has been dysfunctional in human history . It's dysfunctional now, in distorting our
relationship to our technology. It's been dysfunctional in the past in exactly the same way. Only the words have changed.

In the history of organized religion, it's often been the case that people have been disempowered
precisely to serve what were perceived to be the needs of some deity or another , where in fact what they were doing
was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language)
translators and everybody else who contributes to the corpora that allow the data schemes to operate,
contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well,
but they're helping the AI, it's not us, they're helping the AI.” It reminds me of somebody saying, “Oh, build
these pyramids, it's in the service of this deity,” but, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The
effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.
There is an incredibly retrograde quality to the mythology of AI. I know I said it already, but I just have to repeat that this is not a criticism of the particular
algorithms. To me, what would be ridiculous is for somebody to say, "Oh, you mustn't study deep learning networks," or "you mustn't study theorem provers," or
whatever technique you're interested in. Those things are incredibly interesting and incredibly useful. It's the mythology that we have to become more self-aware
of.

This is analogous to saying that in traditional religion there was a lot of extremely interesting thinking, and a lot of great art. And you have to be able to kind of tease
that apart and say this is the part that's great, and this is the part that's self-defeating. We have to do exactly the same thing with AI now.

This is a hard topic to talk about, because the accepted vocabulary undermines you at every turn. This is also similar to a problem with traditional religion. If I talk about
AI, am I talking about the particular technical work, or the mythology that influences how we integrate that into our world, into our society? Well, the vocabulary
that we typically use doesn't give us an easy way to distinguish those things. And it becomes very confusing.
If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If
what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a
participant in the community that's improving those things.

Unfortunately, the standard vocabulary that people use doesn't give us a great way to distinguish those two entirely different items that one might reference. I
could try to coin some phrases, but for the moment, I'll just say these are two entirely different things that deserve to have entirely distinguishing vocabulary. Once
again, this vocabulary problem is entirely retrograde and entirely characteristic of traditional religions.

Maybe it's worse today, because in the old days, at least we had the distinction between, say, ethics and morality, where you could talk about two similar things,
where one was a little bit more engaged with the mythology of religion, and one is a little less engaged. We don't quite have that yet for our new technical world,
and we certainly need it.

Having said all this, I'll mention one other similarity, which is that just because a mythology has a ridiculous quality that can undermine people in many cases doesn't
mean that the people who adhere to it are necessarily unsympathetic or bad people. A lot of them are great. In the religious world, there are lots of people I love.
We have a cool Pope now, there are a lot of cool rabbis in the world. A lot of people in the religious world are just great, and I respect and like them. That goes
hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.

In the same way, if you think of the people who are the most successful in the new economy in this digital world—I'm probably one of them; it's been great to me—
they're, in general, great. I like the people who've done well in the cloud computer economy. They're cool. But that doesn't detract from all of the things I just said.

That does create yet another layer of potential confusion and differentiation that becomes tedious to state over and over again, but it's important to say.

The impact is fascism. The aff unironically increases tech bro hegemony. That ensures
a data-driven, right-wing hellscape that outweighs and turns the case.
McQuillan 22 – Dan McQuillan is Lecturer in Creative & Social Computing in the Department of
Computing at Goldsmiths, University of London. He has a PhD in Experimental Particle Physics, and prior
to academia he worked as Amnesty International's Director of E-communications. (“Resisting AI: An
Anti-fascist Approach to Artificial Intelligence,” 2022, pg. 92-100) julian

If there’s one thing that history teaches us, it’s that we need to be very wary of where the systematic
application of discriminative ordering can end up. The necropolitical tendencies that we’ve outlined in
AI resonate with the contemporary turn to far-right politics. This form of politics is re-emerging in the
tech industry itself, in various governments and institutions, and in the upsurge of populist and fascist
political movements. Some of the apparently opportunistic connections between the far right and AI
reveal deeper structural ties. For example, one of the co-founders of the fast-growing AI facial
recognition startup Clearview AI, which has contracts with US Immigration and Customs Enforcement
(ICE) and the US Attorney’s Office for the Southern District of New York, turned out to have
‘longstanding ties to far-right extremists’ (O’Brien, 2020), while another said he was ‘building algorithms
to ID all the illegal immigrants for the deportation squads’. One of the investors in Clearview was Peter
Thiel, co-founder of PayPal and early investor in Facebook. His big data analytics company Palantir has
contracts with the Central Intelligence Agency, the Pentagon, the Homeland Security Department, and
provides target analysis for ICE raids. It’s not that the AI industry is filled with far-right activists, but
rather that strands of reactionary opinion appear rhizomatically across the field of AI. As we shall see,
following these strands reveals the descending double helix of AI’s technopolitics as it connects the
ideologies of statistical rationalism to those of fascism.

The first layer of reactionary politics that forms a visible penumbra around the AI industry can be loosely
referred to as ‘ultrarationalism’ because its most identifiable characteristic is a sociopathic
commitment to statistical rationality. This isn’t a commonsense rational approach to life but a
reification of a rather cold intellectual narrowness that is willing to question any assumption, including
that of compassion towards fellow beings, if it falls foul of a specific kind of reasoning. One of the
trademarks of tech-style rationalism is a frequent reference to Bayesianism. Bayesian statistics, which is
widely used in machine learning, is an interpretation of probability that doesn’t focus on frequency of
occurrence (the basis of classical statistics) but on expectations representing a prior state of knowledge .
The relevant thing here is that Bayesian statistics reflects the state of knowledge about a system and is
modified by ‘updating your priors’ (factoring in new or updated knowledge). Ultrarationalists believe
Bayesianism provides a superior approach to any problem compared to actual expertise or lived
experience (Harper and Graham, nd). Enthusiasts pride themselves on adopting it not only as an
approach to designing machine learning algorithms but as a rational and empirical way of tackling
everyday life, without being diverted by anything as misleading as emotion or empathy. It’s perhaps
unsurprising that such an ethos finds a home in a culture of computer science and AI, especially among
those who believe we’re on the way to artificial general intelligence: ‘In AGI, we see a particular
overvaluation of “general intelligence” as not merely the mark of human being, but of human value:
everything that is worth anything in being human is captured by “rationality or logic”’ (Golumbia, 2019).
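For readers unfamiliar with the mechanic being invoked, here is a minimal worked example (ours, not McQuillan's) of what "updating your priors" means in Bayesian terms; the coin-bias hypotheses and the probabilities are invented purely for illustration.

```python
# A minimal worked example of the "updating your priors" mechanic the passage
# refers to: Bayes' rule applied to a coin of unknown bias. Numbers are invented.
def update(prior: dict, likelihood_heads: dict, observed_heads: bool) -> dict:
    """Return the posterior over hypotheses after observing one coin flip."""
    unnormalised = {
        h: prior[h] * (likelihood_heads[h] if observed_heads else 1 - likelihood_heads[h])
        for h in prior
    }
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two hypotheses: the coin is fair, or it is biased 80% towards heads.
prior = {"fair": 0.5, "biased": 0.5}
likelihood_heads = {"fair": 0.5, "biased": 0.8}

belief = prior
for flip in [True, True, True]:          # observe three heads in a row
    belief = update(belief, likelihood_heads, flip)
    print({h: round(p, 3) for h, p in belief.items()})
# The probability assigned to "biased" rises with each observation:
# roughly 0.615, then 0.719, then 0.804
```

Nothing in this arithmetic supports the further ultrarationalist claim that such updating substitutes for expertise or lived experience; it is simply the calculation the ideology elevates.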

This kind of ultrarationalism and its entanglements with artificial intelligence were initially articulated on
blogs such as LessWrong, whose progenitor was the self-styled theorist of superintelligent AI, Eliezer
Yudkowsky, and on blogs like the ultrarationalist touchstone Slate Star Codex. For all their swagger
about science and statistics, the ultrarationalists are so rooted in their innate sense of superiority that
they rarely do the background research necessary to really understand a field of thought and often seem
happy to make things up simply to prove a point. As noted by Elizabeth Sandifer, a researcher and writer
who has studied the ultrarationalists in depth, the standpoint of these blogs resonates strongly with the
tech sector because both communities see themselves as iconoclastic, fearlessly overturning established
knowledge using only the power of their own clever minds. ‘It is no surprise that this has caught on
among the tech industry. The tech industry loves disruptors and disruptive thought,’ she says, ‘But …
[t]he contrarian nature of these ideas makes them appealing to people who maybe don’t think enough
about the consequences’ (Metz, 2021).

Ultrarationalists are unreflective to the point of self-parody. They give their efforts self-aggrandizing
labels like ‘the Intellectual Dark Web’; their blogs are wordy and full of jargon, mainly to obfuscate their
core values; and while they claim to espouse absolute free speech, what they actually produce are
convoluted expressions of male privilege and White supremacy. They complain that men are oppressed
by feminists and that free thought about innate social differences is stymied by a politically correct mob ,
but what they really seem enraged about is anyone challenging them. The populist version of
rationalism legitimizes patriarchal privilege, particularly for young men, and acts as a gateway to far-
right political positions (Peterson, 2018). This in itself pollutes the pool from which AI practitioners are
drawn, but ultrarationalism is also directly imbricated in the political economy of AI. Peter Thiel was a
friend of Yudkowsky and invested money into his research institute (Metz, 2021). He also invested in
two followers of Yudkowsky's blog who started an AI firm called DeepMind, subsequently bought by
Google, which shot to fame for developing the Go-playing AlphaGo system. OpenAI was founded as a
DeepMind competitor with investment from Elon Musk, and both DeepMind and OpenAI hired from the
rationalist community (Metz, 2021). While it’s difficult to know what proportion of practitioners
entertain these kinds of ideas, the main significance of the ultrarationalist community is the way it acts
as a bridge between the AI field and more explicitly authoritarian politics like neoreaction.

Neoreaction, or what one of its founding thinkers, Nick Land, calls ‘the Dark Enlightenment’ (Land,
2012), is an ideology that embraces and amplifies concepts like data-driven eugenics. It draws from
strands of thinking that, like the alt-right and new-wave White supremacy, have their wellspring in
online forums and discourse. One thing that distinguishes neoreaction from some of the other
manifestations of the online far right, like the frothing misogyny of Gamergate (the online harassment of
women and feminism in the game industry) or hate-trolling of 8chan (a message board site with links to
White supremacism), is its relative coherence as an ideology. And while neoreaction as a movement
may have limited reach, the currents it pulls together are significant because of their alignment with the
affordances of AI. In fact, neoreaction can be situated as the theoretical wing of AI-driven necropolitics.

Neoreaction has an explicit commitment to innate hierarchies of gender and intelligence of the kind
that, as we’ve seen, are only too easily reinforced by AI. It evinces an enthusiasm for race science,
especially the brand of genetic determinism flagged as human biodiversity , and the race realism that
legitimates the concept of human sub-species. Neoreaction’s geneticism is mostly focused on IQ as the
main driver of socioeconomic status, and it has a vision of a ‘genetically self-filtering elite’ (Haider,
2017). It is explicitly anti-democratic, seeing democracy as a demonstrably and inevitably failed
experiment. It draws on wider currents of libertarianism that argue that, due to the inadequate
rationalism of the general public, electoral democracy will ‘inevitably lead to a suboptimal economic
policy’ (Matthews, 2016).

Neoreaction’s preferred structures are authoritarian or monarchist, typically taking the form of a
corporate state with a chief executive officer (CEO) rather than any kind of elected leader. Names that
come up when the leadership role is discussed are people like Peter Thiel, who seems to share many of
the same political leanings as neoreaction, or Eric Schmidt, the former CEO of Google/Alphabet. In his
2009 essay for libertarian publication Cato Unbound, Thiel declared, ‘I no longer believe that freedom
and democracy are compatible.’ The argument from neoreactionary bloggers is that an ‘economically
and socially effective government legitimizes itself, with no need for elections’ (MacDougald, 2015).
Neoreaction is the ascendency of capitalist technocracy without the trappings of electoral legitimacy,
and with an almost mystical belief in authority and hierarchy.

These techno-authoritarians sneer at democracy as an outdated operating system which they can
replace with their own blend of autocracy and algorithms. One of neoreaction’s most prolific
interpreters, Curtis Yarvin (aka Mencius Moldbug), calls this neocameralism, a reference to his
admiration for the political and bureaucratic system of Frederick the Great of Prussia. The future nation
doesn’t have citizens but shareholders: ‘To a neocameralist, a state is a business which owns a country’
(Moldbug, 2007). Given that the combined turnover of the four Silicon Valley giants – Alphabet (Google),
Apple, Amazon and Meta – is bigger than the entire economy of Germany, this isn’t, perhaps, such an
impossible vision. In Nick Land’s brand of accelerationist neoreaction, the capitalist system is ‘locked in
constant revolutionary expansion, moving upwards and outwards on a trajectory of technological and
scientific intelligence-generation that would, at the limit, make the leap from its human biological hosts’
into a superior artificial intelligence (Matthews, 2016).

Attempts to stop AI’s emergence, moreover, will be futile. The imperatives of competition, whether
between firms or states, mean that whatever is technologically feasible is likely to be deployed sooner
or later, regardless of political intentions or moral concerns. These are less decisions that are made than
things which happen due to irresistible structural dynamics, beyond good and evil. (MacDougald, 2015)

Neoreaction takes the structural dynamics that drive AI’s harmfulness and elevates them to teleology.

The general justification offered for these beliefs is that existing systems are palpably imperfect and
inefficient, and infected with unempirical beliefs in human equality. Technological advances provide the
architecture for a move beyond these feeble dependencies to an optimized future. Neoreaction seems
to manifest a pure form of the kind of thoughtlessness that already goes with AI, and a lack of emotional
engagement carried to the point of pathology. Under the technocratic world order of neoreaction,
people are essentially assets or liabilities, and the latter, whether disabled or neurodivergent or racially
inferior, most definitely qualify as being disposable. For all its intellectual pretence, neoreaction is a
glorification of existing inequalities and a wish for their intensification, based on the idea that some
people are more ‘fit’ than others, that their privilege is built into their DNA and is demonstrated by their
wealth and power. This makes for a heady mix with systems like AI, with their inbuilt tendency to
emphasize and accentuate existing disparities of class, gender, race and beyond. Existing technocratic
systems already embed these discriminations, but both AI and neoreaction accelerate them.

Ultrarationalism and neoreaction are ideologies that keep AI aligned with White supremacy, but they
don’t exhaust its full potential for amplifying far-right politics. We are at a critical juncture for AI, not
only because it can intensify existing social injustices but because of the rising far-right political forces
poised to take advantage of it. We need to consider the potential relationship between AI and fascism.
Like the other linkages between social forces and AI that we have considered in this book so far, this is
not only a question of AI being adopted by fascist political currents but about the resonances between
fascistic politics and AI’s base operations.

Fascism is more than an authoritarian way of keeping the system going during difficult times. It’s a
revolutionary ideology that calls for the overthrow of the status quo on both political and cultural fronts.
While AI might seem like a pinnacle of intellectual abstraction, being based on complex mathematics
and finely tuned systems of large-scale computing, its reductive segregations of the social make it
vulnerable to the kind of anti-intellectualism that fuels populist and fascist ideology. What’s at stake
with AI is not merely bias and unfairness but assimilation into far-right political projects. For fascist
ideologues who glorify violence, AI’s tendencies towards epistemic, structural and administrative
violence are not flaws but features. There’s a danger that the disruptive potential of AI will become
entangled with the more savage disruptions of a fascistic social vision.

As we discussed in the Introduction, the core fascist goal is the rebirth of a mythic national community
out of a state of impurity and decline. The fascist revolution relies on the identification of an internal
enemy whose presence pollutes the organic community of the nation, an enemy which may also be
lurking at the borders and threatening to overrun the homeland. According to Nazi philosopher Carl
Schmitt, ‘the specificity of the political’ is the ‘discrimination between friend and enemy’. In Schmitt’s
terms, ‘Every actual democracy rests on the principle that not only are equals equal but unequals will
not be treated equally. Democracy requires, therefore, first homogeneity and second – if the need arises
– elimination or eradication of heterogeneity’ (Schmitt, 1988, p 9). It’s not hard to see how AI’s powers
of discrimination and its facility for creating states of exception align with this kind of political project,
one where the end goal of social exclusion is some form of eugenics.

The immediate danger is not the adoption of AI by a fully fledged fascist regime but the role of AI in the
kind of fascization that we discussed in the Introduction. Witness the ways that state agencies in many
countries are already rushing to embrace AI for the purposes of controlling ‘out groups’ such as
immigrants and ethnic minorities, while the European Union, self-styled institutional guardian of the
modern Enlightenment, is funding AI-driven border regimes while leaving families to drown in the
Mediterranean. The fact that AI is being deployed by states that describe themselves as democracies is
cold comfort if we remember that the National Socialist state in Germany in the 1930s was also a
constitutional democracy in formal terms, albeit one that was hollowed out by states of exception.
Given the historical alliances between fascism and big business, we should also ask whether
contemporary AI corporations would baulk at putting the levers of mass correlation at the disposal of
regimes of rationalized ethnocentrism. In fact, as the history of corporate complicity suggests, they are
likely to find themselves aligned with that fraction of the dominant class which, finding its interests
threatened by an unresolvable crisis, throws its weight behind a fascist movement as a last line of
defence.

Historical fascism has shown itself as being able to embrace the dissonance of employing new
technologies to force a return to an imagined ultra-traditionalist past. Thanks to ideologues like Ernst
Jünger and his vision of ‘technics born from fire and blood’ (Herf, 1986, cited in Malm and The Zetkin
Collective, 2021), the Nazis developed a ‘reactionary modernism’ (Herf, 1986) that appropriated high
technology while rejecting modern value systems. The operations of German fascism were only possible
because of the affordances of advanced technologies and a compliant bureaucracy. The Nazi regime
adopted the pre-computational technology of Hollerith punch card machines, furnished by IBM
subsidiary Dehomag (Black, 2012), as an important part of their programme of mass social sorting and
their identification of demographics for elimination – those whom the Nazis referred to as
Lebensunwertes Leben, ‘life unworthy of life’. While the ideology of fascism usually focuses on a lost
golden age rooted in folk tradition, appealing now to those who feel they’ve lost out to globalization and
technocracy, historical fascism was very pragmatic in its adoption of high tech in the service of an
alternate modernity (Paxton, 2005).

Fascism responds to real social contradictions by offering a fake revolution and a catharsis through
collective psychosis. ‘We are not required to believe that fascist movements can only come to power in
an exact replay of the scenario of Mussolini and Hitler. All that is required to fit our model is
polarization, deadlock, mass mobilization against internal and external enemies, and complicity by
existing elites’ (Paxton, 2005). We can’t rely on images of past fascism to alert us to its re-emergence
because fascism won’t do us the favour of returning in the same easily recognizable form, especially
when it finds new technological vectors. While AI is a genuinely novel approach to computation, what it
offers in terms of social application is a reactionary intensification of existing hierarchies. Likewise,
fascism offers the image and experience of revolution without fundamentally altering the relations of
production or property ownership. AI is technosocial solutionism, while fascism is ultranationalistic
solutionism. The social contradictions that are amplified by AI, and so starkly highlighted by the
disparities of COVID-19 and climate change, are the social contradictions that fascism will claim to solve.

We must apply a critical vigilance to the political resonances of AI, especially where it claims to offer
greater social efficiency through acts of separation and segregation. The essence of fascism is the setting
aside of democracy and due process as a failed project, and the substitution of a more efficacious
system of targeted exclusion. Fascism is less a coherent ideological proposition than a set of ‘mobilising
passions’ (Paxton, 2005), at the root of which is a passionate polarization, a struggle between the pure
and the corrupt, where one’s own ethnic community has become the victim of unassimilable minorities.
These are sentiments that justify any action without limits, and fascism pursues redemptive violence
without ethical or legal restraint. In fascism, a sense of overwhelming crisis combines with a belief in
the primacy of the group to drive national integration through the use of exclusionary violence.
Vote negative to endorse humanist technology. Reject inevitability logic. Artificial
intelligence is an ideology, not a technology.
Lanier 20 – Glen Weyl is Founder and Chair of the RadicalxChange Foundation and Microsoft’s Office of
the Chief Technology Officer Political Economist and Social Technologist (OCTOPEST). Jaron Lanier is the
author of Ten Arguments for Deleting Your Social Media Accounts Right Now and Dawn of the New
Everything. He (and Glen) are researchers at Microsoft but do not speak for the company. (“AI is an
Ideology, Not a Technology”, Wired Magazine, Available online at
https://www.wired.com/story/opinion-ai-is-an-ideology-not-a-technology/, Accessed 10-05-2022)

A leading anxiety in both the technology and foreign policy worlds today is China’s purported edge in
the artificial intelligence race. The usual narrative goes like this: Without the constraints on data
collection that liberal democracies impose and with the capacity to centrally direct greater resource
allocation, the Chinese will outstrip the West. AI is hungry for more and more data, but the West insists
on privacy. This is a luxury we cannot afford, it is said, as whichever world power achieves superhuman
intelligence via AI first is likely to become dominant.

If you accept this narrative, the logic of the Chinese advantage is powerful. What if it’s wrong? Perhaps
the West’s vulnerability stems not from our ideas about privacy, but from the idea of AI itself.

After all, the term "artificial intelligence" doesn’t delineate specific technological advances. A term like
“nanotechnology” classifies technologies by referencing an objective measure of scale, while AI only
references a subjective measure of tasks that we classify as intelligent. For instance, the adornment and
“deepfake” transformation of the human face, now common on social media platforms like Snapchat
and Instagram, was introduced in a startup sold to Google by one of the authors; such capabilities were
called image processing 15 years ago, but are routinely termed AI today. The reason is, in part,
marketing. Software benefits from an air of magic , lately, when it is called AI. If “AI” is more than
marketing, then it might be best understood as one of a number of competing philosophies that can
direct our thinking about the nature and use of computation.

A clear alternative to “AI” is to focus on the people present in the system. If a program is able to
distinguish cats from dogs, don’t talk about how a machine is learning to see. Instead talk about how
people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in
a rigorous way for the first time. There's always a second way to conceive of any situation in which AI is
purported. This matters, because the AI way of thinking can distract from the responsibility of humans.
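A minimal sketch of that reframing, assuming an invented two-feature toy dataset (our illustration, not the authors' example): the "cat versus dog" classifier below is nothing more than a lookup over examples that people labeled.

```python
# Sketch of the reframing above: the "AI" that tells cats from dogs is, concretely,
# a lookup over features and labels that people supplied. Values are invented.
from math import dist

# (ear_pointiness, snout_length) pairs labelled by people
HUMAN_LABELLED = [
    ((0.9, 0.2), "cat"),
    ((0.8, 0.3), "cat"),
    ((0.3, 0.8), "dog"),
    ((0.2, 0.9), "dog"),
]

def classify(features):
    """Nearest-neighbour lookup over the human-contributed examples."""
    _, label = min(HUMAN_LABELLED, key=lambda ex: dist(ex[0], features))
    return label

print(classify((0.85, 0.25)))  # -> "cat", because people labelled similar examples "cat"
```

Described this way, the people who contributed the examples stay visible in the account of how the system works, which is the alternative framing the authors are urging.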

AI might be achieving unprecedented results in diverse fields, including medicine, robotic control, and
language/image processing, or a certain way of talking about software might be in play as a way to not
fully celebrate the people working together through improving information systems who are achieving
those results. “AI” might be a threat to the human future, as is often imagined in science fiction, or it
might be a way of thinking about technology that makes it harder to design technology so it can be used
effectively and responsibly. The very idea of AI might create a diversion that makes it easier for a small
group of technologists and investors to claim all rewards from a widely distributed effort. Computation
is an essential technology, but the AI way of thinking about it can be murky and dysfunctional.
You can reject the AI way of thinking for a variety of reasons. One is that you view people as having a
special place in the world and being the ultimate source of value on which AIs ultimately depend. (That
might be called a humanist objection.) Another view is that no intelligence, human or machine, is ever
truly autonomous: Everything we accomplish depends on the social context established by other human
beings who give meaning to what we wish to accomplish. (The pluralist objection.) Regardless of how
one sees it, an understanding of AI focused on independence from—rather than interdependence with
—humans misses most of the potential for software technology.

Supporting the philosophy of AI has burdened our economy. Less than 10 percent of the US workforce
is officially employed in the technology sector, compared with 30–40 percent in the then-leading industrial sectors in the 1960s. At least part of the reason for this is that when people provide data,
behavioral examples, and even active problem solving online, it is not considered “work” but is instead
treated as part of an off-the-books barter for certain free internet services. Conversely, when
companies find creative new ways to use networking technologies to enable people to provide services
previously done poorly by machines, this gets little attention from investors who believe “AI is the
future,” encouraging further automation. This has contributed to the hollowing out of the economy.

Bridging even a part of this gap, and thus reducing the underemployment of workforces in the rich
world, could expand the productive output of Western technology far more than greater receptiveness
to surveillance in China does. In fact, as recent reporting has shown, China’s greatest advantage in AI is
less surveillance than a vast shadow workforce actively labeling data fed into algorithms. Just as was
the case with the relative failures of past hidden labor forces, these workers would become more
productive if they could learn to understand and improve the information systems they feed into, and
were recognized for this work, rather than being erased to maintain the “ignore the man behind the
curtain” mirage that AI rests on. Worker understanding of production processes empowering deeper contributions to productivity was the heart of the Japanese Kaizen Toyota Production System miracle in the 1970s and 1980s.

To those who fear that bringing data collection into the daylight of acknowledged commerce will
encourage a culture of ubiquitous surveillance, we must point out that it is the only alternative to such
a culture. It is only when workers are paid that they become citizens in full. Workers who earn money
also spend money where they choose; they gain deeper power and voice in society. They can gain the
power to choose to work less, for instance. This is how worker conditions have improved historically.

It is not surprising that quantitative technical and economic arguments converge on the centrality of
human value. Estimates suggest that the total computational capacity of a single human mind is
greater than that of all today’s computers in the world put together. With the pace of processor
improvements slowing as Moore’s law ends, the prospects of this changing dramatically anytime soon
are dim.

Nor is such a human-centric approach to technology simply a theoretical possibility. Tens of millions of
people every day use video conferencing to deliver personal services, such as language and skill
instruction, online. Online virtual collaboration spaces like GitHub are central to value creation in our
era. Virtual and augmented reality hold out the prospect of dramatically increasing what is possible,
allowing more types of collaborative work to be performed at great distances. Productivity software
from Slack to Wikipedia to LinkedIn to Microsoft product suites make previously unimaginable real-time
collaboration omnipresent.

Indeed, recent research has shown that without the human-created Wikipedia, the value of search
engines would plummet (since that is where the top results of substantial searches are often found),
even though search services are touted as frontline examples of the value of AI. (And yet the Wikipedia
is a thread-bare nonprofit, while search engines are some of the most highly valued assets in our
civilization.) Collaboration technologies are helping us work from home through the Covid-19 epidemic;
it has become a matter of survival, and the future promises ways where long-distance collaboration may
become ever more vivid and satisfying.

To be clear, we are great enthusiasts for the methods most discussed as illustrations of the potential of
AI: deep/convolution networks and so on. These techniques, however, rely heavily on human data. For
example, Open AI’s much celebrated text-generation algorithm was trained on millions of websites
produced by humans. And evidence from the field of machine teaching increasingly suggests that when
the humans generating the data are actively engaged in providing high-quality, carefully chosen input,
they can train at far lower costs. But active engagement is possible only if, unlike in the usual AI
attitude, all contributors, not just elite engineers, are considered crucial role players and are financially
compensated.

A powerful gut response from some AI enthusiasts, after reading this far, might be that we have to be
wrong, because AI is starting to train itself, without people. But AI without human data is only possible
for a narrow class of problems, the kind that can be defined precisely, not statistically, or based on
ongoing measures of reality. Board games like chess and certain scientific and math problems are the
usual examples, though even in these cases human teams using so-called AI resources usually
outperform AI by itself. While self-trainable examples can be important, they are rare and not
representative of real-world problems.

“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core
of the ideology is that a suite of technologies, designed by a small technical elite, can and should
become autonomous from and eventually replace, rather than complement, not just individual humans
but much of humanity. Given that any such replacement is a mirage, this ideology has strong
resonances with other historical ideologies, such as technocracy and central-planning-based forms of
socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency
with systems created by a small technical elite. It is thus not all that surprising that the Chinese
Communist Party would find AI to be a welcome technological formulation of its own ideology.

It’s surprising that leaders of Western tech companies and governments have been so quick to accept
this ideology. One reason might be a loss of faith in the institutions of liberal democratic capitalism
during the last decade. (“Liberal” here has the broad meaning of a society committed to universal
freedom and human dignity, not the narrower contemporary political one.) Political economic
institutions have not just been performing poorly in the last few decades, they’ve directly fueled the
rise of hyper-concentrated wealth and political power in a way that happens to align with the elevation
of AI to dominate our visions of the future. The richest companies, individuals, and regions now tend to
be the ones closest to the biggest data-gathering computers. Pluralistic visions of liberal democratic
market societies will lose out to AI-driven ones unless we reimagine the role of technology in human
affairs.

Not only is this reimagination possible, it’s been increasingly demonstrated on a large scale in one of
the places most under pressure from the AI-fueled CCP ideology, just across the Taiwan Strait. Under
the leadership of Audrey Tang and her Sunflower and g0v movements, almost half of Taiwan’s
population has joined a national participatory data-governance and -sharing platform that allows
citizens to self-organize the use of data, demand services in exchange for these data, deliberate
thoughtfully on collective choices, and vote in innovative ways on civic questions. Driven neither by
pseudo-capitalism based on barter nor by state planning, Taiwan’s citizens have built a culture of agency
over their technologies through civic participation and collective organization, something we are
starting to see emerge in Europe and the US through movements like data cooperatives. Most
impressively, tools growing out of this approach have been critical to Taiwan’s best-in-the-world success
at containing the Covid-19 pandemic, with only 49 cases to date in a population of more than 20 million
at China’s doorstep.

The active engagement of a wide range of citizens in creating technologies and data systems, through a
variety of collective organizations offers an attractive alternative worldview. In the case of Taiwan, this
direction is not only consistent with but organically growing out of Chinese culture. If pluralistic
societies want to win a race not against China as a nation but against authoritarianism wherever it
arises, they cannot make it a race for the development of AI which gives up the game before it begins.
They must do it by winning on their own terms, terms that are more productive and dynamic in the long
run than is top-down technocracy, as was demonstrated during the Cold War.

As authoritarian governments try to compete against pluralistic technologies in the 21st century, they
will inevitably face pressures to empower their own citizens to participate in creating technical systems,
eroding the grip on power. On the other hand, an AI-driven cold war can only push both sides toward
increasing centralization of power in a dysfunctional techno-authoritarian elite that stealthily stifles
innovation. To paraphrase Edmund Burke, all that is necessary for the triumph of an AI-driven,
automation-based dystopia is that liberal democracy accept it as inevitable.
Space Adv
1NC---Space Wars
No miscalc or escalation
James Pavur 19, DPhil Researcher at the Cybersecurity Centre for Doctoral Training at Oxford
University, and Ivan Martinovic, Professor of Computer Science in the Department of Computer Science
at Oxford University, “The Cyber-ASAT: On the Impact of Cyber Weapons in Outer Space”, 2019 11th
International Conference on Cyber Conflict: Silent Battle,
https://ccdcoe.org/uploads/2019/06/Art_12_The-Cyber-ASAT.pdf
A. Limited Accessibility

Space is difficult. Over 60 years have passed since the first Sputnik launch and only nine countries (ten including the EU) have orbital launch
capabilities. Moreover, a
launch programme alone does not guarantee the resources and precision required to
operate a meaningful ASAT capability. Given this, one possible reason why space wars have not broken out is
simply because only the US has ever had the ability to fight one [21, p. 402], [22, pp. 419–420].

Although launch technology may become cheaper and easier, it is unclear to what extent these
advances will be distributed among presently non-spacefaring nations. Limited access to orbit
necessarily reduces the scenarios which could plausibly escalate to ASAT usage. Only major conflicts between the
handful of states with ‘space club’ membership could be considered possible flashpoints. Even then, the fragility of an attacker’s
own space assets creates de-escalatory pressures due to the deterrent effect of retaliation. Since the
earliest days of the space race, dominant powers have recognized this dynamic and demonstrated an
inclination towards de-escalatory space strategies [23].
B. Attributable Norms

There also exists a long-standing normative framework favouring the peaceful use of space. The effectiveness
of this regime, centred around the Outer Space Treaty (OST), is highly contentious and many have pointed out its serious legal and political
shortcomings [24]–[26]. Nevertheless, this status quo framework has somehow supported over six decades of relative peace
in orbit.

Over these six decades, norms have become deeply ingrained into the way states describe and perceive space
weaponization. This de facto codification was dramatically demonstrated in 2005 when the US found itself on the short end of a 160-1 UN
vote after opposing a non-binding resolution on space weaponization. Although states have occasionally pushed the
boundaries of these norms, this has typically occurred through incremental legal re-interpretation rather than
outright opposition [27]. Even the most notable incidents, such as the 2007-2008 US and Chinese ASAT demonstrations, were couched in
rhetoric from both the norm violators and defenders, depicting space as a peaceful global commons [27, p. 56]. Altogether, this suggests that
states perceive real costs to breaking this normative tradition and may even moderate their behaviours
accordingly.

One further factor supporting this norms regime is the high degree of attributability surrounding ASAT weapons. For
kinetic ASAT technology, plausible deniability and stealth are essentially impossible. The literally explosive act of
launching a rocket cannot evade detection and, if used offensively, retaliation. This imposes high diplomatic costs on ASAT
usage and testing, particularly during peacetime.
C. Environmental Interdependence

A third stabilizing force relates to the orbital debris consequences of ASATs. China’s 2007 ASAT demonstration was the
largest debris-generating event in history, as the targeted satellite dissipated into thousands of dangerous debris particles [28, p. 4]. Since
debris particles are indiscriminate and unpredictable, they often threaten the attacker’s own space
assets [22, p. 420]. This is compounded by Kessler syndrome, a phenomenon whereby orbital debris ‘breeds’ as large pieces of debris collide
and disintegrate. As space debris remains in orbit for hundreds of years, the cascade effect of an ASAT attack can constrain
the attacker’s long-term use of space [29, pp. 295– 296]. Any state with kinetic ASAT capabilities will likely also operate satellites
of its own, and they are necessarily exposed to this collateral damage threat. Space debris thus acts as a strong strategic
deterrent to ASAT usage.
1NC---Debris
No card in the 1AC for how the AFF solves debris.
No follow on: Turner’s specific to the EU and not in the context of their space treaty.
No debris cascades, but even a worst case is confined to low LEO with no impact
Daniel Von Fange 17, Web Application Engineer, Founder and Owner of LeanCoder, Full Stack, Polyglot
Web Developer, “Kessler Syndrome is Over Hyped”, 5/21/2017,
http://braino.org/essays/kessler_syndrome_is_over_hyped/

Kessler Syndrome is overhyped. A chorus of online commenters greet any news of upcoming low earth orbit satellites with worry that humanity will lose access to space. I now think they are wrong.
What is Kessler Syndrome?

Here’s the popular view on Kessler Syndrome. Every once in a while, a piece of junk in space hits a satellite. This single impact destroys the
satellite, and breaks off several thousand additional pieces. These new pieces now fly around space looking for other satellites to hit, and so
exponentially multiply themselves over time, like a nuclear reaction, until a sphere of man-made debris surrounds the earth, and humanity no
longer has access to space nor the benefits of satellites.

It is a dark picture.

Is Kessler Syndrome likely to happen?


I had to stop everything and spend an afternoon doing back-of-the-napkin math to know how big the threat is. To estimate, we need to know
where the stuff in space is, how much mass is there, and how long it would take to deorbit.

The orbital area around earth can be broken down into four regions.

Low LEO - Up to about 400km. Things that orbit here burn up in the earth’s atmosphere quickly - within a few months to two years. The space station operates at the high end of this range. It loses about a kilometer of altitude a month and if not pushed higher every few months, would soon burn up. For all practical purposes, Low LEO doesn’t matter for Kessler Syndrome. If Low LEO were ever full of space junk, we’d just wait a year and a half, and the problem would be over.

High LEO - 400km to 2000km. This is where most heavy satellites and most space junk orbit. The air is thin enough here that satellites only go down slowly, and they have a much farther distance to fall. It can take 50 years for stuff here to get down. This is where Kessler Syndrome could be an issue.

Mid Orbit - GPS satellites and other navigation satellites travel here in lonely, long lives. The volume of
space is so huge, and the number of satellites so few, that we don’t need to worry about Kessler here.

GEO - If you put a satellite far enough out from earth, the speed that the satellite travels around the
earth will match the speed of the surface of the earth rotating under it. From the ground, the satellite will appear to
hang motionless. Usually the geostationary orbit is used by big weather satellites and big TV broadcasting satellites. (This apparent
motionlessness is why satellite TV dishes can be mounted pointing in a fixed direction. You can find approximate south just by looking around at
the dishes in your northern hemisphere neighborhood.) For Kessler purposes, GEO
orbit is roughly a ring 384,400 km around.
However, all the satellites here are moving the same direction at the same speed - debris doesn’t get free
velocity from the speed of the satellites. Also, it’s quite expensive to get a satellite here, and so there aren’t
many, only about one satellite per 1000km of the ring. Kessler is not a problem here.
How bad could Kessler Syndrome in High LEO be?
Let’s imagine a worst case scenario.

An evil alien intelligence chops up everything in High LEO, turning it into 1cm cubes of death orbiting at 1000km,
spread as evenly across the surface of this sphere as orbital mechanics would allow. Is humanity cut off from space?

I’m guessing the world has launched about 10,000 tons of satellites total. For guessing purposes, I’ll assume 2,500
tons of satellites and junk currently in High LEO. If satellites are made of aluminum, with a density of 2.70 g/cm3, then that’s
839,985,870 1cm cubes. A sphere for an orbit of 1,000km has a surface area of 682,752,000 square KM. So there would be one
cube of junk per .81 square KM. If a rocket traveled through that, its odds of hitting that cube are tiny -
less than 1 in 10,000.

So even in the worst case, we don’t lose access to space.
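
For reference, the quoted back-of-the-napkin figures can be checked directly. Below is a minimal sketch (ours, not the source’s) that reproduces the numbers, assuming 2,500 US short tons of aluminum debris (the unit that matches the quoted cube count), a 6,371 km Earth radius, and an illustrative rocket frontal area of roughly 80 m²; the rocket size is our assumption, not the author’s.

```python
# Back-of-the-napkin check of the quoted Kessler worst-case numbers.
# Assumptions (ours, for illustration): 2,500 US short tons of aluminum
# debris, a single spherical shell at 1,000 km altitude, and a rocket
# frontal area of ~80 m^2 (the rocket size is NOT given in the source).
import math

SHORT_TON_KG = 907.18474          # US short ton; consistent with the quoted cube count
AL_DENSITY_G_CM3 = 2.70           # aluminum density used in the source
EARTH_RADIUS_KM = 6371.0
SHELL_ALTITUDE_KM = 1000.0

debris_mass_g = 2500 * SHORT_TON_KG * 1000           # 2,500 tons -> grams
cube_count = debris_mass_g / AL_DENSITY_G_CM3        # number of 1 cm^3 aluminum cubes
# ~8.4e8 cubes, matching the quoted 839,985,870

shell_radius_km = EARTH_RADIUS_KM + SHELL_ALTITUDE_KM
shell_area_km2 = 4 * math.pi * shell_radius_km ** 2  # ~6.83e8 km^2, matching the source

area_per_cube_km2 = shell_area_km2 / cube_count      # ~0.81 km^2 of shell per cube

rocket_cross_section_km2 = 80e-6                     # assumed ~80 m^2 frontal area
hit_odds = rocket_cross_section_km2 / area_per_cube_km2

print(f"cubes of debris:    {cube_count:,.0f}")
print(f"shell surface area: {shell_area_km2:,.0f} km^2")
print(f"area per cube:      {area_per_cube_km2:.2f} km^2")
print(f"chance of a hit on one shell crossing: ~1 in {1 / hit_odds:,.0f}")
```

Run as-is, this prints roughly 840 million cubes, about 0.81 square km of shell per cube, and hit odds on the order of 1 in 10,000 per shell crossing, consistent with the figures quoted above.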

Now, though you could travel through the debris, you couldn’t keep a satellite alive for long in this orbit of death. Kessler Syndrome at its
worst just prevents us from putting satellites in certain orbits.

In real life, there are a lot of factors that make Kessler Syndrome even less of a problem than our worst-case thought experiment.

• Debris would be spread over a volume of space, not a single orbital surface, making collisions orders of magnitude less likely.
• Most impact debris will have a slower orbital velocity than either of its original pieces - this makes it deorbit much sooner.
• Any collision will create large and small objects. Small objects are much more affected by atmospheric drag and deorbit faster, even in a few months from high LEO. Larger objects can be tracked by earth-based radar and avoided.
• The planned big new constellations are not in High LEO, but in Low LEO for faster communications with the earth. They aren’t an issue for Kessler.
• Most importantly, all new satellite launches since the 1990s are required to include a plan to get rid of the satellite at the end of its useful life (usually by deorbiting).
So the realistic worst case is that insurance premiums on satellites go up a bit. Given the current trend toward much smaller, cheaper micro satellites, this wouldn’t even have a huge effect.
I’m removing Kessler Syndrome from my list of things to worry about.

It takes centuries and adaptation solves


Ted Muelhaupt 19, Associate Principal Director of the Systems Analysis and Simulation Subdivision
(SASS) and Manager of the Center for Orbital and Reentry Debris Studies at The Aerospace Corporation,
M.S., B.S. Aerospace and Aeronautical Engineering & Mechanics, University of Minnesota - Twin Cities,
Senior Member of the American Institute of Aeronautics and Astronautics, “How Quickly Would It Take
For the Kessler Syndrome To Destroy All The Satellites In LEO? And Could You See This Happening From
Earth?”, Quora, 2/28/2019, https://www.quora.com/How-quickly-would-it-take-for-the-Kessler-
Syndrome-to-destroy-all-the-satellites-in-LEO-And-could-you-see-this-happening-from-Earth

The dynamics of the Kessler Syndrome are real, and most people studying it agree on the concept: if
there is sufficient density of objects and mass, a chain reaction of debris breaking up objects and
creating more debris can occur. But the timescale of this process takes decades and centuries. There are
many assumptions that go into these models. Though there is still argument about this, many people in
the field think that the process is already underway in low earth orbit. But others, including myself, think
we can stop it if we take action. This is a slow motion disaster that we can prevent.

But in spite of hype to the contrary, we will never “lose access to space”. Certain missions may become
impractical or too expensive, and we may decide that some orbits are too risky for humans. Even that
depends on the tolerance for the risk. But robots don’t have mothers, and if we feel it is worthwhile we
will take the risk and fly the satellites where we need to.

To the specifics of the question, it will take many decades. It will not destroy all satellites in LEO. You
won’t be able to see it from the ground unless you were extraordinarily lucky, and you happened to see
a flash from a collision in the instant you were looking, with just the right lighting.

No retal or escalation from satellite attacks


Dr. Eric J. Zarybnisky 18, MA in National Security Studies from the Naval War College, PhD in
Operations Research from the MIT Sloan School of Management, Lt Col, USAF, “Celestial Deterrence:
Deterring Aggression in the Global Commons of Space”, 3/28/2018,
https://apps.dtic.mil/dtic/tr/fulltext/u2/1062004.pdf
PREVENTING AGGRESSION IN SPACE

While deterrence and the Cold War are strongly linked in the public’s mind through the nuclear standoff between the United States and the
Soviet Union, the fundamentals of deterrence date back millennia and deterrence remains relevant. Thucydides alludes to the concept of
deterrence in his telling of the Peloponnesian War when he describes rivals seeking advantages, such as recruiting allies, to dissuade an
adversary from starting or expanding a conflict.6 Aggression in space was successfully avoided during the Cold War because both sides viewed an attack on military satellites as highly escalatory, and such an action would likely result in general nuclear war.7 In today’s more nuanced world, attacking satellites, including military satellites, does not necessarily result in nuclear war. For instance, foreign countries have used high-powered lasers against American intelligence-gathering satellites8 and the United States has been
reluctant to respond, let alone retaliate with nuclear weapons. This shift in policy is a result of the
broader use of gray zone operations, to which countries struggle to respond while limiting escalation.
Beginning with the fundamentals of deterrence illuminates how it applies to prevention of aggression in space.
1NC---Space Col
Independent space colony is impossible.
Levchenko et al. 19. Professors in the Plasma Sources and Applications Centre/Space Propulsion Centre,
NIE, Nanyang Technological University. 2019. “Mars Colonization: Beyond Getting There.” Global
Challenges, vol. 3, no. 1.

Settlement of Mars—is it a dream or a necessity? From scientific publications to public forums, there is certainly little
consensus on whether colonization of Mars is necessary or even possible, with a rich diversity of opinions that
range from categorical It is a necessity!20 to equally categorical Should Humans Colonize Other Planets? No.21 A strong proponent of the idea,
Orwig puts forward five reasons for Mars colonization, implicitly stating that establishing a permanent
colony of humans on Mars is no longer an option but a real necessity.20 Specifically, these arguments are:
Survival of humans as a species; Exploring the potential of life on Mars to sustain humans; Using space technology to positively
contribute to our quality of life, from health to minimizing and reversing negative aspects of anthropogenic activity of humans on Earth;
Developing as a species; Gaining political and economic leadership. The first argument captures the essence of what most
space colonization proponents feel—our ever growing environmental footprint threatens the survival of
human race on Earth. Indeed, a large body of evidence points to human activity as the main cause of extinction of many species, with
shrinking biodiversity and depleting resources threatening the very survival of humans on this planet. Colonization of other planets
could potentially increase the probability of our survival. While being at the core of such ambitious projects as Mars One,
a self‐sustained colony of any size on Mars is hardly feasible in the foreseeable future. Indeed, sustaining
even a small number of colonists would require a continuous supply of food, oxygen, water and basic
materials. At this stage, it is not clear whether it would be possible to establish a system that would
generate these resources locally, or whether it would at least in part rely on the delivery of these
resources (or essential components necessary for their local production) from Earth. Beyond the supply
of these very basic resources, it would be quite challenging if not impossible for the colonists to
independently produce hi‐tech but vitally important assets such as medicines, electronics and robotics
systems, or advanced materials that provide us with a decent quality of life. In this case, would their
existence become little more than the jogtrot of life, as compared with the standards expected at the
Earth?22
Innovation Adv
1NC---McGinnis
Reject laundry list ! cards---the 1AC didn’t read a terminal impact for any of the scenarios in McGinnis OR provide any warrant for why AI’s key to solving them.
1NC---Trust
Zero internal link---no 1AC ev says personhood sufficiently promotes trust in AI.
“AI development high now” is a double turn since it proves trust is sufficient in the
squo to promote innovation.
Mistrust of AI is deep-seated.
Vyacheslav Polonski 18, Researcher at the University of Oxford, Ph.D. from the University of Oxford,
M.Sc. in the Social Sciences of the Internet from the Oxford Internet Institute, “People don’t trust AI –
here’s how we can change that,” The Conversation, 01-09-2018, https://theconversation.com/people-
dont-trust-ai-heres-how-we-can-change-that-87129

Artificial intelligence can already predict the future. Police forces are using it to map when and where crime is likely to occur.
Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI
imagination so it can plan for unexpected consequences.
Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet
for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying
on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so
reluctant to trust AI in the first place.
Should you trust Dr. Robot?

IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR disaster. The AI promised to deliver
top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients
worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided
guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The
supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may
have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that
Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that
Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning
algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading
many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme.
Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over
two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on
our understanding of how other people think and having experience of their reliability. This helps create a
psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. It makes decisions using a
complex system of analysis to identify potentially hidden patterns and weak signals from large amounts of data.

Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too
difficult for most people to understand. And interacting with something we don’t understand can cause anxiety and make us feel
like we’re losing control. Many people are also simply not familiar with many instances of AI actually working, because it
often happens in the background.
Instead, they are acutely aware of instances where AI goes wrong: a Google algorithm that classifies people of colour as
gorillas; a Microsoft chatbot that decides to become a white supremacist in less than a day; a Tesla car operating in autopilot mode that
resulted in a fatal accident. These unfortunate examples have received a disproportionate amount of media attention, emphasising the
message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.

A new AI divide in society?

Feelings about AI also run deep. My colleagues and I recently ran an experiment where we asked people from a range of backgrounds
to watch various sci-fi films about AI and then asked them questions about automation in everyday life. We found that, regardless of whether
the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future
polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more
guarded.

This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a
deep-rooted human tendency known as confirmation bias. As AI is reported and represented more and more
in the media, it could contribute to a deeply divided society, split between those who benefit from AI and
those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious
disadvantage.

[ ] Trust is impossible.
Joshua James Hatherley 19, Ph.D. candidate in the School of Philosophical, Historical and
International Studies at Monash University, M.A. in Bioethics from Monash University, “Limits of trust in
medical AI,” Journal of Medical Ethics, Vol. 46, No. 7, 2019, http://dx.doi.org/10.1136/medethics-2019-
105935

To say that one can trust an AI system, or that the AI is trustworthy, is merely to say that one can rely on the AI system,
or that the system is reliable. Yet as we have seen, reliability is insufficient to generate a relation of trust under any of its
familiar philosophical notions, which all require characteristics essential and exclusive to beings with a form of
agency. What does this mean for the pursuit of ‘Trustworthy AI’ initiated by the European Union’s High Level Expert Group on Artificial
Intelligence (HLEG AI)?18 Although valuable, the pursuit of trustworthy AI represents a notable conceptual
misunderstanding, since AI systems are not the appropriate objects of trust or trustworthiness. Interestingly,
this has also been suggested by a key member of the HLEG AI, Thomas Metzinger.27 Rather than trustworthy AI , this pursuit may
be better served by being reframed in terms of reliable AI, reserving the label of ‘trust’ for reciprocal relations between
beings with agency.

[ ] Personhood snowballs, wrecking established legal functioning.


Abbott and Sarch 19 [Ryan Abbott, Professor of Law and Health Sciences, University of Surrey School
of Law and Adjunct Assistant Professor of Medicine. Alex Sarch, Reader (Associate Professor) in Legal
Philosophy, University of Surrey School of Law. “Punishing Artificial Intelligence: Legal Fiction or Science
Fiction,” University of California, Davis [Vol. 53:323], 2019,
https://lawreview.law.ucdavis.edu/issues/53/1/articles/files/53-1_Abbott_Sarch.pdf, ///k-ng]

Earlier, we discussed some of the potential costs of AI punishment, including conceptual confusion,
expressive costs, and spillover. Even aside from these, punishment of AI would entail serious practical
challenges as well as substantial changes to criminal law. Begin with a practical challenge: the mens rea
analysis.213 For individuals, the mens rea analysis is generally how culpability is assessed. Causing a
given harm with a higher mens rea like intent is usually seen as more culpable than causing the same
harm with a lower mens rea like recklessness or negligence.214 But how do we make sense of the
question of mens rea for AI? Part III considered this problem, and argued that for some AI, as for
corporations, the mental state of an AI’s developer, owner, or user could be imputed under something
like the respondeat superior doctrine. But for cases of Hard AI Crime that is not straightforwardly
reduced to human conduct — particularly where the harm is unforeseeable to designers and there is no
upstream human conduct that is seriously unreasonable to be found — nothing like respondeat superior
would be appropriate. Some other approach to AI mens rea would be required. A regime of strict
liability offenses could be defined for AI crimes. However, this would require a legislative work-around
so that AI are deemed capable of satisfying the voluntary act requirement, applicable to all crimes.215
This would require major revisions to the criminal law and a great deal of concerted legislative effort. It
is far from an off-the-shelf solution. Alternately, a new legal fiction of AI mens rea, vaguely analogous to
human mens rea, could be developed, but this too is not currently a workable solution. This approach
could require expert testimony to enable courts to consider in detail how the relevant AI functioned to
assess whether it was able to consider legally relevant values and interests but did not weight them
sufficiently, and whether the program has the relevant behavioral dispositions associated with mens
rea-like intention or knowledge. In Part III.A, we tentatively sketched several types of argument that
courts might use to find various mental states to be present in an AI. However, much more theoretical
and technical work is required and we do not regard this as a first best option. Mens rea, and similar
challenges related to the voluntary act requirement, are only some of the practical problems to be
solved in order to make AI punishment workable. For instance, there may be enforcement problems
with punishing an AI on a blockchain. Such AIs might be particularly difficult to effectively combat or
deactivate. Even assuming the practical issues are resolved, punishing AI would still require major
changes to criminal law. Legal personality is necessary to charge and convict an AI of a crime, and
conferring legal personhood on AIs would create a whole new mode of criminal liability, much the way
that corporate criminal liability constitutes a new such mode beyond individual criminal liability.216
There are problems with implementing such a significant reform. Over the years, there have been many
proposals for extending some kind of legal personality to AI.217 Perhaps most famously, a 2017 report
by the European Parliament called on the European Commission to create a legislative instrument to
deal with “civil liability for damage caused by robots.”218 It further requested the Commission to
consider “a specific legal status for robots,” and “possibly applying electronic personality” as one
solution to tort liability.219 Even in such a speculative and tentative form this proposal proved highly
controversial.220 Full-fledged legal personality for AIs equivalent to that afforded to natural persons,
with all the legal rights that natural persons enjoy, would clearly be inappropriate. To take a banal
example, allowing AI to vote would undermine democracy, given the ease with which anyone looking to
determine the outcome of an election could create AIs to vote for a particular candidate.221 However,
legal personality comes in many flavors, even for natural persons such as children who lack certain rights
and obligations enjoyed by adults. Crucially, no artificial person enjoys all of the same rights and
obligations as a natural person.222 The best-known class of artificial persons, corporations, have long
enjoyed only a limited set of rights and obligations that allows them to sue and be sued, enter contracts,
incur debt, own property, and be convicted of crimes.223 However, they do not receive protection
under constitutional provisions, such as the Fourteenth Amendment’s Equal Protection Clause, and they
cannot bear arms, run for or hold public office, marry, or enjoy other fundamental rights that natural
persons do.224 Thus, granting legal personality to AI to allow it to be punished would not require AI to
receive the rights afforded to natural persons, or even those afforded to corporations. AI legal
personality could consist solely of obligations. Even so, any sort of legal personhood for AIs would be a
dramatic legal change that could prove problematic.225 As discussed earlier, providing legal personality
to AI could result in increased anthropomorphisms. People anthropomorphizing AI expect it to adhere to
social norms and have higher expectations regarding AI capabilities.226 This is problematic where such
expectations are inaccurate and the AI is operating in a position of trust. Especially for vulnerable users,
such anthropomorphisms could result in “cognitive and psychological damages to manipulability and
reduced quality of life.”227 These outcomes may be more likely if AI were held accountable by the state
in ways normally reserved for human members of society. Strengthening questionable anthropomorphic
tendencies regarding AI could also lead to more violent or destructive behavior directed at AI, such as
vandalism or attacks.228 Further, punishing AI could also affect human well-being in less direct ways,
such as by producing anxiety about one’s own status within society due to the perception that AIs are
given a legal status on a par with human beings. Finally, and perhaps most worryingly, conferring legal
personality on AI may lead to rights creep, or the tendency for an increasing number of rights to arise
over time.229 Even if AIs are given few or no rights initially when they are first granted legal
personhood, they may gradually acquire rights as time progresses. Granting legal personhood to AI may
thus be an important step down a slippery slope. In a 1933 Supreme Court opinion, for instance, Justice
Brandeis warned about rights creep, and argued that granting corporations an excess of rights could
allow them to dominate the State.230 Eighty years after that decision, Justice Brandeis’ concerns were
prescient in light of recent Supreme Court jurisprudence such as Citizens United v. Federal Election
Commission and Burwell v. Hobby Lobby Stores, which significantly expanded the rights extended to
corporations.231 Such rights, for corporations and AI, can restrict valuable human activities and
freedoms.

That creates uncertainty that freezes corporate research and investment


Jaynes 2019 [Tyler L. Jaynes - Utah Valley University. Non-traditional bioethicist who conducts
research into new technologies. “Legal personhood for artificial intelligence: citizenship as the exception
to the rule,” AI & SOCIETY (2020) 35:343–354, Published online: 25 June 2019,
https://link.springer.com/content/pdf/10.1007/s00146-019-00897-9.pdf, ///k-ng]

The struggle with generating a set of rights for NBIs is intrinsically one centered around the rights of the
corporations and individuals who produce NBI systems. These groups have developed NBI systems
primarily to benefit corporations economically given that there has been a rising demand for NBI
systems in the workplace. Given the amount of time, resources and effort that has gone into developing
deep-learning systems and other aspects of NBI structures, it is only natural that those who have
invested in this research wish to see their investments returned with interest (Locke 1980).62 By
suggesting that governments should grant legal protections to NBIs, we are necessarily implying that
these interested parties should be altruistic enough not to expect an economic return for their
investments. How can this be fair? How should these investors be compensated? These are questions
that have to be answered before legal protections for NBIs can be implemented. Given the structure of
capitalist societies,63 it will not be to a corporation’s benefit to produce AGI if NBIs gain legal
personhood under the law. The most significant reason for this, beyond the argument that the costs of
research and development need to be supplemented, is that there is a fine legal line between slavery
and employment. If we conclude that NBI systems should compensate the corporation or organization
that developed it, it will need to earn a wage. This conclusion then implies that the system will be
required to labour and that the nature of this labour will need to be legal.64 How can corporations and
judiciaries determine a fair wage if an AGI or NBI is employed under the assumption that the salary it
earns will compensate the corporation’s investment into developing the system? If this standard is to be
set at the wage of an average employee, it is feasible that the AGI or NBI system will be employed by the
corporations who developed them for hundreds of years. Assuming that the NBI system is being paid at
the average rate of an individual with at least their Bachelor’s degree, the system may net $2.1 million
throughout thirty years working forty hours per work week (Thompson 2009). Depending on the
capacity in which the NBI system is being employed, it is feasible that it would only have to work for
thirty-or-so years; yet that time will inevitably depend on the corporation and the position given to the
NBI system. What we are faced with is akin to the contractual servitude cases of the eighteenth and
nineteenth centuries, where immigrants coming to the USA would agree to work for those who
sponsored their travel to the country. The significant difference between AGI and foreign nationals in
this circumstance is that AGI does not have the choice to be developed.65 To a more extreme degree,
AGI and other NBI systems in this context could be equated to the African slaves existing during the
early years of America’s history (Wein 1992). Their proliferation is inevitable, yet those who “own” them
can profit from NBI’s labours or the selling of these systems. Religious rationales for slavery aside, there
has been the argument that AGI and other NBI systems are born natural slaves (Solum 1992)—
mimicking similar arguments made for slavery in the eighteenth and nineteenth centuries. The slaves
fought for their freedom because they knew they were more than what their owners insisted they were.
