
The Fallacy of Inevitability: From Chattel Slavery to Digital Technology

Robert Hanna

A collection of stills from Alphaville, directed by J.-L. Godard (1965)

Sadly, moral thinking and sociopolitical thinking are rife with fallacies. Here’s the
general form of a particularly pernicious one that I’ll call The Fallacy of Inevitability:

(i) X is a very large social institution (in terms of the number of people who are
members of that institution), X is very profitable for an elite group of powerful
people, X has been in existence for a very long time (let’s say, anywhere from 50
years to hundreds or even thousands of years), and X is very widespread
(whether locally, regionally, nationally or even globally),

(ii) therefore X is inevitable, and

(iii) because X is inevitable, therefore X ought to be accepted by us even if X itself and its consequences are very bad, false, and wrong, and furthermore,

(iv) therefore the most we can do in order to respond to the badness, falsity, and
wrongness of X is to impose legal audits, regulations, or restrictions on X, while
still allowing X to exist essentially unchanged in its current form.

Correspondingly, let’s consider some examples of real-world values of X:

X = chattel slavery

X = people’s possession and use of guns

X = the State’s possession and use of guns and weapons of mass destruction

X = military and domestic intelligence, together with policing, together with coercive authoritarian government, aka the surveillance State, aka the security State

X = eco-destructive, technocratic, global corporate capitalism

X = any digital technology—including computers, algorithms, digital data or information, artificial intelligence/AI, and robotics—whose use violates human dignity, up to and including cybernetic totalitarianism: as, for example, brilliantly imagined in Godard’s 1965 dystopian science fiction classic, Alphaville

Now The Fallacy of Inevitability contains not merely one fallacy but in fact three sub-fallacies, which I’ll call the three modes of The Fallacy.

First, the step from (i) to (ii) is clearly a non sequitur. Just because a social institution is very large, very profitable for an elite group of powerful people, has been in existence for a very long time, and is very widespread, it does not follow that it’s inevitable, since no social institution is necessitated either by the laws of logic or by the laws of nature. More generally, because humanity freely creates and freely sustains all actual and possible social institutions, humanity can also freely abolish, refuse, or at the very least radically devolve-and-replace (hence, transform into its moral opposite) any such social institution. That’s mode 1 of The Fallacy.

Second, step (iii) clearly contains another non sequitur. Social institutions are contingent facts about humanity, and no contingent fact automatically entails a moral obligation; hence step (iii) is clearly an instance of the naturalistic fallacy, namely, arguing directly from the factual (the “is”) to the morally obligatory (the “ought”). Indeed, and diametrically on the contrary, if any social institution itself and its consequences are bad, false, and wrong, then humanity ought freely to abolish it, refuse it, or at the very least radically transform it into its moral opposite. That’s mode 2 of The Fallacy.

Third, step (iv) is yet another clear non sequitur. Even on the assumption that some social institution itself and its consequences are bad, false, and wrong, it simply does not follow that the most we can do in order to respond to the badness, falsity, and wrongness of X is to impose legal audits, regulations, or restrictions on X, while still allowing X to exist essentially unchanged in its current form. Diametrically on the contrary, humanity ought freely to abolish it, refuse it, or at the very least radically transform it into its moral opposite. And that’s mode 3 of The Fallacy.

Consider, for example, chattel slavery, a social institution that’s been in existence
since even before the emergence of the earliest States,1 and during certain periods has
been practiced virtually worldwide. Chattel slavery is self-evidently inherently bad,
false, and wrong, and its consequences are bad, false, and wrong, precisely because it
violates human dignity.2 And this holds even if chattel slavery was a very large social institution (say, forcibly employing and/or non-forcibly servicing millions of people living in the USA), was very profitable for an elite group of powerful people (say, tobacco plantation owners and cotton plantation owners, and their business affiliates, in the American South), had been in existence for a very long time (say, from 1776 to 1865, and also even earlier during the pre-Revolutionary period), and was very widespread (say, spread all across the American South during those periods). Chattel slavery wasn’t and isn’t inevitable, and it wasn’t and isn’t even morally permissible, much less morally obligatory, precisely because it violates human dignity, even when it was an actual
social fact. Hence it would have been a moral scandal, not to mention a moral absurdity,
merely to impose legal audits, regulations, and restrictions on chattel slavery: the moral
scandal and moral absurdity of the very idea of “common sense slavery control” are
self-evident. Diametrically on the contrary, and also self-evidently, it was, is, and forever
will be humanity’s moral obligation to abolish chattel slavery, refuse it, or at the very least
radically transform it into its moral opposite. And indeed, that’s what actually happened to
chattel slavery in the USA by the end of The Civil War—although, catastrophically and
tragically, as everyone knows, the USA has continued to suffer for 156 years from
persistent and pervasive racist violations of human dignity and other malign
consequences of structural or systematic racism (aka “white rage”3), from
Reconstruction and the Jim Crow period, through the Civil Rights era and its troubled
aftermath in the 1970s to the end of the 20th century, through the Black Lives Matter era,
right up to 6 am this morning.

Finally, let’s briefly consider a corresponding contemporary example: digital technology. Here’s an excerpt from a recent article by an expert consultant in the relatively new field of digital ethics, including AI ethics:

We have a new A.I. race on our hands: the race to define and steer what it means to
audit algorithms. Governing bodies know that they must come up with solutions to the
disproportionate harm algorithms can inflict.

This technology has disproportionate impacts on racial minorities, the economically disadvantaged, womxn, and people with disabilities, with applications ranging from
health care to welfare, hiring, and education. Here, algorithms often serve as statistical
tools that analyze data about an individual to infer the likelihood of a future event—for
example, the risk of becoming severely sick and needing medical care. This risk is
quantified as a “risk score,” a method that can also be found in the lending and
insurance industries and serves as a basis for making a decision in the present, such as
how resources are distributed and to whom.

Now, a potentially impactful approach is materializing on the horizon: algorithmic auditing, a fast-developing field in both research and application, birthing a new crop of
startups offering different forms of “algorithmic audits” that promise to check
algorithmic models for bias or legal compliance….

Recently, the issue of algorithmic auditing has become particularly relevant in the
context of A.I. used in hiring. New York City policymakers are debating Int. 1894–2020,
a proposed bill that would regulate the sale of automated employment decision-making
tools. This bill calls for regular “bias audits” of automated hiring and employment tools.

These tools—résumé parsers, tools that purport to predict personality based on social
media profiles or text written by the candidate, or computer vision technologies that
analyze a candidate’s “micro-expressions”—help companies maximize employee
performance to gain a competitive advantage by helping them find the “right” candidate
for the “right” job in a fast, cost-effective manner.

This is big business. The U.S. staffing and recruiting market, which includes firms that
assist in recruiting new internal staff and those that directly provide temporary staff to
fill specific functions (temporary or agency staffing), was worth $151.8 billion in 2019. In
2016, a company’s average cost per hire was $4,129, according to the Society for Human
Resource Management.

Automated hiring and employment tools will play a fundamental role in rebuilding
local economies after the Covid-19 pandemic. For example, since March 2020, New
Yorkers were more likely than the national average to live in a household affected by
loss of income. The economic impact of the pandemic also materializes along racial lines:
In June 2020, only 13.9% of white New Yorkers were unemployed, compared to 23.7% of
Black residents, 22.7% of Latinx residents, and 21.1% of Asian residents.

Automated hiring tools will reshape how these communities regain access to
employment and how local economies are rebuilt. Against that backdrop, it is important
and laudable that policymakers are working to mandate algorithmic auditing.4
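Before assessing Sloane’s argument, it may help to make the quoted mechanism concrete. Here is a minimal, purely illustrative sketch in Python of the kind of “risk score” and “bias audit” the excerpt describes; the candidate features, model weights, selection threshold, and parity cutoff are all hypothetical assumptions of mine, drawn neither from Sloane’s article nor from Int. 1894-2020:

    import math

    # Hypothetical candidate records: two invented features plus a demographic
    # group label. All names and numbers here are illustrative assumptions.
    candidates = [
        {"years_experience": 6, "skill_match": 0.9, "group": "A"},
        {"years_experience": 2, "skill_match": 0.7, "group": "B"},
        {"years_experience": 8, "skill_match": 0.4, "group": "A"},
        {"years_experience": 3, "skill_match": 0.8, "group": "B"},
    ]

    WEIGHTS = {"years_experience": 0.3, "skill_match": 2.0}  # toy model weights
    INTERCEPT = -2.5  # toy model intercept

    def risk_score(c):
        # Toy logistic model: maps a candidate's features to a 0-1 "score".
        z = INTERCEPT + sum(w * c[name] for name, w in WEIGHTS.items())
        return 1.0 / (1.0 + math.exp(-z))

    def selection_rates(cands, threshold=0.5):
        # Per-group rate at which the tool "selects" candidates
        # (score at or above the threshold).
        totals, hits = {}, {}
        for c in cands:
            g = c["group"]
            totals[g] = totals.get(g, 0) + 1
            hits[g] = hits.get(g, 0) + (risk_score(c) >= threshold)
        return {g: hits[g] / totals[g] for g in totals}

    rates = selection_rates(candidates)
    print("Selection rate by group:", rates)
    # A crude demographic-parity audit: flag if the gap between the best- and
    # worst-treated groups exceeds an (arbitrarily chosen) cutoff.
    gap = max(rates.values()) - min(rates.values())
    print("Parity gap: %.2f %s" % (gap, "-> FLAG" if gap > 0.2 else "-> OK"))

As this toy audit illustrates, a parity check of this kind can flag a disparity in selection rates; what it cannot do, and this matters for the argument that follows, is tell us whether such a tool ought to exist at all.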

Sloane’s argument is a perfect example of The Fallacy of Inevitability in all three of its modes. No form of digital technology is inevitable,5 even if it’s very large in the social-institutional sense, very profitable for an elite group of powerful people, has been in existence for a very long time (i.e., at least 50 years), and is very widespread. Moreover,
humanity is under absolutely no moral obligation whatsoever to use any form of digital
technology that violates human dignity, just because it’s an actual social fact. And above all,
the assumption that the most we can do is to impose legal audits, regulations, and
restrictions on digital technology is completely false. Although “common sense digital
technology control” might not look morally scandalous and morally absurd at first
glance, just as “common sense gun control” might not look morally scandalous and
morally absurd at first glance, in fact they both are.6 Diametrically on the contrary, then,
if any form of digital technology violates human dignity, then humanity should freely abolish it, refuse it, or at the very least radically transform it into its moral opposite.7 Period: no matter how much this rejection disrupts the wonders of “innovation,” the self-interests of digital technology billionaires and their global corporations, and/or the putatively public interests of neoliberal governments.8

Again: if digital technology is bad, false, and wrong because it violates human dignity, then, just as with chattel slavery, humanity not only can but should abolish it, refuse it, or at the very least radically transform it into its moral opposite. More explicitly and
generally, my view is that digital technology, no matter how powerful, sophisticated, or
profitable, is nothing more and nothing less than a tool created by humanity for the
sake of humanity, whose use therefore can and should be strictly constrained by general
and specific moral principles flowing from the concept and fact of human dignity.
Digital technology is our tool, not our master.

NOTES
1. See, e.g., J.C. Scott, Against the Grain: A Deep History of the Earliest States (New Haven, CT: Yale Univ. Press, 2017), p. 155.

2. See, e.g., R. Hanna, “A Theory of Human Dignity” (Unpublished MS, 2021), available online HERE.

3. See, e.g., C. Anderson, White Rage: The Unspoken Truth of Our Racial Divide (2nd edn., London/New York: Bloomsbury, 2017).

4. M. Sloane, “The Algorithmic Auditing Trap,” OneZero (16 March 2021), available online at URL = <https://onezero.medium.com/the-algorithmic-auditing-trap-9a6f2d4d461d>.

5. The false thesis of the inevitability of digital technology is sometimes called “technological determinism.” But this term is also used in a baffling variety of other non-equivalent senses: see, e.g., A. Dafoe, “On Technological Determinism: A Typology, Scope Conditions, and a Mechanism,” Science, Technology, & Human Values 40 (2015): 1047-1076, also available online at URL = <https://journals.sagepub.com/doi/abs/10.1177/0162243915579283?journalCode=sthd>. Hence it’s philosophically least confusing, and best, simply to avoid using that term in this context altogether.

6. For an application of dignitarian moral reasoning to the case of gun possession and use, see R. Hanna, “Gun Crazy: A Moral Argument For Gun Abolitionism” (For the King Soopers Shooting Victims, Unpublished MS, 23 March 2021), available online HERE.

7. See also, e.g., R. Hanna and E. Kazim, “Philosophical Foundations for Digital Ethics and AI Ethics: A Dignitarian Approach,” AI and Ethics (26 February 2021), available online at URL = <https://link.springer.com/article/10.1007/s43681-021-00040-9>, and also in preview HERE.

8. To be fair, in “The Algorithmic Auditing Trap,” Sloane does go on to say, immediately after the text I quoted earlier, that

we are facing an underappreciated concern: To date, there is no clear definition of “algorithmic audit.” Audits, which on their face sound rigorous, can end up as toothless reputation polishers, or even worse: They can legitimize technologies that shouldn’t even exist because they are based on dangerous pseudoscience. (underlining added)

That underlined sentence, at least prima facie, looks like the dignitarian abolish-it-refuse-it-or-radically-transform-it-into-its-moral-opposite principle I’m defending. But unfortunately, by the end of the article, she’s fallen back into The Fallacy of Inevitability, by merely recommending “three steps”:
(i) “transparency about where and how algorithms and automated decision-making tools are deployed,”
(ii) “need[ing] to arrive at a clear definition of what ‘independent audit’ means in the context of
automated decision-making systems, algorithms, and A.I.,” and (iii) “need[ing] to begin a conversation
about how, realistically, algorithmic auditing can and must be operationalized in order to be effective.”
Morally speaking, that’s pretty thin gruel.
