ACADEMIA Letters

Can there be a dumb Superintelligence? A critical look at Bostrom’s notion of Superintelligence

Ralf Stapelfeldt

For a few decades now, the idea of an artificial Superintelligence has been discussed in many philosophical contributions, most influentially and in greatest detail by Nick Bostrom (Bostrom 2014). The idea can be reconstructed as follows:

Since the beginnings of the field of Artificial Intelligence (AI) in the 1950s, artificial systems have become increasingly intelligent. This trend can be extrapolated into the future to a point where the level of human intelligence is reached. From this moment on, at the latest, it is no longer just humans that produce AI systems, but AI itself that optimizes itself, either by perpetual machine learning or by producing and programming new, optimized AI systems. This leads to an explosion of further development towards ever more intelligent systems, until a machine is clearly more intelligent than even the smartest humans combined. That would be the birth of Superintelligence, which thinks far faster and is smarter than any single human or humanity as a collective.

On closer inspection, it can first be noted that it has not been philosophically clarified whether an AI can reach the level of human intelligence in principle, that is to say, whether the thesis of Strong AI that is accepted in the background is true (for the distinction between Strong and Weak AI see Searle 1980, who argues against Strong AI with, among other things, his famous Chinese Room Argument). According to this thesis, an Artificial General Intelligence (AGI) is possible in principle. Critics only accept the thesis of Weak AI, according to which artificial systems can, in principle, only successfully simulate individual aspects of human intelligence but can never be described as fully intelligent (Stapelfeldt 2021, p. 244).

Academia Letters, July 2021 ©2021 by the author — Open Access — Distributed under CC BY 4.0

Corresponding Author: Ralf Stapelfeldt, ralf.stapelfeldt@gmail.com


Citation: Stapelfeldt, R. (2021). Can there be a dumb Superintelligence? A critical look at Bostrom’s notion of
Superintelligence. Academia Letters, Article 2076. https://doi.org/10.20935/AL2076.

Bostrom believes that while today’s machines are still significantly less intelligent than humans, they will most likely be superintelligent by the end of the century (Bostrom 2014, p. 26 and Müller/Bostrom 2014, p. 9). Thus, he clearly professes his expectation that an AGI will be developed in the not-so-distant future. But what kind of intelligence are we talking about here, and how does Bostrom’s commitment to the thesis of Strong AI relate to the criticism of this thesis? It is striking that Bostrom works with a truncated concept of intelligence when he focuses solely on the analytical abilities of an instrumental intelligence. This becomes clear when he illustrates the potential danger of a future Superintelligence with a system that is programmed with the aim of producing paper clips and then, with its superhuman cognitive abilities, subsequently turns the whole material universe (all humans included) into paper clips (Bostrom 2014, p. 150). To us humans, such behavior does not seem very intelligent, because there is obviously more to our conceptual understanding of ‘intelligence’ than the ability to achieve a concrete operational goal particularly efficiently and ruthlessly. Bostrom’s understanding, however, reduces intelligence to just that, while social, emotional or moral aspects, to name but a few, are left out as constitutive aspects. This is perplexing insofar as a Superintelligence, by definition, is supposed to fully cover the level of human intelligence and even exceed it by far. A system that accepts as its sole goal the transformation of the universe into paper clips seems foolish and dumb to us, because a fully human intelligence, which is chosen as the benchmark in the first place, seems inseparable from a meaningful and balanced pursuit of goals.
This truncation of the concept of intelligence can also be seen in Bostrom’s Orthogonality Thesis, which states that the intelligence level of a system is independent of the goal it pursues. Goals and intelligence are orthogonal to each other; in other words, any goal, even a single one, can be combined with any intelligence level (Bostrom 2014, p. 130). However, this does not seem plausible against the background of a general understanding of human intelligence. Bostrom reduces intelligence to an instrumental function for achieving arbitrary goals, or even a single goal, whereas human intelligence is characterized by the ability to pursue multiple goals in a highly complex world and to weigh trade-offs when goals conflict with each other.
These considerations say nothing about the basic possibility of an AGI or Superintelligence and thus do not shed any light on the question of whether Strong AI is possible in principle. However, they make clear that one’s stance on the thesis of Strong AI depends, among other convictions, on the notion of intelligence being used. In addition, it becomes apparent that some apologists of this thesis, like Bostrom, work with a narrowed concept of ‘intelligence’, which allows the prefix ‘super’ to be applied to systems that could already be denied the predicate ‘intelligent’. However, if we shorten the concept of ‘intelligence’ in this
sense, which might be a legitimate move, then a Superintelligence, should it ever exist, might also be pretty dumb.

References
Bostrom, Nick (2014): Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Müller, Vincent C. / Bostrom, Nick (2014): Future progress in artificial intelligence: A poll among experts, in: AI Matters 1 (1), 9–11.

Searle, John (1980): Minds, brains, and programs, in: Behavioral and Brain Sciences 3 (3), 417–424.

Stapelfeldt, Ralf (2021): Einleitung Teil 2: Grenzen und Folgen Künstlicher Intelligenz, in: Strasser, Anne / Sohst, Wolfgang / Stapelfeldt, Ralf / Stepec, Katja (Eds.): Künstliche Intelligenz – Die große Verheißung. Series: MoMo Berlin, Philosophische KonTexte 8. Berlin: Xenomoi, 243–255.
