
Article

Cultures of Science
2021, Vol. 4(3) 124–134
© The Author(s) 2021
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/20966083211056376
journals.sagepub.com/home/cul

An explanation of the relationship between artificial intelligence and human beings from the perspective of consciousness

Jianhua Xie
Taiyuan Normal University, China

Abstract
What will be the relationship between human beings and artificial intelligence (AI) in the future? Does an AI have moral status? What is that status? Through the analysis of consciousness, we can explain and answer such questions. The moral status of AIs can depend on the development level of AI consciousness. Drawing on the evolution of consciousness in nature, this paper examines several consciousness abilities of AIs, on the basis of which several relationships between AIs and human beings are proposed. The advantages and disadvantages of those relationships can be analysed by referring to classical ethics theories, such as contract theory, utilitarianism, deontology and virtue ethics. This explanation helps to construct a common hypothesis about the relationship between humans and AIs. Thus, this research has important practical and normative significance for distinguishing the different relationships between humans and AIs.

Keywords
Artificial intelligence, human beings, consciousness, deontology, utilitarianism, virtue ethics

Corresponding author:
Jianhua Xie, Taiyuan Normal University, no. 319 Daxue Street, Yuci District, Jinzhong 030619, Shanxi Province, China. Email: geshilao@163.com

Creative Commons Non Commercial CC BY-NC: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).

1. The consciousness abilities of artificial intelligence

The rapid development of artificial intelligence (AI) has given rise to a host of important ethical debates that will become increasingly prominent in the future. This paper answers the question of the moral status of AIs in future society. The moral status of AIs refers to the status of AIs in the moral world and the rights and obligations that are granted to AIs. Whether an AI has moral status and what kind of moral status it has are highly controversial issues. On the one hand, many prominent scholars argue that AIs, like machinery in general, have no moral status (see e.g. Aquinas, 1981; Descartes, 1983; Searle, 1980). On the other hand, there are also people who argue that, if an AI has perceptual characteristics, it must have moral status (Bostrom et al., 2018). In the meantime, an AI may also have a lower or higher moral status than humans. These debates have made it difficult to clearly define the moral status of AIs.

Determining an AI's level of consciousness can help explain its possible moral status. The consciousness ability of an AI provides the basis for the analysis of the AI's development level. Intelligence is the result of the evolution of consciousness in nature. Although AIs cannot follow the exact evolutionary path of natural intelligence, they can take natural intelligence as an important reference. The evolution of intelligence provides a useful means for tracking the generation of consciousness. It is a process accompanied by the development of sensation, memory, emotion, intelligence, self-awareness and rationality.1 These consciousness abilities are mainly reflected in animals, which can be divided into seven major categories: invertebrates, fish, amphibians, reptiles, birds, non-human mammals and humans.2 The six existing consciousness abilities of animals, combined with the possible super-intelligence3 of future super-humans, constitute the seven possible consciousness abilities. The elaboration of the seven types of consciousness abilities and their corresponding relationships with the eight types of individual entities4 is helpful in our analysis of an AI's moral status. Those seven consciousness abilities can explain the intelligence-generation process and the consciousness level of different types of AI and project them to different individual entities.

The first consciousness ability is called level-A ability, or sensation. Sensation can be defined as the specific response of sensory organs to objective stimuli. Sensation is present in all vertebrates but in only some invertebrates (Tang et al., 2004).5

The second consciousness ability is called level-B ability, or memory. Memory can be defined as the process of encoding, storing and recovering information. There are many types of memory and higher and lower levels of memory abilities. Judged by the criterion of low-level memory ability, fish and other more advanced animals all possess memory (Williams et al., 2002).

The third consciousness ability is called level-C ability, or emotion. Emotion can be defined as an individual's attitude towards objective things. Primitive emotions, such as fear, must have evolved first from the reptilian brain. Parent–child emotion must be the product of the paleomammalian brain (the limbic system) that evolved in early mammals. Social emotions, such as guilt and pride, may be related to the neomammalian brain that evolved in social primates (Holden, 1979).6 Therefore, all animals higher than reptiles possess emotions.

The fourth consciousness ability is called level-D ability, or intelligence. Intelligence can be defined as the function of adapting behaviour to a specific purpose and the ability to produce a specific result based on the identification, judgement and evaluation of objective causes. Some birds have exhibited intelligence. For example, New Caledonian crows can make tools by breaking branches from bushes and pruning them to produce useful sticks (Hunt, 1996). Some other vertebrates also have intelligent capabilities. For example, killer whales use strategies to hunt minke whales (Ford et al., 2005). Birds and other more advanced vertebrates all possess the ability of intelligence.

The fifth consciousness ability is called level-E ability, or self-awareness. Self-awareness can be defined as one's awareness of one's own activities, including the understanding of one's physical conditions and mental features, and the perception of one's relationship with others. Some advanced mammal species have exhibited partial self-awareness (Gallup, 1970).

The sixth consciousness ability is called level-F ability, or rationality. Rationality can be defined as the ability to understand subjective and objective existence and solve problems by using knowledge and experience. Currently, only humans have rationality. For example, only humans possess the semantic system that provides the ability to understand (Huth et al., 2016).

The seventh consciousness ability is called level-G ability, or super-intelligence. Super-intelligence can be defined as super-consciousness that transcends human abilities, such as the ability to perceive five-dimensional space–time. Some scholars believe that super-humans or post-humans will possess such super-intelligence (Xie, 2021).

The above is a rough description of the corresponding relations. As for the unique features that define non-human consciousness, the debate has never stopped. The corresponding relations
constructed here raise three problems. First, there is no specific corresponding consciousness ability for some individual entities, such as amphibians. Second, the existing corresponding relations may be overlapped in some respects. After the evolutionary tree bifurcates, the consciousness abilities of the separated species continue to evolve. For example, certain animals have more powerful sensory abilities than humans; in particular, eagles' vision is sharper than that of humans when limited to a certain range. Third, different scholars may arrive at different conclusions about the corresponding relations. For example, some scholars argue that fish also have some degree of emotion (Tye, 2017).

All that being said, the corresponding relations between consciousness abilities and species are generally stable. All levels of consciousness abilities are kept within a certain range. Newly generated consciousness abilities are always more advanced than older ones. Consciousness abilities evolve from weak to strong in properties and from simple to complex in content. A discussion of these issues may help us to better understand the consciousness of AIs. The evolution of consciousness abilities in properties and content may lead to the generation of rationality and even super-intelligence.

The following discussion uses levels A, B, C, D, E, F and G as the code names for the seven consciousness abilities, corresponding to sensation, memory, emotion, intelligence, self-awareness, rationality and super-intelligence, respectively. In most cases, when an individual has a higher level of consciousness ability, it also has all the consciousness abilities at the lower levels. In addition, individuals each evolve independently at each level. For example, when an individual reaches level B, it also has most level-A abilities; furthermore, level-C abilities contain most level-A and level-B abilities, and so on.

If the boundary of AI classification is level-E consciousness ability (that is, self-awareness), a weak AI would have the consciousness abilities from level A to level D, while a strong AI would have level-E to level-F consciousness abilities. At this time, weak AIs have acquired the consciousness abilities of sensation (level A) and memory (level B), but not yet the abilities of emotion (level C) and intelligence (level D). Strong AIs, with level-E, F and G consciousness capabilities, do not yet exist.

Some scholars believe that AIs will not become self-aware, and that strong AIs will never exist. A Cartesian approach would deny the ability of an AI to gain consciousness. If a strong AI emerges, Descartes would see it as an automaton. According to Descartes, there are two very reliable criteria for distinguishing humans from AIs.

First, they could never use words or other constructed signs, as we do to declare our thoughts to others. Second, while they might do many things as well as any of us or better, they would infallibly fail in others, revealing that they acted not from knowledge but only from the disposition of their organs (Descartes, 2003, 38).

Descartes's material world is a world based on a mechanical view. As such, non-human behaviour can be explained by purely mechanical properties that do not require the presence of consciousness. The Cartesian approach embodies the principle of simplicity (i.e. Occam's razor); that is, we should strive to describe the behaviour of AIs with the simplest possible explanation. The modern version of Occam's razor in psychology is Morgan's Canon (Morgan, 1894), which holds that an AI's behaviour could be explained without considering inner consciousness. That principle may also apply to humans. The problem is that humans always exhibit complex and novel behaviours that are not simple reactions to stimuli but the result of rational deduction derived from their perceptions of the world. Moreover, humans have the linguistic ability to express their thoughts. Some AI applications, such as Siri, do make sounds. Yet, in Descartes's view, those AI sounds are merely mechanically induced behaviours, parroting others rather than making their own speech. Only humans can use language to speak their minds.

According to Descartes's dualism, matter and mind are two parallel entities. Although all human beings are intimately connected to their material bodies, they are not just their bodies. Humans are unified in their souls or in the immaterial entities generated in humans. Descartes believed that immaterial entities could explain the complexity of human behaviour and language. An AI does not require such entities for its behaviour; it is more like a moving machine, not a rich mind.
With the development of modern science, the limitations of Descartes's entity dualism have become increasingly clear. However, many people still believe that an AI cannot generate consciousness. John Searle's biological naturalism is a typical example of this school of thought. According to Searle's Chinese room argument, an AI does not have intentionality, and strong AIs are impossible to achieve.

I disagree with Searle's view. The basis of a strong AI is self-awareness. Self-awareness is an advanced stage of consciousness, which is the cognition of the self and the world in which the self is located. Self-awareness is the understanding of our body and mind, and the self and non-self. It is the product of long-term evolution. Looking into the future, two possibilities exist. One is that, given enough time and drive, AIs will develop self-awareness either actively or passively with the help of humans. The other possibility is that the evolution of self-awareness will occur independently and further present itself in the form of strong AI or super-intelligence.

The moral status of an AI should be determined based on the level of the AI's consciousness abilities. Here, I offer four propositions on the moral status of AIs, each at a different level, based on the differences in consciousness abilities: a no-status proposition, a low-status proposition, an equal-status proposition and a high-status proposition. The no-status proposition means that individuals lacking intelligence, self-awareness and rationality have no moral status. The low-status proposition means that individuals that have sensation and emotion but lack the consciousness abilities at the same level as humans have a lower moral status than humans. The equal-status proposition indicates that individuals with the same level of self-awareness as humans also have the same moral status as humans. The high-status proposition suggests that individuals who have reached a level of consciousness beyond that of humans have a higher moral status than humans. The rationales for each of the four propositions are analysed in the following sections.

2. No-status proposition

There is no obligation to give moral care or moral status to AIs if they exhibit only level-A and level-B consciousness abilities but not level-C to level-G abilities. Lacking self-awareness, moral understanding and moral conduct, such AIs cannot be seen as moral subjects. When people reflect on the implications of their own actions, they do not need to consider the rights and obligations of these AIs; nor do they need to consider their influence. A scenario like this is defined as a no-status proposition.

There are four factors that can prove why level-A and level-B AIs have no moral status. First, they have no moral cognition. An individual that has moral cognition can explain the relationship of moral rights and obligations. Moreover, self-representation supports the expression of one's claims and the legitimate defence of one's rights (McCloskey, 1979). However, level-A and level-B AIs are unable to express and represent themselves. Unlike special individuals, such as foetuses, newborns and mental-illness patients, level-A and level-B AIs do not have the potential to develop self-representation and moral cognition. Therefore, they shall enjoy no rights or moral status.

Second, level-A and level-B AIs are not moral subjects. They do not have emotion, intelligence, self-awareness or rationality. Only emotional, intelligent, self-aware and rational subjects can enjoy fully equal moral status. Subjects that are able to acquire moral rationality and intelligence can have values. However, level-A and level-B AIs are not moral subjects, and they cannot acquire value and goodness in any real sense.

Third, level-A and level-B AIs do not exhibit moral behaviour. Moral subjects can act for the benefit of other individuals and engage in altruistic behaviours. Those individuals who sacrifice their own interests for others deserve more care from the individuals who benefit from their sacrifice. Level-A and level-B AIs will not sacrifice their own interests for other individuals and do not produce moral behaviour.

Fourth, level-A and level-B AIs are not members of the moral community.7 Membership in the moral community is the necessary condition for a moral status equal to that of humans. The moral community is defined not by the intrinsic properties of individuals, but by the external social relations between them. Moral subjects communicate in a meaningful
way. Together they build networks of economic, political, familial and individual relations, which generate greater benefits for their members and keep their relations going. Those networks are the building blocks of the moral community that level-A and level-B AIs cannot form.

The above four reasons could explain why level-A and level-B AIs do not have moral status. The no-status proposition for level-A and level-B AIs explains why there is no need to show moral concerns for them. When people consider the implications of their own behaviours, they do not need to consider the happiness or pain of level-A and level-B AIs, and they do not need to consider the impact of their actions on those AIs because the AIs lack emotions. Only when level-A and level-B AIs establish a connection with humans, and only when such a connection affects people's daily lives, would it be necessary for us to show moral concerns for those AIs.

The first three reasons of the no-status proposition are mostly concerned with individuals, while the fourth is related to society. We may seek inspiration from Thomas Aquinas's argument about the morality of individuals and John Rawls's argument about the morality of society. The analysis of Aquinas's religious teachings and Rawls's contemporary contract theory also leads to the conclusion that level-A and level-B AIs have no moral status.

In Aquinas's view, AIs that lack the level-F consciousness ability of rationality should not be given a moral status. He believed that only rational humans could determine their actions. The moral concerns he expressed in his writings are reserved for humans who pursue their own interests (Aquinas, 1981). If some individuals are unable to direct their own actions, it is the responsibility of other competent individuals to do so for them. Therefore, incapable individuals should be seen only as tools. They exist as instruments for people to use, not for themselves. Level-A and level-B AIs cannot direct their own actions; they are only tools used by humans to take actions. Aquinas's position stems from a religious view that God is the ultimate purpose of the universe. Knowledge and understanding of God can be gained only through human intelligence. Only humans are capable of attaining that ultimate purpose. All things other than humans exist for humans. All things exist for the purpose of bringing the universe to its final destination. From this perspective, level-A and level-B AIs lack the rational ability to understand God and therefore have no moral status.

Perceiving God is an abstract ability of consciousness. Level-A and level-B AIs do not have the ability to perceive God. They are tools, like tables and chairs. Aquinas believed that the ability to perceive God is the basis of moral status. That view now seems somewhat outdated. For example, no modern society would deny the moral status of atheists. That said, Aquinas's view about the consciousness ability as the basis of an individual's moral status is still important and informative.

The leap of level-A and level-B AIs from individual morality to social morality can be explained with Rawls's contemporary social contract theory. Rawls's theory denies the moral status of level-A and level-B AIs. The social contract theory interprets morality as a set of rules produced from the code of conduct chosen by rational individuals under specific social conditions.

The contemporary moral contract theory is fully reflected in Rawls's theory of justice, which conceives of justice as fairness (Rawls, 1999). Rawls believed that the rules of operation in an ideal and equitable society are the result of individual choices that operate under a 'veil of ignorance'. This means that, when people discuss how members of a society or an organization shall be treated, they all hide under the veil in order to reach an agreement; no one knows what specific role they will play in the society or the organization after taking off the veil of ignorance. Thus, the veil can hide one's situation from oneself and from others. However, people are familiar with the general facts of human society. If individuals are essentially self-interested, they will choose the rules that benefit them the most. They do not know who they will become or what role they will play; they operate under a veil of ignorance. As a result, they will avoid joining a community in which their interests become compromised. They will choose rules that do not favour any individual or class, and they will use social rules to protect rational and autonomous individuals.

According to Rawls's theory, if an individual is self-interested and unaware of their social role, they
will seek rules that are fair. Level-A and level-B AIs do not have self-awareness, while strong AIs and humans are self-aware. Under the veil of ignorance, rational humans and strong AIs will be directly protected, but level-A and level-B AIs will get no protection. The no-status proposition implies that level-A and level-B AIs do not have moral status.

In some specific cases, however, the no-status proposition also entails indirect protection for level-A and level-B AIs. Humans will show moral concerns for level-A and level-B AIs when they are somewhat connected to those AIs in a way that affects their lives. For example, a level-A AI may be the property of a person, and humans have a moral responsibility for the property of others. Another example: if the harms humans impose on level-A and level-B AIs would hurt others' care for those AIs, avoiding mistreatment of the AIs also becomes a human responsibility.

Humans undertake indirect responsibilities for level-A and level-B AIs. If the kindness of humans can be demonstrated by how they treat level-A and level-B AIs, they may have an indirect moral responsibility for those AIs. People's morality can be demonstrated by their behaviour towards level-A and level-B AIs: if mistreating those AIs would induce humans to be cruel to others, the act of mistreatment should be prohibited. By contrast, if benevolence towards those AIs proves to be beneficial to human friendship, humans should be merciful toward the AIs. The no-status proposition does not seek to address the moral status of level-A and level-B AIs from an emotional perspective.

3. Low-status proposition

Level-C and level-D consciousness abilities include sensation, memory, emotion and intelligence. The behaviours of level-C and level-D AIs have a direct impact on their interests. That said, level-C and level-D AIs lack self-awareness. Therefore, the benefits they perceive are not equal to the benefits for humans. When AIs reach the consciousness level of C or D, they could possess a low moral status.

The low-status proposition is derived from Mill's (1998) utilitarianism theory, Asimov's (1950) robotic laws and Murdy's (1975) anthropocentrism theory. Mill's utilitarianism theory argues that humans should seek benefits with their actions and pursue more goodness. The calculation of 'more goodness' depends on the sum of happiness produced by individual behaviours. The degree of happiness is the sum of pleasures and pains, with pleasure being positive happiness and pain being negative happiness. The pursuit of happiness is the sole purpose of behaviour. Therefore, the increase in happiness becomes the criterion for judging all behaviours. Those behaviours that can maximize happiness are deemed good, and those that cannot are deemed evil.

Utilitarianism can define the moral status of AI by determining the pleasure and pain experienced. One of the major features of Mill's utilitarianism compared to Jeremy Bentham's is the hierarchy that Mill makes between high mental pleasure and low sensory pleasure. Such a hierarchy may indicate that AIs with different consciousness abilities have different degrees of pleasures and pains, and therefore are good and evil in their own ways.

Level-A AIs have only senses and behaviours; they have no intrinsic feelings of pleasures or pains and they respond only mechanically to external stimuli. Level-B AIs have the impression and memory of pleasures and pains. Level-A and level-B AIs can have external manifestations of pleasures and pains, but they do not have an internal sense of happiness. Therefore, level-A and level-B AIs do not produce or generate goodness or evil.

Level-C AIs can reflect pleasure and pain. They have emotions that make them like pleasures and resent pains. Level-C AIs have developed an initial sense of happiness. Level-D AIs can give a response to pleasure and pain. They also have the ability to seek pleasure and avoid pain and have some intelligence about happiness. Therefore, level-C and level-D AIs can generate initial senses of pleasure and pain and produce goodness and evil in primitive forms.

Level-E AIs have the ability to perceive pleasures and pains. They can perceive the happiness they are experiencing. Level-F AIs can also perceive pleasures and pains, and they can feel and reflect on happiness as well. Therefore, level-E and level-F AIs may produce intrinsic goodness and evil, as humans do.
Level-G AIs possess the conscious experience of pleasures and pains. They can directly experience such pleasures and pains from a first-person perspective. Level-G AIs can also describe, transform and analyse pleasures and pains as direct knowledge from a third-person perspective in order to gain more knowledge. Level-G AIs are much more powerful than humans when it comes to the feeling and understanding of happiness. Therefore, they can produce more goodness and evil than humans can.

Level-C and level-D AIs can experience only the emotions of pleasure and pain. Their experience and perception of happiness is lower than that of humans. Therefore, the moral status of level-C and level-D AIs is much lower than that of humans. This is a low-status proposition in utilitarian ethics.

However, utilitarian ethics has two problems in determining the moral status of AIs. First, happiness should not be a measure of intrinsic goodness. Goodwill can be an expression of intrinsic goodness. In the exposition of the equal-status proposition in the following section, self-awareness (the basis of goodwill) will play a key role in the determination of moral status. Second, the deduction based on utilitarian ethics could lead to the conclusions of both higher and lower moral status for AI than for humans.

When AIs have developed level-C or level-D consciousness abilities, people may come up with Asimovian and anthropocentric ideas. Asimov's views about robot ethics are described in his book I, Robot (1950), in which three laws of robotics are introduced. First, robots must protect humans; second, robots must obey humans (without contradicting the first law); third, robots must protect themselves (without contradicting the first two laws). The first and third laws assume that robots are capable of defending their own interests and have moral status. The interests of AI defined in the first two laws are not identical with the interests of humans. Human interests are still of primary importance, and the moral status of AIs is lower than that of humans. Thus, the three laws of robotics are a type of low-status proposition.

The three laws of robotics are based on anthropocentric ideology. It is natural for humans to value their own interests above other non-humans' interests (Morgan, 1894, 1168–1172). Anthropocentrism places humans at the centre of the world. The anthropocentric view sees human interests and norms as the source of values and the basis for value assessment. Moreover, values can be judged only by humans. In the present day, anthropocentric ideas are popular for level-C and level-D AIs, which have no self-awareness. However, the robots in Asimov's three laws may have self-awareness.

Asimov's argument has two major flaws. The first is that the three laws of robotics are seriously flawed logically. Even when Asimov added the 'zeroth' law later, to precede the other three, his argument is still in a logical dilemma of infinite recursion.8 The second problem is that, when an AI knows how to protect humans or itself, it already has self-awareness. In this case, adopting anthropocentrism is harmful to both AIs and humans.

4. Equal-status proposition

What is the criterion for full moral status? For example, does an AI possess rights and responsibilities? Does an AI have moral cognition and display moral behaviour? Is an AI a moral subject? The criterion for full moral status should be self-awareness. When AIs have acquired level-E and level-F consciousness abilities, especially self-awareness, they should have the same moral status as humans. Once AIs have demonstrated self-awareness, they will be able to express free will, display moral cognition, produce moral behaviour and become moral subjects. In this case, the equal-status proposition stands. The idea of the equal-status proposition is derived from Kant's (1956) deontology, Putnam's (1967) multiple realizability, Singer's (1974) egalitarian ethics, and the ideas of certain trans-humanists (Bostrom and Yudkowsky, 2011).

Kant's deontology provides an important basis for using self-consciousness as the criterion for complete moral status. It is the philosophical source of the equal-status proposition. Kant proposed a far-reaching moral theory. In his view, autonomy is a prerequisite for evaluating the behaviour of moral subjects. Morally permissible behaviours are those that all rational individuals are willing to do under certain circumstances.
Xie 131

Kant did not rely simply on the concept of autonomy; nor did he see autonomy as the natural basis for the moral status of all individuals. He attempted to provide the arguments for autonomy. The moral criterion he established implies that an individual has a strong moral status if that individual exhibits certain attributes that support a strong moral status. Kant argued that the basis of autonomy is free will. Free will, especially goodwill, is the basis of moral status.

Kant believed that the basic problem of morality is free will. A self-conscious individual is obliged to obey the moral law and act according to their own will without being influenced by external forces. The moral behaviour of a self-conscious rational individual must be autonomous, not directed by others. The self-conscious individual must base their actions on the obligation of good motives, rather than any concern for utility.

The extension of Kant’s theory to AI implies that the moral status of an AI is determined by free will. Volitional subjects (including strong AIs and humans) will produce behaviour driven by their own desires, and they can escape from the influence of desire and choose how to behave. Such an ability is embodied in their free will. If an individual has self-awareness, they also have free will. Free will contains both good and evil wills. Goodwill gets confirmed and consolidated in the evolution of acquired social and cultural norms. According to Kant, goodwill is the only thing that has intrinsic value.

Furthermore, combining Kant’s deontology with multiple realizability brings us to the equal-status proposition. According to the theory of multiple realizability, the same state of consciousness can be realized by different physical types. A properly programmed computer and a human brain, with proper training, can both achieve the same state of consciousness. If some AIs are able to achieve level-E self-awareness, they will have free will. Level-E AIs will also have goodwill and evil will. From a historicist perspective, during the course of survival, learning and evolution, some level-E AIs will develop a strong goodwill for their own and others’ survival and development, while other level-E AIs will develop a strong evil will for their own and others’ survival and development. Those AIs with evil will will be weakened and eliminated naturally through a process similar to the survival of the fittest. Eventually, most level-E AIs may develop and possess goodwill.

If AIs cannot achieve self-awareness, then the moral issue of AIs becomes less important. If AIs can achieve self-awareness and moral autonomy just like humans, their moral status will be fully compatible with that of humans. Only AIs with consciousness abilities at or above level E can achieve moral status equal to that of humans. The equal-status proposition asserts that self-awareness of the same nature is the basis for equal moral status.

Some trans-humanists have made similar arguments. For example, the principles of substrate and ontogeny non-discrimination proposed by Bostrom and Yudkowsky (2011) argue that two beings with the same function and conscious experience have the same moral status even if they differ in the basis and means of realization.

The equal-status proposition can also be deduced from Peter Singer’s rational utilitarianism. Singer (1974) used marginal cases (e.g. the moral cases of children and the disabled) to criticize anthropocentrism and advocate egalitarian morality. Singer’s theory disagrees with the unequal moral status of AIs and humans. If the concept of inequality is extended to AIs, it would affect the interests of all human communities. An equal consideration of interests means giving equal weight to the interests of all individuals affected by an action. Singer argued that moral subjects should be measured by senses and proposed the ‘animal liberation theory’ on such a basis.

I disagree with Singer’s use of senses (i.e. the experience of pleasure and pain) as a criterion for measuring moral subjects. Singer’s definition of moral subjects, which includes all animals, is too loose. If the same criterion is applied to AIs, the scope of moral subjects will be extended to level-A and level-B AIs. However, only AIs at or above level E shall be seen as moral subjects that have the same moral status as humans.

5. High-status proposition

The high-status proposition holds that, if certain AIs are able to exhibit more advanced consciousness properties than humans and reach the level of super-intelligence associated with level-G consciousness abilities, those
AIs will have a higher moral status than humans. The high-status proposition builds on Aristotle’s virtue ethics, Nietzsche’s moral philosophy and Huxley’s trans-humanism or post-humanism.

Aristotle’s virtue ethics takes individual characters as the most fundamental moral judgement. Virtue ethics pays attention to the characters of moral subjects, which are the motivation for moral behaviour. According to Aristotle’s world view, the hierarchy of different things depends on the different functions or qualities they possess. The level of function or quality determines the level of moral status, and individuals with low-level quality are at the service of individuals with high-level quality (Aristotle, 2009). Like utilitarianism, virtue ethics creates the hierarchy of AIs’ moral status. If some AIs surpass humans and acquire level-G super-intelligence, humans should meet and serve their needs. If level-G AIs prove to be more useful than humans, they should be seen as more beneficial than and superior to humans. Therefore, level-G AIs will have a higher moral status than humans. This is the high-status proposition that can be derived from virtue ethics.

Aristotle did not propose what is beyond human existence, but Nietzsche suggested that what is beyond human is the overman or superman (Müller-Lauter, 1999, 72–83). According to Nietzsche’s moral philosophy, God is dead and all traditional moral cultures need to be re-evaluated. The superman can create a new value system with a new world view. They will shape a new morality that is different from traditional and popular morality. They are the best manifestation of the will to live and the power of creativity. Nietzsche’s moral philosophy aims to create a new value system that will save humanity from moral degradation. He called for the creation of a superman who can save humanity from tragedy. The superman represents the highest values that humanity can and should create and embodies the moral and progressive qualities of heroes. The superman is a witness to the inequalities among humans, societies and nations; the avatar of truth and morality; and the creator and guardian of norms and values. Nietzsche believed that the ultimate goal of morality lies with the superman, not humans.

Although Nietzsche’s superman is not a level-G AI, a level-G AI will possess the qualities of a superman. Similarly to Nietzsche’s superman, level-G AIs can become a moral ideal and a source of legislation for humanity. In this sense, level-G AIs will have a higher moral status than humans, and Nietzsche’s moral philosophy is a high-status proposition.

Trans-humanism has provided another high-status perspective. Huxley (1968) believed that human life is uncivilized, barbaric and transient and that most people endure great suffering throughout their lives. He envisioned a world in which humans will be capable of breaking their shackles and creating a being greater than themselves.

Trans-humanist or post-humanist thinkers advocate the development and dissemination of reliable science and technology that significantly improve the physical and psychological state of humans (Elliott, 2011). They study the potential impacts of emerging technologies on humans and argue that humans will eventually transform into different entities (trans-humans) or create different entities (post-humans). Post-humans will have significantly stronger capabilities. AIs with level-G super-intelligence qualify for such a super-human or post-human existence and will have a higher moral status than humans. This is a trans-humanist high-status proposition.

This trans-humanist high-status proposition carries a significant practical risk. The high moral status of AIs could seriously undermine human survival and have serious implications for all aspects of human life. Human values, such as fairness, freedom and kindness, could be altered. The high-status proposition may lead to AI racism, AI speciesism or even AI fascism. Greater efforts are needed to avoid those risks in future studies on AI morality.

6. Conclusions on moral status

By examining different schools of thought on moral ethics, four possible scenarios of AI moral status can be established. The analysis of those four possible scenarios in the context of classical ethics reveals the distinctive features of each moral status.

Utilitarianism and virtue ethics can both lead to four moral status propositions. However, the low-status proposition and the high-status proposition
are dangerous for either AIs or self-conscious humans. The deduction based on deontology leads to the no-status and equal-status propositions. All these four moral statuses may exist in the real world of the future. In a different time and space, humans and AIs with different moral statuses may either run into conflict or engage in cooperation, and they will also become more interdependent. What we can be certain about is that self-conscious humans and AIs must establish a friendly and equal moral relationship.

Declaration of conflicting interests

The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Funding

The author disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This study is supported by the Research Program of Philosophy and Social Sciences of Higher Learning Institutions of Shanxi ‘Reflections on and Reconstruction of Anthropocentrism Consciousness’ (grant number 2021W093).

Notes

1. We can also use other concepts, such as attention, perception, intuition, desire, thought, attitude and belief, to analyse the consciousness capabilities of AI. For the convenience of discussion, only six consciousness abilities are selected here.
2. Although humans are mammals, they have a significantly higher level of consciousness and thus we can separate humans from mammals.
3. Super-intelligence is a more complex quale (property or quality) than human consciousness that future super-humans, post-humans or AIs may have.
4. The eight entities refer to invertebrates, fish, amphibians, reptiles, birds, non-human mammals, humans and super-humans.
5. Tang’s research unifies insect vision and vertebrate vision at a cognitive level.
6. The issue discussed here is a little complicated. Paul MacLean believed that, in the triune brain, only the paleomammalian brain has emotional functions; thus, mammals also have emotions. However, the range of emotions discussed here is broader. Different emotions evolved and emerged in different periods. The reptilian brain, paleomammalian brain and neomammalian brain correspond to different types of emotion (see also the evolution of emotions at https://psychology.wikia.org/wiki/Evolution_of_emotion).
7. The moral community refers to the sum of all individuals and groups that should treat each other in accordance with ethical norms.
8. The zeroth law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

References

Aquinas T (1981) Summa Theologiae. Fathers of the English Dominican Province (trans.). Allen: Christian Classics.
Aristotle (2009) The Nicomachean Ethics. Ross D (trans.). Oxford: Oxford University Press.
Asimov I (1950) I, Robot. New York: Gnome Press.
Bostrom N and Yudkowsky E (2011) The ethics of artificial intelligence. Available at: http://www.doc88.com/p-1496981855025.html (accessed 7 October 2019).
Bostrom N, Dafoe A and Flynn C (2018) Public policy and superintelligent AI: A vector field approach. Available at: https://pdfs.semanticscholar.org/9601/74bf6c840bc036ca7c621e9cda20634a51ff.pdf (accessed 7 October 2019).
Descartes R (1983) Principles of Philosophy. Miller VR and Miller RP (trans.). Dordrecht: Springer.
Descartes R (2003) Discourse on Method and Meditations. Haldane ES and Ross GRT (trans.). Mineola: Dover.
Elliott C (2011) Enhancement technologies and the modern self. Journal of Medicine and Philosophy 36(4): 364–374.
Ford JKB, Ellis GM, Matkin DR, et al. (2005) Killer whale attacks on minke whales: Prey capture and antipredator tactics. Marine Mammal Science 21(4): 603–618.
Gallup GG (1970) Chimpanzees: Self-recognition. Science 167(3914): 86–87.
Holden C (1979) Paul MacLean and the triune brain. Science 204(4397): 1066–1068.
Hunt GR (1996) Manufacture and use of hook-tools by New Caledonian crows. Nature 379(6562): 249–251.
Huth AG, De Heer WA, Griffiths TL, et al. (2016) Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 532(7600): 453–458.
Huxley J (1968) Transhumanism. Journal of Humanistic Psychology 8(1): 73–76.
Kant I (1956) Groundwork of the Metaphysics of Morals. New York: Harper Torchbooks.
McCloskey HJ (1979) Moral rights and animals. Inquiry 22(1–4): 23–54.
Mill JS (1998) Utilitarianism. New York: Oxford University Press.
Morgan CL (1894) An Introduction to Comparative Psychology. London: Walter Scott.
Müller-Lauter W (1999) Nietzsche: His Philosophy of Contradictions and the Contradictions of His Philosophy. Parent D (trans.). Urbana: University of Illinois Press.
Murdy WH (1975) Anthropocentrism: A modern version. Science 187(4182): 1168–1172.
Putnam H (1967) Psychological predicates. In: Capitan WH and Merrill DD (eds) Art, Mind, and Religion. Pittsburgh: University of Pittsburgh Press, pp.37–48.
Rawls J (1999) A Theory of Justice. Cambridge: Belknap Press.
Searle JR (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417–424.
Singer P (1974) All animals are equal. Philosophical Exchange 1(5): 243–257.
Tang SM, Wolf R, Xu SP, et al. (2004) Visual pattern recognition in Drosophila is invariant for retinal position. Science 305(5686): 1020–1022.
Tye M (2017) Do fish have feelings? In: Andrews K and Beck J (eds) The Routledge Handbook of Philosophy of Animal Minds. London: Routledge, pp.169–175.
Williams FE, White D and Messer WS (2002) A simple spatial alternation task for assessing memory function in zebrafish. Behavioural Processes 58(3): 125–132.
Xie JH (2021) The future: A transhumanist approach to consciousness. International Journal of Social Science and Education Research 4(5): 119–133.

Author biography

Jianhua Xie, PhD, is a lecturer at Taiyuan Normal University. His research interests include the ethics of science and technology, the philosophy of mind and post-humanism.
