
Artificial Intelligence and Moral Agency
Nuclear energy taught us how difficult it is to control the technological genie
once it is out of the bottle.
If we believe in Murphy's law, "Anything that can go wrong will go wrong,"
then would it be wise to ban the development of super-intelligent machines to
their fullest potential?
In this unit, we discuss one of the main ethical issues related to artificial
intelligence. Can artificial intelligence be a moral agent?

Moral Agent vs. Moral Subject

A moral agent is a conscious being that is responsible for its actions.

A moral subject is an entity that is entitled to be treated in a certain way,
that is, it is entitled to moral protection.
Not all moral subjects are moral agents. For example, a two-year-old baby is
a moral subject. That is, the baby is entitled to be under our moral
protection, but we don't think that a two-year-old baby can be fully
responsible for his/her acts. Therefore, we don't think that the baby is a
moral agent.

Your cat might be a moral subject, but you don't think that your cat is
morally responsible for what it does, so your cat is not a moral agent.

Many things are neither moral agents nor moral subjects. For example, your
car by itself is neither a moral agent nor a moral subject. If someone
damages your car, your moral rights as the owner of the car have been
violated, but we don't think that any moral right of the car itself has been
violated.


The Moral Status of Humanoid Robots


The first issue with regard to ethics and artificial intelligence is the moral
relationship between human beings and artificial intelligence. In order to
discuss this relationship, we need to clarify the moral status of humanoid
artificial intelligence such as Sophia the Robot. Super-intelligent humanoid
robots engage with us socially, and in the near future human beings will
interact with robots in their everyday lives. What would be our moral
relationship with those robots?
The question about the moral status of artificial intelligence (humanoid
super-intelligent machines) is important for several reasons. First of all, do
fully autonomous and highly sophisticated robots that are nearly
indistinguishable from humans have their own moral rights? Should human
beings be ethically responsible for humanoid robots? Is it acceptable to use
them as a compulsory labor force? Is it acceptable to treat robots in ways
normally considered unacceptable? Who would be responsible if a robot fails
and people get hurt? Can we rely on our robots to distinguish between right
and wrong, morally speaking?
With regard to the moral status of artificial intelligence, there are two main
questions:
• Is a humanoid robot, like Sophia the Robot, a moral agent?
• Is a humanoid robot, like Sophia the Robot, a moral subject?
A moral agent is a conscious being that is responsible for its actions, and a
moral subject is an entity that is entitled to moral protection, that is, a
being that has moral rights. In this unit, we focus on the first question: Is a
humanoid robot a moral agent? Can humanoid robots be ethically responsible
to human beings and other machines? Is it possible for a machine to conform
to the kind of moral norms that we require of any moral agent? Is Sophia the
Robot a moral agent? If it appears to be a moral agent, is it a "genuine moral
agent," or does it merely behave as if it were one?

Moral Agency and Personhood


We can discuss the relationship between moral agency and artificial
intelligence based on the concept of personhood. Personhood is the
necessary and sufficient condition for being a moral agent: all persons are
moral agents. If we think that Sophia the Robot is a person, then Sophia the
Robot is a moral agent. As discussed, the core of personhood is self-
consciousness. A person is a moral agent because persons are beings that
are capable of reflecting on themselves. For example, a person is an agent
who is capable of thinking, "I should not have done that." This is a thought
that only a moral agent can have.

Is Sophia the Robot a moral agent? On this view, to be a moral agent, the
agent needs to have personhood, and an entity is a person if and only if it
has self-consciousness. Therefore, Sophia the Robot is a moral agent if and
only if it is a person, that is, if and only if it has self-consciousness. So, in
order to answer the question, we first need to answer the question of
whether Sophia the Robot has self-consciousness. To address this moral
problem, we need to discuss the concepts of consciousness and
self-consciousness in artificial intelligence. Having self-consciousness means
having an internal understanding of 'I,' or "the consciousness of myself." If
we think that artificial intelligence cannot have consciousness, then
consequently we think that it cannot have self-consciousness; therefore,
artificial intelligence cannot be a moral agent. For example, according to
John Searle, machines merely use syntactic rules to manipulate symbol
strings, but they have no understanding and no consciousness. On this
account, Sophia the Robot does not have consciousness or self-consciousness,
so it is not a person and therefore not a moral agent. Even if it acts like a
moral agent, that does not mean it is a genuine moral agent.
The Three Requirements

Some scholars argue that in order to be a moral agent, an agent does not
necessarily need to be a person. John P. Sullins writes: "It is not necessary
for a robot to have personhood in order to be a moral agent" (Sullins, 2011,
pp. 151-162). Sullins maintains that for a robot to be a moral agent, the
robot needs to meet three requirements:
• Is the robot significantly autonomous? The robot needs to have free
will. Autonomy is achieved when the robot is significantly autonomous
from any programmers or operators of the machine.
• Is the robot's behavior intentional? The robot needs to have
intentionality. Intentionality is achieved when one can explain the
robot's behavior only by ascribing to it 'intentions' to do good or harm.
• Is the robot in a position of responsibility? Robot moral agency
requires the robot to behave in a way that shows an understanding of
responsibility to others.
If our answer is 'yes' to all three of these questions, then the robot is a
moral agent (Sullins, 2011).
Based on the three requirements, there are four possible views on the moral
agency of robots:
• Robots are not moral agents now but might become moral agents in the future.
• Robots are incapable of becoming moral agents, now or in the future.
• Human beings are not moral agents, but robots are.
• A robot is not a full moral agent in the way human beings are; however,
it could have a kind of moral agency. We can discuss the moral agency of
robots on the basis of a different understanding of the three
requirements (Sullins, 2011).

The First View


Robots are not moral agents now but might become moral agents in the future.
Daniel Dennett, in his essay "When HAL Kills, Who is to Blame?" (1998),
argues that no machine now has the three characteristics (the three
requirements), but we might have machines with those characteristics in the
future (Sullins, 2011, p. 155).

The Second View


Robots are incapable of becoming moral agents, now or in the future.
Selmer Bringsjord argues that robots will never have autonomous free will
since they can never do anything that they are not programmed to do. In
order to be morally responsible, a robot needs to be able to choose between
two options (a morally bad one and a morally good one). If a robot has been
fully programmed, then it is the programmer who ultimately makes the
decision and chooses one of the two options. So, robots cannot have free
will, and therefore robots don't have autonomy (Sullins, 2011, pp. 156-157).
"The only way that [a robot] can do anything surprising to the programmers
requires that a random factor be added to the program, but then its actions
are merely determined by some random factors, not freely chosen by the
machine, therefore the robot is no moral agent" (Sullins, 2011, p. 156).
Scholars who argue that robots can be moral agents try to reject this second
view by pointing out that all human beings have been programmed by nature
and nurture (genes, culture, education, etc.). So, if we take Bringsjord's
argument seriously, then even human beings are not moral agents, because
we are not fully autonomous in that sense either.

The Third View


We are not moral agents, but robots are.
Joseph Emile Nadeau argues that only agents whose actions are free actions
are moral agents, and an action is a free action if and only if it is based on
reason (logical theorems). Robots are programmed entirely on the basis of
logic, while human beings' actions are not fully based on logic (Sullins, 2011,
p. 156). So, only robots are moral agents, and human beings are not true
moral agents.

The Fourth View


A robot is not a full moral agent in the way human beings are; however, it
could have a kind of moral agency. We can discuss the moral agency of robots
on the basis of a different understanding of the three requirements.
Luciano Floridi and J. W. Sanders argue that we don't need to base our
discussion of moral agency on the concepts of free will and intentionality,
since these are debatable concepts in the philosophy literature that are
inappropriately applied to artificial intelligence. If an agent interacts with its
surrounding environment and its programming is independent of the
environment and its programmers, then we can maintain that the agent has
its own agency. If such an agent causes any harm, we can logically ascribe a
negative moral value to it (Sullins, 2011, p. 157). Based on this fourth
position, we can offer arguments for the existence of (a kind of) moral
agency in robots.
Sullins argues that we need to revise our interpretations of the three
requirements for being a moral agent: autonomy, intentionality, and
responsibility.
The first question (about autonomy) asks if the robot could be seen as
significantly autonomous from any programmers, operators, and users of the
machine. What do we mean by the term ‘autonomy?’ There are two different
meanings:
• Philosophical autonomy: So far, we have interpreted 'autonomy' based
on the philosophical conception of autonomy. An agent is an
autonomous agent if its actions are truly its own and are not (even
partly) caused by any factor outside of its control. This requires
absolute free will. Are human beings fully autonomous in that sense? If
not, are we real moral agents? Based on this definition of autonomy, we
are not even sure that human beings are autonomous.
• Engineering autonomy: The machine is not under the direct control of
any other agent or user; the robot must not be a telerobot. If the
robot has this level of autonomy, then it has a practical, independent
agency.
So, Sullins argues that if machines have engineering autonomy, then
machines are capable of being moral agents. But is engineering autonomy
sufficient for being a moral agent?

The second question (the second requirement) is about intentionality. To be
morally responsible for its acts, the agent necessarily needs to intend to
act, that is, the agent needs to have intentionality. Do robots have
intentionality? It might be difficult to say that robots can have
intentionality in that sense. But what if we consider a weak sense of
intentionality? "If the complex interaction of the robot's programming and
environment causes the machine to act in a way that is morally harmful or
beneficial, and the actions are seemingly deliberate and calculated, then the
machine is a moral agent." So, Sullins argues that if machines seemingly
have intentionality, then machines are capable of being moral agents
(Sullins, 2011, p. 158).

The third question (the third requirement) is about responsibility. Based on
the third requirement, if a robot behaves in such a way that we can only
make sense of its behavior by assuming it has a responsibility to others, and
we can ascribe to it the 'belief' that it has this responsibility, then the
robot meets the third criterion. But can we ascribe beliefs to robots? Sullins
argues that the beliefs do not have to be real beliefs. We don't know whether
machines have consciousness or not, so we cannot ascribe real beliefs to
machines. However, we might be able to ascribe a kind of belief: if a robot
behaves in such a way that we can only make sense of its behavior by
assuming it has a responsibility to others, then we can ascribe "a kind of
belief" to the robot (Sullins, 2011, p. 159).
Consider this example. Sullins argues that if a caregiver robot is not under
the control of any other agent (it has engineering autonomy)
and seemingly has intentionality and behaves in such a way that we can
only make sense of its behavior by assuming that it has a responsibility to
others, then we can maintain that the robot is a moral agent, though not in
the sense of agency we ascribe to human beings.

Artificial Intelligence Is a Moral Entity

Deborah G. Johnson argues that AI is not a moral agent, but it is a moral
entity. We cannot say that robots are capable of being moral agents,
because they don't have free will in the same way human beings do.
However, it is not the case that AI falls outside the realm of moral
responsibility: AI is in the realm of morality. Robots are not moral agents,
but they are part of the moral world. Johnson argues that robots don't have
intentionality like human beings, but they have intentionality in a sense
(Johnson, 2011, pp. 199-200). Robots have built-in intentionality: computers
and robots are created by human beings as a result of their intentionality.
"Computers have intentionality, but the intentionality put into them by the
intentional acts of their designers. The intentionality of artifacts is related
to their functionality" (Johnson, 2011, p. 201). A robot (AI) does not have
mental states and intentionality by itself, but it has built-in intentionality
because it has been poised by human beings to behave in certain ways in
response to certain situations (Johnson, 2011, p. 202). "The intentionality of
computers are related to the intentionality of the designer and the
intentionality of the user" (Johnson, 2011, p. 201).
Once they have been produced, robots and computers have built-in
intentionality and are able to act independently, without human
intervention. "The intentionality of computer systems means that they are
closer to moral agents than is generally recognized. This does not make
them moral agents because they do not have mental states and intending to
act, but it means that they are far from neutral... Computers are closer to
being moral agents than are natural objects" (Johnson, 2011, p. 202).
Artificial Intelligence Is a Fellow Moral Agent
Sullins suggests another view of artificial intelligence and moral agency.
He draws on an example from another human technology in the history of
human civilization, namely the domestication of wild animals such as dogs.
According to Sullins, we can say that artificial intelligence is a fellow moral
agent, like a trained dog. Let's look at the example of guide dogs for visually
impaired people. Sullins writes: "Certainly, providing guide dogs for the
visually impaired is morally praiseworthy, but is a good guide dog morally
praiseworthy in itself? I think so" (Sullins, 2011, p. 153). Consider a guide
dog that helps a visually impaired person. The dog was trained by a trainer.
According to Sullins, the trainer of the dog and the dog itself share moral
agency.
Is a robot (for example, a nurse robot) similar to the guide dog, or is it just
a tool like a hammer?
Sullins: "No robot in the real world or that of the near future is, or will be, as
cognitively robust as a guide dog. Yet even at the modest capabilities robots
have today some have more in common with the guide dog than a simple
tool like a hammer" (Sullins, 2011, p. 153).
When it comes to a robot's behavior, the robot is not the only locus of moral
agency; however, it can be seen as a fellow moral agent in a community of
moral agents. Moral agency is found in a web of relations within a
community. So, the robot itself is a fellow moral agent, while its
programmers, builders, users, and even its marketers are all morally
responsible. All of them form a community of interaction within which the
robot itself can be considered a fellow moral agent. So, the programmers of
the robot are somewhat responsible, but not entirely (Sullins, 2011, p. 155).
References:

Sullins, J. P. (2011). When is a robot a moral agent? In M. Anderson & S. L.
Anderson (Eds.), Machine ethics (pp. 151-162). Cambridge: Cambridge
University Press.

Johnson, D. G. (2011). Moral entities but not moral agents. In M. Anderson &
S. L. Anderson (Eds.), Machine ethics (pp. 168-183). Cambridge: Cambridge
University Press.
