Agency
Nuclear energy taught us how difficult it is to control the technological genie
once it gets out of the bottle.
If we accept Murphy's law, "Anything that can go wrong will go wrong," then would it be wise to ban the development of super-intelligent machines to their fullest potential?
In this unit, we discuss one of the main ethical issues related to artificial
intelligence: can artificial intelligence be a moral agent? A moral agent is an
entity that is morally responsible for its actions; a moral subject is an entity
that can be morally wronged. Your cat might be a moral subject, but you don't
think that your cat is morally responsible for what it does, so your cat is not
a moral agent. Many things are neither moral agents nor moral subjects. For
example, your car by itself is neither a moral agent nor a moral subject. If
someone damages your car, your moral rights as the owner of the car have been
violated, but we don't think that the moral rights of the car itself have been
violated.
WATCH
Is Sophia the Robot a moral agent? On this view, to be a moral agent, the
agent needs to have personhood. An entity is a person if and only if it has
self-consciousness. Therefore, Sophia the Robot is a moral agent if and only
if it is a person, and it is a person if and only if it has self-consciousness.
So, in order to answer the question, we first need to answer the question of
whether Sophia the Robot has self-consciousness. To solve this moral problem,
we need to discuss the concepts of consciousness and self-consciousness in
artificial intelligence. We know that having self-consciousness means having
an internal understanding of 'I', or "the consciousness of myself." If we
think that artificial intelligence cannot have consciousness, then
consequently we think that artificial intelligence cannot have
self-consciousness, and therefore that artificial intelligence cannot be a
moral agent. For example, according to John Searle's Chinese Room argument,
machines merely use syntactic rules to manipulate symbol strings, but they
have no understanding and no consciousness. On this view, Sophia the Robot
does not have consciousness or self-consciousness, so Sophia the Robot is not
a person. Therefore, Sophia the Robot is not a moral agent. Even if it acts
like a moral agent, that does not mean it is a genuine moral agent.
The Three Requirements
Some scholars argue that, in order to be a moral agent, the agent does not
necessarily need to be a person. As John P. Sullins writes: "It is not
necessary for a robot to have personhood in order to be a moral agent"
(Sullins, 2011, pp. 151-162). Sullins holds that, for a robot to be a moral
agent, the robot needs to meet three requirements:
Is the robot significantly autonomous? The robot needs to have free
will. Autonomy is achieved when the robot is significantly autonomous
from any programmers or operators of the machine.
Is the robot’s behavior intentional? The robot needs to have
intentionality. Intentionality is achieved when one can explain the
robot’s behavior only by ascribing to it 'intentions' to do good or harm.
Is the robot in a position of responsibility? Robot moral agency
requires the robot to behave in a way that shows an understanding of
responsibility to others.
If our answer is 'yes' to all three of the above questions, then the robot is
a moral agent (Sullins, 2011).
Based on the three requirements, there are four possible views on the moral
agency of robots:
Robots are not moral agents now, but they might become moral agents in the
future.
Robots are incapable of becoming moral agents, now or in the future.
Human beings are not moral agents, but robots are.
A robot is not a full moral agent in the way human beings are; however, it
could have a kind of moral agency. We can discuss the moral agency of robots
on the basis of a different understanding of the three requirements
(Sullins, 2011).