
Guest Editors' Introduction

Machine Ethics
Michael Anderson, University of Hartford

Susan Leigh Anderson, University of Connecticut

Past research concerning the relationship between technology and ethics has focused largely on the responsible and irresponsible uses humans make of technology; a few people have also been interested in how human beings ought to treat machines. In all cases, only the humans have engaged in ethical reasoning. We believe the time has come for adding an ethical dimension to at least some machines. Adding this dimension acknowledges the ethical ramifications of recent and potential developments in machine autonomy.

In contrast to computer hacking, software property issues, privacy, and other topics normally ascribed to computer ethics, machine ethics is concerned with how machines behave toward human users and other machines. A goal of machine ethics is to create a machine that's guided by an acceptable ethical principle or set of principles in the decisions it makes about possible courses of action it could take. The behavior of more fully autonomous machines, guided by such an ethical dimension, is likely to be more acceptable in real-world environments than that of machines without such a dimension.

AAAI 2005 Fall Symposium

This special issue stems from the AAAI 2005 Fall Symposium on Machine Ethics. The symposium brought together participants from computer science and philosophy to clarify the nature of this newly emerging field and discuss potential approaches toward realizing the goal of creating an ethical machine.

The projections that researchers have made for autonomous technology are limitless. South Korea has recently mustered more than 30 companies and 1,000 scientists toward the goal of putting "a robot in every home by 2010." DARPA's Grand Challenge to have an autonomous vehicle drive itself across 132 miles of desert terrain has been met, and a new Grand Challenge is in the works to have vehicles maneuvering in an urban setting. The US Army's Future Combat Systems program is developing armed robotic vehicles that will support ground troops with "direct fire" and antitank weapons.

From family cars that drive themselves and machines that discharge our daily chores with little or no assistance from us, to fully autonomous robotic entities that will begin to challenge our notions of the very nature of intelligence, the behavior of autonomous systems will have ethical ramifications. We contend that machine ethics research is key to alleviating concerns with such systems. It could be argued that the notion of autonomous machines without such a dimension is at the root of all fears concerning machine intelligence. Furthermore, investigation of machine ethics, by making ethics more precise than it's ever been before, could lead to the discovery of problems with current ethical theories, advancing our thinking about ethics in general.

The articles

In this special issue, two articles explore the nature and significance of machine ethics. Colin Allen, Wendell Wallach, and Iva Smit provide motivation for the discipline in "Why Machine Ethics?" James Moor considers different possible meanings of adding an ethical dimension to machines, as well as problems that might arise in trying to create such a machine, in "The Nature, Importance, and Difficulty of Machine Ethics."

Often cited as representing an ideal set of ethical principles for machines to follow are Isaac Asimov's "laws of robotics." In his fiction, however, Asimov himself wrote often and convincingly about the ambiguities, inconsistencies, and complexities with these principles. Finally, in "The Bicentennial Man," Asimov clearly rejected these laws on ethical grounds as an ideal basis for machine ethics.

Turning from speculative fiction to more plausible bases for machine ethics, three articles here explore a range of machine learning techniques to codify ethical reasoning from examples. In "Particularism and the Classification and Reclassification of Moral Cases," Marcello Guarini advocates a neural network approach that classifies particular ethical judgments as acceptable or unacceptable. Bruce McLaren's "Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions" details a case-based-reasoning approach to developing systems that can provide guidance in ethical dilemmas. In "An Approach to Computing Ethics," an invited article, we team with Chris Armen to develop a decision procedure for an ethical theory that has multiple prima facie duties. Using inductive-logic programming, the system learns relationships among these duties that reflect the intuitions of ethics experts.

Deontic logic—a formalization of the notions of obligation, permission, and related concepts—is a prime candidate as a basis for machine ethics. In "Toward a General Logicist Methodology for Engineering Ethically Correct Robots," Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello describe how deontic logic might be used to incorporate any given set of ethical principles into an autonomous system's decision procedure. Tom Powers' "Prospects for a Kantian Machine," on the other hand, assesses the feasibility of using deontic and default logics to implement Immanuel Kant's categorical imperative.

Christopher Grau discusses utilitarianism, another well-known ethical theory that might serve as a basis for implementation. "There Is No 'I' in 'Robot': Robots and Utilitarianism" investigates utilitarianism's viability as a foundation for machine ethics from both human-machine and machine-machine perspectives.

Not everyone might be comfortable with the notion of machines making ethical decisions, preferring machines to defer to human judgment. However, it's important to observe that such a position entails curbing present and future machine autonomy to an extent that could severely hamper the investigation of machine intelligence itself. That said, there's every reason to believe that we can develop ethically sensitive machines. Ethics experts continue to make progress toward consensus concerning the right way to behave in ethical dilemmas. The task for those working in machine ethics is to codify these insights, perhaps even before the ethics experts do so themselves. We hope this special issue will encourage you to join us in this challenge. Visit www.machineethics.org for more information.

The Authors

Michael Anderson is an associate professor of computer science at the University of Hartford. His research interests include machine ethics and diagrammatic reasoning. He received his PhD from the University of Connecticut in computer science. He's a member of the Yale Bioethics and Technology Working Research Group, the AAAI, and Sigma Xi. Contact him at the Dept. of Computer Science, Univ. of Hartford, 200 Bloomfield Ave., West Hartford, CT 06117; anderson@hartford.edu.

Susan Leigh Anderson is a professor of philosophy at the University of Connecticut, Stamford campus. Her research interests focus on applied ethics. She received her PhD in philosophy from the University of California at Los Angeles. She's a member of the Yale Bioethics and Technology Working Research Group and the American Philosophical Association. Contact her at the Dept. of Philosophy, Univ. of Connecticut, 1 University Pl., Stamford, CT 06901; susan.anderson@uconn.edu.
