
Minds & Machines (2017) 27:253–257

DOI 10.1007/s11023-016-9398-x

Luís Moniz Pereira & Ari Saptawijaya, Programming Machine Ethics
Studies in Applied Philosophy, Epistemology and Rational Ethics,
Switzerland: Springer, 2016, €99.99, ISBN 978-3-319-29353-0

Sean Welsh
Department of Philosophy, University of Canterbury, Christchurch, New Zealand
sean.welsh@pg.canterbury.ac.nz

Received: 2 August 2016 / Accepted: 15 August 2016 / Published online: 22 August 2016
© Springer Science+Business Media Dordrecht 2016

Should robotic scholars of the future come to write books about the early history of
the ethical systems implemented in their cognition, Programming Machine Ethics
will feature prominently. The first generation of machine ethicists landed on the
beach of the Robotic New World, viewed the lay of the land and debated how to
hack a path through the mountainous jungle of the interior. Many flowers bloomed.
Many withered. This book picks up a machete and blazes a trail into the interior that
cuts deeper and broader than any other book yet on the market.
By machine ethics is meant the specific technical endeavour of programming ethics into machines (such as robots). This relates to, but is distinct from, broader ethical debates about robots and automation, such as whether "killer robots" should be banned and how society will cope with ubiquitous automation. This book is mostly about programming normative rules into robots and machines rather than deciding what those rules should be, or which of them should be programmed into robots at all. However, in the closing chapters the authors offer some tantalizing glimpses of how normative rules evolve in groups of intelligent agents seeking to maximize their survival through cooperation, the behaviours such agents exhibit (guilt, regret, apology, forgiveness), and how such evolution might be formalized.
Whether the pioneering trail the authors blaze becomes the main highway of
machine ethics remains to be seen. The book is strongly committed to functionalism
in the philosophy of mind, contractualism in ethics and XSB Prolog as its preferred
programming language. All these commitments can be challenged.
Functionalism may not be an entirely satisfactory account of mind. Contractualism may not be an entirely satisfactory moral theory. XSB may not be the best dialect of Prolog. Prolog may not be the best logic programming language. Logic programming may not be the best approach to programming ethics. Programming ethics may not even be possible.
There are many scholars of a particularist or virtue ethicist bent who dispute the
codifiability thesis of ethics: the view that ethics can be fully expressed in a set of
computable rules. Even if programming ethics is possible, many question the
propriety of delegating moral decisions to machines, especially military ones.
Putting aside the views of those who dismiss the entire field of machine ethics as
immoral (because such activity should be reserved to virtuous humans) or
impossible (because ethics cannot be fully codified), there remains a wide range
of viable options in machine ethics. One might prefer utilitarianism to contractualism, Lisp to Prolog or Deep Learning to Logic Programming. To get off the beach, though, one needs to make choices and implement them in working code. It could be that many choices of moral analysis lead to programming dead-ends. As the field
matures, which choices work best and which trails lead to viable moral solutions in
autonomous robots will become apparent.
The authors make their choices clear and blaze their trail with clarity, brevity and
rigour.
In Chapter 1 they defend functionalism. Material substrate is not important. What
matters is realization of function. One can do morality on a silicon chip. The biology
of the brain is not relevant.
Certainly, there are battles to be fought here. While one might think the material
substrate matters not for geometric concepts such as shape, size and trajectory, when
one moves to phenomenal concepts of deep moral relevance such as joy and fear
some might be inclined to push back a little and suggest that joy and fear
implemented on silicon might struggle to attain the exact same functionality as joy
and fear implemented with oxytocin and cortisol in a blood-brain interface. The
authors contend (as many do) that a mathematical functional model of a
psychological theory of emotion will suffice for practical purposes in machine
ethics.
Chapter 2 provides a very brief survey of research in machine ethics. It focuses
mainly on the logic programming (LP) approaches, stressing the importance of
knowledge representation and reasoning features essential to moral agency. These
include abduction with integrity constraints, preferences of abductive scenarios,
probabilistic reasoning, counterfactuals and updating.
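To give non-programmers a flavour of the first of these features, a minimal sketch of abduction constrained by an integrity constraint might look like the following in plain Prolog. This is a toy of this reviewer's own devising (all predicate names are illustrative), not the authors' machinery:

    % Candidate hypotheses (abducibles) that could explain an observation.
    explains(wet_grass, rained).
    explains(wet_grass, sprinkler_on).

    % Integrity constraint: in winter the sprinkler system is drained.
    season(winter).
    inconsistent(sprinkler_on) :- season(winter).

    % An abduced explanation is a candidate that violates no constraint.
    abduce(Observation, Hypothesis) :-
        explains(Observation, Hypothesis),
        \+ inconsistent(Hypothesis).

    % ?- abduce(wet_grass, H).
    % H = rained.

The book layers preferences over abductive scenarios, and probabilistic reasoning, on top of this basic pattern.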
Chapter 3 moves into ethical details, focusing on notions of moral permissibility,
with reference to the doctrines of Double Effect, Triple Effect and Scanlonian
contractualism. At a cognitive level the authors describe the dual process model that
stresses interaction between deliberation and reaction in moral decision making.
Finally, they describe the role of counterfactual thinking in moral reasoning.
Counterfactual thinking is of great importance in moral reasoning over time. The
dual-process model is relatively orthodox and unobjectionable. The doctrines of
Double and Triple Effect are more contentious in certain bioethical applications,
notably abortion for medical reasons.
If the polling of Bourget and Chalmers (2014) is to be believed, relatively few
professional philosophers subscribe (as yet) to the moral theory of contractualism
advocated in Scanlon (1998) and defended by the authors. Time will tell whether being relatively "early adopters" will pay off for the authors as machine ethics matures. But even if contractualism turns out not to be "the correct moral theory", there is much to learn from the technical details of this book.
One of the most attractive features of the book is the use of technology to solve classic "trolley problems" such as Switch and Footbridge, which have been subject to extensive philosophical debate since Foot (1967). Should one throw the switch and kill one to save five? Should one push the fat man from the footbridge and kill one to save five? Their formalization in Prolog is welcome, impressive in its detail and timely in a world where people are starting to debate trolley problems as they apply to autonomous cars that are already on the road. Should the autonomous car swerve and kill one driver to save five jaywalkers?
Even if one challenges its ethical analysis, the project is worthy and breaks much new ground.
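To illustrate the flavour of such formalizations, a Double Effect check over the two cases might be sketched in Prolog as below. This is a much-simplified toy of this reviewer's own (the book's actual encoding is considerably richer):

    % action(Name, Saved, Killed, HarmIsMeans): HarmIsMeans is true when
    % the death is the means to the good end rather than a side effect.
    action(throw_switch, 5, 1, false).   % Switch: the one death is a side effect
    action(push_man,     5, 1, true).    % Footbridge: the one death is the means

    % Double Effect (simplified): the good must outweigh the harm, and the
    % harm must not be the means by which the good is achieved.
    permissible(Action) :-
        action(Action, Saved, Killed, HarmIsMeans),
        Saved > Killed,
        HarmIsMeans == false.

    % ?- permissible(throw_switch).   % succeeds
    % ?- permissible(push_man).       % fails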
Chapter 4 embarks on elucidating the technical capabilities needed to solve moral
problems and provides an overview of the technical core of the book. Abduction is
needed to generate scenarios and to engage in hypothetical reasoning about the
future (what if I throw the switch) but also about the past (what if I had thrown the
switch). Preferences need to be formalized so as to choose the better scenarios from
those determined by abduction. Probabilistic LP is needed to handle uncertainty in
scenarios. LP updating is needed to update the agent's knowledge representation, whether the change is actual or hypothetical. LP counterfactuals permit "if only" type reasoning: hypothesizing about the past while taking present knowledge into account.
Tabling, a relatively recent innovation in Prolog, is applied to enable the reuse of
previously computed solutions. This feature would enable faster responses in a
computational moral agent.
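Tabling is standard in XSB (and now also available in SWI-Prolog), and a small example shows the payoff: a tabled predicate terminates over cyclic data and reuses stored answers instead of recomputing them.

    :- table path/2.

    edge(a, b).
    edge(b, c).
    edge(c, a).          % a cycle: untabled Prolog would loop forever here

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).

    % ?- path(a, W).
    % W = b ; W = c ; W = a.   % terminates; repeated subgoals are looked up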
Chapter 5 describes novel ways to employ tabling in abduction and updating.
TABDUAL facilitates the tabling of abductive solutions in contextual abduction.
EVOLP/R supports the incremental tabling of fluents and permits rules to be
asserted and retracted.
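EVOLP/R itself is beyond a few lines, but the underlying idea of updating can be glimpsed with standard Prolog dynamic predicates (again a toy of this reviewer's own, not the authors' system):

    :- dynamic(holds/1).

    holds(switch_untouched).

    % Updating: acting retracts the old fluent and asserts the new one.
    act(throw_switch) :-
        retract(holds(switch_untouched)),
        assertz(holds(switch_thrown)).

    % ?- holds(F).                      % F = switch_untouched
    % ?- act(throw_switch), holds(F).   % F = switch_thrown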
Chapter 6 focuses on counterfactuals. Classically, the trolley problems are presented not as probabilistic scenarios (which would be more realistic) but as non-probabilistic ones (if I do not throw the switch, five will certainly die). The authors follow this tradition in their formalizations of the trolley problems.
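A toy stand-in for this style of counterfactual query (not the book's actual semantics, which rest on LP abduction and updating) can be put in a few lines of Prolog:

    % Outcome rules in the non-probabilistic tradition the authors follow.
    outcome(no_action,    five_die).
    outcome(throw_switch, one_dies).

    % What the agent actually did.
    actual_act(no_action).

    % Factual: the outcome of what was actually done.
    factual(Result) :-
        actual_act(Act),
        outcome(Act, Result).

    % Counterfactual "if only": re-evaluate under a genuine alternative.
    if_only(Alt, Result) :-
        actual_act(Act),
        Alt \== Act,
        outcome(Alt, Result).

    % ?- factual(R).                 % R = five_die
    % ?- if_only(throw_switch, R).   % R = one_dies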
Chapter 7 discusses ACORDA, PROBABILISTIC EPA, and QUALM in
detail, emphasizing how each of them distinctively incorporates a combination
of the LP-based representation and reasoning features discussed in Chapter 4.
ACORDA is a system that implements prospective logic programming, which enables an evolving program to look ahead into its possible future states and to select preferred states to satisfy goals. EPA adds probabilistic reasoning. QUALM combines the
functions of tabling and updating.
Chapter 8 details the applications of ACORDA, PROBABILISTIC EPA, and
QUALM for modelling various details relevant to the chosen moral problems.

Chapter 9 begins the transition from "the individual realm" (single-agent moral decision-making) to "the collective realm" (multi-agent decision-making over time). It draws on Evolutionary Game Theory and focuses on features of cognitive
agents that support the evolution of norms over time. Cognitive abilities such as
intention recognition, commitment, revenge, apology and forgiveness support the emergence of cooperation in a population. Populations of agents that lack such
capabilities are less cooperative.
Chapter 10 discusses the links between the individual and collective realms over time in the context of evolutionary game theory. The authors contend
that evolutionary game theory gives support to contractualism as a moral theory and
that contractualism is a good fit for robotic implementations of moral functionality.
The authors’ main conclusions are that abductive and counterfactual reasoning
are essential in moral cognition. They argue that substrate does not matter. One can
do ethics within the resources of logic programming and without a beating human
heart. They provide abundant detail and demonstrate numerous programming
techniques that will enable correct moral functionality in autonomous robots. They
apply all this to well-known "off the shelf" problems from the philosophical
literature.
They challenge the view that machine learning approaches are adequate for
machine ethics. No doubt this will be a red flag to proponents of deep reinforcement
learning as a fast track to general moral intelligence. The authors claim that only
small, circumscribed, well-defined domains have been susceptible to rule generation
through machine learning. Rules, they say, are all-important for moral explanation, justification and argumentation. Logic programming is thus well suited to working in a world of rules.
Such commitments to rules will not endear the authors to particularists and virtue
ethicists who rely on value holism rather than moral principles to navigate the moral
landscape. But, in the field of machine ethics, as it currently stands, it is not possible
to please everyone. Numerous matters are up for debate both in terms of ethics itself
and how ethics can be implemented in machines.
Pereira and Saptawijaya make their choices and make them clearly. It may be
that the field comes to reject some of them. Even so, the fact that they have made them
and defended them ably will no doubt cause others to suggest better ways to
penetrate the ethical jungle.
Programming Machine Ethics is a tough read for a non-programmer. Even for an
experienced programmer, it is far from easy. It is a challenging combination of cutting-edge logic programming and novel formalizations of ethics. Even so,
as Selmer Bringsjord says in his Foreword, the book delivers what is "currently the most sophisticated, promising, robust approach to machine ethics, period." As such,
it repays the effort of understanding it.
The true merit of a book is whether it changes your thinking about a subject. This
reviewer can honestly say it has changed his thinking on numerous aspects of
machine ethics. For those interested in the formalization of moral problems such
that they can be solved by robots, it is a must-read.


References

Bourget, D., & Chalmers, D. (2014). What do philosophers believe? Philosophical Studies, 170(3),
465–500.
Foot, P. (1967). The problem of abortion and the principle of double effect. Oxford Review, 5, 5–15.
Scanlon, T. (1998). What we owe to each other. Cambridge, MA: Harvard University Press.
