Programming Machine Ethics
Luís Moniz Pereira & Ari Saptawijaya
DOI 10.1007/s11023-016-9398-x
Sean Welsh
Received: 2 August 2016 / Accepted: 15 August 2016 / Published online: 22 August 2016
© Springer Science+Business Media Dordrecht 2016
Should robotic scholars of the future come to write books about the early history of
the ethical systems implemented in their cognition, Programming Machine Ethics
will feature prominently. The first generation of machine ethicists landed on the
beach of the Robotic New World, viewed the lay of the land and debated how to
hack a path through the mountainous jungle of the interior. Many flowers bloomed.
Many withered. This book picks up a machete and blazes a trail into the interior that
cuts deeper and broader than any other book yet on the market.
By machine ethics is meant the specific technical endeavour of programming
ethics into machines (such as robots). This relates to, but is distinct from,
broader ethical debates about robots and automata, such as whether "killer
robots" should be banned and how society will cope with ubiquitous automation.
This book is mostly about programming normative rules into robots and machines
rather than deciding what those rules should be or which of them should be
programmed into robots. However, in the closing chapters the authors
offer some tantalizing glimpses as to how normative rules evolve in groups of
intelligent agents seeking to maximize their survival through cooperation, the
behaviours such agents exhibit (guilt, regret, apology, forgiveness), and how such
evolution might be formalized.
Whether the pioneering trail the authors blaze becomes the main highway of
machine ethics remains to be seen. The book is strongly committed to functionalism
in the philosophy of mind, contractualism in ethics and XSB Prolog as its preferred
programming language. All these commitments can be challenged.
Functionalism may not be an entirely satisfactory account of mind. Contractualism
may not be an entirely satisfactory moral theory. XSB may not be the best
dialect of Prolog. Prolog may not be the best logic programming language. Logic
programming may not be the best approach to machine ethics. Even so,
being relatively "early adopters" will pay off for the authors as machine ethics
matures. But even if contractualism turns out not to be "the correct moral theory"
there is much to learn from the technical details of this book.
One of the most attractive features of the book is the use of technology to solve
classic "trolley problems" such as Switch and Footbridge, which have been
subject to extensive philosophical debate since Foot (1967). Should one throw
the switch and kill one to save five? Should one push the fat man from the
footbridge and kill one to save five? Their formalization in Prolog is welcome,
impressive in its detail and timely in a world where people are starting to
debate trolley problems as they apply to autonomous cars that are already on
the road. Should the autonomous car swerve and kill one driver to save five
jaywalkers?
Even if one challenges its ethical analysis, the project is worthy and breaks
much new ground.
Chapter 4 elucidates the technical capabilities needed to solve moral problems
and provides an overview of the technical core of the book. Abduction is needed
to generate scenarios and to engage in hypothetical reasoning, not only about
the future (what if I throw the switch?) but also about the past (what if I had
thrown the switch?). Preferences need to be formalized so as to choose the
better scenarios from those determined by abduction. Probabilistic logic
programming (LP) is needed to handle uncertainty in scenarios. LP updating is
needed to update the agent's knowledge representation, whether actual or
hypothetical. LP counterfactuals permit "if only" reasoning: hypothesizing
about the past while taking present knowledge into account.
Tabling, a relatively recent innovation in Prolog, is applied to enable the reuse of
previously computed solutions. This feature would enable faster responses in a
computational moral agent.
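The interplay of abduction and preferences that Chapter 4 describes can be
pictured with a toy sketch. This is my own Python illustration, not the
authors' Prolog code; the scenario names, outcome numbers and the
fewest-deaths preference criterion are assumptions chosen for the Switch case.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    action: str
    deaths: int

def abduce_actions():
    # Abduction: hypothesize the candidate actions available in Switch.
    return ["throw_switch", "do_nothing"]

def consequences(action):
    # Hypothetical reasoning about the future: project each hypothesis forward.
    outcomes = {"throw_switch": 1, "do_nothing": 5}
    return Scenario(action, outcomes[action])

def preferred(scenarios):
    # Preference: select the scenario with the fewest deaths.
    return min(scenarios, key=lambda s: s.deaths)

choice = preferred(consequences(a) for a in abduce_actions())
print(choice.action)  # -> throw_switch
```

In the book itself this pipeline is expressed declaratively in Prolog; the
point of the sketch is only the division of labour between generating
hypotheses and preferring among their consequences.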
Chapter 5 describes novel ways to employ tabling in abduction and updating.
TABDUAL facilitates the tabling of abductive solutions in contextual abduction.
EVOLP/R supports the incremental tabling of fluents and permits rules to be
asserted and retracted.
Chapter 6 focuses on counterfactuals. Classically, the trolley problems are
presented not as probabilistic scenarios (which would be more realistic) but as
non-probabilistic ones (if I do not throw the switch, five will certainly die).
The authors follow this tradition in their formalizations of the trolley
problems.
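The non-probabilistic "if only" reasoning can be sketched in a few lines:
re-evaluate a past decision by varying the action while holding the known
facts fixed. This is an illustrative toy in Python, not the authors' LP
counterfactual machinery; the fact names and death counts are assumptions.

```python
def outcome(action, facts):
    # Non-probabilistic, as in the classic presentations: five die unless
    # the switch is thrown, in which case one dies.
    if facts["trolley_running"] and action == "throw_switch":
        return 1
    if facts["trolley_running"]:
        return 5
    return 0

facts = {"trolley_running": True}
actual = outcome("do_nothing", facts)            # what happened: 5 deaths
counterfactual = outcome("throw_switch", facts)  # "if only I had thrown it": 1
print(f"actual={actual}, if-only={counterfactual}")
```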
Chapter 7 discusses ACORDA, PROBABILISTIC EPA, and QUALM in
detail, emphasizing how each of them distinctively incorporates a combination
of the LP-based representation and reasoning features discussed in Chapter 4.
ACORDA is a system that implements prospective logic programming.
Prospective logic programming enables an evolving program to look ahead
prospectively into its possible future states and to select preferred states to
satisfy goals. EPA adds probabilistic reasoning. QUALM combines the
functions of tabling and updating.
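The combination of tabling and updating that QUALM exemplifies can be pictured,
very roughly, as memoization plus cache invalidation: tabled answers are reused
until an update to the knowledge base makes them stale. The sketch below is
mine (Python), not QUALM; the KB class and fact names are invented for
illustration.

```python
class KB:
    def __init__(self, facts):
        self.facts = set(facts)
        self._table = {}  # query -> cached answer ("tabled" result)

    def holds(self, fact):
        if fact not in self._table:   # tabling: reuse previously computed answers
            self._table[fact] = fact in self.facts
        return self._table[fact]

    def update(self, add=(), remove=()):
        # Updating: assert/retract facts and invalidate the affected tables.
        self.facts |= set(add)
        self.facts -= set(remove)
        self._table.clear()

kb = KB({"trolley_running"})
before = kb.holds("switch_thrown")   # False, computed and tabled
kb.update(add={"switch_thrown"})
after = kb.holds("switch_thrown")    # True after the update
print(before, after)
```

Real incremental tabling, as in XSB, invalidates only the tables that depend
on the changed facts rather than clearing everything; the toy above takes the
crudest possible approach.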
Chapter 8 details the application of ACORDA, PROBABILISTIC EPA, and
QUALM to modelling various aspects of the chosen moral problems.
Chapter 9 begins the transition from "the individual realm" (single-agent moral
decision-making) to "the collective realm" (multi-agent decision-making over
time). It draws on evolutionary game theory and focuses on features of cognitive
agents that support the evolution of norms over time. Cognitive abilities such as
intention recognition, commitment, revenge, apology and forgiveness support the
emergence of cooperation in the population. Populations of agents that lack such
capabilities are less cooperative.
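The claim that apology and forgiveness sustain cooperation can be demonstrated
in miniature with a noisy iterated prisoner's dilemma. This toy Python
simulation is not the authors' evolutionary game theory model; the payoff
matrix, noise rate and strategies are my assumptions.

```python
import random

# Standard prisoner's dilemma payoffs to the first player.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(forgiving, rounds=200, noise=0.05, seed=0):
    """Average per-agent payoff for two identical agents playing each other."""
    rng = random.Random(seed)
    last = ["C", "C"]
    total = 0
    for _ in range(rounds):
        moves = []
        for me in (0, 1):
            # Unforgiving agents reciprocate defection; forgiving agents
            # "accept the apology" and return to cooperation.
            intended = "C" if (forgiving or last[1 - me] == "C") else "D"
            # Noise: an intended cooperation occasionally misfires as defection.
            if intended == "C" and rng.random() < noise:
                intended = "D"
            moves.append(intended)
        total += PAYOFF[(moves[0], moves[1])] + PAYOFF[(moves[1], moves[0])]
        last = moves
    return total / (2 * rounds)

print(play(forgiving=True), play(forgiving=False))
```

Under noise, the unforgiving pair spirals into echoing retaliation after a
single mistake, while the forgiving pair recovers, so the forgiving population
ends up with the higher average payoff.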
Chapter 10 discusses the links between the individual and collective realms
over time in the context of evolutionary game theory. The authors contend
that evolutionary game theory gives support to contractualism as a moral theory and
that contractualism is a good fit for robotic implementations of moral functionality.
The authors’ main conclusions are that abductive and counterfactual reasoning
are essential in moral cognition. They argue that substrate does not matter. One can
do ethics within the resources of logic programming and without a beating human
heart. They provide abundant detail and demonstrate numerous programming
techniques that will enable correct moral functionality in autonomous robots. They
apply all this to well-known "off the shelf" problems from the philosophical
literature.
They challenge the view that machine learning approaches are adequate for
machine ethics. No doubt this will be a red flag to proponents of deep reinforcement
learning as a fast track to general moral intelligence. The authors claim that only
small, circumscribed, well-defined domains have been susceptible to rule generation
through machine learning. Rules, they say, are all-important for moral
explanation, justification and argumentation. Logic programming is thus well
suited to working in a world of rules.
Such commitments to rules will not endear the authors to particularists and virtue
ethicists who rely on value holism rather than moral principles to navigate the moral
landscape. But, in the field of machine ethics, as it currently stands, it is not possible
to please everyone. Numerous matters are up for debate both in terms of ethics itself
and how ethics can be implemented in machines.
Pereira and Saptawijaya make their choices and make them clearly. It may be
that the field comes to reject some of them. Even so, the fact that they have
made them and defended them ably will no doubt spur others to suggest better ways to
penetrate the ethical jungle.
Programming Machine Ethics is a tough read for a non-programmer. Even for an
experienced programmer, it is far from easy. It is a challenging combination of
cutting-edge logic programming applied to novel formalizations of ethics. Even so,
as Selmer Bringsjord says in his Foreword, the book delivers what is "currently the
most sophisticated, promising, robust approach to machine ethics, period." As such,
it repays the effort of understanding it.
The true merit of a book is whether it changes your thinking about a subject. This
reviewer can honestly say it has changed his thinking on numerous aspects of
machine ethics. For those interested in the formalization of moral problems such
that they can be solved by robots, it is a must read.
References
Bourget, D., & Chalmers, D. (2014). What do philosophers believe? Philosophical Studies, 170(3),
465–500.
Foot, P. (1967). The problem of abortion and the principle of double effect. Oxford Review, 5, 5–15.
Scanlon, T. (1998). What we owe to each other. Cambridge, MA: Harvard University Press.