Logical Tools For Modelling Legal Argument


LOGICAL TOOLS FOR MODELLING LEGAL ARGUMENT

Law and Philosophy Library


VOLUME 32

Managing Editors

FRANCISCO J. LAPORTA, Department of Law,


Autonomous University of Madrid, Spain
ALEKSANDER PECZENIK, Department of Law, University of Lund, Sweden

FREDERICK SCHAUER, John F. Kennedy School of Government,


Harvard University, Cambridge, Mass., U.S.A.

Former Managing Editors


AULIS AARNIO, MICHAEL D. BAYLES†, CONRAD D. JOHNSON†, ALAN MABE

Editorial Advisory Board

AULIS AARNIO, Research Institute for Social Sciences,


University of Tampere, Finland
ZENON BANKOWSKI, Centre for Criminology and the Social and Philosophical
Study of Law, University of Edinburgh
PAOLO COMANDUCCI, University of Genoa, Italy
ERNESTO GARZON VALDES, Institut für Politikwissenschaft,
Johannes Gutenberg Universität Mainz
JOHN KLEINIG, Department of Law, Police Science and Criminal
Justice Administration, John Jay College of Criminal Justice,
City University of New York
NEIL MacCORMICK, Centre for Criminology and the Social and
Philosophical Study of Law, Faculty of Law, University of Edinburgh
WOJCIECH SADURSKI, Faculty of Law, University of Sydney
ROBERT S. SUMMERS, School of Law, Cornell University
CARL WELLMAN, Department of Philosophy, Washington University
HENRY PRAKKEN
Department of Computer Science,
Free University of Amsterdam, The Netherlands

LOGICAL TOOLS FOR


MODELLING LEGAL
ARGUMENT
A Study of Defeasible Reasoning in Law

SPRINGER-SCIENCE+BUSINESS MEDIA, B.V.


A C.I.P. Catalogue record for this book is available from the Library of Congress.

ISBN 978-90-481-4928-5 ISBN 978-94-015-8975-8 (eBook)


DOI 10.1007/978-94-015-8975-8

Printed on acid-free paper

All Rights Reserved


© 1997 Springer Science+Business Media Dordrecht
Originally published by Kluwer Academic Publishers in 1997
Softcover reprint of the hardcover 1st edition 1997
No part of the material protected by this copyright notice may be reproduced or
utilized in any form or by any means, electronic or mechanical,
including photocopying, recording or by any information storage and
retrieval system, without written permission from the copyright owner
Table of Contents

PREFACE xi

1 INTRODUCTION 1
1.1 AI, Logic and Legal Reasoning: Some General Remarks 1
1.1.1 An Overview . . . . . . . . . . . . . . . . 1
1.1.2 Artificial Intelligence . . . . . . . . . . . . 2
1.1.3 Computable Aspects of Legal Reasoning . 5
1.1.4 The Role of Logic 6
1.2 The Focus of Research . . . . . . . . . . . . . . 7
1.3 Logic and AI . . . . . . . . . . . . . . . . . . . 8
1.3.1 The Declarative vs. Procedural Debate . 8
1.3.2 Logics and Programming Systems 9
1.3.3 Logic and Reasoning . 11
1.4 Points of Departure . . . . 12
1.5 The Structure of this Book 13

2 THE ROLE OF LOGIC IN LEGAL REASONING 15


2.1 Three Misunderstandings about Logic . . . . . . . . . 16
2.1.1 'To Formalize is to Define Completely' . . . . . 16
2.1.2 'Formalization Leaves No Room for Interpretation' 17
2.1.3 'Logic Excludes Nondeductive Modes of Reasoning' . 18
2.2 The 'Deductivist Fallacy' 18
2.2.1 'Naive Deductivism' . . 19
2.2.2 The Criticism . . . . . . 20
2.2.3 The Misunderstanding . 23
2.2.4 The Merits of the Criticism 25
2.3 Noninferential Reasoning with Logical Tools . 26
2.4 Rule-based and Case-based Reasoning 30
2.5 Summary . . . . . . . . . . . . . . . . . . . . 31

3 THE NEED FOR NEW LOGICAL TOOLS 33


3.1 The Separation of Rules and Exceptions in Legislation 34
3.1.1 Terminology 35
3.1.2 Examples..................... 36


3.1.3 Formalizations in Standard Logic . 37


3.1.4 Nonstandard Methods 41
3.2 Defeasibility of Legal Rules . . 47
3.3 Open Texture . . . . . . . . . . 49
3.3.1 Classification Problems 50
3.3.2 Defeasibility of Legal Concepts 52
3.3.3 Vagueness............ 54
3.4 Which Nonstandard Techniques are Needed? 55
3.4.1 Reasoning with Inconsistent Information. 55
3.4.2 Nonmonotonic Reasoning . . . . . . . . . 56
3.5 AI-and-law Programs with Nonstandard Features . 61
3.5.1 The Law as Logic Programs . 61
3.5.2 TAXMAN II . . . . 61
3.5.3 Gardner's Program . 62
3.5.4 CABARET..... 63

4 LOGICS FOR NONMONOTONIC REASONING 67


4.1 Nonmonotonic Logics . . . . . . . . . 68
4.1.1 Consistency-based Approaches 68
4.1.2 Autoepistemic Logic . . 73
4.1.3 Minimization...... 76
4.1.4 Conditional Approaches 87
4.1.5 Inconsistency Handling 89
4.2 General Issues. . . . . . . . . . 93
4.2.1 Preferential Entailment 93
4.2.2 Properties of Consequence Notions 94
4.2.3 Connections . . . . . . . . . . 96
4.2.4 Truth Maintenance Systems . 97
4.3 Objections to Nonmonotonic Logics 97
4.3.1 'Logic is Monotonic' 97
4.3.2 Intractability......... 99

5 REPRESENTING EXPLICIT EXCEPTIONS 101


5.1 Introduction..................... 102
5.1.1 Methods of Representing Rules and Exceptions 102
5.1.2 Kinds of Exceptions . . . . . . . . . . . . . . . 102
5.1.3 Requirements for Representing Rules and Exceptions 103
5.2 Default Logic . . . . . . . . . . . . 105
5.2.1 Specific Exception Clauses . 106
5.2.2 General Exception Clauses 107
5.2.3 Evaluation 111
5.3 Circumscription........... 112

5.4 Poole's Framework for Default Reasoning 117


5.5 Logic-programming's Negation as Failure 120
5.5.1 Specific Exception Clauses. . . . . 121
5.5.2 General Exception Clauses . . . . 122
5.5.3 Logic Programs with Classical Negation 125
5.5.4 Summary............ 129
5.6 Evaluation................ 129
5.6.1 A Formalization Methodology . 130
5.6.2 Directionality of Defaults . . . 134
5.6.3 Contrapositive Inferences . . . 135
5.6.4 Assessment of the Exception Clause Approach 136

6 PREFERRING THE MOST SPECIFIC ARGUMENT 141


6.1 Introduction..................... 141
6.2 Poole: Preferring the Most Specific Explanation. 143
6.3 Problems . . . . . . . . . . . . . . . . . . 148
6.3.1 Some Possible Facts are Irrelevant . . . . 148
6.3.2 Multiple Conflicts Ignored . . . . . . . . . 149
6.3.3 Defaults Cannot be Represented in Standard Logic. 150
6.4 A System for Constructing and Comparing Arguments . 151
6.4.1 General Remarks . . . . . . . . . . 151
6.4.2 The Underlying Logical Language 152
6.4.3 Arguments . . . . . . . . . . 154
6.4.4 Conflicts Between Arguments 156
6.4.5 Comparing Arguments. 158
6.4.6 Informal Summary. . . 163
6.5 The Assessment of Arguments. 163
6.5.1 The General Idea . . . . 163
6.5.2 The Dialogue Game Defined . 166
6.5.3 Illustrations.......... 170
6.6 Combining Priorities and Exception Clauses . 172
6.6.1 Extending the System 172
6.6.2 Illustrations. 175
6.7 Evaluation........... 177

7 REASONING WITH INCONSISTENT INFORMATION 179


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . .. 179
7.2 Existing Formalizations of Inconsistency Tolerant Reasoning 180
7.2.1 Alchourron & Makinson (1981) . . . . . . 181
7.2.2 Belief Revision Approaches . . . . . . . . 183
7.2.3 Brewka's Preferred-subtheories Approach 187
7.3 Diagnosis . . . . . . . . . . . . . . . . . . . . . . 188

7.4 Hierarchical Defeat . . . . . . . . . . . . 191


7.5 General Features of the System . . . . . 193
7.5.1 Properties of the Consequence Notion 193
7.5.2 Sceptical and Credulous Reasoning . 195
7.5.3 Floating Conclusions . 196
7.5.4 Accrual of Arguments . . . . . . . . 198
7.6 Conclusion . . . . . . . . . . . . . . . . . . 200

8 REASONING ABOUT PRIORITY RELATIONS 203


8.1 Introduction.......... 203
8.2 Legal Issues . . . . . . . . . . . . . . . . . . 204
8.2.1 Legal Collision Rules . . . . . . . . . 204
8.2.2 Requirements for a Formal Analysis 205
8.3 Extending the Definitions . . . 206
8.4 A Formalization Methodology . 210
8.5 Examples . . . . . . . . 212
8.6 An Alternative Method . . . . 217

9 SYSTEMS FOR DEFEASIBLE ARGUMENTATION 219


9.1 Argumentation Systems . . . . . . . . . . . . . . . . . . 219
9.2 Some Argumentation Systems. . . . . . . . . . . . . . . 221
9.2.1 The Bondarenko-Dung-Kowalski-Toni Approach 221
9.2.2 Pollock . . . . . . . . . . . . . . . . . . . . . 226
9.2.3 Lin and Shoham . . . . . . . . . . . . . . . . 229
9.2.4 Vreeswijk's Abstract Argumentation Systems 230
9.2.5 Nute's Defeasible Logic . . . . . . . . . . . 232
9.2.6 Simari and Loui . . . . . . . . . . . . . . . 235
9.2.7 Geffner and Pearl's Conditional Entailment 235
9.2.8 General Comparison . 237
9.3 Other Relevant Research. . . 238
9.3.1 Brewka's Later Work . 238
9.3.2 Reason-based Logic . 240

10 USING THE ARGUMENTATION SYSTEM 249


10.1 A Comparison of the Methods for Representing Exceptions 249
10.2 Implementational Concerns . . . . . . . . . . . 253
10.3 Applications. . . . . . . . . . . . . . . . . . . . 255
10.3.1 Toulmin on the Structure of Arguments 255
10.3.2 The System as a Tool in Reasoning. . . 256
10.4 A Logical Analysis of Some Implemented Systems 258
10.4.1 Gardner's Program . . . . . . . . . . . . . 258
10.4.2 CABARET . . . . . . . . . . . . . . . . . 261
10.4.3 Applications of Logic Metaprogramming . 262

10.4.4 Freeman and Farley's DART System . 263


10.4.5 The Pleadings Game . . . . . 264
10.5 Four Layers in Legal Argumentation . . . . . 270

11 CONCLUSION 275
11.1 Summary . . . . . . . . . . . 275
11.2 Main Results . . . . . . . . . 276
11.3 Implications for Other Issues 281
11.4 Suggestions for Further Research 284

A NOTATIONS, ORDERINGS AND GLOSSARY 287


A.1 General Symbols and Notations . . . . . . . . . . 287
A.2 Ordering Relations . . . . . . . . . . . . . . . . . . 288
A.3 Notions of the Argumentation System of Chapters 6-8 289
A.4 Glossary . . . . . . . . . . . . . . . . . . . . . . . . 289

REFERENCES 293

INDEX 303
PREFACE

This book is a revised and extended version of my PhD Thesis 'Logical
Tools for Modelling Legal Argument', which I defended on 14 January
1993 at the Free University Amsterdam. The first five chapters of the thesis
have remained almost completely unchanged but the other chapters have
undergone considerable revision and expansion. Most importantly, I have
replaced the formal argument-based system of the old Chapters 6, 7 and
8 with a revised and extended system, which I have developed during the
last three years in collaboration with Giovanni Sartor. Apart from some
technical improvements, the main additions to the old system are the
enrichment of its language with a nonprovability operator, and the ability
to formalise reasoning about preference criteria. Moreover, the new system
has a very intuitive dialectical form, as opposed to the rather unintuitive
fixed-point appearance of the old system.
Another important revision is the split of the old Chapter 9 into two
new chapters. The old Section 9.1 on related research has been updated
and expanded into a whole chapter, while the rest of the old chapter is
now in revised form in Chapter 10. This chapter also contains two new
contributions, a detailed discussion of Gordon's Pleadings Game, and a
general description of a multi-layered overall view on the structure of
argumentation, comprising a logical, dialectical, procedural and strategic layer.
Finally, in the revised conclusion I have paid more attention to the relevance
of my investigations for legal philosophy and argumentation theory.
Some parts of this book are based on previously published articles.
Section 3.1 is based on Prakken & Schrickx (1991), while Sections 6.1-
6.3 and 7.1-7.3 are extended and revised versions of parts of, respectively,
Prakken (1991a) and Prakken (1991b) (combined in Prakken, 1993). Fur-
thermore, the Sections 6.4, 6.5, 6.6 (partly), 7.4 and parts of the Sec-
tions 8.3-8.5 and 9.1 are based on Prakken & Sartor (1996b). Finally, some
parts of Section 10.5 are based on Prakken (1995b).

Acknowledgements

Four and a half years have passed since I defended my PhD thesis, the
basis of the present book. I here briefly repeat the acknowledgements and
thanks that I expressed in my thesis. The PhD project was initiated by
the first director of the Computer/Law Institute of the Free University
Amsterdam, Prof. Guy Vandenberghe, who sadly died before the project
was completed. I thank my supervisors Arend Soeteman and John-Jules
Meyer, my co-supervisor Anja Oskamp, and the external referee Marek
Sergot for their enthusiastic support and supervision. I also thank the many
people with whom I had fruitful discussions during my research student
years, in particular, Jaap Hage, Henning Herrestad, Giovanni Sartor, Frans
Voorbraak and Gerard Vreeswijk. Furthermore, I thank Joost Schrickx for
allowing me to use our joint paper in Section 3.1, and Cees Groendijk for
proofreading most chapters of my thesis. Finally, I would like to correct an
omission in the acknowledgements of my thesis: the regular meetings of the
Dutch Working Group on Nonmonotonic Reasoning were a very rich and
stimulating source of inspiration.
I also have to thank many people for their role in my postdoctoral
research, which led to the book in its present form. To Marek Sergot I am
extremely grateful for his continuing support. Among other things, I have
benefited from his invitation to work with him at the Logic Programming
Group of Imperial College London in 1993, where I could also meet the
developers of the technical framework that was to be the basis of the new
Chapters 6, 7 and 8 of this book, Andrei Bondarenko, Phan Ming Dung,
Bob Kowalski and Francesca Toni.
During my postdoc years, my fruitful discussions with Gerard Vreeswijk
and especially Jaap Hage continued. I also learned very much from Gerd
Brewka, Tom Gordon and Ron Loui, during a two-week encounter at the
GMD Bonn, August 1994 and at several later occasions.
Among the other persons with whom I had fruitful exchanges of ideas I
mention Don Nute, Mark Ryan, Pierre-Yves Schobbens and Bart Verheij.
And I thank Trevor Bench-Capon and Andrew Jones for correcting the
English of some of the revised chapters.
Most of all, I am deeply indebted to Giovanni Sartor. Not only have
we developed an extremely fruitful, still ongoing collaboration, he also
generously permitted me to freely use the results of our joint work in the
present book. In particular the new system for argumentation that I present
in the Chapters 6, 7 and 8 was jointly developed by Giovanni and myself
in several publications. Although in our private discussions we refer to our
system as 'PRATOR', in the present book I have chosen to respect the
standard practice not to name logical systems after persons. Nevertheless,
the reader should be aware that where I speak of 'my system', I mean
PRATOR, as generalized in this book from the language of extended logic
programming to that of full first-order predicate logic.
Thanks are also due to several organizations and institutes for their

support. The Dutch Science Foundation provided the grant for my PhD re-
search, while the Royal Netherlands Academy of Arts and Sciences awarded
me a three-year postdoctoral fellowship, which I could carry out at the Com-
puter/Law Institute of the Free University Amsterdam. The EC Working
Group ModelAge (Esprit WG 8319) also provided partial support, and
its SG 1 subgroup on defeasibility was a stimulating audience. I thank
John-Jules Meyer for inviting me to take part in the ModelAge project.
The GMD Bonn sponsored my two-week visit to Bonn in August 1994 on
invitation by Tom Gordon, while the Artificial Intelligence Center of the
University of Georgia, directed by Don Nute, provided hospitality during
the last four months of 1995. And the Faculty of Law of the Free University
Amsterdam offered their hospitality after the end of my fellowship, which
made it possible for me to finish this book without problems.
Finally, I thank all my colleagues of the Computer /Law Institute for
making the institute such a stimulating and pleasant place to work, during
both my research student and my postdoc years.
CHAPTER 1

INTRODUCTION

This book investigates logical aspects of legal reasoning. It draws its in-
spiration from the field of Artificial Intelligence and law, but the main
techniques it uses are those of logic and formal philosophy. Thus the book
is an example of applied legal philosophy, applied in the sense that the
intended result consists of formal-logical foundations of computer systems
performing legal reasoning tasks. However, since legal philosophy does not
provide ready-made insights that can serve as such foundations, this book
also contributes to legal philosophy, in further developing these insights.
In this introductory chapter I make some general remarks on Artificial
Intelligence (AI), its applications to legal reasoning and the relevance of
logic for these applications (Section 1.1), I present the focus of research
(Section 1.2), and then address some general issues concerning the use of
logic in AI (Section 1.3). On the basis of these discussions I conclude in
Section 1.4 by stating the points of departure of the present study.
1.1. AI, Logic and Legal Reasoning: Some General Remarks
1.1.1. AN OVERVIEW

Research on Artificial Intelligence has its roots in many other disciplines,
such as computer science, philosophy, mathematics, psychology and
linguistics. In the seventies lawyers also discovered AI, and AI discovered law.
Both sides have profited from this, since the law is an excellent domain for
studying many of the features which are among the central topics of AI,
such as reasoning with partially defined concepts, metareasoning, defeasible
reasoning, and combining deductive and nondeductive modes of reasoning.
Of the early attempts to apply AI research to the legal domain one of
the best-known is the TAXMAN project of McCarty (1977), which was
an attempt to formalize the statute rules and underlying concepts of a
subdomain of American tax law. In Europe an influential early develop-
ment was the research of Sergot and Kowalski (e.g. Sergot, 1982; Sergot et
al., 1986), on applying logic-programming techniques to the legal domain.
One of the central themes of AI-and-law research is the combination of
reasoning with legal rules and with case law decisions. A forerunner of this
kind of research was Meldman (1977). Gardner (1987) has also addressed
this issue, in studying the combined use of rules and cases for the task
of distinguishing 'clear' from 'hard' legal cases. And also well-known is
the CABARET project of Rissland and Skalak (1991), which combines
conventional AI techniques for reasoning with rules with a system which
exclusively models legal reasoning with cases, the HYPO system of Rissland
and Ashley (e.g. 1987).
AI research, in general as well as in law, not only consists of developing
computer programs; many investigations purport to develop general theo-
ries which formulate, justify or criticise the ideas on which programs can be
based. The present book is an example of this kind of foundational research,
in studying logical aspects of artificial-intelligence research applied to legal
reasoning. Since also with respect to logical issues the legal domain is not
self-contained, the relevance of the present study, too, extends to general
research on logic and AI.
I am not a pioneer in studying logic in relation to computerized legal rea-
soning: to my knowledge Frank (1949) was the first to note the significance
of logic for representing law in computer systems (in his terms "legal-logic
machines"), followed by e.g. Allen (1963). The just mentioned develop-
ments in AI-and-law research have inspired more detailed investigations
of the logical aspects of legal reasoning. In particular, McCarty's (1986)
research on the logic of permissions and obligations was directly motivated
by his experiences with the TAXMAN project. Recently, he has proposed
and partly developed a "language for legal discourse" (McCarty, 1989):
this language is meant to be a knowledge representation language with
solid logical foundations and supporting definitions of important common-
sense concepts underlying legal reasoning, related to categories such as
time, action and causation. Among the others having done logical research
with an eye to automated legal reasoning are Gordon (1988; 1991; 1995),
Jones (1990), Herrestad (1990), Herrestad & Krogh (1995), Sartor (1991;
1994), Hage (1996; 1997) and Verheij (1996).
In legal philosophy the logical aspects of legal reasoning have also been
studied. For instance, Lindahl (1977) has formalized Hohfeld (1923)'s ideas
on legal relations. And Alchourron has studied the logical characterization
of normative systems, with Bulygin in e.g. Alchourron & Bulygin (1971)
and Bulygin & Alchourron (1977) and with Makinson in Alchourron &
Makinson (1981). Aqvist has also applied logic to law, for instance to
issues of causation and liability in Aqvist (1977), while Soeteman (1989)
has investigated legal applications of deontic logic.
With respect to the topic of this book three basic questions arise: what
exactly is Artificial Intelligence, to which aspects of legal reasoning can
AI be applied, and what has logic to do with it? These questions will be
discussed in the next three subsections.

1.1.2. ARTIFICIAL INTELLIGENCE

I shall answer the question what AI is by giving my personal view on
the aims and results of AI research to date; for the sake of discussion I
shall leave out many nuances. At its very simplest, AI can be described as
the science which aims at designing computer systems that perform like
intelligent human beings; however, this needs some refinement, which will
be done after the following brief discussion of the results of AI to date.
In my opinion, the history of AI has shown that the aim of making
really intelligent computer systems is, at least for the foreseeable future,
too ambitious. The key problem has turned out to be so-called 'common-
sense knowledge', which is knowledge about nothing in particular, or the
kind of knowledge that humans acquire by experience throughout their
lifetimes. This kind of knowledge is so diverse and abundant that the task
of storing it in the computer is too difficult. Attempts to overcome this
problem by restricting the performance of computer systems to narrowly
defined domains have not been very successful, witness the rather modest
results of research on so-called 'expert systems': such systems are programs
which were initially meant to perform at the level of a human expert on
highly specialized domains like, for example, medical diagnosis of infection
diseases, investment issues, or car maintenance. Up to now, these systems
have behaved like 'idiot savants', with much high-level knowledge about
a specific domain, but without enough low-level knowledge to apply it to
real-world problems (cf. Hofstadter 1985). Apart from these attempts to
restrict the domain, it was initially thought that the problem of common-
sense knowledge, although not an easy one, could be solved by telling the
computer as much as possible about the world; cf. McCarthy (1968, pp. 403,
408), or as quoted in Kolata (1982). Projects were undertaken to formalize
large portions of common-sense knowledge on, for example, 'naive physics'.
However, while some interesting results have been obtained, for instance in
the CYC project of Lenat & Guha (1990), the overall goal still seems to be
far away (cf. McDermott, 1987). Moreover, for the near future there are no
prospects for developing computer systems which will have the ability to
acquire common-sense knowledge by experience in the same general way in
which humans do. It is therefore not surprising that most areas in which
AI programs can compete with intelligent humans are domains in which
the world can be defined completely at a manageable scale, such as the
reality constituted by the rules of a game. My personal opinion is that
for dealing with the problem of common-sense knowledge the key issue is
learning. However, in current AI research on this topic the expectations
are for the near future rather modest: the prevailing paradigm in AI is the
symbolic approach, which regards thinking as manipulating formal symbols
according to explicit rules; many AI researchers nowadays feel that with
respect to learning this view has inherent limitations, and for this reason
they expect more of a new paradigm: connectionism, or the neural networks
approach; however, the common opinion is that this paradigm has still a
long way to go in proving itself.
However, there are also forms of AI research for which the question
of giving up symbolic methods does not arise, and it is these forms on
which this book will mainly focus. Being of the opinion that the aim
of making really intelligent programs is, at least for the time being, too
ambitious, many AI researchers set themselves more practical goals, while
still using AI-ideas about different styles of programming and designing
computer systems. One of those ideas is the separation of knowledge and
ways of reasoning with it. The term which is often used here is 'declarative
knowledge representation', as opposed to 'procedural knowledge represen-
tation' in conventional programming, in which knowledge about a domain
is largely implicit in the way of solving the problem; when represented
in a more declarative way, on the other hand, knowledge is regarded as
saying something about the world, rather than as saying how to solve
a problem. AI systems of which declarative knowledge representation is
a feature are generally called 'knowledge-based systems'. In my opinion,
research in knowledge-based technology need not abandon the symbolic
paradigm: instead, since this kind of AI research claims to develop systems
which can be practically useful in the foreseeable future, symbolic methods
are very important, since they provide means to validate the behaviour of
knowledge-based systems, which for practical purposes is essential.
Declarative representation of knowledge is generally regarded to have
the following advantages (cf. Genesereth & Nilsson, 1988). The first is that
since declaratively represented knowledge is not about solving a particular
problem, it can be used for more than one task: for instance, for solving
different types of problems, or even for performing different modes of
reasoning, such as deductive, inductive and analogical reasoning. Another
advantage is that declaratively represented knowledge is easier to maintain,
since it is kept separately from the rest of the program, perhaps even in a
form which is more like natural language than like computer code. This is
particularly important for legal applications, since laws and legal opinions
change continuously. A final advantage is that if knowledge is represented
declaratively, then reasoning about knowledge, which is called metalevel
reasoning, is easier to implement; this is also particularly significant for the
legal domain, since in this domain many statements are about other state-
ments, like standards for choosing between conflicting norms or arguments,
norms about the consequences of lack of evidence, and prescriptions about
changing or interpreting statutes. On the other hand, declarative knowledge
representation is also said to have the disadvantage that it may make
systems computationally less efficient than if their knowledge is tailored
to one specific task: if knowledge can be used in more than one way, often
more ways have to be tried before a solution has been found (Nilsson, 1991,
p. 43). Moreover, as will be further explained in Section 1.3.2, declarative
knowledge representation is more a matter of degree than of 'yes or no',
since in practice the situation of purely declaratively represented knowledge
cannot be obtained. Still, one of the basic assumptions of the present
investigation is that in AI and law it is both desirable and possible to
represent a substantial body of knowledge in a way which is as declarative
as possible.
In conclusion, there are, roughly, two kinds of AI research, distinguished
by their aims: some researchers try to let computer systems perform like
intelligent human beings, while others restrict themselves to more practical
aims. Of course, in reality this distinction is not that clear-cut, but is more
a matter of degree while, moreover, practical results are often the spin-off
from ambitious research. However, in spite of this, I have in this overview
chosen to leave out most of the nuances, in order to state my position
clearly: this book opts for aims which are closer to the practical side of
the scale; when it comes to AI, it is not about logical aspects of making
really intelligent artificial judges or solicitors but about logical aspects of
legal knowledge-based systems: of systems of which a main feature is the
separation of knowledge and ways of using it.

1.1.3. COMPUTABLE ASPECTS OF LEGAL REASONING

A second basic question is to which aspects of legal reasoning knowledge-
based technology can be applied. In general, the answer is that it can be
applied only to those aspects of legal reasoning which are suited for a formal
description, since only problems which can be formalized are computable
(although not everything which can be formalized can be computed). Now
it might as a philosophical opinion be held that in principle everything
can be formalized, but if the 'practical' view on AI is accepted, then
this answer means that the applicability of AI to law is rather restricted.
The reason is that unlike 'ambitious' AI, which might eventually (if my
pessimism is unjustified) result in computer systems which perform well in
ways which their designers do not understand, 'practical' AI has to rely on
formal descriptions which can be programmed explicitly; and most aspects
of legal reasoning, such as arguing for legal principles or rules of law to
start reasoning with, resolving ambiguities in their formulation, evaluating
evidence, determining the weight of interests, showing a sense of humanity,
and so on, are matters of content rather than of form: in logical terms these
things are matters of selecting the premises, rather than of reasoning with
them. It may be that also for these activities rational procedures exist,
but the problem is that they will be too vague to be formalized, or that
they will be too tentative, that is, they do not have compulsory force,
for which reason they must inevitably leave room for human judgement;
and this means that an essential aspect of these procedures escapes for-
mal treatment. To understand the scope of the present investigations it
is important to be aware of the fact that the information with which a
knowledge-based system reasons, as well as the description of the problem,
is the result of many activities which escape a formal treatment, but which
are essential elements of what is called 'legal reasoning'. In sum, the only
aspects of legal reasoning which can be formalized are those aspects which
concern the following problem: given a particular interpretation of a body of
information, and given a particular description of some legal problem, what
are then the general rational patterns of reasoning with which a solution
to the problem can be obtained? With respect to this question one remark
should be made: I do not require that these general patterns are deductive;
the only requirement is that they should be formally definable. This point
will be discussed in more detail in Section 1.3.2 and in Chapter 2.
It should be noted that, although the investigations will be restricted to
certain aspects of legal reasoning, they will not be restricted to certain legal
problem solving tasks, such as deciding a case, giving advice to a client,
assisting in legal planning, mediating legal disputes, and so on. The reason
is that, as will be argued throughout this book, logic can be used as a tool
in many different problem solving tasks. Therefore, what in AI research
is often appropriate, specifying the task of the computer program, is for
present purposes not necessary.

1.1.4. THE ROLE OF LOGIC

A final question is what logic has to do with these issues. That logic is at
least a possible candidate for analysing formal aspects of legal reasoning
will not come as a surprise, since it is the very essence of logic to systematize
formal patterns in reasoning. Furthermore, although humans somehow dur-
ing their lifetime acquire the ability to reason according to certain patterns
(which is why lawyers can do without a course in formal logic), computers
do not yet grow up and therefore the ability to follow certain patterns
of reasoning has to be programmed explicitly. Finally, logic is an obvious
candidate for modelling the separation of knowledge and ways of using
it, because in logic this separation is total in the form of premises in some
formal language on the one hand, and an inferential apparatus on the other.
In conclusion, logic is, at least at first sight, very relevant for 'practical' AI.
INTRODUCTION 7

In spite of this, both in general AI research and in AI and law the rele-
vance of logic has been questioned. Since the arguments require a detailed
discussion, an entire section of this chapter is devoted to the debate on the
significance of logic for AI, and even the entire next chapter is about the
relevance of logic for AI and law. First, however, more will be said about
the focus of research.

1.2. The Focus of Research

As explained, the subject of this book is logical aspects of applying AI
research on knowledge-based technology to the legal domain. Now there
are many logical aspects which could be investigated. An obvious candidate
would be reasoning with deontic concepts, like 'obligatory', 'permitted' and
'forbidden'; such an analysis would bring us to the field of deontic logic.
However, I have chosen to study two other aspects of legal reasoning, which
are sometimes put forward, by legal philosophers and AI-and-law researchers
alike, as a challenge to the relevance of logic for the study of legal reasoning.
These aspects are defeasible reasoning and reasoning with inconsistent
information. Interestingly, these phenomena are also in other domains often
put forward as evidence against the relevance of logic for AI, and this makes
the present study not only relevant for AI and law, but also for Artificial-
Intelligence research in general.
To say some more about some researchers' doubts about logical methods
in law, a first point of criticism has been based on the observation that the
law is meant to apply to an open, unpredictable world, for which reason no
definite rules can be given: legal norms are taken to be inherently subject to
exceptions, that is, they are defeasible. Now this feature is sometimes held
to make logic-based methods inapplicable to legal rules since 'logical rules'
would permit no exception (cf. Hart, 1949, pp. 146-56; Toulmin, 1958, pp.
117-8; Leith, 1986; Berman & Hafner, 1987, p. 4). Although in the course
of my analysis it will become apparent that this is an appropriate criticism
of some systems of logic in particular, a major aim of this book is to argue
that it is by no means a valid criticism of logic in general: we shall see that
research in AI and cognitive psychology on the nature of concepts has led
to a new development in logic, viz. the study of so-called nonmonotonic
reasoning, which has made it possible to give a logical analysis of defeasible
as well as of deductive reasoning.
A second criticism of logic has been based on the observation that in law
there is very much room for disagreement (Gardner, 1987; Gordon, 1989;
Perelman, 1976, pp. 18-9; Rissland, 1988, pp. 46-8): very often different
opinions are possible, not only about matters of evidence, but also about
what the valid principles, rules or precedents are, depending upon one's
moral opinions, political goals and the like, or even upon the interests of
one's client in a law suit. It is generally acknowledged that there is a need
for legal knowledge-based systems which can give insight into the amount
of disagreement, but some claim that for such systems logic-based methods
will not be of any use, since logic would require consistency (cf. Berman
& Hafner, 1987, p. 31 and also Birnbaum, 1991, pp. 63-4). Another major
objective of the present study is to show that this is not the case: a logical
analysis of reasoning with inconsistent information is also possible, at
least if the results of the logical study of nonmonotonic reasoning are taken
into account.
As a final objection to the relevance of logic for legal reasoning it is
sometimes said that most patterns of legal reasoning are not deductive but,
instead, analogical or inductive (Perelman, 1976, p. 17; Rissland, 1988, p.
48; Rissland & Ashley, 1989, p. 214); and since logical reasoning is deductive
reasoning, so it is said, a logic-based system would exclude these patterns
of reasoning. Now, although analogical and inductive reasoning will not be
a major subject of this book, this criticism allows me to state one of my
main points of departure, which is that logic should not be seen as a model
of, but rather as a tool in legal reasoning. Thus other tools, like inductive
or analogical reasoning, are not excluded from the model and, moreover,
on various occasions it will become apparent that logic can even be a very
useful tool in these other kinds of reasoning.
In summary, the aim of the present investigations is to give a logical
analysis of two aspects of legal reasoning which are sometimes held to escape
such an analysis: reasoning with defeasible information and reasoning with
inconsistent information. The main hypothesis is that recent developments
in the logical study of common-sense reasoning have made such an analysis
possible, while a secondary hypothesis is that the analysis will, together
with the instrumental view on the role of logic in reasoning, also help in
clarifying some noninferential aspects of legal reasoning. Finally, as already
indicated, the relevance of the present study may be expected to exceed
the legal domain, since I shall focus on issues which have also appeared in
the general debate on the role of logic in AI.

1.3. Logic and AI


1.3.1. THE DECLARATIVE VS. PROCEDURAL DEBATE

As was said in the previous sections, not only in the legal domain, but also
in AI research in general the usefulness of logic is often put into question,
often on similar grounds.¹ The origins of the controversy on logic can be
¹This section only claims soundness, not originality: I have profited much from discussions in Hayes (1977), Israel (1985), Moore (1982) and Nilsson (1991).
traced back to research in the second half of the sixties on the development
of general purpose theorem provers for first-order predicate logic. Research
on this topic became popular after Robinson's (1965) resolution principle,
a rule of inference which is complete for first-order logic and which for
a useful fragment of first-order logic allows the development of efficient
theorem provers. Many researchers in AI thought that the problem of
artificial reasoning could be solved by developing efficient theorem provers
for full first-order logic; what would then remain to be done was formalizing
domain knowledge declaratively in first-order formulas. This is known as
the 'general problem solver' paradigm. However, this approach soon turned
out to be unworkable: in realistic applications the search space for these
theorem provers rapidly became too large and logic offered no way of
telling the system which deductions to choose from the many possible
ones, since problem solving strategies like forward or backward chaining,
or domain dependent heuristics cannot be expressed in a logical language
meant for formalizing knowledge about the world. This debate is known as
the declarativist/proceduralist controversy. 'Proceduralists' held the view
that it was impossible to strictly separate knowledge about the domain
and knowledge about the control of the reasoning process. They argued
that knowledge should not be represented declaratively but procedurally: a
knowledge engineer should not only be able to represent domain knowledge
but also to express of a particular piece of knowledge how it should be used
in the reasoning process.
Nowadays the criticism that early theorem proving research neglected
the need for expressing control information is generally held to be justified
(see e.g. Moore, 1982, p. 431). As a consequence, ways have been developed
to extend logic-based systems with means to control the deduction process,
for example, by designing metalevel architectures in which knowledge about
the way to solve a problem can be expressed declaratively at a metalevel.
Moreover, the strict view on the separation between declarative and pro-
cedural formalisms has been weakened, as will be discussed in the next
subsection.

1.3.2. LOGICS AND PROGRAMMING SYSTEMS

Apart from the need for ways to express control information, a second
conclusion from the debate on control issues is that the sharp dichotomy
declarative/procedural knowledge representation languages is not a correct
one (Hayes, 1977). To explain this, more needs to be said about what a
logic is.
In a logical system three components can be distinguished. First there
is a formally specified language; then there is an interpretation of this
language: its formal semantics, which for each well-formed formula specifies
what it means that it is true; and, finally, there is an inferential apparatus
defined over the language; normally this apparatus is intended to justify
only inferences which are valid according to the semantics: that is, only
those formulas should be derivable which according to the semantics of the
language must be true if the premises are true. If this is guaranteed, the
inferential system is said to be sound; if, in addition, all such formulas are
derivable, then the system is complete.
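These semantic notions can be made concrete with a small sketch (an illustration of my own, not part of the text, with formulas encoded as nested tuples): propositional consequence is decided by enumerating truth-value assignments, and a sound and complete inferential apparatus would derive exactly the conclusions this semantic check accepts.

```python
from itertools import product

def atoms(formula):
    """Collect the atoms of a formula, represented as nested tuples:
    ('atom', 'p'), ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g)."""
    if formula[0] == 'atom':
        return {formula[1]}
    return set().union(*(atoms(sub) for sub in formula[1:]))

def holds(formula, valuation):
    """Evaluate a formula under an assignment of truth values to atoms."""
    op = formula[0]
    if op == 'atom':
        return valuation[formula[1]]
    if op == 'not':
        return not holds(formula[1], valuation)
    if op == 'and':
        return holds(formula[1], valuation) and holds(formula[2], valuation)
    if op == 'or':
        return holds(formula[1], valuation) or holds(formula[2], valuation)
    if op == 'imp':
        return (not holds(formula[1], valuation)) or holds(formula[2], valuation)

def entails(premises, conclusion):
    """Semantic consequence: the conclusion is true in every valuation
    that makes all premises true."""
    props = sorted(set().union(atoms(conclusion), *(atoms(p) for p in premises)))
    for values in product([True, False], repeat=len(props)):
        valuation = dict(zip(props, values))
        if all(holds(p, valuation) for p in premises) and not holds(conclusion, valuation):
            return False
    return True

# Modus ponens is valid; affirming the consequent is not.
p, q = ('atom', 'p'), ('atom', 'q')
assert entails([p, ('imp', p, q)], q)
assert not entails([q, ('imp', p, q)], p)
```

A proof system is sound with respect to this check if it derives only entailed conclusions, and complete if it derives all of them.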
Now, according to Hayes (1977) the proceduralists had a misconception
about what logic is. Logic is not a programming system but a formal system
which, instead of making inferences, only defines what a valid inference is;
an AI system, on the other hand, performs inferences. Despite the fact
that a knowledge representation language of an AI system can provide
means to express procedural knowledge, as a knowledge representation
language it also has logical aspects: the knowledge which it represents
is knowledge about the world, which can therefore be true or false; and it
is logical semantics which specifies what it means that an expression is true
or false; moreover, in doing so it gives criteria for determining the validity
of inferences, thus providing a standard for characterizing the inferential
behaviour of a system. In addition to these logical aspects, knowledge
representation languages also have procedural aspects: for example, certain
special tasks, like doing calculations on numbers or dates, or inspecting
certain tables, might be performed by procedures, or there can be ways
of indexing knowledge for ease of retrieval. The important point is that
even if a representation language has some of these procedural features,
still questions can be asked like 'is it based on propositional or on predicate
logic?', 'does it only use universal or also existential quantifiers?' and so
on. In this respect it is also important to note that logic does not enforce a
particular syntax. For instance, it is perfectly possible to use a semantic net
or a frame as an alternative syntax for the language of first-order predicate
logic or some other logical system. The important role of logic is, rather
than to determine a specific notation, to specify the meaning of a notation: a
notational system which does not have a specified meaning is of little use in
representing knowledge, since there is no way of determining what it means
to say that a certain represented piece of knowledge is true, and therefore
the behaviour of the system cannot be criticised. This remark implies that
if, for example, semantic nets or frames are preferred over logical formulas
because of their access and retrieval features, this is not a choice for a
different semantics, but a choice for different procedural features.
In sum, it is impossible to make a sharp distinction between procedural
and declarative or even between logical and nonlogical knowledge
representation languages: knowledge representation formalisms have both
procedural and declarative aspects, and the importance of logic lies in its
ability to analyze the declarative aspects. Of course, some formalisms make
it easier to abstract their declarative parts than other ones and therefore
the distinction procedural/declarative is still valuable, being a matter of
degree. Now knowledge-based systems are normally meant to model general
ways of solving problems rather than to solve a particular problem, for
which reason their knowledge will not be tailored to deal with one particular
kind of problem; therefore knowledge-based systems will normally be more
to the declarative end of the scale, and this ensures that a logical analysis
of such systems will not be so abstract as to cease to have value.

1.3.3. LOGIC AND REASONING

So far the debate has mainly been about control issues. However, the
criticism of early theorem proving research that it neglected the need for
expressing control knowledge rapidly grew into a general distrust of any
use of logic in AI. Well-known is the discussion between John McCarthy
and Marvin Minsky about whether AI should be 'strong' or 'weak' (cf.
Kolata, 1982; Israel, 1985). According to Minsky, who adopts the 'strong
AI' view, the best way to make computer systems intelligent is to let them
reason in the same way as the human mind does, and in his opinion this
is almost certainly not with logic. Among other things, Minsky points at
the defeasible nature of common-sense knowledge, that is, at the many
exceptions which a rule can have in every-day life. As noted in Section 1.2,
legal scholars have said the same of legal information. Minsky illustrates his
point with an example which has become classical. Consider the common-
sense rule 'Birds fly': what if some bird Tweety is an ostrich or a penguin?
These exceptions could be attached to the rule, but what then if Tweety
is dead or has its feet set in concrete? Minsky's point is that the list of
exceptions to such common-sense rules is open-ended, that it can never
be completely specified once and for all. McCarthy, on the other hand,
employing the 'weak AI' view, says that it does not matter whether an
AI program works in a psychologically realistic way, since AI is about
intelligence that is artificial. To the problems with exceptions in common-
sense reasoning he responds with a new view on logic, which allows for
so-called nonmonotonic reasoning.
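Minsky's Tweety example can be phrased as a toy program (a sketch of my own; the fact and exception names are illustrative). The rule 'Birds fly' is applied by default unless a known exception holds, and precisely because the exception list is a fixed, finite enumeration, it can always be defeated by a case it does not yet cover:

```python
# Open-ended in principle, but necessarily finite in any program:
# Minsky's point is that no such list is ever complete.
EXCEPTIONS = {'penguin', 'ostrich', 'dead', 'feet_in_concrete'}

def flies(facts):
    """Naive default rule 'birds fly': conclude flying unless one of the
    listed exceptions is known to hold."""
    return 'bird' in facts and not (facts & EXCEPTIONS)

# Nonmonotonicity: adding information retracts an earlier conclusion.
assert flies({'bird'})
assert not flies({'bird', 'penguin'})
```

The second assertion shows the characteristic nonmonotonic behaviour: enlarging the set of premises shrinks the set of conclusions, something impossible in standard deductive logic.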
Many others have also criticized what is sometimes called the logicist
approach to AI. 'Logicism' is one of those terms which is used in many
different ways, but a good example of its use in criticism of logical methods
is provided by Birnbaum (1991, p. 59), who describes one of the features
of logicism as "( ... ) its emphasis on sound, deductive inference", for which
reason "logicists tend to ignore other kinds of reasoning", one of which is
"reasoning by analogy". Some, e.g. Israel (1985) and Nilsson (1991) have
warned against a confusion which sometimes underlies this criticism, viz.
thinking that using a logical language (i.e. a language with a logically
specified meaning) for representing domain knowledge would necessarily
imply a choice for modelling the entire reasoning process as "running a
sound theorem prover over such a language" (Israel, 1985, p. 432). Although
it is true that reasoning is more than deduction, it should be realized that
this observation is perfectly compatible with using logic-based languages
to represent knowledge. Among other things, the use of a logical language
does not make it impossible to formally define over the language certain
reasonable but deductively unsound modes of reasoning like defeasible,
analogical or inductive reasoning. For example, a careful analysis of many
AI algorithms for analogical or inductive reasoning will reveal that these
algorithms use logic as a tool in that they operate on the form of expres-
sions instead of on their content; later chapters of this book contain some
examples. Likewise it is not the case that logic is only useful in case of
consistent knowledge bases: on the contrary, the observation that a certain
set of premises is inconsistent can be used to let the system undertake some
action: for instance, to let it perform some kind of belief revision or to let
it prefer some consistent part of the knowledge, with or without calling a
higher level of knowledge with which a choice might be made. The role of
logic is clear: it defines when a set of premises is inconsistent and what
the consequences are of possible revisions or preferences. In conclusion,
both in case of nondeductive types of reasoning and in case of inconsistent
knowledge logic can be useful: what is essential is that logic should be
regarded as a tool in a larger framework, which is called reasoning; using
logic to represent domain knowledge does not commit at all to a particular
way of modelling reasoning: in particular, it does not commit to regarding
reasoning as no more than mechanically deriving logical consequences from
a set of logical formulas.
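One such use of logic with inconsistent knowledge can be sketched as follows (a toy illustration of my own, with premises reduced to literals such as 'p' and '-p'): logic supplies the inconsistency test, and the system uses it to compute the maximal consistent parts of the knowledge base, among which a preference could then be made at a higher level.

```python
from itertools import combinations

def consistent(literals):
    """A set of literals is inconsistent iff it contains some p and -p."""
    return not any('-' + l in literals for l in literals if not l.startswith('-'))

def maximal_consistent_subsets(base):
    """Enumerate the maximal consistent subsets of a knowledge base,
    working downwards from the largest candidates so that only
    maximal ones are kept."""
    items = list(base)
    result = []
    for size in range(len(items), 0, -1):
        for combo in combinations(items, size):
            subset = set(combo)
            if consistent(subset) and not any(subset < found for found in result):
                result.append(subset)
    return result

kb = {'p', '-p', 'q'}
assert not consistent(kb)
assert {frozenset(s) for s in maximal_consistent_subsets(kb)} == \
       {frozenset({'p', 'q'}), frozenset({'-p', 'q'})}
```

The brute-force enumeration is exponential and serves only to make the logical notions concrete; what matters is that the detection of inconsistency triggers further processing rather than halting the system.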

1.4. Points of Departure

On the basis of the above discussions this section formulates some points
of departure for the coming investigations. The first is that the kind of
AI research which will be investigated is closer to the 'practical' side of
the scale of aims than to the 'ambitious' side in that the focus will be on
logical aspects of applying knowledge-based technology to law. A second
point of departure is that logic should be regarded as a tool in, rather than
as a model of reasoning. Thus other kinds of reasoning are not only made
possible, but can also be better understood, precisely because logic is one
of their components. Furthermore, in line with the aims of 'practical' AI
research the analysis will be prescriptive rather than descriptive: it will be
about what is sound, rational or principled reasoning rather than about
how people actually reason. For this reason I regard remarks like 'mistakes
are tolerable since people also sometimes make mistakes' as beside the
point (moreover, in my opinion it is not tolerable that computers make the
same mistakes as humans do, as long as computers do so few things well
which humans do well). Finally, the investigations will be at a formal level
of analysis. I do not intend to develop directly implementable algorithms;
rather, my aim is to develop the formal foundations which can be used in
a critical analysis of implemented systems. This leaves room for the use of
an implemented syntax other than logical formulas, and also for systems
with parts of their knowledge represented in a procedural way.

With respect to the need for formal foundations, two final remarks must
be made. The first is about the fact that the articles on which the previous
section was based mainly stress the need for formal semantics; in fact, the
meaning of 'semantics' should be extended a little. In recent years, formal
theories have been developed in which logical languages are embedded
in larger frameworks, which are also specified in a formal way. Examples
related to the subject of this book are Poole (1988)'s abductive framework
for default reasoning, belief revision research of e.g. Gärdenfors (1988) and
research on metalevel architectures, cf. Genesereth & Nilsson (1988, Ch.
10), to name but a few. And also the main contribution of this book will not
be a formal semantics of some logicallanguage, but a larger framework, in
which a logical language with a formal semantics is only a component. Such
broader theories can also serve as formal foundations for AI research, in
specifying the meaning (in a broader sense) of programs, and in determining
the soundness of the system in doing some task still closely related to logical
proof, like comparing arguments or revising beliefs. In conclusion, while
still adhering to the general conclusion of the previous section, the range
of useful formal theories should be broadened.

A second issue is that even if it is taken for granted that AI systems
need theoretical foundations, it might be questioned why they should be
expressed in a formal way. A first answer to this is that many elements of
AI systems are directly based on logic (albeit sometimes in disguise) which
makes a description on a logical level obviously appropriate. Moreover,
although theories expressed in natural language or in some semiformal
language can certainly be useful (see for discussions on various levels of
analysis of knowledge-based systems e.g. Newell, 1982) the point is that
such theories are usually not sufficient, since their distance to the level of the
computer systems is too large. This distance should be bridged by a formal
and more exact intermediary and for this logic is an obvious candidate.

1.5. The Structure of this Book


This book starts in Chapters 2 and 3 with an informal analysis of
the research topic. In Chapter 2 the discussion on the use of logic in AI
is applied to the legal domain and connected with a similar discussion in
legal theory, after which Chapter 3 examines in more detail certain patterns
of legal reasoning which cannot be analyzed by standard systems of logic,
and which are therefore often put forward as a challenge for logic-based
methods. It will be concluded that a new view on logic should be employed,
allowing for nonmonotonic reasoning.
The formal investigations start in Chapter 4 with an overview of some
of the main approaches to formalizing nonmonotonic reasoning, after which
Chapter 5 makes a beginning with the discussion of how these methods can be
applied to model the defeasibility of legal reasoning. One of these methods,
which is based on regarding nonmonotonic reasoning as a kind of inconsis-
tency handling, is further investigated in Chapters 6, 7 and 8; these chapters
thus unify the two major topics of this book: defeasible reasoning and rea-
soning with inconsistent information. They contain the main contribution
of this book, a logical system for defeasible argumentation, i.e. a formal
system for constructing and comparing arguments. This system is not only
a contribution to AI-and-law research, but also to general AI research on
nonmonotonic reasoning. Chapter 9 then reviews related work in AI, while
Chapter 10 discusses some issues concerning applications of the system in
AI-and-law and in legal philosophy. I end in Chapter 11 with a summary
and some concluding remarks.
CHAPTER 2

THE ROLE OF LOGIC IN LEGAL REASONING

The aim of this chapter is to illustrate the point of departure of this book
that logic should not be regarded as a model of, but as a tool in legal
reasoning. I shall do so mainly by discussing some objections to logical
methods which can be found in legal philosophy and AI-and-law. Some of
them are an application to the legal domain of more general criticism of
logic discussed in Chapter 1. Section 2.1 briefly discusses some basic misun-
derstandings about the nature of logic, after which Section 2.2 examines in
more detail another misunderstanding, the opinion that applying logic to
legal reasoning would be the same as regarding law as a coherent system of
rules which can somehow be discovered and formalized. Then, Section 2.3
deals with the criticism that logical methods can only model a small part of
legal reasoning since most forms of reasoning used in law are nondeductive.
Finally, Section 2.4 relates the mainly philosophical discussions of the first
three sections to an issue in AI-and-law research, the distinction between
'rule-based' and 'case-based' reasoning.
I shall not discuss purely fundamental issues concerning the application
of logic to law, such as the question whether norms can have truth values
(Von Wright, 1983; Alchourron & Bulygin, 1984; Susskind, 1987; Soete-
man, 1989; Herrestad, 1990). In principle this issue is certainly relevant to
our subject, since in Chapter 1 I argued that, although a direct use of logic
in AI occurs mainly at the proof-theoretical level, every proof-theoretic
account of logical consequence should be based on a semantical theory.
However, for practical reasons I confine myself to very briefly indicating
my position. The problem is clearly stated by Jørgensen (1938) in what has
been named the 'Jørgensen dilemma'. On the one hand, it seems obvious
that only descriptive statements can be true or false and therefore, since
norms do not describe but prescribe, norms cannot be true or false. On the
other hand, it seems also obvious that the relation between the premises and
the conclusion of a legal or moral argument is not arbitrary. The dilemma
is that, assuming that norms can indeed not be true or false, it has either
to be defended that norms have no logical relations, or that a logic of
norms cannot be based on truth. One approach to this dilemma is trying
to avoid it by arguing that a logic of norms can be based on truth, at
least if the concept of truth is extended to cover not only the real world

H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
but also 'ideal worlds'. This is the standard approach to deontic logic (cf.
Føllesdal & Hilpinen, 1970). Others, on the other hand, e.g. Alchourron
and Bulygin (1984), have tried to base a logic of norms on something else
than truth, and some have even opted for the first horn of the dilemma:
Von Wright (1983, p. 132), for example, argues that the relations between
norms are not a matter of logic, but of principles of rational legislation.
Although this book is not about deontic logic, I shall informally adhere
to the view that if the concept of truth is extended to ideal and almost ideal
worlds, norms can have truth values. Of course, this might be the wrong
choice; it might turn out that logics of norms based on something else than
truth are capable of giving a superior analysis of the formal aspects of
legal reasoning. However, in my opinion non truth-based logics have not
yet reached that level of superiority (see also Herrestad, 1990, p. 36) and
therefore I shall in this book assume that norms can be true or false in the
sense just explained.

2.1. Three Misunderstandings about Logic

In the next two sections I discuss four misunderstandings about logic which
can sometimes be found in the literature. Since the fourth requires more
than just a few paragraphs, a separate section will be devoted to it.

2.1.1. 'TO FORMALIZE IS TO DEFINE COMPLETELY'

Sometimes it is argued that a logical formalization of a body of norms
attempts to give a complete definition of all legal concepts which occur in
it. If this criticism were correct, formalizing the law would not be a good
idea, for there is no doubt that many legal concepts are not completely
defined, as the numerous discussions in legal theory on 'open-textured'
concepts show (cf. Section 3.3 below).
However, it is easy to show that a logical formalization can leave con-
cepts completely or partially undefined. Consider a legal norm forbidding
the use of a vehicle in a public park. This prohibition can be formalized as
(1) v → ¬p

where v stands for 'the object is a vehicle' and ¬p stands for 'the object is
not allowed to enter the park' (the formal symbols used in this book are
explained in the Appendix). Now, if there are no further rules about what
objects are vehicles, the concept 'vehicle' is not defined at all, and even
when rules are added like
(2) c → v 'a car is a vehicle'
(3) w → v 'a wheelchair is a vehicle'
then 'vehicle' is defined only partially, since from the rules (1-3) nothing
follows about whether, for instance, a skateboard or a war memorial tank is
a vehicle. The logical formalization does not imply that (2) and (3) are the
only possible instantiations of the concept 'vehicle'. This becomes different
only if the following formula is added.
(4) (¬c ∧ ¬w) → ¬v
Only this formula would make 'vehicle' a completely defined concept. How-
ever, whether a formula like (4) is true depends completely on how the
norm is formulated by the legislator or interpreted by the judiciary; it does
not become true because of the logical formalization. This shows that a
formalized concept can be partially undefined, that a 'logical' world can be
open.
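The point that rules (1)-(3) leave 'vehicle' open can be checked mechanically (a sketch of my own, with the rules reduced to propositional 'if a then b' pairs): for a skateboard, neither 'vehicle' nor its negation is derivable, and only a closure axiom in the spirit of (4) would change this.

```python
def closure(facts, rules):
    """Forward chaining: apply rules (antecedent, consequent) until no
    new atoms can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# Rules (2) and (3): a car is a vehicle, a wheelchair is a vehicle.
RULES = [('car', 'vehicle'), ('wheelchair', 'vehicle')]

assert 'vehicle' in closure({'car'}, RULES)
# For a skateboard nothing follows either way: the concept stays open.
assert closure({'skateboard'}, RULES) == {'skateboard'}
```

That the second query fails is not a derivation of 'not a vehicle': absence of proof and proof of absence coincide only under an added closure assumption such as (4).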

2.1.2. 'FORMALIZATION LEAVES NO ROOM FOR INTERPRETATION'

The second argument against the use of logic in legal reasoning is based
on the observation that, because of the ambiguity and vagueness of natural
language, formalizing the law is impossible without interpreting it, for which
reason a formalization carries more information than its natural-language
counterpart, which is always open to different interpretations. According to
Leith (1986, p. 546) this goes against the observation that in law it is the
judiciary who interprets a norm, when applying it to the facts of a case;
therefore an interpretation beforehand by a knowledge-engineer would be
premature.
Of course, the observation that each formalization is an interpretation
of the law is correct, and almost all who use logic to formalize a piece
of legislation are aware of this. For this reason some authors, e.g. Bench-
Capon & Sergot (1985) and Allen & Saxon (1991), discuss the possibility
to let a legal knowledge-based system contain several alternative syntactic
formalizations. However, if it is said that by formalizing a statute no room
is left at all for the judiciary to interpret it, a distinction is ignored between
two levels of interpretation. The first is the syntactical level, on which prob-
lems occur with respect to the correct logical form of an expression, like:
does a norm saying that Christian schools are entitled to receive financial
support exclude other religious schools from financial support? In other
words: is the norm a conditional ('if') or a biconditional ('if and only if')?
A logical formalization will have to choose between the two possibilities
and will therefore interpret the norm. A variant of this situation is when it
can be debated whether a natural-language expression occurring in several
norms has the same or a different meaning in each norm. This must also
be decided beforehand, while formalizing the norms. In sum, with respect
to the syntactical level of interpretation Leith's observation is correct.

However, besides a syntactical level there is also a conceptual level of
interpretation: at this level issues concerning interpretation are about the
classification of factual situations as instances of legal concepts. Unlike
problems at the syntactical level, these problems are not matters of logical
form, but of content and, as was shown in the previous section, a logical
formalization can leave such questions open. Therefore, although a formal-
ization leads to a unique syntactical interpretation of what is formalized,
it does not enforce a unique conceptual interpretation.

2.1.3. 'LOGIC EXCLUDES NONDEDUCTIVE MODES OF REASONING'

A third misunderstanding about logic, which is somewhat less basic than
the previous two, has already been discussed in 1.3.3: it is the view that
formalizing a piece of information in logical formulas would make it impos-
sible to reason with this information in other than deductive ways: for this
reason, it is said, a knowledge-based system based on logic cannot perform
other useful modes of reasoning, like inductive or analogical reasoning; and
particularly for systems in the legal domain this would be a shortcoming,
since lawyers very often reason nondeductively. In fact, the view which is
rejected by this criticism is regarding systems of law as axiomatic systems:
that is, as regarding legal rules as axioms (in a neutral sense) and regarding
reasoning as no more than deduction from these axioms. I shall call this
view on legal reasoning the axiomatic view.
However, logic does not at all enforce the axiomatic view on reasoning.
As already noted in Chapter 1, the criticism neglects two distinctions. First,
it overlooks the difference between the semantics of a logical language and
the inference rules defined over the language. Although it is true that
logicians are often mainly concerned with proof systems which respect the
formal semantics, nothing precludes the possibility to work with inference
rules which do not. Almost the entire formal part of this book (Chapters 4
to 9) will be devoted to one such deductively unsound inference mode,
nonmonotonic reasoning. A second distinction which the criticism ignores is
the one between logical inference and reasoning. In 1.3.3 it was argued that
reasoning includes more than only drawing logical inferences: this will be
further illustrated in Section 2.3, where one noninferential way of reasoning
will be discussed, viz. analogical reasoning. First, however, a criticism will
be discussed which blames logic for implying something even stronger than
the axiomatic view on reasoning.
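To give a minimal flavour of what such a deductively unsound inference mode amounts to (this is my own toy illustration, not the formal apparatus developed in the later chapters; the example facts are invented): a defeasible rule licenses its conclusion in the absence of a defeater, so that adding a premise can retract a previously drawn conclusion, something no deductively sound inference relation allows.

```python
# Minimal sketch of nonmonotonic inference: a rule fires unless a
# defeating fact is present, so a conclusion can be retracted when
# the set of premises grows. (Invented facts, for illustration only.)

def derive(facts):
    """Defeasible rule: misdemeanour => punishable, unless self_defence."""
    conclusions = set(facts)
    if "misdemeanour" in facts and "self_defence" not in facts:
        conclusions.add("punishable")
    return conclusions

print("punishable" in derive({"misdemeanour"}))                   # True
print("punishable" in derive({"misdemeanour", "self_defence"}))   # False
```

Note how the second call has strictly more premises than the first, yet derives fewer conclusions: this failure of monotonicity is exactly what standard deductive logic excludes.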

2.2. The 'Deductivist Fallacy'


This section discusses a final misunderstanding about logic, the criticism
that using logical methods would imply a commitment to some naive,
THE ROLE OF LOGIC IN LEGAL REASONING 19

simplistic view on how to do justice. Although this naive view is indeed
false, which has some consequences with regard to the use of logic, it will
be explained that using logical methods does not at all commit to this view.

2.2.1. 'NAIVE DEDUCTIVISM'

The view on the legal reality which is the focus of the present criticism
is almost never explicitly maintained, but very often attacked: in the lit-
erature it is referred to in various ways, like 'mechanical jurisprudence',
'conceptualism', 'formalism', 'legalism' and, in one usage of the term,
'rule-based reasoning'. In the rest of this book I shall refer to it as the naive
deductivist view on legal reasoning. It is the old-fashioned view that the
law is a consistent and complete body of rules which can somehow be
discovered. In this view, all there is to legal reasoning is finding the valid
rules and applying them to the facts in a deductive manner. This way
of looking at the law should be carefully distinguished from the axiomatic
view, described in Section 2.1.3, which is only a formal view on how to solve
problems and implies nothing about the status of the axioms; essentially,
the naive deductivist view on law is the axiomatic view on legal reasoning
supplemented with a belief in the fact that the truth or validity of the
premises can easily be established.
Perelman & Olbrechts-Tyteca (1969, pp. 197-8) give a nice description
of the naive deductivist view, not restricted to reasoning in law, when dis-
cussing strategies to deal with incompatibilities (note that in this fragment
the terms 'logic' and 'logical' are used in an informal sense, as only referring
to the described strategy and not to formal logic):
The first (strategy), which may be called the logical, is that in which the primary
concern is to resolve beforehand all the difficulties and problems which can arise
in the most varied situations, which one tries to imagine, by applying the rules,
laws, and norms one is accepting. (... ) The logical approach assumes that one can
clarify sufficiently the ideas one uses, make sufficiently clear the rules one invokes,
so that practical problems can be resolved without difficulty by the simple process
of deduction. This implies moreover that the unforeseen has been eliminated, that
the future has been mastered, that all problems have become technically soluble.
Nobody doubts that as a model of the legal reality the naive deductivist
view is much too simplistic. A first source of complexity in legal reasoning
is the fact that legislation is often the product of a heavy socio-political
battle: therefore it is often ambiguous and sometimes even inconsistent.
Another complicating feature of the legal domain is that the world is so
complex that the legislator cannot foresee everything. This has various
consequences: a first one is that legislation and established precedent often
simply do not address a certain problem, for which reason other grounds for
a decision have to be found; another one is that legislation often contains
general terms, of which the applicability to the facts of a case can lead
to substantial disagreement; a final consequence is that, even if a rule is
itself clear and unambiguous, unforeseen circumstances might arise in which
applying the norm is regarded as inappropriate.
These complicating phenomena have at least two consequences for a
realistic model of legal reasoning. The first is that, since in dealing with
incomplete, inconsistent or ambiguous rules and precedents social and po-
litical factors inevitably play a role, ignoring these factors in a model
of the legal reality would often be a severe oversimplification. Another
consequence is that, even if it could be defended that in theory there is
a right answer to every legal problem, few believe that in practice this
answer can be obtained by just applying some undisputed legal axioms
to the facts of the case. Therefore a more realistic picture of the legal
domain is that, even if the facts of a case are agreed upon, there can be
disagreement about the interpretation of a legal rule or concept, about the
validity of a rule, about whether there is an exception to a rule, based
on some legal principle or socio-political demands, about the binding force
of a precedent, etcetera. As a result it is sometimes said that in law it is
possible to argue for almost anything: in practice legal knowledge cannot be
'discovered' by some objective method but must be argued for in a 'battle of
persuasion'. Of course, it can be debated to which degree the legal domain
contains disagreement; at this point more and less extreme positions have
been defended. Nevertheless, I believe that the just sketched 'realistic' view
contains a considerable degree of truth and therefore the coming logical
investigations will bear this view in mind.

2.2.2. THE CRITICISM


Leith
We are now in the position to describe the final misunderstanding about
logic: in one sentence it is the opinion that using logical methods is a sign of
employing the naive deductivist view on reasoning. A very clear example of
this criticism is provided by Leith (1986, p. 546), who first nicely describes
the naive view as the assumption that in law
(... ) there is such a thing as a 'clear rule' - it is a rule which can, to a large
extent, be applied without further thought.
but then immediately adds
to logicians it is the major premise from which the judicial argument must and
does proceed.
Furthermore, on p. 549 Leith says
In fact, a logical sentence is just another form of a clear rule - it contains no
contextual information, it is a piece of law which is supposed to explain its own
context and which, its proponents have argued, is not open to negotiation. [my
italics, HP]
Finally, on p. 550 he says:
Legal logicians are certainly some of those who are most adamant that justice is
best found through logic - using logical reasoning, proceeding step by step through
the legal norms. This is a view we reject; ( ... ) judges in our view should be held
responsible for their adjudications. They should apply the law, not in a formal
manner excluding the social factors and context, but as the legislators and society
would wish.
This criticism is quite different from the last argument of the previous sec-
tion: that argument only amounted to the formal criticism that lawyers do
not always reason deductively, but often in other modes; Leith's criticism,
however, is not a matter of form, but of content: he states that logicians have
"a false epistemology" with respect to law (1986, p. 552): according to Leith
using logical methods for representing knowledge implies a commitment to
the philosophical doctrine that a complete and consistent body of legal
rules can somehow be discovered as being 'the law', largely by looking
at legislation and established precedent, and that deduction from the thus
discovered body of rules is all that is necessary in doing justice. Particularly
Leith's above remark on what a logical sentence is shows that according to
him to formalize a rule is to sanction it as a valid rule of law.
Because of these effects logic would force judges to make decisions
they do not want to make; it would exclude socio-political considerations
from the legal process and it would deprive a judge of his or her moral
responsibility for a decision: s/he would often be forced to say: 'I would like
to decide otherwise, but, unfortunately, logic dictates this decision'. The
last of the above quotations of Leith (1986) provides an excellent example
of this criticism, and also in other places the view that it is logic and not
the premises which can be held responsible for an undesirable conclusion
can be found: Susskind (1987, p. 197) cites a British judge who did not like
the outcome of applying a certain statute and therefore said:
In my opinion, the rules of formal logic must not be applied to the Act with too
great strictness
And Soeteman (1989, p. 230, fn. 18) points at a similar view held by
Enschede (1983, pp. 53-4).

Toulmin
Leith is not the only one in ascribing the naive deductivist view to logicians:
the same criticism can be found in the work of the philosopher Stephen
Toulmin, whose ideas have in recent years attracted the attention of AI-and-
law researchers (e.g. of Gordon, 1994, 1995 and Freeman & Farley, 1996).
In Toulmin's case a more detailed analysis is necessary.

                       so
Data ------------------+------------------> Qualifier, Conclusion
                       |                           |
                     since                       unless
                       |                           |
                    Warrant                    Exception
                       |
                 on account of
                       |
                    Backing

Figure 2.1. Toulmin's argument scheme

The main elements of Toulmin's (1958) model of argumentation are
the following ones. Conclusions can be obtained via data, which are made
into reasons for the conclusion by warrants. Conclusions are qualified (for
instance, by 'certainly', 'probably', or 'presumably') and they can be
rebutted in exceptional circumstances. As for warrants, logicians might be
tempted to see them as universal premises - or the 'major' in a syllogism
- but Toulmin seems to interpret them as domain-specific inference rules
(in his terms "inference licences", p. 98) which, moreover, being subject to
rebuttal, are defeasible. Warrants should be justified by backings: in factual
domains these can, for example, be observations of certain regularities of
which the warrant is the generalization by way of induction; in the legal
domain an example of a backing is an observation about the terms and
dates of enactment of the relevant provisions. Logicians would say that
backings are the reasons for accepting the premises of an argument. Toulmin
summarizes this in the scheme displayed in Figure 2.1.
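The elements of the scheme in Figure 2.1 can be encoded directly as a data structure. The following is my own sketch (not part of Toulmin's account), using Toulmin's well-known 'Harry was born in Bermuda' example; the naive `applies` test merely checks whether a known exception rebuts the qualified conclusion and is not meant as a serious argumentation semantics.

```python
# Toulmin's scheme (data, warrant, backing, qualifier, conclusion,
# exceptions) as a record, with a naive rebuttal check.

from dataclasses import dataclass, field

@dataclass
class Argument:
    data: str            # the grounds, e.g. an observed fact
    warrant: str         # the inference licence connecting data to conclusion
    backing: str         # the grounds for accepting the warrant
    qualifier: str       # e.g. "presumably"
    conclusion: str
    exceptions: list = field(default_factory=list)

    def applies(self, known_facts):
        """The warrant licenses the conclusion unless an exception is known."""
        return not any(e in known_facts for e in self.exceptions)

arg = Argument(
    data="Harry was born in Bermuda",
    warrant="a man born in Bermuda will generally be a British subject",
    backing="the relevant statutes and legal provisions",
    qualifier="presumably",
    conclusion="Harry is a British subject",
    exceptions=["both his parents were aliens"],
)

print(arg.applies(set()))                              # True
print(arg.applies({"both his parents were aliens"}))   # False
```

The point of the encoding is only to make the roles of the six elements explicit; it deliberately leaves open the epistemological questions about warrants and backings discussed below.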
An important distinction in Toulmin's model is the one between analytic
and formally valid arguments. Arguments are analytic if the data and
backing together 'entail' the conclusion, otherwise they are substantial. Formally
valid arguments, on the other hand, have the form: 'Data, Warrant, so
(Qualifier) Conclusion', where the warrant indeed permits the (qualified)
derivation of the conclusion. According to Toulmin (pp. 119-120) the essential
difference is that any argument can be transformed into a formally valid
form, viz. by making explicit the appropriate warrant, whereas, on the other
hand, only a few arguments are analytic, viz. those of mathematics (p. 127).
Now, what for our purpose is essential in Toulmin's line of reasoning is that
he does not discuss the possibility that in a similar way any argument can be
made analytic by making explicit the appropriate backing; therefore it may
be held that Toulmin regards the backing of analytic arguments as somehow
objectively given. In conclusion, we may paraphrase Toulmin's definition of
an analytic argument as one in which the conclusion can be derived by only
formally valid steps from unchallengeable propositions. Loosely speaking
(since Toulmin does not give exact definitions of his concepts), analytic
arguments are arguments in which the appropriate warrant for making the
argument formally valid is 'equivalent' to or 'entailed' by the backing.
Now, Toulmin's criticism of logicians is that they take the mathematical
model of reasoning, by which he means using only analytic arguments, as
the paradigm of all reasoning. According to Toulmin, logicians are not
concerned with formally valid arguments, but with analytic arguments
(p. 149). In conclusion, what may be derived from Toulmin's model of
argumentation and his criticism of formal logic is that Toulmin blames
logicians for holding the naive deductivist view on any kind of reasoning.

2.2.3. THE MISUNDERSTANDING

Historically, ascribing the naive deductivist view to logicians is understandable,
since the spectacular development of logic in the last century was
essentially a result of research on the foundations of mathematics, viz. of
attempts to formalize the traditional view on mathematics as being sound,
deductive reasoning from evident first principles (cf. e.g. Beth, 1965). Often
it is assumed that if logic is applied to law, this is done for similar reasons
as it was applied to mathematics, viz. to put legal science on a sound and
evident basis.
As a response to this criticism it should first of all be remarked that the
naive deductivist view is not implied by logic itself.1 Logic is only about the
form of an argument: the only thing logic does is relating premises to
conclusions; the question whether the premises are acceptable falls completely
outside the scope of logic. Even whether the conclusion is acceptable is not
a matter of logic: if the conclusion of a valid argument is regarded as false
because of some other reasons, not contained in the premises, then nothing
in logical theory prevents a change of the premises; logic is not concerned
with finding correct answers but with employing correct inference rules. In
other words, if you don't like the conclusion, not logic but the premises are
to blame.
To apply these remarks to Toulmin's criticism, it should be stressed
that the 'mathematical' model of reasoning as Toulmin describes it is an
epistemological one and not a formal one. Legal logicians do not require
1Some others who have pointed at the misunderstanding are Nieuwenhuis (1976),
Alexy (1978, p. 282), Soeteman (1989, pp. 229 ff) and Sergot (1990, p. 28).
at all that a solution to a legal problem should logically follow from sound
and evident first principles; logicians only require, if they require anything
at all, that the conclusion follows from, in Toulmin's terms, the data and
some appropriately selected warrant, whether that warrant is sufficiently
backed or not. The latter question falls outside the scope of logic; it is an
epistemological question. Again in Toulmin's terms, but contrary to what
he says: logicians study formally valid arguments, not analytic ones.
Leith (e.g. 1990, p. 63) and Toulmin are also aware of the limited, formal
scope of logic; their main criticism seems to be that logicians themselves
do not realize this. However, as an empirical statement this is simply false;
among the many counterexamples is the group which is the most severely
criticised by Leith, the logic-programming group of Imperial College in
London (cf. Bench-Capon & Sergot, 1985; Sergot, 1990); and this book
aims to provide yet another counterexample.
A final reason why I have discussed the criticism of Leith and Toulmin
in detail is that they often express their criticism in such a way that
readers with little knowledge of logic might easily be led to believe that it is
logic itself which implies the naive deductivist view. The above quoted
remark of Leith (1986, p. 549) on logical sentences provides an example,
and another critic of logic, Chaim Perelman, who is himself well aware of
the limited scope of logic, sometimes also expresses his views in a confusing
way, e.g. in Perelman & Olbrechts-Tyteca (1969, pp. 197-8) as cited above
and in Perelman (1976, p. 6):

Parce qu'il est presque toujours controversé, le raisonnement juridique, par
opposition au raisonnement déductif purement formel, ne pourra que très rarement
être considéré comme correct ou incorrect, d'une façon, pour ainsi dire, impersonnelle
(Because it is almost always challenged, a legal argument can, as opposed to a
purely formal deductive argument, only seldom be considered as objectively correct
or incorrect, in a, so to speak, impersonal way).

This statement would be less problematic if the words "par opposition au
raisonnement déductif purement formel" were replaced by "par opposition
au raisonnement mathématique": then the contrast would be a contrast in
the status of the premises, which is presumably what Perelman intends to
say.
To conclude this subsection, it should once more be stressed that logic
does not commit at all to a particular epistemological view on the nature of
the knowledge which is formalized: formalizing legal rules in logic does not
sanction these rules, and logic can even be used if it is held that there is no
objective standard at all to determine the truth or validity of the premises.
The truth of the premises in a logical argument is entirely hypothetical.

2.2.4. THE MERITS OF THE CRITICISM

Although Leith, Toulmin and others are wrong in ascribing the naive de-
ductivist view to logicians, AI-and-law researchers can certainly learn from
their remarks. A first merit of the critics is that they have pointed at the
limited scope of formal methods in modelling legal reasoning, by stressing
the fact that inevitably room has to be left for socio-political or moral
decisions of content. However, it should be realized that this correct criticism
applies not only to logic, but to all attempts to computerize legal reasoning,
since, as already said in Chapter 1, AI has to restrict itself to aspects of
reasoning which can be formalized.
A second merit of the criticism is that it shows that even the formal
part of legal reasoning is more complicated than the axiomatic view on
legal reasoning, described in Section 2.1.3. As already noted, this view is
essentially the naive deductivist view without its epistemological ambitions.
In fact, the critics of logic support the view of this book that logic should
not be regarded as a model of but as a tool in legal reasoning. In short,
we can say that the criticism makes us aware that there is a distinction
between legislation and law. By this I do not mean that there are also
other important sources of legal knowledge, like precedents; this goes, of
course, without saying; what I mean is that legal rules do not express 'the
law' but are "objects of discourse" (Leith, 1986, p. 548), which can be
applied, but which can also be put into question. According to Leith (1990,
p. 103) rules are "flexible objects which cannot be used in the deductive,
axiomatic manner ... ". Similarly, Gardner (1987, p. 3) describes reasoning
of lawyers as "rule-guided" instead of "rule-governed".
This has some important consequences for the way in which logic should
be used in formal models of legal reasoning or in knowledge-based systems.
At the very least, since the legal domain is full of disagreement, there should
be knowledge-based systems which are able to reason with an inconsistent
knowledge base in a nontrivial way - that is, systems which are able to gen-
erate alternative consistent arguments for or against a certain conclusion,
just as lawyers can do in practice. Furthermore, if a program should do more
with rules than only apply them, it should be able to reason about rules,
and therefore to express knowledge about them, for example, in the form of
standards to compare arguments or rules. In other words, there should be
systems which are able to contain metalevel knowledge, which points at the
need for modelling some kind of metalevel reasoning. Both points will be
discussed in much more detail in the next chapter. Finally, as writers such
as Toulmin, Perelman, Ashley and Rissland have rightly observed, in legal
problem solving induction and analogy play an important role. Therefore,
logical methods should be embedded in models of legal reasoning which
also allow for these other modes of reasoning. More on how this should be
done will be said in the next section.
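The first of these requirements, generating alternative consistent arguments from an inconsistent knowledge base, can be given a naive propositional sketch (my own illustration, far simpler than the formal theory of Chapters 4 to 9): collect the maximal consistent subsets of the premises, each of which represents one consistent position a party to the dispute could argue from.

```python
# Naive sketch of nontrivial reasoning with an inconsistent knowledge
# base: rather than deriving everything, enumerate the maximal consistent
# subsets of the premises. Propositions are strings; "-p" negates "p".
# (The legal propositions are invented, for illustration only.)

from itertools import combinations

def negation(p):
    return p[1:] if p.startswith("-") else "-" + p

def consistent(subset):
    return not any(negation(p) in subset for p in subset)

def maximal_consistent_subsets(premises):
    premises = list(premises)
    result = []
    for size in range(len(premises), 0, -1):
        for combo in combinations(premises, size):
            s = set(combo)
            if consistent(s) and not any(s < r for r in result):
                result.append(s)
    return result

kb = {"liable", "-liable", "contract_valid"}
for s in maximal_consistent_subsets(kb):
    print(sorted(s))
# Two alternative consistent positions: one containing "liable", one
# containing "-liable", each together with "contract_valid".
```

A claim supported by every maximal consistent subset is then uncontroversial, while a claim supported by only some of them is exactly the kind of conclusion that has to be argued for rather than simply derived.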
The criticism of this section is not only relevant for the question in
which way logic should be used, but also for the question which system of
logic is the best to use. There does not exist one unique, universally appli-
cable logical system: different logical systems formalize different aspects of
reasoning, and in various levels of detail; moreover, logicians often disagree
on the best way to analyze a certain aspect of reasoning, for which reason
often rival theories are proposed. Now, some of the criticism is justified
as objections to some particular existing logical systems. For example,
Toulmin (1958, p. 142) rightly blames the logicians of his days for having
neglected the defeasible nature of legal reasoning, and in the previous
chapter criticism of some AI researchers was discussed that standard logic
cannot analyze the way people obtain useful information from inconsistent
knowledge. However, objections to particular logical systems should not
be confused with a rejection of logic as such; a logical analysis of both
defeasible reasoning and drawing nontrivial conclusions from inconsistent
knowledge is very well possible, as has already been demonstrated by others
and as will become further apparent in the next chapters.

2.3. Noninferential Reasoning with Logical Tools


This section is devoted to a final criticism of the use of logic. Some
critics (e.g. Perelman & Olbrechts-Tyteca, 1969, pp. 1-4; Rissland, 1988, p.
46) hold that, although as a tool in reasoning logic does not give rise
to mistakes, it cannot be of any use in nondeductive reasoning, which
forms an important part of reasoning in law (and many other domains).
The purpose of this section is to show that the role of logic is not so
unimportant as its critics claim; this will be illustrated by an analysis of
one type of nondeductive reasoning, analogical reasoning. Another purpose
is to show that analogical reasoning is not just another mode of reasoning
than deduction, but that it is an instance of an essentially different kind
of activity than justifying a decision, for which reason it should be called a
noninferential rather than a nondeductive mode of reasoning.
Analogical reasoning is often regarded as a main element of reasoning
with cases: cf. Ashley & Rissland (1987, p. 67):

To justify an assertion that a client should win in a particular fact situation,
attorneys draw analogies to prior cases where similarly situated parties won.

It is supposed that analogical reasoning is nondeductive in character, for
instance, by Rissland & Ashley (1989, p. 214) who, contrasting in a
mathematical context reasoning with cases with deductive reasoning, say:
... (in mathematics) one does not justify a conclusion by citing cases but rather
through the methods of logical inference.
According to these quotations, the difference with justification by deductive
reasoning would be that justifying a decision by analogy does not require
that the new case exactly matches all features of the precedent, but only
that the cases are sufficiently similar. These remarks contain two elements:
the first is that analogy is held to be a kind of justification, and the second
is that it is nondeductive.
What this section purports to show is that the justifying force of an
analogy is entirely a matter of content, for which reason analogical reasoning
should not be regarded as a way of justifying a conclusion, but as a way
of suggesting new premises. Furthermore, it will be shown that as such,
analogical reasoning uses logic in a particular way. In the philosophical
and AI-literature both various types of analogy and various uses which can
be made of them are discussed. I shall concentrate on analogies which are
meant to justify a normative decision in a case. Such a supposed justification
consists of stating a similarity between the current case and a past case
on which it is supposed a certain normative conclusion may be based
(Sacksteder, 1974, p. 235). In stating such an analogy two main elements
can be distinguished: first it must be decided which aspects of the cases
should be compared, and then it has to be decided under which conditions
the cases are similar.
An interesting example, giving much insight into the nature of analogical
reasoning, can be found in Dutch civil law (Scholten, 1974, p. 60ff). This
example consists of comparing, instead of two cases, a case and an antecedent
of a statutory rule. The Dutch civil code (BW) contains a provision saying
that selling an apartment does not terminate an existing lease contract
(1612 BW). The case to be compared was donating a house instead of
selling it. Now, the way in which in Dutch law the analogy was justified was
by first observing that both selling and donating are instances of the more
general legal concept 'transferring property' and by then arguing that 1612
BW is based on the principle that transferring property does not terminate
an existing lease contract; since this rule was indeed accepted as being
the principle behind 1612 BW, both 1612 BW and the decision to let the
act of donating a house not terminate an existing lease contract could be
derived logically from the rule. This example shows that what is important
in an analogy is that the two cases which are matched are both instances
of a more general rule or principle from which the desired conclusion in
both cases can be derived (cf. also Sacksteder, 1974, p. 236; Bing, 1992,
pp. 105-6). If such a rule is already present in the knowledge base then, of
course, the system can apply this rule, which is simply logical inference; the
interesting cases are those in which the system has to construct such a rule
in a nondeductive but formally defined way from certain knowledge-base
items.
What are in this example the main elements of making an analogy?
As for the relevant aspects, clearly the legal status of the acts involved
is considered as relevant while the financial consequences of the acts are
regarded as irrelevant. Furthermore, the cases are regarded as similar if of
both acts the legal status is an instance of the concept 'transfer of property'
and if, in addition, the other conditions of 1612 BW are satisfied. Thus the
two main elements of the analogy are reflected in the acceptance of the rule
which is constructed.
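The construction in the 1612 BW example can be sketched as a small program (my own illustration; the concept hierarchy and predicate names are invented): two cases are analogized by finding a shared, more general legal concept, from which a candidate rule is proposed that covers both. Whether that rule should be accepted remains, as the discussion below stresses, a normative decision that the program cannot make.

```python
# Toy sketch of analogical rule construction: generalize two specific
# acts to a shared legal concept and propose a rule at that level.
# (Concept hierarchy and names invented, modelled on the 1612 BW example.)

GENERALIZES = {            # specific legal concept -> more general concept
    "selling": "transferring_property",
    "donating": "transferring_property",
}

def construct_rule(precedent_act, new_act, conclusion):
    """Suggest a general rule covering both acts, if a shared concept exists."""
    g1 = GENERALIZES.get(precedent_act)
    g2 = GENERALIZES.get(new_act)
    if g1 is not None and g1 == g2:
        return (g1, conclusion)    # e.g. "if transferring_property then ..."
    return None

rule = construct_rule("selling", "donating",
                      "does not terminate an existing lease contract")
print(rule)
# ('transferring_property', 'does not terminate an existing lease contract')
```

The program only suggests the generalized rule; accepting it, and hence accepting the analogy, is a decision of content that lies outside the formal construction.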
Now to answer the questions lying behind the discussion of this example,
why is suggesting analogies not an inference mode? In this respect the well-
known distinction should be kept in mind between the context of discovery
and the context of justification of solving a problem. Logic is essentially
a matter of justifying a solution to a problem given a set of premises, in
whatever way the solution has been obtained and the premises have been
selected. In general, every inference notion is an aspect of the context of
justification: something can be called an inference mode only if it, in some
way and to some degree, justifies a certain conclusion as being implied by
the selected premises. This means that a mode of reasoning has (given the
premises) a justifying force only if the conclusion is somehow based on the
way it has been derived - that is, if it is based on the form of the mode
of reasoning.
Now, how about the justifying force of analogy as a form of reasoning?
The important point is that the main elements of an analogy, deciding
which aspects to compare and deciding when the cases are similar, are
not matters of form but of content: in the above example these decisions
are, as just said, reflected in the acceptance of the general rule behind the
analogy, and whether this rule should be accepted is a normative decision;
it is this decision which determines the justifying force of the analogy. That
this is entirely a matter of content is shown by the fact that if a match
between two cases is imperfect, it is always possible to instead construct
from exactly the same premises a rule for the opposite conclusion based
on the difference between the two cases, and if this is indeed done, for
example, by the opponent in a law suit, then a choice has to be made
between the rules. This shows that, even if the premises are accepted, the
justification of an analogically suggested conclusion is not determined by
the way it is obtained, but entirely by the decision to accept the rule which is
behind the analogy or, in more general terms, by the decision which aspects
are relevant and what is the similarity metric. If these considerations of
content are made explicit, it seems that the only style of justification which
is needed is drawing logical inferences. In conclusion, analogical reasoning
is a formal way of suggesting additional premises if in a particular case
the rules 'run out', without providing any conclusive reason to accept the
suggested premise. Analogical reasoning is not an inference mode but a
heuristic principle for trying to find additional information and as such
it is an aspect of the context of discovery of problem solving (see also
Nieuwenhuis, 1976).
In fact, a system like HYPO of Rissland and Ashley works according
to this analysis of analogy. It does not have an absolute notion of being
sufficiently similar, but only a relative notion of being more similar. Fur-
thermore, its task is not to justify a decision in a certain case, but to
suggest possible argument moves based on analogizing or distinguishing a
case, whatever the user of the system wishes, without giving any reason to
prefer one of the possible moves. HYPO models aspects of the context of
discovery rather than of the context of justification.
The above example has also shown that logic still has a role in analogical
reasoning. In general, heuristic principles for finding new knowledge also
have to operate on the semantics of logical expressions. Thus, a first role of
logic is positive in that it points at rules on which an analogy can be based:
only those rules are useful which logically imply both rules and/or case law
decisions which are compared. Moreover, logic has a critical role, in saying
that an argument based on analogizing or distinguishing a case does not
logically follow from the initial knowledge base. This role is not entirely
negative, since it has a practical importance, which becomes apparent if
analogical reasoning is contrasted with deductive reasoning. If an answer
follows deductively from the knowledge base then, although a user of the
system will, of course, not know that s/he wins the case, what the user
does know is that s/he will only lose if the information contained in the
knowledge base turns out to be false. Therefore the user only needs to
anticipate on attacks on the premises. However, if the argument is based
on analogy, s/he has to anticipate on more, since the opponent might also
come up with an argument based on the same premises, but constructed
by distinguishing the cases, or based on a conßicting precedent, Le. a
counterexample. Therefore in such cases s/he should also anticipate on
attacks of the rule or principle behind the analogy.
To summarize the conclusions of this section, rather than being a differ-
ent way of deriving conclusions, analogical reasoning is a formally defined
heuristic to suggest new premises with which a desired conclusion can be
obtained by logical means, but without itself providing any reason to accept
the premise. Thus analogical reasoning is an example both of the fact that
reasoning is more than drawing inferences, and of the fact that logic can
be a tool in noninferential reasoning.
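The relative notion of on-pointness just attributed to HYPO can be illustrated with a small programmed sketch. The following is a toy reconstruction, not HYPO itself: cases are reduced to sets of invented factor names, and one precedent counts as more on point than another if the factors it shares with the current fact situation strictly include those the other shares.

```python
# Toy reconstruction of HYPO-style relative similarity (not the actual
# system): a precedent is MORE on point than another, relative to the
# current fact situation, if its shared factors strictly include the
# other's shared factors. All factor names below are invented.

def shared_factors(precedent, current):
    """Factors a precedent has in common with the current situation."""
    return set(precedent) & set(current)

def more_on_point(a, b, current):
    """True iff a's shared factors are a strict superset of b's."""
    return shared_factors(a, current) > shared_factors(b, current)

current = {"disclosure_in_negotiations", "security_measures", "bribed_employee"}
p1 = {"disclosure_in_negotiations", "security_measures"}   # shares two factors
p2 = {"disclosure_in_negotiations", "outside_disclosure"}  # shares one factor

print(more_on_point(p1, p2, current))  # True
print(more_on_point(p2, p1, current))  # False
```

Note that, exactly as argued above, such a comparison only ranks candidate analogies; it provides no reason to accept an analogized conclusion.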

2.4. Rule-based and Case-based Reasoning

I end this chapter by relating the first three sections to one of the primary topics in AI-and-law research, the distinction between so-called 'rule-based' and 'case-based reasoning'. Research on case-based reasoning (CBR) (e.g. Ashley & Rissland, 1987; Rissland & Ashley, 1987; Rissland & Ashley, 1989; Ashley, 1990; Skalak & Rissland, 1992) arose out of discontent with earlier developments, which CBR-proponents have called 'rule-based'. In general, the criticism is that rule-based systems are based on models of legal reasoning which have too much confidence in the possibility of solving legal problems with general rules (Rissland, 1988, 1990). Although authors do not always make explicit what they mean by the terms 'rule' and 'RBR', mostly RBR is described in terms which indicate that the axiomatic view is meant. See e.g. Rissland (1990, p. 1969, fn. 56):
(...) a typical rule-based approach to handling difficulties with the rule set is to resolve conflicts by hand before encoding the rules in the program.
and Rissland (1990, p. 1967):
(...) the rule-based approach assumes that the set of rules has no inherent difficulties, like ambiguities, gaps and conflicts.
Sometimes it is even equated with the naive deductivist view, e.g. by
Leith (1986, p. 546):
'Rule based' philosophers of law posit that we can view the law as a collection of rules which seem, at least to me, to have 'an existence of their own'. It is not unfair to see these philosophers as agents of a formal technical view of the law, since they hold that justice is best arrived at by 'applying the rules' in a formal and technical manner.
According to CBR-proponents, assumptions like these are unrealistic: often legal disputes cannot be solved with the available rules, and therefore lawyers have to resort to reasoning with individual cases, whether precedents or hypothetical examples. CBR-research concentrates on this aspect of legal reasoning.
What is relevant for the present chapter is that the term 'rule-based' is often equated with 'logic-based', for which reason the negative connotations of the term 'RBR' carry over to the word 'logic'. However, by now it will be clear that using logic does not at all imply that the domain knowledge is seen as a consistent axiomatic system; it is perfectly possible to use logic as a tool in other kinds of legal reasoning. Therefore I propose to use the term 'logic-based' also for systems which use logic as a tool. If used in that way, the term also covers things like specifying the meaning of some knowledge representation language(s) (which implies defining when a knowledge base is inconsistent or when two arguments are contradictory), and determining what the consequences are of a particular revision of an inconsistent knowledge

base or of a solution of a conflict between two arguments; furthermore, in this way inductive and analogical reasoning can also be called 'logic-based' since, as we have seen, these modes of reasoning can be defined as operating on logical expressions. In conclusion, even aspects of reasoning which are often said to be 'case-based', like defeasibility of legal concepts, handling conflicts between rules and analogical reasoning, can (partly) be modelled with logic-based methods: logical analysis can not only be used to clarify aspects of RBR, but also to add to the understanding of CBR: logic can be used as a tool in CBR as well as in RBR.

2.5. Summary
The purpose of this chapter was to determine the proper role of logic in AI-and-law research. We have seen that using logic to formalize legal knowledge does not commit one to some extreme view on how to do justice, viz. as applying legal rules in a strict way, without looking at the social or moral consequences. Since logic has nothing to do with the status of the premises but only with the form of an argument, it is always possible to reject the premises if the conclusion is considered undesirable.

We have also seen, however, that realistic models of legal reasoning should allow for more than just running a sound theorem prover over a set of legal 'axioms' formulated in some logical language. The incompleteness, uncertainty and inconsistency of much legal knowledge require that logic be used in a different way than in this axiomatic view on reasoning: drawing logical inferences should not be seen as a model of, but as a tool in, reasoning. This view on the use of logic in AI and law has been illustrated by a discussion of a noninferential kind of reasoning, analogical reasoning.

Furthermore, some reasons were briefly indicated why legal reasoning requires logical tools which are different from the tools of standard logic, which were mainly developed for use in mathematical reasoning. The next chapter will be devoted to a detailed analysis of some of these nonstandard features of the legal domain: some of them call for ways to draw nontrivial conclusions from inconsistent information, and others require the possibility of drawing conclusions which are defeasible.
CHAPTER 3

THE NEED FOR NEW LOGICAL TOOLS

As already indicated in Chapter 2, the untenability of the naive deductivist view on legal reasoning limits the applicability of standard logical methods, for which reason new logical tools are needed. In this chapter I shall go into the causes of the limitations and the nature of the needed tools in more detail. A first cause is the rule-guided rather than rule-governed nature of legal reasoning. Because of the open, unpredictable nature of the world to which the law applies, and also because of the many competing interests or moral opinions involved in legal disputes, legal rules are often not followed but put into question: as a consequence, they are often subject to exceptions which are not explicitly stated in legislation, and this calls for ways of representing the provisional or 'defeasible' character of legal rules. Section 3.2 will be devoted to this issue. The second source of problems for standard logic also has to do with the fact that the legislator cannot foresee everything: for this reason legal rules often deliberately leave open which factual circumstances classify as instances of the concepts occurring in the rule's conditions and, consequently, questions of classification often leave room for conflicting opinions. The logical problems to which this gives rise will be discussed in Section 3.3.
These two challenges for a logical analysis of legal reasoning have their origins in the problematic nature of information from legal sources, viz. its uncertainty and incompleteness. In addition, legislation has some particular structural features which, when preserved in a formalization, also need nonstandard logical tools. One of them is the separation of rules and exceptions in statutes, and another one is the use of collision rules. Both features will be dealt with in Section 3.1. The formal discussions of this section are also relevant for the rest of this chapter since, although the discussed phenomena are legally of a different nature than those of Sections 3.2 and 3.3, their logical structure is generally the same. For this reason Section 3.1 will go deeper into the formal details than the other sections.

Following the three sections with examples of nonstandard aspects of legal reasoning, Section 3.4 discusses the general formal character of the new logical tools which are needed; furthermore, it gives a final overview of the reasons for studying them in AI-and-law research. Finally, Section 3.5
gives an overview of existing AI-and-Iaw projects modelling some of the

33
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
34 CHAPTER 3

nonstandard features of legal reasoning identified in this chapter.

3.1. The Separation of Rules and Exceptions in Legislation

The structure of legislation exhibits some notable features restricting the


application of standard logic. A first is that within a statute, exceptions to a rule are often not included in the rule's formulation, but are expressed somewhere else. Another feature is that even within legislation conflicts between norms cannot always be prevented. Sometimes such conflicts can be solved by general collision rules based on the hierarchical status of the conflicting norms, their time of enactment or their scope of application. However, even in these cases standard logic alone cannot model the reasoning process, as will be shown in this section. In fact, these collision rules also give rise to separated rules and exceptions, since the effect of one rule having precedence over another is that the prevailing rule forms an exception to the defeated one in the cases in which they collide. For this reason, the main theme of this section will be the separation of rules and exceptions; other aspects of the collision rules will be briefly discussed at the end of this section.
The purpose of this section is to illustrate the logical problems which the just mentioned features cause for the aim of keeping the structure of the formalization as close as possible to the original sources. The main problem is that if standard logical techniques are used, problems arise with maintaining the separation between rule and exception in the formalization (cf. also Nieuwenhuis, 1989, p. 62). Many hold a loss of this separation to be a defect of a formalization: in recent years the method of being faithful to the original text, which is sometimes called 'isomorphic formalization', has often been advocated as the best way of structuring legislative knowledge for the purpose of building legal knowledge-based systems (Karpf, 1989; Nieuwenhuis, 1989; Routen & Bench-Capon, 1991; Bench-Capon & Coenen, 1992); it has been argued that preserving the structure of the original text benefits, among other things, validation and maintenance. However, this aim faces some obstacles, some of which have to do with the fact that in legislation exceptions are often separated from the general rule.
In AI, the separate formalization of rules and exceptions is often advocated as the best way of representing exceptions also independently of the aim of structural resemblance to the source, again for reasons of validation and maintenance (cf. Touretzky, 1984, pp. 107-8; Loui, 1987, p. 106; Poole, 1985, p. 146). It is said to reduce the complexity of individual rules and to support a modular way of formalizing, i.e. representing a piece of information independently from the rest of the domain: for example, it makes it possible to add new exceptions without having to change old rules.

Because of this, the present investigations are not only relevant to AI and
law, but also to research in knowledge representation in general.
In AI-and-law research the desirability of structural resemblance between source and formalization is by no means undisputed: see, for example,
Sergot's objections, discussed in Bench-Capon & Coenen (1992). In this
respect it should be remarked that it is not my aim to argue in favour of
this methodology; my position is mainly that the value others attribute
to structural resemblance makes it worthwhile to investigate under which
conditions this kind of representation is possible. The main conclusion will
be that the discussion on this issue is not complete without mentioning
nonstandard logics, since if one wants to maintain the separation of rules
and exceptions, one has to accept that the reasoning behaviour of a system
becomes nonstandard, in particular nonmonotonic. Of course, I would not go into this issue if I did not see any point at all in aiming for structural resemblance, but in my opinion its desirability still has to prove itself in practice; in fact, my investigations will even provide some ammunition for opponents of this aim, as will be further explained in Chapter 10, Section 10.1. In this section I shall, following a few terminological matters, present some examples of separate rules and exceptions in legislation, after which I show why they cause problems for standard reasoning techniques if their separation is to be preserved. Then two nonstandard methods are presented semiformally, both of which provide solutions for these problems.

3.1.1. TERMINOLOGY

In the rest of this book I shall speak of structural resemblance as a relation between source units and KB units ('KB' stands for 'Knowledge Base'). By a source unit I mean the smallest identifiable unit of the source from which a norm can be extracted. What this smallest unit exactly is will partly depend on things like the purpose of the formalization and the required level of detail: I expect that in most cases it will be a section or a subsection of a code; in the examples in this section this has turned out to be a workable criterion. By a KB unit I mean a conditional or a biconditional (recall that the investigations in this book are on the logical level of knowledge representation). Structural resemblance is an aspect of the result of the formalization process, and should be carefully distinguished from modularity, which is an aspect of the process of formalizing itself: it can be described as formalizing a source unit without having to consider other source units. In other places, where often the term 'isomorphism' is used, several, not totally equivalent, definitions of 'isomorphism' can be found (Bench-Capon & Coenen, 1992; Karpf, 1989; Nieuwenhuis, 1989). However, I believe that this description captures the essence of all of them,

for which reason in the present context their differences do not matter.
The above implies that the following two situations are deviations from structural resemblance: the situation in which one source unit is formalized in more than one KB unit; and the situation in which one KB unit contains concepts from more than one source unit, unless a source unit itself refers to other source units (as in Example 3.1.3 below). Generally, only the second kind of situation is held to cause problems with respect to validation and maintenance. Therefore, the main problem discussed in this section and the rest of this book is how to avoid the second situation when in legislation general rules and exceptions are expressed separately. First some examples will be presented, all taken from Dutch law.

3.1.2. EXAMPLES
I now list some examples of expressions and techniques which are typical
for legislation. They will serve as illustrations throughout this book: among
other things, they will in Section 5.1 illustrate a classification of kinds of
exceptions.
Example 3.1.1 Section 2 of the Dutch rent act (in Dutch 'Huurprijzenwet', HPW for short) states that the act is not applicable to lease contracts which by their nature concern a short-termed usage. Since 2 HPW is not mentioned in the rest of the act, this section in fact causes all other sections of the HPW to have an implicit exception. Note that 2 HPW does not itself contradict another rule but only renders other rules inapplicable under certain conditions.
Example 3.1.2 According to section 6:2-(2) of the Dutch civil code (BW), a rule binding upon a relation between a creditor and a debtor by virtue of, among other things, the law, does not apply to the extent that, in the given circumstances, this would be unacceptable according to the principles of reasonableness and equity. The role of this norm is similar to that of 2 HPW in Example 3.1.1: it causes all norms concerning creditor-debtor relations to have an implicit exception.
Example 3.1.3 Section 4 HPW lays down certain necessary and sufficient conditions for the possibility of changing rents once a year. Section 30-(2) HPW explicitly makes an exception to this section by stating that "contrary to 4 HPW" rents included in a rent contract from before 1976 must remain unchanged until three years after the enactment of the HPW.
Example 3.1.4 Section 287 of the Dutch code of criminal law (Sr) attaches a maximum penalty of fifteen years of imprisonment to killing someone on purpose, whereas section 154-(4) Sr imposes a maximum penalty of twelve years upon killing someone in a life-and-death duel.

Example 3.1.5 Section 25 of the Dutch traffic road act (WVW) forbids any behaviour which can cause danger in traffic. This norm adds an implicit exception to all obligations and permissions in Dutch traffic law.
Example 3.1.6 Section 1624 BW declares that if a contract has features
of both lease of business accommodation and another contract, and a norm
concerning the other contract type conflicts with a norm concerning lease
of business accommodation, the latter prevails. This is a very complex
example of implicit exceptions: every rule which is not about lease of
business accommodation has an implicit exception for cases in which it
is in conflict with a norm concerning lease of business accommodation. To
complicate the example even further, section 1637a BW gives in a similar
way precedence to norms concerning labour contracts.
Example 3.1.7 Section 3:32-(1) BW declares every person to have the capacity to perform juridical acts, "to the extent that the law does not provide otherwise". One of the places in which the law does so is 1:234-(1) BW, but with the same qualification: minors do not have the capacity to perform juridical acts, to the extent that the law does not provide otherwise. Subsection (2) of this norm contains such an exception in case, under certain additional conditions, the minor acts with the consent of his legal representative.

3.1.3. FORMALIZATIONS IN STANDARD LOGIC


Now I shall consider various possible 'standard' formalizations of these examples in first-order predicate logic. As for notation, in this book I shall abbreviate the standard notation for universally quantified conditional formulas, such as

∀x((Px ∧ Qx) → Rx)

as

∀x. Px ∧ Qx → Rx

Furthermore, inspired by Kowalski (1995) I shall often use a semi-formal notation for predicate and function symbols. For instance, the formula

x and y duel on life-and-death

is my version of the more usual notation

D(x, y)

(in logic) or

Duel_on_life_and_death(x, y)

(in AI). Moreover, I shall write constants and variables in italics and predicate and function symbols in verbatim (unless they are just one letter, in which case they are in italics).

Specific Exception Clauses


If only standard reasoning techniques are used, the simplest way of formalizing these examples is by using explicit specific exception clauses: this method consists of adding the negation of the condition of the exceptional rule to the antecedent of the general rule. In formalizing Example 3.1.1 this is captured by the following formalization scheme for an unspecified section N of the HPW (but not section 2).

A First Formalization
Example 3.1.1
N HPW: ∀x. conditions ∧ ¬ x is a short-termed contract → conclusion

Note that thus 2 HPW is not formalized as a separate KB unit. A similar formalization can be given of Example 3.1.2. The other examples become:¹

Example 3.1.3
4 HPW: ∀x. conditions ∧ ¬ contract date of x before 1976 → rent of x can be changed
30-(2) HPW: ∀x. conditions ∧ contract date of x before 1976 → ¬ rent of x can be changed

Example 3.1.4
287 Sr: ∀x, y. x kills y ∧ x acts with intent ∧ ¬ x and y duel on life-and-death → maximum penalty for x is 15 years
154-(4) Sr: ∀x, y. x kills y ∧ x and y duel on life-and-death → maximum penalty for x is 12 years

Example 3.1.5
If, for convenience, we express deontic concepts with first-order predicates, then 25 WVW can be represented in predicate logic as
25 WVW: ∀x. x causes danger → ¬ x is permitted
where x is assumed to range over traffic acts. Furthermore, every obligation and permission in Dutch traffic law should have an extra condition
¬ x causes danger
Example 3.1.6
1624 BW: this would require each rule that could come into conflict with a norm about lease of business accommodation to have the condition

∀x. ¬ x is a lease of business accommodation

¹ With respect to Example 3.1.4, Brouwer (1994, p. 15) remarks that it is philosophically questionable to regard intent as a property of an action. Although his point is interesting, I ignore such subtleties for explanatory purposes.


Finally, the formalization of Example 3.1.7 gives a nice illustration of the problems which have to be solved if all individual exceptions are explicitly contained in the general rule. What is at least needed is that the antecedent of the general rule contains the negated antecedents of all rules which form an exception. To this end, first all exceptions have to be discovered and then rules like the following ones should be formulated (all exceptional circumstances other than being a minor are called exception1, ..., exceptionk).

1    ∀x. x is a person ∧ ¬ x is a minor ∧ ¬ exception1 ∧ ... ∧ ¬ exceptionk → x has legal capacity
2    ∀x. x is a person ∧ x is a minor → ¬ x has legal capacity
3    ∀x. x is a person ∧ exception1 → ¬ x has legal capacity
...
k+2  ∀x. x is a person ∧ exceptionk → ¬ x has legal capacity

However, this is still not sufficient, since, in addition, all possible exceptions to the exceptions should be taken into account. Therefore, (2) should be changed into something like

2'   ∀x. x is a person ∧ x is a minor ∧ ¬ x acts with consent of his legal representative ∧ ¬ exceptionk+1 ∧ ... ∧ ¬ exceptionn → ¬ x has legal capacity

and in addition to (1) the following rule should be included:

1'   ∀x. x is a person ∧ x is a minor ∧ (x acts with consent of his legal representative ∨ exceptionk+1 ∨ ... ∨ exceptionn) → x has legal capacity

Note that this only deals with exceptions to 1:234-(1) BW; if the exceptions to the other exceptions are also included, the formalization becomes still more complex.
more complex.
Since in these formalizations several non-contiguous source units are mixed in one KB unit, it is obvious that structural resemblance is lost completely. Other disadvantages of this way of formalizing are that each time a new exception is added to the knowledge base, one or more old rules have to be changed, and that if a rule has many exceptions, its formulation can become very complex, particularly if there are also exceptions to the exceptions. Furthermore, in formalizing a rule the entire domain has to be considered.
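The maintenance problem just described can be made concrete in a small programmed sketch of Example 3.1.4 (the case representation and atom names are mine, for illustration only): the negated condition of 154-(4) Sr is wired into the encoding of 287 Sr, so every newly discovered exception would force the general rule itself to be rewritten.

```python
# Example 3.1.4 encoded with a specific exception clause. A case is
# represented as a set of atoms that hold; atom names are invented.

def max_penalty(case):
    """Return the maximum penalty in years, or None if no rule fires."""
    # 287 Sr: the general rule, with the exception's condition negated.
    # Any NEW exception would also have to be negated here, i.e. the
    # general rule must be edited each time an exception is found.
    if "kills" in case and "intent" in case and "duel" not in case:
        return 15
    # 154-(4) Sr: the exceptional rule.
    if "kills" in case and "duel" in case:
        return 12
    return None

print(max_penalty({"kills", "intent"}))          # 15: the general rule fires
print(max_penalty({"kills", "intent", "duel"}))  # 12: the exception prevails
```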

Some, e.g. Nieuwenhuis (1989, p. 62), defend deviations from the structure of the source by arguing that thus the implicit logical structure of the statute is made explicit. However, in the next section a nonstandard logical structure of legislation will be assumed, a structure which allows the separation of general rules and exceptions, and which is therefore already explicit in the statute itself. First, however, another attempt within standard logic will be discussed.

General Exception Clauses


At first sight there seems to be a way to retain the standard view on the logical structure of legislation by formalizing in a careful way, viz. with general exception clauses. This method seems particularly appropriate if the law uses phrases like 'except when otherwise provided', 'unless the contrary can be shown', etcetera. The method can also be used when, as in Example 3.1.1, a norm states the inapplicability of other norms. Then an applicability clause can be used, as in the following formalization of Example 3.1.1. Below N is a constant standing for any section of the HPW other than section 2, and n is assumed to be a variable ranging only over sections of the HPW.

Formalization 2 of Example 3.1.1
2 HPW: ∀x, n. x is a short-termed contract ∧ n is a HPW section ∧ n ≠ 2 → ¬ n is applicable to x
N HPW: ∀x. conditions ∧ N is applicable to x → conclusion
Thus, since the applicability clause does not refer to any particular source unit, no source units are mixed in the formalization. An additional advantage is that now 2 HPW is formalized as a separate KB unit, and that if a new exception is added, the old rule need not be changed any more.
However, this is not the whole story: if the knowledge base only contains these two rules, then N HPW will never apply to a situation, since there is no way to conclude N is applicable to x. Therefore an additional formula is needed, of which the antecedent contains the conjunction of the negations of all ways to render a rule inapplicable.

∀x, n. ¬ x is a short-termed contract → n is applicable to x

This method is known as the completion of the predicate is applicable to. Therefore I shall call such extra formulas completion formulas. More formal details of this method will be discussed in Chapter 4, Section 4.1.3.
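As an illustrative sketch (with invented names, and no claim to be more than a toy), the completion formula amounts to closing off the predicate is applicable to: a section is applicable precisely when none of the explicitly listed grounds of inapplicability holds, so every newly found ground forces this single definition to be changed.

```python
# Completion of 'is applicable to', sketched in code: the listed
# grounds of inapplicability are treated as the ONLY grounds.

def is_applicable(section, contract):
    # Completion formula: at this stage, 2 HPW (short-termed contracts)
    # is the only ground of inapplicability. Finding a further ground
    # (such as 7-(2) HPW below) means editing this one definition,
    # thereby mixing several source units in a single KB unit.
    return not contract.get("short_termed", False)

def section_n_concludes(contract, conditions_met):
    # N HPW: conditions AND 'N is applicable to x' -> conclusion
    return conditions_met and is_applicable("N", contract)

print(section_n_concludes({"short_termed": False}, True))  # True
print(section_n_concludes({"short_termed": True}, True))   # False
```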
So far no source units have been mixed in KB units, but things become more complex if there is more than one way to render a rule inapplicable. For example, section 7-(2) HPW declares inapplicable only chapter III of the act in case of lease of a dependent apartment:

7-(2) HPW: ∀x, n. x concerns a dependent apartment ∧ n is a section of Ch. III HPW → ¬ n is applicable to x

which results in the following modification of the completion formula.

∀x, n. ¬ x is a short-termed contract ∧ (x concerns a dependent apartment → ¬ n is a section of Ch. III HPW) → n is applicable to x

Now there is a problem with respect to structural resemblance, since in this formula concepts of several source units, viz. 2 and 7-(2) HPW, are mixed. An additional problem is that if a norm like 7-(2) HPW is found at a later time, then the old completion formula has to be changed, which means that another advantage over the first approach, a modular way of formalizing, is also given up again. Finally, if still more exceptions are added, or if exceptions to exceptions are found, then the completion formula can become exceedingly complex, which takes away the final advantage of separating rules and exceptions.
Similar problems occur in the formalization of Example 3.1.7, which at first sight would seem to have a natural formalization with general exception clauses.

Formalization 2 of Example 3.1.7

3:32-(1) BW: ∀x. x is a person ∧ ¬ exc(3:32, x) → x has legal capacity
1:234-(1) BW: ∀x. x is a minor ∧ ¬ exc(1:234, x) → exc(3:32, x) ∧ ¬ x has legal capacity

(exc(n, x) is short for x is exceptional with respect to n). The problem is that in order to make ¬ exc(3:32, x) and ¬ exc(1:234, x) true, for both clauses a rule has to be formalized which has as its antecedent the conjunction of the negations of all the ways in which legislative provisions declare, respectively, persons incompetent and minors competent. Again, in these completion formulas elements of several source units are mixed in one KB unit.
In conclusion, both the first and the second attempt to retain standard reasoning techniques give rise to formalizations of legislation which considerably deviate from the structure of the source, which decrease the modularity of the job of the knowledge engineer, and which give rise to complex formalizations. Although at first sight the method of using applicability or general exception clauses seemed to solve these problems, this has turned out to be an illusion, particularly because of the need for a completion formula. Therefore, a way is needed to avoid this formula. Such a way will be discussed next.

3.1.4. NONSTANDARD METHODS

General Exception Clauses in Nonstandard Formalisms


I shall now further develop the idea of using general exception or applicability clauses. It is important to realize that in principle every norm can have separate exceptions, now or in the future: therefore, every KB unit must have a general exception clause, including exceptions themselves, such as 2 HPW. Above this was ignored in order not to complicate the discussion too much, but now general exception (or applicability) clauses will be added to every rule.

As was concluded, a way has to be found to avoid the completion formulas. A natural way of doing this is to assume that the applicability or 'no exception' condition is satisfied unless there is evidence to the contrary, for example, by using a nonprovability operator, such as the negation-as-failure method of logic programming (which will be denoted by '∼'). To this end in Example 3.1.1 an inapplicability clause is needed, with which the example can be formalized as follows. Note that, since I have not yet exactly specified the logical interpretation of ∼ (this will be done in Chapter 4), the following 'formalization' is of a semiformal nature.
Formalization 3 of Example 3.1.1
2 HPW: ∀x, n. x is a short-termed contract ∧ n is a HPW section ∧ n ≠ 2 ∧ ∼ 2 is inapplicable to x → n is inapplicable to x
N HPW: ∀x. conditions ∧ ∼ N is inapplicable to x → conclusion
Thus no completion formulas are needed, as becomes apparent when we consider a backward reasoning mechanism which tries to derive the consequent of N HPW. Assume that, apart from the inapplicability clauses, all conjuncts of the antecedents of the rules are entered by the user. The system will then (for a given contract c) try to derive N is inapplicable to c. In doing so, it calls 2 HPW, substituting N for n, and starts a derivation for 2 is inapplicable to c. It again calls 2 HPW, this time substituting 2 for n; since the conjunct n ≠ 2 cannot be satisfied and since there are no other rules for the predicate is inapplicable to, the system cannot complete this derivation and concludes ∼ 2 is inapplicable to c. This makes 2 HPW 'fire' for n = N, for which reason the antecedent ∼ N is inapplicable to c of rule N HPW cannot be satisfied and its consequent cannot be derived.
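The backward derivation just described can be sketched as a miniature interpreter (a propositional simplification of Formalization 3 for a fixed contract c, not a full logic-programming system): rules are head-body pairs, and a body literal marked 'naf' succeeds exactly when its goal cannot be derived.

```python
# Minimal backward chaining with negation as failure. A ('naf', g)
# literal succeeds iff goal g is NOT derivable from the rules and facts.

RULES = [
    # 2 HPW (instantiated for n = N): a short-termed contract, with no
    # derivable exception to 2 HPW itself, makes N inapplicable.
    ("inapplicable(N)", ["short_termed(c)", ("naf", "inapplicable(2)")]),
    # N HPW: conditions, and no derivable inapplicability of N,
    # yield the conclusion.
    ("conclusion", ["conditions(c)", ("naf", "inapplicable(N)")]),
]

def holds(literal, facts):
    if isinstance(literal, tuple):           # ('naf', goal)
        return not derivable(literal[1], facts)
    return derivable(literal, facts)

def derivable(goal, facts):
    if goal in facts:
        return True
    return any(head == goal and all(holds(l, facts) for l in body)
               for head, body in RULES)

# Adding a fact withdraws a previously derivable conclusion:
print(derivable("conclusion", {"conditions(c)"}))                     # True
print(derivable("conclusion", {"conditions(c)", "short_termed(c)"}))  # False
```

The two calls at the end exhibit exactly the behaviour walked through above: with only the conditions given, the conclusion is derivable; adding the fact that c is a short-termed contract blocks N HPW.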
We have now reached a first point in our analysis at which standard logical methods turn out to be insufficient. Although at first sight this formalization seems but a minor modification of the second one, in fact a crucial decision has been made, viz. the abandonment of the standard view on logical consequence. Standard logical, deductive, consequence relations are monotonic, which means that new information can never invalidate conclusions drawn from the information obtained so far. Formally this can be stated as follows. A consequence notion ⊨ is monotonic if and only if for all formulas φ1, ..., φn, φn+1, ψ the following holds:

If φ1, ..., φn ⊨ ψ, then also φ1, ..., φn, φn+1 ⊨ ψ
However, the just sketched way of reasoning does not have this property, since it offers a way to draw conclusions on the basis of the nonderivability of other conclusions, and the important point is that such conclusions can become invalid if additional information leads to the derivability of these other conclusions. Consider again the example: if at first only conditions is added to the rules, then conclusion can be derived, but if after that the fact c is a short-termed contract is also added, then in the way explained above N is inapplicable to c becomes derivable, which blocks the application of N HPW. This kind of reasoning, in which conclusions can be invalidated by additional information, is called nonmonotonic, and it is obvious that a logical analysis of this kind of reasoning needs methods which go beyond standard logical tools. In fact, I have already briefly mentioned one possible way to account for negation as failure, a way which tries to stay as close to standard logic as possible: this is the method of adding completion formulas to the premises and applying standard logic to the resulting set. As noted, the formal details will be discussed in Chapter 4.
It might be argued that there is another method of preserving the
separation of rules and exceptions, a method in which standard logic need
not be given up. This will be discussed next.

Collision Rules
Making the assumption that there are no exceptions if the opposite cannot
be proven is not the only formal method of separating rules from exceptions.
Legal systems have ways of solving conflicts between norms by way of
collision rules and, as will be shown now, these principles provide another
way to represent rules and exceptions in a way resembling the source. Two
such principles, based, respectively, on notions of specificity and priority,
will be discussed. This method is particularly appropriate for systems which
can generate alternative arguments from an inconsistent knowledge base,
since the collision rules can be used in comparing the arguments to see
which one is the best. At first sight it seems that a formal description of
this method does not need a nonmonotonic logic, since it seems that these
collision rules can simply be added to a standard logical system. However,
things are not so simple as they seem, as will be explained after the details
of the method have been discussed by way of some examples.
44 CHAPTER 3

Specificity
In Example 3.1.4 the interpretation which structurally is the most faithful
to the original text is to regard 154-(4) Sr as a rule which covers a more
specific case than 287 Sr, which makes the legal principle Lex Specialis
Derogat Legi Generali applicable. This rule is generally held to be a collision
rule of any legal system. What for our purposes is particularly interesting
about this rule is that in AI its counterpart the 'specificity' principle is
often advocated as the best way of separating rules from exceptions in a
knowledge base (e.g. by Poole, 1991). The general idea is, instead of intro-
ducing a new operator into the language of a logical system, to augment
an existing logical system with a metaprinciple to restore consistency if a
contradiction has been derived. In case of specificity, this boils down to
defining an algorithm which determines by syntactic means which of the
contradictory conclusions is based on the most specific premises.
To illustrate this with Example 3.1.4, assume that it is formalized in
standard predicate logic augmented with the Lex Specialis principle. Then
at first sight a problem is that the element x acts with intent is not part
of the formulation of 154-(4) Sr, which prevents the antecedent of 154-(4)
Sr from being a specific case of the antecedent of 287 Sr. Nevertheless,
according to common sense the agreement to duel on life-and-death implies
the intention to kill, and if this piece of common-sense knowledge is explic-
itly added to the formalization, specificity becomes syntactically apparent
(the technical details will be discussed in Chapter 6). Now a formalization
of this example illustrating the specificity method is the following one (in
which I assume that enough axioms are added to make the consequents of
the two norms contradict each other).
Formalization 2 of Example 3.1.4
287 Sr: ∀x, y. x kills y ∧ x acts with intent →
maximum penalty for x is 15 years
154-(4) Sr: ∀x, y. x kills y ∧ x and y duel on life-and-death →
maximum penalty for x is 12 years
CS: ∀x, y. x and y duel on life-and-death →
x acts with intent

Above Example 3.1.3 was formalized with an exception clause, but it can
also be regarded as a Lex Specialis case. In this interpretation it is formal-
ized as follows.
Formalization 2 of Example 3.1.3
4 HPW: ∀x. conditions ↔ rent of x can be changed
30-(2) HPW: ∀x. conditions ∧ contract date of x before 1976 →
¬ rent of x can be changed

Again, the idea is that the reasoning mechanism will recognize the second
rule as a lex specialis of the first, since the antecedent of 30-(2) HPW
is a specific case of the one of (the if-part of) 4 HPW. Note that the
incorporation of the conditions of 4 HPW in the formalization of 30-(2)
HPW does not mix several source units, since 30-(2) HPW itself refers to
those conditions by means of "contrary to 4 HPW".
With respect to knowledge representation the advantages of the speci-
ficity method will be obvious: apart from preserving the structure of the
text, this method also makes it possible to add new exceptions without
having to change the old rule and, moreover, the formalized rules maintain
a simple and elegant structure. Formally, at least two things are required to
make this work. First, the reasoning mechanism should be able to identify
consistent subsets of the premises to serve as the basis for possible con-
clusions; and secondly, it should have logical criteria available by which it
can determine which conclusion is based on the more specific information.
How these things are possible will be the subject of Chapter 6. It will be
obvious, however, that this will involve more than mechanically deriving
deductive consequences from a set of logical formulas, as in the axiomatic
view on reasoning.
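As a minimal sketch of how such a syntactic specificity test might work, assume antecedents are flattened to sets of atomic conditions; the condition names below are my shorthand for the formalization above, and the closure step models the common-sense axiom CS.

```python
# A minimal sketch of a syntactic specificity test for rules whose
# antecedents are sets of atomic conditions. The CS axiom is modelled
# by closing an antecedent under common-sense implications before the
# subset comparison. Names are illustrative, not the book's system.

CLOSURE = {  # common-sense implications, e.g. CS above
    "duel_on_life_and_death": {"acts_with_intent"},
}

def close(conds):
    """Close a set of conditions under the common-sense implications."""
    closed = set(conds)
    changed = True
    while changed:
        changed = False
        for c in list(closed):
            extra = CLOSURE.get(c, set()) - closed
            if extra:
                closed |= extra
                changed = True
    return closed

def more_specific(r1, r2):
    """r1 is more specific than r2 if r2's closed antecedent is a
    proper subset of r1's closed antecedent."""
    return close(r2) < close(r1)

sr_287 = {"kills", "acts_with_intent"}
sr_154_4 = {"kills", "duel_on_life_and_death"}

# After closure, 154-(4) Sr covers a strictly more specific case, so
# Lex Specialis prefers its conclusion (12 years maximum).
print(more_specific(sr_154_4, sr_287))  # True
print(more_specific(sr_287, sr_154_4))  # False
```

Without the closure step the two antecedents would be incomparable, which is exactly the problem the CS axiom solves in the text.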

Priorities
Sometimes another metalevel method is the most natural way of dealing
with separate exceptions, viz. a method based on hierarchical relations
between norms. Consider Example 3.1.5: since almost all penal traffic pro-
visions are contained in a regulation which is hierarchically lower than
the Dutch traffic act, an ordering based on the Lex Superior Derogat Legi
Inferiori principle will give precedence to the prohibition of 25 WVW if it
conflicts with a lower permission. The Lex Superior principle is based on
the general hierarchy of a legal system: it says that if legislative authority is
divided along the lines of the hierarchical structure of a legal system, lower
authorities have to respect what higher authorities have issued. Another ex-
ample is Example 3.1.6, which is a complex example of implicitly expressed
exceptions. Its most natural interpretation is as follows: if conflicting rules
apply to a case with a contract of mixed nature, 1624 BW gives rules
concerning lease of business accommodation priority over rules concerning
other types of contracts. Note that in this example the priorities are not
determined by the general hierarchical structure of the legal system, but
by a specific collision rule which is itself part of a statute. In Chapter 8 I
discuss the question of how specific collision rules like 1624 BW can be
combined with general principles like Lex Superior and Lex Specialis.
Like Lex Specialis, the Lex Superior principle also has its counterpart
in AI: sometimes (e.g. by Brewka, 1989) it is suggested that the separation

of rules and exceptions in knowledge bases should be obtained by using
priorities. In order to reason with hierarchical relations between norms, a
system must again be able to identify consistent parts of the premises on
which conclusions can be based. Furthermore, it must provide for ways of
expressing priorities between rules and of taking these priorities into account
in the reasoning process. Again, these facilities are not provided by standard
logical systems; Chapter 7 of this book will be devoted to this subject.
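The Lex Superior idea can be sketched as an ordering on the sources of rules. This is only an illustrative assumption about how such an ordering might be encoded; the source ranks and rule names are mine, not a formalization from the book.

```python
# A minimal sketch of Lex Superior: when two rules with contradictory
# conclusions both apply, the rule from the hierarchically higher
# source prevails. The source ranks are illustrative assumptions.

RANK = {"statute": 2, "regulation": 1}  # higher number = higher authority

def resolve(rule1, rule2):
    """Return the conclusion of the hierarchically superior rule."""
    (src1, concl1), (src2, concl2) = rule1, rule2
    return concl1 if RANK[src1] >= RANK[src2] else concl2

# 25 WVW (the traffic act, a statute) vs. a conflicting permission in
# a hierarchically lower traffic regulation: the prohibition prevails.
wvw_25 = ("statute", "prohibited")
rvv_perm = ("regulation", "permitted")
print(resolve(wvw_25, rvv_perm))  # prohibited
```

Specific statutory collision rules such as 1624 BW would refine this ordering for particular classes of norms, as discussed above.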

Nonmonotonicity of Using Collision Rules
Obviously, reasoning with these collision rules goes beyond the axiomatic
view on reasoning as running a sound theorem prover over a set of logical
formulas. Nevertheless, as just said, it might at first sight be held that it is
possible to retain standard logic in the sense that these rules can simply be
added to a standard logical system. As we will see in coming chapters, this
method has indeed been investigated, both in AI and in philosophy. How-
ever, it must be realized that in any case this method affects the reasoning
process in a way similar to the use of a nonprovability operator, since again
the derivation of a formula can depend on the nonprovability of another one:
more specifically, a conclusion derived from a general or lower rule depends
on the failure to derive the opposite conclusion from a more specific or
higher rule. For instance, ifin Example 3.1.4 section 287 Sr has been applied
because only x kills y 1\ x acts wi th intent is known as a fact, then the
mere addition of x and y duel on life-and-death without retracting any
of the other premises invalidates the conclusion the maximum penalty for
x is 15 years. In other words, if a consequence notion is defined over a
standard logical system cxtended with collision rules, it will be nonmono-
tonic. In conclusion, also the second method of preserving the separation
of rules and exceptions turns out to be a form of nonmonotonic reasoning.
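This nonmonotonic behaviour can be simulated in a small sketch of a consequence notion with a built-in specificity collision rule. The rule and fact names are my shorthand for 287 Sr and 154-(4) Sr, with the CS axiom already folded into the second antecedent; this is an illustration, not the book's formal definition.

```python
# A minimal sketch showing that a consequence notion with a built-in
# specificity collision rule is nonmonotonic: the conclusion of the
# general rule is retracted once the facts trigger the more specific
# rule. Rule and fact names are illustrative shorthand.

# Each rule: (antecedent facts, conclusion, the conclusion it contradicts).
RULES = [
    ({"kills", "intent"}, "max_15_years", "max_12_years"),          # 287 Sr
    ({"kills", "intent", "duel"}, "max_12_years", "max_15_years"),  # 154-(4) Sr + CS
]

def conclusions(facts):
    """Apply every rule whose antecedent holds; a rule is defeated if
    a strictly more specific applicable rule contradicts it."""
    applicable = [r for r in RULES if r[0] <= facts]
    out = set()
    for ante, concl, neg in applicable:
        defeated = any(a > ante and c == neg for a, c, _ in applicable)
        if not defeated:
            out.add(concl)
    return out

print(conclusions({"kills", "intent"}))          # {'max_15_years'}
# Adding a premise, without retracting anything, changes the outcome:
print(conclusions({"kills", "intent", "duel"}))  # {'max_12_years'}
```

The second call adds a premise and thereby withdraws the first conclusion, which is precisely the failure of monotonicity defined earlier in this chapter.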

Legal Aspects of Collision Rules
The principles Lex Superior and Lex Specialis are generally regarded as
collision rules belonging to any legal system, together with a principle
based on the time of enactment of a norm: Lex Posterior Derogat Legi
Priori, which says that the rule enacted at a later point in time has priority
over the earlier one. Some legal systems have codified these principles but
mostly they are unwritten. As Example 3.1.6 shows, the general hierar-
chy of a legal system can be refined by special collision rules in statutes,
stating hierarchical relations between special classes of norms. In addition
it is possible to use collision rules without an official legal status: nothing
prevents a knowledge engineer of a legal knowledge-based system from
designing his or her own hierarchy on parts of the domain knowledge. The discussion
of Gardner's program in Section 3.5 will provide an example of such a non-
official collision rule. An interesting question is how in legal reasoning all
these collision rules are combined. Both the legal and the logical aspects of
this issue will be investigated in Chapter 8.

3.2. Defeasibility of Legal Rules


The previous section has revealed that legislation itself already exhibits
an internal structure which, when preserved in a formalization, goes be-
yond standard logic. In this section I go into nonstandard aspects of legal
reasoning which, rather than by internal features of legislation, are caused
by problems which occur when rules are applied to the world. One of the
important observations of the previous chapter was that legal rules are
objects of discourse, which can not only be applied but also be put into
question. More specifically, legal rules can in some circumstances be set
aside by considerations based on non-rule standards, such as the purpose
of a rule, or legal principles. Although in Chapter 2 it was said that this
does not preclude a logical analysis of legal reasoning, in this section it will
be shown that it does have some bearing on the question of which logical
tools should be used.

EXAMPLES

A standard example from legal philosophy is the 'vehicle in the park'
example, occurring in a debate between Hart (1958) and Fuller (1958),
and already briefly discussed in the previous chapter. Assume that a park
regulation states that vehicles are not allowed to enter the park. Assume
further that some person takes a military jeep into the park to serve as a
war memorial. Since there can be little doubt that a jeep is a vehicle, there
is no problem about the conditions of the rule being satisfied. However,
what can be doubted is whether its normative consequence, that the jeep
may not be taken into the park, should be attached to this specific instance
of the condition: it might be argued that the purpose of the rule, which
is making the park a place of rest and quietness, motivates an exception
for this case, since a war memorial will hardly disturb any ordinary park
visitor. Consider now a variant of this example, in which a car is used to
transport an injured person quickly from the park to the hospital. In this
case it is not the purpose of the rule but a legal principle which challenges
the rule's applicability: although the car will certainly disturb park visitors,
it might be argued that the principle 'One ought to do what one can to
help injured people' gives rise to an exception to this rule. To summarize,
legal rules can be subject to exceptions which cannot all be anticipated by
the legislator, but which are motivated by the purpose of the rule or by
legal principles; in one word: legal rules are defeasible.

Examples of this kind can also be found in real life. Snijders (1978)
discusses, among others, the following examples from Dutch law. Section 95
of the Dutch accident insurance act gives the insurer the right of
recourse against the injurer. In one such case the injurer and the injured
person were married in communal estate; the Dutch supreme court (Hoge
Raad) decided in HR 2-2-1973, NJ 1973, 225 that applying the rule to such
cases would not be reasonable. Another example concerns Dutch labour
law. Section 1638b BW states as a general rule that the employee's right to
receive wages ends if s/he does not perform the agreed labour. In addition
to the two legislative exceptions to this rule in the sections 1638c and 1638d
BW the Dutch supreme court formulated in HR 10-11-1972, NJ 1973, 60
another exception: the right to receive wages does not end if not doing the
agreed labour was caused by circumstances of which it is more fair that the
employer rather than the employee should bear the risk. In the case at hand
such a circumstance was a strike of other employees which prevented the
employee from doing his job. Finally, Dutch civil law contains the analogue of the
famous American Riggs v. Palmer case, discussed in Dworkin (1977). In the
Dutch supreme court case (HR 7-12-1990, NJ 1991, 593) a man had killed
his wife five weeks after they had married. At the time of their marriage the
woman was not only much older but also much richer than the man, and
since they had married in communal estate the man claimed his half of the
community according to section 1:100 of the Dutch Civil Code; however,
the Dutch supreme court decided that the principles of reasonableness and
equity motivated an exception to this rule in this case.
In sum, legal rules are defeasible: since the legislator cannot foresee every
possible situation which satisfies a rule's conditions, in some such situations
principles or the purpose of the rule may give rise to an exception.

PROBLEMS FOR STANDARD LOGIC

With respect to knowledge representation the defeasibility of legal rules
causes exactly the same problems as the 'structural' separation of rules
and exceptions within legislation itself. Particularly, there is a problem
with respect to preserving the structure of the sources, since the exceptions
making a rule defeasible will normally occur separately from the general
rule (typically in case law decisions). Furthermore, it is easy to see that
the same holds for the other problems, having to change old rules when
new exceptions are added, having to consider the entire domain when
formalizing one rule, and needing complex formalizations: these problems
also occur again when standard techniques are used for representing these
exceptions.
In addition to these knowledge representation problems there is another

reason why standard logic cannot cope with the defeasibility of legal rules,
which has more to do with their application to the world. Someone requiring
absolute certainty of derived conclusions would in case of defeasible rules
have to say: 'unless I positively know that there are no exceptional cir-
cumstances, I cannot apply the rule'. This, however, is simply not what
happens in the practice of legal reasoning: often only some exceptions
are considered, and many rules are applied even without any reference to
the absence of exceptional circumstances. In criminal cases, for example,
normally only a small proportion of the possible exceptional circumstances
is considered explicitly; for the rest their absence is simply tacitly assumed.
And when lawyers refer to a norm concerning debtor-creditor relations
(cf. Example 3.1.2 above), they will often not even mention 6:2-(2) BW.
The reasons for this are twofold: firstly, information concerning possible
exceptions is often simply not available; and secondly, even if it is in
principle available, trying to find it would often be too tedious and time
consuming. In conclusion, in every-day life defeasible rules are not only
applied when it is known that all exceptions are false, but also when
information concerning exceptions is absent, and this means that if this
information becomes available at a later time, the conclusion may no longer
be valid. However, as explained above, standard logic cannot account for this
kind of reasoning, since standardly valid conclusions cannot be invalidated
by adding new premises.

3.3. Open Texture


A next source of problems for standard logic is one of the central aspects of
legal reasoning: the interpretation of legal concepts (in Section 2.1.2 named
'conceptual interpretation'). Its importance is due to the fact, already
mentioned a few times, that the legislator cannot foresee the entire future,
for which reason it is impossible to state beforehand for many legal concepts
which fact situations will classify as an instance. Because of this, lawyers
confronted with a classification question can often not rely on established
legal knowledge. As Hart (1958, p. 23) puts it:
Fact situations do not await us neatly labelled, creased, and folded; nor is their
legal classification written on them to be simply read off by the judge. Instead, in
applying legal rules, someone must take the responsibility of deciding that words
do or do not cover some case in hand, with all the practical consequences involved
in this decision.
This phenomenon, which is often referred to as the open texturedness of
legal concepts, creates serious problems, for a computer even more than for
a human lawyer: a human can bridge the gap between facts and concept
with his or her own judgement or prediction, but a computer which does not
have enough information to solve a problem will have to remain silent. Making

the computer speak about problems of open texture is one of the major
tasks of AI-and-law research.
However, I shall not investigate this problem, but a related problem of
a logical nature: even if the computer speaks in problems of open texture,
it cannot always do so in the same way as about other problems; more
specifically, even if additional information is available, for example judi-
ciary decisions in previous cases or expert opinions, it is often of such an
inconsistent, incomplete or vague nature that the way lawyers reason with
it cannot be modelled with standard methods. The reasons for this will be
examined in the present section; my analysis will be based on a distinction
between four logical types of open texture or, more accurately, between four
kinds of situations which are in the literature often referred to as problems
of open texture. Other terms are also used, in particular, 'vagueness' and
'semantic indeterminacy'. See Susskind (1987, pp. 187-8) for a discussion
of terminological matters.

3.3.1. CLASSIFICATION PROBLEMS

The open texturedness of legal language is sometimes regarded as causing
problems for deductive reasoning, e.g. by Hart (1958, p. 23):
If a penumbra of uncertainty must surround all legal rules, then their application
to specific cases in the penumbral area cannot be a matter of logical deduction,
and so deductive reasoning, which for generations has been cherished as the very
perfection of human reasoning, cannot serve as a model for what judges, or indeed
anyone, should do in bringing particular cases under general rules. In this area
men cannot live by deduction alone.
This quotation should not be understood, however, as saying that for
lawyers deduction is not the appropriate way of making inferences; Hart
only means that solving problems of open texture is not a matter of logic,
but of content: "Logic is silent on how to classify particulars - and this is
the heart of a judicial decision" (1958, p. 25). Accordingly, Hart (1958,
p. 22) formulates his well-known distinction between "core cases" and
"penumbra cases" (or 'clear' and 'hard cases') as being based on the relia-
bility or availability of legal knowledge instead of on its logical form:
... if, as in the most elementary form of law, we are to express our intentions
that a certain type of behaviour be regulated by rules, then the general words we
use (...) must have some standard instance in which no doubts are felt about its
application. There must be a core of settled meaning, but there will be, as well, a
penumbra of debatable cases in which words are neither obviously applicable nor
obviously ruled out.
In other words: in penumbra cases there is uncertainty whether the current
case is an instance of the concept, whereas in core cases an undebatable
decision is possible, because of the presence of further definitions in a statute
or of previous judiciary decisions which have become commonly accepted,
or simply because of common sense. As described thus, open texture is
only a problem of content: a problem of whether a certain fact situation
can be classified as an instance of a legal concept on the basis of established
legal knowledge. After Gordon (1991) I shall call this kind of open texture
underdetermination.
Later writers, notably Dworkin (1977), have argued that even in penum-
bra cases additional knowledge is available to solve the problem, viz. in
the form of legal principles. However, in a formal analysis there are some
problems with these principles. The first is that they are much too tentative,
for which reason they often run into conflict with each other, so that their
application still requires decisions of content, viz. about the weight of the
various conflicting principles. In addition, there is the problem of matching
the description of factual cases against principles. A system will only on a
few occasions be able to do this, because principles are usually too general
to lead directly to conclusions about specific cases: consider for example 'No
person shall profit from his own wrongdoing'. In summary, legal principles
are even more open-textured than legal rules, for which reason they cannot
bridge the gap between the facts of the case and the formulation of the
rule.
Hart's version of the vehicle example, which was discussed in Chapter 2
and in Section 3.2, is an example of underdetermination. Consider again
the park regulation forbidding the use of vehicles in the park; assume now
that John drove in the park with an automobile and Bill did the same
with a skateboard. In order to know whether the prohibition applies to
John and to Bill, it must be determined whether an automobile and a
skateboard are vehicles. With respect to the automobile common sense
dictates an affirmative answer, but the skateboard case is not so clear.
Therefore, the first few times the skateboard appears in the park, there will
be no straightforwardly applicable legal knowledge available at all on the
problem, because a skateboard is a new phenomenon the legislator did not
foresee. This is a problem of an epistemic nature: the question is whether
a skateboard can be classified as a vehicle, whether the proposition 'If an
object is a skateboard, then it is a vehicle' is true. As a matter of content
this case does not give rise to problems of a logical nature: the problem is
simply that the concept 'vehicle' does not have a complete definition, and
in Section 2.1.1 it has already been shown that this situation can easily be
represented in standard propositional logic. The only problem is that no
answer can be given to the question whether it is allowed to enter the park
with a skateboard.
However, there is a related problem of open texture which indeed causes
problems for logic. It is likely that when legal doctrine develops on the issue,
after a period of time a body of conflicting information exists, consisting
of judiciary decisions in the first cases, expert opinions, dictionary inter-
pretations, etcetera. I shall, following Gordon (1991), call such cases, in which
conflicting classification rules exist, cases of overdetermination. It might be
argued that, as long as the issue has not been settled, a legal knowledge-
based system should not contain just one of the possibilities, but should
give insight into the 'hard' nature of the case by providing for ways of
alternatively constructing arguments for both positions. This means that
the knowledge base has to deal with inconsistent information, and in the
axiomatic view on legal reasoning as just running a theorem prover over a
set of formulas this cannot be done, since standardly from a contradiction
everything can be derived. What is necessary is that the system is able to
regard all consistent subsets of the KB alternatively as the set of premises
from which conclusions should be derived, just as was the case with ap-
plying collision rules. Since within each consistent part of the KB still the
axiomatic method can be applied, this is an example of a problem which
can be solved by regarding deduction as a tool in reasoning. In the next
subsection, however, I shall discuss a type of open texture for which this is
not enough and for which new logical tools have to be developed.
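The idea of treating consistent subsets of the knowledge base as alternative premise sets can be sketched for the simplest case, where the KB is flattened to a set of literals; the literal names and the encoding of negation as a leading '-' are illustrative assumptions, not a representation used in the book.

```python
# A minimal sketch of reasoning with maximal consistent subsets of an
# inconsistent knowledge base, restricted to literals such as
# 'vehicle' and '-vehicle' (negation written as a leading '-').
# The KB contents are illustrative.

from itertools import combinations

def consistent(subset):
    """A set of literals is consistent iff no atom occurs both
    positively and negatively."""
    return not any("-" + lit in subset
                   for lit in subset if not lit.startswith("-"))

def maximal_consistent_subsets(kb):
    """All consistent subsets of kb not properly contained in another
    consistent subset."""
    subsets = [set(c) for n in range(len(kb), -1, -1)
               for c in combinations(kb, n) if consistent(set(c))]
    return [s for s in subsets if not any(s < t for t in subsets)]

# Conflicting classification rules for the skateboard case, flattened
# to the two opposite conclusions they support:
kb = ["skateboard", "vehicle", "-vehicle"]
for s in maximal_consistent_subsets(kb):
    print(sorted(s))
```

Each maximal consistent subset supports one of the two alternative arguments about the 'hard' case; within each subset, deduction can then be applied as usual.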

3.3.2. DEFEASIBILITY OF LEGAL CONCEPTS

Sometimes commonly accepted rules about the classification of a legal con-
cept do exist, but when new circumstances arise, they happen to be defeasi-
ble in the same way as legal rules have been shown to be. A nice example is
provided by Dutch traffic law.² In its second explanatory memorandum (in
Dutch: Nota van Toelichting: NvT) to the Dutch traffic regulation (RVV)
the legislator attempts to define the concept of a vehicle by, besides giving a
number of instances of a vehicle, introducing the new concept 'an object not
meant for normal transport'. On the basis of this it then concludes without
any restriction that roller skates are not vehicles, for which reason roller
skaters should follow the rules for pedestrians. However, it can be doubted
whether the rule 'roller skates are not meant for normal transport' or the
rule 'objects not meant for normal transport are no vehicles' will be applied
as a hard and fast rule. Over the past few years, roller skates have more
and more been used as a normal means of transport and, furthermore, one
type of roller skates, called 'skeelers', has been used by speed skaters by way
of summer training, and they are able to obtain a speed which often even
exceeds the speed of a cyclist. In conclusion, there can be serious doubts
whether the opinion of the 'NvT' will hold in all circumstances: the rule
'roller skates are not meant for normal transport' or even the rule 'objects
not meant for normal transport are no vehicles', may turn out to be subject
to exceptions.
² This example was suggested to me by Peter van den Berg.
Judges are generally well aware of the defeasible nature of their classifi-
cation rules: see, for example, the following fragment of the Dutch supreme
court case HR 13-12-1968, NJ 1969, 174 on the classification of a case as
force majeure:
If the nature of the contract, common opinion, or the reasonableness of an indi-
vidual case do not indicate reasons for deciding the contrary, it can as a general
rule be assumed that ...
According to Hart (1949; 1961) this phenomenon is a common characteristic
of legal concepts: it is almost never possible to give sufficient conditions for
the applicability of a concept: any rule which tries to do so is subject to
exceptions, because it is impossible to foresee all possible situations which
might occur in real life. Hart (1949) has called this characteristic of legal
language the "defeasibility of legal concepts".
This development in legal philosophy mirrors similar developments in
analytic philosophy, cognitive psychology and Artificial Intelligence (see
for an overview Johnson-Laird, 1988, pp. 242-246). In cognitive psychology
concepts as people use them are not regarded any more as being strictly
defined by a set of necessary and sufficient conditions, but as being captured
in a more loose sense by a set of prototypes, from which deviations in
individual cases are possible. An often-used example, already discussed
in Section 1.3.3, is the concept 'bird': typically birds can fly, but some
instances, like penguins and ostriches, lack this property. In AI these devel-
opments have prompted the critical remarks on logic by, among others, Minsky
(cf. 1.3.3), who in Minsky (1975) suggested a knowledge representation
formalism called 'frames', in which it is possible to specify default values
for features of a concept, which can be overridden by specific instances.
One of the philosophical sources of this new way of looking at concepts is
Wittgenstein's (1958) Philosophical Investigations, with its idea of instances
of concepts as having only 'family resemblances' instead of all having the
same set of essential features. In conclusion, legal theory, philosophy
and cognitive psychology all provide ample evidence for the defeasibility of the
concepts which people use in every-day life.
Logically the defeasibility of legal concepts is of the same type as the
defeasibility of legal rules, discussed above: in both cases the antecedent of
a conditional statement is true, but reasons exist for making an exception
to the rule. Unlike cases in which a concept is open textured because it
lacks a complete definition, this kind of situation indeed creates problems
for standard logic, viz. those discussed in Sections 3.1 and 3.2: both their
separated representation and their application when not everything is
known about exceptions require nonmonotonic reasoning methods.

3.3.3. VAGUENESS

Still another aspect of legal language which has often been referred to
as open texture often (but not exclusively) goes together with terms
like 'reasonable', 'sufficient', 'appropriate', 'suitable', etcetera, named by
Hart (1961, pp. 128-130) "variable standards". Consider by way of example
the question whether an apartment is suitable for a candidate tenant. While
in some cases judges state their decision explicitly in the form of a rule,
often the decision is justified in the following way: 'On the one hand A, B
and C, but on the other hand X, Y and Z and in this case the latter factors
outweigh the first: therefore the apartment is not suitable'. This style of
justifying adecision can particularly be found if factors can have many
different values, as is the case with income, age, rent, etcetera. Consider
the following typical fragment from a Dutch supreme court decision (HR
19-5-67, NJ 1967, 261), in which the question whether the debtor can
appeal to a liability-restricting clause in a contract was held to
depend on numerous circumstances, such as: the seriousness of the fault, also in
connection with the nature and the weight of the interests involved in any conduct,
the nature and the further content of the contract in which the clause is contained,
the social position and the mutual relationship of the parties, the way in which
the clause has come into existence, the degree to which the other party was aware
of the scope of the clause.
From such a justification no general rules can be derived, because the fact
that the judge refers to the weight of the factors is an indication that new
factors might easily alter the outcome; all that can be known is that factors
have a certain weight, i.e. that their presence or absence influences the
outcome of a case to a certain degree. An expert on landlord-and-tenant
law might, for example, know that the presence of an elevator increases the
chance for a house to be suitable if the tenant is old, without being able to
tell 'if there is an elevator and the tenant is old and ... , then the house is
suitable'. I shall call this situation the vagueness type of open texture. It is
obvious that knowledge of and reasoning with the mere weight of factors
cannot be modelled in a logic-based way: a numerical treatment is more
appropriate. Candidate methods are, for example, statistical analysis (e.g.
De Wild & Quast, 1989), or a neural network approach (e.g. Van Opdorp
& Walker, 1990; Van Opdorp et al., 1991; Bench-Capon, 1993). Numerical
methods, however, fall outside the scope of the present research.
Nevertheless, I would like to make two remarks on this issue. The first is
that these numerical methods are ways to make the system speak in cases
which have never been considered before. What these methods do in cases
with new combinations of elements is to give an answer which as closely
as possible reflects the role of the elements in previous cases in different
combinations. Of course, since the combination of the case at hand has
never occurred before, the answer has a certain degree of uncertainty, but
in trying to deal with new, previously undecided cases, they are an attempt
to break the silence of a system in cases of open texture. In this respect, they
are like induction or analogy, although they go further: as was argued above,
analogy is only a heuristic principle for suggesting a decision in a new case,
since as a mode of reasoning it has no justifying force at all; the numerical
methods, however, provide at least a certain degree of justification, based
on the statistical aspects of the theories underlying them.
A second remark concerns the fact that sometimes fuzzy logic is sug-
gested as a way of dealing with open texture, e.g. by Franken (1983, p. 32).
Fuzzy logic is based on the concept of 'fuzzy sets', of which membership is a
matter of degree. For example, a person measuring 1.60 might be regarded as
a tall person to degree 0.1, a person measuring 1.80 to degree 0.5 and someone
measuring 1.95 to degree 0.9. However, this example already shows that this
approach is, like the other numerical methods, not applicable to the first
two types of open texture, but only to the vagueness type (as observed
earlier by Bench-Capon & Sergot, 1985). The reason is that in the first two
types being an instance of a legal concept is not a matter of degree, but a
matter of 'yes or no'. For instance, in the vehicle example the question is
not whether a skateboard is more or less a vehicle, but whether it is or is
not. Being a vehicle to degree 0.7, if to say so makes any sense at all, does
not have less serious legal consequences than being a vehicle to degree 0.9.
Fuzzy logic is only applicable if some of the 'input' or 'output concepts' are
multi-valued, like 'rent', 'compensation', 'suitability' or 'reasonableness', or
if a multi-valued concept is made two-valued by a 'threshold' qualification,
as in 'sufficiently suitable'. Of course, as already indicated, such concepts
certainly occur in law; in such cases, however, fuzzy methods are more
an alternative to other numerical methods in modelling vagueness than to
symbolic methods in modelling defeasibility and disagreement.
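To make the idea of graded membership concrete, the 'tall person' example above can be sketched as a piecewise-linear membership function. The anchor points come from the example in the text; the linear interpolation between them is my own illustrative assumption, not part of fuzzy set theory itself.

```python
def tall_degree(height):
    """Fuzzy membership of 'tall person' for a height in metres.

    Anchor points (1.60 -> 0.1, 1.80 -> 0.5, 1.95 -> 0.9) follow the
    text's example; interpolating linearly between them is an assumption.
    """
    points = [(1.60, 0.1), (1.80, 0.5), (1.95, 0.9)]
    if height <= points[0][0]:
        return points[0][1]
    if height >= points[-1][0]:
        return points[-1][1]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        if height <= x2:
            return y1 + (y2 - y1) * (height - x1) / (x2 - x1)

for h in (1.60, 1.70, 1.80, 1.95):
    print(h, tall_degree(h))
```

Note that the function returns a degree rather than a truth value; this is precisely why, as argued above, such an approach fits 'variable standards' but not yes-or-no classification questions.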

3.4. Which Nonstandard Techniques are Needed?

In this section more will be said about the nonstandard aspects of legal
reasoning identified in this chapter: reasoning with inconsistent information
and reasoning with defeasible rules. Most attention will be paid to the latter,
since in practice reasoning with inconsistent information rapidly becomes
a special kind of defeasible, or nonmonotonic, reasoning.

3.4.1. REASONING WITH INCONSISTENT INFORMATION

The first reason why a legal knowledge-based system should be able to
reason with an inconsistent knowledge base is that, as we have seen in
Section 3.1, collision rules are often used as a legislation technique: Lex
Specialis is often and Lex Superior is sometimes used as a way of formulating exceptions to rules, and the Lex Posterior principle is often used
as a way of issuing new legislation without explicitly having to retract old
regulations; moreover, some statutes contain special collision rules about
conflicts between certain classes of norms; cf. Example 3.1.6 above. Other
reasons were given in Section 2.2.4. Because of the incompleteness, ambiguity
and vagueness of many legal rules and case law decisions, and since they
are often put into question, lawyers have much room to disagree about what
'the law' is. In this chapter a particularly important source of disagreement
has been identified, overdetermination in case of open texture: lawyers
often disagree on the classification of a case under a legal concept. Because
of the occurrence of disagreement, a system containing only one consistent
view on the law does not give a realistic picture of the legal reality.
Of course, for some purposes this might be different. For instance, if
the aim is to represent a particular, relatively clear piece of legislation
and to determine what the consequences are of applying it to the facts
as they are classified by the user, then one consistent formalization may
suffice. Particularly if the classification task is excluded from the scope
of the system, the need for ways of modelling disagreement is reduced.
Examples of such systems are the formalization of the British Nationality
Act of Sergot et al. (1986) and the TESSEC system of Nieuwenhuis (1989),
containing the Dutch Social Welfare Act and some related regulations.
In addition, it might be the case that a particular field of law is so well
established that the need for alternative opinions does not arise.
However, these cases certainly do not constitute the general rule; it will
often be desirable to let a system present alternative, internally consistent
arguments from a knowledge base which reflects the disagreement among
lawyers, particularly if the classification task is not entirely left to the user.
As was said above in Section 3.3.1, a system which can handle this kind of
reasoning provides an example of logic being used as a tool in reasoning.
Some of the systems which actually provide this possibility will be discussed
in the next section. It is important to realize that the mere generation of
alternative arguments does not yet introduce nonmonotonic reasoning: this
happens only if arguments have to be compared, as will be shown in the next
subsection, when the formal nature of nonmonotonic reasoning is discussed.

3.4.2. NONMONOTONIC REASONING

Relevance
To summarize the potential reasons which we have identified for using methods
of nonmonotonic reasoning in AI and law, we can divide them into two
groups. The first concerns reasons of knowledge representation: we have
seen that if the separation of rule and exception is to be preserved, which
for reasons of validation and maintenance is often regarded as desirable, the
best methods are those which have the 'side effect' that the reasoning pro-
cess becomes nonmonotonic. The second group is based on a philosophical
analysis of reasoning with defeasible rules. As was shown in Sections 3.2
and 3.3, many, if not all legal rules are defeasible: since the world is not
closed, it is always possible that new circumstances give rise to an exception
on the basis of principle or purpose. Now, in Section 3.2 I argued that in
the practice of legal reasoning most legal rules will be applied without an
exhaustive search for all possible exceptions to a rule, whether stated in case
law or otherwise; this would simply not be feasible. However, in Section 3.1
we have seen that in standard methods all exceptional circumstances have
to be explicitly stated as being absent, otherwise the general rule cannot
apply. Therefore, the way lawyers reason with legal rules in practice cannot
be analyzed with standard logic; a way has to be found to assume that
there are no exceptions unless the contrary has been shown. As was said,
this calls for a nonmonotonic consequence notion.
The nonstandard structural aspects of law appeared to be directly
relevant for AI-and-law research, because the issue of structural resemblance
is a recognized issue in knowledge representation. However, it might
be argued that the analysis of the defeasibility of legal reasoning is only
relevant from a philosophical point of view, since in a legal knowledge-based
system it can be assumed that the only possible exceptions are those which
are listed, and if it is ensured that the number of exceptions is not too
large, an exhaustive search is both possible and desirable. For example,
Susskind (1987, pp. 193-8), although admitting that legal rules have a
nonstandard logical behaviour, suggests that in practice it is sufficient to
let a conclusion be accompanied by a message to the user that it is subject
to implied exceptions on the basis of principle or purpose. In such a view the
defeasibility of legal rules and concepts would be aspects of the evolution
of a legal system in time, which is more a matter of maintenance than of
reasoning: the knowledge base should simply be kept up-to-date.
I admit that there is a considerable degree of truth in this, particularly
if the principal aim is to build systems which are easily usable in practice.
However, despite this, there are still reasons to place nonmonotonic reasoning
on the agenda of AI-and-law research. One of the extra possibilities
offered by nonmonotonic formalisms compared to standard methods, in
virtue of their ability to draw conclusions under incomplete information,
is preventing too many questions of a system to the user (see also Gordon,
1988, p. 119). If, for instance, in Example 3.1.1 (formalization 3) in a
negation-as-failure proof for a fact N is inapplicable to x information
about a formula φ is needed, then a standard reasoning mechanism has to
know whether φ is true or false, otherwise it cannot draw any conclusion.
A nonmonotonic system, on the other hand, can also draw conclusions if
nothing is known about φ, and this ability is desirable in two cases. The
first case is when it is appropriate not to ask the user for the missing
information, as, for example, in Example 3.1.1, since the concept 'short
termed contract' is worked out in a large amount of case law which is
highly factual and specific in nature (see Walker et al., 1991, p. 45). The
second case is situations in which the missing information about φ is asked
of the user, but in which the user can give 'I don't know' answers. Such
an answer has the same effect as no information about φ without asking.
In addition to this, designers of knowledge-based systems sometimes feel
the need to assume certain trivial factual information to be true without
having to ask the user whether these facts indeed hold. For example, the
PROLEXS system on Dutch landlord and tenant law (Walker et al., 1991)
assumed the user to be an adult and the lease contract to be valid. In
conclusion, even in a practical environment there can be reasons to apply
general rules under incomplete information about exceptions.
In addition to these arguments it may be remarked that even if from
a reasoning point of view a certain application does not need nonmono-
tonic methods, there are still the above mentioned arguments from the
knowledge representation point of view: separating rules and exceptions in
a formalization often better preserves the structure of what is formalized,
and apart from this many hold that it benefits validation and maintenance.
Finally, from a research point of view the nonstandard features of legal
reasoning deserve to be investigated simply because they are aspects of
legal reasoning; and in AI research it is not at all strange to say that every
feature of reasoning is in principle a subject of research, no matter what
the immediate practical gains of investigating it are.
In conclusion, although it has by no means been shown that legal
knowledge-based systems are not worth their money if they cannot reason
nonmonotonically, what has been shown is that nonmonotonic aspects of
legal reasoning at least deserve to be placed on the agenda of AI-and-law
research. The final section of this chapter will discuss some existing projects
in the legal domain which have already done this, although sometimes
without being explicitly based on a theory of nonmonotonic reasoning.
First, however, I discuss in more detail the formal nature of reasoning with
an inconsistent knowledge base and of nonmonotonic reasoning.

Characteristics
As explained, nonmonotonic reasoning is drawing conclusions which might
be invalidated by additional information. Using a nonprovability operator
has been shown to be a form of nonmonotonic reasoning: if conclusions depend
on the failure to derive other conclusions, they become invalid as soon
as new information makes it possible to derive these other conclusions. Also
using collision rules introduces nonmonotonicity: in case of, for example,
Lex Superior and Lex Specialis, the validity of a conclusion derived from
a lower or more general rule depends on the failure to derive the opposite
conclusion from a higher or more specific rule.
This, by the way, explains the last remark of Section 3.4.1 that the
comparison of arguments makes modelling disagreement nonmonotonic: if
standards are used to determine which of the conflicting arguments is the
best, then it might be that additional information gives rise to a new argument,
better than the one which was preferred on the basis of the old
information. Therefore, allowing for inconsistent knowledge bases to model
disagreement becomes nonmonotonic as soon as conflicting arguments are
compared to see which one is the best.
As described so far, nonmonotonicity is a formal property of consequence
notions; however, also computationally nonmonotonic reasoning
systems differ from standard logics in an important respect. Classical proof
methods are local: if there is a monotonic proof for a formula φ from a
part of the knowledge base (KB), this counts as a proof from the entire
KB, since new information cannot invalidate this proof. For example, if ψ
describes the facts of a case and a monotonic system tries to answer the
question whether φ holds, then if the first rule it checks is ψ → φ, it can stop
as soon as it has applied modus ponens to this rule. Nonmonotonic proof
methods, on the other hand, are global, since if they have found a way to
derive φ, they have to continue searching, because new information might
invalidate nonmonotonic conclusions. For example, in the general exception
clause approach a conclusion φ based on the nonderivability of a formula ψ
from a part of the KB can be invalidated by a derivation of ψ from another
part of the KB; and in the collision rule approaches the opposite of φ may
be derivable from a more specific or higher rule in the KB. Obviously, global
proof methods are less tractable than local ones, since they require that for
every conclusion the entire KB has to be inspected.
To summarize, nonmonotonic reasoning differs from standard reasoning
in two important respects: conclusions may be invalidated by additional
information; and for every step in a derivation the entire amount of available
information has to be considered.
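These two respects can be illustrated with a toy knowledge base. All rule and fact names here (psi, phi, exceptional) are invented for illustration: a monotonic prover may stop at the first applicable rule, while a nonmonotonic prover must additionally scan the whole knowledge base for defeating information.

```python
# Toy knowledge base: rules derive a conclusion from a set of conditions;
# exceptions block a conclusion when their trigger conditions hold.
RULES = [({"psi"}, "phi")]
EXCEPTIONS = [({"exceptional"}, "phi")]

def provable_monotonic(goal, facts):
    # Local proof method: one applicable rule suffices; no information
    # added later can invalidate the proof.
    return any(conds <= facts and concl == goal for conds, concl in RULES)

def provable_nonmonotonic(goal, facts):
    # Global proof method: even after a derivation is found, the entire
    # KB must be inspected for exceptions that invalidate it.
    if not provable_monotonic(goal, facts):
        return False
    return not any(conds <= facts and concl == goal
                   for conds, concl in EXCEPTIONS)

print(provable_nonmonotonic("phi", {"psi"}))                 # True
print(provable_nonmonotonic("phi", {"psi", "exceptional"}))  # False
```

The second call shows both respects at once: adding the fact `exceptional` retracts the earlier conclusion, and detecting this forces the prover to inspect the exception part of the KB as well.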
It may be useful to compare nonmonotonic reasoning with other modes
of reasoning with respect to their role in adversarial argumentation. Imagine
a dialogue situation in which the parties first reach agreement on the basis
for the discussion, i.e. on the initial premises, and then try to persuade
each other of a certain conclusion. If one party presents a conclusion which
is derived from the premises in a deductive, monotonic way, then the only
thing the opponent can do to attack this conclusion is withdrawing his
or her acceptance of the premises. If, on the other hand, the conclusion
has been derived by nonmonotonic means, the opponent has an additional
strategy: s/he can point at new premises which invalidate the conclusion.
However, if s/he does not come up with such new information, and if
s/he does not withdraw the acceptance of the initial premises either, then the
nonmonotonic conclusion has to be accepted. Now, what is the role of
induction and analogy in this dialogue scenario? We have said above that
they are heuristic principles to suggest new premises, for which reason the
parties first have to reach agreement about the truth of the new premises.
In other words, the difference with nonmonotonic conclusions is that if a
party presents a conclusion based on analogical or inductive reasoning, the
opponent need not itself come up with new information which invalidates this
conclusion, as s/he had to do with nonmonotonic reasoning: it is sufficient
not to accept the analogical or inductive reasoning step.
An interesting question is how the difference between nonmonotonic
reasoning on the one hand and analogical and inductive reasoning on the
other hand can be formally characterized. Generally, it is held that a
necessary condition for an inference notion to be deductive is that it has the
formal property of monotonicity. The question is whether a nonmonotonic
consequence notion can also be characterized by properties which it has,
rather than only by a property which it lacks. It seems that only if such
properties can be found can nonmonotonic reasoning be distinguished in
a formal sense from analogical and inductive reasoning. In fact this is
the question what the minimal properties are of any consequence notion,
which is one of the new logical research questions raised by AI research on
nonmonotonic reasoning. I return to this question in Chapter 4.
To conclude this section, should we be happy or unhappy about nonmonotonic
behaviour of legal knowledge-based systems? On the one hand,
monotonicity is a reassuring property, both pragmatically and computationally.
Pragmatically it is reassuring since it guarantees that inferences
which are made at some point in the reasoning process stay valid even if
new information is added to the system. For the user this makes, of course,
monotonic conclusions the most reliable ones: whatever new information
s/he may present to the system, what it has already derived stays derivable.
Also computationally monotonic inference is attractive, since it can make
use of local proof methods, which do not have to inspect the entire KB
for every conclusion, in contrast to the global proof methods of nonmonotonic
systems. However, despite this computational attractiveness, monotonicity
becomes a disadvantage if it is a characteristic of the domain that sometimes
conclusions are drawn on the basis of the failure to derive other conclusions.
And in law this seems undoubtedly the case, as has been argued above.
Many other writers have made the same point, in the AI-and-law domain
e.g. Gardner (1987), Hage (1987), Gordon (1988) and Sartor (1991), to
name but a few. In conclusion, it can be said that the intractability of the
formalisms is caused by the intractability of the kinds of reasoning which
are formalized (cf. Etherington, 1988, p. 69).

3.5. AI-and-law Programs with Nonstandard Features


To indicate the relevance of the coming investigations for AI-and-law research,
this section discusses some existing programs with nonstandard
features of the kind discussed above. Two remarks should be made beforehand.
The first is that in most of these systems the nonmonotonic behaviour
is caused by the fact that they compare alternative arguments generated
from an inconsistent knowledge base. Apparently, this way of reasoning is
rather close to actual legal reasoning; in line with this, a large part of this
book, Chapters 6 to 10, will be devoted to a formal study of this way of
arguing. A second remark is that most systems lack an explicit formal basis,
for which reason they form an excellent opportunity to test my claim that
formal research can be useful in criticizing and analysing implementations.
Therefore I come back to some of these systems in Chapter 10.

3.5.1. THE LAW AS LOGIC PROGRAMS

First I discuss something which, rather than a particular system, is a general
approach to formalizing law for computer applications. This is the idea of
formalizing law as logic programs, viz. as a set of formulas of a logical
language for which automated theorem provers exist. This approach, of
which the underlying ideas are set out in Sergot (1988), is most closely
associated with Sergot and Kowalski. The best known application is the for-
malization of the British Nationality Act (Sergot et al., 1986). The authors
have made ample use of the negation as failure approach to negation, which
is characteristic of logic programming methods and which, as explained in
Section 3.1, is a form of nonmonotonic reasoning. They regard this approach
as particularly appropriate for exceptions. Accordingly, they use negation
as failure specifically for norms containing phrases like 'unless the contrary
is shown' and 'subject to section ... '. They do not use general but specific
exception clauses. An example of the use of negation as failure is the norm
to the effect that, under certain additional conditions, an abandoned child
acquires British citizenship if the identity of the parents is unknown. The
authors explicitly regard this as a form of nonmonotonic reasoning and,
moreover, they observe that the legislator has not anticipated the possibility
that at a later point in time it becomes known that the parents were not
British citizens, which would make the previous conclusion invalid.
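The negation-as-failure reading of this norm can be sketched as follows. The predicate names are my own, not those of the actual formalization, and the other statutory conditions are collapsed into a single illustrative fact: the parents count as unknown precisely when no parenthood fact is derivable from the current database, so that later additions invalidate the conclusion.

```python
def acquires_citizenship(db):
    """Negation-as-failure sketch of the abandoned-child norm.

    'parents unknown' holds whenever no parent fact can be derived from
    the database (failure to prove is treated as falsity).
    """
    parents_unknown = not any(fact[0] == "parent" for fact in db)
    return ("found_abandoned_in_uk",) in db and parents_unknown

db = {("found_abandoned_in_uk",)}
print(acquires_citizenship(db))        # True: nothing known about the parents
db.add(("parent", "x", "non_british"))
print(acquires_citizenship(db))        # False: the conclusion is invalidated
```

The second call reproduces exactly the scenario the authors describe: information about the parents arriving later makes the previously derived conclusion invalid.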
3.5.2. TAXMAN II

The TAXMAN II project of McCarty (e.g. McCarty & Sridharan, 1981;
McCarty, 1995) aims to model how lawyers argue for or against the application
of a legal concept to a problem situation. In McCarty & Sridharan
(1981) only a theoretical model is presented, but in McCarty (1995) an
implementation is described of most components of the model. However,
their interaction in finding arguments is still controlled by the user.
Among other things, the project involves the development of a logical
knowledge representation language for expressing theories about a legal
domain, and the design of a method for representing legal concepts, cap-
turing their open-textured and dynamic nature. This method is based on
the view that legal concepts have three components: firstly, a (possibly
empty) set of necessary conditions for the concept's applicability; secondly,
a set of instances ("exemplars") of the concept; and finally, a set of rules for
transforming a case into another one, particularly for relating "prototypical"
exemplars to "deformations". According to McCarty, the way lawyers
typically argue about application of a concept to a new case is by finding
a plausible sequence of transformations which maps a prototype, possibly
via other cases, onto the new case.
Although this view on legal concepts has many other aspects, for our
purposes the most interesting question is how it affects the logical aspects
of a system. In the initial publications on TAXMAN II this issue
is not addressed, but the logical representation language presented in
McCarty (1989) involves elements of nonmonotonic reasoning. This is what
would be expected on the basis of the above discussion on the defeasibility of
legal concepts. Moreover, since the overall task of the system is to generate
plausible arguments for and against concept application, clearly the ability
to deal with inconsistent information is needed.

3.5.3. GARDNER'S PROGRAM

Gardner (1987) has designed a program to perform so-called 'issue spotting'
in the American common-law domain of the formation of contracts by offer
and acceptance. The task of the program is to determine of an input case
which legal questions to be solved are easy and which are hard, and to solve
the easy questions. If all the questions involved are easy, the case can be
reported as clear, otherwise it is hard. The system contains domain knowledge
of essentially three different legal types: firstly, general legal rules,
derived from case law in a semi-official way (the Restatements of American
contract law); secondly, so-called 'common-sense rules', providing general
nonlegal knowledge about aspects of the world relevant to the program
domain; and finally, case law, generalized in the form of universally quantified
conditionals. The program considers a question as hard if either "the
rules run out" (the underdetermination-type of open texture), or different
rules or cases point at different solutions, without there being any reason
to prefer one over the other (the overdetermination-type). Because of the
second possibility the program is able to deal with conflicting information.
The reason why Gardner's program performs nonmonotonic reasoning is
that, before a case is reported as hard, conflicting alternatives are compared
to check whether one is preferred over the other and, as noted above, this
is a form of nonmonotonic reasoning. Because of this feature, the system is
also able to handle the defeasibility type of open texture, which it does in
the following way. Gardner uses case law in her program for indirectly
representing purposes of rules and principles: a common way in which
judicial decisions do this, is to set aside legal rules or literal interpretations
of legal concepts (represented in Gardner's program by the common-sense
rules) by an interpretation based on principle or purpose. The way Gardner
implements this, is to give case law precedence over common-sense rules.
Assume by way of an example that the program tries to apply a rule
forbidding sleeping in a railway station to a commuter who has dozed
off (an example of Fuller, 1958). The common-sense rules would no doubt
say that the man is sleeping, but an argument based on the purpose of the
rule (to keep tramps outside the station building) could say otherwise. The
corresponding judicial decision then sets the literal reading aside. Thus,
Gardner's system provides an example of a nonstandard collision rule.
Furthermore, in using collision rules to compare conflicting arguments, her
system performs nonmonotonic reasoning.
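Schematically, the precedence of case law over common-sense rules is itself a collision rule: whenever both sources yield a conclusion on the same issue, the case-law conclusion wins. The following sketch uses invented names throughout and abstracts away everything else about Gardner's program.

```python
# A conclusion is a (source, answer) pair; sources are ranked so that
# case law defeats common-sense rules.
PRECEDENCE = {"case_law": 2, "common_sense": 1}

def resolve(conclusions):
    # Collision rule: return the answer from the highest-ranked source.
    return max(conclusions, key=lambda c: PRECEDENCE[c[0]])[1]

# The dozing commuter: the literal reading says he is sleeping, but a
# purposive precedent (keeping tramps out) says otherwise.
conflict = [("common_sense", "is_sleeping"),
            ("case_law", "not_sleeping")]
print(resolve(conflict))                           # not_sleeping
print(resolve([("common_sense", "is_sleeping")]))  # is_sleeping
```

The second call shows the nonmonotonicity: the common-sense conclusion stands only as long as no case-law conclusion to the contrary is found.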

3.5.4. CABARET

The CABARET system (Rissland & Skalak, 1991) is an excellent example
of a system which performs reasoning tasks which, in my opinion, implicitly
use logic as a tool in, rather than as a model of, legal reasoning
(the designers, however, do not give a formal account of their system).
Like Gardner's system, CABARET provides ways of letting cases put into
question what can be derived with legal rules. Unlike Gardner's system,
however, CABARET does not attempt to give a solution to a legal problem;
rather, its task is to model aspects of the heuristic phase of adversarial legal
reasoning (cf. Section 2.3 above): depending upon the user's point of view
it provides the user with many possible ways of arguing for a conclusion, to
put a premise of the opponent's argument into question, to suggest needed
information by way of analogies, etcetera, but all these things are done
almost without any assessment of their justifying force on the basis of the
knowledge base.
CABARET is a combination of two subsystems: a traditional rule-based
expert system supplemented with ways to observe its behaviour, and a case-based
reasoner based on the HYPO system (Rissland & Ashley, 1987, 1989).
Unlike Gardner's system, in which the case base is simply a set of universally
quantified if-then rules, HYPO represents cases in a much more complex
way. HYPO analyses both stored cases and the new case on the basis of
its so-called dimensions. Dimensions are abstractly formulated aspects of
a case, identified by the knowledge engineer to be relevant for or against
a certain legal claim. HYPO's task is not to give a solution of the input
problem, but to provide each side of a case with reasonable precedents to
cite, and to assess the relative merit of these precedents. One reason which
HYPO provides to cite a case is that it is similar, or analogous, to the
current fact situation (CFS) and that it contains the decision the party
is arguing for. HYPO's notion of similarity is implemented as follows: all
stored cases which share at least one dimension with the CFS are ordered
on the basis of set inclusion of the factors they share with the CFS. If such
a set of one case includes that of another, the former case is considered
more similar ("more on point") to the CFS than the latter. The maximal
elements of the ordering are considered most analogous to the CFS. Note
that, as was said in Section 2.3, HYPO does not in any way give a clue as
to whether the cases are sufficiently similar to warrant the same conclusion.
Furthermore, it does not try to find a general rule behind the analogy.
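HYPO's on-point ordering can be sketched as follows. Cases are reduced here to bare sets of dimension names (d1, d2, ... are invented labels), which abstracts away most of HYPO's actual case representation.

```python
def more_on_point(a, b, cfs):
    """a is more on point than b iff the dimensions a shares with the
    current fact situation strictly include those b shares with it."""
    return (a & cfs) > (b & cfs)   # strict superset of shared dimensions

def most_on_point(cases, cfs):
    # Maximal elements of the partial order: cases sharing at least one
    # dimension with the CFS and not dominated by any other such case.
    relevant = [c for c in cases if c & cfs]
    return [c for c in relevant
            if not any(more_on_point(d, c, cfs) for d in relevant)]

cfs = {"d1", "d2", "d3"}
cases = [{"d1"}, {"d1", "d2"}, {"d3", "d4"}, {"d5"}]
# {"d1"} is dominated by {"d1", "d2"}; {"d5"} shares nothing with the
# CFS; {"d1", "d2"} and {"d3", "d4"} are incomparable maximal elements.
print(most_on_point(cases, cfs))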
In CABARET the case-based reasoner can in several ways be used to
argue for conclusions for which the rules run out, or even to argue against
results obtained with the ruIe-based part. An example of the first possibility
is that if the fact situation does not satisfy all conditions of a rule, an
analogy might be suggested to a stored case in which these conditions were
satisfied; an instance of the second way of arguing is that a rule used by
the opponent to obtain a conclusion can be discredited by finding a case in
which the rule-conditions were all satisfied, but in which the rule was not
applied.
Although CABARET's principal aim is to model the heuristic phase
of legal reasoning, it has, of course, to do so with a background view on
the justificatory phase: otherwise the effect of possible argument moves
cannot be defined at all. Now, it is this background view in which aspects
of nonmonotonic reasoning can be found. An example is the just described
'rule-discrediting' heuristic: in allowing a case to point at the opposite
conclusion in cases in which the rules' conditions are satisfied, it is clearly
based on a defeasible interpretation of the rules in much the same way
as Gardner's system iso Also the way in which cases are compared with
each other to see which one is more on point, expresses a defeasible view on
legal reasoning, in employing a restricted version of the specificity principle.
THE NEED FOR NEW LOGICAL TOOLS 65

These two features might be combined in an attempt to look at CABARET


from a justificatory point of view: for example, if a rule can be discredited
by a case, but a more on point case confirms the rule, the rule's conclusion
might be regarded as the overall answer of CABARET's knowledge bases
to the legal issue.
In conclusion, although CABARET mainly models the heuristic part of
legal reasoning, its view on the justificatory force of the argument moves
incorporates elements of nonmonotonic reasoning.
CHAPTER 4

LOGICS FOR NONMONOTONIC REASONING

This chapter gives an overview of various existing formalizations of
nonmonotonic reasoning.¹ To start with, it may be useful to derive from the
previous chapter two basic motivations for using nonmonotonic forms of
reasoning. The first is common to all domains of common-sense reasoning.
In solving a problem people do not always have enough information to
make a safe step towards the conclusion; instead, they often have to jump
to conclusions by applying general, defeasible rules, since finding the infor-
mation which would guarantee a safe landing is often too costly or even
impossible. But instead of acting, planning or judging on an ad hoc basis
or even worse, doing nothing, people still want to take their decisions in a
rational way; and a general rational principle people employ is: assume as
much as possible that things are normal; under this assumption conclusions
can be drawn which have to be retracted only in unusual circumstances.
However, in law the second form of nonmonotonic reasoning, employing
general conflict resolution metarules, has a different motivation, which
seems to be specific to the legal domain. Although these principles also have
the effect that additional information may invalidate previously obtained
conclusions, the reasons why they are used have less to do with the cost
of searching for more information, and more with the efficiency of the
'legislation business'. Regulations come into being and cease to exist in
complex ways, involving different authorities at different times in different
places; all this can easily give rise to inconsistencies. Now, if every time
a conflict occurs new legislation were necessary to restore consistency, the
legislation business would be obstructed in an inadmissible way. Therefore,
lawyers have developed ways of anticipating such conflicts based on the
same structural features of legal systems by which the conflicts are caused.
In conclusion, not only the costs of making knowledge complete, but also
the costs of keeping legal systems consistent give rise to nonmonotonic
forms of reasoning.
¹ This chapter is not meant to be original; I have benefited much from overviews
of, among others, Ginsberg (1987), Reiter (1987), Etherington (1988), Brewka (1991a),
Lukaszewicz (1990), Bidoit (1991) and Roos (1991). The reader who misses references
on technical details is referred to one of these texts.

I shall start with a review of approaches to modelling reasoning with
incomplete information. An interesting fact is that one of these approaches
is based on regarding defeasible reasoning as reasoning with inconsistent
premises, and therefore it will not be necessary to give a separate review of
reasoning with inconsistent information. After the overview of the various
nonmonotonic logics some issues in the general study and the unification of
these logics will be reviewed and, finally, some objections to the enterprise
of developing nonmonotonic logics will be discussed.

4.1. Nonmonotonic Logics

Formalizations of nonmonotonic reasoning can be classified in several ways:
I have opted for a classification according to the logical instruments
used to assume what cannot be known for certain. The first approach
amounts to assuming a fact if it is consistent with what is known. The
best known exponent of this method is default logic. A second method is
using modalities for representing and reasoning about one's knowledge and
ignorance as, for example, in autoepistemic logic. Another approach is to
minimize certain kinds of information, such as atomic facts or predicate
extensions; two fields in which this idea is applied are circumscription and
the semantics of logic-programming's negation as failure. Also investigated
is the idea of using conditional operatoTs expressing 'typical' or 'normal'
conditional relationships. Finally, ways of inconsistency handling have been
proposed, in which it is possible to prefer the exceptional conclusion if the
premises turn out to be inconsistent.

4.1.1. CONSISTENCY-BASED APPROACHES

NONMONOTONIC LOGIC OF MCDERMOTT AND DOYLE

An early consistency-based approach was Nonmonotonic Logic (NML) of
McDermott & Doyle (1980), which was based on adding a new operator
M, standing for consistency, to the language of propositional logic. Thus, a
formula like

(b ∧ M¬p) → f

could be used to express that if something is a bird and it is consistent to
assume that it is not a penguin, then it can fly. Since M is a new constant
in a logical language, it should have a semantic interpretation. Because of
inherent difficulties in defining this interpretation, notably in the attempt
of McDermott (1982), NML has not survived very long.
DEFAULT LOGIC

A more successful consistency-based approach is Reiter's (1980) default
logic, in which the consistency operator is not added to the object-level
language of a logical system, but is incorporated in domain-specific defeasible
inference rules. These inference rules can be used to extend a classical first-
order theory containing what is known with formulas which are not clas-
sically entailed by the theory but which are nevertheless plausible enough
to believe on the basis of what is known. Defaults are defined as follows
(Reiter, 1980, p. 88).
Definition 4.1.1 (defaults, default theories) A default is any expression
of the form

α(x) : β₁(x), ..., βₘ(x)
------------------------
         ω(x)

sometimes written as

α(x) : β₁(x), ..., βₘ(x) / ω(x)

where α(x), β₁(x), ..., βₘ(x) and ω(x) are wff's (well-formed formulas) of
which the free variables are among those of x = x₁, ..., xₙ. α(x) is called the
prerequisite, β₁(x), ..., βₘ(x) the justifications and ω(x) the consequent.
A default is closed iff none of α, β₁, ..., βₘ, ω contains a free variable.
A default theory is a pair (F, Δ) where Δ is a set of defaults and F a
set of closed first-order predicate logic well-formed formulas.
Informally a default reads as 'If α(x) holds and β₁(x), ..., βₘ(x) may
be consistently assumed, ω(x) may be inferred'. An open default can be
regarded as a scheme for all its ground instances, i.e. for the set of closed
defaults which can be obtained by replacing its free variables with ground
terms.
Consider Reiter's version of the Tweety example:²

Δ = {Bird(x) : Canfly(x) / Canfly(x)}
F = {Bird(Tweety), ∀x.Penguin(x) → ¬Canfly(x)}

Since Canfly(Tweety) is consistent with what is known, we can apply
the default to Tweety and defeasibly derive Canfly(Tweety). That this
inference is indeed defeasible becomes apparent if also Penguin(Tweety) is
added to the facts: then ¬Canfly(Tweety) is classically entailed by what
is known and the consistency check for applying the default fails, for which
reason Canfly(Tweety) cannot be derived any more.
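The interplay between the consistency check and newly added facts can be made concrete in a small Python sketch. This is a toy illustration of mine, not part of Reiter's formalism: formulas are restricted to ground literals (with "-" for negation), classical entailment is approximated by closing the facts under a single hard rule, and the function names are invented for the occasion.

```python
# Toy sketch of applying one ground default, assuming literals like
# "Bird(Tweety)" with "-" as classical negation and hard rules given
# as (premise, conclusion) pairs of literals.

def closure(facts, rules):
    """Close a set of ground literals under the hard rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def negate(literal):
    return literal[1:] if literal.startswith("-") else "-" + literal

def apply_default(facts, rules, prerequisite, justification, consequent):
    """Fire 'prerequisite : justification / consequent' only if the
    justification is consistent with what is known."""
    known = closure(facts, rules)
    if prerequisite in known and negate(justification) not in known:
        return known | {consequent}
    return known

# Penguin(Tweety) -> -Canfly(Tweety), the hard rule of F.
rules = [("Penguin(Tweety)", "-Canfly(Tweety)")]
default = ("Bird(Tweety)", "Canfly(Tweety)", "Canfly(Tweety)")

print("Canfly(Tweety)" in apply_default({"Bird(Tweety)"}, rules, *default))
# True: the default applies

print("Canfly(Tweety)" in apply_default(
    {"Bird(Tweety)", "Penguin(Tweety)"}, rules, *default))
# False: the consistency check now fails and the inference is retracted
```

The second call shows the nonmonotonicity: enlarging the facts with Penguin(Tweety) retracts a conclusion that was derivable before.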
² Since this chapter focuses on formal aspects rather than on applications, I shall
write the examples in more formal notation than in the other chapters.

The precise formalization of what it is for a formula to 'hold' is given by
the definition of an extension. As was said, defaults serve to extend classical
first-order theories to beliefs which may be reasonably held on the basis of
these theories but which are not classically entailed by them. Informally,
this works in the following way. Relative to a given default theory (F, Δ)
new beliefs can be derived by using ground instances of any default of Δ one
wishes, as long as consistency is preserved. If as many defaults as possible
are thus used, i.e. if applying any new default would cause an inconsistency,
sets result which are called extensions of (F, Δ). These sets can be seen
as possible maximal sets of beliefs which may be held on the basis of the
facts F and default assumptions Δ. Extensions are defined by way of the
following fixed point definition (Reiter, 1980, Th. 2.1).
Definition 4.1.2 (extension) Let E be a set of closed wff's and (F, Δ) be
a closed default theory. Define a sequence of sets E₀, ..., Eᵢ, ... such that

E₀ = F

and for i ≥ 0

Eᵢ₊₁ = Th(Eᵢ) ∪ {ω | α : β₁, ..., βₘ / ω ∈ Δ
                 where α ∈ Eᵢ and ¬β₁, ..., ¬βₘ ∉ E}

Then E is an extension of (F, Δ) iff

E = ⋃_{i=0}^∞ Eᵢ.
At first sight this definition seems to construct an extension 'bottom-up',
starting with the facts F and at each following step applying any default
of which the prerequisite is in the extension as thus far created, and the
justifications are consistent with that extension. However, this constructive
appearance is deceptive: the justifications should not be consistent with the
extension as thus far created, which is Eᵢ, but with the end product of the
construction, which is E. And the problem is that this end product has to
be guessed in advance, and that there is no automatic procedure that is
guaranteed to provide the right guess. This nonconstructive 'lookahead' is
what gives the definition its fixed point character.
Since defaults can conflict, a default theory may have more, mutually
inconsistent, extensions. Consider the following default formalization of
Example 3.1.4 (the variables in the defaults are left implicit and arithmetic
is assumed to be contained in F).

d₁ = Kill ∧ Intentional : Maximum = 15 / Maximum = 15

d₂ = Kill ∧ life_and_death_duel : Maximum = 12 / Maximum = 12

F: {life_and_death_duel → Intentional, Kill,
    life_and_death_duel, ¬(Maximum = 12 ∧ Maximum = 15)}
Δ: {d₁, d₂}
It must now be chosen which default to apply, since applying both of them
makes the extension inconsistent. Now if d₁ is applied, then Maximum = 12
is no longer consistent with the resulting extension and the application
of d₂ is blocked. If, on the other hand, d₂ is applied first, d₁ is blocked.
Thus, this default theory has two extensions:

E₁ = Th(F ∪ {Maximum = 15})
E₂ = Th(F ∪ {Maximum = 12})

Both are reasonable sets of beliefs on the basis of (F, Δ), but they cannot
be held simultaneously: it is necessary to make a choice between them.
Because of the possibility of multiple extensions, two alternative conse-
quence notions can be defined for default logic, depending upon whether a
formula is in some or in all extensions of a default theory. Generally these
two kinds of reasoning are called credulous and sceptical reasoning, and
they can also be defined for many other nonmonotonic logics.

Definition 4.1.3 (credulous and sceptical consequence). Let (T, Δ) be a
default theory. Then
- (T, Δ) |~s φ iff φ is in all extensions of (T, Δ) (sceptical consequence)
- (T, Δ) |~c φ iff φ is in some extension of (T, Δ) (credulous consequence)

The idea behind sceptical consequence is to capture precisely those conclu-
sions which, on the basis of the present knowledge, cannot be challenged
at all: if a conclusion φ follows sceptically from (T, Δ), no argument at
all can be set up against φ on the basis of (T, Δ). Of course, a different
matter is whether φ can be challenged under additional factual knowledge.
Credulous consequence, on the other hand, is very permissive: if φ is a
credulous consequence of (T, Δ), this only means that there is at least one
possible state of affairs compatible with the facts of T and generated by
the defaults of Δ such that in this state of affairs φ holds.
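The two consequence notions can be illustrated on the two-extension example above with a Python sketch. It is a simplification of mine, not a default-logic prover: formulas are ground literals ('Max15' abbreviating Maximum = 15, 'Duel' abbreviating life_and_death_duel), Th is approximated by closure under hard rules with the exclusiveness of the two maxima encoded as two such rules, and only normal defaults are handled, for which generating extensions by applying applicable defaults in every order is adequate.

```python
# Toy sketch: all extensions of a theory of ground *normal* defaults,
# with formulas restricted to literals ("-" = negation). Defaults are
# (prerequisites, consequent) pairs; for normal defaults the
# justification equals the consequent.

def closure(lits, rules):
    lits = set(lits)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= lits and head not in lits:
                lits.add(head)
                changed = True
    return lits

def neg(l):
    return l[1:] if l.startswith("-") else "-" + l

def extensions(facts, rules, defaults):
    """Apply applicable defaults in every order; the closed end results
    are the extensions (adequate for normal defaults)."""
    results = []
    def expand(current):
        fired = False
        for pre, cons in defaults:
            if pre <= current and cons not in current and neg(cons) not in current:
                fired = True
                expand(closure(current | {cons}, rules))
        if not fired and current not in results:
            results.append(current)
    expand(closure(facts, rules))
    return results

rules = [({"Duel"}, "Intentional"),                # duel -> intentional
         ({"Max15"}, "-Max12"), ({"Max12"}, "-Max15")]
defaults = [({"Kill", "Intentional"}, "Max15"),    # d1
            ({"Kill", "Duel"}, "Max12")]           # d2
exts = extensions({"Kill", "Duel"}, rules, defaults)
print(len(exts))                                   # 2

sceptical = set.intersection(*exts)   # in all extensions
credulous = set.union(*exts)          # in some extension
print("Max15" in sceptical, "Max15" in credulous)  # False True
```

On this theory Maximum = 15 is thus a credulous but not a sceptical consequence, exactly as Definition 4.1.3 predicts.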
The observation that in case of multiple extensions sometimes only one
of them is intended led Reiter & Criscuolo (1981) to use so-called seminor-
mal defaults, giving up the idea of Reiter (1980) that normal defaults are
the only ones that are needed. A default is normal iff the justification is
identical to the conclusion, i.e. iff it is of the form

α(x) : ω(x) / ω(x)

and it is seminormal iff it is of the form

α(x) : β(x) ∧ ω(x) / ω(x)

Otherwise it is nonnormal.
Typically, β(x) will be the negation of the prerequisite of a conflict-
ing default which should have priority (the reader will recognize the
specific-exception-clause method of Section 3.1). Thus, if the negation of
life_and_death_duel is added to the justification of the default formaliza-
tion of 287 Sr in the following way,

d₁′ = Kill ∧ Intentional : ¬life_and_death_duel ∧ Maximum = 15 / Maximum = 15

then the revised default theory (F, {d₁′, d₂}) has only one extension, viz.
E₂, since d₁′ is blocked by the fact that its justification, now entailing
¬life_and_death_duel, is not consistent any more with the facts. Accord-
ing to Dutch law E₂ is indeed the intended extension, since 154-(4) Sr is
meant as an exception to 287 Sr. Note that since ¬life_and_death_duel
is added to the justification of d₁, instead of to its prerequisite, it is not
necessary to prove the absence of the exceptional circumstance in order
to apply the general rule. For this reason this method is in principle a
candidate for capturing the defeasibility of legal rules. In Chapter 5 I shall
go in detail into the question whether this method is indeed satisfactory in
all cases.
Many modifications and extensions of default logic have been pro-
posed, for instance by Lukaszewicz (1990) and by Brewka (1991b; 1994a;
1994c), whose work will be discussed in Chapter 9. Furthermore, Ether-
ington (1988) has developed a model-theoretic semantics for default logic,
which, however, only gives a semantic characterization of extensions, and
not of defaults themselves. Informally, the idea is that all subsets of the
set of models for the first-order theory F are ordered as to how well they
satisfy the defaults. Each maximal set in this ordering is the set of models
for some extension.
Although its intuitive clarity is very appealing, default logic also has
some recognized drawbacks. One of them is its computational complexity,
a main source of which is the fact that the definition of an extension is,
as explained above, not constructive. The problem is that in applying the
consistency check to see whether adefault can be used to enlarge an exten-
sion, it is not sufficient to inspect the extension as it has been constructed
so far by other defaults; instead the content of the entire extension has
to be guessed beforehand, and this cannot easily be mechanized. More
precisely, it is known that the extensions of default logic are not 'recursively
enumerable': i.e. the problem of determining whether a given formula is in
some extension of a given default theory (Reiter, 1980, Th. 4.9) is not
semi-decidable.
A further problem is that in default logic it is impossible to reason
about defaults. Consider the next modification of Example 3.1.7 about the
capacity to perform legal acts.
d₁ = Minor(x) : ¬Has_capacity(x) / ¬Has_capacity(x)

d₂ = Mentally_handicapped(x) : ¬Has_capacity(x) / ¬Has_capacity(x)

F: {Minor(John) ∨ Mentally_handicapped(John)}
Δ: {d₁, d₂}

We would like to infer ¬Has_capacity(John), but since either Minor(John)
or Mentally_handicapped(John) has to be definitely known in order to
apply one of the defaults, we cannot do so. The problem is that it is
impossible to derive from d₁ and d₂ the following default.

d₃ = Minor(x) ∨ Mentally_handicapped(x) : ¬Has_capacity(x) / ¬Has_capacity(x)
A final problem is that only normal default theories are guaranteed to have
extensions, which means that for semi- and nonnormal default theories the
notion of logical consequence is not always defined. The problem is that
with semi- and nonnormal defaults it is possible to put so many constraints
on the applicability of a default that the consequent of a default conflicts via
a chain of other defaults with its own justification. A well-known example
is the following one, where F = ∅ and Δ consists of the following three
defaults.

: p ∧ ¬q / p        : q ∧ ¬r / q        : r ∧ ¬p / r

As Etherington (1988, pp. 84-5) remarks: "... applying any one default leaves
one other applicable. Applying any two, however, results in the denial of
the nonnormal part of the justifications of one of them". The problem is
that defaults cannot be left unapplied: Definition 4.1.2 requires that at
each step Eᵢ₊₁ all defaults are applied of which the prerequisite is in Eᵢ
and the justification consistent with E.
Notwithstanding these problems, default logic is one of the best known
formalizations of nonmonotonic reasoning, probably because of the intu-
itive clarity with which it can be presented and applied to common-sense
reasoning. For example, as will be shown in Chapter 6, it corresponds
rather naturally to the adversarial aspect of legal reasoning, consisting of
generating and comparing alternative arguments for and against a certain
conclusion.

4.1.2. AUTOEPISTEMIC LOGIC

A second way of jumping to conclusions in case of incomplete information


is to attach consequences to explicit statements expressing one's ignorance.
For example, if a person holds that a bird can fly unless it is known to be a
penguin, then if the person observes that s/he does not know that the bird
Tweety is a penguin, s/he can infer that Tweety can fly. To make this notion
of not knowing or believing a fact precise, various formalisms have been
proposed which make use of modal operators interpreted in an epistemic
way. The best known is Moore's (1985) autoepistemic logic (AEL), which
arose out of an attempt to improve McDermott & Doyle's NML.
AEL tries to capture the beliefs³ of an ideal introspective reasoner. The
idea is that on the basis of the premises or 'base beliefs' of the reasoner a set
of formulas is constructed which contains everything the reasoner believes,
not only about the world, but also about its own beliefs. The statements
expressing belief or ignorance are represented with the help of a modal
operator L, intuitively standing for 'it is known that'. The introspective
observations can subsequently be used to jump to conclusions. For example,
if the default 'birds fly' is represented as

(b ∧ ¬Lp) → f

and not knowing of some bird that it is a penguin is expressed by

¬Lp

then it can by simple propositional reasoning be inferred that the bird


flies. Of course, to capture the defeasible nature of the conclusion, L has
to be interpreted in such a way that adding the information p makes the
statement ¬Lp false. This can be achieved in the following way.
If a set T is to contain all an ideal introspective reasoner knows, it
should be expected to have some formal properties. First of all it should be
deductively closed, since an ideal reasoner does not only believe his or her
explicitly stated beliefs, but also everything which logically follows from
these beliefs. Furthermore, for the introspective statements the following
two conditions should hold:
- If φ ∈ T, then Lφ ∈ T; and
- If φ ∉ T, then ¬Lφ ∈ T.
Informally this says that if a person knows φ, then that person also knows
that s/he knows φ, and if the person does not know φ, then s/he at least
knows that s/he does not know φ. Sets which have these three properties
are called stable sets. In addition to being stable a belief set should be
grounded in the reasoner's base beliefs. All this is captured by the following
fixed point definition.
Definition 4.1.4 T is a stable expansion of a set of base beliefs B iff

T = Th(B ∪ {Lφ | φ ∈ T} ∪ {¬Lφ | φ ∉ T})

³ Since in AEL 'knowing' a fact is ultimately based on the premises provided by the
reasoner, generally no distinction is made between 'knowing' and 'believing' a fact.

It can now be explained why AEL is nonmonotonic. The reason is that the
question whether a statement Lφ is in a stable expansion T of a base set
B depends both on what is and on what is not in B. Assume a reasoner
considering the question whether a given bird flies provides the following
base beliefs.

B = {b, (b ∧ ¬Lp) → f}

Then ¬Lp is in T since p is not in T; furthermore, since B is included in
T and since T is deductively closed, by simple modus ponens also f is in
T. However, if p is added to the belief set B, then T contains Lp instead of
¬Lp and f will not be in T any more.
AEL has much in common with default logic. Among other things, stable
expansions are similar to extensions of default logic: for example, a set of
base beliefs can have multiple stable expansions. Consider the following set
of base beliefs with two atomic facts and two defeasible rules, representing
the Nixon diamond (cf. Reiter & Criscuolo, 1981).
Quaker ∧ ¬L¬Pacifist → Pacifist
Republican ∧ ¬LPacifist → ¬Pacifist
Quaker, Republican

Since neither Pacifist nor ¬Pacifist is known, there is an expan-
sion with ¬L¬Pacifist and therefore with Pacifist, but also one with
¬LPacifist and therefore with ¬Pacifist. On the other hand, there is
no expansion with both Pacifist and ¬Pacifist, since as soon as, for
example, the first default is applied, LPacifist holds, and the antecedent
of the second default is false.
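The guess-and-check character of Definition 4.1.4 can be illustrated with a Python sketch of the Nixon diamond. It is my own heavy simplification, not Moore's construction: formulas are ground literals ("-" for negation, so "-Pacifist" stands for ¬Pacifist), Th is approximated by rule application, and only the finitely many formulas occurring under L are guessed; a guess survives iff Lφ was assumed exactly for the formulas φ that actually end up derived.

```python
from itertools import product

# Toy guess-and-check sketch of stable expansions over ground literals.
# Each modal formula a gets a guessed status: believed ("La" assumed)
# or not believed ("-La" assumed).

def consequences(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def stable_expansions(facts, rules, modal_formulas):
    expansions = []
    for guess in product([True, False], repeat=len(modal_formulas)):
        assumed = {("L" + a) if g else ("-L" + a)
                   for a, g in zip(modal_formulas, guess)}
        t = consequences(set(facts) | assumed, rules)
        # stability: a was guessed believed iff a is actually in t
        if all((a in t) == g for a, g in zip(modal_formulas, guess)):
            expansions.append(t)
    return expansions

# The Nixon diamond: two defeasible rules and two facts.
rules = [({"Quaker", "-L-Pacifist"}, "Pacifist"),
         ({"Republican", "-LPacifist"}, "-Pacifist")]
exps = stable_expansions({"Quaker", "Republican"}, rules,
                         ["Pacifist", "-Pacifist"])
print(len(exps))   # 2: one expansion with Pacifist, one with -Pacifist
```

The guess believing neither formula is rejected because both rules would then fire, contradicting the guess; the guess believing both is rejected because neither rule fires. Only the two expansions described above survive.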
AEL also shares a drawback with default logic. The fixed point definition
of a stable expansion is not constructive, for the same reason as for the
extensions of default logic: therefore AEL is, like default logic, computa-
tionally unattractive. Finally, it should be noted that, like default logic,
AEL has also been extended and adapted in various ways (see the overviews
of footnote 1).
AEL will not be discussed in the rest of this book; I confine myself
to briefly indicating the prima facie relevance of AEL for modelling legal
reasoning. It particularly makes possible a natural formalization of rules
which attach a normative consequence to certain facts 'provided that the
law does not state otherwise', since in such norms the law introspects on its
own content. For example, Section 3:32-(1) BW (cf. Example 3.1.7 above)
could in propositional AEL be formalized as

Person ∧ ¬L¬Has_capacity → Has_capacity
Then, if the base set does not contain other norms and facts from which
¬Has_capacity can be derived, ¬L¬Has_capacity is in T and the rule can
be applied. Rules on legal evidence procedures, such as the principle that
any suspect is held innocent as long as his/her guilt has not been proven,
are also of an introspective nature. A simplified formalization of this rule
in AEL is

¬LGuilty → ¬Guilty

where the truth of ¬LGuilty depends on introspection upon the base
set of initial facts, criminal evidence rules, and scientific or common-sense
empirical observations.

4.1.3. MINIMIZATION

A third basic approach to jumping to conclusions in the absence of excep-
tional information is to minimize the amount of information, in order to
make the description of the world closed; if this is done, then a fact not
known to be true can be assumed to be false. Among other things, this can
be used to model the assumption that the world is as normal as possible,
since exceptional facts which are not known to be true can be regarded as
false, after which the jump to the default conclusion can be made.
This idea has two syntactic variants: the first is to interpret the negation
sign as a nonprovability operator, as is done in logic programming, and the
second one is to express as an explicit axiom in the object language the
assumption that only those objects have a certain property of which the
domain theory says that they have that property; this is called circum-
scription.

NEGATION AS FAILURE

In the 70's in various areas of computer science the idea arose to interpret
the negation sign as 'failure to derive the opposite'. In one of these areas,
the theory of deductive databases, this was motivated by reasons of efficient
storage of data. It is often impractical to store all negative facts in a
database; the standard example is an airline database: if it had to contain all
connections which do not exist, it would become much too large. Therefore
it gives the answer 'there is no flight from Amsterdam to Miami' if it cannot
derive from the database that there is such a flight.
This kind of reasoning is correct under the Closed-World Assump-
tion (CWA), which says that all positive facts are actually stored in
the database. If this assumption is correct, we can safely say that a fact
which cannot be derived from the database is false. Obviously, this kind of
reasoning is nonmonotonic, since if additional positive facts become known,
the assumption turns out to be false and the negations of the additional
facts cannot be derived any more. Often the CWA is expressed as an
informal notion, but several attempts have been made to give a formal
account. Reiter (1978) proposed the following formalization.
Definition 4.1.5 (Reiter's closed-world assumption). Assume DB is a set
of first-order formulas and φ an atomic first-order formula. Then

DB ⊢CWA ¬φ iff not DB ⊢ φ

This works fine if all facts in the database are atomic: for example, if DB =
{Flight(Amsterdam, London)}, then DB ⊢CWA ¬Flight(Amsterdam,
Miami). However, if the language is extended, then problems can arise: for
instance, if DB = {Pa ∨ Qa}, then both DB ⊢CWA ¬Pa and DB ⊢CWA
¬Qa, but this is inconsistent with Pa ∨ Qa.
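The disjunction problem can be checked mechanically in a small Python sketch (my own illustration): classical entailment over a propositional database is decided by enumerating truth assignments, and the naive CWA then adds the negation of every atom that is not entailed.

```python
from itertools import product

# Sketch of the naive CWA over a propositional database. db and formula
# are Python-evaluable strings over the given atoms (e.g. "Pa or Qa").

def entails(db, formula, atoms):
    """Classical entailment by enumerating all truth assignments."""
    for values in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(eval(f, {}, env) for f in db) and not eval(formula, {}, env):
            return False
    return True

atoms = ["Pa", "Qa"]
db = ["Pa or Qa"]

# Neither atom is entailed, so the naive CWA adds both negations...
print(entails(db, "Pa", atoms), entails(db, "Qa", atoms))   # False False

# ...and the augmented database is inconsistent (it entails everything):
print(entails(db + ["not Pa", "not Qa"], "False", atoms))   # True
```

With a purely atomic database this problem cannot arise, which is exactly why the Horn fragment discussed next is unproblematic for the naive CWA.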
To deal with this problem, more sophisticated formalizations of reason-
ing with the CWA have been developed. Circumscription, to be discussed
below, can be regarded as one of them. Now I discuss the largest fragment
of the language of first-order logic in which the above 'naive' CWA is
unproblematic. This fragment plays an important role in a new use of the
language of first-order logic in computer science: until the 70's it was used
only to say things about programs and programming languages, but then
the idea arose to use logic itself as a programming language. PROLOG was
one of the results of this development.
The basic fragment of logic programming is the part consisting only of
Horn clauses.
Definition 4.1.6 (Horn clause logic) A Horn clause is of the form

∀x₁, ..., xⱼ. L₁ ∧ ... ∧ Lₙ → Lₘ

where all Lᵢ are atomic formulas, of which the variables are among x₁, ..., xⱼ.
Normally the universal quantifier is left implicit in the notation. The
antecedent of a Horn clause is called the body and its consequent is called
the head. Atomic formulas Lᵢ are a border case of Horn clauses since they
can be regarded as a Horn clause with a valid formula as body: ⊤ → Lᵢ.
Negated atoms are also a border case of Horn clauses since ¬Lᵢ can be
written as a clause with a contradiction as head: Lᵢ → ⊥. A set of Horn
clauses is called a logic program, and will be denoted by a possibly indexed
Π.
One reason why Horn clauses are interesting for logic programming is that
for this fragment an efficient theorem prover has been developed, viz. so-
called SLD-resolution (cf. Lloyd, 1984). It is also known that if a logic
program contains only Horn clauses, Reiter's formalization of the CWA is
unproblematic, in that it does not give rise to inconsistencies. Accordingly,
in Horn programs queries ¬Lᵢ? are answered positively iff Lᵢ cannot be
derived by SLD-resolution. Observe that the disjunction Pa ∨ Qa causing
problems for the naive CWA cannot be written as a Horn clause.
In Horn clause logic Reiter's CWA has an interesting semantic justi-
fication. The idea is to change the definition of semantic entailment: in
standard logic a conclusion follows from a set of premises just in case the
conclusion is true in all models of the premises; now, however, we are
interested in only some of these models, viz. those in which the extensions
of the predicates are minimal. A very simple example illustrates this idea.
Consider the program Π = {Pa}. Π has many models, in most of which a
lot more objects than a have the property P. However, it is obvious that
in all models of Π in which as few objects as possible have the property P,
only a has this property; therefore, in those minimal models Pb is false for
every object b which is not equal to a. Now the idea is to capture the CWA
by saying that a conclusion follows from a logic program iff it is true in all
minimal models of the program.
One technicality is left to be explained: the domain of a logic program
is not just an ordinary set of objects, but it is constructed in the following
way: it consists of all the names which occur in the language of the logic
program. The set with all these names is called the program's Herbrand
Universe (below I make the simplifying assumption that the language does
not contain function symbols, an assumption which is generally not made).
Because of this construction it is ensured that if an existential formula like
∃x.Px follows from a program, there is also a name n for the object making
this formula true such that Pn also follows from the program, which makes
it possible to give answer substitutions for each successful query.
This leads to the following definitions of a minimal model and of minimal
entailment.
Definition 4.1.7 (smallest Herbrand model). The smallest Herbrand model
μ(Π) of a Horn logic program Π is the model satisfying the following
conditions.⁴
1. The domain of μ(Π) is its Herbrand Universe
2. For every individual constant a: I(a) = a
3. For all predicate letters P of Π and all models M of Π:
   I_μ(Π)(P) ⊆ I_M(P).

⁴ I(a) and I(P) denote the interpretation of the term a and the predicate P.

The phrase 'smallest' Herbrand model is justified by the fact that the
minimal Herbrand models of Horn programs have been shown to be unique;
moreover, they have been shown to always exist. In such a case, where the
intended model is unique, sometimes the term canonical model is used.
Next the new entailment notion for Horn clauses can be defined.
Definition 4.1.8 (minimal entailment for Horn clause logic)

Π ⊨minH φ iff φ is true in μ(Π).

As already indicated, nonmonotonicity of a negative conclusion ¬Lᵢ is
caused by the fact that information can be added to Π which permits
the derivation of Lᵢ. For example, if Π = {Flight(Amsterdam, London)}, then
Π ⊨minH ¬Flight(Amsterdam, Miami), but if also Flight(Amsterdam,
Miami) is added to Π, then this derivation is obviously not valid any
more. Semantically this can be understood as follows: as long as Π contains
only the first flight, the model in which the extension of the predicate
Flight is {(Amsterdam, London)} is the smallest one, but if the second
flight has been added to the program, this model ceases to be a model of
Π, since it makes the atom Flight(Amsterdam, Miami) false; the small-
est model is now the model with I(Flight) = {(Amsterdam, London),
(Amsterdam, Miami)}, which makes both atomic formulas of Π true.
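For a ground, function-free Horn program the smallest Herbrand model can be computed by iterating the familiar immediate-consequence construction; the Python sketch below (the (body, head) representation is my own) reproduces the flight example, with CWA answers read off as absence from the least model.

```python
# Sketch of the smallest Herbrand model of a ground, function-free Horn
# program: repeatedly add every head whose body atoms are all already
# in the model, until nothing new is derived.

def least_model(program):
    """program: list of (body, head) pairs of ground atoms; a fact is a
    pair with an empty body."""
    model = set()
    while True:
        derived = {head for body, head in program
                   if all(atom in model for atom in body)}
        if derived <= model:
            return model
        model |= derived

flights = [((), "Flight(Amsterdam,London)")]
print("Flight(Amsterdam,Miami)" not in least_model(flights))
# True: the CWA answer 'there is no such flight'

# Adding the flight enlarges the smallest model and retracts that answer.
flights.append(((), "Flight(Amsterdam,Miami)"))
print("Flight(Amsterdam,Miami)" not in least_model(flights))
# False
```

The second call mirrors the semantic explanation above: the old minimal model is no longer a model of the enlarged program, and the negative conclusion disappears with it.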
In Horn clause logic only negative conclusions are nonmonotonic; the
reason is that, since the smallest Herbrand model is known to be the
intersection of all models of a Horn program, it is a submodel of the models
of all Horn extensions of the program, and therefore positive conclusions
are monotonic, since they will also be true in all models of such extensions.
For Definition 4.1.8 this means that if φ is an atom, minimal and classical
entailment coincide. This becomes different if negation is allowed in the
body of a clause, which leads to a generalization of Horn clause logic.
Definition 4.1.9 (General logic programs). A literal is an atomic formula
or its negation. A general clause is a formula of the form

∀x₁, ..., xⱼ. L₁ ∧ ... ∧ Lₙ → Lₘ

where L₁, ..., Lₙ are literals and Lₘ is an atomic formula, and the variables
of L₁, ..., Lₘ are among x₁, ..., xⱼ. A general logic program is a set of
general clauses.
It is important to note that clauses do not only have a declarative but
also a procedural aspect (cf. Section 1.3.2). As program rules clauses are
directional: they are meant to be about their head: the body of the clause
is seen as something which can be used to derive instances of the head.
This is particularly significant if negation is allowed in the antecedent of a
clause: as was said, negation is procedurally interpreted as 'failure to derive
the opposite' and because of this it is possible that programs with the same
classical semantics have different procedural effects. Consider, for example,
the programs Π₁ = {¬Pa → Qa} and Π₂ = {¬Qa → Pa}: these formulas
are classically equivalent, but since ¬ procedurally stands for negation as
failure, from Π₁ Qa can be derived, but from Π₂ instead Pa can be derived.
80 CHAPTER 4

Note that both derivations are classically invalid. Note finally that, again
because of the procedural interpretation of negation, it does not make sense
to allow negations in the head of a clause: 'if φ then ψ is not derivable' has
no sensible interpretation.
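The procedural asymmetry can be simulated with a small Python sketch (my own illustration, not part of the book): a naive SLDNF-style prover for ground programs, in which a negated body literal succeeds exactly when its atom fails to be derivable.

```python
def derives(program, goal, visited=frozenset()):
    # Try to prove the atom `goal` from `program`, a list of
    # (body, head) pairs; each body is a list of (positive, atom)
    # literals.  A negated literal succeeds by negation as failure:
    # its atom must itself fail.  The `visited` set is a crude loop
    # check, adequate only for non-looping programs like those below.
    if goal in visited:
        return False
    for body, head in program:
        if head == goal and all(
                derives(program, a, visited | {goal}) if pos
                else not derives(program, a, visited | {goal})
                for pos, a in body):
            return True
    return False

# P1 = {not-Pa -> Qa} and P2 = {not-Qa -> Pa} are classically
# equivalent, but procedurally they derive different atoms.
p1 = [([(False, "Pa")], "Qa")]
p2 = [([(False, "Qa")], "Pa")]
print(derives(p1, "Qa"), derives(p1, "Pa"))  # True False
print(derives(p2, "Pa"), derives(p2, "Qa"))  # True False
```

Neither derivation is classically valid, which is exactly the point made in the text.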
The first attempt to give a semantic account of negation as failure was
Clark's (1978) predicate completion, a method which I have already applied
in the formalizations in Section 3.1.3. Predicate completion can also be
regarded as an alternative formalization of the Closed-World Assumption.
The idea of the completion of a predicate P is to add extra information
to the premises expressing that the objects of which the premises say that
they have the property P, are the only ones which have this property. In
fact, this method makes sufficient conditions for deriving atoms with P
also necessary ones. More precisely, assume we want to complete an m-
place predicate P. Then if φ1, ..., φn are the bodies of all the clauses with
P(x1, ..., xm) as their head, then the following completion formula for P
is added to the premises (note that ¬ stands for classical negation):

¬φ1 ∧ ... ∧ ¬φn → ¬P(x1, ..., xm)

Now the idea is that if a logic program is extended with completion formulas
for all its predicates, a conclusion follows with SLDNF resolution (i.e. SLD
resolution augmented with the negation-as-failure rule) from the program
iff it follows deductively from the completed program.
The completion semantics for negation as failure has some problems,
one of which is that the completion of a consistent program may be incon-
sistent. For example, the completion of the program {¬Px → Px} is the
inconsistent set {¬Px ≡ Px}.
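The inconsistency can be checked mechanically. Below is a brute-force propositional sketch (my own; as an extra assumption it also completes atoms with no clause at all to false, following the closed-world reading): the program {P} has a consistent completion, while the completion of {¬P → P} is unsatisfiable.

```python
from itertools import product

def completion_satisfies(program, atoms):
    # Clark-style completion for a propositional program, given as a
    # dict mapping each head atom to a list of clause bodies (each
    # body a list of (positive, atom) literals).  Returns a checker:
    # does a truth assignment satisfy program + completion formulas?
    def holds(lit, v):
        positive, atom = lit
        return v[atom] if positive else not v[atom]
    def satisfies(v):
        for head, bodies in program.items():
            body_vals = [all(holds(l, v) for l in body) for body in bodies]
            if any(body_vals) and not v[head]:   # an original clause fails
                return False
            if not any(body_vals) and v[head]:   # completion: no body -> ¬head
                return False
        # assumption: atoms without any clause are completed to false
        return not any(v[a] for a in atoms if a not in program)
    return satisfies

def consistent(program, atoms):
    sat = completion_satisfies(program, atoms)
    return any(sat(dict(zip(atoms, vals)))
               for vals in product([False, True], repeat=len(atoms)))

# {P}: completion P ≡ true is consistent;
# {¬P -> P}: completion forces P ≡ ¬P, hence inconsistency.
print(consistent({"P": [[]]}, ["P"]))              # True
print(consistent({"P": [[(False, "P")]]}, ["P"]))  # False
```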
To overcome the problems of predicate completion more recent devel-
opments have focused on a model-theoretic semantics for negation as
failure. Note that we cannot simply use the minimal entailment notion
of Definition 4.1.8, since general logic programs do not always have a
unique smallest Herbrand model. Consider the following counterexample.
The program Π = {¬Pa → Qa} has more than one model in which the
extensions of the predicates are minimal, viz. M1 with I(P) = {a} and
I(Q) = ∅, and M2 with I(P) = ∅ and I(Q) = {a}. However, M2 is the
intended model, since by negation as failure ¬Pa can be derived, which
makes Qa derivable. The problem is that classically ¬Pa → Qa is equivalent
to the disjunction Pa ∨ Qa, whereas procedurally the body and the head of
the clause have a different status.
Consequently, ways of enforcing the intended minimal model as the
canonical one have been investigated. For a certain syntactical class of so-
called stratified programs a semantics has indeed been developed which
results in a unique minimal model, by building up the intended model
LOGICS FOR NONMONOTONIC REASONING 81

iteratively according to the directional procedural effect of the clauses of
a program. The idea of stratification is that 'recursion through negation'
is avoided: if all clauses are chained together, then it may not occur that
somewhere earlier in the chain a negated atom occurs in the body of a
clause while later on in the chain the atom itself occurs in the head of a
clause. Intuitively, this is understandable: if at some point a positive fact
Qa has been derived since another fact Pa cannot be derived, then it does
not make sense to let Pa later on be derivable on the basis of Qa, since the
reason why Qa could be derived was precisely that Pa was not derivable.
In formulas this becomes

Π = {¬Pa → Qa, Qa → Pa}

Observe that with SLDNF-resolution this is a looping program.
The idea behind the formal definition of a stratified program is that
it must be possible to divide the predicate letters of the program into
a total order of disjoint classes or strata, in a way which expresses that
negated conditions depend on lower strata only: thus 'recursion through
negation' is avoided. The following explanation of stratification is taken
from Etherington (1988, p. 61). That a program is stratified means that it
is possible to assign each predicate letter to a unique stratum in such a way
that for each clause

P1(x) ∧ ... ∧ Pn(x) ∧ ¬Q1(x) ∧ ... ∧ ¬Qm(x) → P(x)

in the program, stratum(P) is greater than or equal to stratum(Pi) (where
1 ≤ i ≤ n) and strictly greater than stratum(Qj) (where 1 ≤ j ≤ m).
It can now be explained why the program {¬Pa → Qa, Qa → Pa}
is not stratifiable. Because of (¬Pa → Qa), Q must be in a strictly greater
stratum than P, since P occurs negated in the body of a clause with Q as
the head. However, because of (Qa → Pa), P should be in a stratum which
is at least equal to the stratum of Q, since Q occurs positively in the body
of a rule with P in the head; obviously, these two requirements cannot be
satisfied simultaneously.
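For ground programs the stratum assignment can be computed by iteratively raising strata. The following Python sketch (my own, an atom-level simplification of the predicate-level definition above) declares a program unstratifiable when the strata keep growing beyond the number of atoms, which signals recursion through negation.

```python
def stratifiable(program):
    # program: list of (body, head) pairs, body a list of
    # (positive, atom) literals.  Repeatedly enforce
    # stratum(head) >= stratum(b) for positive body atoms and
    # stratum(head) > stratum(b) for negated ones.  A stratifiable
    # program stabilizes with strata bounded by the number of atoms.
    atoms = ({a for body, _ in program for _, a in body} |
             {head for _, head in program})
    stratum = {a: 0 for a in atoms}
    for _ in range(len(atoms) * len(atoms) + 1):
        changed = False
        for body, head in program:
            for positive, a in body:
                need = stratum[a] + (0 if positive else 1)
                if stratum[head] < need:
                    stratum[head] = need
                    changed = True
        if not changed:
            return max(stratum.values(), default=0) <= len(atoms)
    return False  # strata never stabilized: recursion through negation

loop = [([(False, "Pa")], "Qa"), ([(True, "Qa")], "Pa")]
ok   = [([(False, "Pa")], "Qa")]
print(stratifiable(loop))  # False
print(stratifiable(ok))    # True
```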
For the semantics of stratified logic programs the reader is referred to
e.g. Bidoit (1991). I confine myself to a few general comments. The first is
that the semantics for stratified logic programs defines, like the semantics
for Horn clause logic, a unique canonical model; therefore, the definition of
minimal entailment ⊨min of logic programs is the same as Definition 4.1.8
above, if μ(Π) is replaced by a symbol for the defined canonical model.
Unlike in the restriction to Horn clauses, however, for general logic programs
also positive conclusions are nonmonotonic. In syntactical terms this is so
because the derivation of a positive atom may depend on the failure to prove
another atom, as in the logic program Π = {¬Pa → Qa}; Π ⊨min Qa but,
obviously, adding Pa to this program makes the derivation of Qa invalid.

The semantical reason for this is that, although the canonical model of
stratified programs is unique in being the canonical model, it is, as shown
above, not always the unique minimal model of the program, for which
reason it is not the intersection of all models of the program.
A second remark is that the significance of the idea of stratification is
not restricted to logic programming; also when applied to other nonmono-
tonic formalisms it can identify suitable subsets of the language which are
computationally well-behaved (cf. 4.2 below; see also Etherington, 1988,
pp. 61, 88 and Brewka, 1991a, pp. 91-93 for more comments on this issue).
Finally, it should be remarked that the stratified semantics for general
logic programs is not universal; it gives a semantic characterization for
stratified programs only. Because of this, research into the semantics of logic
programming has not stopped with stratification; over the past few years
many interesting developments have taken place. For example, Przymusin-
ski (1988) has developed a 'perfect model' semantics based on so-called
prioritized circumscription, which is a variant of another minimization
approach, to be discussed next. Furthermore, Gelfond & Lifschitz (1988)
have developed stable model semantics, which is a direct application of
autoepistemic logic: negation as failure is interpreted as ¬L and then the
stable models of general programs are defined as the stable expansions
restricted to this syntactic class. Research also continues for other reasons:
for example, some researchers have focused on extending the language
of logic programming with classical negation (Gelfond & Lifschitz, 1990;
Kowalski & Sadri, 1990); the reason is that in general logic programs all
negations stand for negation as failure, which for purposes of knowledge
representation is often too heavy a restriction. In Chapter 5 I return to this
issue.

CIRCUMSCRIPTION

A second formalization of the idea to minimize exceptional knowledge is
circumscription, developed by McCarthy (1980; 1986). Like logic program-
ming, it has both a syntactical and a semantical appearance. The idea
behind the syntactical form is the same as the one underlying predicate
completion: if we are prepared to make assumptions about the domain
of a theory, then we should add a formula to the theory which explicitly
expresses these assumptions: if we do this, we can simply reason monoton-
ically from the extended theory. More formally, if T is the initial theory,
C is the formula expressing our assumptions (called the circumscription
formula) and ⊢circ the nonstandard consequence notion of circumscription,
then the idea is that

T ⊢circ φ iff T + C ⊢ φ



The nonmonotonic nature of this idea is caused by the fact that the content
of the circumscription formula entirely depends on the theory: for this
reason, if the theory is extended, then the circumscription formula needs
to be changed to C', and deductive consequences of T + C need not hold
any more for T + C'.
More specifically, this technique works in the following way. If certain
predicates of a theory are regarded as expressing exceptional properties or
relations, then the circumscription formula circumscribes these predicates,
i.e. it says that only those objects have the exceptional property or relation
of which the theory says that they have it. For example, if the theory is
about criminal law and it is regarded as exceptional that criminal offenders
are mentally ill, then the predicate 'mentally-ill' can be circumscribed: then,
if nothing can be proved about an offender's mental illness, s/he is assumed
to be mentally healthy. This explanation of circumscription sounds like the
closed-world assumption, and circumscription has indeed been studied as
a way of formalizing the CWA. For the oldest and simplest form of cir-
cumscription, called predicate circumscription, the circumscription formula
is defined in the following way (taken from Brewka, 1991a).
Definition 4.1.10 (predicate circumscription). Let T be a first-order for-
mula (the conjunction of the premises of the theory) containing an n-place
predicate symbol P. Let T(P') be the result of replacing all occurrences of
P in T by the predicate variable P'. Then the predicate circumscription of
P in T is the following second-order formula

T(P) ∧ ∀P'[(T(P') ∧ ∀x(P'x → Px)) → ∀x(Px → P'x)]

where x = x1, ..., xn.
To give a hint as to how this formula 'works', the idea is that if the
circumscribed predicate P is known to hold for a certain object c, i.e. if Pc
holds, then we should be smart enough to substitute for P'x the expression
x = c, after which we can deduce ∀x(Px ≡ x = c), which says that only
c has the property P. However, in this book I only focus on the semantics
of circumscription; for examples applying the circumscription formula the
interested reader is referred to the introductory texts mentioned above.
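To make the hint slightly more concrete, here is a small worked substitution (my own sketch, not from the book; the full treatment is in the texts just cited). Take T to be simply Pc:

```latex
\mathrm{Circ}(T;P) \;=\; Pc \;\wedge\;
  \forall P'\bigl[\,(P'c \wedge \forall x\,(P'x \to Px))
                  \to \forall x\,(Px \to P'x)\,\bigr]
```

Substituting P'x := (x = c), the first conjunct of the antecedent becomes c = c, which is trivially true, and ∀x((x = c) → Px) follows from Pc; detaching the consequent gives ∀x(Px → x = c), which together with Pc yields ∀x(Px ≡ x = c): only c has the property P.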
The idea behind the semantical appearance of circumscription is, instead
of giving truth definitions for new logical operators of the language, to
define a new notion of semantic entailment over the language of first-order
predicate logic. The general idea is the same as in the semantics of logic
programming: instead of checking all models of the premises to see if the
conclusion holds, only some of the models are checked, viz. those in which
the extensions of the predicates are minimal. There are, however, two main
differences with the semantics of logic programs. A technical difference is
that now ordinary models are considered instead of Herbrand models. A

difference which for our purposes is more important is that the notion of
a minimal model has to be refined: not all but only some of the predicate
extensions should be minimal, viz. the extensions of those predicates which
are circumscribed; in addition, some predicate extensions should be fixed,
and other ones should be allowed to vary among the minimal models. This
second difference has to do with the nature of default reasoning, in which
we can distinguish three kinds of information: first the 'input facts' of a
case, then the exceptional information, which should be assumed false if
possible, and finally the 'output facts', the conclusions we are interested
in. While, of course, the exceptional predicates should be minimized, the
input predicates should in all models stay fixed since they just describe
the facts of the case, and the output predicates should be allowed to vary,
since the answer should come as a 'surprise'. This leads to the following
generalized definition of a minimal model, which, in fact, corresponds to
variable circumscription (McCarthy, 1986), of which the syntactical form
is slightly more complicated than that of predicate circumscription.
Definition 4.1.11 (minimal model). Let T be a first-order theory and P,
Q and Z be disjoint sets of predicate letters such that their union is the set of
all predicate letters of the language of T. P denotes the set of circumscribed
predicates, Q is the set of fixed predicates, and Z contains the variable
predicates. Let M and M' be two models of T. We say that M ≤P,Q,Z M'
iff
1. M and M' have the same domain;
2. for all P ∈ P: I_M(P) ⊆ I_M'(P);
3. for all Q ∈ Q: I_M(Q) = I_M'(Q).
M is a minimal model of T_P,Q,Z iff M is minimal in the ordering ≤P,Q,Z,
i.e. iff there is no model M' of T such that M' ≤P,Q,Z M and not M ≤P,Q,Z
M'.
Note that in this definition nothing is required about the variable predicates
Z.
Definition 4.1.12 (minimal entailment for variable circumscription)
T_P,Q,Z ⊨min φ iff φ is true in all minimal models of T_P,Q,Z according to
the usual truth definitions.
Etherington (1988) has shown that the syntactic form of variable circum-
scription is complete with respect to this semantics: the minimal mod-
els of T_P,Q,Z are exactly all models of T + C, where C is the formula
circumscribing all predicates in P according to the definition of variable
circumscription.
The following examples show how this definition of minimal entailment
can be used to model defeasible reasoning.

Example 4.1.13 This is a rather simplified formalization of a criminal
code.
(1) ∀x.Did_crime(x) ∧ ¬Mentally_ill(x) → Punished(x)
(2) Did_crime(Alfred)
Since we are interested in knowing whether someone who has committed
a crime can be punished, and given that we regard mental illness as an
exceptional state, the set P of minimized predicates is {Mentally_ill},
the set Q of input predicates is {Did_crime}, and the set Z of variable
predicates is {Punished}. Obviously, the minimal models of {1, 2}_P,Q,Z are
those in which Mentally_ill(Alfred) is false, and since Did_crime(Alfred)
is true in all models, in all minimal models the antecedent of (1) is true,
which makes in all those models Punished(Alfred) true as well. Hence, with
the given content of P, Q and Z, {1, 2} minimally entails Punished(Alfred).
Note that without the possibility to let the extension of the output
predicate Punished vary this conclusion cannot be obtained, since then
there is also a minimal model M with Mentally_ill(Alfred) true and
Punished(Alfred) false, for the following reasons. Without allowing the
extension of Punished to have any size Mentally_ill(Alfred) cannot be
made false in M, because this model would then not be a model of (1) any
more, since it would make the antecedent of (1) true but its consequent
false. If, however, we are allowed to let the extension of Punished grow
if needed, we can make Mentally_ill(Alfred) false in M, since we can
make Punished(Alfred) true by adding Alfred to the extension of Punished.
Actually, predicate circumscription did not have this possibility, but Mc-
Carthy (1986) corrected this with variable circumscription.
The nonmonotonicity of circumscription can be illustrated by simply
adding
(3) Mentally_ill(Alfred)
to the premises: then in all minimal models of the premises the antecedent
of (1) is false, for which reason there are both minimal models with
Punished(Alfred) false and with Punished(Alfred) true: hence the con-
clusion that Alfred is punished is not valid any more.
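Over the one-element domain {Alfred} this minimal-entailment check can be carried out by brute force. The Python sketch below (my own illustration, with the three ground atoms as propositional variables) mirrors Definition 4.1.11: the ordering compares the minimized extension with the fixed part held equal, and simply ignores the variable predicate Punished.

```python
from itertools import product

ATOMS = ["Did_crime", "Mentally_ill", "Punished"]

def models(theory):
    # All Herbrand models over the domain {Alfred}: truth
    # assignments to the three ground atoms satisfying the theory.
    ms = []
    for vals in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(f(v) for f in theory):
            ms.append(v)
    return ms

def minimal(ms, minimized, fixed):
    # M is minimal iff no model with the same fixed part has a
    # strictly smaller extension of the minimized predicates;
    # variable predicates are simply ignored by the ordering.
    def leq(m1, m2):
        return (all(m1[q] == m2[q] for q in fixed) and
                all(m1[p] <= m2[p] for p in minimized))
    return [m for m in ms
            if not any(leq(m2, m) and not leq(m, m2) for m2 in ms)]

rule = lambda v: (not (v["Did_crime"] and not v["Mentally_ill"])
                  or v["Punished"])                       # formula (1)
fact = lambda v: v["Did_crime"]                           # formula (2)

mins = minimal(models([rule, fact]), ["Mentally_ill"], ["Did_crime"])
print(all(m["Punished"] for m in mins))   # True: minimally entailed

ill = lambda v: v["Mentally_ill"]                         # formula (3)
mins2 = minimal(models([rule, fact, ill]), ["Mentally_ill"], ["Did_crime"])
print(all(m["Punished"] for m in mins2))  # False: conclusion lost
```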
Example 4.1.14 In applications of circumscription a common technique
is using an abnormality predicate, similar to the use of an inapplicability or
exception predicate in Section 3.1. This technique will be studied in detail
in Chapter 5; now I illustrate it with the following example. Assume that
normally Welshmen are miners and rich people are conservative, and that
as a matter of fact no miners are conservative. Assume, furthermore, that
Neil is both rich and a Welshman. This is represented with the following
formulas, in which expressions ab(n, x) read as 'x is abnormal in aspect n'.

(4) ∀x.Welshman(x) ∧ ¬ab(4, x) → Miner(x)
(5) ∀x.Rich(x) ∧ ¬ab(5, x) → Conservative(x)
(6) ∀x.¬(Miner(x) ∧ Conservative(x))
(7) Welshman(Neil) ∧ Rich(Neil)
The reader will recognize the similarity with the 'Nixon Diamond' of Sec-
tion 4.1.2. In this example the best circumscription policy is to make the
predicates Welshman and Rich fixed and Miner and Conservative variable,
and to minimize the ab predicate. Then there are no models of {4-7}
in which both ab(4, Neil) and ab(5, Neil) are false; therefore at least one
of these facts has to be true. Obviously, in the minimal models precisely
one of them is true, but since there are both minimal models with only
ab(4, Neil) true, in which Neil is a conservative, and minimal models with
only ab(5, Neil) true, in which Neil is a miner, no definite conclusion can be
drawn about whether Neil is a miner or a conservative; what we do know,
however, is that Neil has one of these qualities, since in all minimal models
Miner(Neil) ∨ Conservative(Neil) is true.
Example 4.1.15 The next example shows the need of a further refine-
ment of variable circumscription. McCarthy (1986, pp. 105-6) observes that
sometimes, in order to exclude an intuitively unintended minimal model,
the minimization with respect to one atomic formula should have priority
over a minimization with respect to another atom. Consider
(8) ∀x.Driving_licence(x) ∧ ¬ab(1, x) → May_drive(x)
(9) ∀x.Drunk(x) ∧ ¬ab(2, x) → ab(1, x)
(10) Driving_licence(Peter) ∧ Drunk(Peter)
P = {ab}; Q = {Driving_licence, Drunk}; Z = {May_drive}
Intuitively, ab(1, Peter) should be entailed by {8, 9, 10}_P,Q,Z, since Peter
is drunk and there is no reason why he should be abnormal in the second
respect (note that this is the result if {8, 9, 10} is a logic program). How-
ever, there are two minimal models, M1 with I(ab) = {(1, Peter)} and M2
with I(ab) = {(2, Peter)}. Note that again it is a disjunction which causes
the problems, since from (8-10) it follows deductively that ab(1, Peter) ∨
ab(2, Peter) is true. What is needed to make M1 the only minimal model
is that the atoms of the ab predicate can be made false according to some
priority scheme: in this example ab(2, Peter) should be made false at the
cost of ab(1, Peter). Technically, this can be realized in so-called prioritized
circumscription, if models are not compared with respect to predicate
extensions but with respect to sets of atomic formulas which they make
true. This is sometimes called pointwise circumscription (Lifschitz, 1987b).
Like most other nonmonotonic formalisms, circumscription also has a
few recognized problems. Most of these problems are caused by the general-
ity of circumscription when compared to logic programming. For example,

not all circumscriptive theories have minimal models, which means that not
every circumscriptive theory can be semantically characterized. Another
problem is the computational complexity of circumscription, which, among
other things, is caused by the fact that in general the circumscription
formula has to be second-order, because it has to quantify over predicates.
The problem is that second-order logic has unattractive proof theoretical
properties: particularly it is known to lack asound and complete proof pro-
cedure. To overcome this, research has been done to reduce certain classes
of circumscription axioms to first-order formulas, after which standard
theorem proving techniques can be used. And progress in theorem proving
research has also been made in the second-order case, while for certain
classes of circumscriptive theories equivalences with logic programs have
been identified. In general, however, circumscription is computationally
still a tough nut to crack, also because of another reason, which is that
in general there is no algorithm for solving the problem which predicate
has to be substituted for P' in the circumscription formula; this requires
real intelligence, and this is something which logical theorem provers cannot
provide.

4.1.4. CONDITIONAL APPROACHES

A fundamentally different attempt to model defeasible reasoning is regard-
ing defeasible statements as a special kind of conditional. The first who
did this was Delgrande (1988). Conditional logics were originally developed
for modelling counterfactual reasoning. Counterfactual conditionals express
what would have been the case if something else had been the case, as, for
instance, in 'if Beethoven had been French, he would not have composed
symphonies'. A counterfactual conditional is interpreted in a possible-world
semantics: φ ⇒ ψ is true just in case ψ is true in a subset of the worlds
in which φ is true, viz. in the possible worlds which resemble the actual
world as much as possible, given that in them φ holds. Now the idea is to
define in a similar way a possible-worlds semantics for generic phrases like
'normally/typically A's are B's', in which a defeasible conditional φ ⇒ ψ is
interpreted as 'in all most normal worlds in which φ holds, ψ holds as well'.
Obviously, if read in this way, then modus ponens is not valid for such
conditionals, since even if φ holds in the actual world, the actual world
need not be a normal world. For defeasible conditionals the invalidity of
modus ponens is desirable, since this is a monotonic inference rule, while
defeasible statements should give rise only to nonmonotonic conclusions.
Of course, what defeasible reasoning is about is assuming that the actual
world is normal as long as there is no evidence of the contrary. To see how
Delgrande achieves this, it should be explained that he distinguishes two

aspects of default reasoning. The first is reasoning about defaults: if we
know that normally birds fly and normally airplanes fly, we monotonically
infer a new default that normally also objects fly which are either a bird or
an airplane. Recall that this aspect of default reasoning is outside the scope
of default logic, which is generally regarded as one of its drawbacks. The
second part of defeasible reasoning is using the defaults to draw default
conclusions: if we know that Jumbo is either a bird or an airplane then we
draw the defeasible conclusion that Jumbo can fly. It is this second part
in which the assumption of normality is made; in modelling the first part,
reasoning about defaults, Delgrande can do with a monotonic conditional
logic for defeasible conditionals φ ⇒ ψ. As was said, modus ponens is not
valid for this conditional. Neither is the augmentation principle

(φ ⇒ ψ) → ((φ ∧ χ) ⇒ ψ)

The invalidity of augmentation will be obvious if it is observed that this
principle is the object-language counterpart of the nonmonotonicity prin-
ciple for consequence notions. To give an example of how Delgrande's
logic supports the derivation of new defaults, it should be noted that an
important principle which is valid is reasoning by cases:

((φ ⇒ ψ) ∧ (χ ⇒ ψ)) → ((φ ∨ χ) ⇒ ψ)

It is this principle which allows the derivation of the 'disjunctive' default
in the just given example.
Since Delgrande's conditional logic is monotonic, an additional machin-
ery has to be defined on top of this logic to make it possible to derive
nonmonotonic conclusions. To this end, Delgrande defines, analogous to
Reiter, the notion of a default theory⁵: in (D, C) the set C is a set of
standard first-order formulas, standing for the contingent facts, while D is
a set of conditional formulas from the language of the conditional logic.
On the basis of this, Delgrande uses two further ideas to make defeasible
inferences. The first is that the set D of defaults is extended to a set
E(D) by adding new defaults according to the (monotonically invalid!)
augmentation principle, as long as this is possible: whether adding a new
default is possible, is determined by an implicit notion of specificity. The
second idea is to define a nonmonotonic consequence notion |∼ in the
following way:

(D, C) |∼ φ iff E(D) contains C ⇒ φ

Consider by way of example the default theory (D, C) where
D = {(1) ∀x.Bird(x) ⇒ Canfly(x)}
C = {Bird(Tweety), Dead(Tweety), Green(Tweety)}

⁵Delgrande also presents an alternative method, which I shall not discuss.

In this situation Delgrande's definitions allow the augmentation of (1) to
(1') ∀x.Bird(x) ∧ Dead(x) ∧ Green(x) ⇒ Canfly(x)
as well as its ground instance for Tweety, to be in E(D), after which accord-
ing to the definition of |∼ it is possible to defeasibly derive Canfly(Tweety).
If, however, a new default
(2) ∀x.Bird(x) ∧ Dead(x) ⇒ ¬Canfly(x)
is added, then the addition of (1') to E(D) is blocked, since (2) is a
conflicting default with a logically stronger, i.e. more specific antecedent.
On the other hand, since there is no more specific default blocking the
augmentation of (2), the following default
(2') ∀x.Bird(x) ∧ Dead(x) ∧ Green(x) ⇒ ¬Canfly(x)
can be added to E(D) together with its ground instance for Tweety, after
which ¬Canfly(Tweety) can be defeasibly derived.
As was said, Delgrande's logic avoids one problem of Reiter's default
logic, the inability to reason about defaults; however, another problem,
intractability of the nonmonotonic part, has not been avoided, since there
seems to be no efficient way of computing the set E(D). And philosophically
Delgrande's logic also has some drawbacks, since his additional machinery
to capture nonmonotonic reasoning is rather ad hoc, in that it is not based
on his semantics for defaults. Nevertheless, the basic idea of Delgrande's
approach is interesting and some researchers have further developed it,
notably Asher & Morreau (1990) and Veltman (1996). With respect to the
rest of this book it is important to remark that both attempts are based
on the idea that the notion of specificity, which Delgrande uses implicitly
in extending the default theory, should be reflected in the semantics of the
logic for defeasible conditionals.

4.1.5. INCONSISTENCY HANDLING

All previous formalizations of nonmonotonic reasoning started with a con-
sistent set of premises and provided for ways of drawing plausible but de-
ductively unsound conclusions from them. A different approach is allowing
the premises to be possibly inconsistent and to prefer some part of the
premises if indeed an inconsistency occurs. In this view, defaults express
approximations to the real situation, which might need to be corrected in
specific circumstances: people reason with defaults as if they were true,
until in a certain case they give rise to an inconsistency. The attractiveness
of this idea is that if nonmonotonic reasoning is regarded as a kind of
inconsistency handling in classical logic, no new logic needs to be developed.
For legal applications this approach is interesting for yet another reason,

because it might result in theories which can also be used for modelling the
way lawyers reason with inconsistent normative systems, which above was
identified as another source of nonmonotonic reasoning. This topic will be
discussed in Chapters 7 and 8.
Among the researchers modelling nonmonotonic reasoning as incon-
sistency-tolerant reasoning are Brewka (1989; 1991a), Poole (1985; 1988)
and Roos (1991; 1992). Earlier, Alchourrón & Makinson (1981) were the
first who used similar ideas to model the way lawyers reason with hier-
archically structured normative systems. The general idea goes back to
Rescher (1964). If the set of premises turns out to be inconsistent, consistent
subsets are identified and ordered according to some preference relation:
in applications to nonmonotonic reasoning generally the subset containing
the most specific information is preferred. The preferred subsets can be
used to define two nonstandard consequence relations, corresponding to
following from one of the preferred subsets or from all of them. Thus, like
in Definition 4.1.3 above both a credulous and a sceptical consequence
notion are defined, but a difference is that now membership of a subtheory
only counts if the subtheory is preferred.
The following definition is taken from Brewka (1991a).
Definition 4.1.16 (weak and strong consequence)
- A formula φ is weakly provable from a set of premises T iff there is a
preferred subtheory S of T such that S ⊢ φ;
- A formula φ is strongly provable from a set of premises T iff for all
preferred subtheories S of T we have S ⊢ φ.
In this subsection I present the approaches of Poole and Brewka.

POOLE'S FRAMEWORK FOR DEFAULT REASONING

Poole (1988) models default reasoning as a kind of hypothetical reason-
ing; he claims that if defaults are regarded as possible hypotheses with
which theories can be constructed to explain observed facts, there is no
need to change the logic but only the way the logic is used. Accordingly,
the semantics and proof theory of Poole's "logical framework for default
reasoning" are simply those of first-order predicate logic. The basis of this
framework are the sets F and Δ. F is a set of closed first-order formulas,
the facts, assumed consistent, and Δ is a possibly inconsistent set of first-
order formulas, the defaults or possible hypotheses. Any ground instance
of a default can be used as a hypothesis, as long as it is consistent with
the facts and the other defaults used so far. Formally, the framework is
captured by the following definitions.
Definition 4.1.17 A scenario of a default theory F, Δ is a consistent set
D ∪ F where D is a set of ground instances of the defaults of Δ.

Theory formation consists of constructing an explanation for a given for-
mula.
Definition 4.1.18 If φ is a closed formula, then an explanation of φ from
F, Δ is a scenario of F, Δ implying φ.
Like Reiter, Poole defines the notion of an extension.
Definition 4.1.19 An extension of F, Δ is the set of logical consequences
of a maximal (with respect to set inclusion) scenario of F, Δ.
Poole (1988, p. 45) briefly discusses the use of standards for preferring
explanations, which would allow the use of the consequence notions of Def-
inition 4.1.16. Poole (1985) formally defines one such standard, preferring
the most specific explanation, which will be discussed in detail in Chapter 6.
I now give one example to illustrate Poole's framework.
Example 4.1.20 Consider:
(1) Has_motive(x) → Suspect(x)
(2) Has_motive(x) ∧ Has_alibi(x) → ¬Suspect(x)
F: {Has_motive(John), Has_motive(Bill), Has_alibi(Bill)}
Δ: {(1-2)}
On the basis of this default theory Suspect(John), Suspect(Bill)
and ¬Suspect(Bill) can all be explained, with respectively the scenarios
E1 = F ∪ {Has_motive(John) → Suspect(John)}
E2 = F ∪ {Has_motive(Bill) → Suspect(Bill)}
E3 = F ∪ {Has_motive(Bill) ∧ Has_alibi(Bill) → ¬Suspect(Bill)}
If a preference notion is defined on the basis of a specificity principle, then
obviously according to Definition 4.1.16 Suspect(John) will be strongly
derivable since there is no explanation for its opposite, and ¬Suspect(Bill)
will be strongly derivable because E3 is based on more specific information
than E2.
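Poole's notions are easy to replay propositionally. The sketch below (my own encoding, not from the book; atom names like mot_J are invented abbreviations for the ground atoms of the example) builds the three scenarios, checks that each explains its formula, and shows that the hypotheses behind E2 and E3 cannot be consistently combined.

```python
from itertools import product

ATOMS = ["mot_J", "mot_B", "alibi_B", "sus_J", "sus_B"]

def sat(formulas):
    # Brute-force satisfiability: formulas are Python predicates
    # over a truth assignment for the five ground atoms.
    for vals in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(f(v) for f in formulas):
            return True
    return False

def entails(formulas, goal):
    return not sat(formulas + [lambda v: not goal(v)])

facts = [lambda v: v["mot_J"], lambda v: v["mot_B"],
         lambda v: v["alibi_B"]]

# ground instances of defaults (1) and (2)
d1_J = lambda v: (not v["mot_J"]) or v["sus_J"]
d1_B = lambda v: (not v["mot_B"]) or v["sus_B"]
d2_B = lambda v: (not (v["mot_B"] and v["alibi_B"])) or not v["sus_B"]

def scenario(facts, hyps):
    # Definition 4.1.17: facts plus hypotheses, if consistent.
    return facts + hyps if sat(facts + hyps) else None

E1 = scenario(facts, [d1_J])   # explains Suspect(John)
E2 = scenario(facts, [d1_B])   # explains Suspect(Bill)
E3 = scenario(facts, [d2_B])   # explains not-Suspect(Bill)
print(entails(E1, lambda v: v["sus_J"]))      # True
print(entails(E2, lambda v: v["sus_B"]))      # True
print(entails(E3, lambda v: not v["sus_B"]))  # True
print(sat(facts + [d1_B, d2_B]))              # False: E2, E3 conflict
```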
Note that Poole's defaults are not implicitly quantified; instead, they have
the same role as Reiter's open defaults, which serve as a scheme for all their
ground instances. The reason is this: if not the ground instances
of the defaults but their universal closures have to be used in a scenario,
then a default cannot be applied any more to normal cases as soon as there
is one exceptional case.
Example 4.1.21 Consider:
F = {Bird(Tweety), Bird(Jumbo), ¬Canfly(Tweety)}
Δ = {∀x. Bird(x) → Canfly(x)}
The problem with this formalization is that the universally quantified
default cannot be used to explain Canfly(Jumbo), since it also implies
Canfly(Tweety), which contradicts the facts.

BREWKA'S PREFERRED SUBTHEORIES


Brewka (1989) presents a generalization of Poole's framework in which
there are not two distinct levels of information, but an arbitrary number,
determined by a strict total ordering of the premises. The idea is that the
domain theory is ordered into various levels of premises of equal rank and
that on the basis of this ordering a consistent set of premises is constructed
by first adding as many formulas of the highest level to the set as is
consistently possible, then adding as many formulas of the subsequent level
as is consistently possible, etcetera. If formulas of equal level are in conflict,
the resulting set branches into alternative and mutually inconsistent sets,
analogous to, for example, the extensions of default logic. Formally this
results in the following definition.6
Definition 4.1.22 (Brewka's preferred subtheories)
A default theory T is a tuple (T1, ..., Tn), where the Ti are (possibly
inconsistent) sets of formulas of first-order predicate logic.
S = S1 ∪ ... ∪ Sn is a preferred subtheory of T iff for all i, 1 ≤ i ≤ n,
S1 ∪ ... ∪ Si is a maximal consistent subset of T1 ∪ ... ∪ Ti.
Consider by way of illustration again Example 4.1.20 and assume that the
specificity of (2) is captured by placing it at a higher level than (1).
T1 = F
T2 = {(2)}
T3 = {(1)}
Recall that both (1) and (2) are schemes standing for all their ground
instances. If we try to construct a preferred subtheory then, since the facts
are consistent,
S1 = F
Furthermore, for both John and Bill the ground instance of (2) can be
added:
S2 = {Has_motive(John) ∧ Has_alibi(John) → ¬Suspect(John),
Has_motive(Bill) ∧ Has_alibi(Bill) → ¬Suspect(Bill)}
Finally, only for John can the ground instance of (1) be added, since S1 ∪ S2
implies ¬Suspect(Bill).
6Brewka (1991a) presents a second generalization, in which the ordering of the
premises need not be total any more. I shall not give this definition.
S3 = {Has_motive(John) → Suspect(John)}
The result S1 ∪ S2 ∪ S3 is consistent and implies Suspect(John) as well as
¬Suspect(Bill).
Note that if both (1) and (2) were in T2, then there would be four,
mutually inconsistent, preferred subtheories, since for both John and Bill
we would have a choice whether to add to S2 the ground instance of (1)
or of (2). As a consequence, both Suspect(John) and Suspect(Bill), as well
as their opposites, would be (weakly) provable.
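Definition 4.1.22 can be sketched in the same propositional style: extend the set constructed so far, level by level, with every inclusion-maximal consistent subset of the next level. This is an illustrative sketch under the same simplifying assumptions as before (ground rules, consistency approximated by forward chaining), not a general first-order implementation; all names are invented.

```python
from itertools import combinations

def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def closure(formulas):
    """Literals derivable from a set mixing literals and (premises, conclusion) rules."""
    lits = {f for f in formulas if isinstance(f, str)}
    rules = [f for f in formulas if not isinstance(f, str)]
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= lits and conclusion not in lits:
                lits.add(conclusion)
                changed = True
    return lits

def consistent(formulas):
    c = closure(formulas)
    return not any(neg(l) in c for l in c)

def preferred_subtheories(levels, acc=frozenset()):
    """All S1 ∪ ... ∪ Sn, each Si an inclusion-maximal consistent addition."""
    if not levels:
        return [set(acc)]
    head, rest = levels[0], levels[1:]
    candidates = [set(c) for r in range(len(head) + 1)
                         for c in combinations(head, r)
                         if consistent(acc | set(c))]
    maximal = [c for c in candidates if not any(c < d for d in candidates)]
    return [s for m in maximal for s in preferred_subtheories(rest, acc | m)]

# Levels for Example 4.1.20: facts, then instances of (2), then instances of (1).
T1 = ['motive_john', 'motive_bill', 'alibi_bill']
T2 = [(frozenset({'motive_john', 'alibi_john'}), '~suspect_john'),
      (frozenset({'motive_bill', 'alibi_bill'}), '~suspect_bill')]
T3 = [(frozenset({'motive_john'}), 'suspect_john'),
      (frozenset({'motive_bill'}), 'suspect_bill')]

result = preferred_subtheories([T1, T2, T3])
print(len(result))   # a single preferred subtheory
print('suspect_john' in closure(result[0]),
      '~suspect_bill' in closure(result[0]))
```

On the three-level ordering above this yields exactly one preferred subtheory, whose closure contains Suspect(John) and ¬Suspect(Bill), as in the text; Bill's instance of (1) is rejected at the lowest level because adding it would be inconsistent with what was accumulated at the higher levels.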

4.2. General Issues


The reader not yet acquainted with nonmonotonic logics will have been
rather confused by the variety of systems trying to formalize nonmonotonic
reasoning. In fact, this has been a general feeling among logicians; as a
consequence, in recent years attempts have been made to unify the study
of the various systems, in order to gain insight into the question whether all
these systems are rivals in modelling the same kind of reasoning, or whether
they all model related but different aspects of reasoning.
In this section I discuss three examples of this kind of unifying research:
firstly, the development of an abstract notion of nonmonotonic entailment,
to be further tailored to serve particular reasoning tasks; secondly, the
study of general properties of consequence notions; and finally, discoveries
of formal connections between various individual systems. Apart from
this, I say some words about a general mechanism which can support the
implementation of nonmonotonic reasoning, so-called truth maintenance
systems.

4.2.1. PREFERENTIAL ENTAILMENT

Shoham (1988) has proposed a general framework for developing nonmonotonic
logics. In his opinion the most important thing is to specify a
language with a semantics, and to define an entailment relation according
to the general idea of so-called preferential entailment, which we have
already applied above: the idea is that conclusions follow nonmonotonically
from the premises iff they are true in some preferred subset of the models
of the premises. In logic programming and circumscription the preferred
models are those in which the extensions of some predicates are minimal,
but other preference criteria are also possible, depending on the kind of
reasoning. Shoham himself has applied the idea to temporal reasoning,
where he defines the preferred models as those in which exceptional events
occur as late as possible (chronological minimization). He has also shown
that a number of other nonmonotonic logics fit into his framework: some,
like those based on minimal model semantics, in a very natural way, but
others only after considerable modifications of the logic; notably Reiter's
original default logic cannot be captured in the framework of preferential
entailment. For this reason Shoham's notion of preferential entailment has
only partly turned out to be a unifying scheme for nonmonotonic logics.

4.2.2. PROPERTIES OF CONSEQUENCE NOTIONS

Another general issue in the study of nonmonotonic reasoning is the question
as to what are the minimal requirements which a function from sets
of formulas to formulas should satisfy in order to deserve being called a
consequence notion. Usually nonmonotonic reasoning is characterized only
by way of a property which it lacks, viz. monotonicity, but many feel
that a more positive characterization is desirable, in order to distinguish
nonmonotonic reasoning from manifestly unsound modes of reasoning. Now
some have noticed an interesting similarity between the idea of preferential
entailment and the possible-worlds semantics of counterfactual conditionals:
recall that in this semantics a conditional φ ⇒ ψ is true just in case ψ
is true in a subset of the worlds in which φ is true, viz. in those which
resemble the actual world as much as possible. This is very similar to
saying that ψ is entailed by φ iff ψ holds in a particular subset of the
models of φ. The most striking similarity is that, as already indicated above,
counterfactual conditionals lack the property of augmentation, which is the
object-level counterpart of monotonicity. However, counterfactual conditionals
do still have a number of other properties, and this has motivated
researchers (notably Kraus et al., 1990) to study the minimal requirements
for consequence notions systematically along the lines of conditional logics.
Generally, an important requirement for consequence notions is held to be
cumulativity or cautious monotonicity. Let |∼ be any consequence notion;
then this requirement is

If φ |∼ ψ and φ |∼ χ then φ ∧ χ |∼ ψ


In words, this condition says that no conclusions can be invalidated by
adding conclusions to the premises. Its counterpart in conditional logic is

If φ ⇒ ψ and φ ⇒ χ then (φ ∧ χ) ⇒ ψ

which is valid in all conditional logics. Shoham's notion of preferential
entailment satisfies this requirement, but, as Makinson (1989) has shown,
default logic does not. The fact that this discovery has not led to the total
downfall of default logic suggests that as yet there is no consensus on the
minimal requirements for consequence notions. It should be noted, however,
that Brewka (1991a; 1991b) has developed a cumulative version of default
logic. Below, in Section 7.5.1, I come back to the issue whether cumulativity
is a desirable property of nonmonotonic consequence notions.
An interesting question, which was already touched upon in Section 3.4.2,
is whether there exists a condition which can distinguish nonmonotonic
reasoning from analogical reasoning, which above was argued not to be an
inference mode. A good candidate seems to be the conjunction principle

If φ |∼ ψ and φ |∼ χ, then φ |∼ (ψ ∧ χ)

The conditional-logic counterpart of this principle is valid, and preferential
entailment also satisfies this principle: if in every preferred model of φ
both ψ and χ are true, then because of the usual truth conditions for the
connectives (ψ ∧ χ) is also true in every such model. As can easily be
verified, default logic satisfies this principle only with respect to sceptical
reasoning. Similarly, of the provability notions of Rescher and Brewka (cf.
Definition 4.1.16) only the strong provability notion satisfies this principle.
An intuitive reason for regarding the conjunction principle as a minimal
property of a consequence notion is that it makes the situation that both
φ and ¬φ can be derived trivial, in that it makes a contradiction derivable.
In fact, this forces a reasoner who wants to maintain consistency to state
a preference towards one of the possibilities, and preferring a formula over
its opposite seems to be a key feature of regarding a formula as derivable. In
terms of preferential entailment: regarding a formula as derivable is preferring
the models in which it is true over the models in which it is false.
Now if the derivability of both a formula and its opposite does not give
rise to an inconsistency, as, for instance, in credulous default reasoning,
no preference for a formula over its opposite is involved. And the point is
that this is indeed the case in analogical reasoning: as explained above in
Section 2.3, it is often possible to state similarities to cases with contrary
outcomes, or to refer both to a similarity and to a difference with another
case; both situations should in a 'logic' of analogical reasoning be expressed
as

φ |∼an ψ and φ |∼an ¬ψ

In practice, however, this situation is not regarded as laying down a preference
for ψ or ¬ψ: one still has to choose which similar case to cite, or
to decide whether to state the similarities or the differences with another
case. In conclusion, this situation provides a counterexample against the
conjunction principle in the case of analogical reasoning, and this makes the
principle, together with its intuitive justification, a good candidate for
discarding analogical reasoning as an inference mode (but, of course, not as
a useful heuristic principle for finding new premises; cf. Section 2.3 above).
Note that this also puts credulous default reasoning into question as an
inference mode.

4.2.3. CONNECTIONS

Although initially most nonmonotonic logics were proposed as rivals of each
other, in recent years formal relations between various systems have been
discovered. The first was Konolige's (1988a) discovery that, under a suitable
translation, default logic and autoepistemic logic produce equivalent results
with respect to nonmodal formulas. More specifically, Konolige has shown
that every AEL theory has a normal form in which all sentences have the
form
(Lφ ∧ ¬Lψ1 ∧ ... ∧ ¬Lψn) → χ
and that if this is translated into a default
φ : ¬ψ1, ..., ¬ψn / χ
then for every extension of the resulting default theory there is a so-called
'strongly grounded expansion' of which the nonmodal part is the same as
the extension of the default theory. Other researchers have found similar
relations between default logic and circumscription, logic programming and
circumscription, and logic programming and autoepistemic logic. Results
which involve logic programming are particularly relevant for efficient im-
plementations of other formalisms. It should be noted, however, that the
relations are not always as dear as above and, moreover, sometimes concern
only parts of the systems. I now briefly mention a few results that will play
a role in the coming discussions in Chapter 5.
Lifschitz (1987a) has used prioritized circumscription to provide a se-
mantics for stratified logic programs. He has shown that the intended
minimal model of stratified logic programs can be obtained if the program
is regarded as a circumscriptive theory and if the circumscription policy
is such that predicates or atoms in lower strata are minimized at the
cost of predicates or atoms in higher strata. A more general result is
obtained by Gelfond & Lifschitz (1989), who show that circumscriptive
theories can be translated into logic programs iff the priority scheme of
the predicates or atoms to be minimized is a stratification. Furthermore,
in Gelfond & Lifschitz (1990), which contains an 'answer set semantics'
for logic programs with classical negation (cf. Section 5.5.3 below), they
establish a connection with a useful fragment of default logic. Another
interesting result is provided by Poole (1988) on the relation between his
framework and default logic. He shows that if all elements φ of his set Δ
of defaults are translated into Reiter's so-called free defaults, which are
normal defaults without a prerequisite (: φ/φ), then Poole's and Reiter's
extensions are the same.
We can end the overview of unifying investigations by concluding that,
although interesting results have been obtained, the question has not yet
been settled whether the various logics model different aspects of reasoning,
or whether they are rivals in formalizing the same kind of reasoning.

4.2.4. TRUTH MAINTENANCE SYSTEMS

When systems for nonmonotonic reasoning are implemented, it is often
useful to let the inference engine interact with a so-called truth maintenance
system (TMS). The reason for this is that nonmonotonic conclusions can
be invalidated by new information, and rather than restarting the entire
reasoning process with the added information, it is more efficient to retract
only the invalidated conclusion; however, if a formula has to be retracted,
everything which was derived in the meantime with this formula should be
retracted as well, and to ensure that this indeed happens, a book-keeping
system is needed which keeps track of the logical dependencies between
derived formulas. Apart from retracting invalidated conclusions, such a
book-keeping system can also be used for tasks which occur in any system,
whether classical or nonmonotonic. It can be of help when premises are
changed, for example by way of a 'what if' option, which would be
impractical if everything had to be recomputed. A TMS can also detect the
sources of inconsistencies, which is useful for systems which have means
to deal with an inconsistent knowledge base. The two best known truth
maintenance systems are Doyle's (1979) TMS, which tries to maintain a
single consistent state of the knowledge base, and the assumption-based
TMS of De Kleer (1986), which records the minimal sets of assumptions
under which a conclusion can be consistently held.
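The book-keeping idea can be illustrated with a toy justification network in the spirit of Doyle's TMS. This is an illustrative sketch with invented class and method names, not Doyle's actual algorithm (which, among other things, also handles non-monotonic justifications with 'out'-lists):

```python
class DependencyNet:
    """Toy justification network: records which premises each belief rests on,
    so that retracting a premise invalidates exactly its dependents."""

    def __init__(self):
        self.premises = set()
        self.justifications = {}   # node -> list of supporting node sets

    def add_premise(self, node):
        self.premises.add(node)

    def justify(self, node, support):
        self.justifications.setdefault(node, []).append(set(support))

    def holds(self, node, _seen=frozenset()):
        """A node holds if it is a premise or some justification's support holds."""
        if node in self.premises:
            return True
        if node in _seen:          # guard against circular justifications
            return False
        return any(all(self.holds(s, _seen | {node}) for s in support)
                   for support in self.justifications.get(node, []))

    def retract(self, node):
        self.premises.discard(node)

net = DependencyNet()
net.add_premise('bird_tweety')
net.justify('canfly_tweety', ['bird_tweety'])   # derived from the premise
net.justify('can_escape', ['canfly_tweety'])    # derived in the meantime
print(net.holds('can_escape'))    # True
net.retract('bird_tweety')        # new information invalidates the premise
print(net.holds('can_escape'))    # False: dependents lapse automatically
```

Retracting the premise makes every conclusion that depended on it lapse automatically, which is exactly the dependency tracking described above; no conclusions have to be recomputed from scratch.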
Conceptually TMSs are neither logics defining derivability, nor inference
engines making derivations, but just book-keeping systems keeping track of
the logical dependencies between formulas which hold relative to an external
inference engine. For this reason TMSs will not be discussed in the rest of this
book. However, it should be noted that in recent years the sharp distinction
has been weakened by formal results making it possible to regard truth
maintenance systems as theorem provers for certain nonmonotonic logics,
for example default logic and autoepistemic logic (cf. Brewka, 1991a, pp.
125-137).

4.3. Objections to Nonmonotonic Logics


4.3.1. 'LOGIC IS MONOTONIC'

Sometimes it is held that monotonicity is an essential characteristic of
drawing inferences, for which reason it does not make sense to speak of
a nonmonotonic logic: logic, it is said, is monotonic. When nothing more
is said than this, it seems to be merely a matter of terminology: the main
question with which logicians in the field are concerned is not whether formal
theories of nonmonotonic reasoning can be called a logic, but whether
these theories correctly systematize rational patterns of reasoning with
incomplete information. I think that most of these logicians would not
object to using a different name for such theories. In fact, a parallel can
be drawn with the discussion whether norms are subject to logical relations
(see above, p. 15). One of the positions which has been taken is Von
Wright's (1983, p. 132) opinion that norms are not subject to logical laws,
but to requirements of rational legislation. It seems that this is the same
inessential terminological issue as in the case of nonmonotonic reasoning.
A more interesting criticism is put forward by Israel (1980). He attacks
the basic assumption of researchers in nonmonotonic logic that the rational
patterns of defeasible reasoning can be expressed with the mathematical
tools of nonmonotonic logics. Israel recognizes that sometimes beliefs have
to be retracted when new beliefs are added, but he denies that this is
a matter of inference; instead, he regards it as a matter of rational belief
revision or acceptance, which according to him is one of the things in which
reasoning is more than logic (cf. Section 1.3.3 above). What is essential in
Israel's criticism is that he claims that for the question whether a belief
can be rationally held no proof-theoretic account can be given at all; the
best we can hope for is heuristic rules, and Israel claims that these can
be found in the literature on the philosophy of science rather than in the
logical literature.
Etherington (1988, p. 65) replies that, even if nonmonotonic reasoning
is regarded as an aspect of belief revision/fixation, it does not follow that
the rational patterns of this activity cannot be formalized at all with the
help of logical notions. In my own opinion the development in the field of
nonmonotonic reasoning since 1980 supports this reply to a considerable
degree, witness, for example, the systems reviewed in this chapter. Another
interesting reply is that of Reiter (1987, pp. 181-2), who makes a distinction
between two questions in so-called abductive reasoning, which is finding a
theory which can explain a certain observed fact (cf. Poole's framework
for hypothetical reasoning): the first question is what the space of possible
theories is, and the second one is what the best theories are in this space.
According to Reiter, the proper role of nonmonotonic logics is to specify the
space of possible theories, while the second question is governed by rational
criteria of kinds which go beyond the scope of current nonmonotonic logics.
However, Reiter goes on to suggest that at least some of these criteria might
be stated in formal terms. In fact, this is what Chapters 6, 7 and 8 of this
book will be about: in these chapters my aim is to show that not only the
first but also the second question, how to compare alternative theories, can
to a considerable degree be analyzed with the help of logical notions.

4.3.2. INTRACTABILITY

Perhaps the most serious drawback of nonmonotonic logics is that without
severe restrictions they cannot be computed efficiently. In philosophical
applications this is not really a problem, but in AI this is obviously different.
A first computational problem was already identified in Section 3.4.2: it
was explained that the dependence of nonmonotonic conclusions on the
underivability of other formulas makes the reasoning process slower, since to
show that a formula is not derivable involves checking the entire knowledge
base, and this means that for every nonmonotonic derivation the entire
knowledge base has to be inspected. This was called the globality of a
nonmonotonic proof system. However, the situation is even worse. A second
tractability problem is due to the fact that nonmonotonic logics generally
incorporate first-order predicate logic, which is only semidecidable. This
means that if the question is whether a certain formula is derivable, the
best that algorithms can do is always finding the correct answer if this is
'yes'; if the answer is 'no', there is no guarantee that the algorithm does not
loop: hence, the question of the underivability of a formula is undecidable.
Now with nonmonotonic logics the case is even worse, because of the
dependence of the validity of a nonmonotonic conclusion on the underivability of
other formulas; this particularly holds for all logics involving a consistency
check, since checking for consistency means showing that the negation of a
conclusion cannot be derived. This means that even if the answer is 'yes', the
algorithm is not guaranteed to find it, since it has to show that the answer to
another question is 'no', which is an undecidable question. For these reasons
it is not only that the nonmonotonic reasoning process becomes slower, it is
also that nonmonotonic logics are not even semidecidable any more, which
in turn means that implementations either run the risk of looping, or cannot
guarantee that their answer is always correct. The only nonmonotonic logic
which does not require a consistency check is circumscription, but this has
another computational drawback in that it is based on second-order logic,
which is provably incomplete. In conclusion, there are, apart from some
small fragments, no complete proof procedures for nonmonotonic logics.
Does their intractability make the development of nonmonotonic logics
of little use for AI research? In my opinion, it does not: as e.g.
Etherington (1988) has convincingly argued, formal theories of nonmonotonic
reasoning still have an important role in that they can serve as standards for
correct behaviour of a computer program which implements nonmonotonic
reasoning. Such implementations can deal with the intractability problem in
two ways: they can restrict the language of a logic to a fragment for which
efficient proof procedures exist (as, for example, in logic programming),
and they can be approximations to the logic, in that they are sometimes
allowed to make mistakes. In the first option expressiveness is sacrificed
and in the second option soundness. Now since nonmonotonic logics define
what a correct inference is, they are relevant for both options: in the first
option they determine how far the language has to be restricted to retain
soundness, and in the second option they tell which mistakes a program
can make. In sum, although in their general form nonmonotonic logics
cannot be implemented, they can still serve as a metric for evaluating and
criticizing the behaviour of implemented systems. Accordingly, in the rest
of this book the focus will not be on directly implementable formalisms,
but on formalisms which define nonmonotonic notions of what a correct
inference is.
CHAPTER 5

REPRESENTING EXPLICIT EXCEPTIONS

The structure of the rest of this book requires some explanation. As a
consequence of the conclusion of Chapter 3 that legal reasoning is defeasible,
some logics for nonmonotonic reasoning were discussed in Chapter 4.
However, the discussion was still at a general level; in the next two chapters
these logics will be applied to defeasible reasoning in law. The investigations
will be based on a distinction between two conceptual methods of dealing
with exceptions. The first method, using explicit exception clauses1 to
obtain unique answers, will be discussed in this chapter, while the second
method, leaving exceptions implicit and preferring the most specific of two
conflicting conclusions, is the subject of Chapter 6. However, a complication
is that the second method is not merely a way of dealing with exceptions,
but also one possible application of more general techniques of reasoning
with inconsistent information, which is the second major topic of this book.
Therefore, Chapter 6 can not only be regarded as the second chapter on
dealing with exceptions, but also as the first chapter on inconsistency
handling. Chapters 7 and 8 then continue the discussion of this topic,
and in one of the sections of Chapter 10 I return to the application of
inconsistency handling to defeasible reasoning, in a general comparison of
the two methods of representing exceptions.
Before Sections 5.2 to 5.6 of this chapter investigate the explicit
representation of exceptions, Section 5.1 introduces the general problem of
this and the next chapter: modelling the defeasibility of legal rules. After
outlining the possible approaches it gives a list of kinds of exceptions which
have to be dealt with, and a list of requirements for representing rules and
exceptions. It should be noted that in the coming chapters two kinds of
comparisons will be made: firstly, the alternative conceptual methodologies
for dealing with exceptions will be compared; and secondly, the various
nonmonotonic formalisms will be tested on their suitability to cope with
these methodologies. However, in both comparisons the same list of kinds
of exceptions, as well as the same list of requirements for representing them,
will be used.
1Note that, when I speak of an 'exception clause', I do not mean a logic-programming
clause, but an exceptional condition of a rule.

H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997

5.1. Introduction

5.1.1. METHODS OF REPRESENTING RULES AND EXCEPTIONS

In Section 3.1, where the separate occurrence of rules and exceptions in
legislation was discussed, the two ways of dealing with exceptions were
already informally introduced. The first, which is the subject of the present
chapter, is to attach explicit exception clauses to defeasible rules and to
assume the falsity of these clauses unless there is evidence to the contrary.
The idea of this method is to assign the exception clauses in such a way that
in exceptional circumstances only one extension/minimal model/maximal
scenario is obtained, containing the exceptional conclusion. The second
method, which will be discussed in Chapters 6, 7 and 8, leaves the exceptions
implicit and uses metalevel collision rules to choose between conflicting
rules or arguments, in such a way that the most specific rule or
argument is preferred. Since this method provides ways of coping with
multiple extensions, it does not aim to avoid them. In the past few years
another method has been developed, which also leaves exceptions implicit,
but which aims at always obtaining unique answers: this method, which
was briefly discussed in Section 4.1.4, consists of treating specificity as a
principle of the semantics of a logic for defeasible conditionals. In the rest
of this book I shall, for reasons to be explained in Section 7.1, not explicitly
discuss this third method.

5.1.2. KINDS OF EXCEPTIONS

Although in the literature on nonmonotonic reasoning some remarks have
occasionally been made on distinctions between kinds of exceptions (cf. e.g.
the references below), for our purposes a more systematic description will be
useful. Sartor (1991) gives an extensive classification of exceptions in terms
of legal theory; particularly interesting is his discussion in terms of 'burden
of (legal) proof'. It would be interesting to investigate how his classification
can be combined with the distinctions which I shall express in logical
terms. In my classification the examples will be taken from Chapter 3.
A first distinction is between exceptions which give rise to the negation of
the general conclusion, and exceptions which only make the general rule
inapplicable, without themselves concluding the opposite. 30-(2) HPW in
Example 3.1.3 and 1:234-(1) BW in Example 3.1.7 are examples of the first
kind, whereas 2 HPW in Example 3.1.1 and 6:2-(2) BW in Example 3.1.2
are examples of the second kind. Using terms of Pollock (1987) I shall
call the first type of exception 'rebutting defeaters' and the second type
'undercutting defeaters'. Brewka (1991a, pp. 40-1) uses the terms "hard"
and "weak" exceptions and, with respect to legal rules, Sartor (1991) uses
the terms "exceptions to effects" and "exceptions to norms".


Exceptions can also be distinguished with respect to the question of
whether they are themselves also subject to exceptions or not, a distinction
for which I shall use the terms 'soft' versus 'hard' exceptions. Clearly
1:234-(1) BW in Example 3.1.7 is a soft exception; if it is held that all legal
rules are subject to exceptions then it is difficult to find examples of hard
exceptions; otherwise 30-(2) HPW in Example 3.1.3 is a good example.
Exceptions to rebutting defeaters can be distinguished with respect to
the question whether they also reinstate the general rule, or whether they
are the only rule providing the overall conclusion (obviously, exceptions to
undercutting defeaters always reinstate the general rule). Reinstatement
matters if the general conclusion contains more information than the exception
to the exception.
Often undercutting and rebutting defeaters are combined, in particular
hard or soft undercutting with soft rebutting defeaters. This situation can
occur in Example 3.1.1, if 2 HPW is combined with a rule which, say,
permits something which in the HPW is forbidden, but which is itself
subject to exceptions, for example to 6:2-(2) BW of Example 3.1.2.
A type of situation which is strictly speaking not a matter of rule and
exception, but which should nevertheless be included in the discussion, is
the existence of two conflicting rules, neither of which defeats the other.
Examples are cases which in Section 3.3 were called the overdetermination
type of open texture. Below I refer to such situations as 'undecided conflicts'
or, occasionally, as 'the Nixon diamond' (cf. 4.1.2 above). The reason why
they have to be included in the discussion is that in legal practice it will
not always be possible to break the conflict in favour of one of the rules,
for which reason formalisms which cannot cope with them lack sufficient
expressive power when applied to the legal domain. More specifically,
undecided conflicts are adequately formalized only if it is possible to alternatively
represent both conclusions as reasonable opinions.
Since this overview seems to exhaust all kinds of rule-exception relationships
really occurring in the practice of legal reasoning, I confine myself to
investigating the formalization of these kinds.

5.1.3. REQUIREMENTS FOR REPRESENTING RULES AND EXCEPTIONS

In a systematic discussion of the various methods and formalisms it is
also useful to have a list of issues with respect to representing rules and
exceptions. Most of the issues listed below come from the general literature
on nonmonotonic reasoning, except 'structural resemblance', which is an
issue only in AI-and-law research, and 'exclusiveness of specificity', which,
to my knowledge, is new. For general discussions of requirements for legal
knowledge representation the reader is referred to Susskind (1987, pp.
114-16) and Nieuwenhuis (1989, pp. 43-47).

Structural Resemblance
A first issue, discussed in Section 3.1, is which method best preserves the
separation of rules and exceptions in the legal natural-language sources.
Recall that I have defined this as an aspect of the result of the formalization
process: the main point is to avoid a situation where several natural-language
expressions are mixed in one knowledge-base unit; the situation that
one source unit is divided over several knowledge-base units is generally
not regarded as needing to be avoided.

Modularity
This issue was already briefly discussed in Section 3.1: the question is whether a natural-language expression can be formalized without having to consider the rest of the domain. In the literature on nonmonotonic reasoning modularity is one of the main aspects on which nonmonotonic logics and formalization methods are compared (Touretzky, 1984, 1986; Etherington, 1988, pp. 44-5; Loui, 1987, p. 106; and Poole, 1991, p. 295). However, modularity, which is an aspect of the process of formalization, is not always clearly distinguished from structural resemblance, which is an aspect of the result of formalization. One contribution of the present investigation will be to provide reasons why this distinction should be made.
The main disadvantage of non-modular translation is that it increases the complexity of the task of validating and maintaining a knowledge base: if a knowledge engineer has to consider many interactions between elements of the knowledge base, then, particularly in large systems, s/he is likely to make mistakes. This holds not only for designing a knowledge base but also for maintaining it, since with non-modular formalization adding new items to the knowledge base, for example, newly discovered exceptions, makes it necessary to change old rules. In the rest of this book a formalization method that makes it possible to add exceptions without having to change the general rule will be said to support 'modularity of adding exceptions'.

Resemblance to Natural Language


This issue is in fact an instance of what I have called 'structural resemblance'. However, the reason for distinguishing it is that I want to use it for an aspect of an individual source unit, rather than for the relation between different source units: the question now is whether its formalization contains expressions which are not present in the natural-language counterpart. Closeness to natural language enhances readability of the formalization and therefore supports validation and maintenance.
REPRESENTING EXPLICIT EXCEPTIONS 105

Exclusiveness of Specificity
Is specificity the only possible criterion in determining the correct outcome, or can other criteria also be applied? If so, can they only be applied alternatively or also in combination? The importance of this issue is given by the fact that in law several standards are combined: for example, in Section 3.1.4 we have seen that not only the Lex Specialis principle but also the Lex Superior and the Lex Posterior principle are used. This issue will be discussed in detail in Chapter 8.

Implementation
Obviously the various methods and formalisms will be evaluated with respect to the prospects for implementation in legal knowledge-based systems.

Expressiveness
This is of course a very important point of comparison between rival logical systems intended to model the same kinds of reasoning. It involves questions like: Can all distinctions between kinds of exceptions be made? Is the exceptional conclusion the only one or the preferred one? Are alternative answers possible in case of undecided conflicts? Does an exception to an exception reinstate the general rule? Is a defeasible form of modus tollens possible?
In the last section of this chapter I return to these points, and in Chapter 10 I shall use them again, in a comparison of the exception clause approach with the methods developed in Chapters 6 to 8. The rest of the present chapter is devoted to the explicit exception clause approach: Sections 5.2 to 5.5 will investigate its use in respectively default logic, circumscription, Poole's framework and logic programming, after which Section 5.6 draws these investigations together by, among other things, a schematic overview of how the exception clause method can be applied in each of the nonmonotonic logics. It should be noted that Sartor (1991) has made similar investigations, particularly with respect to logic programming (including recent developments) and Poole's framework for default reasoning; some of my observations concerning these formalisms are also made by Sartor.

5.2. Default Logic


As noted in Section 5.1.1, the application of the exception clause method in default logic should result in a unique extension, in which the exceptional conclusion holds. The problem which should be solved is that, although in default logic new information can invalidate previous inferences, it does not always validate the exceptional conclusion. This was shown in Section 4.1.1, in the default-logic formalization of Example 3.1.4: with two normal defaults the default theory in this example has two extensions, whereas intuitively only the extension with Maximum = 12 is intended. As already noted there, one way of validating the exceptional conclusion is to use seminormal defaults, in which the nonnormal part of the justification stands for an exceptional condition: more specifically, a unique extension can be enforced by adding the negation of the prerequisite of the more specific default to the justification of the more general default. Thus adding the exceptional fact to F or deriving it from another default blocks the applicability of the more general default, which results in only one extension, with the exceptional conclusion. Below I investigate how this method can deal with the various types of exceptions, first with specific and then with general exception clauses. Most of these formalizations are applications to the legal domain of ideas developed by others. Sometimes, however, these ideas had to be combined in an (as far as I know) relatively new way: this holds in particular for my treatment of soft rebutting defeaters.
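The seminormal trick can be checked mechanically on the maximum-penalty example. The Python sketch below is a naive propositional stand-in for Reiter's fixpoint definition: it guesses a set of applied defaults and verifies a stable-set condition, which suffices for this tiny theory. The literal encoding and the atom names (kill, intent, duel, max12, max15) are my own grounding, not part of the formalism.

```python
# Literals are strings; "~p" is the negation of "p". A default is a
# triple (prerequisites, justifications, consequent). Consistency means
# no complementary literals and no explicitly excluded pair.

from itertools import combinations

def neg(l): return l[1:] if l.startswith("~") else "~" + l

def consistent(lits, exclusions):
    if any(neg(l) in lits for l in lits):
        return False
    return not any(a in lits and b in lits for a, b in exclusions)

def extensions(facts, defaults, exclusions):
    """Guess-and-verify: a candidate set of applied defaults is an
    extension iff it equals the set of defaults applicable w.r.t. the
    resulting belief set (a weaker check than full groundedness, but
    adequate for this example)."""
    exts = []
    for r in range(len(defaults) + 1):
        for chosen in combinations(range(len(defaults)), r):
            e = set(facts) | {defaults[i][2] for i in chosen}
            if not consistent(e, exclusions):
                continue
            applicable = {i for i, (pre, jus, _) in enumerate(defaults)
                          if set(pre) <= e
                          and all(consistent(e | {j}, exclusions) for j in jus)}
            if applicable == set(chosen):
                exts.append(e)
    return exts

# d1 (general rule): kill & intent : max15 & ~duel / max15 -- seminormal:
#   the justification carries the negated prerequisite of the exception.
# d2 (exception):    kill & duel : max12 / max12
d1 = (["kill", "intent"], ["max15", "~duel"], "max15")
d2 = (["kill", "duel"],   ["max12"],          "max12")

exts = extensions(["kill", "intent", "duel"], [d1, d2],
                  exclusions=[("max12", "max15")])
print(len(exts))            # a unique extension ...
print("max12" in exts[0])   # ... containing the exceptional conclusion
```

Dropping `"~duel"` from d1's justification turns the theory back into one with two extensions, which is exactly the multiple-extension problem the seminormal clause is meant to solve.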

5.2.1. SPECIFIC EXCEPTION CLAUSES

As noted in Section 4.1.1, the exception clause method was originally developed by Reiter & Criscuolo (1981) to deal with soft rebutting defeaters. Consider the following formalization of Example 3.1.7, in which d1 mixes 3:32-(1) BW and 1:234-(1) BW and d2 mixes 1:234-(1,2) BW.

d1:  Person(x) : Has_capacity(x) ∧ ¬Minor(x)
     ───────────────────────────────────────
     Has_capacity(x)

d2:  Minor(x) : ¬Consent_of_repr(x) ∧ ¬Has_capacity(x)
     ─────────────────────────────────────────────────
     ¬Has_capacity(x)

F: {Minor(Mary), ∀x.Minor(x) → Person(x)}

After adding ¬Minor(x) to the justification of d1, this default is blocked for Mary, which induces a unique extension, containing ¬Has_capacity(Mary). Thus formalized, d2 is a soft exception, which becomes apparent if the next formalization of 1:234-(2) BW is added to the set of defaults Δ.

d3:  Minor(x) ∧ Consent_of_repr(x) : Has_capacity(x)
     ───────────────────────────────────────────────
     Has_capacity(x)

If now Consent_of_repr(Mary) is added to the facts, then d2 is blocked and the only extension now contains Has_capacity(Mary). Note that this fact is in the extension by virtue of d3, and not because of d1, since d1 is still inapplicable, because Minor(Mary) is still given. Note also that, when thus formalized, d3 cannot be blocked by a new default but only by a fact: therefore if a new soft exception to d3 is found, d3 has to be changed to add the negation of the prerequisite of the new exception to its justification, which violates the aim of modular adding of exceptions. To prevent this, d3 should have a general exception clause, not referring to particular exceptions, as will be illustrated in the next subsection.
Formalizing the interpretation of 1:234-(1) BW as a hard rebutting defeater of 3:32 BW is very easy, viz. by, instead of adding the default d2 to Δ, adding the first-order formula

∀x.Minor(x) → ¬Has_capacity(x)

to F. However, undercutting defeaters cannot be represented with specific exception clauses; they need general exception clauses, which will also be shown in the next subsection.
As noted in Chapter 4, the key difference between using exception clauses in default logic and in standard logic is that now the absence of the exception need not be shown in order to apply the general rule; its absence needs only to be consistent with what is known. However, in using exception clauses that denote a specific exception, it still has some of the disadvantages mentioned in Chapter 3: firstly, it mixes several source units in one KB unit; secondly, adding new soft exceptions makes it necessary to rewrite old rules; and, finally, as the number of exceptions and exceptions to exceptions grows, the complexity of the formalization also grows. For discussions of these disadvantages, see e.g. Touretzky (1984) and Etherington (1988, pp. 44-5, 102-3). For these reasons it is usually preferred to use general exception clauses, which is the method that was semiformally explained in Section 3.1.3. In line with this, I shall in the rest of this chapter mainly discuss the use of general exception clauses.

5.2.2. GENERAL EXCEPTION CLAUSES


The idea is now to use clauses which, instead of referring to a particular exception, as, for example, the condition ¬Minor(x) in d1 does, only express the fact that an exception has occurred. Such conditions correspond to natural-language phrases such as 'unless stated otherwise'. As we will see in the rest of this chapter, the technique of general exception clauses has been used in many formalisms. Sometimes a clause standing for applicability is used, which is added to the general rule in its positive form, and sometimes clauses are used expressing abnormality or exceptionality, which occur in the general rule in negated form. Since in default logic negation has no special status, as it has in circumscription and logic programming, it can deal with both kinds of clauses; I shall use applicability clauses, using the predicate appl.
In the literature at least three ways to formulate applicability clauses can be found. The first is to attach to the predicate appl an index denoting the name of the rule, which results in expressions of the form appln(x1, ..., xm).
Thus there are as many appln predicates as there are defaults. Another way is to use McDermott's (1982) idea to encode the name of a rule as a constant instead of as a predicate, which results in expressions of the form appl(n, x1, ..., xm). As observed by Brewka (1991a, p. 36), the second naming technique is more flexible, since it allows one to say things about rules and to quantify over classes of rules, which in law has very natural applications: consider e.g. Examples 3.1.1 and 3.1.2, in which undercutting defeaters render not just one norm but a group of norms inapplicable.
Note that in both naming methods the appl predicate has places for other arguments besides the rule name. This is because it should be possible to express the (in)applicability of a default for some but not all individuals: if, for example, Tweety is a penguin and Polly a raven then the default 'Birds fly' should only be blocked for Tweety and not for Polly. To make this possible the appl predicate must also have arguments for all free variables occurring in the default. A complication is that not all defaults have the same arity: as a consequence, in the second method we need not just one appl predicate, but several such predicates appli with different arities i, where i is the number of free variables occurring in the default plus one (since there must be an extra place for the name of the default).
This problem is avoided by the third method, in which just one applicability predicate is used and in which a tuple of terms (n, x1, ..., xm) is replaced by one function expression n(x1, ..., xm), resulting in a formula appl(n(x1, ..., xm)). Thus all applicability atoms are of arity 1, for which reason formulas that quantify over rules need not be duplicated for every arity. It is this naming method that I shall use in the rest of this book. However, for readability I omit the outer pair of parentheses, which results in appln(x1, ..., xm).
Recall that the aim is to use the applicability clauses in such a way that a unique extension is obtained. In the examples below an applicability clause is not always necessary to obtain one extension but, as will be explained below, these clauses also support modular adding of exceptions, and therefore they will be added to every rule which is possibly subject to exceptions.

Hard Exceptions
Consider the following formalization of Example 3.1.1, in which, as a general rule made inapplicable by 2 HPW, section 6-(2) HPW is used, a rule which declares void any contract clause giving an unfair profit to a third person.

d4:  Lease_clause(x) ∧ Unfair_profit_to_third(x) : appl6-(2)(x) ∧ Void(x)
     ────────────────────────────────────────────────────────────────────
     Void(x)

In saying that the rent act is not applicable to leases concerning short-termed usage, section 2 HPW is obviously an undercutting defeater; if it is in addition regarded as a hard one, then it should be formalized as a fact: the set of facts F must then contain a formula of the following form.

∀x, y. Short_termed_contract(x) ∧ In_HPW(y) → ¬appl(y)

If F further only contains

Lease_clause(a) ∧ Unfair_profit_to_third(a) ∧ In_HPW(6-(2)(a))

then the default theory (F, {d4}) has only one extension, containing Void(a); if the exceptional case Short_termed_contract(b) is added to F, then F implies ¬appl6-(2)(a), which makes the justification of d4 inconsistent with what is known, for which reason the general rule is blocked. Hard rebutting defeaters are also easy to express, viz. by adding to F a rule like

∀x.Conditions(x) → ¬Void(x)

Soft Undercutting Defeaters
How can applicability clauses be used for soft exceptions? The key problem is to allow for the possibility that exceptions are themselves overridden, but to still obtain default theories with just one extension. It seems that this can be assured with the following formalization methodology, of which the basic idea is very simple: every default has an extra justification clause expressing its applicability. For undercutting defeaters this will be illustrated by a new formalization of Example 3.1.1, now with 2 HPW interpreted as a soft undercutting defeater. Then instead of as a set of facts 2 HPW should be formalized as a set of defaults, being of the form

d5:  Short_termed_contract(x) ∧ In_HPW(y) ∧ y ≠ d5(x, y) : appld5(x, y) ∧ ¬appl(y)
     ─────────────────────────────────────────────────────────────────────────────
     ¬appl(y)

If it is now only given that a is a contract clause giving an unreasonable profit to a third person, then d4 can be applied to obtain the conclusion Void(a); if the information that a is a short-termed contract and that d5 ≠ d4 is also added, then d4 is blocked by d5. To see why this default theory has only one extension imagine that d4 is applied because appl6-(2)(a) is consistent with what is known. Then ¬appl6-(2)(a) is also still consistent with what is known, for which reason d5 can also be applied; this, however, makes appl6-(2)(a) inconsistent with what is known, since now its negation is in the extension. If, on the other hand, d5 is not applied to a, then not all applicable defaults are applied. Hence d4 cannot be applied; the only extension is the one in which d5 is applied, which contains ¬appl6-(2)(a) and does not contain Void(a).
What could be the exception making 2 HPW inapplicable? It might be 6:2-(2) BW of Example 3.1.2. Consider its next formalization.

d6:  Creditor_debtor_rule(x) ∧ Unreasonable(x) ∧ x ≠ d6(x) : appld6(x) ∧ ¬appl(x)
     ────────────────────────────────────────────────────────────────────────────
     ¬appl(x)

Soft Rebutting Defeaters
Soft rebutting defeaters can only be expressed if they are combined with undercutting defeaters. Consider the next formalization of Example 3.1.7.

F: {∀x.Minor(x) → Person(x)}

d7:  Person(x) : appld7(x) ∧ Has_capacity(x)
     ───────────────────────────────────────
     Has_capacity(x)

At first sight the following formalization of 1:234-(1) BW would seem to suffice.

d8:  Minor(x) : appld8(x) ∧ ¬appld7(x) ∧ ¬Has_capacity(x)
     ────────────────────────────────────────────────────
     ¬appld7(x) ∧ ¬Has_capacity(x)

However, this results in the theory having two extensions, since if d7 is applied to obtain Has_capacity(Mary), the justification of d8 is not consistent any more with what is known, since it implies ¬Has_capacity(Mary). Therefore, there is also an extension with Has_capacity(Mary), which can only be avoided by splitting d8 into the next two defaults:

d9:  Minor(x) : appld9(x) ∧ ¬appld7(x)
     ─────────────────────────────────
     ¬appld7(x)

d9': Minor(x) : appld9(x) ∧ ¬Has_capacity(x)
     ───────────────────────────────────────
     ¬Has_capacity(x)

This example illustrates another possibility of the naming technique: in both d9 and d9' the applicability clause refers to d9, to express the fact that they are both based on the same source, viz. 1:234-(1) BW; thus an exception to this exception needs only to contain one inapplicability clause, viz. one for d9, as becomes apparent if 1:234-(2) BW is formalized with the same method.

d10:  Minor(x) ∧ Consent_of_repr(x) : appld10(x) ∧ ¬appld9(x)
      ───────────────────────────────────────────────────────
      ¬appld9(x)

d10': Minor(x) ∧ Consent_of_repr(x) : Has_capacity(x) ∧ appld10(x)
      ────────────────────────────────────────────────────────────
      Has_capacity(x)

Now if d10 and d10' are applicable, both d9 and d9' are blocked. Note also that in that case d7 is reinstated since, although Minor(Mary) still holds, this fact does now not give rise to the inapplicability of d7, since d9 is made inapplicable by d10.
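The blocking-and-reinstatement pattern can be verified with a small propositional sketch, grounded for Mary. The guess-and-verify loop is a naive stand-in for Reiter's fixpoint definition, and the atoms below are my own grounding of d7, d9, d9' and d10, d10'; Person(Mary) is supplied as a fact directly, since this toy checker does no first-order chaining.

```python
# Literals are strings ("~p" negates "p"); a default is a triple
# (prerequisites, justifications, consequent). A candidate set of
# applied defaults counts as an extension iff it equals the set of
# defaults applicable w.r.t. the resulting belief set.

from itertools import combinations

def neg(l): return l[1:] if l.startswith("~") else "~" + l

def consistent(lits):
    return not any(neg(l) in lits for l in lits)

def extensions(facts, defaults):
    exts = []
    for r in range(len(defaults) + 1):
        for chosen in combinations(range(len(defaults)), r):
            e = set(facts) | {defaults[i][2] for i in chosen}
            if not consistent(e):
                continue
            applicable = {i for i, (pre, jus, _) in enumerate(defaults)
                          if set(pre) <= e
                          and all(consistent(e | {j}) for j in jus)}
            if applicable == set(chosen):
                exts.append(e)
    return exts

rules = [
    (["person"],           ["appl_d7", "has_cap"],   "has_cap"),    # d7
    (["minor"],            ["appl_d9", "~appl_d7"],  "~appl_d7"),   # d9
    (["minor"],            ["appl_d9", "~has_cap"],  "~has_cap"),   # d9'
    (["minor", "consent"], ["appl_d10", "~appl_d9"], "~appl_d9"),   # d10
    (["minor", "consent"], ["has_cap", "appl_d10"],  "has_cap"),    # d10'
]

no_consent = extensions(["minor", "person"], rules)
with_consent = extensions(["minor", "person", "consent"], rules)
print(len(no_consent), len(with_consent))   # a unique extension in both cases
print("~has_cap" in no_consent[0])          # d9' denies capacity
print("has_cap" in with_consent[0])         # d7 is reinstated via d10
```

Note how the shared name appl_d9 lets the single conclusion of d10 block both d9 and d9' at once, after which d7 is no longer undercut.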
This example can also be used to explain why using applicability clauses supports modularity of adding exceptions (see also Etherington, 1988, p. 103; Brewka, 1991a, p. 117). d7 would also be blocked by d9/d9' if d9 and d9' did not have an applicability clause, but in that case the addition of the exception d10/d10' would make it necessary to change d9 and d9' by still giving them such a clause, since without it the resulting default theory would have two extensions.

Undecided Conflicts
In default logic, how can the exception clause approach deal with undecided conflicts between defaults? In the literature on nonmonotonic reasoning the standard example is the so-called 'Nixon diamond' (see above, Section 4.1.2). An example from the legal domain is a modification of Example 3.1.4 in which the agreement to duel on life-and-death is not regarded as implying the purpose to kill, in which interpretation neither of the two norms is an exception to the other. In the formalization methodology developed in this section this is rendered by

d1:  Kill ∧ Intentional : Maximum = 15 ∧ appld1
     ──────────────────────────────────────────
     Maximum = 15

d2:  Kill ∧ Life_and_death_duel : Maximum = 12 ∧ appld2
     ──────────────────────────────────────────────────
     Maximum = 12

F: {Life_and_death_duel → Intentional, Kill,
    Life_and_death_duel, ¬(Maximum = 12 ∧ Maximum = 15)}
Δ: {d1, d2}

For reasons similar to those in Section 4.1.1 this default theory has two extensions: one containing Maximum = 15 but not Maximum = 12, and one the other way around. According to the credulous consequence notion defined above both conclusions can be derived, while according to the sceptical notion the disjunction Maximum = 12 ∨ Maximum = 15 is derivable, since by virtue of the deductive validity of φ → (φ ∨ χ) this formula is in both extensions. In conclusion, default logic nicely represents undecided conflicts.
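The two extensions, and with them the credulous/sceptical contrast, can be reproduced with the same kind of naive propositional checker used earlier: each extension contains exactly one of the two maxima, so neither is a sceptical conclusion on its own, while their disjunction holds in both. The atom names are mine, and the exclusion pair plays the role of ¬(Maximum = 12 ∧ Maximum = 15); duel → Intentional is folded in by asserting intent directly.

```python
# A default is (prerequisites, justifications, consequent); literals are
# strings with "~" for negation, plus an explicit exclusion pair.

from itertools import combinations

def neg(l): return l[1:] if l.startswith("~") else "~" + l

def consistent(lits, exclusions):
    if any(neg(l) in lits for l in lits):
        return False
    return not any(a in lits and b in lits for a, b in exclusions)

def extensions(facts, defaults, exclusions):
    exts = []
    for r in range(len(defaults) + 1):
        for chosen in combinations(range(len(defaults)), r):
            e = set(facts) | {defaults[i][2] for i in chosen}
            if not consistent(e, exclusions):
                continue
            applicable = {i for i, (pre, jus, _) in enumerate(defaults)
                          if set(pre) <= e
                          and all(consistent(e | {j}, exclusions) for j in jus)}
            if applicable == set(chosen):
                exts.append(e)
    return exts

d1 = (["kill", "intent"], ["max15", "appl_d1"], "max15")
d2 = (["kill", "duel"],   ["max12", "appl_d2"], "max12")

exts = extensions(["kill", "intent", "duel"], [d1, d2],
                  exclusions=[("max12", "max15")])
print(len(exts))   # two extensions: an undecided conflict
print(all(("max12" in e) != ("max15" in e) for e in exts))
```

A credulous reasoner may pick either extension; a sceptical one can assert only what holds in both, here the disjunction of the two maxima.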

5.2.3. EVALUATION

To summarize the results obtained so far, while specific exception clauses can only deal with rebutting defeaters, the general exception clause approach seems to be sufficiently expressive to formalize all kinds of exceptions listed in the previous section. As noted above, others have also studied the use of seminormal defaults for representing exceptions and this has resulted in some points of criticism. Most of them relate to the fact that the logic of seminormal default theories is more complicated than that of normal ones. Firstly, the property of semimonotonicity, which roughly says that adding new defaults to a default theory does not invalidate extensions, only holds for normal default theories, for which reason Reiter's ideas for a rather simple top-down proof procedure only apply to normal defaults. Secondly, in Section 4.1.1 we have seen that, unlike normal default theories, seminormal ones are not guaranteed to have extensions.
Although these are certainly serious objections, seminormal default logic still has its merits. Firstly, research has been done to identify large classes of seminormal default theories which do have extensions, for example, by Etherington (1988) himself. Moreover, below we will see that if the aim of obtaining a unique extension can be realized, an interesting fragment of seminormal default logic can be rather efficiently implemented in logic programming.

5.3. Circumscription

The use of exception clauses is also a well-studied technique in the minimization approaches, starting originally with McCarthy (1986). In circumscription this method is applied by adding to the antecedent of a general rule the negation of an atomic formula expressing exceptionality, and by minimizing the extension of the predicate involved. Above it was explained that in default logic the expressions appl(x) and ¬inappl(x) have, when added to the justification of a default, logically completely the same effect; however, in circumscription it becomes crucial to use a negated atom, since in the minimal-model approaches only positive information can be minimized. I discuss this method only as it is applied in circumscription; with respect to logic programming I shall focus on its procedural aspects, involving the interpretation of negation as the failure to derive the opposite. In order to keep the predicate terms close to legal language, I use an exception predicate exc instead of an abnormality predicate ab, which is normally used in circumscription.
As already briefly indicated in the discussion of Example 4.1.13, the choice of the circumscription policy is crucial. The policy which was described there, and which is often recommended as the best one for default reasoning, consists of minimizing the extension of the exc predicate, keeping fixed the extensions of the 'input predicates', which are the predicates concerning the given facts of the case, and letting vary the extensions of the 'output predicates', which are the predicates concerning the conclusions we are interested in. Usually the input predicates will be all predicates besides the exc predicate occurring in the antecedent of a conditional, and the output predicates will be the predicates occurring in the consequent.
In all examples below this circumscription policy will be implicitly applied; however, in doing so, we will see that some further refinements are necessary.

Hard Undercutting Defeaters
With this method hard undercutting defeaters, like in Example 3.1.1, can be expressed in the following way (in which (1) stands for 6-(2) HPW and (2) for 2 HPW).

(1) ∀x, y. x is a lease ∧ y is a clause of x ∧ y gives unfair
    profit to a third ∧ ¬exc1(x, y) → y is void
(2) ∀x, y. x is a short_termed contract ∧ y is a HPW section ∧
    y ≠ 2(x, y) → exc(y)
(3) a is a lease ∧ a is a short_termed contract ∧ b is a
    clause of a ∧ b gives unfair profit to a third
    ∧ 1(a, b) is a HPW section

Nonmonotonicity can be illustrated by first skipping the second conjunct of (3): then in all minimal models of {(1)-(3)} ¬exc1(a, b) is true, for which reason b is void holds; after adding the second conjunct of (3), however, in all models of {(1)-(3)}, and therefore also in all its minimal models, exc1(a, b) is true; furthermore, since the consequent of a true material implication can both be true and false, b is void is true in some minimal models but false in other ones: therefore in case of exc1(a, b) nothing can be concluded any more about the legal status of b, which is the intended effect of an undercutting defeater.
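The two runs just described can be reproduced with a brute-force propositional sketch, grounded for lease a with clause b: the input atoms are fixed, void varies, and only exc1 is minimized. Real circumscription is second-order, so this enumeration only illustrates the policy; the clause encodings and atom names are my own (the HPW-section and y ≠ 2(x, y) conjuncts are folded into the grounding).

```python
# Enumerate truth assignments over the varying atoms, keep those that
# satisfy the theory, and select the models minimal on the exc atom.

from itertools import product

def models(fixed, varying, theory):
    found = []
    for vals in product([False, True], repeat=len(varying)):
        m = dict(fixed)
        m.update(zip(varying, vals))
        if all(clause(m) for clause in theory):
            found.append(m)
    return found

def minimal(ms, minimized):
    def smaller(m1, m2):   # m1 strictly below m2 on the minimized atoms
        return (all(m2[a] or not m1[a] for a in minimized)
                and any(m2[a] and not m1[a] for a in minimized))
    return [m for m in ms if not any(smaller(o, m) for o in ms)]

theory = [
    # (1): lease & unfair & ~exc1 -> void
    lambda m: not (m["lease"] and m["unfair"] and not m["exc1"]) or m["void"],
    # (2): short -> exc1   (grounded hard undercutter)
    lambda m: not m["short"] or m["exc1"],
]

def void_in_minimal_models(short):
    fixed = {"lease": True, "unfair": True, "short": short}
    ms = models(fixed, ["exc1", "void"], theory)
    return {m["void"] for m in minimal(ms, ["exc1"])}

print(void_in_minimal_models(False))  # without the exceptional fact
print(void_in_minimal_models(True))   # with it: void is left undetermined
```

Without the short-term fact all minimal models make void true; once it is added exc1 is forced, and void is true in some minimal models and false in others, so nothing follows about b.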

Hard Rebutting Defeaters
Hard rebutting defeaters do not give rise to additional problems. Assume that in Example 3.1.4 the rule about life-and-death duels is interpreted as an exception of such a kind. Then the formalization becomes

287 Sr:   ∀x, y. x kills y ∧ x acts with intent ∧ ¬exc287(x, y)
          → maximum penalty for x is 15 years

154-(4):  ∀x, y. x kills y ∧ x and y duel on life-and-death
          → maximum penalty for x is 12 years

Again it is assumed that enough axioms are added to express the incompatibility of the two consequents. If now only

Charles kills Henry ∧ Charles acts with intent

is given, then in all minimal models of these premises exc287(Charles, Henry) is false, for which reason all those models make maximum penalty for Charles is 15 years true. If, however, also

Charles and Henry duel on life-and-death

is given, then the theory classically implies both maximum penalty for Charles is 12 years and exc287(Charles, Henry). The reason is that then in all models the antecedent of 154-(4) Sr and therefore also its consequent is true, because of which with the additional axioms in all models maximum penalty for Charles is 15 years is false; but then the material implication 287 Sr can in all models only be true if its antecedent is false as well, and since the first two conjuncts are given, ¬exc287(Charles, Henry) must in all models be false.

Soft Undercutting Defeaters
As already observed by McCarthy (1986, pp. 105-6), and as illustrated above with Example 4.1.15, problems arise with soft undercutting defeaters. In that example (9) is such a defeater, and in Example 3.1.1 section 2 HPW can also be interpreted as such an exception, in which case (2) should be changed into

(2') ∀x, y. x is a short-termed contract ∧ y is a HPW section ∧
     y ≠ 2'(x, y) ∧ ¬exc2'(x, y) → exc(y)

The problem is that if the input atoms of the exception are true for, say, the contract a with clause b and if y = 1(a, b), (2') is equivalent to a disjunction exc2'(a, 1(a, b)) ∨ exc1(a, b), which has two minimal models, whereas intuitively the minimal model which makes exc1(a, b) true and exc2'(a, 1(a, b)) false is the intended one, since there is no reason to make the latter atom true: i.e. it is not supported by a rule with this atom as the head. As explained in Chapter 4, this has motivated the development of so-called prioritized circumscription, in which predicates or atoms of lower priority are allowed to vary if predicates or atoms of higher priority are minimized: thus the intended model can be obtained by giving exc2'(a, 1(a, b)) priority over exc1(a, b).
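The contrast between plain and prioritized minimization can be made concrete with a brute-force sketch, again grounded for lease a with clause b: exc2p stands for exc2'(a, 1(a, b)) and exc1 for exc1(a, b), with the fixed input atoms folded away. The lexicographic comparison below is only a propositional illustration of prioritized circumscription, and the encodings are mine.

```python
# Enumerate the models of (1) ~exc1 -> void and (2') ~exc2p -> exc1,
# then compare plain pointwise minimization of {exc2p, exc1} with a
# prioritized (lexicographic) minimization of exc2p before exc1.

from itertools import product

ATOMS = ["exc2p", "exc1", "void"]

def models():
    found = []
    for vals in product([False, True], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vals))
        if ((m["exc1"] or m["void"])              # (1): ~exc1 -> void
                and (m["exc2p"] or m["exc1"])):   # (2'): ~exc2p -> exc1
            found.append(m)
    return found

def smaller(m1, m2, atoms):       # m1 strictly below m2 on these atoms
    return (all(m2[a] or not m1[a] for a in atoms)
            and any(m2[a] and not m1[a] for a in atoms))

def smaller_lex(m1, m2, levels):  # levels ordered from high to low priority
    for level in levels:
        if smaller(m1, m2, level):
            return True
        if any(m1[a] != m2[a] for a in level):
            return False
    return False

ms = models()
plain = [m for m in ms if not any(smaller(o, m, ["exc2p", "exc1"]) for o in ms)]
prio  = [m for m in ms if not any(smaller_lex(o, m, [["exc2p"], ["exc1"]])
                                  for o in ms)]

print(any(m["exc2p"] for m in plain))            # the unintended model survives
print(all(m["exc1"] and not m["exc2p"] for m in prio))
```

Plain minimization leaves the two incomparable minimal classes described in the text; with exc2p minimized at higher priority, only the intended models (exc1 true, exc2' false, void undetermined) remain.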

Soft Rebutting Defeaters
With this modification soft rebutting defeaters can without additional problems be formalized in the same way as in default logic, combined with an undercutting defeater. Consider the following formalization of Example 3.1.7, in which the undercutting defeater is a hard one and in which (4) stands for 3:32-(1) BW, (5) and (6) for 1:234-(1) BW and (7) for 1:234-(2) BW.

(4) ∀x. x is a person ∧ ¬exc4(x) → x has legal capacity
(5) ∀x. x is a minor → exc4(x)
(6) ∀x. x is a minor ∧ ¬exc5(x) → ¬ x has legal capacity
(7) ∀x. x is a minor ∧ x has consent of a legal representative
    ∧ ¬exc7(x) → exc5(x)
Note that in order to capture the directional effect of the rules for, say, Mary, the minimization of the exception atoms has to be prioritized as

exc7(Mary) > exc5(Mary) > exc4(Mary)

More on this will be said in the section on logic programming.

Chaining Defaults and Hard Undercutting Defeaters
As McCarthy (1986, pp. 104-5) observes, problems similar to those with soft undercutting defeaters occur with hard undercutting defeaters, if they are chained with other defeasible rules. Assume that in Example 3.1.1 a case law rule classifies a certain situation as the existence of a short-termed lease. Since, as argued in Section 3.3.2, legal classification rules are defeasible, this rule can in the present methodology best be formalized as

(8) ∀x.Situation(x) ∧ ¬exc8(x) → x is a short-termed contract

If chained with (2) above, this results in

∀x, y.Situation(x) ∧ y is a HPW section ∧ y ≠ 2(x, y) ∧
¬exc8(x) → exc(y)

which, if the input predicates are true for a certain contract a with clause b and for y = 1(a, b), is again equivalent to the disjunction

exc8(a) ∨ exc1(a, b)

which has two minimal models, whereas intuitively again the model in which only exc1(a, b) holds is intended.
Note that this kind of problem does not occur in default logic, since defaults are inference rules, which gives them a directional nature: for example, the soft interpretation of 2 HPW becomes in default logic

x is a short-termed contract : y is a HPW section
∧ y ≠ 2'(x, y) ∧ ¬exc2'(x, y)
─────────────────────────────────────────────────
exc(y)

and the directional nature of this default, in which the applicability of (2) is used to derive the inapplicability of (1), prevents the construction of the problematic disjunction ¬appl1(a, b) ∨ ¬appl2(a, 1(a, b)).
This problem of circumscription is not merely technical, since the two situations in which it arises often occur in legal reasoning: obviously, in actual reasoning rules are often chained and, furthermore, one of the conclusions of Chapter 3 was that almost all legal rules are subject to implicit exceptions, which makes it very natural to interpret undercutting defeaters as soft ones. This being true it becomes desirable to find general criteria for assigning the priorities, since if the priority ordering must be determined for each individual case, then, as also noted by Etherington (1988, p. 47), the formalization process becomes hopelessly non-modular. For some classes of circumscriptive theories these general criteria indeed exist, but their discussion must be postponed to the subsection on logic programming.

Undecided Conflicts
The final situation which has to be considered is the Nixon diamond, for which I use the same legal version as in default logic. If in Example 3.1.4 both penal rules are formalized with an exception predicate (which is the only way of avoiding inconsistency) then they become

287:     ∀x, y. x kills y ∧ x acts with intent ∧ ¬exc287(x, y)
         → maximum penalty for x is 15 years

154-(4): ∀x, y. x kills y ∧ x and y duel on life-and-death
         ∧ ¬exc154-(4)(x, y) → maximum penalty for x
         is 12 years

For the same reasons as in Example 4.1.14, this has two minimal models, one with only exc287(Charles, Henry) true, and one with only exc154-(4)(Charles, Henry) true; in both models the disjunction

maximum penalty for Charles is 12 years ∨
maximum penalty for Charles is 15 years

is the strongest formula which holds for the output predicate, for which reason no more than this disjunction can be derived. Note that in circumscription it is impossible to present both disjuncts alternatively as acceptable beliefs given the facts and legal rules; in this respect circumscription is less expressive than default logic, in which the disjuncts can be presented as contained in different extensions.

EVALUATION

In conclusion, if general exception clauses are used, then all kinds of exceptions can also be formalized in circumscription. However, whereas in default logic it is always possible to formalize in such a way that one unique extension results, in circumscription priorities sometimes have to be used to choose between multiple minimal models. This is particularly necessary in two situations which are very common in the legal domain, viz. situations with soft undercutting defeaters, and situations with hard undercutting defeaters of which the input predicate is made true by a defeasible rule. Unless general criteria for prioritizing minimization can be expressed, this decreases the modularity of the formalization process. This difference between default logic and circumscription is mainly caused by the fact that unlike a default, which is an inference rule, the material implication is not directional.

5.4. Poole's Framework for Default Reasoning


Poole's framework for default reasoning is structurally very similar to de-
fault logic, with the same distinction between a set of facts and a set
of defaults, and the same possibility of multiple extensions. Therefore it
is not surprising that in Poole's framework the same problem has to be
solved as in default logic: although new information can invalidate general
conclusions, it not always validates the exceptional one, since sometimes
it merely results in alternative explanations for contradictillg facts. To
deal with this problem, Poole has investigated both the explicit and the
implicit approach; the first in Poole (1985) and the second in Poole (1988).
His implicit approach, consisting of using specificity as a metaprinciple for
comparing explanations, will be discussed in the next chapter; the present
section is devoted to the way Poole uses general exception clauses.
One of the differences between default logic and Poole's framework
is the absence in Poole's defaults of the analogue of a justification in
Reiter's defaults, and this difference prevents using exactly the same style
of formalization. The reason is that formalizing a default 'if φ then ψ' with
exception χ as a member of the set Δ of defaults
(φ ∧ ¬χ) → ψ
is not sufficient, since then there is in normal circumstances no way of
assuming ¬χ. To obtain this effect,
¬χ
must also be added to Δ, but then the material implication can just as well be
added to the facts as to the defaults; and this is what Poole does.
Like Poole (1988) I shall only discuss this method with general exception
clauses. Poole represents these clauses by means of a predicate expressing
the name of the default, with an arity corresponding to the number of
free variables of the default. More specifically, on the basis of a given
default theory (F, Δ) Poole constructs a new default theory (F', Δ') in
the following way: for each member φ of Δ with free variables x1, ..., xn
he formulates a predicate Name_φ with arity n; then, instead of φ, he adds
its name Name_φ(x1, ..., xn) to Δ' and he adds a formula
∀x1, ..., xn. Name_φ(x1, ..., xn) → φ
not to Δ' but to the facts F'. Thus the assumption in normal cases that
the applicability clause is true is modelled by the fact that this clause
is a member of Δ'. It is instructive to see what the result is if φ is a
material implication (ψ → χ), which it will normally be: then this formula
is equivalent to
∀x1, ..., xn. ψ ∧ Name_φ(x1, ..., xn) → χ
which clearly reveals that this is in fact the applicability clause approach.
Actually Poole's naming technique is the first method explained above
in Section 5.2.2, with a predicate appl_i, where i is the name of the default.
As remarked there, this technique is less flexible than denoting the name of
a rule by a function expression, which is the technique used throughout this
chapter. The latter technique can also be used without any difficulty in Poole's
framework, and because of its advantages I shall do so in the remainder of this section.
With this methodology, how can the various kinds of exceptions be
modelled in Poole's framework? Below I focus on scenarios instead of on
extensions, making use of Poole's (1988, p. 30) proof that every scenario is
contained in at least one extension. In all cases a default
(1) appl(x)
should be added to Δ. Now first it will be shown that undecided conflicts
between rules can be expressed in Poole's framework without difficulties:
assume that the next two conflicting rules are in F, together with the truth
of their 'input predicates' and the information that nobody is both a hawk
and a dove.
(2) ∀x. x is a quaker ∧ appl(2(x)) → x is a dove
(3) ∀x. x is a republican ∧ appl(3(x)) → x is a hawk
(4) Nixon is a quaker ∧ Nixon is a republican
(5) ∀x. ¬(x is a hawk ∧ x is a dove)
F ∪ {appl(2(Nixon))} explains Nixon is a dove and F ∪ {appl(3(Nixon))}
explains Nixon is a hawk, while both scenarios explain the disjunction
Nixon is a hawk ∨ Nixon is a dove. Note that because of the consistency
requirement for scenarios, appl(2(Nixon)) and appl(3(Nixon)) cannot be used
in one scenario.
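The scenario construction can be made concrete with a small brute-force check. The following Python sketch tests consistency and explanation for a propositional reading of (2)-(5); the atom names (`appl2` for the applicability assumption of rule (2) for Nixon, and so on) are my own stand-ins, not Poole's notation.

```python
from itertools import product

# Propositional stand-ins: appl2 abbreviates the applicability assumption of
# rule (2) for Nixon, appl3 that of rule (3); quantifiers are instantiated.
ATOMS = ["quaker", "republican", "dove", "hawk", "appl2", "appl3"]

def satisfies(v):
    """True iff the assignment v (atom -> bool) is a model of the facts F."""
    return (v["quaker"] and v["republican"]                            # (4)
            and ((not (v["quaker"] and v["appl2"])) or v["dove"])      # (2)
            and ((not (v["republican"] and v["appl3"])) or v["hawk"])  # (3)
            and not (v["hawk"] and v["dove"]))                         # (5)

def models(assumed):
    """All models of F in which every assumed default is true."""
    result = []
    for bits in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if satisfies(v) and all(v[a] for a in assumed):
            result.append(v)
    return result

def explains(assumed, goal):
    """F plus the assumed defaults explains goal iff it is consistent
    (has a model) and entails goal (goal holds in every such model)."""
    ms = models(assumed)
    return bool(ms) and all(v[goal] for v in ms)
```

Both scenarios come out as explanations of their respective conclusions, while assuming both defaults at once yields no model at all, mirroring the consistency requirement for scenarios.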
Consider next hard rebutting defeaters and assume that instead of the
premises {(2-5)} the following general rule is a member of F.
(6) ∀x. Ax ∧ appl(6(x)) → Bx
Then a hard rebutting defeater of (6) can be expressed in a similar way as
in circumscription, by adding to F the formula
(7) ∀x. Cx → ¬Bx
Thus, if
(8) Aa ∧ Ca
is also added to F, which then is {(6-8)}, then F implies ¬appl(6(a)), which
prevents the instantiation of the default (1) for 6(a). The only explanation
for ¬Ba which can be constructed is F itself.

Hard undercutting defeaters can be expressed in the same way, by adding
to F
(9) ∀x. Cx → ¬appl(6(x))
Even soft rebutting defeaters can be expressed in Poole's methodology, at
least if they are combined with an undercutting defeater which is hard: if
besides (9) the soft rebutting defeater
(10) ∀x. Cx ∧ appl(10(x)) → ¬Bx
is added to F, which then is {(6, 8-10)}, then F ∪ {appl(10(a))} is the only
explanation with respect to Ba or ¬Ba which can be constructed.
However, problems arise with soft undercutting defeaters. The only
possibility would seem to be to add a formula
(11) ∀x. Cx ∧ appl(11(x)) → ¬appl(6(x))
to F, but the problem is that then two scenarios can be constructed:
the intended scenario F ∪ {appl(11(a))}, implying ¬appl(6(a)) and thereby
blocking (6), but also the unintended scenario F ∪ {appl(6(a))}, implying
Ba. The cause of the problem is the same as in circumscription: Poole's
system does not recognize the directional nature of (11), which intuitively
gives priority to the instantiation of (1) for 11(a) over its instantiation for
6(a). In terms of Brewka (1989): since Poole's framework has no way of
expressing priorities between defaults, it does not allow the use of defaults
to block other defaults. Of course, this is exactly the task of a soft under-
cutting defeater, and therefore this kind of exception cannot be expressed
in Poole's framework; a side-effect of this is that, since reinstatement of
general rules needs soft undercutting defeaters, it is also impossible to model
reinstatement.
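The two competing scenarios can be exhibited by the same kind of brute-force check. This is an illustrative sketch with invented ground atom names (`appl6a` for the applicability assumption of (6) for a, `appl11a` for that of (11)).

```python
from itertools import product

# Ground atoms for individual a. Facts: (8) Aa and Ca, plus the ground
# instances of (6) and (11).
ATOMS = ["Aa", "Ca", "Ba", "appl6a", "appl11a"]

def satisfies(v):
    return (v["Aa"] and v["Ca"]                                        # (8)
            and ((not (v["Aa"] and v["appl6a"])) or v["Ba"])           # (6)
            and ((not (v["Ca"] and v["appl11a"])) or not v["appl6a"])) # (11)

def consistent(assumed):
    """F plus the assumed defaults is consistent iff some assignment
    satisfies the facts and makes every assumed default true."""
    for bits in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if satisfies(v) and all(v[a] for a in assumed):
            return True
    return False
```

Both the intended scenario (assuming the applicability of (11)) and the unintended one (assuming the applicability of (6)) are consistent, and nothing in the framework prefers the former.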
Since this problem has the same cause as in circumscription, it is not
surprising that Poole's framework also has difficulties with chaining rules:
assume that (6) is chained with the next hard undercutting defeater present
in F
(12) ∀x. Bx → ¬appl(13(x))
which is intended to make the following rule in F inapplicable
(13) ∀x. appl(13(x)) → Dx
Now, if F = {(6, 8, 12, 13)}, then there is no way of blocking the explanation
F ∪ {appl(13(a))} for Da, since F implies ¬appl(6(a)) ∨ ¬appl(13(a)) and
there is again no way of prioritizing between default instances to express
the directional nature of the chain formed by (6) and (12).
Brewka (1989), after having pointed out these difficulties, generalizes
Poole's framework in a way similar to that in which circumscription has
been generalized to solve these problems, viz. by providing means to express
priorities between defaults. However, in the way Brewka presents this
generalization, it is not really the exception clause approach any more,
but the collision rule approach, since he does not restrict prioritization to
applicability clauses. For this reason its discussion will be postponed to
Chapter 7.
In conclusion, to have the same expressive power with respect to
exception clauses as default logic and circumscription, Poole's framework
needs the possibility of priorities between default instances. With this
extension, using the framework for the exception clause approach becomes
very similar to doing so with circumscription, with, most importantly, the
same need for general prioritization policies. Furthermore, the consequence
of Definition 4.1.19 that maximal scenarios contain as many instances of
the applicability predicates as possible has its analogue in circumscription,
in the definition of a minimal model as the model containing a minimal
number of instances of the abnormality predicates. In sum, when applied to
the exception clause approach, Poole's framework does not, when compared
to default logic and circumscription, provide new possibilities; instead,
without Brewka's extension it is even less expressive.

5.5. Logic Programming's Negation as Failure

Compared to the logics discussed so far, the main significance of logic
programming is its emphasis on computational efficiency. Accordingly, the
main aim of this section is to investigate to what extent logic programming,
in particular general programs, can be used to implement fragments of the
formalisms discussed earlier. Therefore in this section I mainly consider the
procedural interpretation of the examples; in doing so I assume the use of a
standard theorem prover of logic programming, SLDNF resolution, which is
SLD resolution augmented with negation as finite failure (cf. Lloyd, 1984).
Essentially, SLD resolution is a backward-chaining inference engine
combined with a unification algorithm.
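As a rough illustration of the procedural reading used below, here is a minimal propositional backward chainer with negation as finite failure. It ignores unification and loop checking (a crude depth bound stands in for the latter), so it is a didactic sketch rather than a real SLDNF prover; the rule encoding is my own.

```python
# Rules are (head, body) pairs over ground atoms; a body literal
# ("not", A) is negation as failure: it succeeds iff proving A finitely fails.
def solve(goal, rules, depth=25):
    if depth == 0:
        raise RecursionError("derivation too deep; possibly a looping program")
    if isinstance(goal, tuple) and goal[0] == "not":
        return not solve(goal[1], rules, depth - 1)
    return any(all(solve(b, rules, depth - 1) for b in body)
               for head, body in rules if head == goal)

program = [("q", []),                    # fact: q
           ("p", ["q", ("not", "r")])]   # p <- q, ~r
```

The query p succeeds because the subgoal r finitely fails; this failure-driven behaviour is what SLDNF adds on top of SLD resolution.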
As discussed in Chapter 4, the feature of logic programming which
makes the modelling of nonmonotonic reasoning possible is the procedural
interpretation of negation as the failure to derive the opposite (below I
denote negation as failure by ∼). Compared to the other formalisms this
interpretation severely restricts the expressive power of general logic
programs, since it leaves no room for expressing classical negation. As a
solution the following transformation trick is often used, in which each
negated atom ¬Px is replaced by an atom P*x; moreover, to preserve a part
of the meaning of classical negation sometimes the constraint
Px ∧ P*x →
is added, in which the empty head stands for the false proposition.
Constraints can be represented without extra logical machinery by using
another transformation trick: for each rule with Px in the head an extra
literal ∼P*x is added to the body and vice versa, which blocks the
simultaneous derivation of Pa and P*a (see Kowalski, 1989 for applications
to representing legislation). However, as we will see below, with SLDNF
resolution this often results in looping programs.

5.5.1. SPECIFIC EXCEPTION CLAUSES

I start with a discussion of specific exception clauses (leaving, as usual in
logic programming, the universal quantifier implicit).

Rebutting Defeaters
As just explained, the formalization of rebutting defeaters requires the use
of a translation trick, but apart from this no real problems arise; it is not
even necessary to encode the incompatibility of the various predicates. Hard
rebutting defeaters can be formalized in the following way.
(1) Px ∧ ∼Qx → Rx
(2) Qx → R*x
If besides Pa also Qa holds, then (1) is blocked, so that only R*a can be
derived. Note that from {(1, 2)} it is impossible to derive, for any individual
a, Ra and R*a at the same time. The general idea is to add the negation of
the body of the exception to the general rule; if this body has more atoms
the general rule has to be split: for example, if (2) is changed into
(2') Qx ∧ Sx → R*x
then besides (1) also
(1') Px ∧ ∼Sx → Rx
is needed.
Soft rebutting defeaters can also easily be expressed. The same method
as for a general rule and its exception can be used for the exception and
its exception. Here is a soft rebutting defeater of (1) in case of Qx, joined
with its own (hard) rebutting defeater in case of Tx.
(3) Qx ∧ ∼Tx → R*x
(4) Tx → Rx
Then if Pa, Qa and Ta hold, (3) is blocked and with (4) we can derive Ra.
Note, however, that the application of (4) does not reinstate (1).
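The behaviour of (1)-(4) for an individual a can be checked with a small ground negation-as-failure evaluator; the atom names below are my own stand-ins for the ground instances.

```python
FACTS = {"Pa", "Qa", "Ta"}
RULES = [("Ra",  ["Pa", ("not", "Qa")]),   # (1)
         ("R*a", ["Qa", ("not", "Ta")]),   # (3) soft rebutting defeater of (1)
         ("Ra",  ["Ta"])]                  # (4) hard rebutting defeater of (3)

def holds(lit):
    """Ground backward chaining; a ("not", A) literal is negation as failure."""
    if isinstance(lit, tuple):
        return not holds(lit[1])
    if lit in FACTS:
        return True
    return any(all(holds(b) for b in body) for head, body in RULES
               if head == lit)
```

With Pa, Qa and Ta all given, (1) and (3) are blocked and Ra is derived via (4) only, matching the text.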

Undecided Conflicts
Here the first problems arise. Consider the Nixon Diamond, first without
exception clauses.
(5) Rx → P*x
(6) Qx → Px
If Rn and Qn are added, then both Pn and P*n can be derived without any
recognition of the contradiction. To prevent this, exception clauses should
be used. The first possibility is to use the integrity constraint
Px ∧ P*x →
in the way just described, viz. by changing (5) and (6) into
(5') Rx ∧ ∼Px → P*x
(6') Qx ∧ ∼P*x → Px
However, if now Rn and Qn are added, then with SLDNF resolution this
is a looping program (note also that the program is not stratifiable). To
avoid this, another way of adding exception clauses is needed, in which the
method of representing hard rebutting defeaters is used as if both rules
were exceptions to each other:
(5'') Rx ∧ ∼Qx → P*x
(6'') Qx ∧ ∼Rx → Px
If now both Rn and Qn are added, then both rules are blocked, since the
negated atoms in the bodies are false: therefore nothing can be concluded at
all, which is the sceptical approach to undecided conflicts. In this respect the
logic of general programs is, when compared to the other formalisms, less
expressive: like circumscription it cannot present alternative conclusions,
and it is the only formalism in which not even the disjunction Pn ∨ P*n
can be derived, a formula which, if P and P* are only incompatible but not
complementary (as in the 'hawk' and 'dove' version of the Nixon diamond),
at least carries some information.
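That (5'') and (6'') block each other can be checked with the same kind of ground evaluator (a sketch with invented atom names):

```python
FACTS = {"Rn", "Qn"}
RULES = [("P*n", ["Rn", ("not", "Qn")]),   # (5'')
         ("Pn",  ["Qn", ("not", "Rn")])]   # (6'')

def holds(lit):
    if isinstance(lit, tuple):              # ("not", A): negation as failure
        return not holds(lit[1])
    if lit in FACTS:
        return True
    return any(all(holds(b) for b in body) for head, body in RULES
               if head == lit)
```

Neither Pn nor P*n is derivable: the sceptical outcome described above. Note that, unlike (5')/(6'), this program terminates, because the negated atoms are facts rather than heads of rules that negate each other.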

5.5.2. GENERAL EXCEPTION CLAUSES

As in the other formalisms, specific exception clauses are not sufficient:
firstly, they do not allow the expression of undercutting defeaters; and
secondly, they offer no way of adding new exceptions without having to
change the general rule. Therefore I now turn to general exception clauses.

Undercutting Defeaters
Hard undercutting defeaters can be formalized without any problem.
(7) Px ∧ ∼exc(7(x)) → Qx
(8) Rx → exc(7(x))
Procedurally there are also no problems with soft undercutting defeaters.
Let (8) be replaced by
(9) Rx ∧ ∼exc(9(x)) → exc(7(x))
Then if Ra is added, the failure to derive exc(9(a)) gives rise to the
derivation of exc(7(a)).
However, in the semantic justification of this procedural interpretation
a refinement of the idea of stratification has to be introduced, in which
not predicates but atomic formulas are stratified; the reason is that in the
original form predicates are assigned to a unique stratum, whereas in (9)
the exc predicate should with respect to the term 9(a) be in a higher
stratum than with respect to 7(a). The new form, which is called local
stratification (Przymusinski, 1988), is closely related to the most refined
version of prioritized circumscription, viz. with pointwise circumscription,
which was discussed above in 4.1.3. The problem of circumscription with
this example, the existence of two minimal models, is then implicitly solved
by the semantics for locally stratified programs, which, in the same way as
for stratified programs, still builds up the intended minimal model
iteratively according to the directional procedural effect of the program: in
the example this has the effect that, since no rule applies for the goal
exc(9(a)), the model is constructed in which exc(9(a)) is false, because of
which exc(7(a)) must be true.
The same situation occurs with chaining an undercutting defeater with
a defeasible rule: it does not give rise to procedural problems, while it needs
the same semantical refinement. In
(10) Px ∧ ∼exc(10(x)) → Rx
(11) Rx → exc(7(x))
the absence of a rule with exc(10(x)) in the head ensures that in the model
respecting the procedural effect of the clauses exc(10(a)) will be false, which
makes Ra and therefore exc(7(a)) true.
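The chained program (7), (10), (11) behaves exactly as described; the following ground sketch (with `exc7a`, `exc10a` as my stand-ins for the ground exception atoms) checks it:

```python
FACTS = {"Pa"}
RULES = [("Qa",    ["Pa", ("not", "exc7a")]),    # (7)
         ("Ra",    ["Pa", ("not", "exc10a")]),   # (10)
         ("exc7a", ["Ra"])]                      # (11)

def holds(lit):
    if isinstance(lit, tuple):                   # negation as failure
        return not holds(lit[1])
    if lit in FACTS:
        return True
    return any(all(holds(b) for b in body) for head, body in RULES
               if head == lit)
```

Since no rule has exc10a in its head, Ra is derived, exc7a follows by (11), and (7) is blocked. This is precisely the directional behaviour that local stratification captures semantically.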
It is now appropriate to make some remarks on an issue raised in the
discussion of circumscription, viz. the need for general circumscription
policies, which particularly arises in case of formulas like (9). Stratification
has turned out to be an important notion here: for example, Lifschitz (1987a)
has provided a semantics for negation as failure by showing that for
stratified logic programs the intended model is the same as the model obtained
for the corresponding circumscriptive theory, if in this theory predicates or
atoms in lower strata are minimized with higher priority. This result can
also be used the other way around, viz. in using stratification to express a
general rational circumscription policy. A full discussion of this topic would
go beyond the scope of this chapter, but it should be stressed that this does
not work if the required transformation of a circumscriptive theory yields
a nonstratifiable logic program, which, as will be further discussed below,
is often the case if the theory contains undecided conflicts between rules.
I now give an example with reinstatement of the general rule. At first
sight it does not seem to make much difference whether a conclusion is based
on the general rule or on the exception to the exception, but this is different
if the natural-language version of the general rule has a conjunction as its
consequent; in logic programs such a rule is split into two rules, each with
the same body but with another of the conjuncts as the head. Consider the
following program, in which it is assumed that the pair 7/7' is the result
of such a split, and in which 9/9' is a soft rebutting defeater of 7/7' and
12/12' a soft rebutting defeater of 9/9'.
(7) Px ∧ ∼exc(7(x)) → Qx
(7') Px ∧ ∼exc(7(x)) → Tx
(9) Rx ∧ ∼exc(9(x)) → exc(7(x))
(9') Rx ∧ ∼exc(9(x)) → Q*x
(12) Sx → exc(9(x))
(12') Sx ∧ ∼exc(12(x)) → Qx
Assume also that Pa, Ra and Sa hold. Note once more that the exception
clauses of (7'), (9') and (12') refer to, respectively, (7), (9) and (12) to
express the fact that the pairs 7/7', 9/9', and 12/12' are each based on one
natural-language rule. Now if only (7) were present, then its reinstatement
by the blocking of (9) by (12) would not mean anything, since it gives rise
to the same conclusion as (12'). However, with (7') also present, the
reinstatement of (7) permits the derivation of both Qa and Ta.
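The reinstatement example can be run as a ground program (again a sketch, with `exc7a` and the like as my stand-ins for the ground exception atoms):

```python
FACTS = {"Pa", "Ra", "Sa"}
RULES = [("Qa",    ["Pa", ("not", "exc7a")]),    # (7)
         ("Ta",    ["Pa", ("not", "exc7a")]),    # (7')
         ("exc7a", ["Ra", ("not", "exc9a")]),    # (9)
         ("Q*a",   ["Ra", ("not", "exc9a")]),    # (9')
         ("exc9a", ["Sa"]),                      # (12)
         ("Qa",    ["Sa", ("not", "exc12a")])]   # (12')

def holds(lit):
    if isinstance(lit, tuple):                   # negation as failure
        return not holds(lit[1])
    if lit in FACTS:
        return True
    return any(all(holds(b) for b in body) for head, body in RULES
               if head == lit)
```

(12) derives exc9a and thereby blocks (9) and (9'), so (7) and (7') are reinstated: both Qa and Ta are derivable, while Q*a is not.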

Rebutting Defeaters
Hard rebutting defeaters cannot be expressed in the same simple way as in
circumscription, which would be
(13) Px ∧ ∼exc(13(x)) → Qx
(14) Rx → Q*x
The problem is that with Pa and Ra this yields both Qa and Q*a, without
recognition of the contradiction. Therefore, besides (14) an extra rule is
needed, viz.
(15) Rx → exc(13(x))
Soft rebutting defeaters are formalized in the same way as in the other
approaches, viz. combined with an undercutting defeater: (13) is softly
rebutted by the next two rules
(16) Rx ∧ ∼exc(16(x)) → exc(13(x))
(17) Rx ∧ ∼exc(16(x)) → Q*x
or, alternatively, by (17) with a hard undercutting defeater, which is (16)
without ∼exc(16(x)). Note that the exception clause of (17) contains the
name of (16). The semantic aspects are the same as with soft undercutting
defeaters, while, moreover, the same remarks on reinstatement of the
general rule apply.

Undecided Conflicts
Again problems arise in case of the Nixon diamond, particularly if it is
represented as
(18) Qx ∧ ∼exc(18(x)) → Px
(19) Rx ∧ ∼exc(19(x)) → P*x
While with the use of specific exception clauses both rules are blocked if
Rn and Qn are added, now both exception clauses are negated by failure,
for which reason both Pn and P*n can be derived. The only way to obtain
the same result as above is to add two rules
(20) Qx → exc(19(x))
(21) Rx → exc(18(x))
which, however, is again the extreme sceptical approach, in which neither
alternative conclusions nor a disjunctive conclusion can be derived.

5.5.3. LOGIC PROGRAMS WITH CLASSICAL NEGATION

The discussion so far has revealed that with respect to the present topic
logic programming with general programs mainly has difficulties with
undecided conflicts between rules: sometimes two intuitively incompatible
conclusions are derived at the same time, and sometimes a looping program
results. Since the main cause of the problems is the inability of general logic
programs to express classical negation, it becomes interesting to consider
some recent developments in logic programming, consisting of adding
classical negation ¬ to the language. This results in so-called extended logic
programs, which are sets of ground clauses of the form
L1 ∧ ... ∧ Lm ∧ ∼Lm+1 ∧ ... ∧ ∼Ln ⇒ L0
(where n ≥ m ≥ 0). The crucial difference with general logic programs (cf.
Definition 4.1.9) is that now each Li does not have to be an atom; it may
also be a classically negated atom. Clauses with variables are interpreted
as a scheme for all their ground instances. Note that classical negation can
occur both in the body and in the head of a clause, and that not only
atoms but also classically negated atoms can be negated by failure. What
is very important is that the arrow ⇒ is not interpreted as the material
implication, but as an inference rule, which invalidates modus tollens.
Let us now examine some proposals for the semantic interpretation of
extended logic programs.

Gelfond & Lifschitz' Answer Set Semantics
Gelfond & Lifschitz (1990), who first introduced extended logic programs,
base their interpretation on a new development in the semantics of general
logic programs, viz. stable model semantics, which was briefly mentioned in
Chapter 4. Although stable model semantics still does not assign canonical
models to every general logic program, it does so for a wider class of logic
programs than the (locally) stratified programs. Another feature of stable
semantics is that some programs may have more than one stable model.
This is for present purposes very interesting, since it opens prospects for
an adequate treatment of undecided conflicts.
Gelfond & Lifschitz first develop a semantics for extended logic
programs, which they call answer set semantics, and then investigate the
link with stable model semantics by systematically translating extended
logic programs into general logic programs. They show that under rather
general conditions the conclusions of an extended logic program and its
translation coincide. Explaining all technical details would go beyond the
scope of this chapter, but the general ideas can be given with a discussion
of the translation method. This method is very simple: Gelfond & Lifschitz
simply use the above-described translation trick for negation, by replacing
each expression of the form ¬L by its positive form L*. For example, the
extended clause
(22) A ∧ ∼¬B ⇒ ¬C
is transformed into
(22') A ∧ ∼B* → C*
Note that for this method the reading of ⇒ as an inference rule is crucial,
since reading it as the material implication validates the derivation of
formulas which cannot be derived from the transformed general program:
for example, with the material implication the extended program {A ⇒ B,
⇒ ¬B} would imply ¬A, while the transformed general program {A → B,
→ B*} would not imply A*.
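The translation is mechanical enough to state in a few lines of Python. This is a sketch of my own encoding, writing classical negation as a leading `-` on an atom and a negation-as-failure literal as a `("not", L)` pair:

```python
def positive_form(lit):
    """Replace a classically negated atom -P by the fresh atom P*."""
    return lit[1:] + "*" if lit.startswith("-") else lit

def translate(head, body):
    """Map an extended clause (body => head) to a general clause by
    eliminating classical negation inside both kinds of literals."""
    general_body = [("not", positive_form(l[1])) if isinstance(l, tuple)
                    else positive_form(l) for l in body]
    return positive_form(head), general_body
```

Applied to clause (22), this yields exactly (22') above.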
Let us see how this method deals with undecided conflicts. It turns out
that a proper treatment depends on the precise way in which exception
clauses are used. To see why, consider first the following exceptionless
formalization of the Nixon Diamond.
(22) Qn ⇒ Pn
(23) Rn ⇒ ¬Pn
This program is transformed into (ground instances of) (5) and (6), and this
has a unique stable model, satisfying both Pn and P*n, without recognizing
the contradiction.
So it is necessary to use exception clauses, but this can be done in
several ways. If (22) and (23) are changed into
(22') Qn ∧ ∼Rn ⇒ Pn
(23') Rn ∧ ∼Qn ⇒ ¬Pn
then the translation results in (5'') and (6''). If now Rn and Qn are added,
then again both rules are blocked; like general logic programming, stable
model semantics does not conclude anything either.
The only way in which two answer sets/stable models can be obtained
is by letting a rule 'assume' its own head in its body:
(22'') Qn ∧ ∼¬Pn ⇒ Pn
(23'') Rn ∧ ∼Pn ⇒ ¬Pn
Note the similarity with normal defaults of default logic, whose
justification is equal to their consequent. The general counterpart of this
extended program is {(5', 6')}. Above we remarked that this program is
not stratifiable, but in the new semantics it has two stable models, one
satisfying that Nixon was a pacifist and the other satisfying that Nixon was
not a pacifist. Note that to obtain this result for every undecided conflict,
it is crucial that every clause of an extended program is formalized in the
above way, i.e. with the contrary of its head negated by failure in its body.
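The claim about (5') and (6') can be verified by brute force. The sketch below (atom names are mine) enumerates candidate sets and keeps those that equal the least model of their Gelfond-Lifschitz reduct, which is the standard definition of a stable model.

```python
from itertools import combinations

FACTS = {"Rn", "Qn"}
# (head, positive body, NAF body) for the ground instances of (5') and (6').
RULES = [("P*n", ["Rn"], ["Pn"]),    # (5'): Rn, ~Pn -> P*n
         ("Pn",  ["Qn"], ["P*n"])]   # (6'): Qn, ~P*n -> Pn
ATOMS = ["Rn", "Qn", "Pn", "P*n"]

def closure(candidate):
    """Least model of the Gelfond-Lifschitz reduct w.r.t. the candidate:
    drop rules whose NAF body clashes with the candidate, then chain forward."""
    derived, changed = set(FACTS), True
    while changed:
        changed = False
        for head, pos, naf in RULES:
            if (head not in derived and all(p in derived for p in pos)
                    and all(n not in candidate for n in naf)):
                derived.add(head)
                changed = True
    return derived

def stable_models():
    found = []
    for r in range(len(ATOMS) + 1):
        for subset in combinations(ATOMS, r):
            candidate = set(subset)
            if closure(candidate) == candidate:
                found.append(candidate)
    return found
```

Exactly the two stable models described in the text result: one containing Pn and the other containing P*n.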
However, this solution still does not solve the procedural problems
with the Nixon diamond, since with SLDNF resolution {(22''), (23'')} is
still a looping program. For this reason much current research in logic
programming aims at developing alternative proof procedures for the new
semantic developments.

Kowalski & Sadri
Kowalski & Sadri (1990) also interpret extended logic programs with the
help of stable semantics for general programs, but they do so in a different
way. They interpret rules with negative heads as exceptions to rules with
the positive counterpart in their head. They realize this by translating
extended programs
(24) Px ⇒ Qx
(25) Rx ⇒ ¬Qx
first into
(24') Px ∧ ∼¬Qx ⇒ Qx
(25) Rx ⇒ ¬Qx
and then into
(24'') Px ∧ ∼Q*x → Qx
(25') Rx → Q*x
which is a general program and which according to stable semantics yields,
together with Pa and Ra, the conclusion Q*a.
Here is how they deal with undecided conflicts. They cannot formalize
the Nixon Diamond with (22) and (23), since this would give priority to
(23); therefore they propose the following formalization.
(26) x is a quaker ⇒ x is a pacifist
(27) x is a republican ⇒ x is a hawk
(28) x is a hawk ⇒ ¬ x is a pacifist
(29) x is a pacifist ⇒ ¬ x is a hawk
Via their transformation method this results in the general program
(30) x is a quaker ∧ ∼ x is a pacifist* → x is a pacifist
(31) x is a republican ∧ ∼ x is a hawk* → x is a hawk
(32) x is a hawk → x is a pacifist*
(33) x is a pacifist → x is a hawk*
Essentially this is the same formalization as the last of the above
formalizations in Gelfond & Lifschitz' approach; this program also has two
stable models, but with SLDNF the program loops, for which reason
alternative proof procedures are needed. Apart from this, Kowalski & Sadri's
proposal has one important drawback: since rules with negative heads always
take precedence, exceptions to exceptions cannot be represented.

Well-founded Semantics
While the approaches based on stable semantics are credulous, another
proposed semantics for general and extended logic programming, well-founded
semantics (Pereira & Alferes, 1992), is sceptical: where the credulous
approach gives alternative conclusions, well-founded semantics draws no
conclusion. While in case of undecided conflicts this is a drawback, an
important advantage of well-founded semantics is that it assigns a canonical
model to every logic program. However, some have argued that well-founded
semantics loses some intuitive conclusions of stable semantics; for this
reason research continues (see e.g. Brewka, 1996; Prakken & Sartor, 1997a).

Intuitionistic Logic Programming
Finally, McCarty (1988a; 1988b) takes a different approach to the problem
of classical negation. He interprets extended programs according to
intuitionistic logic, which validates fewer inferences than classical logic: for
example, it does not have (φ ∨ ¬φ) as a tautology, which invalidates, for
instance, the inference of ψ from {φ → ψ, ¬φ → ψ}. Another invalidity
is the inference of φ from ¬¬φ, which blocks some (but not all)
contrapositive inferences, as the reader can easily check. The effect of this
is that adding negation does not make the logic intractable. To model
nonmonotonic reasoning McCarty (1988c) introduces a failure operator,
to be used only in combination with intuitionistic negation. McCarty &
Cohen (1990) go on to advocate the use of explicit exception clauses; in
their opinion knowledge-engineering problems can be avoided by starting
to write a program without exception clauses and by then 'debugging' the
program by running various anticipated queries through the interpreter and
blocking the unintended inferences by adding exception clauses. For present
purposes it is particularly interesting that McCarty's system can
alternatively present contradictory conclusions, while, furthermore, all kinds of
exceptions listed in 5.1.2 can be formalized in a similar way as in 'classical'
logic programming. It remains to be investigated what the exact relationship
is between McCarty's ideas and the other new developments. An interesting
question is what the philosophical implications are of using intuitionistic
rather than classical logic.

5.5.4. SUMMARY

Most kinds of exceptions listed in 5.1.2 can be expressed in general logic
programs, especially when general exception clauses are used; a minor
problem is that the inability to express classical negation makes it
impossible to recognize rebutting defeaters logically as exceptions contradicting
the general rule. Serious problems occur in case of undecided conflicts
between rules. Firstly, eliminating integrity constraints which express the
incompatibility of predicates results in such cases in looping programs if
SLDNF resolution is used; and secondly, there is no way of alternatively
representing incompatible conclusions as reasonable inferences. However, it
should again be noted that much current research in logic programming
aims at developing solutions to these problems. Some of it was already
discussed above.

In conclusion, the most important contribution of logic programming
is its ability to provide efficient implementations of fragments of other
formalisms, thereby implicitly providing the prioritization policies needed in
circumscription. This conclusion will be illustrated by a schematic overview
in the next section. However, at the same time this means that interesting
but less tractable fragments cannot be represented in logic programming,
which illustrates the trade-off between expressiveness and tractability.

5.6. Evaluation

5.6.1. A FORMALIZATION METHODOLOGY

In the course of this chapter a generally applicable formalization
methodology has been developed, largely by combining techniques developed by
others. I now schematically apply this methodology to each of the formalisms.
In short, the method is that for every defeasible rule a general exception
clause is added to its antecedent, in such a way that exceptions can be
assumed false if there is no evidence to the contrary, and in such a way that
if there are no undecided conflicts between rules, the resulting theory has
a unique extension/minimal model/maximal scenario (below 'conclusion
set'); finally, the exception clauses are expressed with the third naming
technique explained in Section 5.2.2, in which names of rules are denoted
by function expressions. It should be stressed that the methodology does
not guarantee the existence of unique conclusion sets; it is always possible
to constrain the formalization in such a way that either more conclusion sets
result, or none at all. Although I expect that in practice these cases will,
apart from undecided conflicts, turn out to be incorrect formalizations, their
formal possibility still puts severe practical constraints on the methodology.
Below DL stands for default logic, PC for prioritized circumscription, LP
for logic programming with general clauses, and PF for Poole's framework.
The distinction in DL and PF between facts and defaults will be made by
using function symbols fi for facts and di for defaults. (1) and (d1) denote
the general rule. In LP the rules are implicitly quantified. Finally, in PC
the following circumscription policy will be assumed: the exc-predicates
are minimized according to their stratification in the corresponding logic
program (if relevant, this prioritization is given), the other predicates in
the antecedent are fixed or variable and the predicates in the consequent
are variable. As noted in the previous section this works well only if the
theory does not contain undecided conflicts.

Hard undercutting defeaters

DL: (d1) Ax : Bx ∧ appl(d1(x)) / Bx
(f1) ∀x. Cx → ¬appl(d1(x))

PC: (1) ∀x. Ax ∧ ¬exc(1(x)) → Bx
(2) ∀x. Cx → exc(1(x))

PF: (f1) ∀x. Ax ∧ appl(f1(x)) → Bx
(f2) ∀x. Cx → ¬appl(f1(x))
(d1) appl(x)

LP: (1) Ax ∧ ∼exc(1(x)) → Bx
(2) Cx → exc(1(x))

Soft undercutting defeaters


DL: (d1) Ax : appl_d1(x) ∧ Bx / Bx
    (d2) Cx : appl_d2(x) ∧ ¬appl_d1(x) / ¬appl_d1(x)

PC: (1) ∀x.Ax ∧ ¬exc1(x) → Bx
    (2) ∀x.Cx ∧ ¬exc2(x) → exc1(x)
    priorities: for all x: exc2(x) > exc1(x)

PF: cannot be represented.

LP: (1) Ax ∧ ~exc1(x) → Bx
    (2) Cx ∧ ~exc2(x) → exc1(x)

Hard rebutting defeaters


DL: (d1) Ax : appl_d1(x) ∧ Bx / Bx
    (f1) ∀x.Cx → ¬Bx

PC: (1) ∀x.Ax ∧ ¬exc1(x) → Bx
    (2) ∀x.Cx → ¬Bx

PF: (f1) ∀x.Ax ∧ appl_f1(x) → Bx
    (f2) ∀x.Cx → ¬Bx
    (d1) appl(x)

LP: (1) Ax ∧ ~exc1(x) → Bx
    (2) Cx → exc1(x)
    (3) Cx → B*x

Soft rebutting defeaters with hard undercutting defeaters


DL: (d1) Ax : appl_d1(x) ∧ Bx / Bx
    (f1) ∀x.Cx → ¬appl_d1(x)
    (d2) Cx : appl_d2(x) ∧ ¬Bx / ¬Bx

PC: (1) ∀x.Ax ∧ ¬exc1(x) → Bx
    (2) ∀x.Cx → exc1(x)
    (3) ∀x.Cx ∧ ¬exc3(x) → ¬Bx

LP: (1) Ax ∧ ~exc1(x) → Bx
    (2) Cx → exc1(x)
    (3) Cx ∧ ~exc3(x) → B*x

PF: (f1) ∀x.Ax ∧ appl_f1(x) → Bx
    (f2) ∀x.Cx → ¬appl_f1(x)
    (f3) ∀x.Cx ∧ appl_f3(x) → ¬Bx
    (d1) appl(x)
Soft rebutting defeaters with soft undercutting defeaters
DL: (d1) Ax : appl_d1(x) ∧ Bx / Bx
    (d2) Cx : appl_d2(x) ∧ ¬appl_d1(x) / ¬appl_d1(x)
    (d3) Cx : appl_d2(x) ∧ ¬Bx / ¬Bx
(The reason why the applicability condition of d3 contains d2 was explained
above at page 110.)

PC: (1) ∀x.Ax ∧ ¬exc1(x) → Bx
    (2) ∀x.Cx ∧ ¬exc2(x) → exc1(x) ∧ ¬Bx
    priorities: for all x: exc2(x) > exc1(x)

LP: (1) Ax ∧ ~exc1(x) → Bx
    (2) Cx ∧ ~exc2(x) → exc1(x)
    (3) Cx ∧ ~exc2(x) → B*x

PF: cannot be represented.
Undecided conflicts
DL: (d1) Ax : appl_d1(x) ∧ Bx / Bx
    (d2) Cx : appl_d2(x) ∧ Dx / Dx
    (f1) ∀x.¬(Bx ∧ Dx)
    Two extensions, with alternative conclusions.

PC: (1) ∀x.Ax ∧ ¬exc1(x) → Bx
    (2) ∀x.Cx ∧ ¬exc2(x) → Dx
    (3) ∀x.¬(Bx ∧ Dx)
    priorities: for all x: exc1(x) ~ exc2(x)
    Two minimal models, with disjunctive conclusion.

PF: (f1) ∀x.Ax ∧ appl_f1(x) → Bx
    (f2) ∀x.Cx ∧ appl_f2(x) → Dx
    (f3) ∀x.¬(Bx ∧ Dx)
    (d1) appl(x)
    Two maximal scenarios, with alternative conclusions.

LP: (1) Ax ∧ ~exc1(x) → Bx
    (2) Cx ∧ ~exc2(x) → B*x
    which does not recognize the contradiction, or
    (1) Ax ∧ ~exc1(x) ∧ ~B*x → Bx
    (2) Cx ∧ ~exc2(x) ∧ ~Bx → B*x

which with SLDNF resolution loops, or these two rules with


(3) Ax → exc2(x)
(4) Cx → exc1(x)
which blocks both rules and does not imply a disjunctive conclusion.
From this schematic overview it becomes apparent that the style of
formalization is in all cases very similar. As noted above in Section 4.2.3,
it was already known that on certain conditions the various formalisms
are formally translatable into each other, but these results become more
important if under these translations a useful formalization methodology
can be preserved. The above scheme indicates that by and large this is
indeed the case. Particularly interesting is that, apart from undecided
conflicts, this also holds for general logic programming; therefore, under
the restrictions that classical negation cannot be fully captured and that
there are no undecided conflicts, the various formalisms seem to have an
efficient implementation in logic programming. However, these restrictions
are not trivial, since in legal language classical negation is often used, and
dealing with undecided conflicts is an important topic in current AI-and-law
research. Presumably logic programming can be (and partly has already
been) extended to deal with these problems, but it might be that one of its
main advantages, computational efficiency, will then have to be given up.
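These schemes can in fact be executed directly. The following is a minimal
sketch (the encoding and the little evaluator are mine, not the book's) that
computes the perfect model of the stratified ground program for hard
undercutting defeaters, evaluating negation as failure stratum by stratum:

```python
# Propositional sketch of the LP scheme for hard undercutting defeaters,
# instantiated for a single constant a:
#   (1) A(a) & ~exc1(a) -> B(a)     (2) C(a) -> exc1(a)
# A rule is (head, positive body, negative body); strata are evaluated
# bottom-up, so '~' is tested against the already completed lower strata.

def eval_stratified(strata, facts):
    """Compute the perfect model of a stratified ground program."""
    model = set(facts)
    for rules in strata:
        changed = True
        while changed:                       # fixpoint within one stratum
            changed = False
            for head, pos, neg in rules:
                if (head not in model
                        and all(p in model for p in pos)
                        and all(n not in model for n in neg)):
                    model.add(head)
                    changed = True
    return model

strata = [
    [("exc1(a)", ["C(a)"], [])],             # stratum 0: rule (2)
    [("B(a)", ["A(a)"], ["exc1(a)"])],       # stratum 1: rule (1)
]

print(eval_stratified(strata, {"A(a)"}))           # B(a) is derived
print(eval_stratified(strata, {"A(a)", "C(a)"}))   # exc1(a) blocks B(a)
```

With only A(a) given, B(a) follows; adding the exception ground C(a) makes
exc1(a) derivable and withdraws B(a), which is exactly the nonmonotonic
behaviour the scheme is meant to capture.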
To expand a little on logic programming's problems with undecided
conflicts, it should be noted that implementation of other formalisms in
logic programming generally works well only if the resulting program is
stratifiable, and the problem is that theories with undecided conflicts often
induce nonstratifiable logic programs (like {(5′), (6′)} above). This holds, for
example, for the compilation of circumscriptive theories into logic programs
defined by Gelfond & Lifschitz (1989); cf. Brewka (1991a, pp. 92-3). As a
consequence, the possibility of undecided conflicts between rules severely
increases the computational complexity of a theory. Actually, a whole lot
more could be said on this topic, since the field of logic programming
is currently very active in finding solutions to these problems. Above I
have already pointed at some (but certainly not all) new semantic
developments; a detailed overview is given in Bidoit (1991). Another part of
the investigations is to find new sound and efficient theorem provers for these
new semantic developments since, as we have seen, logic programming's
standard theorem prover, SLDNF resolution, is not appropriate. However,
all these topics go beyond the scope of this book.
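Stratifiability itself is mechanically checkable: a ground program is
stratifiable iff its dependency graph contains no cycle running through a
negation-as-failure edge. The sketch below (my own encoding, not the
book's) shows that an exception-clause program for a resolved conflict
passes the test, while a mutual-blocking encoding of an undecided conflict
fails it:

```python
# A ground program is stratifiable iff no atom depends negatively on
# itself through a cycle: for every negation-as-failure edge head -> n,
# n must not depend (transitively) on head again.

def reaches(graph, start, goal):
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False

def stratifiable(rules):
    """rules: list of (head, positive_body, negative_body)."""
    graph = {}
    for head, pos, neg in rules:
        graph.setdefault(head, set()).update(pos + neg)
    return not any(reaches(graph, n, head)
                   for head, _, neg in rules for n in neg)

# Resolved conflict (hard undercutting):  B <- A, ~exc1.  exc1 <- C.
resolved = [("B", ["A"], ["exc1"]), ("exc1", ["C"], [])]
# Undecided conflict, each rule blocking the other:
undecided = [("B", ["A"], ["Bstar"]), ("Bstar", ["C"], ["B"])]

print(stratifiable(resolved))     # True
print(stratifiable(undecided))    # False: negative cycle B ~ Bstar ~ B
```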
To conclude this subsection, we can say that this chapter has resulted
in a formalization methodology which can be applied in several formalisms
and rather efficiently implemented in logic programming. The methodology
seems flexible and expressive enough to deal with a wide range of examples
occurring in the literature on nonmonotonic reasoning; most importantly, it
can represent the kinds of exceptions listed in 5.1.2, which seem to exhaust
the rule-exception relations in the practice of legal reasoning. However,
all this is subject to one important restriction: if a domain gives rise to
undecided conflicts between rules, the exception clause approach loses much
of its attraction.

5.6.2. DIRECTIONALITY OF DEFAULTS

In the discussion of circumscription and Poole's framework we have seen
that some problems can only be solved by giving defeasible conditionals a
directional interpretation. For example, when two normality assumptions
exclude each other, the aim of minimizing exceptionality forces us to make
a choice, and intuitions seem to take into account the directional nature of
the defaults: if, for example, in circumscription syntactically a formula

¬exc1(a) → exc2(a)

can be derived from the premises without having to use contraposition, then
the intuitively desirable assumption is ¬exc1(a), but if this formula needs
contraposition to be derived, then the other choice ¬exc2(a) is the best. In
Poole's framework the same problem occurs, which in Brewka's extension
of the framework can be solved in a similar way by assigning the priorities
between the defaults according to considerations about directionality. In
this chapter we have met two ways of capturing the directionality of
defeasible rules. The first is to regard defaults as inference rules, as in
default logic; and the second is to express the syntactically motivated
preferences towards minimization in the form of metalevel information, as is
done in circumscription by expressing priorities between predicates or atoms
and formulating a circumscription policy.
It is interesting to note that, although in the semantics of logic
programming similar problems to those of circumscription occur, in its
procedural interpretation the required directionality is nicely captured. It
seems that, rather than being a property of the conditional connective, this
is mainly due to the procedural metalevel interpretation of negation as the
failure to derive the opposite. This becomes apparent from the semiformal
disjunction ¬~exc1(a) ∨ exc2(a), which is propositionally equivalent to
~exc1(a) → exc2(a): if exc1(a) is indeed not derivable, then by simple
propositional reasoning exc2(a) holds, which is precisely the intended
conclusion. Most new developments in the semantics of logic programming
consist of attempts to capture this directional procedural effect of negation
as failure. In these developments the concept of stratification has turned out
to be of value, and therefore it is not surprising that, as noted above, this
concept can, if there are no undecided conflicts, also be used in choosing
the prioritization strategy needed to make defaults directional in
circumscription. This also means that for fragments of circumscription
which can be compiled into logic programs, the appropriate prioritization
policy is implicitly defined by the compilation.

5.6.3. CONTRAPOSITIVE INFERENCES

An issue which is occasionally discussed in the literature on
nonmonotonic reasoning is the validity of contrapositive inferences,
particularly of modus tollens. Of course, for defeasible statements modus
tollens should be monotonically invalid: intuitively, if normally birds fly,
then a nonflyer need not be a nonbird, since it can also be an abnormal
bird. However, since nonmonotonic reasoning minimizes abnormality, it
might be argued that a nonmonotonic form of modus tollens should be valid.
It is obvious that default logic does not account for this since defaults are
inference rules, but in circumscription and Poole's framework forms of
defeasible modus tollens are possible. Consider

(1) ∀x.Ax ∧ ¬exc1(x) → Bx
(2) ¬Ba

In circumscription defeasible modus tollens is valid under one condition, viz.
that not only B but also A is a variable predicate: if A were fixed, {(1), (2)}
would have two minimal models: the intended one in which exc1(a) is false
and ¬Aa is true, but also an unintended one in which both exc1(a) and Aa
are true; the reason is that according to Definition 4.1.11 models with
different extensions of input predicates are incomparable. As a consequence,
in circumscription the formalizer of the problem has, by means of the
circumscription policy, control over the question of whether for a particular
default defeasible modus tollens should be possible or not. Philosophically
this is questionable, since the validity of an inference pattern should depend
on the form of the expressions involved and not on pragmatic
considerations. However, as noted by Brewka (1991a, p. 57), in practice the
antecedent of a default 'if A then B' will always have to be variable,
otherwise this default cannot be chained with a preceding default 'if C then
A', in which A, as explained in Section 4.1.3, certainly has to be variable to
give rise to default conclusions.
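The effect of fixing or varying A can be verified by brute force on the ground
instance of (1) and (2) for a single individual a. The sketch below is my
own illustration, not the book's: it flattens circumscription to a direct
comparison of models on the three ground atoms, preferring models with a
smaller extension of exc1:

```python
from itertools import product

# Ground theory:  (1) A(a) & ~exc1(a) -> B(a)    (2) ~B(a)
# exc1 is minimized, B is variable; A is either fixed or variable.

def models():
    for A, exc1, B in product([False, True], repeat=3):
        if ((not (A and not exc1)) or B) and not B:    # (1) and (2)
            yield {"A": A, "exc1": exc1, "B": B}

def minimal_models(a_fixed):
    ms = list(models())
    def preferred(m1, m2):                 # m1 strictly better than m2
        if a_fixed and m1["A"] != m2["A"]:
            return False                   # fixed predicates must agree
        return m1["exc1"] < m2["exc1"]     # smaller exc1-extension
    return [m for m in ms if not any(preferred(n, m) for n in ms)]

# A variable: the only minimal model makes ~A(a) true (modus tollens).
print(all(not m["A"] for m in minimal_models(a_fixed=False)))   # True
# A fixed: the unintended model with A(a) and exc1(a) is also minimal.
print(all(not m["A"] for m in minimal_models(a_fixed=True)))    # False
```

With A variable the single minimal model satisfies ¬Aa, so defeasible modus
tollens goes through; with A fixed the model with Aa and exc1(a) survives
as well, and ¬Aa is no longer a consequence.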
Poole's framework reflects the same attitude towards contrapositive
inferences as circumscription does. Note first that, as the framework is
defined above, it validates modus tollens: recall that (1) and (2) will be in F
and a default ¬exc1(x) in Δ; then F ∪ {¬exc1(a)} explains ¬Aa, while there
is no explanation with exc1(a). However, Poole defines an extended version
of his framework precisely with the aim of allowing the user to decide for
each individual default whether it should allow for defeasible modus tollens.
This extended version is defined in the following way. Besides the categories
F for the facts and Δ for the defaults Poole introduces a new category C for
so-called constraints, which cannot be used for constructing explanations,
but only for blocking them. This is ensured by adding to Definition 4.1.17
of a scenario the requirement that F ∪ D ∪ C is consistent. Now defeasible
modus tollens can be blocked for (1) by adding the formula

(3) ∀x.¬Bx → exc1(x)

not to the facts but to the constraints. To see this, assume that ¬Ba is added
to the facts F: then F ∪ C implies exc1(a) and therefore the explanation
for ¬Aa becomes inconsistent with F ∪ C, since it contains ¬exc1(a). Note
that (3) can also be added if {(1), (2)} is a circumscriptive theory, but then
it is just another premise, for which reason it can also be used for other
derivations, while in Poole's system it is only meant for blocking derivations.
As in default logic, in logic programming no contrapositive inferences
are possible at all. For Horn clauses this is obviously caused by the absence
of negation in the language, and for general clauses by the fact that only
atoms occurring in the body of a clause can be negated. And contrapositive
inferences are also invalid in the above-mentioned extensions of logic
programming with classical negation, since in these approaches the clauses
are interpreted as inference rules. However, this has nothing to do with
considerations of defeasible reasoning; the only reason why these inferences
are invalidated is to preserve completeness (or near completeness) of logic
programming's efficient proof methods. This, however, is philosophically
highly questionable: the only sensible reason to invalidate inferences like
modus tollens for a conditional seems to be that the conditional is
interpreted as a defeasible one.

In conclusion, while default logic and logic programming do not allow
for any form of defeasible modus tollens, circumscription and Poole's
framework allow for both validating and blocking defeasible modus tollens,
by providing options for formalizing the problem in different ways. However,
since this choice is left to the user, these systems do not give any insight
into the philosophical question whether defeasible modus tollens should be
valid.

5.6.4. ASSESSMENT OF THE EXCEPTION CLAUSE APPROACH


Concerning the points listed in Section 5.1.3 the following remarks can be
made.

Structural Resemblance
As already explained in Chapter 3, mixing source units in the formalization
can only be avoided if the exception clauses are general. However, it seems
that for soft rebutting defeaters in default logic and circumscription the
situation of a one-to-one correspondence between source units and
knowledge-base units cannot be obtained but only approximated, by giving
the same name to knowledge-base units based on the same source unit.

Modularity
In general the exception clause approach is non-modular. Sometimes it
is said to support modularity (e.g. by Etherington, 1988, pp. 102-3 and
Brewka, 1991a, p. 117), but this only holds for modularity of adding
exceptions; the formalized exception still has to mention explicitly to which
rule it is an exception, for which reason the knowledge engineer must still be
aware of all possible interactions between the various rules and exceptions
(cf. Touretzky, 1986, p. 18; Poole, 1991, pp. 282, 295). This is different
only if natural-language exceptions themselves mention to which rules they
are an exception, as is sometimes the case in law. Furthermore, we have
seen that in case of soft undercutting defeaters and in case of chaining rules
circumscription needs prioritization of defaults, which leads to non-modular
formalization if this cannot be done on the basis of general criteria. Such
criteria are provided by logic programming's concept of stratification, but
only if the theory does not contain undecided conflicts.

Implementation
Of course, logic programming with general programs offers the best
prospects for implementation, but this is mainly caused by its underlying
idea of restricting the language to a computable fragment. We have seen
that if the languages of the other formalisms are restricted in a similar way,
then under certain conditions they become formally translatable into general
logic programs. Furthermore, even if this does not hold, then at least
a general program can be designed which roughly preserves the style of
formalization and which validates more or less the same inferences. For
these reasons logic programming is more a way of implementing logics than
a logic itself. However, the restrictions under which these observations
hold are rather serious: they imply that conflicts between rules are either
resolved or blocked, and they give up the possibility to express classical
negation. For these reasons it is interesting to see what the results will be
of the ongoing investigations on the proof theory of logic programming.

Exclusiveness of Specificity
As noted several times, the aim of capturing specificity by way of the
exception clause approach is to obtain a unique extension in which the
exceptional conclusion holds. In fact, in this way specificity is expressed
implicitly in the way the exception clauses are assigned. Now if this results
in unique answers there is nothing to choose, for which reason there is only
room for other standards in case of undecided conflicts; if a conflict can
be solved by specificity, no other standard can solve it otherwise, and this
makes specificity exclusive.

Resemblance to Natural Language
Some domain rules themselves already contain exception clauses, but
certainly not all of them; therefore the exception clause approach is in
general not really close to natural language.

Expressiveness
Specific exception clauses are of limited use, but general clauses can be
used for the formalization of most kinds of exceptions. Problems arise
with two kinds. Firstly, in case of undecided conflicts between rules logic
programming either does not recognize the conflict, or results in looping
programs, or has to be completely silent, and circumscription cannot
present alternative conclusions, but only one disjunctive conclusion.
Secondly, Poole's framework cannot represent soft undercutting defeaters,
which, among other things, prevents reinstatement of general rules; the
other formalisms do not have this problem. We have also seen that, since
the exception clause approach aims at obtaining unique answers, in no
formalism can the exceptional conclusion be presented as the preferred one
of conflicting alternatives. Furthermore, defeasible modus tollens is only
possible in circumscription and Poole's framework, although it depends for
each individual default on the chosen circumscription policy. Finally, the
language of logic programming is logically far less expressive than the other
formalisms, but this is the price of tractability.

Overall assessment
In sum, although the exception clause approach has turned out to work
rather well, if it has to be efficiently implemented it does so on a restricted
domain of application. The most important restriction is that it must
always either solve the conflict in favour of one of the rules, or block their
simultaneous application. For many applications this restriction will not
be a disadvantage, but for other ones it will be. In particular, attempts
to model the adversarial aspect of legal reasoning, which is one of the
main topics in current AI-and-law research, need ways of dealing with
undecided conflicts without resolving or blocking them. It is true that,
at least philosophically, default logic satisfies these requirements, but when
used for the exception clause approach it still has a restriction: without
undecided conflicts it results in unique extensions, for which reason it does,
as just explained, not leave room for other standards besides specificity to
resolve norm conflicts. In conclusion, in some applications ways of dealing
with exceptions are needed which allow for undecided conflicts and also for
other conflict resolution principles. These ways require more general tools
to reason with inconsistent information, and since this kind of reasoning is
the second major topic of this book, I now turn to a way of dealing with
exceptions which can be embedded in techniques for inconsistency-tolerant
reasoning.
CHAPTER 6

PREFERRING THE MOST SPECIFIC ARGUMENT

6.1. Introduction
In this chapter I start the investigation of a second way of modelling
reasoning with defeasible rules; this is the method of allowing for
incompatible solutions to a problem and choosing the exceptional one if it
exists. In this method exceptions can be left implicit: no use has to be made
of exception or applicability clauses, since exceptions are identified as a
result of the choice. As already mentioned in Chapter 3, when applied to
legislation this method corresponds to application of the legal collision rule
Lex Specialis Derogat Legi Generali.

In the previous chapters three main reasons were identified in favour
of modelling reasoning with exceptions as choosing between alternative
conclusions. The first has to do with undecided conflicts: it should be
possible to say that a problem has two alternative, incompatible solutions,
and that there is no reason to prefer one of them. Another reason is that
the choice approach is, just like the exception clause approach, a way of
preserving the separation of rules and exceptions in legislation, but with
the advantage that it is often closer to natural language, since it does not
need to use exception clauses when the natural-language text does not use
them. A third and very important reason is that a system based on choosing
between answers leaves room for other standards besides specificity, which
might still solve conflicts between rules of which neither is an exception
to the other, or which might even override the specificity considerations.
The law is one domain in which such other standards are used: conflicts
between norms in legislation are not only solved on the basis of the Lex
Specialis principle, but also, and even with higher priority, on the basis of
the time of enactment of a norm and on the basis of the general hierarchical
structure of the legal system. This observation is the reason why in this book
I do not go into the third method of dealing with exceptions, besides the
exception clause and the choice approach. This method, already mentioned
in Section 5.1, is to regard specificity as a principle of the semantics of
a logic for defeasible conditionals. Obviously, if the specificity criterion is
modelled as a semantic principle, no other standards can override it.
141
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997

Within the choice approach the choice can be made in two ways. The
first, which will be investigated in this chapter, is to use an explicit
formalization of the notion of specificity as a collision principle: what this
principle does is inspect the logical structure of the formulas involved to
check which solution is based on the most specific information. Recall that
in the exception clause approach the notion of specificity is not formalized
but is used implicitly by the person who formalizes the expressions, in
assigning the exception clauses. The second way of making the choice is
doing so on the basis of some externally provided, predefined ordering on
the premises (as in e.g. Brewka, 1989; Konolige, 1988b; Roos, 1992), a
method which will be discussed in the next chapter. In this method
specificity is also used implicitly, viz. in laying down the priorities. A reason
which is often given for preferring the explicit use of specificity over its
incorporation in a premise ordering is that the latter would obstruct a
modular way of formalizing: the ordering would have to be defined
individually for each case of conflicting solutions, in which case a knowledge
engineer has to be aware of all possible interactions between defeasible rules
(cf. Etherington, 1988, p. 47; Poole, 1991, p. 295). By contrast, since the
specificity check does not depend on externally defined orderings but is
entirely determined by the logical structure of the premises, it is often
claimed that all a knowledge engineer has to do is to formalize individual
natural-language expressions; the system then takes care of calculating the
rule-exception relationships, which would ensure a modular formalization
process (cf. Touretzky, 1984, 1986; Loui, 1987, p. 106; Poole, 1991, p. 287).
A secondary aim of this chapter is to investigate whether this claimed
advantage of the specificity check indeed holds. In any case, this chapter
only investigates the explicit use of specificity; its implicit use by way of
premise orderings will not be ignored, but since priorities can be defined on
the basis of any criterion and not just to express rule-exception relations,
their discussion will be postponed to the next chapter, which is about
reasoning with inconsistent information.
Recall from Chapter 5 that in some formalizations of the exception
clause method it is sometimes also necessary to make a choice, viz. in
circumscription and logic programming between multiple minimal models,
and in Poole's way of modelling the exception clause approach between
scenarios. However, in these cases the only purpose of using priorities is
to order the minimization of the exception clauses in order to semantically
model the directional procedural effect of negation as failure; this is not the
same as choosing between conflicting solutions to a problem: in particular,
no notion of specificity is involved.
The investigation of the choice method, whether with specificity or with
priorities, will not just be a technical discussion on how to deal with
exceptions; in fact, the method cannot be discussed without going into
detail on one of the fundamental issues in the logical study of nonmonotonic
reasoning, already touched upon in Section 4.1.5: should this kind of
reasoning be modelled by changing the logic or only by changing the way
the logic is used? The reason why this question particularly arises in the
choice method is that, as shown in Section 3.1 above, ways of choosing
between conflicting answers make the reasoning process nonmonotonic even
if the underlying logic is standard logic; and if inconsistency handling
suffices to obtain nonmonotonicity, why then also change the logic? The
most outspoken proponent of this approach is Brewka (1989; 1991a), and
Poole (1985; 1988) also claims that if his way of using logic is followed, no
nonstandard logics are needed (note, however, that Poole, 1988 does not
restrict this claim to the way he models the choice approach; in that paper
Poole mainly investigates the use of his framework for modelling the
exception clause approach). In short, in the inconsistency handling
approach defaults are not regarded as different linguistic entities, but as
approximations to the truth, which might be set aside in specific
circumstances. The discussion in this chapter of Poole and in the next
chapter of Brewka also serves to investigate the tenability of this paradigm
in modelling nonmonotonic reasoning.
In this chapter two questions have to be answered: how can the notion
of specificity be formally defined as a collision rule, and what is the general
formal context determining when there is something to choose? Section 6.2
investigates the answer of Poole (1985) to the first question, after which
Section 6.3 criticizes the way Poole uses his specificity definition in his
framework for default reasoning; this criticism results in an answer to
the second question in the form of an 'argumentation system', which is
presented in Sections 6.4 to 6.6. Finally, in Section 6.7 the choice method
with specificity is evaluated.

6.2. Poole: Preferring the Most Specific Explanation

In the AI literature on nonmonotonic reasoning the formalization of
specificity was initially related to inheritance systems with exceptions. The
idea of such systems is that a subclass inherits the properties of its
superclasses, unless contradicting information about the subclass is
available, in which case this overrides the information about its
superclasses. Various ways of formalizing this idea have been proposed,
notably by Touretzky (1984; 1986) and Horty et al. (1990), who incorporate
a definition of what a subclass is in the syntactic notion of an acceptable
'inheritance path'. However, since the expressive power of the language of
inheritance nets is very weak, for present purposes these formalizations are
too restricted. While using a full logical language, Loui (1987) gives, as part
of a system for defeasible argumentation, four syntactic categories in which
one argument is better with respect to specificity than a conflicting
argument.

This categorization involves more than testing for a subset relation, but it
is still based on implicit intuitions of what it means that an argument is
more specific than another. Poole's (1985) "theory comparator" is the first
attempt to give a semantic definition of specificity, without any reference to
syntactic cases, for which reason it can, if correct, be used for determining
the soundness of any syntactic categorization of specificity.
Poole (1985) presents his formalization of the specificity principle against
the background of his general view on default reasoning presented in detail
in Poole (1988), which, as explained in Chapter 4, is that if defaults are
regarded as possible hypotheses with which theories can be constructed
to explain certain facts, nonmonotonic reasoning can be modelled without
giving up standard logic. My initial reason to study Poole's ideas was
their striking similarity (also observed by Gordon, 1989) to what I call the
'modelling disagreement' view on legal reasoning. Very roughly, this view,
which is particularly popular among Anglo-American researchers in the
field of AI and law (Rissland & Ashley, 1987; Gardner, 1987; Gordon, 1989;
Skalak & Rissland, 1992), says that lawyers do not try to argue for the
legally correct solution, if it exists at all, but for the solution which best
serves the client's interests. More specifically, the similarity is as follows:
the legal counterpart of Poole's explanations are arguments for a desired
solution of a case: certain facts must be obeyed by such arguments: for
example, facts about the case at hand, or necessary truths such as 'a man
is a person' or 'a lease is a contract'; but for the rest lawyers have available
a large body of conflicting opinions, rules and precedents from which
they can choose a coherent set of premises which best serves the client's
interests. Of course, lawyers do not only express disagreeing views but
also compare them; now for this Poole's framework also provides a formal
counterpart, viz. the possibility to make a choice between incompatible
explanations. In Poole (1985) he only investigates the specificity principle,
but in Poole (1988, p. 45) he remarks that also other criteria might be
used.
Recall that in Poole's framework a default theory is a pair (F, Δ), where
F is a consistent set of closed first-order formulas, the facts, and Δ a set
of first-order formulas, the defaults; furthermore, a scenario of (F, Δ) is
a consistent set F ∪ D, where D is a subset of the ground instances of Δ;
finally, an explanation of φ from (F, Δ) is a scenario of (F, Δ) implying
φ. Now, although Poole's (1985) main objective is to provide a semantics
for inheritance networks with exceptions, he does not restrict his specificity
principle to such networks, but defines it on the semantics of full first-order
predicate logic. Consider explanations Ai = F ∪ Di for φ and Aj = F ∪ Dj
for ¬φ. Informally, the idea is that Ai is more specific than Aj iff there is a
possible situation in which only Aj applies. To make this precise, the facts

should be divided into necessary facts Fn, true in all explanations based on
(F, Δ), and contingent facts Fc, the 'input facts' of the case. Now the key
idea is that in the specificity check the contingent facts are ignored; these
facts are only used to derive conclusions about the actual circumstances.
In checking for specificity they are replaced by any set of 'possible facts',
where a possible fact can be any first-order formula, whether in Fc or
not. Thus strictly speaking not the explanations Ai and Aj themselves
are compared, but their 'noncontingent' parts as applied to any possible
situation: in notation, what will be compared is for all possible facts Fp
the sets Fn ∪ {Fp} ∪ Di and Fn ∪ {Fp} ∪ Dj. Now if a possible fact Fp can
be found with which the latter construct implies ¬φ but the first does not
imply φ, and if no possible fact Fp can be found for which the reverse holds,
i.e. with which the first implies φ without the latter implying ¬φ, then we
can say that Ai applies in a proper subset of all possible situations in which
Aj applies, which makes Ai strictly more specific than Aj.
Definition 6.2.1 (Poole's specificity definition) Let Ai = Fn ∪ Fc ∪ Di and
Aj = Fn ∪ Fc ∪ Dj be explanations for, respectively, φ and ¬φ. Ai is more
specific than Aj with respect to φ iff there is a fact Fp such that
1. Fn ∪ {Fp} ∪ Dj ⊨ ¬φ; and
2. Fn ∪ {Fp} ∪ Di ⊭ φ and Fn ∪ {Fp} ∪ Di ⊭ ¬φ.¹
If, in addition, Aj is not more specific than Ai with respect to φ, then Ai
is strictly more specific than Aj with respect to φ.
It might be asked why Poole defines the specificity test as a comparison
between explanations (or arguments) rather than as a comparison between
individual rules: after all, it seems that in most cases the comparison reduces
to the question whether one rule is more specific than another one. However,
the reason why Poole compares arguments instead of rules is that he aims
at giving a general definition of what specificity is, to serve as a standard for
any syntactic way of using specificity. Surely, from a philosophical point of
view such a definition is very worthwhile, but to be generally applicable, it
should not have to take into account syntactic details which are semantically
irrelevant.
Poole's elegant definition gives the intuitive results in the standard
examples.
Example 6.2.2 Consider first a legal variation of Minsky's penguin example,
consisting of a rule stating that contracts only bind the parties involved,
and another rule saying that leases of houses also bind new owners of the
house. For a given contract c this becomes
¹This last 'non-triviality' requirement for Ai and ¬φ is essential, since without it the
possible fact ¬φ would always make Ai more specific than Aj.
146 CHAPTER 6

(1) x is a contract → x only binds its parties
(2) x is a lease of house y → x binds all owners of y
Fn: {∀x. x is a lease of house y → x is a contract,
     ∀x. ¬(x only binds its parties ∧ x binds all owners of y)}
Fc: {c is a lease of house h}
Δ: {(1-2)}

A1 = Fn ∪ Fc ∪ {(1)}² explains c only binds its parties, while A2 =
Fn ∪ Fc ∪ {(2)} explains c binds all owners of h. A2 is strictly more
specific than A1: c is a contract is a possible fact which makes A1 explain c
only binds its parties without making A2 explain c binds all owners
of h or c only binds its parties, and therefore A2 is more specific than
A1; on the other hand, A1 is not more specific than A2, since every possible
fact which implies c is a lease of house h and thus makes A2 apply,
with Fn also implies c is a contract, which makes A1 apply. This is the
only kind of specificity expressible in inheritance networks.
Example 6.2.3 Another typical case of specificity occurs when the antecedent
of one default implies the antecedent of another one not merely as
a matter of fact, but deductively. Consider the case of conservatives, who
are by default rich, and bankrupt conservatives, who are by default poor.
(3) x is a conservative → x is rich
(4) x is a conservative ∧ x is bankrupt → ¬ x is rich
Fc: {Dennis is a conservative ∧ Dennis is bankrupt}
Δ: {(3, 4)}
A3 = Fc ∪ {(3)} is an argument for Dennis is rich and A4 = Fc ∪ {(4)}
is an argument for ¬ Dennis is rich. A4 is strictly more specific than
A3 since the antecedent of (4) logically implies the antecedent of (3) while
the reverse does not hold; this means that, on the one hand, every fact
which makes A4 explain ¬ Dennis is rich makes A3 explain the opposite
while, on the other hand, there is a fact, Dennis is a conservative, which
makes A3 explain Dennis is rich without making A4 explain the opposite.
Therefore, the argument for ¬ Dennis is rich takes precedence.
A variation on the Nixon Diamond is obtained by changing (4) into
(4') x is bankrupt → ¬ x is rich
In that case Dennis is a conservative is still a possible fact which makes
A4' = Fc ∪ {(4')} more specific than A3, but now there is also a fact which
makes A4' explain ¬ Dennis is rich without making A3 explain Dennis
is rich. Therefore, neither explanation is strictly more specific than the
other.
²In this notation I ignore the complication that an argument does not use defaults
but ground instances of defaults.
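Definition 6.2.1 can be checked mechanically in a small propositional sketch. The encoding below (formulas as truth-functions, a finite pool of candidate possible facts, defaults read as material implications) is my own simplification, applied to the conservative/bankrupt example; it is an illustration of the idea, not Poole's first-order formalism.

```python
from itertools import product

ATOMS = ["conservative", "bankrupt", "rich"]

def entails(premises, phi):
    """Brute-force propositional entailment over all assignments to ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in premises) and not phi(v):
            return False
    return True

def more_specific(Di, Dj, phi, Fn, pool):
    """Definition 6.2.1: Ai (defaults Di, for phi) is more specific than
    Aj (defaults Dj, for not-phi) iff some possible fact Fp makes Dj imply
    not-phi while Di implies neither phi nor not-phi. `pool` restricts the
    search to a finite candidate set of possible facts."""
    neg_phi = lambda v: not phi(v)
    return any(entails(Fn + [Fp] + Dj, neg_phi)
               and not entails(Fn + [Fp] + Di, phi)
               and not entails(Fn + [Fp] + Di, neg_phi)
               for Fp in pool)

# Example 6.2.3 propositionally: (3) c -> r, (4) c and b -> not r
d3 = lambda v: (not v["conservative"]) or v["rich"]
d4 = lambda v: (not (v["conservative"] and v["bankrupt"])) or not v["rich"]
rich = lambda v: v["rich"]
pool = [lambda v: v["conservative"], lambda v: v["bankrupt"]]

print(more_specific([d4], [d3], lambda v: not v["rich"], [], pool))  # True
print(more_specific([d3], [d4], rich, [], pool))                     # False
```

With the possible fact Dennis is a conservative, {(3)} implies rich while {(4)} implies nothing, so A4 is more specific; no possible fact in the pool yields the reverse, so A4 comes out strictly more specific, matching the example.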

I now move on to some examples which are less obvious.
Example 6.2.4 An interesting type of example is of the following form.
(5) x is a conservative → x is selfish
(6) x is poor → ¬ x is selfish
(7) x is bankrupt ∧ x is a conservative → x is poor
Fc: {Dennis is a conservative ∧ Dennis is bankrupt}
Δ: {(5-7)}
Consider the explanations A5 = Fc ∪ {(5)} for Dennis is selfish and
A6 = Fc ∪ {(6-7)} for ¬ Dennis is selfish. According to Poole's
definition both explanations are more specific than each other: Dennis
is a conservative is a possible fact which makes A5 explain Dennis
is selfish without making A6 explain the opposite, and Dennis is poor
is a possible fact which makes A6 explain ¬ Dennis is selfish without
making A5 explain Dennis is selfish (recall that a possible fact need
not be in Fc). At first sight, however, it would seem that intuitively there
is a reason to prefer A6, viz. the fact that it is based on the fact situation
Dennis is a conservative ∧ Dennis is bankrupt, which is a specific
instance of the fact situation Dennis is a conservative on which A5 is
based. Loui (1987), calling this a case of "superior evidence", indeed defines
specificity in such a way that it prefers A6.
Example 6.2.5 Another example, which syntactically differs from the previous
one in some inessential respects, is the following imaginary legal
example.
(8) A wall with loose bricks is a maintenance deficiency.
(9) A wall near a road with loose bricks is a dangerous situation.
(10) In case of a maintenance deficiency the landlord, not the tenant,
must act.
(11) In case of a dangerous situation the tenant, not the landlord,
must act.
Fc: {A wall has loose bricks and is near a road.}
Δ: {(8-11)}

which in formulas becomes
(8) l → m            (10) m → (ld ∧ ¬t)
(9) (l ∧ l') → d     (11) d → (¬ld ∧ t)
Fc: {l, l'}
While Loui's definitions prefer the explanation Fc ∪ {(9), (11)} for ¬ld ∧ t,
according to Poole's definition the conflict can again not be decided. At
first sight Loui's outcome would seem to better match the common-sense
intuitions about the example. However, in my opinion Poole's definition

is still correct in regarding the outcome as undecided, for the following


reasons. As explained, in preferring the most specific argument two phases
can be distinguished: firstly, determining which argument is the most
specific; and secondly, deriving new facts with the preferred argument. Now
Fc only plays a role in the second phase, in determining what may be
held on the basis of the facts of the case at hand. By contrast, specificity
is determined with respect to all possible situations; for an argument to
be preferred it is not enough to be more specific only under the facts
of the case. It is the latter situation which occurs in Example 6.2.5: the
norm (11) itself is, witness its formulation, not meant for a specific kind of
maintenance deficiency but for dangerous situations in general, irrespective
of whether they are maintenance deficiencies; therefore in other situations
the conflicting arguments could be ambiguous, for which reason it cannot
be said that (11) is meant as an exception to (10).

With respect to similar examples Vreeswijk (1991) remarks that if intuitions
about specificity are not clear or point at different outcomes in different
examples, the formal answer must be that neither conclusion is preferred.
In particular, he points at the danger of confusing intuitions about the
logical form of an example with domain specific intuitions about its content.
To summarize the discussion on 'superior evidence', in my opinion Poole's
definition is correct in not regarding this type of situation as a cause of
specificity and in regarding the conflicts in Examples 6.2.4 and 6.2.5 as
undecided.

6.3. Problems

Despite their intuitive attractiveness, Poole's ideas have some drawbacks.


One of them concerns the specificity definition itself, in that it mistakenly
regards all possible situations in which an argument applies as relevant for
determining specificity. Another shortcoming is that the way Poole uses his
definition ignores the possibility of multiple conflicts, while a final problem
is caused by the use of standard logic for representing defaults, which gives
rise to arguments which should not be possible.

6.3.1. SOME POSSIBLE FACTS ARE IRRELEVANT

The first problem was discovered by Loui & Stiefvater (1992). As explained
above, the general intuition behind Poole's specificity comparator is that
an argument Al is strictly more specific than an argument A 2 if there is a
possible situation in which only A 2 applies, and there is no possible situation
in which only Al applies. Although as a general intuition this seems sound,

the next example shows that the notion of a 'possible situation in which an
argument applies' should be defined carefully.
Example 6.3.1 Consider:
A1 = {t, s, t → r, s → q, (q ∧ r) → p}
A2 = {s, s → q, q → ¬p}
Fc = {s, t}
Fn = ∅
Intuitively, A1 should be strictly more specific than A2. However, according
to Poole's definition this is not the case, since there is a possible fact
making A1 explain p without making A2 explain ¬p: this possible fact
is t ∧ (r → q). The problem is that this fact 'sneaks' a new way of deriving
an intermediate conclusion into the argument by introducing a new 'link'
r → q, thereby intuitively making it a different argument: with the possible
fact A1 uses a different default to explain q than with the actual facts,
viz. t → r instead of s → q, and therefore it cannot be said that A1
itself applies in the possible situation. Note that these considerations are
of a syntactical nature, which indicates that Poole's aim to have a purely
semantical definition of specificity cannot be maintained.
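The 'sneaked link' can be verified mechanically. In this propositional sketch (my own rendering, with the defaults read as material implications and a brute-force entailment check), the possible fact t ∧ (r → q) makes A1's defaults imply p while A2's do not imply ¬p, so A1 fails to be strictly more specific, just as the example describes:

```python
from itertools import product

ATOMS = ["p", "q", "r", "s", "t"]

def entails(premises, phi):
    """Brute-force propositional entailment over all assignments to ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in premises) and not phi(v):
            return False
    return True

# Example 6.3.1, defaults read as material implications
D1 = [lambda v: (not v["t"]) or v["r"],                # t -> r
      lambda v: (not v["s"]) or v["q"],                # s -> q
      lambda v: (not (v["q"] and v["r"])) or v["p"]]   # (q and r) -> p
D2 = [lambda v: (not v["s"]) or v["q"],                # s -> q
      lambda v: (not v["q"]) or not v["p"]]            # q -> not p
p = lambda v: v["p"]

# the problematic possible fact t and (r -> q)
Fp = lambda v: v["t"] and ((not v["r"]) or v["q"])

print(entails([Fp] + D1, p))                    # True: A1 'applies'
print(entails([Fp] + D2, lambda v: not p(v)))   # False: A2 does not
```

The new implication r → q inside Fp lets A1 derive q via t → r rather than via s → q, which is exactly the syntactic rerouting the text objects to.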

6.3.2. MULTIPLE CONFLICTS IGNORED

Another problem is that Poole's use of the specificity definition incorrectly


handles examples in which more than one conflict must be solved, because
it ignores the possibility that an argument contains a defeated premise.
Example 6.3.2 Consider the following example.
A1 = {p, p → q, q → r, r → s}
A2 = {p, r, (p ∧ r) → ¬q, ¬q → t, t → ¬s}
Fc = {p, r}
Fn = {r → t}
Poole's definition prefers A1 for s, because t is a fact which makes A2 explain
¬s without A1 explaining s, while all facts which make A1 explain s imply
r and therefore, since Fn contains r → t, they all imply t, which makes A2
explain ¬s (note again that Fc is ignored). However, this overlooks the fact
that A1 uses the intermediate conclusion q, for which the scenario A1' =
Fc ∪ Fn ∪ {p → q} is clearly defeated by A2' = Fc ∪ Fn ∪ {(p ∧ r) → ¬q}
for ¬q.
Of course, as Poole (1988, p. 146) himself recognizes when discussing a similar
example, for an argument to be preferred not only the final conclusion
but also all intermediate conclusions must be preferred. The problem with
Definition 6.2.1, however, is that it does not recognize q as an intermediate

conclusion of A1. In fact, this is a fundamental flaw of Poole's inconsistency


handling approach: it fails to recognize that the specificity definition must
be embedded in a general argumentation framework, defining when and to
which conflicts the definition should be applied. Most importantly, such
a framework should reflect the step-by-step nature of constructing and
comparing arguments (in fact, this is one of the two main conclusions of this
chapter). What should happen is that, rather than at once, it is checked
after each step whether the argument constructed thus far is better than all
counterarguments, and if at some step there are counterarguments which
are strictly more specific with respect to the conclusions of that step, the
argument should be 'cut off'. In this way A1 in the present example will
be cut off after its intermediate step leading to q and therefore A2 will for
trivial reasons be the argument which wins the conflict about s, since its
only counterargument, which is A1, has not survived a comparison for an
earlier step. One of the main goals of this chapter will be to formalize this
step-by-step nature of constructing and comparing arguments.

6.3.3. DEFAULTS CANNOT BE REPRESENTED IN STANDARD LOGIC

Even with a framework satisfying the just stated demands there will be
some problems left. As noted above, advocates of the inconsistency handling
approach to nonmonotonic reasoning stress the fact that this approach can
be based on standard first-order logic. However, there are strong reasons
to doubt the tenability of this claim: I shall now show that if the required
general framework of comparing arguments which has just been sketched
is combined with Poole's (and Brewka's) view that defaults can be represented
as material implications, then arguments can be constructed which
intuitively should not be possible at all.
Example 6.3.3 Consider the example of Bob having killed Karate Kid in
self-defence.
(1) x kills y → x is guilty of murder
(2) x kills y ∧ x acts in self-defence →
¬ x is guilty of murder
(3) x defends himself against KK → x acts in self-defence
Fc: {Bob kills KK, Bob defends himself against KK}
Δ: {(1-3)}

Clearly, from this set of rules and facts ¬ Bob is guilty of murder
should be the preferred conclusion. However, Poole's framework allows us
to explain Bob acts in self-defence from Fc ∪ {(3)}, but also ¬ Bob
acts in self-defence from Fc ∪ {(1, 2)}. The reason is that (1) together
with Bob kills KK implies Bob is guilty of murder, which by modus
tollens together with (2) implies ¬ Bob kills KK ∨ ¬ Bob acts in
self-defence, which with the facts implies ¬ Bob acts in self-defence;
furthermore, since Bob kills KK is a possible fact making Fn ∪ {(1, 2)}
explain ¬ Bob acts in self-defence without making Fn ∪ {(3)} explain
Bob acts in self-defence, the argument against Bob acts in self-
defence is less specific, which would mean that given the premises there is
an irresolvable legal issue concerning whether Bob acted in self-defence and
this in turn would mean that the argument for ¬ Bob is guilty of murder
uses a non-preferred subargument and cannot be preferred.
However, in my opinion the argument for ¬ Bob acts in self-defence
is intuitively not constructible, for the following reasons. This argument is
based on the fact that, since (1) and (2) have contradicting consequents,
logically their antecedents cannot be true at the same time; however in
the inconsistency handling approach, as well as in legal reasoning, the very
purpose of using collision rules is to cope with situations to which conflicting
rules apply, and therefore it is strange to allow an argument based on the
fact that such a situation cannot arise. A system which makes the argument
against Bob acts in self-defence possible fails to recognize that it is
(1) and (2) that are in conflict and that this is the conflict that has to
be resolved. And since this argument is based on modus tollens, we must
conclude that rules that are subject to collision rules do not validate modus
tollens.
It must be admitted that, as already discussed above, Poole (1988, pp.
137-40) recognizes the invalidity of modus tollens for defaults as "a possible
point of view", and presents a method to block it. However, as we have seen
in Section 5.6.3, this method is optional: the choice whether to use it or not
must be made separately for each default, and philosophically this is not
satisfactory: if modus tollens is regarded as invalid for defaults, this should
be expressed in their logic, by making them one-directional.
To summarize the main results of this section, we have, firstly, seen
that the choice approach to deal with exceptions should take into account
the step-by-step nature of argumentation and, secondly, that the claim of
Poole and others that this approach can be modelled without having to
change the logic cannot be maintained: defaults should be represented as
one-directional rules.

6.4. A System for Constructing and Comparing Arguments


6.4.1. GENERAL REMARKS

As an attempt to solve the problems identified in the previous section, I


shall now develop a general system for constructing and comparing argu-
ments, in which the language of Reiter's default logic is used for expressing

defeasible rules. The system is general in the sense that it assumes an


unspecified standard for comparing arguments, but in this chapter this
standard will be instantiated with a suitable specificity principle.
The system consists of the following five elements. First it has the underlying
logical language of default logic, in which premises can be expressed.
Then it has the notion of an argument; the intuitive idea is that arguments
are chains of defaults, grounded in the facts and held together by pieces of
first-order reasoning. While these two notions capture the construction of
arguments, the remaining elements formalize the evaluation of conflicting
arguments. The first is the notion of a conflict between arguments. In
this chapter we have so far only discussed cases where arguments have
contradictory conclusions, and this will initially be the only type of conflict
recognised by the system. However, later, in Section 6.6, I shall also analyze
conflicts where one argument attacks the justification part of a default.
Then it must be defined how to compare conflicting arguments, to see
which arguments defeat which other arguments. In line with the subject
of this chapter it will first be defined how arguments are compared with
respect to specificity. However, it is very important to realise that the
general system does not depend on specificity in any way; it will be very easy
to replace this criterion with any other standard for comparison. Above it
was already explained why this is desirable: in the legal domain specificity
is just one of the possible standards for comparing arguments, and not
even always the most important one. In Chapters 7 and 8 the use of other
standards, including their combination, will be extensively discussed.
Since defeating arguments can themselves be defeated by other arguments,
comparing just pairs of arguments is insufficient; what is also needed
is a definition that determines the status of arguments on the basis of all
ways in which they interact. It is this definition that produces the output
of the system: it divides arguments into three classes: arguments with which
a dispute can be 'won', respectively 'lost', and arguments which leave the
dispute undecided. In this book these classes are denoted with the terms
justified, overruled and defensible arguments.
Let us now turn to the formal definition of these five elements.

6.4.2. THE UNDERLYING LOGICAL LANGUAGE

One main conclusion of the previous section was that even in a system
for comparing arguments defeasible statements need to be formalized with
a nonstandard, one-directional conditional operator. To meet this requirement,
the underlying logical language of the system will be that of default
logic. However, it is important to realise that the only aspect of default
logic that is used by the system is its language; Reiter's notion of a default
extension is replaced by the last three notions just discussed, that of an
argument, of defeat among arguments and, most importantly, of the status
of an argument.
The language of default logic is used in the following way. With respect
to the facts, the idea is to represent the facts of the case at hand as the set
Fc of contingent facts, and necessary truths such as 'a man is a person' or 'a
lease is a contract', as the set Fn of necessary facts. With respect to default
statements the idea is to formalize them as non-normal defaults φ : ⊤/ψ
(where ⊤ stands for any valid formula), which in the rest of this book will
be written as φ ⇒ ψ. Unconditional defaults will be represented as defaults
of the form ⇒ φ, which is shorthand for ⊤ ⇒ φ. The one-directional nature
of Reiter-defaults invalidates the formal construction of intuitively invalid
arguments.
At first sight, the use of non-normal defaults seems surprising, since in
Chapter 4 we have seen that non-normal default theories do not always
have extensions. However, the present system solves this problem, as will
be discussed in more detail below.
Next the language will be formally defined. Defaults will from now
on be called 'defeasible rules', or just 'rules'. Moreover, for notational
simplicity they are now regarded as object-level formulas instead of as
(domain specific) inference rules, as they are in default logic. However, it
should be noted that they can still not be combined with other formulas,
i.e. they cannot be negated, conjoined, nested, and so on.
Definition 6.4.1 Let L0 be any first-order language.
- A defeasible rule is an expression of the form
φ1 ∧ ... ∧ φn ⇒ ψ
where each φi (0 ≤ i ≤ n) is a formula of L0. The conjunction at the
left of the arrow is the antecedent and the literal at the right of the
arrow is the consequent of the rule. A rule with open L0-formulas is a
scheme standing for all its ground instances.
- The defeasible extension L1 of L0 is L0 extended with the set of all
defeasible rules φ ⇒ ψ.
- A default theory is a set (Fc ∪ Fn ∪ Δ), where Fc ∪ Fn is a consistent
subset of L0 and Δ is a set of defeasible rules.
To define some useful notation, for any rule r, ANT(r) and CONS(r)
denote, respectively, the antecedent and the consequent of r, while the
conjunction of all elements of ANT(r) is denoted by ANTCON(r). Furthermore,
for any finite set of rules R = {r1, ..., rn}, ANT(R) = ANT(r1) ∪
... ∪ ANT(rn); likewise for CONS(R), while, finally, ANTCON(R) is the
conjunction of all members of ANT(R).
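As a concrete illustration, the rule syntax and the ANT/CONS/ANTCON notation can be sketched in Python; the `Rule` class and the restriction to atomic conjuncts are mine, not the book's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A defeasible rule: conjuncts of the antecedent, one consequent."""
    antecedent: tuple   # the conjuncts of the antecedent
    consequent: str     # the literal at the right of the arrow

def ANT(r):
    return set(r.antecedent)

def CONS(r):
    return r.consequent

def ANTCON(r):
    return " ∧ ".join(r.antecedent)

r1 = Rule(("x is a contract",), "x only binds its parties")
r2 = Rule(("x is bankrupt", "x is a conservative"), "x is poor")
print(ANT(r2))      # the set of r2's antecedent conjuncts
print(ANTCON(r2))   # 'x is bankrupt ∧ x is a conservative'
```

A `frozen` dataclass keeps rules hashable, so they can be collected into the sets R used by ANT(R) and CONS(R).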

6.4.3. ARGUMENTS

Next the notion of an argument will be defined. This will be done by
making use of work of Dung (1993; 1995) and Bondarenko et al. (1997),
who show how, among many other nonmonotonic logics, Reiter's definition
of default extensions can be reformulated in terms of constructing and
comparing arguments. To start with, a notion ⊢ of 'simple derivability'
will be defined over the language. This notion should not be confused
with the final nonmonotonic consequence notion we are aiming at, which
will capture the justified arguments; ⊢ only captures the arguments that
can be constructed, not the arguments that survive the competition with
their counterarguments. The new notion is defined in the form of what is
commonly called a deductive system.
Definition 6.4.2 (default deductive systems) Let L0 be any first-order language,
L1 the defeasible extension of L0 and let R0 be any axiomatization
of first-order logic defined over L0. That is, R0 is a set of inference rules
of the form φ1, ..., φn / ψ, where φ1, ..., φn, ψ ∈ L0 and n ≥ 0. Note that
logical axioms can be represented as inference rules with n = 0.
Consider a new inference rule DMP (default modus ponens)
φ, φ ⇒ ψ / ψ
and let R1 be R0 ∪ {DMP}. Then (L1, R1) is a default deductive system.
The next definition is that of a deduction. Like all other definitions below,
it implicitly assumes an arbitrary default deductive system (L1, R1).
Definition 6.4.3 (default deductions) A deduction from a set Γ of L1-formulas
is a sequence [φ1, ..., φn], where n > 0, such that for all φi
(1 ≤ i ≤ n):
- φi ∈ Γ; or
- there exists an inference rule ψ1, ..., ψm / φi in R1 such that
ψ1, ..., ψm ∈ {φ1, ..., φi-1}.
We say that φ is simply derivable from Γ (Γ ⊢ φ) iff there exists a default
deduction from Γ with φ as its last element.
In words, every element of a default deduction is a premise, an axiom, or is
simply derived from preceding elements in the deduction. In particular,
a first-order formula is simply derivable iff it follows by DMP from a
defeasible rule, or if it is deductively implied by the facts and/or other
simply derivable formulas.
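Restricted to atomic facts and the DMP rule alone (leaving out all first-order reasoning), simple derivability can be sketched as a naive closure computation; the representation of rules as (antecedents, consequent) pairs is my own simplification.

```python
def simply_derivable(facts, rules, goal):
    """Close `facts` under DMP (phi, phi => psi / psi) by naive forward
    chaining. Each rule is a (frozenset_of_antecedent_atoms, consequent)
    pair; an empty antecedent encodes an unconditional default."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for ants, cons in rules:
            if ants <= derived and cons not in derived:
                derived.add(cons)
                changed = True
    return goal in derived

# the default theory of the worked example: {=> p, q => r, p ∧ r => s}, Fc = {q}
rules = {(frozenset(), "p"),
         (frozenset({"q"}), "r"),
         (frozenset({"p", "r"}), "s")}
print(simply_derivable({"q"}, rules, "s"))  # True
print(simply_derivable(set(), rules, "s"))  # False: q is missing
```

The closure loop mirrors the definition's requirement that every derived element follow from earlier elements of the deduction.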
Definition 6.4.3 applies the standard notion of a deduction to our new
language L1. To make it suitable for an argumentation system, a few extra
conditions are needed. They are stated by the definition of an argument,
which is defined as a deduction with two additional properties. The first is

that arguments do not contain unused defeasible rules: every defeasible rule
in an argument is both 'applicable' and 'applied'. This is because, since in
the present system reasoning about defeasible rules is impossible, including
such rules in an argument without using them does not serve any purpose.
The second property is that arguments do not contain circular chains of
rules; without this condition it would sometimes be possible to save inferior
arguments by extending them circularly.

Definition 6.4.4 Let Γ be any default theory (Fc ∪ Fn ∪ Δ). An argument
A based on Γ is a default deduction from Γ such that
1. of every defeasible rule d in A, its antecedent occurs before d and its
consequent occurs after d in A; and
2. no element occurs more than once in A.
For any default theory Γ the set of all arguments on the basis of Γ will
be denoted by ArgsΓ. If there is no danger of confusion, the subscript Γ will
be left implicit.

We also need the following auxiliary notions.
Definition 6.4.5 For any argument A:
- φ ∈ A is a premise of A iff φ ∈ Γ;
- φ ∈ A is a conclusion of A iff φ ∈ L0;
- an argument A' is a (proper) subargument of A iff A' is a (proper)
subsequence of A;
- A is strict iff A does not contain any defeasible rule; A is defeasible
otherwise.
Note that defeasible rules cannot be conclusions of an argument, so that
reasoning about defeasible rules, for instance, deriving new defeasible rules,
or constructing arguments against such rules, is impossible. They can only
be applied, i.e. used to derive conclusions.
I now illustrate the definitions with some examples. Consider a default
theory Γ = Fc ∪ Fn ∪ Δ, where
Fc = {q}
Fn = ∅
Δ = {⇒ p, q ⇒ r, p ∧ r ⇒ s}
Then the following sequence A is a (defeasible) argument on the basis of
Γ.
A = [⇒ p, p, q, q ⇒ r, r, (p ∧ r), p ∧ r ⇒ s, s]
A's premises are {⇒ p, q, q ⇒ r, p ∧ r ⇒ s}; furthermore, A's conclusions
are {p, q, r, (p ∧ r), s}, and its subarguments are

[]
[q]
[⇒ p, p]
[⇒ p, p, q]
[q, q ⇒ r, r]
[⇒ p, p, q, q ⇒ r, r]
[⇒ p, p, q, q ⇒ r, r, (p ∧ r)]
[⇒ p, p, q, q ⇒ r, r, (p ∧ r), p ∧ r ⇒ s, s]
Of these arguments only [] and [q] are strict.
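Conditions 1 and 2 of Definition 6.4.4 can be checked mechanically on such sequences. This toy rendering is my own: atoms are strings, rules are (antecedents, consequent) pairs, and the intermediate conjunction step (p ∧ r) is left out, so it illustrates the two conditions rather than the full definition.

```python
def is_wellformed(seq):
    """Check the two conditions of Definition 6.4.4 on a sequence:
    every defeasible rule (ants, cons) has all its antecedents earlier
    and its consequent later in the sequence, and no element repeats."""
    if len(set(seq)) != len(seq):        # condition 2: no repeats
        return False
    for i, el in enumerate(seq):
        if isinstance(el, tuple):        # a defeasible rule
            ants, cons = el
            if not all(a in seq[:i] for a in ants):
                return False             # rule not 'applicable' here
            if cons not in seq[i + 1:]:
                return False             # rule not 'applied'
    return True

# the worked example, with rules as (antecedents, consequent) pairs
A = [((), "p"), "p", "q", (("q",), "r"), "r", (("p", "r"), "s"), "s"]
print(is_wellformed(A))        # True
print(is_wellformed(A[:-1]))   # False: the last rule is left unused
```

Dropping the final `s` makes the rule for s 'applicable but unapplied', which is exactly what the first condition forbids.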
In the rest of this book I shall for readability of the examples often
only list the premises of an argument and leave the logical axioms and
derivation steps implicit. I also use the following notational conventions. As
noted above, the definitions below assume an arbitrary default deductive
system. Furthermore, unless specified otherwise, they also assume a fixed
but arbitrary default theory Γ = (Fc ∪ Fn ∪ Δ). F will denote Fc ∪ Fn.
Furthermore, when I write Fc, Fn or D, I implicitly assume that Fc ⊆
Fc, Fn ⊆ Fn and D ⊆ Δ. And when I say of an argument A that A =
[Fc, Fn, D], I mean that the premises of A are all and only elements from
Fc, Fn and D.

6.4.4. CONFLICTS BETWEEN ARGUMENTS

So far the notions have been fairly standard; now the adversarial aspects
of the system will be defined. First it will be defined when an argument
attacks, i.e. is a counterargument of, another argument³. This definition
does not yet include any way of checking which argument is better; it only
tells us which arguments are in conflict. At first sight, it would seem that
arguments attack each other iff they have contradictory conclusions, i.e.
iff two rules in the respective arguments are in a head-to-head conflict.
However, the necessary facts complicate the matter. To see this, consider
the following example.
Example 6.4.6 Assume we have the defeasible rules
d1: x has children ⇒ x is married
d2: x lives alone ⇒ x is a bachelor
with the necessary fact
fn: ∀x. x is married → ¬ x is a bachelor
and the contingent facts
³To prevent terminological confusion, it should be noted that Dung and Bondarenko
et al. use the term 'attack' in a different way, coming closer to my notion of 'defeat', to
be defined below. They do not have an explicit notion of what I call 'attack'.

fc1: John has children
fc2: John lives alone

The arguments [fc1, d1] and [fc2, d2] have no contradictory conclusions. Yet
intuitively they attack each other, since the necessary fact fn is a linguistic
convention, declaring the predicates is married and is a bachelor
incompatible. To capture this intuition, two arguments must be defined
as attacking each other iff they have conclusions that together with the
necessary facts are inconsistent.
Definition 6.4.7 (attack) Let A1 and A2 be two arguments. A1 attacks A2
iff A1 ∪ A2 ∪ Fn ⊢ ⊥.
Let me now discuss and illustrate this definition. Note first that if A1 attacks
A2, then A2 attacks A1. Next, every internally inconsistent argument
attacks not only itself but every other argument, since every well-formed
formula of L0 is implied by the inconsistent conclusions alone. Although this
might seem strange, it will not cause problems, since inconsistent arguments
will always come out as overruled (see further Section 6.5).
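Definition 6.4.7 can be sketched propositionally with a brute-force satisfiability check on the bachelor example's conclusions; the encoding of formulas as truth-functions is my simplification.

```python
from itertools import product

ATOMS = ["married", "bachelor"]

def consistent(formulas):
    """Brute-force propositional satisfiability over ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in formulas):
            return True
    return False

def attacks(concs1, concs2, Fn):
    """A1 attacks A2 iff their joint conclusions, together with the
    necessary facts, are inconsistent (Definition 6.4.7, propositionally)."""
    return not consistent(concs1 + concs2 + Fn)

fn = lambda v: (not v["married"]) or not v["bachelor"]  # married -> not bachelor
a1 = [lambda v: v["married"]]    # conclusion of [fc1, d1]
a2 = [lambda v: v["bachelor"]]   # conclusion of [fc2, d2]
print(attacks(a1, a2, [fn]))  # True: incompatible via the necessary fact
print(attacks(a1, a2, []))    # False without the linguistic convention
```

The second call shows why the necessary facts must be included: without fn the two conclusions are jointly satisfiable and no attack arises.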
Example 6.4.8 The first example shows that in order to attack an argument,
a counterargument can point its attack at the 'final' conclusion of an
argument, but also at one of its proper subarguments, thereby indirectly
attacking the entire argument. So if we have
d1: ⇒ f forged evidence e
d2: f forged evidence e ⇒ ¬ e is admissible evidence
d3: ⇒ ¬ f forged evidence e
we have that [d3] does not only attack [d1], but also [d1, d2]. And if we vary
this example into
d1: ⇒ f forged evidence e
d4: f is police officer ∧ f forged evidence e ⇒ ¬
f is trustworthy
d5: f is police officer ⇒ f is trustworthy
d6: f is trustworthy ⇒ ¬ f forged evidence e
fc: f is police officer
then we have that [d1, d4] does not only attack [fc, d5] but also [fc, d5, d6].
And [fc, d5, d6] does not only attack [d1], but also [d1, d4].
The concept of 'attack/counterargument' is very important, since clearly
a minimum requirement on any system for comparing arguments is that if
two arguments are in conflict with each other, they cannot both be accepted
as justified. The present system should agree with this, in the sense that
the set of 'justified arguments', to be defined later, is free from conflicts.
To this end, I now define the following notion.

Definition 6.4.9 A set Args of arguments is conflict-free iff no argument


in Args attacks an argument in Args.

6.4.5. COMPARING ARGUMENTS

Now that we know which arguments are in conflict with each other, the
next step is to compare conflicting arguments. This step has three aspects:
picking out the rules that are relevant to a conflict, comparing the relevant
rules with respect to specificity and determining how this results in defeat
relations among arguments.
The new specificity definition will be of a syntactic nature, for two
reasons. Firstly, Example 6.3.1 shows that an intuitive specificity defi-
nition that is free from syntactic considerations is impossible. Moreover,
Example 6.3.3 has shown that it is important to identify the rules that are
responsible for a conflict. It is these rules that will be compared with respect
to specificity. Therefore we first have to define when a rule is relevant to a
conflict.

Picking Out the Relevant Rules


When two arguments attack each other, we want to identify the 'conflict
pair', i.e. the rule or rules of each of the arguments that are responsible
for the conflict. In the simplest case this pair will consist of just one
rule from both arguments, viz. a pair of two rules such that their joint
consequents together with the necessary facts imply a contradiction.
However, sometimes it takes the consequents of more than two rules to
derive a contradiction, for which reason the conflict pair must be defined
as a pair of sets of rules. More specifically, we consider pairs of minimal
sets of defeasible rules of the conflicting arguments such that their
consequents together with the necessary facts are inconsistent.
Definition 6.4.10 (conflict pairs) Consider any two arguments A1 =
[F1, D1] and A2 = [F2, D2] attacking each other. A conflict pair of (A1, A2)
is any pair (C1, C2) of minimal sets of defeasible rules such that C1 ⊆ D1
and C2 ⊆ D2, and
- CONS(C1 ∪ C2) ∪ Fn ⊢ ⊥
To illustrate this definition, consider first a default theory with Δ =
{d1: ⇒ p, d2: ⇒ q}, Fn = {fn: ¬(p ∧ q)} and Fe = ∅. Then [d1, p]
and [d2, q] attack each other and the only conflict pair is (d1, d2).⁴
Consider next a default theory with the same Δ but with the contents
of Fn and Fe swapped: Fe = {fe: ¬(p ∧ q)} and Fn = ∅. Then [d1, p] and
[d2, q] do not attack each other, since ¬(p ∧ q) is now just a contingent
⁴If a set of rules in a conflict pair is a singleton, I omit the brackets.
PREFERRING THE MOST SPECIFIC ARGUMENT 159

fact. Instead the conflict is between [d1, p, fe, ¬q] and [d2, q]. Now a crucial
difference with the first example is that the conflict pair of these arguments
is (∅, ∅): again this is because fe is now just a contingent fact and not a
linguistic convention, for which reason it cannot be said that it is the two
defeasible rules that are conflicting.
Let us now examine the three examples from Section 6.3 that revealed
problems of Poole's approach. They will be used throughout below to
illustrate the remaining components of the present system.
Example 6.4.11 Consider first the translation of Example 6.3.1.
A1 = [t, s, t ⇒ r, s ⇒ q, q ∧ r ⇒ p]
A2 = [s, s ⇒ q, q ⇒ ¬p]
Fe = {s, t}
Fn = ∅
There is only one conflict pair, viz. (q ∧ r ⇒ p, q ⇒ ¬p), since {p, ¬p} ⊢ ⊥.
Example 6.4.12 Consider next Example 6.3.2, in which the arguments
conflict on two issues.
A1 = [p, d1: p ⇒ q, d2: q ⇒ r, d3: r ⇒ s]
A2 = [p, r, d4: p ∧ r ⇒ ¬q, d5: ¬q ⇒ t, d6: t ⇒ ¬s]
Fe = {p, r}
Fn = {r → t}
Since these arguments are in conflict on two issues, they have two conflict
pairs, viz. (d1, d4) and (d3, d6).
Example 6.4.13 Consider finally again Example 6.3.3, which motivated
our shift to a default logic language.
d1: x kills y ⇒ x is guilty of murder
d2: x kills y ∧ x acts in self-defence ⇒
    ¬ x is guilty of murder
d3: x defends himself against KK ⇒ x acts in self-defence
Fe: {f1: Bob kills KK, f2: Bob defends himself against KK}
The two arguments attacking each other are
A1 = [f1, d1, Bob is guilty of murder]
A2 = [f2, d3, f1, d2, ¬ Bob is guilty of murder]
There is only one conflict pair, viz. (d1, d2). So the rules that are
responsible for the conflict are d1 and d2, which agrees with the conclusion
of Section 6.3.3.
Example 6.4.14 Finally I show an example in which more than two rules
are relevant. Assume that A1 = [d1: ⇒ Rab, d2: ⇒ Rbc], A2 = [d3: ⇒ Rca],
and that Fn contains transitivity and asymmetry axioms for R, i.e. Fn
contains
transitivity: ∀x, y, z. Rxy ∧ Ryz → Rxz
asymmetry: ∀x, y. Rxy → ¬Ryx
then A1 and A2 attack each other. Now if we examine A1, both d1 and
d2 are needed to create the conflict, so the conflict pair is ({d1, d2}, {d3}),
which means that d3 must be compared with the set of rules {d1, d2}.
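The search for conflict pairs can be sketched in Python for the simple propositional case in which rule consequents and necessary facts are literals, so that the test ⊢ ⊥ reduces to checking for a complementary pair. This is only a toy approximation of Definition 6.4.10, and every identifier in it is hypothetical:

```python
# Toy sketch of Definition 6.4.10 (conflict pairs), restricted to the
# propositional fragment where consequents and necessary facts are literals.
# A rule is a (name, consequent) pair; negation is a "¬" prefix on a string.
from itertools import chain, combinations

def neg(lit):
    return lit[1:] if lit.startswith("¬") else "¬" + lit

def inconsistent(literals):
    """Approximates ⊢ ⊥ for literals: some literal occurs with its negation."""
    return any(neg(l) in literals for l in literals)

def subsets(rules):
    return chain.from_iterable(combinations(rules, k) for k in range(len(rules) + 1))

def conflict_pairs(D1, D2, Fn):
    """Minimal pairs (C1, C2), C1 ⊆ D1, C2 ⊆ D2, with CONS(C1 ∪ C2) ∪ Fn ⊢ ⊥."""
    found = []
    for C1 in subsets(D1):
        for C2 in subsets(D2):
            cons = {c for _, c in C1} | {c for _, c in C2} | set(Fn)
            if inconsistent(cons):
                found.append((set(C1), set(C2)))
    # keep only minimal pairs: no other found pair is a proper "sub-pair"
    return [(C1, C2) for (C1, C2) in found
            if not any(E1 <= C1 and E2 <= C2 and (E1, E2) != (C1, C2)
                       for (E1, E2) in found)]

# Example 6.4.11-style direct conflict: {p, ¬p} ⊢ ⊥
D1 = [("q∧r⇒p", "p")]
D2 = [("q⇒¬p", "¬p")]
print(conflict_pairs(D1, D2, Fn=[]))  # the single pair ({q∧r⇒p}, {q⇒¬p})
```

The minimality filter mirrors the definition's requirement that the sets of rules be minimal; a full implementation would of course need a real first-order inconsistency test instead of the literal check above.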

Comparing with Respect to Specificity


Now that we know how to pick out the rules that are responsible for a
conflict, how do we compare them on specificity? Above we have seen
with Example 6.3.1 that the purely semantic character of Poole's specificity
definition cannot be maintained. Now instead of incorporating the necessary
syntactic elements into Poole's definition (as I did in Prakken, 1993), I shall
use a simpler one, viz. a generalised version of the definition of Nute (1992),
which is completely syntax based. Nute's definition checks whether of two
conflicting defeasible rules the antecedent of one of them implies, with (in
our terms) the necessary facts and the other defeasible rules of the default
theory, the antecedent of the other. If this test succeeds, the first rule is
more specific than the other.
For example, p ∧ r ⇒ q is strictly more specific than p ⇒ ¬q, since
p ∧ r ⊢ p but p ⊬ p ∧ r. And if Δ = {r ⇒ q, p ⇒ ¬q, r ⇒ p} and Fn = ∅,
then r ⇒ q is strictly more specific than p ⇒ ¬q, since Fn ∪ Δ ∪ {r} ⊢ p,
while Fn ∪ Δ ∪ {p} ⊬ r.
For the simplest form of conflict, with a direct conflict between two
defeasible rules, Nute's definition suffices. However, we have just seen that
conflicting arguments may contain more than one rule that is relevant to
the conflict. Therefore Nute's definition has to be generalised.
Definition 6.4.15 (specificity) Let D1 and D2 be two sets of defeasible
rules. D1 is more specific than D2 iff
- ANT(D1) ∪ Fn ∪ Δ ⊢ ANTCON(D2).
D1 is strictly more specific than D2 iff D1 is more specific than D2 and D2
is not more specific than D1.
Let us illustrate this with some examples, to start with the three examples
that were problematic for Poole's approach. In Example 6.4.11 q ∧ r ⇒ p is
strictly more specific than q ⇒ ¬p. This simple case of strict specificity also
occurs in the earlier conflict of Example 6.4.12: d4: p ∧ r ⇒ ¬q is strictly
more specific than d1: p ⇒ q. Slightly more complex is the later conflict in
this example: since Fn = {r → t} and {r → t, r} ⊢ t, d3 is more specific
than d6. Moreover, since Fn ∪ Δ ∪ {t} ⊬ r, d3 is also strictly more specific
than d6.

Example 6.4.13 is another example of the simple specificity case: clearly
d2: Bob kills KK ∧ Bob acts in self-defence ⇒
    ¬ Bob is guilty of murder
is strictly more specific than
d1: Bob kills KK ⇒ Bob is guilty of murder.
The reason that the set Δ also counts in determining specificity is to capture
so-called defeasible specificity. In AI the standard example is 'adults are
typically employed, students are typically unemployed, and students are
typically adults'. In order to derive 'adults' from 'students', we need the
third defeasible rule. Here is a legal example of this kind.
Example 6.4.16 Consider the following formalization of Example 3.1.4.
d1: x kills y ∧ x acts with intent ⇒
    maximum penalty for x is 15 years
d2: x kills y ∧ x and y duel on life-and-death
    ⇒ maximum penalty for x is 12 years
d3: x and y duel on life-and-death ⇒ x acts with intent
Fe: {Charles kills Henry, Charles and Henry duel on
    life-and-death}
Fn: {∀x. ¬(maximum penalty for x is 15 years ∧
    maximum penalty for x is 12 years)}
The rules to be compared are the instantiations of d1 and d2 for Charles
and Henry, since fn makes their consequents contradictory. The inclusion
of d3 in the set of defeasible rules makes it possible to simply derive the
antecedent of d1 from the antecedent of d2; and since the reverse derivation
is impossible, d2 is strictly more specific than d1.
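Under the simplifying assumption that antecedents are sets of atoms and that the necessary facts and the defeasible rules of Δ can both be applied as Horn-style inference steps, the derivability test behind Definition 6.4.15 can be sketched by forward chaining. A hedged Python illustration (the rule encoding and function names are mine, not the book's):

```python
# Hedged sketch of the generalised specificity test (Definition 6.4.15),
# restricted to atomic antecedents so that derivability can be computed by
# forward chaining. A rule is (frozenset_of_body_atoms, head_atom).

def closure(atoms, rules):
    """Forward-chain the rules (necessary facts and Δ) over `atoms`."""
    derived = set(atoms)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def more_specific(ant1, ant2, rules):
    """ANT(D1), with Fn and Δ, derives every atom of D2's antecedent."""
    return ant2 <= closure(ant1, rules)

def strictly_more_specific(ant1, ant2, rules):
    return more_specific(ant1, ant2, rules) and not more_specific(ant2, ant1, rules)

# Defeasible specificity: Δ = {r ⇒ q, p ⇒ ¬q, r ⇒ p}, Fn = ∅.
# r ⇒ q is strictly more specific than p ⇒ ¬q, since r ⇒ p lets us derive p from r.
delta = [(frozenset({"r"}), "q"), (frozenset({"p"}), "¬q"), (frozenset({"r"}), "p")]
print(strictly_more_specific({"r"}, {"p"}, delta))  # True
print(strictly_more_specific({"p"}, {"r"}, delta))  # False
```

The same chaining reproduces Example 6.4.16: adding d3 to the rule set makes the antecedent of d1 derivable from that of d2, but not the other way around.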

Defeat among Arguments


Now that we can determine which arguments are in conflict with each
other, which rules are relevant to a conflict and which set of relevant rules
is more specific than another, we can define when an argument defeats
another argument. It is important to realise that this definition does not
yet determine with which arguments a dispute can be won; it only tells us
something about the relation between two individual arguments (and their
subarguments).
The defeat relation between arguments is partly but not wholly determined
by the specificity relation between the relevant sets of rules. This
is since the specificity criterion will only be applied to conflicts between
two defeasible arguments; a strict argument always defeats a defeasible
argument, irrespective of specificity considerations. Note that since F is
assumed consistent, strict arguments never attack each other, so there is
no need to consider this situation.
The notion of defeat is a weak one: that A1 defeats A2 intuitively does
not mean that A1 is 'really better' than A2, but just that it is not worse;
this means that two conflicting arguments that are equally strong defeat
each other. The reason we have this weak notion of defeat is that the notion
of 'justified arguments', to be defined next, regards only those arguments
as justified that, given the premises, are beyond any reasonable doubt
or challenge. And reasonable doubt can be cast on an argument just by
providing a counterargument that is not inferior to it. For this reason a
(defeasible) argument A1 that attacks a (defeasible) argument A2 already
defeats A2 if, firstly, it survives the specificity test with respect to just one
of the conflict pairs, and, secondly, if this survival is weak in the sense that
A2 is not strictly more specific than A1.
Definition 6.4.17 (specificity defeat) Let A1 and A2 be two arguments. A1
defeats A2 iff
1. A1 attacks A2; and
2. A2 is defeasible; and
   (a) A1 is strict; or
   (b) for some conflict pair (C1, C2) of (A1, A2) it holds that C2 is not
       strictly more specific than C1.
A1 strictly defeats A2 iff A1 defeats A2 and A2 does not defeat A1.
Note that every internally inconsistent defeasible argument is strictly
defeated by every strict argument. This is since for every conclusion φ of
a strict argument, the defeasible argument implies ¬φ by the Ex Falso
principle.
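Definition 6.4.17 itself is directly executable once the notions it relies on are available. In the sketch below, `attacks`, `is_strict`, `conflict_pairs` and the strict-specificity test `smss` are hand-stubbed for the single conflict of Example 6.4.11; only the two defeat functions mirror the definition, and all names are illustrative:

```python
# Hedged sketch of Definition 6.4.17 (specificity defeat).
# smss(c2, c1) should answer: is c2 strictly more specific than c1?

def defeats(a1, a2, attacks, is_strict, conflict_pairs, smss):
    if not attacks(a1, a2) or is_strict(a2):
        return False            # conditions 1 and 2
    if is_strict(a1):
        return True             # condition 2(a): strict beats defeasible
    # condition 2(b): survive specificity for at least one conflict pair
    return any(not smss(c2, c1) for (c1, c2) in conflict_pairs(a1, a2))

def strictly_defeats(a1, a2, **preds):
    return defeats(a1, a2, **preds) and not defeats(a2, a1, **preds)

# Stub setup for Example 6.4.11: one conflict pair (q∧r⇒p, q⇒¬p),
# and q∧r⇒p is strictly more specific than q⇒¬p.
preds = dict(
    attacks=lambda x, y: {x, y} == {"A1", "A2"},
    is_strict=lambda x: False,
    conflict_pairs=lambda x, y: ([("q∧r⇒p", "q⇒¬p")] if x == "A1"
                                 else [("q⇒¬p", "q∧r⇒p")]),
    smss=lambda a, b: (a, b) == ("q∧r⇒p", "q⇒¬p"),
)
print(defeats("A1", "A2", **preds))           # True
print(strictly_defeats("A1", "A2", **preds))  # True: A2 does not defeat back
```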
Let us again check our three examples from Section 6.3. In Example 6.4.11
A1 strictly defeats A2 since they have only one conflict pair,
viz. (q ∧ r ⇒ p, q ⇒ ¬p), and the first default is strictly more specific than
the latter.
Example 6.4.12 is more interesting, since there A1 and A2 have two
conflict pairs. And above we have seen that in the earlier conflict A2 has
the strictly more specific relevant rule, while in the later conflict A1 has the
strictly more specific relevant rule. So both arguments satisfy condition (2b)
of Definition 6.4.17 for some conflict pair, for which reason both arguments
defeat each other. Yet the reader should not worry: the definition of a
justified argument, to be defined in the next section, will respect the order
of the conflicts and still prefer A2 over A1.
Example 6.4.13 is again simple: A1 and A2 have only one conflict pair,
and A2 wins the specificity comparison with respect to this pair, so A2
strictly defeats A1.

6.4.6. INFORMAL SUMMARY

Perhaps the reader has been overwhelmed by the large number of definitions
in this section. Therefore I now briefly summarize what we have at this
point. We have the means to express strict information in the language of
standard logic and defeasible information as one-direction rules. Arguments
can be formed by applying monotonic inference rules to the premises, viz.
those of standard first-order logic plus a special modus ponens rule for
the one-direction rules. The simplest type of conflict between arguments is
when two defeasible rules have directly contradictory heads. Accordingly,
the basic way of comparing arguments is very simple, viz. checking which
of the two conflicting rules is more specific. However, two things complicate
the matter: indirectly contradicting rule heads because of strict information,
and multiple conflicts between two arguments. It is these two complications
that gave rise to the complex definitions: indirect rule conflicts induced a
complex definition of the relevant rules and their comparison, while multiple
conflicts induced a complex definition of defeat.
For the rest of this chapter all that is relevant is that the definitions
have, given a set of premises, produced a set of possible arguments and a
binary relation of defeat among these arguments; these two elements suffice
to define the notion of a justified argument. Since this notion is the central
part of the system, it will be discussed in a separate section.

6.5. The Assessment of Arguments


6.5.1. THE GENERAL IDEA

We have seen that the final result of the preceding section was a binary
relation of defeat among arguments. However, this notion just expresses
the relative strength of two conflicting arguments. If we want to know
which arguments can be accepted as justified relative to the entire set
of possible arguments, we also need a definition that takes all ways into
account in which arguments can interact. For instance, even if A strictly
defeats B, this is insufficient to know whether A is justified and B is
not; B can be reinstated by an argument C that strictly defeats A, and
that itself survives all attacks, either by its own strength or in turn with
the help of other arguments reinstating C. Another important feature of
assessing arguments that should be captured is the step-by-step nature of
argumentation. Example 6.3.2 has shown that any definition of a justified
argument must satisfy the 'weakest link' principle that an argument cannot
be justified unless all its subarguments are justified.
In the present section a way of assessing arguments will be defined that
accounts for these features. It takes as input the set of all possible arguments
and their mutual relations of defeat, and produces as output a division of
arguments into three classes: arguments with which a dispute can be 'won',
respectively, 'lost', and arguments which leave the dispute undecided. As
remarked above, the winning, or justified, arguments should be only those
arguments that, given the premises, are beyond reasonable doubt: the only
way to cast doubt on these arguments is by providing new premises, giving
rise to new, defeating counterarguments. Accordingly, our set of justified
arguments should be unique and conflict-free.
The formalization of these ideas will employ the dialectical style of a
dialogue game. The game has two players, a proponent and an opponent of
a claim. The proponent has to prove that there is a justified argument for
its claim, while the opponent has to prevent the proponent from succeeding.
A proof that an argument is justified will take the form of a dialogue
tree, where each branch of the tree is a dialogue, and the root of the
tree is an argument for the formula. The idea is that every move in a
dialogue consists of an argument based on the given default theory, where
each stated argument attacks the last move of the other player in a way
that meets the player's burden of proof as specified by the rules of the
game. The required force of a move depends on who states it. Since the
proponent wants a conclusion to be justified, a proponent's argument has
to be strictly defeating, while since the opponent only wants to prevent the
conclusion from being justified, an opponent's move may be just defeating.
That a move consists of a complete argument means that the search for
an individual argument is conducted in a 'monological' fashion; only the
process of considering counterarguments is modelled dialectically.
I am not the first who has used the dialectical form for a logical system.
Well-known are the game-theoretic notions of (monotonic) logical
consequence developed in dialogue logic (for an overview see Barth &
Krabbe, 1982). Here the meaning of the logical constants is defined in terms
of how statements with these connectives can be attacked and defended.
And in nonmonotonic logic dialectical proof theories have earlier been
studied by, for instance, Dung (1994) and, inspired by Rescher (1977), by
Simari & Loui (1992), Loui (1997), Vreeswijk (1993b) and Brewka (1994b),
while Royakkers & Dignum (1996) have also developed ideas that can be
regarded as a dialectical proof theory.
Let us illustrate the dialogue game with a dialogue based on the following
example, loosely based on the famous OJ Simpson trial.
Example 6.5.1 Assume we have the following default theory (the reader
may assume that it has arisen from everything that the parties in the
dialogue have said: the dialogue game assumes no prior agreement between
the parties on a pool of premises from which the arguments are to be
constructed; see also Section 10.4.5 below).

d1: ⇒ x is admissible evidence
d2: x is admissible evidence ⇒ x proves guilt of suspect OJ
d3: x is police officer in LA ⇒ x is a racist
d4: x is a racist ∧ suspect y is black ⇒ x forged evidence z
d5: x forged evidence z ⇒ ¬ z is admissible evidence
d6: x is police officer in LA ∧ x has a black wife ⇒
    ¬ x is a racist
d7: ⇒ f dislikes suspect OJ
d8: x dislikes suspect y ⇒ x forged evidence z
d9: x dislikes suspect y ∧ x is police officer ⇒
    ¬ x forged evidence z
fc1: f is police officer in LA
fc2: suspect OJ is black
fc3: f has a black wife
fn: ∀x. x is police officer in LA → x is police officer

Below, the proponent's moves are denoted with Pi and the opponent's
moves with Oi. The proponent, which in this case will be the prosecutor,
starts the dispute by claiming that a certain piece of evidence e (perhaps
a bloody glove found on the driveway) proves that the suspect is guilty.
P hereby uses the default assumption d1 that any piece of evidence is
admissible unless shown otherwise.
P1: [d1: ⇒ e is admissible evidence,
     d2: e is admissible evidence ⇒
         e proves guilt of suspect OJ]
Now the opponent, in this example the defence, has to defeat this argument.
O does so by arguing that the evidence is not admissible: the suspect is
black and the police officer who found the evidence is a racist since all
police officers in LA are racists, so it is to be expected that the police
officer has forged the piece of evidence.
O1: [fc1: f is police officer in LA,
     d3: f is police officer in LA ⇒ f is a racist,
     fc2: suspect OJ is black,
     d4: f is a racist ∧ suspect OJ is black ⇒
         f forged evidence e,
     d5: f forged evidence e ⇒ ¬ e is admissible evidence]
The proponent now has to counterattack with an argument that strictly
defeats O1 and thus reinstates P1. P does so by arguing that police officer
f is known to have a black wife, which implies that he is not a racist. Note
that thus P (strictly) defeats O's argument by (strictly) defeating one of
its proper subarguments.
P2: [fc1: f is police officer in LA,
     fc3: f has a black wife,
     d6: f is police officer in LA ∧ f has a black wife ⇒
         ¬ f is a racist]
O finds no way to defeat this argument, so O now tries another line of attack
against P1. O now maintains that f is known to dislike the suspect, and
people who dislike suspects tend to forge evidence concerning that suspect.
O1': [d7: ⇒ f dislikes suspect OJ,
      d8: f dislikes suspect OJ ⇒ f forged evidence e]
The proponent must also refute this line of attack, otherwise it loses the
dialogue game. P does so by arguing that police officers don't forge evidence
even if they dislike the suspect.
P2': [fn: ∀x. x is police officer in LA → x is police officer,
      fc1: f is police officer in LA,
      d9: f dislikes suspect OJ ∧ f is police officer
          ⇒ ¬ f forged evidence e]
Now the opponent has run out of moves: no argument on the basis of our
default theory defeats P's last argument. So P can successfully defend its
argument against every possible way of attack, which means that P1 is a
justified argument.

6.5.2. THE DIALOGUE GAME DEFINED


Justified Arguments
Now the dialogue game for determining whether an argument is justified
will be formally defined.
Definition 6.5.2 A dialogue based on a default theory Γ is a nonempty
sequence of moves movei = (Playeri, Argi) (i > 0), such that
1. Argi ∈ ArgsΓ;
2. Playeri = P iff i is odd; and Playeri = O iff i is even;
3. If Playeri = Playerj = P and i ≠ j, then Argi ≠ Argj;
4. If Playeri = P, then Argi strictly defeats Argi−1;
5. If Playeri = O, then Argi defeats Argi−1.
The first condition makes a dialogue relative to a given default theory Γ,
while the second condition says that the proponent begins and then the
players take turns. Condition (3) prevents the proponent from repeating
its attacks. In fact, this condition builds a 'loop checker' into the system.
It is easy to see that this rule will not harm P: if O had a move the first
time P stated the argument, O will also have a move the second time, so no
repetition by P can make P win a dialogue. Finally, the last two conditions
form the heart of the definition: they state the burdens of proof for P and
O.
In the following definitions Γ will be left implicit.
Definition 6.5.3 A dialogue tree is a tree of moves such that
1. Each branch is a dialogue;
2. If Playeri = P then the children of movei are all defeaters of Argi.
The second condition of this definition makes dialogue trees candidates for
being proofs: it says that the tree should consider all possible ways in which
O can attack an argument of P.
Definition 6.5.4 A player wins a dialogue if the other player cannot move.
And a player wins a dialogue tree iff it wins all branches of the tree.
The idea of this definition is that if P's last argument is undefeated, it
reinstates all previous arguments of P that occur in the same branch of a
tree, in particular the root of the tree.
Definition 6.5.5 An argument A is justified iff there exists a dialogue tree
with A as its root, and won by the proponent.
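The dialogue game of Definitions 6.5.2-6.5.5 can be sketched as a recursive two-player search: the proponent must answer every defeater of its argument with a fresh, strictly defeating counterargument, and wins when the opponent runs out of moves. The sketch below assumes the defeat relations are given as finite tables (all names are hypothetical); the `used` set plays the role of condition (3)'s loop checker and guarantees termination when there are finitely many arguments:

```python
# Hedged sketch of the dialogue game: an argument is justified iff the
# proponent can answer every defeating O-move with a not-yet-used strictly
# defeating P-move, recursively. `defeaters(a)` lists the arguments that
# defeat a; `strict_defeaters(a)` those that strictly defeat a.

def justified(a, defeaters, strict_defeaters, used=frozenset()):
    """P wins the dialogue tree rooted at argument `a`."""
    used = used | {a}  # condition (3): P may not repeat its own arguments
    for o_move in defeaters(a):            # O only needs to defeat
        if not any(justified(p_move, defeaters, strict_defeaters, used)
                   for p_move in strict_defeaters(o_move)  # P must strictly defeat
                   if p_move not in used):
            return False                   # an O-attack P cannot answer
    return True                            # P wins every branch

# Reinstatement: B defeats A, but C strictly defeats B and is undefeated.
defeaters = {"A": ["B"], "B": ["C"], "C": []}.__getitem__
strict_defeaters = {"A": [], "B": ["C"], "C": []}.__getitem__
print(justified("A", defeaters, strict_defeaters))  # True: C reinstates A
print(justified("B", defeaters, strict_defeaters))  # False: C is unanswered
```

The example reproduces the reinstatement pattern discussed in Section 6.5.1: A is justified only because C defeats its defeater B.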

Formal Properties
Let us now discuss some formal properties of the set of justified arguments.
The proofs can be found in Prakken & Sartor (1997a). The dialogue
game was originally developed in Prakken & Sartor (1996b; 1997a) as a
dialectical proof theory for a semantical characterisation of the set of
justified arguments, formulated in terms of a fixed point operator, in Prakken
& Sartor (1996a). This semantics is an instance of a general semantical
framework for defeasible argumentation, developed by Dung (1995) and
Bondarenko et al. (1997), which will be discussed below in Section 9.2.1.
The semantics of the present system is sceptical; to each set of premises it
assigns a unique set of (justified) consequences. This set is guaranteed to
exist, which is an improvement over default logic, in which not all default
theories have extensions. In Prakken & Sartor (1997a) it was shown that
the dialectical proof theory is sound with respect to this semantics, i.e. that
every argument that is justified according to the dialectical proof theory
is also justified according to the fixed point semantics. Moreover, it was
shown that, on the condition that every argument is attacked by at most
a finite number of arguments, the proof theory is also complete, i.e. every
semantically justified argument is also dialectically justified. It also holds
that the set of justified arguments is conflict-free, as desired.
Finally, how about the step-by-step nature of comparing arguments, i.e.
the 'weakest link' principle that an argument cannot be justified if not all
its subarguments are justified? Some systems incorporate this principle in
the definition of a justified argument, e.g. Vreeswijk (1997), Nute (1992),
Prakken (1993) and Sartor (1994). In the present system this is not the case;
instead it makes the principle follow from the other definitions, as also in
e.g. Pollock (1987), Simari & Loui (1992) and Geffner & Pearl (1992).
Proposition 6.5.6 If an argument is justified, then all its subarguments
are justified.
Let us illustrate this important proposition with a final investigation of
Example 6.4.12. Here is the default theory again.
d1: p ⇒ q    d3: r ⇒ s
d4: p ∧ r ⇒ ¬q    d6: t ⇒ ¬s
Fe = {p, r}
Fn = {r → t}
We have seen that the two arguments A1 for s and A2 for ¬s have two
conflict pairs, viz. (d1, d4) and (d3, d6), and that A2 wins the earlier and A1
the later conflict. In Section 6.3.2 I argued that the earlier conflict should
be dealt with before the later, and that therefore A2 should be justified.
This is indeed the outcome of the definitions. Here is the proof that A2 is
justified. P starts with the argument A2 for ¬s.
O counters with the argument A1 for s, which, as we have seen, defeats
P1, since with respect to the conflict pair (d3, d6) O has the (strictly) more
specific rule.
But P can now reply with the first part of P1, which is exactly that part
of P1 which wins the earlier conflict.
P2: [p, r, d4: p ∧ r ⇒ ¬q]
This argument cannot be defeated by O, so P1 is shown to be justified.


And here is how the proof that A1 is justified fails. P starts with the
argument A1 for s. Now O can attack the subargument for q.
O1: [p, r, d4: p ∧ r ⇒ ¬q]
And P has no way of strictly defeating O1, so P has run out of moves:
the weakest link principle prevents A1 from being justified since A1 has a
subargument which is not justified.

Overruled and Defensible Arguments
I next turn to the definition of defensible and overruled arguments. In
agreement with earlier work in, for instance, Prakken & Sartor (1995a;
1996a), these categories can be defined in a purely declarative way.
Definition 6.5.7 An argument is overruled iff it is attacked by a justified
argument, and it is defensible iff it is neither justified nor overruled.
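Since Definition 6.5.7 is declarative, the three-fold status assignment becomes a simple lookup once the justified arguments have been computed. A minimal sketch (all inputs are illustrative):

```python
# Hedged sketch of Definition 6.5.7: given the set of justified arguments
# and the attack relation, every other argument is overruled if some
# justified argument attacks it, and defensible otherwise.

def status(arg, justified_args, attacks):
    if arg in justified_args:
        return "justified"
    if any((j, arg) in attacks for j in justified_args):
        return "overruled"
    return "defensible"

attacks = {("A", "B"), ("C", "D"), ("D", "C")}
justified_args = {"A"}
print(status("B", justified_args, attacks))  # overruled: attacked by justified A
print(status("C", justified_args, attacks))  # defensible: only attacked by D
```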
But what are the corresponding dialogue rules? I only discuss this for
defensible arguments, since I do not expect that many parties in a dispute will
want to defend their own argument as overruled. So when is an argument
dialectically shown to be defensible? The key idea is to reverse the burdens
of proof for O and P: the opponent now has to find an argument that
strictly defeats the proponent's previous move, while the proponent only
needs to find an argument that defeats the opponent's last move. Moreover,
the non-repetition condition now holds for the opponent instead of the
proponent. For the rest the definitions remain unchanged: in particular,
if the proponent can make the opponent run out of moves against every
attacking strategy of the opponent, the argument is shown to be (at least)
defensible. Here is a simple example.
Example 6.5.8 Consider a default theory with no priorities and just the
following two defeasible rules.
d1: ⇒ p
d2: ⇒ ¬p
Clearly, neither the argument [d1] for p nor the argument [d2] for ¬p is
justified. However, both can be shown to be defensible. Here is the proof
for [d1].
P1: d1: ⇒ p
In the game for justified arguments O could now attack with the defeating
argument [d2], after which P would have run out of moves. However, in the
game for defensible arguments it is O that runs out of moves, since now O
has to strictly defeat P1.
More details on the relation between the declarative and dialectical
definition of defensible arguments are discussed in Prakken (1997). The main
result is that the dialogue game for defensible arguments is sound but not
complete with respect to the declarative definition. Incompleteness arises
since for some declaratively defensible arguments a dialogue can go on for
ever.

The Status of Conclusions

Since ultimately we are not interested in arguments but in the conclusions
they support, we also have to define the status of first-order formulas.
Definition 6.5.9 For any formula φ of L0 we say that
- φ is a justified conclusion iff it is a conclusion of a justified argument;
- φ is a defensible conclusion iff it is not justified and it is a conclusion
of a defensible argument;
- and φ is an overruled conclusion iff it is not justified or defensible, and
a conclusion of an overruled argument.
It can be shown in the manner of Prakken (1993) that the set of justified
conclusions is closed under first-order logical consequence.
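Definition 6.5.9 in effect orders the three statuses by priority: a formula takes the best status among the arguments that have it as a conclusion. A minimal sketch, assuming the argument statuses have already been determined (all names illustrative):

```python
# Hedged sketch of Definition 6.5.9: the status of a formula is the best
# status (justified > defensible > overruled) of any argument concluding it.
# Arguments are given as (status, set_of_conclusions) pairs.

def conclusion_status(formula, arguments):
    statuses = {s for s, conclusions in arguments if formula in conclusions}
    for s in ("justified", "defensible", "overruled"):  # priority order
        if s in statuses:
            return s
    return None  # formula is not a conclusion of any argument

arguments = [("justified", {"p"}), ("defensible", {"p", "q"}), ("overruled", {"r"})]
print(conclusion_status("p", arguments))  # justified (wins over defensible)
print(conclusion_status("q", arguments))  # defensible
print(conclusion_status("r", arguments))  # overruled
```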

6.5.3. ILLUSTRATIONS

Above I have already illustrated several aspects of the system. As for the
two requirements stated at the beginning of Section 6.5.1, reinstatement
was illustrated with the OJ Simpson example in that section. For instance,
P2 reinstates P1 by strictly defeating O1. In fact, in any dialogue tree
won by P its first move is reinstated by all its subsequent moves. The
same example illustrates the step-by-step nature of argumentation, since
P2 strictly defeats O1 by doing the same for a subargument of O1.
I now discuss the remaining two examples in Section 6.3 that revealed
problems of Poole's original approach. Consider first Example 6.3.1.
A1 = [t, s, t ⇒ r, s ⇒ q, q ∧ r ⇒ p]
A2 = [s, s ⇒ q, q ⇒ ¬p]
Fe = {s, t}
Fn = ∅
It is easy to see that A1 is justified, since there is only one conflict pair,
viz. (q ∧ r ⇒ p, q ⇒ ¬p), and the first default is strictly more specific
than the latter.
Next I show that the new system correctly handles Example 6.3.3.
d1: x kills y ⇒ x is guilty of murder
d2: x kills y ∧ x acts in self-defence ⇒
    ¬ x is guilty of murder
d3: x defends himself against KK ⇒ x acts in self-defence
Fe: {f1: Bob kills KK, f2: Bob defends himself against KK}
The justified conclusion is that Bob is not guilty. The proof is very simple:
P starts with
P1: [f2, d3, f1, d2, ¬ Bob is guilty of murder]
And O has already run out of moves. The construction of the unwanted
argument against Bob acts in self-defence is prevented by the one-
directional nature of defeasible rules. Therefore there is only one possible
counterargument, viz. [f1, d1, Bob is guilty of murder], but this
argument does not defeat P1: as discussed above, there is only one conflict pair,
viz. (d1, d2), and d2 is clearly strictly more specific than d1.
Example 6.5.10 Next I illustrate how soft rebutting defeaters can be
represented in the new system, by extending Example 6.2.2 with an exception
to the exception in case the tenant has agreed by contract to the opposite.
d1: x is a contract ⇒ x only binds its parties
d2: x is a lease of house y ⇒ x binds all owners of y
d3: x is a lease of house y ∧ tenant has agreed in x
    ⇒ x only binds its parties
fn1: ∀x, y. x is a lease of house y → x is a contract
fn2: ∀x, y. ¬(x only binds its parties ∧ x binds all owners
    of y)
It is easy to see that hard rebutting defeaters can be formalized in
completely the same way as in Chapter 5, viz. by adding premises implying the
exceptional conclusions to the facts instead of to the defeasible rules.
Example 6.5.11 Finally, I discuss an example with defensible arguments.
Consider again Example 6.4.8.
d1: ⇒ f forged evidence e
d2: f is police officer ∧ f forged evidence e ⇒
    ¬ f is trustworthy
d3: f is police officer ⇒ f is trustworthy
d4: f is trustworthy ⇒ ¬ f forged evidence e
fc: f is police officer
The arguments A1 = [fc, d1, d2] and A2 = [fc, d3, d4] are in conflict on
two issues, viz. whether f forged evidence, and whether f is trustworthy.
Accordingly, there are two conflict pairs. The first is (d1, d4), in which d4
is strictly more specific than d1, and the second is (d2, d3), in which d2
is strictly more specific than d3. Now unlike in Example 6.3.2, neither of
these two conflicts is prior to the other in the stepwise comparison, so the
outcome should be that no argument is justified. Here is how the proof that
f is not trustworthy fails.
P1: [fc: f is police officer, d1: ⇒ f forged evidence e,
     d2: f is police officer ∧ f forged evidence e ⇒
         ¬ f is trustworthy]
O1: [fc: f is police officer,
     d3: f is police officer ⇒ f is trustworthy,
     d4: f is trustworthy ⇒ ¬ f forged evidence e]

Now P has run out of moves. It is easy to see that the proof that f did not
forge evidence fails in the same way.

6.6. Combining Priorities and Exception Clauses

So far we have in this chapter only seen examples of rebutting defeaters;
but how about undercutting defeaters? This boils down to the question how
the exception clause approach and the conflict resolution approach can be
combined in the present system. The reader might ask why this is desirable;
why is it not sufficient to choose one of the two approaches? The answer is
that legal language itself combines the two approaches: as we have seen in
earlier chapters, some exceptions are represented with the Lex Specialis
principle (e.g. Example 3.1.4), and other exceptions with phrases like
'unless the law provides otherwise' (e.g. Example 3.1.7). Moreover,
priorities and explicit exceptions can interact. Suppose, for instance, in
Example 3.1.1 that in certain circumstances case law provides reasons to
also apply a certain section of the HPW to short-termed leases; then 2 HPW
conflicts with a case law rule, and a choice has to be made. In conclusion,
it should be possible to combine the two ways of representing exceptions in
one formalism.
To meet this demand, in the following section the formalism of this
chapter will be extended.

6.6.1. EXTENDING THE SYSTEM

I now introduce a special symbol ∼ for 'weak negation' in the language,
which can only be used in the antecedent of a defeasible rule. In fact, this
gives the language of our system the full expressiveness of default logic,
including non-normal defaults with justifications other than ⊤: any
weakly negated formula in the antecedent of a rule is a justification of a
default.

Definition 6.6.1 Let L0 be any first-order language. A defeasible rule is
now an expression of the form

    φ1 ∧ … ∧ φj ∧ ∼φk ∧ … ∧ ∼φn ⇒ ψ

where each φi (0 ≤ i ≤ n) is a formula of L0.
φ1 ∧ … ∧ φj is called the antecedent, ∼φk ∧ … ∧ ∼φn is called the
justification and ψ is called the consequent of the rule. For any expression
∼φi in the justification of a defeasible rule, ¬φi is an assumption of the
rule. And an assumption of an argument is an assumption of any rule in
the argument.
PREFERRING THE MOST SPECIFIC ARGUMENT 173

Next the inference rule DMP of default modus ponens (see Definition 6.4.2)
has to be redefined. The idea is that in applying DMP the assumptions of
a defeasible rule can be ignored; if an assumption is untenable, this will be
reflected by a successful attack on the argument.
Definition 6.6.2 DMP is an inference rule of the form

    d: φ0 ∧ … ∧ φj ∧ ∼φk ∧ … ∧ ∼φm ⇒ φn,   φ0 ∧ … ∧ φj
    ───────────────────────────────────────────────────
                          φn

The definition of an argument remains unchanged. The following example
illustrates the different roles of weak and ordinary, or 'strong' negation in
the construction of arguments.
Example 6.6.3 Rule d1 states that a person who cannot be shown to be
a minor has the capacity to perform legal acts, and rule d2 requires that a
person is positively shown not to be a minor, in order that s/he can exercise
the right to vote. The point is that these rules give rise to an argument for
x has legal capacity but not for x has the right to vote, since there
is no rule providing the antecedent of d2.
d1: ∼ x is a minor ⇒ x has legal capacity
d2: ¬ x is a minor ⇒ x has the right to vote
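The contrast can be made concrete in a few lines of code. The following is a minimal sketch (my own illustration, not the book's formal system, and all function and rule names are invented): a rule is a triple of antecedent, assumptions (the weakly negated conditions) and consequent, and the forward chainer implements the idea behind DMP, ignoring assumptions entirely while requiring antecedents to be derived.

```python
# Illustrative sketch of default modus ponens with weak negation.
# A rule fires when its antecedent literals are derivable; its weakly
# negated conditions are mere assumptions, ignored at construction time
# (they only matter later, when arguments are attacked).

def derivable(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, _assumptions, consequent in rules:
            if antecedent <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# d1: ~ x is a minor => x has legal capacity
d1 = (frozenset(), frozenset({"x is a minor"}), "x has legal capacity")
# d2: not(x is a minor) => x has the right to vote
d2 = (frozenset({"not(x is a minor)"}), frozenset(), "x has the right to vote")

conclusions = derivable(set(), [d1, d2])
print("x has legal capacity" in conclusions)      # True: the assumption is ignored
print("x has the right to vote" in conclusions)   # False: strong negation must be derived
```

With no information about x's minority, d1 yields its conclusion while d2 stays silent, exactly the asymmetry the example describes.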

The use of weak negation yields an additional way of attacking an argument,
viz. by constructing an argument with a conclusion that contradicts an
assumption of the attacked argument. Therefore, the definition of attack
has to be extended as well.
Definition 6.6.4 Let A1 and A2 be two arguments. A1 attacks A2 iff
1. A1 ∪ A2 ∪ Fn ⊢ ⊥; or
2. A1 ∪ Fn ⊢ ¬φ for any assumption φ of A2.
Clearly, the new, second way of attack is not symmetric: [⇒ p] attacks
[∼p ⇒ q] but not vice versa.
Next the notion of defeat among arguments has to be redefined. Note
first that Definition 6.4.15 of specificity does not need to be changed, since
it only inspects the antecedents of rules and ignores their justifications;
intuitively this seems reasonable. Next two problems have to be solved. The
first is that we have to decide whether an attack on an assumption only
succeeds if the attacking argument is not less specific than the attacked
argument. I think this is not the case; in my opinion legal collision rules,
like Lex Specialis (but also Lex Superior and Lex Posterior) are only used
in case of conflicting conclusions, not when a conclusion of one argument
contradicts an assumption of another.

The second problem concerns the interaction between the two kinds of
attack. If one argument attacks the conclusion of another (clause (1) of
Definition 6.6.4) and the other attacks an assumption of the first (clause
(2) of Definition 6.6.4), which argument should defeat which? I shall answer
this with a discussion of the following default theory.
d1: q ∧ ∼p ⇒ ¬p
d2: ⇒ p
fc: q
Note that d1 is strictly more specific than d2. [d2] attacks an assumption of
[fc, d1], but [fc, d1] and [d2] also have contradictory conclusions, while [fc, d1]
uses a strictly more specific rule. Still I think that [d2] should strictly defeat
[fc, d1]. The reason is that this is the only way in which neither of the two
rules has to be rejected (in an intuitive sense). If d2 is accepted, then the
assumption of d1 does not hold, so besides d2 also d1 can, as a rule, be
accepted. By contrast, accepting d1 implies rejecting d2, in the sense that
its antecedent is believed but its consequent is not. In fact, I here employ the
logical counterpart of the legal principle that the law should be interpreted
as coherently as possible.
I now incorporate this intuition in the new definition of defeat among
arguments. Since defeat now comes in two kinds, it is convenient to first
define the two kinds separately, before combining them. The first kind is just
the 'contradicting a conclusion' kind of defeat of Definition 6.4.17. I repeat
its definition and now call it 'rebutting' an argument. The second kind of
defeat, contradicting an assumption, I call 'undercutting' an argument.
Definition 6.6.5 Let A1 and A2 be two arguments.
- A1 rebuts A2 iff
  1. A1 ∪ A2 ∪ Fn ⊢ ⊥; and
  2. A2 is defeasible; and
     (a) A1 is strict; or
     (b) for some conflict pair (C1, C2) of (A1, A2) it holds that C2 is
         not strictly more specific than C1.
- A1 undercuts A2 iff A1 ∪ Fn ⊢ ¬φ for any assumption φ of A2.
Note that, as desired, undercutting is independent of specificity.
Finally, these two notions can be combined in the new definition of
defeat. As desired, it makes an attack on an assumption stronger than an
attack on a conclusion: if one argument undercuts the other, and the other
does not undercut but only rebuts the first, the first defeats the second but
the second does not defeat the first.
Definition 6.6.6 Let A1 and A2 be two arguments. Then A1 defeats A2
iff
- A1 undercuts A2; or
- A1 rebuts A2 and A2 does not undercut A1.
We say that A1 strictly defeats A2 iff A1 defeats A2 and A2 does not defeat
A1.
Note that the dialogue game does not have to be changed: its rules apply
whatever the source is of the defeat relations.
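The default theory discussed above (d1: q ∧ ∼p ⇒ ¬p, d2: ⇒ p, fc: q) can be replayed in a small propositional sketch of Definitions 6.6.4 to 6.6.6. This is my own simplification, not the book's system: arguments are reduced to sets of literals, and the specificity refinement of rebutting is omitted.

```python
# Simplified propositional sketch of rebut / undercut / defeat.
# Literals are strings; "-p" stands for the strong negation of "p".

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def rebuts(a1, a2):
    # conclusions are jointly contradictory (specificity clause omitted)
    return any(neg(l) in a2["conclusions"] for l in a1["conclusions"])

def undercuts(a1, a2):
    # a1 concludes the negation of one of a2's assumptions
    return any(neg(phi) in a1["conclusions"] for phi in a2["assumptions"])

def defeats(a1, a2):
    return undercuts(a1, a2) or (rebuts(a1, a2) and not undercuts(a2, a1))

def strictly_defeats(a1, a2):
    return defeats(a1, a2) and not defeats(a2, a1)

# A1 = [fc, d1] concludes q and -p, resting on the assumption -p (from ~p).
A1 = {"conclusions": {"q", "-p"}, "assumptions": {"-p"}}
# A2 = [d2] concludes p and uses no assumptions.
A2 = {"conclusions": {"p"}, "assumptions": set()}

print(strictly_defeats(A2, A1))   # True: A2 undercuts A1, A1 only rebuts back
```

Running the sketch confirms the intuition argued for above: [d2] strictly defeats [fc, d1], because it undercuts it, while the rebut in the other direction is blocked by that undercut.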

6.6.2. ILLUSTRATIONS

Next I illustrate the extended system with some examples.

Explicit Exceptions in the Choice Approach


Here is how Example 3.1.7 can be formalized in our extended language.
Besides the new definitions, it also illustrates a subtlety in the choice of
predicate names.
d1: x is a person ∧ ∼excd1(x) ⇒ x has legal capacity by d1
d2': x is a minor ⇒ excd1(x)
d2: x is a minor ∧ ∼excd2(x) ⇒ ¬ x has legal capacity by d1
d3': x is a minor ∧ x has consent of a legal representative
    ⇒ excd2(x)
d3: x is a minor ∧ x has consent of a legal representative
    ∧ ∼excd3(x) ⇒ x has legal capacity by d3
fn: ∀x,y. x has legal capacity by y → x has legal capacity

Note that I now use non-normal defaults instead of semi-normal ones as
in Chapter 5; this is possible since the present dialogue game is based on
a semantics in which the existence of extensions is guaranteed: therefore
the advantages of semi-normal defaults with respect to Reiter's definitions
(discussed at the end of Section 5.2.2) are irrelevant for the present system.
Note also the similarity with the formalization in circumscription of
Section 5.3. However, there are two differences. The first is that the one-
directional nature of ⇒ prevents the need for prioritizing the exception
clauses as in circumscription, while the second concerns a subtlety in the
choice of the predicate names. The property of having legal capacity is now
relative to the rule on which it is based. This is to ensure that minors who
act with consent of their legal representative have legal capacity solely by
virtue of d3 and not on the basis of d1; d1 remains blocked for such minors,
since d3' only attacks d2, not d2'. Thus, assuming that all antecedents are
given as facts, we have that only the conclusion that x has legal capacity
by d3 is justified; it cannot be concluded that x has legal capacity by d1.
To my knowledge, this way of making a conclusion relative to a rule was
first proposed by Kowalski, e.g. in Kowalski (1995).

Note that the necessary fact fn ensures that different rules which intu-
itively are in a head-to-head conflict on whether someone has legal capacity
indeed give rise to arguments that attack each other.

Combined Rebutters and Undercutters


I now illustrate the interaction between explicit exceptions and priorities.
Here is how Example 3.1.1 can be formalized. Recall that Section 2 of the
Dutch rent act declares all other sections in the Dutch rent act inapplicable
in case a lease is short-termed. This section can be formalized as follows
(note again the similarity with translations given in Section 5.3).
2: x is a short-termed contract ∧ y is a HPW section ∧
   y ≠ 2(x,y) ∧ ∼¬appl(2(x,y)) ⇒ ¬appl(y)
Furthermore, any other section r of the HPW receives an assumption that
it is applicable, as follows.
r: Ax ∧ ∼¬appl(r(x)) ⇒ Bx
To illustrate the interplay between the two forms of defeat, assume next
that the antecedents of both 2 HPW and r HPW are satisfied for some
contract c, but that also Cc holds and that a case law rule prec makes an
exception to 2 HPW for r HPW in case of Cx.
prec: x is a short-termed contract ∧ r(x) is a HPW section ∧
   r(x) ≠ 2(x,y) ∧ Cx ⇒ appl(r(x))
Then, since prec is strictly more specific than 2 (recall that Definition 6.4.15
ignores the assumptions of 2), an argument with prec can be constructed
that strictly defeats the argument with 2(c, r(c)).
Note, finally, that this example also illustrates that in the present system
soft undercutting defeaters can be soft in two ways: they can have an
assumption that is refuted but they can also be rebutted by a more specific
rule.

Arguments on the Backing or Validity of Rules

As we already saw above in Section 3.1, the exception clause technique
can, when combined with naming, also be used to express information
on the backing or validity of rules (as earlier observed by Sartor, 1994,
Gordon, 1995 and Hage, 1996, 1997). The precise method is to give every
defeasible rule an extra antecedent that it is backed, or valid. In the present
system this can be done in two ways. If one wants to assume by default that
rules are valid, then each defeasible rule d should receive a weak antecedent
∼¬valid(d). Then it is not necessary to positively show that rule d is
legally valid; d can be used as long as it is not blocked by an argument
establishing its invalidity. If, by contrast, one does not want to assume
by default that rules are valid, then each rule d should receive a strong
antecedent valid(d); then d can only be used if the premises give rise to
an argument that d is indeed valid.

6.7. Evaluation

How should the choice method with specificity be evaluated with respect
to the points listed in Section 5.1.3? That this method supports structural
resemblance has already been shown in Chapter 3, and implementational
aspects will be discussed in Chapter 10 (this chapter will also contain a
detailed comparison between the choice approach and the exception clause
approach). With respect to the other points the following remarks can be
made.
An aspect of specificity which is often stressed in the literature is that
it would support a modular formalization process. However, in my opinion
this advantage does in general not hold. For example, in Example 6.2.3
the choice between (4) and (4') depends on the question whether bankrupt
conservatives are regarded as by default poor or not: i.e. the relation of
the default 'Conservatives are rich' to the default 'Bankrupt people are
poor' must be considered, as well as to all other defaults about poor or rich
persons. Example 6.4.16 about duels on life-and-death provides a genuine
example from the legal domain. The reason why it cannot be formalized in
a modular way is that the decision to add the third default to Δ is the result
of anticipating a possible conflict between 287 Sr (d1) and 154-(4) Sr (d2):
the second legal rule is meant as an exception to the first, but without the
third rule d2 is syntactically not a special case of d1; however, anticipating
possible conflicts is not a modular way of formalizing. Another example
from Dutch law is the question whether the regulations on hidden defects
are a special case of the regulations on breach of contract (cf. Snijders, 1978,
pp. 50-52, who also discusses some other examples). In general the problem
is that in the legal domain such questions are often real legal issues, of which
the solution cannot be read off from the letter of the law, since the law is
often ambiguously formulated; for this reason the syntactical occurrence of
specificity in the knowledge base is often no more than the formalization
of the outcome of a debate on such an issue: first it must be checked what
the outcome of the debate was; only then can the appropriate formalization
be chosen. Obviously, the presence of such issues in a domain completely
frustrates the goal of modular formalization of a piece of legislation.
With respect to the other points mainly positive remarks can be made.
The system developed in this chapter has been shown to be sufficiently
expressive to represent all kinds of exceptions listed in Section 5.1.2, in a
way which is often closer to natural language than the exception clause
approach, since defeasible rules can be formalized both with and without
exception clauses. A slight drawback with respect to expressiveness is that,
since the system is based on default logic, no defeasible modus tollens is
possible. A very important advantage of the present approach is that it
allows for other standards for comparing arguments besides specificity, since
our dialogue game leaves room for any way of determining defeat. In the
rest of this book I shall aim to exploit this advantage.
To summarize the results of this chapter, an important conclusion has
been that the choice approach to dealing with exceptions needs a general
system for constructing and comparing arguments, defining the various
ways in which arguments can interact, and determining when a choice
should be made. Another conclusion was that such a system should be
defined on top of a non-standard logic: defeasible rules should be formalized
as one-directional conditionals. A system satisfying these requirements has
been developed and its combination with an improved specificity definition
has been shown to be adequate for representing all types of exceptions.
Moreover, the application of the system is not restricted to dealing with
exceptions, since in leaving room for any standard for comparing arguments
it provides a general theory for reasoning with inconsistent information. In
the next two chapters I shall use the system in this way: in Chapter 7 I
apply it to modelling reasoning with inconsistent but ordered premises, and
in Chapter 8 I investigate how it can deal with multiple sources of premise
orderings and with reasoning about such orderings.
CHAPTER 7

REASONING WITH INCONSISTENT INFORMATION

7.1. Introduction
This chapter is devoted to a logical analysis of reasoning with inconsistent
information. For legal philosophy and AI-and-law this subject is very rele-
vant since, as noted in Chapter 3, the information with which a lawyer is
confronted is often contradictory. However, the relevance of this chapter
is not restricted to the legal domain; in other domains of common-sense
reasoning people are also often confronted with conflicting sources of in-
formation. The problem faced by a logical analysis is that according to
classical logic inconsistent premises are of no use at all, since classically
from a contradiction everything can be derived. Therefore, standard logic
is, if not inappropriate, at least insufficient to model nontrivial reasoning
with inconsistent information.
How then to account for this kind of reasoning? Some logicians have
developed alternative logics, in which contradictions, although they are still
false, do not have the devastating consequences which they have in classical
logic (relevance logics, paraconsistent logics). Others, on the other hand,
have attempted to retain the semantics and proof theory of classical logic by
embedding it in a larger framework. The main idea is to allow the premises
to be ordered and to use the ordering in such a way that one or more
consistent subsets result, to which the rules of classical logic can be applied;
thus the nonstandard consequences of the inconsistent set are the standard
consequences of the resulting set or sets (Rescher, 1964; Alchourron &
Makinson, 1981; Brewka, 1989; Roos, 1992). Whatever the merits of the
first approach are, the second seems to be closer to legal reasoning, in
which, as we have seen above, sometimes collision rules are used to choose
between incompatible conclusions. The purpose of this chapter is to analyze
the formal aspects of this kind of reasoning with knowledge that is subject
to collision rules. Although the basic idea is rather simple, its full formal
analysis turns out to be surprisingly subtle. In fact, a major aim of this
chapter is to argue that all existing attempts fall short in certain respects.
In reasoning with inconsistent information two situations can be dis-
tinguished. The first occurs when a contradiction prevents the nontrivial
derivation of conclusions which intuitively have nothing to do with the
conflict. For such conclusions there will be no consistent subsets of the


premises implying their negation. The second situation is that of a 'real'
conflict, i.e. of a pair of contradicting formulas which both have a consistent
subset of the premises implying it. In fact, with respect to this second
situation the phrase 'reasoning with inconsistent information' might be
regarded as somewhat misleading, since in practice it will mostly occur
in debate situations, in which people choose different premises which are
jointly inconsistent but internally consistent, which means that each side
in the debate reasons from a consistent set of premises. Despite this, I
shall also use the phrase 'reasoning with inconsistent information' for this
situation, in which it then receives a more abstract meaning, like 'the joint
basis for discussion is inconsistent'. The reason for doing so is that it will
become apparent that it is not only possible but even illuminating to regard
the first situation as a border case of the second one.
This chapter is also relevant for formal AI research, since in recent years
reasoning with inconsistent but ordered information has received attention
in research on formalizing nonmonotonic reasoning, particularly as a way of
modelling the choice approach to dealing with exceptions. Sometimes the
priorities are used in a nonmonotonic logic, as in Konolige's (1988b) hierar-
chic autoepistemic logic and Brewka's (1994a) prioritized default logic, but
sometimes their use is an element in the inconsistency handling approach to
formalizing nonmonotonic reasoning, as in Brewka (1989) and Roos (1992).
This approach is very close to the just mentioned 'ordering approach' to
reasoning with inconsistent information, and therefore a secondary aim
of this chapter is, as it was in the discussion of Poole's framework, to
investigate the tenability of this view on nonmonotonic reasoning.
The structure of this chapter is as follows. Section 7.2 discusses some
drawbacks of recent formal theories about hierarchical reasoning with in-
consistent information, the cause of which is discussed in detail in Sec-
tion 7.3. Section 7.4 then demonstrates that the problems can be solved if
the use of priorities is embedded in the argumentation system developed in
Chapter 6. I end, in Section 7.5, with a discussion of some general features
of the system.

7.2. Existing Formalizations of Inconsistency Tolerant Reasoning

In this section some existing formalizations will be discussed of nontrivial
reasoning with inconsistent but ordered information. Two of them are
inspired by legal reasoning, viz. the belief revision related approaches of
Alchourron & Makinson (1981) and Sartor (1992a; 1992b). The third
one, Brewka's preferred-subtheories framework, is a general theory of
nonmonotonic reasoning. Since the main differences of Konolige (1988b)
and Roos (1992) with these approaches concern other things than the way
they apply the orderings, these theories will not be discussed separately.
All approaches use orderings on formulas, for which reason some nota-
tional conventions have to be explained: x ≤ y means that y is preferred over
x; x ≈ y is shorthand for x ≤ y and y ≤ x; and x < y abbreviates x ≤ y and
y ≰ x. Unless stated otherwise, the ordering relation ≤ is in this chapter
assumed to be a partial preorder, i.e. a relation which is transitive (if x ≤ y
and y ≤ z then x ≤ z) and reflexive (x ≤ x). For legal applications it might
be asked whether more properties of the ordering should be required: in
particular, whether it should be linear (x < y or y < x or x ≈ y).
In other words: should a hierarchical relation be defined for every pair of
norms? Whether undefined relations between moral or legal norms exist is,
of course, not a formal question but a question of ethical or legal theory,
for which reason a formal theory of hierarchical reasoning should leave this
possibility open; if it does so, it then at least includes total orderings as
a special case, so that it is left to those who apply the system whether to
regard a certain 'real-life' hierarchy as total or not.
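The derived relations can be spelled out mechanically. The following sketch (my own toy illustration, with an arbitrary three-norm ordering) encodes ≤ as an explicit set of pairs and defines ≈ and < from it exactly as above.

```python
# 'LE' encodes x <= y ("y is preferred over x") as a set of pairs.
# Toy preorder over three norms: a is below b and c, while b and c are
# of equal level; reflexivity and transitivity are written out by hand.

LE = {("a", "a"), ("b", "b"), ("c", "c"),
      ("a", "b"), ("a", "c"),            # a <= b, a <= c
      ("b", "c"), ("c", "b")}            # b <= c and c <= b

def le(x, y): return (x, y) in LE
def approx(x, y): return le(x, y) and le(y, x)       # x ~ y
def lt(x, y): return le(x, y) and not le(y, x)       # x < y

print(lt("a", "b"), approx("b", "c"), lt("b", "c"))  # True True False
```

Note that the ordering is deliberately not linear in general: for a pair of norms with no ≤-pair in either direction, all three of <, > and ≈ simply fail, which is exactly the possibility of undefined hierarchical relations discussed above.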

7.2.1. ALCHOURRON & MAKINSON (1981)

Probably the first who studied the formal aspects of nontrivial reasoning
with inconsistent legal information were Alchourron & Makinson (1981).
Their theory provides two things: a way of comparing arbitrary sets of
norms on the basis of an ordering of their elements, and a notion of non-
trivial consequence for reasoning with inconsistent premises. Although they
restrict their investigations to normative reasoning, nothing in their theory
prevents its application to other domains.
Sets of premises are ordered according to their minimal elements.
Definition 7.2.1 A set X is called at least as exposed as a set Y (X ≤ Y)
iff for all y ∈ Y there is an x ∈ X such that x ≤ y. And X is strictly more
exposed than Y (X < Y) iff X ≠ ∅ and for all y ∈ Y there is an x ∈ X
such that x < y.
With the help of this definition the following 'weak' and 'strong' conse-
quence notions are defined.
Definition 7.2.2 Consider a partially ordered set of premises X.
- Y ⊆ X indicates φ iff Y ⊢ φ and for all Z ⊆ X such that Z ⊢ ¬φ:
  Y ≰ Z.
- Y ⊆ X determines φ iff Y ⊢ φ and for all Z ⊆ X such that Z ⊢ ¬φ:
  Y > Z.
Note the similarity with Definition 4.1.16 of weak and strong consequence;
the main difference is that the present definition refers to arbitrary subsets
instead of maximal consistent subsets.
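Definition 7.2.1 translates directly into code. The sketch below (my own illustration; the numeric levels are an assumed encoding of a premise ordering in which (1) and (2) are of equal, lower level than (3), as in the example that follows) checks the two exposure relations; the consequence notions of Definition 7.2.2 would additionally require a classical entailment test.

```python
# Definition 7.2.1: X is at least as exposed as Y iff every y in Y is
# matched by some x in X with x <= y; the strict version requires x < y
# throughout and X nonempty.  The ordering is given here by levels.

level = {1: 0, 2: 0, 3: 1}      # (1) ~ (2), both below (3)

def le(x, y): return level[x] <= level[y]
def lt(x, y): return level[x] < level[y]

def at_least_as_exposed(X, Y):
    return all(any(le(x, y) for x in X) for y in Y)

def strictly_more_exposed(X, Y):
    return bool(X) and all(any(lt(x, y) for x in X) for y in Y)

print(strictly_more_exposed({1, 2}, {3}))   # True: the conflicting pair is weaker
print(at_least_as_exposed({3}, {1, 2}))     # False: (3) is not below (1) or (2)
```
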

Example 7.2.3 Imagine by way of illustration two statutes saying, respec-
tively, that Amsterdam is and is not the capital of The Netherlands, and
the constitution saying that The Netherlands are a kingdom.
(1) Amsterdam is the capital of The Netherlands
(2) ¬ Amsterdam is the capital of The Netherlands
(3) The Netherlands are a kingdom
(1) ≈ (2), (1) < (3).
The set {(1), (2), (3)} determines The Netherlands are a kingdom, since its
subset {(1), (2)}, which also implies the opposite (since it is inconsistent), is
strictly more exposed than {(3)}. Consequently, although the entire set of
premises is inconsistent, the conclusion that The Netherlands are a king-
dom is justified, while nothing of interest can be concluded about whether
Amsterdam is or is not the capital of The Netherlands. This corresponds to
the information lawyers would extract from these premises. In fact this is
an example of the first situation mentioned in Section 7.1, since intuitively
the third statement has nothing to do with the conflict between (1) and
(2).
However, the concepts 'indication' and 'determination' do not always give
acceptable results. As Alchourron & Makinson (1981, pp. 138-39) them-
selves observe, if two higher norms conflict, no lower norm can be deter-
mined: if, for instance, the ordering in the above example is changed into
(1) ≈ (2); (1,2) > (3), then {(3)} is strictly more exposed than {(1), (2)},
which set implies ¬ The Netherlands are a kingdom, although intuitively
it has nothing to do with the conflict about what is the capital of The
Netherlands. According to Alchourron & Makinson an additional notion of
relevance is necessary, but they do not define it formally.
At first sight it would seem that the problems can be solved if the subsets
in Definition 7.2.2 are required to be consistent, after which there is no set
any more implying ¬ The Netherlands are a kingdom. However, this is not
the case, since then the definition still does not adequately handle iterated
conflicts, i.e. conflicts in which arguments conflict on both intermediate and
final conclusions (cf. Section 6.3.2 above). Consider the following example,
of which the intuitive reading is that (8) stands for the facts of the case, that
the line with (4) and (5) represents an argument for s via the intermediate
conclusion q and that the line with (6) and (7) is an argument for ¬s via
¬q.
Example 7.2.4 Consider
(4) p → q      (5) q → s        (8) > (4,5,6,7)
(6) r → ¬q     (7) ¬q → ¬s      (4) > (5,6,7)
(8) p ∧ r                       (6) > (5,7)
                                (7) > (5)

If we apply our analysis of Section 6.3.2 analogously to this example, then
the sets {(4), (8)} and {(4), (5), (8)} should determine q and s, respectively.
However, although q is indeed determined, s is not, since {(6), (7), (8)}
implies ¬s and (7) > (5), which makes {(4), (5), (8)} strictly more exposed
than {(6), (7), (8)}.
Despite these shortcomings, the 1981 paper of Alchourron & Makinson
contained so many interesting ideas that it became one of the origins
of research into a new logical framework, viz. belief revision; and within
that framework their definitions of nonstandard consequence have been
improved by others.

7.2.2. BELIEF REVISION APPROACHES

Belief revision, or theory revision (cf. Gärdenfors, 1988), is about the dy-
namics of 'belief sets': it studies the definition of and requirements for the
process of revising a set of propositions with respect to a certain proposition.
Examples of such requirements are that the contraction of a formula φ
from a theory should result in a theory not implying φ, and that revisions
should be minimal in that as much as possible of the information should
be preserved. The theory of belief revision has been applied to several
problems: for instance, to testing scientific hypotheses, to counterfactual
reasoning, and to updating databases; in this section its application to
deriving nontrivial conclusions from inconsistent information will be inves-
tigated. The reason why belief revision is a possible candidate framework for
this purpose is that one way of characterizing the nontrivial consequences
of inconsistent premises is defining them as the standard consequences of
the set resulting from contracting ⊥ from the premises (an idea suggested
by Alchourron & Makinson, 1981). The belief revision method which seems
best suited for these purposes consists of identifying all maximal consistent
subsets of the premises (for which I use Brewka's (1989) term subtheories),
comparing them with respect to some ordering relation, and taking the
intersection of all sets that are maximal in this ordering; to the result of this
operation simply a monotonic logic can be applied. This is called 'partial
meet contraction' (Gärdenfors, 1988, pp. 59, 80) applied to arbitrary sets,
i.e. to sets that need not be closed under deductive consequence. Note again
the similarity with Definition 4.1.16 above.
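The first step of this method, computing the subtheories, can be brute-forced for small propositional premise sets. The following sketch (my own illustration, not part of any of the frameworks discussed here) encodes the five premises of Example 7.2.4 as Boolean functions and enumerates the maximal consistent subsets by trying all truth assignments.

```python
from itertools import combinations, product

ATOMS = ("p", "q", "r", "s")

# Premises (4)-(8) of Example 7.2.4 as Boolean functions of a valuation v.
PREMISES = {
    4: lambda v: (not v["p"]) or v["q"],          # p -> q
    5: lambda v: (not v["q"]) or v["s"],          # q -> s
    6: lambda v: (not v["r"]) or (not v["q"]),    # r -> not q
    7: lambda v: v["q"] or (not v["s"]),          # not q -> not s
    8: lambda v: v["p"] and v["r"],               # p and r
}

def consistent(labels):
    """Brute-force satisfiability over the fixed atom set."""
    for vals in product((False, True), repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(PREMISES[i](v) for i in labels):
            return True
    return False

def subtheories(labels):
    """All maximal consistent subsets, found largest-first."""
    maximal = []
    for k in range(len(labels), 0, -1):
        for combo in combinations(sorted(labels), k):
            s = frozenset(combo)
            if consistent(s) and not any(s < m for m in maximal):
                maximal.append(s)
    return maximal

subs = subtheories(PREMISES.keys())
for s in sorted(subs, key=sorted):
    print(sorted(s))
```

The enumeration is exponential in the number of premises and atoms, which is harmless here but is of course no substitute for the general definitions.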
It should be noted that if belief revision is used to model nontrivial
reasoning with inconsistent premises, it seems better not to define it on sets
which are closed under logical consequence, i.e. on theories. The reason is
that otherwise conclusions based on rejected premises sometimes stay valid,
which is not what we want: if, for example, we have the premises p and
p → q and for some reason p is rejected, then q should also go, since it is
based on a rejected premise; however, if the revision is performed on theories
instead of on arbitrary sets of premises, then the minimality requirement
for revisions causes q to be preserved. Similar observations have been made
by Makinson (1985), but most investigations on belief revision focus on
revising theories, which seems to make them less suitable for reasoning
with inconsistent information.
Now given an ordered set of formulas, how should its maximal consis-
tent subsets be ordered? Note first that Definition 7.2.1 cannot simply be
applied to subtheories, since that would give incorrect results in case of
iterated conflicts. Consider again Example 7.2.4. The set {(4), (5), (6), (7), (8)}
has three subtheories:
A: {(4), (5), (7), (8)}   B: {(5), (6), (7), (8)}   C: {(4), (5), (6), (7)}
Although according to our analysis in terms of iterated conflicts A should
be preferred, the formal definitions say otherwise: all subtheories are of
equal level, since in all of them the minimal element is (5). A better way
of comparing subtheories is developed by Sartor (1992b), who combines
several applications of Definition 7.2.1 to subsets of subtheories into a
complex ordering relation over the subtheories themselves. Roughly, a set
X is maximal in the ordering over subtheories iff of every other subtheory
Y no subset Y' inconsistent with X is better than all subsets X' of X which
are inconsistent with Y'.
Definition 7.2.5 X is at least as good as Y (X ⪰ Y) iff for every set
Y' ⊆ Y inconsistent with X there is a set X' ⊆ X which is inconsistent
with Y' and such that X' ≮ Y'. Furthermore, X ≻ Y iff X ⪰ Y and
not Y ⪰ X.
Sartor then defines a weak and strong consequence notion by simply using
in Definition 4.1.16 the ordering ⪰ for comparing the subtheories.
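Definition 7.2.5 can likewise be tested mechanically on Example 7.2.4. In the sketch below (my own brute-force illustration, valid only for this toy case) 'inconsistent with' is decided by truth tables, and the premise ordering is the strict order given in the example, written out as a transitively closed set of pairs.

```python
from itertools import chain, combinations, product

ATOMS = ("p", "q", "r", "s")
PREMISES = {
    4: lambda v: (not v["p"]) or v["q"],          # p -> q
    5: lambda v: (not v["q"]) or v["s"],          # q -> s
    6: lambda v: (not v["r"]) or (not v["q"]),    # r -> not q
    7: lambda v: v["q"] or (not v["s"]),          # not q -> not s
    8: lambda v: v["p"] and v["r"],               # p and r
}
# Strict ordering of Example 7.2.4, transitively closed: (x, y) means x < y.
LT = {(4, 8), (5, 8), (6, 8), (7, 8), (5, 4), (6, 4), (7, 4),
      (5, 6), (7, 6), (5, 7)}

def consistent(labels):
    return any(all(PREMISES[i](dict(zip(ATOMS, vals))) for i in labels)
               for vals in product((False, True), repeat=len(ATOMS)))

def subsets(s):
    s = sorted(s)
    return map(frozenset,
               chain.from_iterable(combinations(s, k) for k in range(len(s) + 1)))

def strictly_more_exposed(X, Y):
    return bool(X) and all(any((x, y) in LT for x in X) for y in Y)

def at_least_as_good(X, Y):
    # Definition 7.2.5: every Y' of Y inconsistent with X is answered by
    # some X' of X inconsistent with Y' that is not strictly more exposed.
    return all(any(not consistent(Xp | Yp) and not strictly_more_exposed(Xp, Yp)
                   for Xp in subsets(X))
               for Yp in subsets(Y) if not consistent(Yp | X))

A = frozenset({4, 5, 7, 8})
B = frozenset({5, 6, 7, 8})
print(at_least_as_good(A, B), at_least_as_good(B, A))   # True False
```

The check reproduces the outcome discussed next: A ⪰ B holds (the answering subset {(4)} is never strictly more exposed than an attacking subset containing (6)), while B ⪰ A fails on the subset {(4)} of A, so A ≻ B.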
With this definition Example 7.2.4 has the (in my opinion) most natural
outcome that A is the only maximal subtheory: on the one hand, A ⪰ B,
since for all subsets B′ of B inconsistent with A there is an incompatible
subset A′ of A which is not more exposed: the reason is that all such B′
will contain (6), and then {(4)} is the required subset A′ of A 'defeating'
B′; on the other hand, B ⋡ A, since A′ is a subset of A such that there is
no contradicting subset B′ of B such that B′ ≮ A′. This nicely shows that
earlier conflicts are dealt with first. Moreover, the problem of Alchourrón
& Makinson with premises irrelevant to the conflict has also been solved.
In Example 7.2.3 with the ordering (1) ≈ (2); (1, 2) > (3) both subtheories,
{(1, 3)} and {(2, 3)}, are maximal in the ordering ⪰: {(1)} is a subset of the
first which is not more exposed than all subsets of {(2, 3)} incompatible with
it, which are {(2, 3)} itself and {(2)}, while the same holds for the subset
{(2)} of {(2, 3)}. Since (3) is in the intersection of the two subtheories,
The Netherlands are a kingdom is a strong consequence of the entire set
{(1, 2, 3)}.

REASONING WITH INCONSISTENT INFORMATION 185

In conclusion, it seems that within the belief revision framework
Sartor's way of comparing subtheories on the basis of orderings of their
elements is at present the best available.
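Definition 7.2.5 can be run directly on small examples. The following Python sketch is mine, not Sartor's: the consistency test and the priority order are caller-supplied predicates, and the encoding of Example 7.2.3 by a single minimal conflicting set {(1), (2)} (with (1) and (2) of equal level, both above (3)) is an assumption made for illustration.

```python
from itertools import combinations

def subsets(s):
    """All non-empty subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def more_exposed(R, Rp, lt):
    """Exposure (Definition 7.2.1): R < R' iff some element of R is
    strictly below every element of R'."""
    return any(all(lt(x, y) for y in Rp) for x in R)

def at_least_as_good(X, Y, inconsistent, lt):
    """Definition 7.2.5: X >= Y iff every subset Y' of Y inconsistent
    with X is answered by a subset X' of X that is inconsistent with
    Y' and not more exposed than Y'."""
    return all(any(inconsistent(Xp | Yp) and not more_exposed(Xp, Yp, lt)
                   for Xp in subsets(X))
               for Yp in subsets(Y) if inconsistent(X | Yp))

# Example 7.2.3, assuming its only minimal conflicting set is {1, 2},
# with (1) and (2) of equal level and both strictly above (3).
lt = lambda a, b: (a, b) in {(3, 1), (3, 2)}
inconsistent = lambda s: frozenset({1, 2}) <= s

X, Y = frozenset({1, 3}), frozenset({2, 3})
print(at_least_as_good(X, Y, inconsistent, lt))  # both subtheories maximal
print(at_least_as_good(Y, X, inconsistent, lt))
```

Since (3) never occurs in a subset that conflicts with the other subtheory, the irrelevant premise plays no role in the comparison, exactly as the text observes.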
Nevertheless, Sartor has not solved all the problems: his definitions
sometimes still give counterintuitive results, as illustrated by the following
example.
Example 7.2.6 Consider an incoherent university library regulation of
which one section says that misbehaviour can lead to removal from the
library, while another section says that professors cannot be forced to leave
the library; and the library regulation of the faculty of law, which is lower
than the university regulation, saying that snoring is a case of misbehaviour;
finally, the facts, which are given highest priority, say that Bob is a professor
who snores in the library.
(9) x misbehaves → x may be removed
(10) x is a professor → ¬ x may be removed
(11) x snores → x misbehaves
(12) Bob is a professor ∧ Bob snores
(9, 10) < (12); (9) ≈ (10), (11) < (9, 10)
This example has four maximal consistent subsets, each missing a different
element of the premise-set:
E: {(9, 10, 11)} F: {(9, 10, 12)} G: {(9, 11, 12)} H: {(10, 11, 12)}
Observe that the question whether Bob may be removed from the library is
an example of the second situation distinguished in the first section, since
there are both consistent subsets saying 'yes' and such sets saying 'no'. Now
according to Definition 7.2.5 the ordering of these subtheories is
F ≻ E; E ≈ G ≈ H

The reason that F is the best is that it is the only subtheory not containing
the lowest norm (11), while of the other subtheories all subsets inconsistent
with F contain (11) (in fact, the only such subsets are the subtheories
themselves): and therefore all these sets are strictly more exposed than F.
In conclusion, the only maximal element of the set of subtheories is F. What
does it say about the consequences of Bob's snoring? Since (11) is not in F,
(9) cannot be used to derive Bob may be removed: the conclusion, then, is
that Bob cannot be removed from the library. Note that the same outcome
is obtained by Definition 7.2.2: F determines ¬ Bob may be removed, since
its minimal element, which is (9), is strictly higher than the minimal element
of all sets implying the opposite, which in all these sets is (11).
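The enumeration of E-H can be reproduced mechanically. In the sketch below (my own encoding) consistency is approximated by checking against the single minimal inconsistent set {(9), (10), (11), (12)}, which is an assumption about the shape of the conflict that the example's rules generate:

```python
from itertools import combinations

def maximal_consistent_subsets(premises, inconsistent):
    """All consistent subsets of `premises` that cannot be extended
    with a further premise without becoming inconsistent."""
    candidates = [frozenset(c) for r in range(len(premises) + 1)
                  for c in combinations(premises, r)
                  if not inconsistent(frozenset(c))]
    # Keep only the subsets that are not properly included in another
    # consistent candidate.
    return [s for s in candidates if not any(s < t for t in candidates)]

# Example 7.2.6: the four premises are jointly inconsistent, while
# every proper subset is consistent (an assumed encoding).
inconsistent = lambda s: frozenset({9, 10, 11, 12}) <= s
subtheories = maximal_consistent_subsets([9, 10, 11, 12], inconsistent)
print(sorted(sorted(s) for s in subtheories))
# [[9, 10, 11], [9, 10, 12], [9, 11, 12], [10, 11, 12]]
```

The four results are exactly the subtheories E, F, G and H above, each obtained by dropping one premise from the minimal inconsistent set.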
186 CHAPTER 7

However, I want to argue for a different view on the example, a view
which is more in line with Chapter 6. On the one hand, clearly there is a
potential conflict between the rules (9) and (10), since when a professor is
held to be misbehaving a choice must be made about which of them takes
precedence. On the other hand it seems natural to say that there is no
dispute that Bob is misbehaving: just as in Section 6.3.3 it can be said that
Bob misbehaves is merely an intermediate conclusion for making (9) applicable,
for which reason (11) is irrelevant to the conflict about whether Bob should
be removed from the library. Instead of preferring ¬ Bob may be removed
by rejecting the conclusion that Bob is misbehaving, it seems more natural
to make a choice between the norms which are certainly in conflict with each
other, (9) and (10): and since these norms are of equal level, the outcome
should be that the conflict cannot be resolved.
This point of view can also be illustrated with a legal example of the
same form but with a different ordering of the premises.
Example 7.2.7 Consider a provision (5 GW) of the Dutch constitution
declaring every person to have the right to submit a written request to the
proper authority, an (imaginary) case law decision stating that a request by
fax is a written request, and an (also imaginary) act stating that prisoners
do not have the right to submit requests to a proper authority. I leave
the ordering relation between case law decisions and legislation undefined.
Finally, in order to make the example closer to most legal systems, I assume
that the Dutch constitution is higher than statutes; in fact in Dutch law
their relation is more complicated.
(13) x is a fax → x is written
(14) x is a request ∧ x is written → a proper authority
must accept x
(15) x is a prisoner's request → ¬ a proper authority
must accept x
(16) My-letter is a fax ∧ My-letter is a prisoner's request
∧ ∀x. x is a prisoner's request → x is a request
(16) > (14), (14) > (15)
For reasons explained above with Example 4.1.21 open formulas are schemes
for all their ground instances.
The subtheories are
I: {(13, 14, 15)} J: {(13, 14, 16)} K: {(13, 15, 16)} L: {(14, 15, 16)}

In the ordering of the subtheories I is lower than the other three, since
it implies the negation of (16), and since according to Definition 7.2.5
{(16)} ≻ {(13, 14, 15)} since (16) > (14, 15). However, because of the
undefined status of (13) the other sets are incomparable, which according
to Definition 4.1.16 makes all of them weakly imply their consequences, and
this means that both a proper authority must accept My-letter and its
negation are only weakly provable. Again in my opinion a different analysis
seems closer to legal reasoning: I do not think many people will accept that
the enactment of a statute norm like (15) can block the application of the
constitutional norm just because the antecedent of the constitutional norm
is provided by a case law decision; again a more natural view seems to
be that it is (14) and (15) which are in conflict, and since (14) is higher
than (15), this view leads to the outcome that a prisoner's request must
be accepted. In general terms Definition 7.2.5 has the following undesirable
consequence: if a norm later in an 'argument chain' takes part in a conflict,
then norms earlier in the chain can only give rise to justified intermediate
conclusions if they are higher than the lowest norm involved in the conflict.

For mathematicians a natural reply to these objections would be: "well,
if (11) should stay and (15) should be rejected, then the ordering should
be changed: (11) should be higher than (9) and (10) and (15) should
be lower than (13)". This, however, although it may be acceptable for
mathematical purposes, is cognitively inadequate, since it does not capture
the way hierarchies are used in legal reasoning: such a hierarchy does not
depend on desired outcomes in individual cases but is, instead, based on
general grounds and then used to solve individual conflicts. What is required
in modelling this use is a modification of the formal definitions rather than
a change in the assignment of a specific ordering: premises like (13) should
not be regarded as higher than (15), but as irrelevant to the conflict.

7.2.3. BREWKA'S PREFERRED-SUBTHEORIES APPROACH

An alternative way of constructing the preferred maximal consistent subsets
is that of Brewka (1989). As explained in Section 4.1.5, the idea is
that on the basis of a strict partial ordering of the individual premises a
consistent set of premises is constructed by first adding as many formulas
of the highest level to the set as is consistently possible, then adding as
many formulas of the subsequent level as is consistently possible, and so
on. If incomparable formulas are in conflict, the resulting set branches into
alternative and mutually inconsistent sets, analogous to the extensions of
default logic.
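For the linearly ordered case Brewka's construction is easy to sketch in Python. The code below is my own; the branching for incomparable formulas is omitted, and the conflict in Example 7.2.4 is encoded by a single assumed minimal inconsistent set among (4), (6), (7) and (8), which matches the narrative that (4) and (6) cannot both be added once (7) and (8) are in.

```python
def preferred_subtheory(levels, inconsistent):
    """Brewka (1989), linear case: walk the premises from the highest
    priority level down, adding each formula unless that would make
    the set built so far inconsistent.  With incomparable formulas
    the construction instead branches into several alternative sets."""
    theory = frozenset()
    for level in levels:
        for f in level:
            if not inconsistent(theory | {f}):
                theory |= {f}
    return theory

# The ordering (8) > (7) > (5) > (4) > (6) used for Example 7.2.4.
inconsistent = lambda s: frozenset({4, 6, 7, 8}) <= s  # assumed conflict
print(sorted(preferred_subtheory([[8], [7], [5], [4], [6]], inconsistent)))
# [4, 5, 7, 8]
```

Walking down the levels, (8), (7) and (5) go in freely; (4) is still consistent with them, and (6) is then rejected, yielding the preferred subtheory A of Example 7.2.4.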

Since the formal definitions have already been discussed in Chapter 4,
it now suffices to check what the result is in the critical examples of the
present chapter. In iterated conflicts of the type of Example 7.2.4 everything
goes well, as can be illustrated by considering the most critical ordering,
which is now
(8) > (4, 5, 6, 7)
(7) > (4, 5, 6)
(5) > (4, 6)
(4) > (6)
After the facts (8) first (7) is added, then (5) and then a choice must be
made between (4) and (6), since adding them both would make the set
inconsistent. Since (4) > (6), (4) is added to the preferred subtheory, which
then as desired implies both q and s.
Let us now see how this method deals with Example 7.2.6. First the
facts (12) are added to the set, and then both norms about removal from
the library, i.e. both (9) and (10), since without the lower norm (11) on
snoring (9) cannot be used to derive that Bob may be removed, for which
reason no contradiction occurs. Finally, (11) is considered: adding this
implication to the set would cause an inconsistency, for which reason it
is left out. Thus the method of Brewka has resulted in the same set as
Sartor's belief revision method: {(9, 10, 12)}, which makes it subject to the
same criticism: it, in my opinion mistakenly, regards (11) as relevant to
the conflict in that the low status of (11) prevents the construction of the
alternative preferred subtheory {(9, 11, 12)}. A similar result is obtained
in Example 7.2.7: first the facts (16) are added, then the constitutional
provision (14) on written requests, and then a choice has to be made
between the case law decision (13) on faxes and the statute norm (15) on
prisoner's requests: at most one of them can be added without making the
set inconsistent, and since the norms will be incomparable, the set branches.
One of the resulting preferred subtheories implies My-letter is written
and a proper authority must accept My-letter, but the other one
implies their negations, which means that Brewka's framework fails to regard
the first two formulas as strongly provable from {(13 - 16)}. In conclusion,
with respect to the present topic Brewka's theory is not an improvement
on the belief revision approaches.

7.3. Diagnosis

As already briefly indicated, the problem with the above methods is that
they regard too many premises as relevant to the conflicts about whether
Bob may be removed from the library and whether the request may be
submitted: in both approaches it is all members of the (classically) minimal
inconsistent set which are regarded as relevant, whereas Examples 7.2.6
and 7.2.7 have illustrated that it is only a subset of this set which matters:
informally, only conditional rules with conflicting consequents are relevant
to the conflict. At first sight this point seems to be rather ad hoc, in that
it only pertains to the specific form of these examples. Nevertheless, it
can be generalized if a different attitude is employed towards reasoning
with inconsistent information, viz. if it is regarded as choosing between
conflicting arguments instead of as revising inconsistent premises. In the
discussion of the examples in the previous section this attitude has some-
times already been employed. As was explained in Chapter 6, constructing
and comparing arguments is a step-by-step process and for this reason
premises which are only needed to provide intermediate conclusions of an
argument should be regarded as irrelevant to conflicts about conclusions
drawn in further steps of the argument. In Example 7.2.6 this means that
(11), which in the argument for Bob may be removed is only used to derive
the intermediate conclusion Bob misbehaves, is irrelevant to the conflict
about Bob may be removed. The same holds in Example 7.2.7 for the rule
(13) on faxes, which is irrelevant to the conflict between the constitution
norm (14) and the statute norm (15) on whether requests may be submitted.
Now if we concentrate on Example 7.2.7, and we loosely define an
argument as a consistent set implying a conclusion, then at first sight
nothing seems to have changed, since formally {(13 - 16)} not only con-
tains an argument for My-letter is written, viz. {(13, 16)}, but also one
for ¬ My-letter is written, viz. {(14, 15, 16)}, since (15) and (16) imply
¬ a proper authority must accept My-letter which by modus tollens
with (14) implies ¬ My-letter is written. However, in line with the analy-
sis in Chapter 6 of Example 6.3.3 a more natural view seems to be that argu-
ments such as the one for ¬ My-letter is written are not constructible: the
formal constructibility of this argument depends on the fact that the truth
of My-letter is written would lead to a conflict between two other norms,
but the very idea of inconsistency handling is to resolve such conflicts when
they occur, and then it seems strange to allow arguments which are based
on the idea that such conflicts cannot occur. A system which makes such
arguments possible by validating contrapositive inferences fails to recognize
that it is (9) and (10), and (14) and (15), that are responsible for the
conflicts.
It turns out that the same conclusion can be drawn as in Section 6.3.3:
rules that are subject to defeat (whether by exceptional rules or by
hierarchically higher rules) are one-directional; they do not satisfy modus
tollens and other contrapositive inferences. And if this observation is
combined with the observation about the step-by-step nature of constructing
arguments, then we have a general reason for regarding (11) and (13) as
irrelevant to the conflicts: because the rules are one-directional, no argument
can be set up against Bob misbehaves or against My-letter is written.
Note, by the way, that this analysis captures both of the two kinds of
reasoning with inconsistent information distinguished in Section 7.1: we can
say that the phenomenon of comparing arguments includes as a border case
the situation in which no counterargument exists. Moreover, the combined
analysis of these situations has turned out to be very useful, since it enables
us to explain why the rules (11) and (13) are irrelevant to the conflicts by
saying that they give rise to conclusions which have no counterarguments.
It should be stressed that the above problems cannot be solved just by
replacing standard logic with some nonmonotonic formalism. It is instruc-
tive to see what the result is if a default logic version of Sartor's subtheory
comparator is used to compare extensions of normal default theories in
Reiter's default logic. In defining this version use can be made of the fact
that every extension of a default theory has a set of 'generating defaults'
(cf. Reiter, 1980, Th. 2.5), which informally are the defaults responsible
for the content of the extension: every formula of an extension is implied
by the facts and the consequent of one or more generating defaults. Sets
of generating defaults can be regarded as the analogue of the maximal
consistent subsets of an inconsistent set of standard logic formulas. Since
Definition 7.2.1 of exposure can simply be applied to sets of defaults,
Definition 7.2.5 can be adapted to default logic in the following way.

Definition 7.3.1 (Sartor's subtheory comparator adapted to default logic).
Let (F, Δ) be a default theory and D1 and D2 be sets of generating defaults
of extensions E1 and E2 of (F, Δ). Then E1 is at least as good as E2
(E1 ⪰ E2) iff for every set D2′ ⊆ D2 such that E(F, D2′) is inconsistent
with E1 there is a set D1′ ⊆ D1 such that
- E(F, D1′) is inconsistent with E(F, D2′) and
- D1′ ≮ D2′.

Now if we read Example 7.2.6 as a default theory, with F = {(12)} and
Δ = {(9 - 11)}, then the essential observation is that, although there
is no D ⊆ Δ with an extension containing ¬ Bob misbehaves, the low
status of (11) still prevents the extension of (F, {(9, 11)}), containing Bob
misbehaves, from being preferred. In Prakken (1993) it is shown that the
same problems occur in Brewka's (1991a) application of his subtheories
approach to default logic, and similar criticism applies to Konolige's (1988b)
hierarchic autoepistemic logic. In fact, the main difference between the
argumentation approach and the other approaches can be summarized as
follows: a basic assumption of the argumentation approach is that a formula
can only be rejected if the argument on which it is based can be defeated,
while the ideas behind the other approaches do not exclude that a formula
can be rejected even if it is implied by a consistent subset on which no
attack is possible.
Before this chapter's solution to the problems is presented, one
final argument in defence of the criticized approaches should be discussed.
It might be argued that in Examples 7.2.6 and 7.2.7 it is perfectly possible
to define belief revision functions such as
f({(9 - 12)}) = {(9, 11, 12)} ∩ {(10, 11, 12)}
and
f({(13 - 16)}) = {(13, 14, 16)}
after which the desired conclusions can be derived with standard logic.
However, although technically this is undoubtedly possible, in my opinion
it does not clarify what essentially happens. What would clarify the matter
is a belief revision function which, firstly, is stated in general terms instead
of only for a few syntactic cases and, secondly, only uses formal notions. And
the conclusion of this section has been that for a general function notions
like 'comparing arguments' and the distinction between intermediate and
final conclusions are indispensable, and that non-formal notions can only
be avoided by using a one-directional conditional.
To summarize the results of this section, we have, firstly, seen that the
approach of modelling nonmonotonic reasoning as inconsistency tolerant
reasoning cannot defend its claim that thus classical logic need not be
abandoned; and secondly, we have concluded that theories of inconsistency
tolerant reasoning should take the step-by-step nature of argumentation
into account. Since similar conclusions have been drawn in Chapter 6
from the discussion of Poole's approach, a natural solution suggests itself:
using the argumentation system developed in Chapter 6, but replacing
the specificity comparator by a comparator which takes any hierarchical
relation between rules into account, irrespective of its source.

7.4. Hierarchical Defeat

To adapt our argumentation system to reasoning with inconsistent but
ordered information only very little needs to be changed. First the input of
the system must be redefined: besides a default theory now also an ordering
of the defeasible rules is assumed as given.
Definition 7.4.1 An ordered default theory is a pair (Fn ∪ Fc ∪ Δ, ≤),
where Fn, Fc and Δ are defined as above and ≤ is a partial preorder on Δ.
The other definitions of the system are now relative to an ordered default
theory.
It now suffices to replace the specificity comparator of Definition 6.4.15
with a comparator that takes the rule-ordering into account; this compara-
tor again induces a defeat relation among arguments and thus the rest of
the system applies in exactly the same way as in Chapter 6. Here is how
this can be done.

As I argued in the previous section, the definitions should, in addition to
a choice criterion, also provide a relevance criterion. In fact, the relevance
criterion was already defined in Chapter 6, in Definition 6.4.10 of conflict
pairs, so all that is left to define is ordering conflict pairs in terms of
the ordering on the rules that they contain. To this end Alchourrón &
Makinson's definition of '(strict) exposure', i.e. Definition 7.2.1 above, can
be used. In the notation of the present system:

Definition 7.4.2 For any two sets R and R′ of defeasible rules,
R < R′ iff for some r ∈ R and all r′ ∈ R′ it holds that r < r′.

The intuitive idea behind this definition is that if R < R′, R can be made
better by replacing some rule in R with any rule in R′, while the reverse is
impossible.
Finally, in the definition of rebuttals (see Definition 6.6.5) this ordering
takes the place of the specificity ordering.

Definition 7.4.3 (hierarchical rebuttals) Let A1 and A2 be two arguments.
A1 rebuts A2 iff
1. A1 ∪ A2 ∪ Fn ⊢ ⊥; and
2. A2 is defeasible; and
(a) A1 is strict; or
(b) for some conflict pair (C1, C2) of (A1, A2) it holds that C2 ≯ C1.

With this new notion of rebutting, Definition 6.4.17 again yields a defeat
relation of arguments, to which the rules of the dialectical proof theory of
Section 6.5 can be directly applied. Since this proof theory is independent
of any ground for the defeat relations, all its properties still hold.
Here is how the new definitions deal with the critical examples.

d1: x misbehaves ⇒ x may be removed
d2: x is a professor ⇒ ¬ x may be removed
d3: x snores ⇒ x misbehaves
f1: Bob is a professor ∧ Bob snores
d1 ≈ d2, d3 < d1

A1 = [f1, d1, d3] is an argument for Bob may be removed, while A2 =
[f1, d2] is an argument for the opposite. The sets relevant to the conflict
are {d1} and {d2}. Since the only elements of these sets are of equal level,
neither A1 nor A2 is justified; both are merely defensible arguments. Note
that, as desired, d3 is not considered by the choice criterion.
d4: x is a fax ⇒ x is written
d5: x is a request ∧ x is written ⇒ a proper authority
must accept x
d6: x is a prisoner's request ⇒ ¬ a proper authority
must accept x
f2: My-letter is a fax ∧ My-letter is a prisoner's request
∧ ∀x. x is a prisoner's request → x is a request
d5 > d6, d4 ∼ d6.

A3 = [f2, d4, d5] for a proper authority must accept My-letter defeats
A4 = [f2, d6] for the opposite conclusion, since d5 > d6; furthermore, A3's
only subargument, [f2, d4] for My-letter is written, is trivially justified,
since it has no counterargument. In conclusion, A3 is a justified argument.
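The comparison underlying these two verdicts can be checked mechanically. The sketch below is my own rendering of Definition 7.4.2 together with clause 2(b) of Definition 7.4.3, under the reading that one argument rebuts another on a conflict pair (C1, C2) when C2 is not strictly higher than C1; the priority pairs simply encode the two examples just discussed.

```python
def set_lower(R, Rp, lt):
    """Definition 7.4.2: R < R' iff some rule in R is strictly below
    every rule in R'."""
    return any(all(lt(r, rp) for rp in Rp) for r in R)

def rebuts_on_pair(C1, C2, lt):
    """Clause 2(b) of Definition 7.4.3 for one conflict pair (C1, C2):
    the attacking argument rebuts iff C2 is not strictly higher
    than C1, i.e. iff not C1 < C2."""
    return not set_lower(C1, C2, lt)

# Encoded priorities: d3 < d1 (library example), d6 < d5 (request example).
lt = lambda a, b: (a, b) in {('d3', 'd1'), ('d6', 'd5')}

# Library example: d1 and d2 are of equal level, so the rebuttal is
# mutual and both arguments stay merely defensible.
print(rebuts_on_pair({'d1'}, {'d2'}, lt), rebuts_on_pair({'d2'}, {'d1'}, lt))
# Request example: d5 > d6, so only the constitutional argument rebuts.
print(rebuts_on_pair({'d5'}, {'d6'}, lt), rebuts_on_pair({'d6'}, {'d5'}, lt))
```

Note that d3 and d4 never enter these comparisons: the conflict pairs contain only the rules with conflicting consequents, which is exactly the relevance criterion the text argues for.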
A nice property of these two formalizations is that they avoid a problem
of many other prioritization approaches, viz. the need to express priorities
between rules which intuitively have nothing to do with each other. For
example, above we have seen that in other approaches d3 has to be as high
as both d1 and d2, and d4 has to be higher than d6; obviously, this severely
complicates the job of the knowledge engineer, who then really has to solve
each possible problem before s/he can assign the priorities (cf.
Poole, 1991, p. 289); moreover, in doing so s/he cannot rely on the general
legal criteria for assigning the priorities, since, as these two examples show,
to obtain the correct result the legal ordering often has to be changed. In
the present system these problems do not occur; it is not even necessary
that any relation is defined at all between 'earlier' and 'later' norms in
conflicting arguments. For knowledge engineering this is, of course, much
better.

7.5. General Features of the System

In essence, what has been developed in the last two chapters is a system for
comparing conflicting arguments, i.e. a system for defeasible argumentation.
This has resulted in a nonmonotonic notion of logical consequence, viz.
that of being a justified conclusion of an ordered default theory. Below this
notion will be denoted by Γ |∼a φ, in words 'Γ argumentatively implies φ'.
In this section some general features of the system and consequence notion
will be discussed.

7.5.1. PROPERTIES OF THE CONSEQUENCE NOTION

As discussed in Section 4.2.2, it has been argued that consequence notions
that are not monotonic should at least satisfy some other properties. Firstly,
in Section 4.2.2 I argued that any consequence notion should at least satisfy
the conjunction principle, which for argumentative consequence looks as
follows.

If Γ |∼a φ and Γ |∼a ψ, then Γ |∼a φ ∧ ψ.

This property follows from the deductive closure of the set of justified
formulas, which, as remarked above in Section 6.5.2, can be proven along
the lines of Prakken (1993).
Another property that is often regarded as essential is cumulativity.
For our argumentation system this property has the following form. |∼a is
cumulative iff for any default theory (F ∪ Δ, ≤)

If F ∪ Δ |∼a φ and F ∪ Δ |∼a ψ, then F ∪ {φ} ∪ Δ |∼a ψ.

In words, if a derivable formula is added to the facts, then everything else
stays derivable.
The present system does not have this property, as is shown by the
following counterexample.

f1: s
d1: s ⇒ p
d2: p ⇒ q
d3: q ⇒ r
d4: r ⇒ ¬p

On the basis of this default theory all of p, q and r are justified conclusions.
This is since the only counterargument to the arguments supporting these
conclusions is [f1, d1, d2, d3, d4], which is incoherent and therefore overruled
by the empty argument.
However, if we add the justified conclusion r to the facts, as

f2: r

then in the new default theory none of p, q and r is justified any more.
The reason is that now there is a different argument for ¬p, viz. [f2, d4],
which is not incoherent and is therefore able to defeat the argument
[f1, d1, d2, d3].
Is the lack of cumulativity a drawback of the present system? In my
opinion it is not; I think (following Vreeswijk, 1993a, pp. 82-6) that such
examples clearly illustrate why in general cumulativity is not a desirable
property of nonmonotonic consequence notions. What the above shows is
that in evaluating arguments it is important to take the derivation history of
the conclusions into account: with the initial default theory the conclusion
r is based on an intermediate conclusion p, for which reason the subsequent
argument for ¬p is incoherent; however, in the extended default theory r
does not depend on p any more, which makes the new argument for ¬p
coherent. In other words, in the second default theory the argument for ¬p
does not use the 'same' r as in the first default theory.¹

7.5.2. SCEPTICAL AND CREDULOUS REASONING

Next it will be discussed how the system deals with sceptical and credulous
reasoning (cf. Chapter 4). Recall that sceptical consequences are those con-
clusions which on the basis of the given information cannot be challenged,
while credulous consequences are those conclusions that hold in at least
one possible state of affairs admitted by this information. Clearly sceptical
reasoning is captured by the notion of a justified argument and credulous
reasoning by the notion of a defensible argument. When two arguments are
in an irresolvable conflict, neither of them is justified but they can still be
further pursued as alternative defensible points of view.
The ability of the system to capture both sceptical and credulous reason-
ing is closely related to a particular, 'moderate' view on sceptical reasoning,
which is not shared by all approaches in the literature. An alternative,
more extreme account of sceptical reasoning can be found in Horty et
al.'s (1990) system for inheritance networks with exceptions, and also in
the work of Nute, e.g. (1992). This account is that a sceptic, if faced with a
conflict which is irresolvable, refuses to further pursue any of the arguments
involved in the conflict. In other words, extremely sceptical reasoners 'cut
off' an argument not only when it is overruled, but also when it is merely
defensible.
Example 7.5.1 The following variant of our OJ example illustrates this
kind of scepticism.
d1: ⇒ f forged evidence e
d2: f forged evidence e ⇒ ¬ e is admissible evidence
d3: ⇒ ¬ f forged evidence e
d4: ∼¬ e is admissible evidence ⇒ e proves guilt

Assume that ≤ only contains ≈ relations. Here is how in the present system
the proof for 'e proves guilt' fails.
P1: [d4: ∼¬ e is admissible evidence ⇒ e proves guilt]
O1: [d1: ⇒ f forged evidence e,
d2: f forged evidence e ⇒ ¬ e is admissible evidence]

Now P has run out of moves, since [d3] defeats O1 only nonstrictly.


¹In fact, this observation is the basis of Brewka's (1991b) cumulative version of
default logic, mentioned above in Section 4.2.2: he 'tags' each derived formula with
its derivation history; then the above counterexample trivially fails, since changing a
defeasible conclusion into a fact makes it syntactically different.
Makinson & Schlechta (1991) call arguments like [d1, d2] "zombie paths":
although this argument is not fully alive, since it is not justified, neither
is it totally dead, since it can still prevent other arguments from being
justified. Makinson & Schlechta argue that a suitable theory of inheritance or
argumentation should allow for such an intermediate status of a path or
argument. The present system does so, in the form of defensible arguments.
However, extremely sceptical systems do not have this intermediate cate-
gory; in such systems [d1, d2] is, in our terms, not allowed to prevent [d4]
from becoming justified, since it has a subargument that is in an irresolvable
conflict with another argument, viz. [d3]; and an 'extreme sceptic' then
refuses to pursue the argument any further. As a result, [d4] is in such
systems justified, even though it is attacked by a 'zombie', i.e. an argument
that is not justified but that is not worse than any counterargument.
Granted that the different accounts of scepticism reflect a 'clash of
intuitions', I still think that the moderate account of the present system
is justified by its underlying general ideas. Recall that the system regards
an argument as justified only if, given the premises and the way they are
ordered, no doubt at all can be cast on the argument (unless, of course,
new premises are added). Now in our example doubt can be cast on the
argument [d4], since it has a counterargument that is not weaker than any
argument attacking it. I think that if such a situation were to arise in
practice, a judge would feel compelled to determine whether d1 < d3 before
deciding in favour of P.

7.5.3. FLOATING CONCLUSIONS

The following example raises another point for discussion.


Example 7.5.2 Assume that F = ≤ = ∅ and Δ = {d1 - d4}, where
d1: ⇒ p
d2: p ⇒ q
d3: ⇒ ¬p
d4: ¬p ⇒ q

Since no priority relation holds between d1 and d3, all arguments in the
example are defensible; yet in whatever way the conflict between d1 and d3
were solved, q would be justified. It might be argued that q should come
out as a justified conclusion even though it is not supported by a justified
argument. See e.g. Makinson & Schlechta (1991), who call such conclusions
"floating conclusions".
The present system does not make it impossible to capture this intuition,
but we have to introduce the notion of argument extensions, i.e. of, in some
sense, maximal sets of arguments with certain properties. The idea is to
coherently extend the, unique, set of all justified arguments with as many
defensible arguments as possible. The resulting argument extensions all
express a possible point of view admitted by the premises. Any formula that
in all such extensions is the conclusion of some argument can be regarded
as a justified conclusion, even if it is not in all extensions supported by the
same argument, i.e. even if it is not the conclusion of a justified argument.
In the literature various ways of defining such extensions can be found,
all differing in subtle ways (see e.g. Dung, 1995 and Bondarenko et al., 1997,
which will be discussed below in Section 9.2.1). However, for present pur-
poses their differences do not matter; it suffices to present just one such
definition; the main idea also applies to the alternatives. The following
definition is taken from Prakken & Sartor (1997a).

Definition 7.5.3 A defensible extension is any maximal (w.r.t. set inclu-
sion) conflict-free set of arguments that includes all justified arguments.

Clearly, any ordered default theory has at least one defensible extension.
Then the notion of a justified conclusion can be redefined as follows.

Definition 7.5.4 A formula φ is a justified conclusion iff all defensible
extensions have an argument with φ as conclusion.

It would be interesting to find the corresponding dialogue rules for this


notion, but this has to be left for future research.
If we again look at Example 7.5.2, we see that it has the following
defensible extensions (I leave the (justified) arguments for valid formulas
implicit, as well as combinations of independent arguments).

E1: {[d1], [d1, d2]}
E2: {[d3], [d3, d4]}

Now in both extensions there is an argument for q, so q is a justified conclusion. It does not matter that in each extension q is the conclusion of a different argument.
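To make Definitions 7.5.3 and 7.5.4 concrete, here is a brute-force Python sketch under stated assumptions: the four arguments of the example are hard-coded with hypothetical conclusion atoms p and q (the text only fixes that [d1] and [d3] have complementary conclusions and that both chains conclude q), and conflict is checked pairwise on complementary literals.

```python
from itertools import combinations

# Hypothetical encoding of Example 7.5.2: each argument (a tuple of rule
# names) is mapped to the set of conclusions it supports.  The atoms p/q
# are assumptions for illustration; "-p" stands for the negation of p.
ARGS = {
    ("d1",): {"p"},
    ("d1", "d2"): {"p", "q"},
    ("d3",): {"-p"},
    ("d3", "d4"): {"-p", "q"},
}

def conflicts(a, b):
    """Two arguments conflict if one concludes a literal the other negates."""
    neg = lambda l: l[1:] if l.startswith("-") else "-" + l
    return any(neg(c) in ARGS[b] for c in ARGS[a])

def conflict_free(s):
    return all(not conflicts(a, b) for a in s for b in s)

def defensible_extensions(justified=frozenset()):
    """Maximal (w.r.t. set inclusion) conflict-free sets of arguments that
    include all justified arguments (Definition 7.5.3); brute force."""
    candidates = [set(justified) | set(extra)
                  for r in range(len(ARGS) + 1)
                  for extra in combinations(ARGS, r)]
    cf = [frozenset(c) for c in candidates if conflict_free(c)]
    return [s for s in cf if not any(s < t for t in cf)]

def justified_conclusions():
    """Definition 7.5.4: a formula concluded in every defensible extension."""
    exts = defensible_extensions()
    return set.intersection(*[set().union(*(ARGS[a] for a in e)) for e in exts])
```

Running this yields the two extensions E1 and E2 of the text, and q as the only justified conclusion, even though it is supported by a different argument in each extension.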
Which notion of justified conclusions is the best, the one of Definition 6.5.9 or the just-given one? In my opinion this is the wrong question; rather than being alternative definitions of the same notion, these definitions capture slightly different notions of argumentative consequence,
reflecting different ways in which conclusions can be supported by a body of
information. It seems meaningful to say that there are justified conclusions
that are supported by a justified argument, and that there are other justified conclusions that are not supported by a justified argument. Perhaps it depends on the nature of the application of the system whether we should be interested in one type of conclusion or the other, or whether their difference does not matter.
198 CHAPTER 7

7.5.4. ACCRUAL OF ARGUMENTS

A familiar phenomenon of practical reasoning is that reasons that individually do not have enough weight to support a certain conclusion often do have that weight in combination. For instance, it might be that the individual reasons that it is hot, and that it is raining, are insufficient for not going jogging, but that they are sufficient in combination. How can this phenomenon be formalized in an argumentation system? (By a formalization I here mean the combination of a formal argumentation system with a formalization methodology.)
It has been argued (Hage, 1996; Verheij, 1996) that systems which
compare individual arguments cannot model this phenomenon in a natural
way; instead a system should compare combinations of arguments. Others
(Pollock, 1995; Prakken & Sartor, 1996b) have said that combining reasons
is not a matter of inference but of formulating premises, since in general it
cannot be assumed that different reasons for a conclusion are independent.
Let us for the moment use 'reason' as ambiguous between 'rule' and
'argument', and ask what are the basic requirements that any formalization
of weighing reasons should satisfy. I see four such requirements. Firstly,
the formalization should (obviously) deal correctly with the typical case
of accrual of reasons, where the combination of individually insufficient
reasons is sufficient, as in the above-mentioned jogging example. However,
the formalization should also admit that sometimes the combination of two
reasons pro is not itself a reason pro. To modify the jogging example, for a
certain person it can be that the individual circumstances that it rains and
that it is hot are reasons for not going jogging, but that the combination
is so pleasant that it is instead a reason for going jogging. Moreover, even
if the combination of two reasons pro is a reason pro, the combination
might be weaker than the individual ones. In our example, even if the combination of rain and heat is still a reason not to go jogging, it might be a weaker reason than just rain or just heat, because the combination is less unpleasant. So even if 'if it rains then I don't jog' and 'if it is hot then I don't jog' individually outweigh 'if it is Sunday, I jog', their combination 'if it rains and it is hot, I don't jog' might not. In general terms, having more
reasons for a conclusion does not always make a stronger case. Finally, the
formal analysis should capture that sometimes reasons are not combined.
For instance, if the head coach tells an athlete that he should go running whenever it is Sunday, it is irrelevant what the other coaches say; the various prohibitions and permissions issued by members of the coaching hierarchy are not weighed in combination.
How can accrual of reasons be formalized in a way that respects these
observations? The general idea is given by the second observation, which

says that sometimes the combination of two reasons pro is instead a reason
con, and by the last observation, which says that reasons are sometimes
combined but sometimes not. These observations suggest that combining
reasons is not a matter of logic but of formulating premises. Each time when
the premises contain more than one rule with the same consequent, it should
be decided whether in addition a third rule, conjoining their antecedents
and with the same consequent, should also be added to the premises.
Plausible as this solution may seem, its formalization requires a subtlety.
To see this, let us first give a direct formalization of the idea. Suppose we have the rules
d1: It rains ⇒ ¬ I go jogging
d2: It is hot ⇒ ¬ I go jogging
The idea is that it is a decision of content whether also
d1/2: It rains ∧ It is hot ⇒ ¬ I go jogging
should be in the premises. Moreover, in general the strength of the combined rule does not depend on the strengths of d1 and d2. So if the premises also contain
d3: It is Sunday ⇒ I go jogging
then even if both d1 > d3 and d2 > d3, it may still be that d1/2 < d3. So not only combining rules, but also assigning the priorities to the combined rule is a matter of decision, to be taken in each individual case.
However, upon reflection this proposal does not seem completely adequate. Assume the priorities are such that the combined rule is weaker than its component rules: d1 > d3, d2 > d3, d1/2 < d3. This induces the following defeat relations.
[d1] strictly defeats [d3]
[d2] strictly defeats [d3]
[d3] strictly defeats [d1/2]
As remarked above, the intuitive outcome is that [d3] is justified while the other arguments are overruled. However, although [d3] strictly defeats [d1/2], it is in turn still strictly defeated by both [d1] and [d2], which themselves are not defeated by any argument. This makes [d3] overruled and the other arguments justified, as can easily be checked. However, the intuitive outcome is that I go jogging, so our first proposal is flawed.
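The status assignment can be checked mechanically. The following single-pass sketch (my own labelling shortcut over a hand-coded attack graph, not the system's full definitions) reproduces the outcome just described: [d3] overruled, the other arguments justified.

```python
# Attack graph induced by the three strict-defeat facts listed above.
STRICT_DEFEAT = {("d1", "d3"), ("d2", "d3"), ("d3", "d1/2")}
ARGS = {"d1", "d2", "d3", "d1/2"}

def statuses():
    # arguments with no strict defeater at all are justified
    justified = {a for a in ARGS
                 if not any((b, a) in STRICT_DEFEAT for b in ARGS)}
    # arguments strictly defeated by a justified argument are overruled
    overruled = {a for a in ARGS
                 if any((b, a) in STRICT_DEFEAT for b in justified)}
    # arguments all of whose strict defeaters are overruled are justified
    # too (one pass suffices for this small graph)
    justified |= {a for a in ARGS - overruled
                  if all(b in overruled
                         for b in ARGS if (b, a) in STRICT_DEFEAT)}
    return justified, overruled
```

On this graph the function returns ({"d1", "d2", "d1/2"}, {"d3"}), i.e. exactly the counterintuitive result the text points out.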
Yet there is a way to repair it, viz. by changing the representation of the example in the way described by Hage (1997, p. 204). It can be argued that if we have a hot and rainy Sunday, then the individual reasons concerning heat and rain cease to apply: when the antecedent of d1/2 is justified, d1 and d2 cannot be used any more to set up arguments.

This intuition can be formalized by using applicability clauses (cf. Section 6.6.2). First each rule is given an extra condition, expressing that the rule is assumed to be applicable.
d1: It rains ∧ ∼¬ appl(d1) ⇒ ¬ I go jogging
d2: It is hot ∧ ∼¬ appl(d2) ⇒ ¬ I go jogging
d1/2: It rains ∧ It is hot ∧ ∼¬ appl(d1/2) ⇒ ¬ I go jogging
d3: It is Sunday ∧ ∼¬ appl(d3) ⇒ I go jogging
Next inapplicability rules are added, making d1 and d2 inapplicable if the antecedent of d1/2 holds.
d4: It rains ∧ It is hot ∧ ∼¬ appl(d1/2) ⇒ ¬ appl(d1)
d5: It rains ∧ It is hot ∧ ∼¬ appl(d1/2) ⇒ ¬ appl(d2)
Then on a hot and rainy Sunday only d1/2 and d3 are applicable, and since d1/2 < d3, the outcome is that I go jogging. Note that this method is not just tinkering with the premises; it is based on the general idea that when a more specific rule with a certain consequent applies, more general rules with the same consequent have become irrelevant.
If we check how this method deals with the other three observations, we see that all of them are respected. Let us assume that it is a hot and rainy Sunday. Then firstly, in the typical case where d1 < d3 and d2 < d3 but where d3 < d1/2, we again have that only d1/2 and d3 are applicable, and since d3 < d1/2 the outcome is that I do not go jogging. Secondly, if the combination of heat and rain is regarded as a reason to go jogging instead of not to go jogging, the premises will not contain d1/2 but instead a rule
d4: It rains ∧ It is hot ∧ ∼¬ appl(d4) ⇒ I go jogging
together with the priority relations d1 < d4, d2 < d4. Then the outcome is that I go jogging, irrespective of whether d3 has higher or lower priority than d1 and d2. Finally, the case that reasons do not combine at all can very easily be formalized, by not adding d1/2 to the premises.
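A minimal sketch of the repaired representation, assuming hard-coded facts, a crude applicability check standing in for the weak-negation condition ∼¬appl, and the single priority pair d1/2 ≺ d3:

```python
# Facts on a hot and rainy Sunday (assumed atoms).
FACTS = {"rains", "hot", "sunday"}

# rule name -> (antecedent facts, conclusion)
RULES = {
    "d1":   ({"rains"}, "no_jog"),
    "d2":   ({"hot"}, "no_jog"),
    "d1/2": ({"rains", "hot"}, "no_jog"),
    "d3":   ({"sunday"}, "jog"),
}
# Inapplicability rules d4, d5: when d1/2's antecedent holds, d1 and d2
# become inapplicable (a hand-rolled stand-in for ¬appl conclusions).
INAPPLICABLE = {"d1", "d2"} if {"rains", "hot"} <= FACTS else set()
PRIORITY = {("d1/2", "d3")}   # d1/2 is weaker than d3

def outcome():
    applicable = {r for r, (ante, _) in RULES.items()
                  if ante <= FACTS and r not in INAPPLICABLE}
    # a rule loses if some applicable rule with the opposite conclusion
    # is strictly preferred to it
    winners = {r for r in applicable
               if not any((r, s) in PRIORITY and RULES[s][1] != RULES[r][1]
                          for s in applicable)}
    return {RULES[r][1] for r in winners}
```

With these premises only d1/2 and d3 survive the applicability check, and the priority pair then yields the outcome "jog", as in the text.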
Yet some feel that this solution is not satisfactory, since the idea of
accrual of reasons is intuitively very natural (see e.g. Verheij, 1996, pp.
161/2). Accordingly, they feel that reasons should combine 'by default', i.e.
unless it is explicitly said that they do not combine. It would be interesting
to investigate how the definitions of the present system can be changed to
this effect, and then to compare the two approaches. However, this has to
be left for future research.

7.6. Conclusion
Inspired by a formal analysis of legal reasoning, this chapter has investi-
gated nontrivial reasoning with inconsistent but ordered information. Two

important conclusions have emerged, both revealing that the formalization


of this kind of reasoning is more complicated than is generally acknowledged
in AI research. The first conclusion is that reasoning with inconsistent
information should somehow be modelled as constructing and comparing
incompatible arguments, in a way which reflects the step-by-step nature of
argumentation. However, this is not sufficient, since the second conclusion is
that an argumentation system still faces serious problems if standard logic
is not abandoned as the knowledge representation language: what is needed
is that defeasible rules can be expressed as one-directional rules. For this
reason approaches to formalizing nonmonotonic reasoning by changing the
way logic is used rather than changing the logic are far less attractive than is
often claimed. It has turned out that the argumentation system developed
in the previous chapter, when applied to reasoning with orderings, respects
the two conclusions. The system has also been evaluated with respect to
some general logical issues and the overall assessment has been positive.
I now turn to a very important issue, the combined use of several sources
of priority relations. This issue raises questions which justify a separate
chapter.
CHAPTER 8

REASONING ABOUT PRIORITY RELATIONS

8.1. Introduction
Whereas in Chapter 6 arguments could only be compared with respect to
specificity, in Chapter 7 any criterion has been allowed, provided that it can
be expressed as an ordering on rules. However, apart from specificity almost
nothing has yet been said about the possible sources of the priorities. For
some time in AI the (often implicit) hope has been that the sources of these
priorities are of a general, domain independent nature. As a consequence,
the question where these priorities can be found is usually not treated
as a matter of common-sense reasoning. If this question is addressed at
all, then it is usually considered a metalogical issue. In particular, much
research has concentrated on formalising the specificity principle, which, as
we have seen, can be expressed in purely logical terms. However, a brief
look at the legal domain already suffices to see that the hope that we
can find useful domain-independent sources of preferences is unrealistic.
In law, but also in many other domains of common-sense reasoning, such
as bureaucracies, collision rules are themselves part of the domain theory
(see e.g. the detailed overview of Peczenik, 1990 in the context of Swedish
law). This even holds for specificity; although checking which argument is
more specific may be a logical matter, deciding to prefer the most specific
argument is a legal decision. Moreover, the collision rules not only vary
from domain to domain, they can also be incomplete or inconsistent, in
the same way as 'ordinary' domain information can be. In other words,
reasoning about priorities is nonmonotonic reasoning. These observations
mean that in a logic that is meant to formalise this kind of reasoning, the
consequences of a set of premises do not only depend on the priorities, they
also determine the priorities. In most current nonmonotonic logics these
observations are ignored (but see Chapter 9 for some exceptions).
In this chapter these issues will be addressed. The argumentation system
will be extended in such a way that information about the priorities can
be expressed in the premises and that the priorities needed for resolving
conflicts can be derived as conclusions of these premises, in the same way
as any other conclusion. This means that arguments can be set up for and against priority conclusions and, if necessary, they can be compared with
the help of other priorities, in turn derived from the premises in the same

203
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997

way.
I start this chapter with an overview of legal collision rules, resulting in some requirements for their formalization. Then I extend the definitions of
the current system in such a way that priority information can be expressed
and derived in the system, after which I sketch and illustrate a methodology
for representing legal collision rules.

8.2. Legal Issues


8.2.1. LEGAL COLLISION RULES

Legal systems anticipate conflicts between norms by providing general colli-


sion rules. We have already come across the three principles that are present
in practically all legal systems: the Lex Superior principle, based on the
general hierarchical structure of a legal system (e.g. the constitution takes
precedence over ordinary statutes), Lex Posterior, which gives priority to
the later rule over the earlier one, and Lex Specialis, which is the specificity
principle. In addition, legal regulations often contain several special collision
rules. For example, the Dutch criminal code contains a provision, section
1-(2) Sr, which says that if the law changes during a criminal case, the
norm which is the most favourable for the suspect should be applied.
Obviously, this collision rule is intended to have precedence over the general
Lex Posterior principle. Another example is Example 3.1.6 of Chapter 3, in
which Section 1624 of the Dutch civil code (BW) states that if a contract has
features of both lease of business accommodation and another contract, and
a regulation concerning the other contract type conflicts with a regulation
concerning lease of business accommodation, the latter prevails. And we
saw that 1637c BW is a similar provision concerning labour contracts.
However, legal collision rules are not restricted to the three general
principles and special legislative collision rules. Such rules are often also
stated (and attacked) in cases of statutory interpretation, when different
interpretation methods point at opposite conclusions. For instance, lawyers
can argue about whether a socially desirable interpretation (or an interpre-
tation respecting the intentions of the rule maker) should take precedence
over a literal interpretation (see MacCormick & Summers, 1991 for detailed
discussions of collision rules concerning interpretation). Moreover, as al-
ready discussed above in Section 3.3.3, a legal decision very often involves
the weighing of reasons for and against a conclusion; and if a decision or argument makes the grounds for a particular outcome explicit, it in
fact states a collision rule. To give an Italian example from Sartor (1994),
in which the conflicting reasons are based on legal principles: according
to the principle of the protection of privacy it is forbidden to propagate
private information, while according to the principle of the freedom of

communication, it is permissible to propagate every piece of information that has public significance. These principles conflict in the case of a piece of information about the private life of a person having a public role. Now the conflict could be solved by a collision rule stating that if the involved person
is a politician, and the private information concerns aspects which may
affect his/her political functioning, then the communication rule should be
preferred.
It is important to note that, from a logical point of view, legal collision
rules behave exactly like any other legal rule. Firstly, collision rules make
their consequent (which is a priority assertion) dependent on the satisfac-
tion of certain conditions. For instance, the above collision rules from the
Dutch civil and criminal code are only applicable if other legal questions
have been answered: for applying 1637c it must be known that something
is a contract and that a certain contract is a labour contract, and for using
1-(2) Sr it must be established which rule is the most favourable for the
suspect: all these things are real legal issues, about which further legal
information exists, and which need to be reasoned about. Moreover, like
any other legal rule, legal collision rules are also defeasible. As argued by
e.g. Peczenik (1990, pp. 187-8), they only hold as a general rule; in specific
circumstances other considerations might prevail. Finally, legal collision
rules can be in conflict with each other: imagine, for example, two conflicting
statute rules, a later one concerning any type of contract and an earlier one
concerning labour contracts: then 1637c BW and Lex Posterior disagree
on which rule takes precedence; or imagine that a rule of the criminal code
changes during a case and that the earlier one is more favourable for the
suspect: then Lex Posterior and 1-(2) Sr are in conflict. To such conflicts
any other collision rule can apply: for instance, in Dutch law the latter
conflict is solved with the Lex Specialis principle.

8.2.2. REQUIREMENTS FOR A FORMAL ANALYSIS

No Hierarchy of Metalevels
I now discuss some aspects of reasoning with and about legal collision rules
that have to be respected by a formal analysis. First of all, we cannot
assume a hierarchy of separate layers of collision rules. For instance, the three Lex principles do not only apply to conflicts between 'ordinary' legal rules, but also to conflicts in which they are themselves involved. To give an
example, in Dutch law the conflict just mentioned between Lex Posterior and 1-(2) Sr is solved with the Lex Specialis principle,
and below we will even come across examples in which Lex Posterior is
applied to itself.

Combining Legal Collision Rules


A formal analysis also has to respect the way lawyers combine several
collision rules. This way is perhaps best explained by way of an example.
Consider the three Lex principles and assume for the purpose of explanation
that Lex Superior (H) is the most important one and that the temporality principle (T) in turn prevails over specificity (S) (in fact in several legal systems the relation between T and S is subject of debate). Let us see what happens in legal reasoning if two rules r1 and r2 are in conflict. If H prefers r1 over r2 and T prefers r2 over r1, things are easy: since H overrules T, r1 is preferred over r2. Now what if r1 and r2 are hierarchically equal, say, they are both statute provisions? Then H hands the decision over to T, which results in r2 being preferred over r1. However, things are different if r1 and r2 are not hierarchically equal but when their hierarchical relation is undetermined, i.e. when they are from sources of different types between which no clear hierarchical relationship exists. In Dutch law this is, for instance, the case between regulations of councils and so-called district water boards (regional authorities responsible for surface water management). In such cases the legal consensus is that the conflict cannot be solved by resorting to T but that the regulations are 'overall' incomparable.

The Scope of the Legal Collision Rules


Another feature of legal collision rules that has to be respected is that they
often have restricted scope. For instance, the three Lex principles seem
to only apply to conflicts between legislative norms: legally it seems odd to say that a new case law decision defeats an already existing statute norm because of Lex Posterior; or to say that a case law decision trying to make an exception to a statute norm succeeds in doing so because of Lex Specialis: whether the decisions override the statute norms entirely depends upon whether they are accepted as doing so by the legal community. For the same reason it cannot be said that case law is always hierarchically inferior to statute norms. In conclusion, any formalization of legal collision
rules has to be able to express their scope.

8.3. Extending the Definitions

In this section the argumentation system developed in the previous two


chapters will be adapted to reasoning about rule priorities. This involves two changes: the language of the system needs to be made capable of expressing priority information, so that priority conclusions can be argumentatively derived in the same way as any other conclusion; and the derived priority conclusions need to be made available for resolving conflicts between arguments (including priority arguments). Since only priority relations between

individual rules will be considered, the specificity comparator of Definition 6.4.15, which is an ordering not on rules but on sets of rules, will have to be reduced to an ordering on rules. Below I come back to this.
This chapter does not consider partial preorders, as in Chapter 7, but strict partial orders, i.e. orderings < which are transitive (if x < y and y < z then x < z) and asymmetric (if x < y then not y < x). This is because formally strict partial orders make things simpler, while in practical applications the difference with partial preorders does not matter, as will be explained below.

EXTENDING THE LANGUAGE

In order to be able to express priority information, the object language of our system needs to be extended with two features. First it has to contain a distinguished two-place predicate symbol ≺, denoting the ordering relation <. In addition, since the ordering is an ordering of (defeasible) premises, the language has to contain names for all defeasible rules of the language. These names are added with the third naming convention explained in Section 5.2.2: every defeasible rule scheme with the informal name name and containing the free variables x1, ..., xn is denoted by a function expression name(x1, ..., xn); and any rule obtained by instantiating this scheme with terms t1, ..., tn is denoted by name(t1, ..., tn). It is further assumed that each defeasible rule has exactly one name, but different rules are allowed to have the same name, as proved useful in Section 5.2.2. Finally, in agreement with Section 8.2.2, the logical language is not divided into separate object and metalevel languages.
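The naming convention can be illustrated with a trivial helper (the function name and string rendering are my own):

```python
def rule_name(scheme: str, *terms: str) -> str:
    # A scheme named `scheme` over free variables x1..xn is written
    # scheme(x1,...,xn); instantiating it with terms t1..tn yields the
    # rule name scheme(t1,...,tn).
    return f"{scheme}({', '.join(terms)})"
```

For instance, rule_name("T", "r1", "r2") produces the name "T(r1, r2)", and nested names such as rule_name("d1", "d3(x)", "d4(x)") are built the same way.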
With these changes, collision rules can now be expressed in the logical
language of our system, which means that the priorities can now be derived as argumentative conclusions, in the same way as any other conclusion. This makes the explicit ordering component of an ordered default theory redundant, so an ordered default theory is from now on just a set Fn ∪ Fc ∪ Δ.
Next the derived ordering must be of the desired type, i.e. a strict partial order. This can be ensured by adding the defining axioms of a strict partial order (transitivity and asymmetry) to the necessary facts. Accordingly, in the rest of this chapter it is assumed that of every default theory the set Fn contains all and only the following formulas containing the ≺ predicate:

transitivity: ∀x, y, z. x ≺ y ∧ y ≺ z → x ≺ z

asymmetry: ∀x, y. x ≺ y → ¬ y ≺ x
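Operationally, the two axioms amount to closing the derived ≺ facts under transitivity and rejecting any set of facts that violates asymmetry. A brute-force sketch (the function and its error handling are my own, not part of the system):

```python
def strict_partial_order(pairs):
    """Close a set of (r, r') priority facts under transitivity and check
    asymmetry, mirroring the two axioms added to Fn."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))   # transitivity: a < b, b < d => a < d
                    changed = True
    # asymmetry: no pair may occur in both directions
    if any((b, a) in closure for (a, b) in closure):
        raise ValueError("derived ordering violates asymmetry")
    return closure
```

For example, closing {d1 ≺ d2, d2 ≺ d3} adds d1 ≺ d3, while a cyclic input such as {a ≺ b, b ≺ a} is rejected.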
For simplicity some restrictions on the syntactic form of priority expressions will be assumed. Fc may not contain any priority expressions, while in the defeasible rules priority expressions may only occur in the consequent,

and only in the form of conjunctions of literals (recall that a literal is an


atomic formula or a negated atomic formula). This excludes, for instance,
disjunctive priority expressions. The reason for these syntactic restrictions
is that they lead to simpler definitions and do not seem to be harmful in
practical applications.

CONNECTING OBJECT AND METALEVEL


Ensuring that priority information can be derived in the system is not yet sufficient; in addition the derived priorities need to be made available for resolving conflicts between arguments; i.e. they somehow need to be lifted to the metatheory of the system, in particular to Definition 7.4.2. In other words, a formal connection must be defined between the object- and metalevel of the system: it must be ensured that r < r' if and only if there is a justified argument for r ≺ r'. (Note that the symbol < denotes the ordering used by the (metalevel) definitions of the system, e.g. by Definition 7.4.2.)
To realize this, first the following notation is needed.
Notation 8.3.1 For any ordered default theory Γ the set JustArgsΓ is the set of all arguments that are justified on the basis of Γ.
Now the idea is that the ordering component of an ordered default theory Γ is determined by the set of all priority arguments that are justified on the basis of Γ.¹ More precisely, the joint conclusions of all priority arguments in JustArgsΓ should correspond to an ordering < such that JustArgsΓ contains precisely those arguments that according to the old Definition 6.5.5 are justified on the basis of (Γ, <).
Let us first introduce the following notation for an ordering determined
by a set of arguments.
Definition 8.3.2 For any set Args of arguments
<Args = {r < r' | r ≺ r' is a conclusion of some A ∈ Args}
Furthermore, for any set of arguments Args, A (strictly) Args-defeats B on the basis of Γ iff according to Definition 6.4.17 A (strictly) defeats B on the basis of (Γ, <Args). Occasionally, the analogous notion Args-rebuts will be used, and, for any argument A, {A}-defeats will be written as A-defeats.
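Extracting <Args from a set of arguments is straightforward. In the sketch below an argument is modelled, purely as an assumption, as a set of conclusion strings, with a priority conclusion written "r < r'":

```python
def ordering_from_args(args):
    # <Args (Definition 8.3.2): all pairs (r, r') such that "r < r'" is
    # a conclusion of some argument in args.
    order = set()
    for a in args:
        for c in a:
            if " < " in c:
                r, s = c.split(" < ")
                order.add((r, s))
    return order
```

An argument concluding both q and "d2 < d4" thus contributes the single pair (d2, d4) to the ordering; arguments without priority conclusions contribute nothing.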
Now how should the dialogue rules of Chapter 6 be changed to produce the
'right' ordering? The main problem here is on the basis of which priorities
the defeating force of the moves should be determined. What must be
avoided is that for determining the defeating force of a particular move
¹ Note that this notion, referring to the ordering that is implicit in Γ, still has to be defined.

all possible priority arguments have to be generated. The pleasant surprise is that, to achieve this, a few very simple conditions suffice. For O it is sufficient that its move ∅-defeats P's previous move. This is because, as shown in Prakken & Sartor (1997a), Definitions 6.4.17 and 7.4.3 imply that if A is for some S an S-defeater of P's previous move, it is also an ∅-defeater of that move. So O does not have to take priorities into account, as is illustrated by the following dialogue (based on an implicitly assumed default theory):
P1: [d1: ⇒ p]
Now a possible move of O is
O1: [d2: ⇒ q, d3: q ⇒ ¬p]
since O1 ∅-defeats P1.
The proponent, on the other hand, should take some priorities into account. However, it suffices to consider only those priorities that are stated by P's move; more priorities are not needed, since Definitions 6.4.17 and 7.4.3 also imply that if P's argument Arg_i strictly Arg_i-defeats O's previous move, it will also do so whatever other priorities can be derived. So P can reply to O1 with
P2: [d4: ⇒ ¬q, d5: ⇒ d2 ≺ d4]
which strictly P2-defeats O1.
However, this is not the only type of move that the proponent can make. P can also argue that, although O's move ∅-defeats P's previous move, it does not do so under the ordering determined by the justified arguments; i.e. P can 'undo' the defeating force of O's move with a priority argument. For instance, P can undo the defeating force of O1 with
P2': [d5: ⇒ d3 ≺ d1]
since O1 does not P2'-defeat P1.
At first sight, the new burden of proof for P seems too liberal, since the priorities stated by P may turn out not to be justified. However, this will manifest itself in the possibility of a successful attack by O on P's priority argument. For instance, O could respond to P2' with
O2: [d6: ⇒ r, d7: r ⇒ d1 ≺ d2]
And if P has no answer, then P1 and P2' turn out not to be justified.
Now these ideas will be incorporated in the definition of a dialogue. All
conditions are the same as above in Definition 6.5.2, except for the new
type of move for P, and the references to 'defeat', which are made relative
to the defeating argument.

Definition 8.3.3 A priority dialogue based on Γ is a finite sequence of moves move_i = (Played_i, Arg_i) (i > 0), where
1. Arg_i ∈ ArgsΓ;
2. Played_i = P iff i is odd, and Played_i = O iff i is even;
3. If Played_i = Played_j = P and i ≠ j, then Arg_i ≠ Arg_j;
4. If Played_i = P then
- Arg_i strictly Arg_i-defeats Arg_{i-1}; or
- Arg_{i-1} does not Arg_i-defeat Arg_{i-2}.
5. If Played_i = O then Arg_i ∅-defeats Arg_{i-1} on the basis of Γ.
And in the definition of a dialogue tree the only change is that the defeat
condition on O's moves is made relative to the empty set.
Definition 8.3.4 A priority dialogue tree is a finite tree of moves such that
1. Each branch is a dialogue;
2. If Played_i = P then the children of move_i are all ∅-defeaters of Arg_i.
The other definitions stay the same.
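Clause 4 (P's burden of proof) can be sketched as a legality check on moves. The attack relation, each argument's last rule, and the defeat test below are crude stand-ins for Definitions 6.4.17 and 7.4.3, hard-coded for the example dialogue P1, O1, P2 above:

```python
# Toy attack graph and last rules for the example dialogue (assumptions).
ATTACKS = {("O1", "P1"), ("P2", "O1")}
LAST_RULE = {"P1": "d1", "O1": "d3", "P2": "d4"}

def defeats(a, b, order):
    # a defeats b if it attacks b and a's rule is not strictly weaker
    return (a, b) in ATTACKS and (LAST_RULE[a], LAST_RULE[b]) not in order

def strictly_defeats(a, b, order):
    return defeats(a, b, order) and not defeats(b, a, order)

def legal_P_move(arg, prev_O, prev_P, stated_priorities):
    """Clause 4: P's move is legal iff it strictly Arg_i-defeats O's
    previous move, or shows that O's move does not Arg_i-defeat P's
    move before that."""
    return (strictly_defeats(arg, prev_O, stated_priorities)
            or not defeats(prev_O, prev_P, stated_priorities))
```

With no stated priorities P2 is a legal reply to O1; if P2 instead stated d4 ≺ d3, it would neither strictly defeat O1 nor undo O1's defeat of P1, so the move would be illegal.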
In Prakken & Sartor (1997a) it is shown that the above-mentioned formal properties of the system with fixed priorities also hold for the defeasible case. Moreover it is shown that the set JustArgsΓ contains exactly those arguments that are justified on the basis of (Γ, <JustArgsΓ), according to Definition 6.5.5. Thus the new dialogue rules justify the 'right' priority conclusions.

8.4. A Formalization Methodology


In this section a general method for representing priority rules will be
described. The method is not completely original: it combines some existing
techniques, such as the naming technique explained above, which is well-known from AI applications, and a way of combining orderings discussed in Prakken & Sartor (1996b). It is also similar to the method developed by Gordon (1994; 1995); the main differences arise from the fact that
Gordon encodes the priorities in terms of Geffner & Pearl's (1992) notion
of specificity (see further Chapter 9 below).
The main idea of the method is to give only positive information about
the ordering (apart from one case, to be discussed below). The three general
principles Lex Superior, Lex Specialis and Lex Posterior thus become:
H(x, y): x is inferior to y ⇒ x ≺ y
T(x, y): x is earlier than y ⇒ x ≺ y
S(x, y): y is more specific than x ⇒ x ≺ y
Other rules can then specify when the antecedents of these rules hold. For
example,

d1(d3(x), d4(x)): ⇒ d4(x) is more specific than d3(x)
d2(x, y): x is in a statute ∧ y is in the constitution
⇒ x is inferior to y
Finally, the relation between the three principles can be specified as follows (assuming just for the sake of illustration that Lex Posterior prevails over Lex Specialis).
HT(T(x, y), H(x, y)): ⇒ T(x, y) ≺ H(x, y)
TS(S(x, y), T(x, y)): ⇒ S(x, y) ≺ T(x, y)
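The interplay of H, T and the meta-rule HT can be sketched as follows (the tuple-based fact encoding and function names are assumptions): with r2 hierarchically inferior to r1 but r1 the earlier rule, H's conclusion blocks T's, so r1 comes out preferred, mirroring the 'H overrules T' case of Section 8.2.2.

```python
FACTS = {
    ("inferior", "r2", "r1"),   # Lex Superior input: r2 inferior to r1
    ("earlier", "r1", "r2"),    # Lex Posterior input: r1 earlier than r2
}

def resolve(facts):
    # H(x, y): x is inferior to y  =>  x < y
    h = {(x, y) for (rel, x, y) in facts if rel == "inferior"}
    # T(x, y): x is earlier than y  =>  x < y
    t = {(x, y) for (rel, x, y) in facts if rel == "earlier"}
    # HT: H prevails over T, so a T-conclusion is blocked whenever H
    # yields the opposite priority
    return h | {(x, y) for (x, y) in t if (y, x) not in h}
```

On these facts only r2 ≺ r1 survives: T's conclusion r1 ≺ r2 is blocked by H via HT. If only the "earlier" fact is present, T applies unhindered.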
It is now time to explain how the method deals with specificity. The idea is that specificity observations like d1 above are provided by an external procedure that applies Definition 6.4.15 to the premises. However, as noted above, there is a problem here, since this definition does not order rules but sets of rules. Now, although the form of this definition suggests that it will be rather easy to reduce the ordering on sets of rules to an ordering on rules, I shall nevertheless not go into detailed technical investigations here but instead simply assume that this reduction is possible.
As for the requirements stated in Section 8.2.2, we have already remarked
that the logical language amalgamates object and metalanguage.
Recall next that legally it makes a difference whether two norms are
hierarchically equal or incomparable: only in the first case can the other legal
collision rules be used to resolve the conflict. How does the present method
reflect this difference? Representing that two rules are incomparable is
straightforward, by explicitly saying so. In the case of the district water board
and council regulations of Dutch law:
H(x, y): x is a water board regulation ∧ y is a council
regulation ⇒ ¬ x ≺ y ∧ ¬ y ≺ x
(this is the only case where negative priority information is expressed).
Assume now that a water board regulation w conflicts with a later council
regulation c. Then, although by T we have w ≺ c, by H we have the
contradicting conclusion ¬(w ≺ c), and since by HT we have that T ≺ H,
application of T is blocked, as desired. Note that here we make use of the
fact that distinct defeasible rule schemes can have the same name.
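The blocking pattern just described can be sketched in a few lines of Python. This is a loose illustration of my own, not part of the formal system: the encoding of conclusions as tuples, the rule names and the function `surviving` are all invented for this sketch.

```python
# Minimal sketch (illustration only) of how a priority conclusion is
# blocked by a contradicting conclusion of a hierarchically superior
# collision rule, as in the water board example.

# Each entry: (name of the rule that drew it, claim, sign).
# T concludes that w precedes c; H concludes that this does NOT hold.
conclusions = [
    ("T", ("prec", "w", "c"), True),
    ("H", ("prec", "w", "c"), False),
]

# Meta-level rule HT: T is inferior to H, so H's conclusions prevail.
meta_priority = {("T", "H")}

def surviving(conclusions, meta_priority):
    """Keep a conclusion unless a contradicting conclusion was drawn
    by a rule with higher priority than the rule that drew it."""
    kept = []
    for rule, claim, sign in conclusions:
        blocked = any(
            other_rule != rule and other_claim == claim
            and other_sign != sign
            and (rule, other_rule) in meta_priority
            for other_rule, other_claim, other_sign in conclusions)
        if not blocked:
            kept.append((claim, sign))
    return kept
```

Here only H's negative conclusion survives, so the application of T is blocked, just as in the text.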
However, how can we express that two rules are of equal priority? At
first sight, this would seem impossible, since ≺ is asymmetric. However,
there still is a way: the idea is that if two rules are intuitively of the same
hierarchical type, this is formalised by saying nothing! To see why this
works, assume that two competing rules d1 and d2 are from the same source:
for instance, both are statutes, but d1 has been issued at a later time.
Then from H nothing follows since its antecedent is not satisfied, which
leaves room for application of T, as desired; and indeed, from T we can
derive d2 ≺ d1; and since there is no counterargument, this is a justified
conclusion.

212 CHAPTER 8
The final requirement of Section 8.2.2 was that it should be possible to
express the scope of legal collision rules, for instance the fact that the three
Lex principles only apply to norms from legislation. This can very easily
be formalized, by giving the three rules H, T and S two extra antecedents.
For instance, T becomes
T(x, y): x is legislation ∧ y is legislation ∧ x is earlier
than y ⇒ x ≺ y
If one wants to express the opinion that the temporality principle also holds
between precedents, then it suffices to say:
Tp(x, y): x is a precedent ∧ y is a precedent ∧ x is earlier
than y ⇒ x ≺ y
Note that thus the temporality principle is still not applied to conflicts
between case law and legislation.

8.5. Examples
Let us now apply the new definitions and the formalization methodology to
some legal examples. The first example is about the relation between the Lex
Posterior principle and Section 1 of the Dutch criminal code (Sr). Recall
that this section says that if the law changes during a criminal case, the
rule shall be applied that is the most favourable for the suspect. Clearly, if
the most favourable rule is the original one, this collision rule itself collides
with the temporality principle. In Dutch law this principle is codified as
Section 5 of the 'Act general provisions' (AB), which makes it hierarchically
equal to 1 Sr. Let us further assume that both provisions were enacted at
the same time, so that they are also temporally equal (note that here a
collision rule, viz. Lex Posterior, is applied to itself; this possibility will
be further illustrated with the second example). The conflict can then be
resolved with Lex Specialis, observing that the collision rule of the criminal
code is more specific than 5 AB, which is written for legislation in general.
The criminal collision rule can be formalized as follows (where r, r', c and
s are variables).
1(r, r', c, s): c is a criminal case ∧ s is suspect in c ∧
r changes during c into r' ∧
r is more favourable for s than r' ⇒ r' ≺ r
In addition, Fn contains
fn: ∀r, r', c. r changes during c into r' → r is earlier than r'

This necessary fact is needed to make 1 Sr more specific than T. Accordingly,
the following observation can be added to Fc as the result of the
'external' application of Definition 6.4.15.
fc1: 1(r, r', c, s) is more specific than T(r, r')
Assume now that during a certain case case1 on theft with suspect John
Section 310 Sr, stating a maximum penalty of 6 years for theft, is changed
into 310' Sr, raising the maximum to 8 years. Clearly, the new provision is
less favourable for John than the old rule. Let us add the following rules to
Δ (which already contains H, T, S, d1 and d2).

310: x is guilty of theft ⇒ the maximum penalty for x
is 6 years
310': x is guilty of theft ⇒ the maximum penalty for x
is 8 years
and to Fc the following facts.
fc2: case1 is a criminal case
fc3: John is suspect in case1
fc4: John is guilty of theft
fc5: 310 changes during case1 into 310'
fc6: 310 is more favourable for John than 310'
(in practice fc4 will usually be the conclusion of a defeasible argument).
Finally, I assume that Fn contains the proper arithmetic axioms. With this
default theory, the instantiated version of S, of which the name reads as
S(1(310, 310', case1, John), T(310, 310'))
gives rise to the conclusion
T(310, 310') ≺ 1(310, 310', case1, John)
(in words, 1 Sr has priority over Lex Posterior) and thus to the conclusion
that the maximum penalty for John is 6 years. Here is the dialectical
proof (to maintain readability, I shall from now on, if there is no danger of
confusion, only give the function-symbol part of the rule names, and prefix
the rules in the arguments with their names).
P starts by arguing for a maximum penalty of 6 years, using the old
rule.
P1: [John is guilty of theft,
310: John is guilty of theft ⇒
the maximum penalty for John is 6 years]
O counters by using the changed rule, arguing for a maximum penalty of
8 years.

O1: [John is guilty of theft,
310': John is guilty of theft ⇒
the maximum penalty for John is 8 years]

P now neutralizes the defeating force of O1 with a priority argument using


1 Sr.

P2: [case1 is a criminal case, John is suspect in case1,
310 changes during case1 into 310',
310 is more favourable for John than 310',
1: case1 is a criminal case ∧ John is suspect in case1 ∧
310 changes during case1 into 310' ∧
310 is more favourable for John than 310' ⇒
310' ≺ 310]

O now challenges this priority argument with an appeal to Lex Posterior.


O2: [310 changes during case1 into 310',
∀r, r', c. r changes during c into r' → r is earlier than r',
T: 310 is earlier than 310' ⇒ 310 ≺ 310']

But now P takes the debate to the meta-meta level, resolving the conflict
between the collision rules 1 Sr and Lex Posterior with the Lex Specialis
rule.
P3: [1 is more specific than T, S: 1 is more specific than T
⇒ T ≺ 1]

Now O has run out of moves, and P1 has been shown to be justified.
The second example is from Italian town planning legislation, which
contains a collision rule stating that rules intended to protect buildings
of architectural merit prevail over rules concerning town planning. The
example is constructed in such a way that this collision rule conflicts with
the temporality principle. The case concerns a town planning rule saying
that if a building needs rebuilding, its exterior may be modified, and an
earlier, and conflicting, architectural-heritage rule saying that if a building
is on the list of protected buildings, its exterior may not be modified. The
example is partly intended to illustrate a technical subtlety with
philosophical significance, viz. the application of a priority rule to itself (or better,
to one of its instances). Rule d9 states that rule d3 is later than the Lex
Posterior principle T, which implies that d3 prevails over T, according to
T itself.

d1(x): x is a protected building ⇒ ¬ x's exterior
may be modified
d2(x): x needs rebuilding ⇒ x's exterior
may be modified
d3(x, y): x is about the protection of architectural
heritage ∧ y is a town planning rule ⇒
y ≺ x
T(x, y): x is earlier than y ⇒ x ≺ y
d4(d1(x)): ⇒ d1(x) is about the protection of
architectural heritage
d5(d2(x)): ⇒ d2(x) is a town planning rule
d6(d1(x), d2(y)): ⇒ d1(x) is earlier than d2(y)
d7(My-villa): ⇒ My-villa is a protected building
d8(My-villa): ⇒ My-villa needs rebuilding
d9(T(x, y), d3(x, y)): ⇒ T(x, y) is earlier than d3(x, y)

Here is the proof that the exterior of My-villa may not be modified.
P1: [d7: ⇒ My-villa is a protected building,
d1: My-villa is a protected building ⇒ ¬ My-villa's
exterior may be modified]
O can respond in only one way.
O1: [d8: ⇒ My-villa needs rebuilding,
d2: My-villa needs rebuilding ⇒ My-villa's exterior
may be modified]
P can now neutralize O's defeater with the following priority argument,
saying that the protection rule prevails over the town planning rule.
P2: [d4: ⇒ d1 is about the protection of architectural
heritage,
d5: ⇒ d2 is a town planning rule,
d3: d1 is about the protection of architectural
heritage ∧ d2 is a town planning rule
⇒ d2 ≺ d1]

But O can O-defeat this priority argument with a conflicting priority
argument based on the Lex Posterior principle.
O2: [d6: ⇒ d1 is earlier than d2,
T: d1 is earlier than d2 ⇒ d1 ≺ d2]


Now P takes the debate to the meta-meta-level, by asserting a priority
argument that neutralizes O's defeater at the meta-level. P's argument
says that, since the Lex Posterior principle is earlier than the building
regulations principle, the former is inferior to the latter on the basis of the
former. Although this seems to be self-referential, formally it is not, since
one instance of Lex Posterior speaks about another instance of itself.
P3: [d9: ⇒ T is earlier than d3,
T': T is earlier than d3 ⇒ T ≺ d3]
Now O has run out of moves, and we know that My-villa's exterior may
not be modified. Note that in this dialogue the function arguments of T
and d3 are instances of d1 and d2, while of T' they are the full versions of
T and d3. I leave it to the reader to write the full name of T'.
Interestingly, the outcome of this example is that one instance of T, i.e.
of Lex Posterior, makes another of its instances inferior to a competing rule.
A variant of this example is when a rule like 1637c BW (see Example 3.1.6)
takes precedence over Lex Specialis by virtue of being more specific. Neither
logically nor legally does this seem problematic, as also argued by Suber
(1990, p. 216). The present analysis formalises Suber's intuitions.

Interpretation Debates
Finally, I schematically illustrate how interpretation debates can be
formalized. The form of i1 and i2 below was earlier suggested by Sartor (1994),
while a similar style is used by Hage (1996). Assume that d1 and d2 are
two alternative interpretations of a statute section S.
d1: p ∧ r ⇒ q
d2: p ∨ r ⇒ q
Assume, moreover, that we have two opinions as to which is the correct
interpretation of section S.
i1: φ ⇒ d1 is the correct interpretation of Section S
i2: ψ ⇒ d2 is the correct interpretation of Section S
Let us assume that d1 is a teleological and d2 a literal interpretation of
Section S (in realistic examples this will itself be a matter of dispute, but
for simplicity I give the following rules empty antecedents and assume that
there are no conflicting rules).
i3: ⇒ d1 is a teleological interpretation of Section S
i4: ⇒ d2 is a literal interpretation of Section S
The necessary facts say that there is exactly one correct interpretation of a
legal rule, and they say that d1 and d2 are two different rules. These facts
are needed to make i1 and i2 conflicting rules.
fn1: ∀x, y, z. x is the correct interpretation of z ∧
y is the correct interpretation of z → x = y
fn2: ¬ d1 = d2

(note that an expression x = y does not say that x and y are of equal
priority but that they are the same object, viz. the same rule).
Finally, I assume there is a rule on the priority of alternative
interpretation methods, saying that teleological interpretations prevail over literal
interpretations. Again this will in practice be a matter of dispute, but again
I make some simplifying assumptions.
i5: x is a teleological interpretation of z ∧
y is a literal interpretation of z ⇒ y ≺ x
Assume now that φ and ψ hold as a matter of fact. Then the interpretation
arguments containing i1 and i2 are in conflict, since the consequents of these
rules are, together with fn1 and fn2, inconsistent. i1 and i2 are the relevant
rules of the two conflicting interpretation arguments and their conflict is
decided by the priority argument [i3, i4, i5], saying that i2 ≺ i1. So d1 is the
correct interpretation of Section S.
Note, however, that this conclusion does not yet warrant the application
of d1 and block the application of d2. To realize this, applicability clauses
must be used (this is again inspired by Sartor, 1994 and Hage, 1997). Firstly,
d1 and d2 must receive applicability assumptions.
d1: p ∧ r ∧ ∼¬appl(d1) ⇒ q
d2: (p ∨ r) ∧ ∼¬appl(d2) ⇒ q
Next we add a general rule, saying that an interpretation of a rule is not
applicable if it is not the correct interpretation of the rule.
g1: x is an interpretation of y ∧
¬ x is the correct interpretation of y
⇒ ¬appl(x)
If we finally add to Fn that literal interpretations are interpretations
fn3: ∀x, y. x is a literal interpretation of y →
x is an interpretation of y
then we can derive that d2 is not applicable from the modified theory.

8.6. An Alternative Method


In this chapter a formal model has been presented for reasoning about
priority relations between individual rules. It has proved adequate to capture
the most important features of the use of legal collision rules. However, the
present model is not the only possible one. In the literature there have been
at least two proposals to formalize reasoning with and about legal collision
rules with applicability clauses, viz. Hage (1996; 1997) and Kowalski &
Toni (1996). Let us now outline the alternative method and then briefly
compare it with the present one.

Hage (1996) has argued that the legal collision rules do not state priorities
between legal rules but applicability conditions for legal rules. In this
reading the Lex Superior principle says that if a higher rule conflicts with
a lower rule, the lower rule is inapplicable. Let us see to what extent this
view can be formalized in the present system. Every defeasible rule receives
an assumption that it is not inapplicable. For instance,
d1(x): x is a protected building ∧ ∼¬appl(d1(x))
⇒ ¬ x's exterior may be modified
Then the three principles can be expressed as follows.
H: ∼¬appl(H) ∧ x conflicts with y ∧ y is inferior to x ∧
∼¬appl(x) ⇒ ¬appl(y)
T: ∼¬appl(T) ∧ x conflicts with y ∧ x is earlier than y ∧
∼¬appl(y) ⇒ ¬appl(x)
S: ∼¬appl(S) ∧ x conflicts with y ∧ y is more specific than x
∧ ∼¬appl(y) ⇒ ¬appl(x)
Likewise for the ordering of these three principles:
HT: ∼¬appl(HT) ∧ T conflicts with H ∧ ∼¬appl(H) ⇒
¬appl(T)
TS: ∼¬appl(TS) ∧ S conflicts with T ∧ ∼¬appl(T) ⇒
¬appl(S)
HS: ∼¬appl(HS) ∧ S conflicts with H ∧ ∼¬appl(H) ⇒
¬appl(S)
In fact, this style of formalization is an instance of the method proposed
by Kowalski & Toni (1996) for encoding priorities with exception clauses.
However, the method contains one feature that has not been formalized in
our system, viz. the ability to express metalevel statements of the kind x
conflicts with y (which is intended to hold iff x and y have contradicting
consequents). As Kowalski & Toni remark, the use of such statements
depends on techniques of metalogic. Thus their use makes it necessary
to investigate the formal well-behavedness of the system, since it is
well-known that metalevel reasoning is prone to paradoxes and inconsistencies.
Another problem is that in this method it is hard to express properties
(like transitivity) of the ordering induced by the inapplicability rules; this
is the reason why the rule HS had to be included above. Nevertheless,
although Hage does not investigate the formal properties of his system, his
idea seems conceptually plausible and therefore it deserves further study.
In the present system it can already be applied if one is prepared to ensure
'by hand' the correctness of the metalevel statements on conflicting rules.
CHAPTER 9

SYSTEMS FOR DEFEASIBLE ARGUMENTATION

In this chapter the argumentation system developed in the last three
chapters will be compared with related research on defeasible argumentation.
First, Section 9.1 gives a conceptual description in general terms of the
notion of an argumentation system, followed by a discussion in Section 9.2
of the main systems of this kind that have so far been developed. After
that, Section 9.3 discusses some other recent developments, which are
not argument-based, but which nevertheless deal with the same issues as
argumentation systems.

9.1. Argumentation Systems

Let us now step back a little and see what it is that we have developed
in Chapters 6-8. I have called it a system for defeasible argumentation, or
an argumentation system (AS)1. In this section I discuss the general ideas
behind such systems.
My system is an example of a recent development, in which several
researchers have taken up the idea of formalizing nonmonotonic reasoning
as constructing and comparing alternative arguments. Most ASs have
been developed in AI research on nonmonotonic reasoning, although
Pollock's work on defeasible argumentation (see below in Section 9.2.3) was
initially developed to analyze epistemological issues in the philosophy of
science. In AI argument-based systems have been developed either as a
reformulation of (Dung, 1995; Bondarenko et al., 1997), or as an alternative
to (Loui, 1987; Simari & Loui, 1992; Vreeswijk, 1993a, 1997; Prakken &
Sartor, 1997a) earlier nonmonotonic logics. The key idea is to analyze
nonmonotonic reasoning in terms of the interactions between arguments
for alternative conclusions. Nonmonotonicity arises from the fact that
arguments can be defeated by stronger counterarguments. Since in the legal
domain notions like argument, counterargument, rebuttal and defeat are
very common, it comes as no surprise that in formalizing the defeasibility
of legal reasoning especially argumentation systems have been successfully
applied (Prakken, 1991a, 1993; Loui et al., 1993; Sartor, 1993, 1994; Gordon,
1994, 1995; Prakken & Sartor, 1996b).

1 Earlier (e.g. in Prakken, 1995b) I used the term 'argumentation framework', but I now
want to reserve this term for formalisms that leave the underlying logic for constructing
arguments unspecified.

H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
Generalizing the description of Section 6.4.1, the general structure of ASs
comprises the following five elements, although sometimes only implicitly:
an underlying logical language; definitions of an argument, of
conflicts between arguments and of defeat among arguments; and, finally, a
definition of the assessment of arguments.
ASs are built around an underlying logical language for expressing
arguments. Some ASs, like my system of Chapters 6-8, assume a particular
logic, while other systems leave the underlying logic partly or wholly
unspecified; thus these systems can be instantiated with various alternative
logics, which makes them frameworks rather than systems. Then ASs have
the notion of an argument, which corresponds to a proof in the underlying
logic. This is a narrow use of the term 'argument', which should not be
confused with the broader meaning it often has in AI and law and
argumentation theory, viz. when it stands for a dispute. Some systems, notably
Dung (1995), also leave the internal structure of an argument unspecified,
and exclusively focus on the way arguments interact. Thus Dung's
framework is of the most abstract kind; other frameworks, by contrast, also define
the internal structure of an argument.
The notions of an underlying logic and an argument still fit with the
standard picture of what a logical system is. The remaining three elements
are what makes an AS a framework for adversarial argumentation. The
first is the notion of a conflict between arguments. Usually, two types of
conflict are distinguished (starting with Pollock, 1987), viz. rebutting an
argument, where arguments have contradictory conclusions, and undercutting
an argument, where one argument denies an assumption of another
argument (as in Definition 6.6.5 above) or when it denies the link between
the premises and conclusion of the other argument; obviously this link can
only be denied of nondeductive arguments, such as inductive, abductive or
analogical arguments.
An AS also has ways of comparing arguments, on the basis of certain
standards, to see whether an argument defeats a counterargument. In AI
the specificity principle is regarded as very important, but one of the main
themes of the previous chapters was that any standard might be used: their
content is part of the domain theory, and is debatable, just as the rest of
the domain theory is.
Since attacking arguments can themselves be attacked by other arguments,
comparing pairs of arguments is not sufficient; what is also needed
is a definition of the status of arguments on the basis of all the ways in
which they interact. It is this definition that produces the output of an
AS: it typically divides arguments into three classes: arguments with which a
dispute can be 'won', arguments with which a dispute should be 'lost', and
arguments which leave the dispute undecided. Sometimes the last category
is not explicitly defined.
is not explicitly defined. In this book I have denoted these classes with the
terms 'justified', 'overruled' and 'defensible' arguments.
These notions can be defined both in a declarative and in a procedural
form. The declarative form, usually with fixed point definitions, just
declares certain sets of arguments as acceptable (given a set of premises
and evaluation criteria), without defining a procedure for testing whether
an argument is a member of this set, while the procedural form amounts
to defining just such a procedure. Thus the declarative form of an AS can
be regarded as its (argumentation-theoretic) semantics, and the procedural
form as its proof theory (similar observations have been made earlier by
Pollock, e.g. 1995, and Vreeswijk, 1993a, pp. 88-9). Note that the system
of Chapters 6-8 has only been given in a procedural form; its semantics is
described in Prakken & Sartor (1996a; 1997a). Note, finally, that it is very
well possible that, while an AS has an argumentation-theoretic semantics,
at the same time its underlying logic has a model-theoretic semantics in
the usual sense, for instance, the semantics of standard first-order logic.

9.2. Some Argumentation Systems


In this section I give an overview of some of the most important
argumentation systems and frameworks. Some other recent developments will be
discussed in Section 9.3.

9.2.1. THE BONDARENKO-DUNG-KOWALSKI-TONI APPROACH

The system of Chapters 6-8 is based on a very elegant abstract approach
to nonmonotonic logic developed in several articles by Bondarenko, Dung,
Toni and Kowalski (below called the 'BDKT approach'). The latest and
most comprehensive publication is Bondarenko et al. (1997). However, in
this section I present the approach as formulated by Dung (1995). This is
because in Bondarenko et al. (1997) the basic notion is not that of an
argument but that of a set of what they call "assumptions". In their approach
assumptions are formulas designated as having default status; inspired by
Poole's framework, they regard nonmonotonic reasoning as adding sets
of assumptions to theories formulated in an underlying monotonic logic,
provided that the contrary cannot be shown. What in their view makes the
theory argumentation-theoretic is that this provision is formalized in terms
of sets of assumptions attacking each other. Although the assumption-based
and argument-based formulations of the BDKT approach are equivalent,
I find Dung's (1995) formulation in terms of arguments more intuitive, at
least for the purpose of this book.
222 CHAPTER 9

The two basic notions of Dung (1995) are a set of arguments and a
binary relation of defeat among arguments. Dung completely abstracts from
both the internal structure of an argument and the origin of the set of
arguments, and this is what makes the approach a framework rather than
a particular system. In terms of these two notions, Dung defines various
notions of so-called argument extensions, which are intended to capture
various types of defeasible consequence. These notions are declarative, just
declaring sets of arguments as having a certain status. Finally, Dung shows
that many existing nonmonotonic logics can be reformulated as instances
of the abstract framework.
To illustrate the level of abstraction of this approach with the system
of Chapter 6: Dung completely abstracts from everything contained in
Section 6.4; he is only interested in the fact that this section results in a set
of arguments (which in that section are all logically possible arguments on
the basis of the premises) and in a defeat relation defined on that set. Thus
Dung only studies the final element of an AS, the assessment of arguments.
Here are the main formal notions of Dung (1995) (with some terminological
changes).
Definition 9.2.1 An argument-based theory (AT)2 is a pair (Args, defeat),
where Args is a set of arguments, and defeat3 a binary relation on
Args.
- An AT is finitary iff each argument in Args is defeated by at most a
finite number of arguments in Args.
- A set of arguments is conflict-free iff no argument in the set is defeated
by another argument in the set.
The idea is that an AT is defined by some nonmonotonic logic or system for
defeasible argumentation. Usually the set Args will be all arguments that
can be constructed in these logics from a given set of premises (as e.g. in
my system of the previous chapters). Unless stated otherwise, I shall below
implicitly assume an arbitrary but fixed AT.
From Dung's use of his framework it appears that he intends, as in my
system, the relation of defeat to be a weak notion: i.e. intuitively 'A defeats
B' means that A and B are in conflict and that A is not worse than B.
This means that two arguments can defeat each other. A typical example
is the Nixon Diamond, with two arguments 'Nixon is a pacifist because
he is a Quaker' and 'Nixon is not a pacifist because he is a Republican'.
If there are no grounds for preferring one argument over the other, they
(intuitively) defeat each other. As we have seen in Section 6.4.5, a stronger
notion is captured by strict defeat (not explicitly mentioned by Dung),
which by definition is asymmetric: A strictly defeats B iff A defeats B and
B does not defeat A. A standard example is the Tweety Triangle, where
intuitively (if arguments are compared with specificity) the argument that
Tweety flies because it is a bird is strictly defeated by the argument that
Tweety doesn't fly since it is a penguin.

2 Dung uses the term 'Argumentation Framework'.
3 Dung uses 'attack'.
A central notion of Dung's framework is acceptability. It captures how
an argument that cannot defend itself can be protected from attacks by a
set of arguments.
Definition 9.2.2 An argument A is acceptable with respect to a set S of
arguments iff each argument defeating A is defeated by an argument in S.
To illustrate acceptability, consider the Tweety Triangle with A = 'Tweety
is a bird, so Tweety flies', B = 'Tweety is a penguin, so Tweety does not
fly' and C = 'Tweety is not a penguin', and assume that B strictly defeats
A and C strictly defeats B. Then A is acceptable with respect to {C},
{A, C}, {B, C} and {A, B, C}, but not with respect to ∅ and {B}.
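Since acceptability only quantifies over a finite defeat relation, Definition 9.2.2 can be written out directly. The following Python sketch is my own illustration; the encoding of arguments as strings and of defeat as a set of pairs is an assumption, not taken from the text.

```python
def acceptable(a, s, defeat):
    """Definition 9.2.2: a is acceptable w.r.t. s iff every argument
    defeating a is itself defeated by some argument in s.
    defeat is a set of pairs (x, y) meaning 'x defeats y'."""
    attackers = {x for (x, y) in defeat if y == a}
    return all(any((z, x) in defeat for z in s) for x in attackers)

# Tweety Triangle: B strictly defeats A, C strictly defeats B.
defeat_tt = {("B", "A"), ("C", "B")}
```

With this encoding A is acceptable with respect to {C} and its supersets, but not with respect to ∅ or {B}, matching the example above.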
Another central notion is that of an admissible set.
Definition 9.2.3 A conflict-free set of arguments S is admissible iff each
argument in S is acceptable with respect to S.
In the Tweety Triangle the sets ∅, {C} and {A, C} are admissible but all
other subsets of {A, B, C} are not admissible.
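Definition 9.2.3 can be sketched on top of the same encoding. Again this is an illustration of my own, with invented helper names:

```python
def acceptable(a, s, defeat):
    attackers = {x for (x, y) in defeat if y == a}
    return all(any((z, x) in defeat for z in s) for x in attackers)

def conflict_free(s, defeat):
    # No argument in s defeats another argument in s.
    return not any((x, y) in defeat for x in s for y in s)

def admissible(s, defeat):
    # Definition 9.2.3: conflict-free, and every member is acceptable
    # with respect to the set itself.
    return conflict_free(s, defeat) and all(
        acceptable(a, s, defeat) for a in s)

defeat_tt = {("B", "A"), ("C", "B")}
```

Checking all subsets of {A, B, C} against `defeat_tt` reproduces the claim above: exactly ∅, {C} and {A, C} are admissible.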
In terms of the notions of acceptability and admissibility several notions
of 'argument extensions' can be defined. For instance, Dung defines the
following credulous notions.
Definition 9.2.4 A conflict-free set S is a stable extension iff every argument
that is not in S is defeated by some argument in S.
Consider an AT called TT (the Tweety Triangle) where Args = {A, B, C}
and defeat = {(B, A), (C, B)}. TT has only one stable extension, viz.
{A, C}.
Consider next an AT called ND (the Nixon Diamond), with Args =
{A, B}, where A = 'Nixon is a Quaker, so he is a pacifist', B = 'Nixon is
a Republican, so he is not a pacifist', and defeat = {(A, B), (B, A)}. ND
has two stable extensions, {A} and {B}.
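For a finite AT, stable extensions can be enumerated by brute force over all subsets of Args. A hypothetical sketch (the helper names are mine, not Dung's):

```python
from itertools import combinations

def conflict_free(s, defeat):
    return not any((x, y) in defeat for x in s for y in s)

def stable(s, args, defeat):
    # Definition 9.2.4: conflict-free, and every argument not in s
    # is defeated by some argument in s.
    return conflict_free(s, defeat) and all(
        any((a, b) in defeat for a in s) for b in args - s)

def stable_extensions(args, defeat):
    # Enumerate all subsets, smallest first, and keep the stable ones.
    return [set(c) for r in range(len(args) + 1)
            for c in combinations(sorted(args), r)
            if stable(set(c), args, defeat)]

tt = ({"A", "B", "C"}, {("B", "A"), ("C", "B")})  # Tweety Triangle
nd = ({"A", "B"}, {("A", "B"), ("B", "A")})       # Nixon Diamond
```

TT yields the single stable extension {A, C}, and ND yields the two extensions {A} and {B}, as in the examples above.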
Since a stable extension is conflict-free, it reflects in some sense a
coherent point of view. Moreover, it is a maximal point of view, in the sense
that every possible argument is either accepted or rejected. The maximality
requirement means that not all AT's have stable extensions. Consider, for
example, an AT with three arguments A, B and C, and such that A defeats
B, B defeats C and C defeats A (such circular defeat relations can occur,
for instance, in logic programming because of negation as failure, and in
default logic because of the justification part of defaults: see page 73 above).
224 CHAPTER 9

To give also such AT's a credulous semantics, Dung defines the notion of a
preferred extension.
Definition 9.2.5 A conflict-free set is a preferred extension iff it is a
maximal (with respect to set inclusion) admissible set.
All stable extensions are preferred extensions, so in the Nixon Diamond and
the Tweety Triangle the two semantics coincide. However, not all preferred
extensions are stable: in the above example with circular defeat relations the
empty set is a (unique) preferred extension, which is not stable: preferred
semantics leaves all arguments involved in the odd cycle of defeat out of
the extension, so none of them is defeated by an argument in the extension.
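The contrast between stable and preferred semantics can be checked mechanically on the odd cycle. The sketch below (my own illustration, reusing the hypothetical subset-enumeration encoding) computes preferred extensions as maximal admissible sets:

```python
from itertools import combinations

def acceptable(a, s, defeat):
    attackers = {x for (x, y) in defeat if y == a}
    return all(any((z, x) in defeat for z in s) for x in attackers)

def admissible(s, defeat):
    cf = not any((x, y) in defeat for x in s for y in s)
    return cf and all(acceptable(a, s, defeat) for a in s)

def preferred_extensions(args, defeat):
    # Definition 9.2.5: maximal (w.r.t. set inclusion) admissible sets.
    adm = [set(c) for r in range(len(args) + 1)
           for c in combinations(sorted(args), r)
           if admissible(set(c), defeat)]
    return [s for s in adm if not any(s < t for t in adm)]

# The odd cycle discussed above: A defeats B, B defeats C, C defeats A.
cycle = ({"A", "B", "C"}, {("A", "B"), ("B", "C"), ("C", "A")})
```

The odd cycle has no stable extension, but its unique preferred extension is the empty set, as the text explains.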
Preferred and stable semantics capture a credulous notion of defeasible
consequence: in cases of an irresolvable conflict as in the Nixon Diamond,
two incompatible extensions are obtained. As we have seen in Chapter 4
(e.g. Definitions 4.1.3 and 4.1.16), one way to define sceptical consequence is
to take the intersection of all credulous extensions. However, Dung defines
sceptical consequence in a different way, resulting in a unique extension. It
is this notion that was the semantic basis of the dialectical proof theory of
Section 6.5. Dung defines the sceptical semantics with a monotonic operator
F_AT, which for each set S of arguments returns the set of all arguments
acceptable to S. Since F_AT is monotonic (i.e. if S' ⊇ S, then F_AT(S') ⊇
F_AT(S)), it is guaranteed to have a least fixed point, i.e. a smallest (with
respect to set inclusion) set S ⊆ Args such that F_AT(S) = S. This least
fixed point captures the smallest set which contains every argument that
is acceptable to it; it is this set, which by definition is unique, which Dung
defines to be the sceptical (grounded) extension of AT.
Definition 9.2.6 Let AT = (Args, defeat) be an argument-based theory
and S any subset of Args. The characteristic function of AT is:
- F_AT : Pow(Args) → Pow(Args)
- F_AT(S) = {A ∈ Args | A is acceptable with respect to S}
I now give a perhaps more intuitive variant of this definition, which for
finitary AT's is equivalent to the fixed point version and in general results
in a subset of the grounded extension.
Definition 9.2.7 For any AT = (Args, defeat) we define the following
sequence of subsets of Args:
- F_AT^0 = ∅
- F_AT^{i+1} = {A ∈ Args | A is acceptable with respect to F_AT^i}.
Then the set JustArgs_AT of arguments that are justified on the basis of AT
is ∪_{i=0}^∞ (F_AT^i).
What this definition does is approximate the least fixed point of F_AT
by iterated application of F_AT, starting with the empty set. Thus first all
SYSTEMS FOR DEFEASIBLE ARGUMENTATION 225

arguments that are not defeated by any argument are added, and at each
further application of F_AT all arguments that are reinstated by arguments
that are already in the set are added. This is achieved through the notion
of acceptability. To see this, suppose we apply F_AT for the ith time: then
for any argument A, if all arguments that defeat A are themselves defeated
by an argument in F^{i-1}, then A is in F^i. To illustrate this with the above
Tweety Triangle: F_AT^1 = {C}, F_AT^2 = {A, C}, F_AT^3 = F_AT^2, so A is reinstated
at F^2 by C. Finally, that this semantics is sceptical is illustrated by the
Nixon Diamond: F_ND^1 = F_ND^2 = ∅.
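For finite theories the iteration of Definition 9.2.7 can be sketched directly; again the encoding of arguments as plain names and defeat as a set of pairs is my own illustrative choice:

```python
def grounded_extension(args, defeat):
    """Iterate the characteristic function F_AT from the empty set
    (Definition 9.2.7); for a finite theory this reaches the least
    fixed point of F_AT after finitely many rounds."""
    def acceptable(a, s):
        # every defeater of a is itself defeated by some member of s
        return all(any((c, b) in defeat for c in s)
                   for b in args if (b, a) in defeat)
    s = set()
    while True:
        nxt = {a for a in args if acceptable(a, s)}
        if nxt == s:
            return s
        s = nxt

# Tweety Triangle (B defeats A, C defeats B): C reinstates A
g = grounded_extension({'A', 'B', 'C'}, {('B', 'A'), ('C', 'B')})
# g == {'A', 'C'}

# Nixon Diamond: the grounded extension is empty
g2 = grounded_extension({'Q', 'R'}, {('Q', 'R'), ('R', 'Q')})
# g2 == set()
```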
The developers of the BDKT approach have also studied procedural
forms for the various semantics. One of them was the basis of the dialogue
game of Section 6.5, viz. Dung's (1994) game-theoretic version of extended
logic programming. Furthermore, within the BDKT approach, Kowalski
& Toni (1996) have studied formalization methodologies for rules with
exceptions, which in their surface structure are very similar to those of
Section 5.6.1 above. As already discussed above in Section 8.6, Kowalski
& Toni (1996) also define a way of expressing priority rules in terms of
applicability clauses, in a similar way as Hage (1996) does within his reason-
based logic, to be discussed below.

Evaluation
In my opinion the abstract BDKT approach is an extremely useful tool
for revealing relations and differences between the various existing
nonmonotonic logics. Moreover, the approach makes it very easy to formulate
alternative semantics for these logics. For instance, default logic, which was
shown to have a stable semantics, can very easily be given an alternative
semantics in which extensions are guaranteed to exist, like preferred or
grounded semantics. Moreover, the proof theories that have been or will be
developed for the various argument-based semantics immediately apply to
the nonmonotonic logics that are instances of these semantics. Because
of these features, the BDKT framework is also very useful as guidance in
the development of new systems; Giovanni Sartor and I have used it in this
way in developing the system of the previous chapters.
On the other hand, the level of abstraction of the BDKT approach also
leaves much to the developers of particular systems, such as the internal
structure of an argument, the ways in which arguments can conflict, and
the ways in which the defeat relations are defined. Moreover, it also seems
that at some points the BDKT approach needs to be refined or extended.
For example, in Chapter 8 I needed to extend the approach to let it cope
with reasoning about priorities. Although Kowalski & Toni (1996) argue
that their alternative method avoids extending the semantics, we have seen
in Section 8.6 that they in turn need to extend it with metalogical features.
9.2.2. POLLOCK

John Pollock was one of the initiators of the argument-based approach to
the modelling of defeasible reasoning. Originally he developed his theory as
a contribution to philosophy, in particular epistemology. Later he turned to
AI, developing a computer program called OSCAR, which implements his
theory. In this section I only discuss the logical aspects of Pollock's system;
for the architecture of the computer program the reader is referred to e.g.
Pollock (1995).
In Pollock's system, the underlying logical language is standard first-
order logic, but the notion of an argument is nonstandard. Arguments
are formed by combining so-called reasons. Technically, reasons are in-
ference schemes, relating a set of premises with a conclusion. What is
very important is that, unlike Reiter's defaults, Pollock's reasons are not
domain specific but express general epistemological principles. This is why
I do not discuss reasons as part of the underlying language but as part
of the inference system defined over the language. Reasons come in two
kinds: conclusive and prima facie reasons. Conclusive reasons logically
entail their conclusions. Thus a conclusive reason is any valid first-order
inference scheme (which means that Pollock's system includes first-order
logic). Prima facie reasons, by contrast, only create a presumption in favour
of their conclusion, which can be defeated by other reasons.
Based on his work in epistemology, Pollock distinguishes several kinds of
prima facie reasons: for instance, principles of perception, such as 'x looks
red is a reason to believe that x is red'; reasons based on the statistical
syllogism, which (roughly) says that if most F's are G and x is an F, then
(prima facie) x is a G; and reasons based on principles of induction, e.g.
(roughly) 'X is a set of F's and all members of X have the property G is
a reason to believe that all F's have the property G'.
Pollock's system is the only AS that besides linear arguments (i.e.
sequences or trees of inferences) also studies suppositional arguments (as
in natural deduction). For instance, given that P is a prima facie reason for
Q, from an empty set of premises the material implication P → Q can be
inferred as follows: first assume P, then infer Q by applying the reason, and
then infer P → Q while retracting the assumption P. Although this feature
of Pollock's system is very interesting, for simplicity I confine myself below
to linear arguments.
Pollock combines the definitions of conflicting arguments and of their
comparison. Prima facie reasons (and thereby the arguments using them)
can be defeated in two ways: by rebutting defeaters, which are at least as
strong reasons with the opposite conclusion, and by undercutting defeaters,
which are at least as strong reasons denying the connection that the
undercut reason states between its premises and its conclusion. (In agreement
with his emphasis on epistemology, Pollock defines the strength of reasons
in probabilistic terms.) His favourite example of an undercutting defeater is
when an object looks red because it is illuminated by a red light: knowing
this undercuts the reason for believing that this object is red, but it does
not give a reason for believing that the object is not red.
Over the years, Pollock has more than once changed his definition of the
status of arguments. While earlier versions (e.g. Pollock, 1987) dealt with
(successful) attack on a subargument (cf. Example 6.4.8 above) via the
definition of defeat, as in my system, the latest version makes this part of
the status definition (by explicitly requiring that all proper subarguments
of an 'undefeated' argument are also undefeated). While this change is
just a matter of taste, a more substantial change is that, while his earlier
definitions correspond to the grounded semantics of Definition 9.2.7 (as
shown by Dung, 1995), his latest version corresponds to a slight refinement
of the preferred semantics of Definition 9.2.5 (as remarked by Bondarenko
et al., 1997). Here is the latest definition, presented in Pollock (1995).4
Definition 9.2.8 (Pollock). An assignment of 'defeated' and 'undefeated'
to a set S of arguments (closed under subarguments) is a partial defeat
status assignment iff it satisfies the following conditions.
1. All premises are (as arguments) assigned 'undefeated';
2. A ∈ S is assigned 'undefeated' iff:
(a) All proper subarguments of A are assigned 'undefeated'; and
(b) All arguments defeating A are assigned 'defeated'.
3. A ∈ S is assigned 'defeated' iff:
(a) One of A's proper subarguments is assigned 'defeated'; or
(b) A is defeated by an argument that is assigned 'undefeated'.
A defeat status assignment is a maximal (with respect to set inclusion)
partial defeat status assignment.
The conditions (2a) and (3a) on the proper subarguments of A establish by
definition what I prove as Proposition 6.5.6, while the conditions
(2b) and (3b) on the defeaters of A are the analogues of Dung's notion of
acceptability.
As in the previous section it is easy to verify that in case of an undecided
conflict, i.e. when two arguments defeat each other, an input has more than
one status assignment. Since Pollock wants to define a sceptical consequence
notion, he therefore has to consider the intersection of all assignments.
Accordingly, he defines, relative to a set S of arguments, an argument to
be undefeated (in my terms 'justified') iff in all status assignments to S it
is assigned 'undefeated'. Moreover, an argument is defeated outright (in my
terms 'overruled') iff in no status assignment to S it is assigned 'undefeated',
and provisionally defeated (in my terms 'defensible') otherwise.
⁴ Actually, Pollock states his definition in terms of an inference graph instead of a set
of arguments.
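Pollock's status assignments can be illustrated with a small sketch of my own, simplified to 'flat' arguments without proper subarguments (so conditions (2a) and (3a) are vacuous, and the premise clause reduces to: arguments without defeaters must be 'undefeated'); the helper names are hypothetical:

```python
from itertools import product

def status_assignments(args, defeat):
    """All maximal partial defeat status assignments (Definition 9.2.8,
    restricted to arguments without proper subarguments)."""
    args = sorted(args)
    defeaters = {a: [b for b in args if (b, a) in defeat] for a in args}
    def ok(lab):
        for a in args:
            all_def = all(lab.get(b) == 'defeated' for b in defeaters[a])
            some_und = any(lab.get(b) == 'undefeated' for b in defeaters[a])
            # the two 'iff' clauses of the definition
            if (lab.get(a) == 'undefeated') != all_def:
                return False
            if (lab.get(a) == 'defeated') != some_und:
                return False
        return True
    partial = [lab
               for vals in product(['undefeated', 'defeated', None],
                                   repeat=len(args))
               for lab in [{a: v for a, v in zip(args, vals)
                            if v is not None}]
               if ok(lab)]
    # keep only the maximal partial assignments
    return [l for l in partial
            if not any(l != m and l.items() <= m.items() for m in partial)]

def status(a, assignments):
    """'justified', 'overruled' or 'defensible', following Pollock."""
    if all(l.get(a) == 'undefeated' for l in assignments):
        return 'justified'
    if not any(l.get(a) == 'undefeated' for l in assignments):
        return 'overruled'
    return 'defensible'

# Nixon Diamond: two maximal assignments, so Q and R are defensible
assigns = status_assignments({'Q', 'R'}, {('Q', 'R'), ('R', 'Q')})
```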
In the previous subsection we have seen that the BDKT approach leaves
the origin of the set of 'input' arguments unspecified. At this point Pollock
develops some interesting ideas. At first sight it might be thought that
the set S of the just-given definitions is just the set of all arguments that
are logically possible on the basis of the premises. However, this is only
one of the possibilities that Pollock considers (calling it ideal warrant).
He also considers two other ways of defining the set S, both of which
have a computational flavour. This is because Pollock wants his logical
definitions to serve as a standard for computer programs and, as already
explained above in Section 4.3.1, computer programs cannot be guaranteed
to find all possible arguments and their counterarguments in finite time.
The first is the notion of justification, which is when the set S contains just
those arguments that have actually been constructed by a reasoner. Thus
justification captures the current status of a belief; it may be that further
reasoning (without adding new premises!) changes the status of a conclusion.
This cannot happen for the final consequence notion, called warrant.
An argument A is warranted iff always after performing a finite number of
inferences eventually a stage is reached where A is undefeated relative to
the set S of arguments constructed thus far and A stays undefeated relative
to every S' ⊇ S that is the result of making (finitely) more inferences. The
difference between warrant and ideal warrant is subtle: it has to do with the
fact that, while in determining warrant every set S' ⊇ S that is considered
is finite, in determining ideal warrant the set of all possible arguments has
to be considered, and this set can be infinite.
Although the notion of warrant is computationally inspired, as Pollock
observes there is no automated procedure that can determine of any
warranted argument that it is warranted: even if in fact a warranted argument
stays undefeated after some finite number n of computations, a reasoner
cannot know at stage n whether it has reached a point where the argument
stays undefeated, or whether more computation will change its status.

Evaluation
Evaluating Pollock's system, we can say that it is based on a deep and
plausible philosophical (epistemological) theory of defeasible reasoning.
Moreover, logically it is a rich theory, including both linear and suppositional
arguments, and deductive as well as nondeductive (mainly statistical and
inductive) arguments, with a corresponding distinction between two types
of conflicts between arguments. Pollock's definition of the assessment of
arguments is, as noted above, related to Dung's preferred semantics.
Compared to grounded semantics, used in my system, the main advantage of
preferred semantics seems to be that it captures 'floating conclusions' (cf.
Section 7.5.3 above). However, as I explained in that section, in my opinion
it is very well possible to use different semantics in parallel, to capture
slightly different senses in which an argument can be supported by the
premises. With respect to AI applications it is interesting that Pollock
addresses computational issues, with the idea of partial computation
embodied in the notions of warrant and especially justification, and that he
has implemented his system in a computer program.
However, since Pollock focuses on epistemological issues, his system is
not immediately applicable to practical (including legal) reasoning. For
instance, the use of probabilistic notions makes it difficult to give an account
of reasoning with and about collision rules such as I have given in Chapter 8.
Moreover, it would be interesting to know what Pollock would regard as
suitable reasons of practical reasoning. With respect to legal applications it
would be interesting to study how, for instance, analogical and abductive
arguments can be analyzed in Pollock's system as giving rise to prima facie
reasons. Finally, it would be interesting to study whether Pollock's system
allows for a proof theory in dialectical style, which in my opinion is very
desirable for legal applications.

9.2.3. LIN AND SHOHAM

Before the BDKT approach, an earlier attempt to provide a unifying
framework for nonmonotonic logics was made by Lin & Shoham (1989). They
show how any logic, whether monotonic or not, can be reformulated as a
system for constructing arguments. However, in contrast with the other
theories in this section, they are not concerned with comparing incompati-
ble arguments, and so their framework cannot be used as a theory of defeat
among arguments.
The basic elements of Lin & Shoham's abstract framework are an un-
specified logical language, only assumed to contain a negation symbol,
and an also unspecified set of inference rules defined over the assumed
language. Inference rules are either monotonic or nonmonotonic. Arguments
can be constructed by chaining inference rules into trees. The nonmonotonic
inference rules are not domain specific as in default logic, but are general
inference rules, assumed to receive their justification from a logical
interpretation of the underlying language.
Although the lack of the notions of conflicting arguments and of their
comparison is a severe limitation, in capturing nonmonotonic consequence
Lin & Shoham introduce a notion which for defeasible argumentation is very
relevant, viz. that of an argument structure. This is a set T of arguments
satisfying the following conditions:
1. The set of 'base facts' (which roughly are the premises) is in T;
2. Of every argument in T all its subarguments are in T;
3. The set of conclusions of arguments in T is deductively closed and
consistent.
Lin & Shoham then reformulate some existing nonmonotonic logics in
terms of monotonic and nonmonotonic inference rules, and show how the
alternative sets of conclusions of these logics can be captured in terms of
argument structures with certain completeness properties.
To apply the notion of argument structure to Pollock's system, in his
definition of justification we can regard a set of arguments that, relative
to a certain reasoning stage, is justified, as an argument structure, with
one difference: Pollock's sets of justified arguments are not closed under
deductive consequence, which violates Condition (3) (note that both for
Pollock and for Lin & Shoham the sets of arguments are not closed under
defeasible reasons or inference rules).
The notion of an argument structure can also be applied to my system,
in particular to the set of all justified arguments. Obviously each fact f is
contained in this set in the form of a singleton argument [f], and by
Proposition 6.5.6 all subarguments of justified arguments are justified as well. In
this book I have not shown that the set of justified conclusions is deductively
closed, but this can be shown along the lines of Prakken (1993). In sum,
the set of justified arguments of my system is an argument structure.

9.2.4. VREESWIJK'S ABSTRACT ARGUMENTATION SYSTEMS

Like the BDKT approach and Lin & Shoham (1989), Vreeswijk (1993a;
1997) also aims to provide an abstract framework for defeasible argumen-
tation. His framework builds on the one of Lin & Shoham, but contains the
main elements that are missing in their system, viz. notions of conflict
between arguments and of comparing conflicting arguments. Like Lin &
Shoham's, Vreeswijk's system also assumes an unspecified logical language
(only assumed to contain the symbol ⊥, denoting 'False'), and an unspecified
set of monotonic and nonmonotonic inference rules (which Vreeswijk
calls 'strict' and 'defeasible'), which also makes his system an abstract
framework rather than a particular system. Other aspects taken from Lin &
Shoham are that in Vreeswijk's framework arguments can also be formed by
chaining inference rules into trees and that Vreeswijk's defeasible inference
rules are also not domain specific but general logical principles. However,
Vreeswijk adds to Lin & Shoham's basic elements an ordering on
arguments (on which more below).
In applications of his abstract system Vreeswijk copes with defeasible
statements by assuming an object language with a defeasible connective
φ > ψ and by also assuming a defeasible inference rule of the form
{φ > ψ, φ} |∼ ψ, supposedly justified by the semantic interpretation of
>. As for conflicts between arguments, a difference from all other systems
of this section is that a counterargument is in fact a set of arguments:
Vreeswijk defines a set Σ of arguments to be incompatible with an argument σ
iff the conclusions of Σ ∪ {σ} strictly imply ⊥. Vreeswijk has no explicit
notion of undercutting conflicts; he claims that he can formalize them as
arguments for the denial of a defeasible conditional.
As for the assessment of arguments, Vreeswijk's declarative definition
(which he says is about "warrant") is similar to Pollock's definition of a
defeat status assignment: both definitions explicitly reflect the step-by-
step nature of comparing arguments and both lead to branching sets of
conclusions in case of irresolvable conflicts. However, there are some
differences in the details, which may make Vreeswijk's definition closer to stable
semantics than to preferred semantics, as is Pollock's definition (but this
should be formally verified). Paraphrasing Vreeswijk's definition, it says
that an argument is "in force" on the basis of a set of premises iff
1. The argument consists of just a premise; or
2. The argument is deductive and all its proper subarguments are in force;
or
3. The argument is defeasible and all its proper subarguments are in
force and of each set of incompatible arguments at least one member
is inferior to it, or is not in force.
In the Nixon Diamond this results in 'the Quaker argument is in force iff
the Republican argument is not in force'. To deal with such circularities
Vreeswijk defines an extension of the premises as any set of arguments
satisfying the above definition. With equally strong conflicting arguments,
as in the Nixon Diamond, this results in multiple extensions.
Vreeswijk extensively studies various other characterizations of defea-
sible argumentation. Among other things, he develops the notion of an
'argumentation sequence'. Such a sequence can be regarded as a sequence
of Lin & Shoham's (1989) argument structures (but without the condition
that these are deductively closed) where each following structure is con-
structed by applying new inference rules to the preceding structure. An
important addition to Lin & Shoham's notion is that a newly constructed
argument is only appended to the sequence if it survives all counterattacks.
Thus the notion of an argumentation sequence embodies, like Pollock's
notion of 'justification', the idea of partial computation, i.e. of assessing
arguments relative to the inferences made so far. Vreeswijk also devel-
ops a procedural version of his framework in dialectical style (see also
Vreeswijk, 1993b, 1995); this version was one of the sources of inspiration
of the dialogue game of Section 6.5 above.
Vreeswijk also discusses a distinction between two kinds of nonmonotonic
reasoning, 'defeasible' and 'plausible' reasoning. According to him,
the above definition captures defeasible reasoning, which is unsound (i.e.
defeasible) reasoning from firm premises, as in 'typically birds fly, Tweety
is a bird, so presumably Tweety flies'. Plausible reasoning, by contrast, is
sound (i.e. deductive) reasoning from uncertain premises, as in 'all birds
fly (I think), Tweety is a bird, so Tweety flies (I think)'. The difference is
that in the first case a default proposition is accepted categorically, while
in the second case a categorical proposition is accepted by default. In fact,
Vreeswijk would regard reasoning with ordered premises such as I have
studied in Chapters 6-8 not as defeasible but as plausible reasoning.
One element of this distinction is that for defeasible reasoning the
ordering on arguments is not part of the input theory, reflecting collision
rules for, or degrees of belief in, premises, but a general ordering of types
of arguments, such as 'deductive arguments prevail over inductive argu-
ments' and 'statistical inductive arguments prevail over generic inductive
arguments'. Accordingly, Vreeswijk assurnes the ordering on arguments to
be fixed for all sets of premises (although relative to a set of inference
rules). Vreeswijk formalizes plausible reasoning independently of defeasible
reasoning, with the possibility to define input orderings on the premises
(but not with reasoning about the orderings), and then combines the two
formal treatments. To my knowledge, Vreeswijk's framework is unique in
treating these two types of reasoning in one formalism as distinct forms
of reasoning; usually the two forms are regarded as alternative ways to
look at the same kind of reasoning (cf. e.g. the 'inconsistency handling'
and 'defeasible conditional' approaches to default reasoning discussed in
Chapter 4).
Evaluating Vreeswijk's framework, we can say that, like Pollock's but in
contrast to BDKT, it formalizes only one type of defeasible consequence
and that it pays little attention to the details of comparing arguments, but
that it is philosophically well-motivated, and quite detailed with respect to
the structure of arguments and the process of argumentation.

9.2.5. NUTE'S DEFEASIBLE LOGIC

In several publications Donald Nute has developed a family of what he calls
'defeasible logics'. Since his approach has, in line with my conclusions of
Chapter 6, a one-directional conditional, and since his system is very well
suited for implementation, I shall discuss it in some detail. In particular,
I present the version described in Nute (1992). Nute's systems are based
on the idea that defaults are not propositions but inference licences. Thus
Nute's defeasible rules are, like Reiter's defaults, one-directional. However,
unlike Reiter's defaults they are two-place; assumptions are dealt with by
an explicit category of defeater rules, which are comparable to Pollock's
undercutting defeaters, although in Nute's case they are, like his defeasible
rules, not intended to express general principles of inference but, as in
default logic, domain-specific generalizations.
As for the underlying logical language, since Nute's aim is to develop
a logic that is efficiently implementable, he keeps the language as simple
as possible. He assumes a logic programming-like language with three
categories of rules: strict rules A → p, defeasible rules A ⇒ p and defeaters
A ⤳ p. In all three cases p is a (in my terms) strong literal, i.e. an atomic
proposition or a classically negated atomic proposition, and A is a finite
set of strong literals. Strict and defeasible rules play the same role as in my
system, but defeaters must be read as 'if A then it might be that p', and
they can only be used to block an application of a rule B ⇒ p̄.⁵ An example
is 'Genetically altered penguins might fly', which undercuts 'Penguins don't
fly'. Thus Nute's system has, like Pollock's and my system, both rebutting
and undercutting conflicts between arguments.
Arguments can be formed by chaining rules into trees, and conflicting
arguments are compared with the help of an ordering on the rules.
Actually, Nute does not have an explicit notion of an argument; instead he
incorporates it in two notions of derivability, strict (⊢) and defeasible (|∼)
derivability, to be explained below. To capture nonderivability Nute does
not use the familiar notions ⊬ (meaning 'not ⊢') and |≁ (meaning 'not
|∼'). Instead, his aim of designing a tractable system leads him to define
two notions of demonstrable nonderivability ⊣ and ∼⊣, which require that
a proof of a formula fails after finitely many steps.
Finally, as for the assessment of arguments, as just stated Nute's notion
of an argument is implicit in his definitions of derivability. Strict derivability
of a formula p is simply defined as the existence of a tree of strict rules with p
as its root. The definition of defeasible derivability is more complex, although
the basic idea is simple. Nute has two core definitions, depending on whether
the last rule of the tree is strict or defeasible. The definition for the second
case is as follows (for a set of premises T).
Definition 9.2.9 (defeasible derivability) T |∼ p if there is a rule A ⇒ p ∈
T such that
1. T ⊣ p̄, and
2. for each a ∈ A, T |∼ a, and
3. for each B → p̄ ∈ T there is a b ∈ B such that T ∼⊣ b, and
4. for each C ⇒ p̄ ∈ T or C ⤳ p̄ ∈ T, either
(a) there is a c ∈ C such that T ∼⊣ c, or
(b) A ⇒ p has higher priority than C ⇒ p̄ (or than C ⤳ p̄).
⁵ For any p, if p is an atomic formula then p̄ stands for ¬p, and if p is a negated atomic
formula ¬q, then p̄ stands for q.
Condition (1) says that the opposite of p must demonstrably be not strictly
derivable. This gives strict arguments priority over defeasible arguments.
For the rest, this definition has a typical inductive structure, very suitable
for implementation. There must be a defeasible rule for p which, firstly,
'fires', i.e. of which all antecedents are themselves defeasibly derivable
(Condition 2) and which, secondly, is of higher priority than any conflicting rule
which also fires: for any rule which is not lower, at least one antecedent
must be demonstrably nonderivable (Conditions 3-4). As a special case
Condition (3) gives priority to strict rules over defeasible rules; for the
rest these priorities must be defined by the user (Condition 4). Nute pays
much attention to specificity (as noted above, my Definition 6.4.15 is a
generalization of Nute's) but he also studies the case of arbitrary orderings
(but no reasoning about priorities).
A literal can also be derived defeasibly from a strict rule, viz. when one
of its antecedents is itself derived defeasibly. When there is a strict rule
for p the definition of defeasible derivability is simpler: since strict rules
have priority over the other two categories, Condition 4 can be dropped. In
consequence, defeasible derivability from a strict rule can only be blocked
by derivability from a conflicting strict rule.
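Under the assumption of a finite, acyclic rule set, the defeasible-rule case of Definition 9.2.9 can be sketched as follows; the tuple encoding of rules, the use of `not defeasibly(...)` as a stand-in for demonstrable nonderivability, and the priority function `prio` are my own simplifications, not Nute's actual formulation (condition 4 here treats defeasible rules and defeaters for the opposite literal uniformly):

```python
def neg(p):
    """The complement p-bar of a strong literal (footnote 5)."""
    return p[1:] if p.startswith('~') else '~' + p

def strictly(T, p):
    """Strict derivability: a tree of strict rules with root p exists."""
    return any(kind == 'strict' and head == p and
               all(strictly(T, a) for a in body)
               for (kind, body, head) in T)

def defeasibly(T, p, higher):
    """Sketch of Definition 9.2.9 for finite, acyclic rule sets;
    higher(r1, r2) is the user-supplied priority relation."""
    if strictly(T, p):          # facts and purely strict derivations
        return True
    for rule in T:
        kind, body, head = rule
        if kind != 'defeasible' or head != p:
            continue
        if strictly(T, neg(p)):                                # condition 1
            continue
        if not all(defeasibly(T, a, higher) for a in body):    # condition 2
            continue
        if any(k == 'strict' and h == neg(p) and
               all(defeasibly(T, b, higher) for b in B)
               for (k, B, h) in T):                            # condition 3
            continue
        if all(k == 'strict' or h != neg(p) or
               any(not defeasibly(T, c, higher) for c in C) or
               higher(rule, (k, C, h))
               for (k, C, h) in T):                            # condition 4
            return True
    return False

# Tweety: a penguin that is a bird; the more specific rule wins.
T = [('strict', [], 'penguin'),
     ('strict', ['penguin'], 'bird'),
     ('defeasible', ['bird'], 'flies'),
     ('defeasible', ['penguin'], '~flies')]
prio = lambda r1, r2: 'penguin' in r1[1] and 'bird' in r2[1]
```

With this theory and priority, '~flies' is defeasibly derivable and 'flies' is not, mirroring the specificity behaviour described in the text.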
At first sight, these definitions look, apart from the simpler language,
much the same as my system: both systems have one-directional defeasible
rules and in both systems arguments are compared with respect to the
priority relation of the last defeasible rules used in the argument. Moreover,
the differences that Nute's arguments are trees while mine are sequences,
and that the step-by-step comparison of arguments is explicit in Nute's
system while implicit in mine (cf. Proposition 6.5.6) are just a matter of
design. However, upon closer reflection, there are some important
differences. Firstly, Nute also applies the priorities in case of undercutting rules.
More importantly, Nute's system exhibits the extreme form of scepticism
discussed above in Section 7.5.2: this is because an inference of p can only be
blocked by a rule for p̄ if all antecedents of that rule are derivable or, in my
terms, justified. Since Nute has no third category 'defensible' in between
'(demonstrably) derivable' and '(demonstrably) not derivable', two rules
that are in an irresolvable conflict do not give rise to conclusions and thus
cannot block other inferences.
Finally, Nute's system behaves differently when a conflict involves strict
rules, as in Example 6.4.6 above. Here is the example again:
x has children ⇒ x is married
x lives alone ⇒ x is a bachelor
x is married → ¬x is a bachelor
In my system the outcome depends on the priority relation between the two
defeasible rules, since it is natural to say that the conflict is between these
two rules; the strict rule is just a linguistic convention, declaring the heads
of the defeasible rules incompatible. By contrast, in Nute's system only
rules with directly contradicting heads are compared, and since strict rules
prevail over defeasible rules, the outcome is that x is a bachelor, even if the
first defeasible rule has priority over the second. I regard Nute's outcome
as counterintuitive, which is the reason why in Prakken & Sartor (1996b;
1997a) the more complex definitions of Section 6.4.4 were developed.
On the positive side, Nute has developed a system which gives intuitive
results for a large class of benchmark examples and which, due to its simple
language and its transparent definitions, is very suitable for implementation.
In fact, Nute shows that the system is decidable. Moreover, he has
implemented the logic as an extension of Prolog and already used it in
expert system applications (see Nute, 1993).

9.2.6. SIMARI AND LOUI

Simari & Loui (1992) present a declarative system for defeasible
argumentation that combines the ideas of Pollock (1987) on the interaction
of arguments with Poole's (1985) specificity definition and Loui's (1987)
view on defaults as metalinguistic rules. Their system is similar to the one
of Chapter 6 above in several respects. They too divide the premises into
sets of contingent and necessary first-order formulas and one-directional
default rules, and they too have a sceptical (but not extremely sceptical)
notion of defeasible consequence. In fact, Simari & Loui use the definition
of Pollock (1987), which, as remarked above, corresponds to the sceptical
semantics of Definition 9.2.7. For comparing conflicting arguments Simari
& Loui use Poole's (1985) specificity definition.
The main differences between Simari & Loui's system and the one of
this book are that their system does not have assumptions in its language,
so that they only have the head-to-head kind of conflict, that they only
compare arguments with respect to specificity, and that they do not formalize
reasoning about priorities. Moreover, because they use Poole's specificity
definition, their system is subject to the problem discussed above in Section 6.3.1
(although this problem is addressed in Loui & Stiefvater, 1992). On the
other hand, Simari & Loui prove some interesting formal properties of
arguments and they sketch an architecture for implementation, which has
the same dialectical form as my dialectical proof theory.
236 CHAPTER 9

9.2.7. GEFFNER AND PEARL'S CONDITIONAL ENTAILMENT


Geffner & Pearl (1992) present a proof theory for defeasible condition-
als based on the idea of comparing arguments. An interesting aspect of
their theory is that it is derived from a model-theoretic semantics, defin-
ing a nonmonotonic notion of "conditional entailment". Their main idea
is to translate defeasible conditionals φ > ψ into material implications
(φ ∧ δᵢ) → ψ, in which δᵢ is an applicability predicate similar to the ones
discussed in Chapter 5. These expressions are evaluated in a preferential
model structure (cf. Section 4.2.1), in which those models are preferred in
which as few applicability predicates are made false as possible. In making
this precise Geffner and Pearl define a class of "admissible orderings" on
the δᵢ's which, if respected by the preference relation on models, reflects the
notion of specificity. Although this notion is the only source of priorities that
Geffner & Pearl consider, their formalism seems not to exclude orderings
on the δᵢ's based on other standards, such as a hierarchy of the conditionals
in which the δᵢ's occur.
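The preferential evaluation can be illustrated with a toy propositional version of the familiar penguin example. The encoding below is my own drastic simplification, not Geffner & Pearl's actual definitions: the priority ordering over the applicability atoms is simply stipulated (specificity makes the penguin default more important), whereas in conditional entailment admissible orderings are derived from the conditionals themselves.

```python
# Toy sketch of conditional-entailment-style evaluation (hypothetical
# simplification; names d1, d2 and the preference test are mine).
from itertools import product

# Defaults "bird > flies" and "penguin > not flies", translated into
# material implications guarded by applicability atoms d1, d2.
atoms = ["bird", "penguin", "flies", "d1", "d2"]
prio = {"d1": 1, "d2": 2}      # d2 is more specific, hence more important

def holds(m):
    return (m["penguin"]                              # fact
            and (not m["penguin"] or m["bird"])       # penguin -> bird
            and (not (m["bird"] and m["d1"]) or m["flies"])
            and (not (m["penguin"] and m["d2"]) or not m["flies"]))

models = [m for vals in product([True, False], repeat=5)
          if holds(m := dict(zip(atoms, vals)))]

def violated(m):
    return {d for d in ("d1", "d2") if not m[d]}

def better(m1, m2):
    """m1 is preferred: each applicability atom falsified only by m1 is
    outweighed by a more important one falsified only by m2."""
    v1, v2 = violated(m1), violated(m2)
    return v1 != v2 and all(any(prio[e] > prio[d] for e in v2 - v1)
                            for d in v1 - v2)

preferred = [m for m in models if not any(better(n, m) for n in models)]
# In every preferred model the more specific default wins:
assert preferred and all(not m["flies"] for m in preferred)
```

Only the models falsifying just δ₁ survive, so ¬flies holds in all preferred models and is conditionally entailed, mirroring how the admissible ordering makes specificity decide the conflict.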
In my opinion a real drawback of their system is that in representing
defeasible conditionals they use the material implication, which again gives
rise to the formal possibility of intuitively impossible counterarguments.
If the ordering on the δᵢ's reflects specificity this only holds if in cases
like Example 6.3.3 the antecedents of (1) and (3) contain no more than an
applicability clause, but with orderings based on hierarchies of premises the
problems will be exactly the same as before in Examples 7.2.6 and 7.2.7.
This cannot be repaired by just giving up the material implication for
expressing defaults, since that would give up the possibility of a formal
semantics of the kind Geffner and Pearl have developed.

Gordon's Use of Conditional Entailment


Gordon (1994; 1995) uses Geffner & Pearl's system in a way that was
not explicitly intended by these authors and that makes it more suit-
able for expressing legal knowledge. Gordon's use is a component of his
'Pleadings Game' (to be discussed below in Section 10.4.5), which models
the common-law procedure of pleading as a dialogue game in which the
players propose premises and construct and attack arguments according to
certain discourse rules. As components of the game, Gordon needs a logical
language for representing both explicit exceptions and priority information,
and an argumentation system for computing the outcome of a game. For
the latter purpose Gordon uses Geffner & Pearl's proof theory. However,
he cannot simply use their way of representing knowledge, since this does
not permit the expression of explicit exceptions and explicit priorities.
Therefore Gordon develops his own logical language, which he then gives
a logical interpretation by mapping it in a particular way onto Geffner &

Pearl's original language. The resulting system has all the elements of my
system: assumptions, applicability, strict and defeasible rules, and reasoning
with and about priorities.
Still, with respect to priorities there are some differences. Gordon's idea
is to use the fact that in conditional entailment specificity is hardwired in
the logic (via the requirement that orderings are admissible), by transform-
ing winning rules (according to any priority criterion) into more specific
rules. However, this is obviously impossible when the inferior rule is more
specific than the winning one, precisely because specificity is in Geffner &
Pearl's system the overriding kind of defeat. As I have discussed above in
Chapter 8, and as Gordon (1995) acknowledges, this is undesirable in legal
applications.

9.2.8. GENERAL COMPARISON

Perhaps at this point the reader is overwhelmed by the variety of systems


that I have discussed. At first things seemed simple, since in Chapters 6-8
I developed one single argumentation system, but in the present chapter
this system has turned out to be just one of many. Nevertheless, in my
opinion, the situation is not so dramatic. In particular the BDKT approach
has shown that a unifying account is possible; not only has it shown that
many differences between argument-based systems are variations on just
a few basic themes, but it has also shown how extension-based logics can
be reformulated in argument-based terms. In addition, several differences
between the various systems appear to be mainly a matter of design, i.e.
the systems are, to a large extent, translatable into each other. This holds,
for instance, for the conceptions of arguments as sets (Simari & Loui),
sequences (my system) or trees (Lin & Shoham, Nute, Vreeswijk), and for
the implicit (BDKT, Simari & Loui, my system), or explicit (Pollock, Nute,
Vreeswijk) stepwise comparison of arguments. Moreover, other differences
result from different levels of abstraction, notably with respect to the under-
lying logical language, the structure of arguments and the source of priority
relations. And some systems extend other systems: for example, Vreeswijk
extends Lin & Shoham by adding the possibility to compare conflicting
arguments, and the system of Chapter 8 extends the BDKT approach to
reasoning about priorities. Finally, the declarative form of some systems
and the procedural form of others are two sides of the same coin, as are the
semantics and proof theory of standard logic.
The main substantial differences between the systems are probably the
various notions of defeasible consequence, such as the extreme (Nute) versus
moderate view on scepticism and other subtle differences, often reflecting a
clash of intuitions in particular examples. Although the debate on the best

definitions will probably continue for some time, in my opinion the BDKT


approach has nevertheless shown that to a certain degree a unifying account
is possible here also. Moreover, as already explained above in Section 7.5.3,
some of the different consequence notions are not mutually exclusive but
can be used in parallel, as capturing different degrees to which a conclusion
can be supported by a body of information. And each of these notions may
be useful in a different context or for different purposes. Another important
difference is that while some systems formalize 'logically ideal' reasoners,
other systems embody an (interesting) idea of partial computation, i.e. of
evaluating arguments not with respect to all possible arguments but only
with respect to the arguments that have actually been constructed by the
reasoner (Pollock, Lin & Shoham, Vreeswijk). This notion is motivated by
the fact that in general it is impossible to construct all arguments, since
reasoners have limited time; in other words, it is motivated by the fact
that common-sense reasoning is reasoning under resource bounds.

9.3. Other Relevant Research


I end this chapter with a discussion of some recent systems that are not
argument-based but that nevertheless deal with the same issues as argu-
mentation systems.

9.3.1. BREWKA'S LATER WORK


After developing the preferred-subtheories framework, as his contribution to
the inconsistency-handling approach to nonmonotonic reasoning (see Sec-
tion 7.2.3 above), Brewka became convinced that the material implication
is not appropriate for expressing defaults, and turned to default logic and
logic programming. However, in contrast to this book, Brewka did not adopt
an argument-based approach. This makes it possible to compare how far
one can go by adopting only one of the conclusions of Chapters 6 and 7,
viz. that defaults are one-directional.

Brewka's Prioritized Default Logic


Brewka (1994a) presents a prioritized version of default logic (PDL), includ-
ing a treatment of reasoning about priorities. Since Brewka regards explicit
exception clauses and priorities as two alternative methods for modelling
nonmonotonic reasoning, he restricts his prioritized default logic to normal
defaults.
The basic idea is simple. Brewka defines extensions in the same way
as in Definition 4.1.2 above, which constructs the extension step-by-step,
starting with the facts and branching into alternative extensions in case
of conflicting defaults. However, in PDL some conflicts between defaults

do not lead to branching extensions, since the defaults are ordered and
at each step only the highest of the applicable defaults is applied. Clearly
this only results in alternative extensions if a conflict between two defaults
cannot be solved with the priorities. Interestingly, since Brewka only allows
normal defaults, checking the applicability of a default does not need a
lookahead assumption (as was already shown by Reiter, 1980): testing
whether applying a default introduces an inconsistency is done not with
respect to the (guessed) extension E as it will be constructed, but instead
with respect to the extension Ei as it has been constructed thus far. For
this reason the existence of extensions is guaranteed.
Although Brewka's PDL gives the intuitive outcome in a large class of
cases, it still has problems with Example 7.2.6. Here it is again, formalized
in prioritized default logic.
d1: x misbehaves ⇒ x may be removed
d2: x is a professor ⇒ ¬ x may be removed
d3: x snores ⇒ x misbehaves
h: Bob is a professor ∧ Bob snores
d3 < d1, d3 < d2
As I argued in Chapter 7, this example should have two extensions, since
the two defaults that intuitively are in conflict, d1 and d2, have no priority
over each other. However, as in his preferred subtheories approach, Brewka
obtains a unique extension, in which d2 is applied at the cost of d1. The
reason is that at step E1 there are only two applicable defaults, viz. d2 and
d3, since only those defaults have their antecedent in the facts, i.e. in E0.
And since d3 < d2, we have to apply d2. Then, although at E2 we can apply
d3 (since there is no conflicting default), at E3 we cannot apply d1, since
applying it would make the extension inconsistent.
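The step-by-step construction just described can be mimicked in a few lines of code. This is my own toy reconstruction, not Brewka's definition: literal sets stand in for deductive closure, and the data layout and names are invented for the example.

```python
# Toy reconstruction of prioritized default logic (PDL) on the
# snoring-professor example. Literals are strings, "-p" for negation;
# each normal default is (name, prerequisite, consequent). At each step
# only the maximal applicable defaults (w.r.t. the strict partial order
# `higher`) are applied, branching when several incomparable ones remain.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def extensions(facts, defaults, higher):
    """higher: set of pairs (d, e) meaning default d has priority over e."""
    def applicable(ext):
        return [d for d in defaults
                if d[1] in ext              # prerequisite established
                and d[2] not in ext         # not yet applied
                and neg(d[2]) not in ext]   # consequent consistent
    def maximal(ds):
        return [d for d in ds
                if not any((e[0], d[0]) in higher for e in ds)]
    ext = set(facts)
    while True:
        cands = maximal(applicable(ext))
        if not cands:
            return [ext]
        if len(cands) == 1:
            ext = ext | {cands[0][2]}
        else:                               # unresolved conflict: branch
            result = []
            for d in cands:
                result += extensions(ext | {d[2]}, defaults, higher)
            return result

d1 = ("d1", "misbehaves", "removable")
d2 = ("d2", "professor", "-removable")
d3 = ("d3", "snores", "misbehaves")
exts = extensions({"professor", "snores"}, [d1, d2, d3],
                  higher={("d1", "d3"), ("d2", "d3")})
print(exts)   # a single extension, in which applying d2 blocks d1
```

Running it reproduces Brewka's outcome: at the first step d2 outranks d3 and is applied, then d3 adds "misbehaves", and d1 is blocked for inconsistency, yielding one extension rather than the two that the criticism in Chapter 7 would demand.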
Although Brewka (1994a) does not agree that his outcome is coun-
terintuitive, he accepts my criticism as an alternative point of view and
defines a version of his PDL which respects my criticism. However, this
version of PDL again contains a lookahead assumption, which makes it
non-constructive. For this reason Brewka prefers his original version.

Defeasible Priorities in PDL


Inspired by legal reasoning Brewka (1994c) extends his PDL to the case
of defeasible priorities (in Prakken, 1995a I generalize his method to any
extension-based system). The basic idea is to create extensions with every
possible ordering of the defaults, and to keep only those extensions that are
created with an ordering that corresponds to the priority information in the
extension. Although the resulting definitions work well for extension-based
systems, they are less suited for procedural versions. This is the reason why
I have developed an alternative method in Chapter 8.

Brewka's Prioritized Extended Logic Programming


Brewka (1996) defines a version of extended logic programming (see Sec-
tion 5.5.3 above) with defeasible priorities and with a sceptical, well-
founded semantics (also often called 'grounded semantics', see Defini-
tion 9.2.6). Since my system was also originally, in Prakken & Sartor (1997a),
developed as an extension of well-founded extended logic programming, it is
interesting to compare the two systems. However, because of the complexity
of Brewka's system I confine myself to a few general remarks.
As in his PDL, Brewka's logic-programming system is also not argument-
based, and does not have a procedural version. Nevertheless, my formaliza-
tion of reasoning about priorities in Chapter 8 was inspired by an earlier
version of Brewka's system. Interestingly, in the snoring-professor example
Brewka now has the same outcome as my system. However, there are still
some differences: in particular, Brewka also applies the priorities if two rules
undercut each other, as with two rules d1: ~p ⇒ q and d2: ~q ⇒ p. While
in Brewka's system the outcome depends on the priority relation between
d1 and d2, in my system the priorities are irrelevant: the two arguments
are (modulo other arguments) both defensible. Another difference is that
Brewka regards rules without weak antecedents by definition as strict; thus
he cannot express the difference between two defeasible rules If p then q (in
my system p ⇒ q) and If p then q, unless shown otherwise (in my system
p ∧ ~¬q ⇒ q). Yet Example 3.1.7 shows that in practice this distinction
sometimes occurs.

Evaluation
To conclude, with the non-constructive version of his PDL and with his ver-
sion of extended logic programming, Brewka has shown that for solving the
problems identified in Chapters 6 and 7 it is not strictly necessary to adopt
an argument-based approach; it suffices to have a one-directional condi-
tional, provided that the system is designed carefully. However, upon closer
examination his definitions still appear to implicitly embody argument-
based notions, which in my opinion, if made explicit, might have made the
definitions more natural.

9.3.2. REASON-BASED LOGIC

In various publications Jaap Hage and Bart Verheij have developed a very
original formal system, called 'reason-based logic' (RBL). Although the
system is not argument-based, it was especially developed for applica-
tion to legal reasoning, including its defeasible aspects. Therefore a de-
tailed discussion is in order. The latest versions of RBL are described in
Hage (1996; 1997) and Verheij (1996). These publications also extensively

discuss the philosophical foundations of RBL, which were mainly developed


by Hage.

RBL's View on Legal Knowledge


In one sentence, RBL is meant to capture how legal (and other) principles,
goals and rules give rise to reasons for and against a proposition and how
these reasons can be used to draw conclusions. The underlying view on
principles, rules and reasons is influenced by the theory of Raz (1975) on the
role of reasons in practical reasoning and by other insights from analytical
philosophy.
The general view underlying RBL distinguishes two levels of legal knowl-
edge. The primary level includes principles and goals, while the secondary
level includes rules. Principles and goals express reasons for or against
a conclusion (a goal is a special kind of principle, giving rise to deontic
reasons; for simplicity I do not explicitly discuss goals). Without the sec-
ondary level these reasons would in each case have to be weighed to obtain a
conclusion, but according to Hage and Verheij rules are meant to summarise
the outcome of the weighing process in advance for certain classes of cases.
Therefore, a rule does not only generate a reason for its consequent; it also
generates a so-called 'exclusionary' reason against applying the principles
underlying the rule: the rule replaces the reasons on which it is based. This
view is similar to Dworkin's (1977) well-known view that while reasons
are weighed against each other, rules apply in an all-or-nothing fashion.
However, according to Hage and Verheij this difference is just a matter
of degree: if new reasons come up, which were not taken into account in
formulating the rule, then these new reasons are not excluded by the rule;
the reason based on the rule still has to be compared with the reasons
based on the new principles. Consequently, in RBL rules and principles
are syntactically indistinguishable; their difference is only reflected in their
interaction with other rules and principles.
Hage and Verheij also want to give a systematic account of reasoning
about rules and principles, since in the legal domain this phenomenon is
ubiquitous: lawyers frequently argue about the legal validity or the appli-
cability of rules and principles. Clearly, formalizing such arguments requires
that it is possible to talk about rules and principles in the object language.
RBL provides these means. Moreover, Hage and Verheij regard the knowl-
edge about what it takes for a rule or principle to give rise to reasons, as
general principles of reasoning. Therefore they do not state this knowledge
as premises, but as logical inference rules.
In sum, Hage and Verheij claim that rule application is much more
than simple modus ponens. It involves reasoning about the validity and

applicability of a rule, and weighing reasons for and against the rule's
consequent.

Features of a Logic of Rules and Reasons


This general view on legal reasoning has the following implications for a
logic of legal reasoning.6 Drawing conclusions is in RBL a two-step process:
first the reasons for and against a conclusion are gathered and then the
reasons are weighed to see which conclusion can be drawn. A rule or
principle gives a reason for its conclusion if and only if it applies to the
facts.
RBL formulates one necessary and one normally sufficient condition for
when a rule applies. A necessary condition is that the rule/principle is valid;
whether this is so must be derived from the premises. A normally sufficient
condition is that the rule is applicable, i.e. that it is valid, its antecedent
is satisfied and that it is not excluded. Whether a rule is excluded is again
determined by the premises. It might be that the law explicitly makes a
rule inapplicable, as in Example 3.1.1, but it might also be that a principle
is excluded by a rule taking the principle into account. Applicability is
not a necessary condition for applying a rule: if its conditions are not
satisfied, a rule can still be applied analogically, provided that this agrees
with the principles and goals underlying the rule. Moreover, applicability
is only normally sufficient; there can also be reasons against applying a
rule, for instance, that its application is against its purpose or outside its
scope. Therefore an applicable rule does not directly give a reason for its
conclusion, but only for the rule's application; moreover, only if the reasons
for applying a rule/principle outweigh the reasons against its application,
can a rule be applied. Whether the reasons pro indeed outweigh the reasons
con is again determined by the premises and is therefore itself a matter of
debate.
However, even if a rule/principle applies, it does not yet warrant its
consequent; it may be that conflicting rules/principles also apply and there-
fore a rule/principle only gives a reason for its consequent, which has to be
weighed against the conflicting reasons. And again it is determined by the
premises which set of reasons is the stronger.

A Sketch of the Formal System


I now give an impression of the formalization of these ideas in RBL.

- Formalizing Metalevel Knowledge


As just said, RBL must provide the means to express properties of rules
in the object language. To this end Hage and Verheij use a sophisticated
6 Several versions of RBL exist; the discussion of this section is based on Hage (1996).

naming technique, viz. reification, well-known from metalogic and AI; see
e.g. Genesereth & Nilsson (1988, p. 13). Every atomic predicate-logic for-
mula R(t1, ..., tn) is named by a function expression r(t1, ..., tn). Note
that while R is a predicate symbol, r is a function symbol. Furthermore,
compound formulas are named by naming the propositional connectives
with function symbols. For instance, the formula ¬R(a) is named by the
function expression ¬'r(a); here ¬' is a function symbol corresponding to
the connective ¬. And the conjunction R(a) ∧ S(b) is denoted by the function
expression ∧'(r(a), s(b)), usually written with infix notation as r(a) ∧' s(b).
Note that unlike the naming technique used throughout this book, RBL's
technique reflects the logical structure of the named formula: from the name
the corresponding formula can be reconstructed.
How are rules named? This is done by using a function symbol rule.
Thus rules can be denoted by terms like
rule(r, p(x), q(x))
(Here r is a 'rule identifier', which is handy in formalizations). Note that
this expression is not a formula but a term; a formula results only if the
term is combined with a predicate symbol, as e.g. in
Valid(rule(r,p(x), q(x)))
which says that rule r is valid.
By convention rule terms with variables are schemes standing for all
their ground instances.
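The structure-reflecting character of this naming technique can be made concrete with a small round-trip example. The tuple encoding below is mine, invented purely for illustration; RBL itself is a first-order language, not a programming language.

```python
# Minimal illustration of a structure-reflecting naming technique:
# a formula's name is a term mirroring its structure, so the formula
# can be reconstructed from the name.

def name_of(formula):
    """Map a formula to the term that names it."""
    tag = formula[0]
    if tag == "atom":                 # R(t1, ..., tn) -> r(t1, ..., tn)
        return ("fun", formula[1].lower(), formula[2])
    if tag == "not":                  # a formula 'not f' -> term not'(name of f)
        return ("not'", name_of(formula[1]))
    if tag == "and":                  # 'f and g' -> term and'(name, name)
        return ("and'", name_of(formula[1]), name_of(formula[2]))
    raise ValueError(tag)

def formula_of(term):
    """Reconstruct the named formula from its name."""
    tag = term[0]
    if tag == "fun":
        return ("atom", term[1].upper(), term[2])
    if tag == "not'":
        return ("not", formula_of(term[1]))
    if tag == "and'":
        return ("and", formula_of(term[1]), formula_of(term[2]))
    raise ValueError(tag)

# R(a) and not-S(b), its name, and the round trip back:
f = ("and", ("atom", "R", ("a",)), ("not", ("atom", "S", ("b",))))
assert formula_of(name_of(f)) == f

# Rule names are terms too; only combining one with a predicate such as
# Valid yields a formula:
valid_fact = ("Valid", ("rule", "r", name_of(("atom", "P", ("x",))),
                                     name_of(("atom", "Q", ("x",)))))
```

The final assertion is the point of the construction: unlike an opaque quotation-mark name, the term determines the formula it names.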
Perhaps the reader expects that the object language of RBL contains
a conditional connective corresponding to the function symbol rule, just
as it contains connectives corresponding to e.g. ¬' and ∧'. However, for
philosophical reasons Hage and Verheij do not assume such a connective.
In their view a rule does not describe but constitutes states of affairs:
a legal rule makes someone a thief or something a contract, it does not
describe that this is the case. Thus, Hage and Verheij argue, a rule is not a
proposition, which can be true or false, but an object in the world, which
can be valid or not and applicable or not, and which must be applied to
make things the case.

- The RBL inference rules


Hage and Verheij state RBL as extra inference rules added to standard
first-order logic. I first summarize the most important rules and then I give
some (simplified) formalizations.
1. If a rule is valid, its conditions are satisfied and there is no evidence
that it is excluded, the rule is applicable.
2. If a rule is applicable, it gives rise to a reason for its application.

3. A rule applies if and only if the reasons for its application outweigh
the reasons against its application.
4. If a rule applies, it gives rise to a reason for its consequent.
5. A formula is a conclusion of the premises if and only if the reasons for
the formula outweigh the reasons against the formula.
Note that condition (5) makes the weighing process one between sets of
reasons instead of between individual reasons (cf. Section 7.5.4).
Here is what a simplified formal version of inference rule (1) looks like.
Note that condition and consequent are variables, which can be instanti-
ated with the name of any formula.
If Valid(rule(r, condition, consequent)) is derivable
and Obtains(condition) is derivable
and Excluded(r) is not derivable,
then Applicable(r, rule(condition, consequent)) is derivable.
Condition (4) has the following form.
If Applies(r, rule(condition, consequent)) is derivable,
then Proreason(consequent) is derivable.
Finally, here is how in condition (5) the connection between object- and
metalevel is made.
If Outweighs(Proreasons(formula), Conreasons(formula)) is
derivable,
then Formula is derivable.
Note again that whether the pro-reasons outweigh the con-reasons must
itself be derived from the premises. Note also that while formula is a
variable for an object term, occurring in a well-formed formula of RBL,
Formula is a metavariable which stands for the formula named by the term
formula. This is how object and metalevel are connected in RBL.
As just said, the RBL conditions are extra inference rules added to stan-
dard first-order logic, just as defaults are in default logic. However, while
the defaults of default logic express domain-specific generalizations, the
RBL inference rules are intended to capture general principles of reasoning
with rules. This is why the principles quantify over rules: they hold for any
instantiation of condition, conclusion, formula and Formula.
However, as in default logic and many other nonmonotonic logics, in
RBL the derivability of certain formulas is defined in terms of the non-
derivability of other formulas. For instance, in (1) it may not be derivable
that the rule is excluded. Therefore, to prevent circular definitions, some
technical work has to be done. To this end RBL adapts techniques of
default logic: the 'derivability' conditions are restated as conditions on

membership of an extension. Clearly, this introduces a lookahead into the


definitions, which makes the definition of derivability non-constructive, just
as in default logic.

Using RBL
Let us now illustrate the basics of formalizing legal rules and principles in
RBL. Since rules/principles are not object level formulas, they can only
be stated indirectly, by assertions that they are valid. Thus, any particular
rule must be formalized as
Valid(rule(r, condition_r, conclusion_r))
As for representing exceptions, RBL supports both the choice approach
and a variant of the exception clause approach. Explicit exceptions can be
formalized in almost the same way as in many other nonmonotonic logics
but with one difference: since RBL's rules on validity and applicability are
formalized as general inference rules, it is not necessary to give a rule an
explicit condition that it is not excluded. Thus, for instance, Example 3.1.7
can be formalized as follows.
Valid(rule(1, person(x), has_capacity(x)))
Valid(rule(2, minor(x), excluded(1) ∧' ¬'has_capacity(x)))
Valid(rule(3, minor(x) ∧' consent_of_repr(x), excluded(2) ∧'
has_capacity(x)))
Note that for a minor John acting without consent rule 2 gives rise to the
conclusion that rule 1 is excluded, after which the above condition (1) of
the RBL inference rules cannot be used to derive a reason for application
of rule 1 to John.
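A deliberately simplified reading of RBL's two phases can be run on this example: first determine by fixpoint which rules are excluded, then let the remaining applicable rules contribute reasons and see which side is unopposed. The encoding and the fixpoint are mine, not Hage and Verheij's definitions; extension-based subtleties are ignored and exclusion chains are assumed acyclic.

```python
# Crude two-phase sketch of reason gathering on Example 3.1.7
# (hypothetical simplification of RBL; names invented for the example).

facts = {"person(john)", "minor(john)"}        # John acts without consent

rules = [  # (id, conditions, consequents); "excluded(n)" excludes rule n
    (1, {"person(john)"}, {"has_capacity(john)"}),
    (2, {"minor(john)"}, {"excluded(1)", "-has_capacity(john)"}),
    (3, {"minor(john)", "consent_of_repr(john)"},
        {"excluded(2)", "has_capacity(john)"}),
]

def fires(rule, excluded):
    rid, conds, _ = rule
    return rid not in excluded and conds <= facts

# Phase 1: fixpoint over the exclusion conclusions of firing rules.
excluded = set()
changed = True
while changed:
    changed = False
    for rule in rules:
        if fires(rule, excluded):
            new = {int(c[len("excluded("):-1])
                   for c in rule[2] if c.startswith("excluded(")}
            if not new <= excluded:
                excluded |= new
                changed = True

# Phase 2: gather the reasons for and against the contested conclusion.
target = "has_capacity(john)"
pro = [r for r in rules if fires(r, excluded) and target in r[2]]
con = [r for r in rules if fires(r, excluded) and "-" + target in r[2]]
print(excluded, [r[0] for r in pro], [r[0] for r in con])
```

For John without consent, rule 3 does not fire, rule 2 excludes rule 1, and the only reason concerns ¬has_capacity(john), which therefore prevails unopposed. The naive fixpoint cannot retract an exclusion once added, which is precisely why RBL's real definitions need extension-based machinery: exclusions can themselves be excluded.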
RBL's variant of the exception clause approach can also be used to
formalize collision rules, viz. by using the method described in Section 8.6,
i.e. formalizing collision rules as general inapplicability rules. Thus this
method is in fact in between the exception clause and the choice approach.
This method means that a rule that is set aside because of a collision rule
does not generate reasons for its conclusion. This is necessary since, unlike
my system, RBL automatically weighs sets of reasons, so that a rule that is
individually defeated can in combination with other rules still give rise to
conclusions. According to Hage and Verheij this is not how legal collision
rules work in practice.
RBL also supports the use of the choice approach. If two conflicting
rules both apply, then their application gives rise to conflicting reasons,
which then have to be compared by using information on the weight of
the reasons. The foregoing implies that this method is only feasible for
rules that are amenable to weighing, i.e. for what Hage and Verheij call
'principles' .

Evaluation
In evaluating RBL, I shall distinguish between its underlying philosophical
ideas and its merits as a formal system. Philosophically, RBL is a very deep,
interesting and on most points convincing analysis of the logical aspects of
legal reasoning. Particularly strong points are its analysis of legal metalevel
reasoning (reasoning about validity, applicability, exclusion, and priority
and weight of rules and principles), and its convincing analysis of some
other features of legal reasoning, such as the gradual difference between
rules and principles, and analogous rule application as determined by the
goals and principles underlying the rule. However, I disagree with some
philosophical features of RBL. In Section 7.5.4 I argued that the alleged
need for weighing sets of reasons is less obvious than is argued by Hage and
Verheij. Moreover, their reluctance to treat rules not only as objects but
also as formulas seems to overlook that formulas are themselves a kind of
object, an observation which is the basis of the development of metalogic.
When viewed as a logical system, a particularly attractive feature of
RBL is its detailed formalization of (legal) metalevel reasoning, with the
most sophisticated naming technique that is available. Another merit is
that RBL has at least placed the issue of comparing combined reasons on
the logical agenda. Finally, RBL has a very expressive language in which,
for instance, rules can be nested and negated. On the other hand, however,
it must be said that RBL is technically rather complex, because of its ex-
pressiveness, and since it combines metalogic features with nonderivability
notions. In this respect it would be desirable to have some results on the
technical well-behavedness of the system. Furthermore, from a computa-
tional point of view the non-constructive nature of RBL and the lack of a
procedural formulation are unattractive; argument-based approaches seem
to fare better in these respects. Finally, a point of detail is that in case of
irresolvable conflicts RBL behaves as an extreme sceptic (cf. Section 7.5.2).
Nevertheless, as already discussed, much of Hage and Verheij's research
is very valuable, in particular their contribution to legal knowledge repre-
sentation, which shows how many types of legal knowledge can be formal-
ized in a first-order language. To a large extent their formalizations can also
be used in other systems, for instance, the system of Chapters 6-8, which
is similar to RBL in several respects. For instance, my system can also
deal with both the exception clause approach (using weak negation) and
the choice approach (using priorities). Moreover, RBL's two-step process of
drawing conclusions (first collecting reasons pro and con, then comparing
them) has its counterpart in my system in the form of first collecting
arguments pro and con and then comparing them. Now it is my guess
that if every defeasible rule of my system is assumed to have a validity
clause and a weakly negated inapplicability clause, RBL's inference rules

can to a large extent be translated into object level rules of my theory,


facilitating similar ways of formalizing legal knowledge. Although clearly
a lot of technical work still has to be done, this enterprise seems very
worthwhile, since it would combine the work of Hage and Verheij on the
analysis and representation of legal knowledge with the work of Giovanni
Sartor and myself on the logical form of legal argumentation.
CHAPTER 10

USING THE ARGUMENTATION SYSTEM

This chapter completes the discussion of the argumentation system devel-


oped in Chapters 6-8. First, Section 10.1 compares its use for modelling
the choice approach to exceptions to the various ways of modelling the ex-
ception clause approach, after which Section 10.2 discusses its prospects for
implementation in computer programs. The rest of the chapter illustrates
the role of the system as a tool in modelling legal argument. In Section 10.3
the system will be applied to some issues raised in Chapters 2 and 3, and
in Section 10.4 it will be used to give a logical analysis of some aspects of
implemented AI-and-law systems. Finally, in Section 10.5, the system will
be placed in the context of a four-layered overall view on argumentation.

10.1. A Comparison ofthe Methods for Representing Exceptions

Having concluded the discussion of reasoning with inconsistent information


in Chapter 8, and having put the resulting system in the perspective of
related research in Chapter 9, we can now put the pieces together and
compare the two rival methods for dealing with exceptions in nonmono-
tonic reasoning. More specifically, what will be compared is the choice
approach as applied in my system with the exception clause approach as
applied in Chapter 5 in default logic, circumscription, Poole's framework
and logic programming. In the comparison I use the list of topics given in
Section 5.1.3. First I complete the schematic overview of Section 5.6.1 with
the most plausible formalizations in my system, comparing arguments on
specificity.
Hard rebutting defeaters
d1: Ax ⇒ Bx
h: ∀x. Cx → ¬Bx
Hard undercutting defeaters
d1: Ax ∧ ~¬appl_d1(x) ⇒ Bx
h: ∀x. Cx → ¬appl_d1(x)
Soft rebutting defeaters
d1: Ax ⇒ Bx
d2: Ax ∧ Cx ⇒ ¬Bx

249
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997

Soft undercutting defeaters
d1: Ax ∧ ∼¬appl_d1(x) ⇒ Bx
d2: ⇒ appl_d1(x)
d3: Cx ∧ ∼¬appl_d3(x) ⇒ ¬appl_d1(x)
Undecided conflicts
d1: Ax ⇒ Bx
d2: Cx ⇒ ¬Bx
Note that, unlike in the exception clause approach, soft rebutting defeaters
can be represented without having to combine them with undercutting
defeaters.
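These attack patterns can be operationalized directly. The following sketch is my own illustrative code, not part of the book's formal system: literals are strings with a '-' prefix for ¬, and the applicability assumption of a rule named d1 is written 'appl_d1'. It classifies how one defeasible rule attacks another:

```python
def neg(lit):
    """Complement of a literal written as 'x' / '-x'."""
    return lit[1:] if lit.startswith('-') else '-' + lit

def rebuts(attacker, target):
    """Rebutting attack: the conclusions are complementary literals,
    as in d1: Ax => Bx versus d2: Ax & Cx => -Bx."""
    return attacker['concl'] == neg(target['concl'])

def undercuts(attacker, target):
    """Undercutting attack: the attacker denies the target's
    applicability assumption ~-appl_<target>."""
    return attacker['concl'] == '-appl_' + target['name']

d1 = {'name': 'd1', 'concl': 'B'}
d2 = {'name': 'd2', 'concl': '-B'}        # soft rebutting defeater of d1
d3 = {'name': 'd3', 'concl': '-appl_d1'}  # soft undercutting defeater of d1
```

Note that the two attack notions are disjoint here: d2 rebuts d1 but does not undercut it, and d3 undercuts d1 without rebutting it, mirroring the schemas above.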
Now I come to the final comparison of the methods on the points of
Section 5.1.3.

Structural Resemblance
Recall that what should be avoided is mixing several source units in one
knowledge base unit. In Chapter 3 we have already seen that such a 'many-
to-one' correspondence can be avoided in both approaches, although in the
exception clause approach only with general clauses. However, a one-to-one
correspondence between source and knowledge base can be obtained more
easily in the choice approach: in Sections 5.2 and 5.3 we saw that with
exception clauses soft rebutting defeaters can only be represented when
combined with undercutting defeaters, and this often results in a split of
one source unit into two knowledge base units, only kept together by giving
them the same name. As just shown, the choice approach deals with this
kind of exception in a more elegant way; note also that its use in under-
cutting defeaters of extra defaults of the form ⇒ appl_n(x1, …, xi) is not
really a violation of one-to-one correspondence, since such expressions have
no relation with any specific source; they are more part of the 'hardware' of
the formalization method. In sum, with respect to structural resemblance
both methods do well, but with a small advantage for the choice approach.

Modularity
In Section 5.6.4 we have seen that the specific exception clause approach
is unmodular, while general clauses support modularity only partially: al-
though they prevent changing old rules when new exceptions are added,
the exceptions still have to mention to which rules they are an exception;
only if this is also done in the natural-language versions, as sometimes in
law, do general exception clauses fully support modularity.
As for the choice approach it was noted in Section 5.1.3 that the use of
specificity is often claimed to support modularity, since what would suffice
is to formalize each rule in isolation, after which the specificity algorithm
would determine the preference relations; on the other hand, priorities are

often said to lead to unmodularity since in assigning them it would be


necessary to anticipate all interactions between the rules. However, of both
these claims we have seen in this book that they should be weakened. As for
specificity we have seen in Section 6.7 that the natural-language versions are
often ambiguous, in which cases the correct formalization is, particularly
in legal reasoning, determined by the outcome of a debate on which rule is
the exception; therefore, contrary to what is often claimed, using specificity
does not support a modular way of formalizing. Using priorities, on the
other hand, sometimes does, particularly in legal reasoning, viz. if they can
be assigned on the basis of general legal properties of the norms, such as
their time of enactment, or the hierarchical status of their source.
Finally, I should return to two remarks made earlier. Firstly, in Sec-
tion 5.1.3 I said that structural resemblance and modularity are often
not clearly distinguished. I promised there that this book would provide
reasons for making this distinction; now these reasons have become ap-
parent: structural resemblance of source and knowledge base as a result
of the formalization process does not imply that this process itself was
modular. The second remark was made in Section 3.1 where I said that
this book would provide some reasons why structural resemblance is not
always beneficial. Now this reason is the same as just mentioned: structural
resemblance does not support modularity, and this contradicts one of the
often-claimed advantages of structural resemblance.
To summarize the comparison of the choice and the exception clause
approach to representing exceptions, neither of them completely supports
modularity; the often-made claim that in the choice approach the use of
specificity does, has turned out to be too optimistic, in relying too much on
the exactness of the natural-language expressions. Therefore, with respect
to modularity the methods are, roughly, equally good as well, and this
means that one of the most used standards for comparing methods of
knowledge representation has turned out to lack discerning ability.

Resemblance to Natural Language

Natural-language expressions normally have no general exception clauses,
although in law they sometimes have. In the choice approach they are not
needed, but if such clauses do occur in the source expressions, this approach
can, as shown in Section 6.6, cope with them as well, for which reason it is
closer to natural language than the exception clause approach.

Exclusiveness of Specificity
In 5.6.4 it was already noted that if exception clauses solve a conflict by
resulting in a unique answer, no other standard can overrule this solution;
other standards can only be used in conflicts not prevented by exception

clauses but, of course, only if the conflicting answers can alternatively be
presented, as in default logic and Poole's framework. However, this does
not change the fact that thus specificity still has priority over the other
standards, a situation which at least for legal reasoning is incorrect. At this
point the choice approach has a major advantage over the exception clause
approach: it leaves room for any other standard for solving conflicts, and
it can also deal with (defeasible) reasoning about these standards. Note
that this is also an advantage over attempts as discussed in Section 4.1.4 to
make specificity a semantical feature of a logic for defeasible conditionals.

Implementation
The conclusion of Chapter 5 was that, when restricted to clauselike lan-
guages and if undecided conflicts can be avoided, the exception clause
approach can be rather efficiently implemented with logic programming
methods, the cost being a less natural treatment of classical negation.
By contrast, the choice method introduces a new layer of complexity, the
need to choose the best of conflicting answers, which for a similar phe-
nomenon called abduction has been shown to decrease tractability (see
further Section 10.2). In conclusion, if the exception clause approach can
meet its aim to obtain unique answers, it can be more efficiently implemented
than the choice approach, but since unique answers can only be obtained in
restricted domains of application, we are in fact confronted with an instance
of the well-known trade-off between expressiveness and tractability.

Expressiveness
As the schemes of Section 5.6.1 and this section show, the methods mainly
differ in their treatment of undecided conflicts; all other situations listed
in Section 5.1.2 can be expressed in both approaches, although in the
exception clause approach not in all logics equally well. As for undecided
conflicts, we have seen that in the exception clause approach only default
logic and Poole's framework can alternatively present both answers. How-
ever, this only holds at the theoretical level: when it comes to the most
natural implementation in logic programming, then this advantage of these
formalisms disappears. Theoretically, the choice approach can cope with
undecided conflicts almost by definition, but it remains to be seen whether
this also holds in efficiently implemented versions.
In sum, with undecided conflicts the choice approach has theoretically
fewer problems and in implementations at most as many problems as the
exception clause approach; furthermore, the choice approach can also deal
with exception clauses and there is no type of exception which only the
exception clause approach can express; in conclusion, the choice approach

is more expressive than the exception clause approach, but again it should
be said that the price for this is an increase in computational complexity.

Final Evaluation
We can conclude that the main differences between the methods occur
with respect to tractability and expressiveness. At the other points they
have by and large turned out to do equally well. This holds particularly for
the points relevant to knowledge engineering in practice, with the exception
that the choice approach is a bit closer to natural language: apart from this,
both methods can preserve the separation of rules and exceptions, and in
both methods a knowledge engineer should not rely too much on a modular
formalization strategy. For this reason the only real issue is in which cases
expressiveness should be sacrificed for tractability.
In my opinion the answer will depend on the nature of the application
and the research goals. If the main task of a system is to give insight
into the consequences of a certain body of legislation, then nonmonotonic
techniques mainly serve as a knowledge representation device, in which
case the exception clause approach is probably a better choice. Obviously,
this particularly holds for domains in which the law is well-established
and does not leave much room for disagreement. On the other hand, if
the need for representing exceptions arises in research on modelling legal
argument, then the choice approach is obviously the better, since it is able
to capture more aspects of legal reasoning. As we have seen, it can in
particular be embedded in general methods for modelling the construction
and comparison of conflicting arguments. In the last three sections of this
chapter I shall discuss in more detail how the logical argumentation system
developed in this book can serve as a component in models of adversarial
legal argument. But first I turn to its prospects for implementation in a
computer program.

10.2. Implementational Concerns

Although research on implementation is beyond the scope of this book,


some general remarks are appropriate. Implementing the full theory of
Chapters 6-8 is problematic for a number of reasons. In fact, the system
suffers from three layers of computational complexity. Firstly, it uses the full
expressive power of first-order predicate logic, for which as a whole no the-
orem provers exist which are both complete and efficient. In consequence,
even the search for an individual argument for a proposition is intractable.
The second layer of complexity is that the search for counterarguments
is in fact a nonderivability check, which for first-order logic is known to
be undecidable and which, as explained in Chapters 3 and 4, makes the

defeasible proof procedure global: i.e. it gives rise to the need to check all
premises in every step of a proof. A final layer of complexity is given by
the relation between argumentation and so-called 'abduction'. As noted by
Gordon (1991; 1995) the problem of constructing and comparing arguments
for a desired conclusion is formally similar to the task of finding sets of
hypotheses which explain certain observed data; the latter process is often
called abduction and, as shown by Bylander et al. (1991), obstacles to the
tractability of abduction are (among other things) incompatibilities among
the hypotheses (which requires a consistency check) and the aim of finding
the best explanation; obviously my system gives rise to both obstacles.
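The flavour of this extra layer of complexity can be shown with a toy abduction routine (illustrative code of my own, not an implementation of the system): finding an argument for a goal means searching subsets of the defaults, and every candidate subset needs a consistency check, precisely the two obstacles just mentioned.

```python
from itertools import combinations

def closure(facts, defaults):
    """Forward-chain (premises, conclusion) defaults over a set of literals."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for prem, concl in defaults:
            if set(prem) <= derived and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

def neg(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def consistent(lits):
    return all(neg(l) not in lits for l in lits)

def arguments_for(goal, facts, defaults):
    """Enumerate minimal consistent subsets of defaults deriving goal.
    Note the exponential subset search plus a consistency check per
    candidate: the two tractability obstacles discussed in the text."""
    found = []
    for k in range(len(defaults) + 1):
        for subset in combinations(defaults, k):
            derived = closure(facts, subset)
            if goal in derived and consistent(derived):
                if not any(set(a) <= set(subset) for a in found):
                    found.append(subset)
    return found

facts = {'a', 'c'}
defaults = [(('a',), 'b'), (('b',), 'p'), (('c',), '-p')]
args = arguments_for('p', facts, defaults)
```

Here the only minimal consistent argument for p chains the first two defaults; the subset containing all three defaults also derives p but is rejected by the consistency check.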
However, logical argumentation systems (and other nonmonotonic log-
ics) do not only have tractability-decreasing features. Recent research
(Cadoli et al., 1996) has established that in nonmonotonic logics the
representation of a problem or domain is generally more compact than
in a monotonic logic, which has a positive effect on tractability.
Nevertheless, tractability remains a real problem. Now, as noted in
Section 4.3.2, there are in AI two common strategies to overcome this
problem: restricting the language to an efficiently computable fragment, and
sacrificing completeness or even soundness with respect to the semantics.
I now make some remarks on both of these options, pertaining not just to
my system but to argumentation systems in general.

Restricting the Language


An option which is very often used is restricting the language, for example,
to that of logic programming. In fact, the system of Chapters 6-8 was
originally developed as a logic-programming system (cf. e.g. Prakken &
Sartor, 1997a). And, as remarked in Chapter 9, Nute's (1992) defeasible
logic and the implementation of Simari & Loui (1992) are also restricted to a
logic-programming-like language for reasons of tractability. Some have used
the even more restricted languages of multiple inheritance systems with
exceptions (cf. Touretzky, 1986; Horty et al., 1990). As noted in Section 6.2,
the latter systems deal with conflicting defaults by using a rudimentary
form of specificity, in giving priority to defaults on subclasses over defaults
on superclasses. However, the languages of these systems are for most legal
applications too weak in expressive power: most importantly, they do not
allow for conjunctive or negative antecedents.

Giving up Soundness or Completeness


Although sacrificing completeness might in practice be workable, giving up
soundness is, of course, a very risky thing to do. Note particularly that
incompleteness of an underlying monotonic logic turns into unsoundness
at the nonmonotonic level, since at that level the derivation of formulas

depends on the nonprovability of other formulas. In this respect it may be


appropriate to remark that in AI, problems for which no tractable decision
procedures exist are often tackled by heuristic search, i.e. by procedures
which normally will at least find a plausible or suboptimal answer; this
may be appropriate in problems concerning some many-valued entity, such
as travelling distance or financial profit, but logical problems are of a 'yes
or no' nature, for which the idea of a suboptimal solution has not much
value. In any case, even if one opts for an imperfect implementation of
an argumentation system, such a system still has its uses, since it does
at least make it possible to formulate exactly in which respects practical
applications are incomplete or incorrect.
Moreover, some have recently urged taking seriously the idea that not
only more information can change the conclusions, but also more computa-
tion: according to e.g. Loui et al. (1993), Pollock (1995) and Gordon (1995)
the same reasons for drawing conclusions on the basis of incomplete in-
formation apply to drawing conclusions on the basis of partial computa-
tion, viz. the lack of unlimited time and other resources. In Section 9.2.2
we have already discussed an example of 'resource bounded' reasoning,
viz. Pollock's (1995) notion of justification; this notion considers not all
arguments that, on the basis of the premises, are logically possible, but
only those arguments that have actually been computed by the reasoner
(recall also that Dung's set Args in Definition 9.2.1 leaves both options
open). Thus further computation can change the conclusions even if no new
premises are added. According to these authors, the study of nonmonotonic
reasoning should not (only) include standards for reasoning with incomplete
information, but (also) standards for reasoning with computational resource
bounds.

10.3. Applications
In this section I apply the argument-based approach, and my system in
particular, to some issues discussed in earlier chapters. After showing that
the system in fact formalizes many of Toulmin's ideas, I make some com-
ments on how logical argumentation systems can be used as a tool in such
reasoning activities as interpretation and analogical reasoning.

10.3.1. TOULMIN ON THE STRUCTURE OF ARGUMENTS

As noted in Section 2.2.2, the ideas of Toulmin (1958) have recently at-
tracted the attention of a number of researchers in the field of AI and
law. I remarked there that, although these ideas are certainly valuable,
they do not show that the aspects of reasoning which he discusses can-
not be analyzed in logical terms. I now use my system to support this

claim, in arguing that it in fact formalizes Toulmin's ideas on the structure


of arguments.1 Recall that in his view on this structure conclusions are
qualified and can be obtained via data, which are made into reasons for
the conclusion by "inference licences" called warrants, which can be used
on account of backings but which are implicitly subject to exceptions. In
factual domains the relation between backing and warrant can, for example,
be based on induction, while in the legal domain a warrant is, for example,
backed by referring to the terms and dates of enactment of the relevant
provisions.
If we analyze this in terms of my system, then we can regard the
data as facts and the warrants as defeasible rules: in this interpretation
of warrants both their defeasibility and their 'inference-licence' nature is
captured. Furthermore, as shown in Section 6.6, the backing of a rule can
be represented in the same way as explicit exceptions: each rule can be
assumed to have an extra condition requiring that it is backed, and then
further premises can state under which conditions the rule is indeed backed.
Finally, Toulmin's 'unless' clause is captured by the possibility of defeat
by counterarguments, while his qualifier of conclusions has its analogue
in the distinction between strict and defeasible arguments, the distinction
between justified and defensible arguments, and the defeasibility of the
latter notions. In sum, my system can be used to give a logical formalization
of Toulmin's ideas on the structure of arguments. In some respects such a
formalization is even richer. For instance, it allows priority relations on
warrants and arguments about these relations, it allows the combination
of priorities and explicit exceptions, it supports arguments about backings
and, most importantly, it formally defines the defeasible consequences of
information that is structured according to Toulmin's ideas. In conclusion,
although Toulmin may have pointed at some limitations of logical research
in his days, he has not shown that these limitations are inherent in any
logical analysis of reasoning, or that a logical analysis has no value in
addition to an informal philosophical analysis.
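This mapping can be made concrete in a few lines. The sketch below is my own illustrative encoding, not from the book, and all names (including the label 'art287', which merely echoes the 287 Sr example used later) are hypothetical: data become facts, a warrant becomes a defeasible rule with a backing flag, and the implicit 'unless' clause appears as a set of excepted warrants.

```python
from dataclasses import dataclass

@dataclass
class Warrant:
    """A Toulmin warrant read as a defeasible rule: data => claim,
    usable only while its backing condition holds."""
    name: str
    data: frozenset   # antecedent facts (Toulmin's 'data')
    claim: str        # the (qualified) conclusion
    backed: bool = True  # backing condition, defeasibly assumed

def supported_claims(facts, warrants, exceptions):
    """Claims for which a backed, unexcepted warrant fires on the facts.
    'exceptions' names warrants ruled out by an undercutting
    counterargument, i.e. Toulmin's implicit 'unless' clause."""
    return {w.claim for w in warrants
            if w.backed and w.name not in exceptions
            and w.data <= facts}

w = Warrant('art287', frozenset({'killed', 'intent'}), 'guilty')
claims = supported_claims({'killed', 'intent'}, [w], exceptions=set())
```

Removing the backing flag or adding the warrant's name to the exception set silences the claim, which is exactly how backings and 'unless' clauses behave in the formalization described above.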

10.3.2. THE SYSTEM AS A TOOL IN REASONING

In Section 2.3 I argued that logic is also relevant for noninferential reasoning
activities, since they can use logic as a tool. I now discuss some ways in
which the argumentation system developed in this book can be used; I
shall particularly focus on problems of interpretation and classification.
These problems are, of course, matters of content, which makes it senseless
to ask for the logical justification of their solution, but even when a lawyer
1 A more general discussion of modern-day logic's response to Toulmin's - and Perel-
man's - challenges can be found in van Benthem (1995).

interprets a statute norm or a precedent, or legally qualifies certain facts,


logic has its use. The reason is that one should obviously argue for an
interpretation or classification which makes the available information imply
the desired conclusion, and whether this is the case is determined by the
rules of logic; this even holds for the most extreme form of interpretation,
trying to break a rule: obviously, it only makes sense to break rules which
logically imply an undesired conclusion. Thus logic puts constraints on the
space of sensible interpretations and classifications or, to put it another
way, logic can be used as a tool in exploring this space.
Now a basic constraint on interpretation and classification provided by
my argumentation system is, of course, that suggested arguments are con-
sistent, while another constraint concerns the space of arguments which can
be constructed (or need to be broken). For example, can arguments based
on contrapositive inferences be constructed? My logical analysis provides a
reason why they cannot. A clear example of the system being used as a tool
in interpretation is trying to find acceptable premises which make the ar-
gument for the desired conclusion more specific than its counterarguments.
Consider once more Example 3.1.4 on killing someone in a life-and-death
duel and assume that the issue of which norm prevails has not yet been
settled; then the defender of a suspect found guilty with respect to both
norms should argue that the agreement to duel on life-and-death implies
the purpose to kill, since then 154-(4) Sr is a Lex Specialis of 287 Sr, which
leads to a lower maximum penalty; in the other interpretation the argument
with 154-(4) Sr would only be defensible and the judge would be free to
find reasons to let 287 Sr prevail.
Another illustration of the instrumental role of an argumentation sys-
tem can be given by an analysis of analogical reasoning. In interpretation
and classification this kind of reasoning, i.e. analogizing and distinguishing
cases, plays an important role. Now although in Section 2.3 I argued that
these things are a matter of content rather than an inference mode, here the
argumentation system is also relevant. First I should explain how analogical
reasoning should exactly be accounted for in my system: in Section 2.3 I
argued that it can best be regarded as a heuristic device for finding missing
pieces of information, and in terms of my system this means that it is
a way of extending the set of defeasible rules Δ, i.e. of jumping from one
default theory to another one, containing more information. Of course, such
a jump itself cannot be logically justified or criticised, but the same holds as
for interpretation and classification in general: the argumentation system
still plays a role, since it should be checked whether the desired conclusion
indeed follows from the proposed extended default theory: for example,
whether the proposed analogy or distinction leads to an argument which
is more specific or at least not less specific than its counterarguments.

Thus the system puts constraints on the search for useful analogies and
distinctions.
I now give a final illustration of how an argumentation system can point
at sensible argumentation strategies. At first sight it might be thought
that, when faced with an interpretation problem, lawyers should assume
whenever they can that the law is coherent; accordingly, they would have
to apply a 'conflict avoiding' interpretation strategy. However, although for
judiciaries this might be the case, lawyers defending one side of a legal
dispute should sometimes argue for the existence of a conflict. Consider an
example with two precedents, of which the first leaves room for formalizing
it as either one of the next two defeasible rules.
(1) p ⇒ q
(1') p ∧ ¬r ⇒ q
Assume that the facts of the case can only be classified as (p ∧ r), and that
the only sensible interpretation of the second case is
(2) r ⇒ ¬q
Then the proponent of ¬q should indeed argue for the conflict avoiding
interpretation (1'), which makes it impossible to use the first case in at-
tacking (2). However, the opponent should do otherwise: s/he had better
argue for the 'conflict preserving' interpretation (1). The point in arguing
this way is that then other reasons might be provided for giving precedence
to (1) over (2). Again we see that considering the possible outcome of a
comparison of conflicting arguments can give du es for what is the best
interpretation strategy to follow in a legal dispute, and this again shows
that an argumentation system can be regarded as a tool in various kinds
of reasoning. In the last section of this chapter I come back to this in more
general terms.
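The effect of the two readings can be checked mechanically. In the toy evaluator below (my own illustrative code; the literal '-r' in the antecedent of (1') is read as requiring ¬r to be established among the facts, which it is not when the facts are classified as p ∧ r), the conflict avoiding reading yields only ¬q, while the conflict preserving reading produces arguments for both q and ¬q, leaving room for a priority argument:

```python
def conclusions(rules, facts):
    """Conclusions of all rules whose antecedent literals are
    among the established facts."""
    return {concl for (ante, concl) in rules if set(ante) <= facts}

facts = {'p', 'r'}
r1  = (('p',), 'q')        # (1)  p => q
r1p = (('p', '-r'), 'q')   # (1') p & -r => q
r2  = (('r',), '-q')       # (2)  r => -q

avoiding   = conclusions([r1p, r2], facts)  # conflict avoiding reading
preserving = conclusions([r1, r2], facts)   # conflict preserving reading
```

Under the avoiding reading (1') does not fire, so ¬q goes unchallenged; under the preserving reading both rules fire and the conflict must be resolved by some other standard, as the text explains.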

10.4. A Logical Analysis of Some Implemented Systems


In this section I analyze some implemented AI-and-law systems in terms
of logical argumentation systems in general and, if appropriate, in terms of
my system in particular.

10.4.1. GARDNER'S PROGRAM

Although Gardner's (1987) program, which is on offer and acceptance
problems in American contract law, is very complex, for present purposes
the following abstraction suffices. As was said, the task of the program is
to distinguish hard from easy questions and, in terms of this distinction,
clear from hard cases (cf. Section 3.3). Besides the input facts, Gardner

distinguishes three kinds of information: legal rules, common-sense rules
and cases.2 Furthermore, she makes the simplifying assumptions that cases
and common-sense rules are only used for interpreting antecedents of legal
rules, and that the only sources of disagreement are conflicts of cases with
other cases or with common-sense rules. Now the most simple form of a hard
question is a legal problem on which the program has to be silent; in terms
of an argumentation system, when no argument at all can be constructed for
or against the conclusion which is at stake. More interesting are problems
to which conflicting rules apply. Gardner deals with them in the following
way. If two cases conflict then she regards the problem as hard, but if the
conflict is between a case and a common-sense rule, then she regards it as
clear, in letting the case prevail. Formalizing this in my system is rather
simple: all three kinds of legal information are defeasible rules, and cases
and common-sense rules are ordered by a simple ordering saying that the
first have priority over the latter. Note that since cases and common-sense
rules only serve to provide antecedents of legal rules, their relation to the
latter need not be defined. Now in terms of my system the second kind of
hard question is the existence of two defensible arguments for opposite
conclusions.
My system also suggests a plausible extension of the program. Gardner
remarks that the fact that all precedents are treated alike gives rise to too
many hard questions; an obvious way of dealing with this is to compare cases
with respect to some ordering defined on the case base. Actually, HYPO and
CABARET compare cases with respect to specificity (see below) but, as
we have seen above, my system allows the use of any criterion, for instance,
based on temporal or hierarchical relations between cases or even based on
substantial considerations.

Gordon's Abductive Theory of Legal Issues

Gordon (1991) gives a more general logical analysis of Gardner's program,
in analysing it as an implementation of his formal theory of legal issues,
which is formally very similar to Poole's (1988) logical framework for default
reasoning; in fact, an earlier version of his theory, in Gordon (1989), was
explicitly based on Poole's framework. Like Poole, Gordon is not interested
in defining a new notion of logical consequence, but in demonstrating how
logic can be used to analyze various kinds of reasoning activities. Unlike
Poole, however, Gordon does not focus on defeasible reasoning, but on
"spotting legal issues", or distinguishing 'clear' from 'hard' cases. Like
Poole, Gordon assumes a set F of facts and a set Δ of defaults.3 As
2 Note that the qualifications 'hard' and 'clear' do not apply to stored cases but only
to the input case.
3 Below I ignore some minor differences between Gordon's and Poole's account.

expected, Gordon defines an argument for φ as a subset D of Δ such that
F ∪ D ⊨ φ, and a rebuttal of such an argument as an argument D' such
that D ∪ D' ⊨ ⊥. Furthermore, D' is a counterargument of D iff D' is an
argument for ¬φ. Thus all counterarguments are rebuttals but not the other
way around; some rebuttals are only counterarguments of proper subsets of
D. Gordon does not allow for ways of comparing conflicting arguments but
he observes that the work of Poole (1985) on specificity and Brewka (1989)
on priorities could well be added to his theory.
Gordon defines the notion of clear and hard cases in terms of issues.
Roughly, an issue with respect to a goal is a default that (given the
premises) is logically relevant for deriving the goal. More formally, a formula
I is an issue with respect to a goal proposition G iff I is a default and it
is contained in a minimal and consistent argument for G.
As for Gordon's definition of a clear case, a subtlety is that it allows
for disagreement on matters of fact, to which end he divides the defaults
into factual and legal defaults (the factual defaults must not be confused
with the set F, which is the set of formulas beyond dispute). Now roughly,
Gordon defines a case about a certain goal G as clear iff the only disagree-
ment with respect to G is on the facts. More precisely, a case about G is
clear iff there is an argument D for G of which the only counterarguments
attack a factual issue with respect to G: i.e. iff for any I ∈ D that is an
issue with respect to G and such that there is an argument for ¬I, I is a
factual default.
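Restricted to propositional literals as defaults and strict Horn rules as the indisputable background F, these definitions can be paraphrased in a short program. The sketch below is my own toy reconstruction, not Gordon's implementation, and all example names (offer, acceptance, contract) are hypothetical:

```python
from itertools import combinations

def derive(lits, rules):
    """Forward-chain strict Horn rules (premises, conclusion)."""
    out = set(lits)
    changed = True
    while changed:
        changed = False
        for prem, concl in rules:
            if set(prem) <= out and concl not in out:
                out.add(concl)
                changed = True
    return out

def neg(l):
    return l[1:] if l.startswith('-') else '-' + l

def arguments(goal, facts, rules, defaults):
    """Minimal consistent subsets D of the assumable defaults such
    that facts plus D derive goal (Poole/Gordon-style arguments)."""
    args = []
    for k in range(len(defaults) + 1):
        for D in combinations(sorted(defaults), k):
            cl = derive(facts | set(D), rules)
            if goal in cl and all(neg(l) not in cl for l in cl):
                if not any(set(a) <= set(D) for a in args):
                    args.append(D)
    return args

def issues(goal, facts, rules, defaults):
    """A default is an issue for goal iff it occurs in some minimal
    consistent argument for goal."""
    return {d for D in arguments(goal, facts, rules, defaults) for d in D}

def clear_case(goal, facts, rules, defaults, factual):
    """Clear iff some argument for goal has only factual disputed issues:
    every issue in it that is counterargued is a factual default."""
    iss = issues(goal, facts, rules, defaults)
    for D in arguments(goal, facts, rules, defaults):
        disputed = {d for d in D if d in iss
                    and arguments(neg(d), facts, rules, defaults)}
        if disputed <= factual:
            return True
    return False

facts = {'offer'}
rules = [(('offer', 'acceptance'), 'contract')]
defaults = {'acceptance', '-acceptance'}
factual = {'acceptance', '-acceptance'}  # disagreement only on the facts
ok = clear_case('contract', facts, rules, defaults, factual)
```

In the example the only issue is whether there was an acceptance; since that default is factual, the case about 'contract' comes out clear, while reclassifying the acceptance defaults as legal would make it hard.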
In my opinion an important merit of Gordon's abductive theory of
legal issues is that it was one of the first contributions to AI and law
demonstrating that logic can be used for more purposes than just modelling
the axiomatic view on legal reasoning (cf. Section 2.1.3). However, the
details of Gordon's (1991) proposal are not yet the final word, since with
the use of Poole's inconsistency handling approach Gordon inherits the
flaws of this approach discussed in Section 6.3. For present purposes, the
most important of these flaws is Poole's failure to reflect the dialectical
interaction between arguments. Assume, for instance, that argument A is
rebutted by argument B but that B is in turn rebutted by C, which thus
attempts to reinstate A. Then intuitively not only the defaults in A but
also those in C are issues with respect to A's conclusion, since if these
defaults are not acceptable, then C's attempt to reinstate A fails. However,
in Gordon's definition the only defaults that are an issue with respect to
A's conclusion are those in A.
It must be said that later, in Gordon (1995), Gordon has much improved
his analysis, avoiding these flaws; that work will be discussed below in
Section 10.4.5. Interestingly, although Gordon (1991) intuitively expresses
his theory in terms of a legal dispute, his formal definitions do not yet

reflect these intuitions; precisely this is what is improved in Gordon (1995).

10.4.2. CABARET

Giving a precise formal account of CABARET is impossible: as noted in


Section 3.5, the system is primarily intended to model the heuristic aspects
of legal reasoning and, although these aspects are, as explained in 10.3.2,
subject to logical constraints, the developers of the system have made no
effort to give a formal specification of these constraints. However, what is
possible is extracting from the descriptions of the program certain views
on the logical aspects of the information on which the heuristics operate;
what can be done next is trying to point at the formal notions which come
the closest to these views and on which logical standards for the behaviour
of the program can be based.
In Section 3.5 I discussed CABARET as an example of how logic can be
used as a tool in legal reasoning. Now I can be more precise on the nature of
these tools: they can best be analyzed in terms of an argumentation system,
since everything in the program is focused on suggesting, attacking or
comparing arguments for contradicting conclusions. An important aspect of
the program is that it contains heuristics based, for example, on analogical
reasoning, to suggest the missing link in an argument supporting one's
claim. In fact, in terms of my system the program thus provides systematic
ways of jumping from one default theory to another. Now everything which
has been said in 10.3.2 also holds for CABARET: in short, in suggesting
or rejecting such jumps, the logical consequences of the resulting default
theories should be kept in mind.
A final example of the relevance of my system in this respect concerns
the fact that in comparing arguments the program uses a rudimentary form
of specificity: for instance, it does so in comparing precedents which are
suggested by way of analogical reasoning (with HYPO's 'more on point'
ordering), and also in defining one of the heuristics to break a rule, viz.
to point at a rule or case with an extra condition and with an opposite
conclusion. Now my analysis says that in comparing with respect to speci-
ficity 'superior evidence' (cf. Example 6.2.4) should not be used. Consider
the following example, which might eventually occur in the CABARET
system.
C1: h1 ⇒ p         r1: p ⇒ r
C2: h1 ∧ h2 ⇒ q    r2: q ⇒ ¬r
If the case-based part of the system suggests the two cases C1 and C2, and
r1 and r2 are two rules of the rule-based part, then for reasons explained
above the system should not regard ¬r as a better answer than r on the
sole basis of C2 being a 'more on point case' than C1.
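To make the point concrete, the example can be sketched in a few lines of Python. This is an illustration only: CABARET is not implemented this way, and the naive forward chainer and the specificity comparison are my own simplifying assumptions (rule names follow the reconstructed example).

```python
# Illustration of the example above (my own simplification, not CABARET's
# code): a naive forward chainer derives both r and ~r, and the clash is
# located at r1 versus r2, which are equally specific. '~' marks negation.

rules = {
    "C1": (["h1"], "p"),          # case C1: h1 => p
    "C2": (["h1", "h2"], "q"),    # case C2: h1 & h2 => q ('more on point')
    "r1": (["p"], "r"),           # rule r1: p => r
    "r2": (["q"], "~r"),          # rule r2: q => ~r
}
facts = {"h1", "h2"}

def forward_chain(rules, facts):
    """Return {conclusion: name of the rule that derived it}."""
    derived = {f: None for f in facts}
    changed = True
    while changed:
        changed = False
        for name, (antecedent, consequent) in rules.items():
            if consequent not in derived and all(a in derived for a in antecedent):
                derived[consequent] = name
                changed = True
    return derived

derived = forward_chain(rules, facts)
assert "r" in derived and "~r" in derived   # conflicting conclusions
clash = (derived["r"], derived["~r"])       # the rules directly in conflict
# r1 and r2 have equally specific antecedents, so specificity cannot decide
# the clash; C2's extra condition h2 is 'superior evidence' one step
# earlier in the chain.
print(clash, len(rules[clash[0]][0]) == len(rules[clash[1]][0]))
# -> ('r1', 'r2') True
```

The sketch locates the conflict at r1 versus r2; C2's extra condition lies one step earlier in the derivation and is thus 'superior evidence' in the sense of Example 6.2.4, which should not decide the conflict.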
262 CHAPTER 10

In conclusion, from these examples and also from what has been said
in 10.3.2 we can see that for systems like CABARET, modelling the heuris-
tic phase of argumentation, a logical argumentation system also has its use,
in analysing the logical nature of the information on which the heuristics
operate.

10.4.3. APPLICATIONS OF LOGIC METAPROGRAMMING

In Section 3.5.1 I described research on the use of logic programming for rep-
resenting legislation. In two later projects this tradition has been enriched
with techniques from logic metaprogramming. Hamfelt & Barklund (1989)
use such techniques for (among other things) representing legal collision
rules. Their method uses logic programming's DEMO predicate, which
represents provability in the object language. For this reason, their style of
representation is very different from my method in Chapter 8, which uses a
priority predicate. Another difference is that Hamfelt & Barklund assume a
hierarchy of separate language levels of legal knowledge. I have chosen not
to make this assumption, because of examples like the building regulations
in Section 8.5, which show that the Lex principles do not only apply to
legal object level rules, but also to each other and even to themselves.
Routen & Bench-Capon (1991) have applied metalogic programming to,
among other things, the separation of rules and exceptions. They preserve
this separation by enriching the knowledge representation language with
metalevel expressions subject_to(rule1, rule2), and by ensuring that the
metainterpreter of a logic program applies a rule only if the body of no rule
to which it is declared subject can be derived. Clearly, this nonderivability
check comes down to a defeasible assumption that a rule has no exceptions,
which can be defeated by a rule saying that there is an exception. In this
respect their method is similar to the use of general exception clauses,
explained in Chapter 5 and Section 6.6.2. However, there is one important
difference: in Routen & Bench-Capon's method a rule's antecedent does
not need to have an explicit 'no exception' assumption; instead this as-
sumption is, like in Hage & Verheij's reason-based logic, incorporated in
the metainterpreter, when it tests the nonderivability of the body of an
exceptional rule. Routen & Bench-Capon argue that this better preserves
the natural-language structure of legal texts.
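The nonderivability check can be illustrated with a small sketch. The rule contents, predicate names and the backward chainer below are invented for illustration; the actual system is a logic metaprogram, not Python.

```python
# Sketch of the 'no exception' nonderivability check: a rule is applied
# only if the body of no rule to which it is declared subject can be
# derived. Rules and names are illustrative assumptions.

rules = {
    "r1": (["contract"], "valid"),          # contract => valid
    "r2": (["contract", "minor"], "void"),  # exception to r1
}
subject_to = {"r1": ["r2"]}                 # r1 is declared subject to r2

def derivable(goal, facts):
    """Backward chaining with the implicit 'no exception' assumption."""
    if goal in facts:
        return True
    for name, (body, head) in rules.items():
        if head == goal and all(derivable(b, facts) for b in body):
            # apply the rule only if the body of no rule to which it is
            # declared subject can be derived
            if not any(all(derivable(b, facts) for b in rules[e][0])
                       for e in subject_to.get(name, [])):
                return True
    return False

print(derivable("valid", {"contract"}))           # -> True
print(derivable("valid", {"contract", "minor"}))  # -> False (r2 blocks r1)
```

With only the fact contract, r1 applies; once minor is added, the body of the exception rule r2 becomes derivable and r1 is withheld, exactly the defeasible 'no exception' assumption described above.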
Although this is indeed an elegant feature, their method also has some
restrictions. In particular, Routen & Bench-Capon do not distinguish be-
tween strict and defeasible rules, and they do not consider undecided con-
flicts or reasoning about priorities. Thus their method is better suited
for representing coherent legal texts than for modelling legal argumentation.

10.4.4. FREEMAN AND FARLEY'S DART SYSTEM

Freeman & Farley (1996) have semi-formally described and implemented a
dialectical model of legal argument. The language of their system divides
rules into three epistemic categories: 'sufficient', 'evidential' and 'default',
in decreasing order of priority. Arguments are structured as a variant of
Toulmin's argument structures (see Section 2.2.2 above). The reasoning
involved in constructing arguments is much more complicated than in my
system. Firstly, besides modus ponens DART also allows modus tollens.
Moreover, the system allows certain types of nondeductive arguments, viz.
abductive (p ⇒ q and q imply p) and a contrario arguments (p ⇒ q and ¬p
imply ¬q). Taken by themselves these inferences clearly are the well-known
fallacies of 'affirming the consequent' and 'denying the antecedent', but
Freeman & Farley deal with this by also defining how such arguments can be
attacked. In fact, such attacks are instances of Pollock's (1995) undercutting
defeaters, which deny the link between the premises and conclusion of a
nondeductive argument. For instance, the above abductive argument can
be undercut in DART by providing an alternative explanation for q, in the
form of a rule r ⇒ q.
The strength of arguments is measured in terms of the four values 'valid',
'strong', 'credible' and 'weak', in decreasing order of priority. The strength
depends both on the type of rule and on the type of argument. For in-
stance, modus tollens results in a valid argument when applied to sufficient
rules, but in a weak argument when applied to default or evidential rules.
Abduction and a contrario always result in just a weak argument. Finally,
modus ponens yields a valid argument when applied to sufficient rules, a
strong argument with default rules, and a credible argument with evidential
rules. The strength of arguments is used to compare conflicting arguments,
resulting in defeat relations among arguments, which in turn determine, as
in my dialogue game, whether a move is allowed in a dispute.
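The strength assignment just described can be summarized as a lookup table, reconstructed here from the prose; DART's actual representation may well differ.

```python
# DART's argument-strength assignment as a lookup table (a reconstruction
# from the description in the text, not DART's own code).

STRENGTHS = ["valid", "strong", "credible", "weak"]   # decreasing order

strength = {
    ("sufficient", "modus_ponens"):  "valid",
    ("sufficient", "modus_tollens"): "valid",
    ("default",    "modus_ponens"):  "strong",
    ("default",    "modus_tollens"): "weak",
    ("evidential", "modus_ponens"):  "credible",
    ("evidential", "modus_tollens"): "weak",
}
# abduction and a contrario always yield just a weak argument
for rule_type in ("sufficient", "default", "evidential"):
    for inference in ("abduction", "a_contrario"):
        strength[(rule_type, inference)] = "weak"

def stronger(a, b):
    """True iff strength a strictly outranks strength b."""
    return STRENGTHS.index(a) < STRENGTHS.index(b)

# with a default rule, modus ponens beats modus tollens
print(stronger(strength[("default", "modus_ponens")],
               strength[("default", "modus_tollens")]))   # -> True
```

Comparing conflicting arguments then reduces to comparing their strengths in this ordering.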
Interestingly, the fact that modus tollens arguments are, when applied
to default rules, weaker than modus ponens arguments yields the correct
outcome in Examples 6.3.3 and 7.2.6. For instance, in the latter example the
argument against the conclusion that Bob misbehaves is, although logically
possible, a 'weak' argument since it uses modus tollens, while the argument
for the opposite is a 'strong' argument since it uses modus ponens.
DART's dialogue game has several variants, determined by different
levels of proof. This is because Freeman & Farley maintain that different
legal problem solving contexts require different levels of proof. For instance,
for the question whether a case can be brought before court, only a 'scintilla
of evidence' is required (in my system their interpretation of this notion
corresponds to a defensible argument), while for a decision in a case 'dialectical
validity' is needed (in my system a justified argument). Moreover, the
acceptability of nondeductive arguments may also depend on the context;
if necessary, nondeductive arguments can be excluded.

Evaluation
The DART system has a heuristic flavour in that several of its features -
such as the ordering in strength of the various argument types - are given
no theoretical explanation by Freeman & Farley. For example, it would be
interesting to know why for default and evidential rules a modus tollens
argument is weaker than a modus ponens argument, while for sufficient
rules these argument types are equally strong, while in all these cases
modus tollens is nevertheless regarded as logically possible. Furthermore,
the fact that with default and evidential rules abduction and a contrario
are just as strong as modus tollens seems in need of explanation. Finally,
the variants
of DART's dialogue game are not based on an explicit argumentation-
theoretic semantics, as is my dialogue game.
In regard to some other aspects DART is more restricted than my
system. For instance, DART's language does not have weak negation, and
the only sources of rule priorities are specificity and the ordering of three
epistemic types of rules; my system, by contrast, allows any partial ordering,
which moreover is subject to debate.
However, on the positive side, Freeman & Farley give an interesting
treatment of nondeductive argument types. In particular, although they
regard such argument types not as heuristics (as I do) but as inference
modes, they are well aware that a proper treatment of nondeductive argu-
ments should not only define how they can be constructed but also how they
can be attacked. In my opinion this is important both for the 'heuristic'
and the 'inferential' view on nondeductive arguments. Moreover, Freeman &
Farley's claim that different problem solving contexts may require different
levels of proof, or even different types of arguments, is very interesting and
deserves further study.

10.4.5. THE PLEADINGS GAME

The final system that I shall discuss is Gordon's (1994; 1995) dialogue game
model of civil pleading. Since it is formally defined, using both standard
logic and an argumentation system, it is an excellent illustration of how
logic can be used as a tool in AI and law. Furthermore, compared to the
systems discussed above it contains a very important new feature, viz. the
regulated opportunity to introduce new information into a debate. For these
reasons I devote a longer discussion to the system.

General Ideas
The game is intended as a (normative) model of civil pleading, which is
the phase in a civil law case where the parties exchange arguments and
counterarguments to identify the issues that must be decided by the court.
Gordon models civil pleading as a formally defined dialogue game, in which
two parties (plaintiff and defendant) construct and attack arguments for a
claim according to certain discourse rules. Since Gordon intends his model
to be normative, these discourse rules do not consist of an actual system of
procedural civil law. In fact, they are inspired by Alexy's (1978) discourse
theory of legal argumentation, which is based on the idea that a legal
decision is just if it is the outcome of a fair and effective procedure for
disputes. It contains such principles as 'No speaker may contradict him-
self', 'Every speaker must, on demand, justify an assertion, unless he can
justify withholding the justification' and 'Every justification must contain
at least one legal rule'. Gordon shows that this procedural view on legal
argumentation also has logical aspects. The resulting formal model is in
itself interesting as a contribution to legal philosophy, but for AI and Law
Gordon intends it to serve as a specification of a computer system mediating
between players of the game, i.e. a system assuring that the rules of the
game are not violated.
The Pleadings Game is not the only AI & Law system with a procedural
view on legal reasoning. In Hage et al. (1994) a procedural account is
given of Hart's distinction between clear and hard cases. Philosophically
this account is related to the Pleadings Game but technically it is quite
different, since it does not use an argument-based system but Hage &
Verheij's reason-based logic. For this reason I shall not discuss this work in
detail.
Although both Gordon's Pleadings Game and my proof theory have
the form of a dialogue game, there is one crucial difference. While in my
game all moves are based on a fixed set of premises, which cannot be
altered by the players during the game, in the Pleadings Game the set
of premises is constructed dynamically: within the bounds defined by the
rules of the game, each player has the right to propose new premises during
the dispute. This makes the Pleadings Game a model of real disputes, in
which the parties rarely show all their cards at once but usually introduce
new statements and claims during the dispute, depending on the opponent's
moves. My game, by contrast, serves as a proof theory for an (argument-
based) nonmonotonic logic, i.e. it defines the defeasible consequences of a
given body of information. This body might be the joint premises at any
stage of a dispute, but it might also comprise the (incomplete or uncertain)
beliefs of a single person, or the norms of a single piece of legislation.
Since the Pleadings Game does not determine the consequences of a
given set of premises, but instead regulates how players can construct
the premises during a dispute, the Game needs to apply an independent
logical argumentation system for determining whether a claim is entailed by
these premises. As such a system Gordon uses the proof theory of Geffner
& Pearl's conditional entailment (see Section 9.2.7), which also serves to
define such notions as argument, counterargument and defeat. However, it
is important to note that the general architecture of the Pleadings Game
does not crucially rely on conditional entailment; any argumentation sys-
tem of sufficient expressiveness could be used, for instance, the system of
Chapters 6-8 of this book.
In the Pleadings Game the introduction of new premises is regulated
by definitions of new types of moves, and by discourse rules for when these
moves are possible. Let us first look at the moves.4

Types of Moves
If we disregard the first move of the game, then in my game the only
possible type of move is stating a counterargument to the other player's
previous move. Although this move is also possible in the Pleadings Game,
introducing premises gives rise to new types of moves. In fact, a new type
of move is not necessary for proposing, or "claiming" a new premise; this
can be modelled as the fact that a stated argument contains the new
premise. Only responding to a claim requires new moves: in particular,
a player can deny a newly proposed premise (a "claim"), which makes it
obligatory either to state an argument for the claim or to leave the claim
as an issue for trial, and a player can concede a claim, after which the
claim becomes part of the premises. Finally, a player can also concede an
argument. However, this just has the procedural role of giving up the right
to state a counterargument; the player conceding an argument can still
deny its premises.
How can these new types of moves in turn be answered? Obviously, a
concession need not be answered, but for a denial the situation is different.
Since a denial does not itself claim anything, a counterargument against a
denial is impossible. Instead the Pleadings Game gives two other possibil-
ities. Firstly, a denial can be answered by a denial of the denial ('no, my
premise is OK'), against which no response is possible and the claim is left
as an issue for trial. Alternatively, a denial of a claim can be answered by
an argument for the claim. Such an argument keeps the game going, since
the other party can (as just explained) produce a counterargument or deny
any new claims of the argument.
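The move types and their possible replies, as just described, can be summarized in a small table. The move names are my own labels, and the table compresses Gordon's much richer definitions.

```python
# Move types of the Pleadings Game and their possible replies, as read
# from the prose above (labels and table are my own simplification).

replies = {
    "argue":            ["counterargue", "concede_argument",
                         "deny_claim", "concede_claim"],
    "deny_claim":       ["deny_denial", "argue_for_claim"],
    "deny_denial":      [],   # no response: the claim is left for trial
    "concede_claim":    [],   # the claim joins the premises
    "concede_argument": [],   # only gives up the right to counterargue
}

def requires_answer(move):
    """A move requires an answer iff some reply to it is possible."""
    return bool(replies.get(move, []))

print(requires_answer("deny_claim"))   # -> True
print(requires_answer("deny_denial"))  # -> False
```

The empty entries make precise which moves close off a line of the dispute: a denied denial leaves the claim for trial, while concessions end the exchange on that point.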
4 For explanatory purposes I here (and at some other points) simplify the Pleadings
Game.

The Rules for Playing the Game

Now that we know which types of moves are possible, we can describe how
the game is played, i.e. when a move can or should be made. This is specified
by a set of 'discourse rules'. It would take too much space to describe these
rules completely; therefore I confine myself to giving a brief impression.
According to the rules, the game starts when the plaintiff states its main
claim (again simplifying Gordon's definitions, this can be modelled as an
argument that just contains a premise). Then at each turn a player has the
obligation to respond to every move of the opponent that concerns an issue
and therefore requires an answer (such as a new argument, or a denial of
a claim). Consequently, the Pleadings Game must (unlike my game) allow
more than one move at each turn, since the other party may have introduced
(or denied) more than one claim, which all have to be answered. Thus a
turn of the game is a series of moves by one player, together answering all
moves of the other party that require an answer. Because of the obligation
to respond, the game continues as long as there are moves of the other
party to be answered; the game terminates if at the beginning of a turn no
answer is required.

The Result of a Game

When a game terminates, the question arises as to its result. This is twofold:
first a list of issues identified during the game and, secondly, a winner, if
there is one.
Gordon's new definition of an issue repairs the flaws of the old one,
in taking the dialectical interaction between arguments into account, as
well as the procedural setting. Roughly, an issue is a claim of one of the
parties that is dialectically relevant for deciding the main claim, and that
has not been conceded by the other party. Whether a nonconceded claim is
relevant for deciding the main claim can be determined by inspecting the
'dialectical graph' of the main claim. This graph is defined in the same way
as my dialogue trees, except that Gordon's dialectical graphs contain not
only all moves of the opponent (defendant) that are made possible by the
premises, but also those of the proponent (plaintiff). Now a formula I is
an issue with respect to a goal G if and only if I occurs as a claim in the
dialectical graph for G and I is not implied by the conceded premises.
Finally, winning is defined relative to the set of premises constructed
during a game. If issues remain, there is no winner and the case must be
decided by the court. If no issues remain, then the plaintiff wins iff its
main claim is conditionally entailed by the constructed premises, while the
defendant wins otherwise.

An Example
Let us illustrate the Pleadings Game with an example. For the sake of
illustration I simplify the Game on several details, and use a different (and
semiformal) notation. For notational convenience each formula is numbered,
while the move indicators are in bold.
The example concerns a dispute on offer and acceptance of contracts; the
legal background is the Dutch Civil Code, Sections 6:217-230. The players
are called plaintiff (π) and defendant (δ). Plaintiff, who had made an offer
to defendant, starts the game by claiming that a contract exists. Defendant
denies this claim, after which Plaintiff supports it with the argument that
defendant accepted his offer and that an accepted offer creates a contract.
π1: Argue[ (1) Contract ]
δ1: Deny(1)
π2: Argue[ (2) Offer, (3) Acceptance,
           (4) Offer ∧ Acceptance ⇒ Contract ]
Now defendant attacks plaintiff's supporting argument [2,3,4] by defeating
its subargument that he accepted the offer. The counterargument says that
defendant sent his accepting message after the offer had expired, for which
reason there was no acceptance in a legal sense.
δ2: Concede(2,4), Deny(3),
    Argue[ (5) "Accept" late, (6) "Accept" late ⇒ ¬Acceptance ]

Plaintiff responds by strictly defeating δ2 with a more specific counterargu-
ment, saying that even though defendant's accepting message was late, it
still counts as an acceptance, since plaintiff had immediately sent a return
message saying that he recognizes defendant's message as an acceptance.
π3: Concede(5), Deny(6),
    Argue[ (5) "Accept" late, (7) "Accept" recognized,
           (8) "Accept" late ∧ "Accept" recognized ⇒ Acceptance ]
The reason that π3 denies (6) is that conceding both premises of [5,6] would
make his subsequent counterargument contradict his concessions, which
is forbidden by the rules of the game. Actually, in the Pleadings Game
defeasible rules cannot themselves be claimed, conceded or denied, since
in conditional entailment (as in my system) a defeasible rule cannot be a
conclusion of an argument. Instead these moves concern the claim that a
rule is backed (cf. Toulmin, 1958). For simplicity I ignore this complication.
Defendant now attempts to leave the issues for trial by conceding π3's
argument (thus giving up the right to state a counterargument) and its
premise (8), and by denying one of the other premises, viz. (7) (he had
already implicitly claimed premise (5) himself, in δ2). Plaintiff goes along
with defendant's aim by simply denying defendant's denial and not stating
a supporting argument for his claim, after which the game terminates.
δ3: Concede(8,[5,7,8]), Deny(7)
π4: Deny(Deny(7))
This game has resulted in the following dialectical graph (since it is a
dialectical graph, the terms 'proponent' and 'opponent' are appropriate).

P1: [2,3,4] for Contract
O1: [5,6] for ¬Acceptance
P2: [5,7,8] for Acceptance
The claims in this graph that have not been conceded are
(1) Contract
(3) Acceptance
(6) "Accept" late ⇒ ¬Acceptance
(7) "Accept" recognized
So these are the issues.5 Moreover, the set of premises constructed during
the game, i.e. the set of conceded claims, is {2, 4, 5}. It is up to the judge
whether or not to extend it with the issues (6) and (7). In each case the
proof theory of conditional entailment must be used to verify whether the
other two issues, in particular plaintiff's main claim (1), are (defeasibly)
implied by the resulting premises. In fact, it is easy to see that they are
entailed only if (6) and (7) are added.
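The issue computation in this example can be sketched as follows. Two deliberate simplifications: 'implied by the conceded premises' is reduced to plain set membership, and the concession of rule claim (8) in the defendant's last turn is counted like any other concession, ignoring the backing complication.

```python
# Sketch of the issue computation for this example: an issue is a claim
# occurring in the dialectical graph that has not been conceded (implication
# is simplified to membership; the real Game uses conditional entailment).

graph_claims = {1, 2, 3, 4, 5, 6, 7, 8}   # claims occurring in the graph
conceded = {2, 4, 5, 8}                   # conceded during the game

issues = sorted(graph_claims - conceded)
print(issues)   # -> [1, 3, 6, 7]
```

The result reproduces the four issues listed above.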
It is important to note that the above outcome of the game is completely
contingent, since at each turn the players might have introduced, conceded,
or denied different claims. This once more illustrates the difference between
using a dialogue game as a theory construction game or as a proof theory
for nonmonotonic reasoning.
Finally, in the above sketch I have ignored several interesting aspects of
the Pleadings Game. For instance, one concern of Gordon is to support an
efficient reasoning process, to formalise part of the 'effectiveness' require-
ment of procedural justice. For the details, and for other interesting aspects
of the Game, I refer to Gordon's own discussions.

Evaluation
How must we evaluate Gordon's Pleadings Game in the light of Chapters
6-9? We have seen that at several specific points criticism is possible, for
instance with respect to Gordon's use of conditional entailment, which
hardwires specificity in the logic and which prevents a proper treatment
of examples like Example 6.3.3. Moreover, it seems to me that the set of
5 More accurately, the third issue is the claim that rule (6) is backed.
moves of the game should also include the possibility to retract claims, since
in real legal disputes this is what the parties often do.
However, more important is Gordon's positive contribution. He has
given an excellent illustration of how (deductive and nonmonotonic) logic
can be used as a tool in modelling legal argument. It is used to capture
notions such as 'argument', 'counterargument' and 'defeat', and to deter-
mine the defeasible consequences of the set of premises constructed during
a dispute. In addition, Gordon has shown that the discourse rules that use
these notions can also be formalized, although interestingly he does not
formalize these rules in logic but as a procedure, with notions from game
theory.
Finally, Gordon's Pleadings Game is a clear example of the proper
role of standard logic and logical argumentation systems in models of
argumentation. However, since this amounts to an evaluation of a main
contribution of this book, I shall now explain this further in a separate
section.

10.5. Four Layers in Legal Argumentation


THE LOGICAL, DIALECTICAL AND PROCEDURAL LAYERS

Procedural accounts of legal reasoning (and other kinds of practical rea-
soning) agree with Toulmin's (1958, pp. 7-8) advice that logicians who
want to learn about reasoning in practice should turn away from math-
ematics and instead study jurisprudence, since outside mathematics the
validity of arguments would not depend on their syntactic form but on
the disputational process in which they have been defended. According
to Toulmin an argument is valid if it can stand against criticism in a
properly conducted dispute, and the task of logicians is to find criteria
for when a dispute has been conducted properly; moreover, he thinks that
the law, with its emphasis on procedures, is an excellent place to find such
criteria. Toulmin himself has not carried out his suggestion, but others have.
For instance, Rescher (1977) has sketched a dialectical model of scientific
reasoning. Among other things he claims that such a model can explain the
feasibility of inductive arguments: they must be accepted if they cannot be
successfully challenged in a properly conducted scientific dispute. A formal
reconstruction of Rescher's model has been given by Brewka (1994b). In
legal philosophy I have already mentioned Alexy's (1978) discourse theory
of legal argumentation, based on the view that a legal decision is just if
it is the outcome of a fair procedure. A similar view on argumentation in
general underlies the so-called 'pragma-dialectical' school of argumentation
theory (van Eemeren & Grootendorst, 1992), which has been applied to
legal reasoning by Feteris (1996), Kloosterhuis (1996) and Plug (1996). And
in AI Loui (1997) has defended a procedural view on rationality. According
to him such a view can explain why nondeterministic reasoning can still
be rational, viz. if this reasoning takes place in the context of a fair and
effective protocol for dispute. Finally, we have seen that Gordon's Pleadings
Game takes its point of departure in Alexy's views.
Are such procedural accounts of (legal) argument indeed rivals to log-
ical accounts such as my system? Or do these accounts address different,
although related, issues? In fact, our discussion of the Pleadings Game has
already revealed that they are perfectly compatible with each other. If we
abstract from the particular features of the game, we see that it is based
on a three-layered model of legal argumentation, which I shall call the logic
layer, the dialectical layer and the procedural layer (earlier described in
Prakken, 1995b).
Firstly, procedural models contain a logic layer. For example, one of the
procedural rules above says that a party may not contradict himself; clearly,
whether this happens is determined by logic. And the question whether an
argument supports its conclusion at all, i.e. without even looking at possible
counterarguments, is also determined by an underlying account of the rela-
tion between premises and conclusions of an argument, i.e. by an underlying
logic. Alexy (1978) also recognizes this, since within his procedural account
of legal argumentation he pays much attention to the logical structure
of individual arguments. In addition to a logic layer, a procedural model
of legal argument contains a dialectical layer,6 at which such notions as
'counterargument', 'attack', 'rebuttal' and 'defeat' are defined and at which,
given a set of premises and evaluation criteria, it is determined which of the
possible arguments prevail. As we have seen, these are the notions defined
by logical argumentation systems. Finally, there is the procedural layer,
which regulates how an actual dispute can be conducted, i.e. how parties
can introduce or challenge new information and state new arguments. In
other words, this level defines the possible speech acts, and the discourse
rules for when and how these speech acts can be performed. The Pleadings
Game is an example of a formalization of this layer.
We can now identify the proper role in models of argumentation of
systems for defeasible argumentation, such as the one defined in Chapters 6-
8 of this book: they serve as an indispensable link between a standard logical
system and a procedural model of disputation. It is perhaps insightful to
view the link between an argumentation system and a disputation model as
follows. We can say that in a model for dispute an argumentation system
receives a temporal index, relative to the state of the debate. To see this,
recall first that the output of an argumentation system is relative to its
input: argumentation systems (or better, their logic component) determine
6 In Prakken (1995b) I called this the 'argumentation framework' layer.
the space of possible arguments on the basis of a given set of premises,
and they determine the status of these arguments on the basis of the input
ordering. By contrast, a procedure for dispute is not defined over static
information but over a sequence of changing input states, created from each
other by the introduction of new statements and claims during the dispute.
Now the task of a procedure for dispute is to regulate such information-
changing moves of the players, while the task of an argumentation system
is, every time the information has changed, to determine the status of the
arguments that are possible in the new state, given the evaluation criteria
available in the new state.
The importance of this multi-layered model is that it shows that eval-
uating the rationality of legal argumentation involves many aspects, even
if we are only interested in matters of form (in the sense of 'logical' or
'mathematical' form). Legal theorists interested in the form of legal argu-
mentation have long stressed only that the content of a legal decision should
be reconstructible as a deductive argument (see e.g. MacCormick, 1978;
Soeteman, 1989). However, we have seen that legal argumentation also has
dialectical and procedural features and that these features also have formal
aspects and can thus partly be evaluated in formal terms.

THE PROCEDURAL ASPECT OF DEFEASIBILITY IN LAW

The connection between the dialectical and procedural layer also sheds
light on some early discussions in legal philosophy on the defeasibility of
legal reasoning (originating from before the rise of nonmonotonic logic!).
Hart (1949) (extensively discussed in Baker, 1977 and Loui, 1995) puts
defeasibility in the pragmatic context of legal procedures. Very often in
a legal case, when the proponent of a claim has proven facts that could
lead to granting the claim, this does not have the effect that the case is
settled; instead the burden of proof is shifted to the opponent, whose turn
it then is to prove additional facts which, despite the facts proven by the
proponent, nevertheless prevent the claim from being granted. Clearly this
feature of legal procedures cannot be understood without admitting that
legal reasoning is logically nonmonotonic: if the conclusions supported by
the proponent's evidence were deductively valid, new information could
never change the result of the procedure, so a shift of the burden of proof
would be pointless. Yet, Hart does not comment on the implications for
logic (but see later MacCormick, 1995): although he discusses defeasibility
in terms that later became very common in AI to defend the development
of nonmonotonic logics7, he just regards it as an aspect of legal procedures;
he does not discuss how it can be reconciled with the view that judicial
7 As illustrated in detail by Loui (1995).
USING THE ARGUMENTATION SYSTEM 273

reasoning is still subject to the laws of logic. The present study (and related
work, such as Sartor, 1995) gives insight into how such a reconciliation
is possible: the procedural level of legal argumentation presupposes not
only a logical level but also a dialectical level, at which arguments can
be defeated by stronger counterarguments, resulting in a nonmonotonic
notion of logical, or 'argumentative', consequence. Perhaps this analysis
meets MacCormick's (1995, p. 114) call to logicians to develop systems
that capture the pragmatic nature of legal defeasibility.
Interestingly, similar observations have been made by Baker (1977) in
his discussion of Hart (1949), albeit in purely informal terms. He proposes
the idea of a "C-relation" between premises and conclusions of an argu-
ment, which is in fact a relation of nonmonotonic consequence. Baker then
remarks that the effect of establishing a C-relation between a set of known
premises and a claim is that the burden of proof shifts to the person who
challenges the claim, to find further evidence which defeats the C-relation
established by his opponent. The present analysis can also be regarded as
a formalization of Baker's observations.

A FOURTH LEVEL: STRATEGY

In fact, besides the logical, dialectical and procedural level we can even
identify a fourth level of what may perhaps be called strategy, at which
strategies and tactics for playing the game are identified. Note that a
procedural model like the Pleadings Game only defines when a dispute
has been conducted according to procedural rules, just as the rules of chess
only define how a game of chess can be played according to the rules of a
game. However, just as being a good chess player requires much more
than knowing the rules of the game, being a good legal debater requires
much more than knowing the rules of procedural justice.
Now one interesting challenge for legal philosophy and AI-and-law is to
study what are the good ways to conduct a legal dispute. Perhaps in philos-
ophy part of the research of Perelman (1969; 1976) can already be regarded
as of this kind. And some AI-and-law projects also study argumentation
heuristics and strategies. For instance, the 'prototype-and-deformations'
model underlying McCarty's TAXMAN II system (see Section 3.5.2) can be
regarded as an argumentation strategy. And the HYPO system studies 3-ply
strategies for analogical (and a contrario) reasoning with legal precedents,
while Rissland & Skalak (1991) and Skalak & Rissland (1992) extend this
research to combined reasoning with rules and cases. In fact, if my view
in Section 2.3 on analogical reasoning is correct, viz. that it is not a form
of inference but a heuristic for suggesting new premises, then the level at
which analogy should be addressed is the strategic level; and this is indeed
what HYPO and CABARET, as I interpret them, do.


For legal philosophy this research into the 'heuristics' of legal argument
is interesting in itself, and for AI and law it points at an interesting (al-
though very ambitious) long-term research goal: if strategies are formalized,
then a computer implementation of, say, the Pleadings Game could do more
than just mediate between the players of the game; it could also play the
game itself.
CHAPTER 11

CONCLUSION

Assuming that logic can provide theoretical foundations for Artificial In-
telligence research, this book has aimed at giving a logical analysis of
two important aspects of legal reasoning which are sometimes believed
to escape such an analysis: of defeasible reasoning, i.e. of reasoning with
rules which are implicitly subject to exceptions, and of reasoning with
inconsistent information. A secondary aim has been to clarify the role of
logic in legal reasoning, particularly, to show that logic can also be useful in
the analysis of noninferential kinds of reasoning, like analogical reasoning.
Both aims have been satisfied, as I will summarize in this chapter. The
observations on the role of logic in legal reasoning are not new: what I
have mainly done is to make them more specific for AI-and-law research,
in order to avoid misunderstandings on the nature of my investigations.
And the credits for showing that defeasible reasoning and reasoning with
inconsistent information can be logically analyzed should also not go to me;
my research has been a contribution to developments initiated by others,
partly by applying these developments to the legal domain and partly by
adding something new to the developments themselves.

11.1. Summary

To break the ground for the main investigations of this book, I started in
Chapter 2 with a discussion of the role of logic in legal reasoning. Both in
legal theory and AI-and-law research the usefulness of logic in analysing
legal reasoning has been disputed. It became apparent that some of the
arguments raised against logic are based on misconceptions of what logic
is and how it can be used. Other doubts on logic, however, turned out to
be based on the idea that the kinds of reasoning which are traditionally
studied by logic are the only ones which can be logically analyzed: partic-
ularly reasoning with rules which are subject to exceptions, and nontrivial
reasoning with inconsistent information would fall outside the scope of a
logical analysis. And since, because of the open, unpredictable nature of
the world to which the law applies, and the many competing interests and
opinions involved in legal disputes, these kinds of information are abundant
in the legal domain, a logical analysis of legal reasoning would be of little
use, so it is said.

H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
In Chapter 3 we saw in detail that legal reasoning often indeed operates
on defeasible and inconsistent information. In addition, we saw that the
way in which legal texts separate general rules from exceptions cannot be
accounted for with standard logical means. However, much of the rest of
this book was devoted to showing that these phenomena do not escape a
logical analysis at all, while, moreover, they are logically related to each
other. First, in Chapter 4, I gave a brief sketch of new logical developments
on modelling the two investigated kinds of reasoning, most of which are
the result of AI research on modelling common-sense reasoning. After that
I studied the application of some of the new developments to the legal
domain. It appeared that reasoning with rules which are subject to ex-
ceptions can be modelled in two ways: firstly, as reasoning with explicit
exception clauses which are assumed false unless the contrary is shown,
which method was investigated in Chapter 5; and secondly, as choosing the
most specific of conflicting conclusions. After observing that this second way
of modelling defeasible reasoning is in fact a special case of reasoning with
inconsistent information, I made in Chapters 6, 7 and 8 a contribution
to the new logical developments themselves: I showed that both defeasible
reasoning and inconsistency tolerant reasoning can be modelled as instances
of the process of constructing and comparing arguments for incompatible
conclusions. Although the main source of inspiration for this part of the
research has been the legal domain, it is stated in a sufficiently general way
to make it a contribution to general AI research on modelling common-sense
reasoning.
In Chapter 9 it turned out that the system was an instance of a new,
argument-based development in AI research on nonmonotonic reasoning,
and in Chapter 10 I applied both this general approach and my particular
system to various issues, in particular to issues in knowledge representation
and implementation, to Toulmin's criticism of standard logic, and to the
role of logic in noninferential forms of reasoning. Then I tried to justify
the assumption of this book that logic can be used as a standard for imple-
mented systems, by using my system, and the general idea of argumentation
systems, in a logical analysis of some implemented AI-and-law systems.
Finally, I discussed how from this book a four-layered picture of legal
argumentation has emerged, connecting the logical, dialectical, procedural
and strategic aspects of legal reasoning.

11.2. Main Results

I will now recapitulate some more specific conclusions. It should be stressed


that their relevance is not restricted to legal philosophy and AI-and-law:
as the discussion in Section 1.3 shows, the role of logic in reasoning is not
only discussed in the legal domain, but also in AI and philosophy in general;
and, moreover, the patterns of reasoning which have been analyzed can not
only be found in law, but also in many other domains of common-sense
reasoning.

THE ROLE OF LOGIC IN LEGAL REASONING

With respect to the role of logic in legal reasoning the main conclusions
are that using logic does not commit to the 'axiomatic' or even to the
'naive deductivist' view on reasoning. That is, logic does not commit to
the narrow view on reasoning as no more than running a sound deductive
theorem prover over formulas in some logical language: it leaves room for
other reasoning activities, like induction, analogical reasoning and ways of
arguing against a rule. Logic even plays a positive role in the description
of these activities, since it defines the logical meaning of the formalisms
on which they operate; to put it another way, these activities use logic as
a tool, since they aim at suggesting or rejecting information with which a
desired or undesired conclusion can be derived, and deriving conclusions is
a matter of logic.
In terms of legal reasoning, logic does not enforce a mechanical appli-
cation of statute norms and established precedent, without any regard to
considerations of justice and fairness or to socio-political demands. The
simple reason is that the force of a logical conclusion is ultimately based
on the force of the premises: if the conclusion is not accepted, then the
premises can be changed and these changes can be based on any ground
whatsoever. The new developments discussed in this book even account for
an additional possibility: while in classical logic the validity of a conclusion
can only be affected by removing or changing premises, in nonmonotonic
logics this can also be done by adding new premises.

DEFEASIBLE REASONING

It has become apparent that nonmonotonic logics are able to cope with
rules which are implicitly subject to exceptions. They can do so since they
deviate in one remarkable respect from standard logic: in these logics the
mere addition of new premises without changing or removing old ones
can invalidate previously valid conclusions. Of course, what falls outside
the scope of nonmonotonic logics is deciding that a newly considered case
gives rise to an exception, since this is a matter of content; the advantages
of nonmonotonic logics are of a formal nature: they provide for ways of
jumping to general conclusions if not all information about exceptions
is available and for ways of drawing the exceptional conclusion if a new
exceptional rule is externally added or if new factual information makes an
already existing exceptional rule applicable.
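This nonmonotonic behaviour can be sketched in a few lines of Python (a toy illustration, not the book's formal system; the misdemeanour/self-defence rule is a hypothetical example): a defeasible rule yields its conclusion only as long as no exception is among the premises, so merely adding a premise can retract a previously derivable conclusion.

```python
# Toy illustration of nonmonotonicity: a defeasible rule fires unless
# an exception is present, so adding premises can retract conclusions.
def conclusions(premises):
    concs = set(premises)
    # Defeasible rule: misdemeanour => punishable, unless self-defence.
    if "misdemeanour" in concs and "self-defence" not in concs:
        concs.add("punishable")
    return concs

print(conclusions({"misdemeanour"}))
# With the extra premise, the earlier conclusion is no longer drawn:
print(conclusions({"misdemeanour", "self-defence"}))
```

In a deductively valid setting the second call could only add conclusions; here the extra premise removes one, which is exactly what makes a shift of the burden of proof meaningful.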
In investigating existing logics for defeasible reasoning I have focused on
two general ideas to obtain these advantages. The first is to attach explicit
exception clauses to defeasible rules, and to assume these clauses to be false
unless the contrary is shown. The second way is to allow for the possibility of
incompatible premises and, if adding exceptional information indeed gives
rise to conflicting conclusions, to choose the one that is based on the most
specific information. We have seen that in both methods the separation in
legal texts of rules and exceptions can be preserved in the formalization.
Moreover, it has appeared that most nonmonotonic logics can be used in
modelling either of these two approaches. Of the first I have only studied
ways of formalizing it in existing nonmonotonic logics. It has turned out
that, as long as undecided conflicts can be avoided, this approach can be
formalized and applied to the legal domain rather well and, moreover, rather
efficiently implemented with logic programming techniques. However, an
important conclusion is that if conflicts between rules cannot be broken
down in favour of one of them, the exception clause approach loses much
of its attraction.
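The two formalization methods can be contrasted in a small sketch (illustrative Python with a hypothetical minors-and-contracts example; neither encoding is the book's actual formalism). The first attaches an explicit exception clause that is assumed false unless provable; the second lets conflicting rules fire and chooses the one with the more specific antecedent.

```python
# Two toy encodings of "rules with exceptions" (illustrative only).

# Method 1: explicit exception clause, assumed false unless provable
# (negation as failure): the general rule carries "and not exception".
def method_exception_clause(facts):
    exception = "minor" in facts          # exceptional rule proves the clause
    if "owns_property" in facts and not exception:
        return "may_sell"
    return None

# Method 2: both rules may fire; the conflict is resolved by
# preferring the rule whose antecedent is more specific (a superset).
def method_specificity(facts):
    fired = []
    if "owns_property" in facts:
        fired.append(({"owns_property"}, "may_sell"))
    if "owns_property" in facts and "minor" in facts:
        fired.append(({"owns_property", "minor"}, "not_may_sell"))
    if not fired:
        return None
    # choose the rule with the most specific (largest) antecedent
    best = max(fired, key=lambda rule: len(rule[0]))
    return best[1]

facts = {"minor", "owns_property"}
print(method_exception_clause(facts))  # the exception blocks the rule
print(method_specificity(facts))       # the more specific rule wins
```

The sketch also shows why the first method presupposes that every conflict can be decided in advance: if neither rule can be designated the exception, there is no clause to attach, whereas the second method simply leaves the conflict to be resolved (or left undecided) when comparing the fired rules.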

REASONING WITH INCONSISTENT INFORMATION

The second way of modelling reasoning with exceptions is better suited
for dealing with undecided conflicts, since it is a special case of reasoning
with inconsistent information. Some interesting systems modelling that
kind of reasoning were already developed by others, e.g. by Alchourron
& Makinson (1981), Poole (1985; 1988) and Brewka (1989). However, upon
closer investigation these systems turned out to have some serious short-
comings. The most important one is that they fail to represent the step-
by-step nature of choosing between inconsistent premises, which nature
arises from the fact that proofs and arguments are also constructed step-by-
step. An important conclusion of this book has been that this shortcoming
can be avoided if inconsistency handling is formalized as constructing and
comparing arguments for incompatible conclusions, i.e. in terms of an argu-
mentation system. An additional important conclusion has been that if in
such a system the premises are still expressed in standard logic, it still does
not capture the proper nature of rules which are subject to collision rules:
the reason is that the contrapositive properties of the material implication
give rise to arguments which intuitively cannot be constructed at all. I
have shown that these problems can be avoided if defeasible conditional
premises are expressed as one-direction rules, i.e. rules for which contrapos-
itive inferences like modus tollens are invalid. Within the resulting system
I have studied some particular ways of comparing arguments, viz. checking
for specificity and using arbitrary partial rule-orderings. Moreover, I have
given a formal account of situations in which the standards for comparing
arguments are themselves (defeasibly) derived from the premises. This has
made it possible to formalize the combined use of various legal collision
rules and, perhaps more importantly, to formalize debates arising from
disagreement on the standards for comparing arguments.
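The point about one-direction rules can be illustrated with a minimal forward-chaining sketch (hypothetical Python, not the book's proof theory): because rules are applied only from antecedent to consequent, the contrapositive inference from the denial of a consequent never arises.

```python
# One-direction defeasible rules (a sketch): forward chaining only,
# so contrapositive inferences such as modus tollens cannot arise.
RULES = [({"thief"}, "punishable")]          # thief => punishable

def forward_closure(facts):
    concs = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= concs and head not in concs:
                concs.add(head)
                changed = True
    return concs

# Under material implication, {not_punishable} would license the
# contrapositive argument for not_thief; here nothing is derived.
print(forward_closure({"not_punishable"}))
print(forward_closure({"thief"}))
```

This is only the inference direction of the idea; in the full system the one-direction rules are in addition defeasible, so even a forward conclusion can be overridden by a stronger counterargument.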
In Chapter 9 the argumentation system developed in this book turned
out to be an instance of a new development in AI, where nonmonotonic
reasoning is formalized as the process of constructing and comparing argu-
ments. In my view the main contributions of this book to this development
are the analysis of the problems that arise if defeasible rules validate modus
tollens, a study of the combined use of exception clauses and priorities, the
formalization of reasoning about priorities, and an analysis of the proper
role of logical argumentation systems in terms of a four-layered picture of
argumentation.

THE ROLE OF ARGUMENTATION SYSTEMS IN LEGAL REASONING

One of the points of departure of this book was that logic should be
regarded as a tool in reasoning. For instance, we have seen several times that
interpretation, although being an extra-logical activity, still presupposes
logic, viz. when the consequences of alternative interpretations are tested.
Now Chapters 6-8 have formalized certain kinds of logical tools for legal
argument, viz. those useful when legal knowledge is incomplete or uncertain,
or when lawyers have conflicting points of view. The four-layered view
on legal argumentation developed in Section 10.5, distinguishing a logical,
dialectical, procedural and strategic layer, reveals the nature of these tools:
they guide those forms of reasoning that are aimed at obtaining a certain
dialectical outcome. For instance, at the third, procedural level, we have
seen that Gordon's Pleadings Game allows only those new arguments that
have the potential to change the dialectical status of a claim. And in
Section 10.3.2 we have seen that the fourth, strategic layer studies strategies
for obtaining a certain result at the dialectical level.

CONCLUSIONS FOR LEGAL PHILOSOPHY

Although the present study of legal reasoning was started as an AI-and-
law project, it is also a contribution to legal philosophy, in particular if
the purely technical and computational aspects are disregarded. To start
with, this book has provided a better understanding of certain legislation
techniques, viz. general exception clauses and collision rules, showing that,
although going beyond standard logic, they can still be analyzed in logical
terms. Secondly, I have given a formal account of legal reasoning as rea-
soning about legal knowledge instead of as just mechanically applying it,
thus giving a more realistic picture of legal reasoning. In particular, I have
shown how debates on the validity, or 'backing' of a legal rule, and debates
on the comparison of arguments, can be formalized.
This book is also relevant for discussions on the logical reconstruction
of judicial decisions. Writers such as MacCormick (1978), Alexy (1978) and
Soeteman (1989) have argued that as a logical minimum of rationality it
should be possible to recast judicial decisions in deductive form. These
writers are well aware that this does not imply the naive deductivist view
on legal reasoning that I described above in Section 2.2, and they also
acknowledge that beyond this logical minimum there are other rational
constraints. What this book has added to the discussion is that so me of
these other constraints are still of a formal-Iogical nature: often the content
of a judicial decision has a dialectical structure, and this book provides the
logical tools to rationally reconstruct it.
Furthermore, my research has brought some further clarity in regard
to early discussions of legal defeasibility, which stressed its procedural role
but largely left the implications for logical analyses of judicial reasoning
untouched. My distinction between a logical, dialectical and procedural
level of legal argument explains why legal procedures allow for burden-
shifting, while it still makes sense to reconstruct the content of judicial
decisions in logical form.
Finally, after also distinguishing a fourth, strategic level, a four-layered
picture of legal argumentation has emerged which, I hope, has helped to
put some well-known existing legal-philosophical work in perspective.

CONCLUSIONS FOR APPLICATIONS

The above conclusions mainly concerned the logical and philosophical as-
pects of the investigations, but with respect to applications to knowledge-
based systems also some conclusions can be drawn. One of them is that
in modelling defeasible reasoning the choice approach is computationally
more complex than the exception clause approach, but is also more widely
applicable. The main reason for its complexity is the need to compare and
choose between conflicting answers; by contrast, the aim of the exception
clause approach is to avoid conflicting answers, which leads to a simpler
problem solving task. However, unique answers can be obtained only on
restricted domains of application: particularly reasoning with inconsistent
information and metalevel reasoning soon require the possibility to present
alternative arguments and to argue about their strength. In conclusion, as
so often in AI research a trade-off is necessary between expressiveness and
tractability.
For knowledge engineering an important conclusion has been that when
it comes to a modular way of designing knowledge bases the several ways
of representing exceptions do not differ much: in all formalization methods
and all formalisms it is better to keep an overview over the entire domain in
translating an individual expression: particularly, preserving the separation
of rules and exceptions does not necessarily support a modular formaliza-
tion process. For using exception clauses and priorities similar observations
were already made by others, but with respect to using a syntactic or
semantic specificity check these conclusions seem to be new.

11.3. Implications for Other Issues


WAYS OF FORMALIZING NONMONOTONIC REASONING

It has not been the aim of this book to make a comparative study of ways of
modelling nonmonotonic reasoning, particularly not to advocate one way
as the best. In fact, it might even be argued that there is no such best
way: it might be that there is no unique kind of reasoning which can be
called 'nonmonotonic reasoning', but that there is a variety of reasoning
patterns of which one common feature is nonmonotonicity but which can
differ in many other respects. In this view, argumentation systems are more
a formalization of some of these reasoning patterns than a general proposal
on how to model nonmonotonic reasoning.
Nevertheless, from my investigations some conclusions on this issue
can still be drawn. Firstly, to my knowledge the law is one of the few
domains in which nonmonotonic logics have been tested on more than just
toy examples, and in this respect it is perhaps significant that especially
argument-based systems have turned out to be suitable. And if this obser-
vation is combined with the fact that general argumentation frameworks,
like the BDKT approach, are able to unify existing nonmonotonic logics,
the conclusion can be drawn that the argument-based approach to the
formalization of nonmonotonic reasoning is at least very promising.
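The core idea of such general argumentation frameworks can be sketched as follows (a minimal Python rendering of grounded, i.e. sceptical, semantics in the style of Dung-like frameworks; the three-argument example is hypothetical, and this is far simpler than the book's own system): an argument is justified iff every one of its defeaters is itself defeated by a justified argument.

```python
# Minimal sketch of argument-based semantics (grounded extension):
# given arguments and a binary defeat relation, the justified arguments
# are the least fixed point of the 'acceptability' operator.
def justified(arguments, defeats):
    defeaters = {a: {b for (b, c) in defeats if c == a} for a in arguments}
    just = set()
    while True:
        # acceptable w.r.t. just: every defeater is defeated by just
        new = {a for a in arguments
               if all(any((j, d) in defeats for j in just)
                      for d in defeaters[a])}
        if new == just:
            return just
        just = new

args = {"A", "B", "C"}
defeats = {("B", "A"), ("C", "B")}   # C defeats B, B defeats A
print(sorted(justified(args, defeats)))   # A is reinstated by C
```

With an undefeated defeater of B, the argument A is reinstated; with a mutual, undecided conflict (A and B defeating each other) neither is justified, which corresponds to a pair of merely defensible arguments.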
Some more specific conclusions can also be drawn. To start with, we have
seen that a formalization of reasoning with priorities on premises, which
is an important kind of nonmonotonic reasoning, needs more sophisticated
tools than is often presumed; particularly, there should be a way to account
for the step-by-step nature of comparing premises. We have also shed more
light on the role of the specificity principle in nonmonotonic reasoning.
We have seen that as a notational convention for exceptions it does not
induce a modular formalization process, since the specificity form often
just records a choice on other grounds, requiring a global view on the
available information. And we have seen that specificity is not very useful
as a common-sense principle for dealing with conflicting information: in
practice many other criteria are used, and many of them are more important
than specificity.
A final conclusion is that attempts to model nonmonotonic reasoning as
inconsistency tolerant reasoning in standard logic are far less attractive than
is often assumed. The reason is that conditional operators validating con-
trapositive inferences force one to consider potential arguments (or extensions
or subtheories) that in actual reasoning practice are not considered at all.
Of course, my 'refutation' of the inconsistency handling approach has not
been a matter of hard mathematical proof; basically, the 'proof' consists of
a system with a nonstandard conditional in which the problems of existing
'classical' theories can be handled in an elegant and natural way, while in
the classical approach no such natural and elegant theory is available yet.
A reinstatement of the inconsistency handling approach should consist of a
satisfactory theory based on standard logic; of course, a mathematical proof
that such a theory does not exist is impossible, since this issue concerns the
link between a formal system and its domain of application; it may even be
that such a theory does exist but then the merits of the present research
are at least that it has pointed at some problems for which solutions have
to be found.

LOGICAL FOUNDATIONS OF AI

Coming back to the general claim that logic can provide formal foundations
of AI research, we can say that the present study also has implications
for the nature of this claim. The reason is that this book has been an
example of a somewhat different use of logic, in that I have used a logi-
cal system, default logic, as a component of a larger construct, which is
also defined in a formal way. This is in line with the point of departure
formulated in Section 1.4 that not only standard logical but also larger
formal constructs can specify the semantics of reasoning architectures. My
theory is by no means the only example of this way of using logic; all
argumentation systems are of this kind, as well as procedural models like
Gordon's Pleadings Game, and, for instance, belief revision theory and
recent research on metalevel reasoning. In fact, this even holds for the
investigations of Poole and Brewka: although I have criticized their views
on how to model nonmonotonic reasoning, I think an interesting aspect of
their research is the general idea of embedding logical systems in larger
formal theories, instead of forcing everything into the traditional form of a
logical system with one logical language, interpreted in a model-theoretic
semantics, and with one inferential system.

ARGUMENTATION THEORY

The four-layered view on argumentation that has emerged from this book
is not restricted to the legal domain, but is in fact a contribution to general
argumentation theory. As noted in Section 10.5, in this field the procedural
aspects of argumentation have already been studied. At an informal level
this is done by e.g. the pragma-dialectical school, mentioned above in
Section 10.5, while, moreover, a field of 'formal dialectics' has evolved,
studying formal systems of procedural rules for dialogues; see e.g. Walton
& Krabbe (1995). The relevance of the present study for this field is that
it shows how the procedural level of argumentation presupposes not only
a logical level (as formal dialecticians are well aware) but also a dialectical
level.

DEONTIC LOGIC

A final topic for which my investigations have implications is the study
of deontic logic, the logic of deontic modalities, such as 'obligatory', 'for-
bidden' and 'permitted'. Although this book has not directly been about
deontic reasoning, the above results are certainly relevant to it, since they
offer alternative ways of dealing with two traditional issues of deontic
logic: moral dilemmas and prima facie obligations (I have defended this
claim in more detail in Prakken, 1996a). First of all, it should be noted
that, although in my system the underlying logical language is first-order
predicate logic plus defaults, there is no reason why the first-order part
cannot be extended to modal logics. If this is done, then in my system
conflicting obligations can be represented as being inconsistent, but in a
more general sense still meaningful: although it is impossible to defend two
conflicting arguments simultaneously, since the combined argument would
be inconsistent, they can be defended alternatively as defensible arguments.
Thus part of the notion of a moral dilemma is captured by my system's
notion of a pair of conflicting defensible arguments.
The present investigations have also resulted in a particular way of
representing prima facie obligations. It is nowadays generally accepted
that prima facie obligations are unconditional obligations that are derived
from defeasible conditional norms (cf. e.g. Nute, 1997). However, different
opinions exist on how such norms must be formalized. Some approaches
within deontic logic consist of developing a special normative conditional
operator: for example, by making the deontic operators dyadic (an approach
initiated by Von Wright, 1964), or by letting the conditional operator fall
within the scope of some modal operator expressing deontic normality, as in
e.g. Jones (1993) and Asher & Bonevac (1997). By contrast, this book offers
a formal theory in which the defeasibility of conditional norms is captured
by the logic of a general conditional operator, not connected with a deontic
modality. The advantage of this approach is that in this way techniques
and insights developed in other areas are directly available in a normative
context.

11.4. Suggestions for Further Research

The work on argumentation systems in the second part of this book has
generated various issues for further research. In discussing them the four-
layered picture of argumentation is useful. If we first look inside the dialecti-
cal level, then one logical issue that needs further study is the formalization
of accrual of arguments. Although in Section 7.5.4 I sketched a way of deal-
ing with this phenomenon within my system, based on accrual 'by hand',
a more detailed comparison with alternative treatments, in particular with
the accrual 'by default' approach of Verheij (1996), is necessary before
a final evaluation can be given. Another required comparison between
alternatives concerns the proper formalization of reasoning about priorities
(see Section 8.6). While in my method the priorities are transported to
the metatheory of the system, the methods of Hage (1996) and Kowalski
& Toni (1996) keep the priorities inside the logical language and instead
extend the system's metatheory with other metalogical features. An in-
teresting technical research issue is the development of dialectical proof
theories for other semantics than just well-founded semantics. Preliminary
research is reported in Prakken (1997) but much work remains to be done.
Finally, as for knowledge representation, I have argued in Section 9.3.2
that it would be interesting to incorporate Hage & Verheij's work on legal
knowledge representation - which was done within an extension-based
system - in applications of my system, which is argument-based.
If we leave the dialectical level, we see that the four-layered view of argu-
mentation suggests various interesting research topics. As for the connection
between the dialectical and procedural level, it would be interesting to
formalize reasoning about procedural rules. Here the situation is analogous
to reasoning about the standards for comparing arguments since, just as
for these standards, the procedural rules for legal argumentation are also
not fixed, but debatable. Therefore, analogously to the formalization in
Chapter 8 of reasoning about priorities, it would be interesting to study
how these procedural rules could not only determine but also be the result
of the argumentation process. In legal philosophy this phenomenon of self-
modifying legal procedures has been extensively studied by Suber (1990).
In AI (and law) Hage et al. (1994) and Vreeswijk (1996) have studied the
logical aspects, but they leave much work to do.
Finally, as mentioned at the end of Chapter 10, the connection between
the procedural and strategic level also raises interesting research issues,
both for AI-and-law and for legal philosophy. While the procedural level just
defines the rules of a game, the strategic level defines how the game can be
played well; now a very challenging (but also difficult) task is to identify and
then, if possible, to formalize the strategies, or 'heuristics' for good and bad
arguing. A (modest) example of this research is Prakken & Sartor (1997b),
which is based on this book's 'heuristic' account of analogical reasoning, and
in which this book's dialogue game is used as the dialectical core of a HYPO-
style protocol for analogical reasoning with legal precedents. It would be
interesting to compare this 'heuristic' approach to non-deductive argument
forms with approaches such as the one of Freeman & Farley (1996), in which
such argument forms, and the ways to attack them, are defined not at the
strategic but at the dialectical level.
APPENDIX A

NOTATIONS, ORDERINGS AND GLOSSARY

A1. General Symbols and Notations


STANDARD LOGIC

Symbols:
¬ Not
∧ And
∨ Or
→ Material Implication
↔ Material Equivalence
⊤ True (φ → φ)
⊥ False (φ ∧ ¬φ)
⊢ Provability
⊨ Entailment
⊬ Non-Provability
⊭ Non-Entailment
∃ Existential quantifier (There Exists)
∀ Universal quantifier (For All)
Abbreviations:
Th(A) The deductive closure of A
I(t) The interpretation of the term t
I(P) The interpretation of the predicate P
wff 'well-formed formula'
iff 'if and only if'
Some notational conventions:
p, q, r, ... Metavariables for atomic formulas
φ, ψ, χ Metavariables for any formula
P, Q, R, ... Predicate constants
A, B, C, ... Predicate constants
a, b, c, ... Object constants
x, y, z Object variables

φ1 ∧ ... ∧ φn → ψ = (φ1 ∧ ... ∧ φn) → ψ
∀x1, ..., xn.φ1 ∧ ... ∧ φn → ψ = ∀x1, ..., xn(φ1 ∧ ... ∧ φn → ψ)


Typographical convention when relation and function symbols have more
than one letter:
relation(term1, ..., termn)
function(term1, ..., termn)

SET THEORY

Symbols:
∈  Element
∉  Not an Element
∩  Set Intersection
∪  Set Union
⊆  Subset
⊇  Superset
∅  Empty Set
∞  Infinity
Notational conventions:
⋃_{i=n}^{m} S_i      S_n ∪ ... ∪ S_m
{x ∈ S | ... x ...}  The set of all x ∈ S such that ... x ...

A2. Ordering Relations


PROPERTIES OF ORDERING RELATIONS

Reflexivity: ∀x Rxx
Irreflexivity: ∀x ¬Rxx
Transitivity: ∀x, y, z((Rxy ∧ Ryz) → Rxz)
Antisymmetry: ∀x, y((Rxy ∧ Ryx) → x = y)
Asymmetry: ∀x, y ¬(Rxy ∧ Ryx)
Linearity: ∀x, y(Rxy ∨ Ryx ∨ x = y)

A partial preorder is a transitive and reflexive relation.


A partial order is a transitive, reflexive and antisymmetric relation.
A strict partial order is a transitive and asymmetric relation.
A linear order is a relation which is transitive, irreflexive and linear.

ABBREVIATIONS
x ≰ y = not x ≤ y (likewise for the following symbols)
x ≥ y = y ≤ x
x < y = x ≤ y and y ≰ x
x > y = y < x
x ≈ y = x ≤ y and y ≤ x
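These derived orderings can be checked mechanically on a finite relation. The following Python sketch (mine, not the book's; the example relation and all names are illustrative) derives < and ≈ from a given partial preorder ≤, exactly as in the abbreviations above, and verifies that < is a strict partial order and ≈ an equivalence.

```python
from itertools import product

# A partial preorder <= on {a, b, c}, given as a set of pairs: reflexive
# and transitive, with a <= b and b <= a (so a and b are equivalent).
dom = {"a", "b", "c"}
leq = {(x, x) for x in dom} | {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}

# Derived relations, following the abbreviations:
#   x < y  iff  x <= y and not y <= x
#   x ~ y  iff  x <= y and y <= x
lt = {(x, y) for x, y in product(dom, dom)
      if (x, y) in leq and (y, x) not in leq}
eq = {(x, y) for x, y in product(dom, dom)
      if (x, y) in leq and (y, x) in leq}

def transitive(rel):
    return all((x, z) in rel
               for (x, y1) in rel for (y2, z) in rel if y1 == y2)

def asymmetric(rel):
    return all((y, x) not in rel for (x, y) in rel)

# The derived < is a strict partial order: transitive and asymmetric.
assert transitive(lt) and asymmetric(lt)
# The derived ~ is reflexive, symmetric and transitive (an equivalence).
assert all((x, x) in eq for x in dom)
assert all((y, x) in eq for (x, y) in eq) and transitive(eq)
print(sorted(lt))  # [('a', 'c'), ('b', 'c')]
```

Note that a and b, being equivalent under ≤, are incomparable under the derived <.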

A3. Notions of the Argumentation System of Chapters 6-8


∼           Weak Negation
⇒           Defeasible Implication
F           A set of first-order formulas (the facts)
Fn          A set of first-order formulas (the necessary facts)
Fc          A set of first-order formulas (the contingent facts)
Δ           A set of defeasible rules
≤           A partial preorder on Δ (Chapter 7)
<           A strict partial order on Δ (Chapter 8)
specificity An ordering on the set of all subsets of Δ (Chapter 6)
≺           The object level predicate denoting < (Chapter 8)
<_S         For any set S of arguments: the ordering < determined by
            the priority arguments in S
defeat      An ordering on Args_Γ (Chapters 6, 7)
S-defeat    For any set S of arguments: the defeat ordering
            determined by <_S (Chapter 8)
Γ           An (ordered) default theory
            (Fn ∪ Fc ∪ Δ in Chapters 6, 8)
            ((Fn ∪ Fc ∪ Δ, ≤) in Chapter 7)
Args_Γ      The set of all arguments on the basis of Γ
JustArgs_Γ  The set of all justified arguments on the basis of Γ
|∼          Simple derivability ('there is an argument for')
|∼_a        Argumentative derivability ('there is a justified argument
            for')
ANT(r)      For any rule r: the antecedent of r
CONS(r)     For any rule r: the consequent of r
ANT(R)      For a set of rules R = {r1, ..., rn}: ⋃_{i=1}^{n} ANT(r_i)
CONS(R)     For a set of rules R = {r1, ..., rn}: ⋃_{i=1}^{n} CONS(r_i)
ANTCON(R)   For a finite set of rules R: the conjunction of all members
            of ANT(R)

A4. Glossary
Arity - The arity of a predicate is the number of arguments of the
predicate.
Atomic formula - Formula not composed of other formulas.
Closed formula - A formula without free variables.
Completeness - A proof system of a logic is complete with respect to
the semantics of the logic iff every formula entailed by the premises
is provable from the premises.

Consequence notion - (technically) a function from sets of formulas to
sets of formulas.
Consistency - A set of formulas is consistent iff it does not deductively
imply a contradiction.
Contradiction - A wff of the form φ ∧ ¬φ.
Contraposition - The deductive equivalence of φ → ψ and ¬ψ → ¬φ.
Decidability - A logical system is decidable iff it has a decision proce-
dure.
Decision procedure - A procedure which for every wff φ of a given
logical system correctly determines in a finite number of steps whether
φ is provable (or valid) or not.
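Propositional logic is the textbook example of a decidable system: truth tables give a decision procedure. The following Python sketch (mine, not the book's; the formula encoding is an arbitrary choice) decides validity of a propositional wff by checking all 2^n valuations of its n atoms.

```python
from itertools import product

def evaluate(formula, valuation):
    """Evaluate a formula built from ('atom', p), ('not', f),
    ('and', f, g), ('or', f, g), ('imp', f, g) under a valuation."""
    op = formula[0]
    if op == "atom":
        return valuation[formula[1]]
    if op == "not":
        return not evaluate(formula[1], valuation)
    if op == "and":
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if op == "or":
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    if op == "imp":
        return (not evaluate(formula[1], valuation)) or evaluate(formula[2], valuation)
    raise ValueError(op)

def atoms(formula):
    if formula[0] == "atom":
        return {formula[1]}
    return set().union(*(atoms(sub) for sub in formula[1:]))

def valid(formula):
    """Decision procedure: a formula is valid iff it is true
    under every assignment of truth values to its atoms."""
    ps = sorted(atoms(formula))
    return all(evaluate(formula, dict(zip(ps, vs)))
               for vs in product([True, False], repeat=len(ps)))

p = ("atom", "p")
assert valid(("imp", p, p))         # p -> p is a tautology
assert valid(("or", p, ("not", p))) # excluded middle
assert not valid(p)                 # a bare atom is not
```

The procedure always terminates, which is what decidability requires; full first-order logic, by contrast, is only semi-decidable.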
Deductive reasoning - In this book: reasoning according to a mono-
tonic consequence notion (whether a semantical or a syntactical one).
Deductive closure - The closure of a set of formulas under all its de-
ductive consequences, i.e. for all wffs φ: φ ∈ Th(T) iff T ⊢ φ.
Derivability - Provability.
Entailment - A formula φ is entailed by a set of formulas T iff φ is true
in all models of T.
Extension of a predicate - The extension of a predicate P with arity
n is the set of all n-tuples of objects d1, ..., dn for which the predicate
P holds.
First-order (predicate) logic - Standard predicate logic, in which
quantification is possible only over objects.
Fixed point - A fixed point of a function f is an argument x1, ..., xn
such that f(x1, ..., xn) = x1, ..., xn.
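Fixed points of monotone operators on sets are what argumentation semantics in the style of Dung (1995) are built on. As a minimal illustration (mine, not the book's; the rule set is invented), the following Python sketch computes the least fixed point of a forward-chaining operator by iterating it from the empty set until the output equals the input.

```python
def least_fixed_point(f, bottom=frozenset()):
    """Iterate f from `bottom` until f(x) = x, i.e. until a fixed point."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# A monotone operator: add the facts, plus the conclusion of every rule
# whose premises are already in the set (simple forward chaining).
rules = [({"p"}, "q"), ({"q"}, "r"), ({"s"}, "t")]
facts = {"p"}

def step(x):
    derived = {head for (body, head) in rules if body <= x}
    return frozenset(x | facts | derived)

lfp = least_fixed_point(step)
assert lfp == frozenset({"p", "q", "r"})  # "t" is never derived: "s" is absent
print(sorted(lfp))  # ['p', 'q', 'r']
```

Because the operator is monotone and the domain finite, the iteration is guaranteed to stop at the least fixed point.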
Free variable - A free variable of a formula φ is a variable which is not
in the scope of a quantifier in φ.
Ground expression - A ground expression (formula, term) is an expres-
sion without variables.
Interpretation - Of a term t the interpretation I(t) is the object denoted
by t. Of a predicate P the interpretation I(P) is the extension of P.
Literal - A literal is an atomic formula or a negated atomic formula.
Monotonic consequence - A consequence notion ⊩ is monotonic iff
for all sets of wffs X and Y such that X ⊆ Y and all wffs φ it holds
that if X ⊩ φ then Y ⊩ φ.
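The contrast between monotonic and nonmonotonic consequence can be made concrete with two toy consequence notions (my own illustration, not the book's formalism). Closure under strict rules is monotonic: enlarging the premises can only enlarge the conclusions. A default rule with an exception is not: a larger premise set can lose a conclusion.

```python
def cn(premises, rules):
    """Monotonic consequence: close `premises` under strict rules (body -> head)."""
    closure = set(premises)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= closure and head not in closure:
                closure.add(head)
                changed = True
    return closure

rules = [({"bird"}, "animal")]
x = {"bird"}
y = {"bird", "penguin"}

# Monotonicity: X subset of Y implies Cn(X) subset of Cn(Y).
assert cn(x, rules) <= cn(y, rules)

# By contrast, the default "birds fly, unless they are penguins"
# is nonmonotonic:
def default_conclusions(premises):
    concl = set(premises)
    if "bird" in concl and "penguin" not in concl:
        concl.add("flies")
    return concl

assert "flies" in default_conclusions(x)
assert "flies" not in default_conclusions(y)  # more premises, fewer conclusions
```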
Modus Ponens - For any conditional operator ⇒: the provability of ψ
from φ ⇒ ψ and φ.
Modus Tollens - For any conditional operator ⇒: the provability of ¬φ
from φ ⇒ ψ and ¬ψ.
Metalanguage - The language used to talk about another language, the
object language.
Object language - A language discussed in another language, the meta-
language.
Provability - A formula φ is provable from a set of premises T in a logic
iff the proof system of the logic sanctions a proof of φ from T.
Second-order logic - A system of predicate logic with the possibility to
quantify over predicates.
Semi-decidability - A logic is semi-decidable iff a procedure exists which
for every provable (or valid) formula is guaranteed to tell in a finite
number of steps that it is provable (or valid).
Soundness - A proof system of a logic is sound with respect to its
semantics iff every formula provable from the premises is entailed by
the premises.
Tautology - A theorem of propositional logic.
Theorem - A formula is a theorem of a logic iff it is provable without
premises.
Theory - A deductively closed set of wffs.
Term - An expression which can be an argument of a predicate letter,
i.e. a constant, variable or function symbol.
Validity - A formula φ is valid relative to a logic iff it is true in all models
of the logic.
References

Alchourrón, C.E. & Bulygin, E. 1971. Normative Systems. Wien-New York: Springer
Verlag.
Alchourrón, C.E. & Bulygin, E. 1984. Pragmatic foundations for a logic of norms.
Rechtstheorie 15, 453-464.
Alchourrón, C.E. & Makinson, D. 1981. Hierarchies of regulations and their logic. In New
Studies in Deontic Logic, ed. R. Hilpinen, 125-148. Dordrecht: Reidel.
Alexy, R. 1978. Theorie der juristischen Argumentation. Die Theorie des rationalen
Diskurses als eine Theorie der juristischen Begründung. Frankfurt am Main:
Suhrkamp Verlag. (in German)
Allen, L.E. 1963. Beyond document retrieval toward information retrieval. Minnesota
Law Review 47, 713-767.
Allen, L.E. & Saxon, C.S. 1991. More IA needed in AI: interpretation assistance for
coping with the problem of multiple structural interpretation. Proceedings of the
Third International Conference on Artificial Intelligence and Law, 53-61. New York:
ACM Press.
Asher, N. & Bonevac, D. 1997. Common sense obligation. In Defeasible Deontic Logic.
Essays in Nonmonotonic Normative Reasoning, ed. D.N. Nute, 159-204. Dordrecht:
Kluwer, Synthese Library.
Asher, N. & Morreau, M. 1990. Commonsense entailment: a modal theory of nonmono-
tonic reasoning. Proceedings of JELIA 1990. Lecture Notes in Artificial Intelligence
478, 1-30. Berlin: Springer Verlag.
Ashley, K.D. 1990. Modeling Legal Argument: Reasoning with Cases and Hypotheticals.
Cambridge, MA: MIT Press.
Ashley, K.D. & Rissland, E.L. 1987. But, See, Accord: Generating "Blue Book" citations
in HYPO. Proceedings of the First International Conference on Artificial Intelligence
and Law, 67-74. New York: ACM Press.
Åqvist, L. 1977. Legal wrongfulness as a prerequisite for liability in tort. In Deontis-
che Logik und Semantik, eds. A.G. Conte, R. Hilpinen & G.H. von Wright, 9-19.
Wiesbaden: Athenaion.
Baker, G.P. 1977. Defeasibility and meaning. In Law, Morality, and Society. Essays in
Honour of H.L.A. Hart, eds. P.M.S. Hacker & J. Raz, 26-57. Oxford: Clarendon
Press.
Barth, E.M. & Krabbe, E.C.W. 1982. From Axiom to Dialogue: a Philosophical Study of
Logic and Argumentation. New York: Walter de Gruyter.
Bench-Capon, T.J.M. 1993. Neural networks and open texture. Proceedings of the Fourth
International Conference on Artificial Intelligence and Law, 292-297. New York:
ACM Press.
Bench-Capon, T.J.M. & Sergot, M.J. 1985. Towards a rule-based representation of open
texture in law. In Computing Power and Legal Reasoning, ed. C. Walter, 39-60. St.
Paul, Minn.: West Publishing Co.
Bench-Capon, T.J.M. & Coenen, F.P. 1992. Isomorphism and legal knowledge based
systems. Artificial Intelligence and Law 1, 65-86.
Benthem, J. van 1995. Logic and argumentation. Proceedings of the Third ISSA Con-
ference on Argumentation. Volume I: Perspectives and Approaches, eds. F.H. van
Eemeren, R. Grootendorst, J.A. Blair & C.A. Willard, 18-31. Amsterdam: Sic Sat.

Berman, D.H. & Hafner, C.D. 1987. Indeterminacy: a challenge to logic-based models of
legal reasoning. In Yearbook of Law, Computers and Technology, Vol. 3, 1-35. London:
Butterworths.
Beth, E.W. 1965. Mathematical Thought. An Introduction to the Philosophy of Mathe-
matics. Dordrecht: Reidel.
Bidoit, N. 1991. Negation in rule-based database languages: a survey. Theoretical Com-
puter Science 78, 3-83.
Bing, J. 1992. Book review of Kevin D. Ashley, Modeling Legal Argument: Reasoning with
Cases and Hypotheticals. Artificial Intelligence and Law 1, 103-107.
Birnbaum, L. 1991. Rigor mortis: a response to Nilsson's "Logic and artificial intelli-
gence". Artificial Intelligence 47, 57-77.
Bondarenko, A., Dung, P.M., Kowalski, R.A. & Toni, F. 1997. An abstract
argumentation-theoretic approach to default reasoning. To appear in Artificial In-
telligence.
Brewka, G. 1989. Preferred subtheories: an extended logical framework for default
reasoning. Proceedings of the Eleventh International Joint Conference on Artificial
Intelligence, 1043-1048.
Brewka, G. 1991a. Nonmonotonic Reasoning: Logical Foundations of Commonsense.
Cambridge: Cambridge University Press.
Brewka, G. 1991b. Cumulative default logic: in defense of nonmonotonic inference rules.
Artificial Intelligence 50, 183-205.
Brewka, G. 1994a. Adding priorities and specificity to default logic. Proceedings of the
Fifth European Workshop on Logics in Artificial Intelligence, Springer Lecture Notes
in AI 838, 247-260. Berlin: Springer Verlag.
Brewka, G. 1994b. A logical reconstruction of Rescher's theory of formal disputation
based on default logic. Proceedings of the Eleventh European Conference on Artificial
Intelligence, 366-370. Chichester: Wiley.
Brewka, G. 1994c. Reasoning about priorities in default logic. Proceedings of the Twelfth
National Conference on Artificial Intelligence, 247-260.
Brewka, G. 1996. Well-founded semantics for extended logic programs with dynamic
preferences. Journal of Artificial Intelligence Research 4, 19-36.
Brouwer, P.W. 1994. Legal knowledge representation in the perspective of legal theory.
In Legal Knowledge Based Systems. The Relation with Legal Theory, eds. H. Prakken,
A.J. Muntjewerff & A. Soeteman, 9-18. Lelystad: Koninklijke Vermande BV.
Bulygin, E. & Alchourrón, C.E. 1977. Unvollständigkeit, Widersprüchlichkeit und Unbes-
timmtheit der Normenordnungen. In Deontische Logik und Semantik, eds. A.G.
Conte, R. Hilpinen & G.H. von Wright, 20-32. Wiesbaden: Athenaion.
Bylander, T., Allemang, D., Tanner, M.C. & Josephson, J.R. 1991. The computational
complexity of abduction. Artificial Intelligence 49, 25-60.
Cadoli, M., Donini, F.M. & Schaerf, M. 1996. Is intractability of nonmonotonic reasoning
a real drawback? Artificial Intelligence 88: 215-251.
Clark, K. 1978. Negation as failure. In Logic and Databases, eds. H. Gallaire & J. Minker,
293-322. New York: Plenum Press.
Delgrande, J. 1988. An approach to default reasoning based on a first-order conditional
logic: revised report. Artificial Intelligence 36, 63-90.
Doyle, J. 1979. A Truth Maintenance System. Artificial Intelligence 12, 231-272.
Dung, P.M. 1993. An argumentation semantics for logic programming with explicit
negation. Proceedings of the Tenth Logic Programming Conference, 1993, 616-630.
Cambridge, MA: MIT Press.
Dung, P.M. 1994. Logic programming as dialogue games. Unpublished paper, Division of
Computer Science, Asian Institute of Technology, Bangkok.
Dung, P.M. 1995. On the acceptability of arguments and its fundamental role in non-
monotonic reasoning, logic programming, and n-person games. Artificial Intelligence
77, 321-357.
Dworkin, R.M. 1977. Is law a system of rules? In The Philosophy of Law, ed. R.M.
Dworkin, 38-65. Oxford: Oxford University Press.


Eemeren, F.H. van & Grootendorst, R. 1992. Argumentation, Communication, and Falla-
cies. A Pragma-dialectical Perspective. Hillsdale, NJ: Lawrence Erlbaum Associates.
Enschede, Ch.J. 1983. Review of H.J.M. Boukema: Judging, towards a rational judicial
process. Nederlands Tijdschrift voor Rechtsfilosofie en Rechtstheorie no. 1, 53-57. (in
Dutch)
Etherington, D.W. 1988. Reasoning with Incomplete Information. London: Pitman.
Feteris, E.T. 1996. The analysis and evaluation of legal argumentation from a pragma-
dialectical perspective. Proceedings of the International Conference on Formal and
Applied Practical Reasoning, Springer Lecture Notes in AI 1085, 151-166. Berlin:
Springer Verlag.
Føllesdal, D. & Hilpinen, R. 1970. Deontic logic: an introduction. In Deontic Logic:
Introductory and Systematic Readings, ed. R. Hilpinen, 1-35. Dordrecht: Reidel.
Frank, J. 1949. Courts on Trial. Princeton University Press.
Franken, H. 1983. Jurist en computer: theoretische achtergronden. In Jurist en Computer,
eds. A.H. De Wild & B. Eilders, 13-32. Deventer: Kluwer. (in Dutch)
Freeman, K. & Farley, A.M. 1996. A model of argumentation and its application to legal
reasoning. Artificial Intelligence and Law 4: 163-197.
Fuller, L.L. 1958. Positivism and fidelity to law: a reply to Professor Hart. Harvard Law
Review 71, 630-672.
Gärdenfors, P. 1988. Knowledge in Flux. Modeling the Dynamics of Epistemic States.
Cambridge, MA: MIT Press.
Gardner, A. von der, 1987. An Artificial Intelligence Approach to Legal Reasoning.
Cambridge, MA: MIT Press.
Geffner, H. & Pearl, J. 1992. Conditional entailment: bridging two approaches to default
reasoning. Artificial Intelligence 53, 209-244.
Gelfond, M. & Lifschitz, V. 1988. The stable model semantics for logic programming. In
Logic Programming: Proceedings of the Fifth International Conference and Sympo-
sium, eds. R.A. Kowalski & K. Bowen, 1070-1080.
Gelfond, M. & Lifschitz, V. 1989. Compiling circumscriptive theories into logic programs.
Proceedings of the Second International Workshop on Nonmonotonic Reasoning, Lec-
ture Notes in Computer Science 346, 74-99. Berlin: Springer Verlag.
Gelfond, M. & Lifschitz, V. 1990. Logic programs with classical negation. Proceedings of
the Seventh Logic Programming Conference, 579-597. Cambridge, MA: MIT Press.
Genesereth, M.R. & Nilsson, N.J. 1988. Logical Foundations of Artificial Intelligence.
Palo Alto, CA: Morgan Kaufmann Publishers Inc.
Ginsberg, M.L. 1987. Introduction to Readings in Nonmonotonic Reasoning, ed.
M.L. Ginsberg, 1-19. Los Altos, CA: Morgan Kaufmann Publishers Inc.
Gordon, T.F. 1988. The importance of nonmonotonicity for legal reasoning. In Expert
Systems in Law, eds. H. Fiedler, F. Haft & R. Traunmüller, 110-126. Tübingen:
Attempto Verlag.
Gordon, T.F. 1989. Issue spotting in a system for searching interpretation spaces. Pro-
ceedings of the Second International Conference on Artificial Intelligence and Law,
157-164. New York: ACM Press.
Gordon, T.F. 1991. An abductive theory of legal issues. International Journal of Man-
Machine Studies 35, 95-118.
Gordon, T.F. 1994. The Pleadings Game: an exercise in computational dialectics. Arti-
ficial Intelligence and Law 2: 239-292.
Gordon, T.F. 1995. The Pleadings Game. An Artificial Intelligence Model of Procedural
Justice. Dordrecht: Kluwer Academic Publishers.
Hage, J.C. 1987. De betekenis van niet-standaardlogica's voor juridische expertsystemen.
Computerrecht 4, 233-239. (in Dutch)
Hage, J.C. 1996. A theory of legal reasoning and a logic to match. Artificial Intelligence
and Law 4: 199-273.
Hage, J.C. 1997. Reasoning With Rules. An Essay on Legal Reasoning and Its Underlying
Logic. Dordrecht etc.: Kluwer Law and Philosophy Library.


Hage, J.C., Leenes, R. & Lodder, A.R. 1994. Hard cases: a procedural approach. Artificial
Intelligence and Law 2: 113-166.
Hamfelt, A. & Barklund, J. 1989. Metalevels in legal knowledge and their runnable
representation in logic. Preproceedings of the III International Conference on "Logica,
Informatica, Diritto", Vol. II, 557-576. Florence.
Hart, H.L.A. 1949. The ascription of responsibility and rights. Proceedings of the Aris-
totelian Society, n.s. 49 (1948-9), 171-194. Reprinted in Logic and Language. First
Series, ed. A.G.N. Flew, 145-166. Oxford: Basil Blackwell. (The references in this
thesis are to the reprint).
Hart, H.L.A. 1958. Positivism and the separation of law and morals. Harvard Law Review
vol. 71, 593-629. Reprinted in The Philosophy of Law, ed. R.M. Dworkin, 17-37.
Oxford: Oxford University Press. (The references in this thesis are to the reprint).
Hart, H.L.A. 1961. The Concept of Law. Oxford: Clarendon Press.
Hayes, P.J. 1977. In defence of logic. Proceedings of the Fifth International Joint Confer-
ence on Artificial Intelligence, 559-565.
Herrestad, H.B. 1990. Norms and formalization. CompLex 12/90. Oslo: Tano.
Herrestad, H.B. & Krogh, C. 1995. Obligations directed from bearers to counterparties.
Proceedings of the Fifth International Conference on Artificial Intelligence and Law,
210-218. New York: ACM Press.
Hofstadter, D.R., 1985. Waking up from the boolean dream, or, subcognition as com-
putation. In Metamagical Themas: Questing for the Essence of Mind and Pattern,
D.R. Hofstadter, 631-665. London: Penguin Books.
Hohfeld, W.N. 1923. Fundamental Legal Conceptions as Applied to Legal Reasoning. New
Haven, Connecticut: Yale University Press.
Horty, J.F., Thomason, R.H. & Touretzky, D.S. 1990. A skeptical theory of inheritance
in nonmonotonic semantic networks. Artificial Intelligence 42, 311-348.
Israel, D.J. 1980. What's wrong with non-monotonic logic? Proceedings of the First
National Conference on Artificial Intelligence, 99-101.
Israel, D.J. 1985. A short companion to the naive physics manifesto. In Formal Theories
of the Commonsense World, eds. J.R. Hobbs & R.C. Moore, 427-447. Norwood, NJ:
Ablex.
Johnson-Laird, Ph.N. 1988. The Computer and the Mind. An Introduction to Cognitive
Science. Cambridge, MA: Harvard University Press.
Jones, A.J.I. 1990. Deontic logic and legal knowledge representation. Ratio Juris Vol. 3,
No. 2, 237-244.
Jones, A.J.I. 1993. Towards a formal theory of defeasible deontic conditionals. Annals of
Mathematics and Artificial Intelligence 9, 151-166.
Jørgensen, J. 1938. Imperatives and logic. Erkenntnis 7, 288-296.
Karpf, J. 1989. Quality assurance of legal expert systems. In Preproceedings of the III
International Conference on "Logica, Informatica, Diritto", Vol. I, 411-440. Florence.
Kleer, J. de, 1986. An assumption-based truth maintenance system. Artificial Intelligence
28, 127-162.
Kloosterhuis, H. 1996. The normative reconstruction of analogy argumentation in judicial
decisions: a pragma-dialectical perspective. Proceedings of the International Confer-
ence on Formal and Applied Practical Reasoning, Springer Lecture Notes in AI 1085,
375-383. Berlin: Springer Verlag.
Kolata, G. 1982. How can computers get common sense? Science Vol. 217, 1237-1238.
Konolige, K. 1988a. On the relation between default and autoepistemic logic. Artificial
Intelligence 35, 343-382.
Konolige, K. 1988b. Hierarchic autoepistemic theories for nonmonotonic reasoning: pre-
liminary report. Proceedings of the Second International Workshop on Nonmonotonic
Reasoning, 42-59. Lecture Notes in Computer Science 346, Berlin: Springer Verlag.
Kowalski, R.A. 1989. The treatment of negation in logic programs for representing legis-
lation. Proceedings of the Second International Conference on Artificial Intelligence
and Law, 11-15. New York: ACM Press.


Kowalski, R.A. 1995. Legislation as logic programs. In Informatics and the Foundations
of Legal Reasoning, eds. Z. Bankowski, I. White & U. Hahn, 325-356. Dordrecht: Law
and Philosophy Library, Kluwer Academic Publishers.
Kowalski, R.A. & Sadri, F. 1990. Logic programs with exceptions. Proceedings of the
Seventh International Logic Programming Conference, 598-613. Cambridge, MA:
MIT Press.
Kowalski, R.A. & Toni, F. 1996. Abstract argumentation. Artificial Intelligence and Law
4: 275-296.
Kraus, S., Lehmann, D. & Magidor, M. 1990. Nonmonotonic reasoning, preferential
models and cumulative logics. Artificial Intelligence 44, 167-207.
Leith, Ph. 1986. Fundamental errors in legal logic programming. The Computer Journal
Vol. 29, no. 6, 545-552.
Leith, Ph. 1990. Formalism in AI and Computer Science. Chichester: Ellis Horwood.
Lenat, D. & Guha, R. 1990. Building Large Knowledge-based Systems. Representation
and Inference in the CYC Project. Reading, MA: Addison-Wesley.
Lifschitz, V. 1987a. On the declarative semantics of logic programs with negation. In
Foundations of Deductive Databases and Logic Programming, ed. J. Minker, 177-192.
Los Altos, CA: Morgan Kaufmann Publishers Inc.
Lifschitz, V. 1987b. Pointwise circumscription. In Readings in Nonmonotonic Reasoning,
ed. M.L. Ginsberg, 179-193. Los Altos, CA: Morgan Kaufmann Publishers Inc.
Lin, F. & Shoham, Y. 1989. Argument systems. A uniform basis for nonmonotonic
reasoning. Proceedings of the First International Conference on Principles of Knowl-
edge Representation and Reasoning, 245-255. San Mateo, CA: Morgan Kaufmann
Publishers Inc.
Lindahl, L. 1977. Position and Change. Dordrecht: Reidel.
Lloyd, J.W. 1984. Foundations of Logic Programming. Berlin: Springer Verlag.
Loui, R.P. 1987. Defeat among arguments: a system of defeasible inference. Computational
Intelligence 2, 100-106.
Loui, R.P. 1995. Hart's critics on defeasible concepts and ascriptivism. Proceedings of the
Fifth International Conference on Artificial Intelligence and Law, 21-30. New York:
ACM Press.
Loui, R.P. 1997. Process and policy: resource-bounded non-demonstrative reasoning. To
appear in Computational Intelligence 14: 1.
Loui, R.P., Norman, J., Olson, J. & Merrill, A. 1993. A design for reasoning with
policies, precedents, and rationales. Proceedings of the Fourth International Conference
on Artificial Intelligence and Law, 202-211. New York: ACM Press.
Loui, R.P. & Stiefvater, K. 1992. Corrigenda to Poole's rules and a lemma of Simari-
Loui. Report Department of Computer Science, Washington University in St. Louis.
St. Louis, MO.
Lukaszewicz, W. 1990. Non-monotonic Reasoning. Formalization of Commonsense Rea-
soning. Chichester: Ellis Horwood.
MacCormick, N. 1978. Legal Reasoning and Legal Theory. Oxford: Clarendon.
MacCormick, N. 1995. Defeasibility in law and logic. In Informatics and the Foundations
of Legal Reasoning, eds. Z. Bankowski, I. White & U. Hahn, 99-117. Dordrecht: Law
and Philosophy Library, Kluwer Academic Publishers.
MacCormick, N. & Summers, R. (eds.) 1991. Interpreting Statutes. A Comparative Study.
Aldershot etc.: Dartmouth.
Makinson, D. 1985. How to give it up: a survey of some formal aspects of the logic of
theory change. Synthese 62, 347-363.
Makinson, D. 1989. General theory of cumulative inference. Proceedings of the Second
International Workshop on Nonmonotonic Reasoning, Lecture Notes in Computer
Science 346, 1-18. Berlin: Springer Verlag.
Makinson, D. & Schlechta, K. 1991. Floating conclusions and zombie paths: two deep
difficulties in the "directly sceptical" approach to defeasible inheritance nets. Artificial
Intelligence 48, 199-209.


McCarthy, J. 1968. Programs with common sense. In Semantic Information Processing,
ed. M. Minsky, 403-418. Cambridge, MA: MIT Press.
McCarthy, J. 1980. Circumscription - a form of nonmonotonic reasoning. Artificial
Intelligence 13, 27-39.
McCarthy, J. 1986. Applications of circumscription to formalizing common-sense knowl-
edge. Artificial Intelligence 28, 89-116.
McCarty, L.T. 1977. Reflections on TAXMAN: An experiment in artificial intelligence
and legal reasoning. Harvard Law Review 90, 837-893.
McCarty, L.T. 1986. Permissions and obligations: An informal introduction. In Automated
Analysis of Legal Texts, eds. A.A. Martino & F. Socci, 307-337. Florence 1986.
McCarty, L.T. 1988a. Clausal intuitionistic logic I. Fixed-point semantics. Journal of
Logic Programming 5: 1-31.
McCarty, L.T. 1988b. Clausal intuitionistic logic II. Tableau proof procedures. Journal
of Logic Programming 5: 93-132.
McCarty, L.T. 1988c. Programming directly in a nonmonotonic logic. Technical Report
LRP-TR-21, Computer Science Department, Rutgers University, September 1988.
McCarty, L.T. 1989. A language for legal discourse I. Basic features. Proceedings of the
Second International Conference on Artificial Intelligence and Law, 180-189. New
York: ACM Press.
McCarty, L.T. 1995. An implementation of Eisner v. Macomber. Proceedings of the Fifth
International Conference on Artificial Intelligence and Law, 276-286. New York:
ACM Press.
McCarty, L.T. & Cohen, W.W. 1990. The case for explicit exceptions. Proceedings of the
Workshop on Logic Programming and Nonmonotonic Reasoning, 82-94. Austin, TX.
McCarty, L.T. & Sridharan, N.S. 1981. The representation of an evolving system of legal
concepts II. Prototypes and deformations. Proceedings of the Seventh International
Joint Conference on Artificial Intelligence, 246-253.
McDermott, D. 1982. Non-monotonic logic II: non-monotonic modal theories. Journal of
the ACM 29 (1), 33-57.
McDermott, D. 1987. A critique of pure reason. Computational Intelligence 3, 151-160.
McDermott, D. & Doyle, J. 1980. Non-monotonic logic I. Artificial Intelligence 13, 41-72.
Meldman, J.A. 1977. A structural model for computer-aided legal reasoning. Rutgers
Journal of Computers and the Law 6, 27-71.
Minsky, M. 1975. A framework for representing knowledge. In The Psychology of Com-
puter Vision, ed. P.H. Winston, 211-277. New York: McGraw-Hill.
Moore, R.C. 1982. The role of logic in knowledge representation and commonsense
reasoning. Proceedings of the Second National Conference on Artificial Intelligence,
428-433.
Moore, R.C. 1985. Semantical considerations on nonmonotonic logic. Artificial Intelli-
gence 25, 75-94.
Newell, A. 1982. The knowledge level. Artificial Intelligence 18, 87-127.
Nieuwenhuis, J.H. 1976. Legitimatie en heuristiek van het rechterlijk oordeel. Rechts-
geleerd Magazijn Themis 494-515. (in Dutch)
Nieuwenhuis, M.A. 1989. Tessec: een Expertsysteem voor de Algemene Bijstandswet.
Deventer: Kluwer. (in Dutch)
Nilsson, N.J. 1991. Logic and artificial intelligence. Artificial Intelligence 47, 31-56.
Nute, D.N. 1992. Basic defeasible logic. In Intensional Logics for Programming, eds.
L. Fariñas del Cerro & M. Penttonen, 125-154. Oxford: Oxford University Press.
Nute, D.N. 1993. Defeasible Logic. Research Report AI-1993-04, Artificial Intelligence
Programs, University of Georgia, Athens, GA.
Nute, D.N. (ed.) 1997. Defeasible Deontic Logic. Essays in Nonmonotonic Normative
Reasoning. Dordrecht: Kluwer, Synthese Library.
Opdorp, G.J. van & Walker, R.F. 1990. A neural network approach to open texture. In
Amongst Friends in Computers and Law. A Collection of Essays in Remembrance of
Guy Vandenberghe, eds. H.W.K. Kaspersen & A. Oskamp, 279-309. Deventer: Kluwer
Law and Taxation Publishers.
Opdorp, G.J. van, Walker, R.F., Schrickx, J.A., Groendijk, C. & Berg, P.H. van den,
1991. Networks at work. A connectionist approach to non-deductive legal reasoning.
Proceedings of the Third International Conference on Artificial Intelligence and Law,
278-287. New York: ACM Press.
Peczenik, A. 1990. Legal collision norms and moral considerations. In Coherence and
Conflict in Law. Proceedings of the 3rd Benelux-Scandinavian Symposium in Legal
Theory, eds. B. Brouwer et al., 177-197. Deventer: Kluwer Law and Taxation Pub-
lishers / Zwolle: Tjeenk Willink.
Pereira, L.M. & Alferes, J.J. 1992. Well-founded semantics for logic programs with explicit
negation. Proceedings of the Tenth European Conference on Artificial Intelligence,
102-106.
Perelman, Ch. 1976. Logique Juridique. Nouvelle Rhétorique. Dalloz.
Perelman, Ch. & Olbrechts-Tyteca, L. 1969. The New Rhetoric. A Treatise on Argumen-
tation. Notre Dame, Indiana: University of Notre Dame Press.
Plug, H.J. 1996. Complex argumentation in judicial decisions. Analysing conflicting argu-
ments. Proceedings of the International Conference on Formal and Applied Practical
Reasoning, Springer Lecture Notes in AI 1085, 464-479. Berlin: Springer Verlag.
Pollock, J.L. 1987. Defeasible reasoning. Cognitive Science 11, 481-518.
Pollock, J.L. 1995. Cognitive Carpentry. A Blueprint for How to Build a Person. Cam-
bridge, MA: MIT Press.
Poole, D.L. 1985. On the comparison of theories: Preferring the most specific explanation.
Proceedings of the Ninth International Joint Conference on Artificial Intelligence,
144-147.
Poole, D.L. 1988. A logical framework for default reasoning. Artificial Intelligence 36,
27-47.
Poole, D.L. 1991. The effect of knowledge on belief: conditioning, specificity and the
lottery paradox in default reasoning. Artificial Intelligence 49, 281-307.
Prakken, H. 1991a. A tool in modelling disagreement in law: preferring the most specific
argument. Proceedings of the Third International Conference on Artificial Intelligence
and Law, 165-174. New York: ACM Press.
Prakken, H. 1991b. Reasoning with normative hierarchies (extended abstract). Proceed-
ings of the First International Workshop on Deontic Logic and Computer Science,
315-334. Amsterdam.
Prakken, H. 1993. An argumentation framework in default logic. Annals of Mathematics
and Artificial Intelligence 9, 93-132.
Prakken, H. 1995a. A semantic view on reasoning about priorities (extended abstract).
Proceedings of the Second Dutch/German Workshop on Non-monotonic Reasoning,
Utrecht, 160-167.
Prakken, H. 1995b. From logic to dialectics in legal argument. Proceedings of the Fifth
International Conference on Artificial Intelligence and Law, 165-174. New York:
ACM Press.
Prakken, H. 1996a. Two approaches to the formalisation of defeasible deontic reasoning.
Studia Logica 57: 73-90.
Prakken, H. 1997. Dialectical proof theory for defeasible argumentation with defeasible
priorities (preliminary report). Proceedings of the 4th ModelAge Workshop 'Formal
Models of Agents', Certosa di Pontignano (Italy), 201-214.
Prakken, H. & Sartor, G. 1995a. On the relation between legal language and legal
argument: assumptions, applicability and dynamic priorities. Proceedings of the Fifth
International Conference on Artificial Intelligence and Law, 1-9. New York: ACM
Press.
Prakken, H. & Sartor, G. 1996a. A system for defeasible argumentation, with defeasi-
ble priorities. Proceedings of the International Conference on Formal and Applied
Practical Reasoning, Springer Lecture Notes in AI 1085, 510-524. Berlin: Springer
Verlag.
Prakken, H. & Sartor, G. 1996b. A dialectical model of assessing conflicting arguments
in legal reasoning. Artificial Intelligence and Law 4: 331-368.
Prakken, H. & Sartor, G. 1997a. Argument-based extended logic programming with
defeasible priorities. Journal 0/ Applied Non-c/assical Logics 7, 25-75.
Prakken, H. & Sartor, G. 1997b. Reasoning with precedents in a dialogue game. Proceed-
ings 0/ the Sixth International Con/erence on A rtificial Intelligence and Law, 1-9.
New York: ACM Press.
Prakken, H. & Schrickx, J.A. 1991. Isomorphic models for rules and exceptions in
legislation. In Legal Knowledge-based Systems. Model-based Legal Reasoning, eds.
J.A. Breuker, R.V. de Mulder & J.C. Hage, 17-27. Lelystad: Koninklijke Vermande
BV.
Przymusinski, T. 1988. Perfect model semantics. In Logic programming: Proceedings of
the Fifth International Conference and Symposium, eds. R.A. Kowalski & K. Bowen,
1081-1096.
Raz, J. 1975. Practical Reason and Norms. Princeton University Press.
Reiter, R. 1978. On closed-world databases. In Logic and Databases, eds. H. Gallaire &
J. Minker, 119-140. New York: Plenum Press.
Reiter, R. 1980. A logic for default reasoning. Artificial Intelligence 13, 81-132.
Reiter, R. 1987. Nonmonotonic reasoning. Annual Reviews of Computer Science 2: 147-
186.
Reiter, R. & Criscuolo, G. 1981. On interacting defaults. Proceedings of the Seventh
International Joint Conference on Artificial Intelligence, 270-276.
Rescher, N. 1964. Hypothetical Reasoning. Amsterdam: North-Holland.
Rescher, N. 1977. Dialectics: a Controversy-oriented Approach to the Theory of Knowl-
edge. Albany, N.Y.: State University of New York Press.
Rissland, E.L. 1988. Artificial Intelligence and legal reasoning. A discussion of the field
& of Gardner's book. AI Magazine Fall 1988, 45-56.
Rissland, E.L. 1990. Artificial Intelligence and Law: stepping stones to a model of legal
reasoning. Yale Law Review Vol. 99, 1957-1981.
Rissland, E.L. & Ashley, K.D. 1987. A case-based system for trade secrets law. Proceedings
of the First International Conference on Artificial Intelligence and Law, 60-66. New
York: ACM Press.
Rissland, E.L. & Ashley, K.D. 1989. HYPO: A precedent-based legal reasoner. In Ad-
vanced Topics in Law and Information Technology, ed. G.P.V. Vandenberghe, 213-
234. Deventer/Boston: Kluwer Law and Taxation Publishers.
Rissland, E.L. & Skalak, D.B. 1991. CABARET: statutory interpretation in a hybrid
architecture. International Journal of Man-Machine Studies 34, 839-887.
Robinson, J. 1965. A machine-oriented logic based on the resolution principle. Journal
of the ACM 12, 23-41.
Roos, N. 1991. What is on the Machine's Mind? Models for Reasoning with Incomplete
and Uncertain Knowledge. Doctoral Dissertation Technical University Delft.
Roos, N. 1992. A logic for reasoning with inconsistent information. Artificial Intelligence
57, 69-103.
Routen, T. & Bench-Capon, T.J.M. 1991. Hierarchical formalizations. International
Journal of Man-Machine Studies 35, 242-250.
Royakkers, L. & Dignum, F. 1996. Defeasible reasoning with legal rules. In Deontic Logic,
Agency and Normative Systems, eds. M.A. Brown & J. Carmo, 174-193. London:
Springer Workshops in Computing.
Sacksteder, W. 1974. The logic of analogy. Philosophy and Rhetoric Vol. 7, 234-252.
Sartor, G. 1991. The structure of norm conditions and nonmonotonic reasoning in law.
Proceedings of the Third International Conference on A rtificial Intelligence and Law,
155-164. New York: ACM Press.
Sartor, G. 1992a. Normative conflicts in legal reasoning. Artificial Intelligence and Law
2-3, 209-236.
Sartor, G. 1992b. Reasoning with hierarchies of premises: derivation versus assumption-
based approaches. Unpublished paper, CIRFID, University of Bologna.
Sartor, G. 1993. A simple computational model for nonmonotonic and adversarial legal
reasoning. Proceedings of the Fourth International Conference on Artificial Intelli-
gence and Law, 192-201. New York: ACM Press.
Sartor, G. 1994. A formal model of legal argumentation. Ratio Juris 7, 212-226.
Sartor, G. 1995. Defeasibility in legal reasoning. In Informatics and the Foundations of
Legal Reasoning, eds. Z. Bankowski, I. White & U. Hahn, 119-157. Dordrecht: Law
and Philosophy Library, Kluwer Academic Publishers.
Scholten, P. 1974. Algemeen Deel van Asser's Handleiding tot de Beoefening van het
Nederlands Burgerlijk Recht, Derde druk. Zwolle: Tjeenk Willink. (in Dutch)
Sergot, M.J. 1982. Prospects for representing the law as logic programs. In Logic Pro-
gramming, eds. K.L. Clark & S-A. Tarnlund, 33-42. London: Academic Press.
Sergot, M.J. 1988. Representing legislation as logic programs. In Machine Intelligence 11,
eds. J.E. Hayes, D. Michie & J. Richards, 209-260. Oxford: Oxford University Press.
Sergot, M.J. 1990. The representation of law in computer programs: a survey and
comparison. In Knowledge-Based Systems and Legal Applications, ed. T.J.M. Bench-
Capon, 3-67. London: Academic Press.
Sergot, M.J., Sadri, F., Kowalski, R.A., Kriwaczek, F., Hammond, P. & Cory, H.T. 1986.
The British Nationality Act as a logic program. Communications of the ACM 29, 5,
370-386.
Shoham, Y. 1988. Reasoning about Change. Time and Causation from the Standpoint of
Artificial Intelligence. Cambridge, MA: MIT Press.
Simari, G.R. & Loui, R.P. 1992. A mathematical treatment of defeasible argumentation
and its implementation. Artificial Intelligence 53, 125-157.
Skalak, D.B. & Rissland, E.L. 1992. Arguments and Cases. An Inevitable Intertwining.
Artificial Intelligence and Law 1, 3-44.
Snijders, H.J. 1978. Rechtsvinding door de Burgerlijke Rechter. Een Kwantitatief Onder-
zoek bij de Hoge Raad en de Gerechtshoven. Deventer: Kluwer. (in Dutch)
Soeteman, A. 1989. Logic in Law. Remarks on Logic and Rationality in Normative
Reasoning, Especially in Law. Dordrecht etc.: Kluwer Law and Philosophy Library.
Suber, P. 1990. The Paradox of Self-amendment: a Study of Logic, Law, Omnipotence,
and Change. New York: Peter Lang.
Susskind, R.E. 1987. Expert Systems in Law. A Jurisprudential Inquiry. Oxford: Claren-
don Press.
Toulmin, S.E. 1958. The Uses of Argument. Cambridge: Cambridge University Press.
Touretzky, D.S. 1984. Implicit ordering of defaults in inheritance systems. Proceedings of
the Fourth National Conference on Artificial Intelligence, 322-325.
Touretzky, D.S. 1986. The Mathematics of Inheritance Systems. London: Pitman.
Veltman, F. 1996. Defaults in update semantics. Journal of Philosophical Logic 25, 221-
261.
Verheij, B. 1996. Rules, Reasons, Arguments. Formal Studies of Argumentation and
Defeat. Doctoral Dissertation, University of Maastricht.
Vreeswijk, G.A.W. 1991. The feasibility of defeasible reasoning. Proceedings of the Second
International Conference on Principles of Knowledge Representation and Reasoning,
526-534. San Mateo, CA: Morgan Kaufman Publishers Inc.
Vreeswijk, G.A.W. 1993a. Studies in Defeasible Argumentation. Doctoral dissertation
Department of Computer Science, Free University Amsterdam.
Vreeswijk, G.A.W. 1993b. Defeasible dialectics: a controversy-oriented approach towards
defeasible argumentation. Journal of Logic and Computation 3, 317-334.
Vreeswijk, G.A.W. 1995. The computational value of debate in defeasible reasoning.
Argumentation 9, 305-341.
Vreeswijk, G.A.W. 1996. Representation of formal dispute with a standing order. Research
Report MATRIX, University of Limburg.
Vreeswijk, G.A.W. 1997. Abstract argumentation systems. Artificial Intelligence 90, 225-
279.
Walker, R.F., Oskamp, A., Schrickx, J.A., Opdorp, G.J. van & Berg, P.H. van den, 1991.
Prolexs: creating law and order in a heterogeneous domain. International Journal of
Man-Machine Studies 35, 35-67.
Walton, D.N. & Krabbe, E.C.W. 1995. Commitment in Dialogue. Basic Concepts of
Interpersonal Reasoning. Albany, NY: State University of New York Press.
Wild, J.R. de & Quast, J.A., 1989. The concept of 'commensurate work' in a legal
knowledge-based system. Preproceedings of the 'Expert Systems in Law' Conference.
Bologna.
Wittgenstein, L. 1958. Philosophical Investigations. New York: MacMillan.
Wright, G.H. von, 1964. A new system of deontic logic. In Danish Yearbook of Philosophy
1, 173-182. Reprinted in Deontic Logic: Introductory and Systematic Readings, ed.
R. Hilpinen, 105-120. Dordrecht: Reidel, 1971.
Wright, G.H. von, 1983. Practical reason. Philosophical Papers, Vol. I. Oxford: Basil
Blackwell.
INDEX

a contrario reasoning, 263 existence of, 167, 223-224
abductive reasoning, 98, 220, 229, grounded, 224-225, 227
263 multiple, 223, 224
computational complexity of, 252, preferred, 224, 227, 229
254 stable, 223-224
abnormality predicates, 85, 107 uniqueness of, 167, 210, 224
abstract argumentation systems, 230- argument moves, 29, 64-65
232 argument structures, 230-231
admissible orderings, 236, 237 argument-based theories, 222
admissible sets of arguments, 223 finitary, 222, 224
adversarial aspect of, 73 argumentation, 22
AI and law, 1-2,5,35,50,56,57,133, defeasible, see defeasible argu-
138,144,179,220,260,264, mentation
265, 273-274 dialectical layer of, 271, 273, 283-
projects in, 1-2,56,58,61-65 284
Alchourrón, C.E., 2, 15, 16, 90, 179, logic layer of, 271, 273, 283
181-183, 192 procedural layer of, 271, 273,
Alexy, R., 23, 265, 270, 271 283-285
Alferes, J.J., 128 step-by-step nature of, 150, 163,
Allen, L.E., 2, 17 168-170,189,231,234,237,
ambiguity propagation, see sceptical 281
reasoning, extreme strategic layer of, 273-274, 285
analogical reasoning, 8,12,18,25-29, argumentation frameworks, 219-220,
31, 64, 220, 229 222, 229, 230
as heuristic for suggesting premises, argumentation sequences, 231
27, 29, 55, 257-258, 261, argumentation strategies, 258, 273-
273, 285 274,285
justifying force of, 27-29,60, 95 argumentation systems, see defeasi-
vs. nonmonotonic reasoning, 60, ble argumentation
95 argumentation theory, 220, 283
analogy arguments, 220
counterexample of, 29 acceptability of, 223
relevance of, 27-28 accrual of, 198-200, 284
similarity of, 27-29, 64 alternative, 25, 52, 56, 61-63, 73,
answer set semantics, 96, 126-127 261
applicability assumption, 42 analytic vs. formally valid, 22-
applicability clauses, 40-41, 107, 250 24
applicability predicates, 217, 236 assessment of, 152,163-170,220-
argument extensions, 196-197, 221- 225, 227-228, 231, 233-234
225, 231 attacking, 60, 156-158, 173, 263
conflict-freeness of, 157, 168,210 assumption, 173
defensible, 197 conclusion, 152, 156-157


comparing, 25, 43, 59, 63, 152, autoepistemic logic, 73-76, 96-97
158-163,198,220,226-227, hierarchic, 180, 190
233, 252, 261, 263 axiomatic systems, 18
conflicting, 31,152,156-158,220, Åqvist, L., 2
226-227, 231, 233-235, 261
constructing, 152, 154-156, 263 backings, 22, 256
defeasible, 155 backward chaining, 9, 42, 120
defeat among, 152, 161-163, 173- Baker, G.P., 272-273
175,220, 222-223 Barklund, J., 262
hierarchical, 192 Barth, E.M., 164
rebutting, 174 BDKT approach, 221-225, 232, 237-
specificity, 162, 173 238
strict, 162, 222 belief revision, 12, 13, 30, 183-187,
undercutting, 174 191
with defeasible priorities, 208 Bench-Capon, T.J.M., 17,24,34,35,
defensible, 152, 169, 195-196, 54, 55, 262
221, 228 Benthem, J. van, 256
incoherent, 157, 162 Berg, P. vd, 52
justified, 152, 164, 167, 195,221 Berman, D.H., 7-8
228
Beth, E.W., 23
orderings on, 25, 141, 232
Bidoit, N., 67, 81, 133
overruled, 152, 169, 221, 228
Bing, J., 27
rebutting, 192, 220
strict, 155 Birnbaum, L., 8, 11
structure of, 154-156, 222, 226, Bondarenko, A., 154, 156, 167, 197,
219, 221, 227
234, 237, 263
linear, 226 Bonevac, D., 283
suppositional, 226 Brewka, G., 45, 67, 72,82,90,92-94,
Toulmin on, 21-22,255-256 97, 102, 108, 119, 128, 133,
tree, 229, 230, 233 135,142-143,150,164,179,
undercutting, 220, 263 180, 183, 187-188, 190, 195,
validity of, 23 238-240, 260, 270
weight of, 198 British Nationality Act, 56, 61
arguments about Brouwer, P.W., 38
priorities, 203 Bulygin, E., 2, 15-16
rule backing, 176-177, 268 burden of proof, 102, 167, 263-264,
rule interpretation, 204, 216- 272-273
217,251 Bylander, T., 254
rule validity, 176-177
Artificial Intelligence, 1-5, 53, 180 C-relations, 273
ambitious vs. practical, 3-5, 12 CABARET, 2, 63-65, 259, 261-262,
logical foundations of, see formal 274
foundations canonical model, 78
symbolic vs. connectionist, 3-4 case law
weak vs. strong, 11 as source of exceptions, 48, 63
Asher, N., 89, 283 case-based reasoning, 26, 30-31
Ashley, K.D., 2, 8, 25-27, 29, 30, 64 combining rule-based and, 1-2
144 64
assumptions, 58, 173, 221 cases
augmentation principle, 88 and rules, 30

clear vs. hard, 2, 50-52, 62-63, completeness


258-260, 265 of a proof theory, 10
hypothetical, 30 of an implementation
cautious monotonicity, 94 giving up, 254-255
chaining, 115-116, 119, 123 of the law, 30
change of the law, 4, 56-57, 67 completion, 40, 43
choice approach, 102, 141-142, 177- completion formulas, 40-42
178, 180, 245,249-253 computational complexity, 5, 9
choice criterion, 192 computer systems, 3, 13
circumscription, 82-87, 96, 249 conceding, 266
pointwise, 86, 123 conceptualism, see naive deductivism
predicate, 85 conclusions, 155
prioritized, 86, 96, 114 alternative, 102, 252
representing exceptions in, 112- defensible, 170
116, 130-133 floating, 196-197,229
variable, 84-85 justified, 170, 197
circumscription formula, 82-83 overruled, 170
circumscription policies, 84, 96, 112- qualifier of, 22, 256
113, 123-124, 130 undesirable, 23
civil pleading, 265 conditional entailment, 236-237, 266,
claiming, 266 269
Clark, K., 80 Gordon's use of, 236-237
classification, 18, 49-56, 256-257 conditional logics, 87-89, 94, 252
clauses, 79 conflict pairs, 158, 192
directionality of, 79, 123, 126 conflict-free sets of arguments, 158
closed-world assumption, 76-78, 80, conflicts
83 multiple, 157
Coenen, F.P., 34, 35 iterated, 149-150, 159, 168-
cognitive psychology, 7, 53 169, 182-185, 188
Cohen, W.W., 129 rules relevant to, 151, 158-160,
collision rules, 34, 43-47, 55, 63, 67, 186-190
151, 179, 187,245,262 undecided, 103, 111, 116, 118,
combining, 206, 252 129,132-133,171-172,250,
conflicting, 205 252
defeasibility of, 203, 205 conjunction principle, 95, 194
formalization methodologies for, consistency, 8
210-212,217-218 consistency check, 72, 99
legal aspects of, 46-47, 204-206 context of discovery, 28-29
logical aspects of, 46, 205 context of justification, 28-29
nonmonotonicity of reasoning contraction, 183
with, 46, 59, 203 partial meet, 183
priority or applicability rules?, contrapositive inferences, 135-136,
217-218 189
requirements for formalizing, 205- core of certainty, 50-51
206,211-212 counterarguments, 156, 266
scope of, 174, 206, 212 counterfactual reasoning, 87, 94
selfreferential, 212, 214, 216 credulous reasoning, 71, 90, 95, 195,
statutory, 45-46, 56, 204 224
combining modes of reasoning, 264 Criscuolo, G., 71, 106
common-sense reasoning, 67, 238 cumulativity, 94, 194-195

undesirability of, 194-195 underlying logic of, 220-221,


CYC, 3 226, 229-230, 233
defeasible conditionals, 87-88, 102,
DART,263-264 141, 231, 232, 236, 252
data, 22, 256 defeasible logic, 232-235, 254
declarativist/proceduralist controversy, defeasible modus ponens, 173
8-11 defeasible reasoning, 7, 12, 232
deductive reasoning, 6, 8, 29, 59 defeasible rules, 22, 153, 172
default deduction, 154 defeat among arguments, see argu-
default deductive system, 154 ments
defeat status assignments, 227-228,
default extensions, 69-70, 154, 238-
231
239
multiple, 227
existence of, 73-112, 175, 225,
239 defeaters
might, 233
multiple, 70-71,239
rebutting, 102, 226
unintended, 71, 106
hard, 107, 109, 113-114, 118,
default logic, 69-73, 94, 96-97, 225,
121, 124, 131, 171, 249
249, 252 soft, 106-107, 110-111, 114-
constructive, 239 115, 119, 121, 124, 131-132,
cumulative, 95, 195 171,249,250
my system's use of, 152-153,172 undercutting, 102, 107, 172,226-
prioritized, 180, 238-239 227, 233
representing exceptions in, 105- hard, 109, 113, 119, 122, 130-
112, 130-133 131, 249
semantics of, 72 soft, 109--110, 114, 119, 123,
default theories, 69, 153 131, 176,250
ordered, 191, 207 definitions
defaults, 69, 89, 153 complete, 16-17,51
closed,69 non-constructive, 70, 72, 75, 239,
directionality of, 115-116, 119, 245
134-135,151-153,171,189, Delgrande, J., 87-89
236, 238 DEMO predicate, 262
free, 96 denying, 266
nonnormal, 71, 153, 175 deontic concepts, 2, 7, 38
normal,71 deontic logic, 2, 7, 16, 283-284
open, 69,91-92 derivability
reasoning about, 88 argumentative, 193, 273
seminormal, 71, 106, 111 simple, 154
defeasibility dialectical graphs, 267
in legal reasoning, 272-273 dialecticallayer, see argumentation
procedural aspect of, 272-273 dialectical proof theory, see proof
defeasible argumentation, 231 theory
systems for, 150-152, 191, 193, dialectical reasoning, 164, 260, 263-
219-238, 266, 271-272 264
declarative form of, 221, 231, dialectics
237 formal,283
general structure of, 219-221 dialogue games, 164, 236, 263-265
procedural form of, 221, 231, winning, 267
237 dialogue logic, 164

dialogue moves, 164, 166, 209, 266 requirements for representing,


dialogue trees, 164, 167 103-105
priority, 210 separate representation of, 34-
dialogues, 59-60, 166 35, 45, 57, 104, 262
priority, 209-210 soft, 103
winning, 167 exemplars, 62
Dignum, F., 164 expert systems, 3
dimensions, 64 explanations, 91
disagreement, 7-8, 20, 25, 56, 59, 144 extensions
discourse rules, 265, 267 in default logic, see default ex-
discourse theory of legal argumenta- tensions
tion, 265, 270 in Poole's framework, 91
disputes, 59-60, 220, 260, 265, 270 in the BDKT approach, see ar-
procedures for, 271-272 gument extensions
distinguishing, 28, 29
Doyle, J., 68, 74, 97 factors
Dung, P.M., 154, 156, 164, 167, 197, weight of, 54
219-225, 227, 255 facts
Dworkin, R.M., 48, 51, 241 contingent, 145, 148
necessary, 145
possible, 145, 148-149, 159, 170
Eemeren, F.H. van, 270 fallacies, 263
Enschede, Ch.J., 21 family resemblance, 53
epistemological reasoning, 219, 226, Farley, A.M., 21, 263-264,285
229 Feteris, E.T., 270
procedural aspects of, 270 first-order predicate logic
Etherington, D.W., 61, 67, 72-73,81, language of, 10
82,84,98,99,107,112,115, non-decidability of, 99
142 theorem provers for, 9
evidence, 7, 76 formal foundations
exception clause approach, 102, 105, of AI, 2, 13, 99-100, 282
142, 249-253 of AI and law, 1-2, 5, 13, 61
exception clauses formal methods
combining priorities and, 172- scope of, 25
177 formal systems, 10
general, 40-42, 107-111, 117, formalism, see naive deductivism
122, 251 formalization, 16-18
specific, 38-39, 72, 106-107, 121 formalizations
exceptions alternative, 17
explicit, 102, 245 isomorphic, 34, 35
hard, 103, 108-109 resemblance to natural language
implicit, 34, 36-37, 102 of, 4, 104, 138, 178, 251
methodologies for representing, structural resemblance of, 34-
35,44-46,57,102,129-134, 36, 39-41, 45, 48, 57, 58,
141-142,177-178,225,249- 104,107,136-137,177,250-
253 251, 262
modularity of adding, 34, 39-41, formalizing
45, 48, 104, 107, 111, 137, modularity of, 34, 35, 41, 104,
250-251 116,137,142,177-178,250-
rebutting, 22 251

forward chaining, 9 inconsistency handling, 89-93, 143-


frames, 10, 53 144,179-180,232,238,260
Frank, J., 2 problems with, 150-151, 186-
Franken, H., 55 191, 282
Freeman, K., 21, 263-264, 285 inconsistent information, 12
fuzzy logic, 55 reasoning with, 7-8, 12, 25, 26,
31,43-46,52,55-56,62,63,
Gärdenfors, P., 13, 183 179-180
Gardner's program, 46, 62-64, 258- inductive reasoning, 8, 12, 18,22, 25,
259 31, 55, 220, 270
Gardner, A. vd L., 1,7,25,46,61-63, justifying force of, 60
144 inference, 10, 28
Geffner, H., 168, 210, 236-237, 266 inference licences, 22
Gelfond, M., 82, 96,126-127,133 inference rules, 244
Genesereth, M.R., 4, 13, 243 domain specific, 22, 69, 126, 134,
Ginsberg, M.L., 67 233, 235
goals, 241 nonmonotonic, 229-230
Gordon, T.F., 2, 7, 21, 51-52, 57, 61, unsound, 12, 18
144,177,210,220,236-237, inheritance
254, 255, 259-261, 264-271 with exceptions, 143, 195, 254
Grootendorst, R., 270 interpretation, 20
Gupta, R., 3 arguments about, see arguments
of legal knowledge, 17-18, 256-
Hafner, C.D., 7,8 258
Hage, J.C., 2, 61, 198, 199,216-218, statutory, 204
225, 240-247, 262, 265, 284 introspective reasoning, 74
Hamfelt, A., 262 in law, 75-76
Hart, H.L.A., 7, 49-51, 53-54, 272- intuitionistic logic, 129
273 Israel, D.J., 8, 11, 12, 98
Hayes, P.J., 8-10 issue spotting, 62, 259-260
Herbrand model, 78 issues, 260, 265-267
Herbrand universe, 78
Herrestad, H.B., 2, 15, 16 J0rgensen dilemma, 15-16
heuristics, 9, 98, 255, 261-262 J0rgensen, J., 15
for dispute, 63-65, 273-274, 285 Johnson-Laird, Ph.N., 53
for suggesting premises, 29, 60, Jones, A.J.I., 2, 283
264 justice, 19, 21, 30
Hofstadter, D.R., 3 justification (in Pollock's system),
Hohfeld, W.N., 2 228, 230, 231, 255
Horn clauses, 77-78
Horty, J.F., 143, 195,254 Karpf, J., 34-35
HYPO, 2, 29, 64, 259, 261, 273-274, KB unit, 35
285 Kleer, J. de, 97
hypothetical reasoning, 90 Kloosterhuis, H., 270
knowledge
ideal worlds, 16 common-sense, 2-4, 62
implementation, 99-100, 105, 112, defeasibility of, 11
120,137,177,234-235,246, control, 9-10
252-255 domain, 9, 10
inapplicability clauses, 42 legal

ambiguity of, 19, 56 ambiguity of, 30


defeasibility of, 11 applicability of, 20, 242-243
incompleteness of, 19, 50-51, application of, 243-244
56 analogical, 27-28, 242
inconsistency of, 19, 52, 56 conflicting, 34
two levels of, 241 constitutive nature of, 243
uncertainty of, 50-51 defeasibility of, 7, 47-49, 57, 242
metalevel, 25 exceptions to, 20, 36-37, 47-48
knowledge engineer, 17 purpose of
knowledge representation as source of exceptions, 47, 63
declarative, 4-6 reasoning about, 25, 241
declarative vs. procedural, 4-5, running out, 63-64
8-11 validity of, 7, 20, 242
languages for, 10-11 legalism, see naive deductivism
semantics of, 10, 30 legislation, 19-20, 25
knowledge-based systems, 4, 6, 11 structural features of, 34, 36-37,
legal, 4-5, 8, 12, 25, 52, 55-58 55
maintenance of, 4, 34, 36, 57, 58 Leith, Ph., 7, 17, 20-21, 24-25, 30
questions to users of, 57-58 Lenat, D., 3
validation of, 4, 34, 36, 58 Lex Posterior, 46, 56, 105, 204, 210-
Kolata, G., 11 216
Konolige, K., 96, 142, 180, 190 Lex Specialis, 44-46, 56, 59, 105, 141,
Kowalski, R.A., 1, 37, 61, 82, 121, 172, 204, 210-211
127-128,176,217-218,221, Lex Superior, 45, 46, 56, 59, 105, 204,
225, 284 210-211
Krabbe, E.C.W., 164,283 Lifschitz, V., 82, 86, 96,123,126-127,
Kraus, S., 94 133
Krogh, C., 2 Lin, F., 229-231, 237-238
Lindahl, L., 2
language for legal discourse, 2, 62 Lloyd, J.W., 77, 120
learning,3 logic, 9-11, 26, 28
legal rules and reasoning, 11-12, 18, 98
clear, 20 as a standard for implementa-
conflicting, 30 tions, 10
legal concepts, 20, 62 as a tool, 6, 8, 12, 25, 29-31, 52,
defeasibility of, 31, 52-53, 62-63 56, 63, 256-258, 260, 261,
definitions of, 16-17 264, 270
open-texturedness of, 49-56 critical function of, 29
legal logicians, 21, 23 relevance of
legal philosophy, 1, 53, 179, 265, 270, for AI, 6-7, 13
272-274 for AI and law, 2, 6-7
legal reasoning, 24 scope of, 23-24
adversarial aspect of, 138, 253 standard
automating, 5-6, 25 insufficiency of, 34, 42-43, 49,
defeasibility of, 26, 57, 58 53,57,179
formalizing, 5-7, 25-26 logic metaprogramming, 262
logical aspects of, 7-8 logic programming, 42, 77-82, 96,
procedural aspects of, 265, 270 249, 252
rule-guided vs. rule-governed, 25 extended, 82, 96, 125-129
legal rules prioritized, 240

general, 79-82 McCarty, L.T., 1-2,62,128-129,273


classical negation in, 120-121 McDermott, D., 3, 68, 74, 108
Horn, 77-79 mechanical jurisprudence, see naive
intuitionistic, 128-129 deductivism
representing exceptions in, 120- mediation systems, 265, 274
133 Meldman, J.A., 1
undecided conflicts in, 122, 125- metalevel architectures, 9, 13
128, 132-133 metalevel reasoning, 4
logic programs legal, 4, 25, 205, 208, 262
completion of, 80 metalogic, 218, 246
looping, 81, 122, 127-128 minimal entailment
non-stratifiable, 122, 124, 127 in circumscription, 83-84
the law as, 1, 61 in logie programming, 78-79,
transformation of, 120-121,126- 81-82
128 minimal models, 78, 84
logieal consequence notions multiple, 142
game-theoretie, 164 unintended, 80, 85-86, 114, 115
monotonie, 43 minimization, 76
nonmonotonie, 43, 94-95, 237- of atoms, 86
238 of predicate extensions, 78, 84
properties of, 60, 94-95,193-195 Minsky, M., 11, 53
logieallanguages, 9 mod us ponens
semantics of, 10, 12, 18 defeasible, 154
syntactie restrietions of, 99, 254 modus tollens, 151, 189
logieal layer, see argumentation defeasible, 135-136, 178, 263
logieal methods, 30 Moore, R.C., 8, 74
criticism of, 18-21, 23-27 moral considerations, 8, 21, 25
in AI, 7-12 moral dilemmas, 283
in AI and law, 7-8 more on point, 64, 261
in legal philosophy, 7-8
Morreau, M., 89
in AI and law, 2
in legal philosophy, 2
misunderstandings about, 16- naive deductivism, 19-21, 23-25, 30
21, 23-24 naive physies, 3
logicism, 11-12 naming formulas
Loui, R.P., 34, 142, 143, 147, 148, conventions for, 107-108, 117-
164,168,219,220,235,237, 118, 207, 243
254-255, 271, 272 naturallanguage
Lukaszewiez, W., 67, 72 ambiguity and vagueness of, 17,
251
MacCormick, N., 204, 272, 273 negation as failure, 42-43,61, 76-82
Makinson, D., 2, 90, 94, 179, 181- procedural interpretation of, 79-
184, 192, 196 80, 120, 134, 142
material implication, 150-151 neural networks, 4, 54
mathematical model of reasoning, 23, Nieuwenhuis, J.H., 23, 29
24, 27, 270 Nieuwenhuis, M.A., 34, 35, 40, 56,
mathematics 104
foundations of, 23 Nilsson, N .•J., 4, 5, 8, 12, 13, 243
McCarthy, J., 11, 82-86, 112, 114, Nixon diamolld, 86, 103, 111, 116,
115 146, 222-225, 231

nondeductive reasoning, 6, 8, 11-12, Pearl, J., 168,210,236-237,266


18, 26-27, 220, 264, 285 Peczenik, A., 205
nonderivability, 244 penumbra of doubt, 50-51
demonstrable, 233 Pereira, L.M., 128
nonmonotonic logic (of McDermott Perelman, Ch., 7, 8, 19, 24-25, 256,
and Doyle), 68, 74 273
nonmonotonic logics perfect model semantics, 82
computational complexity of, 59, philosophy, 53
61, 72, 75, 87, 89, 99-100, plausible reasoning, 232
129, 253-254 Pleadings Game, 236, 264-271, 274
vs. expressiveness, 129, 253 Plug, H.J., 270
connections between, 82, 87, 96- political considerations, 8, 20, 21, 25
97, 123, 133, 225 Pollock, J.L., 168, 198, 219-221, 226-
expressiveness of, 105, 111,116, 233, 235, 237-238, 255, 263
120,122,129,138,178,252- Poole's framework, 13, 90-92, 98,
253 144-151,221,249, 252, 259
vs. computational complexity, problems with, 159, 160, 170
129, 253 representing exceptions in, 117-
non-semidecidability of, 99 120, 130-133, 145-148
objections to, 97-98 Poole, D.L., 13, 34, 44, 90-92, 96,
nonmonotonic reasoning, 7, 11, 18, 117-120,136,142-151,193,
35, 43, 46, 53, 56-61, 219 259-260
as constructing and comparing practical reasoning, 229
arguments, 219, 240, 281 pragma-dialectical school of argu-
as inconsistency handling, see mentation, 270
inconsistency handling Prakken & Sartor, xi-xii, 128, 167,
VS. analogical reasoning, 60, 95 169,197-198,209,210,219-
nonprovability operators, 42-43, 58, 221, 225, 235, 240, 254, 285
76 precedents, 7, 20, 26, 27, 29, 30
normality assumptions, 49, 57, 67, predicate circumscription, 83
87, 117 preferences, see priority relations (on
normative systems, 2 premises)
hierarchical, 45, 90, 187 preferential entailment, 93-94, 236
norms premises, 23
truth values of, 15-16 selecting, 6, 22, 265-266
notational conventions, 37, 181 methods of, 6
numerical methods, 54-55 prima facie obligations, 283-284
Nute, D.N., 160, 168, 195, 232-235, principles, 7, 20, 51, 241
237, 254 as source of exceptions, 47, 63
conflicting, 51
one-direction rules, 232 vs. rules, 241
Opdorp, G.J. van, 54 priority predicates, 207, 262
opponent, 164 priority premises, 207
OSCAR,226 syntactic restrictions on, 207-
overdetermination, 52, 63 208
priority relations (on premises), 45-
paraconsistent logics, 179 46, 92, 120, 232, 250-251
partial computation, 228-229, 231, combining exception clauses and,
238, 255 172-177
partial matching, 27-28 equality, 206, 211-212

no, 206, 211 legal procedures, see procedures,


properties of, 181, 207, 218 legal
reasoning about, 203, 206-210, legal rules, see legal rules
225, 239-240, 252, 284 priorities, see priority relations
sources of, 203-205 (on premises)
problem solvers reasoning by cases, 88
general, 9 reasons, 198
problem solving tasks, 4 accrual of, 244, 246
legal, 6 exclusionary, 241
procedural layer, see argumentation in Pollock's system, 226, 229
procedures conclusive, 226
for dispute, see disputes prima facie, 226
legal, 265, 272-273 rebutting, 226
reasoning about, 284 undercutting, 226
PROLEXS, 58 in reason-based logic, 241
PROLOG,77 weighillg, 198,204-205,241-242,
proof methods 244-245
global, 59, 99, 254 weight of, 198
local, 59 reinstatement, 103, 110, 119, 121,
proof theory, 10, 221 124-125, 138, 163, 170
dialectical, 164, 229, 231, 235, Reiter, R., 67, 69-73, 77, 96, 98, 106,
265, 284 112,154, 175, 190,233,239
credulous, 169-170 relevance criterion, 182, 189-190, 192
for defeasible priorities, 208- relevance logics, 179
210 Rescher, N., 90,164,179,270
sceptical, 166-169 resolution, 9
proponent, 164 SLD,77
prototypes, 53 SLDNF, 80, 120, 127, 133
prototypes and deformations, 62, 273 retracting, 270
Przymusinski, T., 82, 123 Riggs v. Palmer, 48
right answer, 20
Rissland, E.L., 2, 7,8,25-27,29-30,
Quast, J.A., 54 63, 144,273
quest ions Robinson, J., 9
easy vs. hard, 62-63, 258-259 Roos, N., 67, 90, 142, 179-181
Routen, T., 34, 262
rationality, 272 Royakkers, L., 164
procedural aspects of, 271 rule discrediting, 64-65, 257, 261
Raz, J., 241 rule-based reasoning, 19, 30-31
reason-based logic, 240-247, 262, 265 combillillg case-based and, 1-2,
representing exceptions in, 245 64
reasonableness and equity, 36, 48 Russkind, R.E., 15
reasoning
axiomatic view on, 18-19,25,30, Sacksteder, W., 27
45,46,52,260 Sadri, F., 82,127-128
combining modes of, 4, 26, 263 Sartor, G., 2, 61, 102-103, 105, 168,
noninferential, 26-29, 256 176,184-185,190,204,216-
resource-bounded, 238, 255 217,220,273
reasoning about, 72-73 Saxon, C.S., 17
defaults, see defaults scenarios, 90, 118

multiple, 142 strategic layer, see argumentation


unintended, 119 stratification, 96, 123-124
sceptical reasoning, 71, 90, 195-196, in logic programming, 80-82,
224, 227 123
extreme, 195-196,234,246 strengthening of the antecedent, see
moderate, 195-196 augmentation principle
Schlechta, K., 196 strong consequence, 90, 181
Scholten, P., 27 structural resemblance, see formal-
Schrickx, J.A., xi izations
second-order logic, 87, 99 subarguments, 155
semantic indeterminacy, 50 Suber, P., 216, 284
semantic nets, 10 subtheories, 183, 187-188
semantics, 10, 13 comparing, 184-186
argumentation-theoretic, 221 in default logic, 190
model-theoretic, 221, 236, 282 preferred, 90, 92-93
Sergot, M.J., 1, 17, 23-24,35,55,56, multiple, 93
61 Summers, R., 204
Shoham, Y., 93-94,229-231,237-238 superior evidence, 147-148,261
Simari, G.R., 164, 168,219,235,237, Susskind, R.E., 21, 50, 57, 104
254 system of rules, 19, 21
Skalak, D.B., 2,30,63, 144,273
Snijders, H.J., 48, 177 TAXMAN, 1,2
social considerations, 20-21, 25 TAXMAN 11, 62, 273
Soeteman, A., 2, 15, 21, 23, 272 temporality principle, see Lex Poste-
soundness, 10 rior
giving up, 100, 254-255 TESSEC,56
source unit, 35 theorem provers, 9,87,97
specificity, 44-45, 89, 142-144, 160- Toni, F., 217-218, 221, 225, 284
161,203,207,220,236,250- Toulmin, S.E., 7, 21-26, 255-256,
251, 257, 261, 269, 281-282 263, 268, 270
defeasible, 161 Touretzky, D.S., 34, 107, 142-143,
exclusiveness of, 105, 137-138, 254
141, 152, 178,237,251-252 truth maintenance systems, 97
Loui's definition of, 143-144,147 Tweety, 11,69,74,88-89,223-224
Nute's definition of, 160, 234
undecided conflicts, 227
Poole's definition of, 144-149,
160, 235 underdetermination, 51, 63
problems with, 148-151 vagueness, 50, 54-55
priority premises on, 211 validity
speech acts, 266 of arguments, see arguments
Sridharan, N.S., 62 of legal rules, see legal rules
stable expansions, 74 variable standards, 54
multiple, 75 Veltman, F., 89
stable model semantics, 82, 126-127 Verheij, B., 2, 198,200,240-247,262,
stable models 265, 284
multiple, 127, 128 Vreeswijk, G.A.W., 148, 164, 168,
stable sets, 74 194,219,221,230-232,237-
statistical analysis, 54 238, 284
statutes, see legislation
Stiefvater, K., 148, 235 Walker, R.F., 54, 58

Walton, D.N., 283


warrant
in Pollock's system, 228
ideal, 228
in Vreeswijk's system, 231
warrants, 22, 256
weak consequence, 90, 181
weak negation, 172
weakest link principle, 164, 168-169
well-founded semantics, 128, 240
Wild, J.H. de, 54
Wittgenstein, L., 53
Wright, G.H. von, 15-16,98,283

zombie paths, 196
