Logical Tools For Modelling Legal Argument
PREFACE xi
1 INTRODUCTION 1
1.1 AI, Logic and Legal Reasoning: Some General Remarks 1
1.1.1 An Overview 1
1.1.2 Artificial Intelligence 2
1.1.3 Computable Aspects of Legal Reasoning 5
1.1.4 The Role of Logic 6
1.2 The Focus of Research 7
1.3 Logic and AI 8
1.3.1 The Declarative vs. Procedural Debate 8
1.3.2 Logics and Programming Systems 9
1.3.3 Logic and Reasoning 11
1.4 Points of Departure 12
1.5 The Structure of this Book 13
11 CONCLUSION 275
11.1 Summary 275
11.2 Main Results 276
11.3 Implications for Other Issues 281
11.4 Suggestions for Further Research 284
REFERENCES 293
INDEX 303
PREFACE
Acknowledgements
Four and a half years have passed since I defended my PhD thesis, the
basis of the present book. I here briefly repeat the acknowledgements and
support. The Dutch Science Foundation provided the grant for my PhD re-
search, while the Royal Netherlands Academy of Arts and Sciences awarded
me a three-year postdoctoral fellowship, which I could carry out at the Com-
puter/Law Institute of the Free University Amsterdam. The EC Working
Group ModelAge (Esprit WG 8319) also provided partial support, and
its SG 1 subgroup on defeasibility was a stimulating audience. I thank
John-Jules Meyer for inviting me to take part in the ModelAge project.
The GMD Bonn sponsored my two-week visit to Bonn in August 1994 on
invitation by Tom Gordon, while the Artificial Intelligence Center of the
University of Georgia, directed by Don Nute, provided hospitality during
the last four months of 1995. And the Faculty of Law of the Free University
Amsterdam offered their hospitality after the end of my fellowship, which
made it possible for me to finish this book without problems.
Finally, I thank all my colleagues of the Computer/Law Institute for
making the institute such a stimulating and pleasant place to work, during
both my research student and my postdoc years.
CHAPTER 1
INTRODUCTION
This book investigates logical aspects of legal reasoning. It draws its in-
spiration from the field of Artificial Intelligence and law, but the main
techniques it uses are those of logic and formal philosophy. Thus the book
is an example of applied legal philosophy, applied in the sense that the
intended result consists of formal-logical foundations of computer systems
performing legal reasoning tasks. However, since legal philosophy does not
provide ready-made insights that can serve as such foundations, this book
also contributes to legal philosophy, in further developing these insights.
In this introductory chapter I make some general remarks on Artificial
Intelligence (AI), its applications to legal reasoning and the relevance of
logic for these applications (Section 1.1), I present the focus of research
(Section 1.2), and then address some general issues concerning the use of
logic in AI (Section 1.3). On the basis of these discussions I conclude in
Section 1.4 by stating the points of departure of the present study.
1.1. AI, Logic and Legal Reasoning: Some General Remarks
1.1.1. AN OVERVIEW
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
representation is also said to have the disadvantage that it may make
systems computationally less efficient than if their knowledge is tailored
to one specific task: if knowledge can be used in more than one way, often
more ways have to be tried before a solution has been found (Nilsson, 1991,
p. 43). Moreover, as will be further explained in Section 1.3.2, declarative
knowledge representation is more a matter of degree than of 'yes or no',
since in practice the situation of purely declaratively represented knowledge
cannot be obtained. Still, one of the basic assumptions of the present
investigation is that in AI and law it is both desirable and possible to
represent a substantial body of knowledge in a way which is as declarative
as possible.
In conclusion, there are, roughly, two kinds of AI research, distinguished
by their aims: some researchers try to let computer systems perform like
intelligent human beings, while others restrict themselves to more practical
aims. Of course, in reality this distinction is not that clear-cut, but is more
a matter of degree while, moreover, practical results are often the spin-off
from ambitious research. However, in spite of this, I have in this overview
chosen to leave out most of the nuances, in order to state my position
clearly: this book opts for aims which are closer to the practical side of
the scale; when it comes to AI, it is not about logical aspects of making
really intelligent artificial judges or solicitors but about logical aspects of
legal knowledge-based systems: of systems of which a main feature is the
separation of knowledge and ways of using it.
things are matters of selecting the premises, rather than of reasoning with
them. It may be that also for these activities rational procedures exist,
but the problem is that they will be too vague to be formalized, or that
they will be too tentative, that is, they do not have compulsory force,
for which reason they must inevitably leave room for human judgement;
and this means that an essential aspect of these procedures escapes for-
mal treatment. To understand the scope of the present investigations it
is important to be aware of the fact that the information with which a
knowledge-based system reasons, as well as the description of the problem,
is the result of many activities which escape a formal treatment, but which
are essential elements of what is called 'legal reasoning'. In sum, the only
aspects of legal reasoning which can be formalized are those aspects which
concern the following problem: given a particular interpretation of a body of
information, and given a particular description of some legal problem, what
are then the general rational patterns of reasoning with which a solution
to the problem can be obtained? With respect to this question one remark
should be made: I do not require that these general patterns are deductive;
the only requirement is that they should be formally definable. This point
will be discussed in more detail in Section 1.3.2 and in Chapter 2.
It should be noted that, although the investigations will be restricted to
certain aspects of legal reasoning, they will not be restricted to certain legal
problem solving tasks, such as deciding a case, giving advice to a client,
assisting in legal planning, mediating legal disputes, and so on. The reason
is that, as will be argued throughout this book, logic can be used as a tool
in many different problem solving tasks. Therefore, what in AI research
is often appropriate, specifying the task of the computer program, is for
present purposes not necessary.
A final question is what logic has to do with these issues. That logic is at
least a possible candidate for analysing formal aspects of legal reasoning
will not come as a surprise, since it is the very essence of logic to systematize
formal patterns in reasoning. Furthermore, although humans somehow dur-
ing their lifetime acquire the ability to reason according to certain patterns
(which is why lawyers can do without a course in formal logic), computers
do not yet grow up and therefore the ability to follow certain patterns
of reasoning has to be programmed explicitly. Finally, logic is an obvious
candidate for modelling the separation of knowledge and ways of using
it, because in logic this separation is total in the form of premises in some
formal language on the one hand, and an inferential apparatus on the other.
In conclusion, logic is, at least at first sight, very relevant for 'practical' AI.
In spite of this, both in general AI research and in AI and law the rele-
vance of logic has been questioned. Since the arguments require a detailed
discussion, an entire section of this chapter is devoted to the debate on the
significance of logic for AI, and even the entire next chapter is about the
relevance of logic for AI and law. First, however, more will be said about
the focus of research.
moral opinions, political goals and the like, or even upon the interests of
one's client in a lawsuit. It is generally acknowledged that there is a need
for legal knowledge-based systems which can give insight into the amount
of disagreement, but some claim that for such systems logic-based methods
will not be of any use, since logic would require consistency (cf. Berman
& Hafner, 1987, p. 31 and also Birnbaum, 1991, pp. 63-4). Another major
objective of the present study is to show that this is not the case: that
reasoning with inconsistent information also admits of a logical analysis, at
least if the results of the logical study of nonmonotonic reasoning are taken
into account.
As a final objection to the relevance of logic for legal reasoning it is
sometimes said that most patterns of legal reasoning are not deductive but,
instead, analogical or inductive (Perelman, 1976, p. 17; Rissland, 1988, p.
48; Rissland & Ashley, 1989, p. 214); and since logical reasoning is deductive
reasoning, so it is said, a logic-based system would exclude these patterns
of reasoning. Now, although analogical and inductive reasoning will not be
a major subject of this book, this criticism allows me to state one of my
main points of departure, which is that logic should not be seen as a model
of, but rather as a tool in legal reasoning. Thus other tools, like inductive
or analogical reasoning, are not excluded from the model and, moreover,
on various occasions it will become apparent that logic can even be a very
useful tool in these other kinds of reasoning.
In summary, the aim of the present investigations is to give a logical
analysis of two aspects of legal reasoning which are sometimes held to escape
such an analysis: reasoning with defeasible information and reasoning with
inconsistent information. The main hypothesis is that recent developments
in the logical study of common-sense reasoning have made such an analysis
possible, while a secondary hypothesis is that the analysis will, together
with the instrumental view on the role of logic in reasoning, also help in
clarifying some noninferential aspects of legal reasoning. Finally, as already
indicated, the relevance of the present study may be expected to exceed
the legal domain, since I shall focus on issues which have also appeared in
the general debate on the role of logic in AI.
As was said in the previous sections, not only in the legal domain, but also
in AI research in general the usefulness of logic is often put into question,
often on similar grounds.¹ The origins of the controversy on logic can be
¹This section only claims soundness, not originality: I have profited much from discussions in Hayes (1977), Israel (1985), Moore (1982) and Nilsson (1991).
traced back to research in the second half of the sixties on the development
of general purpose theorem provers for first-order predicate logic. Research
on this topic became popular after Robinson's (1965) resolution principle,
a rule of inference which is complete for first-order logic and which for
a useful fragment of first-order logic allows the development of efficient
theorem provers. Many researchers in AI thought that the problem of
artificial reasoning could be solved by developing efficient theorem provers
for full first-order logic; what would then remain to be done was formalizing
domain knowledge declaratively in first-order formulas. This is known as
the 'general problem solver' paradigm. However, this approach soon turned
out to be unworkable: in realistic applications the search space for these
theorem provers rapidly became too large and logic offered no way of
telling the system which deductions to choose from the many possible
ones, since problem solving strategies like forward or backward chaining,
or domain dependent heuristics cannot be expressed in a logical language
meant for formalizing knowledge about the world. This debate is known as
the declarativist/proceduralist controversy. 'Proceduralists' held the view
that it was impossible to strictly separate knowledge about the domain
and knowledge about the control of the reasoning process. They argued
that knowledge should not be represented declaratively but procedurally: a
knowledge engineer should not only be able to represent domain knowledge
but also to express of a particular piece of knowledge how it should be used
in the reasoning process.
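Robinson's resolution principle itself is easy to illustrate at the propositional level. The following sketch is my own illustration (the clause encoding and function names are not from this book): clauses are sets of literals, and a clause set is saturated under resolution until either the empty clause appears (unsatisfiability) or nothing new can be derived.

```python
# A minimal propositional sketch of the resolution rule: from two clauses
# containing a complementary pair of literals, infer the resolvent that
# combines the remaining literals. Deriving the empty clause from a
# clause set shows that the set is unsatisfiable.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals)."""
    return [frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2]

def unsatisfiable(clauses):
    """Saturate under resolution; unsatisfiable iff the empty clause appears."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = {r for c1 in clauses for c2 in clauses for r in resolve(c1, c2)}
        if frozenset() in new:
            return True
        if new <= clauses:
            return False
        clauses |= new

# p, p -> q (i.e. ~p v q), and ~q together are contradictory:
print(unsatisfiable([{"p"}, {"~p", "q"}, {"~q"}]))  # True
```

Refutation-style theorem provers use exactly this idea: to show that a formula follows from the premises, add its negation and derive the empty clause. The search-space problem discussed above shows up here as the rapidly growing set of resolvents.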
Nowadays the criticism that early theorem proving research neglected
the need for expressing control information is generally held to be justified
(see e.g. Moore, 1982, p. 431). As a consequence, ways have been developed
to extend logic-based systems with means to control the deduction process,
for example, by designing metalevel architectures in which knowledge about
the way to solve a problem can be expressed declaratively at a metalevel.
Moreover, the strict view on the separation between declarative and pro-
cedural formalisms has been weakened, as will be discussed in the next
subsection.
Apart from the need for ways to express control information, a second
conclusion from the debate on control issues is that the sharp dichotomy
declarative/procedural knowledge representation languages is not a correct
one (Hayes, 1977). To explain this, more needs to be said about what a
logic is.
In a logical system three components can be distinguished. First there
is a formally specified language; then there is an interpretation of this
language: its formal semantics, which for each well-formed formula specifies
what it means that it is true; and, finally, there is an inferential apparatus
defined over the language; normally this apparatus is intended to justify
only inferences which are valid according to the semantics: that is, only
those formulas should be derivable which according to the semantics of the
language must be true if the premises are true. If this is guaranteed, the
inferential system is said to be sound; if, in addition, all such formulas are
derivable, then the system is complete.
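These three components, and the soundness requirement connecting the last two, can be made concrete in a small propositional sketch of my own (a hypothetical illustration, not taken from this book): the semantics is given by enumerating valuations, the inferential apparatus by closing the premises under modus ponens, and soundness means that everything derivable is also semantically entailed.

```python
from itertools import product

# A tiny propositional fragment: atoms, facts, and implications (a, b)
# read as a -> b. The semantics enumerates valuations; the inferential
# apparatus closes the facts under modus ponens.

def entails(atoms, facts, rules, goal):
    """Semantic side: goal holds in every valuation satisfying the premises."""
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        premises_hold = (all(v[f] for f in facts) and
                         all((not v[a]) or v[b] for a, b in rules))
        if premises_hold and not v[goal]:
            return False
    return True

def derive(facts, rules):
    """Syntactic side: close the facts under modus ponens."""
    known, changed = set(facts), True
    while changed:
        changed = False
        for a, b in rules:
            if a in known and b not in known:
                known.add(b)
                changed = True
    return known

atoms, facts, rules = ["p", "q", "r"], ["p"], [("p", "q"), ("q", "r")]
derived = derive(facts, rules)
print(sorted(derived))  # ['p', 'q', 'r']
# Soundness in this instance: everything derivable is semantically entailed.
print(all(entails(atoms, facts, rules, g) for g in derived))  # True
```

Completeness would be the converse check, that every entailed atom is also derivable; for this Horn-style fragment, closure under modus ponens happens to achieve that as well.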
Now, according to Hayes (1977) the proceduralists had a misconception
about what logic is. Logic is not a programming system but a formal system
which, instead of making inferences only defines what a valid inference is;
an AI system, on the other hand, performs inferences. Despite the fact
that a knowledge representation language of an AI system can provide
means to express procedural knowledge, as a knowledge representation
language it also has logical aspects: the knowledge which it represents
is knowledge about the world, which can therefore be true or false; and it
is logical semantics which specifies what it means that an expression is true
or false; moreover, in doing so it gives criteria for determining the validity
of inferences, thus providing a standard for characterizing the inferential
behaviour of a system. In addition to these logical aspects, knowledge
representation languages also have procedural aspects: for example, certain
special tasks, like doing calculations on numbers or dates, or inspecting
certain tables, might be performed by procedures, or there can be ways
of indexing knowledge for ease of retrieval. The important point is that
even if a representation language has some of these procedural features,
still questions can be asked like 'is it based on propositional or on predicate
logic?', 'does it only use universal or also existential quantifiers?' and so
on. In this respect it is also important to note that logic does not enforce a
particular syntax. For instance, it is perfectly possible to use a semantic net
or a frame as an alternative syntax for the language of first-order predicate
logic or some other logical system. The important role of logic is, rather
than to determine a specific notation, to specify the meaning of a notation: a
notational system which does not have a specified meaning is of little use in
representing knowledge, since there is no way of determining what it means
to say that a certain represented piece of knowledge is true, and therefore
the behaviour of the system cannot be criticised. This remark implies that
if, for example, semantic nets or frames are preferred over logical formulas
because of their access and retrieval features, this is not a choice for a
different semantics, but a choice for different procedural features.
In sum, it is impossible to make a sharp distinction between proce-
dural and declarative or even between logical and nonlogical knowledge
representation languages: knowledge representation formalisms have both
procedural and declarative aspects, and the importance of logic lies in its
ability to analyze the declarative aspects. Of course, some formalisms make
it easier to abstract their declarative parts than other ones and therefore
the distinction procedural/declarative is still valuable in being a matter of
degree. Now knowledge-based systems are normally meant to model general
ways of solving problems rather than to solve a particular problem, for
which reason their knowledge will not be tailored to deal with one particular
kind of problem; therefore knowledge-based systems will normally be more
to the declarative end of the scale, and this ensures that a logical analysis
of such systems will not be so abstract as to be of no value.
So far the debate has mainly been about control issues. However, the
criticism of early theorem proving research that it neglected the need for
expressing control knowledge rapidly grew into a general distrust of any
use of logic in AI. Well-known is the discussion between John McCarthy
and Marvin Minsky about whether AI should be 'strong' or 'weak' (cf.
Kolata, 1982; Israel, 1985). According to Minsky, who adopts the 'strong
AI' view, the best way to make computer systems intelligent is to let them
reason in the same way as the human mind does, and in his opinion this
is almost certainly not with logic. Among other things, Minsky points at
the defeasible nature of common-sense knowledge, that is, at the many
exceptions which a rule can have in every-day life. As noted in Section 1.2,
legal scholars have said the same of legal information. Minsky illustrates his
point with an example which has become classical. Consider the common-
sense rule 'Birds fly': what if some bird Tweety is an ostrich or a penguin?
These exceptions could be attached to the rule, but what then if Tweety
is dead or has its feet set in concrete? Minsky's point is that the list of
exceptions to such common-sense rules is open-ended, that it can never
be completely specified once and for all. McCarthy, on the other hand,
employing the 'weak AI' view, says that it does not matter whether an
AI program works in a psychologically realistic way, since AI is about
intelligence that is artificial. To the problems with exceptions in common-
sense reasoning he responds with a new view on logic, which allows for
so-called nonmonotonic reasoning.
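Minsky's Tweety example can be sketched in a few lines of Python (my own illustration); the exception set below is deliberately incomplete, which is exactly his open-endedness point. What makes the inference nonmonotonic is that adding a premise can withdraw a conclusion:

```python
# The Tweety example as a defeasible rule: apply the default "birds fly"
# unless a recognized exception holds. No finite exception list settles
# the matter once and for all -- that is Minsky's point.

EXCEPTIONS = {"penguin", "ostrich", "dead", "feet in concrete"}

def flies(properties):
    """Defeasibly conclude flight: a bird with no known exception flies."""
    return "bird" in properties and not (properties & EXCEPTIONS)

print(flies({"bird"}))              # True: the default applies
print(flies({"bird", "penguin"}))   # False: more information, fewer conclusions
```

The second call shows nonmonotonicity: enlarging the premise set {bird} to {bird, penguin} withdraws a conclusion, which classical consequence can never do.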
Many others have also criticized what is sometimes called the logicist
approach to AI. 'Logicism' is one of those terms which is used in many
different ways, but a good example of its use in criticism of logical methods
is provided by Birnbaum (1991, p. 59), who describes one of the features
of logicism as "( ... ) its emphasis on sound, deductive inference", for which
reason "logicists tend to ignore other kinds of reasoning", one of which is
"reasoning by analogy". Some, e.g. Israel (1985) and Nilsson (1991) have
warned against a confusion which sometimes underlies this criticism, viz.
thinking that using a logical language (i.e. a language with a logically
specified meaning) for representing domain knowledge would necessarily
imply a choice for modelling the entire reasoning process as "running a
sound theorem prover over such a language" (Israel, 1985, p. 432). Although
it is true that reasoning is more than deduction, it should be realized that
this observation is perfectly compatible with using logic-based languages
to represent knowledge. Among other things, the use of a logical language
does not make it impossible to formally define over the language certain
reasonable but deductively unsound modes of reasoning like defeasible,
analogical or inductive reasoning. For example, a careful analysis of many
AI algorithms for analogical or inductive reasoning will reveal that these
algorithms use logic as a tool in that they operate on the form of expres-
sions instead of on their content; later chapters of this book contain some
examples. Likewise it is not the case that logic is only useful in case of
consistent knowledge bases: on the contrary, the observation that a certain
set of premises is inconsistent can be used to let the system undertake some
action: for instance, to let it perform some kind of belief revision or to let
it prefer some consistent part of the knowledge, with or without calling a
higher level of knowledge with which a choice might be made. The role of
logic is clear: it defines when a set of premises is inconsistent and what
the consequences are of possible revisions or preferences. In conclusion,
both in case of nondeductive types of reasoning and in case of inconsistent
knowledge logic can be useful: what is essential is that logic should be
regarded as a tool in a larger framework, which is called reasoning; using
logic to represent domain knowledge does not commit at all to a particular
way of modelling reasoning: in particular, it does not commit to regarding
reasoning as no more than mechanically deriving logical consequences from
a set of logical formulas.
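The point that logic remains useful on inconsistent premises can be sketched as follows (a hypothetical illustration of mine; the priority policy is just one of many possible preference mechanisms): logic supplies the inconsistency test, and the system responds by preferring part of the knowledge.

```python
# Sketch: premises are signed literals ("p", "not p"); a set is
# inconsistent when it contains a literal together with its negation.
# On inconsistency the system need not halt: here it prefers, per
# conflicting pair, the premise with the (hypothetical) higher priority.

def neg(lit):
    return lit[4:] if lit.startswith("not ") else "not " + lit

def consistent(literals):
    """Logic's contribution: the test for inconsistency."""
    return not any(neg(l) in literals for l in literals)

def preferred(premises):
    """One simple revision policy: premises maps literal -> priority;
    drop the lower-priority member of each conflicting pair."""
    kept = dict(premises)
    for l in list(kept):
        if l in kept and neg(l) in kept:
            loser = l if kept[l] < kept[neg(l)] else neg(l)
            del kept[loser]
    return set(kept)

premises = {"p": 2, "not p": 1, "q": 1}
print(consistent(set(premises)))    # False: p conflicts with not p
print(sorted(preferred(premises)))  # ['p', 'q']: the preferred consistent part
```

Here logic defines what the conflict is and what each candidate revision leaves derivable; choosing which premise to sacrifice is extra-logical knowledge, possibly stated at a higher level.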
On the basis of the above discussions this section formulates some points
of departure for the coming investigations. The first is that the kind of
AI research which will be investigated is closer to the 'practical' side of
the scale of aims than to the 'ambitious' side in that the focus will be on
logical aspects of applying knowledge-based technology to law. A second
point of departure is that logic should be regarded as a tool in, rather than
as a model of reasoning. Thus other kinds of reasoning are not only made
possible, but can also be better understood, precisely because logic is one
of their components. Furthermore, in line with the aims of 'practical' AI
research the analysis will be prescriptive rather than descriptive: it will be
about what is sound, rational or principled reasoning rather than about
how people actually reason. For this reason I regard remarks like 'mistakes
are tolerable since people also sometimes make mistakes' as beside the
point (moreover, in my opinion it is not tolerable that computers make the
same mistakes as humans do, as long as computers do so few things well
which humans do well). Finally, the investigations will be at a formal level
of analysis. I do not intend to develop directly implementable algorithms;
rather, my aim is to develop the formal foundations which can be used in
a critical analysis of implemented systems. This leaves room for the use of
an implemented syntax other than logical formulas, and also for systems
with parts of their knowledge represented in a procedural way.
With respect to the need for formal foundations, two final remarks must
be made. The first is about the fact that the articles on which the previous
section was based mainly stress the need for formal semantics; in fact, the
meaning of 'semantics' should be extended a little. In recent years, formal
theories have been developed in which logical languages are embedded
in larger frameworks, which are also specified in a formal way. Examples
related to the subject of this book are Poole (1988)'s abductive framework
for default reasoning, belief revision research of e.g. Gärdenfors (1988) and
research on metalevel architectures, cf. Genesereth & Nilsson (1988, Ch.
10), to name but a few. And also the main contribution of this book will not
be a formal semantics of some logical language, but a larger framework, in
which a logical language with a formal semantics is only a component. Such
broader theories can also serve as formal foundations for AI research, in
specifying the meaning (in a broader sense) of programs, and in determining
the soundness of the system in doing some task still closely related to logical
proof, like comparing arguments or revising beliefs. In conclusion, while
still adhering to the general conclusion of the previous section, the range
of useful formal theories should be broadened.
CHAPTER 2
THE ROLE OF LOGIC IN LEGAL REASONING
The aim of this chapter is to illustrate the point of departure of this book
that logic should not be regarded as a model of, but as a tool in legal
reasoning. I shall do so mainly by discussing some objections to logical
methods which can be found in legal philosophy and AI-and-law. Some of
them are an application to the legal domain of more general criticism of
logic discussed in Chapter 1. Section 2.1 briefly discusses some basic misun-
derstandings about the nature of logic, after which Section 2.2 examines in
more detail another misunderstanding, the opinion that applying logic to
legal reasoning would be the same as regarding law as a coherent system of
rules which can somehow be discovered and formalized. Then, Section 2.3
deals with the criticism that logical methods can only model a small part of
legal reasoning since most forms of reasoning used in law are nondeductive.
Finally, Section 2.4 relates the mainly philosophical discussions of the first
three sections to an issue in AI-and-law research, the distinction between
'rule-based' and 'case-based' reasoning.
I shall not discuss purely fundamental issues concerning the application
of logic to law, such as the question whether norms can have truth values
(Von Wright, 1983; Alchourron & Bulygin, 1984; Susskind, 1987; Soete-
man, 1989; Herrestad, 1990). In principle this issue is certainly relevant to
our subject, since in Chapter 1 I argued that, although a direct use of logic
in AI occurs mainly at the proof-theoretical level, every proof-theoretic
account of logical consequence should be based on a semantical theory.
However, for practical reasons I confine myself to very briefly indicating
my position. The problem is clearly stated by Jørgensen (1938) in what has
been named the 'Jørgensen dilemma'. On the one hand, it seems obvious
that only descriptive statements can be true or false and therefore, since
norms do not describe but prescribe, norms cannot be true or false. On the
other hand, it seems also obvious that the relation between the premises and
the conclusion of a legal or moral argument is not arbitrary. The dilemma
is that, assuming that norms can indeed not be true or false, it has either
to be defended that norms have no logical relations, or that a logic of
norms cannot be based on truth. One approach to this dilemma is trying
to avoid it by arguing that a logic of norms can be based on truth, at
least if the concept of truth is extended to cover not only the real world
but also 'ideal worlds'. This is the standard approach to deontic logic (cf.
Føllesdal & Hilpinen, 1970). Others, on the other hand, e.g. Alchourron
and Bulygin (1984), have tried to base a logic of norms on something else
than truth, and some have even chosen for the first horn of the dilemma:
Von Wright (1983, p. 132), for example, argues that the relations between
norms are not a matter of logic, but of principles of rational legislation.
Although this book is not about deontic logic, I shall informally adhere
to the view that if the concept of truth is extended to ideal and almost ideal
worlds, norms can have truth values. Of course, this might be the wrong
choice; it might turn out that logics of norms based on something else than
truth are capable of giving a superior analysis of the formal aspects of
legal reasoning. However, in my opinion non truth-based logics have not
yet reached that level of superiority (see also Herrestad, 1990, p. 36) and
therefore I shall in this book assume that norms can be true or false in the
sense just explained.
In the next two sections I discuss four misunderstandings about logic which
can sometimes be found in the literature. Since the fourth requires more
than just a few paragraphs, it will be dedicated a separate section.
where v stands for 'the object is a vehicle' and ¬p stands for 'the object is
not allowed to enter the park' (the formal symbols used in this book are
explained in the Appendix). Now, if there are no further rules about what
objects are vehicles, the concept 'vehicle' is not defined at all, and even
when rules are added like
(2) c → v 'a car is a vehicle'
(3) w → v 'a wheelchair is a vehicle'
then 'vehicle' is defined only partially, since from the rules (1-3) nothing
follows about whether, for instance, a skateboard or a war memorial tank is
a vehicle. The logical formalization does not imply that (2) and (3) are the
only possible instantiations of the concept 'vehicle'. This becomes different
only if the following formula is added.
(4) (¬c ∧ ¬w) → ¬v
Only this formula would make 'vehicle' a completely defined concept. How-
ever, whether a formula like (4) is true depends completely on how the
norm is formulated by the legislator or interpreted by the judiciary; it does
not become true because of the logical formalization. This shows that a
formalized concept can be partially undefined, that a 'logical' world can be
open.
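The openness of the formalized concept can even be checked mechanically by enumerating models, as in this sketch of mine (using the atoms c, w and v from the text; the encoding is my own illustration, not from the book):

```python
from itertools import product

# Enumerate all valuations of c ('car'), w ('wheelchair'), v ('vehicle')
# satisfying a rule set. Under (2) c -> v and (3) w -> v alone, models
# with c and w both false disagree about v: 'vehicle' is only partially
# defined. Adding (4) (not-c and not-w) -> not-v closes the concept.

ATOMS = ["c", "w", "v"]

def models(rules):
    out = []
    for vals in product([True, False], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vals))
        if all(rule(m) for rule in rules):
            out.append(m)
    return out

r2 = lambda m: (not m["c"]) or m["v"]                 # c -> v
r3 = lambda m: (not m["w"]) or m["v"]                 # w -> v
r4 = lambda m: (m["c"] or m["w"]) or not m["v"]       # (not-c & not-w) -> not-v

open_vals = {m["v"] for m in models([r2, r3]) if not m["c"] and not m["w"]}
print(sorted(open_vals))    # [False, True]: the 'logical' world is open

closed_vals = {m["v"] for m in models([r2, r3, r4]) if not m["c"] and not m["w"]}
print(sorted(closed_vals))  # [False]: only (4) makes 'vehicle' fully defined
```

That the first query returns both truth values is precisely the partial-definition point: the formalization of (1-3) leaves a skateboard's vehicle status undetermined rather than settled.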
The second argument against the use of logic in legal reasoning is based
on the observation that, because of the ambiguity and vagueness of natural
language, formalizing the law is impossible without interpreting it, for which
reason a formalization carries more information than its natural-language
counterpart, which is always open to different interpretations. According to
Leith (1986, p. 546) this goes against the observation that in law it is the
judiciary who interprets a norm, when applying it to the facts of a case;
therefore an interpretation beforehand by a knowledge-engineer would be
premature.
Of course, the observation that each formalization is an interpretation
of the law is correct, and almost all who use logic to formalize a piece
of legislation are aware of this. For this reason some authors, e.g. Bench-
Capon & Sergot (1985) and Allen & Saxon (1991), discuss the possibility
of letting a legal knowledge-based system contain several alternative syntactic
formalizations. However, if it is said that by formalizing a statute no room
is left at all for the judiciary to interpret it, a distinction is ignored between
two levels of interpretation. The first is the syntactical level, on which prob-
lems occur with respect to the correct logical form of an expression, like:
does a norm saying that christian schools are entitled to receive financial
support exclude other religious schools from financial support? In other
words: is the norm a conditional ('if') or a biconditional ('if and only if')?
A logical formalization will have to choose between the two possibilities
and will therefore interpret the norm. A variant of this situation is when it
can be debated whether a natural-language expression occurring in several
norms has the same or a different meaning in each norm. This must also
be decided beforehand, while formalizing the norms. In sum, with respect
to the syntactical level of interpretation Leith's observation is correct.
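The choice between the two readings can be made concrete in a small sketch (the predicate names here are hypothetical illustrations, not part of any formalization discussed in this book): the 'if' reading leaves non-christian schools undecided, while the 'iff' reading excludes them.

```python
# A toy sketch of the two readings of the norm "christian schools are
# entitled to financial support"; the predicate names are hypothetical.

def entitled_if(is_christian_school):
    """Conditional reading ('if'): christian -> entitled.
    Leaves non-christian schools undecided (returns None)."""
    return True if is_christian_school else None

def entitled_iff(is_christian_school):
    """Biconditional reading ('if and only if'): christian <-> entitled.
    Excludes non-christian schools."""
    return is_christian_school

# A non-christian religious school: the 'if' reading leaves the question
# open, the 'iff' reading denies support.
print(entitled_if(False))   # None
print(entitled_iff(False))  # False
```

A knowledge engineer who picks either function has already interpreted the norm; the difference only surfaces for cases outside the antecedent.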
18 CHAPTER 2
The view on the legal reality which is the focus of the present criticism
is almost never explicitly maintained, but very often attacked: in the
literature it is referred to in various ways, like 'mechanical jurisprudence',
'conceptualism', 'formalism', 'legalism' and, in one usage of the term, 'rule-based
reasoning'. In the rest of this book I shall refer to it as the naive
deductivist view on legal reasoning. It is the old-fashioned view that the
law is a consistent and complete body of rules which can somehow be
discovered. In this view, all there is to legal reasoning is finding the valid
rules and applying them to the facts in a deductive manner. This way
of looking at the law should be carefully distinguished from the axiomatic
view, described in Section 2.1.3, which is only a formal view on how to solve
problems and implies nothing about the status of the axioms; essentially,
the naive deductivist view on law is the axiomatic view on legal reasoning
supplemented with the belief that the truth or validity of the
premises can easily be established.
Perelman & Olbrechts-Tyteca (1969, pp. 197-8) give a nice description
of the naive deductivist view, not restricted to reasoning in law, when dis-
cussing strategies to deal with incompatibilities (note that in this fragment
the terms 'logic' and 'logical' are used in an informal sense, as only referring
to the described strategy and not to formal logic):
The first (strategy), which may be called the logical, is that in which the primary
concern is to resolve beforehand all the difficulties and problems which can arise
in the most varied situations, which one tries to imagine, by applying the rules,
laws, and norms one is accepting. (...) The logical approach assumes that one can
clarify sufficiently the ideas one uses, make sufficiently clear the rules one invokes,
so that practical problems can be resolved without difficulty by the simple process
of deduction. This implies moreover that the unforeseen has been eliminated, that
the future has been mastered, that all problems have become technically soluble.
Nobody doubts that as a model of the legal reality the naive deductivist
view is much too simplistic. A first source of complexity in legal reasoning
is the fact that legislation is often the product of a heavy socio-political
battle: therefore it is often ambiguous and sometimes even inconsistent.
Another complicating feature of the legal domain is that the world is so
complex that the legislator cannot foresee everything. This has various
consequences: a first one is that legislation and established precedent often
simply do not address a certain problem, for which reason other grounds for
adecision have to be found; another one is that legislation often contains
general terms, of which the applicability to the facts of a case can lead
to substantial disagreement; a final consequence is that, even if a rule is
itself clear and unambiguous, unforeseen circumstances might arise in which
applying the norm is regarded as inappropriate.
These complicating phenomena have at least two consequences for a
realistic model of legal reasoning. The first is that, since in dealing with
incomplete, inconsistent or ambiguous rules and precedents social and po-
litical factors inevitably play a role, ignoring these factors in a model
of the legal reality would often be a severe oversimplification. Another
consequence is that, even if it could be defended that in theory there is
a right answer to every legal problem, few believe that in practice this
answer can be obtained by just applying some undisputed legal axioms
to the facts of the case. Therefore a more realistic picture of the legal
domain is that, even if the facts of a case are agreed upon, there can be
disagreement about the interpretation of a legal rule or concept, about the
validity of a rule, about whether there is an exception to a rule, based
on some legal principle or socio-political demands, about the binding force
of a precedent, etcetera. As a result it is sometimes said that in law it is
possible to argue for almost anything: in practice legal knowledge cannot be
'discovered' by some objective method but must be argued for in a 'battle of
persuasion'. Of course, it can be debated to which degree the legal domain
contains disagreement; at this point more and less extreme positions have
been defended. Nevertheless, I believe that the just sketched 'realistic' view
contains a considerable degree of truth and therefore the coming logical
investigations will bear this view in mind.
context and which, its proponents have argued, is not open to negotiation. [my
italics, HP]
Finally, on p. 550 he says:
Legal logicians are certainly some of those who are most adamant that justice is
best found through logic - using logical reasoning, proceeding step by step through
the legal norms. This is a view we reject; (...) judges in our view should be held
responsible for their adjudications. They should apply the law, not in a formal
manner excluding the social factors and context, but as the legislators and society
would wish.
This criticism is quite different from the last argument of the previous sec-
tion: that argument only amounted to the formal criticism that lawyers do
not always reason deductively, but often in other modes; Leith's criticism,
however, is not a matter of form, but of content: he states that logicians have
"a false epistemology" with respect to law (1986, p. 552): according to Leith
using logical methods for representing knowledge implies a commitment to
the philosophical doctrine that a complete and consistent body of legal
rules can somehow be discovered as being 'the law', largely by looking
at legislation and established precedent, and that deduction from the thus
discovered body of rules is all that is necessary in doing justice. Particularly
Leith's above remark on what a logical sentence is shows that according to
him to formalize a rule is to sanction it as a valid rule of law.
Because of these effects logic would force judges to make decisions
they do not want to make; it would exclude socio-political considerations
from the legal process and it would deprive a judge of his or her moral
responsibility for a decision: s/he would often be forced to say: 'I would like
to decide otherwise, but, unfortunately, logic dictates this decision'. The
last of the above quotations of Leith (1986) provides an excellent example
of this criticism, and also in other places the view that it is logic and not
the premises which can be held responsible for an undesirable conclusion
can be found: Susskind (1987, p. 197) cites a British judge who did not like
the outcome of applying a certain statute and therefore said:
In my opinion, the rules of formal logic must not be applied to the Act with too
great strictness
And Soeteman (1989, p. 230, fn. 18) points at a similar view held by
Enschede (1983, pp. 53-4).
Toulmin
Leith is not the only one in ascribing the naive deductivist view to logicians:
the same criticism can be found in the work of the philosopher Stephen
Toulmin, whose ideas have in recent years attracted the attention of AI-and-law
researchers (e.g. of Gordon, 1994, 1995 and Freeman & Farley, 1996).
In Toulmin's case a more detailed analysis is necessary.
                    so
    Data ----------------------> Qualifier, Conclusion
              |                        |
            since                    unless
              |                        |
           Warrant                 Exception
              |
        on account of
              |
           Backing
at all that a solution to a legal problem should logically follow from sound
and evident first principles; logicians only require, if they require anything
at all, that the conclusion follows from, in Toulmin's terms, the data and
some appropriately selected warrant, whether that warrant is sufficiently
backed or not. The latter question falls outside the scope of logic; it is an
epistemological question. Again in Toulmin's terms, but contrary to what
he says: logicians study formally valid arguments, not analytic ones.
Leith (e.g. 1990, p. 63) and Toulmin are also aware of the limited, formal
scope of logic; their main criticism seems to be that logicians themselves
do not realize this. However, as an empirical statement this is simply false;
among the many counterexamples is the group which is the most severely
criticised by Leith, the logic-programming group of Imperial College in
London (cf. Bench-Capon & Sergot, 1985; Sergot, 1990); and this book
aims to provide yet another counterexample.
A final reason why I have discussed the criticism of Leith and Toulmin
in detail is that they often express their criticism in such a way that
readers with little knowledge of logic might easily be led to believe that it is
logic itself which implies the naive deductivist view. The above quoted
remark of Leith (1986, p. 549) on logical sentences provides an example,
and another critic of logic, Chaim Perelman, who is himself well aware of
the limited scope of logic, sometimes also expresses his views in a confusing
way, e.g. in Perelman & Olbrechts-Tyteca (1969, pp. 197-8) as cited above
and in Perelman (1976, p. 6):
Parce qu'il est presque toujours controversé, le raisonnement juridique, par opposition
au raisonnement déductif purement formel, ne pourra que très rarement être
considéré comme correct ou incorrect, d'une façon, pour ainsi dire, impersonnelle
(Because it is almost always challenged, a legal argument can, as opposed to a
purely formal deductive argument, only seldom be considered as objectively correct
or incorrect, in a, so to speak, impersonal way).
Although Leith, Toulmin and others are wrong in ascribing the naive
deductivist view to logicians, AI-and-law researchers can certainly learn from
their remarks. A first merit of the critics is that they have pointed at the
limited scope of formal methods in modelling legal reasoning, by stressing
the fact that inevitably room has to be left for socio-political or moral
decisions of content. However, it should be realized that this correct criticism
applies not only to logic, but to all attempts to computerize legal reasoning,
since, as already said in Chapter 1, AI has to restrict itself to aspects of
reasoning which can be formalized.
A second merit of the criticism is that it shows that even the formal
part of legal reasoning is more complicated than the axiomatic view on
legal reasoning, described in Section 2.1.3. As already noted, this view is
essentially the naive deductivist view without its epistemological ambitions.
In fact, the critics of logic support the view of this book that logic should
not be regarded as a model of but as a tool in legal reasoning. In short,
we can say that the criticism makes us aware that there is a distinction
between legislation and law. By this I do not mean that there are also
other important sources of legal knowledge, like precedents; this goes, of
course, without saying; what I mean is that legal rules do not express 'the
law' but are "objects of discourse" (Leith, 1986, p. 548), which can be
applied, but which can also be put into question. According to Leith (1990,
p. 103) rules are "flexible objects which cannot be used in the deductive,
axiomatic manner ...". Similarly, Gardner (1987, p. 3) describes reasoning
of lawyers as "rule-guided" instead of "rule-governed".
This has some important consequences for the way in which logic should
be used in formal models of legal reasoning or in knowledge-based systems.
At the very least, since the legal domain is full of disagreement, there should
be knowledge-based systems which are able to reason with an inconsistent
knowledge base in a nontrivial way - that is, systems which are able to
generate alternative consistent arguments for or against a certain conclusion,
just as lawyers can do in practice. Furthermore, if a program should do more
with rules than only apply them, it should be able to reason about rules,
and therefore to express knowledge about them, for example, in the form of
standards to compare arguments or rules. In other words, there should be
systems which are able to contain metalevel knowledge, which points at the
need for modelling some kind of metalevel reasoning. Both points will be
discussed in much more detail in the next chapter. Finally, as writers such
as Toulmin, Perelman, Ashley and Rissland have rightly observed, in legal
problem solving induction and analogy play an important role. Therefore,
logical methods should be embedded in models of legal reasoning which
also allow for these other modes of reasoning. More on how this should be
done will be said in the next section.
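The first of these requirements, generating rival consistent arguments from an inconsistent knowledge base, can be sketched in a naive way (this is only an illustration, not any particular system from the literature): from a set of literals that contains a conflict, a program can enumerate the maximal consistent subsets, each of which supports one internally consistent position.

```python
from itertools import combinations

# A naive sketch: a knowledge base of literals, 'p' or its negation '-p',
# may be inconsistent; each maximal consistent subset yields one
# internally consistent position for or against a conclusion.

def consistent(literals):
    """A set of literals is consistent if it contains no pair p, -p."""
    return not any(lit.startswith('-') and lit[1:] in literals
                   for lit in literals)

def maximal_consistent_subsets(kb):
    """Enumerate the subset-maximal consistent subsets of kb."""
    kb = list(kb)
    result = []
    for size in range(len(kb), 0, -1):  # largest subsets first
        for subset in combinations(kb, size):
            s = set(subset)
            if consistent(s) and not any(s < r for r in result):
                result.append(s)
    return result

# 'liable' and '-liable' conflict: two rival arguments remain.
kb = {'contract_breached', 'liable', '-liable'}
for position in maximal_consistent_subsets(kb):
    print(sorted(position))
```

The two surviving subsets correspond to the arguments for and against liability; comparing them would require the metalevel standards mentioned above.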
The criticism of this section is not only relevant for the question in
which way logic should be used, but also for the question which system of
logic is the best to use. There does not exist one unique, universally appli-
cable logical system: different logical systems formalize different aspects of
reasoning, and in various levels of detail; moreover, logicians often disagree
on the best way to analyze a certain aspect of reasoning, for which reason
often rival theories are proposed. Now, some of the criticism is justified
as objections to some particular existing logical systems. For example,
Toulmin (1958, p. 142) rightly blames the logicians of his days for having
neglected the defeasible nature of legal reasoning, and in the previous
chapter criticism of some AI researchers was discussed that standard logic
cannot analyze the way people obtain useful information from inconsistent
knowledge. However, objections to particular logical systems should not
be confused with a rejection of logic as such; a logical analysis of both
defeasible reasoning and drawing nontrivial conclusions from inconsistent
knowledge is very well possible, as has already been demonstrated by others
and as will become further apparent in the next chapters.
... (in mathematics) one does not justify a conclusion by citing cases but rather
through the methods of logical inference.
According to these quotations, the difference with justification by deductive
reasoning would be that justifying a decision by analogy does not require
that the new case exactly matches all features of the precedent, but only
that the cases are sufficiently similar. These remarks contain two elements:
the first is that analogy is held to be a kind of justification, and the second
is that it is nondeductive.
What this section purports to show is that the justifying force of an
analogy is entirely a matter of content, for which reason analogical reasoning
should not be regarded as a way of justifying a conclusion, but as a way
of suggesting new premises. Furthermore, it will be shown that as such,
analogical reasoning uses logic in a particular way. In the philosophical
and AI-literature both various types of analogy and various uses which can
be made of them are discussed. I shall concentrate on analogies which are
meant to justify a normative decision in a case. Such a supposed justification
consists of stating a similarity between the current case and a past case
on which it is supposed a certain normative conclusion may be based
(Sacksteder, 1974, p. 235). In stating such an analogy two main elements
can be distinguished: first it must be decided which aspects of the cases
should be compared, and then it has to be decided under which conditions
the cases are similar.
An interesting example, giving much insight into the nature of analogical
reasoning, can be found in Dutch civil law (Scholten, 1974, p. 60ff). This
example consists of comparing, instead of two cases, a case and an antecedent
of a statutory rule. The Dutch civil code (BW) contains a provision saying
that selling an apartment does not terminate an existing lease contract
(1612 BW). The case to be compared was donating a house instead of
selling it. Now, the way in which in Dutch law the analogy was justified was
by first observing that both selling and donating are instances of the more
general legal concept 'transferring property' and by then arguing that 1612
BW is based on the principle that transferring property does not terminate
an existing lease contract; since this rule was indeed accepted as being
the principle behind 1612 BW, both 1612 BW and the decision to let the
act of donating a house not terminate an existing lease contract could be
derived logically from the rule. This example shows that what is important
in an analogy is that the two cases which are matched are both instances
of a more general rule or principle from which the desired conclusion in
both cases can be derived (cf. also Sacksteder, 1974, p. 236; Bing, 1992,
pp. 105-6). If such a rule is already present in the knowledge base then, of
course, the system can apply this rule, which is simply logical inference; the
interesting cases are those in which the system has to construct such a rule
28 CHAPTER 2
I end this chapter with relating the first three sections to one of the primary
topics in AI-and-law research, the distinction between so-called 'rule-based'
and 'case-based reasoning'. Research on case-based reasoning (CBR) (e.g.
Ashley & Rissland, 1987; Rissland & Ashley, 1987; Rissland & Ashley, 1989;
Ashley, 1990; Skalak & Rissland, 1992) arose out of discontent with earlier,
what CBR-proponents have called 'rule-based' developments. In general,
the criticism is that rule-based systems are based on models of legal rea-
soning which have too much confidence in the possibility to solve legal
problems with general rules (Rissland, 1988, 1990). Although authors do
not always make explicit what they mean by the terms 'rule' and 'RBR',
mostly RBR is described in terms which indicate that the axiomatic view
is meant. See e.g. Rissland (1990, p. 1969, fn. 56):
(...) a typical rule-based approach to handling difficulties with the rule set is to
resolve conflicts by hand before encoding the rules in the program.
and Rissland (1990, p. 1967):
(...) the rule-based approach assumes that the set of rules has no inherent difficulties,
like ambiguities, gaps and conflicts.
Sometimes it is even equated with the naive deductivist view, e.g. by
Leith (1986, p. 546):
'Rule based' philosophers of law posit that we can view the law as a collection
of rules which seem, at least to me, to have 'an existence of their own'. It is not
unfair to see these philosophers as agents of a formal technical view of the law,
since they hold that justice is best arrived at by 'applying the rules' in a formal
and technical manner.
According to CBR-proponents assumptions like these are unrealistic: of-
ten legal disputes cannot be solved with the available rules and therefore
lawyers have to resort to reasoning with individual cases, whether prece-
dents or hypothetical examples. CBR-research concentrates on this aspect
of legal reasoning.
What is relevant for the present chapter is that the term 'rule-based' is
often equated with 'logic-based', for which reason the negative connotations
of the term 'RBR' carry over to the word 'logic'. However, by now it will
be clear that using logic does not imply at all that the domain knowledge is
seen as a consistent axiomatic system; it is perfectly possible to use logic as
a tool in other kinds of legal reasoning. Therefore I propose to use the term
'logic-based' also for systems which use logic as a tool. If used in that way,
the term also covers things like specifying the meaning of some knowledge
representation language(s), (which implies defining when a knowledge base
is inconsistent or when two arguments are contradictory), determining what
the consequences are of a particular revision of an inconsistent knowledge
THE ROLE OF LOGIC IN LEGAL REASONING 31
2.5. Summary
The purpose of this chapter was to determine the proper role of logic
in AI-and-law research. We have seen that using logic to formalize legal
knowledge does not commit to some extreme view on how to do justice,
viz. as applying legal rules in a strict way, without looking at the social or
moral consequences. Since logic has nothing to do with the status of the
premises but only with the form of an argument, it is always possible to
reject the premises if the conclusion is considered undesirable.
We have also seen, however, that realistic models of legal reasoning
should allow for more than just running a sound theorem prover over a set
of legal 'axioms' formulated in some logical language. The incompleteness,
uncertainty and inconsistency of much legal knowledge requires that logic
is used in a different way than in this axiomatic view on reasoning: drawing
logical inferences should not be seen as a model of, but as a tool in reasoning.
This view on the use of logic in AI and law has been illustrated by a
discussion of a noninferential kind of reasoning, analogical reasoning.
Furthermore, some reasons were briefly indicated why legal reasoning
requires logical tools which are different from the tools of standard logic,
which were mainly developed for being used in mathematical reasoning.
The next chapter will be devoted to a detailed analysis of some of these
nonstandard features of the legal domain: some of them call for the need to
draw nontrivial conclusions from inconsistent information, and other ones
require the possibility to draw conclusions which are defeasible.
CHAPTER 3
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
34 CHAPTER 3
Because of this, the present investigations are not only relevant to AI and
law, but also to research in knowledge representation in general.
In AI-and-law research the desirability of structural resemblance between
source and formalization is by no means undisputed: see, for example,
Sergot's objections, discussed in Bench-Capon & Coenen (1992). In this
respect it should be remarked that it is not my aim to argue in favour of
this methodology; my position is mainly that the value others attribute
to structural resemblance makes it worthwhile to investigate under which
conditions this kind of representation is possible. The main conclusion will
be that the discussion on this issue is not complete without mentioning
nonstandard logics, since if one wants to maintain the separation of rules
and exceptions, one has to accept that the reasoning behaviour of a system
becomes nonstandard, in particular nonmonotonic. Of course, I would not
go into this issue if I did not see any point at all in aiming for structural
resemblance, but in my opinion its desirability still has to prove itself in
practice; in fact, my investigations will even provide some ammunition
for opponents of this aim, as will be further explained in Chapter 10,
Section 10.1. In this section I shall, following a few terminological matters,
present some examples of separate rules and exceptions in legislation, after
which I show why they cause problems for standard reasoning techniques
if their separation is to be preserved. Then semiformally two nonstandard
methods are presented, which both provide solutions for these problems.
3.1.1. TERMINOLOGY
for which reason in the present context their differences do not matter.
The above implies that the two following situations are deviations from
structural resemblance: the situation in which one source unit is formalized in
several KB units; and the situation in which one KB unit contains concepts
from several source units, unless a source unit itself refers to other source units
(as in Example 3.1.3 below). Generally, only the second kind of situation is
held to cause problems with respect to validation and maintenance. Therefore,
the main problem discussed in this section and the rest of this book
is how to avoid the second situation when in legislation general rules and
exceptions are expressed separately. First some examples will be presented,
all taken from Dutch law.
3.1.2. EXAMPLES
I now list some examples of expressions and techniques which are typical
for legislation. They will serve as illustrations throughout this book: among
other things, they will in Section 5.1 illustrate a classification of kinds of
exceptions.
Example 3.1.1 Section 2 of the Dutch rent act (in Dutch 'Huurprijzenwet',
HPW for short) states that the act is not applicable to lease contracts
which by their nature concern a short-termed usage. Since 2 HPW is not
mentioned in the rest of the act, this section causes in fact all other sections
of the HPW to have an implicit exception. Note that 2 HPW does not
itself contradict another rule but only renders other rules under certain
conditions inapplicable.
Example 3.1.2 According to section 6:2-(2) of the Dutch civil code (BW)
a rule binding upon a relation between a creditor and a debtor by virtue
of, among other things, the law, does not apply to the extent that, in the
given circumstances, this would be unacceptable according to the principles
of reasonableness and equity. The role of this norm is similar to the one
of 2 HPW in Example 3.1.1: it causes all norms concerning creditor-debtor
relations to have an implicit exception.
Example 3.1.3 Section 4 HPW lays down certain necessary and sufficient
conditions for the possibility to change rents once a year. Section 30-(2)
HPW explicitly makes an exception to this section by stating that "contrary
to 4 HPW" rents included in a rent contract of before 1976 must remain
unchanged until three years after the enactment of the HPW.
Example 3.1.4 Section 287 of the Dutch code of criminal law (Sr) attaches
a maximum penalty of fifteen years of imprisonment to killing someone on
purpose, whereas section 154-(4) Sr imposes a maximum penalty of twelve
years upon killing someone in a life-and-death duel.
THE NEED FOR NEW LOGICAL TOOLS 37
Example 3.1.5 Section 25 of the Dutch traffic road act (WVW) forbids
any behaviour which can cause danger in traffic. This norm adds an implicit
exception to all obligations and permissions in Dutch traffic law.
Example 3.1.6 Section 1624 BW declares that if a contract has features
of both lease of business accommodation and another contract, and a norm
concerning the other contract type conflicts with a norm concerning lease
of business accommodation, the latter prevails. This is a very complex
example of implicit exceptions: every rule which is not about lease of
business accommodation has an implicit exception for cases in which it
is in conflict with a norm concerning lease of business accommodation. To
complicate the example even further, section 1637a BW gives in a similar
way precedence to norms concerning labour contracts.
Example 3.1.7 Section 3:32-(1) BW declares every person to have the
capacity to perform juridical acts, "to the extent that the law does not
provide otherwise". One of the places in which the law does so is 1:234-(1)
BW, but with the same qualification: minors do not have the capacity to
perform juridical acts, to the extent that the law does not provide otherwise.
Subsection (2) of this norm contains such an exception in case, under certain
additional conditions, the minor acts with consent of its legal representative.
A First Formalization
Example 3.1.1
N HPW: ∀x. conditions ∧ ¬ x is a short-termed contract →
conclusion
Note that thus 2 HPW is not formalized as a separate KB unit. A similar
formalization can be given of Example 3.1.2. The other examples become¹
Example 3.1.3
4 HPW: ∀x. conditions ∧ ¬ contract date of x before 1976
→ rent of x can be changed
30-(2) HPW: ∀x. conditions ∧ contract date of x before 1976
→ ¬ rent of x can be changed
Example 3.1.4
287 Sr: ∀x, y. x kills y ∧ x acts with intent ∧ ¬ x and y
duel on life-and-death → maximum penalty for x is
15 years
154-(4) Sr: ∀x, y. x kills y ∧ x and y duel on life-and-death →
maximum penalty for x is 12 years
Example 3.1.5
If, for convenience, we express deontic concepts with first-order predicates,
then 25 WVW can be represented in predicate logic as
25 WVW: ∀x. x causes danger → ¬ x is permitted
where x is assumed to range over traffic acts. Furthermore, every obligation
and permission in Dutch traffic law should have an extra condition
¬ x causes danger
Example 3.1.6
1624 BW: this would require for each rule possibly coming into conflict
with a norm about lease of business accommodation to have a condition
¹With respect to Example 3.1.4 Brouwer (1994, p. 15) remarks that it is philosophically
questionable to regard intent as a property of an action. Although his point is
interesting, I ignore such subtleties for explanatory purposes.
However, this is still not sufficient, since, in addition, all possible exceptions
to the exceptions should be taken into account. Therefore, (2) should be
changed into something like
2' ∀x. x is a person ∧ x is a minor ∧ ¬ x acts with consent of
his legal representative ∧ ¬ exception_{k+1} ∧ ... ∧
¬ exception_n → ¬ x has legal capacity
Note that this only deals with exceptions to 1:234-(1) BW; if the exceptions
to the other exceptions are also included, the formalization becomes still
more complex.
Since in these formalizations several non-contiguous source units are
mixed in one KB unit, it is obvious that structural resemblance is lost
completely. Other disadvantages of this way of formalizing are that each
time a new exception is added to the knowledge base one or more old rules
have to be changed, and that if a rule has many exceptions, its formulation
can become very complex, particularly if there are also exceptions to the
exceptions. Furthermore, in formalizing a rule the entire domain has to be
considered.
Some, e.g. Nieuwenhuis (1989, p. 62), defend deviations from the structure
of the source by arguing that thus the implicit logical structure of
the statute is made explicit. However, in the next section a nonstandard
logical structure of legislation will be assumed, a structure which allows
the separation of general rules and exceptions, and which is therefore already
explicit in the statute itself. First, however, another attempt within
standard logic will be discussed.
If φ1, ..., φn ⊨ ψ, then also φ1, ..., φn, φn+1 ⊨ ψ
However, the just sketched way of reasoning does not have this property,
since it offers a way to draw conclusions on the basis of the nonderivability
of other conclusions, and the important point is that such conclusions can
become invalid if additional information leads to the derivability of these
other conclusions. Consider again the example: if first only condition is
added to the rules, then conclusion can be derived, but if after that
also the fact e is a short-termed contract is added, then in the way
explained above N is inapplicable to e becomes derivable, which blocks
the application of N HPW. This kind of reasoning, in which conclusions can
be invalidated by additional information, is called nonmonotonic, and it is
obvious that a logical analysis of this kind of reasoning needs methods which
go beyond standard logical tools. In fact, I have already briefly mentioned
one possible way to account for negation as failure, a way which tries to
stay as close to standard logic as possible: this is the method of adding
completion formulas to the premises and applying standard logic to the
resulting set. As noted, the formal details will be discussed in Chapter 4.
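The nonmonotonic behaviour of negation as failure can be sketched in a few lines (a toy illustration of Example 3.1.1, not the formal treatment of Chapter 4): the rule applies as long as the exception is not derivable from the knowledge base, so adding information can retract a previously licensed conclusion.

```python
# Toy illustration of negation as failure for Example 3.1.1: N HPW
# applies unless the exception 'x is a short-termed contract' is
# derivable from the knowledge base; nonderivability counts as falsity.

def hpw_applies(kb, contract):
    """True iff the exception for this contract is not in the KB."""
    return (contract + ' is a short-termed contract') not in kb

kb = {'c satisfies the conditions'}
print(hpw_applies(kb, 'c'))   # True: the exception is not derivable

kb.add('c is a short-termed contract')
print(hpw_applies(kb, 'c'))   # False: new information defeats the conclusion
```

The second call shows the failure of monotonicity: enlarging the premise set invalidated a conclusion that the smaller set supported.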
It might be argued that there is another method of preserving the
separation of rules and exceptions, a method in which standard logic need
not be given up. This will be discussed next.
Collision Rules
Making the assumption that there are no exceptions if the opposite cannot
be proven is not the only formal method of separating rules from exceptions.
Legal systems have ways of solving conflicts between norms by way of
collision rules and, as will be shown now, these principles provide another
way to represent rules and exceptions in a way resembling the source. Two
such principles, based, respectively, on notions of specificity and priority,
will be discussed. This method is particularly appropriate for systems which
can generate alternative arguments from an inconsistent knowledge base,
since the collision rules can be used in comparing the arguments to see
which one is the best. At first sight it seems that a formal description of
this method does not need a nonmonotonic logic, since it seems that these
collision rules can simply be added to a standard logical system. However,
things are not so simple as they seem, as will be explained after the details
of the method have been discussed by way of some examples.
44 CHAPTER 3
Specificity
In Example 3.1.4 the interpretation which structurally is the most faithful
to the original text is to regard 154-(4) Sr as a rule which covers a more
specific case than 287 Sr, which makes the legal principle Lex Specialis
Derogat Legi Generali applicable. This rule is generally held to be a collision
rule of any legal system. What for our purposes is particularly interesting
about this rule is that in AI its counterpart the 'specificity' principle is
often advocated as the best way of separating rules from exceptions in a
knowledge base (e.g. by Poole, 1991). The general idea is, instead of intro-
ducing a new operator into the language of a logical system, to augment
an existing logical system with a metaprinciple to restore consistency if a
contradiction has been derived. In case of specificity, this boils down to
defining an algorithm which determines by syntactic means which of the
contradictory conclusions is based on the most specific premises.
To illustrate this with Example 3.1.4, assume that it is formalized in
standard predicate logic augmented with the Lex Specialis principle. Then
at first sight a problem is that the element x acts with intent is not part
of the formulation of 154-(4) Sr, which prevents the antecedent of 154-(4)
Sr from being a specific case of the antecedent of 287 Sr. Nevertheless,
according to common sense the agreement to duel on life-and-death implies
the intention to kill, and if this piece of common-sense knowledge is explic-
itly added to the formalization, specificity becomes syntactically apparent
(the technical details will be discussed in Chapter 6). Now a formalization
of this example illustrating the specificity method is the following one (in
which I assume that enough axioms are added to make the consequents of
the two norms contradict each other).
Formalization 2 of Example 3.1.4
287 Sr: ∀x,y. x kills y ∧ x acts with intent →
    maximum penalty for x is 15 years
154-(4) Sr: ∀x,y. x kills y ∧ x and y duel on life-and-death →
    maximum penalty for x is 12 years
CS: ∀x,y. x and y duel on life-and-death →
    x acts with intent
Above Example 3.1.3 was formalized with an exception clause, but it can
also be regarded as a Lex Specialis case. In this interpretation it is formal-
ized as follows.
Formalization 2 of Example 3.1.3
4 HPW: ∀x. conditions ↔ rent of x can be changed
30-(2) HPW: ∀x. conditions ∧ contract date of x before 1976 →
    ¬ rent of x can be changed
THE NEED FOR NEW LOGICAL TOOLS 45
Again, the idea is that the reasoning mechanism will recognize the second
rule as a lex specialis of the first, since the antecedent of 30-(2) HPW
is a specific case of the one of (the if-part of) 4 HPW. Note that the
incorporation of the conditions of 4 HPW in the formalization of 30-(2)
HPW does not mix several source units, since 30-(2) HPW itself refers to
those conditions by means of "contrary to 4 HPW".
With respect to knowledge representation the advantages of the specificity
method will be obvious: apart from preserving the structure of the
text, this method also makes it possible to add new exceptions without
having to change the old rule and, moreover, the formalized rules maintain
a simple and elegant structure. Formally, at least two things are required to
make this work. First, the reasoning mechanism should be able to identify
consistent subsets of the premises to serve as the basis for possible con-
clusions; and secondly, it should have logical criteria available by which it
can determine which conclusion is based on the more specific information.
How these things are possible will be the subject of Chapter 6. It will be
obvious, however, that this will involve more than mechanically deriving
deductive consequences from a set of logical formulas, as in the axiomatic
view on reasoning.
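As an illustration, the syntactic specificity check can be sketched as follows in Python; the set-based encoding and all names are my own simplification, not Poole's or the book's definitions:

```python
# Sketch of the Lex Specialis / specificity check: antecedents are sets of
# conditions, and a rule is more specific if its antecedent, closed under
# the background knowledge, covers the other rule's antecedent but not
# vice versa.

def closure(conditions, background):
    """Close a set of conditions under background rules (premise, conclusion)."""
    facts = set(conditions)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in background:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def more_specific(rule_a, rule_b, background):
    a = closure(rule_a["if"], background)
    b = closure(rule_b["if"], background)
    return b <= a and not a <= b

# Example 3.1.4: the common-sense rule CS makes 154-(4) Sr syntactically
# more specific than 287 Sr.
r287 = {"name": "287 Sr", "if": {"x kills y", "x acts with intent"}}
r154 = {"name": "154-(4) Sr", "if": {"x kills y", "duel on life-and-death"}}
cs = [("duel on life-and-death", "x acts with intent")]

print(more_specific(r154, r287, cs))  # True: 154-(4) Sr prevails
print(more_specific(r287, r154, cs))  # False
```

Without the common-sense rule `cs` neither antecedent covers the other, which mirrors the problem noted in the text.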
Priorities
Sometimes another metalevel method is the most natural way of dealing
with separate exceptions, viz. a method based on hierarchical relations
between norms. Consider Example 3.1.5: since almost all penal traffic pro-
visions are contained in a regulation which is hierarchically lower than
the Dutch traffic act, an ordering based on the Lex Superior Derogat Legi
Inferiori principle will give precedence to the prohibition of 25 WVW if it
conflicts with a lower permission. The Lex Superior principle is based on
the general hierarchy of a legal system: it says that if legislative authority is
divided along the lines of the hierarchical structure of a legal system, lower
authorities have to respect what higher authorities have issued. Another ex-
ample is Example 3.1.6, which is a complex example of implicitly expressed
exceptions. Its most natural interpretation is as follows: if conflicting rules
apply to a case with a contract of mixed nature, 1624 BW gives rules
concerning lease of business accommodation priority over rules concerning
other types of contracts. Note that in this example the priorities are not
determined by the general hierarchical structure of the legal system, but
by a specific collision rule which is itself part of a statute. In Chapter 8 I
discuss the question of how such specific collision rules as 1624 BW can be
combined with general principles like Lex Superior and Lex Specialis.
Like Lex Specialis, the Lex Superior principle also has its counterpart
in AI: sometimes (e.g. by Brewka, 1989) it is suggested that the separation
official collision rule. An interesting question is how in legal reasoning all
these collision rules are combined. Both the legal and the logical aspects of
this issue will be investigated in Chapter 8.
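By way of illustration, a Lex Superior comparison can be sketched like this; the encoding of hierarchy levels and the norm descriptions are an invented simplification:

```python
# Sketch of the Lex Superior principle: each norm records the hierarchical
# level of its source, and in a conflict the norm from the higher source
# prevails.

LEVELS = {"act": 2, "regulation": 1}  # higher number = higher authority

def lex_superior(norm_a, norm_b):
    """Return the prevailing norm, or None if the sources rank equally."""
    la, lb = LEVELS[norm_a["source"]], LEVELS[norm_b["source"]]
    if la == lb:
        return None
    return norm_a if la > lb else norm_b

prohibition = {"name": "25 WVW", "source": "act"}
permission = {"name": "traffic regulation", "source": "regulation"}

print(lex_superior(prohibition, permission)["name"])  # the act prevails
```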
EXAMPLES
Examples of this kind can also be found in real life. Snijders (1978)
discusses, among others, the following examples from Dutch law. Section
95 of the Dutch accident insurance act gives the insurer the right of
recourse against the injurer. In one such case the injurer and the injured
person were married in communal estate; the Dutch supreme court (Hoge
Raad) decided in HR 2-2-1973, NJ 1973, 225 that applying the rule to such
cases would not be reasonable. Another example concerns Dutch labour
law. Section 1638b BW states as a general rule that the employee's right to
receive wages ends if s/he does not perform the agreed labour. In addition
to the two legislative exceptions to this rule in the sections 1638c and 1638d
BW the Dutch supreme court formulated in HR 10-11-1972, NJ 1973, 60
another exception: the right to receive wages does not end if not doing the
agreed labour was caused by circumstances of which it is more fair that the
employer rather than the employee should bear the risk. In the case at hand
such a circumstance was a strike of other employees which prevented the
employee from doing his job. Finally, Dutch civil law contains the analogue of the
famous American Riggs v. Palmer case, discussed in Dworkin (1977). In the
Dutch supreme court case (HR 7-12-1990, NJ 1991, 593) a man had killed
his wife five weeks after they had married. At the time of their marriage the
woman was not only much older but also much richer than the man, and
since they had married in communal estate the man claimed his half of the
community according to section 1:100 of the Dutch Civil Code; however,
the Dutch supreme court decided that the principles of reasonableness and
equity motivated an exception to this rule in this case.
In sum, legal rules are defeasible: since the legislator cannot foresee every
possible situation which satisfies a rule's conditions, in some such situations
principles or the purpose of the rule may give rise to an exception.
This defeasibility in itself, however, is not yet the reason why standard
logic cannot cope with defeasible legal rules; that reason has more to do
with their application to the world. Someone requiring
absolute certainty of derived conclusions would in the case of defeasible rules
have to say: 'unless I positively know that there are no exceptional cir-
cumstances, I cannot apply the rule'. This, however, is simply not what
happens in the practice of legal reasoning: often only some exceptions
are considered, and many rules are applied even without any reference to
the absence of exceptional circumstances. In criminal cases, for example,
normally only a small proportion of the possible exceptional circumstances
is considered explicitly; for the rest their absence is simply tacitly assumed.
And when lawyers refer to a norm concerning debtor-creditor relations
(cf. Example 3.1.2 above), they will often not even mention 6:2-(2) BW.
The reasons for this are twofold: firstly, information concerning possible
exceptions is often simply not available; and secondly, even if it is in
principle available, trying to find it would often be too tedious and time
consuming. In conclusion, in every-day life defeasible rules are not only
applied when it is known that all exceptions are false, but also when
information concerning exceptions is absent, and this means that if this
information becomes available at a later time, the conclusion is not valid any
more. However, as explained above, standard logic cannot account for this
kind of reasoning, since standardly valid conclusions cannot be invalidated
by adding new premises.
Making the computer speak about problems of open texture is one of the
major tasks of AI-and-law research.
However, I shall not investigate this problem, but a related problem of
a logical nature: even if the computer speaks about problems of open texture,
it cannot always do so in the same way as about other problems; more
specifically, even if additional information is available, for example judi-
ciary decisions in previous cases or expert opinions, it is often of such an
inconsistent, incomplete or vague nature that the way lawyers reason with
it cannot be modelled with standard methods. The reasons for this will be
examined in the present section; my analysis will be based on a distinction
between four logical types of open texture or, more accurately, between four
kinds of situations which are in the literature often referred to as problems
of open texture. Other terms are also used, in particular 'vagueness' and
'semantic indeterminacy'. See Susskind (1987, pp. 187-8) for a discussion
of terminological matters.
not meant for normal transport are no vehicles', may turn out to be subject
to exceptions.
Judges are generally well aware of the defeasible nature of their classifi-
cation rules: see, for example, the following fragment of the Dutch supreme
court case HR 13-12-1968, NJ 1969, 174 on the classification of a case as
force majeure:
If the nature of the contract, common opinion, or the reasonableness of an indi-
vidual case do not indicate reasons for deciding the contrary, it can as a general
rule be assumed that ...
According to Hart (1949; 1961) this phenomenon is a common characteristic
of legal concepts: it is almost never possible to give sufficient conditions for
the applicability of a concept; any rule which tries to do so is subject to
exceptions, because it is impossible to foresee all possible situations which
might occur in real life. Hart (1949) has called this characteristic of legal
language the "defeasibility of legal concepts".
This development in legal philosophy mirrors similar developments in
analytic philosophy, cognitive psychology and Artificial Intelligence (see
for an overview Johnson-Laird, 1988, pp. 242-246). In cognitive psychology,
concepts as people use them are no longer regarded as being strictly
defined by a set of necessary and sufficient conditions, but as being captured
more loosely by a set of prototypes, from which deviations in
individual cases are possible. An often-used example, already discussed
in Section 1.3.3, is the concept 'bird': typically birds can fly, but some
instances, like penguins and ostriches, lack this property. In AI these devel-
opments prompted the critical remarks on logic by, among others, Minsky
(cf. 1.3.3), who in Minsky (1975) suggested a knowledge representation
formalism called 'frames', in which it is possible to specify default values
for features of a concept, which can be overridden by specific instances.
One of the philosophical sources of this new way of looking at concepts is
Wittgenstein's (1958) Philosophical Investigations, with its idea of instances
of concepts as having only 'family resemblances' instead of all sharing the
same set of essential features. In conclusion, legal theory, philosophy
and cognitive psychology alike provide ample evidence for the defeasibility of the
concepts which people use in every-day life.
Logically the defeasibility of legal concepts is of the same type as the
defeasibility of legal rules, discussed above: in both cases the antecedent of
a conditional statement is true, but reasons exist for making an exception
to the rule. Unlike cases in which a concept is open textured because it
lacks a complete definition, this kind of situation indeed creates problems
for standard logic, viz. those discussed in Sections 3.1 and 3.2: both the
separated representation of such rules and their application when not
everything is known about exceptions require nonmonotonic reasoning methods.
3.3.3. VAGUENESS
Still another aspect of legal language which has often been referred to
as open texture often (but not exclusively) goes together with terms
like 'reasonable', 'sufficient', 'appropriate', 'suitable', etcetera, named by
Hart (1961, pp. 128-130) "variable standards". Consider by way of example
the question whether an apartment is suitable for a candidate tenant. While
in some cases judges state their decision explicitly in the form of a rule,
often the decision is justified in the following way: 'On the one hand A, B
and C, but on the other hand X, Y and Z, and in this case the latter factors
outweigh the first: therefore the apartment is not suitable'. This style of
justifying a decision can particularly be found if factors can have many
different values, as is the case with income, age, rent, etcetera. Consider
the following typical fragment from a Dutch supreme court decision (HR
19-5-67, NJ 1967, 261), in which the question whether the debtor can
appeal to a liability-restricting clause in a contract was held to
depend on numerous circumstances, such as: the seriousness of the fault, also in
connection with the nature and the weight of the interests involved in any conduct,
the nature and the further content of the contract in which the clause is contained,
the social position and the mutual relationship of the parties, the way in which
the clause has come into existence, the degree in which the other party was aware
of the scope of the clause.
From such a justification no general rules can be derived, because the fact
that the judge refers to the weight of the factors is an indication that new
factors might easily alter the outcome; all that can be known is that factors
have a certain weight, that their presence or absence in a case influences the
outcome of a case to a certain degree. An expert on landlord-and-tenant
law might, for example, know that the presence of an elevator increases the
chance for a house to be suitable if the tenant is old, without being able to
tell 'if there is an elevator and the tenant is old and ... , then the house is
suitable'. I shall call this situation the vagueness type of open texture. It is
obvious that knowledge of and reasoning with the mere weight of factors
cannot be modelled in a logic-based way: a numerical treatment is more
appropriate. Candidate methods are, for example, statistical analysis (e.g.
De Wild & Quast, 1989), or a neural network approach (e.g. Van Opdorp
& Walker, 1990; Van Opdorp et al., 1991; Bench-Capon, 1993). Numerical
methods, however, fall outside the scope of the present research.
Nevertheless, I would like to make two remarks on this issue. The first is
that these numerical methods are ways to make the system speak in cases
which have never been considered before. What these methods do in cases
with new combinations of elements is give an answer which as closely
as possible reflects the role of the elements in previous cases in different
combinations. Of course, since the combination of the case at hand has
never occurred before, the answer has a certain degree of uncertainty, but
in trying to deal with new, previously undecided cases, they are an attempt
to break the silence of a system in cases of open texture. In this respect, they
are like induction or analogy, although they go further: as was argued above,
analogy is only a heuristic principle for suggesting a decision in a new case,
since as a mode of reasoning it has no justifying force at all; the numerical
methods, however, provide at least a certain degree of justification, based
on the statistical aspects of the theories underlying them.
A second remark concerns the fact that sometimes fuzzy logic is sug-
gested as a way of dealing with open texture, e.g. by Franken (1983, p. 32).
Fuzzy logic is based on the concept of 'fuzzy sets', of which membership is a
matter of degree. For example, a person of 1.60 m might be regarded as
a tall person to degree 0.1, a person of 1.80 m to degree 0.5 and someone
of 1.95 m to degree 0.9. However, this example already shows that this
approach is, like the other numerical methods, not applicable to the first
two types of open texture, but only to the vagueness type (as observed
earlier by Bench-Capon & Sergot, 1985). The reason is that in the first two
types being an instance of a legal concept is not a matter of degree, but a
matter of 'yes or no'. For instance, in the vehicle example the question is
not whether a skateboard is more or less a vehicle, but whether it is or is
not. Being a vehicle to degree 0.7, if saying so makes any sense at all, does
not have less serious legal consequences than being a vehicle to degree 0.9.
Fuzzy logic is only applicable if some of the 'input' or 'output concepts' are
multi-valued, like 'rent', 'compensation', 'suitability' or 'reasonableness', or
if a multi-valued concept is made two-valued by a 'threshold' qualification,
as in 'sufficiently suitable'. Of course, as already indicated, such concepts
certainly occur in law; in such cases, however, fuzzy methods are more
an alternative to other numerical methods in modelling vagueness than to
symbolic methods in modelling defeasibility and disagreement.
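For illustration, a fuzzy membership function of the kind described can be sketched as follows; the two height thresholds are my own choice, picked so that the resulting degrees roughly match the ones mentioned above:

```python
# A piecewise-linear membership function for the fuzzy set 'tall person';
# the thresholds 1.55 m and 2.00 m are invented, chosen so that the
# resulting degrees roughly match those in the text.

def tall_degree(height_m, low=1.55, high=2.00):
    """Degree in [0, 1] to which a person of the given height is tall."""
    if height_m <= low:
        return 0.0
    if height_m >= high:
        return 1.0
    return (height_m - low) / (high - low)

for h in (1.60, 1.80, 1.95):
    print(h, round(tall_degree(h), 2))  # roughly 0.11, 0.56, 0.89
```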
In this section more will be said about the nonstandard aspects of legal
reasoning identified in this chapter: reasoning with inconsistent information
and reasoning with defeasible rules. Most attention will be paid to the latter,
since in practice reasoning with inconsistent information rapidly becomes
a special kind of defeasible, or nonmonotonic, reasoning.
Relevance
To summarize the potential reasons which we have identified for using methods
of nonmonotonic reasoning in AI and law, we can divide them into two
groups. The first concerns reasons of knowledge representation: we have
know whether φ is true or false, otherwise it cannot draw any conclusion.
A nonmonotonic system, on the other hand, can also draw conclusions if
nothing is known about φ, and this ability is desirable in two cases. The
first case is when it is appropriate not to ask the user for the missing
information, as, for example, in Example 3.1.1, since the concept 'short-
termed contract' is worked out in a large amount of case law which is
highly factual and specific in nature (see Walker et al., 1991, p. 45). The
second case is situations in which the user is asked for the missing information
about φ but can give 'I don't know' answers. Such an answer has the
same effect as no information about φ without asking.
In addition to this, designers of knowledge-based systems sometimes feel
the need to assume certain trivial factual information to be true without
having to ask the user whether these facts indeed hold. For example, the
PROLEXS system on Dutch landlord and tenant law (Walker et al., 1991)
assumed the user to be an adult and the lease contract to be valid. In
conclusion, even in a practical environment there can be reasons to apply
general rules under incomplete information about exceptions.
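This practice of assuming trivial facts by default can be sketched as follows; the question strings and defaults are invented, not PROLEXS's actual knowledge representation:

```python
# Sketch of assuming trivial facts by default instead of asking the user:
# unknown or missing answers fall back to a default assumption.

DEFAULTS = {"user is adult": True, "contract is valid": True}

def fact(question, answers):
    """Use the user's answer if one was given; on no answer or an
    'unknown' answer, fall back to the default assumption."""
    value = answers.get(question, "unknown")
    if value == "unknown":
        return DEFAULTS.get(question, False)
    return value

print(fact("user is adult", {}))                        # True by default
print(fact("user is adult", {"user is adult": False}))  # an answer overrides
```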
In addition to these arguments it may be remarked that even if from
a reasoning point of view a certain application does not need nonmono-
tonic methods, there are still the above-mentioned arguments from the
knowledge representation point of view: separating rules and exceptions in
a formalization often better preserves the structure of what is formalized,
and apart from this many hold that it benefits validation and maintenance.
Finally, from a research point of view the nonstandard features of legal
reasoning deserve to be investigated simply because they are aspects of
legal reasoning; and for AI research it is not strange to say that every
feature of reasoning is in principle a subject of research, no matter what
the immediate practical gains of investigating it are.
In conclusion, although it has by no means been shown that legal
knowledge-based systems are not worth their money if they cannot reason
nonmonotonically, what has been shown is that nonmonotonic aspects of
legal reasoning at least deserve to be placed on the agenda of AI-and-law
research. The final section of this chapter will discuss some existing projects
in the legal domain which have already done this, although sometimes
without being explicitly based on a theory of nonmonotonic reasoning.
First, however, I discuss in more detail the formal nature of reasoning with
an inconsistent knowledge base and of nonmonotonic reasoning.
Characteristics
As explained, nonmonotonic reasoning is drawing conclusions which might
be invalidated by additional information. Using a nonprovability operator
has been shown to be a form of nonmonotonic reasoning: if conclusions de-
pend on the failure to derive other conclusions, they become invalid as soon
as new information makes it possible to derive these other conclusions. Using
collision rules also introduces nonmonotonicity: in the case of, for example,
Lex Superior and Lex Specialis the validity of a conclusion derived from
a lower or more general rule depends on the failure to derive the opposite
conclusion from a higher or more specific rule.
This, by the way, explains the last remark of Section 3.4.1 that the
comparison of arguments makes modelling disagreement nonmonotonic: if
standards are used to determine which of the conflicting arguments is the
best, then it might be that additional information gives rise to a new argu-
ment which is better than the one that was preferred on the basis of the old
information. Therefore, allowing for inconsistent knowledge bases to model
disagreement becomes nonmonotonic as soon as conflicting arguments are
compared to see which one is the best.
As described so far, nonmonotonicity is a formal property of conse-
quence notions; however, nonmonotonic reasoning systems also differ
computationally from standard logics in an important respect. Classical proof
methods are local: if there is a monotonic proof for a formula φ from a
part of the knowledge base (KB), this counts as a proof from the entire
KB, since new information cannot invalidate this proof. For example, if ψ
describes the facts of a case and a monotonic system tries to answer the
question whether φ holds, then if the first rule it checks is ψ → φ, it can stop
as soon as it has applied modus ponens to this rule. Nonmonotonic proof
methods, on the other hand, are global, since even if they have found a way to
derive φ, they have to continue searching, because new information might
invalidate nonmonotonic conclusions. For example, in the general exception
clause approach a conclusion φ based on the nonderivability of a formula ψ
from a part of the KB can be invalidated by a derivation of ψ from another
part of the KB; and in the collision rule approaches the opposite of φ may
be derivable from a more specific or higher rule in the KB. Obviously, global
proof methods are less tractable than local ones, since they require that for
every conclusion the entire KB has to be inspected.
To summarize, nonmonotonic reasoning differs from standard reasoning
in two important respects: conclusions may be invalidated by additional
information; and for every step in a derivation the entire amount of available
information has to be considered.
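The contrast between local and global proof search can be sketched as follows; the rule and defeater encoding is my own toy formulation of the rent example:

```python
# Toy contrast between local (monotonic) and global (nonmonotonic) proof
# search: rules and defeaters are pairs (set of conditions, conclusion).

def prove_local(goal, facts, rules):
    """Monotonic search: succeed on the first rule that fires."""
    return any(cond <= facts and head == goal for cond, head in rules)

def prove_global(goal, facts, rules, defeaters):
    """Nonmonotonic search: the derivation stands only if no defeater of
    the goal fires anywhere in the KB."""
    if not prove_local(goal, facts, rules):
        return False
    return not any(cond <= facts for cond, blocked in defeaters
                   if blocked == goal)

rules = [({"conditions"}, "rent changeable")]
defeaters = [({"short-termed"}, "rent changeable")]

print(prove_global("rent changeable", {"conditions"}, rules, defeaters))
# True: no defeater fires
print(prove_global("rent changeable", {"conditions", "short-termed"},
                   rules, defeaters))
# False: the whole KB must be inspected, and a defeater fires
```

Note that the local prover would happily stop after the first rule in both cases; only the global scan over the defeaters detects the invalidation.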
It may be useful to compare nonmonotonic reasoning with other modes
of reasoning with respect to their role in adversarial argumentation. Imagine
a dialogue situation in which the parties first reach agreement on the basis
for the discussion, i.e. on the initial premises, and then try to persuade
each other of a certain conclusion. If one party presents a conclusion which
is derived from the premises in a deductive, monotonic way, then the only

Many other writers have made the same point, in the AI-and-law domain
e.g. Gardner (1987), Hage (1987), Gordon (1988) and Sartor (1991), to
name but a few. In conclusion, it can be said that the intractability of the
formalisms is caused by the intractability of the kinds of reasoning which
are formalized (cf. Etherington, 1988, p. 69).
First I discuss something which, rather than a particular system, is a general
approach to formalizing law for computer applications. This is the idea of
formalizing law as logic programs, viz. as a set of formulas of a logical
language for which automated theorem provers exist. This approach, of
which the underlying ideas are set out in Sergot (1988), is most closely
associated with Sergot and Kowalski. The best known application is the for-
malization of the British Nationality Act (Sergot et al., 1986). The authors
have made ample use of the negation as failure approach to negation, which
is characteristic of logic programming methods and which, as explained in
Section 3.1, is a form of nonmonotonic reasoning. They regard this approach
as particularly appropriate for exceptions. Accordingly, they use negation
as failure specifically for norms containing phrases like 'unless the contrary
is shown' and 'subject to section ... '. They do not use general but specific
exception clauses. An example of the use of negation as failure is the norm
to the effect that, under certain additional conditions, an abandoned child
acquires British citizenship if the identity of the parents is unknown. The
authors explicitly regard this as a form of nonmonotonic reasoning and,
moreover, they observe that the legislator has not anticipated the possibility
that at a later point in time it becomes known that the parents were not
British citizens, which would make the previous conclusion invalid.
3.5.2. TAXMAN II
Gardner (1987) has designed a program to perform so-called 'issue spotting'
in the American common-law domain of the formation of contracts by offer
and acceptance. The task of the program is to determine of an input case
which legal questions to be solved are easy and which are hard, and to solve
the easy questions. If all the questions involved are easy, the case can be
reported as clear; otherwise it is hard. The system contains domain knowl-
edge of essentially three different legal types: firstly, general legal rules,
derived from case law in a semi-official way (the Restatements of American
contract law); secondly, so-called 'common-sense rules', providing general
nonlegal knowledge about aspects of the world relevant to the program
domain; and finally, case law, generalized in the form of universally quan-
tified conditionals. The program considers a question as hard if either "the
rules run out" (the underdetermination type of open texture), or different
rules or cases point at different solutions, without there being any reason
to prefer one over the other (the overdetermination type). Because of the
second possibility the program is able to deal with conflicting information.
The reason why Gardner's program performs nonmonotonic reasoning is
that, before a case is reported as hard, conflicting alternatives are compared
to check whether one is preferred over the other and, as noted above, this
is a form of nonmonotonic reasoning. Because of this feature, the system is
also able to handle the defeasibility type of open texture, which it does in
the following way. Gardner uses case law in her program for indirectly
representing purposes of rules and principles: a common way in which
judicial decisions do this is to set aside legal rules or literal interpretations
of legal concepts (represented in Gardner's program by the common-sense
rules) by an interpretation based on principle or purpose. The way Gardner
implements this is to give case law precedence over common-sense rules.
Assume by way of an example that the program tries to apply a rule
forbidding sleeping in a railway station to a commuter who has dozed
off (an example of Fuller, 1958). The common-sense rules would no doubt
say that the man is sleeping, but an argument based on the purpose of the
rule (to keep tramps outside the station building) could say otherwise. The
corresponding judicial decision then sets the literal reading aside. Thus,
Gardner's system provides an example of a nonstandard collision rule.
Furthermore, in using collision rules to compare conflicting arguments, her
system performs nonmonotonic reasoning.
3.5.4. CABARET
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
68 CHAPTER 4
(b ∧ M¬p) → f
DEFAULT LOGIC
It must now be chosen which default to apply, since applying both of them
makes the extension inconsistent. Now if d1 is applied, then Maximum = 12
is no longer consistent with the resulting extension and the application
of d2 is blocked. If, on the other hand, d2 is applied first, d1 is blocked.
Thus, this default theory has two extensions:
E1 = T ∪ {Maximum = 15}
E2 = T ∪ {Maximum = 12}
Both are reasonable sets of beliefs on the basis of (T, Δ), but they cannot
be held simultaneously: it is necessary to make a choice between them.
Because of the possibility of multiple extensions, two alternative conse-
quence notions can be defined for default logic, depending upon whether a
formula is in some or in all extensions of a default theory. Generally these
two kinds of reasoning are called credulous and sceptical reasoning, and
they can also be defined for many other nonmonotonic logics.
Definition 4.1.3 (credulous and sceptical consequence). Let (T, Δ) be a
default theory. Then
- (T, Δ) |~s φ iff φ is in all extensions of (T, Δ) (sceptical consequence)
- (T, Δ) |~c φ iff φ is in some extension of (T, Δ) (credulous conse-
quence)
The idea behind sceptical consequence is to capture precisely those conclusions
which, on the basis of the present knowledge, cannot be challenged
at all: if a conclusion φ follows sceptically from (T, Δ), no argument at
all can be set up against φ on the basis of (T, Δ). Of course, a different
matter is whether φ can be challenged under additional factual knowledge.
Credulous consequence, on the other hand, is very permissive: if φ is a
credulous consequence of (T, Δ), this only means that there is at least one
possible state of affairs compatible with the facts of T and generated by
the defaults of Δ such that in this state of affairs φ holds.
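To make Definition 4.1.3 concrete, here is a toy Python sketch of my own (not part of the book's formal apparatus): literals are strings, '~' marks negation, consistency is merely the absence of complementary literals rather than full deductive closure, and extensions are approximated by trying every application order of propositional normal defaults. The literal names (max_15, max_12) are illustrative stand-ins for the Maximum = 15 / Maximum = 12 conclusions above.

```python
from itertools import permutations

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(lits):
    return all(neg(l) not in lits for l in lits)

def extensions(facts, defaults):
    """Try every application order; collect the distinct maximal results."""
    found = set()
    for order in permutations(defaults):
        e = set(facts)
        changed = True
        while changed:
            changed = False
            for prereq, concls in order:
                if prereq in e and not set(concls) <= e and consistent(e | set(concls)):
                    e |= set(concls)
                    changed = True
        found.add(frozenset(e))
    return found

def sceptical(phi, facts, defaults):
    return all(phi in e for e in extensions(facts, defaults))

def credulous(phi, facts, defaults):
    return any(phi in e for e in extensions(facts, defaults))

# The two conflicting defaults of the sentencing example; mutual
# exclusion of the two maxima is encoded directly in the conclusions.
facts = {"theft", "twice"}
d1 = ("theft", ["max_15", "~max_12"])
d2 = ("twice", ["max_12", "~max_15"])

exts = extensions(facts, [d1, d2])
assert len(exts) == 2                       # the two extensions E1 and E2
assert credulous("max_15", facts, [d1, d2])
assert credulous("max_12", facts, [d1, d2])
assert not sceptical("max_15", facts, [d1, d2])
assert sceptical("theft", facts, [d1, d2])  # facts are in every extension
```

As the assertions show, each of the two conflicting conclusions follows credulously but neither follows sceptically, while the undisputed facts follow under both notions.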
The observation that in case of multiple extensions sometimes only one
of them is intended led Reiter & Criscuolo (1981) to use so-called seminormal
defaults, giving up the idea of Reiter (1980) that normal defaults are
the only ones that are needed. A default is normal iff the justification is
identical to the conclusion, i.e. iff it is of the form

α(x) : ω(x)/ω(x)

and it is seminormal iff it is of the form

α(x) : β(x) ∧ ω(x)/ω(x)

Otherwise it is nonnormal.
Typically, β(x) will be the negation of the prerequisite of a conflict-
ing default which should have priority (the reader will recognize the
then the revised default theory (F, {d1', d2}) has only one extension, viz.
E2, since d1' is blocked by the fact that its justification, now entailing
¬life_and_death_duel, is not consistent any more with the facts. Accord-
ing to Dutch law E2 is indeed the intended extension, since 154-(4) Sr is
meant as an exception to 287 Sr. Note that since ¬life_and_death_duel
is added to the justification of d1' instead of to its prerequisite, it is not
necessary to prove the absence of the exceptional circumstance in order
to apply the general rule. For this reason this method is in principle a
candidate for capturing the defeasibility of legal rules. In Chapter 5 I shall
go into detail on the question whether this method is indeed satisfactory in
all cases.
Many modifications and extensions of default logic have been pro-
posed, for instance by Lukaszewicz (1990) and by Brewka (1991b; 1994a;
1994c), whose work will be discussed in Chapter 9. Furthermore, Ether-
ington (1988) has developed a model-theoretic semantics for default logic,
which, however, only gives a semantic characterization of extensions, and
not of defaults themselves. Informally, the idea is that all subsets of the
set of models for the first-order theory F are ordered as to how well they
satisfy the defaults. Each maximal set in this ordering is the set of models
for some extension.
Although its intuitive clarity is very appealing, default logic also has
some recognized drawbacks. One of them is its computational complexity,
a main source of which is the fact that the definition of an extension is,
as explained above, not constructive. The problem is that in applying the
consistency check to see whether a default can be used to enlarge an extension,
it is not sufficient to inspect the extension as it has been constructed
so far by other defaults; instead the content of the entire extension has
to be guessed beforehand, and this cannot easily be mechanized. More
precisely, it is known that the extensions of default logic are not 'recursively
enumerable': i.e. the problem of determining whether a given formula is in
some extension of a given default theory (Reiter, 1980, Th. 4.9) is not
semi-decidable.
A further problem is that in default logic it is impossible to reason
about defaults. Consider the next modification of Example 3.1.7 about the
capacity to perform legal acts.
As Etherington (1988, pp. 84-5) remarks: "... applying any one default leaves
one other applicable. Applying any two, however, results in the denial of
the nonnormal part of the justifications of one of them". The problem is
that defaults cannot be left unapplied: Definition 4.1.2 requires that at
each step E_i+1 all defaults are applied whose prerequisite is in E_i
and whose justification is consistent with E.
Notwithstanding these problems, default logic is one of the best-known
formalizations of nonmonotonic reasoning, probably because of the intu-
itive clarity with which it can be presented and applied to common-sense
reasoning. For example, as will be shown in Chapter 6, it corresponds
rather naturally to the adversarial aspect of legal reasoning, consisting of
generating and comparing alternative arguments for and against a certain
conclusion.
For example, if a person holds that a bird can fly unless it is known to be a
penguin, then if the person observes that s/he does not know that the bird
Tweety is a penguin, s/he can infer that Tweety can fly. To make this notion
of not knowing or believing a fact precise, various formalisms have been
proposed which make use of modal operators interpreted in an epistemic
way. The best known is Moore's (1985) autoepistemic logic (AEL), which
arose out of an attempt to improve McDermott & Doyle's NML.
AEL tries to capture the beliefs 3 of an ideal introspective reasoner. The
idea is that on the basis of the premises or 'base beliefs' of the reasoner a set
of formulas is constructed which contains everything the reasoner believes,
not only about the world, but also about its own beliefs. The statements
expressing belief or ignorance are represented with the help of a modal
operator L, intuitively standing for 'it is known that'. The introspective
observations can subsequently be used to jump to conclusions. For example,
if the default 'birds fly' is represented as
(b ∧ ¬Lp) → f
and not knowing of some bird that it is a penguin is expressed by
¬Lp
It can now be explained why AEL is nonmonotonic. The reason is that the
question whether a statement Lφ is in a stable expansion T of a base set
B depends both on what is and on what is not in B. Assume a reasoner
considering the question whether a given bird flies provides the following
base beliefs.

B = {b, (b ∧ ¬Lp) → f}

Then ¬Lp is in T since p is not in T; furthermore, since B is included in
T and since T is deductively closed, by simple modus ponens also f is in
T. However, if p is added to the belief set B, then T contains Lp instead of
¬Lp and f will not be in T any more.
AEL has much in common with default logic. Among other things, stable
expansions are similar to extensions of default logic: for example, a set of
base beliefs can have multiple stable expansions. Consider the following set
of base beliefs with two atomic facts and two defeasible rules, representing
the Nixon diamond (cf. Reiter & Criscuolo, 1981).
Quaker ∧ ¬L¬Pacifist → Pacifist
Republican ∧ ¬LPacifist → ¬Pacifist
Quaker, Republican
Since neither Pacifist nor ¬Pacifist is known, there is an expan-
sion with ¬L¬Pacifist and therefore with Pacifist, but also one with
¬LPacifist and therefore with ¬Pacifist. On the other hand, there is
no expansion with both Pacifist and ¬Pacifist, since as soon as, for
example, the first default is applied, LPacifist holds, and the antecedent
of the second default is false.
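As an illustration of my own (a deliberate simplification, not Moore's actual fixed-point construction), the stable expansions of the Nixon diamond can be found by guess-and-check: guess which objective conclusions the reasoner believes, fix the corresponding L-literals, apply the two rules, and keep only the self-confirming guesses. Only the atoms relevant to this example are tracked; a real AEL expansion is the deductive closure of a full modal theory.

```python
from itertools import product

FACTS = {"Quaker", "Republican"}
# (objective guard, required L-literal, conclusion) for the two rules:
RULES = [({"Quaker"}, "~L~Pacifist", "Pacifist"),
         ({"Republican"}, "~LPacifist", "~Pacifist")]

def expansions():
    stable = []
    for bel_p, bel_np in product([False, True], repeat=2):
        # the introspective guess: which conclusions are believed?
        llits = {"LPacifist" if bel_p else "~LPacifist",
                 "L~Pacifist" if bel_np else "~L~Pacifist"}
        t = set(FACTS)
        for guard, llit, concl in RULES:
            if guard <= t and llit in llits:
                t.add(concl)
        # stability: the guess must match what was actually derived
        if bel_p == ("Pacifist" in t) and bel_np == ("~Pacifist" in t):
            stable.append(frozenset(t))
    return stable

exps = expansions()
assert len(exps) == 2
assert any("Pacifist" in e and "~Pacifist" not in e for e in exps)
assert any("~Pacifist" in e and "Pacifist" not in e for e in exps)
```

The two surviving guesses correspond exactly to the two expansions described above; the guess believing both Pacifist and ¬Pacifist fails the stability check, as does the guess believing neither.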
AEL also shares a drawback with default logic. The fixed-point definition
of a stable expansion is not constructive, for the same reason as for the
extensions of default logic: therefore AEL is, like default logic, computa-
tionally unattractive. Finally, it should be noted that, like default logic,
AEL has also been extended and adapted in various ways (see the overviews
of footnote 4).
AEL will not be discussed in the rest of this book; I confine myself
to briefly indicating the prima facie relevance of AEL for modelling legal
reasoning. In particular, it makes possible a natural formalization of rules
which attach a normative consequence to certain facts 'provided that the
law does not state otherwise', since in such norms the law introspects on its
own content. For example, Section 3:32-(1) BW (cf. Example 3.1.7 above)
could in propositional AEL be formalized as

Person ∧ ¬L¬Has_capacity → Has_capacity
Then, if the base set does not contain other norms and facts from which
¬Has_capacity can be derived, ¬L¬Has_capacity is in T and the rule can
be applied. Rules on legal evidence procedures, such as the principle that
any suspect is held innocent as long as his/her guilt has not been proven,
are also of an introspective nature. A simplified formalization of this rule
in AEL is

¬LGuilty → ¬Guilty
4.1.3. MINIMIZATION
NEGATION AS FAILURE
In the 70's in various areas of computer science the idea arose to interpret
the negation sign as 'failure to derive the opposite'. In one of these areas,
the theory of deductive databases, this was motivated by reasons of efficient
storage of data. It is often impractical to store all negative facts in a
database; the standard example is an airline database: if it had to contain all
connections which do not exist, it would become much too large. Therefore
it gives the answer 'there is no flight from Amsterdam to Miami' if it cannot
derive from the database that there is such a flight.
This kind of reasoning is correct under the Closed-World Assump-
tion (CWA), which says that all positive facts are actually stored in
the database. If this assumption is correct, we can safely say that a fact
which cannot be derived from the database is false. Obviously, this kind of
reasoning is nonmonotonic, since if additional positive facts become known,
the assumption turns out to be false and the negations of the additional
facts cannot be derived any more. Often the CWA is expressed as an
informal notion, but several attempts have been made to give a formal
account. Reiter (1978) proposed the following formalization.
Definition 4.1.5 (Reiter's closed-world assumption). Assume DB is a set
of first-order formulas and φ an atomic first-order formula. Then

DB ⊢CWA ¬φ iff not DB ⊢ φ
This works fine if all facts in the database are atomic: for example, if DB =
{Flight(Amsterdam, London)}, then DB ⊢CWA ¬Flight(Amsterdam,
Miami). However, if the language is extended, then problems can arise: for
instance, if DB = {Pa ∨ Qa}, then both DB ⊢CWA ¬Pa and DB ⊢CWA
¬Qa, but this is inconsistent with Pa ∨ Qa.
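Both the unproblematic atomic case and the disjunctive failure case can be checked mechanically. The following Python sketch (my own illustration; all atom names are invented) implements the naive CWA of Definition 4.1.5 for a tiny propositional database, using brute-force truth tables for entailment; formulas are restricted to atoms and binary disjunctions of atoms.

```python
from itertools import product

def entails(db, atom, atoms):
    """db |- atom, checked semantically over all valuations."""
    for vals in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        def holds(f):
            # a formula is either an atom (str) or a disjunction (pair)
            return v[f] if isinstance(f, str) else (v[f[0]] or v[f[1]])
        if all(holds(f) for f in db) and not v[atom]:
            return False
    return True

def cwa_negations(db, atoms):
    """The atoms whose negations the naive CWA sanctions."""
    return {a for a in atoms if not entails(db, a, atoms)}

# Atomic database: unproblematic.
assert cwa_negations(["flight_ams_lon"],
                     ["flight_ams_lon", "flight_ams_mia"]) == {"flight_ams_mia"}

# Disjunctive database {Pa v Qa}: the CWA sanctions both ~Pa and ~Qa,
# which together contradict the database itself.
assert cwa_negations([("Pa", "Qa")], ["Pa", "Qa"]) == {"Pa", "Qa"}
```

The second assertion exhibits exactly the inconsistency described in the text: neither disjunct is derivable on its own, so the naive CWA negates both, contradicting Pa ∨ Qa.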
To deal with this problem, more sophisticated formalizations of reason-
ing with the CWA have been developed. Circumscription, to be discussed
below, can be regarded as one of them. Now I discuss the largest fragment
of the language of first-order logic in which the above 'naive' CWA is
unproblematic. This fragment plays an important role in a new use of the
language of first-order logic in computer science: until the 70's it was used
only to say things about programs and programming languages, but then
the idea arose to use logic itself as a programming language. PROLOG was
one of the results of this development.
The basic fragment of logic programming is the part consisting only of
Horn clauses.
Definition 4.1.6 (Horn clause logic) A Horn clause is of the form

∀x1, …, xj. L1 ∧ … ∧ Ln → Lm

where all Li are atomic formulas, of which the variables are among x1, …, xj.
Normally the universal quantifier is left implicit in the notation. The
antecedent of a Horn clause is called the body and its consequent is called
the head. Atomic formulas Li are a border case of Horn clauses since they
can be regarded as a Horn clause with a valid formula as body: ⊤ → Li.
Negated atoms are also a border case of Horn clauses since ¬Li can be
written as a clause with a contradiction as head: Li → ⊥. A set of Horn
clauses is called a logic program, and will be denoted by a possibly indexed
Π.
One reason why Horn clauses are interesting for logic programming is that
for this fragment an efficient theorem prover has been developed, viz. so-
called SLD-resolution (cf. Lloyd, 1984). It is also known that if a logic
program contains only Horn clauses, Reiter's formalization of the CWA is
unproblematic, in that it does not give rise to inconsistencies. Accordingly,
The phrase 'smallest' Herbrand model is justified by the fact that the
minimal Herbrand models of Horn programs have been shown to be unique;
moreover, they have been shown to always exist. In such a case, where the
intended model is unique, sometimes the term canonical model is used.
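The least Herbrand model can be computed bottom-up by forward chaining to a fixpoint. The following Python sketch (an illustration I add here, restricted to the propositional case; atom names are invented) represents each clause as a body (list of atoms) and a head, and iterates the immediate-consequence construction until nothing changes.

```python
def least_model(program):
    """Forward chaining to the least fixpoint (propositional Horn clauses)."""
    model = set()
    changed = True
    while changed:
        changed = False
        for body, head in program:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

# Facts are clauses with an empty body (the border case T -> Li above).
prog = [([], "bird"),
        (["bird"], "has_wings"),
        (["has_wings", "bird"], "can_fly")]
assert least_model(prog) == {"bird", "has_wings", "can_fly"}

# Under the CWA, an atom absent from the least model counts as false:
assert "penguin" not in least_model(prog)
```

Because each iteration only adds atoms already forced by the program, the result is the smallest Herbrand model, and negation as failure over it is exactly the unproblematic naive CWA for Horn programs.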
Next the new entailment notion for Horn clauses can be defined.
4 I(a) and I(P) denote the interpretation of the term a and the predicate P.
Note that both derivations are classically invalid. Note finally that, again
because of the procedural interpretation of negation, it does not make sense
to allow negations in the head of a clause: 'if φ then ψ is not derivable' has
no sensible interpretation.
The first attempt to give a semantic account of negation as failure was
Clark's (1978) predicate completion, a method which I have already applied
in the formalizations in Section 3.1.3. Predicate completion can also be
regarded as an alternative formalization of the Closed-World Assumption.
The idea of the completion of a predicate P is to add extra information
to the premises expressing that the objects of which the premises say that
they have the property P, are the only ones which have this property. In
fact, this method makes sufficient conditions for deriving atoms with P
also necessary ones. More precisely, assume we want to complete an n-
place predicate P. Then if φ1, …, φn are the bodies of all the clauses with
P(x1, …, xm) as their head, then the following completion formula for P
is added to the premises (note that ¬ stands for classical negation):
The semantical reason for this is that, although the canonical model of
stratified programs is unique in being the canonical model, it is, as shown
above, not always the unique minimal model of the program, for which
reason it is not the intersection of all models of the program.
A second remark is that the significance of the idea of stratification is
not restricted to logic programming; also when applied to other nonmonotonic
formalisms it can identify suitable subsets of the language which are
computationally well-behaved (cf. 4.2 below; see also Etherington, 1988,
pp. 61,88 and Brewka, 1991a, pp. 91-93 for more comments on this issue).
Finally, it should be remarked that the stratified semantics for general
logic programs is not universal; it gives a semantic characterization for
stratified programs only. Because of this, research into the semantics of logic
programming has not stopped with stratification; over the past few years
many interesting developments have taken place. For example, Przymusin-
ski (1988) has developed a 'perfect model' semantics based on so-called
prioritized circumscription, which is a variant of another minimization
approach, to be discussed next. Furthermore, Gelfond & Lifschitz (1988)
have developed stable model semantics, which is a direct application of
autoepistemic logic: negation as failure is interpreted as ¬L and then the
stable models of general programs are defined as the stable expansions
restricted to this syntactic class. Research also continues for other reasons:
for example, some researchers have focused on extending the language
of logic programming with classical negation (Gelfond & Lifschitz, 1990;
Kowalski & Sadri, 1990); the reason is that in general logic programs all
negations stand for negation as failure, which for purposes of knowledge
representation is often too heavy a restriction. In Chapter 5 I return to this
issue.
CIRCUMSCRIPTION
The nonmonotonic nature of this idea is caused by the fact that the content
of the circumscription formula entirely depends on the theory: for this
reason, if the theory is extended, then the circumscription formula needs
to be changed to C', and deductive consequences of T + C need not hold
any more for T + C'.
More specifically, this technique works in the following way. If certain
predicates of a theory are regarded as expressing exceptional properties or
relations, then the circumscription formula circumscribes these predicates,
i.e. it says that only those objects have the exceptional property or relation
of which the theory says that they have it. For example, if the theory is
about criminal law and it is regarded as exceptional that criminal offenders
are mentally ill, then the predicate 'mentally-ill' can be circumscribed: then,
if nothing can be proved about an offender's mental illness, s/he is assumed
to be in mental health. This explanation of circumscription sounds like the
closed-world assumption, and circumscription has indeed been studied as
a way of formalizing the CWA. For the oldest and simplest form of cir-
cumscription, called predicate circumscription, the circumscription formula
is defined in the following way (taken from Brewka, 1991a).
Definition 4.1.10 (predicate circumscription). Let T be a first-order for-
mula (the conjunction of the premises of the theory) containing an n-place
predicate symbol P. Let T(P') be the result of replacing all occurrences of
P in T by the predicate variable P'. Then the predicate circumscription of
P in T is the following second-order formula

T(P) ∧ ∀P'[(T(P') ∧ ∀x(P'x → Px)) → ∀x(Px → P'x)]

where x = x1, …, xn.
To give a hint as to how this formula 'works', the idea is that if the
circumscribed predicate P is known to hold for a certain object c, i.e. if Pc
holds, then we should be smart enough to substitute for P'x the expression
x = c, after which we can deduce ∀x(Px ↔ x = c), which says that only
c has the property P. However, in this book I only focus on the semantics
of circumscription; for examples applying the circumscription formula the
interested reader is referred to the introductory texts mentioned above.
The idea behind the semantical appearance of circumscription is, instead
of giving truth definitions for new logical operators of the language, to
define a new notion of semantic entailment over the language of first-order
predicate logic. The general idea is the same as in the semantics of logic
programming: instead of checking all models of the premises to see if the
conclusion holds, only some of the models are checked, viz. those in which
the extensions of the predicates are minimal. There are, however, two main
differences with the semantics of logic programs. A technical difference is
that now ordinary models are considered instead of Herbrand models. A
difference which for our purposes is more important is that the notion of
a minimal model has to be refined: not all but only some of the predicate
extensions should be minimal, viz. the extensions of those predicates which
are circumscribed; in addition, some predicate extensions should be fixed,
and other ones should be allowed to vary among the minimal models. This
second difference has to do with the nature of default reasoning, in which
we can distinguish three kinds of information: first the 'input facts' of a
case, then the exceptional information, which should be assumed false if
possible, and finally the 'output facts', the conclusions we are interested
in. While, of course, the exceptional predicates should be minimized, the
input predicates should in all models stay fixed since they just describe
the facts of the case, and the output predicates should be allowed to vary,
since the answer should come as a 'surprise'. This leads to the following
generalized definition of a minimal model, which, in fact, corresponds to
variable circumscription (McCarthy, 1986), of which the syntactical form
is slightly more complicated than that of predicate circumscription.
Definition 4.1.11 (minimal model). Let T be a first-order theory and P,
Q and Z be disjoint sets of predicate letters such that their union is the set of
all predicate letters of the language of T. P denotes the set of circumscribed
predicates, Q is the set of fixed predicates, and Z contains the variable
predicates. Let M and M' be two models of T. We say that M ≤P,Q,Z M'
iff
1. M and M' have the same domain;
2. for all P ∈ P: I_M(P) ⊆ I_M'(P);
3. for all Q ∈ Q: I_M(Q) = I_M'(Q).
M is a minimal model of T_P,Q,Z iff M is minimal in the ordering ≤P,Q,Z,
i.e. iff there is no model M' of T such that M' ≤P,Q,Z M and not M ≤P,Q,Z
M'.
Note that in this definition nothing is required about the variable predicates
Z.
Definition 4.1.12 (minimal entailment for variable circumscription)
T_P,Q,Z ⊨min φ iff φ is true in all minimal models of T_P,Q,Z according to
the usual truth definitions.
Etherington (1988) has shown that the syntactic form of variable circum-
scription is complete with respect to this semantics: the minimal mod-
els of T_P,Q,Z are exactly all models of T + C, where C is the formula
circumscribing all predicates in P according to the definition of variable
circumscription.
The following examples show how this definition of minimal entailment
can be used to model defeasible reasoning.
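As an illustrative aside of my own (not one of the book's examples), Definition 4.1.12 can be brute-forced for a small propositional theory. Below, 'mentally_ill' is circumscribed (P), 'offender' is fixed (Q) and 'punishable' is variable (Z), echoing the criminal-law example above; all atom names are invented.

```python
from itertools import product

ATOMS = ["offender", "mentally_ill", "punishable"]

def theory(v):
    # premises: offender, and (offender & ~mentally_ill) -> punishable
    return v["offender"] and (not v["offender"] or v["mentally_ill"] or v["punishable"])

def models():
    return [dict(zip(ATOMS, vals))
            for vals in product([False, True], repeat=len(ATOMS))
            if theory(dict(zip(ATOMS, vals)))]

def leq(m1, m2, P, Q):
    # m1 <=_{P,Q,Z} m2: fixed predicates agree, P-extension no larger;
    # the variable predicates Z are not constrained at all
    return (all(m1[q] == m2[q] for q in Q)
            and all(m1[p] <= m2[p] for p in P))

def minimal_models(P, Q):
    ms = models()
    return [m for m in ms
            if not any(leq(n, m, P, Q) and not leq(m, n, P, Q) for n in ms)]

def min_entails(atom, P, Q):
    return all(m[atom] for m in minimal_models(P, Q))

P, Q = {"mentally_ill"}, {"offender"}
assert min_entails("punishable", P, Q)             # defeasibly: punishable
assert not all(m["punishable"] for m in models())  # but not classically
```

The contrast between the two assertions is the point: minimizing the exceptional predicate while letting the output predicate vary yields the defeasible conclusion that a first-order theory alone does not sanction.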
not all circumscriptive theories have minimal models, which means that not
every circumscriptive theory can be semantically characterized. Another
problem is the computational complexity of circumscription, which, among
other things, is caused by the fact that in general the circumscription
formula has to be second-order, because it has to quantify over predicates.
The problem is that second-order logic has unattractive proof-theoretic
properties: in particular, it is known to lack a sound and complete proof pro-
cedure. To overcome this, research has been done to reduce certain classes
of circumscription axioms to first-order formulas, after which standard
theorem-proving techniques can be used. Progress in theorem-proving
research has also been made in the second-order case, while for certain
classes of circumscriptive theories equivalences with logic programs have
been identified. In general, however, circumscription is computationally
still a tough nut to crack, also for another reason: in general there is no
algorithm for deciding which predicate expression has to be substituted for
P' in the circumscription formula; this requires real intelligence, which is
something logical theorem provers cannot provide.
because it might result in theories which can also be used for modelling the
way lawyers reason with inconsistent normative systems, which above was
identified as another source of nonmonotonic reasoning. This topic will be
discussed in Chapters 7 and 8.
Among the researchers modelling nonmonotonic reasoning as incon-
sistency tolerant reasoning are Brewka (1989; 1991a), Poole (1985; 1988)
and Roos (1991; 1992). Earlier, Alchourrón & Makinson (1981) were the
first to use similar ideas to model the way lawyers reason with hier-
archically structured normative systems. The general idea goes back to
Rescher (1964). If the set of premises turns out to be inconsistent, consistent
subsets are identified and ordered according to some preference relation:
in applications to nonmonotonic reasoning, generally the subset containing
the most specific information is preferred. The preferred subsets can be
used to define two nonstandard consequence relations, corresponding to
following from one of the preferred subsets or from all of them. Thus, as
in Definition 4.1.3 above, both a credulous and a sceptical consequence
notion are defined, but a difference is that now membership of a subtheory
only counts if the subtheory is preferred.
The following definition is taken from Brewka (1991a).
Definition 4.1.16 (weak and strong consequence)
- A formula φ is weakly provable from a set of premises T iff there is a
preferred subtheory S of T such that S ⊢ φ;
- A formula φ is strongly provable from a set of premises T iff for all
preferred subtheories S of T we have S ⊢ φ.
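A toy Python sketch of my own may help fix intuitions (it is a Brewka-style simplification, not his full definition): premises are literals arranged in priority levels, a preferred subtheory is built by greedily adding each level's literals while preserving consistency, and with bare literals S ⊢ φ reduces to membership.

```python
from itertools import permutations, product

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(lits):
    return all(neg(l) not in lits for l in lits)

def preferred_subtheories(levels):
    """Greedily extend level by level; different orders within a level
    can yield different maximal consistent sets."""
    found = set()
    for orders in product(*(permutations(level) for level in levels)):
        s = set()
        for level in orders:
            for lit in level:
                if consistent(s | {lit}):
                    s.add(lit)
        found.add(frozenset(s))
    return found

def strongly_provable(phi, levels):
    return all(phi in s for s in preferred_subtheories(levels))

def weakly_provable(phi, levels):
    return any(phi in s for s in preferred_subtheories(levels))

# level 1: an undisputed fact; level 2: conclusions of two conflicting defaults
levels = [["bird"], ["flies", "~flies"]]
assert strongly_provable("bird", levels)
assert weakly_provable("flies", levels) and weakly_provable("~flies", levels)
assert not strongly_provable("flies", levels)
```

The pattern mirrors Definition 4.1.3: the undisputed premise is strongly provable, while each of the two conflicting conclusions is only weakly provable, since each holds in one preferred subtheory but not the other.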
In this subsection I present the approaches of Poole and Brewka.
like those based on minimal model semantics, in a very natural way, but
others only after considerable modifications of the logic; notably Reiter's
original default logic cannot be captured in the framework of preferential
entailment. For this reason Shoham's notion of preferential entailment has
only partly turned out to be a unifying scheme for nonmonotonic logics.
4.2.3. CONNECTIONS
been settled whether the various logics model different aspects of reasoning,
or whether they are rivals in formalizing the same kind of reasoning.
question with which logicians in the field are concerned is not whether for-
mal theories of nonmonotonic reasoning can be called a logic, but whether
these theories correctly systematize rational patterns of reasoning with
incomplete information. I think that most of these logicians would not
object to using a different name for such theories. In fact, a parallel can
be drawn with the discussion whether norms are subject to logical rela-
tions (see above, p. 15). One of the positions which has been taken is Von
Wright's (1983, p. 132) opinion that norms are not subject to logical laws,
but to requirements of rational legislation. It seems that this is the same
inessential terminological issue as in the case of nonmonotonic reasoning.
A more interesting criticism is put forward by Israel (1980). He attacks
the basic assumption of researchers in nonmonotonic logic that the rational
patterns of defeasible reasoning can be expressed with the mathematical
tools of nonmonotonic logics. Israel recognizes that sometimes beliefs have
to be retracted when new beliefs are added, but he denies that this is
a matter of inference; instead, he regards it as a matter of rational belief
revision or acceptance, which according to him is one of the things in which
reasoning is more than logic (cf. Section 1.3.3 above). What is essential in
Israel's criticism is that he claims that for the question whether a belief
can be rationally held no proof-theoretic account can be given at all; the
best we can hope for is heuristic rules, and Israel claims that these can
be found in the literature on the philosophy of science rather than in the
logical literature.
Etherington (1988, p. 65) replies that, even if nonmonotonic reasoning
is regarded as an aspect of belief revision/fixation, it does not follow that
the rational patterns of this activity cannot be formalized at all with the
help of logical notions. In my own opinion the development in the field of
nonmonotonic reasoning since 1980 supports this reply to a considerable
degree, witness, for example, the systems reviewed in this chapter. Another
interesting reply is that of Reiter (1987, pp. 181-2), who makes a distinction
between two questions in so-called abductive reasoning, which is finding a
theory which can explain a certain observed fact (cf. Poole's framework
for hypothetical reasoning): the first question is what the space of possible
theories is, and the second one is what the best theories are in this space.
According to Reiter, the proper role of nonmonotonic logics is to specify the
space of possible theories, while the second question is governed by rational
criteria of kinds which go beyond the scope of current nonmonotonic logics.
However, Reiter goes on to suggest that at least some of these criteria might
be stated in formal terms. In fact, this is what Chapters 6, 7 and 8 of this
book will be about: in these chapters my aim is to show that not only the
first but also the second question, how to compare alternative theories, can
to a considerable degree be analyzed with the help of logical notions.
4.3.2. INTRACTABILITY
5.1. Introduction
Structural Resemblance
A first issue, discussed in Section 3.1, is which method best preserves the
separation of rules and exceptions in the legal natural-language sources.
Recall that I have defined this as an aspect of the result of the formalization
process: the main point is to avoid a situation where several natural-lan-
guage expressions are mixed in one knowledge-base unit; the situation that
one source unit is divided over several knowledge-base units is generally
not regarded as needing to be avoided.
Modularity
This issue was already briefly discussed in Section 3.1: the question is
whether a natural-language expression can be formalized without having
to consider the rest of the domain. In the literature on nonmonotonic
reasoning, modularity is one of the main aspects on which nonmonotonic
logics and formalization methods are compared (Touretzky, 1984, 1986;
Etherington, 1988, pp. 44-5; Loui, 1987, p. 106; and Poole, 1991, p. 295).
However, modularity, which is an aspect of the process of formalization,
is not always clearly distinguished from structural resemblance, which is
an aspect of the result of formalization. One contribution of the present
investigation will be to provide reasons why this distinction should be made.
The main disadvantage of non-modular translation is that it increases
the complexity of the task of validating and maintaining a knowledge base:
if a knowledge engineer has to consider many interactions between elements
of the knowledge base, then, particularly in large systems, s/he is likely to
make mistakes; this holds not only for designing a knowledge base, but
also for maintaining it, since with non-modular formalization adding new
items to the knowledge base, for example newly discovered exceptions,
causes the need to change old rules. In the rest of this book a formalization
method that makes it possible to add exceptions without having to change
the general rule will be said to support 'modularity of adding exceptions'.
Exclusiveness of Specificity
Is specificity the only possible criterion in determining the correct outcome,
or can other criteria also be applied? If so, can they only be applied
alternatively or also in combination? The importance of this issue is given
by the fact that in law several standards are combined: for example, in
Section 3.1.4 we have seen that not only the Lex Specialis principle but
also the Lex Superior and the Lex Posterior principles are used. This issue
will be discussed in detail in Chapter 8.
Implementation
Obviously the various methods and formalisms will be evaluated with re-
spect to the prospects for implementation in legal knowledge-based systems.
Expressiveness
This is of course a very important point of comparison between rival logical
systems intended to model the same kinds of reasoning. It involves questions
like: Can all distinctions between kinds of exceptions be made? Is the
exceptional conclusion the only one or the preferred one? Are alternative
answers possible in case of undecided conflicts? Does an exception to an
exception reinstate the general rule? Is a defeasible form of modus tollens
possible?
In the last section of this chapter I return to these points, and in Chapter 10
I shall use them again, in a comparison of the exception clause approach
with the methods developed in Chapters 6 to 8. The rest of the present
chapter is devoted to the explicit exception clause approach: Sections 5.2
to 5.5 will investigate its use in respectively default logic, circumscription,
Poole's framework and logic programming, after which Section 5.6 draws
these investigations together by, among other things, a schematic overview
of how the exception clause method can be applied in each of the non-
monotonic logics. It should be noted that Sartor (1991) has made similar
investigations, particularly with respect to logic programming (including
recent developments) and Poole's framework for default reasoning; some of
my observations concerning these formalisms are also made by Sartor.
normal defaults the default theory in this example has two extensions,
whereas intuitively only the extension with Maximum = 12 is intended. As
already noted there, one way of validating the exceptional conclusion is to
use seminormal defaults, in which the nonnormal part of the justification
stands for an exceptional condition: more specifically, a unique extension
can be enforced by adding the negation of the prerequisite of the more
specific default to the justification of the more general default. Thus adding
the exceptional fact to F or deriving it from another default blocks the ap-
plicability of the more general default, which results in only one extension,
with the exceptional conclusion. Below I investigate how this method can
deal with the various types of exceptions, first with specific and then with
general exception clauses. Most of these formalizations are applications to
the legal domain of ideas developed by others. Sometimes, however, these
ideas had to be combined in an (as far as I know) relatively new way: this
holds in particular for my treatment of soft rebutting defeaters.
Thus there are as many appl_n predicates as there are defaults. Another
way is to use McDermott's (1982) idea to encode the name of a rule as
a constant instead of as a predicate, which results in expressions of the
form appl(n, x1, ..., xm). As observed by Brewka (1991a, p. 36), the second
naming technique is more flexible, since it allows one to say things about
rules and to quantify over classes of rules, which in law has very natural
applications: consider e.g. Examples 3.1.1 and 3.1.2, in which undercutting
defeaters render not just one norm but a group of norms inapplicable.
Note that in both naming methods the appl predicate has places for
other arguments besides the rule name. This is because it should be possible
to express the (in)applicability of a default for some but not all individuals:
if, for example, Tweety is a penguin and Polly a raven, then the default
'Birds fly' should only be blocked for Tweety and not for Polly. To make this
possible the appl predicate must also have arguments for all free variables
occurring in the default. A complication is that not all defaults have the
same arity: as a consequence, in the second method we need not just one
appl predicate, but several such predicates appl_i with different arities i,
where i is the number of free variables occurring in the default plus one
(since there must be an extra place for the name of the default).
This problem is avoided by the third method, in which just one applicability
predicate is used and in which a tuple of terms (n, x1, ..., xm) is
replaced by one function expression n(x1, ..., xm), resulting in a formula
appl(n(x1, ..., xm)). Thus all applicability atoms are of arity 1, for which
reason formulas that quantify over rules need not be duplicated for every
arity. It is this naming method that I shall use in the rest of this book.
However, for readability I omit the outer pair of parentheses, which results
in appl n(x1, ..., xm).
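As a data-structure analogy (my illustration, not part of the original text), the third naming method amounts to packing the rule name and its arguments into a single term, so that every applicability atom has the same arity:

```python
# Three ways to name the applicability of a default (here d7, for Mary).
appl_method1 = ("appl_d7", ("Mary",))           # one predicate per rule
appl_method2 = ("appl2", ("d7", "Mary"))        # rule name as a constant;
                                                # the arity varies per default

def name(rule, *args):
    """Method 3: a single function term packing rule id and arguments."""
    return (rule, args)

appl_method3 = ("appl", (name("d7", "Mary"),))  # always one appl/1 predicate

# Quantifying over rules now needs no duplication per arity:
print(len(appl_method2[1]), len(appl_method3[1]))
```

Whatever the number of free variables of the named default, the method-3 atom has exactly one argument, which is why formulas quantifying over rules need not be repeated for each arity.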
Recall that the aim is to use the applicability clauses in such a way that
a unique extension is obtained. In what follows an applicability clause is
not always necessary to obtain one extension but, as will be explained below,
these clauses also support the modular addition of exceptions, and therefore they
will be added to every rule which is possibly subject to exceptions.
Hard Exceptions
Consider the following formalization of Example 3.1.1, in which section
6-(2) HPW is used as a general rule made inapplicable by section 2 HPW;
section 6-(2) declares void any contract clause giving an unfair profit to a
third person.

Lease_clause(x) ∧ Unfair_profit_to_third(x) : appl 6-(2)(x) ∧ Void(x)
---------------------------------------------------------------------
Void(x)
In saying that the rent act is not applicable to leases concerning short-term
usage, section 2 HPW is obviously an undercutting defeater; if it is
REPRESENTING EXPLICIT EXCEPTIONS 109
Now if d10 and d10' are applicable, both d9 and d9' are blocked. Note also
that in that case d7 is reinstated since, although Minor(Mary) still holds,
this fact does now not give rise to the inapplicability of d7, since d9 is made
inapplicable by d10.
This example can also be used to explain why using applicability clauses
supports modularity of adding exceptions (see also Etherington, 1988, p.
103; Brewka, 1991a, p. 117). d7 would also be blocked by d9/d9' if d9 and
d9' did not have an applicability clause, but in that case the addition of
the exception d10/d10' would make it necessary to change d9 and d9' by
giving them such a clause after all, since without it the resulting default
theory would have two extensions.
Undecided Conflicts
In default logic, how can the exception clause approach deal with undecided
conflicts between defaults? In the literature on nonmonotonic reasoning
the standard example is the so-called 'Nixon diamond' (see above, Section 4.1.2).
An example from the legal domain is a modification of Example 3.1.4
in which the agreement to duel on life-and-death is not regarded
as implying the purpose to kill, in which interpretation neither of the
two norms is an exception to the other. In the formalization methodology
developed in this section this is rendered by
Kill ∧ Intentional : Maximum = 15 ∧ appl d1
-------------------------------------------
Maximum = 15

Kill ∧ life_and_death_duel : Maximum = 12 ∧ appl d2
---------------------------------------------------
Maximum = 12

F: {life_and_death_duel → Intentional, Kill,
    life_and_death_duel, ¬(Maximum = 12 ∧ Maximum = 15)}
Δ: {d1, d2}
For reasons similar to those in Section 4.1.1 this default theory has two
extensions: one containing Maximum = 15 but not Maximum = 12, and
one the other way around. According to the credulous consequence notion
defined above both conclusions can be derived, while according to the
sceptical notion the disjunction Maximum = 12 ∨ Maximum = 15 is derivable,
since by virtue of the deductive validity of φ → (φ ∨ χ) this formula is
in both extensions. In conclusion, default logic nicely represents undecided
conflicts.
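The two-extension structure of this theory can be checked mechanically. The following Python sketch is my illustration, not part of the original text: it uses a propositional simplification of the duel example with normal defaults (the appl atoms are dropped, and Max12/Max15 are stand-in atom names), and a naive guess-and-check over sets of generating defaults, which suffices for this tiny theory but is not a general extension algorithm.

```python
from itertools import product, combinations

ATOMS = ["Kill", "Duel", "Intentional", "Max12", "Max15"]

def models(formulas):
    """All truth assignments over ATOMS satisfying every formula."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in formulas):
            yield v

def entails(premises, goal):
    return all(goal(v) for v in models(premises))

def consistent(formulas):
    return any(True for _ in models(formulas))

# Facts F: Kill, Duel, Duel -> Intentional, and the mutual
# exclusion of the two maximum penalties.
F = [
    lambda v: v["Kill"],
    lambda v: v["Duel"],
    lambda v: (not v["Duel"]) or v["Intentional"],
    lambda v: not (v["Max12"] and v["Max15"]),
]

# Normal defaults as (prerequisite, consequent) pairs;
# the justification equals the consequent.
d1 = (lambda v: v["Kill"] and v["Intentional"], lambda v: v["Max15"])
d2 = (lambda v: v["Kill"] and v["Duel"],        lambda v: v["Max12"])
DEFAULTS = [d1, d2]

def extensions():
    """Guess a set D of generating defaults; D yields an extension iff
    exactly the defaults in D have an entailed prerequisite and a
    justification consistent with the resulting theory."""
    for r in range(len(DEFAULTS) + 1):
        for D in combinations(DEFAULTS, r):
            base = F + [cons for _, cons in D]
            if not consistent(base):
                continue
            gen = [d for d in DEFAULTS
                   if entails(base, d[0]) and consistent(base + [d[1]])]
            if set(gen) == set(D):
                yield base

exts = list(extensions())
print(len(exts))                                   # 2
for E in exts:
    print(entails(E, lambda v: v["Max15"]),
          entails(E, lambda v: v["Max12"]))
```

One extension entails Max15 and the other Max12, and neither entails both, mirroring the two extensions of the text's formalization.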
5.2.3. EVALUATION
5.3. Circumscription
Again it is assumed that enough axioms are added to express the incompatibility
of the two consequents. If now only

Charles kills Henry ∧ Charles acts with intent

is given, then in all minimal models of these premises exc287(Charles, Henry)
is false, for which reason all those models make maximum penalty for
Charles is 15 years true. If, however, also

Charles and Henry duel on life-and-death
114 CHAPTER 5
is given, then the theory classically implies both maximum penalty for
Charles is 12 years and exc287(Charles, Henry). The reason is that then
in all models the antecedent of 154-(4) Sr and therefore also its consequent
is true, because of which with the additional axioms in all models
maximum penalty for Charles is 15 years is false; but then the material
implication 287 Sr can in all models only be true if its antecedent is false as
well, and since the first two conjuncts are given, ¬exc287(Charles, Henry)
must in all models be false.
The problem is that if the input atoms of the exception are true for, say,
the contract a with clause b and if y = f(a, b), (2') is equivalent to a
disjunction exc2'(a, f(a, b)) ∨ exc1(a, b), which has two minimal models,
whereas intuitively the minimal model which makes exc1(a, b) true and
exc2'(a, f(a, b)) false is the intended one, since there is no reason to make the
latter atom true: i.e. it is not supported by a rule with this atom as the head.
As explained in Chapter 4, this has motivated the development of so-called
prioritized circumscription, in which predicates or atoms of lower priority
are allowed to vary if predicates or atoms of higher priority are minimized:
thus the intended model can be obtained by giving exc2'(a, f(a, b)) priority
over exc1(a, b).
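The effect of prioritizing the minimization can be made concrete with a small Python sketch (my illustration, not from the original text; E2 and E1 are propositional stand-ins for exc2'(a, f(a, b)) and exc1(a, b)): parallel minimization of the disjunction leaves two minimal models, while minimizing E2 at higher priority than E1 leaves only the intended one.

```python
from itertools import product

# Propositional stand-ins: E2 ~ exc2'(a, f(a,b)), E1 ~ exc1(a,b).
ATOMS = ["E2", "E1"]

def theory(v):
    return v["E2"] or v["E1"]          # the disjunction exc2' v exc1

mods = [dict(zip(ATOMS, vs))
        for vs in product([False, True], repeat=len(ATOMS))
        if theory(dict(zip(ATOMS, vs)))]

def subset_leq(m1, m2):
    """Parallel minimization: m1's true atoms are a subset of m2's."""
    return all(m2[a] or not m1[a] for a in ATOMS)

def prio_leq(m1, m2, priorities):
    """Prioritized minimization: compare atoms in priority order."""
    for p in priorities:
        if m1[p] != m2[p]:
            return not m1[p]           # smaller = atom false
    return True

def minimal(mods, leq):
    return [m for m in mods
            if not any(leq(n, m) and not leq(m, n) for n in mods)]

print(len(minimal(mods, subset_leq)))  # 2: parallel minimization undecided
print(minimal(mods, lambda a, b: prio_leq(a, b, ["E2", "E1"])))
```

With priority E2 > E1 the unique remaining minimal model makes E1 true and E2 false, which is the intended reading of (2').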
Note that in order to capture the directional effect of the rules for, say,
Mary, the minimization of the exception atoms has to be prioritized as

exc7(Mary) > exc5(Mary) > exc4(Mary)

More on this will be said in the section on logic programming.
Undecided Conflicts
The final situation which has to be considered is the Nixon Diamond, for
which I use the same legal version as in default logic. If in Example 3.1.4
both penal rules are formalized with an exception predicate (which is the
only way of avoiding inconsistency) then they become

287: ∀x, y. x kills y ∧ x acts with intent ∧ ¬exc287(x, y)
     → maximum penalty for x is 15 years

154-(4): ∀x, y. x kills y ∧ x and y duel on life-and-death
     ∧ ¬exc154-(4)(x, y) → maximum penalty for x is 12 years
For the same reasons as in Example 4.1.14, this has two minimal models,
one with only exc287(Charles, Henry) true, and one with only
exc154-(4)(Charles, Henry) true; in both models the disjunction

maximum penalty for Charles is 15 years ∨
maximum penalty for Charles is 12 years

is the strongest formula which holds for the output predicate, for which
reason no more than this disjunction can be derived. Note that in circumscription
it is impossible to present both disjuncts alternatively as acceptable
beliefs given the facts and legal rules; in this respect circumscription is
less expressive than default logic, in which the disjuncts can be presented
as contained in different extensions.
EVALUATION
In conclusion, if general exception clauses are used, then all kinds of ex-
ceptions can also be formalized in circumscription. However, whereas it
is in default logic always possible to formalize in such a way that one
unique extension results, in circumscription priorities sometimes have to
be used to choose between multiple minimal models. This is particularly
necessary in two situations which are very common in the legal domain,
viz. situations with soft undercutting defeaters, and situations with hard
undercutting defeaters of which the input predicate is made true by a
defeasible rule. Unless general criteria for prioritizing minimization can
be expressed, this decreases the modularity of the formalization process.
This difference between default logic and circumscription is mainly caused
by the fact that unlike a default, which is an inference rule, the material
implication is not directional.
which clearly reveals that this is in fact the applicability clause approach.
Actually Poole's naming technique is the first method explained above
in Section 5.2.2, with a predicate appl_i, where i is the name of the default.
As remarked there, this technique is less flexible than denoting the name of
a rule by a function expression, which is the technique used throughout this
chapter. It can also be used without any difficulty in Poole's framework,
and because of its advantages I shall do so in the remainder of this section.
With this methodology, how can the various kinds of exceptions be
modelled in Poole's framework? Below I focus on scenarios instead of on
extensions, making use of Poole's (1988, p. 30) proof that every scenario is
contained in at least one extension. In all cases a default

(1) appl(x)

should be added to Δ. Now first it will be shown that undecided conflicts
between rules can be expressed by Poole's framework without difficulties:
assume that the next two conflicting rules are in F, together with the truth
of their 'input predicates' and the information that nobody is both a hawk
and a dove.
(2) ∀x. x is a quaker ∧ appl2(x) → x is a dove
(3) ∀x. x is a republican ∧ appl3(x) → x is a hawk
(4) Nixon is a quaker ∧ Nixon is a republican
(5) ∀x. ¬(x is a hawk ∧ x is a dove)
F ∪ {appl2(Nixon)} explains Nixon is a dove and F ∪ {appl3(Nixon)}
explains Nixon is a hawk, while both scenarios explain the disjunction
Nixon is a hawk ∨ Nixon is a dove. Note that because of the consistency
requirement for scenarios appl2(Nixon) and appl3(Nixon) cannot be used
in one scenario.
Consider next hard rebutting defeaters and assume that instead of the
premises {(2)-(5)} the following general rule is a member of F.

(6) ∀x. Ax ∧ appl6(x) → Bx

Thus, if

(8) Aa ∧ Ca
is added, in which the empty head stands for the false proposition. Constraints
can be represented without extra logical machinery by using another
transformation trick: for each rule with Px in the head an extra literal
~P*x is added to the body and vice versa, which blocks the simultaneous
derivation of two clauses Pa and P*a (see Kowalski, 1989 for applications
to representing legislation). However, as we will see below, with SLDNF
resolution this often results in looping programs.
Rebutting Defeaters
As just explained, the formalization of rebutting defeaters requires the use
of a translation trick, but apart from this no real problems arise; it is not
even necessary to encode the incompatibility of the various predicates. Hard
rebutting defeaters can be formalized in the following way.

(1) Px ∧ ~Qx → Rx
(2) Qx → R*x

If besides Pa also Qa holds, then (1) is blocked, so that only R*a can be
derived. Note that from {(1), (2)} it is impossible to derive, for any individual
a, Ra and R*a at the same time. The general idea is to add the negation of
the body of the exception to the general rule; if this body has more atoms
the general rule has to be split: for example, if (2) is changed into

(2') Qx ∧ Sx → R*x

then besides (1) also

(1') Px ∧ ~Sx → Rx

is needed.
Soft rebutting defeaters can also easily be expressed. The same method
as for a general rule and its exception can be used for the exception and
its exception. Here is a soft rebutting defeater of (1) in case of Qx, joined
with its own (hard) rebutting defeater in case of Tx.
(3) Qx 1\ rv Tx -+ R*x
(4) Tx -+ Rx
Then if Pa, Qa and Ta hold, (3) is blocked and with (4) we can derive Ra.
Note, however, that the application of (4) does not reinstate (1).
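The behaviour of rules (1), (3) and (4) under negation as failure can be simulated with a short Python sketch (my illustration, not from the original text). It uses naive forward chaining in which ~A succeeds when A is not derivable; this is adequate here because all negated atoms are base facts, so the program is stratified.

```python
# Ground instances of rules (1), (3) and (4) for individual a;
# each rule is (positive body, negated body, head).
facts = {"P(a)", "Q(a)", "T(a)"}
rules = [
    (["P(a)"], ["Q(a)"], "R(a)"),      # (1)  Px & ~Qx -> Rx
    (["Q(a)"], ["T(a)"], "R*(a)"),     # (3)  Qx & ~Tx -> R*x
    (["T(a)"], [],       "R(a)"),      # (4)  Tx -> Rx
]

def naf_closure(facts, rules):
    """Forward chaining where ~A succeeds iff A is not derivable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pos, neg, head in rules:
            if (all(a in derived for a in pos)
                    and all(a not in derived for a in neg)
                    and head not in derived):
                derived.add(head)
                changed = True
    return derived

out = naf_closure(facts, rules)
print("R(a)" in out, "R*(a)" in out)   # True False: (3) blocked, (4) fires
```

With Pa, Qa and Ta given, (1) and (3) are both blocked by their negated body atoms and only (4) fires, so Ra but not R*a is derived, as in the text.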
Undecided Conflicts
Here the first problems arise. Consider the Nixon Diamond, first without
exception clauses.

(5) Rx → P*x
(6) Qx → Px

If Rn and Qn are added, then both Pn and P*n can be derived without any
recognition of the contradiction. To prevent this, exception clauses should
be used. The first possibility is to use the integrity constraint

Px ∧ P*x →

in the way just described, viz. by changing (5) and (6) into

(5') Rx ∧ ~Px → P*x
(6') Qx ∧ ~P*x → Px
However, if now Rn and Qn are added, then with SLDNF resolution this
is a looping program (note also that the program is not stratifiable). To
avoid this, another way is needed to add exception clauses, in which the
method of representing hard rebutting defeaters is used as if both rules
were exceptions to each other:

(5'') Rx ∧ ~Qx → P*x
(6'') Qx ∧ ~Rx → Px

If now both Rn and Qn are added, then both rules are blocked, since the
negation-as-failure literals in the bodies fail: therefore nothing can be concluded at
all, which is the sceptical approach to undecided conflicts. In this respect the
logic of general programs is, when compared to the other formalisms, less
expressive: like circumscription it cannot present alternative conclusions,
and it is the only formalism in which not even the disjunction Pn ∨ P*n
can be derived, which formula, if P and P* are only incompatible but not
complementary (as in the 'hawk' and 'dove' version of the Nixon diamond),
at least carries some information.
Undercutting Defeaters
Hard undercutting defeaters can be formalized without any problem.
Rebutting Defeaters
Hard rebutting defeaters cannot be expressed in the same simple way as in
circumscription, which would be

(13) Px ∧ ~exc13(x) → Qx
(14) Rx → Q*x

The problem is that with Pa and Ra this yields both Qa and Q*a, without
recognition of the contradiction. Therefore, besides (14) an extra rule is
needed, viz.

(15) Rx → exc13(x)
Soft rebutting defeaters are formalized in the same way as in the other
approaches, viz. combined with an undercutting defeater: (13) is softly
rebutted by the next two rules
Undecided Conflicts
Again problems arise in case of the Nixon diamond, particularly if it is
represented as
(18) Qx ∧ ~exc18(x) → Px
(19) Rx ∧ ~exc19(x) → P*x

While with the use of specific exception clauses both rules are blocked if
Rn and Qn are added, now both exception clauses are negated by failure,
for which reason both Pn and P*n can be derived. The only way to obtain
the same result as above is to add two rules

(20) Qx → exc19(x)
(21) Rx → exc18(x)
which, however, is again the extreme sceptical approach, in which neither
alternative conclusions, nor a disjunctive conclusion can be derived.
The discussion so far has revealed that with respect to the present topic
logic programming with general programs mainly has difficulties with un-
decided conflicts between rules: sometimes two intuitively incompatible
conclusions are derived at the same time, and sometimes a looping program
results. Since the main cause of the problems is the inability of general logic
programs to express classical negation, it becomes interesting to consider
some recent developments in logic programming, consisting of adding classical
negation ¬ to the language. This results in so-called extended logic
programs, which are sets of ground clauses of the form

L1 ∧ ... ∧ Lm ∧ ~Lm+1 ∧ ... ∧ ~Ln ⇒ L0

(where n ≥ m ≥ 0). The crucial difference with general logic programs (cf.
Definition 4.1.9) is that now each Li does not have to be an atom; it may
also be a classically negated atom. Clauses with variables are interpreted
as a scheme for all their ground instances. Note that classical negation can
occur both in the body and in the head of a clause, and that not only
atoms but also classically negated atoms can be negated by failure. What
This program is transformed into (ground instances of) (5) and (6), and this
has a unique stable model, satisfying both Pn and P*n, without recognizing
the contradiction.
So it is necessary to use exception clauses, but this can be done in
several ways. If (22) and (23) are changed into

(22') Qn ∧ ~Rn ⇒ Pn
(23') Rn ∧ ~Qn ⇒ ¬Pn

then the translation results in (5'') and (6''). If now Rn and Qn are added,
then again both rules are blocked; like general logic programming, stable
model semantics does not conclude anything either.
The only way in which two answer sets/stable models can be obtained
is by letting a rule 'assume' its own head in its body:

(22'') Qn ∧ ~¬Pn ⇒ Pn
(23'') Rn ∧ ~Pn ⇒ ¬Pn

Note the similarity with normal defaults of default logic, of which the
justification is equal to their consequent. The general counterpart of this
extended program is {(5'), (6')}. Above we remarked that this program is
not stratifiable, but in the new semantics it has two stable models, one
satisfying that Nixon was a pacifist and the other satisfying that Nixon was
not a pacifist. Note that to obtain this result for every undecided conflict,
it is crucial that every clause of an extended program is formalized in the
above way, i.e. with the contrary of its head negated by failure in its body.
However, this solution still does not solve the procedural problems
with the Nixon diamond, since with SLDNF resolution {(22''), (23'')} is still a
looping program. For this reason much current research in logic programming
aims at developing alternative proof procedures for the new semantic
developments.
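The two answer sets of {(22''), (23'')} can be computed with the Gelfond-Lifschitz reduct. The following is a minimal Python sketch, my illustration rather than part of the original text; the string "-P" encodes the classically negated literal ¬Pn, and the NAF body plays the role of ~ in the rules.

```python
from itertools import combinations

facts = {"Q", "R"}                 # Qn and Rn
rules = [                          # (positive body, NAF body, head)
    (["Q"], ["-P"], "P"),          # (22'') Qn & ~(¬Pn) => Pn
    (["R"], ["P"],  "-P"),         # (23'') Rn & ~Pn    => ¬Pn
]
LITS = ["-P", "P", "Q", "R"]

def closure(facts, positive_rules):
    """Minimal closure of a negation-free program."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pos, head in positive_rules:
            if all(l in derived for l in pos) and head not in derived:
                derived.add(head)
                changed = True
    return derived

def stable_models():
    for r in range(len(LITS) + 1):
        for cand in map(set, combinations(LITS, r)):
            # Gelfond-Lifschitz reduct w.r.t. the candidate: drop rules
            # whose NAF body is contradicted, then drop the NAF literals.
            reduct = [(pos, head) for pos, naf, head in rules
                      if not any(l in cand for l in naf)]
            consistent = not ("P" in cand and "-P" in cand)
            if consistent and closure(facts, reduct) == cand:
                yield cand

print(sorted(sorted(m) for m in stable_models()))
# [['-P', 'Q', 'R'], ['P', 'Q', 'R']]
```

Exactly two stable models result, one containing Pn and one containing ¬Pn, matching the two extensions of the corresponding normal default theory.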
Well-founded Semantics
While the approaches based on stable semantics are credulous, another
proposed semantics for general and extended logic programming, well-founded
semantics (Pereira & Alferes, 1992), is sceptical: where the credulous
approach gives alternative conclusions, well-founded semantics draws no
conclusion. While in case of undecided conflicts this is a drawback, an
important advantage of well-founded semantics is that it assigns a canonical
model to every logic program. However, some have argued that well-founded
semantics loses some intuitive conclusions of stable semantics; for this
reason research continues (see e.g. Brewka, 1996; Prakken & Sartor, 1997a).
instance, the inference of ψ from {φ → ψ, ¬φ → ψ}. Another invalidity
is the inference of φ from ¬¬φ, which blocks some (but not all) contrapositive
inferences, as the reader can easily check. The effect of this
is that adding negation does not make the logic intractable. To model
nonmonotonic reasoning McCarty (1988c) introduces a failure operator,
to be used only in combination with intuitionistic negation. McCarty &
Cohen (1990) go on to advocate the use of explicit exception clauses; in
their opinion knowledge-engineering problems can be avoided by starting
to write a program without exception clauses and by then 'debugging' the
program by running various anticipated queries through the interpreter and
blocking the unintended inferences by adding exception clauses. For present
purposes it is particularly interesting that McCarty's system can alternatively
present contradictory conclusions, while, furthermore, all kinds of
exceptions listed in 5.1.2 can be formalized in a similar way as in 'classical'
logic programming. It remains to be investigated what the exact relationship
is between McCarty's ideas and the other new developments. An interesting
question is what the philosophical implications are of using intuitionistic
rather than classical logic.
5.5.4. SUMMARY
5.6. Evaluation
This extended version is defined in the following way. Besides the categories
F for the facts and Δ for the defaults Poole introduces a new category C for
so-called constraints, which cannot be used for constructing explanations,
but only for blocking them. This is ensured by adding to Definition 4.1.17
of a scenario the requirement that F ∪ D ∪ C is consistent. Now defeasible
modus tollens can be blocked for (1) by adding the formula

(3) ∀x. ¬Bx → exc1(x)

not to the facts but to the constraints. To see this, assume that ¬Ba is added
to the facts F: then F ∪ C implies exc1(a) and therefore the explanation
for ¬Aa becomes inconsistent with F ∪ C, since it contains ¬exc1(a). Note
that (3) can also be added if {(1), (2)} is a circumscriptive theory, but then
it is just another premise, for which reason it can also be used for other
derivations, while in Poole's system it is only meant for blocking derivations.
As in default logic, in logic programming no contrapositive inferences
are possible at all. For Horn clauses this is obviously caused by the absence
of negation in the language, and for general clauses by the fact that only
atoms occurring in the body of a clause can be negated. And contraposi-
tive inferences are also invalid in the above mentioned extensions of logic
programming with classical negation, since in these approaches the clauses
are interpreted as inference rules. However, this has nothing to do with
considerations of defeasible reasoning; the only reason why these inferences
are invalidated is to preserve completeness (or near completeness) of logic
programming's efficient proof methods. This, however, is philosophically
highly questionable: the only sensible reason to invalidate inferences like
modus tollens for a conditional seems to be that the conditional is interpreted
as a defeasible one.
In conclusion, while default logic and logic programming do not allow
for any form of defeasible modus tollens, circumscription and Poole's framework
allow for both validating and blocking defeasible modus tollens, by
providing options for formalizing the problem in different ways. However,
since this choice is left to the user, these systems do not give any insight
into the philosophical question whether defeasible modus tollens should be
valid.
Structural Resemblance
As already explained in Chapter 3, mixing source units in the formalization
can only be avoided if the exception clauses are general. However, it seems
that for soft rebutting defeaters in default logic and circumscription the
situation of a one-to-one correspondence between source units and knowledge-base
units cannot be obtained but only approximated, by giving the same
name to knowledge-base units based on the same source unit.
Modularity
In general the exception clause approach is non-modular. Sometimes it
is said to support modularity (e.g. by Etherington, 1988, pp. 102-3 and
Brewka, 1991a, p. 117), but this only holds for modularity of adding excep-
tions; the formalized exception still has to mention explicitly to which rule
it is an exception, for which reason the knowledge engineer must still be
aware of all possible interactions between the various rules and exceptions
(cf. Touretzky, 1986, p. 18; Poole, 1991, pp. 282, 295). This is different
only if natural-language exceptions themselves mention to which rules they
are an exception, as is sometimes the case in law. Furthermore, we have
seen that in case of soft undercutting defeaters and in case of chaining rules
circumscription needs prioritization of defaults, which leads to non-modular
formalization if this cannot be done on the basis of general criteria. Such
criteria are provided by logic-programming's concept of stratification, but
only if the theory does not contain undecided conflicts.
Implementation
Of course, logic programming with general programs offers the best prospects
for implementation, but this is mainly caused by its underlying idea of
restricting the language to a computable fragment. We have seen that if
the languages of the other formalisms are restricted in a similar way, then
under certain conditions they become formally translatable into general
logic programs. Furthermore, even if this does not hold, then at least
a general program can be designed which roughly preserves the style of
formalization and which validates more or less the same inferences. For
these reasons logic programming is more a way of implementing logics than
a logic itself. However, the restrictions under which these observations
hold are rather serious: they imply that conflicts between rules are either
resolved or blocked, and they give up the possibility to express classical
negation. For these reasons it is interesting to see what the results will be
of the ongoing investigations on the proof theory of logic programming.
Exclusiveness of Specificity
As noted several times, the aim of capturing specificity by way of the
exception clause approach is to obtain a unique extension in which the
exceptional conclusion holds. In fact, in this way specificity is expressed
implicitly in the way the exception clauses are assigned. Now if this results
in unique answers there is nothing to choose, for which reason there is only
room for other standards in case of undecided conflicts; if a conflict can
be solved by specificity, no other standard can solve it otherwise, and this
makes specificity exclusive.
Expressiveness
Specific exception clauses are of limited use, but general clauses can be
used for the formalization of most kinds of exceptions. Problems arise
with two kinds. Firstly, in case of undecided conflicts between rules logic
programming either does not recognize the conflict, or results in looping
programs, or has to be completely silent, and circumscription cannot
present alternative conclusions, but only one disjunctive conclusion.
Secondly, Poole's framework cannot represent soft undercutting defeaters,
which, among other things, prevents reinstatement of general rules; the
other formalisms do not have this problem. We have also seen that, since
the exception clause approach aims at obtaining unique answers, in no
formalism can the exceptional conclusion be presented as the preferred one
of conflicting alternatives. Furthermore, defeasible modus tollens is only
possible in circumscription and Poole's framework, although it depends for
each individual default on the chosen circumscription policy. Finally, the
language of logic programming is logically far less expressive than the other
formalisms, but this is the price of tractability.
Overall assessment
In sum, although the exception clause approach has turned out to work
rather well, if it has to be efficiently implemented it does so on a restricted
domain of application. The most important restriction is that it must
always either solve the conflict in favour of one of the rules, or block their
simultaneous application. For many applications this restriction will not
be a disadvantage, but for other ones it will be. In particular, attempts
to model the adversarial aspect of legal reasoning, which is one of the
main topics in current AI-and-law research, need ways of dealing with
undecided conflicts without resolving or blocking them. It is true that,
at least philosophically, default logic satisfies these requirements, but when
used for the exception clause approach it has still a restriction: without
undecided conflicts it results in unique extensions, for which reason it does,
as just explained, not leave room for other standards to resolve norm con-
6.1. Introduction
In this chapter I start the investigation of a second way of modelling rea-
soning with defeasible rules; this is the method of allowing for incompatible
solutions to a problem and choosing the exceptional one if it exists. In this
method exceptions can be left implicit: no use has to be made of exception
or applicability clauses, since exceptions are identified as a result of the
choice. As already mentioned in Chapter 3, when applied to legislation this
method corresponds to application of the legal collision rule Lex Specialis
Derogat Legi Generali.
In the previous chapters three main reasons were identified in favour
of modelling reasoning with exceptions as choosing between alternative
conclusions. The first has to do with undecided conflicts: it should be
possible to say that a problem has two alternative, incompatible solutions,
and that there is no reason to prefer one of them. Another reason is that
the choice approach is, just like the exception clause approach, a way of
preserving the separation of rules and exceptions in legislation, but with
the advantage that it is often closer to natural language, since it does not
need to use exception clauses when the natural-language text does not use
them. A third and very important reason is that a system based on choosing
between answers leaves room for other standards besides specificity, which
still might solve conflicts between rules of which no one is an exception
to the other, or which might even override the specificity considerations.
The law is one domain in which such other standards are used: conflicts
between norms in legislation are not only solved on the basis of the Lex
Specialis principle, but also, and even with higher priority, on the basis of
the time of enactment of a norm and on the basis of the general hierarchical
structure of the legal system. This observation is the reason why in this book
I do not go into the third method of dealing with exceptions, besides the
exception clause and the choice approach. This method, already mentioned
in Section 5.1, is to rcgard specificity as a principle of the semantics of
a logic for defeasible conditionals. Obviously, if the specificity critcrion is
modelled as a semantic principle, no other standards can override it.
Within the choice approach the choice can be made in two ways. The
first, which will be investigated in this chapter, is to use an explicit formal-
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
142 CHAPTER 6
be modelled by changing the logic or only by changing the way the logic is
used? The reason why this question particularly arises in the choice method
is that, as shown in Section 3.1 above, ways of choosing between conflicting
answers make the reasoning process nonmonotonic even if the underlying
logic is standard logic; and if inconsistency handling suffices to obtain
nonmonotonicity, why then also change the logic? The most outspoken
proponent of this approach is Brewka (1989; 1991a), and Poole (1985; 1988)
also claims that if his way of using logic is followed, no nonstandard logics
are needed (note, however, that Poole, 1988 does not restrict this claim
to the way he models the choice approach; in that paper Poole mainly
investigates the use of his framework for modelling the exception clause
approach). In short, in the inconsistency handling approach defaults are
not regarded as different linguistic entities, but as approximations to the
truth, which might be set aside in specific circumstances. The discussion
in this chapter of Poole and in the next chapter of Brewka also serves
to investigate the tenability of this paradigm in modelling nonmonotonic
reasoning.
In this chapter two questions have to be answered: how can the notion
of specificity be formally defined as a collision rule, and what is the general
formal context determining when there is something to choose? Section 6.2
investigates the answer of Poole (1985) to the first question, after which
Section 6.3 criticizes the way Poole uses his specificity definition in his
framework for default reasoning; this criticism results in an answer to
the second question in the form of an 'argumentation system', which is
presented in Sections 6.4 to 6.6. Finally, in Section 6.7 the choice method
with specificity is evaluated.
This categorization involves more than testing for a subset relation, but it
is still based on implicit intuitions of what it means that an argument is
more specific than another. Poole's (1985) "theory comparator" is the first
attempt to give a semantic definition of specificity, without any reference to
syntactic cases, for which reason it can, if correct, be used for determining
the soundness of any syntactic categorization of specificity.
Poole (1985) presents his formalization of the specificity principle against
the background of his general view on default reasoning presented in detail
in Poole (1988), which, as explained in Chapter 4, is that if defaults are
regarded as possible hypotheses with which theories can be constructed
to explain certain facts, nonmonotonic reasoning can be modelled without
giving up standard logic. My initial reason to study Poole's ideas was
their striking similarity (also observed by Gordon, 1989) to what I call the
'modelling disagreement' view on legal reasoning. Very roughly, this view,
which is particularly popular among Anglo-American researchers in the
field of AI and law (Rissland & Ashley, 1987; Gardner, 1987; Gordon, 1989;
Skalak & Rissland, 1992) says that lawyers do not try to argue for the
legally correct solution, if it exists at all, but for the solution which best
serves the client's interests. More specifically, the similarity is as follows:
the legal counterparts of Poole's explanations are arguments for a desired
solution of a case: certain facts must be obeyed by such arguments, for
example facts about the case at hand, or necessary truths such as 'a man
is a person' or 'a lease is a contract', but for the rest lawyers have available
a large body of conflicting opinions, rules and precedents from which
they can choose a coherent set of premises which best serves the client's
interests. Of course, lawyers do not only express disagreeing views but
also compare them; now for this Poole's framework also provides a formal
counterpart, viz. the possibility to make a choice between incompatible
explanations. In Poole (1985) he only investigates the specificity principle,
but in Poole (1988, p. 45) he remarks that other criteria might also be
used.
Recall that in Poole's framework a default theory is a pair (T, Δ), where
T is a consistent set of closed first-order formulas, the facts, and Δ a set
of first-order formulas, the defaults; furthermore, a scenario of (T, Δ) is
a consistent set T ∪ D, where D is a subset of the ground instances of Δ;
finally, an explanation of φ from (T, Δ) is a scenario of (T, Δ) implying
φ. Now, although Poole's (1985) main objective is to provide a semantics
for inheritance networks with exceptions, he does not restrict his specificity
principle to such networks, but defines it on the semantics of full first-order
predicate logic. Consider explanations Ai = T ∪ Di for φ and Aj = T ∪ Dj
for ¬φ. Informally, the idea is that Ai is more specific than Aj iff there is a
possible situation in which only Aj applies. To make this precise, the facts
PREFERRING THE MOST SPECIFIC ARGUMENT 145
6.3. Problems
The first problem was discovered by Loui & Stiefvater (1992). As explained
above, the general intuition behind Poole's specificity comparator is that
an argument A1 is strictly more specific than an argument A2 if there is a
possible situation in which only A2 applies, and there is no possible situation
in which only A1 applies. Although as a general intuition this seems sound,
the next example shows that the notion of a 'possible situation in which an
argument applies' should be defined carefully.
Example 6.3.1 Consider:
A1 = {t, s, t → r, s → q, (q ∧ r) → p}
A2 = {s, s → q, q → ¬p}
ℱc = {s, t}
ℱn = ∅
Intuitively, A1 should be strictly more specific than A2. However, according
to Poole's definition this is not the case, since there is a possible fact
making A1 explain p without making A2 explain ¬p: this possible fact
is t ∧ (r → q). The problem is that this fact 'sneaks' a new way of deriving
an intermediate conclusion into the argument by introducing a new 'link'
r → q, thereby intuitively making it a different argument: with the possible
fact A1 uses a different default to explain q than with the actual facts,
viz. t → r instead of s → q, and therefore it cannot be said that A1
itself applies in the possible situation. Note that these considerations are
of a syntactical nature, which indicates that Poole's aim to have a purely
semantical definition of specificity cannot be maintained.
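Poole's background notions of scenario and explanation, recalled at the start of this section, can be sketched with a small propositional approximation. The truth-table entailment check and the encoding of defaults as material implications over a fixed set of atoms are my simplifications, not Poole's first-order formulation:

```python
from itertools import product

ATOMS = ["s", "t", "r", "q", "p"]

# Formulas are represented as predicates over a truth assignment
# (a dict mapping atom -> bool).
def atom(a):    return lambda v: v[a]
def imp(f, g):  return lambda v: (not f(v)) or g(v)
def conj(f, g): return lambda v: f(v) and g(v)

def valuations():
    for vals in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, vals))

def consistent(fs):
    return any(all(f(v) for f in fs) for v in valuations())

def entails(fs, goal):
    return all(goal(v) for v in valuations() if all(f(v) for f in fs))

# Example 6.3.1 read propositionally: the facts plus the defaults of A1.
s, t, r, q, p = map(atom, ATOMS)
facts    = [s, t]
defaults = [imp(t, r), imp(s, q), imp(conj(q, r), p)]

scenario = facts + defaults      # consistent, hence a scenario
assert consistent(scenario)
assert entails(scenario, p)      # so it is an explanation of p
```

The facts alone do not entail p; only after adding the defaults does the scenario explain it, which is exactly the division of labour between T and D in Poole's framework.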
Even with a framework satisfying the just stated demands there will be
some problems left. As noted above, advocates of the inconsistency handling
approach to nonmonotonic reasoning stress the fact that this approach can
be based on standard first-order logic. However, there are strong reasons
to doubt the tenability of this claim: I shall now show that if the required
general framework of comparing arguments which has just been sketched
is combined with Poole's (and Brewka's) view that defaults can be
represented as material implications, then arguments can be constructed which
intuitively should not be possible at all.
Example 6.3.3 Consider the example of Bob having killed Karate Kid in
self-defence.
(1) x kills y → x is guilty of murder
(2) x kills y ∧ x acts in self-defence →
    ¬ x is guilty of murder
(3) x defends himself against KK → x acts in self-defence
ℱc: {Bob kills KK, Bob defends himself against KK}
Δ: {(1)-(3)}
Clearly, from this set of rules and facts ¬ Bob is guilty of murder
should be the preferred conclusion. However, Poole's framework allows us
to explain Bob acts in self-defence from ℱc ∪ {(3)}, but also ¬ Bob
acts in self-defence from ℱc ∪ {(1), (2)}. The reason is that (1) together
with Bob kills KK implies Bob is guilty of murder, which by modus
tollens together with (2) implies ¬ Bob kills KK ∨ ¬ Bob acts in
self-defence, which with the facts implies ¬ Bob acts in self-defence;
furthermore, since Bob kills KK is a possible fact making ℱn ∪ {(1), (2)}
explain ¬ Bob acts in self-defence without making ℱn ∪ {(3)} explain
Bob acts in self-defence, the argument against Bob acts in self-defence
is less specific, which would mean that given the premises there is
an irresolvable legal issue concerning whether Bob acted in self-defence and
this in turn would mean that the argument for ¬ Bob is guilty of murder
uses a non-preferred subargument and cannot be preferred.
However, in my opinion the argument for --, Bob acts in self-defence
is intuitively not constructible, for the following reasons. This argument is
based on the fact that, since (1) and (2) have contradicting consequents,
logically their antecedents cannot be true at the same time; however in
the inconsistency handling approach, as well as in legal reasoning, the very
purpose of using collision rules is to cope with situations to which conflicting
rules apply, and therefore it is strange to allow an argument based on the
fact that such a situation cannot arise. A system which makes the argument
against Bob acts in self-defence possible fails to recognize that it is
(1) and (2) that are in conflict and that this is the conflict that has to
be resolved. And since this argument is based on modus tollens, we must
conclude that rules that are subject to collision rules do not validate modus
tollens.
It must be admitted that, as already discussed above, Poole (1988, pp.
137-40) recognizes the invalidity of modus tollens for defaults as "a possible
point of view", and presents a method to block it. However, as we have seen
in Section 5.6.3, this method is optional: the choice whether to use it or not
must be made separately for each default and philosophically this is not
satisfactory: if modus tollens is regarded as invalid for defaults, this should
be expressed in their logic, by making them one-directional.
To summarize the main results of this section, we have, firstly, seen
that the choice approach to deal with exceptions should take into account
the step-by-step nature of argumentation and, secondly, that the claim of
Poole and others that this approach can be modelled without having to
change the logic cannot be maintained: defaults should be represented as
one-direction rules.
One main conclusion of the previous section was that even in a system
for comparing arguments defeasible statements need to be formalized with
a nonstandard, one-directional conditional operator. To meet this require-
ment, the underlying logical language of the system will be that of default
logic. However, it is important to realise that the only aspect of default
logic that is used by the system is its language; Reiter's notion of a default
extension is replaced by the last three notions just discussed, that of an
argument, of defeat among arguments and, most importantly, of the status
of an argument.
The language of default logic is used in the following way. With respect
to the facts, the idea is to represent the facts of the case at hand as the set
ℱc of contingent facts, and necessary truths such as 'a man is a person' or 'a
lease is a contract' as the set ℱn of necessary facts. With respect to default
statements the idea is to formalize them as non-normal defaults φ : ⊤ / ψ
(where ⊤ stands for any valid formula), which in the rest of this book will
be written as φ ⇒ ψ. Unconditional defaults will be represented as defaults
of the form ⇒ φ, which is shorthand for ⊤ ⇒ φ. The one-directional nature
of Reiter-defaults invalidates the formal construction of intuitively invalid
arguments.
At first sight, the use of non-normal defaults seems surprising, since in
Chapter 4 we have seen that non-normal default theories do not always
have extensions. However, the present system solves this problem, as will
be discussed in more detail below.
Next the language will be formally defined. Defaults will from now
on be called 'defeasible rules', or just 'rules'. Moreover, for notational
simplicity they are now regarded as object level formulas instead of as
(domain specific) inference rules, as they are in default logic. However, it
should be noted that they can still not be combined with other formulas,
i.e. they cannot be negated, conjoined, nested, and so on.
Definition 6.4.1 Let L0 be any first-order language.
- A defeasible rule is an expression of the form
  φ1 ∧ ... ∧ φn ⇒ ψ
  where each φi (1 ≤ i ≤ n) is a formula of L0. The conjunction at the
  left of the arrow is the antecedent and the literal at the right of the
  arrow is the consequent of the rule. A rule with open L0-formulas is a
  scheme standing for all its ground instances.
- The defeasible extension L1 of L0 is L0 extended with the set of all
  defeasible rules φ ⇒ ψ.
- A default theory is a set (ℱc ∪ ℱn ∪ Δ), where ℱc ∪ ℱn is a consistent
  subset of L0 and Δ is a set of defeasible rules.
To define some useful notation, for any rule r, ANT(r) and CONS(r)
denote, respectively, the antecedent and the consequent of r, while the
conjunction of all elements of ANT(r) is denoted by ANTCON(r).
Furthermore, for any finite set of rules R = {r1, ..., rn}, ANT(R) = ANT(r1) ∪
... ∪ ANT(rn); likewise for CONS(R), while, finally, ANTCON(R) is the
conjunction of all members of ANT(R).
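This notation can be mirrored directly in code. A minimal sketch, in which the choice to keep antecedent conjuncts as strings, and the names of the lifted set-level helpers, are my assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A defeasible rule  phi_1 & ... & phi_n => psi."""
    antecedent: tuple  # the conjuncts phi_i, kept as formula strings
    consequent: str    # psi

def ANT(r):    return set(r.antecedent)
def CONS(r):   return r.consequent
def ANTCON(r): return " & ".join(r.antecedent)

# Lifted to finite sets of rules, as in the text.
def ANT_R(R):    return set().union(*(ANT(r) for r in R)) if R else set()
def CONS_R(R):   return {CONS(r) for r in R}
def ANTCON_R(R): return " & ".join(sorted(ANT_R(R)))

r1 = Rule(("q", "r"), "p")   # the rule  q & r => p
r2 = Rule(("q",), "-p")      # the rule  q => -p
assert ANT(r1) == {"q", "r"}
assert ANTCON(r1) == "q & r"
assert CONS_R({r1, r2}) == {"p", "-p"}
```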
6.4.3. ARGUMENTS
that arguments do not contain unused defeasible rules: every defeasible rule
in an argument is both 'applicable' and 'applied'. This is because, since in
the present system reasoning about defeasible rules is impossible, including
such rules in an argument without using them does not serve any purpose.
The second property is that arguments do not contain circular chains of
rules; without this condition it would sometimes be possible to save inferior
arguments by extending them circularly.
[]
[q]
[⇒ p, p]
[⇒ p, p, q]
[q, q ⇒ r, r]
[⇒ p, p, q, q ⇒ r, r]
[⇒ p, p, q, q ⇒ r, r, (p ∧ r)]
[⇒ p, p, q, q ⇒ r, r, (p ∧ r), q ∧ r ⇒ s, s]
Of these arguments only [] and [q] are strict.
In the rest of this book I shall for readability of the examples often
only list the premises of an argument and leave the logical axioms and
derivation steps implicit. I also use the following notational conventions. As
noted above, the definitions below assume an arbitrary default deductive
system. Furthermore, unless specified otherwise, they also assume a fixed
but arbitrary default theory Γ = (ℱc ∪ ℱn ∪ Δ). ℱ will denote ℱc ∪ ℱn.
Furthermore, when I write Fc, Fn or D, I implicitly assume that Fc ⊆ ℱc,
Fn ⊆ ℱn and D ⊆ Δ. And when I say of an argument A that A =
[Fc, Fn, D], I mean that the premises of A are all and only elements from
Fc, Fn and D.
So far the notions have been fairly standard; now the adversarial aspects
of the system will be defined. First it will be defined when an argument
attacks, i.e. is a counterargument of another argument.³ This definition
does not yet include any way of checking which argument is better; it only
tells us which arguments are in conflict. At first sight, it would seem that
arguments attack each other iff they have contradictory conclusions, i.e.
iff two rules in the respective arguments are in a head-to-head conflict.
However, the necessary facts complicate the matter. To see this, consider
the following example.
Example 6.4.6 Assume we have the defeasible rules
d1: x has children ⇒ x is married
d2: x lives alone ⇒ x is a bachelor
with the necessary fact
fn: ∀x. x is married → ¬ x is a bachelor
and the contingent facts
³To prevent terminological confusion, it should be noted that Dung and Bondarenko
et al. use the term 'attack' in a different way, coming closer to my notion of 'defeat', to
be defined below. They do not have an explicit notion of what I call 'attack'.
The arguments [fc1, d1] and [fc2, d2] have no contradictory conclusions. Yet
intuitively they attack each other, since the necessary fact fn is a linguistic
convention, declaring the predicates is married and is a bachelor
incompatible. To capture this intuition, two arguments must be defined
as attacking each other iff they have conclusions that together with the
necessary facts are inconsistent.
Definition 6.4.7 (attack) Let A1 and A2 be two arguments. A1 attacks A2
iff A1 ∪ A2 ∪ ℱn ⊢ ⊥.
Let me now discuss and illustrate this definition. Note first that if A1
attacks A2, then A2 attacks A1. Next, every internally inconsistent argument
attacks not only itself but every other argument, since every well-formed
formula of L0 is implied by the inconsistent conclusions alone. Although this
might seem strange, it will not cause problems, since inconsistent arguments
will always come out as overruled (see further Section 6.5).
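Definition 6.4.7 reduces attack to a joint inconsistency test, which makes it symmetric by construction. A sketch with a pluggable inconsistency oracle; the toy oracle below, which hard-codes the married/bachelor convention of Example 6.4.6 instead of deriving it from ℱn, is my simplification:

```python
def attacks(A1, A2, Fn, inconsistent):
    """Def. 6.4.7: A1 attacks A2 iff A1 ∪ A2 ∪ Fn is inconsistent.
    `inconsistent` stands in for a first-order inconsistency check."""
    return inconsistent(set(A1) | set(A2) | set(Fn))

# Toy oracle: 'married' and 'bachelor' are declared incompatible, and an
# explicit pair 'X' / 'not X' is inconsistent.
def toy_inconsistent(S):
    if "x is married" in S and "x is a bachelor" in S:
        return True
    return any(("not " + f) in S for f in S)

A1 = ["x has children", "x is married"]    # conclusions of [fc1, d1]
A2 = ["x lives alone", "x is a bachelor"]  # conclusions of [fc2, d2]
assert attacks(A1, A2, [], toy_inconsistent)
assert attacks(A2, A1, [], toy_inconsistent)   # attack is symmetric
```

Note that neither argument derives the other's negation directly; the clash only appears once the linguistic convention is consulted, which is the point of building ℱn into the definition.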
Example 6.4.8 The first example shows that in order to attack an argument,
a counterargument can point its attack at the 'final' conclusion of an
argument, but also at one of its proper subarguments, thereby indirectly
attacking the entire argument. So if we have
d1: ⇒ f forged evidence e
d2: f forged evidence e ⇒ ¬ e is admissible evidence
d3: ⇒ ¬ f forged evidence e
we have that [d3] does not only attack [d1], but also [d1, d2]. And if we vary
this example into
d1: ⇒ f forged evidence e
d4: f is police officer ∧ f forged evidence e ⇒
    ¬ f is trustworthy
d5: f is police officer ⇒ f is trustworthy
d6: f is trustworthy ⇒ ¬ f forged evidence e
fc: f is police officer
then we have that [d1, d4] does not only attack [fc, d5] but also [fc, d5, d6].
And [fc, d5, d6] does not only attack [d1], but also [d1, d4].
The concept of 'attack/counterargument' is very important, since clearly
a minimum requirement on any system for comparing arguments is that if
two arguments are in conflict with each other, they cannot both be accepted
as justified. The present system should agree with this, in the sense that
the set of 'justified arguments', to be defined later, is free from conflicts.
To this end, I now define the following notion.
Now that we know which arguments are in conflict with each other, the
next step is to compare conflicting arguments. This step has three aspects:
picking out the rules that are relevant to a conflict, comparing the relevant
rules with respect to specificity and determining how this results in defeat
relations among arguments.
The new specificity definition will be of a syntactic nature, for two
reasons. Firstly, Example 6.3.1 shows that an intuitive specificity defi-
nition that is free from syntactic considerations is impossible. Moreover,
Example 6.3.3 has shown that it is important to identify the rules that are
responsible for a conflict. It is these rules that will be compared with respect
to specificity. Therefore we first have to define when a rule is relevant to a
conflict.
fact. Instead the conflict is between [d1, p, fc, ¬q] and [d2, q]. Now a crucial
difference with the first example is that the conflict pair of these arguments
is (∅, ∅): again this is because fc is now just a contingent fact and not a
linguistic convention, for which reason it cannot be said that it is the two
defeasible rules that are conflicting.
Let us now examine the three examples from Section 6.3 that revealed
problems of Poole's approach. They will be used throughout below to
illustrate the remaining components of the present system.
Example 6.4.11 Consider first the translation of Example 6.3.1.
A1 = [t, s, t ⇒ r, s ⇒ q, q ∧ r ⇒ p]
A2 = [s, s ⇒ q, q ⇒ ¬p]
ℱc = {s, t}
ℱn = ∅
There is only one conflict pair, viz. (q ∧ r ⇒ p, q ⇒ ¬p), since {p, ¬p} ⊢ ⊥.
Example 6.4.12 Consider next Example 6.3.2, in which the arguments
conflict on two issues.
A1 = [p, d1: p ⇒ q, d2: q ⇒ r, d3: r ⇒ s]
A2 = [p, r, d4: p ∧ r ⇒ ¬q, d5: ¬q ⇒ t, d6: t ⇒ ¬s]
ℱc = {p, r}
ℱn = {r ↔ t}
Since these arguments are in conflict on two issues, they have two conflict
pairs, viz. (d1, d4) and (d3, d6).
Example 6.4.13 Consider finally again Example 6.3.3, which motivated
our shift to a default logic language.
d1: x kills y ⇒ x is guilty of murder
d2: x kills y ∧ x acts in self-defence ⇒
    ¬ x is guilty of murder
d3: x defends himself against KK ⇒ x acts in self-defence
ℱc: {f1: Bob kills KK, f2: Bob defends himself against KK}
The two arguments attacking each other are
A1 = [f1, d1, Bob is guilty of murder]
A2 = [f2, d3, f1, d2, ¬ Bob is guilty of murder]
There is only one conflict pair, viz. (d1, d2). So the rules that are
responsible for the conflict are d1 and d2, which agrees with the conclusion of
Section 6.3.3.
Example 6.4.14 Finally I show an example in which more than two rules
are relevant. Assume that A1 = [d1: ⇒ Rab, d2: ⇒ Rbc], A2 = [d3: ⇒ Rca],
Perhaps the reader has been overwhelmed by the large number of definitions
in this section. Therefore I now briefly summarize what we have at this
point. We have the means to express strict information in the language of
standard logic and defeasible information as one-direction rules. Arguments
can be formed by applying monotonic inference rules to the premises, viz.
those of standard first-order logic plus a special modus ponens rule for
the one-direction rules. The simplest type of conflict between arguments is
when two defeasible rules have directly contradictory heads. Accordingly,
the basic way of comparing arguments is very simple, viz. checking which
of the two conflicting rules is more specific. However, two things complicate
the matter: indirectly contradicting rule heads because of strict information,
and multiple conflicts between two arguments. It is these two complications
that gave rise to the complex definitions: indirect rule conflicts induced a
complex definition of the relevant rules and their comparison, while multiple
conflicts induced a complex definition of defeat.
For the rest of this chapter all that is relevant is that the definitions
have, given a set of premises, produced a set of possible arguments and a
binary relation of defeat among these arguments; these two elements suffice
to define the notion of a justified argument. Since this notion is the central
part of the system, it will be discussed in a separate section.
We have seen that the final result of the preceding section was a binary
relation of defeat among arguments. However, this notion just expresses
the relative strength of two conflicting arguments. If we want to know
which arguments can be accepted as justified relative to the entire set
of possible arguments, we also need a definition that takes all ways into
account in which arguments can interact. For instance, even if A strictly
defeats B, this is insufficient to know whether A is justified and B is
not; B can be reinstated by an argument C that strictly defeats A, and
that itself survives all attacks, either by its own strength or in turn with
the help of other arguments reinstating C. Another important feature of
assessing arguments that should be captured is the step-by-step nature of
argumentation. Example 6.3.2 has shown that any definition of a justified
argument must satisfy the 'weakest link' principle that an argument cannot
be justified unless all its subarguments are justified.
In the present section a way of assessing arguments will be defined that
accounts for these features. It takes as input the set of all possible arguments
Below, the proponent's moves are denoted with Pi and the opponent's
moves with Oi. The proponent, which in this case will be the prosecutor,
starts the dispute by claiming that a certain piece of evidence e (perhaps
a bloody glove found on the driveway) proves that the suspect is guilty.
P hereby uses the default assumption d1 that any piece of evidence is
admissible unless shown otherwise.
P1: [d1: ⇒ e is admissible evidence,
d2: e is admissible evidence ⇒
    e proves guilt of suspect OJ]
Now the opponent, in this example the defence, has to defeat this argument.
o does so by arguing that the evidence is not admissible: the suspect is
black and the police officer who found the evidence is a racist since all
police officers in LA are racists, so it is to be expected that the police
officer has forged the piece of evidence.
O1: [fc1: f is police officer in LA,
d3: f is police officer in LA ⇒ f is a racist,
fc2: suspect OJ is black,
d4: f is a racist ∧ suspect OJ is black ⇒
    f forged evidence e,
d5: f forged evidence e ⇒ ¬ e is admissible evidence]
The proponent now has to counterattack with an argument that strictly
defeats O1 and thus reinstates P1. P does so by arguing that police officer
f is known to have a black wife, which implies that he is not a racist. Note
that thus P (strictly) defeats O's argument by (strictly) defeating one of
its proper subarguments.
time P stated the argument, P will also have a move the second time, so no
repetition by P can make P win a dialogue. Finally, the last two conditions
form the heart of the definition: they state the burdens of proof for P and
O.
In the following definitions Γ will be left implicit.
Definition 6.5.3 A dialogue tree is a tree of moves such that
1. Each branch is a dialogue;
2. If Player_i = P then the children of move_i are all defeaters of Arg_i.
The second condition of this definition makes dialogue trees candidates for
being proofs: it says that the tree should consider all possible ways in which
O can attack an argument of P.
Definition 6.5.4 A player wins a dialogue if the other player cannot move.
And a player wins a dialogue tree iff it wins all branches of the tree.
The idea of this definition is that if P's last argument is undefeated, it
reinstates all previous arguments of P that occur in the same branch of a
tree, in particular the root of the tree.
Definition 6.5.5 An argument A is justified iff there exists a dialogue tree
with A as its root, and won by the proponent.
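Definitions 6.5.3 to 6.5.5 amount to a winning-strategy check: P wins iff every defeater of the current argument can be answered with a strict defeater that P can in turn defend. A recursive sketch, in which the depth guard (replacing the no-repetition rule for O) and the lookup-function interface are my assumptions:

```python
def justified(A, defeaters, strict_defeaters, depth=0, limit=50):
    """P's burden of proof: answer every O-move (a defeater of A) with a
    strict defeater that is itself defensible. The recursion explores the
    dialogue tree implicitly; `limit` crudely cuts off infinite play."""
    if depth > limit:
        return False
    return all(
        any(justified(C, defeaters, strict_defeaters, depth + 1, limit)
            for C in strict_defeaters(B))
        for B in defeaters(A))

# Reinstatement: B defeats A, but C strictly defeats B and is unattacked,
# so C reinstates A.
defeat = {"A": ["B"], "B": ["C"], "C": []}   # defeaters of each argument
strict = {"A": [],    "B": ["C"], "C": []}   # strict defeaters of each
assert justified("A", defeat.get, strict.get)      # reinstated by C
assert not justified("B", defeat.get, strict.get)  # C is unanswered
```

An argument with no defeaters is vacuously justified (the `all` over an empty set), which matches the game: O simply cannot move.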
Formal Properties
Let us now discuss some formal properties of the set of justified argu-
ments. The proofs can be found in Prakken & Sartor (1997a). The dialogue
game was originally developed in Prakken & Sartor (1996b; 1997a) as a
dialectical proof theory for a semantical characterisation of the set of jus-
tified arguments, formulated in terms of a fixed point operator, in Prakken
& Sartor (1996a). This semantics is an instance of a general semantical
framework for defeasible argumentation, developed by Dung (1995) and
Bondarenko et al. (1997), and which will be discussed below in Section 9.2.1.
The semantics of the present system is sceptical; to each set of premises it
assigns a unique set of (justified) consequences. This set is guaranteed to
exist, which is an improvement over default logic, in which not all default
theories have extensions. In Prakken & Sartor (1997a) it was shown that
the dialectical proof theory is sound with respect to this semantics, i.e. that
every argument that is justified according to the dialectical proof theory,
is also justified according to the fixed point semantics. Moreover, it was
shown that, on the condition that every argument is attacked by at most
a finite number of arguments, the proof theory is also complete, i.e. every
semantically justified argument is also dialectically justified. It also holds
that the set of justified arguments is conflict-free, as desired.
Finally, how about the step-by-step nature of comparing arguments, i.e.
the 'weakest link' principle that an argument cannot be justified if not all
We have seen that the two arguments A1 for s and A2 for ¬s have two
conflict pairs, viz. (d1, d4) and (d3, d6), and that A2 wins the earlier and A1
the later conflict. In Section 6.3.2 I argued that the earlier conflict should
be dealt with before the later, and that therefore A2 should be justified.
This is indeed the outcome of the definitions. Here is the proof that A2 is
justified. P starts with the argument A2 for ¬s.
But P can now reply with the first part of P1, which is exactly that part
of P1 which wins the earlier conflict.
P2: [p, r, d4: p ∧ r ⇒ ¬q]
And P has no way of strictly defeating O1, so P has run out of moves:
the weakest link principle prevents A1 from being justified since A1 has a
subargument which is not justified.
6.5.3. ILLUSTRATIONS
Above I have already illustrated several aspects of the system. As for the
two requirements stated at the beginning of Section 6.5.1, reinstatement
was illustrated with the OJ Simpson example in that section. For instance,
P2 reinstates P1 by strictly defeating O1. In fact, in any dialogue tree
won by P its first move is reinstated by all its subsequent moves. The
same example illustrates the step-by-step nature of argumentation, since
P2 strictly defeats O1 by doing the same for a subargument of O1.
I now discuss the remaining two examples in Section 6.3 that revealed
problems of Poole's original approach. Consider first Example 6.3.1.
A1 = [t, s, t ⇒ r, s ⇒ q, q ∧ r ⇒ p]
A2 = [s, s ⇒ q, q ⇒ ¬p]
ℱc = {s, t}
ℱn = ∅
It is easy to see that A1 is justified, since there is only one conflict pair,
viz. (q ∧ r ⇒ p, q ⇒ ¬p), and the first default is strictly more specific
than the second.
Next I show that the new system correctly handles Example 6.3.3.
d1: x kills y ⇒ x is guilty of murder
d2: x kills y ∧ x acts in self-defence ⇒
    ¬ x is guilty of murder
d3: x defends himself against KK ⇒ x acts in self-defence
ℱc: {f1: Bob kills KK, f2: Bob defends himself against KK}
The justified conclusion is that Bob is not guilty. The proof is very simple:
P starts with
P1: [f2, d3, f1, d2, ¬ Bob is guilty of murder]
And O has already run out of moves. The construction of the unwanted
argument against Bob acts in self-defence is prevented by the
one-directional nature of defeasible rules. Therefore there is only one possible
Now P has run out of moves. It is easy to see that the proof that f did not
forge evidence fails in the same way.
Next the inference rule DMP of default modus ponens (see Definition 6.4.2)
has to be redefined. The idea is that in applying DMP the assumptions of
a defeasible rule can be ignored; if an assumption is untenable, this will be
reflected by a successful attack on the argument.
Definition 6.6.2 DMP is an inference rule of the form
d: φ0 ∧ ... ∧ φj ∧ ∼φk ∧ ... ∧ ∼φm ⇒ φn,   φ0 ∧ ... ∧ φj
-----------------------------------------------------------
φn
The second problem concerns the interaction between the two kinds of
attack. If one argument attacks the conclusion of another (clause (1) of
Definition 6.6.4) and the other attacks an assumption of the first (clause
(2) of Definition 6.6.4), which argument should defeat which? I shall answer
this with a discussion of the following default theory.
d1: q ∧ ∼p ⇒ ¬p
d2: ⇒ p
fc: q
Note that d1 is strictly more specific than d2. [d2] attacks an assumption of
[fc, d1] but [fc, d1] and [d2] also have contradictory conclusions, while [fc, d1]
uses a strictly more specific rule. Still I think that [d2] should strictly defeat
[fc, d1]. The reason is that this is the only way in which neither of the two
rules has to be rejected (in an intuitive sense). If d2 is accepted, then the
assumption of d1 does not hold, so besides d2 also d1 can, as a rule, be
accepted. By contrast, accepting d1 implies rejecting d2, in the sense that
its antecedent is believed but its consequent is not. In fact, I here employ the
logical counterpart of the legal principle that the law should be interpreted
as coherently as possible.
I now incorporate this intuition in the new definition of defeat among
arguments. Since defeat now comes in two kinds, it is convenient to first
define the two kinds separately, before combining them. The first kind is just
the 'contradicting a conclusion' kind of defeat of Definition 6.4.17. I repeat
its definition and now call it 'rebutting' an argument. The second kind of
defeat, contradicting an assumption, I call 'undercutting' an argument.
Definition 6.6.5 Let A1 and A2 be two arguments.
- A1 rebuts A2 iff
1. A1 ∪ A2 ∪ ℱn ⊢ ⊥; and
2. A2 is defeasible; and
(a) A1 is strict; or
(b) for some conflict pair (C1, C2) of (A1, A2) it holds that C2 is
not strictly more specific than C1.
- A1 undercuts A2 iff A1 ∪ ℱn ⊢ ¬φ for some assumption φ of A2.
Note that, as desired, undercutting is independent of specificity.
Finally, these two notions can be combined in the new definition of
defeat. As desired, it makes an attack on an assumption stronger than an
attack on a conclusion: if one argument undercuts the other, and the other
does not undercut but only rebuts the first, the first defeats the second but
the second does not defeat the first.
Definition 6.6.6 Let Al and A 2 be two arguments. Then Al defeats A 2
iff
PREFERRING THE MOST SPECIFIC ARGUMENT 175
- A1 undercuts A2; or
- A1 rebuts A2 and A2 does not undercut A1.
We say that A1 strictly defeats A2 iff A1 defeats A2 and A2 does not defeat
A1.
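In programming terms Definition 6.6.6 is a small combination of the two relations. The following sketch is my own encoding, not part of the formal system; it takes rebuts and undercuts as black-box predicates on arguments:

```python
def defeats(a1, a2, rebuts, undercuts):
    # Definition 6.6.6: A1 defeats A2 iff A1 undercuts A2, or
    # A1 rebuts A2 while A2 does not undercut A1
    return undercuts(a1, a2) or (rebuts(a1, a2) and not undercuts(a2, a1))

def strictly_defeats(a1, a2, rebuts, undercuts):
    # A1 strictly defeats A2 iff A1 defeats A2 and A2 does not defeat A1
    return (defeats(a1, a2, rebuts, undercuts)
            and not defeats(a2, a1, rebuts, undercuts))
```

With an argument A that undercuts B while B merely rebuts A, `defeats(A, B)` holds but `defeats(B, A)` does not, which is exactly the asymmetry between attacks on assumptions and attacks on conclusions described above.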
Note that the dialogue game does not have to be changed: its rules apply
whatever the source is of the defeat relations.
6.6.2. ILLUSTRATIONS
Note that the necessary fact fn ensures that different rules that intuitively
are in a head-to-head conflict on whether someone has legal capacity
indeed make arguments that use these rules attack each other.
If one wants to assume by default that rules are valid, then each rule d
should receive a strong antecedent valid(d); then d can only be used if the
premises give rise to an argument that d is indeed valid.
6.7. Evaluation
How should the choice method with specificity be evaluated with respect
to the points listed in Section 5.1.3? That this method supports structural
resemblance has already been shown in Chapter 3, and implementational
aspects will be discussed in Chapter 10 (this chapter will also contain a
detailed comparison between the choice approach and the exception clause
approach). With respect to the other points the following remarks can be
made.
An aspect of specificity which is often stressed in the literature is that
it would support a modular formalization process. However, in my opinion
this advantage does not hold in general. For example, in Example 6.2.3
the choice between (4) and (4') depends on the question whether bankrupt
conservatives are regarded as by default poor or not: i.e. the relation of
the default 'Conservatives are rich' to the default 'Bankrupt people are
poor' must be considered, as well as to all other defaults about poor or rich
persons. Example 6.4.16 about duels on life-and-death provides a genuine
example from the legal domain. The reason why it cannot be formalized in
a modular way is that the decision to add the third default to Δ is the result
of anticipating a possible conflict between 287 Sr (d1) and 154-(4) Sr (d2):
the second legal rule is meant as an exception to the first, but without the
third rule d2 is syntactically not a special case of d1; however, anticipating
possible conflicts is not a modular way of formalizing. Another example
from Dutch law is the question whether the regulations on hidden defects
are a special case of the regulations on breach of contract (cf. Snijders, 1978,
pp. 50-52, who also discusses some other examples). In general the problem
is that in the legal domain such questions are often real legal issues, whose
solution cannot be read off from the letter of the law, since the law is
often ambiguously formulated; for this reason the syntactical occurrence of
specificity in the knowledge base is often no more than the formalization
of the outcome of a debate on such an issue: first it must be determined
what the outcome of the debate was; only then can the appropriate
formalization be chosen. Obviously, the presence of such issues in a domain
completely frustrates the goal of modular formalization of a piece of legislation.
With respect to the other points mainly positive remarks can be made.
The system developed in this chapter has been shown to be sufficiently
expressive to represent all kinds of exceptions listed in Section 5.1.2, in a
way which is often closer to natural language than the exception clause
approach, since defeasible rules can be formalized both with and without
exception clauses. A slight drawback with respect to expressiveness is that,
since the system is based on default logic, no defeasible modus tollens is
possible. A very important advantage of the present approach is that it
allows for other standards for comparing arguments besides specificity, since
our dialogue game leaves room for any way of determining defeat. In the
rest of this book I shall aim to exploit this advantage.
To summarize the results of this chapter, an important conclusion has
been that the choice approach to dealing with exceptions needs a general
system for constructing and comparing arguments, defining the various
ways in which arguments can interact, and determining when a choice
should be made. Another conclusion was that such a system should be
defined on top of a non-standard logic: defeasible rules should be formalized
as one-direction conditionals. A system satisfying these requirements has
been developed and its combination with an improved specificity definition
has been shown to be adequate for representing all types of exceptions.
Moreover, the application of the system is not restricted to dealing with
exceptions, since in leaving room for any standard for comparing arguments
it provides a general theory for reasoning with inconsistent information. In
the next two chapters I shall use the system in this way: in Chapter 7 I
apply it to modelling reasoning with inconsistent but ordered premises, and
in Chapter 8 I investigate how it can deal with multiple sources of premise
orderings and with reasoning about such orderings.
CHAPTER 7

REASONING WITH INCONSISTENT INFORMATION
7.1. Introduction
This chapter is devoted to a logical analysis of reasoning with inconsistent
information. For legal philosophy and AI-and-law this subject is very
relevant since, as noted in Chapter 3, the information with which a lawyer is
confronted is often contradictory. However, the relevance of this chapter
is not restricted to the legal domain; in other domains of common-sense
reasoning people are also often confronted with conflicting sources of in-
formation. The problem faced by a logical analysis is that according to
classical logic inconsistent premises are of no use at all, since classically
from a contradiction everything can be derived. Therefore, standard logic
is, if not inappropriate, at least insufficient to model nontrivial reasoning
with inconsistent information.
How then to account for this kind of reasoning? Some logicians have
developed alternative logics, in which contradictions, although they are still
false, do not have the devastating consequences which they have in classical
logic (relevance logics, paraconsistent logics). Others, on the other hand,
have attempted to retain the semantics and proof theory of classical logic by
embedding it in a larger framework. The main idea is to allow the premises
to be ordered and to use the ordering in such a way that one or more
consistent subsets result, to which the rules of classical logic can be applied;
thus the nonstandard consequences of the inconsistent set are the standard
consequences of the resulting set or sets (Rescher, 1964; Alchourron &
Makinson, 1981; Brewka, 1989; Roos, 1992). Whatever the merits are of
the first approach, the second seems to be closer to legal reasoning, in
which, as we have seen above, sometimes collision rules are used to choose
between incompatible conelusions. The purpose of this chapter is to analyze
the formal aspects of this kind of reasoning with knowledge that is subject
to collision rules. Although the basic idea is rather simple, its full formal
analysis turns out to be surprisingly subtle. In fact, a major aim of this
chapter is to argue that all existing attempts fall short in certain respects.
In reasoning with inconsistent information two situations can be dis-
tinguished. The first occurs when a contradiction prevents the nontrivial
derivation of conclusions which intuitively have nothing to do with the
conflict. For such conclusions there will be no consistent subsets of the
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
they apply the orderings, these theories will not be discussed separately.
All approaches use orderings on formulas, for which reason some
notational conventions have to be explained: x ≤ y means that y is preferred over
x; x ≈ y is shorthand for x ≤ y and y ≤ x; and x < y abbreviates x ≤ y and
y ≰ x. Unless stated otherwise, the ordering relation ≤ is in this chapter
assumed to be a partial preorder, i.e. a relation which is transitive (if x ≤ y
and y ≤ z then x ≤ z) and reflexive (x ≤ x). For legal applications it might
be asked whether more properties of the ordering should be required: in
particular, whether it should be linear (x ≤ y or y ≤ x or x ≈ y).
In other words: should a hierarchical relation be defined for every pair of
norms? Whether undefined relations between moral or legal norms exist is,
of course, not a formal question but a question of ethical or legal theory,
for which reason a formal theory of hierarchical reasoning should leave this
possibility open; if it does so, it then at least includes total orderings as
a special case, so that it is left to those who apply the system whether to
regard a certain 'real life' hierarchy as total or not.
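These conventions can be sketched executably; the encoding below is mine and simply closes a given set of ≤ pairs under reflexivity and transitivity, so that unrelated pairs (a non-linear hierarchy) remain possible:

```python
def make_preorder(pairs, elems):
    # reflexive-transitive closure of the given <= pairs: a partial preorder
    leq = {(x, x) for x in elems} | set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(leq):
            for (c, d) in list(leq):
                if b == c and (a, d) not in leq:
                    leq.add((a, d))
                    changed = True
    return leq

def approx(leq, x, y):
    # x is approximately equal to y: x <= y and y <= x
    return (x, y) in leq and (y, x) in leq

def less(leq, x, y):
    # x < y: x <= y but not y <= x (y strictly preferred over x)
    return (x, y) in leq and (y, x) not in leq
```

A norm with no stated relations to the others remains incomparable, which is exactly the 'undefined hierarchical relation' that a formal theory should leave open, with total orderings as a special case.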
Probably the first to study the formal aspects of nontrivial reasoning
with inconsistent legal information were Alchourron & Makinson (1981).
Their theory provides two things: a way of comparing arbitrary sets of
norms on the basis of an ordering of their elements, and a notion of
nontrivial consequence for reasoning with inconsistent premises. Although they
restrict their investigations to normative reasoning, nothing in their theory
prevents its application to other domains.
Sets of premises are ordered according to their minimal elements.
Definition 7.2.1 A set X is called at least as exposed as a set Y (X ≤ Y)
iff for all y ∈ Y there is an x ∈ X such that x ≤ y. And X is strictly more
exposed than Y (X < Y) iff X ≠ ∅ and for all y ∈ Y there is an x ∈ X
such that x < y.
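Definition 7.2.1 translates directly into code; in this sketch (names mine) the element ordering ≤ is given as a set of pairs, and x < y is derived as x ≤ y without y ≤ x:

```python
def at_least_as_exposed(X, Y, leq):
    # X <= Y iff for every y in Y there is some x in X with x <= y
    return all(any((x, y) in leq for x in X) for y in Y)

def strictly_more_exposed(X, Y, leq):
    # X < Y iff X is nonempty and for every y in Y some x in X has x < y
    lt = lambda x, y: (x, y) in leq and (y, x) not in leq
    return bool(X) and all(any(lt(x, y) for x in X) for y in Y)
```

For instance, with two norms related by low ≤ high, the set {low} is strictly more exposed than {high}, while {high} is not even at least as exposed as {low}.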
With the help of this definition the following 'weak' and 'strong'
consequence notions are defined.
Definition 7.2.2 Consider a partially ordered set of premises X.
- Y ⊆ X indicates φ iff Y ⊢ φ and for all Z ⊆ X such that Z ⊢ ¬φ:
Y ≰ Z.
- Y ⊆ X determines φ iff Y ⊢ φ and for all Z ⊆ X such that Z ⊢ ¬φ:
Y > Z.
Note the similarity with Definition 4.1.16 of weak and strong consequence;
the main difference is that the present definition refers to arbitrary subsets
instead of maximal consistent subsets.
Belief revision, or theory revision, (cf. Gärdenfors, 1988) is about the dy-
namics of 'belief sets': it studies the definition of and requirements for the
process of revising a set of propositions with respect to a certain proposition.
Examples of such requirements are that the contraction of a formula φ
from a theory should result in a theory not implying φ, and that revisions
should be minimal in that as much as possible of the information should
be preserved. The theory of belief revision has been applied to several
problems: for instance, to testing scientific hypotheses, to counterfactual
reasoning, and to updating databases; in this section its application to
deriving nontrivial conclusions from inconsistent information will be inves-
tigated. The reason why belief revision is a possible candidate framework for
this purpose is that one way of characterizing the nontrivial consequences
of inconsistent premises is defining them as the standard consequences of
the set resulting from contracting ⊥ from the premises (an idea suggested
by Alchourron & Makinson, 1981). The belief revision method which seems
best suited for these purposes consists of identifying all maximal consistent
subsets of the premises (for which I use Brewka's (1989) term subtheories),
comparing them with respect to some ordering relation, and taking the
intersection of all sets that are maximal in this ordering; to the result of this
operation simply a monotonic logic can be applied. This is called 'partial
meet contraction' (Gärdenfors, 1988, pp. 59, 80) applied to arbitrary sets,
i.e. to sets that need not be closed under deductive consequence. Note again
the similarity with Definition 4.1.16 above.
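The construction can be illustrated with a toy sketch. Purely for illustration it assumes that premises are either literals (with '-' marking negation) or Horn-style rules, and it replaces classical consequence by forward chaining; it is not a full theorem prover:

```python
from itertools import combinations

def closure(premises):
    # literal facts plus forward chaining over rules; a rule is a pair
    # (frozenset_of_antecedent_literals, consequent_literal)
    facts = {p for p in premises if isinstance(p, str)}
    rules = [p for p in premises if not isinstance(p, str)]
    changed = True
    while changed:
        changed = False
        for ants, con in rules:
            if ants <= facts and con not in facts:
                facts.add(con)
                changed = True
    return facts

def consistent(premises):
    # inconsistent iff the closure contains a literal and its '-' complement
    facts = closure(premises)
    return not any(("-" + f) in facts for f in facts)

def subtheories(premises):
    # maximal (under set inclusion) consistent subsets of the premises
    prem = list(premises)
    cons = [frozenset(c) for k in range(len(prem) + 1)
            for c in combinations(prem, k) if consistent(c)]
    return [s for s in cons if not any(s < t for t in cons)]
```

For the premises p, p → q and ¬q this yields three two-element subtheories; in a subtheory that lacks p the conclusion q is no longer derivable.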
It should be noted that if belief revision is used to model nontrivial
reasoning with inconsistent premises, it seems better not to define it on sets
which are closed under logical consequence, i.e. on theories. The reason is
that otherwise conclusions based on rejected premises sometimes stay valid,
which is not what we want: if, for example, we have the premises p and
p --+ q and for some reason p is rejected, then q should also go, since it is
The reason that F is the best is that it is the only subtheory not containing
the lowest norm (11), while of the other subtheories all subsets inconsistent
with F contain (11) (in fact, the only such subsets are the subtheories
themselves): and therefore all these sets are strictly more exposed than F.
In conclusion, the only maximal element of the set of subtheories is F. What
does it say about the consequences of Bob's snoring? Since (11) is not in F,
(9) cannot be used to derive Bob may be removed: the conclusion, then, is
that Bob cannot be removed from the library. Note that the same outcome
is obtained by Definition 7.2.2: F determines ¬Bob may be removed, since
its minimal element, which is (9), is strictly higher than the minimal element
of all sets implying the opposite, which in all these sets is (11).
However, I want to argue for a different view on the example, a view
which is more in line with Chapter 6. On the one hand, clearly there is a
potential conflict between the rules (9) and (10), since when a professor is
held to be misbehaving a choice must be made about which of them takes
precedence. On the other hand it seems natural to say that there is no dis-
pute that Bob is misbehaving: just as in Section 6.3.3 it can be said that Bob
misbehaves is merely an intermediate conclusion for making (9) applicable,
for which reason (11) is irrelevant to the conflict about whether Bob should
be removed from the library. Instead of preferring ¬Bob may be removed
by rejecting the conclusion that Bob is misbehaving, it seems more natural
to make a choice between the norms which are certainly in conflict with each
other, (9) and (10): and since these norms are of equal level, the outcome
should be that the conflict cannot be resolved.
This point of view can also be illustrated with a legal example of the
same form but with a different ordering of the premises.
Example 7.2.7 Consider a provision (5 GW) of the Dutch constitution
declaring every person to have the right to submit a written request to the
proper authority, an (imaginary) case law decision stating that a request by
fax is a written request, and an (also imaginary) act stating that prisoners
do not have the right to submit requests to a proper authority. I leave
the ordering relation between case law decisions and legislation undefined.
Finally, in order to make the example closer to most legal systems, I assume
that the Dutch constitution is higher than statutes; in fact in Dutch law
their relation is more complicated.
(13) x is a fax → x is written
(14) x is a request ∧ x is written → a proper authority
must accept x
(15) x is a prisoner's request → ¬a proper authority
must accept x
(16) My-letter is a fax ∧ My-letter is a prisoner's request
∧ ∀x. x is a prisoner's request → x is a request
(16) > (14), (14) > (15)
For reasons explained above with Example 4.1.21 open formulas are schemes
for all their ground instances.
The subtheories are
I: {(13, 14, 15)} J: {(13, 14, 16)} K: {(13, 15, 16)} L: {(14, 15, 16)}
In the ordering of the subtheories I is lower than the other three, since
it implies the negation of (16), and since according to Definition 7.2.5
{(16)} ≻ {(13, 14, 15)} since (16) > (14, 15). However, because of the
undefined status of (13) the other sets are incomparable, which according
to Definition 4.1.16 makes all of them weakly imply their consequences, and
this means that both a proper authori ty must accept My-letter and its
negation are only weakly provable. Again in my opinion a different analysis
seems closer to legal reasoning: I do not think many people will accept that
the enactment of a statute norm like (15) can block the application of the
constitutional norm just because the antecedent of the constitutional norm
is provided by a case law decision; again a more natural view seems to
be that it is (14) and (15) which are in conflict, and since (14) is higher
than (15), this view leads to the outcome that a prisoner's request must
be accepted. In general terms Definition 7.2.5 has the following undesirable
consequence: if a norm later in an 'argument chain' takes part in a conflict,
then norms earlier in the chain can only give rise to justified intermediate
conclusions if they are higher than the lowest norm involved in the conflict.
7.3. Diagnosis
As already briefly indicated, the problem with the above methods is that
they regard too many premises as relevant to the conflicts about whether
Bob may be removed from the library and whether the request may be
submitted: in both approaches it is all members of the (classically) minimal
inconsistent set which are regarded as relevant, whereas Examples 7.2.6
and 7.2.7 have illustrated that it is only a subset of this set which matters:
informally, only conditional rules with conflicting consequents are relevant
to the conflict. At first sight this point seems to be rather ad hoc, in that
it only pertains to the specific form of these examples. Nevertheless, it
Definition 7.4.2 For any two sets R and R' of defeasible rules,
R < R' iff for some r ∈ R and all r' ∈ R' it holds that r < r'.
The intuitive idea behind this definition is that if R < R', R can be made
better by replacing some rule in R with any rule in R', while the reverse is
impossible.
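Definition 7.4.2 is a one-liner once the strict order on individual rules is given (encoding mine):

```python
def ruleset_less(R, R_prime, lt):
    # Definition 7.4.2: R < R' iff some rule in R is below every rule in R'
    return any(all(lt(r, rp) for rp in R_prime) for r in R)
```

With priorities d1 < d3 < d2, for example, {d1, d2} < {d3} holds because d1 is below d3, while the converse fails: d3 is not below every rule of {d1, d2}.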
Finally, in the definition of rebuttals (see Definition 6.6.5) this ordering
takes the place of the specificity ordering.
With this new notion of rebutting, Definition 6.4.17 again yields a defeat
relation of arguments, to which the rules of the dialectical proof theory of
Section 6.5 can be directly applied. Since this proof theory is independent
of any ground for the defeat relations, all its properties still hold.
Here is how the new definitions deal with the critical examples.
In essence, what has been developed in the last two chapters is a system for
comparing conflicting arguments, i.e. a system for defeasible argumentation.
This has resulted in a nonmonotonic notion of logical consequence, viz.
that of being a justified conclusion of an ordered default theory. Below this
notion will be denoted by Γ |∼a φ, in words 'Γ argumentatively implies φ'.
In this section some general features of the system and consequence notion
will be discussed.
This property follows from the deductive closure of the set of justified
formulas, which, as remarked above in Section 6.5.2, can be proven along
the lines of Prakken (1993).
Another property that is often regarded as essential is cumulativity.
For our argumentation system this property has the following form: |∼a is
cumulative iff for any default theory (F ∪ Δ, ≤)
d1: ⇒ p
d4: r ⇒ ¬p
f1: p
On the basis of this default theory all of p, q and r are justified conclusions.
This is because the only counterargument to the arguments supporting these
conclusions is [f1, d1, d2, d3, d4], which is incoherent and therefore overruled
by the empty argument.
However, if we add the justified conclusion r to the facts, as
f2: r
then in the new default theory none of p, q and r is justified any more.
The reason is that now there is a different argument for ¬p, viz. [f2, d4], which
is not incoherent and is therefore able to defeat the argument [f1, d1, d2, d3].
Is the lack of cumulativity a drawback of the present system? In my
opinion it is not; I think (following Vreeswijk, 1993a, pp. 82-6) that such
examples clearly illustrate why in general cumulativity is not a desirable
property of nonmonotonic consequence notions. What the above shows is
that in evaluating arguments it is important to take the derivation history of
the conclusions into account: with the initial default theory the conclusion
r is based on an intermediate conclusion p, for which reason the subsequent
argument for -,p is incoherent; however, in the extended default theory r
does not depend on p any more, which makes the new argument for -,p
coherent. In other words, in the second default theory the argument for ¬p
does not use the 'same' r as in the first default theory.
Next it will be discussed how the system deals with sceptical and credulous
reasoning (cf. Chapter 4). Recall that sceptical consequences are those con-
clusions which on the basis of the given information cannot be challenged,
while credulous consequences are those conclusions that hold in at least
one possible state of affairs admitted by this information. Clearly sceptical
reasoning is captured by the notion of a justified argument and credulous
reasoning by the notion of a defensible argument. When two arguments are
in an irresolvable conflict, neither of them is justified but they can still be
further pursued as alternative defensible points of view.
The ability of the system to capture both sceptical and credulous reasoning
is closely related to a particular, 'moderate' view on sceptical reasoning,
which is not shared by all approaches in the literature. An alternative,
more extreme account of sceptical reasoning can be found in Horty et
al.'s (1990) system for inheritance networks with exceptions, and also in
the work of Nute, e.g. (1992). This account is that a sceptic, if faced with a
conflict which is irresolvable, refuses to further pursue any of the arguments
involved in the conflict. In other words, extremely sceptical reasoners 'cut
off' an argument not only when it is overruled, but also when it is merely
defensible.
Example 7.5.1 The following variant of our OJ example illustrates this
kind of scepticism.
d1: ⇒ f forged evidence e
d2: f forged evidence e ⇒ ¬e is admissible evidence
d3: ⇒ ¬f forged evidence e
d4: ∼¬e is admissible evidence ⇒ e proves guilt
Assume that ≤ only contains ≈ relations. Here is how in the present system
the proof for 'e proves guilt' fails.
P1: [d4: ∼¬e is admissible evidence ⇒ e proves guilt]
O1: [d1: ⇒ f forged evidence e,
d2: f forged evidence e ⇒ ¬e is admissible evidence]
Makinson & Schlechta (1991) call arguments like [d1, d2] "zombie paths":
although this argument is not fully alive, since it is not justified, neither is
it totally dead, since it can still prevent other arguments from being justi-
fied. Makinson & Schlechta argue that a suitable theory of inheritance or
argumentation should allow for such an intermediate status of a path or
argument. The present system does so, in the form of defensible arguments.
However, extremely sceptical systems do not have this intermediate cate-
gory; in such systems [d1, d2] is, in our terms, not allowed to prevent [d4]
from becoming justified, since it has a subargument that is in an irresolvable
conflict with another argument, viz. [d3]; and an 'extreme sceptic' then
refuses to pursue the argument any further. As a result, [d4] is in such
systems justified, even though it is attacked by a 'zombie', i.e. an argument
that is not justified but that is not worse than any counterargument.
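The contrast between moderate and extreme scepticism can be made concrete with a small Dung-style computation of the grounded (sceptical) extension, here applied to a hand-coded attack graph for Example 7.5.1; the encoding is mine, with A standing for [d1, d2], B for [d3] and C for [d4]:

```python
def grounded_extension(args, attacks):
    # least fixpoint of the characteristic function: an argument is in the
    # extension iff each of its attackers is itself attacked by the extension
    def defended(a, S):
        attackers = [b for (b, target) in attacks if target == a]
        return all(any((s, b) in attacks for s in S) for b in attackers)
    S = set()
    while True:
        new = {a for a in args if defended(a, S)}
        if new == S:
            return S
        S = new
```

In the irresolvable case (A and B attack each other, A attacks C) the grounded extension is empty, so C is not justified: the 'zombie' A blocks it even though A itself is only defensible. If instead the conflict were resolved so that only B attacks A, both B and C would become justified.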
Granted that the different accounts of scepticism reflect a 'clash of
intuitions', I still think that the moderate account of the present system
is justified by its underlying general ideas. Recall that the system regards
an argument as justified only if, given the premises and the way they are
ordered, no doubt at all can be cast on the argument (unless, of course,
new premises are added). Now in our example doubt can be cast on the
argument [d4], since it has a counterargument that is not weaker than any
argument attacking it. I think that if such a situation were to arise in
practice, a judge would feel compelled to determine whether d1 < d3 before
deciding in favour of P.
express a possible point of view admitted by the premises. Any formula that
in all such extensions is the conclusion of some argument can be regarded
as a justified conclusion, even if it is not in all extensions supported by the
same argument, i.e. even if it is not the conclusion of a justified argument.
In the literature various ways of defining such extensions can be found,
all differing in subtle ways (see e.g. Dung, 1995 and Bondarenko et al., 1997,
which will be discussed below in Section 9.2.1). However, for present pur-
poses their differences do not matter; it suffices to present just one such
definition; the main idea also applies to the alternatives. The following
definition is taken from Prakken & Sartor (1997a).
Clearly, any ordered default theory has at least one defensible extension.
Then the notion of a justified conclusion can be redefined as follows.
E1: {[d1], [d1, d2]}
E2: {[d3], [d3, d4]}
says that sometimes the combination of two reasons pro is instead a reason
con, and by the last observation, which says that reasons are sometimes
combined but sometimes not. These observations suggest that combining
reasons is not a matter of logic but of formulating premises. Whenever
the premises contain more than one rule with the same consequent, it should
be decided whether in addition a third rule, conjoining their antecedents
and with the same consequent, should also be added to the premises.
Plausible as this solution may seem, its formalization requires a subtlety.
To see this, let us first give a direct formalization of the idea. Suppose we
have the rules
d1: It rains ⇒ ¬I go jogging
d2: It is hot ⇒ ¬I go jogging
The idea is that it is a decision of content whether also
d1/2: It rains ∧ It is hot ⇒ ¬I go jogging
should be in the premises. Moreover, in general the strength of the combined
rule does not depend on the strengths of d1 and d2. So if the premises
also contain
d3: It is Sunday ⇒ I go jogging
then even if both d1 > d3 and d2 > d3, it may still be that d1/2 < d3. So
not only combining rules, but also assigning the priorities to the combined
rule is a matter of decision, to be taken in each individual case.
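With the jogging rules above and one sample priority decision (d1 > d3, d2 > d3, d1/2 < d3) the induced defeat relations among the single-rule arguments can be read off mechanically; the string encoding below, with '-' marking negation, is mine:

```python
# priorities as decided for this case: (x, y) means x has higher priority than y
prio = {("d1", "d3"), ("d2", "d3"), ("d3", "d1/2")}

# conclusions of the single-rule arguments
concl = {"d1": "-jog", "d2": "-jog", "d1/2": "-jog", "d3": "jog"}

def strictly_defeats(x, y):
    # [x] strictly defeats [y] iff their conclusions conflict and x > y
    conflict = concl[x] == "-" + concl[y] or concl[y] == "-" + concl[x]
    return conflict and (x, y) in prio
```

This yields exactly the pattern that [d1] and [d2] each strictly defeat [d3], while [d3] strictly defeats only [d1/2]; rules with the same conclusion, such as d1 and d2, do not defeat each other.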
However, upon reflection this proposal does not seem completely adequate.
Assume the priorities are such that the combined rule is weaker than
its component rules: d1 > d3, d2 > d3, d1/2 < d3. This induces the following
defeat relations.
[d1] strictly defeats [d3]
[d2] strictly defeats [d3]
[d3] strictly defeats [d1/2]
As remarked above, the intuitive outcome is that [d3] is justified while
the other arguments are overruled. However, although [d3] strictly defeats
[d1/2], it is in turn still strictly defeated by both [d1] and [d2], which
themselves are not defeated by any argument. This makes [d3] overruled
and the other arguments justified, as can easily be checked. However, the
intuitive outcome is that I go jogging, so our first proposal is flawed.
Yet there is a way to repair it, viz. by changing the representation of the
example in the way described by Hage (1997, p. 204). It can be argued that
if we have a hot and rainy Sunday, then the individual reasons concerning
heat and rain cease to apply: when the antecedent of d1/2 is justified, d1
and d2 cannot be used any more to set up arguments.
7.6. Conclusion
Inspired by a formal analysis of legal reasoning, this chapter has investi-
gated nontrivial reasoning with inconsistent but ordered information. Two
CHAPTER 8

REASONING ABOUT PRIORITY RELATIONS
8.1. Introduction
Whereas in Chapter 6 arguments could only be compared with respect to
specificity, in Chapter 7 any criterion has been allowed, provided that it can
be expressed as an ordering on rules. However, apart from specificity almost
nothing has yet been said about the possible sources of the priorities. For
some time in AI the (often implicit) hope has been that the sources of these
priorities are of a general, domain independent nature. As a consequence,
the question where these priorities can be found is usually not treated
as a matter of common-sense reasoning. If this question is addressed at
all, then it is usually considered a metalogical issue. In particular, much
research has concentrated on formalising the specificity principle, which, as
we have seen, can be expressed in purely logical terms. However, a brief
look at the legal domain already suffices to see that the hope that we
can find useful domain-independent sources of preferences is unrealistic.
In law, but also in many other domains of common-sense reasoning, such
as bureaucracies, collision rules are themselves part of the domain theory
(see e.g. the detailed overview of Peczenik, 1990 in the context of Swedish
law). This even holds for specificity; although checking which argument is
more specific may be a logical matter, deciding to prefer the most specific
argument is a legal decision. Moreover, the collision rules not only vary
from domain to domain, they can also be incomplete or inconsistent, in
the same way as 'ordinary' domain information can be. In other words,
reasoning about priorities is nonmonotonic reasoning. These observations
mean that in a logic that is meant to formalise this kind of reasoning, the
consequences of a set of premises do not only depend on the priorities, they
also determine the priorities. In most current nonmonotonic logics these
observations are ignored (but see Chapter 9 for some exceptions).
In this chapter these issues will be addressed. The argumentation system
will be extended in such a way that information about the priorities can
be expressed in the premises and that the priorities needed for resolving
conflicts can be derived as conclusions of these premises, in the same way
as any other conclusion. This means that arguments can be set up for and
against priority conclusions and, if necessary, they can be compared with
the help of other priorities, in turn derived from the premises in the same
way.
I start this chapter with an overview of legal collision rules, resulting in
some requirements for their formalization. Then I extend the definitions of
the current system in such a way that priority information can be expressed
and derived in the system, after which I sketch and illustrate a methodology
for representing legal collision rules.
No Hierarchy of Metalevels
I now discuss some aspects of reasoning with and about legal collision rules
that have to be respected by a formal analysis. First of all, we cannot
assume a hierarchy of separate layers of collision rules. For instance, the
three Lex principles do not only apply to conflicts between 'ordinary' legal
rules, but also to conflicts in which they are themselves involved. To give an
example, in Dutch law the conflict just mentioned between Lex Posterior
and 1-(2) Sr is solved with the Lex Specialis principle,
and below we will even come across examples in which Lex Posterior is
applied to itself.
And if P has no answer, then P1 and P2 turn out not to be justified.
Now these ideas will be incorporated in the definition of a dialogue. All
conditions are the same as above in Definition 6.5.2, except for the new
type of move for P, and the references to 'defeat', which are made relative
to the defeating argument.
8.5. Examples
Let us now apply the new definitions and the formalization methodology to
some legal examples. The first example is about the relation between the Lex
Posterior principle and Section 1 of the Dutch criminal code (Sr). Recall
that this section says that if the law changes during a criminal case, the
rule shall be applied that is the most favourable for the suspect. Clearly, if
the most favourable rule is the original one, this collision rule itself collides
with the temporality principle. In Dutch law this principle is codified as
Section 5 of the 'Act general provisions' (AB), which makes it hierarchically
equal to 1 Sr. Let us further assume that both provisions were enacted at
the same time, so that they are also temporally equal (note that here a
collision rule, viz. Lex Posterior, is applied to itself; this possibility will
be further illustrated with the second example). The conflict can then be
resolved with Lex Specialis, observing that the collision rule of the criminal
code is more specific than 5 AB, which is written for legislation in general.
The criminal collision rule can be formalized as follows (where r, r', c and
s are variables).
1(r,r',c,s): c is a criminal case ∧ s is suspect in c ∧
             r changes during c into r' ∧
             r is more favourable for s than r' ⇒ r' ≺ r
In addition, Fn contains
In: ∀r, r', c. r changes during c into r' → r is earlier than r'
But now P takes the debate to the meta-meta level, resolving the conflict between the collision rules 1 Sr and Lex Posterior with the Lex Specialis rule.
Now O has run out of moves, and P1 has been shown to be justified.
The second example is from Italian town planning legislation, which contains a collision rule stating that rules intended to protect buildings of architectural merit prevail over rules concerning town planning. The example is constructed in such a way that this collision rule conflicts with the temporality principle. The case concerns a town planning rule saying that if a building needs rebuilding, its exterior may be modified, and an earlier, and conflicting, architectural-heritage rule saying that if a building is on the list of protected buildings, its exterior may not be modified. The example is partly intended to illustrate a technical subtlety with philosophical significance, viz. the application of a priority rule to itself (or better, to one of its instances). Rule d9 states that rule d3 is later than the Lex Posterior principle T, which implies that d3 prevails over T, according to T itself.
Here is the proof that the exterior of My-villa may not be modified.
P1: [d7: ⇒ My-villa is a protected building,
     d1: My-villa is a protected building ⇒ ¬ My-villa's
         exterior may be modified]
O can respond in only one way.
O1: [d8: ⇒ My-villa needs rebuilding,
     d2: My-villa needs rebuilding ⇒ My-villa's exterior
         may be modified]
P can now neutralize O's defeater with the following priority argument, saying that the protection rule prevails over the town planning rule.
P2: [d4: ⇒ d1 is about the protection of architectural
         heritage,
     d5: ⇒ d2 is a town planning rule,
     d3: d1 is about the protection of architectural
         heritage ∧ d2 is a town planning rule
         ⇒ d2 ≺ d1]
But O can defeat this priority argument with a conflicting priority argument based on the Lex Posterior principle.
O2: [d6: ⇒ d1 is earlier than d2,
regulations principle, the former is inferior to the latter on the basis of the
former. Although this seems to be self-referential, formally it is not, since
one instance of Lex Posterior speaks about another instance of itself.
P3: [d9: ⇒ T is earlier than d3,
     T': T is earlier than d3 ⇒ T ≺ d3]
Now O has run out of moves, and we know that My-villa's exterior may not be modified. Note that in this dialogue the function arguments of T and d3 are instances of d1 and d2, while those of T' are the full versions of T and d3. I leave it to the reader to write out the full name of T'.
Interestingly, the outcome of this example is that one instance of T, i.e.
of Lex Posterior, makes another of its instances inferior to a competing rule.
A variant of this example is when a rule like 1637c BW (see Example 3.1.6)
takes precedence over Lex Specialis by virtue of being more specific. Neither
logically nor legally does this seem problematic, as also argued by Suber
(1990, p. 216). The present analysis formalises Suber's intuitions.
Interpretation Debates
Finally, I schematically illustrate how interpretation debates can be formalized. The form of i1 and i2 below was earlier suggested by Sartor (1994), while a similar style is used by Hage (1996). Assume that d1 and d2 are two alternative interpretations of a statute section S.
d1: p ∧ r ⇒ q
d2: p ∨ r ⇒ q
Assume, moreover, that we have two opinions as to which is the correct
interpretation of section S.
i1: φ ⇒ d1 is the correct interpretation of Section S
i2: ψ ⇒ d2 is the correct interpretation of Section S
Let us assume that d1 is a teleological and d2 a literal interpretation of Section S (in realistic examples this will itself be a matter of dispute, but for simplicity I give the following rules empty antecedents and assume that there are no conflicting rules).
i3: ⇒ d1 is a teleological interpretation of Section S
i4: ⇒ d2 is a literal interpretation of Section S
The necessary facts say that there is exactly one correct interpretation of a legal rule, and they say that d1 and d2 are two different rules. These facts are needed to make i1 and i2 conflicting rules.
In1: ∀x, y, z. x is the correct interpretation of z ∧
     y is the correct interpretation of z → x = y
In2: ¬ d1 = d2
(note that an expression x = y does not say that x and y are of equal priority but that they are the same object, viz. the same rule).
Finally, I assume there is a rule on the priority of alternative interpretation methods, saying that teleological interpretations prevail over literal interpretations. Again this will in practice be a matter of dispute, but again I make some simplifying assumptions.
i5: x is a teleological interpretation of z ∧
    y is a literal interpretation of z ⇒ y ≺ x
Assume now that φ and ψ hold as a matter of fact. Then the interpretation arguments containing i1 and i2 are in conflict, since the consequents of these rules are, together with In1 and In2, inconsistent. d1 and d2 are the relevant rules of the two conflicting interpretation arguments and their conflict is decided by the priority argument [i3, i4, i5], saying that d2 ≺ d1. So d1 is the correct interpretation of Section S.
Note, however, that this conclusion does not yet warrant the application of d1 and block the application of d2. To realize this, applicability clauses must be used (this is again inspired by Sartor, 1994 and Hage, 1997). Firstly, d1 and d2 must receive applicability assumptions.
d1: p ∧ r ∧ ∼¬appl(d1) ⇒ q
d2: (p ∨ r) ∧ ∼¬appl(d2) ⇒ q
Next we add a general rule, saying that an interpretation of a rule is not applicable if it is not the correct interpretation of the rule.
g1: x is an interpretation of y ∧
    ¬ x is the correct interpretation of y
    ⇒ ¬appl(x)
If we finally add to Fn that literal interpretations are interpretations
In3: ∀x, y. x is a literal interpretation of y →
     x is an interpretation of y
then we can derive that d2 is not applicable from the modified theory.
Hage (1996) has argued that the legal collision rules do not state priorities between legal rules but applicability conditions for legal rules. In this reading the Lex Superior principle says that if a higher rule conflicts with a lower rule, the lower rule is inapplicable. Let us see to what extent this view can be formalized in the present system. Every defeasible rule receives an assumption that it is not inapplicable. For instance,
d1(x): x is a protected building ∧ ∼¬appl(d1(x))
       ⇒ ¬ x's exterior may be modified
Then the three principles can be expressed as follows.
H: ∼¬appl(H) ∧ x conflicts with y ∧ y is inferior to x ∧
   ∼¬appl(x) ⇒ ¬appl(y)
T: ∼¬appl(T) ∧ x conflicts with y ∧ y is earlier than x ∧
   ∼¬appl(x) ⇒ ¬appl(y)
S: ∼¬appl(S) ∧ x conflicts with y ∧ x is more specific than y ∧
   ∼¬appl(x) ⇒ ¬appl(y)
In this chapter the argumentation system developed in the last three chapters will be compared with related research on defeasible argumentation. First, Section 9.1 gives a conceptual description in general terms of the notion of an argumentation system, followed by a discussion in Section 9.2 of the main systems of this kind that have so far been developed. After that, Section 9.3 discusses some other recent developments, which are not argument-based, but which nevertheless deal with the same issues as argumentation systems.
Let us now step back a little and see what it is that we have developed in Chapters 6-8. I have called it a system for defeasible argumentation, or an argumentation system (AS).¹ In this section I discuss the general ideas behind such systems.
My system is an example of a recent development, in which several researchers have taken up the idea of formalizing nonmonotonic reasoning as constructing and comparing alternative arguments. Most ASs have been developed in AI research on nonmonotonic reasoning, although Pollock's work on defeasible argumentation (see below in Section 9.2.3) was initially developed to analyze epistemological issues in the philosophy of science. In AI argument-based systems have been developed either as a reformulation of (Dung, 1995; Bondarenko et al., 1997), or as an alternative to (Loui, 1987; Simari & Loui, 1992; Vreeswijk, 1993a, 1997; Prakken & Sartor, 1997a) earlier nonmonotonic logics. The key idea is to analyze
nonmonotonic reasoning in terms of the interactions between arguments for alternative conclusions. Nonmonotonicity arises from the fact that arguments can be defeated by stronger counterarguments. Since in the legal domain notions like argument, counterargument, rebuttal and defeat are very common, it comes as no surprise that in formalizing the defeasibility of legal reasoning especially argumentation systems have been successfully
¹ Earlier (e.g. in Prakken, 1995b) I used the term 'argumentation framework', but I now want to reserve this term for formalisms that leave the underlying logic for constructing arguments unspecified.
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
applied (Prakken, 1991a, 1993; Loui et al., 1993; Sartor, 1993, 1994; Gordon, 1994, 1995; Prakken & Sartor, 1996b).
Generalizing the description of Section 6.4.1, ASs contain the following five elements, though sometimes only implicitly: an underlying logical language, a definition of an argument, of conflicts between arguments and of defeat among arguments and, finally, a definition of the assessment of arguments.
ASs are built around an underlying logical language for expressing arguments. Some ASs, like my system of Chapters 6-8, assume a particular logic, while other systems leave the underlying logic partly or wholly unspecified; thus these systems can be instantiated with various alternative logics, which makes them frameworks rather than systems. Then ASs have the notion of an argument, which corresponds to a proof in the underlying logic. This is a narrow use of the term 'argument', which should not be confused with the broader meaning it often has in AI and law and argumentation theory, viz. when it stands for a dispute. Some systems, notably Dung (1995), also leave the internal structure of an argument unspecified, and exclusively focus on the way arguments interact. Thus Dung's framework is of the most abstract kind; other frameworks, by contrast, also define the internal structure of an argument.
The notions of an underlying logic and an argument still fit with the standard picture of what a logical system is. The remaining three elements are what makes an AS a framework for adversarial argumentation. The first is the notion of a conflict between arguments. Usually, two types of conflict are distinguished (starting with Pollock, 1987), viz. rebutting an argument, where arguments have contradictory conclusions, and undercutting an argument, where one argument denies an assumption of another argument (as in Definition 6.6.5 above) or when it denies the link between the premises and conclusion of the other argument; obviously this link can only be denied of nondeductive arguments, such as inductive, abductive or analogical arguments.
An AS also has ways of comparing arguments, on the basis of certain standards, to see whether an argument defeats a counterargument. In AI the specificity principle is regarded as very important, but one of the main themes of the previous chapters was that any standard might be used: their content is part of the domain theory, and is debatable, just as the rest of the domain theory is.
Since attacking arguments can themselves be attacked by other arguments, comparing pairs of arguments is not sufficient; what is also needed is a definition of the status of arguments on the basis of all the ways in which they interact. It is this definition that produces the output of an AS: it typically divides arguments into three classes: arguments with which a
dispute can be 'won', arguments with which a dispute should be 'lost' and
arguments which leave the dispute undecided. Sometimes, the last category
is not explicitly defined. In this book I have denoted these classes with the
terms 'justified', 'overruled' and 'defensible' arguments.
These notions can be defined both in a declarative and in a procedural form. The declarative form, usually with fixed point definitions, just declares certain sets of arguments as acceptable (given a set of premises and evaluation criteria), without defining a procedure for testing whether an argument is a member of this set, while the procedural form amounts to defining just such a procedure. Thus the declarative form of an AS can be regarded as its (argumentation-theoretic) semantics, and the procedural form as its proof theory (similar observations have been made earlier by Pollock, e.g. 1995, and Vreeswijk, 1993a, pp. 88-9). Note that the system of Chapters 6-8 has only been given in a procedural form; its semantics is described in Prakken & Sartor (1996a; 1997a). Note, finally, that it is very well possible that, while an AS has an argumentation-theoretic semantics, at the same time its underlying logic has a model-theoretic semantics in the usual sense, for instance, the semantics of standard first-order logic.
The two basic notions of Dung (1995) are a set of arguments and a binary relation of defeat among arguments. Dung completely abstracts from both the internal structure of an argument and the origin of the set of arguments, and this is what makes the approach a framework rather than a particular system. In terms of these two notions, Dung defines various notions of so-called argument extensions, which are intended to capture various types of defeasible consequence. These notions are declarative, just declaring sets of arguments as having a certain status. Finally, Dung shows that many existing nonmonotonic logics can be reformulated as instances of the abstract framework.
To illustrate the level of abstraction of this approach with the system of Chapter 6, Dung completely abstracts from everything contained in Section 6.4; he is only interested in the fact that this section results in a set of arguments (which in that section are all logically possible arguments on the basis of the premises) and in a defeat relation defined on that set. Thus Dung only studies the final element of an AS, the assessment of arguments.
Here are the main formal notions of Dung (1995) (with some termino-
logical changes).
Definition 9.2.1 An argument-based theory (AT)² is a pair (Args, defeat), where Args is a set of arguments, and defeat³ a binary relation on Args.
- An AT is finitary iff each argument in Args is defeated by at most a finite number of arguments in Args.
- A set of arguments is conflict-free iff no argument in the set is defeated by another argument in the set.
The idea is that an AT is defined by some nonmonotonic logic or system for defeasible argumentation. Usually the set Args will be all arguments that can be constructed in these logics from a given set of premises (as e.g. in my system of the previous chapters). Unless stated otherwise, I shall below implicitly assume an arbitrary but fixed AT.
From Dung's use of his framework it appears that he intends, as in my system, the relation of defeat to be a weak notion: i.e. intuitively 'A defeats B' means that A and B are in conflict and that A is not worse than B. This means that two arguments can defeat each other. A typical example is the Nixon Diamond, with two arguments 'Nixon is a pacifist because he is a Quaker' and 'Nixon is not a pacifist because he is a Republican'. If there are no grounds for preferring one argument over the other, they (intuitively) defeat each other. As we have seen in Section 6.4.5, a stronger notion is captured by strict defeat (not explicitly mentioned by Dung),
² Dung uses the term 'Argumentation Framework'.
³ Dung uses 'attack'.
To give such ATs a credulous semantics as well, Dung defines the notion of a preferred extension.
Definition 9.2.5 A conflict-free set is a preferred extension iff it is a
maximal (with respect to set inclusion) admissible set.
All stable extensions are preferred extensions, so in the Nixon Diamond and the Tweety Triangle the two semantics coincide. However, not all preferred extensions are stable: in the above example with circular defeat relations the empty set is a (unique) preferred extension, which is not stable: preferred semantics leaves all arguments involved in the odd cycle of defeat out of the extension, so none of them is defeated by an argument in the extension.
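Since preferred extensions are just maximal admissible sets, these definitions lend themselves to a direct brute-force check over all subsets of Args. The following Python sketch is my own illustration, not taken from Dung (1995); the dictionary encoding of the defeat relation (mapping each argument to the set of its defeaters) is an assumption of the sketch.

```python
from itertools import chain, combinations

def preferred_extensions(args, defeats):
    """Brute-force preferred extensions: maximal admissible sets.
    `defeats[a]` is the set of arguments that defeat argument a.
    A set S is conflict-free iff no member defeats a member; it is
    admissible iff, in addition, every defeater of a member of S is
    itself defeated by some member of S."""
    subsets = chain.from_iterable(
        combinations(list(args), r) for r in range(len(args) + 1))

    def conflict_free(s):
        return not any(b in defeats[a] for a in s for b in s)

    def admissible(s):
        return conflict_free(s) and all(
            s & defeats[b] for a in s for b in defeats[a])

    adm = [set(s) for s in subsets if admissible(set(s))]
    # keep only the maximal admissible sets (w.r.t. set inclusion)
    return [s for s in adm if not any(t > s for t in adm)]

# Nixon Diamond: mutual defeat yields two preferred extensions.
nixon = preferred_extensions({'A', 'B'}, {'A': {'B'}, 'B': {'A'}})
# Odd defeat cycle A -> B -> C -> A: only the empty set is admissible,
# so the empty set is the unique preferred extension.
cycle = preferred_extensions({'A', 'B', 'C'},
                             {'A': {'C'}, 'B': {'A'}, 'C': {'B'}})
```

On the Nixon Diamond this returns the two extensions {A} and {B}, and on the odd cycle only the empty set, mirroring the observations in the text. The exponential enumeration is of course only feasible for small ATs.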
Preferred and stable semantics capture a credulous notion of defeasible consequence: in cases of an irresolvable conflict as in the Nixon Diamond, two incompatible extensions are obtained. As we have seen in Chapter 4 (e.g. Definitions 4.1.3 and 4.1.16), one way to define sceptical consequence is to take the intersection of all credulous extensions. However, Dung defines sceptical consequence in a different way, resulting in a unique extension. It is this notion that was the semantic basis of the dialectical proof theory of Section 6.5. Dung defines the sceptical semantics with a monotonic operator F_AT, which for each set S of arguments returns the set of all arguments acceptable to S. Since F_AT is monotonic (i.e. if S′ ⊇ S, then F_AT(S′) ⊇ F_AT(S)), it is guaranteed to have a least fixed point, i.e. a smallest (with respect to set inclusion) set S ⊆ Args such that F_AT(S) = S. This least fixed point captures the smallest set which contains every argument that is acceptable to it; it is this set, which by definition is unique, which Dung defines to be the sceptical (grounded) extension of AT.
Definition 9.2.6 Let AT = (Args, defeat) be an argument-based theory and S any subset of Args. The characteristic function of AT is:
- F_AT: Pow(Args) → Pow(Args)
- F_AT(S) = {A ∈ Args | A is acceptable with respect to S}
I now give a perhaps more intuitive variant of this definition, which for finitary ATs is equivalent to the fixed point version and in general results in a subset of the grounded extension.
Definition 9.2.7 For any AT = (Args, defeat) we define the following sequence of subsets of Args.
- F_AT^0 = ∅
- F_AT^(i+1) = {A ∈ Args | A is acceptable with respect to F_AT^i}.
Then the set JustArgs_AT of arguments that are justified on the basis of AT is ∪_(i=0)^∞ F_AT^i.
What this definition does is to approximate the least fixed point of F_AT by iterated application of F_AT, starting with the empty set. Thus first all arguments that are not defeated by any argument are added, and at each further application of F_AT all arguments that are reinstated by arguments that are already in the set are added. This is achieved through the notion of acceptability. To see this, suppose we apply F_AT for the ith time: then for any argument A, if all arguments that defeat A are themselves defeated by an argument in F_AT^(i-1), then A is in F_AT^i. To illustrate this with the above Tweety Triangle: F_AT^1 = {C}, F_AT^2 = {A, C}, F_AT^3 = F_AT^2, so A is reinstated at F_AT^2 by C. Finally, that this semantics is sceptical is illustrated by the Nixon Diamond: F_ND^1 = F_ND^2 = ∅.
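This iteration can be sketched directly in code. The following Python function (my own illustration; the dictionary encoding of the defeat relation is an assumption of the sketch) iterates the characteristic function from the empty set until a fixed point is reached, which for a finite set of arguments always happens after finitely many steps.

```python
def grounded_extension(args, defeats):
    """Iterate Dung's characteristic function from the empty set
    (Definition 9.2.7). `defeats[a]` is the set of arguments that
    defeat a; A is acceptable w.r.t. S iff every defeater of A is
    itself defeated by some member of S."""
    def acceptable(a, s):
        return all(s & defeats[b] for b in defeats[a])
    s = set()
    while True:
        nxt = {a for a in args if acceptable(a, s)}
        if nxt == s:          # least fixed point reached
            return s
        s = nxt

# Tweety Triangle, with the names used in the text: B defeats A,
# and C (undefeated) defeats B.
tweety = grounded_extension({'A', 'B', 'C'},
                            {'A': {'B'}, 'B': {'C'}, 'C': set()})
# First iteration yields {C}, the second {A, C}: C reinstates A.

# Nixon Diamond: mutual defeat, so the grounded extension is empty.
nixon = grounded_extension({'A', 'B'}, {'A': {'B'}, 'B': {'A'}})
```

Running this reproduces the two calculations above: the Tweety Triangle yields {A, C}, while the Nixon Diamond yields the empty set, which is what makes the grounded semantics sceptical.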
The developers of the BDKT approach have also studied procedural forms for the various semantics. One of them was the basis of the dialogue game of Section 6.5, viz. Dung's (1994) game-theoretic version of extended logic programming. Furthermore, within the BDKT approach, Kowalski & Toni (1996) have studied formalization methodologies for rules with exceptions, which in their surface structure are very similar to those of Section 5.6.1 above. As already discussed above in Section 8.6, Kowalski & Toni (1996) also define a way of expressing priority rules in terms of applicability clauses, in a similar way as Hage (1996) does within his reason-based logic, to be discussed below.
Evaluation
In my opinion the abstract BDKT approach is an extremely useful tool in revealing relations and differences between the various existing nonmonotonic logics. Moreover, the approach makes it very easy to formulate alternative semantics for these logics. For instance, default logic, which was shown to have a stable semantics, can very easily be given an alternative semantics in which extensions are guaranteed to exist, like preferred or grounded semantics. Moreover, the proof theories that have been or will be developed for the various argument-based semantics immediately apply to the nonmonotonic logics that are an instance of these semantics. Because of these features, the BDKT framework is also very useful as guidance in the development of new systems; Giovanni Sartor and I used it in developing the system of the previous chapters.
On the other hand, the level of abstractness of the BDKT approach also leaves much to the developers of particular systems, such as the internal structure of an argument, the ways in which arguments can conflict, and the ways in which the defeat relations are defined. Moreover, it also seems that at some points the BDKT approach needs to be refined or extended. For example, in Chapter 8 I needed to extend the approach to let it cope with reasoning about priorities. Although Kowalski & Toni (1996) argue that their alternative method avoids extending the semantics, we have seen in Section 8.6 that they in turn need to extend it with metalogic features.
9.2.2. POLLOCK
dercut reason states between its premises and its conclusion. (In agreement
with his emphasis on epistemology, Pollock defines the strength of reasons
in probabilistic terms.) His favourite example of an undercutting defeater is
when an object looks red because it is illuminated by a red light: knowing
this undercuts the reason for believing that this object is red, but it does
not give a reason for believing that the object is not red.
Over the years, Pollock has more than once changed his definition of the status of arguments. While earlier versions (e.g. Pollock, 1987) dealt with (successful) attack on a subargument (cf. Example 6.4.8 above) via the definition of defeat, as in my system, the latest version makes this part of the status definition (by explicitly requiring that all proper subarguments of an 'undefeated' argument are also undefeated). While this change is just a matter of taste, a more substantial change is that, while his earlier definitions correspond to the grounded semantics of Definition 9.2.7 (as shown by Dung, 1995), his latest version corresponds to a slight refinement of the preferred semantics of Definition 9.2.5 (as remarked by Bondarenko et al., 1997). Here is the latest definition, presented in Pollock (1995).⁴
Definition 9.2.8 (Pollock). An assignment of 'defeated' and 'undefeated' to a set S of arguments (closed under subarguments) is a partial defeat status assignment iff it satisfies the following conditions.
1. All premises are (as arguments) assigned 'undefeated';
2. A ∈ S is assigned 'undefeated' iff:
(a) All proper subarguments of A are assigned 'undefeated'; and
(b) All arguments defeating A are assigned 'defeated'.
3. A ∈ S is assigned 'defeated' iff:
(a) One of A's proper subarguments is assigned 'defeated'; or
(b) A is defeated by an argument that is assigned 'undefeated'.
A defeat status assignment is a maximal (with respect to set inclusion) partial defeat status assignment.
The conditions (2a) and (3a) on the proper subarguments of A build in by definition what I prove as Proposition 6.5.6, while the conditions (2b) and (3b) on the defeaters of A are the analogues of Dung's notion of acceptability.
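For flat arguments without proper subarguments (so that clauses (2a) and (3a) are vacuous), Definition 9.2.8 can be checked by brute force. The following Python sketch is my own simplified illustration, not Pollock's formulation: it enumerates all partial assignments over a given set of arguments, keeps those satisfying the 'iff' conditions, and then selects the maximal ones.

```python
from itertools import product

def status_assignments(args, defeats):
    """All partial defeat status assignments (Definition 9.2.8),
    simplified to flat arguments: no proper subarguments, so only
    clauses (2b) and (3b) apply. `defeats[a]` is the set of defeaters
    of a; status None means 'unassigned'."""
    def ok(st, a):
        ds = [st[d] for d in defeats[a]]
        if st[a] == 'U':                 # clause (2b)
            return all(s == 'D' for s in ds)
        if st[a] == 'D':                 # clause (3b)
            return any(s == 'U' for s in ds)
        # unassigned: the 'iff' must not force a status on a
        return (not all(s == 'D' for s in ds)
                and not any(s == 'U' for s in ds))
    arglist = list(args)
    results = []
    for combo in product(['U', 'D', None], repeat=len(arglist)):
        st = dict(zip(arglist, combo))
        if all(ok(st, a) for a in arglist):
            results.append(st)
    return results

def defeat_status_assignments(assignments):
    """Defeat status assignments: the maximal partial ones, compared
    by the set of arguments to which they assign a status."""
    def dom(st):
        return {a for a, s in st.items() if s is not None}
    return [st for st in assignments
            if not any(dom(o) > dom(st) for o in assignments)]

# Nixon Diamond: two maximal assignments; neither argument comes out
# 'undefeated' in all of them, so neither is sceptically justified.
nixon = defeat_status_assignments(
    status_assignments({'A', 'B'}, {'A': {'B'}, 'B': {'A'}}))
```

On the Nixon Diamond this yields the two total assignments (A undefeated, B defeated, and vice versa), plus, among the non-maximal candidates, the everywhere-unassigned one; on an odd defeat cycle only the empty assignment survives, which matches the correspondence with preferred semantics noted above.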
As in the previous section it is easy to verify that in case of an undecided
conflict, i.e. when two arguments defeat each other, an input has more than
one status assignment. Since Pollock wants to define a sceptical consequence
notion, he therefore has to consider the intersection of all assignments.
Accordingly, he defines, relative to a set S of arguments, an argument to
⁴ Actually, Pollock states his definition in terms of an inference graph instead of a set of arguments.
Evaluation
Evaluating Pollock's system, we can say that it is based on a deep and plausible philosophical (epistemological) theory of defeasible reasoning. Moreover, logically it is a rich theory, including both linear and suppositional arguments, and deductive as well as nondeductive (mainly statistical and inductive) arguments, with a corresponding distinction between two types of conflicts between arguments. Pollock's definition of the assessment of
Like the BDKT approach and Lin & Shoham (1989), Vreeswijk (1993a; 1997) also aims to provide an abstract framework for defeasible argumentation. His framework builds on that of Lin & Shoham, but contains the main elements that are missing in their system, viz. notions of conflict between arguments and of comparing conflicting arguments. Like Lin & Shoham's, Vreeswijk's system also assumes an unspecified logical language (only assumed to contain the symbol ⊥, denoting 'False'), and an unspecified set of monotonic and nonmonotonic inference rules (which Vreeswijk calls 'strict' and 'defeasible'), which also makes his system an abstract framework rather than a particular system. Other aspects taken from Lin & Shoham are that in Vreeswijk's framework arguments can also be formed by chaining inference rules into trees and that Vreeswijk's defeasible inference rules are also not domain specific but general logical principles. However, Vreeswijk adds to Lin & Shoham's basic elements that of an ordering on arguments (on which more below).
Vreeswijk, 1993b, 1995); this version was one of the sources of inspiration
of the dialogue game of Section 6.5 above.
Vreeswijk also discusses a distinction between two kinds of nonmonotonic reasoning, 'defeasible' and 'plausible' reasoning. According to him, the above definition captures defeasible reasoning, which is unsound (i.e. defeasible) reasoning from firm premises, as in 'typically birds fly, Tweety is a bird, so presumably Tweety flies'. Plausible reasoning, by contrast, is sound (i.e. deductive) reasoning from uncertain premises, as in 'all birds fly (I think), Tweety is a bird, so Tweety flies (I think)'. The difference is that in the first case a default proposition is accepted categorically, while in the second case a categorical proposition is accepted by default. In fact, Vreeswijk would regard reasoning with ordered premises such as I have studied in Chapters 6-8 not as defeasible but as plausible reasoning.
One element of this distinction is that for defeasible reasoning the ordering on arguments is not part of the input theory, reflecting collision rules for, or degrees of belief in, premises, but a general ordering of types of arguments, such as 'deductive arguments prevail over inductive arguments' and 'statistical inductive arguments prevail over generic inductive arguments'. Accordingly, Vreeswijk assumes the ordering on arguments to be fixed for all sets of premises (although relative to a set of inference rules). Vreeswijk formalizes plausible reasoning independently of defeasible reasoning, with the possibility of defining input orderings on the premises (but not with reasoning about the orderings), and then combines the two formal treatments. To my knowledge, Vreeswijk's framework is unique in treating these two types of reasoning in one formalism as distinct forms of reasoning; usually the two forms are regarded as alternative ways to look at the same kind of reasoning (cf. e.g. the 'inconsistency handling' and 'defeasible conditional' approaches to default reasoning discussed in Chapter 4).
Evaluating Vreeswijk's framework, we can say that, like Pollock's but in contrast to BDKT, it formalizes only one type of defeasible consequence and that it pays little attention to the details of comparing arguments, but that it is philosophically well-motivated, and quite detailed with respect to the structure of arguments and the process of argumentation.
on the idea that defaults are not propositions but inference licences. Thus Nute's defeasible rules are, like Reiter's defaults, one-directional. However, unlike Reiter's defaults they are two-place; assumptions are dealt with by an explicit category of defeater rules, which are comparable to Pollock's undercutting defeaters, although in Nute's case they are, like his defeasible rules, not intended to express general principles of inference but, as in default logic, domain specific generalizations.
As for the underlying logical language, since Nute's aim is to develop a logic that is efficiently implementable, he keeps the language as simple as possible. He assumes a logic programming-like language with three categories of rules: strict rules A → p, defeasible rules A ⇒ p and defeaters A ↝ p. In all three cases p is a (in my terms) strong literal, i.e. an atomic proposition or a classically negated atomic proposition, and A is a finite set of strong literals. Strict and defeasible rules play the same role as in my system, but defeaters must be read as 'if A then it might be that p', and they can only be used to block an application of a rule B ⇒ p̄.⁵ An example is 'Genetically altered penguins might fly', which undercuts 'Penguins don't fly'. Thus Nute's system has, like Pollock's and my system, both rebutting and undercutting conflicts between arguments.
Arguments can be formed by chaining rules into trees, and conflicting arguments are compared with the help of an ordering on the rules. Actually, Nute does not have the explicit notion of an argument; instead he incorporates it in two notions of derivability, strict (⊢) and defeasible (|∼) derivability, to be explained below. To capture nonderivability Nute does not use the familiar notions ⊬ (meaning 'not ⊢') and |≁ (meaning 'not |∼'). Instead, his aim of designing a tractable system leads him to define two notions of demonstrable nonderivability, ⊣ and ∼|, which require that a proof of a formula fails after finitely many steps.
Finally, as for the assessment of arguments, as just stated Nute's notion of an argument is implicit in his definitions of derivability. Strict derivability of a formula p is simply defined as the existence of a tree of strict rules with p as its root. The definition of defeasible derivability is more complex, although the basic idea is simple. Nute has two core definitions, depending on whether the last rule of the tree is strict or defeasible. The definition for the second case is as follows (for a set of premises T).
Definition 9.2.9 (defeasible derivability) T |∼ p if there is a rule A ⇒ p ∈ T such that
1. T ⊣ p̄, and
2. for each a ∈ A, T |∼ a, and
3. for each B ↝ p̄ ∈ T there is a b ∈ B such that T ∼| b, and
⁵ For any p, if p is an atomic formula then p̄ stands for ¬p, and if p is a negated atomic formula ¬q, then p̄ stands for q.
234 CHAPTER 9
Simari & Loui (1992) present a declarative system for defeasible argumentation
that combines the ideas of Pollock (1987) on the interaction
of arguments with Poole's (1985) specificity definition and Loui's (1987)
view on defaults as metalinguistic rules. Their system is similar to the one
of Chapter 6 above in several respects. They too divide the premises into
sets of contingent and necessary first-order formulas and one-directional
default rules, and they too have a sceptical (but not extremely sceptical)
notion of defeasible consequence. In fact, Simari & Loui use the definition
of Pollock (1987), which, as remarked above, corresponds to the sceptical
semantics of Definition 9.2.7. For comparing conflicting arguments Simari
& Loui use Poole's (1985) specificity definition.
The main differences between Simari & Loui's system and the one of
this book are that their system does not have assumptions in its language,
so that they only have the head-to-head kind of conflict; that they only
compare arguments with respect to specificity; and that they do not formalize
reasoning about priorities. Moreover, because they use Poole's specificity
definition, their system is subject to the problem discussed above in
Section 6.3.1 (although this problem is addressed in Loui & Stiefvater, 1992).
On the other hand, Simari & Loui prove some interesting formal properties of
arguments and they sketch an architecture for implementation, which has
the same dialectical form as my dialectical proof theory.
Pearl's original language. The resulting system has all the elements of my
system: assumptions, applicability, strict and defeasible rules, and reasoning
with and about priorities.
Still, with respect to priorities there are some differences. Gordon's idea
is to use the fact that in conditional entailment specificity is hardwired in
the logic (via the requirement that orderings are admissible), by transforming
winning rules (according to any priority criterion) into more specific
rules. However, this is obviously impossible when the inferior rule is more
specific than the winning one, precisely because specificity is in Geffner &
Pearl's system the overriding kind of defeat. As I have discussed above in
Chapter 8, and as Gordon (1995) acknowledges, this is undesirable in legal
applications.
do not lead to branching extensions, since the defaults are ordered and
at each step only the highest of the applicable defaults is applied. Clearly
this only results in alternative extensions if a conflict between two defaults
cannot be solved with the priorities. Interestingly, since Brewka only allows
normal defaults, checking the applicability of a default does not need a
lookahead assumption (as was already shown by Reiter, 1980): testing
whether applying a default introduces an inconsistency is done not with
respect to the (guessed) extension E as it will be constructed, but instead
with respect to the extension Ei as it has been constructed thus far. For
this reason the existence of extensions is guaranteed.
Although Brewka's PDL gives the intuitive outcome in a large class of
cases, it still has problems with Example 7.2.6. Here it is again, formalized
in prioritized default logic.
d1: x misbehaves ⇒ x may be removed
d2: x is a professor ⇒ ¬ x may be removed
d3: x snores ⇒ x misbehaves
f: Bob is a professor ∧ Bob snores
d3 < d1, d3 < d2
As I argued in Chapter 7, this example should have two extensions, since
the two defaults that intuitively are in conflict, d1 and d2, have no priority
over each other. However, as in his preferred subtheories approach, Brewka
obtains a unique extension, in which d2 is applied at the cost of d1. The
reason is that at step E1 there are only two applicable defaults, viz. d2 and
d3, since only those defaults have their antecedent in the facts, i.e. in E0.
And since d3 < d2, we have to apply d2. Then, although at E2 we can apply
d3 (since there is no conflicting default), at E3 we cannot apply d1, since
applying it would make the extension inconsistent.
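The stepwise construction just described can be simulated with a small script. This is my own illustrative encoding, not Brewka's formal definition: atoms are strings, a leading '~' marks classical negation, and the priority order is the partial order of the example.

```python
facts = {"professor", "snores"}

# Defaults as (name, antecedent, consequent) triples.
defaults = [
    ("d1", "misbehaves", "may_be_removed"),
    ("d2", "professor", "~may_be_removed"),
    ("d3", "snores", "misbehaves"),
]
# Partial priority order: d3 < d1 and d3 < d2 (d1, d2 incomparable).
lower_than = {("d3", "d1"), ("d3", "d2")}

def negation(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

extension = set(facts)
applied = []
while True:
    # Applicable: not yet applied, antecedent already derived, and the
    # consequent consistent with the extension constructed so far.
    candidates = [d for d in defaults
                  if d[0] not in applied
                  and d[1] in extension
                  and negation(d[2]) not in extension]
    # Keep only priority-maximal candidates (first one picked if several).
    maximal = [d for d in candidates
               if not any((d[0], e[0]) in lower_than for e in candidates)]
    if not maximal:
        break
    name, _, conseq = maximal[0]
    extension.add(conseq)
    applied.append(name)

print(applied)                          # ['d2', 'd3']: d1 is never applied
print("may_be_removed" in extension)    # False
```

The run reproduces the outcome in the text: d2 is applied before d3 (since d3 < d2), after which d1 is blocked because applying it would make the extension inconsistent.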
Although Brewka (1994a) does not agree that his outcome is counterintuitive,
he accepts my criticism as an alternative point of view and
defines a version of his PDL which respects my criticism. However, this
version of PDL again contains a lookahead assumption, which makes it
non-constructive. For this reason Brewka prefers his original version.
Evaluation
To conclude, with the non-constructive version of his PDL and with his version
of extended logic programming, Brewka has shown that for solving the
problems identified in Chapters 6 and 7 it is not strictly necessary to adopt
an argument-based approach; it suffices to have a one-directional condi-
tional, provided that the system is designed carefully. However, upon closer
examination his definitions still appear to implicitly embody argument-
based notions, which in my opinion, if made explicit, might have made the
definitions more natural.
In various publications Jaap Hage and Bart Verheij have developed a very
original formal system, called 'reason-based logic' (RBL). Although the
system is not argument-based, it was especially developed for application
to legal reasoning, including its defeasible aspects. Therefore a detailed
discussion is in order. The latest versions of RBL are described in
Hage (1996; 1997) and Verheij (1996). These publications also extensively
SYSTEMS FOR DEFEASIBLE ARGUMENTATION 241
applicability of a rule, and weighing reasons for and against the rule's
consequent.
naming technique, viz. reification, well-known from metalogic and AI; see
e.g. Genesereth & Nilsson (1988, p. 13). Every atomic predicate-logic formula
R(t1, ..., tn) is named by a function expression r(t1, ..., tn). Note
that while R is a predicate symbol, r is a function symbol. Furthermore,
compound formulas are named by naming the propositional connectives
with function symbols. For instance, the formula ¬R(a) is named by the
function expression ¬'r(a); here ¬' is a function symbol corresponding to
the connective ¬. And the conjunction R(a) ∧ S(b) is denoted by the function
expression ∧'(r(a), s(b)), usually written with infix notation as r(a) ∧' s(b).
Note that unlike the naming technique used throughout this book, RBL's
technique reflects the logical structure of the named formula: from the name
the corresponding formula can be reconstructed.
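The structure-preserving character of this naming technique can be illustrated with a toy encoding (my own, purely for illustration): formulas are nested tuples, naming primes each connective and lowercases each predicate symbol, and the original formula can be recomputed from its name.

```python
def name(formula):
    # Map a formula to the term that names it.
    op, *args = formula
    if op == "not":
        return ("not'", name(args[0]))
    if op == "and":
        return ("and'", name(args[0]), name(args[1]))
    return (op.lower(), *args)    # atomic R(t1,...,tn) -> term r(t1,...,tn)

def denote(term):
    # Reconstruct the named formula from the term.
    op, *args = term
    if op == "not'":
        return ("not", denote(args[0]))
    if op == "and'":
        return ("and", denote(args[0]), denote(args[1]))
    return (op.upper(), *args)

f = ("and", ("R", "a"), ("not", ("S", "b")))     # R(a) ∧ ¬S(b)
print(denote(name(f)) == f)   # True: the formula is recoverable from its name
```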
How are rules named? This is done by using a function symbol rule.
Thus rules can be denoted by terms like

rule(r, p(x), q(x))

(Here r is a 'rule identifier', which is handy in formalizations.) Note that
this expression is not a formula but a term; a formula results only if the
term is combined with a predicate symbol, as e.g. in

Valid(rule(r, p(x), q(x)))

which says that rule r is valid.
By convention, rule terms with variables are schemes standing for all
their ground instances.
Perhaps the reader expects the object language of RBL to contain
a conditional connective corresponding to the function symbol rule, just
as it contains connectives corresponding to e.g. ¬' and ∧'. However, for
philosophical reasons Hage and Verheij do not assume such a connective.
In their view a rule does not describe but constitutes states of affairs:
a legal rule makes someone a thief or something a contract, it does not
describe that this is the case. Thus, Hage and Verheij argue, a rule is not a
proposition, which can be true or false, but an object in the world, which
can be valid or not and applicable or not, and which must be applied to
make things the case.
3. A rule applies if and only if the reasons for its application outweigh
the reasons against its application.
4. If a rule applies, it gives rise to a reason for its consequent.
5. A formula is a conclusion of the premises if and only if the reasons for
the formula outweigh the reasons against the formula.
Note that condition (5) means that the weighing process takes place between
sets of reasons instead of between individual reasons (cf. Section 7.5.4).
Here is what a simplified formal version of inference rule (1) looks like.
Note that condition and consequent are variables, which can be instantiated
with the name of any formula.
If Valid(rule(r, condition, consequent)) is derivable
and Obtains(condition) is derivable
and Excluded(r) is not derivable,
then Applicable(r, rule(condition, consequent)) is derivable.
Condition (4) has the following form.
If Applies(r, rule(condition, consequent)) is derivable,
then Proreason(consequent) is derivable.
Finally, here is how in condition (5) the connection between object- and
metalevel is made.

If Outweighs(Proreasons(formula), Conreasons(formula)) is
derivable,
then Formula is derivable.

Note again that whether the pro-reasons outweigh the con-reasons must
itself be derived from the premises. Note also that while formula is a
variable for an object term, occurring in a well-formed formula of RBL,
Formula is a metavariable which stands for the formula named by the term
formula. This is how object- and metalevel are connected in RBL.
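As an illustration, the interplay of inference rules (1), (4) and (5) can be sketched as follows. This is a deliberately simplified propositional encoding of my own, not RBL's actual machinery: it conflates 'applicable' and 'applies', and uses simple counting as a placeholder for the weighing relation, which in RBL must itself be derived from the premises.

```python
derived = {("Valid", "r"), ("Obtains", "c")}        # toy derived facts

def applicable(rule_id, condition):
    # Rule (1): validity and condition derivable, exclusion NOT derivable.
    return (("Valid", rule_id) in derived
            and ("Obtains", condition) in derived
            and ("Excluded", rule_id) not in derived)

def outweighs(pro, con):
    # Placeholder weighing relation (counting), for illustration only.
    return len(pro) > len(con)

pro, con = [], []
if applicable("r", "c"):
    pro.append("r")                          # rule (4): a pro-reason arises
conclusion_derivable = outweighs(pro, con)   # rule (5): draw the conclusion
print(conclusion_derivable)                  # True
```

Adding ("Excluded", "r") to the derived facts would make applicable return False, showing the nonderivability check at work.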
As just said, the RBL conditions are extra inference rules added to standard
first-order logic, just as defaults are in default logic. However, while
the defaults of default logic express domain-specific generalizations, the
RBL inference rules are intended to capture general principles of reasoning
with rules. This is why the principles quantify over rules: they hold for any
instantiation of condition, conclusion, formula and Formula.
However, as in default logic and many other nonmonotonic logics, in
RBL the derivability of certain formulas is defined in terms of the non-
derivability of other formulas. For instance, in (1) it may not be derivable
that the rule is excluded. Therefore, to prevent circular definitions, some
technical work has to be done. To this end RBL adapts techniques of
default logic: the 'derivability' conditions are restated as conditions on
Using RBL
Let us now illustrate the basics of formalizing legal rules and principles in
RBL. Since rules/principles are not object-level formulas, they can only
be stated indirectly, by assertions that they are valid. Thus, any particular
rule must be formalized as

Valid(rule(r, condition_r, conclusion_r))
As for representing exceptions, RBL supports both the choice approach
and a variant of the exception clause approach. Explicit exceptions can be
formalized in almost the same way as in many other nonmonotonic logics
but with one difference: since RBL's rules on validity and applicability are
formalized as general inference rules, it is not necessary to give a rule an
explicit condition that it is not excluded. Thus, for instance, Example 3.1.7
can be formalized as follows.

Valid(rule(1, person(x), has_capacity(x)))
Valid(rule(2, minor(x), excluded(1) ∧' ¬'has_capacity(x)))
Valid(rule(3, minor(x) ∧' consent_of_repr(x), excluded(2) ∧' has_capacity(x)))
Note that for a minor John acting without consent, rule 2 gives rise to the
conclusion that rule 1 is excluded, after which the above condition (1) of
the RBL inference rules cannot be used to derive a reason for application
of rule 1 to John.
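The exclusion chain in this example can be traced with a small recursive check. This is my own propositional simplification, not RBL itself: rule conditions are sets of atoms and the exclusion relation is hard-wired, but it shows how rule 3 reinstates rule 1 when there is consent.

```python
conditions = {1: {"person"},
              2: {"minor"},
              3: {"minor", "consent_of_repr"}}
excluders = {1: 2, 2: 3}     # rule 2 excludes rule 1; rule 3 excludes rule 2

def applies(r, facts):
    # A rule applies if its condition holds and its excluder does not apply.
    if not conditions[r] <= facts:
        return False
    e = excluders.get(r)
    return e is None or not applies(e, facts)

john = {"person", "minor"}                     # a minor without consent
print(applies(1, john))                        # False: rule 2 excludes rule 1
print(applies(1, john | {"consent_of_repr"}))  # True: rule 3 excludes rule 2
```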
RBL's variant of the exception clause approach can also be used to
formalize collision rules, viz. by using the method described in Section 8.6,
i.e. formalizing collision rules as general inapplicability rules. Thus this
method is in fact in between the exception clause and the choice approach.
This method means that a rule that is set aside because of a collision rule
does not generate reasons for its conclusion. This is necessary since, unlike
my system, RBL automatically weighs sets of reasons, so that a rule that is
individually defeated, can in combination with other rules still give rise to
conclusions. According to Hage and Verheij this is not how legal collision
rules work in practice.
RBL also supports the use of the choice approach. If two conflicting
rules both apply, then their application gives rise to conflicting reasons,
which then have to be compared by using information on the weight of
the reasons. The foregoing implies that this method is only feasible for
rules that are amenable to weighing, i.e. for what Hage and Verheij call
'principles'.
Evaluation
In evaluating RBL, I shall distinguish between its underlying philosophical
ideas and its merits as a formal system. Philosophically, RBL is a very deep,
interesting and on most points convincing analysis of the logical aspects of
legal reasoning. Particularly strong points are its analysis of legal metalevel
reasoning (reasoning about validity, applicability, exclusion, and priority
and weight of rules and principles), and its convincing analysis of some
other features of legal reasoning, such as the gradual difference between
rules and principles, and analogous rule application as determined by the
goals and principles underlying the rule. However, I disagree with some
philosophical features of RBL. In Section 7.5.4 I argued that the alleged
need for weighing sets of reasons is less obvious than is argued by Hage and
Verheij. Moreover, their reluctance to treat rules not only as objects but
also as formulas seems to overlook that formulas are themselves a kind of
object, an observation which is the basis of the development of metalogic.
When viewed as a logical system, a particularly attractive feature of
RBL is its detailed formalization of (legal) metalevel reasoning, with the
most sophisticated naming technique that is available. Another merit is
that RBL has at least placed the issue of comparing combined reasons on
the logical agenda. Finally, RBL has a very expressive language in which,
for instance, rules can be nested and negated. On the other hand, however,
it must be said that RBL is technically rather complex, because of its
expressiveness and because it combines metalogic features with nonderivability
notions. In this respect it would be desirable to have some results on the
technical well-behavedness of the system. Furthermore, from a computational
point of view the non-constructive nature of RBL and the lack of a
procedural formulation are unattractive; argument-based approaches seem
to fare better in these respects. Finally, a point of detail is that in case of
irresolvable conflicts RBL behaves as an extreme sceptic (cf. Section 7.5.2).
Nevertheless, as already discussed, much of Hage and Verheij's research
is very valuable, in particular their contribution to legal knowledge repre-
sentation, which shows how many types of legal knowledge can be formal-
ized in a first-order language. To a large extent their formalizations can also
be used in other systems, for instance, the system of Chapters 6-8, which
is similar to RBL in several respects. For instance, my system can also
deal with both the exception clause approach (using weak negation) and
the choice approach (using priorities). Moreover, RBL's two-step process of
drawing conclusions (first collecting reasons pro and con, then comparing
them) has its counterpart in my system in the form of first collecting
arguments pro and con and then comparing them. Now it is my guess
that if every defeasible rule of my system is assumed to have a validity
clause and a weakly negated inapplicability clause, RBL's inference rules
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
Structural Resemblance
Recall that what should be avoided is mixing several source units in one
knowledge base unit. In Chapter 3 we have already seen that such a 'many-to-one'
correspondence can be avoided in both approaches, although in the
exception clause approach only with general clauses. However, a one-to-one
correspondence between source and knowledge base can be obtained more
easily in the choice approach: in Sections 5.2 and 5.3 we saw that with
exception clauses soft rebutting defeaters can only be represented when
combined with undercutting defeaters, and this often results in a split of
one source unit into two knowledge base units, only kept together by giving
them the same name. As just shown, the choice approach deals with this
kind of exception in a more elegant way; note also that its use in undercutting
defeaters of extra defaults of the form ⇒ appln(x1, ..., xi) is not
really a violation of one-to-one correspondence, since such expressions have
no relation with any specific source; they are more part of the 'hardware' of
the formalization method. In sum, with respect to structural resemblance
both methods do well, but with a small advantage for the choice approach.
Modularity
In Section 5.6.4 we have seen that the specific exception clause approach
is unmodular, while general clauses support modularity only partially: although
they prevent changing old rules when new exceptions are added,
the exceptions still have to mention to which rules they are an exception;
only if this is also done in the natural-language versions, as sometimes in
law, do general exception clauses fully support modularity.
As for the choice approach, it was noted in Section 5.1.3 that the use of
specificity is often claimed to support modularity, since it would suffice
to formalize each rule in isolation, after which the specificity algorithm
would determine the preference relations; on the other hand, priorities are
USING THE ARGUMENTATION SYSTEM 251
Exclusiveness of Specificity
In 5.6.4 it was already noted that if exception clauses solve a conflict by
resulting in a unique answer, no other standard can overrule this solution;
other standards can only be used in conflicts not prevented by exception
clauses.
Implementation
The conclusion of Chapter 5 was that, when restricted to clause-like
languages and if undecided conflicts can be avoided, the exception clause
approach can be rather efficiently implemented with logic programming
methods, the cost being a less natural treatment of classical negation.
By contrast, the choice method introduces a new layer of complexity, the
need to choose the best of conflicting answers, which for a similar phenomenon
called abduction has been shown to decrease tractability (see further
Section 10.2). In conclusion, if the exception clause approach can meet
its aim of obtaining unique answers, it can be more efficiently implemented
than the choice approach; but since unique answers can only be obtained in
restricted domains of application, we are in fact confronted with an instance
of the well-known trade-off between expressiveness and tractability.
Expressiveness
As the schemes of Section 5.6.1 and this section show, the methods mainly
differ in their treatment of undecided conflicts; all other situations listed
in Section 5.1.2 can be expressed in both approaches, although in the
exception clause approach not in all logics equally well. As for undecided
conflicts, we have seen that in the exception clause approach only default
logic and Poole's framework can alternatively present both answers. However,
this only holds at the theoretical level: when it comes to the most
natural implementation in logic programming, this advantage of these
formalisms disappears. Theoretically, the choice approach can cope with
undecided conflicts almost by definition, but it remains to be seen whether
this also holds in efficiently implemented versions.
In sum, with undecided conflicts the choice approach has theoretically
fewer problems and in implementations at most as many problems as the
exception clause approach; furthermore, the choice approach can also deal
with exception clauses and there is no type of exception which only the
exception clause approach can express; in conclusion, the choice approach
is more expressive than the exception clause approach, but again it should
be said that the price for this is an increase in computational complexity.
Final Evaluation
We can conclude that the main differences between the methods occur
with respect to tractability and expressiveness. On the other points they
have by and large turned out to do equally well. This holds particularly for
the points relevant to knowledge engineering in practice, with the exception
that the choice approach is a bit closer to natural language: apart from this,
both methods can preserve the separation of rules and exceptions, and in
both methods a knowledge engineer should not rely too much on a modular
formalization strategy. For this reason the only real issue is in which cases
expressiveness should be sacrificed for tractability.
In my opinion the answer will depend on the nature of the application
and the research goals. If the main task of a system is to give insight
into the consequences of a certain body of legislation, then nonmonotonic
techniques mainly serve as a knowledge representation device, in which
case the exception clause approach is probably a better choice. Obviously,
this particularly holds for domains in which the law is well-established
and does not leave much room for disagreement. On the other hand, if
the need for representing exceptions arises in research on modelling legal
argument, then the choice approach is obviously the better, since it is able
to capture more aspects of legal reasoning. As we have seen, it can in
particular be embedded in general methods for modelling the construction
and comparison of conflicting arguments. In the last three sections of this
chapter I shall discuss in more detail how the logical argumentation system
developed in this book can serve as a component in models of adversarial
legal argument. But first I turn to its prospects for implementation in a
computer program.
defeasible proof procedure global: i.e. it gives rise to the need to check all
premises in every step of a proof. A final layer of complexity is added by
the relation between argumentation and so-called 'abduction'. As noted by
Gordon (1991; 1995), the problem of constructing and comparing arguments
for a desired conclusion is formally similar to the task of finding sets of
hypotheses which explain certain observed data; the latter process is often
called abduction and, as shown by Bylander et al. (1991), obstacles to the
tractability of abduction are (among other things) incompatibilities among
the hypotheses (which require a consistency check) and the aim of finding
the best explanation; obviously my system gives rise to both obstacles.
However, logical argumentation systems (and other nonmonotonic logics)
do not only have tractability-decreasing features. Recent research
(Cadoli et al., 1996) has established that in nonmonotonic logics the
representation of a problem or domain is generally more compact than
in a monotonic logic, which has a positive effect on tractability.
Nevertheless, tractability remains a real problem. Now, as noted in
Section 4.3.2, there are in AI two common strategies to overcome this
problem: restricting the language to an efficiently computable fragment, and
sacrificing completeness or even soundness with respect to the semantics.
I now make some remarks on both of these options, pertaining not just to
my system but to argumentation systems in general.
10.3. Applications
In this section I apply the argument-based approach, and my system in
particular, to some issues discussed in earlier chapters. After showing that
the system in fact formalizes many of Toulmin's ideas, I make some comments
on how logical argumentation systems can be used as a tool in such
reasoning activities as interpretation and analogical reasoning.
As noted in Section 2.2.2, the ideas of Toulmin (1958) have recently attracted
the attention of a number of researchers in the field of AI and
law. I remarked there that, although these ideas are certainly valuable,
they do not show that the aspects of reasoning which he discusses cannot
be analyzed in logical terms. I now use my system to support this
In Section 2.3 I argued that logic is also relevant for noninferential reasoning
activities, since they can use logic as a tool. I now discuss some ways in
which the argumentation system developed in this book can be used; I
shall particularly focus on problems of interpretation and classification.
These problems are, of course, matters of content, which makes it senseless
to ask for the logical justification of their solution, but even when a lawyer
1 A more general discussion of modern-day logic's response to Toulmin's - and Perelman's
- challenges can be found in van Benthem (1995).
Thus the system puts constraints on the search for useful analogies and
distinctions.
I now give a final illustration of how an argumentation system can point
at sensible argumentation strategies. At first sight it might be thought
that, when faced with an interpretation problem, lawyers should assume
whenever they can that the law is coherent; accordingly, they would have
to apply a 'conflict avoiding' interpretation strategy. However, although for
judiciaries this might be the case, lawyers defending one side of a legal
dispute should sometimes argue for the existence of a conflict. Consider an
example with two precedents, of which the first leaves room for formalizing
it as either one of the next two defeasible rules.
it as either one of the next two defeasible rules.
(1) p ⇒ q
(1') p ∧ ¬r ⇒ q
Assume that the facts of the case can only be classified as (p ∧ r), and that
the only sensible interpretation of the second case is

(2) r ⇒ ¬q
Then the proponent of ¬q should indeed argue for the conflict avoiding
interpretation (1'), which makes it impossible to use the first case in attacking
(2). However, the opponent should do otherwise: s/he had better
argue for the 'conflict preserving' interpretation (1). The point of arguing
this way is that other reasons might then be provided for giving precedence
to (1) over (2). Again we see that considering the possible outcome of a
comparison of conflicting arguments can give clues as to the best
interpretation strategy to follow in a legal dispute, and this again shows
that an argumentation system can be regarded as a tool in various kinds
of reasoning. In the last section of this chapter I come back to this in more
general terms.
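The effect of the two interpretation strategies can be checked mechanically. In this sketch (my own encoding) a rule is a pair of an antecedent, a set of literals in which '~' marks negation, and a consequent:

```python
facts = {"p", "r"}    # the facts of the case: p ∧ r

def fires(antecedent):
    # Every positive literal must be among the facts; every negated
    # literal must be absent from them.
    return all((l in facts) if not l.startswith("~") else (l[1:] not in facts)
               for l in antecedent)

rule1  = ({"p"}, "q")             # (1)  p ⇒ q
rule1p = ({"p", "~r"}, "q")       # (1') p ∧ ¬r ⇒ q
rule2  = ({"r"}, "~q")            # (2)  r ⇒ ¬q

print(fires(rule1[0]) and fires(rule2[0]))   # True: (1) and (2) conflict on q
print(fires(rule1p[0]))                      # False: (1') avoids the conflict
```

With interpretation (1) both rules fire and the conflict over q must be resolved by priorities; with (1') the first rule is simply blocked by the fact r, which is exactly the strategic difference discussed above.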
10.4.2. CABARET
In conclusion, from these examples and also from what has been said
in 10.3.2 we can see that for systems like CABARET, which model the heuristic
phase of argumentation, a logical argumentation system also has its use,
in analysing the logical nature of the information on which the heuristics
operate.
In Section 3.5.1 I described research on the use of logic programming for representing
legislation. In two later projects this tradition has been enriched
with techniques from logic metaprogramming. Hamfelt & Barklund (1989)
use such techniques for (among other things) representing legal collision
rules. Their method uses logic programming's DEMO predicate, which
represents provability in the object language. For this reason, their style of
representation is very different from my method in Chapter 8, which uses a
priority predicate. Another difference is that Hamfelt & Barklund assume a
hierarchy of separate language levels of legal knowledge. I have chosen not
to make this assumption, because of examples like the building regulations
in Section 8.5, which show that the Lex principles do not only apply to
legal object-level rules, but also to each other and even to themselves.
Routen & Bench-Capon (1991) have applied metalogic programming to,
among other things, the separation of rules and exceptions. They preserve
this separation by enriching the knowledge representation language with
metalevel expressions Subject_to(rule1, rule2), and by ensuring that the
metainterpreter of a logic program applies a rule only if the body of no rule
to which it is declared subject can be derived. Clearly, this nonderivability
check comes down to a defeasible assumption that a rule has no exceptions,
which can be defeated by a rule saying that there is an exception. In this
respect their method is similar to the use of general exception clauses,
explained in Chapter 5 and Section 6.6.2. However, there is one important
difference: in Routen & Bench-Capon's method a rule's antecedent does
not need to have an explicit 'no exception' assumption; instead this assumption
is, like in Hage & Verheij's reason-based logic, incorporated in
the metainterpreter, when it tests the nonderivability of the body of an
exceptional rule. Routen & Bench-Capon argue that this better preserves
the natural-language structure of legal texts.
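The nonderivability check just described can be mimicked in a few lines. The encoding is my own simplification, not Routen & Bench-Capon's actual metainterpreter: rule bodies are flat sets of atoms, derivability is plain membership in the facts, and the rule names are hypothetical.

```python
facts = {"person", "minor"}
rules = {"r1": ({"person"}, "has_capacity"),
         "r2": ({"minor"}, "no_capacity")}
subject_to = {("r1", "r2")}    # r1 is declared subject to r2

def body_derivable(rid):
    body, _head = rules[rid]
    return body <= facts       # flat membership check; no chaining here

def fires(rid):
    if not body_derivable(rid):
        return False
    # Defeasible 'no exception' assumption: fail if any rule this one is
    # subject to has a derivable body.
    return not any(body_derivable(sup)
                   for (sub, sup) in subject_to if sub == rid)

print(fires("r1"))   # False: r2's body is derivable, so r1 is blocked
print(fires("r2"))   # True
```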
Although this is indeed an elegant feature, their method also has some
restrictions. In particular, Routen & Bench-Capon do not distinguish between
strict and defeasible rules, and they do not consider undecided conflicts
or reasoning about priorities. Thus their method is better suited
for representing coherent legal texts than for modelling legal argumentation.
Evaluation
The DART system has a heuristic flavour in that several of its features -
such as the ordering in strength of the various argument types - are given
no theoretical explanation by Freeman & Farley. For example, it would be
interesting to know why for default and evidential rules a modus tollens
argument is weaker than a modus ponens argument, while for sufficient
rules these argument types are equally strong, while yet in all these cases
modus tollens is regarded as logically possible. Furthermore, the fact that
with default and evidential rules abduction and a contrario are just as
strong as modus tollens seems in need of explanation. Finally, the variants
of DART's dialogue game are not based on an explicit argumentation-
theoretic semantics, as is my dialogue game.
In regard to some other aspects DART is more restricted than my
system. For instance, DART's language does not have weak negation, and
the only sources of rule priorities are specificity and the ordering of three
epistemic types of rules; my system, by contrast, allows any partial ordering,
which moreover is subject to debate.
However, on the positive side, Freeman & Farley give an interesting
treatment of nondeductive argument types. In particular, although they
regard such argument types not as heuristics (as I do) but as inference
modes, they are weIl aware that a proper treatment of nondeductive argu-
ments should not only define how they can be constructed but also how they
can be attacked. In my opinion this is important both for the 'heuristic'
and the 'inferential' view on nondeductive arguments. Moreover, Freeman &
Farley's claim that different problem solving contexts may require different
levels of proof, or even different types of arguments, is very interesting and
deserves further study.
The final system that I shall discuss is Gordon's (1994; 1995) dialogue game
model of civil pleading. Since it is formally defined, using both standard
logic and an argumentation system, it is an excellent illustration of how
logic can be used as a tool in AI and law. Furthermore, compared to the
systems discussed above it contains a very important new feature, viz. the
regulated opportunity to introduce new information into a debate. For these
reasons I devote a longer discussion to the system.
General Ideas
The game is intended as a (normative) model of civil pleading, which is
the phase in a civil law case where the parties exchange arguments and
counterarguments to identify the issues that must be decided by the court.
Gordon models civil pleading as a formally defined dialogue game, in which
two parties (plaintiff and defendant) construct and attack arguments for a
claim according to certain discourse rules. Since Gordon intends his model
to be normative, these discourse rules do not consist of an actual system of
procedural civil law. In fact, they are inspired by Alexy's (1978) discourse
theory of legal argumentation, which is based on the idea that a legal
decision is just if it is the outcome of a fair and effective procedure for
resolving disputes. It contains such principles as 'No speaker may contradict him-
self', 'Every speaker must, on demand, justify an assertion, unless he can
justify withholding the justification' and 'Every justification must contain
at least one legal rule'. Gordon shows that this procedural view on legal
argumentation also has logical aspects. The resulting formal model is in
itself interesting as a contribution to legal philosophy, but for AI and Law
Gordon intends it to serve as a specification of a computer system mediating
between players of the game, i.e. a system assuring that the rules of the
game are not violated.
The Pleadings Game is not the only AI & Law system with a procedural
view on legal reasoning. In Hage et al. (1994) a procedural account is
given of Hart's distinction between clear and hard cases. Philosophically
this account is related to the Pleadings Game but technically it is quite
different, since it does not use an argument-based system but Hage &
Verheij's reason-based logic. For this reason I shall not discuss this work in
detail.
Although both Gordon's Pleadings Game and my proof theory have
the form of a dialogue game, there is one crucial difference. While in my
game all moves are based on a fixed set of premises, which cannot be
altered by the players during the game, in the Pleadings Game the set
of premises is constructed dynamically: within the bounds defined by the
rules of the game, each player has the right to propose new premises during
the dispute. This makes the Pleadings Game a model of real disputes, in
which the parties rarely show all their cards at once but usually introduce
new statements and claims during the dispute, depending on the opponent's
moves. My game, by contrast, serves as a proof theory for an (argument-
based) nonmonotonic logic, i.e. it defines the defeasible consequences of a
given body of information. This body might be the joint premises at any
stage of a dispute, but it might also comprise the (incomplete or uncertain)
beliefs of a single person, or the norms of a single piece of legislation.
Since the Pleadings Game does not determine the consequences of a
given set of premises, but instead regulates how players can construct
the premises during a dispute, the Game needs to apply an independent
logical argumentation system for determining whether a claim is entailed by
these premises. As such a system Gordon uses the proof theory of Geffner
& Pearl's conditional entailment (see Section 9.2.7), which also serves to
define such notions as argument, counterargument and defeat. However, it
is important to note that the general architecture of the Pleadings Game
does not crucially rely on conditional entailment; any argumentation sys-
tem of sufficient expressiveness could be used, for instance, the system of
Chapters 6-8 of this book.
In the Pleadings Game the introduction of new premises is regulated
by definitions of new types of moves, and by discourse rules for when these
moves are possible. Let us first look at the moves.4
Types of Moves
If we disregard the first move of the game, then in my game the only
possible type of move is stating a counterargument to the other player's
previous move. Although this move is also possible in the Pleadings Game,
introducing premises gives rise to new types of moves. In fact, a new type
of move is not necessary for proposing, or "claiming", a new premise; this
can be modelled as the fact that a stated argument contains the new
premise. Only responding to a claim requires new moves: in particular,
a player can deny a newly proposed premise (a "claim"), which makes it
obligatory either to state an argument for the claim or to leave the claim
as an issue for trial, and a player can concede a claim, after which the
claim becomes part of the premises. Finally, a player can also concede an
argument. However, this just has the procedural role of giving up the right
to state a counterargument; the player conceding an argument can still
deny its premises.
How can these new types of moves in turn be answered? Obviously, a
concession need not be answered, but for a denial the situation is different.
Since a denial does not itself claim anything, a counterargument against a
denial is impossible. Instead the Pleadings Game gives two other possibil-
ities. Firstly, a denial can be answered by a denial of the denial ('no, my
premise is OK'), against which no response is possible and the claim is left
as an issue for trial. Alternatively, a denial of a claim can be answered by
an argument for the claim. Such an argument keeps the game going, since
the other party can (as just explained) produce a counterargument or deny
any new claims of the argument.
4For explanatory purposes I here (and at some other points) simplify the Pleadings
Game.
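To make the move repertoire concrete, here is a minimal sketch in Python; the move names and the table of legal replies are my own simplification of the informal description above, not Gordon's formal definitions.

```python
# A sketch of the Pleadings Game's move repertoire as described above.
# The move kinds and the reply table are illustrative simplifications.

from dataclasses import dataclass

@dataclass
class Move:
    kind: str        # 'argue', 'deny', 'concede' or 'deny_denial'
    content: object  # the claim or argument the move is about

# Which kinds of move may answer which: an argument can be met by a
# counterargument, or by denying or conceding its new claims; a denial
# can be met by an argument for the denied claim or by a denial of the
# denial (leaving the claim as an issue for trial); concessions and
# denials of denials need no answer.
LEGAL_REPLIES = {
    'argue':       {'argue', 'deny', 'concede'},
    'deny':        {'argue', 'deny_denial'},
    'deny_denial': set(),
    'concede':     set(),
}

def legal_reply(reply: Move, previous: Move) -> bool:
    return reply.kind in LEGAL_REPLIES[previous.kind]

claim  = Move('argue', 'Contract')
denial = Move('deny', 'Contract')
assert legal_reply(denial, claim)
assert legal_reply(Move('argue', 'Contract'), denial)
assert not legal_reply(Move('concede', 'Contract'), denial)
```

A mediating system of the kind Gordon envisages would, roughly, check each incoming move against such a table before admitting it to the record.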
An Example
Let us illustrate the Pleadings Game with an example. For the sake of
illustration I simplify the Game in several details, and use a different (and
semiformal) notation. For notational convenience each formula is numbered,
while the move indicators are in bold.
The example concerns a dispute on offer and acceptance of contracts; the
legal background is the Dutch Civil Code, Sections 6:217-230. The players
are called plaintiff (π) and defendant (δ). Plaintiff, who had made an offer
to defendant, starts the game by claiming that a contract exists. Defendant
denies this claim, after which Plaintiff supports it with the argument that
defendant accepted his offer and that an accepted offer creates a contract.
π1: Argue[ (1) Contract ]
δ1: Deny(1)
π2: Argue[ (2) Offer, (3) Acceptance,
    (4) Offer ∧ Acceptance ⇒ Contract ]
Now defendant attacks plaintiff's supporting argument [2,3,4] by defeating
its subargument that he accepted the offer. The counterargument says that
defendant sent his accepting message after the offer had expired, for which
reason there was no acceptance in a legal sense.
δ2: Concede(2,4), Deny(3)
    Argue[ (5) "Accept" late, (6) "Accept" late ⇒ ¬ Acceptance ]
Plaintiff replies with a counterargument: defendant nevertheless recognized
his late message as a valid acceptance. He also denies rule (6).

π3: Concede(5), Deny(6)
    Argue[ (7) "Accept" recognized,
    (8) "Accept" late ∧ "Accept" recognized ⇒ Acceptance ]

Defendant concedes rule (8) and plaintiff's argument as a whole, but denies
that the message was recognized. Plaintiff disagrees with defendant's denial
by simply denying it and not stating a supporting argument for his claim,
after which the game terminates.
δ3: Concede(8,[5,7,8]), Deny(7)
π4: Deny(Deny(7))
This game has resulted in the following dialectical graph (since it is a
dialectical graph, the terms 'proponent' and 'opponent' are appropriate).

P1: [2,3,4] for Contract
  O1: [5,6] for ¬ Acceptance
    P2: [5,7,8] for Acceptance
The claims in this graph that have not been conceded are
(1) Contract
(3) Acceptance
(6) "Accept" late ⇒ ¬ Acceptance
(7) "Accept" recognized
So these are the issues.5 Moreover, the set of premises constructed during
the game, i.e. the set of conceded claims, is {2, 4, 5}. It is up to the judge
whether or not to extend it with the issues (6) and (7). In either case the
proof theory of conditional entailment must be used to verify whether the
other two issues, in particular plaintiff's main claim (1), are (defeasibly)
implied by the resulting premises. In fact, it is easy to see that they are
entailed only if (6) and (7) are added.
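Taking the three arguments in the graph at face value, i.e. assuming the judge grants the issues they rest on, their dialectical status follows by the usual recursion: an argument is 'in' iff every argument attacking it is 'out'. A sketch (the labels are mine):

```python
# Status computation for the example's dialectical graph (labels mine):
# an argument is 'in' iff every argument attacking it is 'out'.

attackers = {
    'P1': ['O1'],  # [2,3,4] for Contract
    'O1': ['P2'],  # [5,6] for not-Acceptance, attacking P1
    'P2': [],      # [5,7,8] for Acceptance, attacking O1
}

def is_in(arg: str) -> bool:
    # the graph here is a finite chain, so the recursion terminates
    return all(not is_in(att) for att in attackers[arg])

assert is_in('P2')      # unattacked, so in
assert not is_in('O1')  # defeated by P2
assert is_in('P1')      # reinstated: plaintiff's argument survives
```

This mirrors the observation just made that plaintiff's main claim is entailed only if the remaining issues are decided in his favour.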
It is important to note that the above outcome of the game is completely
contingent, since at each turn the players might have introduced, conceded,
or denied different claims. This once more illustrates the difference between
using a dialogue game as a theory construction game or as a proof theory
for nonmonotonic reasoning.
Finally, in the above sketch I have ignored several interesting aspects of
the Pleadings Game. For instance, one concern of Gordon is to support an
efficient reasoning process, to formalise part of the 'effectiveness' require-
ment of procedural justice. For the details, and for other interesting aspects
of the Game, I refer to Gordon's own discussions.
Evaluation
How must we evaluate Gordon's Pleadings Game in the light of Chapters 6-
9? We have seen that at several specific points criticism is possible, for
instance with respect to Gordon's use of conditional entailment, which
hardwires specificity in the logic and which prevents a proper treatment
of examples like Example 6.3.3. Moreover, it seems to me that the set of
5More accurately, the third issue is the claim that rule (6) is backed.
moves of the game should also include the possibility to retract claims, since
in real legal disputes this is what the parties often do.
However, more important is Gordon's positive contribution. He has
given an excellent illustration of how (deductive and nonmonotonic) logic
can be used as a tool in modelling legal argument. It is used to capture
notions such as 'argument', 'counterargument' and 'defeat', and to deter-
mine the defeasible consequences of the set of premises constructed during
a dispute. In addition, Gordon has shown that the discourse rules that use
these notions can also be formalized, although interestingly he does not
formalize these rules in logic but as a procedure, with notions from game
theory.
Finally, Gordon's Pleadings Game is a clear example of the proper
role of standard logic and logical argumentation systems in models of
argumentation. However, since this amounts to an evaluation of a main
contribution of this book, I shall now explain this further in a separate
section.
The connection between the dialectical and procedural layer also sheds
light on some early discussions in legal philosophy on the defeasibility of
legal reasoning (originating from before the rise of nonmonotonic logic!).
Hart (1949) (extensively discussed in Baker, 1977 and Loui, 1995) puts
defeasibility in the pragmatic context of legal procedures. Very often in
a legal case, when the proponent of a claim has proven facts that could
lead to granting the claim, this does not have the effect that the case is
settled; instead the burden of proof is shifted to the opponent, whose turn
it then is to prove additional facts which, despite the facts proven by the
proponent, nevertheless prevent the claim from being granted. Clearly this
feature of legal procedures cannot be understood without admitting that
legal reasoning is logically nonmonotonic: if the conclusions supported by
the proponent's evidence were deductively valid, new information could
never change the result of the procedure, so a shift of the burden of proof
would be pointless. Yet, Hart does not comment on the implications for
logic (but see later MacCormick, 1995): although he discusses defeasibility
in terms that later became very common in AI to defend the development
of nonmonotonic logics7, he just regards it as an aspect of legal procedures;
he does not discuss how it can be reconciled with the view that judicial
7 As illustrated in detail by Loui (1995).
reasoning is still subject to the laws of logic. The present study (and related
work, such as Sartor, 1995) gives insight into how such a reconciliation
is possible: the procedural level of legal argumentation presupposes not
only a logical level but also a dialectical level, at which arguments can
be defeated by stronger counterarguments, resulting in a nonmonotonic
notion of logical, or 'argumentative' consequence. Perhaps this analysis
meets MacCormick's (1995, p. 114) call to logicians to develop systems
that capture the pragmatic nature of legal defeasibility.
Interestingly, similar observations have been made by Baker (1977) in
his discussion of Hart (1949), albeit in purely informal terms. He proposes
the idea of a "C-relation" between premises and conclusions of an argu-
ment, which is in fact a relation of nonmonotonic consequence. Baker then
remarks that the effect of establishing a C-relation between a set of known
premises and a claim is that the burden of proof shifts to the person who
challenges the claim, to find further evidence which defeats the C-relation
established by his opponent. The present analysis can also be regarded as
a formalization of Baker's observations.
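As an entirely illustrative sketch (my own, not Baker's apparatus), a C-relation and the resulting shift of the burden of proof can be rendered as follows; the facts and defences are hypothetical:

```python
# Toy rendering of a C-relation: proven facts defeasibly support a
# claim, so the burden shifts to the challenger, who must prove
# additional facts that defeat the relation. All names are hypothetical.

DEFEATERS = {'duress', 'fraud'}  # hypothetical defences

def claim_stands(proven_facts: set) -> bool:
    supported = {'offer', 'acceptance'} <= proven_facts
    return supported and not (proven_facts & DEFEATERS)

facts = {'offer', 'acceptance'}
assert claim_stands(facts)       # C-relation established; burden shifts

facts.add('duress')              # challenger proves an additional fact
assert not claim_stands(facts)   # the C-relation is defeated
```

A monotonic (deductive) support relation could not behave this way, since adding a premise would never invalidate the conclusion; this is exactly the point made above about Hart's observations.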
In fact, besides the logical, dialectical and procedural level we can even
identify a fourth level of what may perhaps be called strategy, at which
strategies and tactics for playing the game are identified. Note that a
procedural model like the Pleadings Game only defines when a dispute
has been conducted according to procedural rules, just as the rules of chess
only define when a game of chess has been played according to the rules.
However, just as being a good chess player requires much more than
knowing the rules of the game, being a good legal debater requires much
more than knowing the rules of procedural justice.
Now one interesting challenge for legal philosophy and AI-and-law is to
study what are good ways to conduct a legal dispute. Perhaps in philos-
ophy part of the research of Perelman (1969; 1976) can already be regarded
as of this kind. And some AI-and-law projects also study argumentation
heuristics and strategies. For instance, the 'prototype-and-deformations'
model underlying McCarty's TAXMAN II system (see Section 3.5.2) can be
regarded as an argumentation strategy. And the HYPO system studies 3-ply
strategies for analogical (and a contrario) reasoning with legal precedents,
while Rissland & Skalak (1991) and Skalak & Rissland (1992) extend this
research to combined reasoning with rules and cases. In fact, if my view
in Section 2.3 on analogical reasoning is correct, viz. that it is not a form
of inference but a heuristic for suggesting new premises, then the level at
which analogy should be addressed is the strategic level; and this is indeed
the level at which the projects just mentioned address it.
CONCLUSION
Assuming that logic can provide theoretical foundations for Artificial In-
telligence research, this book has aimed at giving a logical analysis of
two important aspects of legal reasoning which are sometimes believed
to escape such an analysis: of defeasible reasoning, i.e. of reasoning with
rules which are implicitly subject to exceptions, and of reasoning with
inconsistent information. A secondary aim has been to clarify the role of
logic in legal reasoning, particularly, to show that logic can also be useful in
the analysis of noninferential kinds of reasoning, like analogical reasoning.
Both aims have been achieved, as I will summarize in this chapter. The
observations on the role of logic in legal reasoning are not new: what I
have mainly done is to make them more specific for AI-and-law research,
in order to avoid misunderstandings on the nature of my investigations.
And the credits for showing that defeasible reasoning and reasoning with
inconsistent information can be logically analyzed should also not go to me;
my research has been a contribution to developments initiated by others,
partly by applying these developments to the legal domain and partly by
adding something new to the developments themselves.
11.1. Summary
To break the ground for the main investigations of this book, I started in
Chapter 2 with a discussion of the role of logic in legal reasoning. Both in
legal theory and AI-and-law research the usefulness of logic in analysing
legal reasoning has been disputed. It became apparent that some of the
arguments raised against logic are based on misconceptions of what logic
is and how it can be used. Other doubts on logic, however, turned out to
be based on the idea that the kinds of reasoning which are traditionally
studied by logic are the only ones which can be logically analyzed: partic-
ularly reasoning with rules which are subject to exceptions, and nontrivial
reasoning with inconsistent information would fall outside the scope of a
logical analysis. And since, because of the open, unpredictable nature of
the world to which the law applies, and the many competing interests and
opinions involved in legal disputes, these kinds of information are abundant
in the legal domain, a logical analysis of legal reasoning would be of little
H. Prakken, Logical Tools for Modelling Legal Argument
© Springer Science+Business Media Dordrecht 1997
use, so it is said.
In Chapter 3 we saw in detail that legal reasoning often indeed operates
on defeasible and inconsistent information. In addition, we saw that the
way in which legal texts separate general rules from exceptions cannot be
accounted for with standard logical means. However, much of the rest of
this book was devoted to showing that these phenomena do not escape a
logical analysis at all, while, moreover, they are logically related to each
other. First, in Chapter 4, I gave a brief sketch of new logical developments
on modelling the two investigated kinds of reasoning, most of which are
the result of AI research on modelling common-sense reasoning. After that
I studied the application of some of the new developments to the legal
domain. It appeared that reasoning with rules which are subject to ex-
ceptions can be modelled in two ways: firstly, as reasoning with explicit
exception clauses which are assumed false unless the contrary is shown,
which method was investigated in Chapter 5; and secondly, as choosing the
most specific of conflicting conclusions. After observing that this second way
of modelling defeasible reasoning is in fact a special case of reasoning with
inconsistent information, I made in Chapters 6, 7 and 8 a contribution
to the new logical developments themselves: I showed that both defeasible
reasoning and inconsistency tolerant reasoning can be modelled as instances
of the process of constructing and comparing arguments for incompatible
conclusions. Although the main source of inspiration for this part of the
research has been the legal domain, it is stated in a sufficiently general way
to make it a contribution to general AI research on modelling common-sense
reasoning.
In Chapter 9 it turned out that the system was an instance of a new,
argument-based development in AI research on nonmonotonic reasoning,
and in Chapter 10 I applied both this general approach and my particular
system to various issues, in particular to issues in knowledge representation
and implementation, to Toulmin's criticism of standard logic, and to the
role of logic in noninferential forms of reasoning. Then I tried to justify
the assumption of this book that logic can be used as a standard for imple-
mented systems, by using my system, and the general idea of argumentation
systems, in a logical analysis of some implemented AI-and-law systems.
Finally, I discussed how from this book a four-layered picture of legal
argumentation has emerged, connecting the logical, dialectical, procedural
and strategic aspects of legal reasoning.
11.2. Main Results

As the discussion in Section 1.3 already showed, the role of logic in reasoning
is not only discussed in the legal domain, but also in AI and philosophy in general;
and, moreover, the patterns of reasoning which have been analyzed can not
only be found in law, but also in many other domains of common-sense
reasoning.
With respect to the role of logic in legal reasoning the main conclusions
are that using logic does not commit to the 'axiomatic' or even to the
'naive deductivist' view on reasoning. That is, logic does not commit to
the narrow view on reasoning as no more than running a sound deductive
theorem prover over formulas in some logical language: it leaves room for
other reasoning activities, like induction, analogical reasoning and ways of
arguing against a rule. Logic even plays a positive role in the description
of these activities, since it defines the logical meaning of the formalisms
on which they operate; to put it another way, these activities use logic as
a tool, since they aim at suggesting or rejecting information with which a
desired or undesired conclusion can be derived, and deriving conclusions is
a matter of logic.
In terms of legal reasoning, logic does not enforce a mechanical appli-
cation of statute norms and established precedent, without any regard to
considerations of justice and fairness or to socio-political demands. The
simple reason is that the force of a logical conclusion is ultimately based
on the force of the premises: if the conclusion is not accepted, then the
premises can be changed and these changes can be based on any ground
whatsoever. The new developments discussed in this book even account for
an additional possibility: while in classical logic the validity of a conclusion
can only be affected by removing or changing premises, in nonmonotonic
logics this can also be done by adding new premises.
DEFEASIBLE REASONING
It has become apparent that nonmonotonic logics are able to cope with
rules which are implicitly subject to exceptions. They can do so since they
deviate in one remarkable respect from standard logic: in these logics the
mere addition of new premises without changing or removing old ones
can invalidate previously valid conclusions. Of course, what falls outside
the scope of nonmonotonic logics is deciding that a newly considered case
gives rise to an exception, since this is a matter of content; the advantages
of nonmonotonic logics are of a formal nature: they provide for ways of
jumping to general conclusions if not all information about exceptions
is available and for ways of drawing the exceptional conclusion if a new
case does turn out to be exceptional.
One of the points of departure of this book was that logic should be
regarded as a tool in reasoning. For instance, we have seen several times that
interpretation, although being an extra-logical activity, still presupposes
logic, viz. when the consequences of alternative interpretations are tested.
Now Chapters 6-8 have formalized certain kinds of logical tools for legal
argument, viz. those useful when legal knowledge is incomplete or uncertain,
or when lawyers have conflicting points of view. The four-layered view
on legal argumentation developed in Section 10.5, distinguishing a logical,
dialectical, procedural and strategic layer, reveals the nature of these tools:
they guide those forms of reasoning that are aimed at obtaining a certain
dialectical outcome. For instance, at the third, procedural level, we have
seen that Gordon's Pleadings Game allows only those new arguments that
have the potential to change the dialectical status of a claim. And in
Section 10.3.2 we have seen that the fourth, strategic layer studies strategies
for obtaining a certain result at the dialectical level.
The above conclusions mainly concerned the logical and philosophical as-
pects of the investigations, but with respect to applications to knowledge-
based systems some conclusions can also be drawn. One of them is that
in modelling defeasible reasoning the choice approach is computationally
more complex than the exception clause approach, but is also more widely
applicable. The main reason for its complexity is the need to compare and
choose between conflicting answers; by contrast, the aim of the exception
clause approach is to avoid conflicting answers, which leads to a simpler
problem solving task. However, unique answers can be obtained only on
restricted domains of application: particularly reasoning with inconsistent
information and metalevel reasoning rapidly need the possibility to present
alternative arguments and to argue about their strength. In conclusion, as
so often in AI research a trade-off is necessary between expressiveness and
tractability.
For knowledge engineering an important conclusion has been that when
it comes to a modular way of designing knowledge bases, the various ways
of representing exceptions do not differ much: in all formalization methods
and all formalisms it is better to keep an overview of the entire domain
when translating an individual expression: particularly, preserving the separation
of rules and exceptions does not necessarily support a modular formaliza-
tion process. For using exception clauses and priorities similar observations
were already made by others, but with respect to using a syntactic or
semantic specificity check these conclusions seem to be new.
It has not been the aim of this book to make a comparative study of ways of
modelling nonmonotonic reasoning, particularly not to advocate one way
as the best. In fact, it might even be argued that there is no such best
way: it might be that there is no unique kind of reasoning which can be
called 'nonmonotonic reasoning', but that there is a variety of reasoning
patterns of which one common feature is nonmonotonicity but which can
differ in many other respects. In this view, argumentation systems are more
a formalization of some of these reasoning patterns than a general proposal
on how to model nonmonotonic reasoning.
Nevertheless, from my investigations some conclusions on this issue
can still be drawn. Firstly, to my knowledge the law is one of the few
domains in which nonmonotonic logics have been tested on more than just
toy examples, and in this respect it is perhaps significant that especially
argument-based systems have turned out to be suitable. And if this obser-
vation is combined with the fact that general argumentation frameworks,
like the BDKT approach, are able to unify existing nonmonotonic logics,
the conclusion can be drawn that the argument-based approach to the
formalization of nonmonotonic reasoning is at least very promising.
Some more specific conclusions can also be drawn. To start with, we have
seen that a formalization of reasoning with priorities on premises, which
is an important kind of nonmonotonic reasoning, needs more sophisticated
tools than is often presumed; particularly, there should be a way to account
for the step-by-step nature of comparing premises. We have also shed more
light on the role of the specificity principle in nonmonotonic reasoning.
We have seen that as a notational convention for exceptions it does not
induce a modular formalization process, since the specificity form often
just records a choice on other grounds, requiring a global view on the
available information. And we have seen that specificity is not very useful
LOGICAL FOUNDATIONS OF AI
Coming back to the general claim that logic can provide formal foundations
of AI research, we can say that the present study also has implications
for the nature of this claim. The reason is that this book has been an
example of a somewhat different use of logic, in that I have used a logi-
cal system, default logic, as a component of a larger construct, which is
also defined in a formal way. This is in line with the point of departure
formulated in Section 1.4 that not only standard logical but also larger
formal constructs can specify the semantics of reasoning architectures. My
theory is by no means the only example of this way of using logicj aH
argumentation systems are of this kind, as weH as procedural models like
Gordon's Pleadings Game, and, for instance, belief revision theory and
recent research on metalevel reasoning. In fact, this even holds for the
investigations of Poole and Brewka: although I have criticized their views
on how to model nonmonotonic reasoning, I think an interesting aspect of
their research is the general idea of embedding logical systems in larger
formal theories, instead of forcing everything into the traditional form of a
logical system with one logical language, interpreted in a model-theoretic
semantics, and with one inferential system.
ARGUMENTATION THEORY
The four-layered view on argumentation that has emerged from this book
is not restricted to the legal domain, but is in fact a contribution to general
argumentation theory. As noted in Section 10.5, in this field the procedural
aspects of argumentation have already been studied. At an informal level
this is done by e.g. the pragma-dialectical school, mentioned above in
Section 10.5, while, moreover, a field of 'formal dialectics' has evolved,
studying formal systems of procedural rules for dialogues; see e.g. Walton
& Krabbe (1995). The relevance of the present study for this field is that
it shows how the procedural level of argumentation presupposes not only
a logical level (as formal dialecticians are well aware) but also a dialectical
level.
DEONTIC LOGIC

11.4. Suggestions for Further Research

The work on argumentation systems in the second part of this book has
generated various issues for further research. In discussing them the four-
layered picture of argumentation is useful. If we first look inside the dialecti-
cal level, then one logical issue that needs further study is the formalization
of accrual of arguments. Although in Section 7.5.4 I sketched a way of deal-
ing with this phenomenon within my system, based on accrual 'by hand',
a more detailed comparison with alternative treatments, in particular with
the accrual 'by default' approach of Verheij (1996), is necessary before
a final evaluation can be given. Another required comparison between
alternatives concerns the proper formalization of reasoning about priorities
(see Section 8.6). While in my method the priorities are transported to
the metatheory of the system, the methods of Hage (1996) and Kowalski
& Toni (1996) keep the priorities inside the logical language and instead
extend the system's metatheory with other metalogical features. An in-
teresting technical research issue is the development of dialectical proof
theories for other semantics than just well-founded semantics. Preliminary
research is reported in Prakken (1997) but much work remains to be done.
Finally, as for knowledge representation, I have argued in Section 9.3.2
that it would be interesting to incorporate Hage & Verheij's work on legal
knowledge representation - which was done within an extension-based
system - in applications of my system, which is argument-based.
If we leave the dialectical level, we see that the four-layered view of argu-
mentation suggests various interesting research topics. As for the connection
between the dialectical and procedural level, it would be interesting to
formalize reasoning about procedural rules. Here the situation is analogous
to reasoning about the standards for comparing arguments since, just as
for these standards, the procedural rules for legal argumentation are also
not fixed, but debatable. Therefore, analogously to the formalization in
Chapter 8 of reasoning about priorities, it would be interesting to study
how these procedural rules could not only determine but also be the result
of the argumentation process. In legal philosophy this phenomenon of self-
modifying legal procedures has been extensively studied by Suber (1990).
In AI (and law) Hage et al. (1994) and Vreeswijk (1996) have studied the
logical aspects, but they leave much work to do.
Finally, as mentioned at the end of Chapter 10, the connection between
the procedural and strategic level also raises interesting research issues,
both for AI-and-law and for legal philosophy. While the procedural level just
defines the rules of a game, the strategic level defines how the game can be
played well; now a very challenging (but also difficult) task is to identify and
then, if possible, to formalize the strategies, or 'heuristics' for good and bad
arguing. A (modest) example of this research is Prakken & Sartor (1997b),
which is based on this book's 'heuristic' account of analogical reasoning, and
in which this book's dialogue game is used as the dialectical core of a HYPO-
style protocol for analogical reasoning with legal precedents. It would be
interesting to compare this 'heuristic' approach to non-deductive argument
forms with approaches such as the one of Freeman & Farley (1996), in which
such argument forms, and the ways to attack them, are defined not at the
strategic but at the dialectical level.
APPENDIX A
Symbols:
¬   Not
∧   And
∨   Or
→   Material Implication
↔   Material Equivalence
⊤   True (φ → φ)
⊥   False (φ ∧ ¬φ)
⊢   Provability
⊨   Entailment
⊬   Non-Provability
⊭   Non-Entailment
∃   Existential quantifier (There Exists)
∀   Universal quantifier (For All)
Abbreviations:
Th(A) The deductive closure of A
I(t) The interpretation of the term t
I(P)   The interpretation of the predicate P
wff 'well-formed formula'
iff 'if and only if'
Some notational conventions:
p, q, r, ...   Metavariables for atomic formulas
φ, ψ, χ        Metavariables for any formula
P, Q, R, ...   Predicate constants
A, B, C, ...   Predicate constants
a, b, c, ...   Object constants
x, y, z        Object variables
φ1 ∧ ... ∧ φn → ψ  =  (φ1 ∧ ... ∧ φn) → ψ
∀x1, ..., xn . φ1 ∧ ... ∧ φn → ψ  =  ∀x1, ..., xn((φ1 ∧ ... ∧ φn) → ψ)
SET THEORY
Symbols:
∈   Element
∉   Not an Element
∩   Set Intersection
∪   Set Union
⊂   Subset
⊃   Superset
∅   Empty Set
∞   Infinity
Notational conventions:
∪ i=n,...,m Si      Sn ∪ ... ∪ Sm
{x ∈ S | ...x...}   The set of all x ∈ S such that ...x...
Reflexivity:    ∀x Rxx
Irreflexivity:  ∀x ¬Rxx
Transitivity:   ∀x, y, z((Rxy ∧ Ryz) → Rxz)
Antisymmetry:   ∀x, y((Rxy ∧ Ryx) → x = y)
Asymmetry:      ∀x, y ¬(Rxy ∧ Ryx)
Linearity:      ∀x, y(Rxy ∨ Ryx ∨ x = y)
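For a finite relation, the properties listed above can be checked mechanically. The following is a small illustrative sketch (not from the book; all function names are mine), representing a relation R over a domain S as a set of pairs:

```python
def reflexive(R, S):
    return all((x, x) in R for x in S)

def irreflexive(R, S):
    return all((x, x) not in R for x in S)

def transitive(R, S):
    # For every Rxy and Ryz, Rxz must hold.
    return all((x, z) in R
               for (x, y1) in R for (y2, z) in R if y1 == y2)

def antisymmetric(R, S):
    return all(x == y for (x, y) in R if (y, x) in R)

def asymmetric(R, S):
    return not any((y, x) in R for (x, y) in R)

def linear(R, S):
    return all((x, y) in R or (y, x) in R or x == y
               for x in S for y in S)

# The ordinary numeric order on a three-element domain is
# reflexive, transitive, antisymmetric and linear:
S = {1, 2, 3}
R = {(x, y) for x in S for y in S if x <= y}
```

Running the checks on R confirms, for instance, that a reflexive relation cannot be asymmetric.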
ABBREVIATIONS
x ≰ y  =  not x ≤ y (likewise for the following symbols)
x ≥ y  =  y ≤ x
x < y  =  x ≤ y and y ≰ x
x > y  =  y < x
x ≈ y  =  x ≤ y and y ≤ x
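These derived comparison symbols are all definable from ≤ alone, as the following sketch illustrates (an illustration of the definitions above, not code from the book; the function names are mine, and the ordinary numeric order stands in for ≤):

```python
def leq(x, y):        # x ≤ y: here simply the numeric order
    return x <= y

def not_leq(x, y):    # x ≰ y  =  not x ≤ y
    return not leq(x, y)

def geq(x, y):        # x ≥ y  =  y ≤ x
    return leq(y, x)

def lt(x, y):         # x < y  =  x ≤ y and y ≰ x
    return leq(x, y) and not leq(y, x)

def gt(x, y):         # x > y  =  y < x
    return lt(y, x)

def approx(x, y):     # x ≈ y  =  x ≤ y and y ≤ x
    return leq(x, y) and leq(y, x)
```

For a preorder that is not antisymmetric, ≈ can hold between distinct elements, which is why the book distinguishes ≈ from equality.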
NOTATIONS, ORDERINGS AND GLOSSARY 289
A4. Glossary
Arity - The arity of a predicate is the number of arguments of the
predicate.
Atomic formula - Formula not composed of other formulas.
Closed formula - A formula without free variables.
Completeness - A proof system of a logic is complete with respect to
the semantics of the logic iff every formula entailed by the premises
is provable from the premises.
Provability - A formula φ is provable from a set of premises T in a logic
iff the proof system of the logic sanctions a proof of φ from T.
Second-order logic - A system of predicate logic with the possibility to
quantify over predicates.
Semi-decidability - A logic is semi-decidable iff a procedure exists which
for every provable (or valid) formula is guaranteed to tell in a finite
number of steps that it is provable (or valid).
Soundness - A proof system of a logic is sound with respect to its
semantics iff every formula provable from the premises is entailed by
the premises.
Tautology - A theorem of propositional logic.
Theorem - A formula is a theorem of a logic iff it is provable without
premises.
Theory - A deductively closed set of wff's.
Term - An expression which can be an argument of a predicate letter,
i.e. a constant, variable or function symbol.
Validity - A formula φ is valid relative to a logic iff it is true in all models
of the logic.
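The glossary notions of deductive closure Th(A), provability and theoremhood can be made concrete for a tiny fragment. The following sketch (my own illustration, not from the book) computes the closure of a set of atomic premises under propositional Horn rules, where a rule is a pair (body, head) licensing the inference of `head` once every atom in `body` is derived:

```python
def closure(facts, rules):
    """Th(A): the set of atoms derivable from `facts` with `rules`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def provable(atom, facts, rules):
    return atom in closure(facts, rules)

# From p infer q; from q infer r.
rules = [({'p'}, 'q'), ({'q'}, 'r')]
```

Here 'r' is provable from the premise set {'p'}, but it is not a theorem, since it is not provable from the empty premise set.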
References
Alchourrón, C.E. & Bulygin, E. 1971. Normative Systems. Wien-New York: Springer
Verlag.
Alchourrón, C.E. & Bulygin, E. 1984. Pragmatic foundations for a logic of norms.
Rechtstheorie 15, 453-464.
Alchourrón, C.E. & Makinson, D. 1981. Hierarchies of regulations and their logic. In New
Studies in Deontic Logic, ed. R. Hilpinen, 125-148. Dordrecht: Reidel.
Alexy, R. 1978. Theorie der juristischen Argumentation. Die Theorie des rationalen
Diskurses als eine Theorie der juristischen Begründung. Frankfurt am Main:
Suhrkamp Verlag. (in German)
Allen, L.E. 1963. Beyond document retrieval toward information retrieval. Minnesota
Law Review 47, 713-767.
Allen, L.E. & Saxon, C.S. 1991. More IA needed in AI: interpretation assistance for
coping with the problem of multiple structural interpretation. Proceedings of the
Third International Conference on Artificial Intelligence and Law, 53-61. New York:
ACM Press.
Asher, N. & Bonevac, D. 1997. Common sense obligation. In Defeasible Deontic Logic.
Essays in Nonmonotonic Normative Reasoning, ed. D.N. Nute, 159-204. Dordrecht:
Kluwer, Synthese Library.
Asher, N. & Morreau, M. 1990. Commonsense entailment: a modal theory of nonmono-
tonic reasoning. Proceedings of JELIA 1990. Lecture Notes in Artificial Intelligence
478, 1-30. Berlin: Springer Verlag.
Ashley, K.D. 1990. Modeling Legal Argument: Reasoning with Cases and Hypotheticals.
Cambridge, MA: MIT Press.
Ashley, K.D. & Rissland, E.L. 1987. But, See, Accord: Generating "Blue Book" citations
in HYPO. Proceedings of the First International Conference on Artificial Intelligence
and Law, 67-74. New York: ACM Press.
Åqvist, L. 1977. Legal wrongfulness as a prerequisite for liability in tort. In Deontis-
che Logik und Semantik, eds. A.G. Conte, R. Hilpinen & G.H. von Wright, 9-19.
Wiesbaden: Athenaion.
Baker, G.P. 1977. Defeasibility and meaning. In Law, Morality, and Society. Essays in
Honour of H.L.A. Hart, eds. P.M.S. Hacker & J. Raz, 26-57. Oxford: Clarendon
Press.
Barth, E.M. & Krabbe, E.C.W. 1982. From Axiom to Dialogue: a Philosophical Study of
Logic and Argumentation. New York: Walter de Gruyter.
Bench-Capon, T.J.M. 1993. Neural networks and open texture. Proceedings of the Fourth
International Conference on Artificial Intelligence and Law, 292-297. New York:
ACM Press.
Bench-Capon, T.J.M. & Sergot, M.J. 1985. Towards a rule-based representation of open
texture in law. In Computing Power and Legal Reasoning, ed. C. Walter, 39-60. St.
Paul, Minn.: West Publishing Co.
Bench-Capon, T.J.M. & Coenen, F.P. 1992. Isomorphism and legal knowledge based
systems. Artificial Intelligence and Law 1, 65-86.
Benthem, J. van 1995. Logic and argumentation. Proceedings of the Third ISSA Con-
ference on Argumentation. Volume I: Perspectives and Approaches, eds. F.H. van
Eemeren, R. Grootendorst, J.A. Blair & C.A. Willard, 18-31. Amsterdam: Sic Sat.
Berman, D.H. & Hafner, C.D. 1987. Indeterminacy: a challenge to logic-based models of
legal reasoning. In Yearbook of Law, Computers and Technology, Vol. 3, 1-35. London:
Butterworths.
Beth, E.W. 1965. Mathematical Thought. An Introduction to the Philosophy of Mathe-
matics. Dordrecht: Reidel.
Bidoit, N. 1991. Negation in rule-based database languages: a survey. Theoretical Com-
puter Science 78, 3-83.
Bing, J. 1992. Book review of Kevin D. Ashley, Modeling Legal Argument: Reasoning with
Cases and Hypotheticals. Artificial Intelligence and Law 1, 103-107.
Birnbaum, L. 1991. Rigor mortis: a response to Nilsson's "Logic and artificial intelli-
gence". Artificial Intelligence 47, 57-77.
Bondarenko, A., Dung, P.M., Kowalski, R.A. & Toni, F. 1997. An abstract
argumentation-theoretic approach to default reasoning. To appear in Artificial In-
telligence.
Brewka, G. 1989. Preferred subtheories: an extended logical framework for default
reasoning. Proceedings of the Eleventh International Joint Conference on Artificial
Intelligence, 1043-1048.
Brewka, G. 1991a. Nonmonotonic Reasoning: Logical Foundations of Commonsense.
Cambridge: Cambridge University Press.
Brewka, G. 1991b. Cumulative default logic: in defense of nonmonotonic inference rules.
Artificial Intelligence 50, 183-205.
Brewka, G. 1994a. Adding priorities and specificity to default logic. Proceedings of the
Fifth European Workshop on Logics in Artificial Intelligence, Springer Lecture Notes
in AI 838, 247-260. Berlin: Springer Verlag.
Brewka, G. 1994b. A logical reconstruction of Rescher's theory of formal disputation
based on default logic. Proceedings of the Eleventh European Conference on Artificial
Intelligence, 366-370. Chichester: Wiley.
Brewka, G. 1994c. Reasoning about priorities in default logic. Proceedings of the Twelfth
National Conference on Artificial Intelligence, 247-260.
Brewka, G. 1996. Well-founded semantics for extended logic programs with dynamic
preferences. Journal of Artificial Intelligence Research 4, 19-36.
Brouwer, P.W. 1994. Legal knowledge representation in the perspective of legal theory.
In Legal Knowledge Based Systems. The Relation with Legal Theory, eds. H. Prakken,
A.J. Muntjewerff & A. Soeteman, 9-18. Lelystad: Koninklijke Vermande BV.
Bulygin, E. & Alchourrón, C.E. 1977. Unvollständigkeit, Widersprüchlichkeit und Unbes-
timmtheit der Normenordnungen. In Deontische Logik und Semantik, eds. A.G.
Conte, R. Hilpinen & G.H. von Wright, 20-32. Wiesbaden: Athenaion.
Bylander, T., Allemang, D., Tanner, M.C. & Josephson, J.R. 1991. The computational
complexity of abduction. Artificial Intelligence 49, 25-60.
Cadoli, M., Donini, F.M. & Schaerf, M. 1996. Is intractability of nonmonotonic reasoning
a real drawback? Artificial Intelligence 88: 215-251.
Clark, K. 1978. Negation as failure. In Logic and Databases, eds. H. Gallaire & J. Minker,
293-322. New York: Plenum Press.
Delgrande, J. 1988. An approach to default reasoning based on a first-order conditional
logic: revised report. Artificial Intelligence 36, 63-90.
Doyle, J. 1979. A Truth Maintenance System. Artificial Intelligence 12, 231-272.
Dung, P.M. 1993. An argumentation semantics for logic programming with explicit
negation. Proceedings of the Tenth Logic Programming Conference, 1993, 616-630.
Cambridge, MA: MIT Press.
Dung, P.M. 1994. Logic programming as dialogue games. Unpublished paper, Division of
Computer Science, Asian Institute of Technology, Bangkok.
Dung, P.M. 1995. On the acceptability of arguments and its fundamental role in non-
monotonic reasoning, logic programming, and n-person games. Artificial Intelligence
77, 321-357.
Dworkin, R.M. 1977. Is law a system of rules? In The Philosophy of Law, ed. R.M.
Guy Vandenberghe, eds. H.W.K. Kaspersen & A. Oskamp, 279-309. Deventer: Kluwer
Law and Taxation Publishers.
Opdorp, G.J. van, Walker, R.F., Schrickx, J.A., Groendijk, C. & Berg, P.H. van den,
1991. Networks at work. A connectionist approach to non-deductive legal reasoning.
Proceedings of the Third International Conference on Artificial Intelligence and Law,
278-287. New York: ACM Press.
Peczenik, A. 1990. Legal collision norms and moral considerations. In Coherence and
Conflict in Law. Proceedings of the 3rd Benelux-Scandinavian Symposium in Legal
Theory, eds. B. Brouwer et al., 177-197. Deventer: Kluwer Law and Taxation Pub-
lishers / Zwolle: Tjeenk Willink.
Pereira, L.M. & Alferes, J.J. 1992. Well-founded semantics for logic programs with explicit
negation. Proceedings of the Tenth European Conference on Artificial Intelligence,
102-106.
Perelman, Ch. 1976. Logique Juridique. Nouvelle Rhétorique. Dalloz.
Perelman, Ch. & Olbrechts-Tyteca, L. 1969. The New Rhetoric. A Treatise on Argumen-
tation. Notre Dame, Indiana: University of Notre Dame Press.
Plug, H.J. 1996. Complex argumentation in judicial decisions. Analysing conflicting argu-
ments. Proceedings of the International Conference on Formal and Applied Practical
Reasoning, Springer Lecture Notes in AI 1085, 464-479. Berlin: Springer Verlag.
Pollock, J.L. 1987. Defeasible reasoning. Cognitive Science 11, 481-518.
Pollock, J.L. 1995. Cognitive Carpentry. A Blueprint for How to Build a Person. Cam-
bridge, MA: MIT Press.
Poole, D.L. 1985. On the comparison of theories: Preferring the most specific explanation.
Proceedings of the Ninth International Joint Conference on Artificial Intelligence,
144-147.
Poole, D.L. 1988. A logical framework for default reasoning. Artificial Intelligence 36,
27-47.
Poole, D.L. 1991. The effect of knowledge on belief: conditioning, specificity and the
lottery paradox in default reasoning. Artificial Intelligence 49, 281-307.
Prakken, H. 1991a. A tool in modelling disagreement in law: preferring the most specific
argument. Proceedings of the Third International Conference on Artificial Intelligence
and Law, 165-174. New York: ACM Press.
Prakken, H. 1991b. Reasoning with normative hierarchies (extended abstract). Proceed-
ings of the First International Workshop on Deontic Logic and Computer Science,
315-334. Amsterdam.
Prakken, H. 1993. An argumentation framework in default logic. Annals of Mathematics
and Artificial Intelligence 9, 93-132.
Prakken, H. 1995a. A semantic view on reasoning about priorities (extended abstract).
Proceedings of the Second Dutch/German Workshop on Non-monotonic Reasoning,
Utrecht, 160-167.
Prakken, H. 1995b. From logic to dialectics in legal argument. Proceedings of the Fifth
International Conference on Artificial Intelligence and Law, 165-174. New York:
ACM Press.
Prakken, H. 1996a. Two approaches to the formalisation of defeasible deontic reasoning.
Studia Logica 57: 73-90.
Prakken, H. 1997. Dialectical proof theory for defeasible argumentation with defeasible
priorities (preliminary report). Proceedings of the 4th ModelAge Workshop 'Formal
Models of Agents', Certosa di Pontignano (Italy), 201-214.
Prakken, H. & Sartor, G. 1995a. On the relation between legal language and legal
argument: assumptions, applicability and dynamic priorities. Proceedings of the Fifth
International Conference on Artificial Intelligence and Law, 1-9. New York: ACM
Press.
Prakken, H. & Sartor, G. 1996a. A system for defeasible argumentation, with defeasi-
ble priorities. Proceedings of the International Conference on Formal and Applied
Practical Reasoning, Springer Lecture Notes in AI 1085, 510-524. Berlin: Springer
Verlag.
Prakken, H. & Sartor, G. 1996b. A dialectical model of assessing conflicting arguments
in legal reasoning. Artificial Intelligence and Law 4: 331-368.
Prakken, H. & Sartor, G. 1997a. Argument-based extended logic programming with
defeasible priorities. Journal of Applied Non-classical Logics 7, 25-75.
Prakken, H. & Sartor, G. 1997b. Reasoning with precedents in a dialogue game. Proceed-
ings of the Sixth International Conference on Artificial Intelligence and Law, 1-9.
New York: ACM Press.
Prakken, H. & Schrickx, J.A. 1991. Isomorphic models for rules and exceptions in
legislation. In Legal Knowledge-based Systems. Model-based Legal Reasoning, eds.
J.A. Breuker, R.V. de Mulder & J.C. Hage, 17-27. Lelystad: Koninklijke Vermande
BV.
Przymusinski, T. 1988. Perfect model semantics. In Logic programming: Proceedings of
the Fifth International Conference and Symposium, eds. R.A. Kowalski & K. Bowen,
1081-1096.
Raz, J. 1975. Practical Reason and Norms. Princeton University Press.
Reiter, R. 1978. On closed-world databases. In Logic and Databases, eds. H. Gallaire &
J. Minker, 119-140. New York: Plenum Press.
Reiter, R. 1980. A logic for default reasoning. Artificial Intelligence 13, 81-132.
Reiter, R. 1987. Nonmonotonic reasoning. Annual Reviews of Computer Science 2: 147-
186.
Reiter, R. & Criscuolo, G. 1981. On interacting defaults. Proceedings of the Seventh
International Joint Conference on Artificial Intelligence, 270-276.
Rescher, N. 1964. Hypothetical Reasoning. Amsterdam: North-Holland.
Rescher, N. 1977. Dialectics: a Controversy-oriented Approach to the Theory of Knowl-
edge. Albany, N.Y.: State University of New York Press.
Rissland, E.L. 1988. Artificial Intelligence and legal reasoning. A discussion of the field
& of Gardner's book. AI Magazine Fall 1988, 45-56.
Rissland, E.L. 1990. Artificial Intelligence and Law: stepping stones to a model of legal
reasoning. Yale Law Review Vol. 99, 1957-1981.
Rissland, E.L. & Ashley, K.D. 1987. A case-based system for trade secrets law. Proceedings
of the First International Conference on Artificial Intelligence and Law, 60-66. New
York: ACM Press.
Rissland, E.L. & Ashley, K.D. 1989. HYPO: A precedent-based legal reasoner. In Ad-
vanced Topics in Law and Information Technology, ed. G.P.V. Vandenberghe, 213-
234. Deventer/Boston: Kluwer Law and Taxation Publishers.
Rissland, E.L. & Skalak, D.B. 1991. CABARET: statutory interpretation in a hybrid
architecture. International Journal of Man-Machine Studies 34, 839-887.
Robinson, J. 1965. A machine-oriented logic based on the resolution principle. Journal
of the ACM 12, 23-41.
Roos, N. 1991. What is on the Machine's Mind? Models for Reasoning with Incomplete
and Uncertain Knowledge. Doctoral Dissertation Technical University Delft.
Roos, N. 1992. A logic for reasoning with inconsistent information. Artificial Intelligence
57,69-103.
Routen, T. & Bench-Capon, T.J.M. 1991. Hierarchical formalizations. International
Journal of Man-Machine Studies 35, 242-250.
Royakkers, L. & Dignum, F. 1996. Defeasible reasoning with legal rules. In Deontic Logic,
Agency and Normative Systems, eds. M.A. Brown & J. Carmo, 174-193. London:
Springer Workshops in Computing.
Sacksteder, W. 1974. The logic of analogy. Philosophy and Rhetoric Vol. 7, 234-252.
Sartor, G. 1991. The structure of norm conditions and nonmonotonic reasoning in law.
Proceedings of the Third International Conference on A rtificial Intelligence and Law,
155-164. New York: ACM Press.
Sartor, G. 1992a. Normative conflicts in legal reasoning. Artificial Intelligence and Law
2-3, 209-236.
Walker, R.F., Oskamp, A., Schrickx, J.A., Opdorp, G.J. van & Berg, P.H. van den, 1991.
Prolexs: creating law and order in a heterogeneous domain. International Journal of
Man-Machine Studies 35, 35-67.
Walton, D.N. & Krabbe, E.C.W. 1995. Commitment in Dialogue. Basic Concepts of
Interpersonal Reasoning. Albany, NY: State University of New York Press.
Wild, J.R. de & Quast, J.A., 1989. The concept of 'commensurate work' in a legal
knowledge-based system. Preproceedings of the 'Expert Systems in Law' Conference.
Bologna.
Wittgenstein, L. 1958. Philosophical Investigations. New York: MacMillan.
Wright, G.H. von, 1964. A new system of deontic logic. In Danish Yearbook of Philosophy
1, 173-182. Reprinted in Deontic Logic: Introductory and Systematic Readings, ed.
R. Hilpinen, 105-120. Dordrecht: Reidel, 1971.
Wright, G.H. von, 1983. Practical reason. Philosophical Papers, Vol. I. Oxford: Basil
Blackwell.
INDEX
  comparing, 25, 43, 59, 63, 152, 158-163, 198, 220, 226-227, 233, 252, 261, 263
  conflicting, 31, 152, 156-158, 220, 226-227, 231, 233-235, 261
  constructing, 152, 154-156, 263
  defeasible, 155
  defeat among, 152, 161-163, 173-175, 220, 222-223
  hierarchical, 192
  rebutting, 174
  specificity, 162, 173
  strict, 162, 222
  undercutting, 174
  with defeasible priorities, 208
  defensible, 152, 169, 195-196, 221, 228
  incoherent, 157, 162
  justified, 152, 164, 167, 195, 221, 228
  orderings on, 25, 141, 232
  overruled, 152, 169, 221, 228
  rebutting, 192, 220
  strict, 155
  structure of, 154-156, 222, 226, 234, 237, 263
    linear, 226
    suppositional, 226
  Toulmin on, 21-22, 255-256
  tree, 229, 230, 233
  undercutting, 220, 263
  validity of, 23
  weight of, 198
arguments about
  priorities, 203
  rule backing, 176-177, 268
  rule interpretation, 204, 216-217, 251
  rule validity, 176-177
Artificial Intelligence, 1-5, 53, 180
  ambitious vs. practical, 3-5, 12
  logical foundations of, see formal foundations
  symbolic vs. connectionist, 3-4
  weak vs. strong, 11
Asher, N., 89, 283
Ashley, K.D., 2, 8, 25-27, 29, 30, 64, 144
assumptions, 58, 173, 221
augmentation principle, 88
autoepistemic logic, 73-76, 96-97
  hierarchic, 180, 190
axiomatic systems, 18
Åqvist, L., 2
backings, 22, 256
backward chaining, 9, 42, 120
Baker, G.P., 272-273
Barklund, J., 262
Barth, E.M., 164
BDKT approach, 221-225, 232, 237-238
belief revision, 12, 13, 30, 183-187, 191
Bench-Capon, T.J.M., 17, 24, 34, 35, 54, 55, 262
Benthem, J. van, 256
Berg, P. vd, 52
Berman, D.H., 7-8
Beth, E.W., 23
Bidoit, N., 67, 81, 133
Bing, J., 27
Birnbaum, L., 8, 11
Bondarenko, A., 154, 156, 167, 197, 219, 221, 227
Bonevac, D., 283
Brewka, G., 45, 67, 72, 82, 90, 92-94, 97, 102, 108, 119, 128, 133, 135, 142-143, 150, 164, 179, 180, 183, 187-188, 190, 195, 238-240, 260, 270
British Nationality Act, 56, 61
Brouwer, P.W., 38
Bulygin, E., 2, 15-16
burden of proof, 102, 167, 263-264, 272-273
Bylander, T., 254
C-relations, 273
CABARET, 2, 63-65, 259, 261-262, 274
canonical model, 78
case law
  as source of exceptions, 48, 63
case-based reasoning, 26, 30-31
  combining rule-based and, 1-2, 64
cases
  and rules, 30