
Monotonicity in Practical Reasoning

KENNETH G. FERGUSON
Department of Philosophy
East Carolina University
Greenville, NC, 27858-4353, U.S.A.
E-mail: fergusonk@mail.ecu.edu

ABSTRACT: Classic deductive logic entails that once a conclusion is sustained by a valid
argument, the argument can never be invalidated, no matter how many new premises are
added. This derived property of deductive reasoning is known as monotonicity. Monotonicity
is thought to conflict with the defeasibility of reasoning in natural language, where the dis-
covery of new information often leads us to reject conclusions that we once accepted. This
perceived failure of monotonic reasoning to observe the defeasibility of natural-language
arguments has led some philosophers to abandon deduction itself (!), often in favor of new,
non-monotonic systems of inference known as ‘default logics’. But these radical logics
(e.g., Ray Reiter’s default logic) introduce their desired defeasibility at the expense of other,
equally important intuitions about natural-language reasoning. And, as a matter of fact, if
we recognize that monotonicity is a property of the form of a deductive argument and not
its content (i.e., the claims in the premise(s) and conclusion), we can see how the common-
sense notion of defeasibility can actually be captured by a purely deductive system.

KEY WORDS: default logic, exceptions, monotonicity, practical generalizations, validity

One special feature of formal deduction that has been attacked as out of
touch with ordinary reasoning in natural language is its derived property
of monotonicity. Classic semantics entails that once any deductive argument
qualifies as valid, it remains valid, no matter how much additional infor-
mation is entered as premises. Beyond this, even if contradictory premises
are added to the basis of the argument, the validity remains undisturbed.
Monotonicity may seem to conflict with reasoning in real-life situations
where it is very natural for conclusions, even when reasonably accepted at
a given time, to become unacceptable later. Since the monotonic quality
of validity does not appear to allow for this, classic formal deduction could
be seen as an incorrect model for the reasoning displayed in natural
language.
This contemporary (post-1970s) complaint against formal deduction has
been lodged mainly by members of the AI community who are attempting
to develop computer programs to simulate ordinary reasoning (McDermott,
1990). But objections to formal deduction have been a feature of the land-
scape almost from the day that logicians such as Peano began to publish in
‘their own special language, which is without words, using only signs’.
Figures no less than Henri Poincaré immediately ridiculed their efforts
as artificial and bearing no resemblance to arguments found in natural
language (1964, p. 224). This distaste for formal deduction picked up steam
in the 1960s when college professors in the United States fell under pressure
to make logic courses more attractive to their politically-active students
(Johnson, 2000, p. 114). An entire Informal Logic Movement then evolved,
rejecting both the symbolism and the validity standard used in formal
deduction (Thomas, 1981, p. 5). The complaint is that formal deduction
either ignores ‘real arguments’ (Govier, 1987, p. 5) or is so hidebound
that it characterizes many good arguments as ‘irrational’ (Stove, 1970,
p. 77). Members of the Informal Logic Movement should have an affinity,
then, for this emerging issue over monotonicity as it represents another
instance of the same, continuing complaint against deduction – that deduc-
tion is out of touch with the way people actually reason in ordinary, ‘real-
life’ situations.
The most conspicuous of the logical systems intended to eliminate
monotonicity is Ray Reiter’s ‘default logic’ (1980). But in looking at
Reiter’s system, we will find that once Reiter eliminates monotonicity by
replacing the standard of validity with a weaker standard of consistency,
many inferences without intuitive support become derivable. When Reiter
attempts to plug this gap in his non-monotonic system by assuming that
the background facts in his world are closed, the system edges back toward
deduction, the system he had sought to abandon. And, in fact, supporters
of traditional deduction can achieve Reiter's results with hardly any
collateral damage. For deduction, despite its monotonicity,
actually does provide some leeway for the sort of ‘defeasibility’ which
Reiter is so urgently seeking. It is only necessary that we be willing to
take care in constructing our arguments. To demonstrate this, I will present
a deductive schema which undoubtedly is fully monotonic but still permits
conclusions to ‘default’ when conflicting premises are discovered. But the
default will be in the truth of the conclusion, not its validity.

1. DEFAULT LOGIC, AN ALTERNATIVE TO DEDUCTION

Ray Reiter is convinced that deductive validity is not really the correct
standard for what counts as acceptable reasoning in practical areas. How
could it be, he suggests, when a valid argument may be intuitively ruined
by adding conflicting, or even contradictory, premises and still remain just
as valid? What Reiter really hopes to obtain is a standard for judging argu-
ments which is closer to practical intuitions about acceptability, a standard
that will hold up just so long as the available information supports the
conclusion but then default if damaging premises are added. In other words,
Reiter aspires to develop a system which is not monotonic.
First, we will openly acknowledge that Reiter is certainly correct to
regard classic deduction as monotonic. Solely for purposes of simplifica-
tion, let us consider a deductive setting where the available inference rules
have been reduced to modus ponens (MP) (cf. Kalish et al., 1980, p. 44).
The result is thoroughly monotonic, as can easily be shown. Suppose an
inference is performed. Since MP is the only rule, it must have the form α,
α ⊃ β ⊢ β. If the inference is valid, then under any valuation either (i)
β = true (T) or (ii) {α, α ⊃ β} = false (F), that is, at least one premise
is false. Suppose, then, that the premises of the argument
are enriched by any arbitrary statement ‘γ’ (where γ = T or γ = F). This
produces α, α ⊃ β, γ ⊢ β. With only the premises being strengthened,
case (i), where the conclusion β = T, is unaffected for validity. This leaves
the second validating case, where the original premise set {α, α ⊃ β} = F.
If the added statement γ = F, obviously the strengthened set of premises
remains false. But even if γ = T, the enriched premise set {α, α ⊃ β, γ} is
still false (since at least one of the original set {α, α ⊃ β} must already
be false by our hypothesis), thus preserving the validity of the argument.
Consequently, if α, α ⊃ β ⊢ β is valid, then so is α, α ⊃ β, γ ⊢ β. QED.
Hence, no valid argument in such an MP-based system can be invalidated
by adding premises. Since this is the source of the monotonicity found in
deductive reasoning, Reiter is able to create a more flexible system simply
by replacing MP with a rule appealing to a weaker logical relation.
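The proof can also be checked mechanically. The following sketch (my own illustration in Python, not anything drawn from the paper) encodes propositional sentences as functions of a truth assignment and tests validity by brute force; it confirms that no added premise, not even a contradictory one, invalidates the MP inference:

```python
from itertools import product

# A minimal sketch: sentences are functions of a truth assignment, and an
# argument is valid iff no assignment makes every premise true while the
# conclusion is false. All identifiers here are illustrative inventions.

def valid(premises, conclusion, atoms):
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

alpha = lambda v: v["a"]
alpha_implies_beta = lambda v: (not v["a"]) or v["b"]  # alpha ⊃ beta
beta = lambda v: v["b"]
gamma = lambda v: v["g"]            # an arbitrary added premise
not_alpha = lambda v: not v["a"]    # even a contradictory addition

atoms = ["a", "b", "g"]
print(valid([alpha, alpha_implies_beta], beta, atoms))             # True: MP is valid
print(valid([alpha, alpha_implies_beta, gamma], beta, atoms))      # True: still valid
print(valid([alpha, alpha_implies_beta, not_alpha], beta, atoms))  # True: monotonicity
```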
Reiter believes that he has found such a standard in the ordinary concept
of consistency. Fundamentally, Reiter will replace the classic form of MP
by a defeasible rule which substitutes consistency for deducibility relations.
Reiter introduces this weaker relation into his rule-substitute for MP via
a modal operator ‘M’. ‘M’ is read as ‘it is consistent to assume’ or,
alternatively, ‘in the absence of information to the contrary’ (1980, p. 82).
This enables him to provide a defeasible rule in the following symbolic
form –
A1(x), A2(x), . . . , An(x): M B(x)/B(x).
Here, the structure ‘A1(x), A2(x), . . . , An(x)’ is called the prerequisite,
the first instance of ‘B(x)’ the consistency statement, and the second
instance of ‘B(x)’ the conclusion of the default. Informally, Reiter’s alter-
native to MP expresses the claim that ‘If x has the property A (or multiple
properties A1(x), A2(x), . . . , An(x)), and if it is consistent to assume x
also has the property B, then we may infer that x does have the property
B’. Reiter describes the inferences generated by this rule-substitute as
having the status of beliefs, subject to change with subsequent discoveries
(1980, pp. 82–83). And, at the very least, a system based on such a rule
would satisfy the goal of being defeasible or non-monotonic. For infer-
ences, once drawn using the rule, could clearly be rendered fallacious
later by the addition of new information to the prerequisite, simply because
what it is consistent to assume may change with the addition of this new
information.
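To fix ideas, here is a deliberately crude rendering of a single normal default in Python (my own sketch, not Reiter's formalism): the knowledge base is a set of ground literals with a 'not-' prefix marking negation, and the modal 'M B' is approximated by the test that the negation of B is absent from the knowledge base. All identifiers are illustrative inventions.

```python
def negate(literal):
    """Toggle the 'not-' prefix that marks negation in this toy encoding."""
    return literal[4:] if literal.startswith("not-") else "not-" + literal

def apply_default(kb, prerequisites, conclusion):
    """Fire the default A1(x), ..., An(x): M B(x)/B(x): if every
    prerequisite is in the knowledge base and the conclusion is
    consistent with it (its negation is absent), add the conclusion."""
    if all(p in kb for p in prerequisites) and negate(conclusion) not in kb:
        return kb | {conclusion}
    return kb
```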
The non-monotonicity of Reiter’s ‘default logic’ can be made apparent
by applying it to his own famous Tweety example (1980, p. 81). Suppose
we encounter the following argument in a natural setting:

(1) Birds can fly. Tweety is a bird. ∴ Tweety can fly.


It seems harmless to infer that Tweety can fly, knowing that he is a bird.
But suppose we discover later that Tweety is actually a penguin, and add
this new information to our argument, producing the form
(2) Birds can fly. Tweety is a bird. Tweety is a penguin. ∴ Tweety
can fly.
Nonetheless, (2) still follows as a valid deduction in classic logic, even
with a premise containing an exception explicitly added to the argument.
Notice how all this changes with Reiter’s defeasible rule, which dispenses
with the logical property of validity, displacing it with consistency. That
is, so long as all we know about Tweety is that he is a bird, it is derivable
by Reiter’s new rule that he can fly. This means that
(3) BIRD(Tweety): M CAN-FLY(Tweety)/CAN-FLY(Tweety)
does follow in Reiter’s default logic, since the ability to fly is consistent
with being a bird. However, when one finds that Tweety is a penguin, this
inference is no longer consistent. That is,
(4) BIRD(Tweety), PENGUIN(Tweety): M CAN-FLY(Tweety)/
CAN-FLY(Tweety)
fails to satisfy the conditions of default logic, because flying is inconsis-
tent with one of the statements in the prerequisite, viz. that Tweety is a
penguin. Hence, at least for Reiter’s own example, we may concede that
default logic squares with common intuitions about what counts as an
acceptable argument in such practical situations. His system bends with our
intuitions about the acceptability of the Tweety argument, whereas classic
MP seems to remain staunchly inflexible.
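The toy model sketched above replays this behavior directly. One caveat: since the model has no rule machinery, the background fact that penguins cannot fly must be entered explicitly as a negative literal rather than derived from PENGUIN(Tweety).

```python
# With only BIRD(Tweety) known, the default fires, as in (3).
kb1 = {"bird(tweety)"}
print(apply_default(kb1, ["bird(tweety)"], "can-fly(tweety)"))
# -> {'bird(tweety)', 'can-fly(tweety)'}

# Once Tweety is found to be a penguin (and hence flightless), the
# consistency test fails and the belief defaults, as in (4).
kb2 = {"bird(tweety)", "penguin(tweety)", "not-can-fly(tweety)"}
print(apply_default(kb2, ["bird(tweety)"], "can-fly(tweety)"))
# -> kb2 unchanged: 'can-fly(tweety)' is no longer inferable
```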
It is probably not surprising, however, that Reiter’s consistency standard
would seem appealing when tailored to his own carefully-chosen example.
At issue is whether consistency is a better match than validity
with the intuitive acceptability of practical reasoning in the general run.
Assuming that we have at least an informal, working notion of the concept
of ‘acceptable’ reasoning, for Reiter’s default logic to prevail over classic
deductive reasoning, consistency would have to prove a better standard
for identifying acceptable arguments than the validity standard. But, as we
shall see, many consistent inferences also appear to be far from intuitively
acceptable, and there may be no way for Reiter to correct for this.

2. PROBLEMS WITH DEFAULT LOGIC

Reiter’s default logic is designed to provide a better match with our intu-
itions about what counts as an acceptable inference in practical settings,
introducing a line of defeasibility that was lacking in classic monotonic
reasoning. It is, however, this defeasibility that could prove to be the
bugbear of Reiter’s own system. The problem is at its worst where we are
working with a single default, with only one atomic formula in the pre-
requisite, as in A(x): M B(x)/B(x), a so-called normal default (cf. Prakken,
1997, p. 71). This can be recognized even if we let A(x) = BIRD(x),
as in Reiter’s own example. For assuming only the requirement of consis-
tency, with no conditions for relevance or other pragmatic constraints
(except that we grant uniform substitution for x), the allowable inferences
about Tweety from this default are practically wide-open: the sole require-
ment is that they are consistent with the proposition that BIRD(Tweety).
But certainly because Tweety is a bird, it does not follow that he is a
plucked chicken! Such ‘inferences’ would hardly pass muster as accept-
able conclusions, even in those informal settings for which Reiter is
attempting to provide a standard.
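In the toy model from Section 1, this over-permissiveness is visible at a glance:

```python
# Nothing in the knowledge base contradicts Tweety's being a plucked
# chicken, so the bare consistency test licenses the 'inference'.
kb = {"bird(tweety)"}
print(apply_default(kb, ["bird(tweety)"], "plucked-chicken(tweety)"))
# -> {'bird(tweety)', 'plucked-chicken(tweety)'}
```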
Clearly Reiter’s defaults are simply too flexible. And the range of
inferences allowed by such defaults is even greater when the atomic
prerequisite features general predicates such as ENTITY(Tweety) or
OBJECT(Tweety). True, the consistency standard defaults whenever con-
flicting facts are added to the prerequisite, thus offering the advantage of
being non-monotonic, something which may help it match up better with
some of our intuitions about proper reasoning in some practical contexts.
But this advantage comes at the expense of other powerful, competing intu-
itions about the ease with which an inference should be obtainable. It is
simply not intuitive that B(x) follows from A(x) merely because the two
statements are consistent! Consequently, default logicians, if they insist
on substituting mere consistency for deducibility, must find some means
to limit the inferences which can be drawn from their defaults.
Not that default logicians have failed to spot this problem, but it does
appear to constitute at the very least what Reiter himself calls a ‘thorny
issue’ (1980, p. 82). An obvious tactic would be to attempt to bolster the
content of the prerequisite of the default, viz., the sequence A1(x), A2(x),
. . . , An(x), thus enriching the set of premises with which B(x) must agree
before it can be drafted as an inference. The obstacle is to develop stan-
dards for what must be included in this set of premises. Suppose, for
example, a case occurs where the prerequisite A(x) = PHYSICAL-
OBJECT(x), with x being left uninterpreted. Exactly how much cotenable
information about physical objects is to be assumed in making the required
judgments about consistency? Patrick Hayes (whose interest is finding the
best logic to incorporate into AI programs intended to simulate ordinary
reasoning) believes this problem is not insurmountable, offering
to construct for us a ‘naïve physics’ for physical objects which suppos-
edly would include all the essential background facts – for example, that
physical objects can move in only one of five ways, by falling, sliding,
rolling, by being pushed (or pulled), or through locomotion (Hayes, 1990,
p. 194). Additional facts could be collected and entered into the prerequi-
site, but it is difficult to see how one could ever be assured that one’s knowl-
edge base was sufficiently exhaustive to prevent clearly illicit inferences
from qualifying as consistent statements.
Eventually, Ray Reiter is forced to augment his default logic with the
Closed-World Assumption (CWA). This is the assumption that all positive
facts are stored in each prerequisite (1980, p. 84). This assumption, which
in one fell swoop eliminates any need to worry about compiling the appro-
priate bodies of background facts (by, say, developing rules for which facts
must be included), does actually appear to be reasonable in certain very
controlled situations. Suppose, for example, that someone is attempting to
infer whether a friend actually graduated from Smallville High in 2001.
Under the assumption that the 2001 Smallville Yearbook lists all of the
students who graduated that year, the absence of the friend’s picture would
indicate that he did not graduate. Admittedly, for any such limited cases,
where CWA actually appears to be a realistic assumption, the prerequi-
sites should be adequate for preventing the type of frivolous inferences that
were the bane of the Tweety case.
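The yearbook case can be put in miniature code (again my own sketch; the student names are hypothetical placeholders). Under CWA the database is taken as complete, so absence from it counts as falsity:

```python
# Assumed complete under CWA: every 2001 Smallville graduate is listed.
graduates_2001 = {"lana", "pete", "chloe"}

def graduated(name, db):
    # Under CWA, whatever is not derivable from the database is false.
    return name in db

print(graduated("pete", graduates_2001))   # True: listed in the yearbook
print(graduated("clark", graduates_2001))  # False: absent, hence did not graduate
```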
But some very troublesome particulars must be noted about CWA.
Foremost, with such a powerful assumption, classic validity itself would
be redeemed. One way to phrase the issue with validity, if we generalize
on the Tweety example, is that universals such as ‘All A is B’ might be
employed to make practical deductions only to be spoiled later (for prac-
tical purposes) by the discovery that actually ‘Some A is not B’. But with
CWA, which assumes that all positive facts are already available, any
exceptions to ‘All A is B’ would be part of the prerequisite, duly regis-
tered, and hence not waiting to be sprung later as surprises. Actually, there
would be no need to depend on generalizations in such a fact-dense envi-
ronment. Why depend, for example, on the generalization ‘Birds can fly’
for drawing inferences about Tweety if our prerequisite contains all positive
facts? For the prerequisite would by hypothesis list each species of bird,
indeed each individual bird (including Tweety), with all its flying and other
capabilities, etc. Defeasible logicians who worried that validity remained
too persistent in the face of emergent exceptions should find their worries
disappear once the prerequisite is closed: there can be no subsequent excep-
tions when the data base is already replete with facts.
Even more than this, in a data setting with CWA, even default logic
itself, never mind deduction, would become monotonic! This can be easily
proven: Assume that the prerequisite ΣA = {A1(x), A2(x), . . . , An(x)} of
a default ΣA: M β(x)/β(x) contains all positive facts, per CWA. Suppose
β(x) is consistent with ΣA, hence allowable as a default inference. The con-
clusion β(x) would default later only if a subsequent fact is added to the
set ΣA which makes β(x) inconsistent. But ΣA is already factually complete
by the CWA hypothesis. Consequently, there are no positive facts which
can possibly be added, inconsistent or otherwise, meaning that β(x) is an
incorrigible inference, thereby de facto monotonic. QED. Non-monotonic
logicians curiously skirt this problem.
Henry Prakken, for example, when referring to a default situation
restricted by CWA, makes the comment that ‘[o]bviously, this kind of rea-
soning is nonmonotonic, since if additional positive facts become known,
the assumption turns out to be false’ (1997, pp. 76–77). Comments such
as this indicate that CWA is not being taken very seriously, because the
whole idea of CWA is to exclude such ‘additional positive facts’. So we
must conclude that the assumption is only a technical device that is con-
veniently ignored whenever the logicians’ intuitions oblige their inferences
to fluctuate. In fact, CWA appears to be a reasonable postulate only when
we are dealing with a setting where we can be confident that the relevant
set of facts is complete. But in actual, real-world settings – e.g. bird-
watching, courtrooms, and physics labs – where an indefinite number of
facts are relevant, many of them not fully recognized or completely sorted
out, some possibly yet to emerge (in other words, in those settings where
it would be most reasonable to regard our conclusions as defeasible), CWA
is at best an artificial postulate meant to plug holes in a theory. For example,
in a courtroom setting, it would be very unrealistic to assume that all
positive facts are actually available to the forum.
Furthermore, the common default logician’s practice of using ‘negation-
as-failure’, which entails that any fact that cannot be derived from a pre-
requisite must be false (Prakken, 1997, pp. 76–77), constitutes little more
than a formal endorsement of argumentum ad ignorantiam. This means that
if default logic is to substitute for classic deduction, it must be restricted
to genuinely closed settings – settings where, incidentally, deduction itself
could, as we have proven, serve just as well – and could at best be a very
problematic substitute for deduction in venues where defeasibility appears
to exist as a practical feature of acceptable inferences. By lowering the
bar for drawing inferences, default logic allows many an illicit inference
to slip over the bar.

3. DEFEASIBILITY AND SOUNDNESS

Default logic, then, does agree better with some of our practical intuitions
about acceptable reasoning, but it comes with its own raft of problems. It
might be advisable, considering these problems, to reappraise the benefits
of classic deduction before tossing it out completely in favor of default
logic. In fact, deduction actually measures up somewhat better with our
common intuitions about practical reasoning if we take the critical, allied
parameter of soundness into account.
Historically, although logicians may have focused on validity, it has
nonetheless been assumed that one’s goal in practical reasoning is to
produce a sound argument, that is, one which not only is valid but also
possesses true (or at least plausible) premises. After all, as a technicality, if
soundness is not a requirement, one can validly deduce conclusions at will.
For instance, α ⊢ α is undoubtedly valid, with its conclusion ‘α’ being
any statement that one might care to substitute. But such a question-begging
or plainly circular argument would win little regard, as the conclusion ‘α’
could just as well be false as true. It is only when we insist upon sound-
ness that valid conclusions are logically guaranteed to be true. Logicians,
however, have typically said far less about soundness, apparently agreeing
with Sherlock Holmes that ‘the art of the reasoner should be used rather
for the sifting of details than for the acquiring of fresh evidence’.
But if we are willing to bear soundness in mind as a formal condition
of arguments, we can note something rather interesting about Reiter’s coun-
terexample to deduction. First, we can generalize on his Tweety case to
reveal the underlying structure of all such examples:
(a) All A is B, x is an A/∴ x is B, moving to
(b) All A is B, x is an A, Some A is not B, x is not-B/∴ x is B.
The counterexample to monotonicity is produced when ‘Some A is not B’
emerges as a subsequent fact that opens an exception to the universal claim
‘All A is B’ found in (a). But, obviously, if the generalization in (a) were
actually true in the first place, the contradictory particular statement could
not possibly be ‘discovered’, thus barring ‘x is not-B’. The reason is simple:
genuinely true universals do not have exceptions. This is of course the
reason why Reiter felt the strength of his default logic to lie in situations
of incomplete knowledge. And directing this point specifically to the
Tweety case, if the universal ‘Birds can fly’ were really true, there would
be no need to worry about penguins, ostriches, or birds with clipped wings.
Indeed, it is transparent that what Reiter’s counterexample actually exposes
is the lack of full soundness in the argument form (a) rather than any
fault that could be attributed either to deduction or to its derived property
of monotonicity. At any rate, it would be a good challenge for default logi-
cians to produce a counterexample to a deductive argument where the
premises are true.
For deductive arguments are much less likely to sustain conclusions
which are unacceptable according to intuitive norms when the premises are
true. This is a point which has been recognized by previous defenders of
deduction such as Leo Groarke (cf. 1992, p. 120). This can be illustrated
if we rethink the Tweety example, keeping soundness in view instead of
bare validity. The intuitive appeal of the inference that ‘Tweety can fly’
from the generalization that ‘Birds can fly’ depends on our willingness to treat
the generalization about birds as true, or true enough, for most practical
purposes. Yet when its lack of genuinely universal truth is exposed by the
revelation that Tweety is actually a penguin (the added premise), the
intuitive acceptability of the argument then collapses. Consequently, sound-
ness – when used to enrich the validity standard – does provide something
not unlike the ‘defeasibility’ which the default logicians are seeking. For
the acceptability of the conclusion that Tweety can fly is sacrificed, in
accordance with our intuitions, when the truth of the generalization is
called into question by new information. The argument is still valid, but
its conclusion is no longer true.
Thus, soundness exhibits some of the flexibility which the default logi-
cians found to be lacking in validity alone. We will exploit this insight in
the concluding section, introducing a strategy for keeping monotonicity
while at the same time respecting the intuition that many natural-language
arguments are clearly ‘defeasible’.

4. DEFEASIBILITY IN A DEDUCTIVE ARGUMENT FORM

What we want is an argument form which is deductive and fully monotonic
but at the same time provides for the statement of exceptions to general-
izations. Such a form (with implied quantification) can be produced by
weakening the consequent of the conditional premise in an instance of MP
and introducing a fully truth-functional, sentence-forming operator:
(EXCMP) α(x) ⊃ β(x) ∨ EXCβ(x)
α(x)
∴ β(x) ∨ EXCβ(x).
Here, the special operator ‘EXC’, whose use is restricted to atomic formulas,
reads ‘due to an exception, it is not the case that’. This operator carries
the intuitive meaning of the word ‘exception’ as found in natural language.
Expressed categorically, for any generalization of the form ‘All A is B’,
an exception is any statement of the form ‘Some A is not B’; for an equiv-
alent characterization in predicate calculus, for any conditional of the form
‘∀x(A(x) ⊃ B(x))’, an exception is any statement of the form ‘∃x(A(x) &
¬B(x))’. Although the intended domain is practical generalizations which
are treated as true even though they may have at least some potential excep-
tions, this operator behaves like ordinary negation for all generalizations.
This means that β(x) is true if and only if EXCβ(x) is false, a fact which
enables us to state the following useful, derived rule –
(ER) β(x) ∨ EXCβ(x), ¬EXCβ(x) ∴ β(x)    and    β(x) ∨ EXCβ(x), EXCβ(x) ∴ ¬β(x).
The combination of EXCMP and ER provides a very intuitive framework
for expressing defeasible reasoning within a deductive setting.
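The framework can be made concrete with a toy derivation engine (my own illustration, not a serious implementation). Formulas are strings, with 'EXC:' prefixing the exception operator applied to an atomic formula:

```python
def apply_excmp(premises, antecedent, target):
    """EXCMP: from alpha(x) and alpha(x) ⊃ beta(x) ∨ EXC beta(x),
    derive the weak disjunction beta(x) ∨ EXC beta(x)."""
    conditional = f"{antecedent} ⊃ {target} ∨ EXC:{target}"
    if antecedent in premises and conditional in premises:
        return f"{target} ∨ EXC:{target}"
    return None

def apply_er(disjunction, premises, target):
    """ER: resolve the weak disjunction once the status of the
    exception clause is settled one way or the other."""
    if disjunction != f"{target} ∨ EXC:{target}":
        return None
    if f"¬EXC:{target}" in premises:
        return target               # exceptions ruled out: target holds
    if f"EXC:{target}" in premises:
        return f"¬{target}"         # an exception found: target fails
    return disjunction              # status open: only the weak conclusion
```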
Before turning to the target class of mostly-true, practical generaliza-
tions, notice that this schema is actually expressed broadly enough to handle
all generalizations of the form ‘All A is B’. For those generalizations which
are strictly true – those with no possible exceptions – such as, for example,
that ‘All triangles have three sides’, the EXCβ(x) disjunct is superfluous.
These generalizations are always true because they never have exceptions.
On the other hand, for palpably false generalizations, such as the assertion
that ‘Pigs can fly’, the generalization is always false because every instance
is an exception. That is, every pig is an exception to the claim that pigs
can fly. Both of these results concord with common intuitions about the
truth values of such wholly true or wholly false generalizations.
The schema has been constructed, however, to treat generalizations of
the defeasible sort intended by Reiter’s default logic. If we relativize the
argument form EXCMP to Reiter’s case, its conditional premise says that
‘if Tweety is a bird, then either Tweety can fly or, due to an exception, it
is not the case that Tweety can fly’. If we find that Tweety is a bird, we
can obtain the weak conclusion that Tweety can fly unless prevented by
an exception. This leaves ‘Tweety can fly’, the targeted disjunct of the con-
clusion, defeasible. For this statement can turn out to be false if the second
disjunct is found subsequently to be true (as shown by our rule ER),
satisfiable by such possible discoveries as that Tweety is a penguin, Tweety
is an ostrich, etc. On the other hand, so long as no exceptions are found
to prove EXCβ(x), the truth of β(x) remains open. This is the sense in which
statements such as ‘Tweety can fly’ may be considered defeasible even in
a deductive setting.
Such results agree with ordinary intuitions about the truth conditions of
practical generalizations. For those who may not like the use of special
operators, the general exception clause provided by ‘EXCβ(x)’ may be
replaced by a disjunctive series of relativized predicate variables, viz.
βEXC1(x) ∨ βEXC2(x) ∨ . . . ∨ βEXCn(x), where the predicates stand for specific
exceptions to the relevant predicate ascription in the generalization, as,
for example, exceptions to avian flight. That is, in our case under consid-
eration, ‘βEXC1(x)’ may express that ‘x is a penguin’, ‘βEXC2(x)’ that ‘x is
an ostrich’, etc. This variant of EXCMP –
(EXCMP′) α(x) ⊃ β(x) ∨ (βEXC1(x) ∨ βEXC2(x) ∨ . . . ∨ βEXCn(x))
α(x)
∴ β(x) ∨ (βEXC1(x) ∨ βEXC2(x) ∨ . . . ∨ βEXCn(x)),
will exhibit the same logical behavior as EXCMP when used to express
defeasibly-true generalizations, with β(x) being false when any one (or
more) of the disjunctive string βEXC1(x) ∨ βEXC2(x) ∨ . . . ∨ βEXCn(x) is true.
For purposes of generality, the number of possible exceptions is left delib-
erately open, as there is no specific upper limit to the number of excep-
tions that may exist to some asserted generalizations (e.g., that all positive
integers are even). For the targeted generalizations, however, those prac-
tical generalizations that appear to be defeasible, the number of excep-
tions would be much lower, so-called ‘atypical’ or ‘abnormal’ cases (cf.
Delgrande, 1988, pp. 63–64). The form EXCMP′ is probably the closest
structure to be found in ordinary first-order logic for the representation of
‘defeasible’ reasoning.
Yet EXCMP is fully monotonic. The classic definition of validity can be
fully applied to EXCMP as our new operator for exceptions is fully truth-
functional (always true or false), as shown by the fact that it could be alter-
natively expressed as a disjunction of ordinary (but countably many)
predicate variables. The monotonicity of a system using EXCMP follows
immediately from our proof above which established the monotonicity of
any deductive system limited to the rule MP. Our new form ‘EXCMP’ is a
legitimate instance of MP, with the disjunction ‘β(x) ∨ EXCβ(x)’ properly
substituted for the conditional consequent ‘β(x)’ occurring in our proof
above (Section 1), hence qualifying as monotonic according to the same
reasoning.
Still, such a system is capable of dealing with defeasible generaliza-
tions in a very intuitive manner. Let us postulate the following pair of argu-
ments:
(i) α(x), α(x) ⊃ β(x) ∨ EXCβ(x) ⊢ β(x) ∨ EXCβ(x), and
(ii) α(x), α(x) ⊃ β(x) ∨ EXCβ(x), EXCβ(x) ⊢ β(x) ∨ EXCβ(x).
Argument (i) is valid. Consistent with monotonicity, so is argument (ii)
since it is just like (i) except with an added premise. But (i) has a weak
conclusion, the disjunction ‘β(x) ∨ EXCβ(x)’. And this is all we really should
expect to get under conditions of incomplete knowledge. However, with
‘EXCβ(x)’ added to produce argument (ii), we can use ordinary MP and
the rule ER to obtain ¬β(x)! That is, even without sacrificing monotonicity,
we have an argument pattern where the addition of exceptions to the
premises of a valid argument succeeds in blocking undesirable results. Our
reasoning remains valid but nonetheless acknowledges that Tweety can’t
fly if his wing is broken. This should demonstrate that we do not need to
abandon deductive reasoning in order to respond to the default logician’s
Tweety example. The flexibility is in the truth of the conclusion, not
its derivability.
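The behavior of arguments (i) and (ii) can be replayed in the toy engine sketched above:

```python
# Argument (i): only the weak disjunction is obtainable.
prem_i = {"bird(t)", "bird(t) ⊃ can-fly(t) ∨ EXC:can-fly(t)"}
weak = apply_excmp(prem_i, "bird(t)", "can-fly(t)")
print(weak)                                  # 'can-fly(t) ∨ EXC:can-fly(t)'
print(apply_er(weak, prem_i, "can-fly(t)"))  # unresolved: the weak conclusion stands

# Argument (ii): the exception premise is added. The original inference
# is untouched (monotonicity), but ER now yields the negated target.
prem_ii = prem_i | {"EXC:can-fly(t)"}
print(apply_er(weak, prem_ii, "can-fly(t)")) # '¬can-fly(t)'
```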
Therefore, the validity standard – even with all its monotonicity – can
handle exceptions as well as default logic when the truth of the premises
is taken legitimately into account. Even though this is certainly not enough
to satisfy those in the Informal Logic Movement who regard deduction as
leagues away from the best model for reasoning in natural language, it
should go partway toward rehabilitating deduction in the eyes of those
whose principal worry is its supposed inability to deal with reasoning in
defeasible contexts.

ACKNOWLEDGEMENTS

An anonymous reviewer for Argumentation has assisted me in redirecting
the audience for this paper and has been the source of many other valuable
improvements.

REFERENCES

Delgrande, J.: 1988, ‘An Approach to Default Reasoning Based on a First-Order Conditional
Logic: Revised Report’, Artificial Intelligence 36, 217–236.
Govier, T.: 1987, Problems in Argument Analysis and Evaluation, Foris Publications,
Dordrecht-Holland.
Groarke, L.: 1992, ‘In Defense of Deductivism: Replying to Govier’, in F. van Eemeren et
al. (eds.), Argumentation Illuminated, ISSA, Amsterdam, 113–121.
Hayes, P.: 1990, ‘The Naïve Physics Manifesto’, in M. A. Boden (ed.), The Philosophy of
Artificial Intelligence, Oxford UP, Oxford, 171–205.
Johnson, R.: 2000, Manifest Rationality: A Pragmatic Theory of Argument, Lawrence
Erlbaum Associates, Publishers, London.
Kalish, D., R. Montague and G. Mar: 1980, Logic: Techniques of Formal Reasoning, Harcourt
Brace Jovanovich, Inc., New York.
McDermott, D.: 1990, ‘A Critique of Pure Reason’, in M. Boden (ed.), The Philosophy of
Artificial Intelligence, Oxford UP, Oxford, 206–230.
Poincaré, H.: 1964, ‘A Negative Appraisal by a Mathematician’, in I. Copi and J. Gould
(eds.), Readings on Logic, The Macmillan Company, New York.
Prakken, H.: 1997, Logical Tools for Modelling Legal Argument, Kluwer Academic
Publishers, Dordrecht.
Reiter, R.: 1980, ‘A Logic for Default Reasoning’, Artificial Intelligence 13, 81–132.
Stove, D.: 1970, ‘Deductivism’, Australasian Journal of Philosophy 48, 76–98.
Thomas, S.: 1981, Practical Reasoning in Natural Language, Prentice-Hall, Inc., Englewood
Cliffs, New Jersey.
