Monotonicity in Practical Reasoning
KENNETH G. FERGUSON
Department of Philosophy
East Carolina University
Greenville, NC, 27858-4353, U.S.A.
E-mail: fergusonk@mail.ecu.edu
ABSTRACT: Classic deductive logic entails that once a conclusion is sustained by a valid
argument, the argument can never be invalidated, no matter how many new premises are
added. This derived property of deductive reasoning is known as monotonicity. Monotonicity
is thought to conflict with the defeasibility of reasoning in natural language, where the dis-
covery of new information often leads us to reject conclusions that we once accepted. This
perceived failure of monotonic reasoning to observe the defeasibility of natural-language
arguments has led some philosophers to abandon deduction itself (!), often in favor of new,
non-monotonic systems of inference known as ‘default logics’. But these radical logics
(e.g., Ray Reiter’s default logic) introduce their desired defeasibility at the expense of other,
equally important intuitions about natural-language reasoning. And, as a matter of fact, if
we recognize that monotonicity is a property of the form of a deductive argument and not
its content (i.e., the claims in the premise(s) and conclusion), we can see how the common-
sense notion of defeasibility can actually be captured by a purely deductive system.
One special feature of formal deduction that has been attacked as out of
touch with ordinary reasoning in natural language is its derived property
of monotonicity. Classic semantics entails that once any deductive argument
qualifies as valid, it remains valid, no matter how much additional infor-
mation is entered as premises. Beyond this, even if contradictory premises
are added to the basis of the argument, the validity remains undisturbed.
Monotonicity may seem to conflict with reasoning in real-life situations
where it is very natural for conclusions, even when reasonably accepted at
a given time, to become unacceptable later. Since the monotonic quality
of validity does not appear to allow for this, classic formal deduction could
be seen as an incorrect model for the reasoning displayed in natural
language.
This contemporary (post-1970s) complaint against formal deduction has
been lodged mainly by members of the AI community who are attempting
to develop computer programs to simulate ordinary reasoning (McDermott,
1990). But objections to formal deduction have been a feature of the land-
scape almost from the day that logicians such as Peano began to publish in
‘their own special language, which is without words, using only signs’.
Figures no less eminent than Henri Poincaré immediately ridiculed their
efforts as artificial and bearing no resemblance to arguments found in
natural language (1964, p. 224). This distaste for formal deduction picked up steam
in the 1960s when college professors in the United States fell under pressure
to make logic courses more attractive to their politically-active students
(Johnson, 2000, p. 114). An entire Informal Logic Movement then evolved,
rejecting both the symbolism and the validity standard used in formal
deduction (Thomas, 1981, p. 5). The complaint is that formal deduction
either ignores ‘real arguments’ (Govier, 1987, p. 5) or is so hidebound
that it characterizes many good arguments as ‘irrational’ (Stove, 1970,
p. 77). Members of the Informal Logic Movement should have an affinity,
then, for this emerging issue over monotonicity as it represents another
instance of the same, continuing complaint against deduction – that deduc-
tion is out of touch with the way people actually reason in ordinary, ‘real-
life’ situations.
The most conspicuous of the logical systems intended to eliminate
monotonicity is Ray Reiter’s ‘default logic’ (1980). But in looking at
Reiter’s system, we will find that once Reiter eliminates monotonicity by
replacing the standard of validity with a weaker standard of consistency,
many inferences without intuitive support become derivable. When Reiter
attempts to plug this gap in his non-monotonic system by assuming that
the background facts in his world are closed, the system edges back toward
deduction, the very system he had sought to abandon. And, in fact, supporters
of traditional deduction can achieve Reiter’s results with far less
collateral damage. Deduction, despite its monotonicity,
actually does provide some leeway for the sort of ‘defeasibility’ which
Reiter is so urgently seeking. It is only necessary that we be willing to
take care in constructing our arguments. To demonstrate this, I will present
a deductive schema which, while fully monotonic, still permits
conclusions to ‘default’ when conflicting premises are discovered. But the
default will be in the truth of the conclusion, not in the validity of the argument.
Ray Reiter is convinced that deductive validity is not really the correct
standard for what counts as acceptable reasoning in practical areas. How
could it be, he suggests, when a valid argument may be intuitively ruined
by adding conflicting, or even contradictory, premises and still remain just
as valid? What Reiter really hopes to obtain is a standard for judging argu-
ments which is closer to practical intuitions about acceptability, a standard
that will hold up just so long as the available information supports the
conclusion but then default if damaging premises are added. In other words,
Reiter aspires to develop a system which is not monotonic.
First, we will openly acknowledge that Reiter is certainly correct to
regard classic deduction as monotonic. Solely for purposes of simplifica-
tion, let us consider a deductive setting where the available inference rules
have been reduced to modus ponens (MP) (cf. Kalish et al., 1980, p. 44).
The result is thoroughly monotonic, as can easily be shown. Suppose an
inference is performed. Since MP is the only rule, it must have the form α,
α ⊃ β ⊢ β. If the inference is valid, then for every truth assignment either (i)
the conclusion β is true (T) or (ii) at least one member of the premise set
{α, α ⊃ β} is false (F). Suppose, then, that the premises of the argument
are enriched by an arbitrary statement ‘γ’ (where γ = T or γ = F). This
produces α, α ⊃ β, γ ⊢ β. Since only the premises have been strengthened,
case (i), where the conclusion β = T, is unaffected for validity. This leaves
the second validating case, where the original premise set {α, α ⊃ β} contains
a falsehood. If the added statement γ = F, obviously the strengthened set of premises
still contains a falsehood. But even if γ = T, the enriched premise set {α, α ⊃ β, γ}
still contains a falsehood (since at least one member of the original set {α, α ⊃ β}
must already be false by our hypothesis), thus preserving the validity of the argument.
Consequently, if α, α ⊃ β ⊢ β is valid, then so is α, α ⊃ β, γ ⊢ β. QED.
Hence, no valid argument in such an MP-based system can be invalidated
by adding premises. Since this is the source of the monotonicity found in
deductive reasoning, Reiter is able to create a more flexible system simply
by replacing MP with a rule appealing to a weaker logical relation.
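The monotonicity of this MP-based fragment can also be checked mechanically. The following Python sketch (not part of the original argument; the helper names are my own) tests validity by brute-force truth tables: an argument is valid just in case every assignment that satisfies all premises also satisfies the conclusion. Adding the arbitrary premise γ leaves validity intact:

```python
from itertools import product

def implies(p, q):
    # material conditional: p ⊃ q is false only when p is true and q is false
    return (not p) or q

def valid(premises, conclusion, n_vars):
    # valid iff every truth assignment making all premises true
    # also makes the conclusion true
    return all(
        conclusion(*vals)
        for vals in product([True, False], repeat=n_vars)
        if all(p(*vals) for p in premises)
    )

# original argument: α, α ⊃ β ⊢ β (γ is in scope but unused)
mp = valid(
    [lambda a, b, g: a, lambda a, b, g: implies(a, b)],
    lambda a, b, g: b,
    3,
)

# strengthened argument: α, α ⊃ β, γ ⊢ β
mp_plus = valid(
    [lambda a, b, g: a, lambda a, b, g: implies(a, b), lambda a, b, g: g],
    lambda a, b, g: g and b or b,
    3,
)

print(mp, mp_plus)  # both True: adding the premise γ cannot invalidate MP
```

Any further premise only shrinks the set of assignments that must be checked, which is exactly why validity can never be lost by strengthening the premises.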
Reiter believes that he has found such a standard in the ordinary concept
of consistency. Fundamentally, Reiter will replace the classic form of MP
by a defeasible rule which substitutes consistency for deducibility relations.
Reiter introduces this weaker relation into his rule-substitute for MP via
a modal operator ‘M’. ‘M’ is read as ‘it is consistent to assume’ or,
alternatively, ‘in the absence of information to the contrary’ (1980, p. 82).
This enables him to provide a defeasible rule in the following symbolic
form –
A1(x), A2(x), . . . , An(x): M B(x)/B(x).
Here, the structure ‘A1(x), A2(x), . . . , An(x)’ is called the prerequisite,
the first instance of ‘B(x)’ the consistency statement, and the second
instance of ‘B(x)’ the conclusion of the default. Informally, Reiter’s alter-
native to MP expresses the claim that ‘If x has the property A (or multiple
properties A1(x), A2(x), . . . , An(x)), and if it is consistent to assume x
also has the property B, then we may infer that x does have the property
B’. Reiter describes the inferences generated by this rule-substitute as
having the status of beliefs, subject to change with subsequent discoveries
(1980, pp. 82–83). And, at the very least, a system based on such a rule
would satisfy the goal of being defeasible or non-monotonic. For infer-
ences, once drawn using the rule, could clearly be rendered fallacious
later by the addition of new information to the prerequisite, simply because
what it is consistent to assume may change with the addition of this new
information.
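Reiter’s rule can be given a computational gloss. The fragment below is only a propositional toy (the string-based knowledge base and the `apply_default` helper are illustrative inventions, not Reiter’s formalism): a default fires while its justification is consistent with what is known, and the same default is blocked once contrary information is recorded:

```python
# Toy sketch of a default "prerequisite : M justification / conclusion".
# Facts are strings; 'not_X' is treated as the negation of 'X'.

def consistent(statement, kb):
    # it is consistent to assume `statement` iff its negation
    # is not already in the knowledge base
    negation = statement[4:] if statement.startswith("not_") else "not_" + statement
    return negation not in kb

def apply_default(prerequisite, justification, conclusion, kb):
    # fire the default only if every prerequisite holds and the
    # justification is consistent with current knowledge
    if prerequisite <= kb and consistent(justification, kb):
        return kb | {conclusion}
    return kb

kb = {"bird_tweety"}
kb1 = apply_default({"bird_tweety"}, "flies_tweety", "flies_tweety", kb)
print("flies_tweety" in kb1)  # True: nothing blocks the default

# new information defeats the earlier inference: with the negation
# recorded, the same default no longer fires
kb2 = apply_default({"bird_tweety"}, "flies_tweety", "flies_tweety",
                    kb | {"not_flies_tweety"})
print("flies_tweety" in kb2)  # False: the inference has defaulted
```

This is the defeasibility Reiter wants: the status of a conclusion depends on the current state of the knowledge base, not on a once-and-for-all validity relation.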
The non-monotonicity of Reiter’s ‘default logic’ can be made apparent
by applying it to his own famous Tweety example (1980, p. 81). Suppose
we encounter the following argument in a natural setting:
Reiter’s default logic is designed to provide a better match with our intu-
itions about what counts as an acceptable inference in practical settings,
p. 194). Additional facts could be collected and entered into the prerequi-
site, but it is difficult to see how one could ever be assured that one’s knowl-
edge base was sufficiently exhaustive to prevent clearly illicit inferences
from qualifying as consistent statements.
Eventually, Ray Reiter is forced to augment his default logic with the
Closed-World Assumption (CWA). This is the assumption that all positive
facts are stored in each prerequisite (1980, p. 84). This assumption, which
in one fell swoop eliminates any need to worry about compiling the appro-
priate bodies of background facts (by, say, developing rules for which facts
must be included), does actually appear to be reasonable in certain very
controlled situations. Suppose, for example, that someone is attempting to
infer whether a friend actually graduated from Smallville High in 2001.
Under the assumption that the 2001 Smallville Yearbook lists all of the
students who graduated that year, the absence of the friend’s picture would
indicate that he did not graduate. Admittedly, for any such limited cases,
where CWA actually appears to be a realistic assumption, the prerequi-
sites should be adequate for preventing the type of frivolous inferences that
were the bane of the Tweety case.
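The yearbook case can be rendered as a tiny negation-as-failure check. In this sketch (the names and data are hypothetical), CWA is modeled by treating absence from the closed list of graduates as grounds for the negative conclusion:

```python
# Closed-World Assumption sketch for the Smallville Yearbook case:
# the yearbook is assumed to list *all* 2001 graduates, so absence
# from the list licenses the inference that someone did not graduate.

yearbook_2001 = {"Alice Jones", "Bob Smith", "Carol White"}  # hypothetical data

def graduated(name, graduates):
    # under CWA, whatever is not recorded as true is inferred to be
    # false (negation as failure)
    return name in graduates

print(graduated("Bob Smith", yearbook_2001))   # True
print(graduated("Dana Green", yearbook_2001))  # False: absent, so did not graduate
```

The inference is only as good as the closure assumption: if the yearbook could be incomplete, absence from the list would prove nothing.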
But some very troublesome particulars must be noted about CWA.
Foremost, with such a powerful assumption, classic validity itself would
be redeemed. One way to phrase the issue with validity, if we generalize
on the Tweety example, is that universals such as ‘All A is B’ might be
employed to make practical deductions only to be spoiled later (for prac-
tical purposes) by the discovery that actually ‘Some A is not B’. But with
CWA, which assumes that all positive facts are already available, any
exceptions to ‘All A is B’ would be part of the prerequisite, duly regis-
tered, and hence not waiting to be sprung later as surprises. Actually, there
would be no need to depend on generalizations in such a fact-dense envi-
ronment. Why depend, for example, on the generalization ‘Birds can fly’
for drawing inferences about Tweety if our prerequisite contains all positive
facts? For the prerequisite would by hypothesis list each species of bird,
indeed each individual bird (including Tweety), with all its flying and other
capabilities, etc. Defeasible logicians who worried that validity remained
too persistent in the face of emergent exceptions should find their worries
disappear once the prerequisite is closed: there can be no subsequent excep-
tions when the data base is already replete with facts.
Even more than this, in a data setting with CWA, even default logic
itself, never mind deduction, would become monotonic! This can be easily
proven: Assume that the prerequisite ΣA = {A1(x), A2(x), . . . , An(x)} of
a default ΣA : M β(x) / β(x) contains all positive facts, per CWA. Suppose
β(x) is consistent with ΣA, hence allowable as a default inference. The con-
clusion β(x) would default later only if some subsequent fact were added to
the set ΣA which made β(x) inconsistent. But ΣA is already factually complete
by the CWA hypothesis. Consequently, there are no positive facts which
can possibly be added, inconsistent or otherwise, meaning that β(x) is an
inference that can never default. QED.
Default logic, then, does agree better with some of our practical intuitions
about acceptable reasoning, but it comes with its own raft of problems. It
might be advisable, considering these problems, to reappraise the benefits
of classic deduction before tossing it out completely in favor of default
logic. In fact, deduction actually measures up somewhat better against our
common intuitions about practical reasoning if we take the critical, allied
parameter of soundness into account.
Historically, although logicians may have focused on validity, it has
nonetheless been assumed that one’s goal in practical reasoning is to
produce a sound argument, that is, one which is not only valid but also
has all true premises.
These generalizations are always true because they never have exceptions.
On the other hand, for palpably false generalizations, such as the assertion
that ‘Pigs can fly’, the generalization is always false because every instance
is an exception. That is, every pig is an exception to the claim that pigs
can fly. Both of these results concord with common intuitions about the
truth values of such wholly true or wholly false generalizations.
The schema has been constructed, however, to treat generalizations of
the defeasible sort intended by Reiter’s default logic. If we relativize the
argument form EXCMP to Reiter’s case, its conditional premise says that
‘if Tweety is a bird, then either Tweety can fly or, due to an exception, it
is not the case that Tweety can fly’. If we find that Tweety is a bird, we
can obtain the weak conclusion that Tweety can fly unless prevented by
an exception. This leaves ‘Tweety can fly’, the targeted disjunct of the con-
clusion, defeasible. For this statement can turn out to be false if the second
disjunct is found subsequently to be true (as shown by our rule ER),
satisfiable by such possible discoveries as that Tweety is a penguin, Tweety
is an ostrich, etc. On the other hand, so long as no exceptions are found
to prove EXCβ(x), the truth of β(x) remains open. This is the sense in which
statements such as ‘Tweety can fly’ may be considered defeasible even in
a deductive setting.
Such results agree with ordinary intuitions about the truth conditions of
practical generalizations. For those who may not like the use of special
operators, the general exception clause provided by ‘EXCβ(x)’ may be
replaced by a disjunctive series of relativized predicate variables, viz.
βEXC1(x) ∨ βEXC2(x) ∨ . . . ∨ βEXCn(x), where the predicates stand for specific
exceptions to the relevant predicate ascription in the generalization, as,
for example, exceptions to avian flight. That is, in our case under consid-
eration, ‘βEXC1(x)’ may express that ‘x is a penguin’, ‘βEXC2(x)’ that ‘x is
an ostrich’, etc. This variant of EXCMP –
(EXCMP′) α(x) ⊃ β(x) ∨ (βEXC1(x) ∨ βEXC2(x) ∨ . . . ∨ βEXCn(x))
α(x)
∴ β(x) ∨ (βEXC1(x) ∨ βEXC2(x) ∨ . . . ∨ βEXCn(x)),
will exhibit the same logical behavior as EXCMP when used to express
defeasibly-true generalizations, with β(x) being false when any one (or
more) of the disjunctive string βEXC1(x) ∨ βEXC2(x) ∨ . . . ∨ βEXCn(x) is true.
For purposes of generality, the number of possible exceptions is left delib-
erately open, as there is no specific upper limit to the number of excep-
tions that may exist to some asserted generalizations (e.g., that all positive
integers are even). For the targeted generalizations, however, those prac-
tical generalizations that appear to be defeasible, the number of excep-
tions would be much lower, so-called ‘atypical’ or ‘abnormal’ cases (cf.
Delgrande, 1988, pp. 63–64). The form EXCMP′ is probably the closest
structure to be found in ordinary first-order logic for the representation of
‘defeasible’ reasoning.
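The behavior of EXCMP′ can be illustrated with a small sketch. The code below is an informal model only (the `excmp_conclusion` helper and its string facts are invented for illustration, not part of the schema): the inference itself never fails, but the targeted disjunct ‘flies’ is assertable only so long as no listed exception is known:

```python
# Sketch of EXCMP′: the conditional premise says "if x is a bird, then
# x flies OR some listed exception holds". The argument stays valid
# (fully monotonic); only the truth of the targeted disjunct 'flies'
# defaults when an exception turns up among the facts.

EXCEPTIONS = ["penguin", "ostrich"]  # the exception predicates EXC1, EXC2, ...

def excmp_conclusion(facts):
    # modus ponens on: bird(x) ⊃ (flies(x) ∨ EXC1(x) ∨ ... ∨ EXCn(x))
    if "bird" not in facts:
        return None  # prerequisite fails; nothing follows
    # the valid conclusion is always the whole disjunction; which
    # disjunct we may assert depends on what exceptions are known
    exception_found = any(e in facts for e in EXCEPTIONS)
    return "exception" if exception_found else "flies"

print(excmp_conclusion({"bird"}))             # 'flies': no exception known
print(excmp_conclusion({"bird", "penguin"}))  # 'exception': 'flies' has defaulted
```

Note that adding the fact ‘penguin’ does not invalidate the inference; it merely settles which disjunct of the still-valid conclusion is true, which is exactly the deductive sense of defeasibility argued for above.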
REFERENCES
Delgrande, J.: 1988, ‘An Approach to Default Reasoning Based in a First-Order Conditional
Logic: Revised Report’, Artificial Intelligence 36, 217–236.
Govier, T.: 1987, Problems in Argument Analysis and Evaluation, Foris Publications,
Dordrecht-Holland.
Groarke, L.: 1992, ‘In Defense of Deductivism: Replying to Govier’, in F. van Eemeren et
al. (eds.), Argumentation Illuminated, ISSA, Amsterdam, 113–121.
Hayes, P.: 1990, ‘The Naïve Physics Manifesto’, in M. A. Boden (ed.), The Philosophy of
Artificial Intelligence, Oxford UP, Oxford, 171–205.
Johnson, R.: 2000, Manifest Rationality: A Pragmatic Theory of Argument, Lawrence
Erlbaum Associates, Publishers, London.
Kalish, D., R. Montague and G. Mar: 1980, Logic: Techniques of Formal Reasoning, Harcourt
Brace Jovanovich, Inc., New York.
McDermott, D.: 1990, ‘A Critique of Pure Reason’, in M. Boden (ed.), The Philosophy of
Artificial Intelligence, Oxford UP, 206–230.
Poincaré, H.: 1964, ‘A Negative Appraisal by a Mathematician’, in I. Copi and J. Gould
(eds.), Readings on Logic, The Macmillan Company, New York.
Prakken, H.: 1997, Logical Tools for Modelling Legal Argument, Kluwer Academic
Publishers, Dordrecht.
Reiter, R.: 1980, ‘A Logic for Default Reasoning’, Artificial Intelligence 13, 81–132.
Stove, D.: 1970, ‘Deductivism’, Australasian Journal of Philosophy 48, 76–98.
Thomas, S.: 1981, Practical Reasoning in Natural Language, Prentice-Hall, Inc., Englewood
Cliffs, New Jersey.