COS3761-Study Material 2019
Study guide
Semesters 1 and 2
School of Computing
TABLE OF CONTENTS
INTRODUCTION
COS3761/MO001/4/2018
INTRODUCTION
This tutorial letter (MO001) contains learning material available on the myUnisa website for COS3761.
The intention is to provide a document for students who do not have regular access to the internet, or
who prefer to work from a limited number of longer documents rather than a multitude of individual
resources (as is available on the COS3761 website).
Please note that this document does NOT contain the most important document for this module, namely
Tutorial Letter 101, which contains the general instructions and assignments for COS3761. Tut Letter
101 is available under Official Study Material on the COS3761 website, and you should refer to it first
(before this document, and before all the resources on the website). MO001 also does not contain
Tutorial Letters 201, 202 and 203. Since these documents contain the solutions to the assignments, they
will only be available after the due dates of the respective assignments, and will be available for
download from the Additional Resources of the COS3761 website.
The main body of this document contains the learning units for COS3761 as found on myUnisa. These
learning units refer to a number of other documents available in Additional Resources. These other
documents are included as appendices. The table of contents indicates this clearly.
The linear nature of a printed document like this imposes an ordering on its parts. The order might seem
to suggest that you should start at the beginning and work through to the end. This is not how this
document is intended to be used!
Remember that it just represents a collection of resources on the COS3761 website, which you can
navigate in any order. Although the material is naturally divided into three major parts represented by
Learning Units 1, 2 and 3, which each terminate in an assignment, nothing should stop you from jumping
between parts of different learning units. For example, you might find it useful to revise the explanation of
natural deduction for propositional logic covered in Learning Unit 1 while working through the section on
natural deduction for predicate logic covered in Learning Unit 2.
The fact that there is an assignment with a due-date at the end of each learning unit, places a constraint
on the order in which you should work through the material. It might be fun to jump from one topic to
another as the fancy takes you, but apart from confusing yourself, you will probably miss the due dates
of the assignments and lose the marks that they contribute to your year mark and final mark.
So we recommend that you work through Learning Units 1, 2 and 3 in sequence, remembering that the
resources within each learning unit can be used in different orders, and that there is value in going back
to previous learning units for revision.
Have you read through Tutorial Letter 101 for COS3761 yet? If not, this is your first priority. If you
don't have a printed copy, please download an electronic version from Official Study Material on
the COS3761 website of myUnisa.
Have you activated your myLife account yet? If not, do so as soon as possible. A lot of the
interaction between you and us (your lecturers) will be via email, and all of the announcements
we send about new resources we have placed on myUnisa and other important information, will
be sent to your myLife account. If you don't like using myLife, you should set your myLife account
to forward all your myLife email to your personal email address, to avoid missing out.
Have you purchased a copy of the prescribed book yet? If not, you should do so as a matter of
urgency. It is impossible to study this module without a copy of the prescribed book. Details of the
prescribed book are given in Tutorial Letter 101.
LEARNING UNIT 1
Topics:
Syntax of Propositional Logic
Natural Deduction for Propositional Logic
Semantics of Propositional Logic
Soundness and Completeness of Natural Deduction for Propositional Logic
Satisfiability of a Propositional Logic formula
Conjunctive Normal Form
Horn formula satisfiability
Resources:
Tutorial Letter 101 for COS3761
Chapter 1 of the prescribed book
Additional notes on Chapter 1 (Appendix B)
Solutions to selected exercises (Appendix E)
Tutorial Letter 201 (only available after the due date for Assignment 1)
Attempt some of the exercises in the prescribed book. We recommend the following:
o Exercises 1.1: 1(a), (c), (e), (f)#, (i)#; 2(a), (c)#, (e), (g)#
o Exercises 1.2: 1(a), (b), (c), (d), (e), (x)#; 2(a), (b)#, (c), (d), (e)#; 3(a), (f), (q)#; 5(a), (d)#
o Exercises 1.3: 1(c)#, (d), (e), (f); 2(c); 4(b)#; 6(a), (b)
o Exercises 1.4: 2(a), (c)#; 5#; 12(a), (b), (c); 13(a), (b)#
o Exercises 1.5: 2(a)#, (b), (c), (d); 7(a)#, (b); 15(a), (b), (c), (g)#
Review your marked assignment answers by comparing them to those provided in Tut Letter
201. Try to identify the areas where you made mistakes and go back to the relevant resources
for this learning unit, to repair your misunderstandings.
LEARNING UNIT 2
Topics:
Syntax of Predicate Logic
Free and bound variables
Natural Deduction for Predicate Logic
Semantics of Predicate Logic
Undecidability of Predicate Logic
Resources:
Tutorial Letter 101 for COS3761
Chapter 2 of the prescribed book
Additional notes on Chapter 2 (Appendix C)
Solutions to selected exercises (Appendix E)
Tutorial Letter 202 (only available after the due date for Assignment 2)
Attempt some of the exercises in the prescribed book. We recommend the following:
o Exercises 2.1: (a)#, (b)#, (c), (d)#, (e), (f); 2; 3(a), (b); 4
o Exercises 2.3: 6(a); 7(a), (c)#; 9(a), (d)#, (e), (f), (g); 13(a)#, (b), (c)
Review your marked assignment answers against those provided in Tut Letter 202. Identify the
areas where you made mistakes and go back to the relevant resources for this learning unit, in
order to repair your misunderstandings.
LEARNING UNIT 3
Topics:
Syntax of Basic Modal Logic
Natural Deduction for Basic Modal Logic
Semantics of Basic Modal Logic
Modes of truth and logic engineering
Multi-agent systems
Resources:
Tutorial Letter 101 for COS3761
Chapter 5 of the prescribed book
Additional notes on Chapter 5 (Appendix D)
Solutions to selected exercises (Appendix E)
Tutorial Letter 203 (only available after the due date for Assignment 3)
Attempt some of the exercises in the prescribed book. We recommend the following:
o Exercises 5.2: 1(a)ii, iv#, vi, ix, (b)i, ii, iii; 5(a), (b), (c), (f)#
Review your marked assignment answers against those provided in Tut Letter 203. Try to
identify the areas where you made mistakes and go back to the relevant resources for this
learning unit, in order to repair your misunderstandings.
p. 20, line 2. After the paragraph ending with the word “negation”, add the sentence: “The formula ⊥
stands for the contradiction.”.
p. 21, line 17. “The fact that ⊥” should be “The fact that ⊥ (the contradiction)”.
p. 47, below the second proof. “This is a proof of the sequent p ∧ q → r, p ⊢ p → r.” is incorrect: p →
r should be q → r. The same mistake should be corrected in the line below as well.
p. 49. The sequent in line 4 has a typo. It must read φ1, φ2, …, φn ⊢ ψ. Also correct the other two
sequents in the same paragraph.
p. 53. Corollary 1.39, in the second sentence: “is holds” should be “holds”.
p. 57. Definition 1.44: “a valuation in which is” should be “a valuation in which it”
p. 68. Section 1.6, in the first sentence of the first paragraph: “formule” should be “formula”.
p. 122. Lines 6 and 7 of the proof should have instead of in the rules.
p. 134, line 12. “verify that is” should be “verify that it”.
p. 165. Exercise 2.6.1: “In Example 2.23, page 136” should be “In Example 2.27, page 140”.
p. 166, line 5.
p. 166, line 6.
p. 166, line 7.
In this chapter you will learn to construct answers to questions such as:
The chapter deals with propositional arguments. A propositional argument is made up of a finite set of
propositions (called premises) that together justify the deduction of another proposition (called the
conclusion). Propositions represent statements about the world to which we can attach a truth value
(either true or false). A process of reasoning involving a proof calculus is used to ensure the validity of
arguments.
A cow is an animal
My favourite colour is blue
Jane got 80% in the Formal Logic exam
In propositional logic, a declarative sentence can be expressed as a propositional logic formula. (From
now on, we use the term proposition to refer to a propositional logic formula.) A proposition can be an
atom, or a combination of atoms and connectives of propositional logic.
So a proof consists of a list of propositions. Each proposition in the list is either a premise or a
proposition which can be proved by applying rules to propositions already in the list. The rules in natural
deduction all have a similar form. The rules allow us to add new propositions to the proof if the
proof contains propositions of a particular structure. The rules are all syntactic in nature - they make no
reference to the semantics of the propositions.
We use the "⊢" symbol to indicate the provability of propositions using natural deduction. So "φ1, …, φn
⊢ ψ" states that ψ is provable from φ1, …, φn. In this "sequent", φ1, …, φn are the premises and ψ is the
conclusion.
Similarly, "⊢ ψ" states that ψ is provable from zero premises, or simply that ψ is provable in natural
deduction. Such a proposition is called a theorem. The notation "φ ⊣⊢ ψ" is an abbreviation for "φ ⊢ ψ
and ψ ⊢ φ", and states that φ and ψ are provably equivalent.
At any time, we can introduce a copy of an earlier proposition (unless it only occurs inside a closed
assumption box).
1 p premise
2 q assumption
3 p copy 1
4 q→p →i 2–3
As a rule of natural deduction, this is called "bottom-elimination" and, as may be seen on page 27 of the
prescribed book, it looks as follows: from ⊥ we may conclude any proposition φ (the rule ⊥e).
Since a contradiction is necessarily false, and our proofs consist of propositions that are necessarily true,
it is not immediately clear how we can introduce a contradiction into a proof. However, we have a rule for
doing it (see page 27 of the prescribed book): from φ and ¬φ we may conclude ⊥ (the rule ¬e).
The text refers to this as the "not-elimination" rule rather than as the "bottom-introduction" rule as in
Formal Logic 2.
The rule for introducing negation into our proofs also involves using a contradiction. The idea is simple: if
we make an assumption and then, by applying the rules of natural deduction we are able to produce a
contradiction, then the assumption is not provable - that is, its negation is provable. So the “not-
introduction rule" (¬i) on page 27 of the prescribed book shows an assumption box above the line,
containing φ at the top and ⊥ at the bottom, and below the line it shows ¬φ.
It may seem strange that in order to prove χ from φ ∨ ψ, one must prove χ both from φ and from ψ, but
consider the following argument in natural language that illustrates this concept:
1 (p → q) ∨ (p → r) premise
2 p assumption
3 p → q assumption
4 q →e 3, 2
5 q ∨ r ∨i1 4
6 p → r assumption
7 r →e 6, 2
8 q ∨ r ∨i2 7
9 q ∨ r ∨e 1, 3–5, 6–8
10 p → (q ∨ r) →i 2–9
When we are inside an assumption box, we can use or copy any previous proposition that is not inside a
closed assumption box. We cannot use or copy any proposition that occurs only inside a previous closed
assumption box, because then the assumption would have already been discharged.
As you can see, this rule has nothing above the line. This means that we may introduce the proposition
φ ∨ ¬φ into our proofs at any time, with no required premises or previous occurrences of any of the
symbols that might happen to be in φ.
For example, one of the provable equivalences given in the prescribed book is
p ∨ (q ∧ r) ⊣⊢ (p ∨ q) ∧ (p ∨ r)
So if our proof contained an earlier line
(a ∨ b) ∨ (c ∧ (d → e))
we could add the line
((a ∨ b) ∨ c) ∧ ((a ∨ b) ∨ (d → e))
The justification for adding that line would be stated as "provable equivalence" and would technically
need a reference to a list of provable equivalences.
Below we list a number of equivalences. (Note, however, that you may not use these equivalences when
you are required to give a formal proof using natural deduction. You may not, for example, use de
Morgan’s law in a formal proof. Only the proof rules of natural deduction may be used.)
Equivalences involving ∧ and ∨
In the equivalences listed here φ, ψ and χ denote arbitrary propositions.
1. φ ∧ ψ ⊣⊢ ψ ∧ φ (commutativity of ∧)
2. φ ∧ φ ⊣⊢ φ (idempotence of ∧)
3.
4. φ ∧ ¬φ ⊣⊢ ⊥ and φ ∨ ¬φ ⊣⊢ ⊤
5. (φ ∧ ψ) ∧ χ ⊣⊢ φ ∧ (ψ ∧ χ) (associativity of ∧)
6. φ ∨ ψ ⊣⊢ ψ ∨ φ (commutativity of ∨)
7. φ ∨ φ ⊣⊢ φ (idempotence of ∨)
8. and
9. φ ∧ ⊤ ⊣⊢ φ
11.
12.
13. ¬¬φ ⊣⊢ φ
Equivalences involving ¬
14.
15.
16.
17.
18.
19. φ → ψ ⊣⊢ ¬φ ∨ ψ
Equivalences involving →
21.
De Morgan Laws
23. ¬(φ ∧ ψ) ⊣⊢ ¬φ ∨ ¬ψ
24. ¬(φ ∨ ψ) ⊣⊢ ¬φ ∧ ¬ψ
Distributivity of ∧, ∨
25. φ ∨ (ψ ∧ χ) ⊣⊢ (φ ∨ ψ) ∧ (φ ∨ χ)
26. φ ∧ (ψ ∨ χ) ⊣⊢ (φ ∧ ψ) ∨ (φ ∧ χ)
27. and
Examples
We emphasize that these are not formal proofs.
Example 1
Let us show that φ → ψ ⊣⊢ ¬ψ → ¬φ.
φ → ψ
⊣⊢ ¬φ ∨ ψ (using equivalence 19)
⊣⊢ ψ ∨ ¬φ (equivalence 6)
⊣⊢ ¬¬ψ ∨ ¬φ (equivalence 13)
⊣⊢ ¬ψ → ¬φ (equivalence 19 again).
Example 2
Let us show that (p ∧ q) ∨ (p ∧ ¬q) ⊣⊢ p.
(p ∧ q) ∨ (p ∧ ¬q)
⊣⊢ p ∧ (q ∨ ¬q) (equivalence 26)
⊣⊢ p ∧ ⊤ (equivalence 4)
⊣⊢ p (equivalence 9)
The sense of this is that if we want to prove φ, we start by assuming ¬φ and show that this leads to a
contradiction. Since we do not accept a contradiction as part of a valid proof, the assumption we made
must have been false. This yields ¬¬φ, which we simplify to φ.
The purpose of describing propositional logic as a formal language is not to add any expressive power to
it, but rather to put it into a format in which it is easy to determine if a string of symbols actually forms a
proposition that we could evaluate. The syntax of a well-formed formula (wff) of propositional logic is
defined in terms of a grammar on pages 32-33 of the prescribed book.
A particular wff can also be expressed using a parse tree. The rules for constructing a parse tree
representing a given wff are as follows:
The following example illustrates how to construct the parse tree representing a given well-formed
formula. The parse tree of p ∧ (¬q ∨ ¬p) is

        ∧
       / \
      p   ∨
         / \
        ¬   ¬
        |   |
        q   p
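To make the tree structure concrete, here is a small Python sketch. The nested-tuple encoding and the tag names "atom", "not", "and", "or" are our own illustrative choices, not notation from the prescribed book; the height function computed here is the quantity used later in the induction proofs.

```python
def height(tree):
    """Height of a parse tree: 1 for an atom, else 1 + the tallest subtree."""
    kind = tree[0]
    if kind == "atom":
        return 1
    return 1 + max(height(sub) for sub in tree[1:])

# The tree drawn above, written with "and"/"or"/"not" nodes for concreteness:
tree = ("and",
        ("atom", "p"),
        ("or",
         ("not", ("atom", "q")),
         ("not", ("atom", "p"))))

print(height(tree))  # 4: atom -> negation -> disjunction -> conjunction
```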
p q r p ∨ q (p ∨ q) ∧ r
T T T T T
T T F T F
T F T T T
T F F T F
F T T T T
F T F T F
F F T F F
F F F F F
Consider a theorem of natural deduction, i.e. ⊢ φ. If we construct a truth table for φ, we find that it is
true in all rows. We say that φ is a tautology, and write ⊨ φ.
Now consider a set of formulas φ1, φ2, ..., φn, and the following sequent: φ1, φ2, ..., φn ⊢ ψ.
Suppose that we build a truth table that includes a column for each of the formulas φ1, φ2, ..., φn and ψ.
We observe that on every line where every φi is true, ψ is also true. We say that φ1, φ2, ..., φn semantically
entail ψ, and we write φ1, φ2, ..., φn ⊨ ψ.
Clearly, the semantic entailment relation between arbitrary formulas does not always hold. For example,
the truth table above shows that "p ∨ q ⊨ (p ∨ q) ∧ r" does not hold. This can be seen from line 2 or line
4: in both cases p ∨ q is true, but (p ∨ q) ∧ r is false.
The definition of semantic entailment may seem similar to the definition of a valid sequent. This is not
coincidental. However, it is crucial to recognize the difference. A sequent is valid if we can use the rules
of natural deduction to derive or prove the conclusion from the premises. A semantic entailment holds if
we observe the required condition in all the rows of the truth table representation of the sequent.
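For small formulas, semantic entailment can be checked mechanically by enumerating the whole truth table, exactly as described above. The Python sketch below is illustrative only; encoding formulas as Python functions of a truth assignment is our own choice, not anything from the prescribed book.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Check semantic entailment by enumerating all 2**n truth assignments.
    Returns (True, None) if it holds, else (False, counterexample_row)."""
    for values in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False, row  # a line with all premises true, conclusion false
    return True, None

# Formulas encoded as functions of a truth assignment (a dict of atom values):
p_or_q       = lambda v: v["p"] or v["q"]
p_or_q_and_r = lambda v: (v["p"] or v["q"]) and v["r"]

holds, line = entails([p_or_q], p_or_q_and_r, ["p", "q", "r"])
print(holds, line)  # False {'p': True, 'q': True, 'r': False} -- line 2 of the table
```

The counterexample it finds is precisely line 2 of the truth table discussed in the text.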
Note finally the changed titles of sections 1.4.3 and 1.4.4 below. There are other proof systems for
propositional logic (not covered in the prescribed book), and soundness and completeness relates a
specific proof system to the semantics of propositional logic. We have changed the titles of these
sections to reflect this.
The proof uses course-of-values induction where the inductive hypothesis is applied to the number of
lines in the proof.
This result is a demonstration of the soundness of propositional logic. The significance of this result is
that it is not possible for us to build a proof of something by natural deduction which is not "true" in the
semantic entailment representation of truth. To be more explicit, if there exists a proof for the sequent,
then, in the truth table representing that sequent, all lines that have a T for each of φ1, ..., φn must also
have a T for ψ.
The result is immediately useful to us. Suppose we are trying to prove the validity of a sequent, but
without much success. If we can demonstrate that the corresponding semantic entailment does not hold,
then there cannot be a proof of the sequent. To show that the semantic entailment does not hold, all we
need to find is one line in the truth table on which all the premises are true but the conclusion is false.
For example, consider the sequent p → q ⊢ q → p. This is obviously not valid but it may not be obvious
how to prove that. We can demonstrate its invalidity by constructing the truth table:
p q p → q q → p
T T T T
T F F T
F T T F
F F T T
In line 3 of the truth table, the premise p → q has the truth value T but the conclusion q → p has the truth
value F. Thus, the semantic entailment relation does not hold, and, from the proof of soundness, we
know that the sequent does not have a proof. In other words, it is unprovable.
This is the second part of showing that the two systems of propositional logic (natural deduction proof
rules and truth tables) are exactly equivalent.
We outline the proof. Remember that a valid sequent is one that we can prove using natural deduction
rules.
1. Prove that if φ1, ..., φn ⊨ ψ holds, then ⊨ φ1 → (φ2 → (... (φn → ψ))...) holds.
2. Prove that if ⊨ φ1 → (φ2 → (... (φn → ψ))...) holds, then ⊢ φ1 → (φ2 → (... (φn → ψ))...) is a
valid sequent.
3. Prove that if ⊢ φ1 → (φ2 → (... (φn → ψ))...) is a valid sequent, then φ1, ..., φn ⊢ ψ is a valid
sequent.
The proofs of Steps 1 and 3 are straightforward. The proof of Step 2 is more difficult and the explanation
in the prescribed book is quite hard to follow. We can summarize the main points as follows:
Course-of-values induction is applied to the height of the parse tree for the formula. The main idea is
that, given any proposition φ, each line of the truth table for φ can be used to create a provable sequent
which has the atoms in φ (or their negations) as premises and either φ or its negation as the conclusion.
This is achieved by assuming that it can be done for all formulas with parse trees whose height is
smaller than φ's parse tree, and then showing how applying the inductive hypothesis to the last operation in
building φ's parse tree allows these to be combined to get a proof of the desired sequent.
The formula that has to be proved by course-of-values induction applied to the height of its parse tree
is φ1 → (φ2 → (... (φn → ψ))...). A sequent is created from each line of its truth table and these are
proved separately. LEM is then used over and over again to combine all of these special-case proofs
into a complete proof of ⊢ φ1 → (φ2 → (... (φn → ψ))...).
φ1, ..., φn ⊢ ψ iff φ1, ..., φn ⊨ ψ
This means that all the provable equivalences listed in Section 1.2.4 are also semantic equivalences.
A proposition φ is said to be satisfiable if there is some truth assignment of its atoms that makes φ
evaluate to true. On the other hand, a proposition φ is said to be valid (or a tautology) if every
assignment of truth values of φ's atoms makes φ evaluate to true.
Validity and satisfiability are linked by the following theorem (given on page 57 of the prescribed book):
φ is satisfiable iff ¬φ is not valid (i.e. ¬φ is not a tautology).
This theorem can be restated in a number of different ways. For example, "φ is valid iff ¬φ is not
satisfiable", or "φ is not valid iff ¬φ is satisfiable", etc.
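For any particular small formula, this duality between satisfiability and validity can be checked by brute force. The following Python sketch uses our own encoding of formulas as functions of a truth assignment, purely for illustration.

```python
from itertools import product

def satisfiable(f, atoms):
    """True if some assignment of the atoms makes f true."""
    return any(f(dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

def valid(f, atoms):
    """True if every assignment of the atoms makes f true (a tautology)."""
    return all(f(dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

phi     = lambda v: v["p"] and not v["q"]        # satisfiable but not valid
not_phi = lambda v: not (v["p"] and not v["q"])  # its negation

atoms = ["p", "q"]
# phi is satisfiable iff (not phi) is not valid:
print(satisfiable(phi, atoms) == (not valid(not_phi, atoms)))  # True
```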
Satisfiability is a very important practical problem for computer scientists. Unfortunately, there is no
known, efficient algorithm to test an arbitrarily chosen proposition for satisfiability. One obvious algorithm
is to generate the truth table for the problem, but this is extremely inefficient, and for any realistic
application this method is not feasible. The reason is that the number of rows of the table grows
exponentially with the number of different atoms in the formula. Think about it: If there are n different
atoms in a formula, the truth table needs to have 2ⁿ rows. In fact, satisfiability (or SAT, as it is usually
called) is still an area of active research in computer science.
To recap, we can show that the proposition φ is a tautology (or valid) by constructing its truth table.
However, constructing the truth table for φ may be impractical if the number of atoms is large. Is there a
solution? One alternative is to convert φ into an equivalent formula for which validity checking is easier.
One commonly used method is to convert φ to conjunctive normal form (CNF). A formula is in CNF if it
consists of a set of clauses connected by the logical connective "and", such that each clause contains
only atoms and their negations, connected by the logical connective "or".
The formal definition of CNF (given on page 55 of the prescribed book) depends on a grammar which
defines three types of objects: Literals (L), Disjunctions (D) and Conjunctions (C)
L ::= p | ¬p this means "a Literal is either an atom or the negation of an atom"
D ::= L | L ∨ D this means "a Disjunction is either a Literal, or
a Literal connected to a Disjunction with the connective ∨"
C ::= D | D ∧ C this means "a Conjunction is either a Disjunction, or
a Disjunction connected to a Conjunction with the connective ∧"
It is easy to check the validity of a formula that is in CNF. The formula can only be valid if all of its
disjunctions are valid. A disjunction is valid under all circumstances only if it contains a pair of
complementary literals; that is, if it contains a pair of literals of the form "p" and "¬p". Such a disjunction
will always evaluate to true under any assignment of truth values to the literals.
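This validity check is easy to mechanize. In the Python sketch below a CNF formula is represented as a list of clauses, each clause a list of literal strings with "~" marking negation; all of this encoding is our own illustrative choice.

```python
def cnf_valid(clauses):
    """A CNF formula (list of clauses, each a list of literals like 'p' or
    '~p') is valid iff every clause contains a complementary pair."""
    def clause_valid(clause):
        lits = set(clause)
        # look for some literal '~x' whose positive partner 'x' is also present
        return any(lit.startswith("~") and lit[1:] in lits for lit in lits)
    return all(clause_valid(c) for c in clauses)

print(cnf_valid([["p", "~p", "q"], ["r", "~r"]]))  # True: every clause has a pair
print(cnf_valid([["p", "q"], ["r", "~r"]]))        # False: first clause has none
```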
Creating a CNF Formula from a Truth Table
Suppose we are given a truth table for a formula φ, but we are not given the actual formula. We can use
the truth table to create φ in CNF. Recall that a formula in CNF consists of a conjunction of disjunctions.
In order for the formula to evaluate to true, all of its disjunctions must evaluate to true simultaneously.
We focus on the lines of the truth table where φ is false. We want to construct a formula that evaluates to
true if the combination of literal values does not place us on any of these false lines. By linking together
disjunctive clauses, each of which corresponds to the idea "not on line k of the truth table" where φ
evaluates to false in line k, we create a formula which is true if and only if we are on one of the lines of
the truth table where φ is true.
p q r φ
T T T T
T T F F
T F T T
T F F T
F T T F
F T F T
F F T F
F F F T
We will create a formula which says, in effect, "not on line 2" AND "not on line 5" AND "not on line 7"
On line 2, p is true, q is true, and r is false. If any of these conditions do not hold, then we are not on line
2. Thus "¬p ∨ ¬q ∨ r" is a formula for "not on line 2". Analyzing lines 5 and 7 in the same way, we get two
more disjunctions, and when we link them together, we get
(¬p ∨ ¬q ∨ r) ∧ (p ∨ ¬q ∨ ¬r) ∧ (p ∨ q ∨ ¬r)
Note that the correctness of this method has an immediate implication. We started with an entirely
arbitrary, unknown formula φ, and derived an equivalent CNF formula. This means that if we had started
with a known formula, we could follow exactly the same steps: construct the truth table, identify the lines
where the formula is false, and build a CNF formula based on those lines. The conclusion is that, for
every formula in propositional logic, there is an equivalent CNF formula. It means that if we can prove
things about CNF formulas, or create algorithms that apply only to CNF formulas, we are able to apply
the results to all formulas.
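The "not on line k" construction can be written directly as a short program. The Python sketch below uses our own encoding (a formula as a Python function, clauses as lists of literal strings) and rebuilds, from the truth table discussed above, the same three clauses derived in the text.

```python
from itertools import product

def cnf_from_truth_table(f, atoms):
    """Build a CNF formula from the rows of f's truth table where f is false.
    Each false row contributes one 'not on this line' disjunctive clause."""
    clauses = []
    for values in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, values))
        if not f(row):
            # negate each atom's value on this row: true atoms appear negated,
            # false atoms appear positive, so the clause excludes this row only
            clauses.append(["~" + a if row[a] else a for a in atoms])
    return clauses

# The example formula is false exactly on lines 2, 5 and 7 of the table,
# i.e. where (p, q, r) is (T, T, F), (F, T, T) or (F, F, T):
f = lambda v: not ((v["p"] and v["q"] and not v["r"]) or
                   (not v["p"] and v["q"] and v["r"]) or
                   (not v["p"] and not v["q"] and v["r"]))
print(cnf_from_truth_table(f, ["p", "q", "r"]))
# [['~p', '~q', 'r'], ['p', '~q', '~r'], ['p', 'q', '~r']]
```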
Unfortunately, this method of translating a given formula into CNF format is inefficient if there are a lot of
atoms involved (since it once again depends on writing out a truth table, which grows exponentially). If
we are given the truth table up front (and no other information about φ) then this is the method we use.
But if we are actually given the formula φ, there is another method for creating a CNF equivalent. The
method involves three steps:
Note that the CNF version of φ is considerably longer than the original version. In the worst case, the
algorithm will produce a CNF formula which is exponentially longer than the original (i.e. if the original
formula contains n literals, then the CNF version will contain about 2ⁿ literals).
p → q
(⊤ → ⊥) ∧ (p ∧ p ∧ p → p)
(p ∧ q ∧ r → s) ∧ (⊤ → p) ∧ (s → ⊥)
Here ⊥ is the familiar "bottom", meaning "always false", and ⊤ is the same symbol upside down ("top"),
meaning "always true", and p, q, r and s are atoms. A Horn formula is a conjunction of Horn clauses,
each of which is an implication, in which the left side is a conjunction of things and the right side is a
single thing (where a "thing" is ⊥ or ⊤ or an atom). Note that ∨ and ¬ do not appear in Horn formulas.
Horn formulas form the basic statement structure of the Prolog programming language. They also have
the property that they can be tested for satisfiability very easily. The HORN algorithm works as follows.
This algorithm works by marking everything that is forced to be true by repeated applications of Modus
Ponens. Clearly if "bottom" is marked, then one of the clauses has only things which must be true on the
left side, and something which must be false on the right side. This implication must be false under any
truth assignment that satisfies the other clauses, so the formula is not satisfiable. However, if "bottom" is
not marked by this algorithm, then we can satisfy the formula by setting to true all the atoms that have
been marked, and setting all others to false.
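The marking procedure described above can be written in a few lines of Python. The clause encoding below (a body list plus a head, with "T" standing for ⊤ and "F" for ⊥) is our own illustrative choice.

```python
def horn_satisfiable(clauses):
    """Sketch of the HORN marking algorithm. Each clause is (body, head):
    body is a list of atoms (or 'T' for top), head is an atom, 'T' or 'F'.
    Repeatedly mark the head of any clause whose whole body is already
    marked (Modus Ponens); the formula is satisfiable iff 'F' is never marked."""
    marked = {"T"}        # top is always true, so it starts out marked
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in marked and all(b in marked for b in body):
                marked.add(head)
                changed = True
    return "F" not in marked, marked

# (p ∧ q → r) ∧ (⊤ → p) ∧ (p → q) ∧ (r → ⊥), the worked example that follows:
clauses = [(["p", "q"], "r"), (["T"], "p"), (["p"], "q"), (["r"], "F")]
sat, marked = horn_satisfiable(clauses)
print(sat)  # False: p, q, r and finally bottom all get marked
```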
Example: Say we want to determine whether the following Horn formula is satisfiable:
(p ∧ q → r) ∧ (⊤ → p) ∧ (p → q) ∧ (r → ⊥)
Mark ⊤ (step 1).
⊤ is marked, so mark p, the head of ⊤ → p (step 2).
p is marked, so mark q, the head of p → q (step 2).
p and q are marked, so mark r, the head of p ∧ q → r (step 2).
r is marked, so mark ⊥, the head of r → ⊥ (step 2).
⊥ is marked, so output "not satisfiable" (step 3).
APPENDIX C: ADDITIONAL NOTES ON CHAPTER 2
INTRODUCTION
In this chapter, the proof system of natural deduction is extended to predicate logic. The syntax and
semantics are explained and the concept of bound and free variables is discussed. Soundness and
completeness of the natural deduction proof system are briefly discussed. Furthermore, the
undecidability of predicate logic is shown by reduction from Post’s correspondence problem. Certain
important quantifier equivalences are also shown.
So, what is the nature of the statement "All birds can fly"? It is about the property of "being able to fly", a
property that all things called "birds" should have. So, the proposition "All birds can fly" could be written
as a material implication that says "If an object is a bird, then that object can fly". In order to assert that
this implication is true, we have to be inclusive in our consideration of every possible object in the entire
universe. If any object that we encounter is a bird and that object does not fly, then we would have to
assert that the proposition is false.
Predicate logic is designed to give us a way to express knowledge and construct sound logical
arguments about elements of (possibly infinitely large) sets. It is based, as you might suspect, on the
concept of a predicate.
A predicate states something about an object or a tuple of objects. Predicates can take single or multiple
arguments. If a predicate takes a single argument, we can think of it as expressing a property of an
object. For example, Bird(x), which translates to "x is a bird", expresses the fact that some arbitrary
object x is a bird. If, on the other hand, the predicate takes more than one argument, we can think of it as
a relationship between the arguments of the predicate. For example, Younger(x, y), which translates to
"x is younger than y", expresses a relationship between the arguments x and y of the predicate Younger.
When predicates are defined, the arity of the predicate must always be specified. The arity defines how
many arguments the predicate takes. By convention, predicate names always start with capital letters.
The notion of a variable is implicit in the above examples of a predicate. The variable is the generic
name for any object from our universe of discourse.
Predicate logic also deals with functions. A function expression differs from a predicate statement in that
instead of denoting a truth value, it denotes a single object. For example, productOf(x, y) denotes the
single object that represents the product of x and y. As another example, the function parentOf(y) denotes
the single object that represents the parent of y. Some functions do not require any arguments. Such
functions are called constants because they always take on the same value. The arity of a function is the
number of arguments it requires. The convention we follow is to represent function names by beginning
them with lower case letters. Function symbols are used to build complex terms, whereas predicates are
used to build predicate formulas.
Predicate logic also introduces the notion of quantification. The two main quantifiers used in predicate
logic are the universal and existential quantifiers. Symbolically represented as '∀', the universal quantifier
loosely translates to “every” or “for all”. The existential quantifier, '∃', loosely translates to “there is at
least one” or “there exists at least one”. Thus, for example, ∀xP(x) expresses the fact that “every object x
is such that x is P”, whatever predicate P stands for. Similarly, the quantified expression ∃xP(x)
expresses the fact that “there is at least one object x such that x is P”, whatever predicate P means.
2.2.1 Terms
These are logical expressions that refer to objects. The inductive definition of term is given on page 99 of
the prescribed book.
Example: A symbol that refers to one specific whistler’s female parent always denotes the same value,
and is therefore a constant. Thus the constant ‘whistlersMother’ would be a valid term. Unary functions
can sometimes more easily express the same idea as a constant. Thus the term motherOf(whistler)
could also refer to the same object as referenced by the constant ‘whistlersMother’.
2.2.2 Formulas
The inductive definition of well-formed formulas (wffs) is given on page 100 of the prescribed book. You
will note that parentheses occur in all the formulas. However, the use of parentheses can very often be
avoided by applying the same binding priorities as in propositional logic. The quantifiers have the same
binding priority as negation (¬). Therefore, if we want a quantifier to apply to more than just the formula
following immediately after it, we need to use parentheses to demarcate the scope of the quantifier. For
example, in "∀xP(x) ∧ Q(x)" the quantifier applies only to P(x). If the intention is to extend the scope of
the quantifier to the whole formula, we must write "∀x(P(x) ∧ Q(x))".
We deal with the semantics (meaning) of predicate logic in section 2.4. There we will see that predicate
logic defines the idea of a model of a formula φ as a specific assignment of meaning to the predicates
and functions that appear in φ. There are some formulas that are true in all models (for example, "P(x)
∨ ¬P(x)"), some formulas are true in some models and false in others, while other formulas are false in all
models. One of the main goals of predicate logic is to determine the models in which a formula is true or
false. However, before we come to that, we need to understand what free and bound variables are,
and how constants, variables and terms can be substituted for each other.
2.2.4 Substitution
Natural deduction proofs in predicate logic sometimes require the substitution of free variables by
constants or terms. We write φ[t/y] to mean "substitute the term t for all free instances of variable y in φ",
or, equivalently, "replace all free instances of variable y in φ by t".
Definition: If φ is a predicate logic formula, t is a term, and y is a variable, then we denote by φ[t/y] the
formula obtained by replacing all free instances of y in φ by t. The expression t/y is called a substitution.
Note that if does not contain y as a free variable, the substitution would have no effect. Bound
variables cannot be replaced by other variables, constants or terms. (This is because the substitution
would result in an unintentional alteration of the semantics of the formula. For example, an attempt to
substitute x for the bound occurrences of y in y(P(x) Q(y)) would result in the formula x(P(x) Q(x)),
which would have an entirely different meaning from the original.)
Special care must be taken, though, as even substituting for free variables is not always safe. For
example, suppose that a free instance of y occurs within the scope of a quantified variable x in a formula
φ. If we replace y with a term t that includes the variable x, then this variable is now unintentionally bound
by the quantifier. This has the unintended consequence of changing the meaning of the original formula,
and hence the conditions under which the formula is true or false.
A term t is said to be free for x in φ if no free instance of x in φ is within the scope of a quantifier
∀y or ∃y for any variable y that appears in t. In other words, it is safe to substitute t for x in φ:
the substitution does not pose the risk of unintentionally changing the meaning of the formula.
We stress that if the variable x is not free in φ, that is, if φ contains only bound occurrences of x or no
occurrences of x at all, then the substitution φ[t/x] has no effect, and φ[t/x] is said to be equivalent to φ.
For example, let φ be the formula ∀xP(x, y) and let t be a term. Then φ[t/x] = φ since x is a bound variable
in φ, and φ[t/y] = ∀xP(x, t) since y is a free variable in φ. (But note that x must not occur free in t.)
Take note that replacements of different free occurrences of the same variable take place at the same
time; in other words, simultaneously. For example, if φ is the formula P(x, x) ∧ ∀yQ(x, y), then φ[h(x)/x]
is P(h(x), h(x)) ∧ ∀yQ(h(x), y).
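The idea of simultaneous substitution of free occurrences can be sketched in code. The Python sketch below uses a hypothetical tuple encoding of formulas (this encoding is ours, not the prescribed book's notation); note that it replaces free occurrences only and does not check that t is free for x:

```python
# Hypothetical encoding: ('pred', 'P', ('x',)) is P(x); ('forall', 'y', body)
# binds y; terms are strings (variables/constants) or ('fn', name, args).

def subst(phi, t, x):
    """Return phi[t/x]: replace every *free* occurrence of variable x by term t."""
    tag = phi[0]
    if tag == 'pred':                      # atomic: substitute inside each argument
        _, name, args = phi
        return ('pred', name, tuple(subst_term(a, t, x) for a in args))
    if tag in ('and', 'or', 'implies'):
        return (tag, subst(phi[1], t, x), subst(phi[2], t, x))
    if tag == 'not':
        return ('not', subst(phi[1], t, x))
    if tag in ('forall', 'exists'):
        _, v, body = phi
        if v == x:                         # x is bound here: no substitution inside
            return phi
        return (tag, v, subst(body, t, x))
    raise ValueError(tag)

def subst_term(term, t, x):
    if term == x:
        return t
    if isinstance(term, tuple) and term[0] == 'fn':
        return ('fn', term[1], tuple(subst_term(a, t, x) for a in term[2]))
    return term

# phi = P(x, x) ∧ ∀y Q(x, y);  phi[h(x)/x] = P(h(x), h(x)) ∧ ∀y Q(h(x), y)
phi = ('and', ('pred', 'P', ('x', 'x')), ('forall', 'y', ('pred', 'Q', ('x', 'y'))))
hx = ('fn', 'h', ('x',))
print(subst(phi, hx, 'x'))
```

Because both free occurrences of x are replaced in one pass, the replacement is simultaneous, exactly as in the example above; the bound y is left alone.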
We stress that, as explained above, substitutions may result in unintended semantic
side effects when their application leads to a change in the meaning of the formula. Because of that,
substitutions should always be done subject to certain restrictions.
The following are examples of situations in which t is free to replace all free occurrences of x in φ:
t is equal to x.
t is a constant.
The variables appearing in t do not occur in φ.
The variables appearing in t do not occur within the scope of a quantifier in φ.
There are no quantifiers in the formula φ.
The variable x does not occur freely within the scope of a quantifier in φ.
First,
∀u(P(u) → Q(y)) ∧ (P(x) → ¬Q(y))
is equivalent to
∀x(P(x) → Q(y)) ∧ (P(x) → ¬Q(y))
and then, substituting f(x, y) for y, we obtain
∀u(P(u) → Q(f(x, y))) ∧ (P(x) → ¬Q(f(x, y)))
New Rules
We discuss the new rules for natural deduction in predicate logic below.
=i "equality introduction"
This rule says that we can always add "t = t" to our proofs, where t is any term.
=e "equality elimination"
Equality elimination is an interesting case, because it allows us to apply substitution in our proofs. The
basic idea is that if we have "t1 = t2" in our proof, and we also have a formula of the form φ[t1/x] (i.e. a
formula in which t1 has been substituted for all the free instances of x), then we can add the formula
given by φ[t2/x]. This is possible because if t1 = t2, both t1 and t2 should have the same effect on the truth
value of φ. Note, however, that for these substitutions to work, we require that both t1 and t2 be free for x
in φ.
The challenge with using this rule is being able to recognize when we have a useful formula of the form
φ[t1/x] in our proof.
∀x e "for all x elimination"
Given the quantified formula ∀x φ, the rule "for all x elimination" allows us to substitute anything we want
for the free instances of x in the formula φ (as long as what we substitute is a term which is free for x in
formula φ). It is important to realize that φ does not include the ∀x that precedes it. For example, if we
have ∀x(P(x) → Q(x)), then φ is P(x) → Q(x) and both occurrences of x are free in φ.
∀x i "for all x introduction"
When we have proved φ[x0/x] within the box, we are entitled to close the box and assert ∀x φ. We
can do this because we didn't put any constraints on x0, so whatever properties it has are ones that it
shares with all objects in the universe of discourse. Hence the lines of the proof within the box would
work no matter what actual value we assigned to x0. That is, the conclusion is true for any arbitrary
variable x.
∃x i "there exists x introduction"
If we have φ[t/x] in a proof (i.e. a formula which would result from replacing all the free occurrences of x
in φ with t), then we know that there is some value that makes φ true. Thus we can introduce the formula
∃x φ.
This may seem useless because it is giving up some information - we are going from an assertion that
φ is true for a specific value to an assertion that φ is true for some unknown value. On the contrary, it is
actually very useful to us because very often the specific value for which φ is true will be a "local"
variable defined within an assumption box. The scope of that variable is restricted to the box, and we
cannot carry it outside the box, but after we generalize the statement φ[x0/x] to ∃x φ, we can extend
the scope of that statement outside the box.
It is important to note that, whenever we write "φ[t/x]", it is understood that this particular sequence of
characters is not a formula, and would not appear as a line in our proof. Rather, "φ[t/x]" is a short-hand
notation for the formula that would result from replacing all free occurrences of x in φ by t. This is the
formula that would actually appear in the proof.
∃x e "there exists x elimination"
The reasoning that lets us eliminate an existential quantifier is very similar to the "∨ elimination" rule in
propositional logic. If we are given ∃x φ, then we know that there is some value x0 such that φ[x0/x] is
true. This is equivalent to an infinitely long disjunction: φ[a/x] ∨ φ[b/x] ∨ φ[c/x] ∨ ... where a, b, c, etc.
are all the values in the universe. To make use of this knowledge, we introduce a box containing a value
x0. Within the box, we prove some formula χ such that χ contains no reference to x0. We then pull χ out
of the box as a true statement. This makes sense because we put no constraints on x0 other than that
φ[x0/x] is true. Thus, the steps we use to derive χ within the box would be valid no matter what value we
choose for x0, so long as φ[x0/x] is true. We leave x0 within the box because we don't really know its
value, but we pull χ out of the box because it is true no matter which φ-satisfying value of x0 we choose.
V(x) : x is a vehicle
F(x) : x can fly
A(x) : x is an aeroplane
and do so as follows:
3 x0
4 V(x0) ∧ F(x0) assumption
5 V(x0) ∧e1 4
6 V(x0) ∧ F(x0) → A(x0) ∀x e 1
7 A(x0) →e 6, 4
8 V(x0) ∧ A(x0) ∧i 5, 7
9 ∃x(V(x) ∧ A(x)) ∃x i 8
Notice that the prescribed book uses "assumption" in line 4 above as the justification for the introduction
of this line in the proof. It would have been preferable to justify the introduction of this line by writing:
φ[x0/x], where φ is V(x) ∧ F(x).
1. ∀x∀y φ ≡ ∀y∀x φ
2. ∃x∃y φ ≡ ∃y∃x φ
3. ¬∀x φ ≡ ∃x ¬φ
4. ¬∃x φ ≡ ∀x ¬φ
5. ∀x(φ ∧ ψ) ≡ ∀x φ ∧ ∀x ψ
6. ∃x(φ ∨ ψ) ≡ ∃x φ ∨ ∃x ψ
If x is not free in ψ, we also have:
7. ∀x(φ ∧ ψ) ≡ ∀x φ ∧ ψ, and
∀x(φ ∨ ψ) ≡ ∀x φ ∨ ψ, and
∃x(φ ∧ ψ) ≡ ∃x φ ∧ ψ, and
∃x(φ ∨ ψ) ≡ ∃x φ ∨ ψ.
8. ∀x(φ → ψ) ≡ ∃x φ → ψ, and
∃x(φ → ψ) ≡ ∀x φ → ψ.
9. ∀x(ψ → φ) ≡ ψ → ∀x φ, and
∃x(ψ → φ) ≡ ψ → ∃x φ.
Example:
¬∀x P(x) → Q
≡ ∃x ¬P(x) → Q (equivalence 3)
≡ ∀x(¬P(x) → Q) (equivalence 8; note that x is not free in Q)
The above example is not a formal proof. You will find examples of natural deduction proofs in your
prescribed book on pages 118 to 122.
2.4.1 Models
Note: Models are defined differently by different logicians. In other logic textbooks you may come across
the term ‘interpretation’ which corresponds to ‘model’ here. We will stick to the way the term is used in
our prescribed book.
Given a set F of function symbols and a set P of predicate symbols, a model of the pair (F, P) is defined
on page 124 of the prescribed book. We give two examples of models below. The model in the first
example has a universe consisting of four girls. In the second example we give a mathematical model.
Given
F: the set { j, a, b }
P: the set { T }
where
the first two elements of F (namely j and a) are constants, i.e. nullary functions, the third
element of F (namely b) is a unary function, and the only element of P (namely T) is a binary
predicate.
There are many possible models. Note that in all models M we are obliged to define a value for the
function bM for every element of the universe A that we will choose, but that we are free to define the
predicate TM in any way we like. We construct the following model M of (F, P):
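One concrete way to finish such a construction can be sketched in Python. Everything below (the girls' names, the reading of b as "best friend of" and of T as "is taller than") is a hypothetical choice of ours, offered only to illustrate what a model must supply:

```python
# A sketch of one possible model M of (F, P), with hypothetical choices.

A = {'Jane', 'Anne', 'Beth', 'Carol'}          # universe of concrete values

jM = 'Jane'                                     # constant (nullary function)
aM = 'Anne'                                     # constant

# bM must be defined for EVERY element of the universe A.
bM = {'Jane': 'Beth', 'Anne': 'Carol', 'Beth': 'Jane', 'Carol': 'Anne'}

# TM may be any set of pairs we like (here: "is taller than").
TM = {('Jane', 'Anne'), ('Beth', 'Carol'), ('Jane', 'Carol')}

# Evaluate the atomic formula T(b(j), a) in M:
t_holds = (bM[jM], aM) in TM
print(t_holds)
```

The two obligations from the text are visible in the code: bM is total on A, while TM is an arbitrary set of pairs.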
Given
F: the set { 0, 1, +, ·, -, / }
P: the set { =, ≤, <, >, ≥, ≠ }
where the first two elements of F are nullary functions (i.e. constants) and the other elements binary
functions, and all the elements of P are binary predicates.
Suppose we are given one or more formulas and are required to construct a model where the formulas
are true or to construct a model where the formulas are false. We will need a universe of concrete values
as well as definitions for all functions and all predicates appearing in the formulas. (If the formulas do not
involve any constants or other functions, no function definitions are required - the set F is empty.) Look
at the two examples below where models have to be constructed.
Note that F is empty because the given sentence does not include any constants or other functions.
The sentence involves one predicate, namely R, with two arguments. So
F: the set { }
P: the set { R }
There are many possible models. We construct the following model M of (F, P):
The universe A of concrete values: the set of integers greater than 3, i.e. {4, 5, 6, …}
RM : We define R(x, y) as “x is equal to 2 times y”.
You should be able to see that the given sentence is true in this model, because the right hand side
of the implication is always true: no integer greater than 3 is equal to 2 times itself.
Required: a model where a given sentence is false
We see the sentence involves one function (with one argument) and one predicate (with two
arguments). So
F: the set { m }
P: the set { G }
There are many possible models M. Note again that we are obliged to define a value for the function
mM for every element of the universe A that we choose, but that we are free to define the predicate
GM in any way we like. We construct the following model M of (F, P) where the sentence is false.
(We think of m(x) as “husband or wife of x” and of G(x, y) as “x and y are colleagues”, thus the
sentence states that some married couple are colleagues. This has to be false.)
Below we give three additional examples of models - one of a mathematical nature and the other two
non-mathematical. In each case we investigate whether given well-formed formulas are true or not in
that model.
Given
F: the set { c, f }
P: the set { E }
where
c is a nullary function, i.e. a constant,
f is a binary function,
E is a binary predicate.
The universe A of concrete values: the set of natural numbers {0, 1, 2, …},
cM = 0,
the function fM is defined by fM(i, j) = (i + j) mod 3,
E M: the equality predicate
Is the wff
∀x[E(f(x, x), x) → E(x, c)]
true in this model? Yes.
(In this model the meaning of the formula is: for all natural numbers k,
if 2k mod 3 = k, then k = 0.)
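This claim can be spot-checked mechanically. The quantifier ranges over all naturals, but since f(k, k) = 2k mod 3 is always less than 3, the antecedent can only hold for small k, so checking an initial segment of the universe is already convincing (a sketch, not a proof over the whole infinite universe):

```python
# The model: universe = naturals, cM = 0, fM(i, j) = (i + j) mod 3, EM = equality.
c = 0

def f(i, j):
    return (i + j) % 3

def E(i, j):
    return i == j

# Check E(f(x, x), x) -> E(x, c) for a sample of the universe.
holds = all((not E(f(k, k), k)) or E(k, c) for k in range(1000))
print(holds)
```

Only k = 0 satisfies the antecedent 2k mod 3 = k, and there the consequent k = 0 holds as well.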
Given
F: the set { c }
P: the set { L }
where
c is a nullary function, i.e. a constant,
and L is a binary predicate.
Is the wff
∀x(L(c, x) → L(x, c))
true in this model? Yes. ("Everyone Lola likes, likes her" - yes.)
However, the wff ∀x L(c, x) is false in this model. ("Lola likes everyone" - no.)
Given
F: the set { a, b, c, m, n }
P: the set { F, K }
where
a, b, c, m and n are nullary functions, i.e. constants,
and F and K are binary predicates.
The wff
∀x∀y((F(x, y) ∧ F(y, x)) → (K(x, y) ∧ K(y, x)))
is true in this model. The intended meaning of the formula is "If two persons are friends, then they
know each other".
Environment
An environment (or look-up function) l for a universe A of concrete values is defined on page 127 of the
prescribed book. Such a function maps every variable onto an element of A.
Interpretation of terms
Terms are defined on page 99 of the prescribed book. Let T(F, var) denote the set of terms built from
function symbols in F and variables in var. Each model M and look-up function l induce a mapping from
the set of terms to the universe A. The mapping can be defined recursively.
This means that, given a model M and look-up function (or environment) l, we find that terms denote
elements of the universe A of concrete values. Here follows an example:
Given
F: the set { +, · }
P: the set { = }
where the elements of F are binary functions, and the element of P is a binary predicate.
l(x) = 3
l(y) = 2
l(z) = 1
If a wff contains the term (x · (y + x · z)) where all the variables are free, it will be interpreted as 15.
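The recursive interpretation of terms can be sketched in Python, using a hypothetical nested-tuple encoding of terms (our own, not the book's notation):

```python
# Interpreting terms in the model of the natural numbers where +M and ·M are
# the usual operations, under the environment l below.
# Terms: a variable is a string; ('+', t1, t2) and ('*', t1, t2) are applications.

l = {'x': 3, 'y': 2, 'z': 1}                    # environment (look-up function)

def interpret(term, env):
    """Map a term to an element of the universe, recursively."""
    if isinstance(term, str):                   # a variable: look it up
        return env[term]
    op, t1, t2 = term                           # a binary function application
    a, b = interpret(t1, env), interpret(t2, env)
    return a + b if op == '+' else a * b

term = ('*', 'x', ('+', 'y', ('*', 'x', 'z')))  # x · (y + x · z)
print(interpret(term, l))                       # 3 · (2 + 3 · 1) = 15
```

The structure of the function mirrors the recursive definition: a variable denotes l(x), and a function application denotes the function applied to the denotations of its arguments.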
Satisfaction in an environment
Satisfaction in an environment is defined on page 128 of the prescribed book. For a formula φ without
any free variables (i.e. a sentence) the environment is irrelevant and may be omitted from the notation.
Thus in that case we may simply write M ⊨ φ.
Note: semantic entailment Γ ⊨ ψ differs from satisfaction M ⊨l φ; these conflicting uses of the symbol ⊨ are traditional.
Formally:
Soundness
A proof system is sound iff for any closed predicate formula φ (i.e. a formula without any free variables,
also called a sentence) we have: if ⊢ φ then ⊨ φ.
Completeness
A proof system is complete iff for any closed predicate formula φ (i.e. a formula without any free
variables, also called a sentence) we have: if ⊨ φ then ⊢ φ.
The proof system described in the prescribed book is both sound and complete: if φ is a closed
predicate formula, then ⊢ φ if and only if ⊨ φ.
Consequence:
There exists no perfect theorem-proving system for predicate logic. The set of valid formulas can
be enumerated: just enumerate all possible proofs using, for example, Huth and Ryan's proof system.
But there exists no way to enumerate the set of invalid formulas (else validity would be decidable).
Therefore we have the following situation: checking whether φ is valid (i.e. checking whether ⊨ φ) is
undecidable.
Corollaries:
5.2.2 Semantics
The notion of a model is central to the study of the semantics of any logic. In propositional logic, a model
is simply an assignment of truth values to atomic formulas. In predicate logic, this definition of a model is
extended as in Definition 2.14 where we have a universe of values, and an interpretation for each
function symbol and predicate symbol in the language. In particular, an n-ary predicate symbol is
mapped to a set of n-tuples in the domain of interpretation.
In modal logic, the notion of a model is similarly extended. We now have a set W, whose elements we
call worlds. We also have a binary relation R, called the accessibility relation, and a labeling function L
which maps worlds to sets of atoms in the language.
Comparing Definition 2.14 to Definition 5.3, we note that there are some similarities. In both cases we
have a set of values. In predicate logic, its elements are values in the relevant universe. In modal logic,
its elements are called worlds.
In predicate logic, an n-ary predicate is interpreted as a set of n-tuples from the domain, that is, as an n-
ary relation over the domain. In a modal model, we have only one binary relation, called the accessibility
relation R. This accessibility relation is explicit in a modal model, but not in the modal language. Instead,
we have a unary modal connective □ in the modal language. The link between R and the unary modal
connective □ is given in Definition 5.4.
The labeling function L of modal logic maps worlds to sets of atoms. For each world, L specifies the set
of atoms which is true in that world. Put differently, we could express this as a function which gives, for
each atom, the set of worlds at which it is true. It is therefore similar to the interpretation of unary
predicates in predicate logic, where each unary predicate symbol is mapped to a set of values, namely
those values in the domain that have a certain property.
The above correspondences between the semantics of modal logic and that of predicate logic indicate
that modal logic can be viewed as a restricted fragment of predicate logic, in which we have only unary
predicates and a single binary predicate R. Furthermore, the sentences we may write are restricted by
Definition 5.4. In particular, the ways in which we may use R are very restrictive. Normally we don’t
bother with all of this, and rather stick to the syntactic form of modal logic, using the unary modal
connectives. This way, we can view modal logic as a propositional logic, to which we have added some
modal connectives. The additional complexity of modal logic is, in a way, hidden in the semantics of the
logic.
The most common and intuitive way to specify such a model is by means of a Kripke diagram. Consider
the following example:
[Kripke diagram: worlds x1 (no atoms), x2 (p), x3 (q), x4 (p, q); arrows x1→x2, x1→x4, x2→x4, x3→x3, x3→x4]
This model consists of four worlds, namely x1, x2, x3 and x4. No atoms are true in world x1, atom p is true
in world x2, q is true in x3, and p and q are true in world x4.
The accessibility relation is represented by arrows between worlds. In this example, world x1 is not
accessible from any other world. World x2 is accessible from world x1, world x3 is only accessible from
itself, and x4 is accessible from x1, x2 and x3.
As in propositional and predicate logic, we can ask whether a given formula (in this case a modal
formula) is true in a model. A modal formula is true in a model if it is true in all its worlds. In the above
example, p is true in worlds x2 and x4, but not in x1 and x3. So the formula p is not true in this model. The
same goes for the formula q. But what about □p? It is true in world x1 because p is true in all worlds
accessible from x1. (So to check whether a modal formula □φ is true in a world x, we check whether φ is
true in all worlds accessible from x.) □p is also true in x2 because p is true in all worlds accessible from
x2. Unfortunately □p is not true in x3, because although p is true in x4, it is not true in x3, which is a world
accessible from x3. □p is said to be vacuously true in world x4, since p is true in "all" worlds accessible
from x4. (So if no worlds are accessible from a world x, then any formula of the form □φ is true in x.)
Since □p is not true in all worlds, it is not true in this model.
See if you can work out whether □q is true in the above model.
In the same way, to determine whether a formula ◊φ is true in a model, we must check that it is true in all
worlds of the model. ◊φ is true in a world x if φ is true in at least one world accessible from x. So to test
whether ◊p is true in the example model above, we need to check whether it is true in all the worlds. ◊p is
true in x1 because there is at least one world accessible from x1, namely x2, in which p is true. ◊p is true
in x2 because there is at least one world accessible from x2, namely x4, in which p is true. ◊p is true in x3
because there is at least one world accessible from x3, namely x4, in which p is true. But ◊p is not true in
x4 because there is no world accessible from x4 in which p is true. (So if no worlds are accessible from a
world x, then any formula of the form ◊φ is false in x.)
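The worked example above can be checked mechanically. Here is a small Python sketch of the Kripke model (worlds, accessibility relation R, labelling L) with the semantics of □ and ◊ for atoms:

```python
# The four-world example: worlds, accessibility relation and labelling.
W = {'x1', 'x2', 'x3', 'x4'}
R = {('x1', 'x2'), ('x1', 'x4'), ('x2', 'x4'), ('x3', 'x3'), ('x3', 'x4')}
L = {'x1': set(), 'x2': {'p'}, 'x3': {'q'}, 'x4': {'p', 'q'}}

def acc(x):
    """All worlds accessible from world x."""
    return {w for (v, w) in R if v == x}

def box(atom, x):
    """Box-atom holds at x iff atom holds in ALL accessible worlds (vacuously true if none)."""
    return all(atom in L[w] for w in acc(x))

def diamond(atom, x):
    """Diamond-atom holds at x iff atom holds in AT LEAST ONE accessible world."""
    return any(atom in L[w] for w in acc(x))

for x in sorted(W):
    print(x, 'box p:', box('p', x), ' diamond p:', diamond('p', x))
```

Running this reproduces the discussion: □p holds at x1, x2 and (vacuously) x4 but fails at x3, while ◊p holds at x1, x2 and x3 but fails at x4. You can use the same functions to settle the □q question above.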
Validity
There are a few related notions of validity in modal logic.
The first of these is that of truth in all worlds in all models: a modal formula is valid if it is true in
all worlds in all models. One way to prove that a given modal formula is valid is to take an
arbitrary world in an arbitrary model (about which you make no assumptions except that it has at
least one world), and then show that the given formula is true in that world. The most important
valid modal formula schemes are the ones given in (5.3) on page 314, together with the modal
formula scheme K on page 315. Remember that all the propositional tautologies are also valid
modal formulas. This holds even if the subformulas contain modal connectives. For example, the
formula □φ ∨ ¬□φ is a valid modal formula because it is an instance of the law of the excluded
middle, which is a propositional tautology.
To show that a modal formula is not valid, it suffices to construct a single model which has a
world in which the given formula is false.
The second notion of validity is that of validity in a given frame. A modal formula can be valid in
one frame without being valid in all frames. A frame consists of a set of worlds and a fixed
accessibility relation on the worlds. A formula is valid in the frame if it is true in all the worlds in
the frame for all labeling functions.
Definition 5.11 on page 322 formalizes what it means for a formula to be valid in a frame. The
prescribed book expresses this by saying that the frame satisfies the formula. However, remember that
this is a form of validity, not satisfiability. For a formula to be valid in a frame, (i.e. in the
terminology of the prescribed book, for the frame to satisfy the formula), the formula has to be
true in all worlds in the frame for every labeling function. Remember the difference between a
frame and a model: A frame does not have a labeling function. Figures 5.9 and 5.10 are
examples of frames. They show the worlds, and the accessibility relation on worlds, but they
don’t show which atoms are true in which worlds. Figures 5.3 and 5.5, on the other hand, are
examples of modal models.
A third notion of validity in modal logic is that of validity in a given class of frames. In this case,
the set of worlds and accessibility relation on the worlds are not fixed, but some properties of the
accessibility relation are fixed. For example, we may fix the criterion that the accessibility relation
must be reflexive. We can then consider which formulas are valid in the class of all reflexive
frames, that is, which formulas are valid in all frames with a reflexive accessibility relation.
This is essentially what correspondence theory is about. Table 5.12 on page 325 gives a number of
important correspondences between properties of the accessibility relation on the one hand, and valid
formula schemes on the other. Each of the given formula schemes is valid in the class of frames with the
given corresponding property of its accessibility relation. Conversely, for each of the formula schemes on
the left, if all instances of the formula scheme are valid in a given frame, then the accessibility relation of
the frame will have the corresponding property. This correspondence is captured in Theorem 5.13 for
reflexive frames, and for transitive frames. The other correspondences are proved similarly.
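The reflexivity correspondence can be illustrated computationally. The sketch below (a toy of ours, with made-up two-world frames and a single atom p) checks whether the instance □p → p of scheme T holds at every world for every possible labelling, which is what validity in a frame requires:

```python
from itertools import chain, combinations

def valid_T_in_frame(W, R):
    """Check []p -> p at every world, for EVERY labelling of the atom p."""
    def box_p(x, true_at):
        # []p at x: p true in all worlds accessible from x
        return all(w in true_at for (v, w) in R if v == x)
    # every possible set of worlds at which p is true (every labelling)
    subsets = chain.from_iterable(combinations(sorted(W), n) for n in range(len(W) + 1))
    return all(
        (not box_p(x, set(s))) or (x in set(s))   # []p -> p at world x
        for s in subsets for x in W
    )

W = {'a', 'b'}
print(valid_T_in_frame(W, {('a', 'a'), ('b', 'b'), ('a', 'b')}))  # reflexive frame
print(valid_T_in_frame(W, {('a', 'b')}))                          # not reflexive
```

As Table 5.12 predicts, the scheme holds in the reflexive frame but fails in the non-reflexive one (label p true at b only: then □p holds at a but p does not).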
Why would one want to construct new modal logics? The choice of which modal formulas should be
valid, depends on what you want the modal operators to mean. As we have seen above, there is a
correspondence between the validity of formula schemes on the one hand, and properties of the
accessibility relation on the other. So, the meaning and properties of the accessibility relation will
influence which formulas in the logic should be valid. For each different set of valid formulas, one needs
a new logic.
A new modal logic can easily be constructed from the basic modal logic K. Simply add enough of the
modal formulas that you want to be valid, as axioms. It is not necessary to add all the formulas that you
want to be valid as axioms. It is sufficient to add a few well-chosen axiom schemes.
This is what Definition 5.15 is about. The set L will contain a few axiom schemes, typically chosen from
the list in Table 5.12. The valid formulas in the new logic will be all the instances of elements of L,
together with everything entailed by them. We call the new logic by the same name as the new axiom
schemes. So, a formula φ is valid in the logic L, written ⊨L φ, iff φ is semantically entailed by the set Lc of
instances of L in modal logic K, written Lc ⊨K φ.
Definition 5.15 makes this definition a bit more general, defining entailment in the logic L from a set of
premises Γ. We then have that ψ is entailed by Γ in the logic L, written Γ ⊨L ψ, iff ψ is semantically
entailed by Lc ∪ Γ in modal logic K.
Intuitionistic logic
In classical logic, the meaning of a compound sentence is determined by the meaning of each of its
parts, and the meaning of the construction by which the parts are combined. The meaning is expressed
by truth conditions. In classical propositional logic, a valuation tells us which atoms are true and which
are false. Because we also know the truth conditions of the connectives, we can determine the meaning
of any compound formula. We say that classical logic is truth functional, or compositional.
In intuitionistic logic, things work a bit differently. The basic idea here is that the meaning of a sentence is
given by proof conditions:
Intuitionistic logic can be given a possible-worlds semantics. This is done in Definition 5.19. Think of
worlds as states of information. In any world, some things are known. These are the things that have
been proved. An accessible world is some state conceivable from the present state, in which some
additional things have been proved. This is why the accessibility relation has to be monotone. Once we
have proved something, we can only increase our knowledge through additional proofs. Nothing can be
retracted from what we have already proved.
If we have a proof of both φ and ψ in some state, we have a proof of their conjunction φ ∧ ψ.
If in some state we either have a proof of φ or we have a proof of ψ, then we have a proof of their
disjunction φ ∨ ψ.
If we don't have a proof of φ in the present state, and we also don't have a proof of φ in any state that
can be reached from the present state, then we have a proof that a proof of φ is not possible. This means
we have a proof of its negation ¬φ.
If, in any state that can be reached from the present state, we have a proof of ψ whenever we have a
proof of φ, then we have the construction required as a proof of φ → ψ in the present state.
Suppose we have proved φ in a dashed box. This means φ holds in the world represented by the dashed
box. Since φ holds in an arbitrary world accessible from the current world, □φ holds in the current world,
which is represented by the surrounding box.
Similarly, suppose we have proved □φ. This means that φ holds in an arbitrary accessible world. We may
therefore use φ in a nested dashed box.
A modal natural deduction proof starts in an arbitrary world. We may therefore view the entire proof as
taking place in an outer dashed box. Since this represents an arbitrary world, anything proved in this
outer dashed box holds in all worlds.
The rules we have to add for each of the axiom schemes T, 4 and 5 resemble the respective axiom
schemes closely. If we want to construct a natural deduction proof for the modal logic KT, we may use
the □ introduction rule, the □ elimination rule and the T rule, in addition to all the natural deduction rules
of propositional logic. Similarly, in a natural deduction proof for the modal logic KT45, we may use all of
the above rules, plus the rules 4 and 5.
Any proof for K, KT or KT4 will also be a proof for KT45, but not conversely. Example 5.21 (1) gives a
proof for the modal logic K. This would also constitute a proof in the logics KT, KT4 or KT45. (2) and (3)
are proofs for KT45. Since both the T and 5 axioms are used in each of these proofs, they are not proofs
for K, KT or KT4.
APPENDIX E: SOLUTIONS TO SELECTED EXERCISES
Chapter 1
1 (f) Declarative sentence: “If interest rates go up, share prices go down”
If we choose:
p: "Interest rates go up."
q: "Share prices go down."
then the sentence can be formalized as p → q.
1(i) Declarative sentence: “If Dick met Jane yesterday, they had a cup of coffee together, or they
took a walk in the park.”
2(g) The expression p ∨ q ∧ r is problematic since ∨ and ∧ have the same binding priorities, so we
have to insist on additional brackets in order to resolve the ambiguity.
1 (p ∧ q) ∧ r premise
2 s ∧ t premise
3 p ∧ q ∧e1 1
4 q ∧e2 3
5 s ∧e1 2
6 q ∧ s ∧i 4, 5
We again apply the rules you know from Formal Logic 2, but make a few comments.
Because the main connective of the goal is the implication →, we open a subproof box in
line 4 with the assumption of the left hand side of the goal, namely p. The subproof ends
with the right hand side of the goal (line 10) and then the → introduction rule is used in
line 11 after the subproof box has been exited.
We require two subproofs (lines 6 to 7 and lines 8 to 9) so that the ∨ elimination rule can
be used in line 10. Also note how the → elimination rule is used (lines 5, 7 and 9). If the
application of these rules is not clear, work through the rules given in the prescribed book
for Formal Logic 2 again.
1 p → (q ∨ r) premise
2 q → s premise
3 r → s premise
4 p assumption
5 q ∨ r →e 1, 4
6 q assumption
7 s →e 2, 6
8 r assumption
9 s →e 3, 8
10 s ∨e 5, 6 – 7, 8 – 9
11 p → s →i 4 – 10
2(b) We prove the validity of ¬p ∨ ¬q ⊢ ¬(p ∧ q) as follows:
1 ¬p ∨ ¬q premise
2 p ∧ q assumption
3 ¬p assumption
4 p ∧e1 2
5 ⊥ ¬e 4, 3
6 ¬q assumption
7 q ∧e2 2
8 ⊥ ¬e 7, 6
9 ⊥ ∨e 1, 3 – 5, 6 – 8
10 ¬(p ∧ q) ¬i 2 – 9
As you can see, we assume the negation of the goal in line 2 (the first statement of the outer assumption
box) and derive the contradiction ⊥ in line 9 (the last statement of this subproof) so that the goal can be
derived in line 10, after the subproof box has been exited, by using the ¬ introduction rule.
Also note how the ∨ elimination rule is applied: we need two separate assumption boxes (lines 3 to 5
and lines 6 to 8), each box starting with one of the disjuncts of the formula in line 1 and each box ending
on the same formula ⊥, which is then derived in line 9 after the second assumption box has been exited.
Note that all this happens inside the outer subproof box.
Remember that the ¬e rule was called the ⊥ introduction rule in Formal Logic 2, but the same
requirements apply.
1 p → (q ∨ r) premise
2 ¬q premise
3 ¬r premise
4 p assumption
5 q ∨ r →e 1, 4
6 q assumption
7 ⊥ ¬e 6, 2
8 r assumption
9 ⊥ ¬e 8, 3
10 ⊥ ∨e 5, 6 – 7, 8 – 9
11 ¬p ¬i 4 – 10
The proof is very similar to the proof in question 2(b) above (assumption of the negation of the
goal and use of the ∨e rule, thereby requiring two sub-subproofs).
Please note how the →e rule is applied (line 5): the right hand side of an implication is
derived if the left hand side appears on an earlier line.
1 q ∨ ¬q LEM
2 q assumption
3 p assumption
4 q copy 2
5 p → q →i 3 – 4
6 (p → q) ∨ (q → r) ∨i1 5
7 ¬q assumption
8 q assumption
9 ⊥ ¬e 8, 7
10 r ⊥e 9
11 q → r →i 8 – 10
12 (p → q) ∨ (q → r) ∨i2 11
13 (p → q) ∨ (q → r) ∨e 1, 2 – 6, 7 – 12
Note that all the subproofs are essential. Most of the rules which are used above have been
explained in the previous exercises, except the ∨i rule used in lines 6 and 12. The ∨
introduction rule is very simple: once a formula has been derived, any formula may be connected
to it with the ∨ connective.
5(d) We have to prove the validity of (p → q) → ((¬p → q) → q). You will find nothing new in the
proof given below. Note, however, again, that a new subproof box has to be opened whenever an
assumption is made.
1 p → q assumption
2 ¬p → q assumption
3 p ∨ ¬p LEM
4 p assumption
5 q →e 1, 4
6 ¬p assumption
7 q →e 2, 6
8 q ∨e 3, 4 – 5, 6 – 7
9 (¬p → q) → q →i 2 – 8
10 (p → q) → ((¬p → q) → q) →i 1 – 9
Exercises 1.3 (p. 81)
1(c) In order to draw the parse tree for the formula p ∧ ¬q → p, we have to determine the main
connective. Since ∧ and ¬ bind more strongly than →, we could re-write this formula as
((p ∧ ¬q) → p)
This shows that the main connective of this formula is →, which places it at the root of the parse
tree, with the subtree for p ∧ ¬q as its left child and the leaf p as its right child.
The height of this parse tree is 1 + 3 = 4, since the longest path from root to leaf is 3.
4(b) Here, parentheses are used to override the binding priorities of the connectives, making the last
∧ the main connective of the formula ((p → ¬q) ∧ (p ∧ r) → s) ∧ ¬r. The parse tree, whose height
is 1 + 5 = 6, has this ∧ at its root, with the subtree for (p → ¬q) ∧ (p ∧ r) → s as its left child and
the subtree for ¬r as its right child.
By definition, a formula φ is a subformula of another formula ψ if and only if the formation tree of
φ appears as a subtree of the formation tree of ψ. The following is a list of all the
subformulas of ((p → ¬q) ∧ (p ∧ r) → s) ∧ ¬r:
p
q
r
s
¬r
¬q
(p → ¬q)
(p ∧ r)
(p → ¬q) ∨ (p ∧ r)
((p → ¬q) ∨ (p ∧ r) → s)
((p → ¬q) ∨ (p ∧ r) → s) ∧ ¬r
The purpose of all the parentheses is to override precedence rules and binding orders. We can
parse a fully parenthesized formula recursively and mechanically (i.e. we don't need to worry
about the interpretation of the symbols). Parsing a wff lets us build a parse-tree for the formula, in
which the root node corresponds to the final rule that was applied in the building of the formula,
and the leaves are the atomic propositions in the formula.
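The mechanical, recursive parsing described above can be sketched in a few lines of Python. The ASCII connectives ~, &, | and -> stand in for the logical symbols, and the tuple representation of trees is our own choice, not the book's; the input must be fully parenthesised and contain no spaces:

```python
# Mechanical recursive parsing of fully parenthesised formulas.
def parse(s):
    """Return (tree, rest): tree is an atom string or (connective, children...)."""
    if s[0] == '~':                      # negation: parse its single subformula
        sub, rest = parse(s[1:])
        return ('~', sub), rest
    if s[0] != '(':                      # an atomic proposition (single letter)
        return s[0], s[1:]
    left, rest = parse(s[1:])            # skip '(' and parse the left subformula
    op = rest[:2] if rest[:2] == '->' else rest[0]
    right, rest = parse(rest[len(op):])  # parse the right subformula
    assert rest[0] == ')'
    return (op, left, right), rest[1:]   # skip ')'

def height(t):
    # a leaf has height 1; each connective adds one level
    return 1 if isinstance(t, str) else 1 + max(height(c) for c in t[1:])

tree, rest = parse("((p&q)->p)")
print(tree)           # ('->', ('&', 'p', 'q'), 'p')
print(height(tree))   # 3
```

The root of the returned tree corresponds to the final rule applied in building the formula, exactly as described above.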
Exercises 1.4 (p. 82)
5 The formula of the parse tree in figure 1.10 on page 44 is the following:
¬((q → ¬p) ∧ (p → (r ∨ q)))
The formula is not valid since it evaluates to F for several assignments. However, this formula is
satisfiable: for example, if q and p evaluate to T, then q → ¬p evaluates to F, the conjunction
therefore evaluates to F, and so the entire formula evaluates to T.
13(b) An example is: Let p represent “There are clouds in the sky” and let q represent “It rains”. Then
¬p → ¬q holds (“if there are no clouds in the sky, then it does not rain”), but ¬q → ¬p is false
(“if it does not rain, then there are no clouds in the sky”) because, as we know, there may well
be clouds in the sky even if it does not rain.
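This countermodel can also be found mechanically by searching the truth table for a valuation where the first formula is true and the second is false (a sketch of our own; the helper `implies` is not from the prescribed book):

```python
from itertools import product

def implies(a, b):
    # truth table of ->: false only when a is true and b is false
    return (not a) or b

# search for a valuation where ~p -> ~q is true but ~q -> ~p is false
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(not p, not q) and not implies(not q, not p)]
print(counterexamples)   # [(True, False)]: p (clouds) true, q (rain) false
```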
As may be seen in the last five columns of the truth table below, the formulas given in (a), (c) and
(d) are semantically equivalent to p → (q → r), but the formula given in (b) is not. The truth value of
the formula given in (b) does not correspond to that of p → (q → r) in lines 4 and 6, while the truth
values of (a), (c) and (d) are identical to those of p → (q → r) in all lines.
7(a) We construct the formula φ1 in CNF as explained in section 1.5.1 in both the prescribed book and
the additional notes.
Note how the principal conjuncts correspond to the lines in the table where the φ1 entry is F.
The Horn formula to be investigated is
(T → q) ∧ (T → s) ∧ (w → ⊥) ∧ (p ∧ q ∧ s → v) ∧ (v → s) ∧ (T → r) ∧ (r → p).
Each Horn clause is now investigated: if everything to the left of → is marked, the right-hand side
is marked everywhere it appears. Thus:
All occurrences of q, s and r are marked in the first iteration of the while-loop (from T → q,
T → s and T → r). In the second iteration p is marked (from r → p, since r is now marked), and in
the third iteration v is marked (from p ∧ q ∧ s → v).
Nothing further can be marked: in particular, w → ⊥ cannot fire, because w is unmarked.
Because ⊥ is not marked, the Horn formula is satisfiable.
(We allocate T to q, s, r, p and v, and F to w.)
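The marking algorithm above is easy to program. A sketch in Python, with T and ⊥ written as the strings 'T' and 'bot'; the clause representation is our own choice:

```python
# Horn clauses as (set of premises, conclusion); 'T' and 'bot' are the constants
clauses = [
    ({'T'}, 'q'), ({'T'}, 's'), ({'w'}, 'bot'), ({'p', 'q', 's'}, 'v'),
    ({'v'}, 's'), ({'T'}, 'r'), ({'r'}, 'p'),
]

marked = {'T'}                  # T is marked from the start
changed = True
while changed:                  # the while-loop of the marking algorithm
    changed = False
    for premises, conclusion in clauses:
        # if everything to the left of -> is marked, mark the right-hand side
        if premises <= marked and conclusion not in marked:
            marked.add(conclusion)
            changed = True

print('bot' not in marked)      # True: the Horn formula is satisfiable
print(sorted(marked - {'T'}))   # ['p', 'q', 'r', 's', 'v'] get T; w gets F
```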
Chapter 2
4(a) The formula is φ = ∀x(P(y, z) ∧ ∀y(¬Q(y, x) ∨ P(y, z))).
[Parse tree: ∀x at the root; below it ∧, whose left subtree is P with leaves y and z, and whose
right subtree is ∀y; below ∀y is ∨, with left subtree ¬ above Q (leaves y and x) and right
subtree P (leaves y and z).]
4(b) From the parse tree of the previous item we see that all occurrences of z are free, all
occurrences of x are bound, and the leftmost occurrence of y is free, whereas the other two
occurrences of y are bound.
4(d) (i)
φ[w/x] is simply φ again, since there are no free occurrences of x in φ that could be
substituted by w.
φ[w/y] is ∀x(P(w, z) ∧ ∀y(¬Q(y, x) ∨ P(y, z))), since we replace the sole free
occurrence of y with w.
If we simply replace the sole free occurrence of y with f(x), we get that φ[f(x)/y] is
∀x(P(f(x), z) ∧ ∀y(¬Q(y, x) ∨ P(y, z))). Note, however, that we have created a
problem by this substitution, because the variable x occurs in f(x) and after the
substitution it occurs within the scope of ∀x. We should actually first rename x in the
given formula by, say, u to get ∀u(P(y, z) ∧ ∀y(¬Q(y, u) ∨ P(y, z))) and then do the
substitution: ∀u(P(f(x), z) ∧ ∀y(¬Q(y, u) ∨ P(y, z))).
If we simply replace all (free) occurrences of z with g(y, z), we get that φ[g(y, z)/z] is
∀x(P(y, g(y, z)) ∧ ∀y(¬Q(y, x) ∨ P(y, g(y, z)))). By doing this we have again created a
problem, because the variable y occurs in g(y, z) and after the substitution it occurs
within the scope of ∀y. We should actually first rename the bound occurrences of y in
the given formula by, say, u to get ∀x(P(y, z) ∧ ∀u(¬Q(u, x) ∨ P(u, z))) and then do
the substitution: ∀x(P(y, g(y, z)) ∧ ∀u(¬Q(u, x) ∨ P(u, g(y, z)))).
4(d) (ii) All of them, for there are no free occurrences of x in φ to begin with.
4(d) (iii) The terms w and g(y, z) are free for y in φ, but f(x) is not free for y in the formula, since the
x in f(x) would be captured by ∀x in that substitution process, as noted above.
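The capture test used in (iii) can be sketched in Python: a term t is free for y in φ unless a free occurrence of y lies within the scope of a quantifier that binds one of the variables of t. The tuple representation of terms and formulas below is our own illustration, not the book's:

```python
def term_vars(t):
    # variables are strings; compound terms are tuples ('f', arg, ...)
    return {t} if isinstance(t, str) else set().union(*map(term_vars, t[1:]))

def free_vars(phi):
    op = phi[0]
    if op == 'pred':                                  # ('pred', name, t1, ...)
        return set().union(*map(term_vars, phi[2:]))
    if op == 'not':
        return free_vars(phi[1])
    if op in ('and', 'or', 'imp'):
        return free_vars(phi[1]) | free_vars(phi[2])
    return free_vars(phi[2]) - {phi[1]}               # ('forall'/'exists', x, body)

def free_for(t, y, phi):
    """Is term t free for variable y in formula phi?"""
    op = phi[0]
    if op == 'pred':
        return True
    if op == 'not':
        return free_for(t, y, phi[1])
    if op in ('and', 'or', 'imp'):
        return free_for(t, y, phi[1]) and free_for(t, y, phi[2])
    x, body = phi[1], phi[2]
    if x == y:                   # y is bound below: nothing would be replaced
        return True
    if y in free_vars(body) and x in term_vars(t):
        return False             # a variable of t would be captured by this quantifier
    return free_for(t, y, body)

# phi = Ax(P(y, z) and Ay(not Q(y, x) or P(y, z)))
phi = ('forall', 'x',
       ('and',
        ('pred', 'P', 'y', 'z'),
        ('forall', 'y',
         ('or', ('not', ('pred', 'Q', 'y', 'x')), ('pred', 'P', 'y', 'z')))))

print(free_for('w', 'y', phi))              # True
print(free_for(('f', 'x'), 'y', phi))       # False: x would be captured
print(free_for(('g', 'y', 'z'), 'y', phi))  # True
```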
4(f) Now, the scope of the outer ∀x is only the formula P(y, z), since the inner quantifier ∀x binds all occurrences
of x (and overrides the binding of the outer ∀x) in the formula (¬Q(x, x) ∨ P(x, z)).
Exercises 2.3 (p. 160)
7(c) The proof of the validity of ∃x∀yP(x, y) ⊢ ∀y∃xP(x, y) is given below. This proof illustrates both the
∀ and ∃ introduction and elimination rules. We comment on that below the proof.
1 ∃x∀yP(x, y) premise
2 y0
3 x0 ∀yP(x0, y) assumption
4 P(x0, y0) ∀y e 3
5 ∃xP(x, y0) ∃x i 4
6 ∃xP(x, y0) ∃x e 1, 3–5
7 ∀y∃xP(x, y) ∀y i 2–6
We open a y0-box first, since we want to derive a formula of the form ∀yψ. Then we open an x0-
box to be able to use ∃x e later on.
Note how the elimination and introduction rules for both ∀ and ∃ are applied above:
∃ elimination: (i) a formula starting with ∃ (line 1), (ii) a new subproof box starting
with the choice of a free variable which then substitutes the relevant variable in the
formula now without ∃, namely ∀yP(x, y) (line 3), (iii) the subproof ends on a formula
that does not contain the free variable (line 5), (iv) the ∃e rule is cited outside the
subproof (line 6) with the same formula on which the subproof ends in line 5.
∃ introduction: (i) a formula containing a free variable (line 4), (ii) the ∃i rule is cited
and ∃ is attached to the formula with the free variable replaced by the same variable
that is attached to ∃ (line 5).
∀ elimination: (i) a formula starting with ∀ (line 3), (ii) the ∀e rule is cited with the
formula without the ∀ and the relevant variable replaced by a free variable y0 (line 4).
∀ introduction: (i) a subproof box starting with the choice of a free variable (line 2),
(ii) the subproof ends on a formula containing the free variable (line 6), (iii) the ∀i
rule is cited outside the subproof (line 7) with the same formula on which the subproof
ends in line 6, but with ∀ attached to it and the free variable replaced by the same
variable that is attached to ∀.
We now prove the validity of ∀xP(x) → S ⊢ ∃x(P(x) → S). Note that we may not apply the ∀ elimination rule directly to the premise, because ∀x is not in
front of the whole rest of the formula, i.e. we do not have ∀x(P(x) → S). The ∀ elimination rule
is applicable to formulas of the type ∀xφ only.
This is not a simple proof and you will not generally be asked to give such a proof in an
examination. However, it demonstrates the application of several rules. Make sure that you
understand it.
1 ∀xP(x) → S premise
2 ¬∃x(P(x) → S) assumption
3 x0
4 ¬P(x0) assumption
5 P(x0) assumption
6 ⊥ ¬e 5, 4
7 S ⊥e 6
8 P(x0) → S →i 5–7
9 ∃x(P(x) → S) ∃x i 8
10 ⊥ ¬e 9, 2
11 ¬¬P(x0) ¬i 4–10
12 P(x0) ¬¬e 11
13 ∀xP(x) ∀x i 3–12
14 S →e 1, 13
15 P(t) assumption
16 S copy 14
17 P(t) → S →i 15–16
18 ∃x(P(x) → S) ∃x i 17
19 ⊥ ¬e 18, 2
20 ¬¬∃x(P(x) → S) ¬i 2–19
21 ∃x(P(x) → S) ¬¬e 20
9(r) We prove the validity of ¬∃xP(x) ⊢ ∀x¬P(x) as follows:
1 ¬∃xP(x) premise
2 x0
3 P(x0) assumption
4 ∃xP(x) ∃x i 3
5 ⊥ ¬e 4, 1
6 ¬P(x0) ¬i 3–5
7 ∀x¬P(x) ∀x i 2–6
There is nothing new in this proof. We choose a free variable in line 2, thereby opening a new
subproof box, so that the x introduction rule can be cited once the subproof is exited (line 7).
The “trick” to assume the negation of something that we want to prove is illustrated in the
subproof from line 3 (where P(x0) is assumed) to line 5 (where the contradiction is derived) and
then the ¬i rule is cited in the next line after this subproof box has been exited (i.e. line 6).
1 ∀xP(a, x, x) premise
2 ∀x∀y∀z(P(x, y, z) → P(f(x), y, f(z))) premise
3 P(a, a, a) ∀x e 1
4 ∀y∀z(P(a, y, z) → P(f(a), y, f(z))) ∀x e 2
5 ∀z(P(a, a, z) → P(f(a), a, f(z))) ∀y e 4
6 P(a, a, a) → P(f(a), a, f(a)) ∀z e 5
7 P(f(a), a, f(a)) →e 6, 3
Note that no subproofs are necessary for the application of the ∀ elimination rule: any term may
be used for the substitution.
1. The formula ∀x∀yQ(g(x, y), g(y, y), z) contains a free variable z, thus we will also need a look-up
table.
First model M: let the universe A be the set of integers, let g be interpreted as subtraction, and
let the look-up table assign l(z) = 0.
In this way g(y, y) is interpreted as 0, and the predicate Q is interpreted to be such that a triple
of integers (a, b, c) is in Q if, and only if, c equals the product of a and b.
Thus our formula says that, for all integers x and y, we have that (x – y) times 0 is equal to z;
this is true in M, since l(z) = 0.
Second model M’: We choose this model identical to M above but define a different look-up table:
let l(z) = 1. The formula is now false.
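We cannot let a computer range over all integers, but a finite sample already illustrates how the two look-up tables give different truth values (a sketch of our own; the sample range is an arbitrary choice):

```python
# Evaluate the formula "for all x, y: Q(g(x, y), g(y, y), z)" over a finite
# sample of integers, in the interpretation described above.
sample = range(-5, 6)
g = lambda a, b: a - b              # g interpreted as subtraction
Q = lambda a, b, c: c == a * b      # (a, b, c) in Q iff c = a * b

def eval_formula(z):                # z is supplied by the look-up table
    return all(Q(g(x, y), g(y, y), z) for x in sample for y in sample)

print(eval_formula(0))   # True:  model M with l(z) = 0
print(eval_formula(1))   # False: model M' with l(z) = 1
```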
12(g) The following model shows that the formula is not valid:
Let the universe A be the set of natural numbers {0, 1, 2, …}, and
let the predicate S be interpreted as “less than or equal to.”
The formula claims that a serial and anti-symmetric relation does not have a minimal element.
But in the given model, 0 is such a minimal element.
Let the universe be the set of integers, and let P be interpreted as “divisible by 2” and Q as
“divisible by 3”. Then we have
M ⊨ ∃x(¬P(x) ∨ ¬Q(x)),
i.e. the formula to the left of ⊢ evaluates to T (there exists such an x; for example, take 9 as the
value of x), but we cannot have
M ⊨ ∀x(P(x) ∨ Q(x)),
since not all integers are divisible by 2 or 3 (choose x for example to be 13). Because we have
found a model where the premise of the sequent ∃x(¬P(x) ∨ ¬Q(x)) ⊢ ∀x(P(x) ∨ Q(x)) is true but
the conclusion is false, we know that no proof exists for the validity of this sequent: the sequent
is not valid.
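The countermodel can be checked over a finite sample of integers that contains both witnesses, 9 for the premise and 13 for the conclusion (a sketch of our own; the range is an arbitrary choice):

```python
# Check the countermodel on a finite sample: premise true, conclusion false.
sample = range(1, 21)
P = lambda n: n % 2 == 0        # "divisible by 2"
Q = lambda n: n % 3 == 0        # "divisible by 3"

premise = any((not P(x)) or (not Q(x)) for x in sample)   # witnessed by x = 9
conclusion = all(P(x) or Q(x) for x in sample)            # fails at x = 13
print(premise, conclusion)      # True False
```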
CHAPTER 5
1(a) (iv) The Kripke model M referred to here is depicted in Figure 5.5 on page 315.
The relation a ⊨ □□q holds iff x ⊨ □q holds for all x with R(a, x). Since e and b are the only
instances of x which satisfy R(a, x), we see that a ⊨ □□q holds iff e ⊨ □q and b ⊨ □q hold.
But this is not the case. For example, we have R(b, e) but q is not true in world e, so □q
is not true in world b.
(i) First, suppose x ⊨ ◊(p ∨ q); then there is a world y with R(x, y) and y ⊨ p ∨ q.
But then y ⊨ p or y ⊨ q.
Case 1: If y ⊨ p, then x ⊨ ◊p, and so x ⊨ ◊p ∨ ◊q follows.
Case 2: If y ⊨ q, we argue in the symmetric way: x ⊨ ◊q, and so x ⊨ ◊p ∨ ◊q follows.
Conversely, suppose x ⊨ ◊p ∨ ◊q.
Case 1: If x ⊨ ◊p, then there exists a world y' with R(x, y') and y' ⊨ p.
This implies y' ⊨ p ∨ q, and so x ⊨ ◊(p ∨ q) follows because of R(x, y').
Case 2: Symmetrically, if x ⊨ ◊q, then there exists a world y'' with R(x, y'') and y'' ⊨ q.
This implies y'' ⊨ p ∨ q, and so x ⊨ ◊(p ∨ q) follows because of R(x, y'').
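Because the argument is purely semantic, the equivalence can also be confirmed by brute force over every Kripke model on a fixed small set of worlds (a sketch of our own; the two-world encoding is an arbitrary choice):

```python
from itertools import product

worlds = [0, 1]

def diamond(R, sat):
    # world x satisfies <>phi iff some R-successor of x satisfies phi
    return {x for x in worlds if any((x, y) in R and y in sat for y in worlds)}

ok = True
for R_bits in product([False, True], repeat=4):      # all accessibility relations
    R = {p for p, b in zip(product(worlds, repeat=2), R_bits) if b}
    for p_set, q_set in product([set(), {0}, {1}, {0, 1}], repeat=2):
        lhs = diamond(R, p_set | q_set)              # worlds satisfying <>(p v q)
        rhs = diamond(R, p_set) | diamond(R, q_set)  # worlds satisfying <>p v <>q
        ok = ok and lhs == rhs
print(ok)   # True: the equivalence holds in every such model
```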
Let R be a reflexive, transitive and Euclidean relation. We need to show that R is an equivalence
relation. Since R is already reflexive and transitive, it suffices to show that R is symmetric. To that
end, assume that R(a, b). We have to show that R(b, a) holds as well. Since R is Euclidean, we
have that R(x, y) and R(x, z) imply R(y, z) for all choices of x, y and z. So, if we instantiate this
with x = a, y = b and z = a, then we have R(x, y) by assumption (as R(a, b)), but we also have
R(x, z) since R is reflexive (so R(a, a) holds). Using that R is Euclidean, we obtain R(y, z),
which is R(b, a), as desired.
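The same claim can be confirmed by brute force on a small finite set: every reflexive, transitive and Euclidean relation on it is symmetric, hence an equivalence relation (a sketch of our own; the three-element set is an arbitrary choice):

```python
from itertools import product

A = [0, 1, 2]
pairs = list(product(A, repeat=2))

def holds(R):
    reflexive  = all((x, x) in R for x in A)
    transitive = all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)
    euclidean  = all((y, z) in R for (x, y) in R for (x2, z) in R if x == x2)
    symmetric  = all((y, x) in R for (x, y) in R)
    # the claim: reflexive + transitive + euclidean implies symmetric
    return not (reflexive and transitive and euclidean) or symmetric

assert all(holds({p for p, b in zip(pairs, bits) if b})
           for bits in product([False, True], repeat=9))
print("verified for all 512 relations")
```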
REFERENCES
1. Chellas, BF. 1980. Modal logic: an introduction. Cambridge University Press.
2. Hughes, GE and Cresswell, MJ. 1995. A new introduction to modal logic. New York: Routledge.
3. Huth, M and Ryan, M. 2004. Logic in computer science: modelling and reasoning about
systems. Second edition. Cambridge University Press.
4. Huth, M and Ryan, M. 2004. Logic in computer science: modelling and reasoning about
systems. Solutions to designated exercises. Cambridge University Press.
5. http://www.doc.ic.ac.uk/~imh/teaching/140_logic/Cribsheet.pdf
6. http://cl-informatik.uibk.ac.at/teaching/ws06/lics/ohp/4x1.pdf
7. http://gensum.kaist.ac.kr/~cs402/2004fall/lecture/logic-prop-040916-30.pdf
8. http://www.doc.ic.ac.uk/~imh/teaching/140_logic/140.pdf
9. http://www.cs.bham.ac.uk/research/projects/lics/second_edition_errata.pdf
10. http://cs.sunysb.edu/~cse541/Spring2007/cse541BasicModalLogic.pdf
11. http://cs.sunysb.edu/~cse541/Spring2007/cse541ModalLogics.pdf
12. http://cs.vu.nl/~pdwind/thesis/thesis.pdf
13. http://plato.stanford.edu/contents.html
Unisa 2018