
COS3761/2019

Study guide

Formal Logic III


COS3761

Semesters 1 and 2

School of Computing

This tutorial letter contains important information


about your module.

TABLE OF CONTENTS

INTRODUCTION ............................................................................................................................. 3

SECTION 1: LEARNING UNIT 1.................................................................................................... 4

SECTION 2: LEARNING UNIT 2.................................................................................................... 5

SECTION 3: LEARNING UNIT 3.................................................................................................... 6

APPENDIX A: ERRATA IN EARLIER PRINTINGS OF THE PRESCRIBED BOOK ....................... 7

APPENDIX B: ADDITIONAL NOTES ON CHAPTER 1 ................................................................. 9


APPENDIX C: ADDITIONAL NOTES ON CHAPTER 2 ................................................................ 20
APPENDIX D: ADDITIONAL NOTES ON CHAPTER 5 ................................................................ 31
APPENDIX E: SOLUTIONS TO SELECTED EXERCISES ........................................................... 38


INTRODUCTION
This tutorial letter (MO001) contains learning material available on the myUnisa website for COS3761.
The intention is to provide a document for students who do not have regular access to the internet, or
who prefer to work from a limited number of longer documents rather than the multitude of individual
resources available on the COS3761 website.

Please note that this document does NOT contain the most important document for this module, namely
Tutorial Letter 101, which contains the general instructions and assignments for COS3761. Tut Letter
101 is available under Official Study Material on the COS3761 website, and you should refer to it first
(before this document, and before all the resources on the website). MO001 also does not contain
Tutorial Letters 201, 202 and 203. Since these documents contain the solutions to the assignments, they
will only become available for download from Additional Resources on the COS3761 website after the
due dates of the respective assignments.

The main body of this document contains the learning units for COS3761 as found on myUnisa. These
learning units refer to a number of other documents available in Additional Resources. These other
documents are included as appendices. The table of contents indicates this clearly.

How to use this document

The linear nature of a printed document like this imposes an ordering on its parts. The order might seem
to suggest that you should start at the beginning and work through to the end. This is not how this
document is intended to be used!

Remember that it just represents a collection of resources on the COS3761 website, which you can
navigate in any order. Although the material is naturally divided into three major parts represented by
Learning Units 1, 2 and 3, which each terminate in an assignment, nothing should stop you from jumping
between parts of different learning units. For example, you might find it useful to revise the explanation of
natural deduction for propositional logic covered in Learning Unit 1 while working through the section on
natural deduction for predicate logic covered in Learning Unit 2.

The fact that there is an assignment with a due date at the end of each learning unit places a constraint
on the order in which you should work through the material. It might be fun to jump from one topic to
another as the fancy takes you, but apart from confusing yourself, you will probably miss the due dates
of the assignments and lose the marks that they contribute to your year mark and final mark.

So we recommend that you work through Learning Units 1, 2 and 3 in sequence, remembering that the
resources within each learning unit can be used in different orders, and that there is value in going back
to previous learning units for revision.

Before you get started on the learning units

 Have you read through Tutorial Letter 101 for COS3761 yet? If not, this is your first priority. If you
don't have a printed copy, please download an electronic version from Official Study Material on
the COS3761 website of myUnisa.

 Have you activated your myLife account yet? If not, do so as soon as possible. A lot of the
interaction between you and us (your lecturers) will be via email, and all of the announcements
we send about new resources we have placed on myUnisa, as well as other important information, will
be sent to your myLife account. If you don't like using myLife, you should set your myLife account
to forward all your myLife email to your personal email address, to avoid missing out.

 Have you purchased a copy of the prescribed book yet? If not, you should do so as a matter of
urgency. It is impossible to study this module without a copy of the prescribed book. Details of the
prescribed book are given in Tutorial Letter 101.

 LEARNING UNIT 1
Topics:
Syntax of Propositional Logic
Natural Deduction for Propositional Logic
Semantics of Propositional Logic
Soundness and Completeness of Natural Deduction for Propositional Logic
Satisfiability of a Propositional Logic formula
Conjunctive Normal Form
Horn formula satisfiability

Resources:
Tutorial Letter 101 for COS3761
Chapter 1 of the prescribed book
Additional notes on Chapter 1 (Appendix B)
Solutions to selected exercises (Appendix E)
Tutorial Letter 201 (only available after the due date for Assignment 1)

Suggested study plan:


 Over a period of three weeks, work through six sections of Chapter 1 of the prescribed book,
and the corresponding sections of the additional notes on Chapter 1. In other words, you should
aim to work through two sections per week.

 Attempt some of the exercises in the prescribed book. We recommend the following:

o Exercises 1.1: 1(a), (c), (e), (f)#, (i)#; 2(a), (c)#, (e), (g)#

o Exercises 1.2: 1(a), (b), (c), (d), (e), (x)#; 2(a), (b)#, (c), (d), (e)#; 3(a), (f), (q)#; 5(a), (d)#

o Exercises 1.3: 1(c)#, (d), (e), (f); 2(c); 4(b)#; 6(a), (b)

o Exercises 1.4: 2(a), (c)#; 5#; 12(a), (b), (c); 13(a), (b)#

o Exercises 1.5: 2(a)#, (b), (c), (d); 7(a)#, (b); 15(a), (b), (c), (g)#

(Exercises marked with a # have solutions provided in Appendix E.)

 Tackle and submit Assignment 1.

 Review your marked assignment answers by comparing them to those provided in Tut Letter
201. Try to identify the areas where you made mistakes and go back to the relevant resources
for this learning unit, to repair your misunderstandings.


 LEARNING UNIT 2
Topics:
Syntax of Predicate Logic
Free and bound variables
Natural Deduction for Predicate Logic
Semantics of Predicate Logic
Undecidability of Predicate Logic

Resources:
Tutorial Letter 101 for COS3761
Chapter 2 of the prescribed book
Additional notes on Chapter 2 (Appendix C)
Solutions to selected exercises (Appendix E)
Tutorial Letter 202 (only available after the due date for Assignment 2)

Suggested study plan:


 Over a period of four weeks, work through all the sections of Chapter 2 of the prescribed book,
and the corresponding sections of the additional notes on Chapter 2. Sections 2.1 and 2.2, as
well as 2.5 and 2.6 are easier, and shouldn't take longer than half a week each. You should plan
to spend more time (about a week each) on Sections 2.3 and 2.4.

 Attempt some of the exercises in the prescribed book. We recommend the following:

o Exercises 2.1: 1(a)#, (b)#, (c), (d)#, (e), (f); 2; 3(a), (b); 4

o Exercises 2.2: 1(a); 3(a); 4#

o Exercises 2.3: 6(a); 7(a), (c)#; 9(a), (d)#, (e), (f), (g); 13(a)#, (b), (c)

o Exercises 2.4: 1#; 2; 3; 5; 12(a), (c), (e), (g)#

o Exercises 2.5: 1(a), (c), (e), (g)#

(Exercises marked with a # have solutions provided in Appendix E.)

 Tackle and submit Assignment 2.

 Review your marked assignment answers against those provided in Tut Letter 202. Identify the
areas where you made mistakes and go back to the relevant resources for this learning unit, in
order to repair your misunderstandings.

 LEARNING UNIT 3
Topics:
Syntax of Basic Modal Logic
Natural Deduction for Basic Modal Logic
Semantics of Basic Modal Logic
Modes of truth and logic engineering
Multi-agent systems

Resources:
Tutorial Letter 101 for COS3761
Chapter 5 of the prescribed book
Additional notes on Chapter 5 (Appendix D)
Solutions to selected exercises (Appendix E)
Tutorial Letter 203 (only available after the due date for Assignment 3)

Suggested study plan:


 Over a period of four weeks, work through the five sections of Chapter 5 of the prescribed book,
and the corresponding sections of the additional notes on Chapter 5. Sections 5.1 and 5.2 can
be covered together in a week, but the remaining sections will require at least a week each,
particularly Section 5.3.

 Attempt some of the exercises in the prescribed book. We recommend the following:

o Exercises 5.2: 1(a)ii, iv#, vi, ix, (b)i, ii, iii; 5(a), (b), (c), (f)#

o Exercises 5.3: 4#; 11(a), (b), (c); 13

o Exercises 5.4: 1(a), (c), (e); 2(a), (c), (e)

o Exercises 5.5: 3(a), (c), (e); 4(a), (b), (c)

(Exercises marked with a # have solutions provided in Appendix E.)

 Tackle and submit Assignment 3.

 Review your marked assignment answers against those provided in Tut Letter 203. Try to
identify the areas where you made mistakes and go back to the relevant resources for this
learning unit, in order to repair your misunderstandings.


APPENDIX A: ERRATA IN THE PRESCRIBED BOOK


The following are errata found in Chapters 1, 2 and 5 of early printings of the second edition of the
prescribed book. Please let us know if you discover any other errata.

 p. 20, line 2. After the paragraph ending with the word “negation”, add the sentence: “The formula ⊥
stands for the contradiction.”.

 p. 21, line 17. “The fact that ⊥” should be “The fact that ⊥ (the contradiction)”.

 p. 31, line 14. “b is rational or it is not” should be “b^b is rational or it is not”.

 p. 47, below the second proof. “This is a proof of the sequent p ∧ q → r, p ⊢ p → r.” is incorrect:
p → r should be q → r. The same mistake should be corrected in the line below as well.

 p. 49. The sequent in line 4 has a typo. It must read φ1, φ2, …, φn ⊢ ψ. Also correct the other two
sequents in the same paragraph.

 p. 53. Corollary 1.39, in the second sentence: “is holds” should be “holds”.

 p. 57. Definition 1.44: “a valuation in which is” should be “a valuation in which it”

 p. 68, line 3. “has be to true” should be “has to be true”

 p. 68. Section 1.6, in the first sentence of the first paragraph: “formule” should be “formula”.

 p. 120, line 6. “assumption” should be “premise”.

 p. 122. Lines 6 and 7 of the proof should have  instead of  in the rules.

 p. 134, line 12. “verify that is” should be “verify that it”.

 p. 161, last line. Exercise 2.3.9(o): Replace S(y) by Q(y), twice.

 p. 162, line 8. Exercise 13(h): Replace “y” by “x”.

 p. 165. Exercise 2.6.1: “In Example 2.23, page 136” should be “In Example 2.27, page 140”.

 p. 166, line 5.

P(xyP(x, y)  ¬P(y, x))  (uvR(u, v)  P(v, u))


should be
P(xy(P(x, y)  ¬P(y, x))  uv(R(u, v)  P(v, u))).

 p. 166, line 6.

P(xyzP(x, y)  P(y, z)  ¬P(x, z))  (uvR(u, v)  P(u, v))


should be
P(xyz(P(x, y)  P(y, z)  ¬P(x, z)) uv(R(u, v)  P(u, v))).

 p. 166, line 7.

P(x¬P(x, x))  (uvR(u, v)  P(u, v))


should be
P(x¬P(x, x)  uv(R(u, v)  P(u, v))).


APPENDIX B: ADDITIONAL NOTES ON CHAPTER 1


1 Propositional logic
Chapter 1 discusses the syntax and semantics of propositional logic. The natural deduction proof system
is presented and its soundness and completeness are shown: only tautologies can be derived as
theorems of the proof system, and every tautology (a formula that is true in every row of its truth table)
can be proven.

In this chapter you will learn to construct answers to questions such as:

 What constitutes a valid propositional logic argument?


 How can we show that a propositional logic argument is sound?

The chapter deals with propositional arguments. A propositional argument is made up of a finite set of
propositions (called premises) that together justify the deduction of another proposition (called the
conclusion). Propositions represent statements about the world to which we can attach a truth value
(either true or false). A process of reasoning involving a proof calculus is used to ensure the validity of
arguments.

1.1 Declarative sentences


A declarative sentence is a statement of fact that, in the context of a specific view of the world, can be
either true or false. Declarative sentences cannot be partly true or partly false. They can only be
completely true or completely false. Examples of declarative sentences are:

A cow is an animal
My favourite colour is blue
Jane got 80% in the Formal Logic exam

In propositional logic, a declarative sentence can be expressed as a propositional logic formula. (From
now on, we use the term proposition to refer to a propositional logic formula.) A proposition can be an
atom, or a combination of atoms and connectives of propositional logic.

1.2 Natural deduction


Natural deduction is a rule-based calculus for constructing proofs of propositions. In performing these
proofs, we are not interested in the meanings of the propositions. Rather, we start from a set of
propositions called premises, and apply the rules of the calculus to reach another proposition called the
conclusion.

So a proof consists of a list of propositions. Each proposition in the list is either a premise or a
proposition which can be proved by applying rules to propositions already in the list. The rules in natural
deduction all have a similar form. The rules allow us to add new propositions to the proof if the
proof contains propositions of a particular structure. The rules are all syntactic in nature - they make no
reference to the semantics of the propositions.

We use the "⊢" symbol to indicate the provability of propositions using natural deduction. So "φ1, …, φn
⊢ ψ" states that ψ is provable from φ1, …, φn. In this "sequent", φ1, …, φn are the premises and ψ is the
conclusion.

Similarly, "⊢ φ" states that φ is provable from zero premises, or simply that φ is provable in natural
deduction. Such a proposition is called a theorem. The notation "φ ⊣⊢ ψ" is an abbreviation for "φ ⊢ ψ
and ψ ⊢ φ", and states that φ and ψ are provably equivalent.

1.2.1 Rules for natural deduction


For each of the logical operators, there is an introduction and an elimination rule. We comment on a
selection of them.
The copy rule
This is not stated as a formal rule in the prescribed book, but is discussed on page 20, at the end of the
section discussing the rules for disjunction.

At any time, we can introduce a copy of an earlier proposition (unless it only occurs inside a closed
assumption box).

Example: Here is a proof of the sequent p ⊢ q → p

1 p premise

2 q assumption

3 p copy 1

4 q→p →i 2–3

The rules for negation


In natural deduction we use the ⊥ symbol to represent contradiction. This symbol is called "bottom". If a
sequent contains a contradiction among its premises, we can validly derive any conclusion whatsoever
from those premises. That is, "⊥ ⊢ φ" is always a valid sequent.

As a rule of natural deduction, this is called "bottom-elimination" and, as may be seen on page 27 of the
prescribed book, it looks as follows:

  ⊥
-----  ⊥e
  φ
Since a contradiction is necessarily false, and our proofs consist of propositions that are necessarily true,
it is not immediately clear how we can introduce a contradiction into a proof. However, we have a rule for
doing it (see page 27 of the prescribed book):

φ    ¬φ
---------  ¬e
    ⊥

The text refers to this as the "not-elimination" rule rather than as the "bottom-introduction" rule as in
Formal Logic 2.

The rule for introducing negation into our proofs also involves using a contradiction. The idea is simple: if
we make an assumption and then, by applying the rules of natural deduction, we are able to produce a
contradiction, then the assumption is not provable - that is, its negation is provable. So the "not-
introduction rule" (¬i) on page 27 of the prescribed book shows an assumption box above the line,
containing φ at the top and ⊥ at the bottom, and below the line it shows ¬φ.

The or-elimination rule


This rule is discussed in the section called The rules for disjunction.

It may seem strange that in order to prove χ from φ ∨ ψ, one must prove both χ from φ and χ from ψ, but
consider the following argument in natural language that illustrates this concept:

"Either I am happy, or I am sad"


"If I am happy, I will want to watch a soap opera"
"If I am sad, I will want to watch cartoons"
"If I want to watch a soap opera, I will turn on the TV"
"If I want to watch cartoons, I will turn on the TV"

From these premises, we conclude that I will turn on the TV.


Here is a natural deduction example: (p → q) ∨ (p → r) ⊢ p → (q ∨ r)

1 (p → q) ∨ (p → r) premise

2 p assumption

3 p → q assumption
4 q →e 3, 2
5 q ∨ r ∨i1 4

6 p → r assumption
7 r →e 6, 2
8 q ∨ r ∨i2 7

9 q ∨ r ∨e 1, 3-5, 6-8

10 p → (q ∨ r) →i 2-9

When we are inside an assumption box, we can use or copy any previous proposition that is not inside a
closed assumption box. We cannot use or copy any proposition that occurs only inside a previous closed
assumption box, because then the assumption would have already been discharged.

1.2.2 Derived rules


Using the rules that natural deduction provides as fundamental rules, we can derive other useful rules for
use in our proofs. Modus Tollens (MT), proof by contradiction (PBC) and double negation introduction
can all be derived from the other rules. They are discussed in sections 1.2.2 and 1.2.5 of the prescribed
book. Another derived rule is the Law of the Excluded Middle (LEM) which follows here.

The law of the excluded middle


We normally abbreviate this law as LEM. The rule looks like this:

---------  LEM
 φ ∨ ¬φ

As you can see, this rule has nothing above the line. This means that we may introduce the proposition
φ ∨ ¬φ into our proofs at any time, with no required premises or previous occurrences of any of the
symbols that might happen to be in φ.

1.2.3 Natural deduction in summary


The problem with natural deduction is that it is often unclear which rules to apply, to which propositions,
and in what order. One often has to resort to trial and error. Our advice is to try to work out an argument
in English first and use it as a guide to constructing the natural deduction proof.

1.2.4 Provable equivalence


As stated above, if φ ⊢ ψ and ψ ⊢ φ are both valid sequents, then we say that φ and ψ are provably
equivalent, and we write φ ⊣⊢ ψ. This is useful because it allows us to make some shortcuts in our
proofs. Whenever we have a formula that fits the pattern of φ, we can follow it on a new line with ψ.

For example, one of the provable equivalences given in the prescribed book is
p ∧ (q ∨ r) ⊣⊢ (p ∧ q) ∨ (p ∧ r)
So if our proof contained an earlier line
(a ∨ b) ∧ (c ∨ (d ∧ e))
we could add the line
((a ∨ b) ∧ c) ∨ ((a ∨ b) ∧ (d ∧ e))
The justification for adding that line would be stated as "provable equivalence" and would technically
need a reference to a list of provable equivalences.

Below we list a number of equivalences. (Note, however, that you may not use these equivalences when
you are required to give a formal proof using natural deduction. You may not, for example, use de
Morgan’s law in a formal proof. Only the proof rules of natural deduction may be used.)

Equivalences involving 
In the equivalences listed here φ, ψ and χ denote arbitrary propositions.

1. φ ∧ ψ ⊣⊢ ψ ∧ φ (commutativity of ∧)

2. φ ∧ φ ⊣⊢ φ (idempotence of ∧)

3.   

4.     and    

5. (φ ∧ ψ) ∧ χ ⊣⊢ φ ∧ (ψ ∧ χ) (associativity of ∧)

6. φ ∨ ψ ⊣⊢ ψ ∨ φ (commutativity of ∨)

7. φ ∨ φ ⊣⊢ φ (idempotence of ∨)

8.  and   

9.    

10. (φ ∨ ψ) ∨ χ ⊣⊢ φ ∨ (ψ ∨ χ) (associativity of ∨)

11.  

12. 

13.  

Equivalences involving 

14.   

15.   

16.  

17.   

18.    

19.     (   )

20. (  ) 

Equivalences involving 
21.   (  )(  ) (  )  (   )    .


22. (  )    (  )  (  ).

De Morgan Laws
23. (  )   

24. (  )   

Distributivity of ∨, ∧
25. φ ∨ (ψ ∧ χ) ⊣⊢ (φ ∨ ψ) ∧ (φ ∨ χ)

26. φ ∧ (ψ ∨ χ) ⊣⊢ (φ ∧ ψ) ∨ (φ ∧ χ)

27.   (  )  and   (  ) .

Examples
We emphasize that these are not formal proofs.

Example 1
Let us show that     .
  
( )   (using equivalence 19)
   (equivalence 6)
  (equivalence 13)
  (equivalence 19 again).

Example 2
Let us show that (p ∧ q) ∨ (p ∧ ¬q) ⊣⊢ p.
(p ∧ q) ∨ (p ∧ ¬q)
⊣⊢ p ∧ (q ∨ ¬q) (equivalence 26)
⊣⊢ p ∧ ⊤ (equivalence 4)
⊣⊢ p (equivalence 9)

1.2.5 Proof by contradiction


This looks almost identical to the "not-introduction" rule, and is in fact easily derived from it. The rule
looks like this

 ¬φ
  .
  .
  ⊥
-----  PBC
  φ

The sense of this is that if we want to prove φ, we start by assuming ¬φ and show that this leads to a
contradiction. Since we do not accept a contradiction as part of a valid proof, the assumption we made
must have been false. This yields ¬¬φ, which we simplify to φ.

We usually abbreviate Proof by Contradiction to PBC.

1.3 Propositional logic as a formal language


Natural languages convey meaning using the grammar rules and vocabulary of the language. The
context of language usage often helps to clarify the meanings of pronouncements where this is not
immediately clear. Formal languages, by contrast, have very little to do with information or
meaning. The central idea behind a formal language is determining whether or not a given sequence of
symbols is a legal or well-formed expression within the language. The study of formal languages is
central to computer science for both practical and theoretical reasons: on the practical side, a piece of
software must conform to the syntax of the programming language in which it is written, and we need
tools that will check this, and on the theoretical side, many of the most fundamental concepts in the
study of computational complexity (which tries to determine which problems can be solved quickly and
which are inherently intractable) can be expressed in terms of formal languages.

The purpose of describing propositional logic as a formal language is not to add any expressive power to
it, but rather to put it into a format in which it is easy to determine if a string of symbols actually forms a
proposition that we could evaluate. The syntax of a well-formed formula (wff) of propositional logic is
defined in terms of a grammar on pages 32-33 of the prescribed book.

A particular wff can also be expressed using a parse tree. The rules for constructing a parse tree
representing a given wff are as follows:

 Leaf nodes represent atoms.


 Non-leaf nodes represent connectives.
 The main connective of a formula forms the root of the parse tree.
 ¬ has only one subtree.
 Binary connectives (∧, ∨, →) have two subtrees.
 The height of a parse tree is 1 + the length of the longest path from root to leaf.
 A parse tree consisting of only a single atom has a height of 1 + 0 = 1.

The following example illustrates how to construct the parse tree representing a given well-formed
formula. In the parse tree of p  (¬q  ¬p), the main connective is the root; its left subtree is the single
leaf p, and its right subtree is the subformula built from ¬q and ¬p, where each ¬ node has exactly one
child (the leaf q and the leaf p respectively). The height of this parse tree is 4.
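If you want to experiment on a computer, the parse-tree rules above translate directly into a short program. The sketch below is our own illustration (it is not taken from the prescribed book): it represents a parse tree with a small Node class and computes the height exactly as defined above, using an arbitrary illustrative formula.

# A minimal sketch (not from the prescribed book): a parse tree as nested
# Node objects, with the height computed as defined in the notes above.

class Node:
    def __init__(self, label, *children):
        self.label = label          # an atom name or a connective symbol
        self.children = children    # 0 children for atoms, 1 for ¬, 2 for binary connectives

def height(node):
    # A single atom has height 1; otherwise 1 + the height of the tallest subtree,
    # which equals 1 + the length of the longest root-to-leaf path.
    if not node.children:
        return 1
    return 1 + max(height(child) for child in node.children)

# Parse tree of the illustrative formula p ∧ (¬q ∨ ¬p)
tree = Node('∧',
            Node('p'),
            Node('∨', Node('¬', Node('q')),
                      Node('¬', Node('p'))))

print(height(tree))   # 4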


1.4 Semantics of propositional logic


1.4.1 The meaning of logical connectives
We can use a truth table to examine how the truth values of atomic propositions (atoms) affect the truth
value of the compound propositions constructed from the atoms. A truth table consists of a column for
each atom, a column for each sub-formula of the overall formula, and finally a column for the complete
formula. The set of rows under the atoms represent all possible assignments of truth values to those
atoms. Under each sub-formula and the complete formula are columns of truth values resulting from the
assignment of truth values to the atoms in the relevant row.

For example, the truth table of the formula (p ∨ q) ∧ r is

p q r p∨q (p ∨ q) ∧ r
T T T T T
T T F T F
T F T T T
T F F T F
F T T T T
F T F T F
F F T F F
F F F F F

Consider a theorem φ of natural deduction, i.e. ⊢ φ. If we construct a truth table for φ, we find that it is
true in all rows. We say that φ is a tautology, and write ⊨ φ.

Now consider a set of formulas φ1, φ2, ..., φn, and the following sequent:

φ1, φ2, ..., φn ⊢ ψ

Suppose that we build a truth table that includes a column for each of the formulas φ1, φ2, ..., φn and ψ.
We observe that on every line where every φi is true, ψ is also true. We say that φ1, φ2, ..., φn semantically
entails ψ, and we write φ1, φ2, ..., φn ⊨ ψ.

Clearly, the semantic entailment relation between arbitrary formulas does not always hold. For example,
the truth table above shows that "p ∨ q ⊨ (p ∨ q) ∧ r" does not hold. This can be seen from line 2 or line
4: in both cases p ∨ q is true, but (p ∨ q) ∧ r is false.
The definition of semantic entailment may seem similar to the definition of a valid sequent. This is not
coincidental. However, it is crucial to recognize the difference. A sequent is valid if we can use the rules
of natural deduction to derive or prove the conclusion from the premises. A semantic entailment holds if
we observe the required condition in all the rows of the truth table representation of the sequent.

Note, finally, the changed titles of Sections 1.4.3 and 1.4.4 below. There are other proof systems for
propositional logic (not covered in the prescribed book), and soundness and completeness relate a
specific proof system to the semantics of propositional logic. We have changed the titles of these
sections to reflect this.

1.4.3 Soundness of natural deduction


On pages 46-49 of the prescribed book it is proven that
if φ1, ..., φn ⊢ ψ is a valid sequent,
then the semantic entailment
φ1, ..., φn ⊨ ψ holds.

The proof uses course-of-values induction where the inductive hypothesis is applied to the number of
lines in the proof.

This result is a demonstration of the soundness of natural deduction for propositional logic. The
significance of this result is that it is not possible for us to build a proof of something by natural deduction
which is not "true" in the semantic entailment representation of truth. To be more explicit, if there exists a
proof for the sequent, then, in the truth table representing that sequent, all lines that have a T for each of
φ1, ..., φn must also have a T for ψ.

The result is immediately useful to us. Suppose we are trying to prove the validity of a sequent, but
without much success. If we can demonstrate that the corresponding semantic entailment does not hold,
then there cannot be a proof of the sequent. To show that the semantic entailment does not hold, all we
need to find is one line in the truth table on which all the premises are true but the conclusion is false.

For example, consider the sequent p → q ⊢ q → p. This is obviously not valid but it may not be obvious
how to prove that. We can demonstrate its invalidity by constructing the truth table:

p q pq qp
T T T T
T F F T
F T T F
F F T T

In line 3 of the truth table, the premise p → q has the truth value T but the conclusion q → p has the truth
value F. Thus, the semantic entailment relation does not hold, and, from the proof of soundness, we
know that the sequent does not have a proof. In other words, it is unprovable.

1.4.4 Completeness of natural deduction


On pages 50-53 of the prescribed book it is proven that
if the semantic entailment φ1, ..., φn ⊨ ψ holds,
then
φ1, ..., φn ⊢ ψ is a valid sequent.

This is the second part of showing that the two systems of propositional logic (natural deduction proof
rules and truth tables) are exactly equivalent.

We outline the proof. Remember that a valid sequent is one that we can prove using natural deduction
rules.

The proof consists of three main steps:

1. Prove that if φ1, ..., φn ⊨ ψ holds, then ⊨ φ1 → (φ2 → ( ... (φn → ψ) ... )) holds.
2. Prove that if ⊨ φ1 → (φ2 → ( ... (φn → ψ) ... )) holds, then ⊢ φ1 → (φ2 → ( ... (φn → ψ) ... )) is a
valid sequent.
3. Prove that if ⊢ φ1 → (φ2 → ( ... (φn → ψ) ... )) is a valid sequent, then φ1, ..., φn ⊢ ψ is a valid
sequent.
The proofs of Steps 1 and 3 are straightforward. The proof of Step 2 is more difficult and the explanation
in the prescribed book is quite hard to follow. We can summarize the main points as follows:

Course-of-values induction is applied to the height of the parse tree for the formula. The main idea is
that, given any proposition φ, each line of the truth table for φ can be used to create a provable sequent
which has the atoms in φ (or their negations) as premises and either φ or its negation as the conclusion.
This is achieved by assuming that it can be done for all formulas with parse trees whose height is
smaller than φ's parse tree, and then showing how applying the inductive hypothesis to the last operation
in building φ's parse tree allows these to be combined to get a proof of the desired sequent.

The formula that has to be proved by course-of-values induction applied to the height of its parse tree
is φ1 → (φ2 → ( ... (φn → ψ) ... )). A sequent is created from each line of its truth table and these are
proved separately. LEM is then used over and over again to combine all of these special-case proofs
into a complete proof of


1  (2  (..... (n  ))...).


Putting this result together with the previous proof of soundness, we can now state that
φ1, ..., φn ⊢ ψ is a valid sequent iff the semantic entailment φ1, ..., φn ⊨ ψ holds.

1.5 Normal forms


1.5.1 Semantic equivalence, satisfiability and validity
Two propositions are said to be semantically equivalent if their values are identical in all rows of a
truth table. We write φ ≡ ψ to indicate that φ and ψ are semantically equivalent. Due to the soundness
and completeness of natural deduction, we have that

φ ≡ ψ iff φ ⊣⊢ ψ

This means that all the provable equivalences listed in Section 1.2.4 are also semantic equivalences.

A proposition φ is said to be satisfiable if there is some truth assignment of its atoms that makes φ
evaluate to true. On the other hand, a proposition φ is said to be valid (or a tautology) if every
assignment of truth values to φ's atoms makes φ evaluate to true.

Validity and satisfiability are linked by the following theorem (given on page 57 of the prescribed book):
φ is satisfiable iff ¬φ is not valid (i.e. ¬φ is not a tautology).
This theorem can be restated in a number of different ways. For example, "φ is valid iff ¬φ is not
satisfiable", or "φ is not valid iff ¬φ is satisfiable", etc.

Satisfiability is a very important practical problem for computer scientists. Unfortunately, there is no
known, efficient algorithm to test an arbitrarily chosen proposition for satisfiability. One obvious algorithm
is to generate the truth table for the problem, but this is extremely inefficient, and for any realistic
application this method is not feasible. The reason is that the number of rows of the table grows
exponentially with the number of different atoms in the formula. Think about it: If there are n different
atoms in a formula, the truth table needs to have 2^n rows. In fact, satisfiability (or SAT, as it is usually
called) is still an area of active research in computer science.

To recap, we can show that the proposition φ is a tautology (or valid) by constructing its truth table.
However, constructing the truth table for φ may be impractical if the number of atoms is large. Is there a
solution? One alternative is to convert φ into an equivalent formula for which validity checking is easier.

One commonly used method is to convert φ to conjunctive normal form (CNF). A formula is in CNF if it
consists of a set of clauses connected by the logical connective "and", such that each clause contains
only atoms and their negations, connected by the logical connective "or".

For example, (p ∨ q ∨ ¬r) ∧ (¬p ∨ s) ∧ (q ∨ ¬r ∨ ¬q) is in CNF.

The formal definition of CNF (given on page 55 of the prescribed book) depends on a grammar which
defines three types of objects: Literals (L), Disjunctions (D) and Conjunctions (C)

L ::= p | ¬p this means "a Literal is either an atom or the negation of an atom"
D ::= L | L ∨ D this means "a Disjunction is either a Literal, or
a Literal connected to a Disjunction with the connective ∨"
C ::= D | D ∧ C this means "a Conjunction is either a Disjunction, or
a Disjunction connected to a Conjunction with the connective ∧"
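The grammar above translates directly into a recursive check. The sketch below is our own illustration (the tuple representation of formulas, such as ('atom', 'p') and ('or', f, g), is an arbitrary choice and not the book's notation).

def is_literal(f):
    # L ::= p | ¬p
    return f[0] == 'atom' or (f[0] == 'not' and f[1][0] == 'atom')

def is_disjunction(f):
    # D ::= L | L ∨ D
    if is_literal(f):
        return True
    return f[0] == 'or' and is_literal(f[1]) and is_disjunction(f[2])

def is_cnf(f):
    # C ::= D | D ∧ C
    if is_disjunction(f):
        return True
    return f[0] == 'and' and is_disjunction(f[1]) and is_cnf(f[2])

# (p ∨ ¬q) ∧ r is in CNF; ¬(p ∨ q) is not.
print(is_cnf(('and', ('or', ('atom', 'p'), ('not', ('atom', 'q'))), ('atom', 'r'))))  # True
print(is_cnf(('not', ('or', ('atom', 'p'), ('atom', 'q')))))                          # False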

It is easy to check the validity of a formula that is in CNF. The formula can only be valid if all of its
disjunctions are valid. A disjunction is valid under all circumstances only if it contains a pair of
complementary literals; that is, if it contains a pair of literals of the form "p" and "¬p". Such a disjunction
will always evaluate to true under any assignment of truth values to the literals.
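This validity test is easy to program. In the sketch below (our own illustration), a CNF formula is represented as a list of clauses and each clause as a list of literal strings such as 'p' and '¬p'.

def clause_is_valid(clause):
    # A disjunction is valid exactly when it contains complementary literals p and ¬p.
    positives = {lit for lit in clause if not lit.startswith('¬')}
    negatives = {lit[1:] for lit in clause if lit.startswith('¬')}
    return bool(positives & negatives)

def cnf_is_valid(clauses):
    # The conjunction is valid only if every one of its disjunctions is valid.
    return all(clause_is_valid(c) for c in clauses)

# (p ∨ q ∨ ¬p) ∧ (¬q ∨ q) is valid; (p ∨ q) ∧ ¬p is not.
print(cnf_is_valid([['p', 'q', '¬p'], ['¬q', 'q']]))   # True
print(cnf_is_valid([['p', 'q'], ['¬p']]))              # False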

Creating a CNF Formula from a Truth Table
Suppose we are given a truth table for a formula φ, but we are not given the actual formula. We can use
the truth table to create φ in CNF. Recall that a formula in CNF consists of a conjunction of disjunctions.
In order for the formula to evaluate to true, all of its disjunctions must evaluate to true simultaneously.

We focus on the lines of the truth table where φ is false. We want to construct a formula that evaluates to
true if the combination of literal values does not place us on any of these false lines. By linking together
disjunctive clauses, each of which corresponds to the idea "not on line k of the truth table" where φ
evaluates to false in line k, we create a formula which is true if and only if we are on one of the lines of
the truth table where φ is true.

As an example, say we have a formula φ that uses three atoms p, q and r:

p q r φ
T T T T
T T F F
T F T T
T F F T
F T T F
F T F T
F F T F
F F F T

We will create a formula which says, in effect, "not on line 2" AND "not on line 5" AND "not on line 7"

On line 2, p is true, q is true, and r is false. If any of these conditions do not hold, then we are not on line
2. Thus "¬p ∨ ¬q ∨ r" is a formula for "not on line 2". Analyzing lines 5 and 7 in the same way, we get two
more disjunctions, and when we link them together, we get

(¬p ∨ ¬q ∨ r) ∧ (p ∨ ¬q ∨ ¬r) ∧ (p ∨ q ∨ ¬r)

Note that the correctness of this method has an immediate implication. We started with an entirely
arbitrary, unknown formula φ, and derived an equivalent CNF formula. This means that if we had started
with a known formula, we could follow exactly the same steps: construct the truth table, identify the lines
where the formula is false, and build a CNF formula based on those lines. The conclusion is that, for
every formula in propositional logic, there is an equivalent CNF formula. It means that if we can prove
things about CNF formulas, or create algorithms that apply only to CNF formulas, we are able to apply
the results to all formulas.
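The "not on line k" construction can also be automated. The sketch below is our own illustration (using the same list-of-clauses representation as the earlier sketch, with the truth table supplied as a Python function).

from itertools import product

def cnf_from_truth_table(atoms, phi):
    # Build one clause for every row on which phi is false; the clause is
    # false exactly on that row, i.e. it says "not on this line".
    clauses = []
    for values in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, values))
        if not phi(row):
            clause = [('¬' + a) if row[a] else a for a in atoms]
            clauses.append(clause)
    return clauses

# The table above: phi is false on lines 2, 5 and 7 only.
phi = lambda v: not ((v['p'] and v['q'] and not v['r']) or
                     (not v['p'] and v['q'] and v['r']) or
                     (not v['p'] and not v['q'] and v['r']))
print(cnf_from_truth_table(['p', 'q', 'r'], phi))
# [['¬p', '¬q', 'r'], ['p', '¬q', '¬r'], ['p', 'q', '¬r']]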

Unfortunately, this method of translating a given formula into CNF format is inefficient if there are a lot of
atoms involved (since it once again depends on writing out a truth table, which grows exponentially). If
we are given the truth table up front (and no other information about φ) then this is the method we use.
But if we are actually given the formula φ, there is another method for creating a CNF equivalent. The
method involves three steps:

1. Eliminate all → operators by applying the equivalence "p → q ≡ ¬p ∨ q"

2. Eliminate all double negations and transfer all negations to atoms, using the equivalences
"¬(p ∨ q) ≡ ¬p ∧ ¬q" and "¬(p ∧ q) ≡ ¬p ∨ ¬q"
3. Distribute all the ∨s across the ∧s, using the equivalence
"p ∨ (r ∧ s) ≡ (p ∨ r) ∧ (p ∨ s)"

Example: Suppose we have φ = (p ∨ q) → ¬(r ∨ ¬s):

¬(p ∨ q) ∨ ¬(r ∨ ¬s)

≡ (¬p ∧ ¬q) ∨ (¬r ∧ ¬¬s)
≡ (¬p ∧ ¬q) ∨ (¬r ∧ s)
≡ (¬p ∨ ¬r) ∧ (¬p ∨ s) ∧ (¬q ∨ ¬r) ∧ (¬q ∨ s)


Note that the CNF version of φ is considerably longer than the original version. In the worst case, the
algorithm will produce a CNF formula which is exponentially longer than the original (i.e. if the original
formula contains n literals, then the CNF version will contain about 2^n literals).
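The three steps can be written as three small recursive functions. The sketch below is our own illustration of the procedure described above (it is not the book's pseudocode); formulas are nested tuples such as ('impl', f, g), and the function names are our own.

def impl_free(f):
    # Step 1: replace every implication p → q by ¬p ∨ q.
    op = f[0]
    if op == 'atom':
        return f
    if op == 'not':
        return ('not', impl_free(f[1]))
    if op == 'impl':
        return ('or', ('not', impl_free(f[1])), impl_free(f[2]))
    return (op, impl_free(f[1]), impl_free(f[2]))

def nnf(f):
    # Step 2: push negations down to the atoms and remove double negations.
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'not':
            return nnf(g[1])
        if g[0] == 'and':
            return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':
            return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
        return f                      # a negated atom stays as it is
    if f[0] == 'atom':
        return f
    return (f[0], nnf(f[1]), nnf(f[2]))

def distr(f):
    # Step 3: distribute ∨ over ∧.
    if f[0] == 'or':
        a, b = distr(f[1]), distr(f[2])
        if a[0] == 'and':
            return ('and', distr(('or', a[1], b)), distr(('or', a[2], b)))
        if b[0] == 'and':
            return ('and', distr(('or', a, b[1])), distr(('or', a, b[2])))
        return ('or', a, b)
    if f[0] == 'and':
        return ('and', distr(f[1]), distr(f[2]))
    return f

def cnf(f):
    return distr(nnf(impl_free(f)))

# The example above: (p ∨ q) → ¬(r ∨ ¬s) becomes the four clauses
# (¬p ∨ ¬r) ∧ (¬p ∨ s) ∧ (¬q ∨ ¬r) ∧ (¬q ∨ s).
p, q, r, s = (('atom', a) for a in 'pqrs')
print(cnf(('impl', ('or', p, q), ('not', ('or', r, ('not', s))))))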

1.5.3 Horn clauses and satisfiability


Horn formulas are one class of propositions for which SAT (see the previous section) can be solved efficiently.
Here are some examples of Horn formulas:

pq
(       )  (p  p  p  p)
(p  q  r  s)  (  p)  (s  )

Here ⊥ is the familiar "bottom", meaning "always false", and ⊤ is the same symbol upside down ("top"),
meaning "always true", and p, q, r and s are atoms. A Horn formula is a conjunction of Horn clauses,
each of which is an implication, in which the left side is a conjunction of things and the right side is a
single thing (where a "thing" is ⊥ or ⊤ or an atom). Note that ∨ and ¬ do not appear in Horn formulas.

Horn formulas form the basic statement structure of the Prolog programming language. They also have
the property that they can be tested for satisfiability very easily. The HORN algorithm works as follows.

(1) Go through the formula and mark every occurrence of "top".


(2) While there is a clause in which everything in the left side has been marked but the thing on the
right side has not been marked, mark every occurrence of the thing on the right in the formula.
(3) If "bottom" has been marked, report "not satisfiable" else report "satisfiable".

This algorithm works by marking everything that is forced to be true by repeated applications of Modus
Ponens. Clearly if "bottom" is marked, then one of the clauses has only things which must be true on the
left side, and something which must be false on the right side. This implication must be false under any
truth assignment that satisfies the other clauses, so the formula is not satisfiable. However, if "bottom" is
not marked by this algorithm, then we can satisfy the formula by setting to true all the atoms that have
been marked, and setting all others to false.

Example: Say we want to determine whether the following Horn formula is satisfiable:

(p ∧ q → r) ∧ (⊤ → p) ∧ (p → q) ∧ (r → ⊥)

We apply the HORN algorithm as follows, listing the items marked so far after each step:

(p ∧ q → r) ∧ (⊤ → p) ∧ (p → q) ∧ (r → ⊥)   marked: ⊤ (1)
(p ∧ q → r) ∧ (⊤ → p) ∧ (p → q) ∧ (r → ⊥)   marked: ⊤, p (2)
(p ∧ q → r) ∧ (⊤ → p) ∧ (p → q) ∧ (r → ⊥)   marked: ⊤, p, q (2)
(p ∧ q → r) ∧ (⊤ → p) ∧ (p → q) ∧ (r → ⊥)   marked: ⊤, p, q, r (2)
(p ∧ q → r) ∧ (⊤ → p) ∧ (p → q) ∧ (r → ⊥)   marked: ⊤, p, q, r, ⊥ (2)
"not satisfiable" (3)

APPENDIX C: ADDITIONAL NOTES ON CHAPTER 2
INTRODUCTION
In this chapter, the proof system of natural deduction is extended to predicate logic. The syntax and
semantics are explained and the concept of bound and free variables is discussed. Soundness and
completeness of the natural deduction proof system are briefly discussed. Furthermore, the
undecidability of predicate logic is shown via a reduction from Post’s correspondence problem. Certain
important quantifier equivalences are also shown.

2.1 The need for a richer language


Propositional logic is inadequate to express statements such as "All birds can fly". If there were n birds in
the universe and we had a list of all of them, and we could identify each of them using a specific
sequence number, starting at 1, this could be expressed in propositional logic as follows: "Bird 1 can fly"
and "Bird 2 can fly" and "Bird 3 can fly" and … and "Bird n can fly". This could be feasible if there were
only a few birds in the universe. However, even though the number of birds in the universe at any one
time is strictly finite, that number is far too large (and constantly changing) for such an enumeration to be
practical. Hence propositional logic fails to adequately express the statement that "All birds can fly".

So, what is the nature of the statement "All birds can fly"? It is about the property of "being able to fly", a
property that all things called "birds" should have. So, the proposition "All birds can fly" could be written
as a material implication that says "If an object is a bird, then that object can fly". In order to assert that
this implication is true, we have to be inclusive in our consideration of every possible object in the entire
universe. If any object that we encounter is a bird and that object does not fly, then we would have to
assert that the proposition is false.

Predicate logic is designed to give us a way to express knowledge and construct sound logical
arguments about elements of (possibly infinitely large) sets. It is based, as you might suspect, on the
concept of a predicate.

A predicate states something about an object or a tuple of objects. Predicates can take single or multiple
arguments. If a predicate takes a single argument, we can think of it as expressing a property of an
object. For example, Bird(x), which translates to "x is a bird", expresses the fact that some arbitrary
object x is a bird. If, on the other hand, the predicate takes more than one argument, we can think of it as
a relationship between the arguments of the predicate. For example, Younger(x, y), which translates to
"x is younger than y", expresses a relationship between the arguments x and y of the predicate Younger.

When predicates are defined, the arity of the predicate must always be specified. The arity defines how
many arguments the predicate takes. By convention, predicate names always start with capital letters.

The notion of a variable is implicit in the above examples of a predicate. The variable is the generic
name for any object from our universe of discourse.

Predicate logic also deals with functions. A function expression differs from a predicate statement in that
instead of denoting a truth value, it denotes a single object. For example, productOf(x, y) denotes the
single object that represents the product of x and y. As another example, the function parentOf(y) denotes
the single object that represents the parent of y. Some functions do not require any arguments. Such
functions are called constants because they always take on the same value. The arity of a function is the
number of arguments it requires. The convention we follow is to represent function names by beginning
them with lower case letters. Function symbols are used to build complex terms, whereas predicates are
used to build predicate formulas.

Predicate logic also introduces the notion of quantification. The two main quantifiers used in predicate
logic are the universal and existential quantifiers. Symbolically represented as '∀', the universal quantifier
loosely translates to “every” or “for all”. The existential quantifier, '∃', loosely translates to “there is at
least one” or “there exists at least one”. Thus, for example, ∀xP(x) expresses the fact that “every object x
is such that x is P”, whatever predicate P stands for. Similarly, the quantified expression ∃xP(x)
expresses the fact that “there is at least one object x such that x is P”, whatever predicate P means.

2.2 Predicate logic as a formal language


A vocabulary of predicate logic consists of three sets: a set P of predicate symbols, a set F of function
symbols and a set C of constant symbols. We can actually get by with two sets only because C is a
subset of F if we consider every constant to be a nullary function.

2.2.1 Terms
These are logical expressions that refer to objects. The inductive definition of term is given on page 99 of
the prescribed book.

Example: A symbol that refers to one specific whistler’s female parent always denotes the same value,
and is therefore a constant. Thus the constant ‘whistlersMother’ would be a valid term. Unary functions
can sometimes more easily express the same idea as a constant. Thus the term motherOf(whistler)
could also refer to the same object as referenced by the constant ‘whistlersMother’.

2.2.2 Formulas

The inductive definition of well-formed formulas (wffs) is given on page 100 of the prescribed book. You
will note that parentheses occur in all the formulas. However, the use of parentheses can very often be
avoided by applying the same binding priorities as in propositional logic. The quantifiers have the same
binding priority as negation (¬). Therefore, if we want a quantifier to apply to more than just the formula
following immediately after it, we need to use parentheses to demarcate the scope of the quantifier. For
example, in "∀x P(x) ∧ Q(x)" the quantifier applies only to P(x). If the intention is to extend the scope of
the quantifier to the whole formula, we must write "∀x(P(x) ∧ Q(x))".

We deal with the semantics (meaning) of predicate logic in section 2.4. There we will see that predicate
logic defines the idea of a model of a formula φ as a specific assignment of meaning to the predicates
and functions that appear in φ. There are some formulas that are true in all models (for example,
"P(x) → P(x)”), some formulas are true in some models and false in others, while other formulas are false in all
models. One of the main goals of predicate logic is to determine the models in which a formula is true or
false. However, before we come to that, we need to understand what free and bound variables are,
and how constants, variables and terms can be substituted for each other.

2.2.3 Free and bound variables


A variable is said to be bound if, as we traverse up the parse tree from the leaf containing the variable,
we encounter a quantifier for that variable. A variable is free if it is not bound. A wff without any free
variables is called a sentence.
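The same idea can be expressed as a short program. The sketch below is our own illustration (the tuple representation of formulas is an arbitrary choice, and predicate arguments are assumed to be plain variables for simplicity): it computes the set of free variables of a formula.

def free_vars(f):
    # Formulas: ('pred', 'P', 'x', 'y'), ('not', f), ('and'/'or'/'impl', f, g),
    # ('forall'/'exists', 'x', f).
    op = f[0]
    if op == 'pred':
        return set(f[2:])                       # every argument occurs free here
    if op == 'not':
        return free_vars(f[1])
    if op in ('and', 'or', 'impl'):
        return free_vars(f[1]) | free_vars(f[2])
    if op in ('forall', 'exists'):
        return free_vars(f[2]) - {f[1]}         # the quantifier binds its own variable
    raise ValueError('unknown formula: ' + str(op))

# ∀y(P(x) → Q(y)) has x free and y bound, so it is not a sentence.
f = ('forall', 'y', ('impl', ('pred', 'P', 'x'), ('pred', 'Q', 'y')))
print(free_vars(f))                             # {'x'}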

2.2.4 Substitution
Natural deduction proofs in predicate logic sometimes require the substitution of free variables by
constants or terms. We write φ[t/y] to mean "substitute the term t for all free instances of variable y in φ",
or, equivalently, "replace all free instances of variable y in φ by t".

Definition: If φ is a predicate logic formula, t is a term, and y is a variable, then we denote by φ[t/y] the
formula obtained by replacing all free instances of y in φ by t. The expression t/y is called a substitution.

Note that if φ does not contain y as a free variable, the substitution would have no effect. Bound
variables cannot be replaced by other variables, constants or terms. (This is because the substitution
would result in an unintentional alteration of the semantics of the formula. For example, an attempt to
substitute x for the bound occurrences of y in ∀y(P(x) → Q(y)) would result in the formula ∀x(P(x) → Q(x)),
which would have an entirely different meaning from the original.)

Special care must be taken, though, as even substituting for free variables is not always safe. For
example, suppose that a free instance of y occurs within the scope of a quantified variable x in a formula

φ. If we replace y with a term t that includes the variable x, then this variable is now unintentionally bound
by the quantifier. This has the unintended consequence of changing the meaning of the original formula,
and hence the conditions under which the formula is true or false.

A term t is said to be free for x in φ if no free instance of x in φ is within the scope of any quantifier that
binds a variable appearing in t. In other words, it is safe to substitute t for x in φ. Thus, the substitution
does not pose the risk of unintentionally changing the meaning of the formula.

We stress that if the variable x is not free in φ, that is, if φ contains only bound occurrences of x or no
occurrences of x at all, then the substitution φ[t/x] has no effect, and φ[t/x] is said to be equivalent to φ.
For example, let φ be the formula ∀xP(x, y) and let t be a term. Then φ[t/x] = φ since x is a bound variable
in φ and φ[t/y] = ∀xP(x, t) since y is a free variable in φ. (But note that x must not occur free in t.)

Take note that replacements of different free occurrences of the same variable take place at the same
time; in other words, simultaneously. For example, if φ is the formula P(x, x) ∧ ∀yQ(x, y), then φ[h(x)/x]
is P(h(x), h(x)) ∧ ∀yQ(h(x), y).

We stress that, as explained above, it is possible that substitutions may result in unintended, semantic
side effects when their application leads to a change in the meaning of the formula. Because of that,
substitutions should always be done subject to certain restrictions, which can be summarized in the
following definition:

Definition: A term t is said to be free to replace a variable x in a formula φ if no free occurrence of x is in
the scope of a quantifier that binds a variable y occurring in t.

The following are examples of situations in which t is free to replace all free occurrences of x in φ:

 t is equal to x.
 t is a constant.
 The variables appearing in t do not occur in φ.
 The variables appearing in t do not occur within the scope of a quantifier in φ.
 There are no quantifiers in the formula φ.
 The variable x does not occur freely within the scope of a quantifier in φ.

Function f(x, y) is free for x in the following wff:

(∀x P(x) ∧ Q(y)) → (P(x) ∧ ¬Q(y))

However, f(x, y) is not free for y in

∀x(P(x) ∧ Q(y)) → (P(x) ∧ ¬Q(y))
because the first free occurrence of y occurs within the scope of ∀x and x appears in f(x, y). What should
we do in such a case if we want to do the substitution? We should rename all the bound occurrences of
x in the given formula before doing the substitution. Thus we need two steps for the substitution:

First
∀u(P(u) ∧ Q(y)) → (P(x) ∧ ¬Q(y))
is equivalent to
∀x(P(x) ∧ Q(y)) → (P(x) ∧ ¬Q(y))
and then
∀u(P(u) ∧ Q(f(x, y))) → (P(x) ∧ ¬Q(f(x, y)))
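The "free to replace" check and the substitution itself can be combined in one recursive function. The sketch below is our own illustration (the representation and the error handling are arbitrary choices): it performs φ[t/x] and raises an error when t is not free for x, which signals that bound variables should first be renamed, as shown above.

def term_vars(t):
    # A term is a variable name (a string) or a tuple ('func', name, [terms]).
    if isinstance(t, str):
        return {t}
    return set().union(*[term_vars(a) for a in t[2]]) if t[2] else set()

def substitute(f, t, x, bound=frozenset()):
    # Formulas are represented as in the earlier sketch on free variables.
    op = f[0]
    if op == 'pred':
        new_args = []
        for a in f[2:]:
            if a == x and x not in bound:
                # a free occurrence of x: t must not mention a variable that is
                # bound by an enclosing quantifier at this point
                if term_vars(t) & bound:
                    raise ValueError('t is not free for x: rename bound variables first')
                new_args.append(t)
            else:
                new_args.append(a)
        return (op, f[1], *new_args)
    if op == 'not':
        return ('not', substitute(f[1], t, x, bound))
    if op in ('and', 'or', 'impl'):
        return (op, substitute(f[1], t, x, bound), substitute(f[2], t, x, bound))
    if op in ('forall', 'exists'):
        return (op, f[1], substitute(f[2], t, x, bound | {f[1]}))

# Substituting f(x, y) for the free y in ∀x P(x, y) would capture x, so it fails.
phi = ('forall', 'x', ('pred', 'P', 'x', 'y'))
t = ('func', 'f', ['x', 'y'])
try:
    substitute(phi, t, 'y')
except ValueError as e:
    print(e)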

2.3 Proof theory of predicate logic


2.3.1 Natural deduction rules
The natural deduction rules of predicate logic can be obtained as an extension of the deduction rules of
propositional logic. This can be done by

 keeping the propositional rules, e.g.


φ    ¬φ
---------  ¬e
    ⊥

for any predicate logic sentence φ, and

 introducing new rules for quantification and equality.

New Rules
We discuss the new rules for natural deduction in predicate logic below.

Rules for Equality


=i "equality introduction"

This rule says that we can always add "t = t" to our proofs, where t is any term.

=e "equality elimination"

Equality elimination is an interesting case, because it allows us to apply substitution in our proofs. The
basic idea is that if we have "t1 = t2" in our proof, and we also have a formula of the form φ[t1/x] (i.e. a
formula in which t1 has been substituted for all the free instances of x), then we can add the formula
given by φ[t2/x]. This is possible because if t1 = t2, both t1 and t2 should have the same effect on the truth
value of φ. Note, however, that for these substitutions to work, we require that both t1 and t2 be free for x
in φ.

The challenge with using this rule is being able to recognize when we have a useful formula of the form
φ[t1/x] in our proof.

Rules for Universal Quantification


∀x e "for all x elimination"

Given the quantified formula ∀x φ, the rule "for all x elimination" allows us to substitute anything we want
for the free instances of x in the formula φ (as long as what we substitute is a term which is free for x in
formula φ). It is important to realize that φ does not include the ∀x that precedes it. For example, if we
have "∀x(P(x) → Q(x))" then φ is "P(x) → Q(x)" and both occurrences of x are free in φ.

∀x i "for all x introduction"

In order to introduce a universal quantifier we create a new, completely unconstrained variable


(traditionally called x0) and attempt to prove some formula that includes x0 (i.e. a formula of the form
φ[x0/x]). In writing our proofs, we open a box, which looks just like an assumption box, but which
represents the scope of the variable x0.

When we have proved φ[x0/x] within the box, we are entitled to close the box and assert that "∀x φ". We
can do this because we didn't put any constraints on x0, so whatever properties it has are ones that it
shares with all objects in the universe of discourse. Hence the lines of the proof within the box would
work no matter what actual value we assigned to x0. That is, the conclusion φ is true for any arbitrary
variable x.

Rules for Existential Quantification


∃x i "there exists x introduction"

If we have φ[t/x] in a proof (i.e. a formula which would result from replacing all the free occurrences of x
in φ with t) then we know that there is some value that makes φ true. Thus we can introduce the formula
"∃x φ".

This may seem useless because it is giving up some information - we are going from an assertion that φ
is true for a specific value to an assertion that φ is true for some unknown value. On the contrary, it is
actually very useful to us because very often the specific value for which φ is true will be a "local"
variable defined within an assumption box. The scope of that variable is restricted to the box, and we
cannot carry it outside the box, but after we generalize the statement "φ[x0/x]" to "∃x φ", we can extend
the scope of that statement outside the box.

It is important to note that, whenever we write “φ[t/x]", it is understood that this particular sequence of
characters is not a formula, and would not appear as a line in our proof. Rather, "φ[t/x]" is a short-hand
notation for the formula that would result from replacing all free occurrences of x in φ by t. This is the
formula that would actually appear in the proof.

∃x e "there exists x elimination"

The reasoning that lets us eliminate an existential quantifier is very similar to the "∨ elimination" rule in
propositional logic. If we are given "∃x φ" then we know that there is some value x0 such that φ[x0/x] is
true. This is equivalent to an infinitely long disjunction: "φ[a/x] ∨ φ[b/x] ∨ φ[c/x] ∨ ..." where a, b, c, etc
are all the values in the universe. To make use of this knowledge, we introduce a box containing a value
x0. Within the box, we prove some formula χ such that χ contains no reference to x0. We then pull χ out
of the box as a true statement. This makes sense because we put no constraints on x0 other than that φ
[x0/x] is true. Thus, the steps we use to derive χ within the box would be valid no matter what value we
choose for x0, so long as φ[x0/x] is true. We leave x0 within the box because we don't really know its
value, but we pull χ out of the box because it is true no matter which φ-satisfying value of x0 we choose.

As a natural language demonstration of this rule, consider the following argument:

Any vehicle that can fly is an aeroplane.


There is a vehicle that can fly.
Therefore
There is a vehicle that is an aeroplane.

In order to prove the above we use the following predicates:

V(x) : x is a vehicle
F(x) : x can fly
A(x) : x is an aeroplane

We therefore wish to prove the sequent

∀x((V(x) ∧ F(x)) → A(x)), ∃x(V(x) ∧ F(x)) ⊢ ∃x(V(x) ∧ A(x))

and do so as follows:


1 ∀x((V(x) ∧ F(x)) → A(x)) premise

2 ∃x(V(x) ∧ F(x)) premise

3 x0

4 V(x0) ∧ F(x0) assumption

5 V(x0) ∧e1 4
6 (V(x0) ∧ F(x0)) → A(x0) ∀x e 1
7 A(x0) →e 6, 4
8 V(x0) ∧ A(x0) ∧i 5, 7
9 ∃x(V(x) ∧ A(x)) ∃x i 8

10 ∃x(V(x) ∧ A(x)) ∃x e 2, 3-9

Notice that the prescribed book uses “assumption” in line 4 above as the justification for the introduction
of this line in the proof. It would have been preferable to justify the introduction of this line by writing:
φ = V(x) ∧ F(x), φ[x0/x].

2.3.2 Quantifier equivalences


In addition to the propositional equivalences discussed before, we have equivalences for predicate logic.
φ and ψ denote arbitrary predicate formulas.

1. xy yx

2. xy yx

3. x x

4. x x

5. ∀x(φ ∧ ψ) ⊣⊢ ∀x φ ∧ ∀x ψ

6. ∃x(φ ∨ ψ) ⊣⊢ ∃x φ ∨ ∃x ψ

7. If x does not occur free in  then

x(  )   x , and

x(  )   x , and

x(  )   x , and

x(  )   x .

8. If x does not occur free in  then

x(  )   x , and

x(  )   x .

9. If x does not occur free in then

x(  ) x  , and

x(  ) x  .

Example:

Let us show that if x is not free in  then x   is equivalent to x(  ).

x  
x   (equivalence 3)
  x (see note below)
x(  ) (equivalence 8)

Note: We used the propositional equivalence:    

The above example is not a formal proof. You will find examples of natural deduction proofs in your
prescribed book on pages 118 to 122.

2.4 Semantics of predicate logic


How can we evaluate formulas in predicate logic? In propositional logic we could simply assign truth
values to all the atoms, but now it is more complicated. We need the notion of models.

2.4.1 Models
Note: Models are defined differently by different logicians. In other logic textbooks you may come across
the term ‘interpretation’ which corresponds to ‘model’ here. We will stick to the way the term is used in
our prescribed book.

Given a set F of function symbols and a set P of predicate symbols, a model of the pair (F, P) is defined
on page 124 of the prescribed book. We give two examples of models below. The model in the first
example has a universe consisting of four girls. In the second example we give a mathematical model.

First example of a model

Given
F: the set { j, a, b }
P: the set { T }
where
the first two elements of F (namely j and a) are constants, i.e. nullary functions, the third
element of F (namely b) is a unary function, and the only element of P (namely T) is a binary
predicate.

There are many possible models. Note that in all models M we are obliged to define a value for the
function bM for every element of the universe A that we will choose, but that we are free to define the
predicate TM in any way we like. We construct the following model M of (F, P):

 The universe A of concrete values: {joan, margaret, alma, danita},


 jM = joan,
 aM = alma,
 bM(joan) = margaret
 bM(margaret) = alma
 bM(alma) = danita
 bM(danita) = alma
 TM = { (joan, alma), (alma, joan) }

We think of b(x) as “best friend of x” and of T(x, y) as “x and y are twins”.
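If you find it helpful to experiment with models, the following Python sketch encodes this particular model; the dictionary-based representation is simply one possible choice and is not prescribed anywhere.

A = {"joan", "margaret", "alma", "danita"}           # the universe of concrete values

j, a = "joan", "alma"                                # interpretations of the constants j and a

b = {"joan": "margaret", "margaret": "alma",         # interpretation of the unary function b
     "alma": "danita", "danita": "alma"}             # ("best friend of"), total on A

T = {("joan", "alma"), ("alma", "joan")}             # interpretation of the binary predicate T ("twins")

# Evaluating an atomic formula such as T(b(j), a) in this model:
print("T(b(j), a) holds:", (b[j], a) in T)           # b(j) = margaret, so this atom is false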


Second example of a model

Given
F: the set { 0, 1, +, ×, −, / }
P: the set { =, ≤, <, >, ≥, ≠ }
where the first two elements of F are nullary functions (i.e. constants) and the other elements binary
functions, and all the elements of P are binary predicates.

We construct the following model M of (F, P):

- The universe A of concrete values: the set of real numbers

- 0M : the real number 0,
- 1M : the real number 1,
- +M : the addition operation,
- ×M : the multiplication operation,
- −M : the subtraction operation,
- /M : the division operation,
- =M : the equality predicate,
- ≤M : the less-than-or-equal-to predicate,
- <M : the less-than predicate,
- >M : the greater-than predicate,
- ≥M : the greater-than-or-equal-to predicate,
- ≠M : the not-equal-to predicate

Suppose we are given one or more formulas and are required to construct a model where the formulas
are true or to construct a model where the formulas are false. We will need a universe of concrete values
as well as definitions for all functions and all predicates appearing in the formulas. (If the formulas do not
involve any constants or other functions, no function definitions are required - the set F is empty.) Look
at the two examples below where models have to be constructed.

Required: a model where a given sentence is true

Given the sentence


x y (R(x, y)  R(y, x)  ¬ R(x, x))
we have to construct a model where the sentence is true.

Note that F is empty because the given sentence does not include any constants or other functions.
The sentence involves one predicate, namely R, with two arguments. So

F: the set { }
P: the set { R }

There are many possible models. We construct the following model M of (F, P):

 The universe A of concrete values: the set of integers greater than 3, i.e. {4, 5, 6, …}
 RM : We define R(x, y) as “x is equal to 2 times y”.

You should be able to see that the given sentence is true in this model, because the right hand side
of the implication is always true: no integer greater than 3 is equal to 2 times itself.

Required: a model where a given sentence is false

Given the sentence


∃x G(x, m(x))
we have to construct a model where the sentence is false.

We see the sentence involves one function (with one argument) and one predicate (with two
arguments). So

F: the set { m }
P: the set { G }

There are many possible models M. Note again that we are obliged to define a value for the function
mM for every element of the universe A that we choose, but that we are free to define the predicate
GM in any way we like. We construct the following model M of (F, P) where the sentence is false.
(We think of m(x) as “husband or wife of x” and of G(x, y) as “x and y are colleagues”, thus the
sentence states that some married couple are colleagues. This has to be false.)

 The universe A of concrete values: {zeb, yaco, suzie, patience},


 mM (zeb): suzie
 mM (suzie): zeb
 mM (yaco): patience
 mM (patience): yaco
 GM : {(zeb, patience), (patience, zeb)}
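Because the universe of this model is finite, the sentence can be checked mechanically. The following Python sketch (the encoding is my own choice) confirms that ∃x G(x, m(x)) is false in the model:

A = {"zeb", "yaco", "suzie", "patience"}

m = {"zeb": "suzie", "suzie": "zeb",                  # interpretation of the unary function m
     "yaco": "patience", "patience": "yaco"}

G = {("zeb", "patience"), ("patience", "zeb")}        # interpretation of the binary predicate G

sentence_true = any((x, m[x]) in G for x in A)        # ∃x G(x, m(x))
print(sentence_true)                                  # False: no one is a colleague of their spouse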

Below we give three additional examples of models - one of a mathematical nature and the other two
non-mathematical. In each case we investigate whether given well-formed formulas are true or not in
that model.

First additional example of a model

Given
F: the set { c, f }
P: the set { E }
where
c is a nullary function, i.e. a constant,
f is a binary function,
E is a binary predicate.

Suppose we construct the following model M of (F, P):

 The universe A of concrete values: the set of natural numbers {0, 1, 2, …},
 cM = 0,
 the function fM is defined by fM(i, j) = (i + j) mod 3,
 E M: the equality predicate

Is the wff
∀x[E(f(x, x), x) → E(x, c)]
true in this model? Yes.
(In this model the meaning of the formula is: for all natural numbers k,
if 2k mod 3 = k, then k = 0.)

However, the following wff is false in this model:

∀x∀y[E(f(x, y), y) → E(x, c)]

(In this model the meaning of the formula is: for all natural numbers k and m,
if (k + m) mod 3 = m, then k = 0.)
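Since the universe here is infinite, a program cannot prove the first formula true, but it can spot-check it on an initial segment of the natural numbers, and it can refute the second formula by finding a counterexample. A possible Python sketch (the sample range and the encoding are my own choices):

c = 0
def f(i, j):
    return (i + j) % 3

N = range(0, 50)                        # a finite sample of the universe of natural numbers

wff1 = all(not (f(x, x) == x) or (x == c) for x in N)                 # ∀x[E(f(x,x),x) → E(x,c)]
wff2 = all(not (f(x, y) == y) or (x == c) for x in N for y in N)      # ∀x∀y[E(f(x,y),y) → E(x,c)]
print(wff1, wff2)                       # True, False on this sample, agreeing with the analysis above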

Second additional example of a model


Given
F: the set { c }
P: the set { L }
where
c is a nullary function, i.e. a constant,
and L is a binary predicate.

Suppose we construct the following model M of (F, P):

 The universe A of concrete values: {lola, john, harry},


 cM = lola,
 LM = {(lola, lola), (lola, john), (john, lola), (harry, john)}
(where we think of L(x,y) as “x likes y”)

Is the wff
∀x(L(c, x) → L(x, c))
true in this model? Yes. ("Everyone Lola likes, likes her" - yes.)

However, the wff ∀x L(c, x) is false in this model. ("Lola likes everyone" - no.)
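Again, since the universe is finite, both claims can be verified by brute force. A possible Python sketch (the encoding is not from the prescribed book):

A = {"lola", "john", "harry"}
c = "lola"
L = {("lola", "lola"), ("lola", "john"), ("john", "lola"), ("harry", "john")}

wff1 = all((c, x) not in L or (x, c) in L for x in A)   # ∀x(L(c, x) → L(x, c))
wff2 = all((c, x) in L for x in A)                      # ∀x L(c, x)
print(wff1, wff2)                                       # True, False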

Third additional example of a model

Given
F: the set { a, b, c, m, n }
P: the set { F, K }
where
a, b, c, m and n are nullary functions, i.e. constants,
and F and K are binary predicates.

We construct the following model M of (F, P):

 The universe A of concrete values: {aggie, bob, cecilia, marco, vincent},


 aM = aggie,
 bM = bob,
 cM = cecilia,
 mM = marco,
 nM = vincent,
 FM = {(aggie, cecilia), (bob, aggie), (bob, cecilia), (bob, marco)}
(where we interpret F(x,y) as “x and y are friends”)
 KM = {(aggie, aggie), (aggie, cecilia), (aggie, bob), (bob, bob), (bob, cecilia), (bob, marco),
(bob, vincent), (cecilia, cecilia), (cecilia, vincent), (marco, marco), (marco, aggie),
(marco, vincent), (vincent, vincent)}
(where we think of K(x,y) as “x knows y”)

The wff
∀x∀y((F(x, y) ∧ F(y, x)) → (K(x, y) ∧ K(y, x)))
is true in this model. The intended meaning of the formula is "If two persons are friends, then they
know each other".

However, the wff

∃x((F(m, x) ∨ F(x, m)) ∧ (F(x, n) ∨ F(n, x)))

is false in this model. The intended meaning of the formula is "There is someone that is a friend of
Marco and a friend of Vincent".

Environment
An environment (or look-up function) l for a universe A of concrete values is defined on page 127 of the
prescribed book. Such a function maps every variable onto an element of A.

Interpretation of terms
Terms are defined on page 99 of the prescribed book. Let T(F, var) denote the set of terms built from
function symbols in F and variables in var. Each model M and look-up function l induce a mapping tA
from the set of terms to the universe A. The mapping can be defined recursively by:

tA(x) = l(x) for x an element of var

tA(f(t1, . . . , tk)) = fM(tA(t1), tA(t2), ..., tA(tk))

This means that, given a model M and look-up function (or environment) l, we find that terms denote
elements of the universe A of concrete values. Here follows an example:

Example of the interpretation of terms in a given model and look-up function

Given
F: the set { +, × }
P: the set { = }
where the elements of F are binary functions, and the element of P is a binary predicate.

We construct the following model M of (F, P):

- The universe A of concrete values: the set of real numbers

- +M : the addition operation,
- ×M : the multiplication operation,
- =M : the equality predicate
  (where we will use infix notation for these symbols)

and the following environment (look-up function) l:

l(x) = 3
l(y) = 2
l(z) = 1

Now the following will be the case in the model:

If the variable x is free in a given wff, it will be interpreted as 3.

If a wff contains the term (x × (y + x × z)) where all the variables are free, this term will be interpreted as
3 × (2 + 3 × 1) = 15.
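The recursive definition of tA can be turned into a few lines of code. The sketch below uses nested tuples for terms and is only meant as an illustration of the recursion; the representation and names are my own choices, not notation from the prescribed book.

funcs = {"+": lambda u, v: u + v,          # interpretation of + in the model M
         "*": lambda u, v: u * v}          # interpretation of × in the model M

env = {"x": 3, "y": 2, "z": 1}             # the look-up function l

def interpret(term):
    if isinstance(term, str):              # a variable: use the environment l
        return env[term]
    f, *args = term                        # a function application f(t1, ..., tk)
    return funcs[f](*(interpret(t) for t in args))

# the term x × (y + x × z), written as a nested tuple:
print(interpret(("*", "x", ("+", "y", ("*", "x", "z")))))   # 3 * (2 + 3*1) = 15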

Satisfaction in an environment
Satisfaction in an environment is defined on page 128 of the prescribed book. For a formula φ without
any free variables (i.e. a sentence) the environment is irrelevant and may be omitted from the notation.
Thus in that case we may simply write M ⊨ φ.

2.4.2 Semantic entailment


Here the issue is whether the meaning of a given sentence entails that of another. This is a fundamental
problem in natural language understanding. It provides a broad framework for studying language
variability and has a large number of applications. It is defined on page 129 of the prescribed book.

Note: the use of ⊨ in semantic entailment, Γ ⊨ ψ, differs from its use in satisfaction, M ⊨l φ; these conflicting uses of the symbol are traditional.


2.4.3 The semantics of equality


The semantics of "equality" = is intuitively as follows: given two terms t1 and t2, the formula t1 = t2 is true
in a model M based on a set A if and only if t1 and t2 are interpreted by the same element of A.

Formally:

M ⊨l t1 = t2 holds if and only if tA(t1) = tA(t2)

2.5 Undecidability of predicate logic


A formula φ is valid iff M ⊨ φ holds for every model M. Because φ has to be true in all models, validity is a
very strong condition to place on the formula.

Soundness
The proof system is sound iff for any closed predicate formula φ (i.e. a formula without any free variables,
also called a sentence) we have:

if ⊢ φ then M ⊨ φ for every model M

In brief: the proof system is sound if any provable formula is valid.

Completeness
The proof system is complete iff for any closed predicate formula φ (i.e. a formula without any free
variables, also called a sentence) we have:

if M ⊨ φ for every model M, then ⊢ φ

In brief: the proof system is complete if any valid formula is provable.

The proof system described in the prescribed book is both sound and complete: If φ is a closed
predicate formula, then

⊢ φ iff M ⊨ φ for every model M

The decidability problem for predicate logic


How can we find out whether or not ⊨ φ holds in predicate logic? The truth-table method works for
propositional logic: enumerate all possible values of the propositional variables. However, this does not work
for predicate logic since there are infinitely many models, and so infinitely many valuations for individual
variables. It is shown in the prescribed book that, even though provability and validity are equivalent,
neither one is decidable. In other words, you cannot be guaranteed to find out in a predetermined finite
number of steps whether or not ⊨ φ holds in predicate logic!

Implications for the deductive paradigm


We know that if φ is a closed predicate formula (i.e. a sentence, i.e. it does not have any free variables),
then ⊢ φ iff M ⊨ φ for every model M. We also know that the question of whether ⊨ φ holds is undecidable.

Consequence:

There exists no perfect theorem-proving system for predicate logic. The set of valid formulas can
be enumerated: just enumerate all possible proofs using, for example, Huth and Ryan's proof system.
But there exists no way to enumerate the set of formulas that are not valid (otherwise validity would be decidable).

Therefore we have the following situation: Checking whether φ is valid (i.e. checking whether ⊨ φ) is
undecidable.

Corollaries:

- Checking whether φ is satisfiable is undecidable.

- Checking whether φ is provable (i.e. checking whether ⊢ φ) is undecidable.

2.6 Expressiveness of predicate logic


In contrast to first-order logic (which we have discussed up to now), second-order logic allows
quantification over functions and predicates. It can, for example, express mathematical induction by
∀P [P(0) ∧ ∀k (P(k) → P(k + 1)) → ∀n P(n)], using quantification over the unary predicate symbol P. In
second-order logic, these functions and predicates must themselves be first-order, taking no functions or
predicates as arguments.


APPENDIX D: ADDITIONAL NOTES ON CHAPTER 5


INTRODUCTION
Modal logic concerns itself with the different modes in which statements can be true or false. Chapter 5
of the prescribed book only covers some of the basic concepts in modal logic, and introduces a few of
the many modal logics. It also deals only with propositional modal logics. We encourage you to read
further on the many applications of modal logics, both philosophical and computational. A good place to
start is the Stanford Encyclopedia of Philosophy, which you can access at
http://plato.stanford.edu/contents.html.

5.1 Modes of truth


In classical (i.e. propositional or predicate) logic, formulas are either true or false. In real life, however,
truth is qualified. We may identify facts that are “necessarily true”, “known to be true”, “believed to be
true” or “assumed to be true”. Furthermore, when we reason about computation, it is often convenient to
distinguish between truths at different points in time. These are the issues that are addressed by modal
logic. In addition, modal logic can also be applied in artificial intelligence for modeling scenarios with
several interacting agents.

5.2 Basic modal logic


5.2.1 Syntax
Basic modal logic is obtained by extending propositional logic with two new unary connectives, □ and ◊
(read "box" and "diamond", respectively). These are also called modal operators.

5.2.2 Semantics
The notion of a model is central to the study of the semantics of any logic. In propositional logic, a model
is simply an assignment of truth values to atomic formulas. In predicate logic, this definition of a model is
extended as in Definition 2.14 where we have a universe of values, and an interpretation for each
function symbol and predicate symbol in the language. In particular, an n-ary predicate symbol is
mapped to a set of n-tuples in the domain of interpretation.

In modal logic, the notion of a model is similarly extended. We now have a set W, whose elements we
call worlds. We also have a binary relation R, called the accessibility relation, and a labeling function L
which maps worlds to sets of atoms in the language.

Comparing Definition 2.14 to Definition 5.3, we note that there are some similarities. In both cases we
have a set of values. In predicate logic, its elements are values in the relevant universe. In modal logic,
its elements are called worlds.

In predicate logic, an n-ary predicate is interpreted as a set of n-tuples from the domain, that is, as an n-
ary relation over the domain. In a modal model, we have only one binary relation, called the accessibility
relation R. This accessibility relation is explicit in a modal model, but not in the modal language. Instead,
we have a unary modal connective in the modal language. The link between R and the unary modal
connective □ is given in Definition 5.4.

The labeling function L of modal logic maps worlds to sets of atoms. For each world, L specifies the set
of atoms which is true in that world. Put differently, we could express this as a function which gives, for
each atom, the set of worlds at which it is true. It is therefore similar to the interpretation of unary
predicates in predicate logic, where each unary predicate symbol is mapped to a set of values, namely
those values in the domain that have a certain property.

The above correspondences between the semantics of modal logic and that of predicate logic indicate that
modal logic can be viewed as a restricted fragment of predicate logic, in which we have only unary
predicates and a single binary predicate R. Furthermore, the sentences we may write are restricted by

Definition 5.4. In particular, the ways in which we may use R are very restrictive. Normally we don’t
bother with all of this, and rather stick to the syntactic form of modal logic, using the unary modal
connectives. This way, we can view modal logic as a propositional logic, to which we have added some
modal connectives. The additional complexity of modal logic is, in a way, hidden in the semantics of the
logic.

The most common and intuitive way to specify such a model is by means of a Kripke diagram. Consider
the following example:

[Kripke diagram: four worlds x1, x2, x3 and x4, labelled with the atoms true in them (x1: none; x2: p; x3: q; x4: p, q), with arrows giving the accessibility relation described below.]

This model consists of four worlds, namely x1, x2, x3 and x4. No atoms are true in world x1, atom p is true
in world x2, q is true in x3, and p and q are true in world x4.

The accessibility relation is represented by arrows between worlds. In this example, world x1 is not
accessible from any other world. World x2 is accessible from world x1, world x3 is only accessible from
itself, and x4 is accessible from x1, x2 and x3.

As in propositional and predicate logic, we can ask whether a given formula (in this case a modal
formula) is true in a model. A modal formula is true in a model if it is true in all its worlds. In the above
example, p is true in worlds x2 and x4, but not in x1 and x3. So the formula p is not true in this model. The
same goes for the formula q. But what about □p? It is true in world x1 because p is true in all worlds
accessible from x1. (So to check whether a modal formula □φ is true in a world x, we check whether φ is
true in all worlds accessible from x.) □p is also true in x2 because p is true in all worlds accessible from
x2. Unfortunately □p is not true in x3, because although p is true in x4, it is not true in x3, which is a world
accessible from x3. □p is said to be vacuously true in world x4, since p is true in "all" worlds accessible
from x4. (So if no worlds are accessible from a world x, then any formula of the form □φ is true in x.)
Since □p is not true in all worlds, it is not true in this model.

See if you can work out whether □q is true in the above model.

In the same way, to determine whether a formula ◊φ is true in a model, we must check that it is true in all
worlds of the model. ◊φ is true in a world x if φ is true in at least one world accessible from x. So to test
whether ◊p is true in the example model above, we need to check whether it is true in all the worlds. ◊p is
true in x1 because there is at least one world accessible from x1, namely x2, in which p is true. ◊p is true
in x2 because there is at least one world accessible from x2, namely x4, in which p is true. ◊p is true in x3
because there is at least one world accessible from x3, namely x4, in which p is true. But ◊p is not true in
x4 because there is no world accessible from x4 in which p is true. (So if no worlds are accessible from a
world x, then any formula of the form ◊φ is false in x.)

Check for yourself whether ◊q is true in the above model.

Given a modal formula φ and a world x, we write x ⊩ φ to say that φ is true in x.
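The evaluation of □ and ◊ described above is easy to mechanise for a finite model. The following Python sketch encodes the example model (world names as in the diagram; the data structures are my own choice) and reproduces the conclusions reached above:

W = {"x1", "x2", "x3", "x4"}
R = {("x1", "x2"), ("x1", "x4"), ("x2", "x4"), ("x3", "x3"), ("x3", "x4")}
Lbl = {"x1": set(), "x2": {"p"}, "x3": {"q"}, "x4": {"p", "q"}}

def successors(x):
    return [y for (w, y) in R if w == x]

def box(atom, x):        # x satisfies "box atom" iff atom holds in every world accessible from x
    return all(atom in Lbl[y] for y in successors(x))

def diamond(atom, x):    # x satisfies "diamond atom" iff atom holds in some world accessible from x
    return any(atom in Lbl[y] for y in successors(x))

print([x for x in sorted(W) if box("p", x)])       # ['x1', 'x2', 'x4']  (true in x4 only vacuously)
print([x for x in sorted(W) if diamond("p", x)])   # ['x1', 'x2', 'x3']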

5.3 Logic engineering


The main issue regarding logic engineering is that, given a particular mode of truth, how may we develop
a logic capable of expressing and formalizing that mode? This question is considered in the setting of
some well-known versions of modal logic.

Validity
There are a few related notions of validity in modal logic.

The first of these is truth in all worlds in all models: A modal formula is valid if it is true in
all worlds in all models. One way to prove that a given modal formula is valid is to take an
arbitrary world in an arbitrary model (about which you make no assumptions except that it has at
least one world), and then show that the given formula is true in that world. The most important
valid modal formula schemes are the ones given in (5.3) on page 314, together with the modal
formula scheme K on page 315. Remember that all the propositional tautologies are also valid
modal formulas. This holds even if their subformulas contain modal connectives. For example, the
formula □φ ∨ ¬□φ is a valid modal formula because it is an instance of the law of the excluded
middle, which is a propositional tautology.

To show that a modal formula is not valid, it suffices to construct a single model which has a
world in which the given formula is false.

 The second notion of validity is that of validity in a given frame. A modal formula can be valid in
one frame without being valid in all frames. A frame consists of a set of worlds and a fixed
accessibility relation on the worlds. A formula is valid in the frame if it is true in all the worlds in
the frame for all labeling functions.

Definition 5.11 on page 322 formalizes what it means for a formula to be valid in a frame. The
prescribed book refers to this as that the frame satisfies the formula. However, remember that
this is a form of validity, not satisfiability. For a formula to be valid in a frame, (i.e. in the
terminology of the prescribed book, for the frame to satisfy the formula), the formula has to be
true in all worlds in the frame for every labeling function. Remember the difference between a
frame and a model: A frame does not have a labeling function. Figures 5.9 and 5.10 are
examples of frames. They show the worlds, and the accessibility relation on worlds, but they
don’t show which atoms are true in which worlds. Figures 5.3 and 5.5, on the other hand, are
examples of modal models.

 A third notion of validity in modal logic is that of validity in a given class of frames. In this case,
the set of worlds and accessibility relation on the worlds are not fixed, but some properties of the
accessibility relation are fixed. For example, we may fix the criterion that the accessibility relation
must be reflexive. We can then consider which formulas are valid in the class of all reflexive
frames, that is, which formulas are valid in all frames with a reflexive accessibility relation.

This is essentially what correspondence theory is about. Table 5.12 on page 325 gives a number of
important correspondences between properties of the accessibility relation on the one hand, and valid
formula schemes on the other. Each of the given formula schemes is valid in the class of frames with the
given corresponding property of its accessibility relation. Conversely, for each of the formula schemes on
the left, if all instances of the formula scheme are valid in a given frame, then the accessibility relation of
the frame will have the corresponding property. This correspondence is captured in Theorem 5.13 for
reflexive frames, and for transitive frames. The other correspondences are proved similarly.
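For finite frames, the properties of the accessibility relation that appear in Table 5.12 can be tested directly. The Python sketch below (the example relation is an arbitrary choice of mine) shows such tests for reflexivity, transitivity and the Euclidean property:

def reflexive(W, R):
    return all((x, x) in R for x in W)

def transitive(W, R):
    # whenever (x, y) and (y, z) are in R, (x, z) must be as well
    return all((x, z) in R
               for (x, y) in R for (y2, z) in R if y == y2)

def euclidean(W, R):
    # whenever (x, y) and (x, z) are in R, (y, z) must be as well
    return all((y, z) in R
               for (x, y) in R for (x2, z) in R if x == x2)

W = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}       # an equivalence relation on {1, 2} and {3}
print(reflexive(W, R), transitive(W, R), euclidean(W, R))   # True True True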

Constructing new modal logics


The basic modal logic is called K. From this, we can construct new modal logics. We do this by making
the class of frames under consideration smaller, for example, by only considering frames in which the
accessibility relation is an equivalence relation. (Another way to construct a new modal logic is to add
more modal connectives. You will see an example of this in Section 5.5, so we won’t discuss that further
at this point.)

Why would one want to construct new modal logics? The choice of which modal formulas should be
valid, depends on what you want the modal operators to mean. As we have seen above, there is a
correspondence between the validity of formula schemes on the one hand, and properties of the
accessibility relation on the other. So, the meaning and properties of the accessibility relation will
influence which formulas in the logic should be valid. For each different set of valid formulas, one needs
a new logic.

A new modal logic can easily be constructed from the basic modal logic K. Simply add enough of the
modal formulas that you want to be valid, as axioms. It is not necessary to add all the formulas that you
want to be valid as axioms. It is sufficient to add a few well-chosen axiom schemes.

This is what Definition 5.15 is about. The set L will contain a few axiom schemes, typically chosen from
the list in Table 5.12. The valid formulas in the new logic will be all the instances of elements of L,
together with everything entailed by them. We call the new logic by the same name as the new axiom
schemes. So, a formula ψ is valid in the logic L, written ⊨L ψ, iff ψ is semantically entailed by the set Lc of
instances of the schemes in L, in the basic modal logic K; that is, iff Lc ⊨K ψ.

Definition 5.15 makes this definition a bit more general, defining entailment in the logic L from a set of
premises Γ. We then have that ψ is entailed by Γ in the logic L, written Γ ⊨L ψ, iff ψ is semantically entailed
by Lc ∪ Γ in modal logic K.

Intuitionistic logic
In classical logic, the meaning of a compound sentence is determined by the meaning of each of its
parts, and the meaning of the construction by which the parts are combined. The meaning is expressed
by truth conditions. In classical propositional logic, a valuation tells us which atoms are true and which
are false. Because we also know the truth conditions of the connectives, we can determine the meaning
of any compound formula. We say that classical logic is truth functional, or compositional.

In intuitionistic logic, things work a bit differently. The basic idea here is that the meaning of a sentence is
given by proof conditions:

A proof of φ ∧ ψ consists of a proof of φ together with a proof of ψ.

A proof of φ ∨ ψ consists of either a proof of φ or a proof of ψ.
A proof of ¬φ consists of a proof that there is no proof of φ.
A proof of φ → ψ is a construction whereby, given any proof of φ, one obtains a proof of ψ.

Intuitionistic logic can be given a possible-worlds semantics. This is done in Definition 5.19. Think of
worlds as states of information. In any world, some things are known. These are the things that have
been proved. An accessible world is some state conceivable from the present state, in which some
additional things have been proved. This is why the labelling of worlds has to be monotone along the accessibility relation: once we
have proved something, we can only increase our knowledge through additional proofs. Nothing can be
retracted from what we have already proved.

Returning to the proof conditions given above, we see that:

If we have a proof of both φ and ψ in some state, we have a proof of their conjunction.
If in some state we either have a proof of φ or we have a proof of ψ, then we have a proof of their
disjunction.
If we don't have a proof of φ in the present state, and we also don't have a proof of φ in any state that
can be reached from the present state, then we have a proof that a proof of φ is not possible. This means
we have a proof of its negation.

If, in any state that can be reached from the present state, we have a proof of ψ whenever we have a
proof of φ, then we have the construction required as a proof of φ → ψ in the present state.

5.4 Natural deduction


In modal natural deduction proofs, we have solid boxes as we had with classical natural deduction
proofs. But we now also have dashed boxes. Entering a dashed box represents moving from the current
world to an arbitrary accessible world. Leaving a dashed box represents returning from the world
represented by the dashed box to the world from which it was reached.

Suppose we have proved φ in a dashed box. This means φ holds in the world represented by the dashed
box. Since φ holds in an arbitrary world accessible from the current world, □φ holds in the current world,
which is represented by the surrounding box.

Similarly, suppose we have proved □φ. This means that φ holds in an arbitrary accessible world. We may
therefore use φ in a nested dashed box.

A modal natural deduction proof starts in an arbitrary world. We may therefore view the entire proof as
taking place in an outer dashed box. Since this represents an arbitrary world, anything proved in this
outer dashed box holds in all worlds.

The rules we have to add for each of the axiom schemes T, 4 and 5, resemble the respective axiom
schemes closely. If we want to construct a natural deduction proof for the modal logic KT, we may use
the □ introduction rule, the □ elimination rule and the T rule, in addition to all the natural deduction rules
of propositional logic. Similarly, in a natural deduction proof for the modal logic KT45, we may use all of
the above rules, plus the rules 4 and 5.

Any proof for K, KT or KT4 will also be a proof for KT45, but not conversely. Example 5.21 (1) gives a
proof for the modal logic K. This would also constitute a proof in the logics KT, KT4 or KT45. (2) and (3)
are proofs for KT45. Since both the T and 5 axioms are used in each of these proofs, they are not proofs
for K, KT or KT4.

5.5 Reasoning about knowledge in a multi-agent system


Multi-modal logics generalize modal logics such as K, KT45, etc. in a natural way. Instead of having only
one modal connective, and one corresponding accessibility relation, a multi-modal logic has a number of
modal connectives, each having its own corresponding accessibility relation. The two examples given in
the prescribed book (the wise men puzzle and the muddy children puzzle) illustrate why one would want
to work with multi-modal languages, or multi-agent systems. There are other applications as well.

APPENDIX E: SOLUTIONS TO SELECTED EXERCISES

Chapter 1

Exercises 1.1 (p. 78)

1 (f) Declarative sentence: “If interest rates go up, share prices go down”

If we choose:
p: "Interest rates go up".
q: "Share prices go down”.

the formula representing the above declarative sentence is


pq

1(i) Declarative sentence: “If Dick met Jane yesterday, they had a cup of coffee together, or they
took a walk in the park.”

If we choose the proposition atoms as follows


p: "Dick met Jane yesterday.”
q: "Dick and Jane had a cup of coffee together."
r: "Dick and Jane had a walk in the park."
the resulting formula is:
pqr
which reads as p  (q  r) when we recall the binding priorities of the logical connectives.

2(c) (p  q) (r (s  t))

2(g) The expression p ∧ q ∨ r is problematic since ∧ and ∨ have the same binding priority, so we
have to insist on additional brackets in order to resolve the conflict.


Exercises 1.2 (p. 78)

1(a) The proof of the validity of (p ∧ q) ∧ r, s ∧ t ⊢ q ∧ s is a straight-forward application of the

rules that you know from Formal Logic 2:

1   (p ∧ q) ∧ r   premise
2   s ∧ t         premise
3   p ∧ q         ∧e1 1
4   q             ∧e2 3
5   s             ∧e1 2
6   q ∧ s         ∧i 4, 5

1(x) We have to prove the validity of p → (q ∨ r), q → s, r → s ⊢ p → s.

We again apply the rules you know from Formal Logic 2, but make a few comments.

Because the main connective of the goal is the implication →, we open a subproof box in
line 4 with the assumption of the left hand side of the goal, namely p. The subproof ends
with the right hand side of the goal (line 10) and then the → introduction rule is used in
line 11 after the subproof box was exited.

We require two subproofs (lines 6 to 7 and lines 8 to 9) so that the ∨ elimination rule can
be used in line 10. Also note how the → elimination rule is used (lines 5, 7 and 9). If the
application of these rules is not clear, work through the rules given in the prescribed book
for Formal Logic 2 again.

1   p → (q ∨ r)   premise
2   q → s         premise
3   r → s         premise

4   p             assumption
5   q ∨ r         →e 1, 4

6   q             assumption
7   s             →e 2, 6

8   r             assumption
9   s             →e 3, 8

10  s             ∨e 5, 6–7, 8–9

11  p → s         →i 4–10

2(b) We prove the validity of ¬p ∨ ¬q ⊢ ¬(p ∧ q) as follows:

1   ¬p ∨ ¬q     premise

2   p ∧ q       assumption

3   ¬p          assumption
4   p           ∧e1 2
5   ⊥           ¬e 4, 3

6   ¬q          assumption
7   q           ∧e2 2
8   ⊥           ¬e 7, 6

9   ⊥           ∨e 1, 3–5, 6–8

10  ¬(p ∧ q)    ¬i 2–9

As you can see, we assume the negation of the goal in line 2 (the first statement of the outer assumption
box) and derive the contradiction in line 9 (the last statement of this subproof) so that the goal can be
derived in line 10, after the subproof box has been exited, by using the ¬ introduction rule.

Also note how the ∨ elimination rule is applied: we need two separate assumption boxes (lines 3 to 5
and lines 6 to 8), each box starting with one of the disjuncts of the formula in line 1 and each box ending
on the same formula, which is then derived in line 9 after the second assumption box has been exited.
Note that all this happens inside the outer subproof box.

Remember that the ¬e rule was called the ⊥ Intro rule in Formal Logic 2, but the same requirements
apply.

2(e) We prove the validity of p → (q ∨ r), ¬q, ¬r ⊢ ¬p as follows:

1   p → (q ∨ r)   premise
2   ¬q            premise
3   ¬r            premise

4   p             assumption
5   q ∨ r         →e 1, 4

6   q             assumption
7   ⊥             ¬e 6, 2

8   r             assumption
9   ⊥             ¬e 8, 3

10  ⊥             ∨e 5, 6–7, 8–9

11  ¬p            ¬i 4–10

- The proof is very similar to the proof in question 2(b) above (assumption of the negation of the
goal and use of the ∨e rule, thereby requiring two sub-subproofs).
- Please note how the →e rule is applied (line 5): the right hand side of an implication is
derived if the left hand side appears on an earlier line.


3(q) We prove the validity of ⊢ (p → q) ∨ (q → r) as follows:

1   q ∨ ¬q                  LEM

2   q                       assumption

3   p                       assumption
4   q                       copy 2

5   p → q                   →i 3–4
6   (p → q) ∨ (q → r)       ∨i1 5

7   ¬q                      assumption

8   q                       assumption
9   ⊥                       ¬e 8, 7
10  r                       ⊥e 9

11  q → r                   →i 8–10
12  (p → q) ∨ (q → r)       ∨i2 11

13  (p → q) ∨ (q → r)       ∨e 1, 2–6, 7–12

Note that all the subproofs are essential. Most of the rules used above have been
explained in the previous exercises, except the ∨i rule that is used in lines 6 and 12. The ∨
introduction rule is very simple: once a formula has been derived, any formula may be connected
to it with the ∨ connective.

5(d) We have to prove the validity of ⊢ (p → q) → ((¬p → q) → q). You will find nothing new in the
proof given below. Note, however, again, that a new subproof box has to be opened whenever an
assumption is made.

1   p → q                       assumption

2   ¬p → q                      assumption

3   p ∨ ¬p                      LEM

4   p                           assumption
5   q                           →e 1, 4

6   ¬p                          assumption
7   q                           →e 2, 6

8   q                           ∨e 3, 4–5, 6–7

9   (¬p → q) → q                →i 2–8

10  (p → q) → ((¬p → q) → q)    →i 1–9

Exercises 1.3 (p. 81)

1(c) In order to draw the parse tree for the formula, we have to determine the main connective. Since
∧ binds more strongly than →, we could re-write this formula as

(p ∧ ¬q) → ¬p

This shows that the main connective of this formula is →, which places it at the root of the parse
tree, shown below:

→
├── ∧
│   ├── p
│   └── ¬
│       └── q
└── ¬
    └── p

The height of this parse tree is 1 + 3 = 4, since the longest path from root to leaf is 3.


4(b) Here, parentheses are used to override the binding priorities of the connectives, making the last
∧ the main connective of the formula. The parse tree, whose height is 1 + 5 = 6, is shown below:

∧
├── →
│   ├── ∧
│   │   ├── →
│   │   │   ├── p
│   │   │   └── ¬
│   │   │       └── q
│   │   └── ∧
│   │       ├── p
│   │       └── r
│   └── s
└── ¬
    └── r

By definition, a formula φ is a subformula of another formula ψ if and only if the formation tree of
formula φ appears as a subtree of the formation tree of formula ψ. The following is a list of all the
subformulas of ((p → ¬q) ∧ (p ∧ r) → s) ∧ ¬r:

p
q
r
s
¬r
¬q
(p → ¬q)
(p ∧ r)
(p → ¬q) ∧ (p ∧ r)
((p → ¬q) ∧ (p ∧ r) → s)
((p → ¬q) ∧ (p ∧ r) → s) ∧ ¬r

The purpose of all the parentheses is to override precedence rules and binding orders. We can
parse a fully parenthesized formula recursively and mechanically (i.e. we don't need to worry
about the interpretation of the symbols). Parsing a wff lets us build a parse tree for the formula, in
which the root node corresponds to the final rule that was applied in the building of the formula,
and the leaves are the atomic propositions in the formula.

Exercises 1.4 (p. 82)

2(c) The complete truth table for p ∨ ¬(q ∧ (r → q)) is

p   q   r   r → q   q ∧ (r → q)   ¬(q ∧ (r → q))   p ∨ ¬(q ∧ (r → q))


T T T T T F T
T T F T T F T
T F T F F T T
T F F T F T T
F T T T T F F
F T F T T F F
F F T F F T T
F F F T F T T

5 The formula of the parse tree in figure 1.10 on page 44 is the following:

¬((q → ¬p) ∧ (p → (r ∨ q)))

We give the truth table below.

p   q   r   ¬p   q → ¬p   r ∨ q   p → (r ∨ q)   (q → ¬p) ∧ (p → (r ∨ q))   ¬((q → ¬p) ∧ (p → (r ∨ q)))


T T T F F T T F T
T T F F F T T F T
T F T F T T T T F
T F F F T F F F T
F T T T T T T T F
F T F T T T T T F
F F T T T T T T F
F F F T T F T T F

The formula is not valid since it evaluates to F for several assignments. However, this formula is
satisfiable: for example, if q and p evaluate to T then q → ¬p renders F, so the conjunction renders F and the entire
formula evaluates to T.
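Truth tables like the one above can be generated, and therefore double-checked, with a few lines of Python. The sketch below is only a checking aid and is not part of the model answer:

from itertools import product

def implies(a, b):
    return (not a) or b

# enumerate all eight rows of the truth table for ¬((q → ¬p) ∧ (p → (r ∨ q)))
for p, q, r in product([True, False], repeat=3):
    value = not (implies(q, not p) and implies(p, r or q))
    print(p, q, r, value)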

13(b) An example is: Let p represent "There are clouds in the sky" and let q represent "It rains". Then
¬p → ¬q holds ("if there are no clouds in the sky, then it does not rain"), but ¬q → ¬p is false
("if it does not rain, then there are no clouds in the sky") because, as we know, there may well
be clouds in the sky even if it does not rain.


Exercises 1.5 (p.87)


2

p   q   r   ¬p   ¬q   ¬r   ¬p∨r   q∧¬r   p∧¬r   ¬q∧¬r   q∨r   (a)   (b)   (c)   (d)   p→(q∨r)


T T T F F F T F F F T T T T T T
T T F F F T F T T F T T T T T T
T F T F T F T F F F T T T T T T
T F F F T T F F T T F F T F F F
F T T T F F T F F F T T T T T T
F T F T F T T T F F T T F T T T
F F T T T F T F F F T T T T T T
F F F T T T T F F T F T T T T T

As may be seen in the last five columns of the truth table above, the formulas given in (a), (c) and
(d) are semantically equivalent to p → (q ∨ r), but the formula given in (b) is not. The truth value of
the formula given in (b) does not correspond to that of p → (q ∨ r) in lines 4 and 6, while the truth
values of (a), (c) and (d) are identical to those of p → (q ∨ r) in all lines.

7(a) We construct the formula φ1 in CNF as explained in section 1.5.1 in both the prescribed book and
the additional notes:

(p  q)  (p  q)  (p  q)

Note how these principal conjuncts correspond to the lines in the truth table where the φ1 entry is F.

15(g) Applying the HORN algorithm to

(T → q) ∧ (T → s) ∧ (w → ⊥) ∧ (p ∧ q ∧ s → v) ∧ (v → s) ∧ (T → r) ∧ (r → p),

firstly marks all occurrences of T (we indicate marking with an asterisk):

(T* → q) ∧ (T* → s) ∧ (w → ⊥) ∧ (p ∧ q ∧ s → v) ∧ (v → s) ∧ (T* → r) ∧ (r → p)

Each Horn clause is now investigated: if everything to the left of → is marked, the right hand side
is marked everywhere it appears. Thus:

All occurrences of q, s, and r are marked in the first iteration of the while-loop:
(T* → q*) ∧ (T* → s*) ∧ (w → ⊥) ∧ (p ∧ q* ∧ s* → v) ∧ (v → s*) ∧ (T* → r*) ∧ (r* → p)

In the second iteration, both occurrences of p get marked:

(T* → q*) ∧ (T* → s*) ∧ (w → ⊥) ∧ (p* ∧ q* ∧ s* → v) ∧ (v → s*) ∧ (T* → r*) ∧ (r* → p*)

and in the third iteration v is marked:

(T* → q*) ∧ (T* → s*) ∧ (w → ⊥) ∧ (p* ∧ q* ∧ s* → v*) ∧ (v* → s*) ∧ (T* → r*) ∧ (r* → p*)

Nothing further can be marked. Because ⊥ is not marked, the Horn formula is satisfiable.
(We allocate T to q, s, r, p and v, and F to w.)
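The marking procedure can also be carried out by a short program. The following Python sketch implements the marking loop for the formula in 15(g); the clause encoding, with [] for the premise T and "BOT" for ⊥, is my own choice and not from the prescribed book.

# each clause is (list of premise atoms, conclusion); [] means the premise is just T
clauses = [([], "q"), ([], "s"), (["w"], "BOT"),
           (["p", "q", "s"], "v"), (["v"], "s"),
           ([], "r"), (["r"], "p")]

marked = set()
changed = True
while changed:
    changed = False
    for premises, conclusion in clauses:
        if all(x in marked for x in premises) and conclusion not in marked:
            marked.add(conclusion)
            changed = True

print("satisfiable" if "BOT" not in marked else "unsatisfiable")
print(sorted(marked))     # ['p', 'q', 'r', 's', 'v']: exactly the atoms assigned T above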

Chapter 2

Exercises 2.1 (p. 157)


1 (a) ∀x(P(x) → A(m, x))

Remember that P(x) is not a term, so it cannot be the argument of a predicate.

(b) ∃x(P(x) ∧ A(x, m))

(d) ∀x(S(x) → (∃y(L(y) ∧ ¬B(x, y)))), or, equivalently, ¬∃x(S(x) ∧ ∀y(L(y) → B(x, y))).

Exercises 2.2 (p. 158)

4(a) The parse tree of φ = ∀x(P(y, z) ∧ ∀y(¬Q(y, x) ∨ P(y, z))) is:

∀x
└── ∧
    ├── P
    │   ├── y
    │   └── z
    └── ∀y
        └── ∨
            ├── ¬
            │   └── Q
            │       ├── y
            │       └── x
            └── P
                ├── y
                └── z

4(b) From the parse tree of the previous item we see that all occurrences of z are free, all
occurrences of x are bound, and the leftmost occurrence of y is free, whereas the other two
occurrences of y are bound.

4(d) (i)

- φ[w/x] is simply φ again, since there are no free occurrences of x in φ that could be
substituted by w.

- φ[w/y] is ∀x(P(w, z) ∧ (∀y(¬Q(y, x) ∨ P(y, z)))), since we replace the sole free
occurrence of y with w.

- If we simply replace the sole free occurrence of y with f(x), we get that φ[f(x)/y] is
∀x(P(f(x), z) ∧ (∀y(¬Q(y, x) ∨ P(y, z)))). Note, however, that we created a
problem by this substitution, because the variable x occurs in f(x) and after the
substitution it occurs within the scope of ∀x. We should actually first rename x in the
given formula by, say, u to get ∀u(P(y, z) ∧ (∀y(¬Q(y, u) ∨ P(y, z)))) and then do the
substitution: ∀u(P(f(x), z) ∧ (∀y(¬Q(y, u) ∨ P(y, z)))).

- If we simply replace all (free) occurrences of z with g(y, z) we get that φ[g(y, z)/z] is
∀x(P(y, g(y, z)) ∧ ∀y(¬Q(y, x) ∨ P(y, g(y, z)))). By doing this we again created a
problem, because the variable y occurs in g(y, z) and after the substitution it occurs
within the scope of ∀y. We should actually first rename the bound occurrences of y in
the given formula by, say, u to get ∀x(P(y, z) ∧ ∀u(¬Q(u, x) ∨ P(u, z))) and then do
the substitution: ∀x(P(y, g(y, z)) ∧ ∀u(¬Q(u, x) ∨ P(u, g(y, z)))).

4(d) (ii) All of them, for there are no free occurrences of x in φ to begin with.

4(d) (iii) The terms w and g(y, z) are free for y in φ, but f(x) is not free for y in the formula, since the
x in f(x) would be captured by ∀x in that substitution process, as noted above.

4(f) Now, the scope of ∀x is only the formula P(y, z), since the inner quantifier ∀x binds all occurrences
of x (and overrides the binding of the outer ∀x) in the formula (¬Q(x, x) ∨ P(x, z)).

Exercises 2.3 (p. 160)

7(c) The proof of the validity of ∃x∀yP(x, y) ⊢ ∀y∃xP(x, y) is given below. This proof illustrates both the
∃ and ∀ introduction and elimination rules. We comment on that below the proof.

1   ∃x∀yP(x, y)          premise

2   y0

3   x0   ∀yP(x0, y)      assumption (φ[x0/x])

4   P(x0, y0)            ∀y e 3
5   ∃xP(x, y0)           ∃x i 4

6   ∃xP(x, y0)           ∃x e 1, 3–5

7   ∀y∃xP(x, y)          ∀y i 2–6

We open a y0-box first, since we want to derive a formula of the form ∀y ψ. Then we open an x0-
box to be able to use ∃x e later on.

Note how the elimination and introduction rules for both ∃ and ∀ are applied above:

- ∃ elimination: (i) a formula starting with ∃ (line 1), (ii) a new subproof box starting
with the choice of a free variable which then substitutes the relevant variable in the
formula now without ∃, namely ∀yP(x, y) (line 3), (iii) the subproof ends on a formula
that does not contain the free variable (line 5), (iv) the ∃ e rule is cited outside the
subproof (line 6) with the same formula on which the subproof ends in line 5.

- ∃ introduction: (i) a formula containing a free variable (line 4), (ii) the ∃ i rule is cited
and ∃ is attached to the formula with the free variable replaced by the same variable
that is attached to ∃ (line 5).

- ∀ elimination: (i) a formula starting with ∀ (line 3), (ii) the ∀ e rule is cited with the
formula without the ∀ and the relevant variable replaced by a free variable y0 (line 4).

- ∀ introduction: (i) a subproof box starting with the choice of a free variable (line 2),
(ii) the subproof ends on a formula containing the free variable (line 6), (iii) the ∀ i
rule is cited outside the subproof (line 7) with the same formula on which the subproof
ends in line 6, but with ∀ attached to it and the free variable replaced by the same
variable that is attached to ∀.


9(d) We prove the validity of ∀xP(x) → S ⊢ ∃x(P(x) → S) below.

- Note that we may not apply the ∀ elimination rule directly to the premise, because ∀x is not in
front of the whole rest of the formula, i.e. we do not have ∀x(P(x) → S). The ∀ elimination rule
is applicable to formulas of the type ∀x φ only.

- This is not a simple proof and you will not generally be asked to give such a proof in an
examination. However, it demonstrates the application of several rules. Make sure that you
understand it.

1   ∀xP(x) → S          premise

2   ¬∃x(P(x) → S)       assumption

3   x0

4   ¬P(x0)              assumption

5   P(x0)               assumption
6   ⊥                   ¬e 5, 4
7   S                   ⊥e 6

8   P(x0) → S           →i 5–7
9   ∃x(P(x) → S)        ∃x i 8
10  ⊥                   ¬e 9, 2

11  ¬¬P(x0)             ¬i 4–10
12  P(x0)               ¬¬e 11

13  ∀xP(x)              ∀x i 3–12
14  S                   →e 1, 13

15  P(t)                assumption
16  S                   copy 14

17  P(t) → S            →i 15–16
18  ∃x(P(x) → S)        ∃x i 17
19  ⊥                   ¬e 18, 2

20  ¬¬∃x(P(x) → S)      ¬i 2–19
21  ∃x(P(x) → S)        ¬¬e 20

9(r) We prove the validity of ¬∃xP(x) ⊢ ∀x¬P(x) as follows:

1   ¬∃xP(x)     premise

2   x0

3   P(x0)       assumption
4   ∃xP(x)      ∃x i 3
5   ⊥           ¬e 4, 1

6   ¬P(x0)      ¬i 3–5

7   ∀x¬P(x)     ∀x i 2–6

There is nothing new in this proof. We choose a free variable in line 2, thereby opening a new
subproof box, so that the ∀x introduction rule can be cited once the subproof is exited (line 7).
The "trick" of assuming the negation of something that we want to prove is illustrated in the
subproof from line 3 (where P(x0) is assumed) to line 5 (where the contradiction is derived), and
then the ¬i rule is cited in the next line after this subproof box has been exited (i.e. line 6).

13(a) We show the validity of

∀xP(a, x, x), ∀x∀y∀z(P(x, y, z) → P(f(x), y, f(z))) ⊢ P(f(a), a, f(a))

as follows:

1   ∀xP(a, x, x)                              premise
2   ∀x∀y∀z(P(x, y, z) → P(f(x), y, f(z)))     premise
3   P(a, a, a)                                ∀x e 1
4   ∀y∀z(P(a, y, z) → P(f(a), y, f(z)))       ∀x e 2
5   ∀z(P(a, a, z) → P(f(a), a, f(z)))         ∀y e 4
6   P(a, a, a) → P(f(a), a, f(a))             ∀z e 5
7   P(f(a), a, f(a))                          →e 6, 3

Note that no subproofs are necessary for the application of the ∀ elimination rule: any term (here the
constant a) may be used for the substitution.


Exercises 2.4 (p. 163)

1. The formula ∀x∀yQ(g(x, y), g(y, y), z) contains a free variable z, thus we will also need a look-up
table.

First model M: We choose

the universe A of concrete values to be the set of integers,
the function gM(a, b) to be the result of subtracting b from a

(in this way g(y, y) is interpreted as 0), and the predicate QM to be interpreted such that a triple
of integers (a, b, c) is in QM if, and only if, c equals the product of a and b.

Thus our formula says that, for all integers x and y, we have that (x – y) times 0 is equal to z.

If we define l(z) = 0, the formula holds in the above model.

Second model M′: We choose this model identical to M above but define a different look-up table:
let l(z) = 1. The formula is now false.

12(g) The following model shows that the formula is not valid:

Let the universe A of concrete values be the set of natural numbers {0, 1, 2, …}, and
let the predicate S be interpreted as "less than or equal to".

The formula claims that the serial and anti-symmetric relation does not have a minimal element.
But in the given model, 0 is such a minimal element.

Exercises 2.5 (p. 164)

1(g) There are many such models M. We choose

A to be the set of integers,

P(x) to say that "x is divisible by 2",
Q(x) to say that "x is divisible by 3".

Then we have
M ⊨ ∃x(¬P(x) ∨ ¬Q(x)),
i.e. the formula to the left of ⊢ evaluates to T (there exists such an x; for example, take 9 as the
value of x), but we cannot have
M ⊨ ∀x(P(x) ∨ Q(x))
since not all integers are divisible by 2 or 3 (choose x, for example, to be 13). Because we have
found a model where the premise of the sequent ∃x(¬P(x) ∨ ¬Q(x)) ⊢ ∀x(P(x) ∨ Q(x)) is true but
the conclusion is false, we know that no proof exists for the validity of this sequent – the sequent
is not valid.

CHAPTER 5

Exercises 5.2 (p. 350)

1(a) (iv) The Kripke model M depicted in Figure 5.5 is on page 315.

The relation a ⊩ □□q holds iff x ⊩ □q holds for all x with R(a, x). Since e and b are the only
instances of x which satisfy R(a, x), we see that a ⊩ □□q holds iff e ⊩ □q and b ⊩ □q hold.
But this is not the case. For example, we have R(b, e) but q is not true in world e, so □q
is not true in world b.

5 (f) We will show that ◊(p ∨ q) and ◊p ∨ ◊q entail each other.

(i) First, we have x ⊩ ◊(p ∨ q) iff there is a world y with R(x, y) and y ⊩ p ∨ q.

But then y ⊩ p or y ⊩ q.
Case 1: If y ⊩ p, then x ⊩ ◊p, and so x ⊩ ◊p ∨ ◊q follows.
Case 2: If y ⊩ q, we argue in the symmetric way: x ⊩ ◊q, and so x ⊩ ◊p ∨ ◊q follows.

(ii) Second, if we have x ⊩ ◊p ∨ ◊q, then we have x ⊩ ◊p or x ⊩ ◊q (not necessarily exclusively).

Case 1: If x ⊩ ◊p, then there exists a world y' with R(x, y') and y' ⊩ p.
This implies y' ⊩ p ∨ q, and so x ⊩ ◊(p ∨ q) follows because of R(x, y').
Case 2: Symmetrically, if x ⊩ ◊q, then there exists a world y'' with R(x, y'') and y'' ⊩ q.
This implies y'' ⊩ p ∨ q, and so x ⊩ ◊(p ∨ q) follows because of R(x, y'').

Exercises 5.3 (p. 351) problem 4

Fact 5.16 is given on page 327. We prove it in one direction:

Let R be a reflexive, transitive and Euclidean relation. We need to show that R is an equivalence
relation. Since R is already reflexive and transitive, it suffices to show that R is symmetric. To that
end, assume that R(a, b). We have to show that R(b, a) holds as well. Since R is Euclidean, we
have that R(x, y) and R(x, z) imply R(y, z) for all choices of x, y and z. So, if we instantiate this
with x = a, y = b and z = a, then we have R(x, y) by assumption (as R(a, b)), but we also have
R(x, z) since R is reflexive (so R(a, a) holds). Using that R is Euclidean, we obtain R(y, z),
which is R(b, a) as desired.


REFERENCES
1. Chellas, BF. 1980. Modal logic: an introduction, Cambridge University Press.

2. Hughes, GE and Cresswell, MJ. 1995. A new introduction to modal logic, New York Routledge.

3. Huth, M. and Ryan, M. 2004. Logic in Computer Science: Modelling and Reasoning about
systems. Second edition. Cambridge University Press.

4. Huth, M. and Ryan, M. 2004. Logic in Computer Science: Modelling and Reasoning about
systems. Solutions to designated exercises. Cambridge University Press.

5. http://www.doc.ic.ac.uk/~imh/teaching/140_logic/Cribsheet.pdf

6. http://cl-informatik.uibk.ac.at/teaching/ws06/lics/ohp/4x1.pdf

7. http://gensum.kaist.ac.kr/~cs402/2004fall/lecture/logic-prop-040916-30.pdf

8. http://www.doc.ic.ac.uk/~imh/teaching/140_logic/140.pdf

9. http://www.cs.bham.ac.uk/research/projects/lics/second_edition_errata.pdf

10. http://cs.sunysb.edu/~cse541/Spring2007/cse541BasicModalLogic.pdf

11. http://cs.sunysb.edu/~cse541/Spring2007/cse541ModalLogics.pdf

12. http://cs.vu.nl/~pdwind/thesis/thesis.pdf

13. http://plato.stanford.edu/contents.html

Unisa 2018
