
COS351D/204/2007

School of Computing

Techniques of Artificial Intelligence


COS351D

TUTORIAL LETTER 204/2007

Discussion of Assignment 4

Table of Contents

Outline of syllabus

Solution: Assignment 4



Outline of syllabus

Chapter 1
Read as background.

Chapter 2
Familiarise yourself with the concepts and terminology.

Chapter 3
Study everything. You have to understand all the algorithms and be able to apply them to a given
problem statement. You do not need to memorise any pseudo-code.

Chapter 4
Study everything except the following sections, which you can omit entirely:
- Memory-bounded heuristic search, pp. 101 – 104.
- 4.4 Local Search in Continuous Spaces, pp. 119 – 129.
You have to understand all the algorithms and be able to apply them to a given problem statement.
You do not need to memorise any pseudo-code.

Chapter 5
Study everything except the following section, which you can omit entirely:
- 5.4 The Structure of Problems, pp. 151 – 155.
You have to understand all the algorithms and be able to apply them to a given problem statement.
You do not need to memorise any pseudo-code.

Chapter 6
Study sections 6.1, 6.2 and 6.3. You have to understand the minimax and alpha-beta algorithms, and
be able to apply them to a given problem statement.
Read sections 6.4, 6.5, 6.6, 6.7 and 6.8. You have to understand the concepts covered in these
sections, but you do not need to be familiar with any particular game such as backgammon, chess or
any card game.

Chapter 7
Study everything except the following section, which you can omit entirely:
- 7.7 Agents based on propositional logic, pp. 225 – 232.
Many of the concepts introduced in this chapter should already be familiar to you from COS261.
However, there are some important algorithms in this chapter that you will probably not be familiar
with. In particular, you have to understand and be able to apply forward and backward chaining,
resolution, and the DPLL and WALKSAT algorithms.

Chapter 8
Most of the concepts introduced in this chapter should also already be familiar to you. Read the
chapter to refresh your memory. Make sure that you can translate English language sentences and
paragraphs into first-order logic.

Chapter 9
You have to be able to convert an English language paragraph into first-order logic, convert this into
conjunctive normal form, and use resolution to show that a given conclusion follows from the
premises. In order to apply the resolution algorithm, you will need to understand the unification of
terms. You can omit the following sections:
- 9.3 Forward chaining, pp. 280 – 287;
- 9.4 Backward chaining, pp. 287 – 295.
You can also omit everything from "Completeness of resolution" (p. 300) to the end of the chapter.

Chapter 18
Understand the forms of learning and the concepts of inductive learning. You have to be able to
construct a decision tree and reason with the contents thereof. The theory of decision trees is also
important. You can omit the following sections:
- 18.4 Ensemble Learning, pp. 664 – 668;
- 18.5 Why Learning Works, pp. 668 – 673.

Chapter 20
Sections 20.1 to 20.4 serve as background to much of the rest of the chapter. Read and understand
the concepts as far as they pertain to section 20.5 onwards. Section 20.5 forms the core of the work
that you need to understand fully. You will have to be able to construct a simple neural network and
train it using a given training set. You have to understand the limitations of the various models of
neural networks. You can omit the following sections:
- 20.6 Kernel machines, pp. 749 – 752;
- 20.7 Case Study, pp. 752 – 754.


Solution: Assignment 4

Answer 1

Exercise 8.1 on page 268.

The key distinction between analogical and sentential representations is that the analogical
representation automatically generates consequences that can be "read off" whenever suitable
premises are encoded.

a) The symbols on a map depend on its scale and type. They typically include city and town
markers, road symbols, lighthouses, historic monuments, etc.

b) When the map creator puts a symbol in a specific location, he or she asserts one explicit fact
(Hillbrow Tower is here), but the analogical structure of the map representation means that we
can derive many implicit sentences.

Explicit sentences:
- There is a tower called Hillbrow tower here.
- Voortrekker Road runs approximately East-West.
- False Bay exists and is roughly circular.
Implicit sentences:
- The N1 is longer than the N3.
- Johannesburg is north of Cape Town.
- The shortest route from Johannesburg to Cape Town is via the N1.

c) Sentences not representable in map language:
- Table Mountain has a flat top.
- In 1900 the N1 did not exist.
- The N4 runs west or east of Pretoria.

d) Sentences that are easier to express in the map language: any sentence that can be
expressed easily in English is not a good candidate here. Any linguistic abstraction about the
shape of False Bay can probably be expressed equally easily in the predicate calculus, since
that is what it was designed for. Facts such as the precise shape of the coastline are best
expressed in the map language.

e) Examples of other types of analogical representations:

- Audio tape recordings. Advantages: simple circuits are required to record and reproduce
sounds. Disadvantages: subject to errors, noise, not easy to separate individual sounds.
- Traditional clock face. Advantages: easy to read quickly, easy to determine remaining
time. Disadvantages: hard to read precisely, cannot represent small units such as
milliseconds.
- Graphs, pie charts, bar charts. Advantages: enormous data compression, easy trend
analysis, easy to communicate information. Disadvantages: imprecise, cannot represent
disjunctive or negated information.

Answer 2

Exercise 8.6 on page 268.

The main point of this exercise is to understand connectives and quantifiers, as well as the use of
predicates, functions, constants, and equality. Let the basic vocabulary be as follows:

Takes(x, c, s): student x takes course c in semester s;
Passes(x, c, s): student x passes course c in semester s;
Score(x, c, s): the score obtained by student x in course c in semester s;
x > y: x is greater than y;
F and G: specific French and Greek courses;
Spring2001: the spring 2001 semester;
Policy(x): x is an insurance policy;
Buys(x, y, z): x buys y from z;
Sells(x, y, z): x sells y to z;
Shaves(x, y): person x shaves person y;
Born(x, c): person x is born in country c;
Parent(x, y): person x is a parent of person y;
Citizen(x, c, r): person x is a citizen of country c for reason r;
Resident(x, c): person x is a resident of country c;
Fools(x, y, t): person x fools person y at time t;
Student(x), Person(x), Man(x), Barber(x), Expensive(x), Agent(x), Insured(x), Smart(x),
Politician(x): predicates satisfied by members of the corresponding categories.

a) Some students took French in spring 2001.
∃x Student(x) ∧ Takes(x, F, Spring2001)

b) Every student who takes French passes it.
∀x, s Student(x) ∧ Takes(x, F, s) ⇒ Passes(x, F, s)

c) Only one student took Greek in spring 2001. (A toy model check of this sentence follows item (k) below.)
∃x Student(x) ∧ Takes(x, G, Spring2001) ∧ ∀y y ≠ x ⇒ ¬Takes(y, G, Spring2001)

d) The best score in Greek is always higher than the best score in French.
∀s ∃x ∀y Score(x, G, s) > Score(y, F, s)

e) Every person who buys a policy is smart.
∀x Person(x) ∧ (∃y ∃z Policy(y) ∧ Buys(x, y, z)) ⇒ Smart(x)

f) No person buys an expensive policy.
∀x, y, z Person(x) ∧ Policy(y) ∧ Expensive(y) ⇒ ¬Buys(x, y, z)

g) There is an agent who sells policies only to people who are not insured.
∃x Agent(x) ∧ ∀y ∀z Policy(y) ∧ Sells(x, y, z) ⇒ (Person(z) ∧ ¬Insured(z))

h) There is a barber who shaves all men in town who do not shave themselves.
∃x Barber(x) ∧ ∀y Man(y) ∧ ¬Shaves(y, y) ⇒ Shaves(x, y)

i) A person born in the UK, each of whose parents is a UK citizen or a UK resident, is a UK
citizen by birth.
∀x Person(x) ∧ Born(x, UK) ∧ (∀y Parent(y, x) ⇒ ((∃r Citizen(y, UK, r)) ∨
Resident(y, UK))) ⇒ Citizen(x, UK, birth)

j) A person born outside the UK, one of whose parents is a UK citizen by birth, is a UK citizen by
descent.
∀x Person(x) ∧ ¬Born(x, UK) ∧
(∃y Parent(y, x) ∧ Citizen(y, UK, birth)) ⇒ Citizen(x, UK, descent)

k) Politicians can fool some of the people all of the time, and they can fool all of the people some
of the time, but they can’t fool all of the people all of the time.
∀x Politician(x) ⇒
(∃y ∀t Person(y) ∧ Fools(x, y, t)) ∧
(∃t ∀y Person(y) ⇒ Fools(x, y, t)) ∧
¬(∀t ∀y Person(y) ⇒ Fools(x, y, t))
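
These translations can be tested mechanically. As an illustration of what the quantifiers in
sentence (c) mean, here is a toy model check in Python; the domain, the relation, and the string
constants are all our own invention, not part of the exercise:

students = {'Alice', 'Bob'}
takes = {('Alice', 'Greek', 'Spring2001'), ('Bob', 'French', 'Spring2001')}

# ∃x Student(x) ∧ Takes(x, G, Spring2001) ∧ ∀y (y ≠ x ⇒ ¬Takes(y, G, Spring2001))
holds = any(
    (x, 'Greek', 'Spring2001') in takes
    and all((y, 'Greek', 'Spring2001') not in takes
            for y in students if y != x)
    for x in students
)
print(holds)   # True: exactly one student took Greek in this model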

Answer 3

Exercise 9.4 on page 316.

a) {x/A, y/B, z/B} (or some permutation of this).
b) No unifier, since x cannot bind to both A and B.
c) {y/John, x/John}.
d) No unifier, because the occurs-check prevents unification of y with Father(y).
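
The occurs-check in (d) is easy to see mechanically. Below is a minimal sketch of unification in
Python, using our own term encoding (variables are lower-case strings, constants are capitalised
strings, and a compound term such as Father(y) is a tuple); it is an illustration, not the textbook's
pseudo-code.

def is_var(t):
    # Variables are lower-case strings; constants are capitalised strings.
    return isinstance(t, str) and t[0].islower()

def occurs(v, t, s):
    # True if variable v occurs in term t under substitution s.
    if t == v:
        return True
    if is_var(t) and t in s:
        return occurs(v, s[t], s)
    if isinstance(t, tuple):
        return any(occurs(v, arg, s) for arg in t[1:])
    return False

def unify(x, y, s):
    # Return a substitution unifying x and y, or None on failure.
    if s is None:
        return None
    if x == y:
        return s
    if is_var(x):
        if x in s:
            return unify(s[x], y, s)
        if occurs(x, y, s):      # the occurs-check of case (d)
            return None
        return {**s, x: y}
    if is_var(y):
        return unify(y, x, s)
    if (isinstance(x, tuple) and isinstance(y, tuple)
            and len(x) == len(y) and x[0] == y[0]):
        for xi, yi in zip(x[1:], y[1:]):
            s = unify(xi, yi, s)
        return s
    return None

# Matches answer (a) above: prints {'x': 'A', 'y': 'B', 'z': 'B'}.
print(unify(('P', 'A', 'B', 'B'), ('P', 'x', 'y', 'z'), {}))
# Matches answer (d) above: the occurs-check fails, so None is printed.
print(unify(('Knows', ('Father', 'y'), 'y'), ('Knows', 'x', 'x'), {}))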

Answer 4

Exercise 9.18 on page 318.

a) ∀x Horse(x) ⇒ Animal(x)
∀x ∀h Horse(x) ∧ HeadOf(h, x) ⇒ ∃y Animal(y) ∧ HeadOf(h, y)

b) ¬Horse( x) ∨ Animal ( x) 1
Horse(G ) 2
HeadOf ( H , G ) 3
¬Animal ( y ) ∨ ¬HeadOf ( H , y ) 4

Here clause (1) is derived from the first sentence in (a), while the other three come from the negation
of the second sentence. H and G are Skolem constants.

c) (5) ¬Animal(G)       resolve (3) and (4) with {y/G}
(6) ¬Horse(G)           resolve (1) and (5) with {x/G}
(7) contradiction       resolve (6) and (2)
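
As a sanity check, the refutation can be mechanised. The sketch below is our own encoding, not the
book's algorithm: it works with the ground instances of the four clauses (y and x already bound to G),
represents a clause as a frozenset of string literals with '-' marking negation, and saturates until the
empty clause appears.

def resolve(c1, c2):
    # All resolvents of two ground clauses; '-' marks a negative literal.
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith('-') else '-' + lit
        if neg in c2:
            out.append((c1 - {lit}) | (c2 - {neg}))
    return out

def refute(clauses):
    # Saturate under resolution; True as soon as the empty clause is derived.
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:
                        return True      # empty clause: contradiction
                    new.add(r)
        if new <= clauses:
            return False                 # saturated without contradiction
        clauses |= new

clauses = [
    frozenset({'-Horse(G)', 'Animal(G)'}),      # (1) with x/G
    frozenset({'Horse(G)'}),                    # (2)
    frozenset({'HeadOf(H,G)'}),                 # (3)
    frozenset({'-Animal(G)', '-HeadOf(H,G)'}),  # (4) with y/G
]
print(refute(clauses))   # True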


Answer 5

Exercise 18.3 on page 676.

There are many possible answers to this question. The important point is that real-world situations
create a huge number of possible exceptions and the answer should reflect a sufficient number of
them to be usable in the real world. Below is one possible decision tree.

FrontOfQueue?
├─ No:  CarAheadMoving?
│       ├─ No:  NO
│       └─ Yes: YES
└─ Yes: IntersectionBlocked?
        ├─ Yes: NO
        └─ No:  CrossTraffic?
                ├─ Yes: NO
                └─ No:  Pedestrians?
                        ├─ Yes: NO
                        └─ No:  Turning?
                                ├─ No:    YES
                                ├─ Right: OncomingTraffic?
                                │         ├─ No:  YES
                                │         └─ Yes: NO
                                └─ Left:  Cyclist?
                                          ├─ No:  YES
                                          └─ Yes: NO
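
The same tree can be read off as a simple procedure. The sketch below encodes it in Python; the
dictionary-based input format and the attribute names are our own rendering of the node labels above.

def proceed(state):
    # Decide whether to move forward at the intersection.
    if not state['FrontOfQueue']:
        return state['CarAheadMoving']
    if state['IntersectionBlocked'] or state['CrossTraffic'] or state['Pedestrians']:
        return False
    if state['Turning'] == 'Right':
        return not state['OncomingTraffic']
    if state['Turning'] == 'Left':
        return not state['Cyclist']
    return True   # going straight ahead

# At the front of the queue, nothing blocking, turning right, no oncoming traffic:
print(proceed({'FrontOfQueue': True, 'IntersectionBlocked': False,
               'CrossTraffic': False, 'Pedestrians': False,
               'Turning': 'Right', 'OncomingTraffic': False}))   # True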

Answer 6

Exercise 20.11 on page 761.

XOR, and in fact any Boolean function, is easiest to construct using step-function units. Because XOR
is not linearly separable, we will need a hidden layer. It turns out that a single hidden node suffices. To
design the network we can think of the XOR function as OR with the AND case (where both inputs are
true) ruled out. Thus the hidden node computes AND, while the output node computes OR, but we set
the weight between the hidden node and the output node to a negative value, so as to remove the
effect of the AND. The network below illustrates this idea.

Each input is connected both to the hidden node and to the output node, and all four of these
connections have weight 0.3. The hidden node has threshold t = 0.5, so it fires exactly when both
inputs are on (it computes AND). The connection from the hidden node to the output node has weight
-0.6, and the output node has threshold t = 0.2, so it fires when at least one input is on, except that
the AND node then cancels the inputs' contribution (0.3 + 0.3 - 0.6 = 0 < 0.2).
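
A minimal sketch of this network in Python, assuming the weights and thresholds shown above (the
code layout is our own illustration):

def step(x, threshold):
    # Step-function unit: fire iff the weighted input reaches the threshold.
    return 1 if x >= threshold else 0

def xor_net(x1, x2):
    hidden = step(0.3 * x1 + 0.3 * x2, 0.5)                  # AND node, t = 0.5
    return step(0.3 * x1 + 0.3 * x2 - 0.6 * hidden, 0.2)     # output node, t = 0.2

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, '->', xor_net(x1, x2))   # prints the XOR truth table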


The following figure illustrates the output of the network above as a function of its two inputs x1 and x2;
weights and other parameter values are as in the network above. The function is confined to the unit
cube.

[Figure: surface plot of the network output against the inputs x1 and x2 over the unit square; the
output falls back to 0 at (1, 1).]


Unisa
PO Box 392, UNISA, 0003

Copyright © Unisa 2007

In terms of the Copyright Act 98 of 1978 no part of this
material may be reproduced, be stored in a retrieval
system, be transmitted or used in any form or be
published, redistributed or screened by any means
(electronic, mechanical, photocopying, recording or
otherwise) without the prior written permission of Unisa.
However, permission to use in these ways any material in
this work that is derived from other sources must be
obtained from the original sources.

Printed in South Africa by Unisa

