AI - Module III


MODULE III

Knowledge representation - Using predicate logic: representing facts in logic, functions and
predicates, conversion to clause form, resolution in propositional logic, resolution in predicate
logic, unification, question answering, forward and backward chaining.
(Elaine Rich & Kevin Knight)

KNOWLEDGE REPRESENTATION

In order to solve the complex problems encountered in AI, one needs both a large amount of
knowledge and some mechanisms for manipulating that knowledge to create solutions to new
problems. A variety of ways of representing knowledge (facts) have been exploited in AI
programs.

 Facts: truth in some relevant world. These are things we want to represent.
 Representations of facts in some chosen formalism. These are the things we will actually
be able to manipulate.

One way to think of structuring these entities is as two levels:

• The knowledge level, at which facts are described.

• The symbol level, at which representations of objects at the knowledge level are
defined in terms of symbols that can be manipulated by programs.

• There are two kinds of representation mappings:

o The forward representation mapping, which maps from facts to representations.
o The backward representation mapping, which maps the other way.

• One common representation of facts is natural language (particularly English) sentences.
• Regardless of the representation for facts that we use in a program, we may also need to be
concerned with an English representation of those facts in order to facilitate getting
information into and out of the system.



Approach to Knowledge Representation
• There are multiple techniques for knowledge representation.
• Different representation formalisms
– Rules
– Logic
– Natural language
– Database systems
– Semantic nets
– Frames
• Many programs rely on more than one technique.

USING PREDICATE LOGIC


The language of logic is one way of representing facts. The logical formalism is appealing
because it immediately suggests a powerful way of deriving new knowledge from old: we can
conclude that a new statement is true by proving that it follows from the statements that are
already known.

Propositional calculus (Ref: Luger)

Propositional calculus is a language. Using its words, phrases and sentences, we can represent
and reason about properties and relationships in the world. Propositional logic is simple to deal with: we
can easily represent real-world facts as logical propositions written as well-formed formulas (wff's) in
propositional logic.

Propositional calculus symbols

 propositional symbols:
o P, Q, R, S…. (P: it is raining Q: Marcus is a man)

 truth symbols:
o True, False

 connectives:
o ∩, U, ⌐, →, ≡

Propositional calculus sentences: the following are valid sentences.

• Every propositional symbol and truth symbol is a sentence:
P, Q, R
• The negation of a sentence is a sentence:
⌐P, ⌐false
• The disjunction, or or, of two sentences is a sentence:
P U ⌐P
• The conjunction, or and, of two sentences is a sentence:
P ∩ ⌐P
• The implication of one sentence from another is a sentence:
P → Q
• The equivalence of two sentences is a sentence:
P U Q ≡ R

Legal sentences are called well-formed formulas, or WFFs.

Equivalence
Two expressions in propositional calculus are equivalent if they have the same value under all
truth value assignments.

⌐(⌐P) ≡ P
P → Q ≡ ⌐P U Q
⌐(P U Q) ≡ ⌐P ∩ ⌐Q               De Morgan's laws
⌐(P ∩ Q) ≡ ⌐P U ⌐Q
P ∩ Q ≡ Q ∩ P                    Commutative
P U Q ≡ Q U P
(P ∩ Q) ∩ R ≡ P ∩ (Q ∩ R)        Associative
(P U Q) U R ≡ P U (Q U R)
P U (Q ∩ R) ≡ (P U Q) ∩ (P U R)  Distributive
P ∩ (Q U R) ≡ (P ∩ Q) U (P ∩ R)

These identities can be used to change propositional calculus expressions into a syntactically
different but logically equivalent form.

Prove that P → Q ≡ ⌐P U Q

P   Q   P → Q   ⌐P   ⌐P U Q
T   T     T      F      T
T   F     F      F      F
F   T     T      T      T
F   F     T      T      T

Prove that P ↔ Q ≡ (P → Q) ∩ (Q → P)

P   Q   P ↔ Q   P → Q   Q → P   (P → Q) ∩ (Q → P)
T   T     T       T       T             T
T   F     F       F       T             F
F   T     F       T       F             F
F   F     T       T       T             T
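Such equivalences can also be checked mechanically by enumerating all truth assignments. The short Python sketch below is an illustration only (the helper names are our own):

from itertools import product

def implies(p, q):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not p) or q

def equivalent(f, g, n_vars=2):
    # Two formulas are equivalent if they agree under every truth assignment.
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=n_vars))

# P -> Q  is equivalent to  not P or Q
print(equivalent(lambda p, q: implies(p, q),
                 lambda p, q: (not p) or q))          # True

# De Morgan: not (P or Q)  is equivalent to  (not P) and (not Q)
print(equivalent(lambda p, q: not (p or q),
                 lambda p, q: (not p) and (not q)))   # True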



Predicate calculus

In propositional calculus, each atomic symbol (P, Q, etc.) denotes a proposition of some complexity. There
is no way to access the components of an individual assertion. Predicate calculus provides this ability. For
example, instead of letting a single propositional symbol, P, denote the entire sentence "Marcus is a man", we
can create a predicate man that describes the relationship between Marcus and the property of being a man:
man(Marcus)

Through inference rules we can manipulate predicate calculus expressions.


Predicate calculus allows expressions to contain variables.

The syntax of predicates


The symbols of predicate calculus consist of:
1. The set of letters of the English alphabet. (both upper case and lower case)
2. The set of digits, 0, 1, 2….9.
3. The underscore, _.

Symbols are used to denote objects, properties or relations in a world.

• Predicate calculus also allows functions on objects. Functions denote a mapping of one or more
elements in a set (the domain) into a unique element of another set (the range).
• Each function has an associated arity, indicating the number of elements of the domain mapped onto
each element of the range.
Predicate calculus terms
The following are some statements represented in predicate calculus.
f (X, Y)
father (david)
price (bananas)
likes (george, Susie)
friends (bill, george)
likes (X, george)

The predicate symbols in these expressions are f, father, price, likes and friends.
In the predicates above, david, bananas, george, Susie and bill are constant symbols, while X and Y are variables.
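As an illustration only (the nested-tuple encoding and the '?' convention for variables are our own, not part of the calculus), expressions like these can be held in a program as simple data structures:

# A predicate expression as a tuple: (predicate, arg1, arg2, ...)
# Constants are plain strings; variables are tagged with a leading '?'.
likes_fact = ("likes", "george", "susie")     # likes(george, susie)
likes_query = ("likes", "?X", "george")       # likes(X, george)
father_term = ("father", "david")             # father(david)

def is_variable(term):
    # Variables are the terms that unification (introduced later) is allowed to bind.
    return isinstance(term, str) and term.startswith("?")

print(is_variable("?X"), is_variable("george"))   # True False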

Quantifiers
Predicate calculus includes two quantifier symbols, ∀ (for all) and ∃ (there exists).

The universal quantifier, ∀, indicates that the sentence is true for all values of the variable. The
existential quantifier, ∃, indicates that the sentence is true for at least one value in the domain.
The connectives are the same as in propositional calculus: material implication (→), and (∩), or (U) and not (⌐).

o For example, we can use mathematical logic as the representation formalism. Consider the English
sentence below:
• Spot is a dog
o This fact can be represented in logic as follows:
• dog(Spot)
o Suppose we also have a logical representation of the fact that all dogs have tails:

∀x : dog(x) → hastail(x)

o Using the deductive mechanisms of the logic, we may generate the new representation object

hastail(Spot)

o Using an appropriate backward mapping function we could then generate the English sentence:
• Spot has a tail
o Or we could make use of this representation of the new fact to cause us to take some appropriate action
or to derive representations of additional facts.

The following represents some facts in the real world and corresponding predicate calculus
expressions.
1. Marcus was a man.
man(Marcus)
2. Marcus was a Pompeian.
pompeian(Marcus)
3. All Pompeians were Romans.
∀x : pompeian(x) → roman(x)
4. Caesar was a ruler.
ruler(Caesar)
5. All Romans were either loyal to Caesar or hated him.
∀x : roman(x) → loyalto(x, Caesar) U hate(x, Caesar)
6. Everyone is loyal to someone.
∀x : ∃y : loyalto(x, y)
7. People only try to assassinate rulers they are not loyal to.
∀x : ∀y : person(x) ∩ ruler(y) ∩ tryassassinate(x, y) → ⌐loyalto(x, y)
8. Marcus tried to assassinate Caesar.
tryassassinate(Marcus, Caesar)
9. All men are people.
∀x : man(x) → person(x)

Q. Was Marcus loyal to Caesar?


One way to answer this is to try to prove ⌐loyalto(Marcus, Caesar).



⌐loyalto(Marcus, Caesar)
    ↑ (7, substitution)
person(Marcus) ∩ ruler(Caesar) ∩ tryassassinate(Marcus, Caesar)
    ↑ (4)
person(Marcus) ∩ tryassassinate(Marcus, Caesar)
    ↑ (8)
person(Marcus)

 It seems that using 7 and 8, we should be able to prove that Marcus was not loyal to Caesar.
We try to prove this by reasoning backward from the desired goal, as shown above.
• The nil at the end of a proof indicates that the list of conditions remaining to be proved is
empty, and so the proof has succeeded.
• This attempt fails, however, since there is no way to conclude person(Marcus) from man(Marcus).
• We therefore need another fact, 9 (all men are people). With it we can satisfy the last goal and
produce a proof that Marcus was not loyal to Caesar.

Modus ponens

Modus ponens is one of the most commonly used inference rules in logic. It is one of the accepted
mechanisms for the construction of deductive proofs, together with the "rule of definition" and the "rule of
substitution". Modus ponens allows one to eliminate a conditional statement from a logical proof or
argument and thereby not carry its antecedent forward in an ever-lengthening string of symbols; for
this reason modus ponens is sometimes called the rule of detachment, since it can produce shorter
formulas from longer ones.
If the sentences P and P → Q are known to be true, then modus ponens lets us infer Q.

Example
Assume the following observations.
1. “it is raining”.
2. “if it is raining, then the ground will be wet”.

If P denotes "it is raining" and Q denotes "the ground is wet", then 2 becomes P → Q.

Through the application of modus ponens we can infer Q, that is, "the ground is wet".

Example 2

Consider the statements

“Socrates is a man”. And


“all men are mortal”.



These can be represented in predicate calculus.

“Socrates is a man” can be represented by


man(Socrates)

“all men are mortal” can be


∀X (man(X) → mortal(X))

Since the variable X in the implication is universally quantified, we may substitute any value in
the domain for X. By substituting socrates for X in the implication, we infer the expression
man(socrates) → mortal(socrates)
By considering the predicates
man(socrates) and
man(socrates) → mortal(socrates)

Applying modus ponens, we get


mortal(socrates)

That is “Socrates is mortal”.
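A minimal sketch of this two-step inference (universal instantiation followed by modus ponens), using a hypothetical rule/fact encoding of our own and specialized to this single rule:

# Rule: for all X, man(X) -> mortal(X), stored as (premise pattern, conclusion pattern).
rule = (("man", "?X"), ("mortal", "?X"))
fact = ("man", "socrates")

def instantiate(pattern, binding):
    # Replace variables in a pattern with their bound constants.
    return tuple(binding.get(t, t) for t in pattern)

premise, conclusion = rule
# Universal instantiation: substitute socrates for X in the premise.
binding = {"?X": fact[1]} if premise[0] == fact[0] else None
if binding and instantiate(premise, binding) == fact:
    # Modus ponens: the instantiated premise holds, so infer the instantiated conclusion.
    print(instantiate(conclusion, binding))   # ('mortal', 'socrates')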

Computable Functions and Computable Predicates

So far, all the simple facts were expressed as individual predicates, e.g. tryassassinate(Marcus, Caesar).
Suppose we want to express a very large number of facts, such as the following greater-than and
less-than relationships: gt(1,0), gt(2,1), lt(0,1), lt(1,2), etc. We do not want to have to write out the
representation of each of these facts individually. It would be extremely inefficient to store
explicitly a large set of statements when we could, instead, so easily compute each one as
we need it. So it is useful to have computable functions as well as computable predicates. The
next example shows how these ideas of computable functions and predicates can be used. It also allows
equal objects to be substituted for each other whenever it appears helpful to do so during a proof.
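For instance, rather than storing gt(1, 0), gt(2, 1), ... explicitly, a program can evaluate the predicate on demand; the "compute gt" step in the Marcus proof below amounts to nothing more than this (illustrative sketch):

def gt(a, b):
    # Computable predicate: evaluated when needed instead of being stored as facts.
    return a > b

print(gt(1991, 79))   # True, so the literal gt(1991, 79) is satisfied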

Consider the problem of answering questions based on a database of simple facts, such as the following.

1. Marcus was a man.
man(Marcus) ------------------------------------------------------------ (1)
2. Marcus was a Pompeian.
pompeian(Marcus) ------------------------------------------------------- (2)
3. Marcus was born in 40 A.D.
born(Marcus, 40) ------------------------------------------------------- (3)
4. All men are mortal.
∀x : man(x) → mortal(x) ------------------------------------------------ (4)
5. All Pompeians died when the volcano erupted in 79 A.D.
erupted(volcano, 79) ∩ ∀x : [pompeian(x) → died(x, 79)]
which breaks into two parts:
erupted(volcano, 79) --------------------------------------------------- (5)
∀x : [pompeian(x) → died(x, 79)] --------------------------------------- (6)
6. No mortal lives longer than 150 years.
∀x : ∀t1 : ∀t2 : mortal(x) ∩ born(x, t1) ∩ gt(t2 − t1, 150) → dead(x, t2) ----- (7)
7. It is now 1991 A.D.
now = 1991 ------------------------------------------------------------- (8)
8. Alive means not dead.
∀x : ∀t : [alive(x, t) → ⌐dead(x, t)] ∩ [⌐dead(x, t) → alive(x, t)] ----------- (9)
9. If someone dies, then he is dead at all later times.
∀x : ∀t1 : ∀t2 : died(x, t1) ∩ gt(t2, t1) → dead(x, t2) ----------------------- (10)

Now let's attempt to answer the question "Is Marcus alive?" by trying to prove that he is not:

⌐alive(Marcus, now)
    ↑ (9, substitution)
dead(Marcus, now)
    ↑ (10, substitution)
died(Marcus, t1) ∩ gt(now, t1)
    ↑ (6, substitution)
pompeian(Marcus) ∩ gt(now, 79)
    ↑ (2)
gt(now, 79)
    ↑ (8, substitute equals)
gt(1991, 79)
    ↑ (compute gt)
nil

From looking at the proofs we have just shown, two things should be clear

I. Even very simple conclusions can require many steps to prove.


II. A variety of processes, such as matching, substitution and application of modus ponens, are
involved in the production of a proof.

In the next section, we introduce a proof procedure called resolution that reduces some of the
complexity because it operates on statements that have first been converted to a single canonical
form.



RESOLUTION

 Resolution is a technique for proving theorems in the propositional or predicate calculus.

 To prove a statement, resolution attempts to show that the negation of the statement
produces a contradiction with the known statements. That is known as refutation.

• It's a simple iterative process: at each step, two clauses, called the parent clauses, are
compared (resolved), yielding a new clause that has been inferred from them. The new clause
represents the ways in which the two parent clauses interact with each other.

Resolution proofs involve the following steps.

1. Put the premises or axioms into clause form.

2. Add the negation of what is to be proved, in clause form, to the set of axioms.

3. Resolve these clauses together, producing new clauses that logically follow from them.

4. Produce a contradiction by generating the empty clause.

5. The substitutions used to produce the empty clause are those under which the opposite of
the negated goal is true.

Producing the clause form

• The resolution procedure requires all statements in the database to be converted to a
standard form called clause form. The form is referred to as a conjunction of disjuncts
(product of sums), e.g. (a U b) ∩ (a U c).

The following is the algorithm for reducing any set of predicate calculus statements
(wffs) to clause form (conjunctive normal form).

1. First we eliminate the → by using the equivalent form.


For example, a→b ≡ ⌐a U b.
2. Next we reduce the scope of negation.
⌐(⌐a) ≡ a
⌐(∀x) a(x) ≡ (∃x) ⌐a(x)
⌐(∃x) b(x) ≡ (∀x) ⌐b(x)
⌐(a ∩ b) ≡ ⌐a U ⌐b
⌐(a U b) ≡ ⌐a ∩ ⌐b
3. Standardize by renaming all variables so that variables bound by different quantifiers have unique
names. If we have a statement such as

(∀x) a(x) U (∃x) b(x), it is renamed as (∀x) a(x) U (∃y) b(y)
4. Move all quantifiers to the left without changing their order.
5. Eliminate all existential quantifiers by a process called skolemization.

(∀X) (∀Y) (∃Z) (∀W) foo(X, Y, Z, W) is replaced with

(∀X) (∀Y) (∀W) foo(X, Y, f(X, Y), W)

For example, the formula ∃y : president(y) can be transformed into president(S1),
where S1 is a function with no arguments that somehow produces a value that satisfies president.

If the existential quantifier lies within the scope of a universal quantifier, then the value that satisfies
the predicate may depend on the values of the universally quantified variables.

∀x : ∃y : father(y, x) is transformed into ∀x : father(S2(x), x)

6. Drop all universal quantifiers.


7. Convert the expression to the conjunct of disjuncts form using the following equivalences.

a U (b U c) ≡ (a U b) U c
a ∩ (b ∩ c) ≡ (a ∩ b) ∩ c
a ∩ (b U c) is already in clause form.
a U (b ∩ c) ≡ (a U b) ∩ (a U c)

8. Call each conjunct a separate clause. For example,

(a U b) ∩ (a U c)

is separated into the two clauses

a U b and
a U c

9. Standardize the variables apart again. By this we mean rename the variables so that no two
clauses make reference to the same variable.

(∀X) [a(X) ∩ b(X)] ≡ (∀X) a(X) ∩ (∀Y) b(Y)

After performing these nine steps, we will get the expression in clause form.

Example

Consider the following expression.

(∀X) ([a(X) ∩ b(X)] → [c(X, I) ∩ (∃Y) ((∀Z) [c(Y, Z)] → d(X, Y))]) U (∀X) (e(X))

Convert this expression to clause form.

Step 1: Eliminate the →.

(∀X) (⌐[a(X) ∩ b(X)] U [c(X, I) ∩ (∃Y) (⌐(∀Z) [c(Y, Z)] U d(X, Y))]) U (∀X) (e(X))

Step 2: Reduce the scope of negation.

(∀X) ([⌐a(X) U ⌐b(X)] U [c(X, I) ∩ (∃Y) ((∃Z) [⌐c(Y, Z)] U d(X, Y))]) U (∀X) (e(X))

Step 3: Standardize by renaming the variables.

(∀X) ([⌐a(X) U ⌐b(X)] U [c(X, I) ∩ (∃Y) ((∃Z) [⌐c(Y, Z)] U d(X, Y))]) U (∀W) (e(W))

Step 4: Move all quantifiers to the left.

(∀X) (∃Y) (∃Z) (∀W) ([⌐a(X) U ⌐b(X)] U [c(X, I) ∩ (⌐c(Y, Z) U d(X, Y))] U e(W))

Step 5: Eliminate the existential quantifiers.

(∀X) (∀W) ([⌐a(X) U ⌐b(X)] U [c(X, I) ∩ (⌐c(f(X), g(X)) U d(X, f(X)))] U e(W))

Step 6: Drop all universal quantifiers.

[⌐a(X) U ⌐b(X)] U [c(X, I) ∩ (⌐c(f(X), g(X)) U d(X, f(X)))] U e(W)

Step 7: Convert the expression to conjunct of disjuncts form.

[⌐a(X) U ⌐b(X) U c(X, I) U e(W)] ∩

[⌐a(X) U ⌐b(X) U ⌐c(f(X), g(X)) U d(X, f(X)) U e(W)]

Step 8: Call each conjunct a separate clause.

1. ⌐a(X) U ⌐b(X) U c(X, I) U e(W)

2. ⌐a(X) U ⌐b(X) U ⌐c(f(X), g(X)) U d(X, f(X)) U e(W)

Step 9: Standardize the variables apart again.

1. ⌐a(X) U ⌐b(X) U c(X, I) U e(W)

2. ⌐a(V) U ⌐b(V) U ⌐c(f(V), g(V)) U d(V, f(V)) U e(Z)

This is the clause form generated. Since each clause is a separate conjunct and since all the variables are
universally quantified, there need be no relationship between the variables of two clauses, even if they
were generated from the same wff.

After applying this entire procedure to a set of wff’s, we will have a set of clauses, each of which is a
disjunction of literals. These clauses can now be exploited by the resolution procedure to generate proofs.
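For the purely propositional steps of this procedure (eliminating →, reducing the scope of negation and distributing U over ∩), an off-the-shelf tool can do the work; the sketch below uses sympy's to_cnf as an illustration only. The quantifier steps (renaming, skolemization, dropping quantifiers) have no counterpart here and still need the full algorithm above.

from sympy import symbols, Implies
from sympy.logic.boolalg import to_cnf

a, b, c, e = symbols("a b c e")

# A propositional analogue of the kind of wff handled above: (a and b) -> (c and e)
expr = Implies(a & b, c & e)

# Eliminates ->, pushes negation inward and distributes | over & (steps 1, 2 and 7),
# giving e.g. (c | ~a | ~b) & (e | ~a | ~b): two clauses.
print(to_cnf(expr))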

RESOLUTION IN PROPOSITIONAL LOGIC

In propositional logic, the procedure for producing the proof by resolution of proposition P with
respect to a set of axioms F is the following.

Algorithm Propositional resolution:

1. Convert all the propositions of F to clause form.


2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in step 1.
3. Repeat until either a contradiction is found or no progress can be made.
a. Select two clauses. Call these the parent clauses.
b. Resolve them together. The resulting clause, called the resolvent, will be the disjunction of all
of the literals of both of the parent clauses, with the following exception: if there are any pairs
of literals L and ⌐L such that one of the parent clauses contains L and the other contains
⌐L, then select one such pair and eliminate both L and ⌐L from the resolvent.
c. If the resolvent is the empty clause, then a contradiction has been found. If it is not, then
add it to the set of clauses available to the procedure.

Let’s look at a simple example. We want to prove R. First we convert the axioms to clause form.

Given axioms                    Converted to clause form

P                               P                         (1)
(P ∩ Q) → R                     ⌐P U ⌐Q U R               (2)
(S U T) → Q                     ⌐S U Q                    (3)
                                ⌐T U Q                    (4)
T                               T                         (5)

Then we negate R, producing ⌐R, which is already in clause form. We begin by resolving with the clause
⌐R since that is one of the clauses that must be involved in the contradiction we are trying to find.

Although any pair of clauses can be resolved, only those pairs that contain complementary literals will
produce a resolvent that is likely to lead to the goal of producing the empty clause.

A contradiction occurs when a clause becomes so restricted that there is no way it can be true.
This is indicated by generating the empty clause. In order for proposition 2 to be true, one of three things
must be true: ⌐P, ⌐Q, or R. But we are assuming that ⌐R is true, so the only way for proposition 2 to be
true is for one of two things to be true: ⌐P or ⌐Q; that is what the resolvent says. But proposition 1 says P is
true, which means ⌐P cannot be true. This leaves only ⌐Q. Proposition 4 can be true only if ⌐T or Q is true;
since Q cannot be true, the only way for proposition 4 to be true is for ⌐T to be true. But proposition 5 says
that T is true. There is no way for all these clauses to be true in a single interpretation. This is indicated by
the empty clause.
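A minimal Python sketch of this refutation, with clauses represented as frozensets of signed literals (the encoding and helper names are our own):

from itertools import combinations

def resolve(c1, c2):
    # Return all resolvents of two clauses; literals are strings like "P" or "~P".
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return resolvents

def refute(clauses):
    # Saturate the clause set; an empty resolvent means a contradiction was found.
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True          # empty clause: the proof succeeds
                new.add(r)
        if new <= clauses:
            return False                 # no progress: the proof fails
        clauses |= new

# The axioms in clause form, plus the negated goal ~R.
kb = [frozenset({"P"}),
      frozenset({"~P", "~Q", "R"}),
      frozenset({"~S", "Q"}),
      frozenset({"~T", "Q"}),
      frozenset({"T"}),
      frozenset({"~R"})]
print(refute(kb))   # True: R follows from the axioms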

UNIFICATION

To apply inference rules such as modus ponens, an inference system must be able to determine when two
expressions are the same, or match. In propositional logic it is easy to determine complementary
literals by finding L and ⌐L. In predicate calculus, the process of matching two sentences is complicated by
the existence of variables in the expressions. Unification is an algorithm for determining the substitutions
needed to make two predicate calculus expressions match.
The basic idea of unification is very simple. First check whether the initial predicate symbols are the same.
If so, we can proceed; otherwise there is no way to unify them. For example, the two literals
tryassassinate(Marcus, Caesar) and
hate(Marcus, Caesar) cannot be unified. If the predicate symbols match, then we must check the
arguments, one pair at a time.
The only complication in this procedure is that we must find a single, consistent substitution for
the entire literal, not separate ones for each piece of it. For example:

Consider unifying P(x, x) with P(y, z). We first compare x and y and decide to substitute y for x (written y/x),
giving P(y, y) and P(y, z); we then compare y and z and substitute z for y (z/y).
We write the composition of the two substitutions as (z/y)(y/x).
Algorithm: unify (L1, L2)
1. If L1 and L2 are both variables or constants, then:
   (a) if L1 and L2 are identical, then return NIL;
   (b) else if L1 is a variable, then if L1 occurs in L2 return {FAIL}, else return {L2 / L1};
   (c) else if L2 is a variable, then if L2 occurs in L1 return {FAIL}, else return {L1 / L2};
   (d) else return {FAIL}.
2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.
3. If L1 and L2 have a different number of arguments, then return {FAIL}.
4. Set SUBST to NIL.
5. For i ← 1 to the number of arguments in L1:
   a. Call unify with the ith argument of L1 and the ith argument of L2, putting the result in S.
   b. If S contains FAIL, then return {FAIL}.
   c. If S is not equal to NIL, then:
      1. Apply S to the remainder of both L1 and L2.
      2. SUBST := APPEND(S, SUBST).
6. Return SUBST.
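A compact Python sketch of the same idea, using a '?' prefix to mark variables and nested tuples for terms (the encoding is our own; the occurs check corresponds to step 1 of the algorithm):

def is_variable(t):
    return isinstance(t, str) and t.startswith("?")

def occurs(var, term):
    # Occurs check: a variable may not be bound to a term containing it.
    if var == term:
        return True
    return isinstance(term, tuple) and any(occurs(var, a) for a in term)

def substitute(term, subst):
    # Apply the substitution built so far to a term.
    if is_variable(term):
        return substitute(subst[term], subst) if term in subst else term
    if isinstance(term, tuple):
        return tuple(substitute(a, subst) for a in term)
    return term

def unify(l1, l2, subst=None):
    # Return a substitution (dict) that unifies l1 and l2, or None for FAIL.
    subst = {} if subst is None else subst
    l1, l2 = substitute(l1, subst), substitute(l2, subst)
    if l1 == l2:
        return subst
    if is_variable(l1):
        return None if occurs(l1, l2) else {**subst, l1: l2}
    if is_variable(l2):
        return None if occurs(l2, l1) else {**subst, l2: l1}
    if isinstance(l1, tuple) and isinstance(l2, tuple) and len(l1) == len(l2):
        for a, b in zip(l1, l2):        # predicate symbol and arguments, pairwise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                         # mismatched constants or predicate symbols

print(unify(("P", "?x", "?x"), ("P", "?y", "?z")))   # {'?x': '?y', '?y': '?z'}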

RESOLUTION IN PREDICATE LOGIC

Unification gives us an easy way of determining whether two literals are contradictory – they are if one of them
can be unified with the negation of the other. For example, man(x) and ⌐man(Spot) are contradictory,
since man(x) and man(Spot) can be unified.

Algorithm Predicate resolution:

1. Convert all the propositions of F to clause form.


2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in step 1.
3. Repeat until either a contradiction is found or no progress can be made.
a. Select two clauses. Call these the parent clauses.
b. Resolve them together. The resulting clause, called the resolvent, will be the disjunction of all
of the literals of both of the parent clauses, with appropriate substitutions and with the
following exception: if there is a pair of literals T1 and ⌐T2 such that one of the parent
clauses contains T1, the other contains ⌐T2, and T1 and T2 are unifiable, then T1 and T2 are
called complementary literals. Use the substitution produced by the unification to create the
resolvent, and omit the complementary literals from it. If there is more than one pair of
complementary literals, only one pair should be omitted from the resolvent.
c. If the resolvent is the empty clause, then a contradiction has been found. If it is not, then
add it to the set of clauses available to the procedure.
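As an illustration only (our own encoding: flat literals as (sign, predicate, arguments...) tuples with '?'-prefixed variables and no nested function terms), a single predicate-resolution step unifies one pair of complementary literals and applies the resulting substitution to the remaining literals of both parents:

def unify_args(args1, args2):
    # Very small unifier over constants and '?variables' (flat terms only).
    if len(args1) != len(args2):
        return None
    subst = {}
    for a, b in zip(args1, args2):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        if a.startswith("?"):
            subst[a] = b
        elif b.startswith("?"):
            subst[b] = a
        else:
            return None                  # two different constants: FAIL
    return subst

def apply_subst(subst, literal):
    sign, pred, *args = literal
    return (sign, pred, *[subst.get(a, a) for a in args])

def resolve(c1, c2):
    # Find one pair of complementary literals, unify them, and build the resolvent.
    for l1 in c1:
        for l2 in c2:
            if l1[1] == l2[1] and l1[0] != l2[0]:        # same predicate, opposite sign
                subst = unify_args(l1[2:], l2[2:])
                if subst is not None:
                    rest = [l for l in c1 if l != l1] + [l for l in c2 if l != l2]
                    return [apply_subst(subst, l) for l in rest]
    return None

mortal_rule = [("-", "man", "?x"), ("+", "mortal", "?x")]   # ~man(x) U mortal(x)
fact = [("+", "man", "Spot")]
print(resolve(mortal_rule, fact))   # [('+', 'mortal', 'Spot')]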

Figure (a) shows the result of the clause form conversion and figure (b) shows the resolution proof of the
statement.

Prove: hate(Marcus, Caesar), i.e. Marcus hates Caesar.

Question Answering
AI theorem-proving techniques can also be applied to the problem of answering questions.
As we have already suggested, this seems natural, since both deriving theorems from axioms and
deriving new facts (answers) from old facts employ the process of deduction.

Resolution can be used to answer yes/no questions, such as "Is Marcus alive?". It can also be used
to answer fill-in-the-blank questions, such as "When did Marcus die?", i.e.

died(Marcus, ??)

"Is Marcus alive?" is a yes/no question. To answer it, we prove that Marcus is not alive now.

Prove: ⌐alive(Marcus, now)

Fill-in-the-blank question: "When did Marcus die?"

died(Marcus, ??), i.e. died(Marcus, t), where t is a date.

Whatever value of the date we use in producing that contradiction is the answer we want.

• Fig. (a) shows the resolution process used to find the answer. The answer to the
question can then be derived from the chain of unifications that leads back to the
starting clause.
• We can eliminate the backward lookup by simply adding the expression that we
are trying to prove, as in Fig. (b).
• We can tag it with a special marker so that it will not interfere with the resolution
process.
• It just gets carried along, but each time unification is done, the dummy expression
will also change, since it is part of the actual clause.

FORWARD AND BACKWARD CHAINING (Ref: Russell)

• The object of a search procedure is to discover a path through a problem space
from an initial configuration to a goal state.

• A search procedure can work in two directions:

1. Forward, from the start states.
2. Backward, from the goal states.
Forward chaining

The forward chaining method is very simple:

Start with the atomic sentences in the knowledge base and apply modus
ponens in the forward direction, adding new atomic sentences, until no
further inferences can be made.

Deductive inference rule: [Modus ponens]

Forward chaining: conclude from "A" and "A implies B" to "B".

A
A -> B
--------
B

Example: It is raining. (A)
If it is raining, the street is wet. (A -> B)
The street is wet. (B)

Problem: Does situation Z exist or not?
The first rule that fires is A->D, because A is already in the database, so D is inferred and added. The
existence of C and D then causes the second rule, C&D->F, to fire, and as a consequence F is inferred and
placed in the database. This, in turn, causes the third rule, F&B->Z, to fire, placing Z in the database.
This technique is called forward chaining.

Forward chaining Algorithm


Given m facts F1, F2, ..., Fm and n rules R1, R2, ..., Rn:

repeat
    for i ← 1 to n do
        if one or more current facts match the antecedent of Ri then
            1) add the new fact(s) defined by the consequent
            2) flag the rule that has been fired
            3) increase m
until no new facts have been produced
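A small Python sketch of this loop, run on the example rules A->D, C&D->F and F&B->Z (the encoding is our own):

def forward_chain(facts, rules):
    # Repeatedly fire any unfired rule whose antecedents are all in the fact base.
    facts = set(facts)
    fired = set()
    changed = True
    while changed:
        changed = False
        for i, (antecedents, consequent) in enumerate(rules):
            if i not in fired and antecedents <= facts:
                facts.add(consequent)     # add the new fact defined by the consequent
                fired.add(i)              # flag the rule that has been fired
                changed = True
    return facts

rules = [({"A"}, "D"), ({"C", "D"}, "F"), ({"F", "B"}, "Z")]
print("Z" in forward_chain({"A", "B", "C"}, rules))   # True: situation Z is established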

Problem with forward chaining:


many rules may be applicable at each step, and the whole process is not directed towards a goal.

 Backward chaining

These algorithms work backward from the goal, chaining through rules to find known
facts that support the proof. The list of goals can be thought of as a stack waiting to be worked
on: if all of them can be satisfied, then the current branch of the proof succeeds. The backward
chaining algorithm takes the first goal in the list and finds every clause in the knowledge base
whose positive literal or head, unifies with the goal. Each such clause creates a new recursive
call in which the premise, or body, of the clause is added to the goal stack.
Abductive inference rule:

Backward chaining: conclude from "B" and "A implies B" to "A".

B
A -> B
--------
A

Example: The street is wet. (B)
If it is raining, the street is wet. (A -> B)
It is raining. (A)

With this inference method the system starts with what it wants to prove, e.g., that situation Z
exists, and only executes rules that are relevant to establishing it. Figure following shows how
backward chaining would work using the rules from the forward chaining example.

In step 1 the system is told to establish (if it can) that situation Z exists. It first checks the
data base for Z, and when that fails, searches for rules that conclude Z, i.e., have Z on the right
side of the arrow. It finds the rule F&B->Z, and decides that it must establish F and B in order to
conclude Z.

In step 2 the system tries to establish F, first checking the data base and then finding a rule
that concludes F. From this rule, C&D->F, the system decides it must establish C and D to
conclude F.

In step 3 the system finds C in the data base but decides it must establish A before it can
conclude D. It then finds A in the data base.

In steps 4 through 6 the system executes the third rule to establish D, and then executes the
second rule to establish the original goal, Z. The inference chain created here is identical to the
one created by forward chaining. The difference between the two approaches lies in the order in
which data and rules are searched.
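A matching sketch of the goal-directed search over the same rules, again with our own encoding; it recursively tries to establish every antecedent of a rule that concludes the goal (no cycle handling, which is enough for this example):

def backward_chain(goal, facts, rules):
    # A goal succeeds if it is a known fact, or if some rule concluding it
    # has all of its antecedents established recursively.
    if goal in facts:
        return True
    for antecedents, consequent in rules:
        if consequent == goal and all(backward_chain(a, facts, rules) for a in antecedents):
            return True
    return False

rules = [({"A"}, "D"), ({"C", "D"}, "F"), ({"F", "B"}, "Z")]
print(backward_chain("Z", {"A", "B", "C"}, rules))   # True: Z can be established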

