
Chapter 3

SLD-Resolution

This chapter introduces the inference mechanism which is the basis of most logic
programming systems. The idea is a special case of the inference rule called the
resolution principle — an idea that was first introduced by J. A. Robinson in the
mid-sixties for a richer language than definite programs. As a consequence, only a
specialization of this rule, which applies to definite programs, is presented here. For
reasons to be explained later, it will be called the SLD-resolution principle.
In the previous chapter the model-theoretic semantics of definite programs was
discussed. The SLD-resolution principle makes it possible to draw correct conclusions
from the program, thus providing a foundation for a logically sound operational se-
mantics of definite programs. This chapter first defines the notion of SLD-resolution
and then shows its correctness with respect to the model-theoretic semantics. Finally
SLD-resolution is shown to be an instance of a more general notion involving the
construction of proof trees.

3.1 Informal Introduction


Every inference rule of a logical system formalizes some natural way of reasoning.
The presentation of the SLD-resolution principle is therefore preceded by an informal
discussion about the underlying reasoning techniques.
The sentences of logic programs have a general structure of logical implication:

A0 ← A1 , . . . , An (n ≥ 0)

where A0 , . . . , An are atomic formulas and where A0 may be absent (in which case it
is a goal clause). Consider the following definite program that describes a world where
“parents of newborn children are proud”, “Adam is the father of Mary” and “Mary is
newborn”:


proud (X) ← parent(X, Y ), newborn(Y ).
parent(X, Y ) ← father (X, Y ).
parent(X, Y ) ← mother (X, Y ).
father (adam, mary).
newborn(mary).

Notice that this program describes only “positive knowledge” — it does not state who
is not proud. Nor does it convey what it means for someone not to be a parent. The
problem of expressing negative knowledge will be investigated in detail in Chapter 4
when extending definite programs with negation.
Say now that we want to ask the question “Who is proud?”. The question concerns
the world described by the program P , that is, the intended model of P . We would
of course like to see the answer “Adam” to this question. However, as discussed in
the previous chapters, predicate logic does not provide the means for expressing
interrogative sentences of this type; only declarative ones. Therefore the question may be
formalized as the goal clause:

← proud (Z) (G0 )

which is an abbreviation for ∀ Z ¬proud (Z) which in turn is equivalent to:

¬ ∃ Z proud (Z)

whose reading is “Nobody is proud”. That is, a negative answer to the query above.
The aim now is to show that this answer is a false statement in every model of P (and
in particular in the intended model). Then by Proposition 1.13 it can be concluded
that P |= ∃ Z proud (Z). Alas this would result only in a “yes”-answer to the original
question, while the expected answer is “Adam”. Thus, the objective is rather to find
a substitution θ such that the set P ∪ {¬ proud (Z)θ} is unsatisfiable, or equivalently
such that P |= proud (Z)θ.
The starting point of reasoning is the assumption G0 — “For any Z, Z is not
proud”. Inspection of the program reveals a rule describing one condition for someone
to be proud:

proud (X) ← parent(X, Y ), newborn(Y ). (C0 )

Its equivalent logical reading is:

∀(¬proud (X) ⊃ ¬(parent (X, Y ) ∧ newborn(Y )))

Renaming X into Z, elimination of universal quantification and the use of modus
ponens with respect to G0 yields:

¬ (parent (Z, Y ) ∧ newborn(Y ))

or equivalently:

← parent (Z, Y ), newborn(Y ). (G1 )


3.1 Informal Introduction 35

Thus, one step of reasoning amounts to replacing a goal G0 by another goal G1 which
is true in any model of P ∪ {G0 }. It now remains to be shown that P ∪ {G1 } is
unsatisfiable. Note that G1 is equivalent to:

∀ Z ∀ Y (¬parent (Z, Y ) ∨ ¬newborn(Y ))

Thus G1 can be shown to be unsatisfiable with P if in every model of P there is some
individual who is a parent of a newborn child. Thus, check first whether there are any
parents at all. The program contains a clause:

parent (X, Y ) ← father (X, Y ). (C1 )

which is equivalent to:

∀(¬parent (X, Y ) ⊃ ¬father (X, Y ))

Thus, G1 reduces to:

← father (Z, Y ), newborn(Y ). (G2 )

The new goal G2 can be shown to be unsatisfiable with P if in every model of P there
is some individual who is a father of a newborn child. The program states that “Adam
is the father of Mary”:

father (adam, mary). (C2 )

Thus it remains to be shown that “Mary is not newborn” is unsatisfiable together with
P:

← newborn(mary). (G3 )

But the program also contains a fact:

newborn(mary). (C3 )

equivalent to ¬newborn(mary) ⊃ □ leading to a refutation:

□ (G4 )

The way of reasoning used in this example is as follows: to show existence of something,
assume the contrary and use modus ponens and elimination of the universal quantifier
to find a counter-example for the assumption. This is a general idea to be used in
computations of logic programs. As illustrated above, a single computation (reasoning)
step transforms a set of atomic formulas — that is, a definite goal — into a new set of
atoms. (See Figure 3.1.) It uses a selected atomic formula p(s1 , . . . , sn ) of the goal and
a selected program clause of the form p(t1 , . . . , tn ) ← A1 , . . . , Am (where m ≥ 0 and
A1 , . . . , Am are atoms) to find a common instance of p(s1 , . . . , sn ) and p(t1 , . . . , tn ). In
other words a substitution θ is constructed such that p(s1 , . . . , sn )θ and p(t1 , . . . , tn )θ
are identical. Such a substitution is called a unifier and the problem of finding unifiers
will be discussed in the next section. The new goal is constructed from the old one by
replacing the selected atom by the set of body atoms of the clause and applying θ to all
36 Chapter 3: SLD-Resolution

← proud (Z).
      proud (X) ← parent (X, Y ), newborn(Y ).
← parent (Z, Y ), newborn(Y ).
      parent (X, Y ) ← father (X, Y ).
← father (Z, Y ), newborn(Y ).
      father (adam, mary).
← newborn(mary).
      newborn(mary).
□

Figure 3.1: Refutation of ← proud (Z).

atoms obtained in that way. This basic computation step can be seen as an inference
rule since it transforms logic formulas. It will be called the resolution principle for
definite programs or SLD-resolution principle. As illustrated above it combines in a
special way modus ponens with the elimination rule for the universal quantifier.
At the last step of reasoning the empty goal, corresponding to falsity, is obtained.
The final conclusion then is the negation of the initial goal. Since this goal is of the
form ∀ ¬(A1 ∧ · · · ∧ Am ), the conclusion is equivalent (by De Morgan’s laws) to the
formula ∃(A1 ∧ · · · ∧ Am ). The final conclusion can be obtained by the inference rule
known as reductio ad absurdum. Every step of reasoning produces a substitution.
Unsatisfiability of the original goal ← A1 , . . . , Am with P is demonstrated in k steps
by showing that its instance:

← (A1 , . . . , Am )θ1 · · · θk

is unsatisfiable, or equivalently that:

P |= (A1 ∧ · · · ∧ Am )θ1 · · · θk

In the example discussed, the goal “Nobody is proud” is unsatisfiable with P since
its instance “Adam is not proud” is unsatisfiable with P . In other words — in every
model of P the sentence “Adam is proud” is true.
It is worth noticing that the unifiers may leave some variables unbound. In this
case the universal closure of (A1 ∧ · · · ∧ Am )θ1 · · · θk is a logical consequence of P .
Examples of such answers will appear below.
Notice also that generally the computation steps are not deterministic — any atom
of a goal may be selected and there may be several clauses matching the selected atom.
Another potential source of non-determinism concerns the existence of alternative
unifiers for two atoms. These remarks suggest that it may be possible to construct
(sometimes infinitely) many solutions, i.e. counter-examples for the initial goal. On
the other hand it may also happen that the selected atom has no matching clause.
If so, it means that, using this method, it is not possible to construct any counter-
example for the initial goal. The computation may also loop without producing any
solution.

3.2 Unification
As demonstrated in the previous section, one of the main ingredients in the inference
mechanism is the process of making two atomic formulas syntactically equivalent. Be-
fore defining the notion of SLD-resolution we focus on this process, called unification,
and give an algorithmic solution — a procedure that takes two atomic formulas as
input and either shows how they can be instantiated to identical atoms or reports
failure.
Before considering the problem of unifying atoms (and terms), consider an ordinary
equation over the natural numbers (N ) such as:

2x + 3 ≐ 4y + 7 (5)

The equation has a set of solutions; that is, valuations ϕ: {x, y} → N such that ϕℑ(2x +
3) = ϕℑ(4y + 7) where ℑ is the standard interpretation of the arithmetic symbols. In
this particular example there are infinitely many solutions ({x ↦ 2, y ↦ 0} and
{x ↦ 4, y ↦ 1} etc.) but by a sequence of syntactic transformations that preserve
the set of all solutions the equation may be transformed into a new equation that
compactly represents all solutions to the original equation:

x ≐ 2(y + 1) (6)

The transformations exploit domain knowledge (such as commutativity, associativity
etc.) specific to the particular interpretation. In a logic program there is generally
no such knowledge available and the question arises how to compute the solutions
of an equation without any knowledge about the interpretation of the symbols. For
example:

f (X, g(Y )) ≐ f (a, g(X)) (7)

Clearly it is no longer possible to apply all the transformations that were applied above
since the interpretation of f /2, g/1 is no longer fixed. However, any solution of the
equations:

{X ≐ a, g(Y ) ≐ g(X)} (8)

must clearly be a solution of equation (7). Similarly, any solution of:

{X ≐ a, Y ≐ X} (9)

must be a solution of equations (8). Finally any solution of:

{X ≐ a, Y ≐ a} (10)

is a solution of (9). By analogy to (6) this is a compact representation of some solutions
to equation (7). However, whether it represents all solutions depends on how the

symbols f /2, g/1 and a are interpreted. For example, if f /2 denotes integer addition,
g/1 the successor function and a the integer zero, then (10) represents only one solution
to equation (7). However, equation (7) has infinitely many integer solutions — any ϕ
such that ϕ(Y ) = 0 is a solution.
On the other hand, consider a Herbrand interpretation ℑ. Solving of an equation
s ≐ t amounts to finding a valuation ϕ such that ϕℑ(s) = ϕℑ(t). Now a valuation in
the Herbrand domain is a mapping from variables of the equations to ground terms
(that is, a substitution) and the interpretation of a ground term is the term itself.
Thus, a solution in the Herbrand domain is a grounding substitution ϕ such that
sϕ and tϕ are identical ground terms. This brings us to the fundamental concept of
unification and unifiers:
Definition 3.1 (Unifier) Let s and t be terms. A substitution θ such that sθ and
tθ are identical (denoted sθ = tθ) is called a unifier of s and t.
The search for a unifier of two terms, s and t, will be viewed as the process of solving
the equation s ≐ t. Therefore, more generally, if {s1 ≐ t1 , . . . , sn ≐ tn } is a set of
equations, then θ is called a unifier of the set if si θ = ti θ for all 1 ≤ i ≤ n. For instance,
the substitution {X/a, Y /a} is a unifier of equation (7). It is also a unifier of (8)–(10).
In fact, it is the only unifier as long as “irrelevant” variables are not introduced. (For
instance, {X/a, Y /a, Z/a} is also a unifier.) The transformations informally used in
steps (7)–(10) preserve the set of all solutions in the Herbrand domain. (The full set
of transformations will soon be presented.) Note that a solution to a set of equations
is a (grounding) unifier. Thus, if a set of equations has a unifier then the set also has
a solution.
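To make Definition 3.1 concrete, here is a small Python sketch (an assumed encoding, not part of the book): variables are upper-case strings, compound terms are tuples (functor, arg1 , . . .), constants are 0-ary functors such as ('a',), and a substitution is a dict. Applying the unifier {X/a, Y /a} to both sides of equation (7) yields identical instances:

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def apply_subst(t, theta):
    """Apply a substitution (a dict {variable: term}) to a term, in one
    simultaneous pass, as substitution application requires."""
    if is_var(t):
        return theta.get(t, t)
    return (t[0],) + tuple(apply_subst(a, theta) for a in t[1:])

# Equation (7): f(X, g(Y)) = f(a, g(X)), with constants as 0-ary functors.
s = ('f', 'X', ('g', 'Y'))
t = ('f', ('a',), ('g', 'X'))

theta = {'X': ('a',), 'Y': ('a',)}   # the unifier {X/a, Y/a}
assert apply_subst(s, theta) == apply_subst(t, theta)   # identical instances
```

The encoding is only one of many possible choices; any representation that distinguishes variables from functors would do.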
However, not all sets of equations have a solution/unifier. For instance, the set
{sum(1, 1) ≐ 2} is not unifiable. Intuitively sum may be thought of as integer addition,
but bear in mind that the symbols have no predefined interpretation in a logic program.
(In Chapters 13–14 more powerful notions of unification are discussed.)
It is often the case that a set of equations has more than one unifier. For instance,
both {X/g(Z), Y /Z} and {X/g(a), Y /a, Z/a} are unifiers of the set {f (X, Y ) ≐
f (g(Z), Z)}. Under the first unifier the terms instantiate to f (g(Z), Z) and under the
second unifier the terms instantiate to f (g(a), a). The second unifier is in a sense
more restrictive than the first, as it makes the two terms ground whereas the first
still provides room for some alternatives in that it does not specify how Z should be
bound. We say that {X/g(Z), Y /Z} is more general than {X/g(a), Y /a, Z/a}. More
formally this can be expressed as follows:
Definition 3.2 (Generality of substitutions) A substitution θ is said to be more
general than a substitution σ (denoted σ ⪯ θ) iff there exists a substitution ω such
that σ = θω.

Definition 3.3 (Most general unifier) A unifier θ is said to be a most general
unifier (mgu) of two terms iff θ is more general than any other unifier of the terms.

Definition 3.4 (Solved form) A set of equations {X1 ≐ t1 , . . . , Xn ≐ tn } is said
to be in solved form iff X1 , . . . , Xn are distinct variables none of which appear in
t1 , . . . , tn .

There is a close correspondence between a set of equations in solved form and the most
general unifier(s) of that set as shown by the following theorem:
Proposition 3.5 Let {X1 ≐ t1 , . . . , Xn ≐ tn } be a set of equations in solved form.
Then {X1 /t1 , . . . , Xn /tn } is an (idempotent) mgu of the solved form.

Proof : First define:

E := {X1 ≐ t1 , . . . , Xn ≐ tn }
θ := {X1 /t1 , . . . , Xn /tn }

Clearly θ is an idempotent unifier of E. It remains to be shown that θ is more general
than any other unifier of E.
Thus, assume that σ is a unifier of E. Then Xi σ = ti σ for 1 ≤ i ≤ n. It must
follow that Xi /ti σ ∈ σ for 1 ≤ i ≤ n. In addition σ may contain some additional pairs
Y1 /s1 , . . . , Ym /sm such that {X1 , . . . , Xn } ∩ {Y1 , . . . , Ym } = ∅. Thus, σ is of the form:

{X1 /t1 σ, . . . , Xn /tn σ, Y1 /s1 , . . . , Ym /sm }

Now θσ = σ. Thus, there exists a substitution ω (viz. σ) such that σ = θω. Therefore,
θ is an idempotent mgu.
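The key step of the proof, that θσ = σ for any unifier σ of E, can be illustrated with a small Python sketch (an assumed encoding, not the book's notation). Composition θω applies ω to every binding of θ, adds the bindings of ω for variables not bound by θ, and drops identity pairs:

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def apply_subst(t, theta):
    if is_var(t):
        return theta.get(t, t)
    return (t[0],) + tuple(apply_subst(a, theta) for a in t[1:])

def compose(theta, omega):
    """theta followed by omega: apply omega to theta's bindings, add omega's
    bindings for variables not bound by theta, and drop identity pairs X/X."""
    result = {x: apply_subst(t, omega) for x, t in theta.items()}
    for y, s in omega.items():
        result.setdefault(y, s)
    return {x: t for x, t in result.items() if t != x}

# The solved form {X = g(Y), Z = a} yields theta = {X/g(Y), Z/a} ...
theta = {'X': ('g', 'Y'), 'Z': ('a',)}
assert compose(theta, theta) == theta      # ... which is idempotent

# For another unifier sigma of the solved form, theta.sigma = sigma,
# so sigma = theta.omega with omega = sigma: theta is more general.
sigma = {'X': ('g', ('b',)), 'Y': ('b',), 'Z': ('a',)}
assert compose(theta, sigma) == sigma
```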

Definition 3.6 (Equivalence of sets of equations) Two sets of equations E1 and
E2 are said to be equivalent if they have the same set of unifiers.

Note that two equivalent sets of equations must have the same set of solutions in any
Herbrand interpretation.
The definition can be used as follows: to compute a most general unifier mgu(s, t)
of two terms s and t, first try to transform the equation {s ≐ t} into an equivalent
solved form. If this fails then mgu(s, t) = failure. However, if there is a solved form
{X1 ≐ t1 , . . . , Xn ≐ tn } then mgu(s, t) = {X1 /t1 , . . . , Xn /tn }.
Figure 3.2 presents a (non-deterministic) algorithm which takes as input a set of
equations E and terminates returning either a solved form equivalent to E or failure
if no such solved form exists. Note that constants are viewed as function symbols of
arity 0. Thus, if an equation c ≐ c gets selected, the equation is simply removed by
case 1. Before proving the correctness of the algorithm some examples are used to
illustrate the idea:
Example 3.7 The set {f (X, g(Y )) ≐ f (g(Z), Z)} has a solved form since:

{f (X, g(Y )) ≐ f (g(Z), Z)} ⇒ {X ≐ g(Z), g(Y ) ≐ Z}
                             ⇒ {X ≐ g(Z), Z ≐ g(Y )}
                             ⇒ {X ≐ g(g(Y )), Z ≐ g(Y )}

The set {f (X, g(X), b) ≐ f (a, g(Z), Z)}, on the other hand, does not have a solved
form since:

{f (X, g(X), b) ≐ f (a, g(Z), Z)} ⇒ {X ≐ a, g(X) ≐ g(Z), b ≐ Z}
                                  ⇒ {X ≐ a, g(a) ≐ g(Z), b ≐ Z}

Input: A set E of equations.
Output: An equivalent set of equations in solved form or failure.

repeat
    select an arbitrary s ≐ t ∈ E;
    case s ≐ t of
        f (s1 , . . . , sn ) ≐ f (t1 , . . . , tn ) where n ≥ 0 ⇒
            replace equation by s1 ≐ t1 , . . . , sn ≐ tn ;        % case 1
        f (s1 , . . . , sm ) ≐ g(t1 , . . . , tn ) where f /m ≠ g/n ⇒
            halt with failure;                                     % case 2
        X ≐ X ⇒
            remove the equation;                                   % case 3
        t ≐ X where t is not a variable ⇒
            replace equation by X ≐ t;                             % case 4
        X ≐ t where X ≠ t and X has more than one occurrence in E ⇒
            if X is a proper subterm of t then
                halt with failure                                  % case 5a
            else
                replace all other occurrences of X by t;           % case 5b
    esac
until no action is possible on any equation in E;
halt with E;

Figure 3.2: Solved form algorithm

                                  ⇒ {X ≐ a, a ≐ Z, b ≐ Z}
                                  ⇒ {X ≐ a, Z ≐ a, b ≐ Z}
                                  ⇒ {X ≐ a, Z ≐ a, b ≐ a}
                                  ⇒ failure

The algorithm fails since case 2 applies to b ≐ a. Finally consider:

{f (X, g(X)) ≐ f (Z, Z)} ⇒ {X ≐ Z, g(X) ≐ Z}
                         ⇒ {X ≐ Z, g(Z) ≐ Z}
                         ⇒ {X ≐ Z, Z ≐ g(Z)}
                         ⇒ failure

The set does not have a solved form since Z is a proper subterm of g(Z).
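For readers who want to experiment, the solved form algorithm of Figure 3.2 can be rendered in Python roughly as follows. This is a sketch under an assumed encoding (variables are upper-case strings, compound terms are tuples (functor, arg1 , . . .), constants are 0-ary functors such as ('a',)); the selection of equations is a simple left-to-right scan rather than arbitrary, which is one permitted instance of the non-deterministic algorithm:

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def occurs(x, t):
    """True iff variable x occurs in term t."""
    if is_var(t):
        return x == t
    return any(occurs(x, a) for a in t[1:])

def substitute(t, x, r):
    """Replace every occurrence of variable x in term t by r."""
    if is_var(t):
        return r if t == x else t
    return (t[0],) + tuple(substitute(a, x, r) for a in t[1:])

def solved_form(equations):
    """Return a solved form equivalent to the input equations, or None (failure)."""
    eqs = list(equations)
    while True:
        for i, (s, t) in enumerate(eqs):
            if not is_var(s) and not is_var(t):
                if s[0] == t[0] and len(s) == len(t):          # case 1
                    eqs[i:i + 1] = list(zip(s[1:], t[1:]))
                else:                                          # case 2: clash
                    return None
                break
            if not is_var(s):                                  # case 4: t = X
                eqs[i] = (t, s)
                break
            if s == t:                                         # case 3: X = X
                del eqs[i]
                break
            elsewhere = occurs(s, t) or any(                   # X occurs more than once?
                occurs(s, l) or occurs(s, r)
                for j, (l, r) in enumerate(eqs) if j != i)
            if elsewhere:
                if occurs(s, t):                               # case 5a: occur-check
                    return None
                eqs = [(l, r) if j == i else                   # case 5b
                       (substitute(l, s, t), substitute(r, s, t))
                       for j, (l, r) in enumerate(eqs)]
                break
        else:
            return eqs                                         # no case applies: solved form

# The three sets of Example 3.7: solved form, clash failure, occur-check failure.
assert solved_form([(('f', 'X', ('g', 'Y')), ('f', ('g', 'Z'), 'Z'))]) \
    == [('X', ('g', ('g', 'Y'))), ('Z', ('g', 'Y'))]
assert solved_form([(('f', 'X', ('g', 'X'), ('b',)),
                    ('f', ('a',), ('g', 'Z'), 'Z'))]) is None
assert solved_form([(('f', 'X', ('g', 'X')), ('f', 'Z', 'Z'))]) is None
```

On the first set the sketch reproduces the solved form {X ≐ g(g(Y )), Z ≐ g(Y )} derived in Example 3.7.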

Theorem 3.8 The solved form algorithm in Figure 3.2 terminates and returns an
equivalent solved form or failure if no such solved form exists.

Proof : First consider termination: Note that case 5b is the only case that may increase
the number of symbol occurrences in the set of equations. However, case 5b can be
applied at most once for each variable X. Thus, case 5b can be applied only a finite

number of times and may introduce only a finite number of new symbol occurrences.
Case 2 and case 5a terminate immediately and case 1 and 3 strictly decrease the
number of symbol occurrences in the set. Since case 4 cannot be applied indefinitely,
but has to be intertwined with the other cases it follows that the algorithm always
terminates.
It should be evident that the algorithm either returns failure or a set of equations
in solved form. Thus, it remains to be shown that each iteration of the algorithm
preserves equivalence between successive sets of equations. It is easy to see that if
case 2 or 5a apply to some equation in:
{s1 ≐ t1 , . . . , sn ≐ tn } (E1 )

then the set cannot possibly have a unifier. It is also easy to see that if any of case 1, 3
or 4 apply, then the new set of equations has the same set of unifiers. Finally assume
that case 5b applies to some equation si ≐ ti . Then the new set is of the form:

{s1 θ ≐ t1 θ, . . . , si−1 θ ≐ ti−1 θ, si ≐ ti , si+1 θ ≐ ti+1 θ, . . . , sn θ ≐ tn θ} (E2 )

where θ := {si /ti }. First assume that σ is a unifier of E1 — that is, sj σ = tj σ for
every 1 ≤ j ≤ n. In particular, it must hold that si σ = ti σ. Since si is a variable
which is not a subterm of ti it must follow that si /ti σ ∈ σ. Moreover, θσ = σ and it
therefore follows that σ is a unifier also of E2 .
Next, assume that σ is a unifier of E2 . Thus, si /ti σ ∈ σ and θσ = σ which must
then be a unifier also of E1 .

The algorithm presented in Figure 3.2 may be very inefficient. One of the reasons is
case 5a; that is, checking if a variable X occurs inside another term t. This is often
referred to as the occur-check. Assume that the time of occur-check is linear with
respect to the size |t| of t.¹ Consider application of the solved form algorithm to the
equation:

g(X1 , . . . , Xn ) ≐ g(f (X0 , X0 ), f (X1 , X1 ), . . . , f (Xn−1 , Xn−1 ))

where X0 , . . . , Xn are distinct. By case 1 this reduces to:

{X1 ≐ f (X0 , X0 ), X2 ≐ f (X1 , X1 ), . . . , Xn ≐ f (Xn−1 , Xn−1 )}

Assume that the equation selected in step i is of the form Xi ≐ f (. . . , . . .). Then in the
k-th iteration the selected equation is of the form Xk ≐ Tk where Ti+1 := f (Ti , Ti ) and
T0 := X0 . Hence, |Ti+1 | = 2|Ti | + 1. That is, |Tn | > 2^n . This shows the exponential
dependency of the unification time on the length of the structures. In this example
the growth of the argument lengths is caused by duplication of subterms. As a matter
of fact, the same check is repeated many times, something that could be avoided by
sharing various instances of the same structure. In the literature one can find linear
algorithms but they are sometimes quite elaborate. On the other hand, Prolog systems
usually “solve” the problem simply by omitting the occur-check during unification.
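The blow-up just described can be observed directly. In the following Python sketch (an assumed encoding, not part of the book), Ti+1 := f (Ti , Ti ) as above and the size function counts symbol occurrences as in the footnote; the recurrence |Ti+1 | = 2|Ti | + 1 with |T0 | = 1 gives |Tn | = 2^(n+1) − 1, so a naive occur-check that traverses Tn takes exponentially many steps in n:

```python
def size(t):
    """Total number of constant, variable and functor occurrences in a term."""
    if isinstance(t, str):          # a variable
        return 1
    return 1 + sum(size(a) for a in t[1:])

t, n = 'X0', 10                     # T0 := X0
for _ in range(n):
    t = ('f', t, t)                 # T_{i+1} := f(T_i, T_i)

# |T_n| = 2^(n+1) - 1, which exceeds 2^n.
assert size(t) == 2 ** (n + 1) - 1
```

Note that building t above is cheap because Python shares the two references to each subterm; it is only the symbol-by-symbol traversal, as performed by a naive occur-check, that is exponential.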
Roughly speaking such an approach corresponds to a solved form algorithm where
case 5a–b is replaced by:
¹ The size of a term t is the total number of constant, variable and functor occurrences in t.

X ≐ t where X ≠ t and X has more than one occurrence in E ⇒
    replace all other occurrences of X by t;           % case 5

A pragmatic justification for this solution is the fact that rule 5a (occur-check) is never
used during the computation of many Prolog programs. There are sufficient conditions
which guarantee this, but in general this property is undecidable. The ISO Prolog
standard (1995) states that the result of unification is undefined if case 5a can be
applied to the set of equations. Strictly speaking, removing case 5a causes looping
of the algorithm on equations where case 5a would otherwise apply. For example, an
attempt to solve X ≐ f (X) by the modified algorithm will produce a new equation
X ≐ f (f (X)). However, case 5 is once again applicable, yielding X ≐ f (f (f (f (X))))
and so forth.
and so forth. In practice many Prolog systems do not loop, but simply bind X to
the infinite structure f (f (f (. . .))). (The notation X/f (∞) will be used to denote this
binding.) Clearly, {X/f (∞)} is an infinite “unifier” of X and f (X). It can easily
be represented in the computer by a finite cyclic data structure. But this amounts
to generalization of the concepts of term, substitution and unifier for the infinite case
not treated in classical logic. Implementation of unification without occur-check may
result in unsoundness as will be illustrated in Example 3.21.

Before concluding the discussion about unification we study the notion of most general
unifier in more detail. It turns out that the notion of mgu is a subtle one; for instance,
there is generally not a unique most general unifier of two terms s and t. A trivial
example is the equation f (X) ≐ f (Y ) which has at least two mgu’s; namely {X/Y }
and {Y /X}. Part of the confusion stems from the fact that ⪯ (“being more general
than”) is not an ordering relation. It is reflexive: that is, any substitution θ is “more
general” than itself since θ = θε. As might be expected it is also transitive: if θ1 = θ2 ω1
and θ2 = θ3 ω2 then obviously θ1 = θ3 ω2 ω1 . However, ⪯ is not anti-symmetric. For
instance, consider the substitution θ := {X/Y, Y /X} and the identity substitution ε.
The latter is obviously more general than θ since θ = εθ. But θ is also more general
than ε, since ε = θθ. It may seem odd that two distinct substitutions are more general
than one another. Still there is a rational explanation. First consider the following
definition:
Definition 3.9 (Renaming) A substitution {X1 /Y1 , . . . , Xn /Yn } is called a renam-
ing substitution iff Y1 , . . . , Yn is a permutation of X1 , . . . , Xn .
A renaming substitution represents a bijective mapping between variables (or more
generally terms). Such a substitution always preserves the structure of a term; if θ
is a renaming and t a term, then tθ and t are equivalent but for the names of the
variables. Now, the fact that a renaming represents a bijection implies that there
must be an inverse mapping. Indeed, if {X1 /Y1 , . . . , Xn /Yn } is a renaming then
{Y1 /X1 , . . . , Yn /Xn } is its inverse. We denote the inverse of θ by θ−1 and observe
that θθ−1 = θ−1 θ = ε.
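These properties are easy to check mechanically. In the following Python sketch (an assumed encoding, not the book's notation) a renaming is a dict from variables to variables, and composing a renaming with its inverse yields the identity substitution ε, represented by the empty dict:

```python
def is_renaming(theta):
    """Definition 3.9: the bound variables map onto a permutation of themselves."""
    return sorted(theta.keys()) == sorted(theta.values())

def inverse(theta):
    """Swap each pair X/Y into Y/X."""
    return {y: x for x, y in theta.items()}

def compose(theta, omega):
    """Composition of variable-to-variable substitutions; X/X pairs are dropped."""
    result = {x: omega.get(y, y) for x, y in theta.items()}
    for z, w in omega.items():
        result.setdefault(z, w)
    return {x: y for x, y in result.items() if x != y}

theta = {'X': 'Y', 'Y': 'Z', 'Z': 'X'}       # Y, Z, X is a permutation of X, Y, Z
assert is_renaming(theta)
assert compose(theta, inverse(theta)) == {}  # theta . theta^-1 = epsilon
assert compose(inverse(theta), theta) == {}  # theta^-1 . theta = epsilon
```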
Proposition 3.10 Let θ be an mgu of s and t and assume that ω is a renaming.
Then θω is an mgu of s and t.
The proof of the proposition is left as an exercise. So is the proof of the following
proposition:

Proposition 3.11 Let θ and σ be substitutions. If θ ⪯ σ and σ ⪯ θ then there exists
a renaming substitution ω such that σ = θω (and θ = σω−1 ).
Thus, according to the above propositions, the set of all mgu’s of two terms is closed
under renaming.

3.3 SLD-Resolution
The method of reasoning discussed informally in Section 3.1 can be summarized as
the following inference rule:
∀ ¬ (A1 ∧ · · · ∧ Ai−1 ∧ Ai ∧ Ai+1 ∧ · · · ∧ Am )        ∀ (B0 ← B1 ∧ · · · ∧ Bn )
──────────────────────────────────────────────────────────────
∀ ¬ (A1 ∧ · · · ∧ Ai−1 ∧ B1 ∧ · · · ∧ Bn ∧ Ai+1 ∧ · · · ∧ Am )θ

or (using logic programming notation):

← A1 , . . . , Ai−1 , Ai , Ai+1 , . . . , Am        B0 ← B1 , . . . , Bn
──────────────────────────────────────────────
← (A1 , . . . , Ai−1 , B1 , . . . , Bn , Ai+1 , . . . , Am )θ
where
(i) A1 , . . . , Am are atomic formulas;
(ii) B0 ← B1 , . . . , Bn is a (renamed) definite clause in P (n ≥ 0);
(iii) mgu(Ai , B0 ) = θ.
The rule has two premises — a goal clause and a definite clause. Notice that each
of them is separately universally quantified. Thus the scopes of the quantifiers are
disjoint. On the other hand, there is only one universal quantifier in the conclusion
of the rule. Therefore it is required that the sets of variables in the premises are
disjoint. Since all variables of the premises are bound it is always possible to rename
the variables of the definite clause to satisfy this requirement (that is, to apply some
renaming substitution to it).
The goal clause may include several atomic formulas which unify with the head
of some clause in the program. In this case it may be desirable to introduce some
deterministic choice of the selected atom Ai for unification. In what follows it is
assumed that this is given by some function which for a given goal selects the subgoal
for unification. The function is called the selection function or the computation rule.
It is sometimes desirable to generalize this concept so that, in one situation, the
computation rule selects one subgoal from a goal G but, in another situation, selects
another subgoal from G. In that case the computation rule is not a function on goals
but something more complicated. However, for the purpose of this book this extra
generality is not needed.
The inference rule presented above is the only one needed for definite programs. It
is a version of the inference rule called the resolution principle, which was introduced
by J. A. Robinson in 1965. The resolution principle applies to clauses. Since definite
clauses are restricted clauses the corresponding restricted form of resolution presented
below is called SLD-resolution (Linear resolution for Definite clauses with Selection
function).

Next the use of the SLD-resolution principle is discussed for a given definite pro-
gram P . The starting point, as exemplified in Section 3.1, is a definite goal clause G0
of the form:

← A1 , . . . , Am (m ≥ 0)

From this goal a subgoal Ai is selected (if possible) by the computation rule. A new
goal clause G1 is constructed by selecting (if possible) some renamed program clause
B0 ← B1 , . . . , Bn (n ≥ 0) whose head unifies with Ai (resulting in an mgu θ1 ). If so,
G1 will be of the form:

← (A1 , . . . , Ai−1 , B1 , . . . , Bn , Ai+1 , . . . , Am )θ1

(According to the requirement above, the variables of the program clause are being
renamed so that they are different from those of G0 .) Now it is possible to apply
the resolution principle to G1 thus obtaining G2 , etc. This process may or may not
terminate. There are two cases when it is not possible to obtain Gi+1 from Gi :
• the first is when the selected subgoal cannot be resolved (i.e. is not unifiable)
with the head of any program clause;
• the other case appears when Gi = □ (i.e. the empty goal).
The process described above results in a finite or infinite sequence of goals starting
with the initial goal. At every step a program clause (with renamed variables) is used
to resolve the subgoal selected by the computation rule ℜ and an mgu is created.
Thus, the full record of a reasoning step would be a pair ⟨Gi , Ci ⟩, i ≥ 0, where Gi is a
goal and Ci a program clause with renamed variables. Clearly, the computation rule
ℜ together with Gi and Ci determines (up to renaming of variables) the mgu (to be
denoted θi+1 ) produced at the (i + 1)-th step of the process. A goal Gi+1 is said to be
derived (directly) from Gi and Ci via ℜ (or alternatively, Gi and Ci resolve into Gi+1 ).

Definition 3.12 (SLD-derivation) Let G0 be a definite goal, P a definite program
and ℜ a computation rule. An SLD-derivation of G0 (using P and ℜ) is a finite or
infinite sequence of goals:

G0 ⇒C0 G1 ⇒C1 · · · ⇒Cn−1 Gn ⇒ · · ·

where each Gi+1 is derived directly from Gi and a renamed program clause Ci via ℜ.

Note that since there are usually infinitely many ways of renaming a clause there are
formally infinitely many derivations. However, some of the derivations differ only in
the names of the variables used. To avoid some technical problems and to make the
renaming of variables in a derivation consistent, the variables in the clause Ci of a
derivation are renamed by adding the subscript i to every variable in the clause. In
what follows we consider only derivations where this renaming strategy is used.

Each finite SLD-derivation of the form:

G0 ⇒C0 G1 ⇒C1 · · · ⇒Cn−1 Gn

yields a sequence θ1 , . . . , θn of mgu’s. The composition:

θ := θ1 θ2 · · · θn if n > 0
θ := ε if n = 0

of mgu’s is called the computed substitution of the derivation.

Example 3.13 Consider the initial goal ← proud (Z) and the program discussed in
Section 3.1.

G0 : ← proud (Z).
C0 : proud (X0 ) ← parent (X0 , Y0 ), newborn(Y0 ).

Unification of proud (Z) and proud (X0 ) yields e.g. the mgu θ1 = {X0 /Z}. Assume that
a computation rule which always selects the leftmost subgoal is used (if nothing else is
said, this computation rule is used also in what follows). Such a computation rule will
occasionally be referred to as Prolog’s computation rule since this is the computation
rule used by most Prolog systems. The first derivation step yields:

G1 : ← parent (Z, Y0 ), newborn(Y0 ).
C1 : parent (X1 , Y1 ) ← father (X1 , Y1 ).

In the second resolution step the mgu θ2 = {X1 /Z, Y1 /Y0 } is obtained. The derivation
then proceeds as follows:

G2 : ← father (Z, Y0 ), newborn(Y0 ).
C2 : father (adam, mary).

G3 : ← newborn(mary).
C3 : newborn(mary).

G4 : □
The computed substitution of this derivation is:

θ1 θ2 θ3 θ4 = {X0 /Z}{X1 /Z, Y1 /Y0 }{Z/adam, Y0 /mary}ε
            = {X0 /adam, X1 /adam, Y1 /mary, Z/adam, Y0 /mary}

A derivation like the one above is often represented graphically as in Figure 3.1.
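The whole derivation of Example 3.13 can be reproduced by a miniature SLD-interpreter. The following Python sketch (an assumed rendering, not part of the book; variables are upper-case strings, compound terms tuples, constants 0-ary functors) uses Prolog's computation rule, tries clauses in textual order, renames clause variables with the step number as in the text, and performs unification with occur-check. For the goal ← proud (Z) it computes the answer binding Z to adam:

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def apply_subst(t, theta):
    """Apply substitution theta to term t, following chains of bindings."""
    while is_var(t) and t in theta:
        t = theta[t]
    if is_var(t):
        return t
    return (t[0],) + tuple(apply_subst(a, theta) for a in t[1:])

def occurs(x, t):
    if is_var(t):
        return x == t
    return any(occurs(x, a) for a in t[1:])

def unify(s, t, theta):
    """Extend theta to an mgu of s and t, or return None (with occur-check)."""
    s, t = apply_subst(s, theta), apply_subst(t, theta)
    if s == t:
        return theta
    if is_var(s):
        return None if occurs(s, t) else dict(theta, **{s: t})
    if is_var(t):
        return None if occurs(t, s) else dict(theta, **{t: s})
    if s[0] != t[0] or len(s) != len(t):
        return None
    for a, b in zip(s[1:], t[1:]):
        theta = unify(a, b, theta)
        if theta is None:
            return None
    return theta

def rename(t, i):
    """Rename variables by adding the derivation step number as a suffix."""
    if is_var(t):
        return t + '_' + str(i)
    return (t[0],) + tuple(rename(a, i) for a in t[1:])

def solve(goals, program, theta, step=0):
    """Yield the computed substitution of every SLD-refutation of the goals."""
    if not goals:
        yield theta
        return
    selected, rest = goals[0], goals[1:]           # Prolog's computation rule
    for head, body in program:                     # clauses tried in textual order
        theta1 = unify(selected, rename(head, step), theta)
        if theta1 is not None:
            new_goals = [rename(b, step) for b in body] + rest
            yield from solve(new_goals, program, theta1, step + 1)

# The program of Section 3.1 as (head, body) pairs:
program = [
    (('proud', 'X'), [('parent', 'X', 'Y'), ('newborn', 'Y')]),
    (('parent', 'X', 'Y'), [('father', 'X', 'Y')]),
    (('parent', 'X', 'Y'), [('mother', 'X', 'Y')]),
    (('father', ('adam',), ('mary',)), []),
    (('newborn', ('mary',)), []),
]

theta = next(solve([('proud', 'Z')], program, {}))
assert apply_subst('Z', theta) == ('adam',)        # computed answer: Z = adam
```

Note that this sketch enumerates refutations depth-first in clause order, so a program with an infinite derivation before the first refutation would loop, just as discussed at the end of Section 3.1.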

Example 3.14 Consider the following definite program:

1: grandfather (X, Z) ← father (X, Y ), parent (Y, Z).
2: parent (X, Y ) ← father (X, Y ).
3: parent (X, Y ) ← mother (X, Y ).
4: father (a, b).
5: mother (b, c).

← grandfather (a, X).
      grandfather (X0 , Z0 ) ← father (X0 , Y0 ), parent (Y0 , Z0 ).
← father (a, Y0 ), parent (Y0 , X).
      father (a, b).
← parent (b, X).
      parent (X2 , Y2 ) ← mother (X2 , Y2 ).
← mother (b, X).
      mother (b, c).
□

Figure 3.3: SLD-derivation

Figure 3.3 depicts a finite SLD-derivation of the goal ← grandfather (a, X) (again using
Prolog’s computation rule).

SLD-derivations that end in the empty goal (and the bindings of variables in the initial
goal of such derivations) are of special importance since they correspond to refutations
of (and provide answers to) the initial goal:

Definition 3.15 (SLD-refutation) A (finite) SLD-derivation:

G0 ⇒C0 G1 ⇒C1 · · · ⇒Cn Gn+1

where Gn+1 = □ is called an SLD-refutation of G0 .

Definition 3.16 (Computed answer substitution) The computed substitution of
an SLD-refutation of G0 restricted to the variables in G0 is called a computed answer
substitution for G0 .

In Examples 3.13 and 3.14 the computed answer substitutions are {Z/adam} and
{X/c} respectively.
For a given initial goal G0 and computation rule, the sequence G1, . . . , Gn+1 of
goals in a finite derivation G0 ⇒(C0) G1 ⇒(C1) · · · ⇒(Cn) Gn+1 is determined (up to renaming
of variables) by the sequence C0, . . . , Cn of (renamed) program clauses used. This is
particularly interesting in the case of refutations. Let:

G0 ⇒(C0) G1 ⇒(C1) · · · ⇒(Cn−1) Gn ⇒(Cn) □

be a refutation. It turns out that if the computation rule is changed there still exists
another refutation:

G0 ⇒(C0′) G1′ ⇒(C1′) · · · ⇒(C′n−1) Gn′ ⇒(Cn′) □

← grandfather(a, X).
    ⇓  grandfather(X0, Z0) ← father(X0, Y0), parent(Y0, Z0).
← father(a, Y0), parent(Y0, X).
    ⇓  father(a, b).
← parent(b, X).
    ⇓  parent(X2, Y2) ← father(X2, Y2).
← father(b, X).

Figure 3.4: Failed SLD-derivation

of G0 which has the same computed answer substitution (up to renaming of variables)
and where the sequence C0′, . . . , Cn′ of clauses used is a permutation of the sequence
C0 , . . . , Cn . This property will be called independence of the computation rule and it
will be discussed further in Section 3.6.
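Independence of the computation rule can be observed experimentally. The following self-contained Python sketch (our own illustration: variables are capitalized strings, atoms are tuples; these names are not from any Prolog system) runs the query of Example 3.14 under both leftmost and rightmost selection and compares the computed answers:

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(clause, i):
    head, body = clause
    def r(t):
        if is_var(t):
            return t + "#" + str(i)
        if isinstance(t, tuple):
            return (t[0],) + tuple(r(x) for x in t[1:])
        return t
    return r(head), [r(b) for b in body]

def solve(goals, program, s, select, step=0):
    if not goals:
        yield s
        return
    k = select(goals)                       # the computation rule picks a subgoal
    selected, rest = goals[k], goals[:k] + goals[k + 1:]
    for clause in program:
        head, body = rename(clause, step)
        s1 = unify(selected, head, s)
        if s1 is not None:
            yield from solve(body + rest, program, s1, select, step + 1)

def answer(t, s):
    t = walk(t, s)
    return (t[0],) + tuple(answer(x, s) for x in t[1:]) if isinstance(t, tuple) else t

program = [
    (("grandfather", "X", "Z"), [("father", "X", "Y"), ("parent", "Y", "Z")]),
    (("parent", "X", "Y"), [("father", "X", "Y")]),
    (("parent", "X", "Y"), [("mother", "X", "Y")]),
    (("father", "a", "b"), []),
    (("mother", "b", "c"), []),
]
goal = [("grandfather", "a", "X")]

leftmost = [answer("X", s) for s in solve(goal, program, {}, lambda g: 0)]
rightmost = [answer("X", s) for s in solve(goal, program, {}, lambda g: len(g) - 1)]
# both selection rules compute the same answer: ["c"]
```

The two searches visit different goals (and the rightmost one explores a failing branch), but the computed answers coincide, as the independence property predicts.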
Not all SLD-derivations lead to refutations. As already pointed out, if the selected
subgoal cannot be unified with any clause, it is not possible to extend the derivation
any further:

Definition 3.17 (Failed derivation) A derivation of a goal clause G0 whose last
element is not empty and cannot be resolved with any clause of the program is called
a failed derivation.

Figure 3.4 depicts a failed derivation of the program and goal in Example 3.14. Since
the selected literal (the leftmost one) does not unify with the head of any clause in
the program, the derivation is failed. Note that a derivation is failed even if some
subgoal other than the selected one unifies with a clause head.
By a complete derivation we mean a refutation, a failed derivation or an infinite
derivation. As shown above, a given initial goal clause G0 may have many complete
derivations via a given computation rule <. This happens if the selected subgoal of
some goal can be resolved with more than one program clause. All such derivations
may be represented by a possibly infinite tree called the SLD-tree of G0 (using P and
<).

Definition 3.18 (SLD-tree) Let P be a definite program, G0 a definite goal and
< a computation rule. The SLD-tree of G0 (using P and <) is a (possibly infinite)
labelled tree satisfying the following conditions:

• the root of the tree is labelled by G0 ;

• if the tree contains a node labelled by Gi and there is a renamed clause Ci ∈ P
  such that Gi+1 is derived from Gi and Ci via < then the node labelled by Gi
  has a child labelled by Gi+1. The edge connecting them is labelled by Ci.

← grandfather(a, X).
        |
← father(a, Y0), parent(Y0, X).
        |
← parent(b, X).
       /  \
← father(b, X).    ← mother(b, X).

Figure 3.5: SLD-tree of ← grandfather(a, X)

The nodes of an SLD-tree are thus labelled by goals of a derivation. The edges are
labelled by the clauses of the program. There is in fact a one-to-one correspondence
between the paths of the SLD-tree and the complete derivations of G0 under a fixed
computation rule <. The sequence:
G0 ⇒(C0) G1 ⇒(C1) · · · ⇒(Ck−1) Gk ⇒(Ck) · · ·

is a complete derivation of G0 via < iff there exists a path of the SLD-tree of the form
G0, G1, . . . , Gk, . . . such that for every i, the edge ⟨Gi, Gi+1⟩ is labelled by Ci. Usually
this label is abbreviated (e.g. by numbering the clauses of the program) or omitted
when drawing the tree. Additional labelling with the mgu θi+1 or some part of it may
also be included.
Example 3.19 Consider again the program of Example 3.14. The SLD-tree of the
goal ← grandfather (a, X) is depicted in Figure 3.5.
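The leaves of such an SLD-tree can be enumerated mechanically. A self-contained sketch (our own encoding and helper names, leftmost computation rule; an illustration, not an efficient implementation):

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(clause, i):
    head, body = clause
    r = lambda t: t + "_" + str(i) if is_var(t) else \
        ((t[0],) + tuple(r(x) for x in t[1:]) if isinstance(t, tuple) else t)
    return r(head), [r(b) for b in body]

def leaves(goals, program, s, step=0):
    """Classify the leaves of the SLD-tree under leftmost selection."""
    if not goals:
        yield "refutation"
        return
    children = 0
    for clause in program:
        head, body = rename(clause, step)
        s1 = unify(goals[0], head, s)
        if s1 is not None:
            children += 1
            yield from leaves(body + goals[1:], program, s1, step + 1)
    if children == 0:
        yield "failed"

program = [
    (("grandfather", "X", "Z"), [("father", "X", "Y"), ("parent", "Y", "Z")]),
    (("parent", "X", "Y"), [("father", "X", "Y")]),
    (("parent", "X", "Y"), [("mother", "X", "Y")]),
    (("father", "a", "b"), []),
    (("mother", "b", "c"), []),
]
result = sorted(leaves([("grandfather", "a", "X")], program, {}))
# result == ["failed", "refutation"]: one failed branch (← father(b, X))
# and one refutation, matching the two leaves in Figure 3.5
```

Note that this traversal terminates only because this particular SLD-tree is finite.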
The SLD-trees of a goal clause G0 are often distinct for different computation rules. It
may even happen that the SLD-tree for G0 under one computation rule is finite whereas
the SLD-tree of the same goal under another computation rule is infinite. However,
the independence of computation rules means that for every refutation path in one
SLD-tree there exists a refutation path in the other SLD-tree with the same length
and with the same computed answer substitution (up to renaming). The sequences of
clauses labelling both paths are permutations of one another.

3.4 Soundness of SLD-resolution


The method of reasoning presented informally in Section 3.1 was formalized as the
SLD-resolution principle in the previous section. As a matter of fact one more inference

rule is used after construction of a refutation. It applies the computed substitution
of the refutation to the body of the initial goal to get the final conclusion. This is
the most interesting part of the process since if the initial goal is seen as a query, the
computed substitution of the refutation restricted to its variables is an answer to this
query. It is therefore called a computed answer substitution. In this context it is also
worth noticing the case when no answer substitution exists for a given query. Prolog
systems may sometimes discover this and deliver a “no” answer. The logical meaning
of “no” will be discussed in the next chapter.
As discussed in Chapter 1, the introduction of formal inference rules raises the
questions of their soundness and completeness. Soundness is an essential property
which guarantees that the conclusions produced by the system are correct. Correctness
in this context means that they are logical consequences of the program. That is, that
they are true in every model of the program. Recall the discussion of Chapter 2 —
a definite program describes many “worlds” (i.e. models), including the one which
is meant by the user, the intended model. Soundness is necessary to be sure that
the conclusions produced by any refutation are true in every world described by the
program, in particular in the intended one.
This raises the question concerning the soundness of the SLD-resolution principle.
The discussion in Section 3.1 gives some arguments which may be used in a formal
proof. However, the intermediate conclusions produced at every step of refutation are
of little interest for the user of a definite program. Therefore the soundness of SLD-
resolution is usually understood as correctness of computed answer substitutions. This
can be stated as the following theorem (due to Clark (1979)).

Theorem 3.20 (Soundness of SLD-resolution) Let P be a definite program, < a
computation rule and θ an <-computed answer substitution for a goal ← A1, . . . , Am.
Then ∀ ((A1 ∧ · · · ∧ Am)θ) is a logical consequence of the program.

Proof : Any computed answer substitution is obtained by a refutation of the goal via <.
The proof is based on induction over the number of resolution steps of the refutation.
First consider refutations of length one. This is possible only if m = 1 and A1
resolves with some fact A with the mgu θ1 . Hence A1 θ1 is an instance of A. Now let
θ be θ1 restricted to the variables in A1 . Then A1 θ = A1 θ1 . It is a well-known fact
that the universal closure of an instance of a formula F is a logical consequence of the
universal closure of F (cf. exercise 1.9, Chapter 1). Hence the universal closure of A1 θ
is a logical consequence of the clause A and consequently of the program P .
Next, assume that the theorem holds for refutations with n − 1 steps. Take a
refutation with n steps of the form:

G0 ⇒(C0) G1 ⇒(C1) · · · ⇒(Cn−2) Gn−1 ⇒(Cn−1) □
where G0 is the original goal clause ← A1 , . . . , Am .
Now, assume that Aj is the selected atom in the first derivation step and that C0
is a (renamed) clause B0 ← B1 , . . . , Bk (k ≥ 0) in P . Then Aj θ1 = B0 θ1 and G1 has
to be of the form:

← (A1 , . . . , Aj−1 , B1 , . . . , Bk , Aj+1 , . . . , Am )θ1



By the induction hypothesis the formula:

∀ (A1 ∧ . . . ∧ Aj−1 ∧ B1 ∧ . . . ∧ Bk ∧ Aj+1 ∧ . . . ∧ Am )θ1 · · · θn (11)

is a logical consequence of the program. It follows by definition of logical consequence
that also the universal closure of:

(B1 ∧ . . . ∧ Bk )θ1 · · · θn (12)

is a logical consequence of the program. By (11):

∀ (A1 ∧ . . . ∧ Aj−1 ∧ Aj+1 ∧ . . . ∧ Am )θ1 · · · θn (13)

is a logical consequence of P . Now because of (12) and since:

∀ (B0 ← B1 ∧ . . . ∧ Bk )θ1 · · · θn

is a logical consequence of the program (being an instance of a clause in P) it follows
that:

∀ B0 θ1 · · · θn (14)

is a logical consequence of P . Hence by (13) and (14):

∀ (A1 ∧ . . . ∧ Aj−1 ∧ B0 ∧ Aj+1 ∧ . . . ∧ Am )θ1 · · · θn (15)

is also a logical consequence of the program. But since θ1 is a most general unifier of
B0 and Aj , B0 can be replaced by Aj in (15). Now let θ be θ1 · · · θn restricted to the
variables in A1 , . . . , Am then:

∀ (A1 ∧ . . . ∧ Am )θ

is a logical consequence of P , which concludes the proof.

It should be noticed that the theorem does not hold if the unifier is computed by a
“unification” algorithm without occur-check. For illustration consider the following
example.

Example 3.21 A term is said to be f-constructed with a term T if it is of the form
f(T, Y) for any term Y. A term X is said to be bizarre if it is f-constructed with
itself. (As discussed in Section 3.2 there are no “bizarre” terms since no term can
include itself as a proper subterm.) Finally a term X is said to be crazy if it is the
second direct substructure of a bizarre term. These statements can be formalized as
the following definite program:

f constructed(f (T, Y ), T ).
bizarre(X) ← f constructed(X, X).
crazy(X) ← bizarre(f (Y, X)).

Now consider the goal ← crazy(X) — representing the query “Are there any crazy
terms?”. There is only one complete SLD-derivation (up to renaming). Namely:

G0 : ← crazy(X)
C0 : crazy(X0) ← bizarre(f(Y0, X0))

G1 : ← bizarre(f(Y0, X))
C1 : bizarre(X1) ← f constructed(X1, X1)

G2 : ← f constructed(f(Y0, X), f(Y0, X))

The only subgoal in G2 does not unify with the first program clause because of the
occur-check. This corresponds to our expectations: Since, in the intended model, there
are no bizarre terms, there cannot be any crazy terms. Since SLD-resolution is sound,
if there were any answers to G0 they would be correct also in the intended model.
Assume now that a “unification” algorithm without occur-check is used. Then the
derivation can be extended as follows:

G2 : ← f constructed(f(Y0, X), f(Y0, X))
C2 : f constructed(f(T2, Y2), T2)

G3 : □
The “substitution” obtained in the last step is {X/Y2 , Y0 /f (∞, Y2 ), T2 /f (∞, Y2 )} (see
Section 3.2). The resulting answer substitution is {X/Y2 }. In other words the conclu-
sion is that every term is crazy, which is not true in the intended model. Thus it is
not a logical consequence of the program which shows that the inference is no longer
sound.
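The role of the occur-check can be illustrated directly. In the sketch below (our own illustration: variables are capitalized strings, compound terms are tuples, and the predicate is written f_constructed), unifying the selected subgoal of G2 with the renamed head of the first clause fails precisely because of the occur-check, whereas a “unification” without it succeeds with a circular binding:

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    # does variable v occur in term t (under substitution s)?
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, x, s) for x in t[1:])
    return False

def unify(a, b, s, occur_check=True):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        if occur_check and occurs(a, b, s):
            return None          # a would be bound to a term containing itself
        return {**s, a: b}
    if is_var(b):
        return unify(b, a, s, occur_check)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s, occur_check)
            if s is None:
                return None
        return s
    return None

# the selected subgoal of G2 and the (renamed) head of the first clause
g2 = ("f_constructed", ("f", "Y0", "X"), ("f", "Y0", "X"))
head = ("f_constructed", ("f", "T2", "Y2"), "T2")

print(unify(g2, head, {}))                     # None: the occur-check fails
print(unify(g2, head, {}, occur_check=False))  # a circular "substitution"
```

With the occur-check the derivation fails, as required for soundness; without it the binding T2 ↦ f(Y0, X) together with Y0 ↦ T2 encodes an infinite term.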

3.5 Completeness of SLD-resolution


Another important problem is whether all correct answers for a given goal (i.e. all
logical consequences) can be obtained by SLD-resolution. The answer is given by the
following theorem, called the completeness theorem for SLD-resolution (due to Clark
(1979)).

Theorem 3.22 (Completeness of SLD-resolution) Let P be a definite program,
← A1, . . . , An a definite goal and < a computation rule. If P |= ∀(A1 ∧ · · · ∧ An)σ,
there exists a refutation of ← A1, . . . , An via < with the computed answer substitution
θ such that (A1 ∧ · · · ∧ An)σ is an instance of (A1 ∧ · · · ∧ An)θ.

The proof of the theorem is not very difficult but is rather long and requires some
auxiliary notions and lemmas. It is therefore omitted. The interested reader is referred
to e.g. Apt (1990), Lloyd (1987), Stärk (1990) or Doets (1994).
Theorem 3.22 shows that although not all correct answers are computed using
SLD-resolution, every correct answer is an instance of some computed answer. This is
due to the fact that only most general unifiers — not arbitrary unifiers — are computed
in derivations. However, every particular correct answer is a special instance of some
computed answer, since all unifiers can always be obtained by further instantiation of
a most general unifier.

Figure 3.6: Depth-first search with backtracking

Example 3.23 Consider the goal clause ← p(X) and the following program:
p(f (Y )).
q(a).
Clearly, {X/f (a)} is a correct answer to the goal — that is:

{p(f (Y )), q(a)} |= p(f (a))

However, the only computed answer substitution (up to renaming) is {X/f (Y0 )}.
Clearly, this is a more general answer than {X/f (a)}.
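That {X/f(a)} is an instance of the computed answer {X/f(Y0)} can be checked by matching (one-way unification), in which only variables of the more general term may be bound. A small sketch with our own representation (variables as capitalized strings, compound terms as tuples):

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def match(general, specific, s=None):
    """One-way unification: bind variables of `general` only.
    Succeeds iff `specific` is an instance of `general`."""
    s = {} if s is None else s
    if is_var(general):
        if general in s:
            return s if s[general] == specific else None
        return {**s, general: specific}
    if isinstance(general, tuple) and isinstance(specific, tuple) \
            and general[0] == specific[0] and len(general) == len(specific):
        for g, t in zip(general[1:], specific[1:]):
            s = match(g, t, s)
            if s is None:
                return None
        return s
    return s if general == specific else None

# p(f(a)) is an instance of the computed answer p(f(Y0)) ...
print(match(("p", ("f", "Y0")), ("p", ("f", "a"))))   # {'Y0': 'a'}
# ... but not the other way around
print(match(("p", ("f", "a")), ("p", ("f", "Y0"))))   # None
```

The successful match exhibits the instantiating substitution {Y0/a}.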

The completeness theorem confirms the existence of a refutation which produces a more
general answer than any given correct answer. However the problem of how to find this
refutation is still open. The refutation corresponds to a complete path in the SLD-tree
of the given goal and computation rule. Thus the problem reduces to a systematic
search of the SLD-tree. Existing Prolog systems often exploit some ordering on the
program clauses, e.g. the textual ordering in the source program. This imposes the
ordering on the edges descending from a node of the SLD-tree. The tree is then
traversed in a depth-first manner following this ordering. For a finite SLD-tree this
strategy is complete. Whenever a leaf node of the SLD-tree is reached the traversal
continues by backtracking to the last preceding node of the path with unexplored
branches (see Figure 3.6). If it is the empty goal the answer substitution of the
completed refutation is reported before backtracking. However, as discussed in Section
3.3 the SLD-tree may be infinite. In this case the traversal of the tree will never
terminate and some existing answers may never be computed. This can be avoided by a
different strategy of tree traversal, like for example the breadth-first strategy illustrated
in Figure 3.7. However this creates technical difficulties in implementation due to very
complicated memory management being needed in the general case. Because of this,
the majority of Prolog systems use the depth-first strategy for traversal of the SLD-tree.

Figure 3.7: Breadth-first search
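The difference between the two strategies can be demonstrated on a small program of our own, p(X) ← p(X) and p(a): under depth-first search with this clause ordering the traversal loops forever on the first clause, whereas breadth-first search still finds the refutation. A self-contained sketch (our own encoding; variables are capitalized strings, atoms are tuples):

```python
from collections import deque

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(clause, i):
    head, body = clause
    r = lambda t: t + "_" + str(i) if is_var(t) else \
        ((t[0],) + tuple(r(x) for x in t[1:]) if isinstance(t, tuple) else t)
    return r(head), [r(b) for b in body]

def bfs_answers(goal, program, max_nodes=100):
    """Traverse the SLD-tree breadth-first; yield substitutions of refutations."""
    queue = deque([([goal], {}, 0)])
    visited = 0
    while queue and visited < max_nodes:
        goals, s, depth = queue.popleft()
        visited += 1
        if not goals:
            yield s
            continue
        for clause in program:
            head, body = rename(clause, depth)
            s1 = unify(goals[0], head, s)
            if s1 is not None:
                queue.append((body + goals[1:], s1, depth + 1))

# a program whose SLD-tree is infinite:
program = [
    (("p", "X"), [("p", "X")]),   # depth-first search would loop here forever
    (("p", "a"), []),
]
first = next(bfs_answers(("p", "X"), program))
# walk("X", first) == "a": breadth-first search finds the refutation
```

The `max_nodes` bound is only a safeguard for the demonstration; a real breadth-first implementation would keep expanding the queue, which is exactly the memory cost mentioned above.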

3.6 Proof Trees

The notion of SLD-derivation resembles the notion of derivation used in formal gram-
mars (see Chapter 10). By analogy to grammars a derivation can be mapped into
a graph called a derivation tree. Such a tree is constructed by combining together
elementary trees representing renamed program clauses. A definite clause of the form:

A0 ← A1, . . . , An (n ≥ 0)

is said to have an elementary tree of one of the forms:

         A0                    A0
       /  |  \
     A1  · · ·  An
      if n > 0              if n = 0

Elementary trees from a definite program P may be combined into derivation trees by
combining the root of a (renamed) elementary tree labelled by p(s1, . . . , sn) with the
leaf of another (renamed) elementary tree labelled by p(t1, . . . , tn). The joint node is
labelled by an equation p(t1, . . . , tn) ≐ p(s1, . . . , sn).2 A derivation tree is said to be
complete if it is a tree and all of its leaves are labelled by equations. Complete derivation
trees are also called proof trees. Figure 3.8 depicts a proof tree built out of the following
elementary trees from the program in Example 3.14:

2 Strictly speaking, equations may involve terms only. Thus, the notation p(t1, . . . , tn) ≐
p(s1, . . . , sn) should be viewed as a shorthand for t1 ≐ s1, . . . , tn ≐ sn.

               grandfather(X0, Z0)
              /                  \
      father(X0, Y0)         parent(Y0, Z0)
            ≐                      ≐
      father(a, b)           parent(X1, Y1)
                                   |
                             mother(X1, Y1)
                                   ≐
                             mother(b, c)

Figure 3.8: Consistent proof tree

    grandfather(X0, Z0)             parent(X1, Y1)
      /            \                      |
father(X0, Y0)  parent(Y0, Z0)      mother(X1, Y1)

    father(a, b)                    mother(b, c)

A derivation tree or a proof tree can actually be viewed as a collection of equations.
In the particular example above:

{X0 ≐ a, Y0 ≐ b, Y0 ≐ X1, Z0 ≐ Y1, X1 ≐ b, Y1 ≐ c}

In this example the equations can be transformed into solved form:

{X0 ≐ a, Y0 ≐ b, X1 ≐ b, Z0 ≐ c, Y1 ≐ c}

A derivation tree or proof tree whose set of equations has a solution (i.e. can be
transformed into a solved form) is said to be consistent. Note that the solved form
may be obtained in many different ways. The solved form algorithm is not specific as
to what equation to select from a set — any selection order yields an equivalent solved
form.
Not all derivation trees are consistent. For instance, the proof tree in Figure 3.9
does not contain a consistent collection of equations since the set:

{X0 ≐ a, Y0 ≐ b, Y0 ≐ X1, Z0 ≐ Y1, X1 ≐ a, Y1 ≐ b}

does not have a solved form.
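Checking a derivation tree for consistency amounts to solving its set of equations, i.e. unifying the two sides of each equation under one accumulated substitution. A sketch in Python (our own illustration and encoding; the occur-check is omitted since the example terms are finite):

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def solve_equations(eqs):
    """Return a solution of the equation set, or None if it is inconsistent."""
    s = {}
    for lhs, rhs in eqs:
        s = unify(lhs, rhs, s)
        if s is None:
            return None
    return s

# equations of the consistent proof tree (Figure 3.8)
consistent = [("X0", "a"), ("Y0", "b"), ("Y0", "X1"),
              ("Z0", "Y1"), ("X1", "b"), ("Y1", "c")]
# equations of the inconsistent proof tree (Figure 3.9)
inconsistent = [("X0", "a"), ("Y0", "b"), ("Y0", "X1"),
                ("Z0", "Y1"), ("X1", "a"), ("Y1", "b")]

s = solve_equations(consistent)
# walk("Z0", s) == "c", as in the solved form above
print(solve_equations(inconsistent))  # None
```

The order in which the equations are processed does not matter for solvability, in line with the remark that any selection order yields an equivalent solved form.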


The idea of derivation trees may easily be extended to incorporate also atomic goals.
An atomic goal ← A may be seen as an elementary tree with a single node, labelled by
A, which can only be combined with the root of other elementary trees. For instance,
tree (a) in Figure 3.10 is a proof tree involving the goal ← grandfather(X, Y).
Note that the solved form of the associated set of equations provides an answer to the
initial goal — for instance, the solved form:

{X ≐ a, Y ≐ c, X0 ≐ a, Y0 ≐ b, Z0 ≐ c, X1 ≐ b, Y1 ≐ c}

               grandfather(X0, Z0)
              /                  \
      father(X0, Y0)         parent(Y0, Z0)
            ≐                      ≐
      father(a, b)           parent(X1, Y1)
                                   |
                             father(X1, Y1)
                                   ≐
                             father(a, b)

Figure 3.9: Inconsistent proof tree

of the equations associated with proof tree (a) in Figure 3.10 provides an answer
substitution {X/a, Y/c} to the initial goal.
The solved form of the equations in a consistent derivation tree can be used to
simplify the derivation tree by instantiating the labels of the tree. For instance, ap-
plying the substitution {X/a, Y/c, X0/a, Y0/b, Z0/c, X1/b, Y1/c} (corresponding to the
solved form above) to the nodes in the proof tree yields a new proof tree (depicted in
Figure 3.11). However, nodes labelled by equations of the form A ≐ A will usually
be abbreviated A so that the tree in Figure 3.11 is instead written as the tree (d) in
Figure 3.10. The equations of the simplified tree are clearly consistent.
Thus the search for a consistent proof tree can be seen as two interleaving processes:
the process of combining elementary trees and the simplification process working on
the equations of the already constructed part of the derivation tree. Note in particular
that it is not necessary to simplify the whole tree at once — the tree (a) has the
following associated equations:

{X ≐ X0, Y ≐ Z0, X0 ≐ a, Y0 ≐ b, Y0 ≐ X1, Z0 ≐ Y1, X1 ≐ b, Y1 ≐ c}

Instead of solving all equations, only the equations Y0 ≐ X1 and Z0 ≐ Y1 may be
solved, resulting in an mgu θ1 = {Y0/X1, Z0/Y1}. This may be applied to the tree (a)
yielding the tree (b). The associated equations of the new tree can be obtained by
applying θ1 to the previous set of equations after having removed the previously solved
equations:

{X ≐ X0, Y ≐ Y1, X0 ≐ a, X1 ≐ b, X1 ≐ b, Y1 ≐ c}

Solving the equations X1 ≐ b, X1 ≐ b and Y1 ≐ c yields an mgu θ2 = {X1/b, Y1/c}
resulting in the tree (c) and a new set of equations:

{X ≐ X0, Y ≐ c, X0 ≐ a, b ≐ b}

Solving all of the remaining equations yields θ3 = {X/a, Y/c, X0/a} and the final tree
(d) which is trivially consistent.
Notice that we have not mentioned how proof trees are to be constructed or in
which order the equations are to be solved or checked for consistency. In fact, a whole
spectrum of strategies is possible. One extreme is to first build a complete proof

(a)
        grandfather(X, Y)
                ≐
        grandfather(X0, Z0)
       /                  \
father(X0, Y0)        parent(Y0, Z0)
      ≐                     ≐
father(a, b)          parent(X1, Y1)
                            |
                      mother(X1, Y1)
                            ≐
                      mother(b, c)

(b)
        grandfather(X, Y)
                ≐
        grandfather(X0, Y1)
       /                  \
father(X0, X1)        parent(X1, Y1)
      ≐                     |
father(a, b)          mother(X1, Y1)
                            ≐
                      mother(b, c)

(c)
        grandfather(X, Y)
                ≐
        grandfather(X0, c)
       /                  \
father(X0, b)         parent(b, c)
      ≐                     |
father(a, b)          mother(b, c)

(d)
        grandfather(a, c)
       /                  \
father(a, b)          parent(b, c)
                            |
                      mother(b, c)

Figure 3.10: Simplification of proof tree

tree and then check if the equations are consistent. At the other end of the spectrum
equations may be checked for consistency while building the tree. In this case there
are two possibilities — either the whole set of equations is checked every time a new
equation is added or the tree is simplified by trying to solve equations as soon as they
are generated. The latter is the approach used in Prolog — the tree is built in a
depth-first manner from left to right and each time a new equation is generated the
tree is simplified.

From the discussion above it should be clear that many derivations may map
into the same proof tree. This is in fact closely related to the intuition behind the
independence of the computation rule — take “copies” of the clauses to be combined
together. Rename each copy so that it shares no variables with the other copies. The
clauses are then combined into a proof tree. A computation rule determines the order
in which the equations are to be solved but the solution obtained is independent of
this order (up to renaming of variables).

        grandfather(a, c)
                ≐
        grandfather(a, c)
       /                  \
father(a, b)          parent(b, c)
      ≐                     ≐
father(a, b)          parent(b, c)
                            |
                      mother(b, c)
                            ≐
                      mother(b, c)

Figure 3.11: Resolved proof tree

Exercises
3.1 What are the mgu’s of the following pairs of atoms:

    p(X, f(X))          p(Y, f(a))
    p(f(X), Y, g(Y))    p(Y, f(a), g(a))
    p(X, Y, X)          p(f(Y), a, f(Z))
    p(a, X)             p(X, f(X))

3.2 Let θ be an mgu of s and t and ω a renaming substitution. Show that θω is
    an mgu of s and t.
3.3 Let θ and σ be substitutions. Show that if θ is more general than σ and σ is
    more general than θ then there exists a renaming substitution ω such that σ = θω.
3.4 Let θ be an idempotent mgu of s and t. Prove that σ is a unifier of s and t iff
σ = θσ.
3.5 Consider the following definite program:

    p(Y) ← q(X, Y), r(Y).
    p(X) ← q(X, X).
    q(X, X) ← s(X).
    r(b).
    s(a).
    s(b).

Draw the SLD-tree of the goal ← p(X) if Prolog’s computation rule is used.
What are the computed answer substitutions?
3.6 Give an example of a definite program, a goal clause and two computation
rules where one computation rule leads to a finite SLD-tree and where the
other computation rule leads to an infinite tree.

3.7 How many consistent proof trees does the goal ← p(a, X) have given the
program:

p(X, Y ) ← q(X, Y ).
p(X, Y ) ← q(X, Z), p(Z, Y ).
q(a, b).
q(b, a).

3.8 Let θ be a renaming substitution. Show that there is only one substitution σ
    such that σθ = θσ = ε (the empty substitution).
3.9 Show that if A ∈ BP and ← A has a refutation of length n then A ∈ TP ↑ n.
