Possibilistic Logic: Complexity and Algorithms

Jérôme Lang

May 19, 1997



Abstract

Possibilistic logic is a logic for reasoning with uncertain and partially inconsistent knowledge bases. Its standard version consists in ranking propositional formulas according to their certainty or priority level, by assigning them lower bounds of necessity values. We give a survey of automated deduction techniques for standard possibilistic logic, together with complexity results. We focus on the extensions of resolution (Section 3) and of the Davis and Putnam procedure (Section 4). In Section 5 we consider extended versions and variants of possibilistic logic. We conclude by listing related research topics, the applicative impact of this work and further research issues.

1 Introduction
Possibilistic logic is a logic of uncertainty tailored for reasoning under incomplete and partially inconsistent knowledge. At the syntactical level it handles formulae of propositional or first-order classical logic, to which are attached lower bounds of so-called degrees of necessity and possibility. The degree of necessity (resp. possibility) of a formula expresses to what extent the formula is entailed by (resp. compatible with) the available evidence. At the mathematical level, degrees of possibility and necessity are closely related to fuzzy sets [59] [60], and possibilistic logic is especially adapted to automated reasoning when the available information is pervaded with vagueness. A vague piece of evidence can be viewed as defining an implicit ordering on the possible worlds it refers to, this ordering being encoded by means of fuzzy set membership functions. Hence, possibilistic logic is a tool for reasoning under uncertainty based on the idea of (complete) ordering rather than counting, unlike probabilistic logic. For a complete exposition of possibilistic logic see [29]; a more general introduction to possibility theory is in [15]. Going now deeper into the formal details, possibilistic logic in its simplest version ("standard possibilistic logic") considers certainty-valued statements of the form N(φ) ≥ α, where φ is a well-formed formula of a classical propositional or first-order language L, α ∈ (0, 1], and N(φ) is the necessity degree of φ. These statements will be expressed by the syntactical object (φ α) and will be called possibilistic formulae. Thus, (φ α) acts as a constraint on the set of necessity measures: a necessity measure N on the language L satisfies (φ α) iff N(φ) ≥ α. More generally, a set of possibilistic formulae (φ_i α_i) logically entails a possibilistic formula (ψ β) iff any necessity measure satisfying ∀i, N(φ_i) ≥ α_i also satisfies N(ψ) ≥ β.
The basic deduction problem in possibilistic logic then consists in finding the greatest α such that (φ α) is logically entailed by the available knowledge, composed of a set of possibilistic formulae. It is worth noticing that possibilistic logic can be viewed as a "two-level logic". At the lower level, well-formed formulae of L are not only formed on a classical language, but they really are formulae from classical (propositional or first-order) logic from a semantical point of view. At the upper level, the necessity (and possibility) degrees weighing the formulae completely respect the structure of classical logic. Clearly, other similar logics can be obtained by replacing the possibility and necessity measures by other classes of functions (for instance probabilities). The variety of logics thus obtained by letting these classes of functions vary will be called weighted logics. The only requirement we ask of a weighted logic is that the mappings g of the associated class map the language L into an ordered structure, and that they respect the structure of classical logic, i.e., for any two logically equivalent formulae φ and φ′, g(φ) = g(φ′). Now, among the set of all weighted logics, we are especially interested in a distinguished subset (to which possibilistic logic belongs), namely, weighted logics of uncertainty. Weighted logics of uncertainty not only respect classical equivalence but also require the Sugeno property, or monotonicity with respect to classical entailment: if φ logically entails ψ then g(φ) ≤ g(ψ). This condition ensures that g can be interpreted as a degree of uncertainty. At this point we should insist on the following: possibilistic logic is not a multiple-valued logic. Generally speaking, multiple-valued logics are non-classical logics whose semantics are defined in terms of truth values belonging to a totally ordered set. That multiple-valued logics are non-classical logics means that the set of classical tautologies is not identical to the set of tautologies of a multiple-valued logic; this is the case for instance for φ ∨ ¬φ, or ¬(φ ∧ ¬φ). Conversely, possibilistic logic is a meta-logic built on classical logic rather than a non-classical logic strictly speaking, since it completely respects the structure of classical logic.
Multiple-valued logics are generally dedicated to the representation of partial truth (where a degree of truth is viewed as the measure of compatibility of a vague statement with reality), while logics of uncertainty (and among them possibilistic logic) are dedicated to the representation of states of partial ignorance. For a discussion about degrees of truth and degrees of uncertainty see [25] [15]. This chapter contains a comprehensive study of algorithmic and complexity issues related to possibilistic logic. In Section 2 we recall the basics of possibility theory. Section 3 discusses algorithmic issues for possibilistic deduction, which mainly consist in a possibilistic extension of refutation by resolution. Section 4 discusses algorithmic issues for possibilistic model finding, based on an extension of the Davis and Putnam procedure. While Sections 3 and 4 focus only on the simplest fragment of possibilistic logic ("necessity-valued logic"), handling only certainty-valued statements, Section 5 discusses proof methods for an extended fragment of possibilistic logic, which handles both certainty-valued and possibility-valued statements. Section 6 concludes and points to work on related subjects.

2 Standard possibilistic logic: formal background


In full possibilistic logic (discussed in Section 5), uncertain knowledge is expressed in terms of certainty- and possibility-qualified statements; "full" possibilistic logic handles syntactic objects expressing inequalities resulting from these statements. These objects, called possibilistic formulae, are the basic objects of full possibilistic logic. The so-called "standard", or necessity-valued, fragment of possibilistic logic (SPL) handles only possibilistic formulae corresponding to certainty-qualified statements. Although this fragment is poorer than full possibilistic logic, there are two good reasons for studying it separately before considering possibilistic logic in its full version: first, algorithmic issues are simpler to present in the necessity-valued fragment, and their extension to the full case can then be easily understood; second, from a knowledge representation point of view, the necessity-valued fragment SPL is significant, since it is sufficient for modelling a preference order upon formulae, and, as such, it entertains close links with the nonmonotonic approach based on preferential models and with belief revision theory (see [18] [19]). This section borrows much material from [29] and [42]. Proofs of the results that are not directly related to algorithmic and complexity issues are omitted; most of them can be found in [29].

2.1 Language

In the rest of the chapter, L is a propositional or first-order logical language (restricted to closed well-formed formulae), equipped with the semantics of classical logic. ⊨ denotes classical entailment. Well-formed formulae will be denoted by φ, ψ, etc. Classical worlds will be denoted by ω, ω′, etc., and the set of all possible worlds by Ω. ⊤ and ⊥ denote respectively tautology and contradiction. A necessity-valued formula (also called SPL formula) is a pair (φ α), where φ is a classical propositional or first-order formula of L and α ∈ (0, 1] is a positive number. (φ α) expresses that φ is certain at least to the degree α, i.e., N(φ) ≥ α, where N is a necessity measure modelling the state of knowledge of the agent. α is called the valuation of the formula and is denoted val(φ). A necessity-valued knowledge base (also called SPL knowledge base) F is then defined as a finite set (i.e., a conjunction) of necessity-valued formulae. SPL denotes the language consisting of necessity-valued formulae. F* denotes the set of classical formulae obtained from F by ignoring the weights: if F = {(φ_i α_i), i = 1...n}, then F* = {φ_i, i = 1...n}. It is called the classical projection of F.

A SPL knowledge base may also be seen as a collection of nested sets of classical formulae: α being any valuation in (0, 1], we define the α-cut F_α and the strict α-cut F_>α by

    F_α = {(φ β) ∈ F | β ≥ α}        F_>α = {(φ β) ∈ F | β > α}

Their classical projections are thus

    F*_α = {φ | (φ β) ∈ F, β ≥ α}        F*_>α = {φ | (φ β) ∈ F, β > α}

Thus, a SPL knowledge base F can be viewed as a layered knowledge base, where the higher levels (α close to 1) correspond to the most certain pieces of knowledge. Reasoning from such a knowledge base will aim at deriving conclusions by means of the most certain parts of F. Interestingly, the two-level view of possibilistic logic suggests that valuations appear as labels associated to formulae; thus, possibilistic logic can be cast in Gabbay's Labelled Deductive Systems framework [36], where the set of labels is the totally ordered set [0, 1] and the operations defined on it follow directly from the axioms of possibility theory. See [29] for connections with other general frameworks.

2.2 Semantics and partial inconsistency

Satisfaction of a knowledge base by a possibility distribution

In standard possibilistic logic, satisfaction and logical consequence are defined by means of possibility distributions on the set Ω of classical worlds. A possibility distribution π is a function from Ω to [0, 1]. π(ω) reflects to what extent it is possible that ω is the real world. When π(ω) = 1 (resp. π(ω) = 0), it is completely possible (resp. completely impossible) that ω is the real world. A possibility distribution π is normalized iff ∃ω such that π(ω) = 1. In this chapter we do not assume that possibility distributions are necessarily normalized.


The possibility measure Π induced by π is a function from L to [0, 1] defined by

    Π(φ) = sup{π(ω) | ω ⊨ φ}

The dual necessity measure N induced by π is defined by

    N(φ) = 1 − Π(¬φ) = inf{1 − π(ω) | ω ⊨ ¬φ}


Giving up the normalization condition sup{π(ω) | ω ∈ Ω} = 1 slightly modifies the behaviour of necessity measures with respect to usual possibility theory: if h = 1 − sup{π(ω) | ω ∈ Ω}, then we have ∀φ, min(N(φ), N(¬φ)) = h, which leads to N(⊥) = N(φ ∧ ¬φ) = min(N(φ), N(¬φ)) = h instead of N(⊥) = 0. However, the following properties still hold:

    N(⊤) = 1;
    N(φ ∧ ψ) = min(N(φ), N(ψ));
    N(φ ∨ ψ) ≥ max(N(φ), N(ψ));
    if φ ⊨ ψ then N(ψ) ≥ N(φ).

A possibility distribution π is said to satisfy the SPL formula (φ α) iff N(φ) ≥ α, where N is the necessity measure induced by π. We shall then use the notation π ⊨ (φ α). A possibility distribution π satisfies a SPL knowledge base F = {(φ_i α_i) | i = 1...n} iff ∀i, π ⊨ (φ_i α_i). This is denoted by π ⊨ F.

Logical consequences
A SPL formula (φ α) is a logical consequence of the SPL knowledge base F iff any π satisfying F also satisfies (φ α). The deduction problem will then be stated in the following manner: let F be a SPL knowledge base and φ a classical formula that we would like to deduce from F to some degree; we have to compute the highest valuation α (i.e., the best lower bound of a necessity degree) such that (φ α) is a logical consequence of F, i.e., to compute

    Val(φ, F) = sup{α ∈ (0, 1] | F ⊨ (φ α)}

Principle of minimum specificity



A fundamental result about deduction from possibilistic knowledge bases in SPL is that there always exists a least specific, i.e., highest, possibility distribution satisfying a possibilistic knowledge base F. Namely, if F = {(φ_i α_i), i = 1...n}, then the least specific possibility distribution π_F satisfying F is defined by

    π_F(ω) = 1                                          if ω ⊨ φ_1 ∧ ... ∧ φ_n
    π_F(ω) = min{1 − α_i | ω ⊨ ¬φ_i, i = 1...n}         otherwise

Proposition 1 [29] For any possibility distribution π, π satisfies F if and only if π ≤ π_F, i.e., ∀ω, π(ω) ≤ π_F(ω).
As a corollary,

Proposition 2 [29] F ⊨ (φ α) iff π_F ⊨ (φ α). In other terms, Val(φ, F) = N_F(φ), where N_F is the necessity measure induced by π_F.

Partial inconsistency
One of the nice features of standard possibilistic logic is that it enables a gradation of inconsistency and a nontrivial notion of deduction from a partially inconsistent knowledge base. A possibilistic knowledge base F whose associated possibility distribution π_F is such that 0 < sup_ω π_F(ω) < 1 is said to be partially inconsistent. Measuring the consistency of F then consists in evaluating to what degree there is at least one completely possible interpretation for F, i.e., to what degree the set of possibility distributions satisfying F contains normalized possibility distributions; the quantity

    Cons(F) = sup_{π ⊨ F} sup_{ω ∈ Ω} π(ω) = sup_{ω ∈ Ω} π_F(ω)

will be called the consistency degree of F. Its complement to 1,

    Incons(F) = 1 − sup_{ω ∈ Ω} π_F(ω)

is called the inconsistency degree of F.

Example: let F = {(p 0.8), (q 0.3), (p → r 0.6), (q → ¬r 0.9)}. We get

    π_F(p q r) = 0.1;    π_F(p q ¬r) = 0.4;    π_F(p ¬q r) = 0.7;    π_F(p ¬q ¬r) = 0.4;
    π_F(¬p q r) = 0.1;   π_F(¬p q ¬r) = 0.2;   π_F(¬p ¬q r) = 0.2;   π_F(¬p ¬q ¬r) = 0.2.

Hence, sup_{ω ∈ Ω} π_F(ω) = 0.7 and thus Incons(F) = 0.3. Inconsistency degrees enable the gradation of inconsistency. The two extreme cases are Incons(F) = 0 (complete consistency) and Incons(F) = 1 (complete inconsistency). When 0 < Incons(F) < 1, F is partially inconsistent. A partially inconsistent knowledge base entails contradictions with a positive necessity degree, i.e., F ⊨ (⊥ α) for some α > 0. Namely,

Proposition 3 [29]

    Incons(F) = inf{N(⊥) | π ⊨ F} = N_F(⊥) = sup{α | F ⊨ (⊥ α)}
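The example above can be checked mechanically. The sketch below (plain Python; the encoding of the formulas as predicates over worlds is our own, not the chapter's) enumerates the eight worlds, computes the least specific distribution π_F and derives the consistency and inconsistency degrees:

```python
from itertools import product

# The example base F = {(p 0.8), (q 0.3), (p -> r 0.6), (q -> not r 0.9)},
# each formula encoded as a predicate over a world (a dict of truth values).
kb = [
    (lambda w: w['p'], 0.8),
    (lambda w: w['q'], 0.3),
    (lambda w: (not w['p']) or w['r'], 0.6),
    (lambda w: (not w['q']) or (not w['r']), 0.9),
]

def pi_F(world):
    """Least specific distribution satisfying F: 1 if the world satisfies
    every formula, otherwise min over the violated formulas of 1 - alpha."""
    violated = [1 - alpha for formula, alpha in kb if not formula(world)]
    return min(violated) if violated else 1.0

worlds = [dict(zip('pqr', bits)) for bits in product([True, False], repeat=3)]
cons = max(pi_F(w) for w in worlds)   # Cons(F) = sup over worlds of pi_F
incons = 1 - cons                     # Incons(F)
```

On this base, cons comes out as 0.7 and incons as 0.3 (up to floating-point rounding), matching the example.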



This equality justifies the terminology "inconsistency degree". Furthermore, it can be proved that Incons(F) is the valuation of the least certain formula in the strongest contradiction in F. Namely,

Proposition 4 [29]

    Incons(F) = max_{F′ ⊆ F, F′* inconsistent} min{α_i | (φ_i α_i) ∈ F′}

Deduction under partial inconsistency


Let F be a partially inconsistent SPL knowledge base (such that 0 < Incons(F) < 1). Since for any formula ψ we have N(ψ) ≥ N(⊥), any formula is deducible from F with a valuation greater than or equal to Incons(F). It means that any deduction F ⊨ (ψ β) with β ≤ Incons(F) may be due only to the partial inconsistency of F and perhaps has nothing to do with ψ. These deductions are called trivial deductions; on the contrary, deductions of necessity-valued formulae F ⊨ (ψ β) with β > Incons(F) are not caused by the partial inconsistency; they are called nontrivial deductions:

    F ⊩ (ψ β) iff F ⊨ (ψ β) and β > Incons(F)

While the operator ⊨ is monotonic, ⊩ is not. Thus, Incons(F) acts as a threshold inhibiting all formulae of F with a valuation equal to or below it. The following result shows more deeply its role as a threshold for the deduction problem:

Proposition 5 [29] Let inc = Incons(F) and β > 0. Then
1. F*_>inc is consistent;
2. F ⊩ (ψ β) iff F_>inc ⊨ (ψ β).

This result shows that only the consistent part of F consisting of formulae with a weight strictly greater than the inconsistency degree is significant for the deduction process. The next result establishes a link between inconsistency degrees and inconsistency in classical logic and is thus central to the deduction problem.

Proposition 6 [29]

    Incons(F) = sup{α | F*_α is inconsistent} = inf{α | F*_α is consistent}

The following result generalizes the deduction and refutation theorems from classical to possibilistic logic:

Proposition 7 [29] F ∪ {(φ 1)} ⊨ (ψ α) iff F ⊨ (φ → ψ α)

Proposition 8 (refutation) [29] F ⊨ (φ α) iff F ∪ {(¬φ 1)} ⊨ (⊥ α), or, equivalently, Val(φ, F) = Incons(F ∪ {(¬φ 1)})

This result shows that any deduction problem in possibilistic logic can be viewed as computing an inconsistency degree. Lastly, we give the following result, stating that in order to deduce (φ α), only the formulae with a weight greater than or equal to α are useful:

Proposition 9 [29]

    F ⊨ (φ α) iff F_α ⊨ (φ α)

Preferred models

As seen before, knowing the possibility distribution π_F is sufficient for any deduction problem in SPL (including the computation of the inconsistency degree). It is important to notice that not only is π_F the least specific distribution satisfying F, but it also minimizes inconsistency among all possibility distributions satisfying F, i.e., N_F(⊥) = Incons(F) = inf{N(⊥) | π ⊨ F}. Now, π_F defines a fuzzy subset of Ω, which can be seen as the fuzzy set of (classical) models of F, its membership function being π_F(ω). The quantity π_F(ω) represents the compatibility degree of ω with F, measuring to what degree ω is a model of F. From a decision-theoretic perspective, possibilistic formulae can be viewed as prioritized constraints (the valuation being the priority), and π_F(ω) is then the degree to which ω satisfies the set of fuzzy constraints expressed by F. See [43] for this interpretation of possibilistic logic in terms of prioritized constraints, and also [22] for a similar interpretation but with a CSP-based formalism. Thus, π_F defines a preordering relation on Ω, namely, ω ≥_F ω′ iff π_F(ω) ≥ π_F(ω′). The worlds maximizing π_F(ω) are called the best models of F. It can be proved that the set of best models of F is never empty, i.e., the least upper bound in the computation of Incons(F) is reached. If F represents prioritized constraints, then the best models are those which minimize the level of the most important among the violated constraints, and thus represent the best decisions, i.e., the most compatible with the constraints in F.

3 Algorithms for deduction in standard possibilistic logic


In this section we investigate complexity and algorithmic issues related to the deduction problem in SPL. We consider in succession several versions of the deduction problem, as already evoked in Section 2. For complexity issues we restrict the problem to propositional necessity-valued logic.

3.1 Problem formulations and complexity

There are two different formulations of the "standard" deduction problem in SPL: given the possibilistic knowledge base F and a formula ψ, the optimization formulation of the deduction problem consists in computing the best possible lower bound of the necessity degree of ψ given F, denoted by Val(ψ, F); given F, ψ and α ∈ (0, 1], the decision formulation of the deduction problem consists in deciding whether (ψ α) is a logical consequence of F.

Definition 1 (standard deduction problem in SPL, optimization form)

    (spl-opt): compute Val(ψ, F) = sup{α | F ⊨ (ψ α)}

Definition 2 (standard deduction problem, decision form)

    (spl-ded): decide whether F ⊨ (ψ α)

Due to Proposition 8, spl-opt can be reduced to computing an inconsistency degree; the converse reduction is trivial since Incons(F) = Val(⊥, F). Interestingly, the decision form of the deduction problem is no harder than deduction in classical propositional logic.

Proposition 10 spl-ded is co-NP-complete.

Proof: follows directly from Proposition 9.

Let us turn now to the complexity of spl-opt, which is a more interesting problem since it has a more practical impact.

Proposition 11 Let F be a propositional SPL knowledge base and ψ a propositional formula. Then computing Val(ψ, F) is NP-hard and requires ⌈log₂ n⌉ satisfiability checks, where n is the number of different valuations involved in F.

Proof: following Proposition 6 we have Incons(F) = sup{α | F*_α is inconsistent}; this result enables us to compute Val(ψ, F) by dichotomy, using any prover for the propositional satisfiability problem sat. Let α₀ = 0 and let α₁, ..., α_n be the distinct valuations appearing in F, ranked increasingly: 0 < α₁ < ... < α_n ≤ 1.

begin
  l ← 0; u ← n;
  while l < u do
    r ← ⌈(l + u)/2⌉;
    if F*_{α_r} ∪ {¬ψ} is consistent then u ← r − 1
    else l ← r
  end while;
  {Val(ψ, F) = α_l}
end

Clearly, this algorithm computes Val(ψ, F) and contains exactly ⌈log₂ n⌉ calls to sat. □

We now focus on the nontrivial deduction problems in SPL. They are used in order to draw nontrivial, nonmonotonic inferences from partially inconsistent possibilistic knowledge bases [17] [29].
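The dichotomy is easy to reproduce. In the sketch below (our own toy encoding, not the chapter's: a clause is a set of (variable, polarity) literals, and a brute-force enumeration stands in for a real sat prover), val computes Val(ψ, F) as Incons(F ∪ {(¬ψ 1)}) by binary search over the valuations, following Propositions 6, 8 and 11:

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force satisfiability test; a stand-in for a real sat prover.
    A clause is a set of (variable, polarity) literals."""
    variables = sorted({v for c in clauses for (v, _) in c})
    for bits in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(any(model[v] == pol for (v, pol) in c) for c in clauses):
            return True
    return False

def val(neg_psi_clauses, kb):
    """Val(psi, F) = Incons(F U {(not psi, 1)}), found by dichotomy over
    the distinct valuations of the base. neg_psi_clauses is the clausal
    form of the negated query (pass [] to compute Incons(F) itself)."""
    alphas = [0.0] + sorted({a for _, a in kb})
    lo, hi = 0, len(alphas) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2      # ceiling keeps the interval shrinking
        cut = [c for c, a in kb if a >= alphas[mid]]
        if satisfiable(cut + neg_psi_clauses):
            hi = mid - 1              # this cut is consistent: answer is lower
        else:
            lo = mid                  # cut already inconsistent: answer >= alpha_mid
    return alphas[lo]

# The running example: F = {(p 0.8), (q 0.3), (p -> r 0.6), (q -> not r 0.9)}
F = [({('p', True)}, 0.8),
     ({('q', True)}, 0.3),
     ({('p', False), ('r', True)}, 0.6),
     ({('q', False), ('r', False)}, 0.9)]
```

Here val([{('r', False)}], F) returns 0.6, i.e., Val(r, F), and val([], F) returns Incons(F) = 0.3.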

Definition 3 (nontrivial deduction problem, optimization form)

    (spl-ntopt): compute ValNT(ψ, F) = Val(ψ, F) if Val(ψ, F) > Incons(F), and 0 otherwise

Definition 4 (nontrivial deduction problem, decision form)

    (spl-ntded): decide whether F ⊩ (ψ α), i.e., whether F ⊨ (ψ α) and α > Incons(F)

Definition 5 (nonmonotonic possibilistic entailment)

    (nmpe): decide whether ValNT(ψ, F) > 0.

Proposition 12 Computing ValNT(ψ, F) requires 1 + ⌈log₂ n⌉ satisfiability checks, where n is the number of different valuations involved in F.

Proof: immediate corollary of Proposition 11.

Proposition 13 spl-ntded is DP-complete¹.

Proof: (Membership): F ⊩ (ψ α) iff F ⊨ (ψ α) and α > Incons(F), i.e., iff F*_α ∪ {¬ψ} is unsatisfiable and F*_α is satisfiable. This requires one satisfiability test and one unsatisfiability test, hence the membership of spl-ntded in DP.
(Completeness): proved by the following polynomial reduction from the canonical DP-complete problem sat-unsat to spl-ntded. Assuming w.l.o.g. that φ and ψ share no variables, (φ, ψ) is an instance of sat-unsat, i.e., φ is satisfiable and ψ is not, if and only if {(φ 1)} ⊩ (ψ → ¬ψ 1): indeed, {(φ 1)} ⊩ (ψ → ¬ψ 1) if and only if {(φ 1)} ⊨ (ψ → ¬ψ 1) and Incons({(φ 1)}) < 1; the first condition is equivalent to φ ⊨ ψ → ¬ψ, or equivalently, to φ ⊨ ¬ψ; the second one to φ satisfiable. □

Proposition 14 Provided that NP ≠ co-NP, nmpe is in Δ₂^P \ (NP ∪ co-NP).

Proof: (Membership to Δ₂^P): by Proposition 8, ValNT(ψ, F) > 0 iff Incons(F ∪ {(¬ψ 1)}) > Incons(F). This can be checked by the following algorithm:
- compute α = Incons(F ∪ {(¬ψ 1)});
- verify that F*_α is satisfiable;
which proves membership of nmpe in Δ₂^P.
(Non-membership to NP): suppose that nmpe ∈ NP. Let F = ∅. Then deciding whether ValNT(ψ, F) > 0 amounts to checking that ¬ψ is unsatisfiable, which would imply that unsat ∈ NP, i.e., NP = co-NP, which is quite unlikely. Non-membership to co-NP is shown by similar arguments. □

Hence, all forms of the deduction problem come down to a small number (logarithmic in the worst case) of calls to the satisfiability problem of classical logic. Hence, there is no real complexity gap when switching from classical logic to necessity-valued logic. These simple transformations of a possibilistic deduction problem into classical deduction problems enable the construction of simple algorithms for possibilistic logic, directly based on classical theorem provers (see also Subsection 3.4). Note that it is not hard to see that if a SPL knowledge base and the formula to prove involve only formulae of a polynomial fragment of propositional logic, then all the considered versions of the possibilistic deduction problem become polynomial. Furthermore, in the first-order case, the possibilistic deduction problems have the same computational nature as the associated classical deduction problems (semi-decidable in the general case, decidable iff the associated fragment is decidable). Note that the complexity results of this section did not need the assumption that formulas be written in conjunctive normal form. In order to extend resolution to possibilistic logic, in the next section we will consider clausal forms for possibilistic knowledge bases.
¹ See the Annex for an introduction to the complexity class DP.

3.2 Clausal form in Standard Possibilistic Logic

A possibilistic clause is a possibilistic formula (c α) where c is a (propositional or first-order) clause. A possibilistic clausal form is a universally quantified conjunction of possibilistic clauses. The problem of finding a clausal form of F whose inconsistency degree is the same as that of F always has a solution for SPL knowledge bases. Indeed, there exists a clausal form C of F such that Incons(C) = Incons(F), which generalizes the result holding in classical logic about the equivalence between the inconsistency of a set of formulae and the inconsistency of its clausal form. A possibilistic clausal form of a SPL knowledge base F = {(φ_i α_i), i = 1...n} can be obtained by the following method:
1. put each φ_i in clausal form, i.e., φ_i ≡ (∀) ∧_j c_ij, where each c_ij is a universally quantified clause;
2. C ← (∀) ∧_{i,j} (c_ij α_i).

Proposition 15 [29]

    Incons(C) = Incons(F)

Proof: comes easily from the equivalence between a classical propositional formula and its clausal form, and Proposition 6.

3.3 Resolution in Standard Possibilistic Logic

Once a clausal form is defined for a given SPL knowledge base, the resolution principle may easily be extended from classical first-order logic to SPL, in order to compute its inconsistency degree. The following possibilistic resolution rule between two possibilistic clauses (c₁ α₁) and (c₂ α₂) was first proposed in [14]:

    (R)  from (c₁ α₁) and (c₂ α₂), derive (r(c₁, c₂) min(α₁, α₂))

where r(c₁, c₂) is any classical resolvent of c₁ and c₂. Besides, we introduce the following subsumption rule:

    (S)  from (c α), derive (c β), where β ≤ α

If C is a set of possibilistic clauses, we note C ⊢_Res (c α) if (c α) can be obtained by a finite number of applications of rules (R) and (S) to C. The following result establishes the soundness of the resolution and subsumption rules:

Proposition 16 (soundness) [29] If C ⊢_Res (c α) then C ⊨ (c α)

Proof: if (r min(α, β)) is obtained by rule (R) from (c α) and (c′ β), then N(r) ≥ N(c ∧ c′) (since c ∧ c′ ⊨ r), thus N(r) ≥ min(N(c), N(c′)) ≥ min(α, β), so (c α), (c′ β) ⊨ (r min(α, β)). Now, if (c β) is obtained by rule (S) from (c α), then trivially (c α) ⊨ (c β). By induction on the derivations, any possibilistic clause obtained by a finite number of applications of rules (R) and (S) is a logical consequence of the initial set of possibilistic clauses.


The resolution rule for SPL clauses performs locally, at the syntactical level, what the combination/projection principle [60] does in approximate reasoning. Moreover, resolution for SPL clauses is complete for refutation:

Proposition 17 [29] Let C be a set of SPL clauses. Then the valuation of the optimal refutation by resolution from C (i.e., the derivation of (⊥ α) with α maximal) is the inconsistency degree of C.

The proof comes easily from the completeness of resolution for refutation in classical first-order logic, together with Propositions 6 and 9. Using Propositions 8 and 17, we get the following corollary:

Proposition 18 [29] Let F be a set of SPL formulae and φ a classical formula. Let C′ be the set of SPL clauses obtained from F ∪ {(¬φ 1)}; then the valuation of the optimal refutation by resolution from C′ is Val(φ, F).

Thus refutation by resolution can be used for computing the inconsistency degree of a SPL knowledge base. We consider a set F of SPL formulae (the knowledge base) and a formula φ; we wish to know the maximal valuation with which F entails φ, i.e., Val(φ, F) = sup{α ∈ (0, 1] | F ⊨ (φ α)}. This request can be answered by refutation, which is extended to standard possibilistic logic as follows:

Refutation by resolution:
1. put F into clausal form C;
2. put ¬φ into clausal form; let c₁, ..., c_m be the clauses obtained;
3. C′ ← C ∪ {(c₁ 1), ..., (c_m 1)};
4. search for a deduction of (⊥ α) by applying the resolution rule (R) repeatedly from C′, with α maximal;
5. Val(φ, F) ← α.
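For the propositional case, rule (R) and the search for an optimal refutation can be sketched directly. In the code below (our own toy encoding, not the chapter's: a clause is a frozenset of (variable, polarity) literals), the saturation keeps, for each derived clause, the best valuation obtained so far, which plays the role of the subsumption rule (S):

```python
def resolve(pc1, pc2):
    """All applications of rule (R) to two propositional possibilistic clauses."""
    (c1, a1), (c2, a2) = pc1, pc2
    resolvents = []
    for (v, pol) in c1:
        if (v, not pol) in c2:
            r = (c1 - {(v, pol)}) | (c2 - {(v, not pol)})
            resolvents.append((frozenset(r), min(a1, a2)))
    return resolvents

def optimal_refutation(clauses):
    """Saturate under (R); return the valuation of the optimal refutation,
    i.e. the inconsistency degree of the set (Proposition 17), or 0.0."""
    best = {}                       # clause -> best valuation derived so far
    for c, a in clauses:
        best[c] = max(a, best.get(c, 0.0))
    changed = True
    while changed:
        changed = False
        items = list(best.items())
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                for r, a in resolve(items[i], items[j]):
                    if a > best.get(r, 0.0):
                        best[r] = a
                        changed = True
    return best.get(frozenset(), 0.0)   # frozenset() is the empty clause
```

On the propositional example of Section 2 extended with (¬r 1), optimal_refutation returns 0.6 = Val(r, F); on the base alone it returns Incons(F) = 0.3.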

Illustrative example

Let F be the following possibilistic knowledge base, concerning an election whose two only candidates are Mary and Peter:

1. (Elected(Peter) ↔ ¬Elected(Mary) 1)
2. (∀x CurrentPresident(x) → Elected(x) 0.5)
3. (CurrentPresident(Mary) 1)
4. (∀x Supports(John, x) → Elected(x) 0.6)
5. (Supports(John, Mary) 0.2)
6. (∀x VictimOfAnAffair(x) → ¬Elected(x) 0.7)

F is equivalent to the set of possibilistic clauses C = {C1, ..., C7}:

    C1  (Elected(Peter) ∨ Elected(Mary) 1)
    C2  (¬Elected(Peter) ∨ ¬Elected(Mary) 1)
    C3  (¬CurrentPresident(x) ∨ Elected(x) 0.5)
    C4  (CurrentPresident(Mary) 1)
    C5  (¬Supports(John, x) ∨ Elected(x) 0.6)
    C6  (Supports(John, Mary) 0.2)
    C7  (¬VictimOfAnAffair(x) ∨ ¬Elected(x) 0.7)

We cannot find any refutation from C; hence, C is consistent, i.e., Incons(C) = 0. Let us now find the best lower bound of the necessity degree of the formula Elected(Mary). Let C′ = C ∪ {(¬Elected(Mary) 1)}; then there exist two distinct refutations by resolution from C′, which are the following:

Refutation 1:
    (¬Elected(Mary) 1), C3 ⊢_Res (¬CurrentPresident(Mary) 0.5);
    (¬CurrentPresident(Mary) 0.5), C4 ⊢_Res (⊥ 0.5)

Refutation 2:
    (¬Elected(Mary) 1), C5 ⊢_Res (¬Supports(John, Mary) 0.6);
    (¬Supports(John, Mary) 0.6), C6 ⊢_Res (⊥ 0.2)

It can be checked that there is no other refutation from C′. Refutation 1 is optimal, whereas refutation 2 is not. Hence, we conclude that C ⊨ (Elected(Mary) 0.5), i.e., it is moderately certain that Mary will be elected; since this degree 0.5 is maximal, Val(Elected(Mary), C) = 0.5.

Then we learn that Mary is the victim of an affair, which leads us to update the knowledge base by adding to C the possibilistic clause C8: (VictimOfAnAffair(Mary) 1). Let C₁ be the new knowledge base, C₁ = C ∪ {C8}. Then we can find a 0.5-refutation from C₁ (which is optimal):

    C8, C7 ⊢_Res (¬Elected(Mary) 0.7);
    C3, C4 ⊢_Res (Elected(Mary) 0.5);
    (¬Elected(Mary) 0.7), (Elected(Mary) 0.5) ⊢_Res (⊥ 0.5)

Hence, C₁ is partially inconsistent, with Incons(C₁) = 0.5. Refutation 1, which had given N(Elected(Mary)) ≥ 0.5, can still be obtained from C₁, but since its valuation is not greater than Incons(C₁), it becomes a trivial deduction. On the contrary, adding to C₁ the possibilistic clause (Elected(Mary) 1), we find a 0.7-refutation this time:

    (Elected(Mary) 1), C7 ⊢_Res (¬VictimOfAnAffair(Mary) 0.7);
    (¬VictimOfAnAffair(Mary) 0.7), C8 ⊢_Res (⊥ 0.7)

Since 0.7 > Incons(C₁), the deduction C₁ ⊨ (¬Elected(Mary) 0.7) is nontrivial; using C1, it could be shown that we also have C₁ ⊨ (Elected(Peter) 0.7).

We have shown that the key problem for possibilistic deduction consists in computing an inconsistency degree, which can be done by searching for an optimal refutation by resolution from a set of possibilistic clauses.

3.4 Resolution strategies

In this section we address the issue of finding an optimal refutation in practice. As in classical logic, a resolution strategy restricts or orders the choice of clauses or literals on which resolution is performed. Throughout this section, C denotes a set of first-order necessity-valued clauses and C* its classical projection.

Definition 6 A possibilistic resolution strategy is an algorithm (deterministic or not) for finding an optimal refutation of C. A possibilistic resolution strategy S is
  complete iff for any C such that Incons(C) > 0 there is an optimal refutation for C compatible with the criteria of S;
  directly complete iff its application to any C such that Incons(C) > 0 necessarily leads to an optimal refutation in a finite amount of time;
  really directly complete iff its application to any C such that Incons(C) > 0 not only leads to an optimal refutation in a finite amount of time, but also recognizes it as optimal;
  decidable iff it is directly complete and furthermore stops after a finite amount of time even if C is consistent.

A meta-level resolution strategy consists in a meta-level algorithm calling a classical resolution strategy S.

Bottom-up meta-level strategies


These strategies consist in applying a given classical resolution strategy S to increasing -cuts of C until the attainment of a consistent -cut. If the algorithm stops then the last refutation obtained is optimal.

begin
    α := 0;
    repeat
        stop := false;
        apply S to C_{>α} (without caring about valuations);
        if S stops without finding a refutation
            then stop := true
            else begin
                let β be the valuation of the obtained refutation;
                α := β
            end
    until stop {possibly caused by C_{>α} = ∅};
    Return α;
end

Let S↑ be the bottom-up meta-level strategy induced by the classical resolution strategy S.

Proposition 19 If S↑ applied to C stops, then the last value of α is Incons(C).

Proof: follows easily from the soundness of resolution and the proposition above.
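As a concrete illustration, here is a minimal Python sketch of the bottom-up meta-level strategy (our own encoding, not the paper's code): clauses are frozensets of signed integers, a brute-force satisfiability test plays the role of the classical strategy S, and the strict α-cut is recomputed at each iteration.

```python
from itertools import product

# A clause is a frozenset of nonzero ints (n = atom, -n = its negation);
# a possibilistic clause is a pair (clause, weight).

def consistent(clauses):
    """Brute-force satisfiability test, standing in for the classical strategy S."""
    atoms = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if all(any((l > 0) == world[abs(l)] for l in c) for c in clauses):
            return True
    return False

def incons_bottom_up(pclauses):
    """Bottom-up meta-level strategy: raise alpha level by level until the
    strict cut C_{>alpha} becomes consistent; the last alpha is Incons(C)."""
    alpha = 0.0
    while True:
        cut = [c for (c, w) in pclauses if w > alpha]
        if consistent(cut):
            return alpha
        # a refutation exists in the cut; step alpha up to the next level
        # (a safe under-approximation of the refutation's valuation)
        alpha = min(w for (c, w) in pclauses if w > alpha)
```

For example, with C = {(p 0.8), (¬p 0.3), (q 0.5)} the loop stops at α = 0.3, the inconsistency degree.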


Proposition 20 S↑ is complete (resp. directly complete, decidable) iff S is complete (resp. directly complete, decidable).

The proof does not contain any particular difficulty [42]. Note that even if S is directly complete, S↑ may not be really directly complete, since the optimal refutation, once found, is recognized as optimal only when the consistency of the remaining set of clauses C_{>α} is checked, which is not guaranteed to succeed in a finite amount of time.

Top-down meta-level strategies


These strategies consist in applying a given classical resolution strategy S to decreasing α-cuts of C until a refutation is obtained (the first one found being optimal). Let us rank the distinct valuations appearing in C (there are finitely many distinct valuations since C is finite), namely 0 < α_1 < α_2 < ... < α_n ≤ 1.

begin
    i := n; stop := false;
    repeat
        apply S to C_{≥α_i} (without caring about valuations);
        if a refutation is found
            then stop := true
            else i := i − 1
    until stop or i = 0;
    Return α_i {with α_0 = 0};
end

Top-down strategies were also considered by Williams [57] in the context of belief revision, where an anytime algorithm using a top-down strategy is proposed for computing a particular revision scheme (called maxi-adjustment). Let S↓ be the top-down meta-level strategy induced by the classical resolution strategy S.

Proposition 21 If S↓ applied to C stops, then the returned value α_i is Incons(C).

Proof: if the last value of i is 0, then no refutation could be found from C_{≥α_1} = C, which means that C is consistent (due to the completeness of resolution for refutation), hence Incons(C) = 0 (by Proposition 6). If i > 0 then, from the soundness of resolution, we get that C_{≥α_i} is inconsistent, and thus Incons(C) ≥ α_i by Proposition 6; now, since no refutation could be found from C_{≥α_{i+1}}, from the completeness of resolution for refutation we get that C_{≥α_{i+1}} is consistent, and again by Proposition 6, Incons(C) ≤ α_i (for i = n this inequality holds trivially, α_n being the highest valuation). Hence, Incons(C) = α_i.

Proposition 22 S↓ is decidable iff S is decidable.



This result is easy to establish. Note that if S is not decidable (even if it is directly complete), S↓ is not guaranteed to be complete (and a fortiori not directly complete). Indeed, if Incons(C) < α_n, then C_{≥α_n} is consistent and nothing guarantees that S applied to C_{≥α_n} will terminate; thus the algorithm may loop forever in the first iteration. The search for a refutation from C_{≥α_i} may take into account the fact that there is no possible refutation from C_{≥α_{i+1}}, and hence that any refutation from C_{≥α_i} involves at least one clause with valuation α_i. One possibility is to choose for S a support set strategy [47] with C_{≥α_i} \ C_{≥α_{i+1}} as support set. Another possibility [35] consists in saturating each C_{≥α_i} by producing all clauses of valuation α_i before trying to produce any clause of C_{≥α_{i−1}}.
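The top-down scheme, whose correctness is stated by Proposition 21, can be sketched the same way (again a hypothetical mini-implementation with brute-force satisfiability in place of S):

```python
from itertools import product

def consistent(clauses):
    """Brute-force satisfiability test over frozensets of signed ints."""
    atoms = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if all(any((l > 0) == world[abs(l)] for l in c) for c in clauses):
            return True
    return False

def incons_top_down(pclauses):
    """Top-down meta-level strategy: scan the valuations alpha_n > ... > alpha_1;
    the first level whose cut C_{>=alpha_i} is inconsistent is Incons(C)."""
    for a in sorted({w for (_, w) in pclauses}, reverse=True):
        if not consistent([c for (c, w) in pclauses if w >= a]):
            return a
    return 0.0  # C itself is consistent
```

The strategy pays off when the inconsistency degree is high: the first few (small) cuts already contain a refutation.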

Mixed meta-level strategies


Top-down (resp. bottom-up) meta-level strategies are all the more efficient as the inconsistency degree of C is high (resp. low); hence, the choice of one meta-level strategy or another may be guided by expectations about the inconsistency degree. However, without any prior knowledge about it, blindly applying a top-down or a bottom-up strategy may be very inefficient (in the propositional case, where deduction is decidable, it may lead to n iterations before finding the optimal refutation). In this case, hybrid meta-level strategies may be considered, for instance:
- combining both top-down and bottom-up strategies, e.g., start by searching for a refutation in a high α-cut and, in case it does not give anything within a fixed amount of time, switch to a bottom-up strategy;
- computing the inconsistency degree by dichotomy (as in the proof of Proposition 11), with at most a logarithmic number of classical resolution proofs.
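The dichotomy variant can be sketched as follows (an illustrative example with our own encoding, not the paper's code): since consistency of the strict cut C_{>a} is monotone in a, the inconsistency degree can be located by binary search over the sorted distinct valuations, with a logarithmic number of classical consistency tests.

```python
from itertools import product

def consistent(clauses):
    """Brute-force satisfiability test over frozensets of signed ints."""
    atoms = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if all(any((l > 0) == world[abs(l)] for l in c) for c in clauses):
            return True
    return False

def incons_dichotomy(pclauses):
    """Incons(C) is the smallest level a (0 or a clause weight) whose strict
    cut C_{>a} is consistent; locate it by dichotomy."""
    levels = [0.0] + sorted({w for (_, w) in pclauses})
    lo, hi = 0, len(levels) - 1  # cut above the top level is empty, hence consistent
    while lo < hi:
        mid = (lo + hi) // 2
        if consistent([c for (c, w) in pclauses if w > levels[mid]]):
            hi = mid
        else:
            lo = mid + 1
    return levels[lo]
```

With n distinct valuations this performs at most ⌈log₂(n+1)⌉ consistency tests instead of up to n.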

Informed strategies
The meta-level strategies discussed so far have one drawback: they are blind, in the sense that in a given iteration the choice of the next clauses to resolve is not guided by the valuations. We now briefly define a class of informed strategies, namely linear pseudo-A* strategies. The definition of a linear resolution strategy for possibilistic logic is exactly the same as in classical logic [47]: let C be a set of possibilistic clauses such that Incons(C) > 0, and let us distinguish a clause C0 in C (generally chosen such that C \ {C0} is consistent); C0 is called the initial central clause; then, a linear strategy only admits resolutions between a descendant of C0 (called the central clause) and a clause which is either a clause of C (entry clause) or a descendant of C0. A linear refutation is a refutation by resolution obtained by the application of a linear strategy.

Proposition 23 [42] [23] Let C be a set of possibilistic clauses such that Incons(C) > 0; then, among the optimal refutations of C there is at least one which is linear.

However, this result does not mean that if an arbitrary clause C0 is chosen in C as the initial central clause, an optimal linear refutation from C with C0 as initial central clause is an optimal refutation from C. For this to hold, we must have Incons(C \ {C0}) = 0. This last condition is guaranteed in case C was obtained by adding to a consistent possibilistic knowledge base F one clause corresponding to the negation of the formula φ to prove: this clause is then chosen as the initial central clause.

If the negation of φ gives more than one clause, then it is possible to take successively each one of these clauses, and then retain the maximum of the valuations of the obtained optimal refutations². The search for a linear refutation by resolution may be expressed as a tree search in a state space. A state is defined as a central clause together with the set of its ancestor central clauses. To each state (C1...Cn) of the search tree is associated the valuation val(Cn) of the last clause obtained. Thus we are looking for an objective state with a maximal valuation, where an objective (goal) state has the form (C1...(⊥ α)) (⊥ being the empty clause). The search for a maximal refutation can thus be cast as the search for an optimal objective state with min-max costs [23] [42]. Namely:
- the initial state S0 is defined by the initial central clause C0 and its cost is g(S0) = val(C0);
- the cost associated with an arc (C0...Ci) → (C0...Ci Ci+1) is the valuation of the side clause C′i+1 used for producing Ci+1;
- the global cost of a path (C0) → (C0C1) → ... → (C0...Ci+1) is the minimum of all its elementary costs, which can also be written g(C0...Ci+1) = min(g(C0...Ci), val(C′i+1));
- an objective (goal) state is a state (C0...Ci) such that Ci = ⊥;
- a state (C0...Ci) is expanded by producing all the resolvents of Ci authorized by the linear strategy.
The search for the empty clause with the highest valuation is equivalent to the search for a path whose cost is maximum. Such state spaces, where costs are to be maximized and are combined by min (instead of +), have been studied by Yager [58], who has proposed in this framework (called "possibilistic production systems") a generalisation of algorithms A and A*, and in a more general framework by Farreny [32]. Paths with a maximal valuation are called "paths of least resistance".
The search is guided by an evaluation function f associating with each state S a valuation f(S) ∈ [0, 1]. As for classical A and A* algorithms, f is a function of two other functions g and h: g is the cost of the path from the initial state S0 to the current state S, and h is a heuristic function estimating the expected cost from S to an optimal objective state; here f(S) = min(g(S), h(S)) (in contrast with usual A and A* algorithms, where f = g + h). The next state to expand is selected among the states maximizing f. We first give the general formulation of the search algorithm, which is independent of the choice of a particular evaluation function.
² More generally, if one wishes to compute the inconsistency degree of a set of possibilistic clauses which does not result from adding the negation of a given formula to a consistent possibilistic knowledge base, one may replace each clause (c α) of C by (¬aux ∨ c α), where aux is an auxiliary nullary predicate, and take as initial central clause the new clause (aux 1). Thus, in all cases the inconsistency degree of C may be computed with a linear resolution strategy.


begin
    Open := {S0}; Success := false; α := 0;
    repeat
        MaxOpen := {S ∈ Open maximizing f};
        if MaxOpen contains an objective state S
            then begin Success := true; α := g(S) end
            else begin
                select a state S in MaxOpen; remove S from Open;
                develop S; let En be the set of newly produced states;
                compute f for each newly produced state;
                Open := Open ∪ En
            end
    until Open = ∅ or Success;
    Return α {α = Incons(C)}
end

A variant of this algorithm consists of maintaining a list Closed of states, in which all states are placed as they are developed; then, when producing a new state whose last produced clause is Cn+1, it is checked whether Cn+1 is subsumed by a clause ending a state in Closed. However, it is known that subsumption tests may be too costly with respect to their benefits. The heuristic function h is said to be admissible iff ∀S, h(S) ≥ h*(S), where h*(S) is the real cost of an optimal path from state S to an objective state. An admissible function h is thus optimistic (since costs are here to be maximized), like evaluation functions for traditional A* algorithms. Note that an example of an admissible function is given by h(S) = 1, ∀S: in this case we recover the ascending meta-level strategy restricted to linear refutations (corresponding to the uniform-cost strategy in traditional heuristic search). See Farreny [32] for an extensive study of heuristic search algorithms with non-additive cost functions. An admissible function h1 is said to be more informed than an admissible function h2 iff ∀S, h1(S) ≤ h2(S). Obviously, h(S) = 1 corresponds to the least informed admissible function.

Proposition 24 Assume that the clauses involved in C are propositional or from a decidable fragment of first-order logic. If furthermore the evaluation function h is admissible, then the above algorithm stops, and the returned value is equal to Incons(C).

If it is not assumed that clauses are taken from a decidable fragment, then it is no longer guaranteed that the algorithm stops; as an example, consider the evaluation function h(S) = 1, ∀S, and C = {(P(x) → P(f(x)) 1), (P(a) 1), (Q(b) 1), (¬Q(b) 0.7)}: the application of the above algorithm never stops (producing (P(f(a)) 1), (P(f(f(a))) 1), etc.), and will never find the 0.7-refutation from (Q(b) 1) and (¬Q(b) 0.7). However, in the case where the above assumption is not made, we get the following weaker result:

Proposition 25 Let h be an admissible evaluation function. If the above algorithm stops, then the returned value is equal to Incons(C).

We also get a result concerning the use of two admissible evaluation functions, one of which is more informed than the other.

Proposition 26 Let h1, h2 be two admissible functions with h1 ≤ h2 (h1 more informed than h2). Then any clause produced by the application of the algorithm to C using h1 will also be produced by its application to C using h2.

We end this section by proposing a collection of admissible evaluation functions. Let C be a set of possibilistic clauses and R(C) the set of possibilistic clauses that can be produced from C within one application of the resolution principle. Let R¹(C) = R(C) and ∀i, Rⁱ⁺¹(C) = R(Rⁱ(C)); lastly, let R∞(C) = ∪ᵢ Rⁱ(C). R∞(C) contains all clauses producible from C within any finite number of resolution steps. Now, for any classical first-order clause c = [¬]L1(...) ∨ [¬]L2(...) ∨ ... ∨ [¬]Ln(...), we let Prop(c) = l1 ∨ l2 ∨ ... ∨ ln, where li = [¬]Li. Thus, Prop(c) is the "propositional projection" of the first-order clause c (obtained by ignoring the arguments of the predicates). Obviously, if c is propositional then Prop(c) = c. Lastly, we say that l ∈ Prop(c) iff l is one of the disjuncts of Prop(c).

Now, let us define

H1(l) = max{α | (c α) ∈ R∞(C), ¬l ∈ Prop(c)}

What we wish to get is an optimal refutation from C. Let us consider a refutation from C and let (c α) be a clause used in this refutation; for any literal l of Prop(c), the refutation has to use, at one point or another, a clause (c′ α′) such that Prop(c′) contains ¬l. Therefore, for any l ∈ Prop(c), any refutation from C using c necessarily has a valuation not higher than H1(l). This means, writing h(C) for h(S) with S a state whose last clause is C, that

∀C = (c α) ∈ C, ∀l ∈ Prop(c): h*(C) ≤ H1(l)

Defined as such, the heuristic function H1 seems not to be directly computable from C. However, it is, since we have the following result:

Proposition 27

H1(l) = max{α | (c α) ∈ C, ¬l ∈ Prop(c)}
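Proposition 27 makes H1 cheap to precompute. A small Python sketch for the propositional case (so that Prop is the identity; the function names and clause encoding are ours, not the paper's):

```python
def H1(pclauses):
    """Static heuristic of Proposition 27: H1(l) = max{a : (c a) in C, -l in c}.
    Computed once, before the search starts."""
    h = {}
    for c, a in pclauses:
        for lit in c:
            # clause c can 'answer' the complementary literal -lit
            h[-lit] = max(h.get(-lit, 0.0), a)
    return h

def h1(pclause, h):
    """h1((c a)) = 1 if c is the empty clause, else min over l in c of H1(l)."""
    c, _ = pclause
    return 1.0 if not c else min(h.get(l, 0.0) for l in c)
```

With the clause set of the example of Section 4.4.3 (p = 1, q = 2, r = 3), H1(p) = 0.5 since the strongest clause containing ¬p is (¬p ∨ q 0.5).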

Hence, H1 is a static function which can be easily computed once and for all before the algorithm is executed; namely, the complexity of the computation of H1 is in O(|V|·|C|), where |V| is the number of propositional variables and |C| the number of clauses in C. Now, for each clause C = (c α), let us define

h1(C) = 1 if c = ⊥, and h1(C) = min{H1(l) | l ∈ Prop(c)} otherwise.

Let C = (c α); from H1(l) ≥ h*(C) for all l ∈ Prop(c), we get h1(C) ≥ h*(C), hence the admissibility of h1. Now we define the sequence of evaluation functions (fp)p≥0, fp : R∞(C) → [0, 1], by

h0(C) = 1
fp(C) = min(val(C), hp(C))
hp+1(C) = min_{l∈Prop(c)} max_{C′=(c′ α′)∈C, ¬l∈Prop(c′)} fp(C′)

For p = 1 we recover h1; interestingly, (hp)p≥0 is a family of more and more informed admissible evaluation functions:

Proposition 28 ∀p ≥ 0, ∀C ∈ R∞(C): h*(C) ≤ hp+1(C) ≤ hp(C)
The proof is done by induction on p. Moreover, it can be shown that hp eventually becomes stationary, namely, there exists p* such that ∀p ≥ p*, hp = hp*; moreover, p* ≤ |C|. Let us denote h∞ = hp* = lim_{p→∞} hp. Since the functions hp are more and more informed as p increases, one might think of using hp for the largest possible p, preferably h∞. From p* ≤ |C| we may think that computing h∞ should not take long if |C| is small. However, for p > 1, hp is not a static function like h1, and thus the computation must be redone at each resolution step, which may limit the practical use of hp for p > 1. A possibilistic theorem prover was implemented, using h1 as heuristic function [42]. It is worth mentioning that so-called "deletion strategies" [47] can be superposed on any possibilistic resolution strategy. Deletion strategies consist of deleting, from the current set of produced clauses, some clauses which are guaranteed to be useless for finding an optimal refutation. Some examples of clauses that can be deleted are tautologies, clauses containing pure literals³ and subsumed clauses. The definition of subsumption in possibilistic logic is stronger than in classical logic: a clause (c α) is said to subsume a clause (c′ α′) iff c subsumes⁴ c′ and α ≥ α′. For instance, (P(x) 0.7) subsumes (P(a) ∨ Q(a) 0.4) but not (P(a) ∨ Q(a) 0.8). It is not hard to prove that the deletion of tautologies, clauses containing pure literals and subsumed clauses preserves the inconsistency degree.
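In the propositional case, the possibilistic subsumption test used by deletion strategies is straightforward (a sketch with our own clause encoding; the first-order case would additionally need a substitution check):

```python
def subsumes(pc, pc2):
    """(c a) subsumes (c' a') iff c's literals are included in c''s and a >= a'."""
    (c, a), (c2, a2) = pc, pc2
    return c <= c2 and a >= a2

def delete_subsumed(pclauses):
    """Deletion strategy: drop every clause subsumed by a different clause;
    this preserves the inconsistency degree."""
    return [pc for pc in pclauses
            if not any(subsumes(other, pc) and other != pc for other in pclauses)]
```

For instance, (P 0.7) subsumes (P ∨ Q 0.4) but not (P ∨ Q 0.8), matching the example in the text.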

4 Algorithms for model finding in standard propositional possibilistic logic


In this section we assume that all possibilistic formulae involved are propositional. While resolution is mainly directed towards finding an inconsistency, semantic evaluation is directed towards finding a model of a set of clauses, if any. The terminology "semantic evaluation" originates with Jeannicot, Oxusoff and Rauzy [38] and refers to an improvement of the well-known Davis and Putnam procedure [11]. After recalling the basics of semantic evaluation in classical logic, we propose its possibilistic version. Since the notion of "being a model of" becomes gradual in possibilistic logic, the possibilistic analogue of model finding becomes an optimization problem consisting of finding a world ω that maximizes the possibility π_C(ω) that ω is a model of C, i.e., optimal model finding. Since semantic evaluation is defined only in the propositional case, so is its possibilistic version.
³ A pure literal is a literal whose negation does not appear anywhere in the current set of clauses. ⁴ c classically subsumes c′ iff there exists a substitution σ such that cσ ⊆ c′.


4.1 Semantic evaluation in classical propositional logic

The Davis and Putnam procedure [11] consists of searching for a model of a set of clauses C by developing a semantic tree for C [10]. The formulation of the Davis and Putnam procedure given here is from [38]. Let C be a set of propositional clauses and let p be a literal appearing in C. We denote by Tp(C) the set of clauses obtained from C by removing all clauses containing p, and by deleting all occurrences of ¬p in the remaining clauses. For instance, if

C = {a ∨ b, ¬a ∨ b, ¬b ∨ c, ¬c ∨ a}
then

T¬a(C) = {b, ¬b ∨ c, ¬c}

and

T¬a,c(C) = Tc(T¬a(C)) = T¬a(Tc(C)) = {b, ⊥}

A clause c subsumes a clause c′ iff the set of literals of c is included in the set of literals of c′ (for instance, a ∨ ¬b subsumes a ∨ ¬b ∨ c). For any set of clauses C we denote by S(C) the simplification by subsumption of C, obtained by removing from C all clauses subsumed by another one. Lastly, we let Tp*(C) = S(Tp(C)). Then, the following property [38]:

C consistent , Tp(C ) consistent or T:p(C ) consistent , Tp (C ) consistent or T:p(C ) consistent gives us a recursive algorithm for testing the consistency of C :

Function Consistency(C):
begin
    if C = ∅ then Return(True)
    else if ⊥ ∈ C then Return(False)
    else begin
        choose a literal p appearing in C;
        Return(Consistency(Tp*(C)) or Consistency(T¬p*(C)))
    end
end

It can be shown [38] that the efficiency of the Davis and Putnam procedure can be improved by cutting some branches of the semantic tree, due to the model partition theorem, which we recall here: let C be a set of clauses and p1, ..., pj, ..., pk distinct literals appearing in C such that {p1, ..., pk} does not contain any complementary pair (i.e., a literal and its negation); let Cj = Tp1,...,pj(C) and Ck = Tp1,...,pj,pj+1,...,pk(C). Then:

if Ck ⊆ Cj then the consistency of Ck is equivalent to the consistency of Cj

Thus, for testing whether Cj is consistent, it is enough to test whether Ck is consistent; this avoids exploring the branches located between Cj and Ck. We call semantic evaluation the algorithm resulting from the addition of the cuts due to the model partition theorem to the standard Davis and Putnam procedure. For some varieties of problems (especially those which are "strongly consistent", in the sense that the set of clauses has a large number of models), semantic evaluation can be much more efficient than resolution. Furthermore, it gives a constructive proof of consistency by exhibiting a model. Lastly, semantic evaluation is relatively insensitive to syntax, since it can be applied to non-clausal propositional formulas as well.
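The Tp transformation, the simplification S, and the recursive consistency test translate directly into Python (a sketch with our own encoding: a clause is a frozenset of signed integers):

```python
def T(p, clauses):
    """Tp(C): drop clauses containing p, delete occurrences of -p elsewhere."""
    return [frozenset(l for l in c if l != -p) for c in clauses if p not in c]

def simplify(clauses):
    """S(C): remove clauses strictly subsumed by another one."""
    cs = set(clauses)
    return [c for c in cs if not any(d < c for d in cs)]

def consistent(clauses):
    """Davis and Putnam procedure with simplification by subsumption."""
    clauses = simplify(clauses)
    if not clauses:
        return True
    if frozenset() in clauses:
        return False
    p = next(iter(next(iter(clauses))))  # choose any literal
    return consistent(T(p, clauses)) or consistent(T(-p, clauses))
```

With a = 1, b = 2, c = 3, the paper's example gives T(-1, C) = {b, ¬b ∨ c, ¬c}, and consistent(C) returns True.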

4.2 Possibilistic semantic trees

A possibilistic semantic tree is similar to a classical semantic tree, the difference being due to the gradation of inconsistency.

Definition 7 Let C be a set of propositional possibilistic clauses and let A = {a1, ..., an} be the set of propositional atoms appearing in C. A partial world assigns a truth value to the propositional variables in a subset A′ of A; a (complete) world assigns a truth value to all propositional variables of A.

We denote partial worlds (complete worlds being special cases of partial worlds) by v, v′, etc., and complete worlds by ω, ω′, etc. Dv denotes the subset of propositional variables assigned a truth value by v. For any two partial worlds v, v′, we say that v′ extends v (denoted by v′ ⊒ v) iff Dv′ ⊇ Dv and ∀a ∈ Dv, v and v′ assign the same truth value to a.
We recall that if C = {(ci αi), i = 1...n}, then the degree to which it is possible that ω be the actual world is

π_C(ω) = min{1 − αi | ω ⊨ ¬ci}

We extend π_C to partial worlds by π_C(v) = max{π_C(ω) | ω ⊒ v}, and we also define

inc(C, ω) = 1 − π_C(ω)
inc(C, v) = 1 − π_C(v)

Lastly, for any partial world v we let For(v) be a propositional formula whose models are exactly {ω | ω ⊒ v} (For(v) is unique up to logical equivalence). In classical propositional logic, a semantic tree for a set of clauses C is a binary tree whose nodes correspond to partial worlds, and whose leaves correspond to complete worlds. At each node, a new propositional variable is chosen among the variables not yet instantiated. In a possibilistic semantic tree, some extra information pertaining to the inconsistency degree is needed. More formally:
which is attached a valuation; each node corresponds to a partial world v (complete if the node is terminal), and the valuation attached to it is inc(C ; v ).

De nition 8 A possibilistic semantic tree for C is a classical semantic tree to each node of

The valuations of the terminal nodes can be computed directly; the valuations of nonterminal nodes can then be computed bottom-up, due to the following result. 21

Proposition 29 Let v be a non-complete partial world and ai a propositional variable not assigned by v; let v1 and v2 be the partial worlds obtained from v by assigning ai to true and false, respectively. Then

inc(C, v) = min(inc(C, v1), inc(C, v2))

Proof: {ω | ω ⊒ v} = {ω | ω ⊒ v1} ∪ {ω | ω ⊒ v2}, hence the result.
This property enables us to compute inconsistency degrees recursively; it will be the basis of the possibilistic version of the Davis and Putnam procedure. The relevant information in a possibilistic semantic tree is the set of terminal nodes with the lowest valuation inc(C, ω), which are actually the preferred models w.r.t. C, and the value associated with the root of the tree (which is the value associated with the preferred models), namely the inconsistency degree of C:

Proposition 30 Let T be a semantic tree for C; then the value associated with the root of T is the inconsistency degree of C.
Proof: it is based on the following lemma:

Lemma 30: for each node of T corresponding to the partial world v, inc(C, v) = Incons(C ∧ (For(v) 1)).

The lemma is proved by upward induction on the tree. For each terminal node, associated with a complete world ω, we have Incons(C ∧ (For(ω) 1)) = 1 − max_{ω′∈Ω} min(π_C(ω′), π_{(For(ω) 1)}(ω′)); now, π_{(For(ω) 1)}(ω′) = 0 if ω′ ≠ ω, and 1 if ω′ = ω, hence Incons(C ∧ (For(ω) 1)) = 1 − π_C(ω) = inc(C, ω). Now let us prove the property by upward induction; let a non-terminal node be associated with a partial world v, and assume that the partial worlds v1 and v2 associated with its children satisfy the property. Then, Incons(C ∧ (For(v) 1)) = Incons(C ∧ (For(v1) ∨ For(v2) 1)) = 1 − max_{ω∈Ω} min(π_C(ω), π_{(For(v1)∨For(v2) 1)}(ω)) = 1 − max_{ω⊨For(v1)∨For(v2)} π_C(ω) = min(inc(C, v1), inc(C, v2)) = inc(C, v) by Proposition 29, which achieves proving the result. Now, the application of the lemma to the root of the tree, associated with the "empty world" v0, gives inc(C, v0) = Incons(C ∧ For(v0)) = Incons(C ∧ ⊤) = Incons(C).
Example: Let C = {C1: (p 0.8), C2: (p → r 0.4), C3: (p → q 0.5), C4: (¬q 0.2), C5: (¬r 0.1), C6: (p → ¬r 0.3)}. A possibilistic semantic tree for C is represented on Figure 1.

Let us detail the computation of the value attached to the terminal node corresponding to the complete world {p, q, r}: since {p, q, r} falsifies the possibilistic clauses (¬q 0.2), (¬r 0.1) and (p → ¬r 0.3), we get inc(C, {p, q, r}) = 1 − π_C({p, q, r}) = 1 − min{1 − α | (c α) ∈ C, {p, q, r} ⊨ ¬c} = max{α | (c α) ∈ C, {p, q, r} ⊨ ¬c} = max(0.2, 0.1, 0.3) = 0.3. The value attached to the root of the tree is 0.3, hence Incons(C) = 0.3. There is only one preferred model of C, namely {p, q, r}.

[Figure 1: possibilistic semantic tree for C, branching on p, q and r; each leaf lists the clauses falsified by the corresponding world and its valuation (e.g., the world {p, q, r} falsifies C4, C5, C6 and gets 0.3, while the worlds falsifying C1 get 0.8); internal nodes carry the minimum of their children's valuations, giving 0.3 at the root.]
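The example can be checked mechanically by enumerating the eight worlds (a small verification script of ours, not part of the paper):

```python
from itertools import product

# p = 1, q = 2, r = 3; C1..C6 as in the example above
C = [(frozenset({1}), 0.8),        # C1: p
     (frozenset({-1, 3}), 0.4),    # C2: p -> r
     (frozenset({-1, 2}), 0.5),    # C3: p -> q
     (frozenset({-2}), 0.2),       # C4: not q
     (frozenset({-3}), 0.1),       # C5: not r
     (frozenset({-1, -3}), 0.3)]   # C6: p -> not r

def inc(pclauses, world):
    """inc(C, w): the largest weight among the clauses falsified by w (0 if none)."""
    return max((a for c, a in pclauses
                if all((l > 0) != world[abs(l)] for l in c)), default=0.0)

worlds = [{1: p, 2: q, 3: r} for p, q, r in product([True, False], repeat=3)]
incons = min(inc(C, w) for w in worlds)                 # valuation of the root
preferred = [w for w in worlds if inc(C, w) == incons]  # preferred models
```

As expected, incons equals 0.3 and the only preferred model sets p, q, r all to true, matching the tree of Figure 1.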

4.3 Possibilistic semantic evaluation

4.3.1 Basic principle

Let C be a set of propositional possibilistic clauses. Tp(C) and Tp*(C) are defined as before (the weights being unchanged). Due to Propositions 29 and 30, Incons(C) can be computed by the following recursive algorithm, which is the basis of the possibilistic version of the Davis and Putnam procedure.

Function Incons(C):
begin
    if C = ∅ then Return 0
    else if C = {(⊥ α)} (1) then Return α
    else begin
        choose a literal p appearing in C;
        Return(min(Incons(Tp*(C)), Incons(T¬p*(C))))
    end
end

Due to the simplification by subsumption made at each step, it cannot be the case that several empty clauses belong to C at step (1)⁵. The difference with the classical procedure of Davis and Putnam is the absence of a systematic stop when an empty clause is discovered. The above algorithm stops and returns the inconsistency degree of C, and the best model(s) of C are the worlds corresponding to the nodes giving its weight to the root of the tree. Note that the possibilistic Davis and Putnam procedure is close to the backtracking plus forward checking algorithm in the framework of fuzzy constraint satisfaction problems [22], up to the difference between the representation languages.
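A direct Python transcription of this recursive scheme (our own sketch; clauses are frozensets of signed integers, and simplification uses the possibilistic subsumption of Section 3.4):

```python
def T(p, pclauses):
    """Tp on weighted clauses: the weights are unchanged."""
    return [(frozenset(l for l in c if l != -p), a)
            for c, a in pclauses if p not in c]

def simplify(pclauses):
    """Possibilistic subsumption: (c a) subsumes (c' a') iff c <= c' and a >= a'."""
    pcs = set(pclauses)
    return [(c, a) for (c, a) in pcs
            if not any(d <= c and b >= a and (d, b) != (c, a) for (d, b) in pcs)]

def incons(pclauses):
    """Possibilistic Davis and Putnam: no systematic stop on an empty clause;
    branch on a literal and take the min of the two subtrees."""
    pcs = simplify(pclauses)
    if not pcs:
        return 0.0
    if len(pcs) == 1 and not pcs[0][0]:
        return pcs[0][1]                        # only (bot a) remains
    p = next(l for c, _ in pcs for l in c)      # literal of some non-empty clause
    return min(incons(T(p, pcs)), incons(T(-p, pcs)))
```

On the running example of Section 4.2, incons returns 0.3, the inconsistency degree.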

4.3.2 Representation by minimax trees


The possibilistic Davis and Putnam procedure, executed as described above, generally needs to visit most of the 2ⁿ possible branches (where n is the number of propositional variables appearing in C), because the halting criterion is too weak and thus may rarely be fulfilled. However, it can be improved so as to take partial inconsistencies into account as soon as possible. Thus, each time an empty clause of weight α appears in C, it is removed together with all clauses of weight ≤ α, i.e., C is replaced by C_{>α}, and Incons(C_{>α}) is then computed; Incons(C) is then the maximum of α and Incons(C_{>α}). This is equivalent to replacing each portion of the tree shown in the left part of Figure 2 by the one shown in the right part.
[Figure 2: on the left, a node C with children Tp*(C) and T¬p*(C); on the right, the equivalent MAX node combining the weight α of the empty clause with a MIN node over Tp*(C_{>α}) and T¬p*(C_{>α}).]
Thus the possibilistic semantic tree has been translated into an equivalent minimax tree. The new recursive algorithm for computing Incons(C ) corresponds to the exploration of this minimax tree:

Function Incons(C):
begin
    Create the root ⟨n0, C⟩ and give it the status min;
    Return(ValuationMin(⟨n0, C⟩))
end
⁵ If the subsumption test is omitted [i.e., if we replace Tp*(C) and T¬p*(C) by Tp(C) and T¬p(C)], then the test "if C = {(⊥ α)} then return α" should be replaced by: if C contains only empty clauses, then return the highest value attached to an empty clause in C.

Function ValuationMin(⟨n, Cn⟩):
begin
    if Cn = ∅ (1) then Return 0
    else begin
        choose a literal p appearing in Cn;
        create a left child ⟨n1, Tp*(Cn)⟩, with status max;
        v1 := ValuationMax(⟨n1, Tp*(Cn)⟩);
        create a right child ⟨n2, T¬p*(Cn)⟩, with status max;
        v2 := ValuationMax(⟨n2, T¬p*(Cn)⟩);
        Return(min(v1, v2))
    end
end

Function ValuationMax(⟨n, Cn⟩):
begin
    if Cn contains an empty clause (⊥ α) (2) then vinc := α else vinc := 0;
    create a child ⟨n′, (Cn)_{>vinc}⟩, with status min;
    v′ := ValuationMin(⟨n′, (Cn)_{>vinc}⟩);
    Return(max(vinc, v′))
end

(1) Note that we have Cn = ∅ in only two cases: either all propositional variables from the initial set of clauses have been given a truth value, or the partial interpretation corresponding to the node satisfies all clauses whose valuation is greater than the highest valuation of all empty clauses between the root and the node.
(2) If the algorithm performs at each step a simplification by subsumption of the set of clauses, then at this step Cn contains at most one empty clause. If the subsumption check is not made and several empty clauses appear at this step, then the one with the highest valuation is retained.
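The minimax formulation can be sketched the same way (our encoding again; the subsumption step is omitted, so the max node keeps the highest empty clause, as footnote 5 prescribes):

```python
def T(p, pclauses):
    """Tp on weighted clauses: the weights are unchanged."""
    return [(frozenset(l for l in c if l != -p), a)
            for c, a in pclauses if p not in c]

def valuation_max(pcs):
    """Max node: extract the weight of the highest empty clause, cut below it."""
    vinc = max((a for c, a in pcs if not c), default=0.0)
    cut = [(c, a) for c, a in pcs if a > vinc]   # (Cn)_{> vinc}
    return max(vinc, valuation_min(cut))

def valuation_min(pcs):
    """Min node: branch on a literal, keep the smaller of the two subtree values."""
    if all(not c for c, _ in pcs):               # leaf: no literal left
        return max((a for _, a in pcs), default=0.0)
    p = next(l for c, _ in pcs for l in c)
    return min(valuation_max(T(p, pcs)), valuation_max(T(-p, pcs)))
```

By Proposition 32, valuation_min applied to the initial set of clauses returns Incons(C).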

Proposition 31 The function ValuationMin applied to the set of clauses Cn and the node n computes exactly the inconsistency degree of Cn.

Proof: by bottom-up induction on the tree, using the following lemma (obvious from Proposition 6): Incons((Cn)_{>α}) = 0 if Incons(Cn) ≤ α, and Incons((Cn)_{>α}) = Incons(Cn) if α < Incons(Cn). If node n has no child then Cn = ∅ and thus ValuationMin(n, Cn) = 0 = Incons(Cn), which establishes the induction base. Now, assume that all children of n verify the induction hypothesis.
- If n is a Min node, let nl and nr be its children; then ValuationMin(n) = min(ValuationMax(nl), ValuationMax(nr)) = min(Incons(Cl), Incons(Cr)) by the induction hypothesis applied to nl and nr, = min(Incons(Tp*(Cn)), Incons(T¬p*(Cn))) = Incons(Cn).
- If n is a Max node, let α be the valuation of the highest empty clause appearing in Cn (α = 0 if none appears), and let n′ be its child, associated with the empty-clause-free set of clauses (Cn)_{>α}. If α = 0 then (Cn)_{>α} = Cn, and thus ValuationMax(n) = max(0, ValuationMin(n′)) = ValuationMin(n′) = Incons((Cn)_{>α}) = Incons(Cn). Now, it remains to prove that if α > 0 then Incons(Cn) = max(α, Incons((Cn)_{>α})). It cannot be the case that Incons(Cn) < α, because Cn contains (⊥ α). If Incons(Cn) = α then the lemma guarantees that Incons((Cn)_{>α}) = 0, and thus max(α, Incons((Cn)_{>α})) = α = Incons(Cn). Lastly, if Incons(Cn) > α then the lemma tells us that Incons((Cn)_{>α}) = Incons(Cn), hence max(α, Incons((Cn)_{>α})) = max(α, Incons(Cn)) = Incons(Cn), which achieves proving the proposition.

Proposition 32 The function ValuationMin applied to the initial set of clauses and the root of the tree computes exactly the inconsistency degree of C.

This is an immediate corollary of Proposition 31.

4.4 Pruning the tree

The possibilistic Davis and Putnam procedure can be improved by cutting a certain number of branches in the corresponding min-max tree. There are two kinds of such cuts.

4.4.1 Alpha-beta cuts

These cuts [40] are classical in min-max tree search. To each created node we attach a temporary return value (TRV) which is, for a max (resp. min) node, the largest (resp. least) of the definitive values of its children explored so far; it is thus a lower (resp. upper) bound of its definitive value, and it becomes equal to the definitive value at the latest when all children nodes have been explored or cut.
[Figure 3: alpha-beta cuts. Left: a max node with temporary return value v1 whose right (min) child already has a child of definitive value v2 ≤ v1, allowing the remaining branch to be cut. Right: the symmetric situation for a min node.]
When computing the value of a max node n (Figure 3, left part), assume that the definitive values v1 and v2 have been attached to its left child l and to the left child lr of its right child r, and that furthermore v2 ≤ v1; then, whatever the values attached to the remaining children of r, the value computed upwards for r will in any case be ≤ v2, and a fortiori ≤ v1, so that the TRV of n becomes definitive and the right branch issued from n does not have to be explored further. The case of a min node (right part of Figure 3) is symmetrical. More generally, this cutting property still holds as soon as the min (resp. max) node with value v2 ≤ v1 (resp. v2 ≥ v1) is a descendant (not necessarily a child) of the max (resp. min) node n.

4.4.2 Model partition cuts

The model partition theorem ([38], recalled in Section 4.1) can be extended to standard possibilistic logic in the following way:

Proposition 33 Let C be a set of possibilistic clauses and let p1, ..., pk be distinct literals appearing in C such that {p1, ..., pk} does not contain any complementary pair. Let Cj = Tp1,...,pj(C) and Ck = Tpj+1,...,pk(Cj). Let α be the highest inconsistency value appearing on the way from Cj to Ck if there is at least one, with α = 0 otherwise. Then, if

α ≤ Incons(Ck) ≤ Incons(Cj)    (1)

then

Incons(Ck) = Incons(Cj)

and the corresponding cuts can thus be made.

Remark: (1) is often a consequence of one of these two conditions:
- Ck ⊆ Cj, where ⊆ denotes classical set inclusion;
- more generally, Ck ⊑ Cj, where ⊑ denotes fuzzy set inclusion, i.e., ∀(c β) ∈ Ck, ∃(c′ β′) ∈ Cj such that c′ = c and β′ ≥ β.
This result generalizes the classical model partition theorem, where the node Ck is studied only if its ancestors do not contain any empty clause, i.e., if α = 0.
Proof of Proposition 33: Let ωk be an optimal model of Ck, i.e., inc(Ck, ωk) = Incons(Ck). Let ωj be the interpretation assigning to all literals of Ck the same truth value as ωk and furthermore satisfying pj+1, ..., pk. We have inc(Cj, ωj) = Incons(Cj ∧ For(ωj)) (Lemma 30) = max(β, inc(Ck, ωk)), because Tpj+1,...,pk(Cj ∧ For(ωj)) = Ck and the optimal interpretation of Cj ∧ For(ωj) necessarily satisfies pj+1, ..., pk. Now, inc(Ck, ωk) = Incons(Ck) ≥ β by hypothesis, hence inc(Cj, ωj) = inc(Ck, ωk) = Incons(Ck); and since Incons(Cj) = inf_ω inc(Cj, ω), we have Incons(Cj) ≤ inc(Cj, ωj) and thus Incons(Cj) ≤ Incons(Ck). Since by hypothesis Incons(Ck) ≤ Incons(Cj), we conclude that Incons(Ck) = Incons(Cj). □
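The fuzzy-inclusion condition of the remark is purely syntactic and cheap to test; here is a minimal sketch (our own encoding, with clauses as frozensets of literal strings):

```python
# Ck is fuzzy-included in Cj iff every clause of Ck also occurs in Cj
# with a weight at least as large; this entails Incons(Ck) <= Incons(Cj).
# A clause set is a dict mapping a frozenset of literals to its weight.

def fuzzy_included(ck, cj):
    return all(c in cj and cj[c] >= a for c, a in ck.items())

cj = {frozenset({'p', 'q'}): 0.8, frozenset({'~r'}): 0.5}
ck = {frozenset({'p', 'q'}): 0.6}
assert fuzzy_included(ck, cj)       # same clause, smaller weight: included
assert not fuzzy_included(cj, ck)   # '~r' does not occur in ck
```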


Figure 4: the min-max tree for the example of Section 4.4.3, with nodes labelled [1], [2], [3]; a model partition cut is made at node [2] and an alpha-beta cut at node [3].
4.4.3 Example
The initial set of clauses (node [1] of Figure 4) is C = {(p 0.8), (¬p∨r 0.4), (¬p∨q 0.5), (¬q 0.2), (¬r 0.1), (¬p∨¬r 0.3)}. Figure 4 shows the corresponding min-max tree (where the order of instantiation is q, p, r in all branches), taking benefit from cuts. The tree is explored depth-first from left to right, and the values are computed bottom-up. At step [2] we see that Tq(C) ⊆ C, thus the model partition theorem can be applied and the branch corresponding to T¬q(C) can be cut off. At step [3] an alpha-beta cut can be done. The optimal model of C is pqr; the inconsistency degree of C is 0.3.
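The values of the example can be checked by brute force over the 8 interpretations (a sketch we add for illustration; the encoding of clauses is ours):

```python
from itertools import product

# clauses as (set of signed literals, weight); ('+', v) means v, ('~', v) means not v
C = [({('+', 'p')}, 0.8), ({('~', 'p'), ('+', 'r')}, 0.4),
     ({('~', 'p'), ('+', 'q')}, 0.5), ({('~', 'q')}, 0.2),
     ({('~', 'r')}, 0.1), ({('~', 'p'), ('~', 'r')}, 0.3)]

def inc(C, w):
    """Largest weight of a clause falsified by interpretation w (0 if none)."""
    sat = lambda s, v: w[v] if s == '+' else not w[v]
    return max((a for cl, a in C if not any(sat(s, v) for s, v in cl)),
               default=0.0)

vals = {bits: inc(C, dict(zip('pqr', bits)))
        for bits in product([True, False], repeat=3)}
assert min(vals.values()) == 0.3                       # Incons(C) = 0.3
assert min(vals, key=vals.get) == (True, True, True)   # optimal model: pqr
```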

5 Algorithms for extensions of possibilistic logic


Standard possibilistic logic handles only "certainty-qualified statements". It has already been said that this is sufficient in practice for most applications (see [29] and the Conclusion for a panorama of applications). However there are some situations where it is also desirable to handle "possibility-qualified statements". These kinds of statements (see [25] [29] [14]) are represented semantically by constraints of the form Π(φ) ≥ β.

5.1 Extended possibilistic logic
Thus, in Extended Possibilistic Logic (EPL), a knowledge base is a set of EPL formulas, each being either a necessity-valued formula of SPL, denoted here by (φ (N α)), or a possibility-valued formula (φ (Π β)) with 0 < β ≤ 1. A possibility-valued formula (φ (Π β)) intuitively expresses to what extent φ cannot be refuted. Note that Π(φ) ≥ β is equivalent to N(¬φ) ≤ 1 − β, which shows that possibility-valued formulas are complementary to necessity-valued ones, since they express an upper bound on a certainty level. EPL more generally enables a larger variety of types of constraints on N(φ); for instance, N(φ) = α can be expressed in EPL by {(φ (N α)), (¬φ (Π 1 − α))}. In particular, {(φ (Π 1)), (¬φ (Π 1))} expresses that Π(φ) = Π(¬φ) = 1, or equivalently N(φ) = N(¬φ) = 0, i.e., expresses explicitly complete ignorance about φ, which SPL alone cannot do. An interpretation of possibilistic logic where possibility-qualified statements are particularly meaningful is when dealing with graded obligations and permissions: while N(φ) ≥ α expresses that φ is obligatory to the degree α, Π(φ) ≥ β expresses that φ is at least permitted to the degree β, which is much weaker. Since N(φ) > 0 entails Π(φ) = 1, (φ (N α)) is stronger than (φ (Π β)) for any α > 0, β > 0. This leads to the following ordering between valuations: (N α) ≥ (N β) iff α ≥ β; (Π α) ≥ (Π β) iff α ≥ β; (N α) > (Π β) for all α, β > 0. We let W = {(N α), α > 0} ∪ {(Π β), β > 0} be the set of all EPL valuations.

The semantics of EPL generalizes SPL's ([29] [44]); in particular we have π ⊨ (φ (Π β)) iff Π(φ) ≥ β. The function Val is extended to EPL by Val(φ, F) = Sup{w ∈ W | F ⊨ (φ w)}. This leads to a richer view of graded inconsistency, where (Π β)-inconsistencies are weaker than (N α)-inconsistencies (see [29] for a complete exposition of EPL). The following results are useful for extending SPL proof techniques to EPL. The first one extends Proposition 8; the second one has been proved independently in [29] and by Hollunder [39].

Proposition 34 F ⊨ (φ w) iff F ∪ {(¬φ (N 1))} ⊨ (⊥ w); Val(φ, F) = Incons(F ∪ {(¬φ (N 1))}).

Proposition 35 Let F = FN ∪ FΠ, where FN and FΠ contain respectively the necessity-valued and the possibility-valued formulas of F. Let FN* = {φ | (φ (N α)) ∈ FN}. Then
- Incons(F) = (N α) iff Incons(FN) = (N α);
- Incons(F) = (Π β) iff FN* is consistent and there exists (φ (Π β)) ∈ FΠ such that
  - FN* ∪ {φ} is inconsistent, and
  - for all β' > β and all (φ' (Π β')) ∈ FΠ, FN* ∪ {φ'} is consistent.
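The second item of Proposition 35 suggests a simple procedure for the (Π)-inconsistency level: scan the possibility-valued formulas by decreasing weight and return the weight of the first one that contradicts FN*. A sketch under our own encoding (formulas as Boolean functions, consistency checked by brute force in place of a SAT solver):

```python
from itertools import product

def consistent(formulas, variables='ab'):
    """Toy consistency oracle: brute force over the given variables."""
    return any(all(f(dict(zip(variables, bits))) for f in formulas)
               for bits in product([True, False], repeat=len(variables)))

def pi_incons(FNstar, FP):
    """Assuming FN* consistent, return beta such that Incons(F) = (Pi beta),
    following the second item of Proposition 35 (0 if F is consistent)."""
    for f, b in sorted(FP, key=lambda p: -p[1]):
        if not consistent(FNstar + [f]):
            return b
    return 0.0

FNstar = [lambda w: w['a']]                                  # N-formula: a
FP = [(lambda w: not w['a'], 0.4), (lambda w: w['b'], 0.7)]  # Pi-formulas
assert pi_incons(FNstar, FP) == 0.4                          # Incons = (Pi 0.4)
```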

Proposition 34 means that, as in SPL, any deduction problem in EPL can be reformulated equivalently as the computation of an inconsistency degree; Proposition 35 gives us an easy way to compute Incons(F) in practice, together with complexity results for deduction in EPL.

Proposition 36 Deciding whether F ⊨ (φ w) in EPL is co-NP-complete.

Proposition 37 Computing Val(φ, F) in EPL requires at most |FΠ| + ⌈log2 |FN|⌉ satisfiability checks.

Now, resolution can be extended to EPL:

Definition 9 (resolution in EPL) [16] The operation ⊗ : W × W → W is defined by
- (N α) ⊗ (N β) = (N min(α, β));
- (N α) ⊗ (Π β) = (Π β) if α + β > 1, and (Π 0) otherwise;
- (Π α) ⊗ (Π β) = (Π 0).

The resolution rule for two clauses (c1 w1) and (c2 w2) of EPL is then defined by:

(R)   (c1 w1), (c2 w2) ⊢ (r(c1, c2) w1 ⊗ w2)

Proposition 38 (R) is sound.
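The combination of valuations in Definition 9 is a one-liner to implement; here is a sketch with our own encoding ('N'/'P' tags standing for necessity/possibility valuations):

```python
def combine(w1, w2):
    """The operation of Definition 9 on EPL valuations ('N', a) / ('P', b)."""
    (k1, a), (k2, b) = w1, w2
    if k1 == 'N' and k2 == 'N':
        return ('N', min(a, b))
    if k1 == 'P' and k2 == 'P':
        return ('P', 0.0)
    if k1 == 'P':                       # normalize the mixed case: w1 is N-valued
        (k1, a), (k2, b) = w2, w1
    return ('P', b) if a + b > 1 else ('P', 0.0)

assert combine(('N', 0.7), ('N', 0.4)) == ('N', 0.4)
assert combine(('N', 0.7), ('P', 0.5)) == ('P', 0.5)   # 0.7 + 0.5 > 1
assert combine(('N', 0.3), ('P', 0.5)) == ('P', 0.0)
assert combine(('P', 0.8), ('P', 0.8)) == ('P', 0.0)
```

Note that ('P', 0.0) acts as an absorbing element for the possibility-valued side, which is the source of the incompleteness discussed below.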

As to completeness, the result is not as easy as in SPL; in order to generalize Proposition 18 we must assume that F is already in clausal form6 and propositional7.

Proposition 39 Let C be a set of propositional EPL clauses and φ a formula. Let C' be the set of clauses obtained from C ∪ {(¬φ (N 1))}. Then the valuation of the optimal refutation from C' is Val(φ, C).

Lastly, due to the impossibility of extending Propositions 1 and 2 (related to the principle of minimum specificity) to EPL, semantic evaluation cannot be extended either.

5.2 Full possibilistic logic

SPL and EPL knowledge bases are sets, i.e., conjunctions, of possibilistic formulas. We consider here another way to extend standard and extended possibilistic logic, namely by allowing disjunctions of possibilistic formulas, such as (φ (Π β)) ∨ (ψ (N α))8. For the sake of simplicity and brevity we will only consider here the propositional necessity-valued fragment of FPL, which we call NFPL9.

6 The knowledge base must already be in clausal form because putting an EPL formula into clausal form does not preserve the inconsistency degree: for instance, if F = {(a∨b (N 0.7)), (¬a∧¬b (Π 0.4))} and C = {(a∨b (N 0.7)), (¬a (Π 0.4)), (¬b (Π 0.4))}, then Incons(C) = (Π 0) whereas Incons(F) = (Π 0.4).
7 Let C = {((∀x)p(x) (Π 0.8)), (¬p(a) ∨ ¬p(b) (N 1))}. There is no (Π)-refutation from C (because (p(x) (Π 0.8)) should be used twice in the refutation, and (Π 0.8) ⊗ (Π 0.8) = (Π 0)) whereas Incons(C) = (Π 0.8).
8 Note that allowing for negations of possibilistic formulas does not bring much new, since ¬(φ (N α)) expresses not(N(φ) ≥ α), i.e. N(φ) < α, or equivalently Π(¬φ) > 1 − α, which is equivalent to an EPL possibility-valued formula, no matter whether the inequality is strict: indeed, the finiteness of the knowledge base implies that only finitely many different valuations are used, and thus Π(¬φ) > 1 − α is equivalent to Π(¬φ) ≥ β for a suitable choice of β (depending on F). This is why, when defining full possibilistic logic, we only add disjunctions of possibilistic formulas to the language.

Definition 10 The language of necessity-valued full possibilistic logic (NFPL) is generated recursively by the following rules:
- (φ α) ∈ NFPL, where φ is a propositional formula and α ∈ (0, 1];
- for all Φ, Ψ ∈ NFPL, Φ ∧ Ψ and Φ ∨ Ψ are in NFPL.

Definition 11 (semantics of NFPL)
- π ⊨ (φ α) iff N(φ) ≥ α;
- π ⊨ Φ ∧ Ψ iff π ⊨ Φ and π ⊨ Ψ;
- π ⊨ Φ ∨ Ψ iff π ⊨ Φ or π ⊨ Ψ;
- Φ ⊨ Ψ iff for every π, π ⊨ Φ implies π ⊨ Ψ;
- Val(φ, Φ) = Sup{α | Φ ⊨ (φ α)};
- Incons(Φ) = Val(⊥, Φ).

The external disjunctive normal form (EDNF) of an NFPL formula Φ is the equivalent formula of the form ∨i ∧j (φi,j αi,j). It is obtained as in classical propositional logic, by distributing ∧ over ∨. Note that the internal formulas φi,j are not necessarily in normal form. We write Φi = ∧j (φi,j αi,j); thus Φ ≡ ∨i Φi, each Φi being a standard possibilistic knowledge base.

Proposition 40 Let Φ be an NFPL formula and Φ1, ..., Φn the disjuncts of its external disjunctive normal form, i.e., Φ ≡ ∨i Φi. Let πi be the least specific possibility distribution satisfying Φi (after Proposition 1) and Ni the necessity measure induced by πi. Then Φ ⊨ (φ α) iff mini Ni(φ) ≥ α.

Proof: π ⊨ Φ if and only if there exists i such that π ⊨ Φi, i.e., {π | π ⊨ Φ} = ∪i {π | π ⊨ Φi}. Thus, Φ ⊨ (φ α) if and only if Φi ⊨ (φ α) for every i; by Proposition 1, Φi ⊨ (φ α) iff Ni(φ) ≥ α, whence the result. □

As an immediate corollary, we get Val(φ, Φ) = mini Val(φ, Φi) and Incons(Φ) = mini Incons(Φi).

The problem is that in the worst case, putting Φ under external DNF leads to an exponential number of Φi's, hence the complexity gap when switching from SPL to NFPL.
9 This gives the following diagram (an arrow meaning "more general than"):

        FPL
       /    \
    NFPL    EPL
       \    /
        SPL

Interestingly, it lies in the second level of the polynomial hierarchy, like many problems in knowledge representation (including most nonmonotonic logics and belief revision; see for instance [37] [49] [9]).

Proposition 41 Deduction in NFPL is in Π2p.

Proof: let us call nfpl-ded the problem of deciding whether (ψ α) is a consequence of an NFPL formula Φ. The membership of the complementary problem nfpl-cons in Σ2p is given by the following nondeterministic algorithm:
1. guess Φi, a conjunction of SPL formulas formed with a subset of the possibilistic formulas {(φk αk), k ∈ K} appearing in Φ;
2. considering all the {(φk αk), k ∈ K} appearing in Φ and Φi as classical propositional variables pk, show that Φi ⊨ Φ;
3. show that Φi ⊭ (ψ α).
Steps 2 and 3 require NP-oracles, hence nfpl-cons ∈ Σ2p and nfpl-ded ∈ Π2p. □

6 Conclusion
6.1 What else?
There are several issues related to the algorithmics of possibilistic logic which have not been discussed in this chapter.
Fuzzy constraint satisfaction [22] computes preferred solutions with respect to a set of prioritized constraints in a way very similar to possibilistic semantic evaluation.

Possibilistic logic programming uses possibilistic logic as a programming language, which is particularly suited to dealing with uncertainty or with min-max optimization. Formal details about the declarative and procedural semantics of possibilistic logic programs can be found in [27], and extensions incorporating negation by failure can be found in the recent works of Wagner [55] [56].

Drowning-free variants of possibilistic logic. A consequence of Proposition 5 is that in an SPL knowledge base F, formulas whose valuation is not larger than Incons(F) are completely useless for nontrivial deductions; this is the drowning effect [2]. For instance, F = {(a 1), (¬a 0.8), (b 0.6)} is equivalent to {(a 1), (⊥ 0.8)}, and thus (b 0.6) is drowned by the 0.8-inconsistency. In order to escape the drowning effect, the idempotent operation min in πF(ω) = min{1 − αi | ω ⊨ ¬φi} (Proposition 1) must be replaced by a non-idempotent operation. Two alternative operators, close to the qualitative spirit of min but not idempotent, have been proposed for defining drowning-free variants of possibilistic logic, namely leximin and discrimin [2]. As for complexity, recall (Proposition 13) that nonmonotonic possibilistic entailment is in Δ2p(O(log n)); the leximin variant is in Δ2p(O(n)) [9] and the discrimin variant is Π2p-complete [48] [9]. This means that escaping the drowning effect generates an increase in complexity; as to practical computation, drowning-free variants basically require computing some of the maximal consistent subsets of the knowledge base, which SPL does not.
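The drowning effect on the example above can be checked directly: Val(b, F) = Incons(F ∪ {(¬b 1)}) does not exceed Incons(F), so (b 0.6) yields no nontrivial conclusion. A brute-force sketch (our own encoding, formulas as Boolean functions over a and b):

```python
from itertools import product

def incons(clauses):
    """min over interpretations of the largest weight of a falsified formula."""
    best = 1.0
    for bits in product([True, False], repeat=2):
        w = dict(zip('ab', bits))
        best = min(best, max((x for f, x in clauses if not f(w)), default=0.0))
    return best

F = [(lambda w: w['a'], 1.0),        # (a 1)
     (lambda w: not w['a'], 0.8),    # (~a 0.8)
     (lambda w: w['b'], 0.6)]        # (b 0.6)

assert incons(F) == 0.8                                  # Incons(F) = 0.8
assert incons(F + [(lambda w: not w['b'], 1.0)]) == 0.8  # Val(b, F): drowned
```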


There are also some generalizations of possibilistic logic that we did not discuss here, including possibilistic assumption-based truth maintenance systems, possibilistic logic with vague predicates [14], lattice-based possibilistic logics where the set of certainty values is no longer [0, 1] but more generally a lattice [26], and possibility theory based on non-classical logics [5] [6].

6.2 What for?

This chapter was written in a computational perspective, and therefore we said very little about the potential impact of the algorithmic issues we discussed. Among the applications of possibilistic logic we find:
1. nonmonotonic reasoning [18] [24], reasoning with default rules [3];
2. belief revision [19];
3. multi-source reasoning [28], inconsistency handling, knowledge fusion [4];
4. reasoning about action [20], planning under uncertainty [12];
5. diagnosis [8];
6. terminological logics [39];
7. temporal reasoning [26], graded persistence [13], time-stamped knowledge bases;
8. solving discrete optimization problems with min-max costs, reasoning with prioritized constraints [43], qualitative decision making [30];
9. reasoning with graded obligations and permissions.

Algorithmic issues in possibilistic logic are related to their counterparts in related logics or formalisms for reasoning with uncertainty, some of which are considered in this book: probabilistic logics (Jaumard and Hansen, chapter 8), Dempster-Shafer logics (Kohlas and Haenni, chapter 6), default logics (Mengin, chapter 3).

It is possible to reformulate possibilistic logic into a much more general framework which encompasses other weighted logics of uncertainty (among which several variants of probabilistic logic and of belief function logics, as well as more "exotic" logics of uncertainty). This gives rise to so-called "weighted logics" [21]. Intuitively, a weighted logic consists in the association of valuations from a given structure (generally a completely ordered lattice) to logical formulae; these valuations are required to satisfy a list of given axioms depending on the particular weighted logic (for instance, the axioms of possibility theory for possibilistic logic); furthermore, these valuations operate upon a given logical structure with which they are fully compatible (in this chapter, this logical structure was classical logic).
6.3 What next?

The study of general automated deduction algorithms for specific classes of weighted logics is a promising topic. However, it appears that possibilistic logic enjoys many of its computational properties thanks to the idempotency of the min operator (see [53] for a similar general work in a constraint satisfaction framework).

Lastly, a point which was not discussed in this chapter is the design of anytime and approximate algorithms for possibilistic logic. As pointed out in [57] in a belief revision context, many of the algorithms given in this chapter can be used in an anytime way (which means that they provide an output which gets better and better with the actual execution time). An experimental study of their efficiency remains to be done.

Appendix
We assume that the complexity classes NP and coNP are known to the reader. We recall here some other classes, located between the two latter ones and PSPACE. The reader may consult Papadimitriou [51] for an extensive presentation. The polynomial hierarchy (containing Δ2p, Σ2p and Π2p) was first defined by Stockmeyer [54]. These classes have proven to be of great interest for Artificial Intelligence, in particular for belief revision [49] [31] and nonmonotonic reasoning [7] [9] [37].

- DP is the set of all languages which are the intersection of a language in NP and a language in coNP. The canonical DP-complete problem is sat-unsat: given two propositional formulas φ, ψ, decide whether φ is satisfiable and ψ is not.
- Δ2p = P^NP [54] is the class of all decision problems that can be decided in polynomial time using NP-oracles; Δ2p(O(log n)) is the class of all problems which can be decided in polynomial time using logarithmically many NP-oracle calls.
- Σ2p = NP^NP [54] is the class of all decision problems that can be decided in polynomial time on a nondeterministic Turing machine using NP-oracles. The canonical Σ2p-complete problem is 2-qbf (where qbf stands for "Quantified Boolean Formula"): decide the validity of ∃a1...∃am ∀b1...∀bn F, where the ai's and bj's are Boolean variables and F is a propositional formula built on them.
- Π2p = coΣ2p. The canonical Π2p-complete problem is the dual problem: decide the validity of ∀a1...∀am ∃b1...∃bn F.

References
[1] Salem Benferhat. Raisonnement non-monotone et traitement de l'inconsistance en logique possibiliste. PhD Thesis, Université Paul Sabatier, 1994.
[2] Salem Benferhat, Claudette Cayrol, Didier Dubois, Jérôme Lang and Henri Prade. Inconsistency management and prioritized syntax-based entailment. Proc. of IJCAI'93, 640-645.
[3] Salem Benferhat, Didier Dubois and Henri Prade. Default rules and possibilistic logic. Proc. of KR'92, 673-684.
[4] Salem Benferhat, Didier Dubois and Henri Prade. From semantic to syntactic approaches to information combination in possibilistic logic. To appear in Aggregation of Evidence under Fuzziness (B. Bouchon-Meunier, ed.), Physica Verlag. Short version in Proc. of ECSQARU'97.

[5] Philippe Besnard and Jérôme Lang. Possibility and necessity functions over non-classical logics. Proc. of UAI'94.
[6] Luca Boldrin and Claudio Sossai. An algebraic semantics for possibilistic logic. Proc. of UAI'95, 27-35.
[7] Marco Cadoli and Marco Schaerf. A survey of complexity results for nonmonotonic logics. Journal of Logic Programming 17:127-160, 1993.
[8] Didier Cayrac, Didier Dubois and Henri Prade. Practical model-based diagnosis with qualitative possibilistic uncertainty. Proc. of UAI'95, 68-76.
[9] Claudette Cayrol and Marie-Christine Lagasquie-Schiex. Nonmonotonic syntax-based entailment: a classification of consequence relations. Proc. of ECSQARU'95 (C. Froidevaux, J. Kohlas, eds.), Springer Verlag, 107-114.
[10] C.C. Chang, R.C.T. Lee. Symbolic Logic and Mechanical Theorem Proving. Academic Press, 1973.
[11] M. Davis, H. Putnam. A computing procedure for quantification theory. Journal of the ACM 7 (1960), 201-215.
[12] Célia Da Costa Pereira, Frédérick Garcia, Jérôme Lang and Roger Martin-Clouaire. Planning with graded nondeterministic actions: a possibilistic approach. To appear in International Journal of Intelligent Systems.
[13] Dimiter Driankov and Jérôme Lang. Possibilistic decreasing persistence. Proc. of UAI'93.
[14] Didier Dubois, Henri Prade. Necessity measures and the resolution principle. IEEE Trans. on Systems, Man and Cybernetics, 17:474-478, 1987.
[15] Didier Dubois, Henri Prade. An introduction to possibilistic and fuzzy logics (with discussions and reply). In Non-Standard Logics for Automated Reasoning (P. Smets, A. Mamdani, D. Dubois, H. Prade, eds.), Academic Press, 287-315 & 321-326.
[16] Didier Dubois, Henri Prade. Resolution principles in possibilistic logic. Int. Journ. of Approximate Reasoning, 4(1):1-21, 1990.
[17] Didier Dubois, Henri Prade. Epistemic entrenchment and possibilistic logic. Artificial Intelligence, 50:223-239, 1991.
[18] Didier Dubois, Henri Prade. Possibilistic logic, preferential models, nonmonotonicity and related issues. Proc. of IJCAI'91, 419-424.
[19] Didier Dubois, Henri Prade. Belief change and possibility theory. In P. Gärdenfors, ed., Belief Revision, 142-182, Cambridge University Press, 1992.


[20] Didier Dubois, Florence Dupin de Saint-Cyr and Henri Prade. Updating, transition constraints and possibilistic Markov chains. Lecture Notes in Computer Science 945 (B. Bouchon-Meunier, R.R. Yager, L. Zadeh, eds.), Springer-Verlag, 1994, 263-272.
[21] Didier Dubois, Florence Dupin de Saint-Cyr, Jérôme Lang, Henri Prade and Thomas Schiex. Weighted logics of uncertainty. In preparation.
[22] Didier Dubois, Hélène Fargier and Henri Prade. Possibility theory in constraint satisfaction problems: handling priority, preference and uncertainty. Applied Intelligence 6, 287-309, 1996.
[23] Didier Dubois, Jérôme Lang, Henri Prade. Theorem proving under uncertainty: a possibility theory-based approach. Proc. of IJCAI'87, 484-486.
[24] Didier Dubois, Jérôme Lang, Henri Prade. Automated reasoning using possibilistic logic: semantics, belief revision, and variable certainty weights. IEEE Trans. on Knowledge and Data Engineering, 1994.
[25] Didier Dubois, Jérôme Lang, Henri Prade. Fuzzy sets in approximate reasoning - Part II: logical approaches. Fuzzy Sets and Systems, 40:203-244, 1991.
[26] Didier Dubois, Jérôme Lang, Henri Prade. Timed possibilistic logic. Fundamenta Informaticae, XV:211-234, 1991.
[27] Didier Dubois, Jérôme Lang, Henri Prade. Towards possibilistic logic programming. Proc. of ICLP'91, 581-595.
[28] Didier Dubois, Jérôme Lang, Henri Prade. Dealing with multi-source information in possibilistic logic. Proc. of ECAI'92, 38-42.
[29] Didier Dubois, Jérôme Lang, Henri Prade. Possibilistic logic. Handbook of Logic in Artificial Intelligence and Logic Programming (D.M. Gabbay, C.J. Hogger, J.A. Robinson, eds.), Vol. 3, 439-513, Oxford University Press.
[30] D. Dubois, H. Prade, R. Sabbadin. A possibilistic logic machinery for qualitative decision. Proc. of the AAAI 1997 Spring Symposium Series (Qualitative Preferences in Deliberation and Practical Reasoning), Stanford University, California, March 24-26, 1997.
[31] Thomas Eiter and Georg Gottlob. On the complexity of propositional knowledge base revisions, updates, and counterfactuals. Artificial Intelligence 57:227-270, 1992.
[32] Henri Farreny. Recherche heuristiquement ordonnée dans les graphes d'états. Masson, 1995. English version as Technical Report, IRIT, Université Paul Sabatier, Toulouse, 1997.
[33] Luis Fariñas del Cerro, Andreas Herzig, Jérôme Lang. From ordering-based nonmonotonic reasoning to conditional logics. Artificial Intelligence 66 (1994), 375-393.
[34] Christine Froidevaux, Christine Grossetête. Graded default theories for uncertainty. Proc. of ECAI'90, 283-288.

[35] Christine Froidevaux, Jérôme Mengin. A theorem prover for free graded default theories.
[36] Dov Gabbay. Labelled Deductive Systems. Oxford University Press, 1991.
[37] Georg Gottlob. Complexity results for nonmonotonic logics. Journal of Logic and Computation, 2(3):397-425, 1992.
[38] Serge Jeannicot, Laurent Oxusoff, Antoine Rauzy. Évaluation sémantique: une propriété de coupure pour rendre efficace la procédure de Davis et Putnam. Revue d'Intelligence Artificielle, 2(1):41-60, 1988.
[39] Bernhard Hollunder. An alternative proof method for possibilistic logic and its application to terminological logics. Proc. of UAI'94, 327-335.
[40] D.E. Knuth and R.W. Moore. An analysis of alpha-beta pruning. Artificial Intelligence 6 (1975), 293-326.
[41] Jérôme Lang. Semantic evaluation in possibilistic logic. In Uncertainty in Knowledge-Based Systems (B. Bouchon, R. Yager, L. Zadeh, eds.), Lecture Notes in Computer Science, Vol. 521, Springer Verlag, 1991, 260-268.
[42] Jérôme Lang. Logique possibiliste: aspects formels, déduction automatique et applications. PhD Thesis, Université Paul Sabatier, 1991.
[43] Jérôme Lang. Possibilistic logic as a framework for min-max discrete optimisation problems and prioritized constraints. Fundamentals of Artificial Intelligence Research, 112-126, Lecture Notes in Computer Science, Vol. 535, 1991.
[44] Jérôme Lang, Didier Dubois and Henri Prade. A logic of graded possibility and certainty coping with partial inconsistency. Proc. of UAI'91, 188-196.
[45] R.C.T. Lee. Fuzzy logic and the resolution principle. Journal of the ACM, 19:109-119, 1972.
[46] Churn-Jung Liau and Bertrand I-Peng Lin. Possibilistic reasoning: a mini-survey and uniform semantics. Artificial Intelligence 88 (1996), 163-193.
[47] Donald W. Loveland. Automated Theorem Proving: A Logical Basis. North-Holland, 1978.
[48] Bernhard Nebel. Belief revision and default reasoning: syntax-based approaches. Proc. of KR'91, 417-428.
[49] Bernhard Nebel. How hard is it to revise a belief base? To appear.
[50] Nils J. Nilsson. Probabilistic logic. Artificial Intelligence, 28:71-87, 1986.
[51] Christos H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
[52] Alessandro Saffiotti. A belief function logic. Proc. of AAAI'91.
[53] Thomas Schiex, Hélène Fargier and Gérard Verfaillie. Valued constraint satisfaction problems. Proc. of IJCAI'95.

[54] Larry Stockmeyer. The polynomial-time hierarchy. Theoretical Computer Science 3:1-22, 1977.
[55] Gerd Wagner. A logical reconstruction of fuzzy inference in databases and logic programs. Proc. of IFSA'97.
[56] Gerd Wagner. Negation in fuzzy and possibilistic logic programs. To appear.
[57] Mary-Anne Williams. Anytime belief revision. Proc. of IJCAI'97.
[58] Ronald R. Yager. Paths of least resistance in possibilistic production systems. Fuzzy Sets and Systems 10:121-132, 1986.
[59] Lotfi A. Zadeh. Fuzzy sets. Information and Control, 8:338-353, 1965.
[60] Lotfi A. Zadeh. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems 1(1):3-28, 1978.

