
CS 401: Artificial Intelligence

Propositional Logic
Semantics and Inference

Instructor: Wasif Altaf


Thanks to the AI team at UC Berkeley for openly providing the lecture materials.
Today
 Chapter 7
 7.4 (Propositional Logic)
 Prolog (by Monday, 6th May 2015)
 Slides Chapter 1
 Practice
Knowledge
 Knowledge base = set of sentences in a formal language
 Declarative approach to building an agent (or other system):
 Tell it what it needs to know (or have it Learn the knowledge)
 Then it can Ask itself what to do—answers should follow from the KB
 Agents can be viewed at the knowledge level
i.e., what they know, regardless of how implemented
 A single inference algorithm can answer any answerable question
 Cf. a search algorithm answers only “how to get from A to B” questions
Knowledge base: domain-specific facts
Inference engine: generic code

Knowledge Based Agent
The Wumpus World
 The Wumpus world is a cave consisting of rooms connected by passageways
 A terrible Wumpus lives in the cave – a beast that eats anyone who enters its room
 The Wumpus can be shot by our agent, BUT our agent has only one arrow
 Some rooms contain pits; anyone who enters one falls in and dies
 A heap of gold lies in some room
The Wumpus World (Cont.)
 Performance Measure
 +1000 for climbing out of the cave with gold
 -1000 for falling into a pit, or for being eaten by the Wumpus
 -1 for each action
 -10 for using the only arrow
 The game ends when the agent dies, or climbs out of the cave
The Wumpus World (Cont.)
 Environment
 4 x 4 grid of rooms
 Random Wumpus location
 Random Gold Location
 The agent starts at [1, 1] facing right
The Wumpus World (Cont.)
 Actuators
 Move forward
 Turn Left by 90 Degrees
 Turn Right by 90 Degrees
 It is safe to enter the square of a dead Wumpus
 Grab to pick up the gold
 Shoot to fire the arrow in the direction the agent is facing
 The arrow continues until it hits the Wumpus or a wall
 Climb out of the cave from square [1, 1]
The Wumpus World (Cont.)
 Sensors
 In the square containing the wumpus and in the directly (not diagonally)
adjacent squares, the agent will perceive a Stench.
 In the squares directly adjacent to a pit, the agent will perceive a
Breeze.
 In the square where the gold is, the agent will perceive a Glitter.
 When an agent walks into a wall, it will perceive a Bump.
 When the wumpus is killed, it emits a woeful Scream that can be
perceived anywhere in the cave.
 Percepts are given as a five-element list, e.g., [Stench, Breeze, None, None, None]
The Wumpus World (Cont.)
 Environment State Dynamics
 Static or Dynamic?
 Discrete or Continuous?
 Multi or single agent?
 Episodic or sequential?
 Observable?
 Partially?
 Fully?
The Wumpus World (Cont.)
 What if Gold is surrounded by Pits on all paths?
 What if the gold is in a pit?
The Wumpus World (Cont.)
 A marks where the agent is
 OK marks a square known to be safe
 [1, 1] is OK
 [None, None, None, None, None]
 [1, 2] is OK
 [2, 1] is OK
 Agent moves to [2, 1]
 Agent senses Breeze
 A pit can’t be in [1, 1] (the agent has already been there safely)
 A pit must be in [2, 2] or [3, 1]
 Where should agent move?
The Wumpus World (Cont.)
 Prudent agent moves back to [1, 1]
 And then moves to [1,2]
 Stench in [1, 2] – Wumpus nearby
 Wumpus not in [1, 1]
 Wumpus not in [2,2]
 Because of no stench in [2,1]
 The agent can infer that the Wumpus is in [1, 3]
 W! marks the inferred Wumpus location
 The agent also infers there must be a pit in [3, 1]
 (the lack of breeze in [1, 2] rules out a pit in [2, 2])
 What’s next?
Intelligence
 Intelligence requires
 Knowledge
 Reasoning
 Intelligence implies
 Deducing facts that are not explicitly stated in the knowledge base
 Humans have consciousness
Reasoning
 Multiple forms of logical reasoning
 Deduction
 Induction
 Deduction
 Process of reasoning in which conclusions follow necessarily from
premises
 A, A => B, conclude B
 A is true
 If A is true then B is true
 Therefore conclude B is true
Deductive Reasoning
 Deduction
 Process of reasoning in which conclusions follow necessarily from
premises
 A, A => B, conclude B
 A is true
 If A is true then B is true
 Therefore conclude B is true
Deduction
 Example
 W[3,1] : the Wumpus is in row 3, column 1
 S[3,2] : there is a Stench in row 3, column 2
 W[3,1], W[3,1] => S[3,2], conclude S[3,2]
 W[3,1] is true
 If W[3,1] is true then S[3,2] is true
 Therefore conclude S[3,2] is true: there is a Stench in row 3, column 2
Induction
 Induction
 Process of reasoning where inference is drawn from particular
presented instances
 Example
 A tile from the bag is white
 Another tile from the bag is white
 Yet another tile from the bag is white
 Hence, all the tiles from the bag are white
Knowledge Representation for KB Agents
 Knowledge Base
 Formal knowledge representation techniques
 Inference Engine
 Inference algorithms
 Manipulate the knowledge base
 Logic
 Valid reasoning
Backus-Naur Form – Propositional Logic
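The BNF figure on this slide did not survive extraction. In the standard grammar (cf. AIMA Figure 7.7), a Sentence is either an AtomicSentence (True, False, or a proposition symbol P, Q, R, …) or a ComplexSentence built from sentences using ¬, ∧, ∨, ⇒, ⇔ and parentheses; operator precedence runs ¬, ∧, ∨, ⇒, ⇔ from highest to lowest.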
Propositional Logic - Examples
 Propositional logic
 Syntax: P ∧ ¬(Q ∨ ¬R); X1 ⇔ (Raining ⇒ Sunny)
 Possible world: {P=true, Q=true, R=false, S=true} or 1101
 Semantics: α ∧ β is true in a world iff α is true and β is true (etc.)
 Semantics
 W[1,2]
 S[1,2]
 P: Pakistan is a peaceful country
 Q: I like quality fish
Solution
P Q P PQ P  Q PQ
F F T F F T
F T T F T T
T F F F F F
T T F T F T

 Semantics
 P: Pakistan is a peaceful country
 Q: I like quality fish
 What matters is the validity of the form, not the actual content
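These columns can be checked mechanically. A minimal Python sketch (the helper name implies is my own choice, not from the slides) that enumerates every world over P and Q and prints the table above:

# Enumerate every world over P, Q and print the columns from the table above.
from itertools import product

def implies(a, b):
    # Material implication: false only when a is true and b is false.
    return (not a) or b

print("P      Q      notP   P&Q    notP&Q P=>Q")
for P, Q in product([False, True], repeat=2):
    row = [P, Q, not P, P and Q, (not P) and Q, implies(P, Q)]
    print("  ".join(f"{v!s:<5}" for v in row))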
Logic
 Syntax: What sentences are allowed?
 Semantics:
 What are the possible worlds?
 Which sentences are true in which worlds? (i.e., definition of truth)

1
2 3
Syntaxland Semanticsland
Examples
 Propositional logic
 Syntax: P ∧ ¬(Q ∨ ¬R); X1 ⇔ (Raining ⇒ Sunny)
 Possible world: {P=true, Q=true, R=false, S=true} or 1101
 Semantics: α ∧ β is true in a world iff α is true and β is true (etc.)
 First-order logic
 Syntax: ∀x ∃y P(x,y) ∧ Q(Joe, f(x)) ⇒ f(x)=f(y)
 Possible world: Objects o1, o2, o3; P holds for <o1,o2>; Q holds for <o3>;
f(o1)=o1; Joe=o3; etc.
 Semantics: α(σ) is true in a world if σ=oj and α holds for oj; etc.
Inference: entailment
 Entailment: α |= β (“α entails β” or “β follows from α”) iff in
every world where α is true, β is also true
 I.e., the α-worlds are a subset of the β-worlds [models(α) ⊆ models(β)]
 In the example, α2 |= α1
 (Say α2 is Q ∧ R ∧ S ∧ W and α1 is Q)
[Diagram: the set models(α2) nested inside models(α1)]
Inference: proofs
 A proof is a demonstration of entailment between α and β
 Method 1: model-checking
 For every possible world, if α is true make sure that β is true too
 OK for propositional logic (finitely many worlds); not easy for first-order logic
 Method 2: theorem-proving
 Search for a sequence of proof steps (applications of inference rules) leading
from α to β
 E.g., from P ∧ (P ⇒ Q), infer Q by Modus Ponens
 Sound algorithm: everything it claims to prove is in fact entailed
 Complete algorithm: every α that is entailed can be proved
Quiz
 What’s the connection between complete inference algorithms and
complete search algorithms?
 Answer 1: they both have the words “complete…algorithm”
 Answer 2: they both solve any solvable problem
 Answer 3: Formulate inference as a search problem
 Initial state: KB contains α
 Actions: apply any inference rule that matches KB, add conclusion
 Goal test: KB contains β
Hence any complete search algorithm yields a complete inference algorithm
Propositional logic syntax: The gruesome details
 Given: a set of proposition symbols {X1,X2,…, Xn}
 (we often add True and False for convenience)
 Xi is a sentence
 If α is a sentence then ¬α is a sentence
 If α and β are sentences then α ∧ β is a sentence
 If α and β are sentences then α ∨ β is a sentence
 If α and β are sentences then α ⇒ β is a sentence
 If α and β are sentences then α ⇔ β is a sentence
 And p.s. there are no other sentences!
Propositional logic semantics: The unvarnished truth
function PL-TRUE?(α, model) returns true or false
  if α is a symbol then return Lookup(α, model)
  if Op(α) = ¬ then return not(PL-TRUE?(Arg1(α), model))
  if Op(α) = ∧ then return and(PL-TRUE?(Arg1(α), model),
                               PL-TRUE?(Arg2(α), model))
  etc.

(Sometimes called “recursion over syntax”)
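A runnable sketch of this recursion over syntax in Python, with sentences encoded as nested tuples (the encoding and the function name pl_true are my own choices, not part of the slides):

# A sentence is either a proposition symbol (a string) or a tuple such as
# ('not', s), ('and', s1, s2), ('or', s1, s2), ('=>', s1, s2), ('<=>', s1, s2).
def pl_true(sentence, model):
    if isinstance(sentence, str):                  # symbol: look it up in the model
        return model[sentence]
    op, *args = sentence
    vals = [pl_true(arg, model) for arg in args]   # recursion over syntax
    if op == 'not': return not vals[0]
    if op == 'and': return vals[0] and vals[1]
    if op == 'or':  return vals[0] or vals[1]
    if op == '=>':  return (not vals[0]) or vals[1]
    if op == '<=>': return vals[0] == vals[1]
    raise ValueError('unknown operator: ' + op)

# P ∧ ¬(Q ∨ ¬R) in the world {P=true, Q=true, R=false} evaluates to False.
world = {'P': True, 'Q': True, 'R': False}
print(pl_true(('and', 'P', ('not', ('or', 'Q', ('not', 'R')))), world))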


Example: Partially observable Pacman
 Pacman perceives just the local walls/gaps
 Formulation: what variables do we need?
 Pacman’s location
 At_1,1_0 (Pacman is at [1,1] at time 0) At_3,3_1 etc
 Wall locations
 Wall_0,0 Wall_0,1 etc
 Percepts
 Blocked_W_0 (blocked by wall to my West at time 0) etc.
 Actions
 W_0 (Pacman moves West at time 0), E_0 etc.
 N x N world for T time steps => N²T + N² + 4T + 4T = O(N²T) variables
 2^(N²T) possible worlds! N=10, T=100 => roughly 10^3000 worlds (each is a “history”)
Sensor model
 State facts about how Pacman’s percepts arise…
 If there is a wall to the West, Pacman perceives it
 If at time t he is in x,y and there is a wall at x-1,y,
Pacman perceives a wall to the west at time t
 At_1,1_0 ∧ Wall_0,1 ⇒ Blocked_W_0
 At_1,1_1 ∧ Wall_0,1 ⇒ Blocked_W_1


….
Not so fast……
At_3,3_9 ∧ Wall_3,4 ⇒ Blocked_N_9
What’s wrong???
 An implication sentence says nothing about the RHS
when the LHS is false
 PeaceInIraq ⇒ HillaryElectedIn2008 is TRUE when
PeaceInIraq is false
 Does “if you submit after the deadline it will be rejected”
imply “if you submit before the deadline it won’t be rejected”?
 We need to say when percept variables are false as well
as when they are true
 Percept ⇔ <some condition on world>
Sensor model
 State facts about how Pacman’s percepts arise…
 Pacman perceives a wall to the West at time t
if and only if he is in x,y and there is a wall at x-1,y
 Blocked_W_0 ⇔ ((At_1,1_0 ∧ Wall_0,1) ∨
(At_1,2_0 ∧ Wall_0,2) ∨
(At_1,3_0 ∧ Wall_0,3) ∨ … )

(Given these axioms, plus a sequence of locations and percepts,
Pacman can infer the complete map…. To be continued!)
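To see how many such axioms a real KB needs, here is a small illustrative Python generator for the Blocked_W_t axiom over an N x N grid (the string format mirrors the slide's variable names; the function itself is my own sketch):

# Build the sensor axiom "blocked to the West at time t" for an N x N world:
# Blocked_W_t <=> OR over all squares (x, y) of (At_x,y_t AND Wall_x-1,y).
def blocked_west_axiom(N, t):
    disjuncts = [f"(At_{x},{y}_{t} & Wall_{x-1},{y})"
                 for x in range(1, N + 1) for y in range(1, N + 1)]
    return f"Blocked_W_{t} <=> (" + " v ".join(disjuncts) + ")"

print(blocked_west_axiom(2, 0))
# -> Blocked_W_0 <=> ((At_1,1_0 & Wall_0,1) v (At_1,2_0 & Wall_0,2) v (At_2,1_0 & Wall_1,1) v (At_2,2_0 & Wall_1,2))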
Inference (reminder)
 Method 1: model-checking
 For every possible world, if α is true make sure that β is true too
 Method 2: theorem-proving
 Search for a sequence of proof steps (applications of inference rules) leading
from α to β
 Sound algorithm: everything it claims to prove is in fact entailed
 Complete algorithm: every α that is entailed can be proved
Simple theorem proving: Forward chaining
 Forward chaining applies Modus Ponens to generate new facts:
 Given X1 ∧ X2 ∧ … ∧ Xn ⇒ Y and X1, X2, …, Xn
 Infer Y
 Forward chaining keeps applying this rule, adding new facts, until
nothing more can be added
 Requires KB to contain only definite clauses:
 (Conjunction of symbols) ⇒ symbol; or
 A single symbol (note that X is equivalent to True ⇒ X)
Forward chaining algorithm
function PL-FC-ENTAILS?(KB, q) returns true or false
  count ← a table, where count[c] is the number of symbols in c’s premise
  inferred ← a table, where inferred[s] is initially false for all s
  agenda ← a queue of symbols, initially symbols known to be true in KB
  while agenda is not empty do
    p ← Pop(agenda)
    if p = q then return true
    if inferred[p] = false then
      inferred[p] ← true
      for each clause c in KB where p is in c.premise do
        decrement count[c]
        if count[c] = 0 then add c.conclusion to agenda
  return false
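A runnable Python version of this pseudocode (the clause representation and function name are my own; the KB in the demo is the one traced on the next slide):

from collections import deque

def pl_fc_entails(clauses, facts, q):
    """clauses: list of (premises, conclusion); facts: symbols known true in KB."""
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}  # unsatisfied premises
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, concl) in enumerate(clauses):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:
                        agenda.append(concl)
    return False

# KB from the example below: P=>Q, L&M=>P, B&L=>M, A&P=>L, A&B=>L, plus facts A, B.
kb = [({'P'}, 'Q'), ({'L', 'M'}, 'P'), ({'B', 'L'}, 'M'),
      ({'A', 'P'}, 'L'), ({'A', 'B'}, 'L')]
print(pl_fc_entails(kb, ['A', 'B'], 'Q'))   # -> True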
Forward chaining example: Proving Q
 KB (definite clauses and facts):
  P ⇒ Q
  L ∧ M ⇒ P
  B ∧ L ⇒ M
  A ∧ P ⇒ L
  A ∧ B ⇒ L
  A
  B
 [Animated trace: each clause’s COUNT of unsatisfied premises is decremented as symbols
 are inferred, the INFERRED table flips symbols from false to true, and the AGENDA
 processes A, B, L, M, P, Q in turn until Q is popped and the query is proved]
Properties of forward chaining
 Theorem: FC is sound and complete for definite-clause KBs
 Soundness: follows from soundness of Modus Ponens (easy to check)
 Completeness proof:
1. FC reaches a fixed point where no new atomic sentences are derived
2. Consider the final inferred table as a model m, assigning true/false to symbols
3. Every clause in the original KB is true in m
   Proof: Suppose a clause a1 ∧ … ∧ ak ⇒ b is false in m
   Then a1 ∧ … ∧ ak is true in m and b is false in m
   Therefore the algorithm has not reached a fixed point!
4. Hence m is a model of KB
5. If KB |= q, q is true in every model of KB, including m
[Figure: the final inferred table, with A, B, L, M, P, Q all flipped from false to true]
Simple model checking
function TT-ENTAILS?(KB, α) returns true or false
  return TT-CHECK-ALL(KB, α, symbols(KB) ∪ symbols(α), {})

function TT-CHECK-ALL(KB, α, symbols, model) returns true or false
  if empty?(symbols) then
    if PL-TRUE?(KB, model) then return PL-TRUE?(α, model)
    else return true
  else
    P ← first(symbols)
    rest ← rest(symbols)
    return and(TT-CHECK-ALL(KB, α, rest, model ∪ {P = true}),
               TT-CHECK-ALL(KB, α, rest, model ∪ {P = false}))
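A direct Python transcription, reusing the pl_true sketch from the PL-TRUE? slide (the helper symbols_in and the tuple encoding are again my own assumptions):

def symbols_in(sentence):
    # Collect all proposition symbols appearing in a tuple-encoded sentence.
    if isinstance(sentence, str):
        return {sentence}
    return set().union(*(symbols_in(arg) for arg in sentence[1:]))

def tt_entails(kb, alpha):
    def check_all(syms, model):
        if not syms:
            # Only worlds where KB is true need to make alpha true.
            return pl_true(alpha, model) if pl_true(kb, model) else True
        p, *rest = syms
        return (check_all(rest, {**model, p: True}) and
                check_all(rest, {**model, p: False}))
    return check_all(sorted(symbols_in(kb) | symbols_in(alpha)), {})

# Does the KB  P ∧ (P ⇒ Q)  entail Q?
print(tt_entails(('and', 'P', ('=>', 'P', 'Q')), 'Q'))   # -> True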
Simple model checking, contd.
 Same recursion as backtracking search
 O(2^n) time, linear space
 We can do much better!
[Figure: a binary tree branching on P1=true/false, P2=true/false, …, Pn=true/false;
each of the 2^n leaves (000…0 to 111…1) is a model checked against KB and α]
Satisfiability and entailment
 A sentence is satisfiable if it is true in at least one world (cf.
CSPs!)
 Suppose we have a hyper-efficient SAT solver; how can we use it
to test entailment?
 Suppose α |= β
 Then α ⇒ β is true in all worlds
 Hence ¬(α ⇒ β) is false in all worlds
 Hence α ∧ ¬β is false in all worlds, i.e., unsatisfiable
 So, add the negated conclusion ¬β to what you know, test for
(un)satisfiability; also known as reductio ad absurdum
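In code the reduction is a one-liner: α entails β exactly when α ∧ ¬β has no satisfying world. A brute-force sketch reusing the pl_true and symbols_in helpers from the earlier snippets (a real system would call an efficient SAT solver such as DPLL instead):

from itertools import product

def satisfiable(sentence):
    # Brute force: try every assignment to the symbols in the sentence.
    syms = sorted(symbols_in(sentence))
    return any(pl_true(sentence, dict(zip(syms, vals)))
               for vals in product([True, False], repeat=len(syms)))

def entails(alpha, beta):
    # Reductio ad absurdum: alpha |= beta  iff  alpha ∧ ¬beta is unsatisfiable.
    return not satisfiable(('and', alpha, ('not', beta)))

print(entails(('and', 'P', ('=>', 'P', 'Q')), 'Q'))   # -> True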
Conjunctive normal form (CNF)
 Every sentence can be expressed as a conjunction of clauses
 Each clause is a disjunction of literals
 Each literal is a symbol or a negated symbol
 Conversion to CNF by a sequence of standard transformations:
  Replace α ⇔ β by (α ⇒ β) ∧ (β ⇒ α)  (replace biconditional by two implications)
  Replace α ⇒ β by ¬α ∨ β
  Distribute ∨ over ∧
 Example:
  At_1,1_0 ⇒ (Wall_0,1 ⇔ Blocked_W_0)
  At_1,1_0 ⇒ ((Wall_0,1 ⇒ Blocked_W_0) ∧ (Blocked_W_0 ⇒ Wall_0,1))
  ¬At_1,1_0 ∨ ((¬Wall_0,1 ∨ Blocked_W_0) ∧ (¬Blocked_W_0 ∨ Wall_0,1))
  (¬At_1,1_0 ∨ ¬Wall_0,1 ∨ Blocked_W_0) ∧ (¬At_1,1_0 ∨ ¬Blocked_W_0 ∨ Wall_0,1)
Efficient SAT solvers
 DPLL (Davis-Putnam-Logemann-Loveland) is the core of modern solvers
 Essentially a backtracking search over models with some extras:
 Early termination: stop if
  all clauses are satisfied; e.g., (A ∨ B) ∧ (A ∨ C) is satisfied by {A=true}
  any clause is falsified; e.g., (A ∨ B) ∧ (A ∨ C) is falsified by {A=false, B=false}
 Pure literals: if all occurrences of a symbol in as-yet-unsatisfied clauses have
the same sign, then give the symbol that value
  E.g., A is pure and positive in (A ∨ B) ∧ (A ∨ ¬C) ∧ (C ∨ ¬B), so set it to true
 Unit clauses: if a clause is left with a single literal, set the symbol to satisfy the clause
  E.g., if A=false, (A ∨ B) ∧ (A ∨ C) becomes (false ∨ B) ∧ (false ∨ C), i.e. (B) ∧ (C)
 Satisfying the unit clauses often leads to further propagation, new unit clauses, etc.
DPLL algorithm
function DPLL(clauses, symbols, model) returns true or false
  if every clause in clauses is true in model then return true
  if some clause in clauses is false in model then return false
  P, value ← FIND-PURE-SYMBOL(symbols, clauses, model)
  if P is non-null then return DPLL(clauses, symbols–P, model ∪ {P=value})
  P, value ← FIND-UNIT-CLAUSE(clauses, model)
  if P is non-null then return DPLL(clauses, symbols–P, model ∪ {P=value})
  P ← First(symbols); rest ← Rest(symbols)
  return or(DPLL(clauses, rest, model ∪ {P=true}),
            DPLL(clauses, rest, model ∪ {P=false}))
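A compact runnable rendering of this pseudocode, with CNF clauses represented as dicts mapping each symbol to its sign (this representation and the code are my own sketch, not the course's reference implementation):

def dpll(clauses, symbols, model):
    """clauses: list of dicts {symbol: sign}; e.g. {'A': True, 'B': False} means (A v ~B)."""
    unknown = []
    for c in clauses:
        assigned = [model[s] == sign for s, sign in c.items() if s in model]
        if any(assigned):                     # clause already satisfied
            continue
        if len(assigned) == len(c):           # every literal assigned, none true
            return False                      # early termination: clause falsified
        unknown.append(c)
    if not unknown:                           # early termination: all clauses satisfied
        return True
    # Pure symbol: appears with only one sign among the not-yet-satisfied clauses.
    signs = {}
    for c in unknown:
        for s, sign in c.items():
            if s not in model:
                signs.setdefault(s, set()).add(sign)
    for s, sgns in signs.items():
        if len(sgns) == 1:
            return dpll(clauses, [x for x in symbols if x != s], {**model, s: sgns.pop()})
    # Unit clause: only one unassigned literal left; it must be made true.
    for c in unknown:
        free = [(s, sign) for s, sign in c.items() if s not in model]
        if len(free) == 1:
            s, sign = free[0]
            return dpll(clauses, [x for x in symbols if x != s], {**model, s: sign})
    # Otherwise branch on the first remaining symbol.
    p, *rest = symbols
    return (dpll(clauses, rest, {**model, p: True}) or
            dpll(clauses, rest, {**model, p: False}))

# (A v B) ∧ (~A v C) ∧ (~B v ~C) is satisfiable, e.g. with A=true, B=false, C=true.
cnf = [{'A': True, 'B': True}, {'A': False, 'C': True}, {'B': False, 'C': False}]
print(dpll(cnf, ['A', 'B', 'C'], {}))   # -> True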
Efficiency
 Naïve implementation of DPLL: solve ~100 variables
 Extras:
 Variable and value ordering (from CSPs)
 Divide and conquer
 Caching unsolvable subcases as extra clauses to avoid redoing them
 Cool indexing and incremental recomputation tricks so that every step of the
DPLL algorithm is efficient (typically O(1))
 Index of clauses in which each variable appears +ve/-ve
 Keep track of the number of satisfied clauses; update when variables are assigned
 Keep track of number of remaining literals in each clause
 Real implementation of DPLL: solve ~10000000 variables
SAT solvers in practice
 Circuit verification: does this VLSI circuit compute the right answer?
 Software verification: does this program compute the right answer?
 Software synthesis: what program computes the right answer?
 Protocol verification: can this security protocol be broken?
 Protocol synthesis: what protocol is secure for this task?
 Planning: how can I eat all the dots???
Summary
 One possible agent architecture: knowledge + inference
 Logics provide a formal way to encode knowledge
 A logic is defined by: syntax, set of possible worlds, truth condition
 Logical inference computes entailment relations among sentences
 Next: a logical Pacman agent
