Title: Reasoning with Classical Propositional Logic
1. Reasoning with Classical Propositional Logic
2. Outline
- Syntax
  - Full CPL
  - Implicative Normal Form CPL (INFCPL)
  - Horn CPL (HCPL)
- Semantics
  - Cognitive and Herbrand interpretations, models
- Reasoning
  - FCPL Reasoning
    - Truth-table based model checking
    - Multiple inference rules
  - INFCPL Reasoning
    - Resolution and factoring
    - DPLL
    - WalkSat
  - HCPL Reasoning
    - Forward chaining
    - Backward chaining
3. Full Classical Propositional Logic (FCPL) syntax
Syntax
Example formula: ¬(a ∧ (b ∨ ((¬c → d) ↔ a) → b))
[FCPL syntax metamodel diagram: FCPLFormula]
4. CPL Normal Forms
Implicative Normal Form (INF)
[INF metamodel: a clause has a Premisse and a Conclusion, each over ConstantSymbols]
- Semantic equivalence
  - a ∧ b ⇒ c ∨ d
  - ¬(a ∧ b) ∨ c ∨ d
  - ¬a ∨ ¬b ∨ c ∨ d
Conjunctive Normal Form (CNF)
[CNF metamodel: a clause is a disjunction of Literals over ConstantSymbols]
5. Horn CPL
Implicative Normal Form (INF)
[INF metamodel: a clause has a Premisse and a Conclusion, each over ConstantSymbols]
- IntegrityConstraint
  - context IntegrityConstraint inv IC: Conclusion.ConstantSymbol = false
  - a ∧ b ∧ c ⇒ false
- DefiniteClause
  - context DefiniteClause inv DC: Conclusion.ConstantSymbol <> false
  - a ∧ b ∧ c ⇒ d
- Fact
  - context Fact inv Fact: Premisse->size() = 1 and Premisse.ConstantSymbol = true
  - true ⇒ d
Conjunctive Normal Form (CNF)
[CNF metamodel: a clause is a disjunction of Literals over ConstantSymbols]
- IntegrityConstraint
  - context IntegrityConstraint inv IC: Literal->forAll(oclIsKindOf(NegativeLiteral))
  - ¬a ∨ ¬b ∨ ¬c
- DefiniteClause
  - context DefiniteClause inv DC: Literal->select(oclIsKindOf(ConstantSymbol))->size() = 1
  - ¬a ∨ ¬b ∨ ¬c ∨ d
- Fact
  - context Fact inv Fact: Literal->forAll(oclIsKindOf(ConstantSymbol))
  - d
6. FCPL semantics: cognitive interpretation
Syntax
Example formula: ¬(a ∧ (b ∨ ((¬c → d) ↔ a) → b))
[FCPL syntax metamodel: an FCPLFormula is composed, via Functor and Arg roles, of ConstantSymbols and FCPLConnectives of arity 1..2]
Semantics: FCPLCognitiveInterpretation
- csm1(pitIn12): the agent knows there is a pit at coordinates (1,2); csm2(pitIn12): John is the King of England
- fm1(pitIn12 ∧ ¬pitIn11): the agent knows there is a pit at coordinates (1,2) and no pit at coordinates (1,1); fm2(pitIn12 ∧ ¬pitIn11): John is the King of England and John is not the King of France
7. FCPL semantics: Herbrand interpretation
Syntax
Example formula: ¬(a ∧ (b ∨ ((¬c → d) ↔ a) → b))
[FCPL syntax metamodel: an FCPLFormula is composed, via Functor and Arg roles, of ConstantSymbols and FCPLConnectives of arity 1..2]
Semantics: FCPLHerbrandInterpretation
- cv1(pitIn12) = true, cv1(pitIn11) = true, ...; cv2(pitIn12) = true, cv2(pitIn11) = false, ...
- fv1(pitIn12 ∧ ¬pitIn11) = true, fv1(pitIn12 ∨ pitIn11) = true, ...; fv2(pitIn12 ∧ ¬pitIn11) = true, fv2(pitIn12 ∨ pitIn11) = false, ...
8. FCPL semantics
Syntax
Example formula: ¬(a ∧ (b ∨ ((¬c → d) ↔ a) → b))
[FCPL semantics metamodel relating the syntax classes (FCPLFormula, ConstantSymbol, FCPLConnective, Arg, Functor) to ConstantValuation, FormulaValuation, FCPLHerbrandInterpretation, FCPLHerbrandModel, ConstantMapping, FormulaMapping, FCPLCognitiveInterpretation, AtomicDomainProperty, and CompoundDomainProperty]
Semantics
9. Entailment and models
- Entailment ⊨
  - f ⊨ g iff ∀Hi, Hi(f) = true ⇒ Hi(g) = true
- Logical equivalence ≡
  - f ≡ g iff f ⊨ g and g ⊨ f
- Herbrand model
  - A Herbrand interpretation Hi is a (Herbrand) model of formula f iff its truth value for f corresponds to the application of the truth-table definitions of the FCPL connectives to the truth values in Hi of the constant symbols that compose f
- f is valid (a tautology) iff it is true in all Hi(f), e.g., a ∨ ¬a
- f is satisfiable iff it is true in at least one Hi(f)
- f is unsatisfiable (a contradiction) iff it is false in all Hi(f), e.g., a ∧ ¬a
10. Logic-Based Agent
Given B as axioms, is formula f a theorem of L (B ⊢L f)?
- ⇔ B → f is valid in L (Boolean CSP search proof)
- ⇔ B ∧ ¬f is unsatisfiable in L (Refutation proof)
[Agent architecture: the agent perceives the Environment through Sensors and acts on it through Actuators; an Inference Engine (theorem prover for logic L) answers Ask queries against a Knowledge Base B (domain model in logic L) updated through Tell and Retract]
- Strengths
  - Reuse results and insights about correct reasoning that matured over 23 centuries
  - Semantics (meaning) of a knowledge base can be represented formally as syntax, a key step towards automating reasoning
11. Truth-table based model checking
- To answer Ask(α)
  - Enumerate all Hi from the domain proposition alphabet
  - Use the truth table to compute Mh(KB) and Mh(α)
  - If Mh(KB) ⊆ Mh(α), then answer yes, else answer no (see the sketch below)
- Example
  - KB = ¬pit11 ∧ ¬breeze11 ∧ ¬pit12 ∧ breeze12
  - α1 = ¬pit21
  - α2 = ¬pit22
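A minimal sketch of this enumeration, assuming an illustrative formula encoding as nested tuples such as ('and', f, g), ('or', f, g), ('not', f), ('iff', f, g), or a bare proposition-symbol string (the encoding and function names are assumptions, not from the slides):

```python
from itertools import product

def symbols_of(f):
    """Collect the proposition symbols occurring in a formula."""
    if isinstance(f, str):
        return {f}
    return set().union(*(symbols_of(arg) for arg in f[1:]))

def holds(f, hi):
    """Truth-table evaluation of formula f under a Herbrand interpretation hi (dict symbol -> bool)."""
    if isinstance(f, str):
        return hi[f]
    op = f[0]
    if op == 'not':
        return not holds(f[1], hi)
    if op == 'and':
        return holds(f[1], hi) and holds(f[2], hi)
    if op == 'or':
        return holds(f[1], hi) or holds(f[2], hi)
    if op == 'implies':
        return (not holds(f[1], hi)) or holds(f[2], hi)
    if op == 'iff':
        return holds(f[1], hi) == holds(f[2], hi)
    raise ValueError(f"unknown connective {op}")

def tt_entails(kb, alpha):
    """KB |= alpha iff Mh(KB) is a subset of Mh(alpha): every interpretation
    that makes KB true also makes alpha true."""
    syms = sorted(symbols_of(kb) | symbols_of(alpha))
    for values in product([True, False], repeat=len(syms)):
        hi = dict(zip(syms, values))
        if holds(kb, hi) and not holds(alpha, hi):
            return False   # found a model of KB that is not a model of alpha
    return True

# Illustrative Wumpus-style query: no breeze at (1,1) plus the breeze rule entail no pit at (1,2)
kb = ('and', ('not', 'breeze11'), ('iff', 'breeze11', ('or', 'pit12', 'pit21')))
print(tt_entails(kb, ('not', 'pit12')))   # True
```

The enumeration is exponential in the number of proposition symbols, which motivates the inference-rule and resolution approaches on the following slides.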
12. FCPL inference rules
- Bi-directional (logical equivalences)
  - R1: f ∧ g ≡ g ∧ f
  - R2: f ∨ g ≡ g ∨ f
  - R3: (f ∧ g) ∧ h ≡ f ∧ (g ∧ h)
  - R4: (f ∨ g) ∨ h ≡ f ∨ (g ∨ h)
  - R5: ¬¬f ≡ f
  - R6: f → g ≡ ¬g → ¬f
  - R7: f → g ≡ ¬f ∨ g
  - R8: f ↔ g ≡ (f → g) ∧ (g → f)
  - R9: ¬(f ∧ g) ≡ ¬f ∨ ¬g
  - R10: ¬(f ∨ g) ≡ ¬f ∧ ¬g
  - R11: f ∧ (g ∨ h) ≡ (f ∧ g) ∨ (f ∧ h)
  - R12: f ∨ (g ∧ h) ≡ (f ∨ g) ∧ (f ∨ h)
  - R13: f ∨ f ≡ f (factoring)
- Directed (logical entailments)
  - R14: f → g, f ⊢ g (modus ponens)
  - R15: f → g, ¬g ⊢ ¬f (modus tollens)
  - R16: f ∧ g ⊢ f (and-elimination)
  - R17: l1 ∨ ... ∨ li ∨ ... ∨ lk, m1 ∨ ... ∨ mj-1 ∨ ¬li ∨ mj+1 ∨ ... ∨ mk ⊢ l1 ∨ ... ∨ li-1 ∨ li+1 ∨ ... ∨ lk ∨ m1 ∨ ... ∨ mj-1 ∨ mj+1 ∨ ... ∨ mk (resolution; see the sketch below)
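A minimal sketch of R17 (with R13 applied implicitly) over clauses encoded as frozensets of literal strings, negation marked with a '~' prefix; the encoding and function names are illustrative assumptions, not from the slides:

```python
def negate(lit):
    """Complement of a literal in the '~'-prefix encoding."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(ci, cj):
    """All clauses obtainable from ci and cj by one application of resolution (R17).
    Because clauses are sets, duplicate literals collapse, which applies factoring (R13)."""
    return [frozenset((ci - {lit}) | (cj - {negate(lit)}))
            for lit in ci if negate(lit) in cj]

# Resolving (P1,2 v B1,1) with (not B1,1) yields (P1,2)
print(resolvents(frozenset({'P12', 'B11'}), frozenset({'~B11'})))   # [frozenset({'P12'})]
```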
13. Multiple inference rule application
- Idea
  - KB ⊢ f ?
  - KB0 = KB
  - Apply an inference rule: KBi ⊢ g
  - Update: KBi+1 = KBi ∪ {g}
  - Iterate until f ∈ KBk, or until f ∉ KBn and KBn+1 = KBn
- Transforms proving KB ⊢ f into a search problem
- At each step
  - Which inference rule to apply?
  - To which (sub-)formula of KBi?
- Example proof
  - KB0 = ¬P1,1 ∧ (B1,1 ↔ P1,2 ∨ P2,1) ∧ (B2,1 ↔ P1,1 ∨ P2,2 ∨ P3,1) ∧ ¬B1,1 ∧ B2,1
  - Query: ¬(P1,2 ∨ P2,1)
  - Cognitive interpretation
    - BX,Y: the agent felt a breeze in coordinate (X,Y)
    - PX,Y: the agent knows there is a pit in coordinate (X,Y)
  - Apply R8 to B1,1 ↔ P1,2 ∨ P2,1: KB1 = KB0 ∪ {(B1,1 → (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) → B1,1)}
  - Apply R6 to the last sub-formula: KB2 = KB1 ∪ {¬B1,1 → ¬(P1,2 ∨ P2,1)}
  - Apply R14 to ¬B1,1 and the last sub-formula: KB3 = KB2 ∪ {¬(P1,2 ∨ P2,1)}
14. Resolution and factoring
- Repeated application of only two inference rules: resolution and factoring
- More efficient than using multiple inference rules
  - search space with a far smaller branching factor
- Refutation proof
  - Derive false from KB ∧ ¬Query
  - Requires both in normal form (conjunctive or implicative)
- Example proof in conjunctive normal form (see the refutation sketch below)
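A minimal refutation-by-saturation sketch in the same assumed clause encoding (frozensets of '~'-prefixed literal strings); it reuses the resolvents() helper from the inference-rule sketch above, and the example clauses are the CNF of part of the Wumpus KB from slide 13 plus the negated query:

```python
def resolution_refutation(clauses):
    """Saturate the clause set with resolvents until the empty clause (false) is
    derived (KB AND NOT Query unsatisfiable, so KB |= Query) or a fixed point is reached.
    Uses resolvents() from the inference-rule sketch above."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci != cj:
                    for r in resolvents(ci, cj):
                        if not r:
                            return True    # empty clause derived: refutation found
                        new.add(r)
        if new <= clauses:
            return False                   # no new clause: KB AND NOT Query is satisfiable
        clauses |= new

# CNF of (not B1,1) and (B1,1 <-> P1,2 v P2,1), plus the negated query (P1,2 v P2,1)
kb = [frozenset({'~B11'}),
      frozenset({'~B11', 'P12', 'P21'}),
      frozenset({'~P12', 'B11'}),
      frozenset({'~P21', 'B11'})]
negated_query = frozenset({'P12', 'P21'})
print(resolution_refutation(kb + [negated_query]))   # True: KB |= not(P1,2 v P2,1)
```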
15. Resolution strategies
- Search heuristics for resolution-based theorem proving
- Two heuristic classes
  - Choice of the clause pair to resolve inside the current KB
  - Choice of the literals to resolve inside the chosen clause pair
- Unit preference
  - Prefer pairs with one unit clause (i.e., a single literal)
  - Rationale: generates smaller clauses and eliminates much of the literal choice within the pair
  - Unit resolution turns the preference into a requirement
- Set of support (see the sketch below)
  - Define a small subset of the initial clauses as the initial set of support
  - At each step
    - Only consider clause pairs with one member from the current set of support
    - Add the step result to the set of support
  - Efficiency depends on the cleverness of the initial set of support
  - Common domain-independent initial set of support: the negated query
  - Beyond efficiency, results in easier to understand, goal-directed proofs
- Linear resolution
  - At each step only consider pairs (f,g) where f is either
    - (a) in KB0, or
    - (b) an ancestor of g in the proof tree
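A minimal sketch of the set-of-support strategy, again over clauses as frozensets of '~'-prefixed literal strings and reusing the resolvents() helper from the inference-rule sketch (names and encoding are assumptions):

```python
def sos_refutation(kb_clauses, negated_query):
    """Set-of-support resolution: every step resolves a clause from the support set
    (initially just the negated query) against a usable clause or another support
    clause; resolvents flow back into the support set."""
    usable, support = set(kb_clauses), {negated_query}
    while support:
        c = support.pop()
        for other in usable | support:
            for r in resolvents(c, other):     # resolvents() from the slide 12 sketch
                if not r:
                    return True                # empty clause: refutation found
                if r not in usable and r not in support:
                    support.add(r)
        usable.add(c)
    return False                               # support exhausted without the empty clause
```

Called with the kb and negated_query of the previous sketch, it derives the empty clause while only ever resolving against descendants of the negated query, which is what makes the resulting proof goal-directed.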
16. FCPL theorem proving as boolean CSP exhaustive global backtracking search
- Put f = KB ∧ ¬Query in conjunctive normal form
- Try to prove it unsatisfiable
- Consider each literal in f as a boolean variable
- Consider each clause in f as a constraint on these variables
- Solve the underlying boolean CSP problem by using
  - Exhaustive global backtracking search (see the sketch below)
    - over all complete variable assignments
    - showing none satisfies all constraints in f
  - Initial state: empty assignment of the pre-ordered variables
  - Search operator
    - Tentative assignment of the next yet unassigned variable Li (ith literal in f)
    - Apply truth-table definitions to propagate the constraints in which Li appears (clauses of f involving Li)
    - If propagation violates one constraint, backtrack on Li
    - If propagation satisfies all constraints, iterate on Li+1
    - If Li was the last literal in f and the complete assignment satisfies f, fail: KB ∧ ¬Query is satisfiable, and thus KB ⊭ Query
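A minimal backtracking sketch of this exhaustive search, treating each clause as a constraint over boolean variables ('~'-prefixed literal strings as before; function names are assumptions); the usage line anticipates the example on the next slide:

```python
def csp_backtracking_unsat(clauses, symbols):
    """Exhaustive backtracking over truth assignments to the pre-ordered symbols.
    Returns True iff no complete assignment satisfies every clause, i.e.
    f = KB AND NOT Query is unsatisfiable and hence KB |= Query."""
    def clause_status(clause, assignment):
        # True if already satisfied, False if violated, None if still undetermined
        undetermined = False
        for lit in clause:
            sym, positive = (lit[1:], False) if lit.startswith('~') else (lit, True)
            if sym in assignment:
                if assignment[sym] == positive:
                    return True
            else:
                undetermined = True
        return None if undetermined else False

    def backtrack(i, assignment):
        if any(clause_status(c, assignment) is False for c in clauses):
            return True                    # a constraint is violated: prune this subtree
        if i == len(symbols):
            return False                   # complete assignment satisfies every clause
        return all(backtrack(i + 1, {**assignment, symbols[i]: v}) for v in (True, False))

    return backtrack(0, {})

# Variables and constraints of the example on the next slide
constraints = [frozenset({'~B11'}), frozenset({'~P12', 'B11'}), frozenset({'~P21', 'B11'}),
               frozenset({'~B11', 'P12', 'P21'}), frozenset({'P12'})]
print(csp_backtracking_unsat(constraints, ['B11', 'P12', 'P21']))   # True: unsatisfiable
```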
17. FCPL theorem proving as boolean CSP backtracking search example
- Variables: B1,1, P1,2, P2,1
- Constraints: ¬B1,1, ¬P1,2 ∨ B1,1, ¬P2,1 ∨ B1,1, ¬B1,1 ∨ P1,2 ∨ P2,1, P1,2
- Initial state: V = ⟨?, ?, ?⟩, C = ⟨?, ?, ?, ?, ?⟩ (all variables unassigned, all constraints undetermined)
18. DPLL algorithm
- General-purpose CSP backtracking search is very inefficient for proving large FCPL theorems
- Davis, Putnam, Logemann & Loveland algorithm (DPLL)
  - Specialization of CSP backtracking search
  - Exploiting the specificity of FCPL theorem proving recast as CSP search
  - To apply search-completeness preserving heuristics
- Concepts
  - Pure symbol S: yet unassigned variable appearing positive in all clauses or negated in all clauses
  - Unit clause C: clause with all but one literal already assigned to false
- Heuristics (see the sketch below)
  - Pure symbol heuristic: assign pure symbols first
  - Unit propagation
    - Assign unit clause literals first
    - Recursively generate new ones
  - Early termination heuristic
    - After assigning Li = true, propagate Cj = true ∀Cj : Li ∈ Cj (avoiding truth-table look-ups)
    - Prune the sub-tree below any node where ∃Cj : Cj = false
  - Clause learning
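A minimal DPLL sketch under the same assumed clause encoding (frozensets of '~'-prefixed literal strings); it decides satisfiability of a clause set, so an entailment check passes it the clauses of KB ∧ ¬Query (all names are illustrative assumptions):

```python
def dpll_satisfiable(clauses, symbols):
    """DPLL sketch: CSP backtracking specialized with early termination,
    the pure-symbol heuristic, and unit propagation."""
    def lit_value(lit, model):
        sym = lit.lstrip('~')
        if sym not in model:
            return None
        return not model[sym] if lit.startswith('~') else model[sym]

    def dpll(clauses, symbols, model):
        unknown = []
        for clause in clauses:
            values = [lit_value(l, model) for l in clause]
            if True in values:
                continue                         # early termination: clause satisfied
            if all(v is False for v in values):
                return False                     # early termination: clause violated
            unknown.append(clause)
        if not unknown:
            return True                          # every clause satisfied by the model
        # Pure symbol heuristic: a symbol occurring with a single polarity in unknown clauses
        free_lits = {l for c in unknown for l in c if lit_value(l, model) is None}
        for s in symbols:
            pos, neg = s in free_lits, ('~' + s) in free_lits
            if pos != neg:
                return dpll(unknown, [x for x in symbols if x != s], {**model, s: pos})
        # Unit propagation: clause whose literals are all false except one unassigned
        for clause in unknown:
            free = [l for l in clause if lit_value(l, model) is None]
            if len(free) == 1:
                s, value = free[0].lstrip('~'), not free[0].startswith('~')
                return dpll(unknown, [x for x in symbols if x != s], {**model, s: value})
        # Otherwise branch on the next pre-ordered symbol
        s, rest = symbols[0], symbols[1:]
        return (dpll(unknown, rest, {**model, s: True}) or
                dpll(unknown, rest, {**model, s: False}))

    return dpll(clauses, list(symbols), {})

# The CSP example again: the clause set is unsatisfiable, so the entailment holds
constraints = [frozenset({'~B11'}), frozenset({'~P12', 'B11'}), frozenset({'~P21', 'B11'}),
               frozenset({'~B11', 'P12', 'P21'}), frozenset({'P12'})]
print(dpll_satisfiable(constraints, ['B11', 'P12', 'P21']))   # False
```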
19. Satisfiability of a formula as boolean CSP heuristic local stochastic search
- DPLL is not restricted to proving entailment by proving unsatisfiability
  - It can also prove satisfiability of an FCPL formula
- Many problems in computer science and AI can be recast as a satisfiability problem
- Heuristic local stochastic boolean CSP search is more space-scalable than DPLL for satisfiability
  - However, since it is not an exhaustive search, it cannot prove unsatisfiability (and thus entailment), only strongly suspect it
- WalkSAT (see the sketch below)
  - Initial state: random assignment of the pre-ordered variables
  - Search operator
    - Pick a yet unsatisfied clause and one literal in it
    - Flip the literal assignment
  - At each step, randomly choose between two picking strategies
    - Pick the literal whose flip results in the steepest decrease in the number of yet unsatisfied clauses
    - Random pick
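A minimal WalkSAT sketch over the same assumed clause encoding; the parameter names and default values (random-walk probability p, flip budget) are illustrative assumptions:

```python
import random

def walksat(clauses, p=0.5, max_flips=10_000):
    """WalkSAT sketch: local stochastic search over complete assignments.
    Returns a satisfying model if one is found, else None (which only suggests
    unsatisfiability; it never proves it)."""
    symbols = {lit.lstrip('~') for c in clauses for lit in c}
    model = {s: random.choice([True, False]) for s in symbols}   # random initial state

    def satisfied(clause):
        return any(model[l.lstrip('~')] != l.startswith('~') for l in clause)

    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not satisfied(c)]
        if not unsatisfied:
            return model                                         # all clauses satisfied
        clause = random.choice(unsatisfied)
        if random.random() < p:
            sym = random.choice(list(clause)).lstrip('~')        # random-walk step
        else:
            # Greedy step: flip the symbol that minimizes the number of unsatisfied clauses
            def cost(s):
                model[s] = not model[s]
                n = sum(not satisfied(c) for c in clauses)
                model[s] = not model[s]
                return n
            sym = min((l.lstrip('~') for l in clause), key=cost)
        model[sym] = not model[sym]
    return None
```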
20. Direct vs. indirect use of search for agent reasoning
21. Horn CPL reasoning
- Practical limitations of FCPL reasoning
  - For experts in most application domains (medicine, law, business, design, troubleshooting)
    - Non-intuitiveness of FCPL formulas for knowledge acquisition
    - Non-intuitiveness of the proofs generated by FCPL algorithms for knowledge validation
- Theoretical limitation of FCPL reasoning
  - exponential in the size of the KB
- Syntactic restriction to Horn clauses overcomes both limitations
  - The KB becomes a base of simple rules "If p1 and ... and pn then c", with logical semantics p1 ∧ ... ∧ pn ⇒ c
  - Two algorithms are available, rule forward chaining and rule backward chaining, that are
    - Intuitive
    - Sound and complete for HCPL
    - Linear in the size of the KB
- For most application domains, the loss of expressiveness can be overcome by the addition of new symbols and clauses
  - e.g., the FCPL KB1 = p ∧ q ⇒ c ∨ d has no logical equivalent in HCPL in terms of the alphabet {p, q, c, d}
  - However, KB2 = (p ∧ q ∧ notd ⇒ c) ∧ (p ∧ q ∧ notc ⇒ d) ∧ (c ∧ notc ⇒ false) ∧ (d ∧ notd ⇒ false) is an HCPL formula logically equivalent to KB1
22. Propositional forward chaining
- Repeated application of modus ponens until reaching a fixed point
- At each step i
  - Fire all rules (i.e., Horn clauses with at least one positive and one negative literal) whose premises are all already in KBi
  - Add their respective conclusions to KBi+1
- Fixed point k reached when KBk = KBk-1
- KBk = {f : KB0 ⊨ f}, i.e., all logical conclusions of KB0
- If f ∈ KBk, then KB0 ⊨ f; otherwise, KB0 ⊭ f
- Naturally data-driven reasoning
  - Guided by the facts (axioms) in KB0
  - Allows intuitive, direct implementation of reactive agents
- Generally
  - Inefficient for a specific entailment query
  - Cumbersome for deliberative agent implementations
- Builds the and-or proof graph bottom-up (see the sketch below)
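A minimal forward-chaining sketch; the rule list is an assumed illustrative Horn KB, chosen to be consistent with the goal-stack trace in the backward-chaining example later in this deck, and the function name and rule encoding are assumptions:

```python
from collections import deque

def forward_chaining_entails(rules, facts, query):
    """Propositional forward chaining for a Horn KB.
    rules: list of (premises, conclusion) pairs; facts: set of symbols.
    Repeatedly fires rules whose premises are all known until a fixed point."""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}   # unsatisfied premises per rule
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:
                    agenda.append(conclusion)        # rule fires: add its conclusion
    return False                                     # fixed point reached without the query

# Assumed Horn KB: P=>Q, L&M=>P, B&L=>M, A&P=>L, A&B=>L, with facts A, B
rules = [({'P'}, 'Q'), ({'L', 'M'}, 'P'), ({'B', 'L'}, 'M'),
         ({'A', 'P'}, 'L'), ({'A', 'B'}, 'L')]
print(forward_chaining_entails(rules, {'A', 'B'}, 'Q'))   # True
```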
23-29. Propositional forward chaining example (step-by-step rule firing shown as figures)
30. Propositional backward chaining
- Repeated application of resolution using
  - the unit input resolution strategy with the negated query as initial set of support
- At each step i
  - Search KB0 for a clause of the form p1 ∧ ... ∧ pn ⇒ g to resolve with the clause g popped from the goal stack
  - If there are several, pick one, push p1, ..., pn onto the goal stack, and push the other ones onto the alternative stack to consider upon backtracking
  - If there are none, backtrack (i.e., pop the alternative stack)
- Terminates
  - successfully when the goal stack is empty
  - as failure when the goal stack is non-empty but the alternative stack is empty
- Naturally goal-driven reasoning
  - Guided by the goal (theorem to prove)
  - Allows intuitive, direct implementation of deliberative agents
- Generally
  - Inefficient for deriving all logical conclusions from the KB
  - Cumbersome implementation of reactive agents
- Builds the and-or proof graph top-down (see the sketch below)
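A minimal recursive sketch of the same goal-directed procedure (the goal and alternative stacks become the recursion and backtracking of the Python interpreter; the loop guard and names are illustrative assumptions), using the same assumed Horn KB as the forward-chaining sketch:

```python
def backward_chaining_entails(rules, facts, query):
    """Propositional backward chaining: prove the goal by finding a rule concluding
    it and recursively proving its premises; facts close branches, and the remaining
    rules for a goal are the alternatives tried on backtracking."""
    def prove(goal, visited):
        if goal in facts:
            return True
        if goal in visited:
            return False                       # avoid looping on an already open goal
        for premises, conclusion in rules:
            if conclusion == goal and all(prove(p, visited | {goal}) for p in premises):
                return True
        return False
    return prove(query, set())

# Same assumed Horn KB as in the forward-chaining sketch
rules = [({'P'}, 'Q'), ({'L', 'M'}, 'P'), ({'B', 'L'}, 'M'),
         ({'A', 'P'}, 'L'), ({'A', 'B'}, 'L')]
print(backward_chaining_entails(rules, {'A', 'B'}, 'Q'))   # True
```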
31. Propositional backward chaining example
Goal Stack: Q
Alternative Stack: ∅
32. Propositional backward chaining example
Goal Stack: P
Alternative Stack: ∅
33. Propositional backward chaining example
Goal Stack: L M
Alternative Stack: ∅
34. Propositional backward chaining example
Goal Stack: A P M
Alternative Stack: A B
35. Propositional backward chaining example
Goal Stack: P M
Alternative Stack: A B
36. Propositional backward chaining example
Goal Stack: A B M
Alternative Stack: ∅
37. Propositional backward chaining example
Goal Stack: M
Alternative Stack: ∅
38. Propositional backward chaining example
Goal Stack: B L
Alternative Stack: ∅
39-41. Propositional backward chaining example
Goal Stack: ∅
Alternative Stack: ∅
42. Limitations of propositional logic
- Ontological
  - Cannot represent knowledge intensionally
  - No concise representation of generic relations (generic in terms of categories, space, time, etc.)
  - e.g., no way to concisely formalize the Wumpus world rule "at any step during the exploration, the agent perceiving a stench knows that there is a Wumpus in a location adjacent to its own"
  - Propositional logic requires a conjunction of 100,000 equivalences to represent this rule for an exploration of at most 1000 steps of a cavern of size 10x10
    - (stench1_1_1 ↔ wumpus1_1_2 ∨ wumpus1_2_1) ∧ ... ∧ (stench1000_1_1 ↔ wumpus1000_1_2 ∨ wumpus1000_2_1) ∧ ... ∧ (stench1_10_10 ↔ wumpus1_9_10 ∨ wumpus1_10_9) ∧ ... ∧ (stench1000_10_10 ↔ wumpus1000_9_10 ∨ wumpus1000_10_9)
- Epistemological
  - Agent is always completely confident of its positive or negative beliefs
  - No explicit representation of ignorance (missing knowledge)
  - The only way to represent uncertainty is disjunction
  - Once held, an agent belief cannot be questioned by new evidence (e.g., from sensors)