Title: Scalable Knowledge Representation and Reasoning Systems
1. Scalable Knowledge Representation and Reasoning Systems
- Henry Kautz
- AT&T Shannon Laboratories
2. Introduction
- In recent years, we've seen substantial progress in scaling up knowledge representation and reasoning systems
- Shift from toy domains to real-world applications
- autonomous systems: NASA Remote Agent
- just-in-time manufacturing: I2, PeopleSoft
- deductive approaches to verification: Nitpick (D. Jackson), bounded model checking (E. Clarke)
- solutions to open problems in mathematics: group theory (W. McCune, H. Zhang)
- New emphasis on propositional reasoning and search
3. Approaches to Scaling Up KRR
- Traditional approach: specialized languages / specialized reasoning algorithms
- difficult to share / evaluate results
- New direction
- compile combinatorial reasoning problems into a common propositional form (SAT)
- apply new, highly efficient general search engines
[Diagram: Combinatorial Task → SAT Encoding → SAT Solver → Decoder; an illustrative sketch follows]
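To make the pipeline above concrete, here is a minimal illustrative sketch (not from the talk): graph 3-coloring stands in for the combinatorial task, a naive enumeration procedure stands in for a real SAT engine, and all function names are invented for the example.

```python
# Compile a combinatorial task to CNF, "solve" it, and decode the model.
from itertools import product

def encode_coloring(edges, n_nodes, n_colors=3):
    """Return CNF clauses; variable var(v, c) is true iff node v gets color c."""
    var = lambda v, c: v * n_colors + c + 1            # 1-based, DIMACS-style ids
    clauses = []
    for v in range(n_nodes):
        clauses.append([var(v, c) for c in range(n_colors)])        # some color
        for c1 in range(n_colors):
            for c2 in range(c1 + 1, n_colors):
                clauses.append([-var(v, c1), -var(v, c2)])          # at most one
    for u, v in edges:
        for c in range(n_colors):
            clauses.append([-var(u, c), -var(v, c)])                # endpoints differ
    return clauses, var

def naive_sat(clauses, n_vars):
    """Stand-in for a real SAT engine: try every assignment (toy sizes only)."""
    for bits in product([False, True], repeat=n_vars):
        model = {i + 1: b for i, b in enumerate(bits)}
        if all(any(model[abs(lit)] == (lit > 0) for lit in cl) for cl in clauses):
            return model
    return None

def decode(model, n_nodes, n_colors, var):
    """Map the satisfying assignment back to a coloring."""
    return {v: c for v in range(n_nodes) for c in range(n_colors) if model[var(v, c)]}

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
clauses, var = encode_coloring(edges, n_nodes=4)
model = naive_sat(clauses, n_vars=4 * 3)
print(decode(model, 4, 3, var))     # prints one valid 3-coloring
```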
4. Methodology
- Compare with use of linear / integer programming packages
- emphasis on mathematical modeling
- after modeling, problem is handed to a state-of-the-art solver
- Compare with reasoning under uncertainty
- convergence to Bayes nets and MDPs
5. Wouldn't a Specialized Solver Be Better?
- Perhaps theoretically, but often not in practice
- Rapid evolution of fast solvers
- 1990: hard SAT problems with 100 variables
- 1999: 10,000 - 100,000 variables
- competitions encourage sharing of algorithms and implementations: Germany '91 / China '96 / DIMACS '93/'97/'98
- Encodings can compensate for much of the loss due to going to a uniform representation
6. Two Kinds of Knowledge Compilation
- Compilation to a tractable subset of logic
- shift inference costs offline
- guaranteed fast run-time response
- e.g. real-time diagnosis for NASA Deep Space One: 35 msec response time!
- fundamental limits to tractable compilation
- Compilation to a minimal combinatorial core
- can reduce SAT size by compiling together problem spec + control knowledge
- inference for core still NP-hard
- new randomized SAT algorithms: low exponential growth
- e.g. optimal planning with 10^18 states!
7. Outline
- I. Compilation to tractable languages
- Horn approximations
- Fundamental limits
- II. Compilation to a combinatorial core
- SATPLAN
- III. Improved encodings
- Compiling control knowledge
- IV. Improved SAT solvers
- Randomized restarts
8. I. Compilation to Tractable Languages
9. Expressiveness vs. Complexity Tradeoff
- Consider the problem of determining whether a query follows from a knowledge base: KB ⊨ q ?
- Highly expressive KB languages make querying intractable
- e.g. (ignition_on ∧ engine_off) ⊃ (battery_dead ∨ tank_empty)
- requires general CNF; query answering is NP-complete
- Less expressive languages allow polynomial-time query answering
- Horn clauses, binary clauses, DNF
10. Tractable Knowledge Compilation
- Goal: guaranteed fast online query answering
- cost shifted to offline compilation
- Exact compilation often not possible
- Can approximate the original theory
- yet retain soundness / completeness for queries
- (Kautz & Selman 1993, 1996, 1999; Papadimitriou 1994)
[Diagram: expressive source language → tractable target language]
11. Example: Compilation into Horn
- Source: clausal propositional theories
- inference NP-complete
- example: (¬a ∨ ¬b ∨ c ∨ d)
- equivalently: (a ∧ b) ⊃ (c ∨ d)
- Target: Horn theories
- inference linear time (sketch below)
- at most one positive literal per clause
- example: (a ∧ b) ⊃ c
- strictly less expressive
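As a small aside on why Horn inference is tractable: the sketch below (assumed, not from the talk) decides atom entailment for definite Horn clauses by forward chaining. A counter-based variant runs in linear time; this simpler version is quadratic but shows the idea.

```python
# Forward chaining over definite Horn clauses: rules are (body_atoms, head_atom).
def horn_entails(rules, facts, query):
    """Return True iff the facts plus the rules entail the atom `query`."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return query in known

# (a ∧ b) ⊃ c and c ⊃ d, with facts a, b:
rules = [(["a", "b"], "c"), (["c"], "d")]
print(horn_entails(rules, {"a", "b"}, "d"))   # True
```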
12. Horn Bounds
- Idea: compile CNF into a pair of Horn theories that approximate it
- Model: truth assignment that satisfies a theory
- Can logically bound the theory from above and below: LB ⊨ S ⊨ UB
- lower bound: fewer models, logically stronger
- upper bound: more models, logically weaker
- BEST bounds: LUB (least upper bound) and GLB (greatest lower bound)
13. Using Approximations for Query Answering
- S ⊨ q ?
- If LUB ⊨ q then S ⊨ q (linear time)
- If GLB ⊭ q then S ⊭ q (linear time)
- Otherwise, use S directly (or return "don't know"); see the sketch below
- Queries answered in linear time lead to improvement in overall response time to a series of queries
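A minimal sketch of this query strategy, assuming a caller-supplied entailment test `entails(theory, q)` and Horn bounds satisfying GLB ⊨ S ⊨ LUB; only the fall-through case pays the full NP-hard cost.

```python
# Query answering with Horn bounds (illustrative; `entails` is supplied by the caller).
def query(S, LUB, GLB, q, entails):
    if entails(LUB, q):          # linear time when the LUB is Horn
        return True              # S entails q
    if not entails(GLB, q):      # linear time when the GLB is Horn
        return False             # S does not entail q
    return entails(S, q)         # fall back to the original (NP-hard) test
```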
14. Computing Horn Approximations
- Theorem: computing the LUB or GLB is NP-hard
- Amortize cost over the total set of queries
- Query algorithm still correct if weaker bounds are used
- anytime computation of bounds desirable
15. Computing the GLB
- Horn-strengthening
- r ⊃ (p ∨ q) has two Horn-strengthenings:
- r ⊃ p
- r ⊃ q
- Horn-strengthening of a theory: conjunction of one Horn-strengthening of each clause
- Theorem: each Horn GLB of S is equivalent to some Horn-strengthening of S.
- Algorithm: search the space of Horn-strengthenings for a local maximum (a GLB); a toy sketch follows
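The toy sketch below (assumed, not from the talk) follows the algorithm above: pick one Horn-strengthening per clause, then greedily swap choices while the resulting lower bound gets strictly weaker. Clauses are frozensets of integer literals, and weakness is checked exactly by brute-force model enumeration, so it only works at toy sizes.

```python
from itertools import product

def horn_strengthenings(clause):
    """All Horn-strengthenings of a clause (a frozenset of integer literals)."""
    pos = [l for l in clause if l > 0]
    neg = [l for l in clause if l < 0]
    if len(pos) <= 1:
        return [frozenset(clause)]
    return [frozenset(neg + [p]) for p in pos]

def models(theory, n_vars):
    """All satisfying assignments of a clause set, by brute force."""
    sat = set()
    for bits in product([False, True], repeat=n_vars):
        m = {i + 1: b for i, b in enumerate(bits)}
        if all(any(m[abs(l)] == (l > 0) for l in cl) for cl in theory):
            sat.add(bits)
    return sat

def greedy_glb(theory, n_vars):
    """Local search over Horn-strengthenings for a weakest (GLB-like) lower bound."""
    options = [horn_strengthenings(cl) for cl in theory]
    choice = [opts[0] for opts in options]
    improved = True
    while improved:
        improved = False
        for i, opts in enumerate(options):
            for alt in opts:
                trial = choice[:i] + [alt] + choice[i + 1:]
                if models(choice, n_vars) < models(trial, n_vars):  # strictly weaker
                    choice, improved = trial, True
    return choice

# S = (p ∨ q) ∧ (¬p ∨ r), with p=1, q=2, r=3
S = [frozenset({1, 2}), frozenset({-1, 3})]
print(greedy_glb(S, 3))   # one locally maximal Horn lower bound
```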
16. Computing the LUB
- Basic strategy
- Compute all resolvents of the original theory, and collect all Horn resolvents
- Problem
- Even a Horn theory can have exponentially many Horn resolvents
- Solution
- Resolve only pairs of clauses where exactly one clause is Horn
- Theorem: the method is complete
17. Properties of Bounds
- GLB
- Anytime algorithm
- Not unique - any GLB may be used for query answering
- Size of GLB ≤ size of original theory
- LUB
- Anytime algorithm
- Is unique
- No space blow-up for Horn
- Can construct non-Horn theories with exponentially larger LUB
18. Empirical Evaluation
- 1. Hard random theories, random queries
- 2. Plan-recognition domain
- e.g. query: (obs1 ∧ obs2) ⊃ (goal1 ∨ goal2) ?
- Time to answer 1000 queries:

            original   with bounds
  rand100        340            45
  rand200       8600            51
  plan500       8950           620

- Cost of compilation amortized in less than 500 queries
19. Limits of Tractable Compilation
- Some theories have an exponentially larger clausal-form LUB
- QUESTION: can we always find a clever way to keep the LUB small (new variables, non-clausal form, structure sharing, ...)?
- Theorem: there do exist theories whose Horn LUB is inherently large
- any representation that enables polytime inference is exponentially large
- Proof based on non-uniform circuit complexity: if false, the polynomial hierarchy collapses to Σ2
20. Other Tractable Target Languages
- Model-based representations
- (Kautz & Selman 1992; Dechter & Pearl 1992; Papadimitriou 1994; Roth & Khardon 1996; Mannila 1999; Eiter 1999)
- Prime implicates
- (Reiter & de Kleer 1987; del Val 1995; Marquis 1996; Williams 1998)
- Compilation from nonmonotonic logics
- (Nerode 1995; Cadoli & Donini 1996)
- Similar limits to compilability hold for all!
21. Truly Combinatorial Problems
- Tractable compilation is not a universal solution for building scalable KRR systems
- often useful, but has theoretical and empirical limits
- not applicable if you only care about a single query: no opportunity to amortize the cost of compilation
- Sometimes must face NP-hard reasoning problems head on
- will describe how advances in modeling and SAT solvers are pushing the envelope of the size of problems that can be handled in practice
22. II. Compilation to a Combinatorial Core
23. Example: Planning
- Planning: find a (partially) ordered set of actions that transform a given initial state into a specified goal state
- in the most general case, can cover most forms of problem solving
- scheduling fixes the set of actions; need to find an optimal total ordering
- planning problems are typically highly non-linear and require combinatorial search
24. Some Applications of Planning
- Autonomous systems
- Deep Space One Remote Agent (Williams & Nayak 1997)
- Mission planning (Muscettola 1998)
- Natural language understanding
- TRAINS (Allen 1998) - mixed-initiative dialog
- Software agents
- Softbots (Etzioni 1994)
- Goal-driven characters in games (Nareyek 1998)
- Help systems - plan recognition (Kautz 1989)
- Manufacturing
- Supply chain management (Crawford 1998)
- Software understanding / verification
- Bug-finding (goal = undesired state) (Jackson 1998)
25. State-Space Planning
- State: complete truth assignment to a set of variables (fluents)
- Goal: partial truth assignment (set of states)
- Operator: a partial function State → State (sketch below)
- specified by three sets of variables: precondition, add list, delete list
- (STRIPS; Fikes & Nilsson 1971)
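A minimal sketch of this STRIPS model (illustrative; the fluent names are invented).

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    pre: frozenset       # precondition fluents
    add: frozenset       # add list
    delete: frozenset    # delete list

def apply_op(state, op):
    """Operators are partial functions on states: None if the precondition is unmet."""
    if not op.pre <= state:
        return None
    return (state - op.delete) | op.add

load = Operator("load(pkg,truck)",
                pre=frozenset({"at(pkg,depot)", "at(truck,depot)"}),
                add=frozenset({"in(pkg,truck)"}),
                delete=frozenset({"at(pkg,depot)"}))

s0 = frozenset({"at(pkg,depot)", "at(truck,depot)"})
print(apply_op(s0, load))   # {'at(truck,depot)', 'in(pkg,truck)'}
```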
26. Abundance of Negative Complexity Results
- I. Domain-independent planning: PSPACE-complete or worse
- (Chapman 1987; Bylander 1991; Backstrom 1993)
- II. Bounded-length planning: NP-complete
- (Chenoweth 1991; Gupta and Nau 1992)
- III. Approximate planning: NP-complete or worse
- (Selman 1994)
27. Practice
- Traditional domain-independent planners can generate plans of only a few steps
- Most practical systems try to eliminate search
- Tractable compilation
- Custom, domain-specific algorithms
- Scaling remains problematic when the state space is large or not well understood!
28. Planning as Satisfiability
- SAT encodings are designed so that plans correspond to satisfying assignments
- Use recent efficient satisfiability procedures (systematic and stochastic) to solve
- Evaluation: performance on benchmark instances
29. SATPLAN
[Diagram: problem description + axiom schemas + plan length → instantiate → instantiated propositional clauses → SAT engine(s) → satisfying model → interpret → plan]
30. SAT Encodings
- Propositional CNF: no variables or quantifiers
- Sets of clauses specified by axiom schemas
- Use modeling conventions (Kautz & Selman 1996)
- Compile STRIPS operators (Kautz & Selman 1999)
- Discrete time, modeled by integers
- upper bound on number of time steps
- predicates indexed by the time at which a fluent holds / an action begins
- each action takes 1 time step
- many actions may occur at the same step
- fly(Plane, City1, City2, i) ⊃ at(Plane, City2, i+1)  (instantiation sketch below)
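The sketch below (illustrative, not the SATPLAN code) instantiates one such effect schema into ground clauses for given objects and a time horizon; each ground atom becomes a propositional variable, and skipping the case c1 == c2 is an assumption made only for the example.

```python
# Instantiate  fly(P,C1,C2,i) ⊃ at(P,C2,i+1)  into ground clauses
# {¬fly(P,C1,C2,i), at(P,C2,i+1)} for all objects and time steps.
from itertools import product

def instantiate_fly_effect(planes, cities, horizon):
    clauses = []
    for p, c1, c2, i in product(planes, cities, cities, range(horizon)):
        if c1 == c2:
            continue                      # assumed: no flight to the same city
        clauses.append([f"-fly({p},{c1},{c2},{i})", f"at({p},{c2},{i+1})"])
    return clauses

for clause in instantiate_fly_effect(["p1"], ["c1", "c2"], horizon=2):
    print(clause)
# ['-fly(p1,c1,c2,0)', 'at(p1,c2,1)'] ... (4 clauses in total)
```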
31. Solution to a Planning Problem
- A solution is specified by any model (satisfying truth assignment) of the conjunction of the axioms describing the initial state, goal state, and operators
- Easy to convert back to a STRIPS-style plan
32. Satisfiability Testing Procedures
- Systematic, complete procedures
- Davis-Putnam (DP)
- backtrack search + unit propagation (1961); see the sketch below
- little progress until 1993 - then an explosion of improved algorithms and implementations
- satz (1997) - best branching heuristic
- see SATLIB 1998 / Hoos & Stutzle
- csat, modoc, rel_sat, sato, ...
- Stochastic, incomplete procedures
- Walksat (Kautz, Selman & Cohen 1993)
- greedy local search + noise to escape local minima
- outperforms systematic algorithms on random formulas, graph coloring, ... (DIMACS 1993, 1997)
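For reference, a compact illustrative sketch (not any of the named solvers) of the backtrack-search-plus-unit-propagation core behind DP-style procedures; clauses are lists of integer literals.

```python
def simplify(clauses, lit):
    """Assert literal lit: drop satisfied clauses, shrink the rest; None on conflict."""
    out = []
    for cl in clauses:
        if lit in cl:
            continue
        reduced = [l for l in cl if l != -lit]
        if not reduced:
            return None                      # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    while True:                              # unit propagation
        unit = next((cl[0] for cl in clauses if len(cl) == 1), None)
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
        clauses = simplify(clauses, unit)
        if clauses is None:
            return None
    if not clauses:
        return assignment                    # all clauses satisfied
    lit = clauses[0][0]                      # branch on the first unassigned literal
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None:
            result = dpll(reduced, {**assignment, abs(choice): choice > 0})
            if result is not None:
                return result
    return None

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x3)
print(dpll([[1, 2], [-1, 3], [-3]]))         # e.g. {3: False, 1: False, 2: True}
```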
33. Walksat Procedure
- Start with a random initial assignment.
- Pick a random unsatisfied clause.
- Select and flip a variable from that clause:
- with probability p, pick a random variable
- with probability 1-p, pick greedily: a variable that minimizes the number of unsatisfied clauses
- Repeat until time limit reached (a minimal sketch follows).
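A minimal reimplementation sketch of the procedure above (illustrative, not the original Walksat code).

```python
import random

def unsat_clauses(clauses, assignment):
    return [cl for cl in clauses
            if not any(assignment[abs(l)] == (l > 0) for l in cl)]

def walksat(clauses, n_vars, p=0.5, max_flips=10_000):
    assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = unsat_clauses(clauses, assignment)
        if not unsat:
            return assignment                    # satisfying model found
        clause = random.choice(unsat)            # random unsatisfied clause
        if random.random() < p:
            var = abs(random.choice(clause))     # noise: random variable
        else:                                    # greedy: best variable in the clause
            def flips_left_unsat(v):
                assignment[v] = not assignment[v]
                count = len(unsat_clauses(clauses, assignment))
                assignment[v] = not assignment[v]
                return count
            var = min((abs(l) for l in clause), key=flips_left_unsat)
        assignment[var] = not assignment[var]
    return None                                  # time limit reached

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```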
34. Planning Benchmark Test Set
- Extension of the Graphplan benchmark set
- Graphplan (Blum & Furst 1995) - best domain-independent state-space planning algorithm
- logistics - complex, highly parallel transportation domain, ranging up to
- 14 time slots, unlimited parallelism
- 2,165 possible actions per time slot
- optimal solutions containing 150 distinct actions
- Problems of this size (10^18 configurations) not previously handled by any state-space planning system
35. Scaling Up Logistics Planning
36. What SATPLAN Shows
- General propositional theorem provers can compete with state-of-the-art specialized planning systems
- New, highly tuned variations of DP are surprisingly powerful
- result of sharing ideas and code in a large SAT/CSP research community
- specialized engines can catch up, but by then there are new general techniques
- Radically new stochastic approaches to SAT can provide very low exponential scaling
- 2 orders of magnitude speedup on hard benchmark problems
- Reflects a general shift from first-order / non-standard logics to propositional logic as the basis of scalable KRR systems
37. Further Paths to Scale-Up
- Efficient representations and new SAT engines extend the range of domain-independent planning
- Ways for further improvement:
- Better SAT encodings
- Better general search algorithms
38. III. Improved Encodings: Compiling Control Knowledge
39. Kinds of Control Knowledge
- About the domain itself
- a truck is only in one location
- airplanes are always at some airport
- About good plans
- do not remove a package from its destination location
- do not unload a package and immediately load it again
- About how to search
- plan air routes before land routes
- work on hardest goals first
40. Expressing Knowledge
- Such information is traditionally incorporated in the planning algorithm itself
- or in a special programming language
- Instead: use additional declarative axioms
- (Bacchus 1995; Kautz 1998; Chen, Kautz & Selman 1999)
- Problem instance = operator axioms + initial and goal axioms + control axioms
- Control knowledge = constraints on search and solution spaces
- Independent of any search engine strategy
41. Axiomatic Control Knowledge
- State invariant: a truck is at only one location
- at(truck,loc1,i) ∧ loc1 ≠ loc2 ⊃ ¬at(truck,loc2,i)
- Optimality: do not return a package to a location
- at(pkg,loc,i) ∧ ¬at(pkg,loc,i+1) ∧ i < j ⊃ ¬at(pkg,loc,j)
- Simplifying assumption: once a truck is loaded, it should immediately move
- ¬in(pkg,truck,i) ∧ in(pkg,truck,i+1) ∧ at(truck,loc,i+1) ⊃ ¬at(truck,loc,i+2)
42. Adding Control Knowledge to SATPLAN
[Diagram: problem specification axioms + control knowledge axioms → instantiated clauses → SAT simplifier → SAT core → SAT engine. As control knowledge increases, the core shrinks!]
43. Logistics - Control Knowledge
44. Scale-Up with Compiled Control Knowledge
- Significant scale-up using axiomatic control knowledge
- Same knowledge useful for both systematic and local search engines
- simple DP now scales from 10^10 to 10^16 states
- order of magnitude speedup for Walksat
- Control axioms summarize general features of the domain / good plans - not a detailed program!
- Obtained benefits using only admissible control axioms - no loss in solution quality (Cheng, Kautz & Selman 1999)
- Many kinds of control knowledge can be created automatically
- Machine learning (Minton 1988; Etzioni 1993; Weld 1994; Kambhampati 1996)
- Type inference (Fox & Long 1998; Rintanen 1998)
- Reachability analysis (Kautz & Selman 1999)
45. IV. Improved SAT Solvers: Randomized Restarts
46. Background
- Combinatorial search methods often exhibit a remarkable variability in performance. It is common to observe significant differences between:
- different heuristics
- same heuristic on different instances
- different runs of the same heuristic with different random seeds
47. Example: SATZ
48. Preview of Strategy
- We'll put variability / unpredictability to our advantage via randomization / averaging.
49. Cost Distributions
- Consider the distribution of running times of backtrack search on a large set of equivalent problem instances
- renumber variables
- change the random seed used to break ties
- Observation (Gomes 1997): distributions often have heavy tails
- infinite variance
- mean increases without limit
- probability of long runs decays by a power law (Pareto-Levy) rather than exponentially (Normal)
51. Heavy-Tailed Distributions
- Infinite variance, infinite mean
- Introduced by Pareto in the 1920s
- probabilistic curiosity
- Mandelbrot established the use of heavy-tailed distributions to model real-world fractal phenomena
- Examples: stock market, earthquakes, weather, ...
52. How to Check for Heavy Tails?
- Log-log plot of the tail of the distribution should be approximately linear (numerical sketch below)
- Slope gives the value of the tail index α
- α < 1: infinite mean and infinite variance
- 1 < α < 2: infinite variance
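A small numerical sketch (assumed, not from the talk) of this check: estimate the tail index from the slope of the empirical survival function on log-log axes. The `tail_fraction` parameter and the synthetic data are illustration choices.

```python
import numpy as np

def tail_slope(samples, tail_fraction=0.2):
    """Estimate the tail index alpha as minus the log-log slope of 1 - F(x)."""
    x = np.sort(np.asarray(samples))
    survival = 1.0 - np.arange(1, len(x) + 1) / len(x)
    k = max(int(len(x) * tail_fraction), 3)
    xs, ys = np.log(x[-k:-1]), np.log(survival[-k:-1])   # drop the point where survival = 0
    slope, _ = np.polyfit(xs, ys, 1)
    return -slope      # alpha <= 2 suggests infinite variance; alpha <= 1, infinite mean

rng = np.random.default_rng(0)
print(tail_slope(rng.pareto(1.5, 100_000) + 1))    # heavy tail: roughly 1.5
print(tail_slope(rng.exponential(1.0, 100_000)))   # much larger: tail decays too fast to be heavy
```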
54. Heavy Tails
- Bad scaling of systematic solvers can be caused by heavy-tailed distributions
- Deterministic algorithms get stuck on particular instances
- but that same instance might be easy for a different deterministic algorithm!
- Expected (mean) solution time increases without limit over large distributions
55. Randomized Restarts
- Solution: randomize the systematic solver
- add noise to the heuristic branching (variable choice) function
- cutoff and restart search after a fixed number of backtracks (restart sketch below)
- Provably eliminates heavy tails
- In practice, rapid restarts with a low cutoff can dramatically improve performance
- (Gomes 1996; Gomes, Kautz & Selman 1997, 1998)
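A minimal sketch of the restart strategy above, where `solve_once` is a hypothetical randomized, cutoff-limited solver supplied by the caller (returning None when the cutoff is hit).

```python
import random

def solve_with_restarts(solve_once, cutoff, max_restarts=1000):
    """Restart a randomized solver with a fresh seed until it succeeds within the cutoff."""
    for restart in range(max_restarts):
        seed = random.randrange(2**32)
        result = solve_once(cutoff=cutoff, seed=seed)   # hypothetical solver interface
        if result is not None:
            return result, restart          # solution plus number of restarts used
    return None, max_restarts
```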
56. Rapid Restart on LOG.D
- Note log scale: exponential speedup!
57. Increased Predictability
58. Overall Insight
- Randomized tie-breaking with rapid restarts can boost systematic search
- Related analysis: Luby & Zuckerman 1993; Alt & Karp 1996
- Other applications: sports scheduling, circuit synthesis, quasigroup completion, ...
59. Conclusions
- Discussed approaches to scalable KRR systems based on propositional reasoning and search
- Shift to 10,000 variables and 10^6 clauses has opened up new applications
- Methodology:
- Model as SAT
- Compile away as much complexity as possible
- Use an off-the-shelf SAT solver for the remaining core
- Analogous to LP approaches
60. Conclusions, cont.
- Example: AI planning / SATPLAN system
- Order of magnitude improvement (last 3 yrs): 10-step to 200-step optimal plans
- Huge economic impact possible with 2 more orders of magnitude!
- up to 20,000 steps ...
- Discussed themes in encodings and solvers:
- Local search
- Control knowledge
- Heavy tails / randomized restarts
61. Tractable Knowledge Compilation Summary
- Many techniques have been developed for compiling general KR languages to computationally tractable languages
- Horn approximations (Kautz & Selman 1993; Cadoli 1994; Papadimitriou 1994)
- Model-based representations (Kautz & Selman 1992; Dechter & Pearl 1992; Roth & Khardon 1996; Mannila 1999; Eiter 1999)
- Prime implicates (Reiter & de Kleer 1987; del Val 1995; Marquis 1996; Williams 1998)
62. Limits to Compilability
- While practical for some domains, there are fundamental theoretical limitations to the approach
- some KBs cannot be compiled into a tractable form unless the polynomial hierarchy collapses (Kautz)
- Sometimes must face NP-hard reasoning problems head on
- will describe how advances in modeling and SAT solvers are pushing the envelope of the size of problems that can be handled in practice
63. Logistics: Increased Predictability
64. Example: SATZ