

1
Statistical Relational Learning
  • Pedro Domingos
  • Dept. of Computer Science & Eng.
  • University of Washington

2
Overview
  • Motivation
  • Background
  • Representation
  • Inference
  • Learning
  • Software
  • Applications
  • Discussion

3
The Real World Is Complex and Uncertain
  • Information overload
  • Incomplete information
  • Contradictory information
  • Many sources and modalities
  • Rapid change

How can computer systems handle these?
4
Probability Handles Uncertainty
  • Sensor noise
  • Human error
  • Inconsistencies
  • Unpredictability

5
First-Order Logic Handles Complexity
  • Many types of entities
  • Relations between them
  • Non-independent samples
  • Arbitrary knowledge

6
Statistical Relational Learning Combines the Two
  • Unified representation
  • Solid formal foundations
  • Inference algorithms
  • Most probable explanations
  • Probabilities of causes and events
  • Learning algorithms
  • Structure
  • Parameters

7
Overview
  • Motivation
  • Background
  • Representation
  • Inference
  • Learning
  • Software
  • Applications
  • Discussion

8
Markov Networks
  • Undirected graphical models

(Figure: undirected graph over the variables Smoking, Cancer, Cough, Asthma.)
  • Potential functions defined over cliques: P(x) = (1/Z) Π_c Φ_c(x_c)

9
Markov Networks
  • Undirected graphical models

(Same graph: Smoking, Cancer, Cough, Asthma.)
  • Log-linear model

P(x) = (1/Z) exp(Σ_i w_i f_i(x)), where w_i is the weight of feature i and f_i(x) is feature i
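
To make the log-linear form concrete, here is a minimal Python sketch (not from the slides; the features and weights are invented for illustration) that evaluates P(x) = (1/Z) exp(Σ_i w_i f_i(x)) by brute force over a four-variable toy network:

    import itertools, math

    VARS = ["Smoking", "Cancer", "Cough", "Asthma"]
    weights = [1.5, 0.5, 1.0]           # w_i (illustrative values)

    def features(x):                    # f_i(x): 0/1 clique indicators
        return [x["Smoking"] and x["Cancer"],
                x["Cancer"] and x["Cough"],
                x["Asthma"] and x["Cough"]]

    def unnorm(x):                      # exp(sum_i w_i * f_i(x))
        return math.exp(sum(w * f for w, f in zip(weights, features(x))))

    worlds = [dict(zip(VARS, v)) for v in itertools.product([0, 1], repeat=4)]
    Z = sum(unnorm(x) for x in worlds)  # partition function
    x = {"Smoking": 1, "Cancer": 1, "Cough": 1, "Asthma": 0}
    print(unnorm(x) / Z)                # P(x)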
10
First-Order Logic
  • Constants, variables, functions, predicates. E.g.: Anna, X, mother_of(X), friends(X, Y)
  • Grounding: Replace all variables by constants. E.g.: friends(Anna, Bob)
  • World (model, interpretation): Assignment of truth values to all ground predicates

11
Example: Friends & Smokers
12
Example: Friends & Smokers
13
Overview
  • Motivation
  • Background
  • Representation
  • Inference
  • Learning
  • Software
  • Applications
  • Discussion

14
Representations
15
Markov Logic
  • Most developed approach to date
  • Many other approaches can be viewed as special cases
  • Used in rest of this talk

16
Markov Logic Intuition
  • A logical KB is a set of hard constraints on the set of possible worlds
  • Let's make them soft constraints: when a world violates a formula, it becomes less probable, not impossible
  • Give each formula a weight (Higher weight ⇒ Stronger constraint)

17
Markov Logic Definition
  • A Markov Logic Network (MLN) is a set of pairs
    (F, w) where
  • F is a formula in first-order logic
  • w is a real number
  • Together with a set of constants, it defines a Markov network with
  • One node for each grounding of each predicate in
    the MLN
  • One feature for each grounding of each formula F
    in the MLN, with the corresponding weight w

18
Example: Friends & Smokers
19
Example: Friends & Smokers
Two constants: Anna (A) and Bob (B)
20
Example: Friends & Smokers
Two constants: Anna (A) and Bob (B)
Ground atoms: Smokes(A), Smokes(B), Cancer(A), Cancer(B)
21
Example: Friends & Smokers
Two constants: Anna (A) and Bob (B)
Ground atoms: Smokes(A), Smokes(B), Cancer(A), Cancer(B), Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B)
22-23
Example: Friends & Smokers
(Same ground atoms; the original slides add the edges and features obtained by grounding each formula.)
24
Markov Logic Networks
  • An MLN is a template for ground Markov nets
  • Probability of a world x: P(x) = (1/Z) exp(Σ_i w_i n_i(x))
  • Typed variables and constants greatly reduce size
    of ground Markov net
  • Functions, existential quantifiers, etc.
  • Infinite and continuous domains

where w_i is the weight of formula i and n_i(x) is the no. of true groundings of formula i in x
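
As a hedged illustration of this formula (not part of the original deck), the following Python sketch enumerates all 256 worlds of the two-constant Friends & Smokers example, counts the true groundings n_i(x) of the two formulas, and computes P(x); the weights are illustrative:

    import itertools, math

    CONSTS = ["A", "B"]   # Anna, Bob
    ATOMS = ([("Smokes", (a,)) for a in CONSTS] +
             [("Cancer", (a,)) for a in CONSTS] +
             [("Friends", (a, b)) for a in CONSTS for b in CONSTS])
    w = [1.5, 1.1]        # weights of the two formulas (illustrative)

    def implies(p, q):
        return (not p) or q

    def n(x):             # n_i(x): true groundings of each formula in world x
        n1 = sum(implies(x[("Smokes", (a,))], x[("Cancer", (a,))])
                 for a in CONSTS)
        n2 = sum(implies(x[("Friends", (a, b))],
                         x[("Smokes", (a,))] == x[("Smokes", (b,))])
                 for a in CONSTS for b in CONSTS)
        return [n1, n2]

    def unnorm(x):
        return math.exp(sum(wi * ni for wi, ni in zip(w, n(x))))

    worlds = [dict(zip(ATOMS, v))
              for v in itertools.product([False, True], repeat=len(ATOMS))]
    Z = sum(unnorm(x) for x in worlds)
    print(unnorm(worlds[0]) / Z)   # probability of the all-false world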
25
Relation to Statistical Models
  • Special cases
  • Markov networks
  • Markov random fields
  • Bayesian networks
  • Log-linear models
  • Exponential models
  • Max. entropy models
  • Gibbs distributions
  • Boltzmann machines
  • Logistic regression
  • Hidden Markov models
  • Conditional random fields
  • Obtained by making all predicates zero-arity
  • Markov logic allows objects to be interdependent
    (non-i.i.d.)

26
Relation to First-Order Logic
  • Infinite weights ⇒ First-order logic
  • Satisfiable KB, positive weights ⇒ Satisfying assignments = Modes of distribution
  • Markov logic allows contradictions between
    formulas

27
Overview
  • Motivation
  • Background
  • Representation
  • Inference
  • Learning
  • Software
  • Applications
  • Discussion

28
Inferring the Most Probable Explanation
  • Problem: Find most likely state of world given evidence

argmax_y P(y | x)    (y = query, x = evidence)
29
Inferring the Most Probable Explanation
  • Problem: Find most likely state of world given evidence

= argmax_y (1/Z_x) exp(Σ_i w_i n_i(x, y))
30
Inferring the Most Probable Explanation
  • Problem: Find most likely state of world given evidence

= argmax_y Σ_i w_i n_i(x, y)
31
Inferring the Most Probable Explanation
  • Problem: Find most likely state of world given evidence
  • This is just the weighted MaxSAT problem
  • Use weighted SAT solver (e.g., MaxWalkSAT [Kautz et al., 1997])
  • Potentially faster than logical inference (!)

32
The WalkSAT Algorithm
for i ← 1 to max-tries do
    solution ← random truth assignment
    for j ← 1 to max-flips do
        if all clauses satisfied then
            return solution
        c ← random unsatisfied clause
        with probability p
            flip a random variable in c
        else
            flip variable in c that maximizes number of satisfied clauses
return failure
33
The MaxWalkSAT Algorithm
for i ← 1 to max-tries do
    solution ← random truth assignment
    for j ← 1 to max-flips do
        if Σ weights(sat. clauses) ≥ threshold then
            return solution
        c ← random unsatisfied clause
        with probability p
            flip a random variable in c
        else
            flip variable in c that maximizes Σ weights(sat. clauses)
return failure, best solution found
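
A runnable Python rendering of the MaxWalkSAT loop above (my sketch, not the reference implementation; clauses are given as (weight, [signed literals]) over 1-based variables, and the threshold is taken to be the total weight, i.e., all clauses satisfied):

    import random

    def flip_gain(a, v, score):
        a[v] = not a[v]
        s = score(a)
        a[v] = not a[v]
        return s

    def maxwalksat(clauses, n_vars, max_tries=10, max_flips=1000, p=0.5):
        def sat(c, a): return any(a[abs(l)] == (l > 0) for l in c)
        def score(a): return sum(w for w, c in clauses if sat(c, a))
        target = sum(w for w, _ in clauses)
        best, best_s = None, -1.0
        for _ in range(max_tries):
            a = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
            for _ in range(max_flips):
                s = score(a)
                if s > best_s:
                    best, best_s = dict(a), s
                if s >= target:
                    return best             # all clauses satisfied
                c = random.choice([c for w, c in clauses if not sat(c, a)])
                if random.random() < p:
                    v = abs(random.choice(c))   # random walk step
                else:                           # greedy step: best weighted gain
                    v = max((abs(l) for l in c),
                            key=lambda u: flip_gain(a, u, score))
                a[v] = not a[v]
        return best                             # failure: best found

Usage, e.g.: maxwalksat([(1.5, [1, -2]), (1.1, [2, 3])], n_vars=3).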
34
But: Memory Explosion
  • Problem: If there are n constants and the highest clause arity is c, the ground network requires O(n^c) memory
  • Solution: Exploit sparseness; ground clauses lazily ⇒ LazySAT algorithm [Singla & Domingos, 2006]
35
Computing Probabilities
  • P(Formula | MLN, C) = ?
  • MCMC: Sample worlds, check formula holds
  • P(Formula1 | Formula2, MLN, C) = ?
  • If Formula2 = conjunction of ground atoms:
  • First construct min subset of network necessary to answer query (generalization of KBMC)
  • Then apply MCMC (or other)
  • Can also do lifted inference [Braz et al., 2005]

36
MCMC Gibbs Sampling
state ← random truth assignment
for i ← 1 to num-samples do
    for each variable x
        sample x according to P(x | neighbors(x))
        state ← state with new value of x
P(F) ← fraction of states in which F is true
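
A minimal Python rendering of this Gibbs loop (my sketch; the conditional P(x | neighbors(x)) is model-specific, so it is passed in as a stub):

    import random

    def gibbs(variables, cond_prob, query, num_samples=10000, seed=0):
        # cond_prob(x, state) -> P(x = True | state of x's Markov blanket)
        # query(state)        -> True iff the formula F holds in the state
        rng = random.Random(seed)
        state = {v: rng.random() < 0.5 for v in variables}
        hits = 0
        for _ in range(num_samples):
            for x in variables:
                state[x] = rng.random() < cond_prob(x, state)
            hits += query(state)
        return hits / num_samples      # estimate of P(F)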
37
But: Insufficient for Logic
  • Problem: Deterministic dependencies break MCMC; near-deterministic ones make it very slow
  • Solution: Combine MCMC and WalkSAT ⇒ MC-SAT algorithm [Poon & Domingos, 2006]

38
Overview
  • Motivation
  • Background
  • Representation
  • Inference
  • Learning
  • Software
  • Applications
  • Discussion

39
Learning
  • Data is a relational database
  • Closed-world assumption (if not, use EM)
  • Learning parameters (weights)
  • Generatively
  • Discriminatively
  • Learning structure (formulas)

40
Generative Weight Learning
  • Maximize likelihood
  • Use gradient ascent or L-BFGS
  • No local maxima
  • Requires inference at each step (slow!)

Gradient: ∂/∂w_i log P_w(x) = n_i(x) − E_w[n_i(x)]
(no. of true groundings of clause i in the data minus the expected no. according to the model)
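
In code, one (hypothetical) gradient-ascent step is just the difference of these two counts; the expensive part is the expectation, which must be estimated by inference:

    def generative_step(w, n_data, expected_counts, lr=0.01):
        # n_data[i]: true groundings of clause i in the data (computed once)
        # expected_counts(w)[i]: E_w[n_i], estimated by inference, e.g. MCMC (slow!)
        e = expected_counts(w)
        return [wi + lr * (nd - ei) for wi, nd, ei in zip(w, n_data, e)]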
41
Pseudo-Likelihood
  • Likelihood of each variable given its neighbors
    in the data
  • Does not require inference at each step
  • Consistent estimator
  • Widely used in vision, spatial statistics, etc.
  • But PL parameters may not work well for long inference chains
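
For reference, the objective being maximized is (standard definition, not transcribed from the slide): log PL_w(x) = Σ_l log P_w(X_l = x_l | MB_x(X_l)), where MB_x(X_l) is the state of X_l's Markov blanket in the data.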

42
Discriminative Weight Learning
  • Maximize conditional likelihood of query (y) given evidence (x)
  • Expected counts ≈ counts in the most probable state of y given x, found by MaxWalkSAT

Gradient: ∂/∂w_i log P_w(y | x) = n_i(x, y) − E_w[n_i(x, y)]
(no. of true groundings of clause i in the data minus the expected no. according to the model)
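
A hypothetical sketch of the resulting update (a voted-perceptron-style step, where y* denotes the most probable state returned by MaxWalkSAT):

    def discriminative_step(w, n_data, map_counts, lr=0.01):
        # map_counts(w)[i]: n_i(x, y*) in the MAP state y* given x, used in
        # place of the intractable expectation E_w[n_i(x, y)]
        y_star = map_counts(w)
        return [wi + lr * (nd - ns) for wi, nd, ns in zip(w, n_data, y_star)]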
43
Structure Learning
  • Generalizes feature induction in Markov nets
  • Any inductive logic programming approach can be
    used, but . . .
  • Goal is to induce any clauses, not just Horn
  • Evaluation function should be likelihood
  • Requires learning weights for each candidate
  • Turns out not to be bottleneck
  • Bottleneck is counting clause groundings
  • Solution: Subsampling

44
Structure Learning
  • Initial state: Unit clauses or hand-coded KB
  • Operators: Add/remove literal, flip sign
  • Evaluation function: Pseudo-likelihood + structure prior
  • Search: Beam search, shortest-first search (see the sketch below)
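
A schematic of the beam search over clause sets (my sketch; the score stub stands in for pseudo-likelihood plus the structure prior, which in a real system requires learning weights for each candidate, and edits for the add/remove/flip operators):

    def beam_search(init, edits, score, beam_width=5, max_steps=10):
        # init: initial clause set; edits(cl): candidate neighbor clause sets
        beam = [(score(init), init)]
        best = beam[0]
        for _ in range(max_steps):
            cands = [(score(c), c) for _, cl in beam for c in edits(cl)]
            if not cands:
                break
            beam = sorted(cands, key=lambda t: t[0], reverse=True)[:beam_width]
            if beam[0][0] <= best[0]:
                break                       # no improvement: stop
            best = beam[0]
        return best[1]                      # highest-scoring clause set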

45
Overview
  • Motivation
  • Background
  • Representation
  • Inference
  • Learning
  • Software
  • Applications
  • Discussion

46
Alchemy
  • Open-source software including
  • Full first-order logic syntax
  • Generative & discriminative weight learning
  • Structure learning
  • Weighted satisfiability and MCMC
  • Programming language features

www.cs.washington.edu/ai/alchemy
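
For flavor, the Friends & Smokers MLN might be written in Alchemy's input format roughly as follows (a sketch from memory of the syntax, with illustrative weights; consult the Alchemy manual for the exact details):

    // Predicate declarations
    Smokes(person)
    Cancer(person)
    Friends(person, person)

    // Weighted formulas (weights are illustrative)
    1.5  Smokes(x) => Cancer(x)
    1.1  Friends(x, y) => (Smokes(x) <=> Smokes(y))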
47
(No Transcript)
48
Overview
  • Motivation
  • Background
  • Representation
  • Inference
  • Learning
  • Software
  • Applications
  • Discussion

49
Applications
  • Information extraction
  • Entity resolution
  • Link prediction
  • Collective classification
  • Web mining
  • Natural language processing
  • Computational biology
  • Social network analysis
  • Robot mapping
  • Activity recognition
  • Probabilistic Cyc
  • CALO
  • Etc.

50
Information Extraction
Parag Singla and Pedro Domingos,
Memory-Efficient Inference in Relational
Domains (AAAI-06). Singla, P., Domingos, P.
(2006). Memory-efficent inference in relatonal
domains. In Proceedings of the Twenty-First
National Conference on Artificial
Intelligence (pp. 500-505). Boston, MA AAAI
Press. H. Poon P. Domingos, Sound and
Efficient Inference with Probabilistic and
Deterministic Dependencies, in Proc. AAAI-06,
Boston, MA, 2006. P. Hoifung (2006). Efficent
inference. In Proceedings of the Twenty-First
National Conference on Artificial Intelligence.
51
Segmentation
Author
Title
Venue
(The same citation list as on the previous slide, with each token to be assigned to one of the fields above.)
52
Entity Resolution
(The same citation list; the original slides 52-53 highlight the fields and citations that should be matched as co-referent.)
54
State of the Art
  • Segmentation
  • HMM (or CRF) to assign each token to a field
  • Entity resolution
  • Logistic regression to predict same
    field/citation
  • Transitive closure
  • Alchemy implementation Seven formulas

55
Types and Predicates
token = {Parag, Singla, and, Pedro, ...}
field = {Author, Title, Venue}
citation = {C1, C2, ...}
position = {0, 1, 2, ...}

Token(token, position, citation)
InField(position, field, citation)
SameField(field, citation, citation)
SameCit(citation, citation)
56
Types and Predicates
(Same declarations as above, but with field = {Author, Title, Venue, ...}; the additional fields are optional.)
57
Types and Predicates
(Same declarations as above; Token(token, position, citation) is the evidence predicate.)
58
Types and Predicates
(Same declarations as above; InField, SameField, and SameCit are the query predicates.)
59
Formulas
Token(t,i,c) => InField(i,f,c)
InField(i,f,c) <=> InField(i+1,f,c)
f != f' => (!InField(i,f,c) v !InField(i,f',c))

Token(t,i,c) ^ InField(i,f,c) ^ Token(t,i',c') ^ InField(i',f,c') => SameField(f,c,c')
SameField(f,c,c') <=> SameCit(c,c')
SameField(f,c,c') ^ SameField(f,c',c'') => SameField(f,c,c'')
SameCit(c,c') ^ SameCit(c',c'') => SameCit(c,c'')
60-65
Formulas
(The same formula set repeated; the original slides highlight each rule in turn.)
66
Formulas
Token(t,i,c) => InField(i,f,c)
InField(i,f,c) ^ !Token(".",i,c) <=> InField(i+1,f,c)
f != f' => (!InField(i,f,c) v !InField(i,f',c))

Token(t,i,c) ^ InField(i,f,c) ^ Token(t,i',c') ^ InField(i',f,c') => SameField(f,c,c')
SameField(f,c,c') <=> SameCit(c,c')
SameField(f,c,c') ^ SameField(f,c',c'') => SameField(f,c,c'')
SameCit(c,c') ^ SameCit(c',c'') => SameCit(c,c'')

(Refinement: a field no longer propagates to the next position across a punctuation token.)
67
Robot Mapping
  • Input: Laser range-finder segments (xi, yi, xf, yf)
  • Outputs:
  • Segment labels (Wall, Door, Other)
  • Assignment of wall segments to walls
  • Position of walls (xi, yi, xf, yf)

68
Robot Mapping
69
MLNs for Hybrid Domains
  • Allow numeric properties of objects as nodes
  • E.g.: Length(x), Distance(x,y)
  • Allow numeric terms as features
  • E.g.: −(Length(x) − 5.0)²
  • (Gaussian distr. w/ mean 5.0 and variance 1/(2w); derivation below)
  • Allow α = β as shorthand for −(α − β)²
  • E.g.: Length(x) = 5.0
  • Etc.
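
To see the Gaussian connection: a feature −(Length(x) − 5.0)² with weight w contributes exp(−w · (Length(x) − 5.0)²) to the unnormalized probability, which as a function of Length(x) is proportional to a normal density with mean 5.0 and variance 1/(2w).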

70
Robot Mapping
  • SegmentType(s,t) => Length(s) = Length(t)
  • SegmentType(s,t) => Depth(s) = Depth(t)
  • Neighbors(s,s') ^ Aligned(s,s') => (SegType(s,t) <=> SegType(s',t))
  • !PreviousAligned(s) ^ PartOf(s,l) => StartLine(s,l)
  • StartLine(s,l) => Xi(s) = Xi(l) ^ Yi(s) = Yi(l)
  • PartOf(s,l) => (Yf(s) − Yi(s)) / (Xf(s) − Xi(s)) = (Yi(s) − Yi(l)) / (Xi(s) − Xi(l))
  • Etc.
71
Overview
  • Motivation
  • Background
  • Representation
  • Inference
  • Learning
  • Software
  • Applications
  • Discussion

72
Next Steps
  • Further improving scalability, robustness, and ease of use
  • Online learning and inference
  • Discovering deep structure
  • Generalizing across domains and tasks
  • Relational decision theory
  • Solving larger applications
  • Adversarial settings
  • Etc.

73
Summary
  • The real world is complex and uncertain
  • First-order logic handles complexity
  • Probability handles uncertainty
  • Statistical relational learning combines the two
  • Markov logic: Most advanced approach to date
  • Alchemy: Complete suite of state-of-the-art algorithms
  • Many challenging applications now within reach
  • We're at an inflection point in what we can do