Transcript and Presenter's Notes

Title: Treewidth: 2


1
Treewidth: 2
  • Network Algorithms 2002/2003

2
Define G(i)
  • Nice tree decomposition.
  • For each node i, G(i) is the subgraph of G formed by
    all vertices in the sets Xj with j = i or j a descendant
    of i in the tree.
  • Notation: G(i) = (V(i), E(i)).
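
A minimal Python sketch of how a nice tree decomposition and the sets V(i) might be represented; the names (TDNode, bag, kind, children, vertices_of_subtree) are illustrative choices, not from the slides.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass
class TDNode:
    """One node i of a nice tree decomposition."""
    bag: FrozenSet[str]                 # the set X_i
    kind: str                           # 'leaf', 'introduce', 'forget' or 'join'
    children: List["TDNode"] = field(default_factory=list)

def vertices_of_subtree(i: TDNode) -> FrozenSet[str]:
    """V(i): all vertices in the sets X_j with j = i or j a descendant of i."""
    vs = set(i.bag)
    for child in i.children:
        vs |= vertices_of_subtree(child)
    return frozenset(vs)
```

G(i) is then the subgraph of G induced by vertices_of_subtree(i).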

3
Maximum weighted independent set on graphs with
treewidth k
  • For node i in the tree decomposition and W ⊆ Xi, write:
  • R(i, W) = maximum weight of an independent set S of
    G(i) with S ∩ Xi = W,
  • R(i, W) = −∞ if no such S exists.

4
Leaf nodes
  • Let i be a leaf node. Say Xi = {v}.
  • R(i, {v}) = w(v)
  • R(i, ∅) = 0

(Figure: G(i) is a graph with the single vertex v.)
5
Join nodes
  • Let i be a join node with children j1, j2.
  • R(i, W) = R(j1, W) + R(j2, W) − w(W)
    (the weight of W is counted in both children, so it is subtracted once).



6
Introduce nodes
  • Let i be a node with child j, with Xi = Xj ∪ {v}.
  • Let W ⊆ Xj.
  • R(i, W) = R(j, W).
  • If v is not adjacent to a vertex in W:
    R(i, W ∪ {v}) = R(j, W) + w(v).
  • If v is adjacent to a vertex in W: R(i, W ∪ {v}) = −∞.

7
Forget nodes
  • Let i be a node with child j, with Xi = Xj − {v}.
  • Let W ⊆ Xi.
  • R(i, W) = max( R(j, W), R(j, W ∪ {v}) )

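The four recurrences of slides 4-7 (leaf, join, introduce, forget) combined into one bottom-up sketch. The node representation (dicts with keys 'bag', 'kind', 'children' and, for introduce/forget nodes, 'vertex' = v), the adjacency dict adj and the weight dict w are assumptions made for illustration; float("-inf") stands for "no such independent set exists".

```python
from itertools import combinations

NEG_INF = float("-inf")      # R(i, W) when no independent set S with S ∩ Xi = W exists

def subsets(bag):
    """All subsets of a bag, as frozensets."""
    vs = list(bag)
    for r in range(len(vs) + 1):
        for comb in combinations(vs, r):
            yield frozenset(comb)

def mwis_table(node, adj, w):
    """R-table of one node of a nice tree decomposition.

    node: dict with 'bag' (frozenset), 'kind', 'children' and, for
          introduce/forget nodes, 'vertex' (the vertex v involved).
    adj:  adj[u] = set of neighbours of u in G.
    w:    w[u] = weight of u.
    Returns R with R[W] for every subset W of the bag."""
    kind, bag = node["kind"], node["bag"]
    R = {}
    if kind == "leaf":                          # slide 4: Xi = {v}
        (v,) = bag
        R[frozenset()] = 0
        R[frozenset({v})] = w[v]
    elif kind == "join":                        # slide 5: both children have bag Xi
        R1 = mwis_table(node["children"][0], adj, w)
        R2 = mwis_table(node["children"][1], adj, w)
        for W in subsets(bag):
            R[W] = R1[W] + R2[W] - sum(w[v] for v in W)
    elif kind == "introduce":                   # slide 6: Xi = Xj ∪ {v}
        v = node["vertex"]
        Rj = mwis_table(node["children"][0], adj, w)
        for W in subsets(bag - {v}):
            R[W] = Rj[W]
            if adj[v] & W:                      # v adjacent to a vertex in W
                R[W | {v}] = NEG_INF
            else:
                R[W | {v}] = Rj[W] + w[v]
    else:                                       # 'forget', slide 7: Xi = Xj − {v}
        v = node["vertex"]
        Rj = mwis_table(node["children"][0], adj, w)
        for W in subsets(bag):
            R[W] = max(Rj[W], Rj[W | {v}])
    return R
```

The maximum weight of an independent set of G is then the maximum entry of the root's table.
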
8
Probabilistic networks
  • For decision support systems.
  • Models (in)dependencies between statistical
    variables.
  • Problem: compute the probability distribution of a
    variable, given values of other variables.
  • This is NP-hard, but linear for graphs with bounded
    treewidth.

(Figure: example network on x1, ..., x5 with conditional probability
tables such as Pr(x1) = 0.7, Pr(¬x1) = 0.3;
Pr(x5 | x3) = 0.6, Pr(¬x5 | x3) = 0.4, Pr(x5 | ¬x3) = 0.2, Pr(¬x5 | ¬x3) = 0.8;
Pr(x4 | x2 and x3) = 0.12, etc.)
10
Probabilistic networks
Warning: Notational Imprecision!
  • Each vertex is a statistical variable.
  • We assume variables are binary (true or false).
  • The network is a directed acyclic graph.
  • Parents of vertex v: the set of vertices w with an arc
    (w, v). Denoted par(v).
  • For each vertex, a probability distribution conditional
    on the values of its parents is given, e.g.
  • Pr(v = T | w = T, x = T)
  • Pr(v = T | w = T, x = F)
  • Etc.

(Figure: vertex v with parents w and x.)
11
Probability of total configurations
  • Denote the network G = (V, E).
  • Total configuration c: a value for each variable,
    i.e. c : V → {T, F}.
  • Probability of a total configuration:
    Pr(c) = product over all v ∈ V of Pr( v = c(v) | w = c(w) for all w ∈ par(v) ).

12
Some notations
  • Domain of a function c: dom(c).
  • Functions c and c′ agree iff for all v in dom(c) ∩ dom(c′):
    c(v) = c′(v).
  • Denote this c ≈ c′.
  • If c ≈ c′, then let c ∪ c′ be the function with
    domain dom(c) ∪ dom(c′) with:
  • If v in dom(c), then (c ∪ c′)(v) = c(v).
  • If v in dom(c′), then (c ∪ c′)(v) = c′(v).

13
Partial configurations and their probability
  • W ⊆ V. Partial configuration c: W → {T, F}.
  • Probability of a partial configuration: the sum of the
    probabilities of all total configurations agreeing with c:
    Pr(c) = sum of Pr(c′) over all total configurations c′ with c′ ≈ c.
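
A brute-force Python sketch of the definitions on this slide and on slide 11: Pr of a total configuration as a product of conditional-probability-table entries, Pr of a partial configuration as a sum over all agreeing total configurations. The representation (a parents dict and a cpt function) is an assumption; this follows the definitions directly and is exponential, unlike the tree decomposition algorithm of the later slides.

```python
from itertools import product

def pr_total(c, parents, cpt):
    """Pr(c) for a total configuration c: dict mapping every variable to True/False.

    parents[v] = set of parents of v; cpt(v, value, parent_values) returns
    Pr(v = value | the parents of v have the values in parent_values)."""
    p = 1.0
    for v, value in c.items():
        parent_values = tuple(c[w] for w in sorted(parents[v]))
        p *= cpt(v, value, parent_values)
    return p

def pr_partial(c, variables, parents, cpt):
    """Pr(c) for a partial configuration c: sum of Pr(c') over all total
    configurations c' agreeing with c."""
    free = [v for v in variables if v not in c]
    total = 0.0
    for values in product([True, False], repeat=len(free)):
        total += pr_total({**c, **dict(zip(free, values))}, parents, cpt)
    return total
```

The conditional probability Pr(v = T | c) of slide 14 is then the quotient pr_partial({**c, v: True}, ...) / pr_partial(c, ...).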

14
Probability of variable
  • Pr(v = T)
  • is a special case of a partial configuration.
  • Conditional probability Pr(v = T | c) for some
    partial configuration c:
  • Pr(v = T | c) = Pr(v = T and c) / Pr(c).
  • This answers queries to the probabilistic network:
  • What is the probability that this patient has
    disease v, given that he has symptoms c?
  • Probabilistic inference.
  • NP-hard (#P-complete) on general networks.

15
Computing the probability of a partial
configuration using tree decompositions
  • Make the moralized graph H of G.
  • For each v, make the parents of v into a clique.
  • Then drop the direction of all edges of G.
  • Make a tree decomposition of H.
  • E.g., with the minimum degree heuristic.
  • Dynamic programming approach on the tree
    decomposition.
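
A sketch of the moralization step described above, assuming the dag is given as a dict that maps every vertex to its (possibly empty) set of parents.

```python
from itertools import combinations

def moralize(parents):
    """Moral graph H of the dag G: the parents of each vertex are made into
    a clique and all arc directions are dropped.
    Returns an undirected adjacency dict adj[u] = set of neighbours in H."""
    adj = {v: set() for v in parents}
    for v, ps in parents.items():
        for w in ps:                               # the arc (w, v) becomes an edge
            adj[v].add(w)
            adj[w].add(v)
        for w, x in combinations(sorted(ps), 2):   # marry the parents of v
            adj[w].add(x)
            adj[x].add(w)
    return adj
```

For the example network of slide 8, where x4 is conditioned on x2 and x3, the parents x2 and x3 would be joined by an edge in H.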

16
Inference problem
  • Given:
  • A probabilistic network, with dag G = (V, E) and
    conditional probabilities.
  • A partial configuration c: W → {T, F}.
  • Question: what is Pr(c)?

17
Subproblems for graphs G(i)
  • Consider G(i) = (V(i), E(i)).
  • For each v in V, select a node iv whose bag contains
    {v} ∪ par(v).
  • Such a node exists, as this set is a clique!
  • Choose iv of maximum depth; this is an introduce
    or leaf node.
  • Write Z(i) for the set of v with iv in the subtree
    with root i.
  • For each d: Xi → {T, F}, let Pi(d) be the sum, over
    all partial configurations of V(i) that agree with c
    and d, of the products of the conditional probabilities
    for the v with iv in the subtree with root i.

18
Method
  • Compute, bottom-up, for each node i the table Pi.
  • These tables contain at most 2^(k+1) numbers, if the
    treewidth of H is k.
  • For each of the four types of nodes (leaf, join,
    forget, introduce), the table can be computed
    from the tables of the children of the node.

19
Leaves
  • Leaf i with Xi = {v}.
  • If iv ≠ i:
  • If v ∉ W: Pi(v = T) = Pi(v = F) = 1.
  • If v ∈ W, c(v) = T: Pi(v = T) = 1, Pi(v = F) = 0.
  • If v ∈ W, c(v) = F: Pi(v = T) = 0, Pi(v = F) = 1.
  • If iv = i:
  • If v ∉ W: Pi(v = T) = Pr(v = T), Pi(v = F) = Pr(v = F).
  • If v ∈ W, c(v) = T: Pi(v = T) = Pr(v = T), Pi(v = F) = 0.
  • If v ∈ W, c(v) = F: Pi(v = T) = 0, Pi(v = F) = Pr(v = F).

20
Join nodes
  • i is a join node with children j1, j2.
  • For each d: Xi → {T, F}:
  • Pi(d) = Pj1(d) · Pj2(d).
  • This needs a proof.

21
Introduce nodes case 1
  • Let i be a node with child j, with Xi = Xj ∪ {x}.
  • Suppose ix ≠ i.
  • Let d: Xi → {T, F}; let d′ be d restricted to Xj.
  • If not d ≈ c, then Pi(d) = 0.
  • If d ≈ c, then Pi(d) = Pj(d′).
  • Note: Z(i) = Z(j).

22
Introduce nodes case 2
  • Let i be a node with child j, with Xi = Xj ∪ {x}.
  • Suppose ix = i.
  • Let d: Xi → {T, F}; let d′ be d restricted to Xj.
  • Write q = Pr( x has value d(x) | each parent y
    of x has value d(y) ). (Read from the probabilistic
    network.)
  • If not d ≈ c, then Pi(d) = 0.
  • If d ≈ c, then Pi(d) = Pj(d′) · q.
  • Note: Z(i) = Z(j) ∪ {x}.

23
Forget nodes
  • Let i be a node with child j, with Xi = Xj − {x}.
  • Let d: Xi → {T, F}.
  • Let d′: Xj → {T, F} with d′(x) = T and d′ ≈ d.
  • Let d″: Xj → {T, F} with d″(x) = F and d″ ≈ d.
  • Pi(d) = Pj(d′) + Pj(d″).
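
The recurrences of slides 19-23 combined into one bottom-up sketch. The node representation follows the same illustrative conventions as the earlier sketches ('bag', 'kind', 'children', 'vertex'); parents and cpt are the hypothetical network representation used before, c is the observed partial configuration (with domain W), and owner[v] is the node iv chosen as on slide 17.

```python
from itertools import product

def assignments(bag):
    """All functions d: bag -> {True, False}, as dicts."""
    vs = sorted(bag)
    for values in product([True, False], repeat=len(vs)):
        yield dict(zip(vs, values))

def key(d):
    """Hashable key for an assignment d."""
    return frozenset(d.items())

def agrees(d, c):
    """d ≈ c: d and c coincide on their common domain."""
    return all(c[v] == value for v, value in d.items() if v in c)

def p_table(node, parents, cpt, c, owner):
    """Table Pi for one node of the nice tree decomposition of H."""
    kind, bag = node["kind"], node["bag"]
    P = {}
    if kind == "leaf":                                   # slide 19: Xi = {v}
        (v,) = bag
        for d in assignments(bag):
            if not agrees(d, c):
                P[key(d)] = 0.0
            elif owner[v] is node:                       # par(v) is empty here, as {v} ∪ par(v) ⊆ Xi = {v}
                P[key(d)] = cpt(v, d[v], ())
            else:
                P[key(d)] = 1.0
    elif kind == "join":                                 # slide 20
        P1 = p_table(node["children"][0], parents, cpt, c, owner)
        P2 = p_table(node["children"][1], parents, cpt, c, owner)
        for d in assignments(bag):
            P[key(d)] = P1[key(d)] * P2[key(d)]
    elif kind == "introduce":                            # slides 21/22: Xi = Xj ∪ {x}
        x = node["vertex"]
        Pj = p_table(node["children"][0], parents, cpt, c, owner)
        for d in assignments(bag):
            if not agrees(d, c):
                P[key(d)] = 0.0
                continue
            dj = {v: val for v, val in d.items() if v != x}   # d restricted to Xj
            value = Pj[key(dj)]
            if owner[x] is node:                         # case 2: multiply in Pr(x = d(x) | parents as in d)
                value *= cpt(x, d[x], tuple(d[y] for y in sorted(parents[x])))
            P[key(d)] = value
    else:                                                # 'forget', slide 23: Xi = Xj − {x}
        x = node["vertex"]
        Pj = p_table(node["children"][0], parents, cpt, c, owner)
        for d in assignments(bag):
            P[key(d)] = Pj[key({**d, x: True})] + Pj[key({**d, x: False})]
    return P
```

Pr(c) is then obtained by summing the entries of the root's table.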

24
Inference using tree decompositions
  • In bottom-up order in the tree, compute for each
    node i a table Pi, giving for each possible
    function d: Xi → {T, F} the value Pi(d).
  • Each table entry can be computed in constant
    time.
  • If the treewidth is bounded by k, we can have a
    tree decomposition with O(n) nodes, and the
    algorithm uses O(2^(k+1) · n) time.

25
Remarks
  • The actual scheme reuses information in order to
    compute conditional probability distributions for
    all variables at the same time.
  • The Lauritzen-Spiegelhalter algorithm is the most
    commonly used algorithm for inference in
    probabilistic networks.