1
Generalized Sparsest Cut and Embeddings of
Negative-Type Metrics
  • Shuchi Chawla, Anupam Gupta, Harald Räcke
  • Carnegie Mellon University
  • 1/25/05

2
Finding Bottlenecks
  • Find the cut across which demand exceeds capacity
    by the largest factor

[Figure: example network with edge labels 0.1, 1, and 10; the sparsest cut shown has capacity 2.1 units and 3 units of demand crossing it, giving sparsity 2.1/3 = 0.7]
3
The Generalized Sparsest Cut Problem
  • Given
  • a graph G(V,E)
  • capacities c(e) on the edges
  • demands D(x,y) on pairs of vertices
  • Sparsity of a cut S ⊆ V (see the sketch below):
  • α(S) = Σ_{e ∈ δ(S)} c(e) / Σ_{x ∈ S, y ∉ S} D(x,y)
  • Sparsity of the graph G:
  • α(G) = min_{S ⊆ V} α(S)
  • Our result: an O(log^{3/4} n)-approximation for α(G)

[Figure: a cut separating S from V∖S]
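A minimal sketch of the objective, with hypothetical names, computing α(S) for a given cut from capacity and demand dictionaries:

    def sparsity(S, capacity, demand):
        """alpha(S) = capacity crossing the cut (S, V-S)
                      / total demand separated by the cut.
        capacity: dict {(u, v): c(e)}; demand: dict {(x, y): D(x, y)}."""
        S = set(S)
        cut_cap = sum(c for (u, v), c in capacity.items()
                      if (u in S) != (v in S))
        cut_dem = sum(D for (x, y), D in demand.items()
                      if (x in S) != (y in S))
        return cut_cap / cut_dem if cut_dem > 0 else float("inf")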
4
What's known
  • Uniform demands: a special case
  • D(x,y) = 1 for all x ≠ y
  • O(log n)-approx [Leighton Rao '88]
  • based on LP rounding
  • cannot do better than O(log n) using the LP
  • O(√log n)-approx [Arora Rao Vazirani '04]
  • based on an SDP relaxation
  • General case
  • O(log n)-approx [Linial London Rabinovich '95, Aumann Rabani '98]
  • based on LP rounding and low-distortion embeddings
  • Our result: O(log^{3/4} n)-approx
  • extends ARV'04, using the same SDP

5
A metric perspective
  • Given a set S, define the cut metric
  • δ_S(x,y) = 1 if x and y are on different sides of the cut (S, V∖S)
  • δ_S(x,y) = 0 otherwise
  • α(S) = Σ_e c(e) δ_S(e) / Σ_{x,y} D(x,y) δ_S(x,y)
  • Finding the sparsest cut
  • ⇔ minimizing the above function over all cut metrics (NP-hard)
  • Typical technique: minimize over a class M of metrics with M ⊇ ℓ₁, and embed into ℓ₁

[Figure: cut metrics ⊆ ℓ₁ ⊆ all metrics]
6
A metric perspective
  • Finding the sparsest cut
  • ⇔ minimizing α(d) over ℓ₁ metrics
  • Lemma: minimize over a class M ⊇ ℓ₁ to obtain d;
  • if we have a ρ-distortion embedding from d into ℓ₁,
  • ⇒ a ρ-approx for sparsest cut
  • When M = all metrics, we obtain an O(log n) approximation
    [Linial London Rabinovich '95, Aumann Rabani '98]
  • Cannot do any better [Leighton Rao '88]

7
A metric perspective
  • Finding the sparsest cut
  • ⇔ minimizing α(d) over ℓ₁ metrics
  • Lemma: minimize over a class M to obtain d;
  • if we have a ρ-avg-distortion embedding from d into ℓ₁,
  • ⇒ a ρ-approx for uniform-demands sparsest cut
  • M = negative-type metrics (ℓ₂²) ⇒ O(√log n) approx
    [Arora Rao Vazirani '04]
  • Question: can we obtain O(√log n) for generalized sparsest cut,
    or an O(√log n)-distortion embedding from ℓ₂² into ℓ₁?
8
Arora et al.'s O(√log n)-approx
  • Solve an SDP relaxation to get the best ℓ₂² representation
  • Key Theorem
  • Let d be a well-spread-out metric. Then there exists an embedding m from d into a line such that
  • - for all pairs (x,y), m(x,y) ≤ d(x,y)
  • - for a constant fraction of (x,y), m(x,y) ≥ d(x,y) / O(√log n)
  • The general case: issues
  • well-spreading does not hold
  • a constant fraction is not enough:
  • we want low distortion for every demand pair

(Well-spread: for a constant fraction of pairs (x,y), d(x,y) > const · diameter. The theorem implies an average distortion of O(√log n). A sketch of the underlying random-projection step follows.)
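The line embedding in the key theorem starts from projection onto a random direction. Below is a minimal sketch of just that step, with our own (hypothetical) names; the full ARV rounding additionally needs a truncation to enforce m(x,y) ≤ d(x,y) and the matching argument behind the 1/O(√log n) guarantee.

    import numpy as np

    def random_line_projection(vecs, rng=np.random.default_rng(0)):
        """Project the SDP's unit vectors onto a uniformly random
        direction g; the 1-d image f defines m(x,y) = |f[x] - f[y]|.
        vecs: (n, k) array of unit vectors from the SDP solution."""
        n, k = vecs.shape
        g = rng.standard_normal(k)
        g /= np.linalg.norm(g)            # uniform random direction
        return vecs @ g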
9
1. Ensuring well-spreading
  • Divide the pairs into groups based on their distances (see the sketch after this list)
  • D_i = { (x,y) : 2^i ≤ d(x,y) ≤ 2^{i+1} }
  • At most O(log n) groups
  • Each group by itself is well-spread, by definition
  • Embed each group individually
  • a distortion-O(√log n) contracting embedding into a line for each group (assume this for now)
  • Glue the embeddings appropriately
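A minimal sketch of the bucketing step, assuming the metric is given as a distance matrix d (names are ours):

    import math
    from collections import defaultdict

    def bucket_pairs_by_scale(d):
        """Group vertex pairs into scales D_i with
        2^i <= d(x,y) < 2^(i+1); only O(log n) groups are
        non-empty when distances are polynomially bounded."""
        groups = defaultdict(list)
        n = len(d)
        for x in range(n):
            for y in range(x + 1, n):
                if d[x][y] > 0:
                    groups[int(math.log2(d[x][y]))].append((x, y))
        return groups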

10
Gluing the groups
  • Start with an a = O(√log n) distortion embedding for each scale
  • A naïve gluing:
  • concatenate all the embeddings and renormalize by dividing by O(√log n)
  • distortion O(a · √log n) = O(log n)
  • A better gluing lemma:
  • measured descent, by Krauthgamer, Lee, Mendel & Naor (2004)
  • (recall the previous talk by James Lee)
  • gives distortion O(√(a · log n)) ⇒ distortion O(log^{3/4} n)

11
2. Average to worst-case distortion
  • Arora et al.'s guarantee: a constant fraction of pairs embed with low distortion
  • We want: every pair should embed with low distortion
  • Idea: re-embed the pairs that have high distortion
  • Problem: this increases the number of embeddings, implying a larger distortion
  • A re-weighting solution:
  • don't ignore low-distortion pairs completely; keep them around and reduce their importance

12
Weighting-and-watching
  • Initialize weight 1 for each pair
  • Apply ARV to the weighted instance
  • (a constant fraction of the weight is embedded with low distortion)
  • For pairs with low distortion, decrease their weights by a factor of 2
  • For the other pairs, do nothing
  • Repeat until the total weight is < 1/k
  • The total weight decreases by a constant factor every iteration
  • ⇒ O(log k) iterations
  • Each individual weight decreases from 1 to below 1/k
  • ⇒ each pair embeds with low distortion in Ω(log k) of the iterations
  • Implies low distortion for every pair (see the sketch below)
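A minimal sketch of this loop, assuming a routine arv_embed(pairs, weights) that returns the current embedding together with the set of pairs it embeds with low distortion (the routine and all names are hypothetical stand-ins for the ARV subroutine):

    def weight_and_watch(pairs, arv_embed):
        """Halve the weight of every pair that already received a
        low-distortion embedding; repeat until the total weight
        falls below 1/k. Since a constant fraction of the weight
        is halved each round, this takes O(log k) iterations, and
        every pair is 'low distortion' in Omega(log k) of them."""
        k = len(pairs)
        weights = {p: 1.0 for p in pairs}
        embeddings = []
        while sum(weights.values()) >= 1.0 / k:
            embedding, good_pairs = arv_embed(pairs, weights)
            embeddings.append(embedding)
            for p in good_pairs:   # these pairs matter less from now on
                weights[p] /= 2.0
        return embeddings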
13
Summarizing
  • Start with a solution to the SDP
  • For every distance scale:
  • use ARV'04 to embed the points into a line
  • use re-weighting to obtain good worst-case distortion
  • Combine the distance scales using measured descent
  • In practice:
  • write another SDP to find the best embedding into ℓ₂
  • use J-L to embed the ℓ₂ points into ℓ₁ and then into a cut metric
14
Recent developments
  • Arora, Lee & Naor obtained an O(√log n · log log n) approximation for sparsest cut
  • the improvement lies in a better concatenation technique:
  • a nearly optimal embedding from ℓ₂² into ℓ₁
  • Evidence for hardness:
  • Khot & Vishnoi: Ω(log log log n) integrality gap for the SDP
  • a lower bound for embedding ℓ₂² into ℓ₁
  • Chawla, Krauthgamer, Kumar, Rabani & Sivakumar:
  • Ω(log log n) hardness based on the Unique Games Conjecture
  • evidence that a constant-factor approximation is not possible
  • Other approximations using similar SDP relaxations:
  • Feige, Hajiaghayi & Lee: O(√log n) approx for min-weight vertex cuts
15
Open Problems
  • Beating the ALN'05 O(√log n · log log n) approximation
  • can the SDP give a better bound?
  • exploring flow-based techniques
  • Closing the gap between hardness and approximation
  • Other applications of SDPs with triangle inequalities
  • other partitioning problems
  • directed versions? SDP/LP don't seem to work

16
Questions?
17
Gluing the groups: a naïve approach
  • Basic plan: embed into ℓ₂ first and then into ℓ₁
  • A naïve idea: concatenate all the embeddings and renormalize by dividing by O(√log n) (see the sketch below)
  • No expansion: μ(x,y) ≤ d(x,y) for all x,y
  • Contraction: μ(x,y) ≥ (1/O(√log n)) · d(x,y)/O(√log n) = d(x,y)/O(log n)
  • Distortion: at most O(log n)
  • No better than before: we unnecessarily lose another factor of O(√log n)
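A minimal sketch of this naïve gluing into ℓ₂, assuming one contracting line embedding per scale and m = O(log n) scales (names are ours):

    import numpy as np

    def naive_glue(line_embeddings):
        """Concatenate one line embedding per distance scale and
        renormalize by sqrt(m), so the l2 image is non-expansive;
        each pair still keeps a 1/O(sqrt(log n)) fraction of its
        own scale's coordinate, hence overall distortion O(log n).

        line_embeddings: list of m arrays of shape (n,).
        Returns an (n, m) array whose rows are the images."""
        F = np.stack(line_embeddings, axis=1).astype(float)
        m = F.shape[1]
        return F / np.sqrt(m)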

18
Gluing via measured descent
  • A technique developed by Krauthgamer, Lee, Mendel & Naor (2004)
  • Basic idea:
  • consider an expansion measure V(x,y)
  • V(x,y) is large if the number of points around x grows fast with the distance from x, for distance scales in O(d(x,y))
  • Fakcharoenphol, Rao & Talwar give an embedding in which low V(x,y) implies low distortion for the pair (x,y)
  • KLMN: embed pairs with large V(x,y) better, so as to correct the skew of FRT'03

19
Gluing via measured descent
  • A technique developed by Krauthgamer, Lee, Mendel & Naor (2004)
  • Basic idea:
  • KLMN: embed pairs with large V(x,y) better, so as to correct the skew of FRT'03
  • Naïve approach: create an embedding for every distance scale and concatenate
  • KLMN instead:
  • create an embedding for every number t of points
  • to embed a point x, use the distance scale corresponding to the ball around x containing t points (see the sketch below)
  • large V(x,y) ⇒ more representation for (x,y): one coordinate for each distance scale lying in O(d(x,y))
  • Gives an embedding with distortion O(√(a · log n))
  • a = √log n ⇒ distortion O(log^{3/4} n)
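A minimal sketch of the scale-selection rule, assuming the metric is given as a distance matrix d (the function name is ours):

    import numpy as np

    def scale_for_point(d, x, t):
        """Radius of the smallest ball around x that contains t
        points (x itself included), i.e. the distance from x to
        its t-th nearest point; the coordinate built 'for t
        points' embeds x at this distance scale."""
        dists = np.sort(np.asarray(d)[x])   # d[x][x] = 0 sorts first
        return dists[min(t, len(dists)) - 1]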

20
ARV
  • A novel technique for rounding the SDP
  • gives a rounding with low average distortion from ℓ₂² into ℓ₁
  • What does this mean for uniform sparsest cut?
  • The main result: any ℓ₂² metric with large average distance can be embedded with low average distortion

21
An intuition
  • Essentially, we want to do the same embedding as ARV
  • However, the different distance scales no longer have the well-spreading property
  • Solution: treat each distance scale separately
  • at most log n distance scales
  • naïve concatenation gives a total factor of log n: no benefit
  • instead, use a smart concatenation based on KLMN
  • details need to be filled in

22
A single distance scale
  • What do we need in the end?
  • Forming a partition
  • Partition to probabilistic embedding
  • Handling probabilities
  • Handling weights

23
Putting them together
  • Basic idea of KLMN

24
The algorithm
  • The best embedding can be found through an SDP
  • We embed into ℓ₂, which can then be embedded into ℓ₁ and then into a cut metric (see the sketch below)
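A minimal sketch of those two steps under standard assumptions (our illustrative reading, not the authors' exact procedure): a Johnson-Lindenstrauss Gaussian projection reduces the ℓ₂ dimension, and each resulting coordinate, being a line metric, decomposes exactly into cut metrics by cutting between consecutive sorted values.

    import numpy as np

    def jl_project(points, dim, rng=np.random.default_rng(0)):
        """Johnson-Lindenstrauss: multiply the (n, k) point matrix
        by a scaled Gaussian matrix; pairwise l2 distances are
        preserved up to 1 +/- eps w.h.p. for dim = O(log n / eps^2)."""
        n, k = points.shape
        G = rng.standard_normal((k, dim)) / np.sqrt(dim)
        return points @ G

    def line_to_cut_metrics(coords):
        """Write a line metric as a weighted sum of cut metrics:
        sort the values and place one cut between every pair of
        consecutive points, weighted by the gap."""
        order = np.argsort(coords)
        cuts = []                      # list of (S, weight) pairs
        for i in range(len(order) - 1):
            gap = coords[order[i + 1]] - coords[order[i]]
            if gap > 0:
                cuts.append((set(order[: i + 1].tolist()), gap))
        return cuts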