Random Projections of Signal Manifolds (Michael Wakin and Richard Baraniuk); Random Projections for Manifold Learning (Chinmay Hegde, Michael Wakin and Richard Baraniuk); Random Projections of Smooth Manifolds (Richard Baraniuk and Michael Wakin)

1
Random Projections of Signal Manifolds (Michael
Wakin and Richard Baraniuk)
Random Projections for Manifold Learning (Chinmay
Hegde, Michael Wakin and Richard Baraniuk)
Random Projections of Smooth Manifolds (Richard
Baraniuk and Michael Wakin)
  • Presented by John Paisley, Duke University

2
Overview/Motivation
  • Random projections allow for linear,
    nonadaptive dimensionality reduction.
  • If we can ensure that manifold information is
    preserved under these projections, we can apply
    any manifold learning technique in the
    compressed space and know the results will be
    (essentially) the same.
  • Therefore we can sense compressively: we bypass
    the overhead of full acquisition and directly
    sense the compressed (dimensionality-reduced)
    signal.

3
Random Projections of Signal Manifolds (ICASSP 2006)
  • This paper: if we have manifold information, we
    can perform compressive sensing using
    significantly fewer measurements.
  • Whitney's Embedding Theorem: for a noiseless
    manifold with intrinsic dimensionality K, this
    theorem implies that a signal x in R^N,
    projected into R^M by an M x N orthonormal
    matrix P (y = Px), can be recovered with high
    probability if M > 2K.
  • Note that K is the intrinsic dimensionality,
    which is different from (and less than) the
    level of sparsity.
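The projection step above can be sketched numerically. This is a made-up illustration, not the paper's experiment: a K = 1 dimensional signal manifold (a smooth closed curve) embedded in R^N, projected into R^M with M > 2K by a random orthonormal matrix.

```python
import numpy as np

# Hedged sketch: project a K = 1 dimensional manifold in R^N into R^M
# with M > 2K measurements, as Whitney's theorem suggests. The manifold
# and all dimensions here are invented for illustration.
rng = np.random.default_rng(0)

N, K, M = 100, 1, 4                  # ambient dim, intrinsic dim, M > 2K
t = np.linspace(0, 1, 500)[:, None]  # manifold parameter

# A smooth one-parameter family of signals in R^N (a closed curve).
u, v = rng.standard_normal((2, N))
X = np.cos(2 * np.pi * t) * u + np.sin(2 * np.pi * t) * v

# Random orthonormal M x N projection P: orthonormalize M random columns
# via QR, then transpose so the rows of P are orthonormal.
P = np.linalg.qr(rng.standard_normal((N, M)))[0].T   # (M, N), P @ P.T = I
Y = X @ P.T                                          # projected manifold in R^M
```

Because P has orthonormal rows, each projected point y = Px is a nonadaptive linear measurement of x, and M = 4 > 2K = 2 as the theorem requires.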

4
Random Projections of Signal Manifolds (ICASSP 2006)
  • The recovery algorithm considered here is a
    simple search through the projected manifold
    for the nearest neighbor.
  • Consider the case where the data is noisy, so
    each signal lies slightly off the manifold, and
    define the recovery error for this setting (the
    definition is given on the slide).
5
Random Projections of Signal Manifolds (ICASSP 2006)
6
Random Projections for Manifold Learning (NIPS 2007)
  • How does a random projection of a manifold
    impact the ability to estimate the intrinsic
    dimensionality of the manifold, and to embed
    that manifold into a Euclidean space that
    preserves geodesic distances (e.g., via the
    Isomap algorithm)?
  • How many projections are needed?
  • Grassberger-Procaccia (GP) algorithm: a common
    algorithm for estimating the intrinsic
    dimensionality of a manifold.
  • Also written as C(r1)/C(r2) ≈ (r1/r2)^K, where
    K is the intrinsic dimensionality. This method
    uses the fact that the volume of the
    intersection of a K-dimensional object and a
    hypersphere of radius r is proportional to r^K.
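The GP estimate above can be demonstrated on synthetic data (not the paper's): C(r) is the fraction of point pairs within distance r, and since C(r) scales like r^K, the ratio C(r1)/C(r2) recovers K.

```python
import numpy as np

# Hedged illustration of the GP estimate on invented data: points on a
# circle (intrinsic dimension K = 1) embedded in R^10.
rng = np.random.default_rng(2)

n = 1500
theta = rng.uniform(0, 2 * np.pi, n)
X = np.zeros((n, 10))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)

# Pairwise distances via |x - y|^2 = |x|^2 + |y|^2 - 2<x, y>.
G = X @ X.T
sq = np.diag(G)
D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * G, 0.0))

C = lambda r: np.mean(D < r)        # correlation sum (self-pairs included,
                                    # a negligible bias for large n)
r1, r2 = 0.2, 0.4
K_hat = np.log(C(r1) / C(r2)) / np.log(r1 / r2)
```

For this circle the estimate comes out close to the true intrinsic dimension K = 1, even though the ambient dimension is 10.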

7
Random Projections for Manifold Learning (NIPS 2007)
  • Isomap algorithm: produces a mapping in which
    Euclidean distance in the mapped space
    approximates geodesic distance in the original
    space.
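The Isomap pipeline can be sketched from scratch on invented data (a helix segment, not anything from the paper): build a k-nearest-neighbor graph, approximate geodesic distances by graph shortest paths, then embed with classical MDS.

```python
import numpy as np

# Hedged from-scratch Isomap sketch: k-NN graph, Floyd-Warshall shortest
# paths as geodesic estimates, then classical MDS. Data and parameters
# are made up for illustration.
n, k = 120, 8
t = np.linspace(0, 3, n)
X = np.stack([np.cos(t), np.sin(t), t], axis=1)     # helix segment in R^3

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# k-NN graph: keep edges to each point's k nearest neighbors.
G = np.full((n, n), np.inf)
for i in range(n):
    nbrs = np.argsort(D[i])[:k + 1]                 # includes i itself
    G[i, nbrs] = D[i, nbrs]
G = np.minimum(G, G.T)                              # symmetrize

# Floyd-Warshall: graph shortest paths approximate geodesic distances.
for m in range(n):
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])

# Classical MDS on squared geodesic distances.
J = np.eye(n) - 1.0 / n                             # double-centering matrix
B = -0.5 * J @ (G ** 2) @ J
w, V = np.linalg.eigh(B)
Y = V[:, -1] * np.sqrt(w[-1])                       # 1-D embedding
```

Geodesic distance along this helix is proportional to the parameter difference, so the 1-D embedding should be nearly linear in t.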

8
Random Projections for Manifold Learning (NIPS 2007)
  • A lower bound on M for the GP algorithm is
    stated; the proof is given in the paper.

9
Random Projections for Manifold Learning (NIPS 2007)
  • A lower bound on M for the Isomap algorithm is
    stated; the proof is given in the paper.

10
Random Projections for Manifold Learning (NIPS 2007)
  • ML-RP algorithm (manifold learning using random
    projections): developed in the paper to find
    the number of projections M needed.
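The ML-RP idea can be sketched roughly as follows. The stopping rule and the use of a GP-style estimate inside the loop are stand-in assumptions for illustration, not the paper's exact criterion: grow the number of random projections M until the intrinsic-dimension estimate on the projected data stabilizes.

```python
import numpy as np

# Hedged sketch of the ML-RP idea: incrementally increase M and stop once
# a GP-style dimension estimate of the projected data stops changing.
# The stabilization threshold (0.05) is invented for this demo.
rng = np.random.default_rng(4)

def gp_dimension(Z):
    """GP-style estimate with radii set adaptively from the median distance."""
    G = Z @ Z.T
    sq = np.diag(G)
    D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * G, 0.0))
    r2 = 0.2 * np.median(D)
    r1 = 0.5 * r2
    C = lambda r: np.mean(D < r)
    return np.log(C(r1) / C(r2)) / np.log(r1 / r2)

# Synthetic data: a circle (intrinsic dimension 1) embedded in R^60.
n, N = 400, 60
theta = rng.uniform(0, 2 * np.pi, n)
X = np.zeros((n, N))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)

M, prev = 1, None
while M < N:
    P = rng.standard_normal((M, N)) / np.sqrt(M)    # random projection
    K_hat = gp_dimension(X @ P.T)
    if prev is not None and abs(K_hat - prev) < 0.05:
        break                       # estimate stabilized: stop growing M
    prev, M = K_hat, M + 1
```

The loop ends with an M far below the ambient dimension, and a dimension estimate near the true intrinsic dimension of 1.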

11
Random Projections for Manifold Learning (NIPS 2007)
12
Random Projections for Manifold Learning (NIPS 2007)
13
Random Projections of Smooth Manifolds (in Foundations of Computational Mathematics)
14
Random Projections of Smooth Manifolds (in Foundations of Computational Mathematics)
  • Sketch of proof:
  • Sample points from the manifold such that the
    geodesic distance from any point on the manifold
    to the nearest sampled point is less than some
    value. Also sample points from the tangent
    spaces of the manifold, ensuring the distance
    from every point to the nearest sample is less
    than some threshold. Then use the JL lemma to
    ensure that the embedding of all of these
    sampled points preserves relative distances.
    Finally, use some supporting theorems, together
    with how the points were sampled, to extend this
    distance preservation to all points on the
    manifold.
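The JL-lemma step in the sketch above can be checked numerically on invented data: for a finite point set, a scaled random projection preserves every pairwise distance up to a small multiplicative distortion with high probability.

```python
import numpy as np

# Hedged illustration of the JL-lemma step: all pairwise distance ratios
# between projected and original points concentrate around 1. Dimensions
# and point count are arbitrary choices for this demo.
rng = np.random.default_rng(5)

N, M, n = 1000, 200, 50
X = rng.standard_normal((n, N))                 # finite sample of points
P = rng.standard_normal((M, N)) / np.sqrt(M)    # scaled random projection
Y = X @ P.T

# Ratio of projected to original distance for every pair.
i, j = np.triu_indices(n, k=1)
d_orig = np.linalg.norm(X[i] - X[j], axis=1)
d_proj = np.linalg.norm(Y[i] - Y[j], axis=1)
ratios = d_proj / d_orig
```

The proof then extends this finite-sample distance preservation to the whole manifold using the sampling conditions above.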