Title: Random Projections of Signal Manifolds / Random Projections for Manifold Learning / Random Projections of Smooth Manifolds
1. Papers covered:
- Random Projections of Signal Manifolds (Michael Wakin and Richard Baraniuk)
- Random Projections for Manifold Learning (Chinmay Hegde, Michael Wakin and Richard Baraniuk)
- Random Projections of Smooth Manifolds (Richard Baraniuk and Michael Wakin)
- Presented by John Paisley, Duke University
2. Overview/Motivation
- Random projections allow for linear, nonadaptive dimensionality reduction.
- If we can ensure that the manifold information is preserved in these projections, we can use all manifold learning techniques in this compressed space and know the results will be (essentially) the same.
- Therefore we can sense compressively: we can bypass the overhead of full acquisition and directly sense the compressed (dimensionality-reduced) signal, as sketched below.
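A minimal sketch of such a random projection, assuming a Gaussian measurement matrix and illustrative dimensions N and M (not values from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000   # ambient dimension of the signal (illustrative)
M = 50     # number of random measurements, M << N (illustrative)

# Random Gaussian measurement matrix, scaled so that squared norms are
# preserved in expectation: E[||Phi x||^2] = ||x||^2.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

x = rng.standard_normal(N)   # stand-in for a signal on some manifold
y = Phi @ x                  # nonadaptive linear measurement: y = Phi x

print(y.shape)               # (50,): the compressed representation
```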
3. Random Projections of Signal Manifolds (ICASSP 2006)
- This paper: if we have manifold information, we can perform compressive sensing using significantly fewer measurements.
- Whitney's Embedding Theorem: for a noiseless manifold with intrinsic dimensionality K, this theorem implies that a signal x in R^N, projected into R^M by an M x N orthonormal matrix P (y = Px), can be recovered with high probability if M > 2K.
- Note that K is the intrinsic dimensionality, which is different from (and less than) the level of sparsity.
4. Random Projections of Signal Manifolds (ICASSP 2006)
- The recovery algorithm considered here is a simple search through the projected manifold for the nearest neighbor to the measurement.
- Now consider the case where the data is noisy, so the signal lies slightly off the manifold. A sketch of nearest-neighbor recovery from a noisy measurement appears below.
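A minimal sketch of this nearest-neighbor recovery on a toy one-parameter manifold (a circle embedded in a random 2-plane of R^N); the manifold, noise level, and dimensions are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 200, 10   # ambient and projected dimensions (illustrative)

# Dense sampling of a toy 1-D manifold: a circle lying in a random
# 2-plane of R^N (an assumed example, not from the papers).
thetas = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
plane = rng.standard_normal((2, N))
samples = np.stack([np.cos(thetas), np.sin(thetas)], axis=1) @ plane

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Noisy measurement of an unknown point on the manifold.
theta_true = 1.234
x_true = np.array([np.cos(theta_true), np.sin(theta_true)]) @ plane
y = Phi @ x_true + 0.01 * rng.standard_normal(M)

# Recovery: nearest-neighbor search through the *projected* manifold.
proj = samples @ Phi.T
idx = np.argmin(np.linalg.norm(proj - y, axis=1))

print(abs(thetas[idx] - theta_true))   # small parameter-estimation error
```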
5. Random Projections of Signal Manifolds (ICASSP 2006)
6. Random Projections for Manifold Learning (NIPS 2007)
- How does a random projection of a manifold impact the ability to estimate the intrinsic dimensionality of the manifold, and to embed that manifold into a Euclidean space that preserves geodesic distances (e.g., via the Isomap algorithm)? How many projections are needed?
- Grassberger-Procaccia (GP) algorithm: a common algorithm for estimating the intrinsic dimensionality of a manifold.
- The correlation sums satisfy C(r1)/C(r2) ≈ (r1/r2)^K, where K is the intrinsic dimensionality. This method uses the fact that the volume of the intersection of a K-dimensional object and a hypersphere of radius r is proportional to r^K. A sketch of this estimator appears below.
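A minimal sketch of the GP estimate built from the relation above; the test data (a circle in R^3, so true intrinsic dimension 1) and the radii r1, r2 are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)

# Test data: points on a circle in R^3, so the true intrinsic
# dimensionality is K = 1 (an illustrative assumption).
t = rng.uniform(0, 2 * np.pi, 1500)
X = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)

d = pdist(X)   # all pairwise distances

def C(r):
    """Correlation sum: fraction of point pairs within distance r."""
    return np.mean(d < r)

# K ~ log(C(r1)/C(r2)) / log(r1/r2); the radii are illustrative choices.
r1, r2 = 0.2, 0.4
K_hat = np.log(C(r1) / C(r2)) / np.log(r1 / r2)
print(K_hat)   # close to 1, the circle's intrinsic dimension
```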
7. Random Projections for Manifold Learning (NIPS 2007)
- Isomap algorithm: produces a mapping where Euclidean distance in the mapped space approximates geodesic distance in the original space. A sketch of running Isomap on randomly projected data appears below.
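A minimal sketch of the idea behind this slide: run Isomap on randomly projected data and compare against Isomap on the original data. It assumes scikit-learn's Isomap and swiss-roll generator; the lift to R^N and all dimensions are illustrative choices:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

rng = np.random.default_rng(3)
X, _ = make_swiss_roll(n_samples=1000, random_state=0)   # points in R^3

# Lift into a higher ambient dimension, then randomly project.
N, M = 100, 15                      # illustrative dimensions
lift = rng.standard_normal((3, N))
X_high = X @ lift                   # manifold embedded in R^N
Phi = rng.standard_normal((N, M)) / np.sqrt(M)
X_proj = X_high @ Phi               # compressed points in R^M

emb_orig = Isomap(n_neighbors=10, n_components=2).fit_transform(X_high)
emb_proj = Isomap(n_neighbors=10, n_components=2).fit_transform(X_proj)

# If geodesic structure survives the projection, pairwise distances in
# the two embeddings should be highly correlated.
r = np.corrcoef(pdist(emb_orig), pdist(emb_proj))[0, 1]
print(r)   # close to 1
```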
8. Random Projections for Manifold Learning (NIPS 2007)
- Lower bound on M for the GP algorithm; the bound and its proof are given in the paper.
9. Random Projections for Manifold Learning (NIPS 2007)
- Lower bound on M for the Isomap algorithm; the bound and its proof are given in the paper.
10. Random Projections for Manifold Learning (NIPS 2007)
- ML-RP algorithm (Manifold Learning using Random Projections): developed in the paper to adaptively determine a sufficient M. A sketch of the idea appears below.
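A minimal sketch of the ML-RP idea as summarized here: add random projections one at a time until a manifold-learning quality measure stabilizes. Using Isomap's residual variance as the stopping criterion is an assumption for illustration, as are the dataset and tolerance:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

def residual_variance(geo_dists, emb):
    """Isomap residual variance: 1 - R^2 between geodesic distances and
    Euclidean distances in the embedding."""
    d_geo = geo_dists[np.triu_indices_from(geo_dists, k=1)]
    r = np.corrcoef(d_geo, pdist(emb))[0, 1]
    return 1.0 - r ** 2

rng = np.random.default_rng(4)
X, _ = make_swiss_roll(n_samples=500, random_state=0)
N = X.shape[1]

# Start with one projection; add rows until the embedding quality
# stabilizes below a (hypothetical) tolerance.
M, tol = 1, 0.05
Phi = rng.standard_normal((M, N))
while True:
    iso = Isomap(n_neighbors=10, n_components=2)
    emb = iso.fit_transform(X @ Phi.T)
    res = residual_variance(iso.dist_matrix_, emb)
    if res < tol or M >= N:
        break
    Phi = np.vstack([Phi, rng.standard_normal((1, N))])  # one more row
    M += 1

print(M, res)   # projections used and final residual variance
```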
11. Random Projections for Manifold Learning (NIPS 2007)
12. Random Projections for Manifold Learning (NIPS 2007)
13. Random Projections of Smooth Manifolds (in Foundations of Computational Mathematics)
14. Random Projections of Smooth Manifolds (in Foundations of Computational Mathematics)
- Sketch of proof:
- Sample points from the manifold such that the geodesic distance from any point on the manifold to the nearest sampled point is less than some value. Also sample points from the tangent spaces of the manifold, ensuring that the distance from every tangent vector to the nearest sample is less than some threshold.
- Use the Johnson-Lindenstrauss (JL) lemma to ensure that the embedding of all of these sampled points preserves relative distances, as sketched below.
- Then use supporting theorems, together with the facts about how the points were sampled, to extend this distance preservation to all points on the manifold.
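A minimal sketch of the JL step: a random Gaussian projection preserves all pairwise distances among a finite point set up to a small multiplicative distortion. The point count and dimensions are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(5)
n, N, M = 200, 1000, 300   # illustrative sizes; M = O(log n / eps^2)

X = rng.standard_normal((n, N))                 # the finite sample set
Phi = rng.standard_normal((N, M)) / np.sqrt(M)  # random JL projection

# Per-pair distance distortion under the random projection.
ratios = pdist(X @ Phi) / pdist(X)
print(ratios.min(), ratios.max())               # both close to 1
```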