Approximate Nearest Subspace Search with Applications to Pattern Recognition
  • Ronen Basri, Tal Hassner, Lihi Zelnik-Manor
  • Presented by Andrew Guillory and Ian Simon

The Problem
  • Given n linear subspaces S_i
  • And a query point q
  • Find the subspace S_i that minimizes dist(S_i, q) (a brute-force baseline sketch follows this list).
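
A minimal Python sketch of that brute-force baseline, assuming numpy and that each subspace S_i is given as a matrix B_i with orthonormal columns (the helper names are ours, not the paper's):

import numpy as np

def dist_to_subspace(q, B):
    # Distance from q to span(B): the norm of q minus its orthogonal projection.
    return np.linalg.norm(q - B @ (B.T @ q))

def nearest_subspace(q, bases):
    # Exhaustive O(n) scan over all subspaces; the paper aims to beat this.
    return min(range(len(bases)), key=lambda i: dist_to_subspace(q, bases[i]))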

Why?
  • The appearance variation of an object (e.g., under changing illumination) spans a low-dimensional subspace.
  • Fast queries on an object database.
  • Other reasons?

Approach
  • Solve by reduction to nearest neighbor search: map subspaces and queries to points, so that point-to-point distances reflect point-to-subspace distances.
  • The mapped points live in a higher-dimensional space.

(figure: schematic illustration, not the actual reduction)

Point-Subspace Distance
  • Use the squared distance.
  • The squared point-subspace distance can be represented as a dot product:
    dist^2(q, S) = ||Z^T q||^2 = q^T (Z Z^T) q = <Z Z^T, q q^T>,
    where Z is a d-by-(d-k) matrix whose orthonormal columns span the orthogonal complement of the k-dimensional subspace S (a verification sketch follows).
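
A minimal numpy sketch verifying the identity above (the dimensions and names are our illustration):

import numpy as np

d, k = 10, 3
rng = np.random.default_rng(0)

# Random orthonormal frame: B spans the subspace S, Z spans its complement.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
B, Z = Q[:, :k], Q[:, k:]

q = rng.standard_normal(d)

# Squared point-to-subspace distance: the part of q outside S.
dist2 = np.linalg.norm(Z.T @ q) ** 2

# The same value as a dot product of the flattened matrices Z Z^T and q q^T.
dot = np.dot((Z @ Z.T).ravel(), np.outer(q, q).ravel())

assert np.isclose(dist2, dot)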

The Reduction
  • Let u = vec(Z Z^T) for each database subspace, and v = -vec(q q^T) for the query.
  • Then ||u - v||^2 = ||Z Z^T||_F^2 + ||q q^T||_F^2 + 2 <Z Z^T, q q^T>.
  • Remember: dist^2(q, S) = <Z Z^T, q q^T>.

The Reduction
  ||u - v||^2 = ||Z Z^T||_F^2 + ||q q^T||_F^2 + 2 dist^2(q, S)
              = (d - k) + ||q||^4 + 2 dist^2(q, S)
  • ||q q^T||_F^2 = ||q||^4 is constant over the query.
  • Z^T Z = I, since Z is d-by-(d-k) with orthonormal columns, so ||Z Z^T||_F^2 = tr(Z Z^T Z Z^T) = tr(Z^T Z) = d - k, constant over the database.

The Reduction
  • For a query point q, nearest-neighbor search over the mapped points now returns the nearest subspace, up to the additive constant (d - k) + ||q||^4 (see the sketch below).
  • Can we decrease the additive constant?
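
A numpy sketch of the mapping as we read it from these slides (the sign convention u = vec(Z Z^T), v = -vec(q q^T) is our assumption):

import numpy as np

d, k, n = 8, 2, 50
rng = np.random.default_rng(1)

def complement_basis():
    # Orthonormal basis of the (d-k)-dim complement of a random k-dim subspace.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q[:, k:]

Zs = [complement_basis() for _ in range(n)]
q = rng.standard_normal(d)

us = np.stack([(Z @ Z.T).ravel() for Z in Zs])   # lifted database points
v = -np.outer(q, q).ravel()                      # lifted query

lifted = np.sum((us - v) ** 2, axis=1)
true = np.array([np.linalg.norm(Z.T @ q) ** 2 for Z in Zs])

# ||u - v||^2 = (d - k) + ||q||^4 + 2 dist^2(q, S), so the argmins agree.
const = (d - k) + np.linalg.norm(q) ** 4
assert np.allclose(lifted, const + 2 * true)
assert lifted.argmin() == true.argmin()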

Observation 1
  • All mapped data points lie on a hyperplane: the entries of vec(Z Z^T) at diagonal positions sum to tr(Z Z^T) = d - k for every subspace.
  • Let u <- u - p and v <- v - p for a fixed point p on that hyperplane.
  • Now the hyperplane contains the origin.

Observation 2
  • After projecting onto the hyperplane, all data points lie on a hypersphere.
  • Let v <- r v / ||v||, where r is the common norm of the mapped data points.
  • Now the query point lies on the hypersphere as well (a combined sketch of both observations follows).
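
A numpy sketch of both observations (the shift point p and the hyperplane normal are our reconstruction; the slides' exact formulas were lost):

import numpy as np

d, k, n = 8, 2, 50
rng = np.random.default_rng(2)

def complement_basis():
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q[:, k:]

Zs = [complement_basis() for _ in range(n)]
q = rng.standard_normal(d)
true = np.array([np.linalg.norm(Z.T @ q) ** 2 for Z in Zs])

us = np.stack([(Z @ Z.T).ravel() for Z in Zs])
v = -np.outer(q, q).ravel()

# Observation 1: every u satisfies <u, vec(I)> = d - k, a hyperplane. Shift
# both sides by a point p on it, then drop the query's off-plane component
# (that component adds the same constant to every distance).
p = ((d - k) / d) * np.eye(d).ravel()
us, v = us - p, v - p
normal = np.eye(d).ravel() / np.sqrt(d)
v -= (v @ normal) * normal

# Observation 2: the shifted data points share the norm r; rescale the
# projected query onto that hypersphere (positive scaling keeps the argmin).
r = np.sqrt((d - k) * k / d)
v *= r / np.linalg.norm(v)

assert np.allclose(np.linalg.norm(us, axis=1), r)
assert np.sum((us - v) ** 2, axis=1).argmin() == true.argmin()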

Reduction Geometry
  • What is happening?

Finally
  • The additive constant depends only on the dimensions of the points and the subspaces.
  • This applies to linear subspaces, all of the same dimension.

Extensions
  • Subspaces of different dimensions (e.g., lines and planes):
    not all mapped data points have the same norm; add an extra dimension to fix this (see the sketch below).
  • Affine subspaces:
    again, not all mapped data points have the same norm.
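
A numpy sketch of the extra-dimension trick (a standard norm-equalization; the details here are our assumption): append one coordinate so every mapped point ends up with the same norm, while the query gets a 0 there, leaving all dot products unchanged.

import numpy as np

rng = np.random.default_rng(3)
us = rng.standard_normal((20, 5)) * rng.uniform(0.5, 2.0, (20, 1))

norms2 = np.sum(us ** 2, axis=1)
M2 = norms2.max()
extra = np.sqrt(M2 - norms2)               # non-negative by choice of M2
lifted = np.hstack([us, extra[:, None]])   # all rows now have norm sqrt(M2)

assert np.allclose(np.sum(lifted ** 2, axis=1), M2)

q = rng.standard_normal(5)
q_lifted = np.append(q, 0.0)               # dot products with us unchanged
assert np.allclose(lifted @ q_lifted, us @ q)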

Approximate Nearest Neighbor Search
  • Find a point x with d(x, q) < (1 + ε) min_i d(x_i, q).
  • Tree-based approaches: KD-trees, metric/ball trees, cover trees.
  • Locality-sensitive hashing.
  • This paper uses multiple KD-trees with (different) random projections.

KD-Trees
  • Decompose space into axis-aligned rectangles.

(image from Dan Pelleg)
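
A minimal sketch (assuming scipy) of the strategy named two slides above: build several KD-trees, each on a different random projection of the data, query all of them, and re-rank the pooled candidates by true distance.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
n, d, d_proj, n_trees = 1000, 64, 16, 4

data = rng.standard_normal((n, d))
q = rng.standard_normal(d)

# One Gaussian random projection per tree.
projs = [rng.standard_normal((d, d_proj)) / np.sqrt(d_proj) for _ in range(n_trees)]
trees = [cKDTree(data @ P) for P in projs]

# Pool candidate indices from every tree, then re-rank exactly.
candidates = set()
for P, tree in zip(projs, trees):
    _, idx = tree.query(q @ P, k=10)
    candidates.update(idx.tolist())

best = min(candidates, key=lambda i: np.linalg.norm(data[i] - q))
print("approximate nearest neighbor:", best)
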
Random Projections
  • Multiply the data by a random matrix X, with X(i, j) drawn from N(0, 1).
  • Several different justifications:
    • Johnson-Lindenstrauss (data set small compared to the ambient dimensionality; see the sketch below)
    • compressed sensing (data set sparse in some linear basis)
    • RP-trees (data set with small doubling dimension)
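
A small numpy sketch of the Johnson-Lindenstrauss justification (the sizes are our illustration): pairwise distances of a small point set are roughly preserved under a Gaussian random projection.

import numpy as np

rng = np.random.default_rng(5)
n, d, d_proj = 50, 1000, 200

X = rng.standard_normal((n, d))
P = rng.standard_normal((d, d_proj)) / np.sqrt(d_proj)
Y = X @ P

def pdist2(A):
    # All pairwise squared distances via the Gram matrix.
    G = A @ A.T
    sq = np.diag(G)
    return sq[:, None] + sq[None, :] - 2 * G

off = ~np.eye(n, dtype=bool)
ratios = pdist2(Y)[off] / pdist2(X)[off]
print("distance ratio range:", ratios.min(), ratios.max())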

Results
  • Two goals:
    • show their method is fast
    • show nearest subspace search is useful
  • Four experiments:
    • synthetic experiments
    • image approximation
    • Yale Faces
    • Yale Patches

Image Reconstruction

(results figure)

Yale Faces

(results figure)

Questions / Issues
  • Should random projections be applied before or after the reduction?
  • Why does the effective distance error go down as the ambient dimensionality grows?
  • The reduction tends to place query points far from the points in the database. Are there better approximate nearest neighbor algorithms for this case?