Partial Shape Matching
Learn more at: https://www.cs.jhu.edu
1
Partial Shape Matching
2
  • Outline
  • Motivation
  • Sum of Squared Distances

3
Representation Theory
  • Motivation
  • We have seen a large number of different shape
    descriptors
  • Shape Distributions
  • Extended Gaussian Images
  • Shape Histograms
  • Gaussian EDT
  • Wavelets
  • Spherical Parameterizations
  • Spherical Extent Functions
  • Light Field Descriptors
  • Shock Graphs
  • Reeb Graphs

4
Representation Theory
  • Challenge
  • Partial shape matching problem
  • Given a part of a model S and a whole model M,
    determine whether the part is a subset of the
    whole: S ⊆ M.

(Figure: the part S and the whole model M)
5
Representation Theory
  • Difficulty
  • For whole object matching, we would associate a
    shape descriptor v_M to every model M and would
    define the measure of similarity between models M
    and N as the distance between their descriptors
  • D(M, N) = ||v_M − v_N||

6
Representation Theory
  • Difficulty
  • For partial object matching, we cannot use the
    same approach
  • Vector differences are symmetric but subset
    matching is not
  • If S ⊆ M, then we would like to have v_S ≠ v_M
    and yet D(S, M) = 0, which means that we cannot
    use difference norms to measure similarity.

7
Representation Theory
  • Motivation
  • We have seen a number of different ways for
    addressing the alignment problem
  • Center of Mass Normalization
  • Scale Normalization
  • PCA Alignment
  • Translation Invariance
  • Rotation Invariance

8
Representation Theory
  • Motivation
  • Most of these methods will give very different
    descriptors if only part of the model is given.
  • Center of mass, variance, and principal axes of a
    part of the model will not be the same as those
    of the whole.

9
Representation Theory
  • Motivation
  • Most of these methods will give very different
    descriptors if only part of the model is given.
  • Changing the values of a function will change the
    (non-constant) frequency distribution in
    non-trivial ways.

10
  • Outline
  • Motivation
  • Sum of Squared Distances

11
Representation Theory
  • Goal
  • Design a new paradigm for shape matching that
    associates a simple structure to each shape,
    M → v_M, such that if S ⊆ M, then
  • v_S ≠ v_M (unless S = M)
  • but D(S, M) = 0
  • That is, we would like to define a measure of
    similarity that answers the question: How close
    is S to being a subset of M?

12
Representation Theory
  • Key Idea
  • Instead of using the norm of the difference, use
    the dot product
  • Then, S is a subset of M if the descriptor of the
    query S is orthogonal to the descriptor of the
    target M.
  • To do this, we have to define different
    descriptors for a model depending on whether it
    is the target or the query.

13
Representation Theory
  • Implementation
  • For a model M, represent the model by two
    different 3D functions: its rasterization Raster_M
    and its Euclidean Distance Transform EDT_M

(Figure: the model M, its rasterization Raster_M, and its Euclidean Distance Transform EDT_M)
14
Representation Theory
  • Implementation
  • Then Raster_M is non-zero only on the boundary
    points of the model, and EDT_M is non-zero
    everywhere else. Consequently we have
    Raster_M(p) · EDT_M(p) = 0 at every point p, and
    hence ⟨Raster_M, EDT_M⟩ = 0.

(Figure: M, Raster_M, and EDT_M)
15
Representation Theory
  • Implementation
  • Moreover, if S ⊆ M, then we still have
    Raster_S(p) · EDT_M(p) = 0 at every point p, so
    that ⟨Raster_S, EDT_M⟩ = 0.
(Figure: the part S, its rasterization Raster_S, the whole model M, and EDT_M)
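
A minimal sketch (not from the slides) of the two descriptors on binary voxel
grids, using numpy and scipy; the grid size, the toy shapes, and the function
name edt_of are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def edt_of(raster):
        # Distance from every voxel to the nearest occupied voxel of the model.
        # distance_transform_edt measures distance to the nearest zero entry,
        # so we pass the complement of the rasterization.
        return distance_transform_edt(raster == 0)

    # Toy 32^3 grid: M is the boundary shell of a box, S is one face of that shell.
    n = 32
    raster_m = np.zeros((n, n, n))
    raster_m[8:24, 8:24, 8:24] = 1
    raster_m[9:23, 9:23, 9:23] = 0   # hollow the box, keeping only its boundary
    raster_s = np.zeros((n, n, n))
    raster_s[8, 8:24, 8:24] = 1      # one face of the shell, so S is a subset of M

    edt_m = edt_of(raster_m)
    print(np.sum(raster_m * edt_m))  # <Raster_M, EDT_M> = 0
    print(np.sum(raster_s * edt_m))  # <Raster_S, EDT_M> = 0, since S lies on M
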
16
Representation Theory
  • What is the value of D(S,M)?

17
Representation Theory
  • What is the value of D(S,M)?

18
Representation Theory
  • What is the value of D(S,M)?
  • Since Raster_S is equal to 1 for points that lie
    on S and equal to 0 everywhere else:
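
A plausible reconstruction of the derivation implied here (an assumption,
using the definitions above: Raster_S is the 0/1 indicator of S and EDT_M(p)
is the distance from p to the nearest point of M):

    \begin{aligned}
    D(S,M) &= \langle \mathrm{Raster}_S,\ \mathrm{EDT}_M^2 \rangle \\
           &= \sum_{p} \mathrm{Raster}_S(p)\, d(p,M)^2 \\
           &= \sum_{p \in S} d(p,M)^2
    \end{aligned}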

19
Representation Theory
  • What is the value of D(S,M)?
  • So the distance between S and M is equal to the
    sum of squared distances from points on S to the
    nearest points on M.

(Figure: the part S and the whole model M)
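
A sketch of this computation (assuming the same binary voxel-grid setup as in
the earlier snippet, with distances measured in voxel units; the function name
is illustrative):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def partial_match_distance(raster_s, raster_m):
        # Precompute the EDT of the target: distance from each voxel to the
        # nearest occupied voxel of M (zero on M itself).
        edt_m = distance_transform_edt(raster_m == 0)
        # D(S, M): sum of squared distances from the voxels of S to their
        # nearest points on M.  It is zero exactly when S lies entirely on M.
        return np.sum(edt_m[raster_s > 0] ** 2)
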
20
Representation Theory
  • What is the value of D(S,M)?
  • Note that if we rasterize the models into an
    n×n×n voxel grid, then a brute force computation
    would compute the sum of the distances for each
    of the O(n²) points on the query by testing
    against each of the O(n²) points on the target
    for the minimum distance, giving a total running
    time of O(n⁴). By pre-computing the EDT, we
    reduce the computation to O(n²) operations.
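
For comparison, a brute-force sketch of the same quantity (illustrative only):
every occupied voxel of the query is tested against every occupied voxel of
the target, the O(n⁴) computation described above, whereas the EDT-based
version in the previous snippet only reads precomputed distances at the
query's voxels.

    import numpy as np

    def ssd_brute_force(raster_s, raster_m):
        pts_s = np.argwhere(raster_s)   # O(n^2) occupied voxels of the query
        pts_m = np.argwhere(raster_m)   # O(n^2) occupied voxels of the target
        # Pairwise squared distances, then the minimum over the target for
        # each query point: O(n^4) distance evaluations in total.
        d2 = ((pts_s[:, None, :] - pts_m[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).sum()
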

21
Representation Theory
  • Advantages
  • Model similarity is defined in terms of the
    dot-product
  • We can still use SVD for efficiency and
    compression (since rotations do not change the
    dot product)
  • We can still use fast correlation methods
    (translation, rotation, axial flip) but now we
    want to find the transformation minimizing the
    correlation.

22
Representation Theory
  • Advantages
  • We can use a symmetric version of this for whole
    object matching.
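
The slide does not spell out the symmetric version; one natural construction
(an assumption, not necessarily the intended one) adds the two one-sided sums
of squared distances:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def one_sided(raster_q, raster_t):
        # Sum of squared distances from the query's voxels to the target.
        return np.sum(distance_transform_edt(raster_t == 0)[raster_q > 0] ** 2)

    def symmetric_distance(raster_a, raster_b):
        # Assumed symmetric variant: penalizes points of A far from B and
        # points of B far from A, so it vanishes only when the two
        # rasterizations coincide.
        return one_sided(raster_a, raster_b) + one_sided(raster_b, raster_a)
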

23
Representation Theory
  • Advantages
  • We can perform importance matching by assigning a
    value larger than 1 to sub-regions of the
    rasterization.
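
One way to realize this weighting (the weights and the highlighted region
below are purely illustrative):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def weighted_match_distance(weighted_raster_s, raster_m):
        # weighted_raster_s is the query rasterization with values larger
        # than 1 on the sub-regions whose mismatch should count more.
        edt_m = distance_transform_edt(raster_m == 0)
        return np.sum(weighted_raster_s * edt_m ** 2)

    # Example: double the importance of the lower half of the query volume.
    # weighted_raster_s = raster_s.copy()
    # weighted_raster_s[: raster_s.shape[0] // 2] *= 2.0
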

24
Representation Theory
  • Disadvantage
  • Aside from using fast Fourier /
    Spherical-Harmonic / Wigner-D transforms, we
    still have no good way to address the alignment
    problem.