
1
3D Geometry for Computer Graphics
  • Class 7

2
The plan today
  • Review of SVD basics
  • Connection to PCA
  • Applications
  • Eigenfaces
  • Animation compression

3
The geometry of linear transformations
  • A linear transform A always takes hyper-spheres
    to hyper-ellipses.

4
The geometry of linear transformations
  • Thus, one good way to understand what A does is
    to find which vectors are mapped to the main
    axes of the ellipsoid.

5
Spectral decomposition
  • If we are lucky: A = VΛVᵀ, with V orthogonal and Λ diagonal
  • The eigenvectors of A are the axes of the ellipse

6
Spectral decomposition
  • The standard basis

Rotation: Vᵀ
Au = VΛVᵀu
7
Spectral decomposition
  • Scaling: x″ = λ₁x′, y″ = λ₂y′

Scale: Λ
Au = VΛVᵀu
8
Spectral decomposition
  • Rotate back

Rotate: V
Au = VΛVᵀu
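To make the three steps concrete, here is a minimal NumPy sketch (an illustration, not part of the original deck): a symmetric matrix applied as rotate, scale, rotate back.

```python
import numpy as np

# A symmetric matrix has a spectral decomposition A = V @ diag(lam) @ V.T,
# with V orthogonal (the eigenvectors) and lam the eigenvalues.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eigh(A)

u = np.array([1.0, 0.5])        # an arbitrary input vector

step1 = V.T @ u                 # 1. rotation V^T
step2 = lam * step1             # 2. scaling by the eigenvalues
step3 = V @ step2               # 3. rotation V

assert np.allclose(step3, A @ u)   # identical to applying A directly
```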
9
General linear transformations: SVD
  • In general A will also contain rotations, not
    just scales

[Figure: A maps the unit circle (radii 1, 1) to an ellipse with semi-axes σ₁, σ₂]
10
SVD
  • The standard basis

Rotation: Vᵀ
Au = UΣVᵀu
11
SVD
  • Scaling: x″ = σ₁x′, y″ = σ₂y′

Scale: Σ
Au = UΣVᵀu
12
SVD
  • Rotate again

Rotate: U
Au = UΣVᵀu
13
SVD more formally
  • SVD exists for any matrix
  • Formal definition:
  • For square matrices A ∈ ℝ^(n×n), there exist orthogonal matrices U, V ∈ ℝ^(n×n) and a diagonal matrix Σ, such that all the diagonal values σᵢ of Σ are non-negative and
    A = UΣVᵀ


14
Reduced SVD
  • For rectangular matrices, we have two forms of
    SVD. The reduced SVD looks like this
  • The columns of U are orthonormal
  • Cheaper form for computation and storage

A (M×n) = U (M×n) Σ (n×n) Vᵀ (n×n)

15
Full SVD
  • We can complete U to a full orthogonal matrix and
    pad Σ with zeros accordingly

A (M×n) = U (M×M) Σ (M×n) Vᵀ (n×n)
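In NumPy both forms come from np.linalg.svd via its full_matrices flag; a quick sketch:

```python
import numpy as np

A = np.random.randn(8, 3)   # a rectangular M-by-n matrix, M > n

# Reduced SVD: U is M-by-n, Sigma holds n singular values, Vt is n-by-n.
U_r, s, Vt = np.linalg.svd(A, full_matrices=False)
print(U_r.shape, s.shape, Vt.shape)   # (8, 3) (3,) (3, 3)

# Full SVD: U is completed to an M-by-M orthogonal matrix,
# and Sigma is padded with zero rows to M-by-n.
U_f, s_f, Vt_f = np.linalg.svd(A, full_matrices=True)
print(U_f.shape)                      # (8, 8)

# Both reconstruct A exactly.
assert np.allclose(U_r @ np.diag(s) @ Vt, A)
```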

16
SVD is the workhorse of linear algebra
  • There are numerical algorithms to compute the SVD.
    Once you have it, you get many things:
  • Matrix inverse → can solve square linear systems
  • Numerical rank of a matrix
  • Can solve least-squares systems
  • PCA
  • Many more
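For example, least squares solved directly from the SVD (a minimal sketch, checked against np.linalg.lstsq):

```python
import numpy as np

# Overdetermined system: find x minimizing ||Ax - b||.
A = np.random.randn(10, 3)
b = np.random.randn(10)

# Pseudoinverse via the SVD: x = V @ diag(1/s) @ U.T @ b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```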

17
SVD is the workhorse of linear algebra
  • Must remember: SVD is expensive!
  • For an n×n matrix, SVD costs O(n³).
  • For sparse matrices it can sometimes be cheaper

18
Shape matching
  • We have two objects in correspondence
  • Want to find the rigid transformation that aligns
    them

19
Shape matching
  • When the objects are aligned, the lengths of the
    connecting lines are small.

20
Shape matching formalization
  • Align two point sets {pᵢ} and {qᵢ}
  • Find a translation vector t and rotation matrix R
    that minimize Σᵢ ‖(Rpᵢ + t) − qᵢ‖²

21
Summary of rigid alignment
  • Translate the input points so both centroids are at the origin: pᵢ′ = pᵢ − p̄, qᵢ′ = qᵢ − q̄
  • Compute the covariance matrix H = Σᵢ pᵢ′ qᵢ′ᵀ
  • Compute the SVD of H: H = UΣVᵀ
  • The optimal rotation is R = VUᵀ
  • The translation vector is t = q̄ − Rp̄
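A minimal NumPy sketch of the whole procedure (assumptions: the corresponding points are the rows of n×3 arrays P and Q; the determinant check, standard in practice, guards against a reflection):

```python
import numpy as np

def rigid_align(P, Q):
    """R, t minimizing sum_i ||R p_i + t - q_i||^2; P, Q are n-by-3 arrays."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_bar, Q - q_bar      # move both centroids to the origin
    H = Pc.T @ Qc                      # 3-by-3 covariance matrix
    U, S, Vt = np.linalg.svd(H)        # H = U S V^T
    R = Vt.T @ U.T                     # optimal rotation R = V U^T
    if np.linalg.det(R) < 0:           # reflection: flip the last axis
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q_bar - R @ p_bar              # optimal translation
    return R, t
```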

22
SVD for animation compression
Chicken animation
23
3D animations
  • Each frame is a 3D model (mesh)
  • Connectivity: mesh faces

24
3D animations
  • Each frame is a 3D model (mesh)
  • Connectivity: mesh faces
  • Geometry: 3D coordinates of the vertices

25
3D animations
  • Connectivity is usually constant (at least on
    large segments of the animation)
  • The geometry changes in each frame → a vast amount
    of data, a huge file size!

13 seconds, 3000 vertices/frame, 26 MB
26
Animation compression by dimensionality reduction
  • The geometry of each frame is a vector in the space ℝ^(3N) (N vertices)

[Figure: the f frame vectors assembled into a 3N × f matrix, one column per frame]
27
Animation compression by dimensionality reduction
  • Find a few vectors of ℝ^(3N) that will best
    represent our frame vectors!

[Figure: reduced SVD of the frame matrix: U (3N×f) Σ (f×f) Vᵀ (f×f)]

28
Animation compression by dimensionality reduction
  • The first principal components are the important
    ones

[Figure: the first principal directions u₁, u₂, u₃]

29
Animation compression by dimensionality reduction
  • Approximate each frame by a linear combination of
    the first principal components
  • The more components we use, the better the
    approximation
  • Usually, the number of components needed is much
    smaller than f.

[Figure: a frame approximated as α₁u₁ + α₂u₂ + α₃u₃]
30
Animation compression by dimensionality reduction
  • Compressed representation:
  • The chosen principal component vectors uᵢ
  • Coefficients αᵢ for each frame

Animation with only 2 principal components
Animation with 20 out of 400 principal components
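A minimal NumPy sketch of the compression (assumptions: the frames are already stacked into a 3N×f matrix A, and we keep k principal components):

```python
import numpy as np

def compress_animation(A, k):
    """A: 3N-by-f matrix, one column per frame; keep k principal components."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    basis = U[:, :k]              # the k chosen principal vectors (3N x k)
    coeffs = basis.T @ A          # the k coefficients of every frame (k x f)
    return basis, coeffs          # store these instead of A

def decompress_animation(basis, coeffs):
    """Approximate frames: each column is a linear combination of the basis."""
    return basis @ coeffs         # 3N x f

# Storage drops from 3N*f floats to k*(3N + f).
```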
31
Eigenfaces
  • The same principal component analysis can be applied
    to images

32
Eigenfaces
  • Each image is a vector in ℝ^(250×300)
  • Want to find the principal axes: the vectors that
    best represent the input database of images

33
Reconstruction with a few vectors
  • Represent each image by the first few (n)
    principal components
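A sketch of this reconstruction, assuming the database images are flattened into the columns of a d×m matrix X and centered on the mean face, as eigenfaces methods usually are:

```python
import numpy as np

def eigenfaces(X, n):
    """X: d-by-m matrix, one flattened face image per column; keep n axes."""
    mean_face = X.mean(axis=1)
    U, s, Vt = np.linalg.svd(X - mean_face[:, None], full_matrices=False)
    return U[:, :n], mean_face           # principal axes + mean face

def reconstruct(w, basis, mean_face):
    """Project image w (length-d vector) onto the eigenfaces and map back."""
    coeffs = basis.T @ (w - mean_face)   # the n PCA coefficients of w
    return mean_face + basis @ coeffs
```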

34
Face recognition
  • Given a new image of a face, w ∈ ℝ^(250×300)
  • Represent w using the first n PCA vectors
  • Now find an image in the database whose
    representation in the PCA basis is the closest

[Figure: among the database images, the best match w′ is the one for which the angle between w and w′ is smallest]
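Continuing the sketch above (the helper names are illustrative), matching by the smallest angle amounts to the largest cosine similarity between PCA representations:

```python
import numpy as np

def recognize(w, database, basis, mean_face):
    """Index of the database column whose PCA code has the smallest angle to w's."""
    target = basis.T @ (w - mean_face)                  # PCA code of w
    codes = basis.T @ (database - mean_face[:, None])   # PCA codes of all images
    cos = (codes.T @ target) / (
        np.linalg.norm(codes, axis=0) * np.linalg.norm(target))
    return np.argmax(cos)               # smallest angle = largest cosine
```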
35
Non-linear dimensionality reduction
  • More sophisticated methods can discover
    non-linear structures in the face datasets

Isomap, Science, Dec. 2000
36
See you next time