1
SVD and PCA
  • COS 323

2
Dimensionality Reduction
  • Map points in high-dimensional space to a lower number of dimensions
  • Preserve structure: pairwise distances, etc.
  • Useful for further processing
  • Less computation, fewer parameters
  • Easier to understand, visualize

3
PCA
  • Principal Components Analysis (PCA): approximating a high-dimensional data set with a lower-dimensional linear subspace

[Figure: data points shown with the original axes]
4
SVD and PCA
  • Form the data matrix A with the points as rows
  • Subtract out the mean of each column (centering)
  • Take the SVD: A = U W V^T
  • Columns of V_k are the principal components
  • The value of w_i gives the importance of each component (see the sketch below)
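
A minimal sketch of these steps in NumPy (the function and variable names are illustrative, not from the slides):

```python
import numpy as np

def pca(points, k):
    """PCA via SVD: rows of `points` are data points."""
    A = points - points.mean(axis=0)      # subtract out the mean (center)
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T, w[:k]                # columns of V_k, importances w_i

# Example: 100 points in 3-D whose variance lies mostly in two directions
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([3.0, 1.0, 0.1])
V_k, w = pca(X, k=2)                      # w[0] >= w[1] >> the discarded w_3
```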

5
PCA on Faces: Eigenfaces
[Figure: the average face, the first principal component, and other components; for all except the average, gray = 0, white > 0, black < 0]
6
Uses of PCA
  • Compression: each new image can be approximated by a projection onto the first few principal components
  • Recognition: for a new image, project onto the first few principal components, then match feature vectors (sketched below)
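
A minimal sketch of both uses, with a random orthonormal basis standing in for real principal components (all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
V_k, _ = np.linalg.qr(rng.normal(size=(64, 5)))  # stand-in for 5 components
mean = rng.normal(size=64)                       # stand-in for the average image
image = rng.normal(size=64)                      # a new image as a vector

coeffs = (image - mean) @ V_k   # recognition: match these 5-number feature vectors
approx = mean + V_k @ coeffs    # compression: image approximated by 5 coefficients
```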

7
PCA for Relighting
  • Images under different illumination

[Matusik & McMillan]
8
PCA for Relighting
  • Images under different illumination
  • Most variation is captured by the first 5 principal components: can re-illuminate by combining only a few images

[Matusik & McMillan]
9
PCA for DNA Microarrays
  • Measure gene activation under different conditions

[Troyanskaya]
11
PCA for DNA Microarrays
  • PCA shows patterns of correlated activation
  • Genes with same pattern might have similar
    function

[Wall et al.]
13
Multidimensional Scaling
  • In some experiments, can only measure similarity or dissimilarity
  • e.g., is the response to stimuli similar or different?
  • Frequent in psychophysical experiments, preference surveys, etc.
  • Want to recover absolute positions in k-dimensional space

14
Multidimensional Scaling
  • Example: given pairwise distances between cities
  • Want to recover the locations

[Pellacini et al.]
15
Euclidean MDS
  • Formally, let's say we have an n × n matrix D consisting of squared distances d_ij = ||x_i − x_j||^2
  • Want to recover the n × d matrix X of positions in d-dimensional space

16
Euclidean MDS
  • Observe that d_ij^2 = ||x_i||^2 + ||x_j||^2 − 2 x_i · x_j
  • Strategy: convert the matrix D of the d_ij^2 into the matrix B of inner products x_i · x_j
  • Centered distance matrix: B = X X^T

17
Euclidean MDS
  • Centering:
  • Sum of row i of D = sum of column i of D = Σ_j d_ij^2 = n ||x_i||^2 + Σ_j ||x_j||^2 − 2 x_i · Σ_j x_j
  • Sum of all entries in D = Σ_i Σ_j d_ij^2 = 2n Σ_k ||x_k||^2 − 2 ||Σ_k x_k||^2

18
Euclidean MDS
  • Choose Σ_i x_i = 0
  • The solution will have its average position at the origin
  • Then the sums above simplify, giving 2 x_i · x_j = (1/n) Σ_k d_ik^2 + (1/n) Σ_k d_kj^2 − (1/n^2) Σ_k Σ_l d_kl^2 − d_ij^2
  • So, to get B (see the sketch below):
  • compute the row (or column) sums
  • compute the sum of sums
  • apply the formula above to each entry of D
  • divide by 2
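
A minimal NumPy sketch of this double-centering step, assuming D already holds squared distances (the function name is illustrative):

```python
import numpy as np

def squared_dists_to_gram(D):
    """Convert an n x n matrix of squared distances into B = X X^T."""
    n = D.shape[0]
    row = D.sum(axis=1) / n      # row sums / n (= column sums / n, D symmetric)
    total = D.sum() / n**2       # sum of sums / n^2
    return (row[:, None] + row[None, :] - total - D) / 2

# Check against points that are already centered (sum of x_i = 0)
X = np.array([[0.0, 1.0], [1.0, -1.0], [-1.0, 0.0]])
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
assert np.allclose(squared_dists_to_gram(D), X @ X.T)
```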

19
Euclidean MDS
  • Now we have B; want to factor it into X X^T
  • If X is n × d, B must have rank d
  • Take the SVD, set all but the top d singular values to 0
  • Eliminate the corresponding columns of U and V
  • Have B′ = U′ W′ V′^T
  • B is square and symmetric, so U = V
  • Take X = U′ times the square root of W′ (see the sketch below)
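
A sketch of this factorization; since B is symmetric, an eigendecomposition gives the same U and W as the SVD (the function name is illustrative):

```python
import numpy as np

def gram_to_points(B, d):
    """Factor B ~ X X^T by keeping the top d eigenpairs of the symmetric B."""
    w, U = np.linalg.eigh(B)                 # eigenvalues in ascending order
    w, U = w[::-1][:d], U[:, ::-1][:, :d]    # keep the top d
    return U * np.sqrt(np.maximum(w, 0.0))   # X = U' sqrt(W')

# Positions are recovered only up to rotation/reflection about the origin:
X = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
X -= X.mean(axis=0)
Y = gram_to_points(X @ X.T, d=2)
assert np.allclose(Y @ Y.T, X @ X.T)         # same inner products and distances
```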

20
Multidimensional Scaling
  • Result (d = 2):

[Pellacini et al.]
21
Multidimensional Scaling
  • Caveat: the actual axes and center are not necessarily what you want (you can't recover them!)
  • This is classical or Euclidean MDS [Torgerson 52]
  • The distance matrix is assumed to contain actual Euclidean distances
  • More sophisticated versions are available:
  • Non-metric MDS: not Euclidean distances, sometimes just inequalities
  • Weighted MDS: accounts for observer bias

22
Computation
  • SVD is very closely related to eigenvalue/eigenvector computation
  • The columns of V are the eigenvectors of A^T A, and the w_i^2 are its eigenvalues (checked below)
  • In practice, a similar class of methods is used, but they operate on A directly
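
A quick NumPy check of this relationship (illustrative, not from the slides):

```python
import numpy as np

A = np.random.default_rng(2).normal(size=(6, 4))
U, w, Vt = np.linalg.svd(A, full_matrices=False)
evals, evecs = np.linalg.eigh(A.T @ A)    # ascending eigenvalues of A^T A

# Eigenvalues of A^T A are the squared singular values of A
assert np.allclose(np.sort(w**2), evals)
# Eigenvectors match V up to sign (assuming distinct eigenvalues, generic here)
assert np.allclose(np.abs(evecs[:, ::-1].T @ Vt.T), np.eye(4))
```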

23
Methods for Eigenvalue Computation
  • Simplest: the power method (sketched below)
  • Begin with an arbitrary vector x_0
  • Compute x_{i+1} = A x_i
  • Normalize
  • Iterate
  • Converges to the eigenvector with the maximum-magnitude eigenvalue!
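
A minimal sketch of power iteration (names are illustrative):

```python
import numpy as np

def power_method(A, iters=100):
    """Power iteration: converges to the eigenvector of the largest-|lambda| eigenvalue."""
    x = np.random.default_rng(0).normal(size=A.shape[0])  # arbitrary x_0
    for _ in range(iters):
        x = A @ x                  # x_{i+1} = A x_i
        x /= np.linalg.norm(x)     # normalize
    return x, x @ A @ x            # eigenvector and its Rayleigh-quotient eigenvalue

v, lam = power_method(np.array([[2.0, 1.0], [1.0, 3.0]]))   # lam -> ~3.618
```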

24
Power Method
  • Write x_0 in the eigenvector basis: x_0 = Σ_i c_i e_i, so A^k x_0 = Σ_i c_i λ_i^k e_i
  • As this is repeated, the (normalized) coefficient of e_1 approaches 1, since |λ_1| > |λ_i| for every other i

25
Power Method II
  • To find the smallest eigenvalue, use a similar process (sketched below)
  • Begin with an arbitrary vector x_0
  • Solve A x_{i+1} = x_i
  • Normalize
  • Iterate
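
A matching sketch of inverse iteration; a real implementation would factor A once rather than re-solving from scratch each step (names are illustrative):

```python
import numpy as np

def inverse_power_method(A, iters=100):
    """Inverse iteration: converges to the eigenvector of the smallest-|lambda| eigenvalue."""
    x = np.random.default_rng(0).normal(size=A.shape[0])  # arbitrary x_0
    for _ in range(iters):
        x = np.linalg.solve(A, x)  # solve A x_{i+1} = x_i (never form A^{-1})
        x /= np.linalg.norm(x)     # normalize
    return x, x @ A @ x

v, lam = inverse_power_method(np.array([[2.0, 1.0], [1.0, 3.0]]))  # lam -> ~1.382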

26
Deflation
  • Once we have found an eigenvector e_1 with eigenvalue λ_1, we can compute the matrix A − λ_1 e_1 e_1^T
  • This makes the eigenvalue of e_1 equal to 0, but (for symmetric A, whose eigenvectors are orthogonal) has no effect on the other eigenvectors/values
  • In principle, we could find all eigenvectors this way (see the sketch below)
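
A minimal sketch of one deflation step, assuming a symmetric A and a unit-length e_1 (names are illustrative):

```python
import numpy as np

def deflate(A, e1, lam1):
    """Subtract lam1 * e1 e1^T, zeroing e1's eigenvalue in a symmetric A."""
    return A - lam1 * np.outer(e1, e1)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
w, V = np.linalg.eigh(A)             # reference eigenpairs, ascending order
A2 = deflate(A, V[:, -1], w[-1])     # remove the largest eigenpair
# A2 keeps the other eigenvalue and gains a zero in place of the removed one
assert np.allclose(np.linalg.eigh(A2)[0], np.sort([0.0, w[0]]))
```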

27
Other Eigenvector Computation Methods
  • The power method is OK for a few eigenvalues, but slow and sensitive to roundoff error
  • Modern methods for eigendecomposition/SVD use a sequence of similarity transformations to reduce the matrix to diagonal form, then read off the eigenvalues