Title: Image Classification using Sparse Coding: Advanced Topics
Part 3: Image Classification using Sparse Coding: Advanced Topics
Kai Yu, Dept. of Media Analytics, NEC Laboratories America
Andrew Ng, Computer Science Dept., Stanford University
Outline of Part 3
- Why can sparse coding learn good features?
  - Intuition, topic model view, and geometric view
- A theoretical framework: local coordinate coding
- Two practical coding methods
- Recent advances in sparse coding for image classification
Intuition: why does sparse coding help classification?
Figure from http://www.dtreg.com/svm.htm
- The coding is a nonlinear feature mapping.
- It represents the data in a higher-dimensional space.
- Sparsity makes prominent patterns more distinctive.
A topic model view of sparse coding
- Each basis is a direction, or a topic.
- Sparsity: each datum is a linear combination of only a few bases.
- Applicable to image denoising, inpainting, and super-resolution.
A geometric view of sparse coding
[Figure: data manifold with anchor points]
- Each basis is somewhat like a pseudo data point, an anchor point.
- Sparsity: each datum is a sparse combination of neighboring anchors.
- The coding scheme explores the manifold structure of the data.
MNIST experiment: classification using sparse coding
- 60K training images, 10K test images
- Dictionary size K = 512
- Linear SVM on the sparse codes
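A minimal sketch of this pipeline, assuming scikit-learn (the hyperparameters and the subsampling for dictionary learning are illustrative, not the exact setup of this experiment):

    import numpy as np
    from sklearn.datasets import fetch_openml
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.svm import LinearSVC

    X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
    X = X / 255.0
    Xtr, ytr, Xte, yte = X[:60000], y[:60000], X[60000:], y[60000:]

    # Learn a K = 512 dictionary by unsupervised sparse coding; fitting on a
    # subset keeps this sketch tractable. transform() then produces the
    # lambda-regularized (here: transform_alpha) sparse codes.
    coder = MiniBatchDictionaryLearning(n_components=512, alpha=0.05,
                                        transform_algorithm='lasso_lars',
                                        transform_alpha=0.05).fit(Xtr[:10000])
    Ztr, Zte = coder.transform(Xtr), coder.transform(Xte)  # slow but simple

    # A linear SVM on the sparse codes.
    clf = LinearSVC().fit(Ztr, ytr)
    print('test accuracy:', clf.score(Zte, yte))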
MNIST experiment: lambda = 0.0005
Each basis is like a part or direction.
MNIST experiment: lambda = 0.005
Again, each basis is like a part or direction.
MNIST experiment: lambda = 0.05
Now, each basis is more like a digit!
MNIST experiment: lambda = 0.5
Like clustering now!
Geometric view of sparse coding
[Figure: learned bases under the three settings above, with classification errors 4.54, 3.75, and 2.64]
- When sparse coding achieves the best classification accuracy, the learned bases look like digits: each basis has a clear local class association.
- Implication: exploring the data geometry may be useful for classification.
Distribution of coefficients (MNIST)
Neighboring bases tend to get nonzero coefficients.
Distribution of coefficients (SIFT, Caltech-101)
Similar observation here!
Recap: two different views of sparse coding
- View 1
  - Discovers topic components
  - Each basis is a direction
  - Sparsity: each datum is a linear combination of several bases
  - Related to topic models
- View 2
  - Geometric structure of the data manifold
  - Each basis is an anchor point
  - Sparsity: each datum is a linear combination of neighboring anchors
  - Somewhat like a soft VQ (link to BoW)
- Either view can be valid for sparse coding under certain circumstances.
- View 2 seems to be helpful for sensory data classification.
Outline of Part 3
- Why can sparse coding learn good features?
  - Intuition, topic model view, and geometric view
- A theoretical framework: local coordinate coding
- Two practical coding methods
- Recent advances in sparse coding for image classification
Key theoretical question
- Why can unsupervised feature learning via sparse coding help classification?
The image classification setting for analysis
Implication: learning an image classifier is a matter of learning nonlinear functions on patches.
Illustration: nonlinear learning via local coding
[Figure: data points and bases]
How to learn a nonlinear function?
Step 1: learn the dictionary from unlabeled data.
How to learn a nonlinear function?
Step 2: use the dictionary to encode the data.
How to learn a nonlinear function?
Step 3: estimate the parameters of a linear model on the sparse codes of the data.
- Nonlinear local learning via learning a global linear function; spelled out below.
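To make this concrete (the notation here is mine, matching the LCC slide that follows): the learned predictor is linear in the code but nonlinear in the input,
\[
f(x) \;\approx\; \hat{w}^\top \gamma(x) \;=\; \sum_{v \in C} \hat{w}_v \, \gamma_v(x),
\]
where $C$ is the learned dictionary, $\gamma(x)$ is the sparse code of $x$, and $\hat{w}$ is estimated by any linear method (e.g., a linear SVM) on the coded training data.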
Local Coordinate Coding (LCC): connecting coding to nonlinear function learning (Yu et al., NIPS 2009)
If f(x) is (alpha, beta)-Lipschitz smooth, then for any coding $(\gamma, C)$,
\[
\underbrace{\Big| f(x) - \sum_{v \in C} \gamma_v(x) f(v) \Big|}_{\text{function approximation error}}
\;\le\;
\underbrace{\alpha \, \big\| x - \bar{\gamma}(x) \big\|}_{\text{coding error}}
\;+\;
\underbrace{\beta \sum_{v \in C} |\gamma_v(x)| \, \big\| v - \bar{\gamma}(x) \big\|^2}_{\text{locality term}}
\]
where $\bar{\gamma}(x) = \sum_{v \in C} \gamma_v(x)\, v$ is the reconstruction of $x$ from its code.
The key message: a good coding scheme should (1) have a small coding error, and (2) also be sufficiently local.
Outline of Part 3
- Why can sparse coding learn good features?
  - Intuition, topic model view, and geometric view
- A theoretical framework: local coordinate coding
- Two practical coding methods
- Recent advances in sparse coding for image classification
Applications of the LCC theory
- Fast implementation with a large dictionary (Wang et al., CVPR 2010)
- A simple geometric way to improve BoW (Zhou et al., ECCV 2010)
Applications of the LCC theory
- Fast implementation with a large dictionary
- A simple geometric way to improve BoW
The larger the dictionary, the higher the accuracy, but also the higher the computational cost (Yu et al., NIPS 2009; Yang et al., CVPR 2009)
The same observation holds for Caltech-256, PASCAL, and ImageNet.
Locality-constrained linear coding (LLC): a fast implementation of LCC (Wang et al., CVPR 2010)
- Dictionary learning: k-means (or hierarchical k-means)
- Coding for x:
  - Step 1 (ensure locality): find the K nearest bases.
  - Step 2 (ensure low coding error): reconstruct x from those bases, as in the sketch below.
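A minimal NumPy sketch of this two-step coding (an illustration of the idea above, not the authors' released code; the sum-to-one constraint follows the LCC/LLC formulation):

    import numpy as np

    def llc_code(x, B, K=5):
        """Approximate LLC coding of one descriptor.
        x: (d,) descriptor; B: (M, d) dictionary of M bases; K: neighborhood size."""
        # Step 1: ensure locality -- find the K nearest bases to x.
        dists = np.linalg.norm(B - x, axis=1)
        idx = np.argsort(dists)[:K]
        Bk = B[idx]                          # (K, d) local bases
        # Step 2: ensure low coding error -- least squares on the local bases,
        # with the coefficients constrained to sum to one (shift invariance).
        z = Bk - x                           # shift the local bases to the origin
        C = z @ z.T                          # (K, K) local covariance
        C += 1e-4 * np.trace(C) * np.eye(K)  # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(K))
        w /= w.sum()                         # enforce the sum-to-one constraint
        code = np.zeros(len(B))
        code[idx] = w                        # sparse: nonzero only on the K neighbors
        return code

The per-descriptor codes of an image are then pooled (e.g., by max-pooling) into the image-level representation.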
Competitive in accuracy, cheap in computation
[Results table: LLC is comparable with sparse coding in accuracy, and in some settings significantly better] (Wang et al., CVPR 2010)
This is one of the two major algorithms applied by the NEC-UIUC team to achieve the No. 1 position in the ImageNet challenge 2010!
Applications of the LCC theory
- Fast implementation with a large dictionary
- A simple geometric way to improve BoW
Interpret BoW + linear classifier
Super-vector coding: a simple geometric way to improve BoW (VQ) (Zhou et al., ECCV 2010)
Super-vector coding: a simple geometric way to improve BoW (VQ)
If f(x) is beta-Lipschitz smooth and v(x) denotes the codeword nearest to x, then the local linear approximation of f at v(x) satisfies
\[
\underbrace{\big| f(x) - f(v(x)) - \nabla f(v(x))^\top (x - v(x)) \big|}_{\text{function approximation error}}
\;\le\;
\beta \, \underbrace{\| x - v(x) \|^2}_{\text{(squared) quantization error}}
\]
Super-vector coding: learning a nonlinear function via a global linear model
Let $\gamma(x)$ be the VQ coding of x, i.e., the one-of-K indicator of its nearest codeword v(x). The super-vector code stacks, for each codeword v, the block $[\,s\,\gamma_v(x),\;\gamma_v(x)(x - v)^\top\,]$, so a single global linear model on this code realizes a separate local linear approximation of f around each codeword.
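A minimal NumPy sketch of this coding step (my illustration of the construction above; the balancing constant s is a free parameter of the method):

    import numpy as np

    def super_vector_code(x, V, s=1.0):
        """Super-vector coding of one descriptor.
        x: (d,) descriptor; V: (K, d) codebook from k-means; s: balancing constant."""
        K, d = V.shape
        k = np.argmin(np.linalg.norm(V - x, axis=1))   # nearest codeword (hard VQ)
        phi = np.zeros(K * (d + 1))
        start = k * (d + 1)
        phi[start] = s                                 # scalar part: s * gamma_k(x)
        phi[start + 1 : start + 1 + d] = x - V[k]      # linear part: gamma_k(x) * (x - v_k)
        return phi  # sparse: only one block of (d + 1) entries is nonzero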
This is one of the two major algorithms applied by the NEC-UIUC team to achieve the No. 1 position in PASCAL VOC 2009!
Summary of geometric coding methods
[Figure: comparison of coding schemes, including super-vector coding]
- All lead to higher-dimensional, sparse, and localized codes.
- All explore the geometric structure of the data.
- The new coding methods are suitable for linear classifiers.
- Their implementations are quite straightforward.
Things not covered here
- Improved LCC using local tangents, Yu & Zhang, ICML 2010
- Mixture of sparse coding, Yang et al., ECCV 2010
- Deep coding network, Lin et al., NIPS 2010
- Pooling methods
  - Max-pooling works well in practice, but appears to be ad hoc.
  - An interesting analysis of max-pooling: Boureau et al., ICML 2010.
  - We are working on a linear pooling method with a similar effect to max-pooling; some preliminary results are already in the super-vector coding paper (Zhou et al., ECCV 2010).
Outline of Part 3
- Why can sparse coding learn good features?
  - Intuition, topic model view, and geometric view
- A theoretical framework: local coordinate coding
- Two practical coding methods
- Recent advances in sparse coding for image classification
Fast approximation of sparse coding via neural networks (Gregor & LeCun, ICML 2010)
- The method aims at improving sparse coding speed at coding time, not training time, potentially making sparse coding practical for video.
- Idea: given a trained sparse coding model, use its input-output pairs as training data for a feed-forward model; a sketch follows below.
- They showed a speedup of about 20x, but did not evaluate on real video data.
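A minimal sketch of the idea, assuming scikit-learn (this trains a generic MLP on (input, code) pairs; the paper's LISTA architecture is a structured, truncated-ISTA network, not this plain regressor):

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.neural_network import MLPRegressor

    X = np.random.randn(5000, 64)                 # stand-in for image patches
    coder = MiniBatchDictionaryLearning(n_components=128, alpha=0.1,
                                        transform_algorithm='lasso_lars',
                                        transform_alpha=0.1).fit(X)
    Z = coder.transform(X)                        # "slow" sparse codes (the teacher)

    # Train a feed-forward student to predict the codes in a single pass.
    student = MLPRegressor(hidden_layer_sizes=(256,), max_iter=200).fit(X, Z)
    Z_fast = student.predict(X)                   # fast approximate codes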
Group sparse coding (Bengio et al., NIPS 2009)
- Sparse coding operates on patches, so the image representation is unlikely to be sparse.
- Idea: enforce joint sparsity via an L1/L2 mixed norm on the sparse codes of a group of patches (see the formula below).
- The resulting image representation becomes sparse, which saves memory, but the classification accuracy decreases.
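For concreteness, a standard form of this penalty (my notation; the paper's formulation may differ in details such as per-group weights): if $A$ stacks the codes $\alpha^{(1)}, \dots, \alpha^{(n)}$ of the $n$ patches in a group, the L1/L2 norm sums, over the $M$ dictionary entries, the L2 norm of each entry's coefficients across the group,
\[
\Omega(A) \;=\; \sum_{j=1}^{M} \big\| \big( \alpha^{(1)}_j, \dots, \alpha^{(n)}_j \big) \big\|_2 ,
\]
so a dictionary entry is either used jointly across the whole group or switched off for all of its patches.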
Learning a hierarchical dictionary (Jenatton, Mairal, Obozinski, and Bach, 2010)
The dictionary atoms are organized in a tree: a node can be active only if its ancestors are active.
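A tiny sketch of what this constraint means (my illustration; parent encodes a hypothetical tree over dictionary atoms, with -1 marking the root):

    def is_tree_consistent(active, parent):
        """Check the hierarchical sparsity constraint:
        every active node's ancestors must be active too."""
        for node, on in enumerate(active):
            if on:
                p = parent[node]
                while p != -1:               # walk up to the root
                    if not active[p]:
                        return False
                    p = parent[p]
        return True

    # Example: atom 0 is the root, atoms 1 and 2 are its children, atom 3 is a child of 1.
    parent = [-1, 0, 0, 1]
    print(is_tree_consistent([True, True, False, True], parent))    # True
    print(is_tree_consistent([False, True, False, False], parent))  # False: 1 active, root not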
References
- Image Classification using Super-Vector Coding of Local Image Descriptors. Xi Zhou, Kai Yu, Tong Zhang, and Thomas Huang. In ECCV 2010.
- Efficient Highly Over-Complete Sparse Coding using a Mixture Model. Jianchao Yang, Kai Yu, and Thomas Huang. In ECCV 2010.
- Learning Fast Approximations of Sparse Coding. Karol Gregor and Yann LeCun. In ICML 2010.
- Improved Local Coordinate Coding using Local Tangents. Kai Yu and Tong Zhang. In ICML 2010.
- Sparse Coding and Dictionary Learning for Image Analysis. Francis Bach, Julien Mairal, Jean Ponce, and Guillermo Sapiro. CVPR 2010 tutorial.
- Supervised Translation-Invariant Sparse Coding. Jianchao Yang, Kai Yu, and Thomas Huang. In CVPR 2010.
- Learning Locality-Constrained Linear Coding for Image Classification. Jinjun Wang, Jianchao Yang, Kai Yu, Fengjun Lv, Thomas Huang, and Yihong Gong. In CVPR 2010.
- Group Sparse Coding. Samy Bengio, Fernando Pereira, Yoram Singer, and Dennis Strelow. In NIPS 2009.
- Nonlinear Learning using Local Coordinate Coding. Kai Yu, Tong Zhang, and Yihong Gong. In NIPS 2009.
- Linear Spatial Pyramid Matching using Sparse Coding for Image Classification. Jianchao Yang, Kai Yu, Yihong Gong, and Thomas Huang. In CVPR 2009.
- Efficient Sparse Coding Algorithms. Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. In NIPS 2007.