SPARSE DISTANCE METRIC LEARNING IN HIGH-DIMENSIONAL SPACE VIA L1-PENALIZED LOG-DETERMINANT DIVERGENCE - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1
SPARSE DISTANCE METRIC LEARNING IN
HIGH-DIMENSIONAL SPACE VIA L1-PENALIZED
LOG-DETERMINANT DIVERGENCE
  • Authors: Guo-Jun Qi,
  • Dept. ECE, UIUC
  • Jinhui Tang, Zheng-Jun Zha, Tat-Seng Chua
  • SOC, NUS
  • Hong-Jiang Zhang
  • Microsoft ATC

2
OUTLINE
  • Motivations
  • Sparse Distance Metric
  • Formulations
  • L1 Optimization
  • Efficient L1-Penalized Log-Determinant Solver
  • Consistency Results
  • Experiments

3
OUTLINE
  • Motivations
  • Sparse Distance Metric
  • Formulations
  • L1 Optimization
  • Efficient L1-Penalized Log-Determinant Solver
  • Consistency Results
  • Experiments

4
MOTIVATIONS
  • Sparse Nature of the Mahalanobis Distance
  • An Example: Practical Viewpoint
  • Impose Sparsity on Off-Diagonal Elements
  • Consistency Results: Theoretical Review (Later)

5
OUTLINE
  • Motivations
  • Sparse Distance Metric
  • Formulations
  • L1 Optimization
  • Efficient L1-Penalized Log-Determinant Solver
  • Consistency Results
  • Experiments

6
FORMULATION
  • Learn a Mahalanobis Distance
  • Criterion: Given similar pairs S and dissimilar
    pairs D, the learned d_M yields smaller distances
    on S and larger distances on D

7
FORMULATION (CONT)
  • Loss function

8
FORMULATION (CONT)
  • M0 is the prior Mahalanobis matrix
  • Euclidean prior: M0 is the identity matrix
  • Covariance prior: M0 is the covariance matrix,
    reflecting the sample distribution
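The closeness of M to the prior M0 is typically measured with the log-determinant (Burg) divergence, as in ITML; a sketch, assuming this is the regularizer the slides refer to:

```python
import numpy as np

def logdet_div(M, M0):
    """Log-determinant (Burg) divergence
       D_ld(M, M0) = tr(M M0^{-1}) - log det(M M0^{-1}) - d.
       Zero iff M == M0, positive otherwise (for PD matrices)."""
    d = M.shape[0]
    A = M @ np.linalg.inv(M0)
    sign, logdet = np.linalg.slogdet(A)  # numerically stable log det
    return float(np.trace(A) - logdet - d)

# Euclidean prior: M0 = np.eye(d); covariance prior: M0 = np.cov(X.T)
```

Note that D_ld(M, M0) = 0 when M equals the prior, so the regularizer pulls the learned metric toward the identity (Euclidean) or the data covariance.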

9
L1 OPTIMIZATION
  • A Natural Solution: Convert into an SDP problem

Let
Problem: the computational cost is too expensive!
10
OUTLINE
  • Motivations
  • Sparse Distance Metric
  • Formulations
  • L1 Optimization
  • Efficient L1-Penalized Log-Determinant Solver
  • Consistency Results
  • Experiments

11
EFFICIENT L1-PENALIZED LOG-DETERMINANT SOLVER
  • Block coordinate descent algorithm (Friedman et
    al., 2007)
  • Let W be an estimate of M^-1 and

An efficient iterative procedure
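A compact numpy sketch of the Friedman et al. (2007) block coordinate descent (the "graphical lasso") for the generic L1-penalized log-determinant problem min_M tr(SM) - log det M + rho*||M||_1; variable names and the input S are illustrative, and the slides' actual solver incorporates the pairwise loss terms as well:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def graphical_lasso(S, rho, n_outer=50, n_inner=100, tol=1e-6):
    """Block coordinate descent: W estimates M^{-1}; one column of W
    is updated at a time by solving a lasso subproblem."""
    d = S.shape[0]
    W = S + rho * np.eye(d)          # initial estimate of the inverse
    B = np.zeros((d, d))             # lasso coefficients per column
    for _ in range(n_outer):
        W_old = W.copy()
        for j in range(d):
            idx = np.arange(d) != j
            W11 = W[np.ix_(idx, idx)]
            s12 = S[idx, j]
            beta = B[idx, j]
            for _ in range(n_inner):  # inner lasso coordinate descent
                beta_prev = beta.copy()
                for k in range(d - 1):
                    r = s12[k] - W11[k] @ beta + W11[k, k] * beta[k]
                    beta[k] = soft_threshold(r, rho) / W11[k, k]
                if np.max(np.abs(beta - beta_prev)) < tol:
                    break
            B[idx, j] = beta
            W[idx, j] = W11 @ beta    # update column j of the inverse
            W[j, idx] = W[idx, j]
        if np.max(np.abs(W - W_old)) < tol:
            break
    # recover the sparse matrix M from W and the lasso coefficients
    M = np.zeros((d, d))
    for j in range(d):
        idx = np.arange(d) != j
        m_jj = 1.0 / (W[j, j] - W[idx, j] @ B[idx, j])
        M[j, j] = m_jj
        M[idx, j] = -B[idx, j] * m_jj
    return M, W
```

Each column update is a lasso regression of cost O(d^2), so a full sweep is O(d^3) — far cheaper than solving the equivalent SDP directly.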
12
OUTLINE
  • Motivations
  • Sparse Distance Metric
  • Formulations
  • L1 Optimization
  • Efficient L1-Penalized Log-Determinant Solver
  • Consistency Results
  • Experiments

13
CONSISTENCY RESULTS
  • Consistency rate

For a target Mahalanobis matrix with at most m
nonzero entries per row, the L1-penalized
log-determinant formulation achieves the consistency rate
A smaller m leads to more rapid convergence!
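The rate itself was an equation image lost in extraction. For orientation only, a representative rate from the L1-penalized log-determinant literature (Ravikumar et al.'s analysis of sparse precision estimation), not necessarily the slides' exact statement, has the form

\[
\|\widehat{M} - M^{*}\|_{F} \;=\; O_{P}\!\left(\sqrt{\frac{(s + d)\,\log d}{n}}\right),
\qquad s \le m\,d,
\]

where d is the dimension, n the sample size, and s the number of nonzero off-diagonal entries of the target M^*. With at most m nonzeros per row, s is at most m*d, so a smaller m shrinks the bound and gives faster convergence, consistent with the slide's claim.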
14
OUTLINE
  • Motivations
  • Sparse Distance Metric
  • Formulations
  • L1 Optimization
  • Efficient L1-Penalized Log-Determinant Solver
  • Consistency Results
  • Experiments

15
EXPERIMENTS
  • Datasets
  • UCI datasets: IRIS, IONOSPHERE, WINE, SONAR
  • Image dataset: COREL
  • Compared methods
  • EUCLIDEAN
  • INVCOV
  • LMNN
  • ITML
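Metric-learning benchmarks like these are conventionally evaluated by k-NN classification under each learned metric (EUCLIDEAN and INVCOV correspond to M = I and M = inverse covariance). A minimal sketch, assuming a learned PSD matrix M; all names are illustrative:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, M, k=3):
    """k-NN classification under d_M(x, y)^2 = (x - y)^T M (x - y)."""
    preds = []
    for x in X_test:
        diffs = X_train - x
        # squared Mahalanobis distance to every training point
        d2 = np.einsum('ij,jk,ik->i', diffs, M, diffs)
        nn = np.argsort(d2)[:k]                      # k nearest neighbors
        labels, counts = np.unique(y_train[nn], return_counts=True)
        preds.append(labels[np.argmax(counts)])      # majority vote
    return np.array(preds)

# M = np.eye(d) gives the EUCLIDEAN baseline;
# M = np.linalg.inv(np.cov(X_train.T)) gives INVCOV.
```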

16
EXPERIMENTS
17
EXPERIMENTS
  • Performance changes with different n/d ratios

18
EXPERIMENTS
  • Computational Cost

19
OUTLINE
  • Motivations
  • Sparse Distance Metric
  • Formulations
  • L1 Optimization
  • Efficient L1-Penalized Log-Determinant Solver
  • Consistency Results
  • Experiments
  • Conclusion

20
CONCLUSIONS
  • An L1-penalized log-determinant formulation to
    learn a sparse Mahalanobis distance
  • A consistency rate that favors the sparsity
    assumption
  • An efficient L1 solver

21
Thanks for your attention! Q & A