Advanced Computer Vision

Transcript and Presenter's Notes (presented January 28th, 2005)

1
Advanced Computer Vision
"An Efficient Implementation of Reid's Multiple
Hypothesis Tracking Algorithm and Its Evaluation
for the Purpose of Visual Tracking", by I. J. Cox
and S. L. Hingorani. Presented by Carlos Busso
2
Overview
  • Motivation
  • Multiple Hypothesis Tracking (MHT)
  • Practical Implementation
  • Experiments and results
  • Conclusion

3
Motivation
  • Tracking can be divided into two main steps:
  • Prediction
  • Data association (object/measurement matching)
  • In the matching step, ambiguities may arise:
  • One measurement matches more than one object
  • One object matches more than one measurement
  • Occluded objects
  • Spurious measurements
  • Appearance of new objects
  • Objects that cease to exist
  • The features to track are corners automatically
    extracted using a variant of the Lucas and
    Kanade algorithm

4
Motivation
  • Easy solution: Nearest Neighbor
  • Good if consecutive frames do not differ much
  • Partial occlusions and background changes
    degrade performance
  • Multiple Hypothesis Tracking algorithm
  • Provides a good solution to
    object/measurement ambiguities
  • Track initiation
  • Track termination
  • Track continuation
  • Explicit modeling of spurious measurements
  • Explicit modeling of the uniqueness constraint
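For reference, the Nearest Neighbor baseline can be sketched in a few lines of Python. This is a hedged illustration, not the paper's code; the gate value, point format, and function name are assumptions:

```python
import math

def nearest_neighbor_match(predictions, measurements, gate=10.0):
    """Greedily assign each predicted feature to its closest measurement.

    predictions, measurements: lists of (x, y) points.
    Returns a dict {prediction_index: measurement_index}; predictions
    with no measurement inside the gate stay unassigned.
    """
    assignments = {}
    used = set()
    for i, p in enumerate(predictions):
        best_j, best_d = None, gate
        for j, m in enumerate(measurements):
            if j in used:
                continue  # enforce the uniqueness constraint greedily
            d = math.dist(p, m)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            assignments[i] = best_j
            used.add(best_j)
    return assignments
```

The greedy, per-frame decision is exactly what breaks down under partial occlusion: once a wrong match is made, there is no mechanism to revisit it.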

5
Multiple Hypothesis Tracking
Back to 1979...
  • Presented by Donald B. Reid in the context of
    multi-target tracking
  • Each hypothesis represents a different set of
    assignments of measurements to features
    (a collection of disjoint tracks)
  • A tree structure is used, in which each global
    hypothesis generates child hypotheses
  • Each child hypothesis together with its parent
    hypothesis represents one possible interpretation
    of all past measurements
  • Considerations:
  • The hypotheses are represented by ambiguity
    matrices
  • Each measurement may be an existing feature, a
    new feature, noise, a deleted feature, or an
    occluded feature
  • The Mahalanobis distance is used
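The Mahalanobis distance weights the innovation (measurement minus prediction) by its uncertainty. A minimal sketch for the 2-D case, with matrices as nested lists; this is an illustration of the distance itself, not the paper's implementation:

```python
def mahalanobis2(z, z_pred, S):
    """Squared Mahalanobis distance between measurement z and prediction
    z_pred, for a 2x2 innovation covariance S (nested lists)."""
    dx = z[0] - z_pred[0]
    dy = z[1] - z_pred[1]
    # invert the 2x2 covariance by hand
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[ S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det,  S[0][0] / det]]
    # d^2 = (z - z_pred)^T S^{-1} (z - z_pred)
    return dx * (inv[0][0] * dx + inv[0][1] * dy) \
         + dy * (inv[1][0] * dx + inv[1][1] * dy)
```

In practice a measurement is considered for a track only if this squared distance falls below a chi-square gate (e.g. 9.21 for 99% with 2 degrees of freedom).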

6
Multiple Hypothesis Tracking
  • Outline of the basic operation

7
Multiple Hypothesis Tracking
  • Evaluation of the hypotheses
  • Statistical measures are used to evaluate how
    good the hypotheses are
  • Three terms:
  • First term: conditional probability of the
    measurements Z(k)
  • Second term: conditional probability of the
    assignment θ(k)
  • Third term: probability of the parent
    hypothesis Θ
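These three terms correspond to the usual Bayesian decomposition of the hypothesis probability in Reid's formulation; the notation below is reconstructed, not copied from the slide:

```latex
P\bigl(\Theta^{k} \mid Z^{k}\bigr)
  = \frac{1}{c}\,
    \underbrace{p\bigl(Z(k) \mid \theta(k), \Theta^{k-1}, Z^{k-1}\bigr)}_{\text{1st: measurements}}\;
    \underbrace{P\bigl(\theta(k) \mid \Theta^{k-1}, Z^{k-1}\bigr)}_{\text{2nd: assignment}}\;
    \underbrace{P\bigl(\Theta^{k-1} \mid Z^{k-1}\bigr)}_{\text{3rd: parent}}
```

where Θ^k = {Θ^(k-1), θ(k)} is the global hypothesis after frame k, θ(k) is the current assignment of measurements, and c is a normalizing constant. The third term makes the evaluation recursive over the hypothesis tree.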

8
Multiple Hypothesis Tracking
  • Assumptions:
  • Measurements are Gaussian distributed
  • The numbers of false alarms and new features
    are Poisson distributed
  • The implementation is straightforward
  • However, the complexity is exponential (see
    next slide)
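To make the exponential growth concrete: if each of m new measurements can be a false alarm, a new track, or a match to a distinct existing track, the number of child hypotheses per parent multiplies at every frame. A back-of-the-envelope count (an illustration of the combinatorics, not the paper's exact formula):

```python
from math import comb, perm

def num_assignments(n_tracks, n_meas):
    """Count the ways to interpret n_meas measurements given n_tracks
    existing tracks: each measurement is either a false alarm, a new
    track, or matched to a distinct existing track (uniqueness
    constraint). Sum over k = number of measurements matched to tracks.
    """
    total = 0
    for k in range(min(n_tracks, n_meas) + 1):
        # choose which k measurements match, which tracks they take,
        # and let the remaining m-k each be false alarm OR new track
        total += comb(n_meas, k) * perm(n_tracks, k) * 2 ** (n_meas - k)
    return total
```

Even for 2 tracks and 2 measurements each parent hypothesis spawns 14 children, so the number of global hypotheses after t frames grows roughly as 14^t without pruning.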

9
Multiple Hypothesis Tracking
(Figure: hypothesis tree growing over frames t, t+1, t+2, t+3, t+4)
Impractical for real applications
10
Practical Implementation
Review of the paper's contribution
  • The paper describes an efficient implementation
    of the MHT algorithm
  • Several strategies are jointly employed to
    contain the growth of the hypotheses:
  • Track Trees
  • Spatially Disjoint Hypothesis Trees
  • Generating the k-best Hypotheses
  • N-scan-back
  • Ratio Pruning

11
Practical Implementation
  • Track Trees
  • Idea: duplicate paths must be removed
  • The same track may appear in more than one
    global hypothesis
  • Track trees provide considerable savings

12
Practical Implementation
  • Track Trees

13
Practical Implementation
  • Spatially Disjoint Hypothesis Trees
  • Idea: only tracks that compete for the same
    measurements need to belong to the same
    hypothesis tree
  • Tracks are partitioned into separate clusters
  • Tracks in a cluster compete for common
    measurements
  • The combinatorial problem associated with forming
    global hypotheses is significantly reduced
  • Clusters may be merged or split
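The clustering step can be sketched as a union-find over tracks that share gated measurements. This is a hedged sketch; the data layout (a dict from track id to the set of measurement ids inside its gate) is an assumption, not the paper's code:

```python
def cluster_tracks(gated):
    """Partition tracks into spatially disjoint clusters.

    gated: dict {track_id: set of measurement ids inside that track's gate}.
    Two tracks belong to the same cluster iff they (transitively) compete
    for a common measurement. Returns a list of track-id sets.
    """
    parent = {t: t for t in gated}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    def union(a, b):
        parent[find(a)] = find(b)

    # merge any two tracks whose gates contain the same measurement
    meas_owner = {}
    for t, meas in gated.items():
        for m in meas:
            if m in meas_owner:
                union(t, meas_owner[m])
            else:
                meas_owner[m] = t

    clusters = {}
    for t in gated:
        clusters.setdefault(find(t), set()).add(t)
    return list(clusters.values())
```

Global hypotheses are then formed independently per cluster, which is where the combinatorial saving comes from.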

14
Practical Implementation
  • Spatially Disjoint Hypothesis Trees

15
Practical Implementation
  • Generating the k-best Hypotheses
  • Idea: select only the k best hypotheses without
    enumerating all possible global hypotheses
  • Enumeration of all hypotheses is impractical
  • Murty's algorithm is used to optimally determine
    the k best assignments
  • Formulated as a linear assignment problem
  • The number of linear assignment problems solved
    is linear in k (polynomial time)
  • The algorithm avoids solving duplicate
    assignments
  • In the paper, k was set to 300
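For illustration only, the k best assignments of a small cost matrix can be found by brute-force enumeration; Murty's algorithm produces the same ranked list without visiting all n! permutations, by repeatedly solving constrained linear assignment problems:

```python
from itertools import permutations

def k_best_assignments(cost, k):
    """Enumerate the k lowest-cost one-to-one assignments by brute force.

    cost: square matrix (list of lists), cost[i][j] = cost of giving
    measurement j to track i. Returns [(total_cost, assignment), ...]
    sorted by cost; assignment[i] is the measurement given to track i.
    Brute force is used only to keep this illustration short.
    """
    n = len(cost)
    scored = []
    for assignment in permutations(range(n)):
        c = sum(cost[i][assignment[i]] for i in range(n))
        scored.append((c, assignment))
    scored.sort()
    return scored[:k]
```

In the paper the probabilities are turned into assignment costs (negative log-probabilities), so "k best hypotheses" becomes "k cheapest assignments".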

16
Practical Implementation
  • Generating the k-best Hypotheses

e.g., k = 4
17
Practical Implementation
  • N-scan-back
  • Idea: any ambiguity at time k is resolved by
    time k+N
  • The branch with the highest probability is kept
    and the others are pruned
  • Below the decision node there is a degenerate
    tree of depth N
  • N must be kept small
  • If N = 0, the algorithm is similar to Nearest
    Neighbor
  • Results suggest N = 3
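N-scan-back pruning can be sketched by representing each leaf hypothesis as its probability plus its path of branch choices from the root (a simplified model of the tree, not the paper's data structure):

```python
def n_scan_back_prune(leaves, n):
    """Keep only leaves that share the best leaf's ancestor n levels up.

    leaves: list of (probability, path) pairs, where path is the tuple
    of branch choices from the root, one entry per frame. Ambiguities
    older than n frames are resolved in favour of the most probable
    current leaf; all other branches at that depth are pruned.
    """
    best_prob, best_path = max(leaves)
    cut = len(best_path) - n
    if cut <= 0:
        return leaves  # nothing old enough to resolve yet
    prefix = best_path[:cut]
    return [(p, path) for p, path in leaves if path[:cut] == prefix]
```

With n = 0 every frame's ambiguity is resolved immediately, which is why the algorithm then behaves like Nearest Neighbor.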

18
Practical Implementation
  • N-scan-back

19
Practical Implementation
  • Ratio Pruning
  • Idea: hypotheses with low probability compared
    to other competing hypotheses can be removed
  • The maximum number of hypotheses is limited by
    the k-best hypotheses algorithm
  • Some of these hypotheses can still have very low
    probability
  • A threshold prevents hypotheses from being
    considered if the ratio of their probability to
    that of the best hypothesis becomes too small
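The rule itself is nearly a one-liner; a sketch with the hypotheses held as (probability, hypothesis) pairs (the names and data layout are assumptions):

```python
def ratio_prune(hypotheses, ratio=0.1):
    """Drop hypotheses whose probability is below `ratio` times the
    probability of the best competing hypothesis.

    hypotheses: list of (probability, hypothesis) pairs.
    """
    if not hypotheses:
        return []
    best = max(p for p, _ in hypotheses)
    return [(p, h) for p, h in hypotheses if p >= ratio * best]
```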

20
Practical Implementation
  • Ratio pruning

e.g., k-best hypotheses: k = 4, ratio-pruning threshold = 0.1
21
Practical Implementation
  • Other considerations
  • A linear Kalman filter is used for prediction
  • The features to track are corners automatically
    extracted using a variant of the Lucas and
    Kanade algorithm
  • Track initiation: there are no predictions for
    measurements in the first frame, so they need to
    be considered either new tracks or spurious
    measurements
  • A Mahalanobis test is combined with a
    cross-correlation test to prevent nonsense
    matches (color is first used here)
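The Kalman prediction step can be sketched for one coordinate of a corner feature under a constant-velocity model. This is a hedged sketch: the state layout [position, velocity] and the process-noise model Q are assumptions, not the paper's exact filter:

```python
def kalman_predict(x, P, dt=1.0, q=1e-2):
    """Prediction step of a linear Kalman filter, constant-velocity model.

    x = [position, velocity], P = 2x2 covariance (nested lists),
    q = process-noise intensity. Returns the predicted (x, P).
    """
    # state: x' = F x, with F = [[1, dt], [0, 1]]
    x_pred = [x[0] + dt * x[1], x[1]]
    # covariance: P' = F P F^T + Q, expanded by hand for the 2x2 case
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1]
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1]
    # white-noise-acceleration process noise (an assumed model)
    Q = [[q * dt ** 3 / 3, q * dt ** 2 / 2],
         [q * dt ** 2 / 2, q * dt]]
    P_pred = [[p00 + Q[0][0], p01 + Q[0][1]],
              [p10 + Q[1][0], p11 + Q[1][1]]]
    return x_pred, P_pred
```

The predicted state and covariance feed directly into the Mahalanobis gating of the next frame's measurements.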

22
Experiments and Results
  • (Figure: example tracking results)

23
Experiments and Results
(Figures: tracking results, MHT vs. Nearest Neighbor)
24
Experiments and Results
25
Conclusion
  • Feature matching is a non-trivial issue; the
    paper describes a good solution for complex
    scenarios
  • The algorithm is robust to errors in the motion
    model
  • Occlusions, noise, and the start and end of
    features are handled successfully and naturally
    by the algorithm
  • The computational complexity is the biggest
    drawback of the algorithm
  • The computational complexity depends on the
    number of features (and therefore on the scene)
  • On a 150 MHz MIPS R4400, one video took 1.5
    seconds per frame and the other 3.06 seconds