Multi-View Learning in the Presence of View Disagreement - PowerPoint PPT Presentation

1
Multi-View Learning in the Presence of View
Disagreement
  • C. Mario Christoudias, Raquel Urtasun,
    Trevor Darrell

UC Berkeley EECS & ICSI, MIT CSAIL
2
The World is Multi-view
  • Many datasets are composed of multiple feature sets, or views

3
Learning from Multiple Information Sources
  • Multi-view learning methods exploit view redundancy to learn from partially labeled data
  • Can be advantageous over learning with only a single view [Blum et al., 98; Kakade et al., 07]
  • The weaknesses of one view are complemented by the strengths of the other

4
Dealing with Noise
  • Multi-view learning approaches have difficulty dealing with noisy observations
  • Methods have been proposed that model stream reliability [Yan et al., 05; Yu et al., 07]

5
Dealing with Noise
  • More generally, view corruption is non-uniform across views

[Figure: corruption patterns differ across View 1, View 2, …, View V]
6
View Disagreement
  • View disagreement can be caused by view corruption
  • The samples observed in each view then belong to different classes
  • Audio-visual examples:
  • Uni-modal expression (person says yes without nodding)
  • Temporary view occlusion (person temporarily covers mouth while speaking)

7
Our Approach
  • Consider view disagreement caused by view corruption
  • Detect and filter samples with view disagreement using an information-theoretic measure based on the conditional view entropy

8
Related Work
  • View disagreement is a new type of view insufficiency
  • Multi-view learning with insufficient views:
  • Co-regularization [Collins et al., 99; Sindhwani et al., 05]
  • View validation [Muslea et al., 02; Naphade et al., 05; Yu et al., 07]
  • Multi-view manifold learning [Ando et al., 07; Kakade et al., 07]
  • Previous methods still rely on samples from all views belonging to the same class

9
Multi-View Bootstrapping
  • Co-training [Blum & Mitchell, 98]
  • Mutually bootstrap a set of classifiers from
    partially labeled data
  • Cross-view Bootstrapping
  • Learn a classifier in one modality from the
    labels provided by a classifier from another
    modality

10
Bootstrapping One View from the Other
  • Extrapolate from high-confidence labels in other
    modality

[Diagram, built up over slides 10-12: the audio classifier labels audio data; those labels are then used to train the video classifier]
13
Co-training
[Blum & Mitchell, 98]
  • Learns from partially labeled data by mutually
    bootstrapping a set of classifiers on multi-view
    data
  • Assumptions
  • Class conditional independence
  • Sufficiency
  • Applied to
  • Text classification [Collins & Singer, 99]
  • Visual object detection [Levin et al., 03]
  • Information retrieval [Yan & Naphade, 05]

14
Co-training Algorithm
  • Start with seed set of labeled examples

Audio Classifier
Audio data
Labels
Video Classifier
Video data
15
Co-training Algorithm
  • Step 1: Train classifiers on seed set

[Diagram, built up over slides 15-16: both the audio and video classifiers are trained on the labeled seed set]
17
Co-training Algorithm
  • Step 2: Evaluate on unlabeled data; add the N most confident examples from each view

[Diagram, built up over slides 17-20: each classifier labels the unlabeled data, and its most confident predictions are added to the shared label pool]
21
Co-training Algorithm
  • Iterate steps 1 and 2 until done

[Diagram: the audio and video classifiers linked through the shared labels]
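The loop on slides 14-21 can be sketched in code. This is a minimal illustration rather than the original implementation: the toy `GaussianBayes` classifier and all names here are assumptions, and confidence is simply taken to be the maximum posterior probability.

```python
import numpy as np

class GaussianBayes:
    """Toy Bayes classifier with a per-class diagonal Gaussian p(x|y)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes_])
        return self

    def predict_proba(self, X):
        # log p(x|y) for a diagonal Gaussian, up to an additive constant
        ll = -0.5 * (((X[:, None, :] - self.means_) ** 2) / self.vars_
                     + np.log(self.vars_)).sum(axis=2)
        p = np.exp(ll - ll.max(axis=1, keepdims=True))
        return p / p.sum(axis=1, keepdims=True)

def co_train(Xa, Xb, y, Xa_u, Xb_u, n_add=5, n_iter=10):
    """Step 1: train one classifier per view on the labeled pool.
    Step 2: label the unlabeled data and move the N most confident
    examples from each view into the pool.  Iterate until done."""
    Xa, Xb, y = list(Xa), list(Xb), list(y)
    clf_a, clf_b = GaussianBayes(), GaussianBayes()
    pool = set(range(len(Xa_u)))          # indices still unlabeled
    for _ in range(n_iter):
        clf_a.fit(np.array(Xa), np.array(y))
        clf_b.fit(np.array(Xb), np.array(y))
        if not pool:
            break
        for clf in (clf_a, clf_b):
            idx = sorted(pool)
            if not idx:
                break
            X_u = Xa_u if clf is clf_a else Xb_u
            probs = clf.predict_proba(X_u[idx])
            # take the N most confident predictions from this view
            for k in np.argsort(probs.max(axis=1))[::-1][:n_add]:
                i = idx[k]
                Xa.append(Xa_u[i])
                Xb.append(Xb_u[i])
                y.append(clf.classes_[probs[k].argmax()])
                pool.discard(i)
    return clf_a, clf_b
```

Each added example carries its full multi-view observation, so a confident prediction in one view also provides a training label for the other view.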
22
View Disagreement Example: Normally Distributed Classes
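As a concrete illustration of the kind of data behind this toy example, one can generate two views of two normally distributed classes and then replace one view of a random subset of samples with broad background noise, so that the corrupted samples no longer agree across views. The parameters and names below are illustrative assumptions, not the values used on the slide.

```python
import numpy as np

def make_two_view_data(n=100, p_corrupt=0.3, seed=0):
    """Two views of two normally distributed classes (centers 0 and 4).
    A random fraction of samples has one view replaced by background
    noise, putting that sample's views in disagreement."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n)
    centers = np.array([0.0, 4.0])
    x1 = rng.normal(centers[y], 0.5)      # view 1 observation
    x2 = rng.normal(centers[y], 0.5)      # view 2 observation
    corrupt = rng.random(n) < p_corrupt   # which samples lose a view
    which = rng.random(n) < 0.5           # which of the two views is lost
    bg = rng.normal(10.0, 3.0, n)         # broad background process
    x1 = np.where(corrupt & which, bg, x1)
    x2 = np.where(corrupt & ~which, bg, x2)
    return x1, x2, y, corrupt
```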
23
Conventional Co-training under View Disagreement
24
Our Approach: Key Assumption
  • Given n foreground classes and a background class
  • Foreground classes can only co-occur with the same class or with background
  • The background class can co-occur with any of the n+1 classes
  • This is a reasonable assumption for audio-visual problems

25
Our Approach: Notional Example
Conditioning on a foreground sample gives a distribution with low entropy.
Conditioning on a background sample gives a distribution with high entropy.
26
Conditional Entropy Measure
  • Define an indicator function m(·) over view pairs (x_k^i, x_k^j):

    m(x_k^i, x_k^j) = 1  if  H(x^i | x_k^j) < H̄^ij,  and 0 otherwise

  • where H(x^i | x_k^j) is the conditional view entropy of view i given the sample x_k^j, H̄^ij is the mean conditional entropy over the samples, and p(x) is estimated with a kernel density estimate [Silverman]
  • m(·) detects foreground samples x_k^j
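The measure can be sketched numerically with a kernel density estimate over the paired training samples. This is an illustrative approximation rather than the authors' exact estimator: Gaussian-kernel weights from the conditioning view define p(x^i | x_k^j), a plug-in estimate gives its entropy, and the indicator fires when that entropy falls below the mean.

```python
import numpy as np

def conditional_entropy(Xi, Xj, x_j, h=1.0):
    """Plug-in estimate of the conditional view entropy H(x^i | x^j = x_j).

    Gaussian-kernel weights on the conditioning view give a weighted
    kernel density estimate of p(x^i | x_j) in the other view; its
    entropy is estimated at the weighted sample points."""
    wj = np.exp(-0.5 * ((Xj - x_j) ** 2).sum(axis=1) / h ** 2)
    p = wj / wj.sum()                      # Nadaraya-Watson weights
    d2 = ((Xi[:, None, :] - Xi[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-0.5 * d2 / h ** 2) / (h * np.sqrt(2.0 * np.pi)) ** Xi.shape[1]
    q = K @ p                              # density of p(x^i | x_j) at the samples
    return float(-(p * np.log(q + 1e-300)).sum())

def foreground_indicator(Xi, Xj, h=1.0):
    """m(·, x_k^j) = 1 when conditioning on x_k^j gives an entropy below
    the mean conditional entropy, i.e. x_k^j looks like foreground."""
    H = np.array([conditional_entropy(Xi, Xj, x, h) for x in Xj])
    return (H < H.mean()).astype(int), H
```

Conditioning on a foreground sample concentrates the weights on same-class partners (low entropy), while conditioning on a background sample spreads them over unrelated partners (high entropy), matching the notional example on the previous slide.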
27
Redundant Sample Detection
  • A sample x_k is a redundant foreground sample if it satisfies m(x_k^i, x_k^j) = 1 and m(x_k^j, x_k^i) = 1 (both views look foreground)
  • A sample x_k is a redundant background sample if it satisfies m(x_k^i, x_k^j) = 0 and m(x_k^j, x_k^i) = 0 (both views look background)

28
View Disagreement Detection
  • Two views x_k^i and x_k^j of a multi-view sample x_k are in view disagreement if m(x_k^i, x_k^j) ⊕ m(x_k^j, x_k^i) = 1,
  • where ⊕ is the logical XOR operator
  • This test defines a modified co-training algorithm
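Given the indicator evaluated in both directions for each sample, the three tests above reduce to elementwise boolean logic. A minimal sketch (function names are illustrative):

```python
import numpy as np

def redundant_foreground(m_ij, m_ji):
    """Both views look foreground: a redundant foreground sample."""
    return np.asarray(m_ij, bool) & np.asarray(m_ji, bool)

def redundant_background(m_ij, m_ji):
    """Neither view looks foreground: a redundant background sample."""
    return ~np.asarray(m_ij, bool) & ~np.asarray(m_ji, bool)

def view_disagreement(m_ij, m_ji):
    """Exactly one view looks foreground (logical XOR): under the key
    assumption the two views cannot share a class, so they disagree."""
    return np.asarray(m_ij, bool) ^ np.asarray(m_ji, bool)
```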

29
Co-training in the Presence of View Disagreement
  • Start with seed set of labeled examples

[Diagram: seed audio and video labels feed their respective classifiers]
30
Co-training in the Presence of View Disagreement
  • Step 1: Train classifiers on seed set

[Diagram, built up over slides 30-31: each view's classifier is trained on its own seed labels]
32
Co-training in the Presence of View Disagreement
  • Step 2: Evaluate on unlabeled data; add the N most confident examples from each view

[Diagram, built up over slides 32-35: each classifier labels the unlabeled data within its own view]
36
Co-training in the Presence of View Disagreement
  • Step 3: Map labels across views using the conditional-entropy measure

[Diagram, built up over slides 36-38: labels cross between views only for redundant samples; samples in view disagreement are filtered out]
39
Co-training in the Presence of View Disagreement
  • Iterate steps 1 through 3 until done

[Diagram: the final audio and video classifiers]
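The difference from standard co-training is step 3: confident labels cross views only for samples that pass the conditional-entropy test. A minimal sketch of that filtering step, with hypothetical function and argument names:

```python
import numpy as np

def transfer_confident_labels(idx, labels, disagree):
    """Keep only the confident cross-view labels whose sample is NOT in
    view disagreement; redundant samples pass through unchanged.

    idx      -- indices of the N most confident unlabeled samples
    labels   -- their predicted labels in the source view
    disagree -- per-sample view-disagreement flags (boolean)
    """
    idx = np.asarray(idx)
    labels = np.asarray(labels)
    keep = ~np.asarray(disagree, dtype=bool)[idx]
    return idx[keep], labels[keep]
```

Dropping flagged samples before the label transfer prevents a corrupted view from teaching the other view the wrong class.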
40
Normally Distributed Classes: Results
41
Real Data
  • User agreement from head gesture and speech
  • Head gesture: nod or shake
  • Speech: yes or no
  • 15 subjects, 103 questions
  • Simulated view disagreement:
  • Background segments in the visual domain
  • Babble noise in the audio

42
Experimental Setup
  • Single-frame audio and video observations
  • Bayes classifier for audio and visual gesture recognition, where p(x|y) is Gaussian
  • Randomly split subjects into 10 training and 5 test subjects
  • Results averaged over 5 splits

43
Cross-View Bootstrapping Experiment
  • Bootstrap visual classifier from audio labels

44
Co-training Experiment
  • Learn both audio and video classifiers

45
Conclusions and Future Work
  • Investigated the problem of view disagreement in multi-view learning
  • Proposed an information-theoretic measure to detect view disagreement due to view corruption
  • On an audio-visual user agreement task, our method was robust to gross amounts of view disagreement (50-70%)
  • Future work:
  • More general view disagreement distributions
  • Integrate view disagreement uncertainty into co-training