Recognizing Action Units for Facial Expression Analysis

1
Recognizing Action Units for Facial Expression
Analysis
  • Speaker: Junwen Wu
  • Course: ECE285
  • Date: 02/11/2002

2
Introduction
  • Power of facial expression:
  • An immediate means for human beings to communicate
  • Quickly expresses genuine emotion
  • Applications:
  • Intelligent environments (car, smart room, etc.)
  • Helping to nurse patients with language disabilities
  • Approaches:
  • Build prototypic facial expressions for basic emotions
  • Disadvantage of this approach: such prototypic facial expressions cannot represent the rich range of human emotion
  • Capture subtle changes in isolated features

3
Facial Action Coding System (FACS)
4
Facial Action Coding System (FACS) (Contd.)
  • Function: describes facial expressions in terms of action units (AUs)
  • No quantitative definitions are provided
  • Altogether there are 44 FACS AUs
  • 12 are for the upper face
  • 18 are for the lower face (the remaining AUs are miscellaneous actions)
  • AUs can occur singly or in combination; when in combination, AUs can be either additive or non-additive
  • Advantage: a powerful means to describe the details of facial expression (a representation sketch follows this list)
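A minimal illustrative sketch, not part of the original paper or presentation, of how single AUs and AU combinations might be represented in code. The AU numbers and names are standard FACS labels; the dictionary and the describe helper are assumptions made here for illustration.

```python
# Illustrative only: a tiny subset of FACS AUs and a helper for labeling
# single AUs or AU combinations. Not the authors' representation.
FACS_AUS = {
    1: "Inner brow raiser",
    2: "Outer brow raiser",
    4: "Brow lowerer",
    6: "Cheek raiser",
    12: "Lip corner puller",
    25: "Lips part",
    27: "Mouth stretch",
}

def describe(aus: set) -> str:
    """Label a single AU or an AU combination, e.g. 'AU1 (...) + AU2 (...)'."""
    return " + ".join(f"AU{n} ({FACS_AUS.get(n, 'unknown')})" for n in sorted(aus))

print(describe({1, 2}))   # an additive combination of the two brow raisers
print(describe({12}))     # a single AU
```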

5
Feature-based Automatic Facial Action Analysis
System (AFA)
  • Steps:
  • Facial feature extraction
  • Facial expression classification
  • Characteristics:
  • Multi-state localized facial features, derived by accurate geometric modeling
  • Explicitly analyzes appearance changes
  • Applicable to nearly frontal image sequences (a pipeline sketch follows)
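A hedged sketch of the two-stage pipeline listed above: per-frame feature extraction followed by AU classification. The names run_afa_pipeline, extract_features, and classify_aus are placeholders invented here, not the authors' API.

```python
from typing import Callable, List, Sequence

def run_afa_pipeline(frames: Sequence,
                     extract_features: Callable,
                     classify_aus: Callable) -> List[set]:
    """Apply feature extraction and then AU classification to each frame."""
    results = []
    for frame in frames:
        params = extract_features(frame)   # multi-state geometric feature parameters
        aus = classify_aus(params)         # set of AUs recognized in this frame
        results.append(aus)
    return results
```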

6
Feature-based Automatic Facial Action Analysis
System (AFA) (Contd.)
7
Features Used in the AFA System
  • Permanent features
  • Definition: features that are present regardless of facial expression
  • Examples: lips, eyes, permanent furrows
  • Transient features
  • Definition: features that appear only with certain facial expressions
  • Examples: facial lines and transient furrows, such as dimples
  • A different multi-state model is introduced for each facial component (a data-layout sketch follows)
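A purely illustrative sketch of the permanent/transient split and the per-component multi-state models summarized above and detailed on the next three slides; the class names, field names, and state labels are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class FeatureKind(Enum):
    PERMANENT = "permanent"   # lips, eyes, brows, permanent furrows
    TRANSIENT = "transient"   # facial lines, transient furrows (e.g. dimples)

@dataclass
class ComponentModel:
    name: str
    kind: FeatureKind
    states: tuple             # discrete states tracked by the multi-state model

LIP_MODEL = ComponentModel("lip", FeatureKind.PERMANENT,
                           ("open", "closed", "tightly_closed"))
EYE_MODEL = ComponentModel("eye", FeatureKind.PERMANENT, ("open", "closed"))
FURROW_MODEL = ComponentModel("furrow", FeatureKind.TRANSIENT,
                              ("present", "absent"))
```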

8
Multi-State Face Component Models (Lip)
  • 3-state lip model (open, closed and tightly
    closed)

Open and closed lips are modeled by two parabolic arcs (6 parameters altogether); a tightly closed mouth is modeled by the dark mouth line connecting the lip corners (4 parameters altogether). A sketch of such a parabolic template follows.
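A minimal sketch of a parabolic lip template, assuming the six parameters are the two lip-corner positions plus the apex heights of the upper and lower arcs; the exact parameterization in the paper may differ.

```python
import numpy as np

def lip_boundary(x0, y0, x1, y1, h_top, h_bottom, n=50):
    """Sample two parabolic arcs joining the lip corners (x0, y0) and (x1, y1),
    with apex offsets h_top (upper lip) and h_bottom (lower lip)."""
    t = np.linspace(0.0, 1.0, n)
    x = x0 + t * (x1 - x0)
    y_line = y0 + t * (y1 - y0)      # straight line through the two corners
    bump = 4.0 * t * (1.0 - t)       # parabola that is 0 at both corners, 1 at the middle
    return x, y_line - h_top * bump, y_line + h_bottom * bump
```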
9
Multi-State Face Component Models (Eyes)
  • 2-state eye model (open and closed)

For an open eye, two parabolic arcs with 6 parameters model the eye boundary, and a circle with 3 parameters models the iris. For a closed eye, 4 parameters describing the eye corners model the eye. A parameter-layout sketch follows.
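An illustrative parameter layout for the two eye states described above (two corners plus two arc heights for the boundary, centre and radius for the iris, and two corners alone for a closed eye). Field names are assumptions, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class OpenEyeModel:
    inner_corner: Tuple[float, float]   # (x, y)
    outer_corner: Tuple[float, float]   # (x, y)  -> 4 corner parameters
    h_upper: float                      # apex height of the upper-lid parabola
    h_lower: float                      # apex height of the lower-lid parabola -> 6 boundary parameters
    iris_center: Tuple[float, float]    # iris circle centre
    iris_radius: float                  # iris circle radius -> 3 iris parameters

@dataclass
class ClosedEyeModel:
    inner_corner: Tuple[float, float]   # the 4 parameters are the two eye corners
    outer_corner: Tuple[float, float]
```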
10
Multi-State Face Component Models (Brows, Cheeks, and Furrows)
  • Triangular models with 6 parameters for the brows
    and cheeks
  • 2-state wrinkle model (present or absent); a parameter sketch follows
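An illustrative sketch of the triangular brow/cheek template (three vertices giving the 6 parameters) and the two-state wrinkle model; the names and example region labels are assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TriangleTemplate:
    p1: Tuple[float, float]
    p2: Tuple[float, float]
    p3: Tuple[float, float]   # 3 vertices x 2 coordinates = 6 parameters

@dataclass
class WrinkleState:
    region: str               # e.g. a nasolabial or crow's-feet area (assumed labels)
    present: bool             # 2-state model: present or absent
```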

11
Face Detection, Feature Location and Feature
Extraction
  • Initial frame:
  • Individual features are automatically detected and approximately located
  • The locations of important feature points are then manually adjusted
  • Remaining frames of the sequence:
  • Permanent and transient features are automatically detected and tracked (a tracking-loop sketch follows)
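A hedged sketch of the initialize-then-track scheme above; detect_features, manual_adjust, and track_features are placeholder names, not functions from the AFA system.

```python
def track_sequence(frames, detect_features, manual_adjust, track_features):
    """Fit the feature templates on the first frame (with manual refinement),
    then track them automatically through the rest of the sequence."""
    templates = detect_features(frames[0])       # automatic coarse localization
    templates = manual_adjust(templates)         # operator adjusts key feature points
    tracked = [templates]
    for frame in frames[1:]:
        templates = track_features(frame, templates)   # fully automatic tracking
        tracked.append(templates)
    return tracked
```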

12
Examples of Face Features
  • The iris can still be tracked for a half-open eye (a half-circle mask is used)
  • Furrows differ for different AUs
  • Wrinkles are detected in specific regions (a Canny edge detector is used); a detection sketch follows
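A minimal sketch of region-based wrinkle detection with a Canny edge detector, as mentioned above. OpenCV's cv2.Canny is used for illustration; the thresholds, the rectangular region format, and the edge-count criterion are assumptions rather than details from the paper.

```python
import cv2

def detect_wrinkles(gray_frame, region, low=50, high=150):
    """Count Canny edge pixels inside a rectangular wrinkle region (x, y, w, h);
    a large count suggests the transient furrow is present in this frame."""
    x, y, w, h = region
    roi = gray_frame[y:y + h, x:x + w]
    edges = cv2.Canny(roi, low, high)   # hypothetical thresholds
    return int((edges > 0).sum())
```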

13
AUs Recognition
  • Facial features are grouped into two sets:
  • Upper-face feature set: 15 parameters (eyes, brows, cheeks, possible furrows)
  • Lower-face feature set: 9 parameters (lips, possible furrows)
  • Features are geometrically normalized
  • Classifier design:
  • Two neural-network-based classifiers, one for the upper face and one for the lower face
  • Multiple output nodes can be excited for an AU combination, so the classifiers can respond to both single AUs and AU combinations (a sketch follows)
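A hedged sketch of the classifier idea described above: a small feed-forward network whose sigmoid output nodes each stand for one AU, so several outputs can fire simultaneously for an AU combination. The layer sizes, random weights, and 0.5 threshold are assumptions; the paper's actual architecture and training are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS = 15    # upper-face feature set: 15 normalized parameters
N_HIDDEN = 6     # hypothetical hidden-layer size
N_OUTPUTS = 7    # one sigmoid output per AU (illustrative count)

W1 = rng.normal(size=(N_HIDDEN, N_INPUTS))   # untrained weights, for shape only
W2 = rng.normal(size=(N_OUTPUTS, N_HIDDEN))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_aus(features, threshold=0.5):
    """Forward pass; every output node above the threshold is a recognized AU."""
    hidden = np.tanh(W1 @ features)
    outputs = sigmoid(W2 @ hidden)
    return {i for i, p in enumerate(outputs) if p > threshold}

print(predict_aus(rng.normal(size=N_INPUTS)))   # e.g. {0, 3} marks an AU combination
```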

14
AUs Recognition (Contd.)
(Figure: the upper-face and lower-face feature parameters, and the neural-network classifier for the upper face.)
15
Experimental Evaluation
  • Databases:
  • Cohn-Kanade AU-Coded Face Expression Image
    Database
  • Ekman-Hager Facial Action Exemplars
  • Total recognition rate

16
Compared with Other Methods
17
Compared with Other Methods (Contd.)
18
Conclusion and Discussions
  • The degree of manual preprocessing is reduced
  • In-plane and limited out-of-plane head motion can be handled
  • The facial feature tracker is efficient (< 1 sec/frame)
  • Multi-state face-component models increase the robustness and accuracy of the feature representation
  • More AUs are recognized (singly or in combination)

19
References
  • Tian, Y., Kanade, T., and Cohn, J. Recognizing action units for facial expression analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, February 2001, pp. 97-115.
  • Bartlett, M.S., Hager, J.C., Ekman, P., and Sejnowski, T.J. Measuring facial expressions by computer image analysis. Psychophysiology 36, 1999, pp. 253-263.
  • Donato, G.L., Bartlett, M.S., Hager, J.C., Ekman, P., and Sejnowski, T.J. Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence 21(10), 1999, pp. 974-989.