annotation of emotions in meetings in the AMI project

1
annotation of emotions in meetings in the AMI project
  • Roeland Ordelman, Dirk Heylen
  • Human Media Interaction
  • University of Twente
  • The Netherlands

2
overview
  • on the AMI project: 100 hours of audio/video
    recordings of project meetings
  • on annotating the emotional state of the
    participants in the meetings
  • on emotion annotation in AMI
  • on implementing the annotation tool in the NXT
    framework

3
AMI in brief
  • European Integrated Project of the IST 6th FWP,
    initiated in January 2004 and involving 15
    partners
  • aims at advancing the state-of-the-art in
    important basic technologies such as human-human
    communication modeling, speech recognition,
    computer vision, multimedia indexing and
    retrieval
  • produces tools for off-line and on-line browsing
    of multi-modal meeting data, including meeting
    structure analysis and summarizing functions
  • makes recorded and annotated multimodal meeting
    data widely available for the European research
    community, thereby contributing to the research
    infrastructure in the field

4
on the AMI meetings
  • scenario-based meetings
  • participants are asked to carry out a certain
    task and are provided with role-restricted
    information
  • a design project involving one particular task
    (remote control design)
  • 35 hours (48 meetings)
  • real design project meetings
  • meetings on real design projects, such as
    engineering student projects, brochure or
    poster design, etc.
  • 35 hours
  • other real or scenario-based meetings
  • cover a wider variety of meeting types, topics,
    behaviours, etc.
  • 30 hours

5
on the recording of the meetings
  • smart meeting rooms at three different sites
    (TNO, Edinburgh, IDIAP)
  • video (side camera, close up)
  • audio (mic-array, lapel, headset, manikin)
  • whiteboard strokes
  • pen

6
ami annotations
  • many features of meeting interactions are
    annotated
  • speech
  • gestures
  • dialogue acts
  • posture
  • emotion
  • information is used for (at least) three
    purposes:
  • primary meta-data source for multi-featured
    browsing of the recorded meetings.
  • training recognition algorithms that eventually
    should be able to provide automatically generated
    meta-data.
  • evidence on the basis of which theoretical models
    of human multi-party interaction can be
    developed.

7
properties of the data
  • 100 hours of meeting data
  • emotion annotation of every single speaker in a
    meeting: 4 x 100 hours!
  • only a very small proportion can be expected to
    show emotional states in any strong sense
  • the neutral state will cover much of the data

8
emotion annotation
  • no general agreement on how to annotate or label
    emotional content in a natural database
  • a number of emotion annotation or labeling
    schemes have been proposed in the literature
  • given the gradations and subtlety of emotions
    occurring in natural data, the labeling of
    emotion using category labels is not
    straightforward

9
emotion annotation in AMI
  • Given
  • the amount of data to be annotated, and
  • the expected gradations and subtlety of emotions
    occurring in meeting data
  • a dimensional labeling approach complemented by a
    categorical labeling scheme seemed most
    appropriate in the context of AMI
  • the FeelTrace software developed at Queen's
    University Belfast (Cowie et al.) is reported to
    produce good-quality annotations within a
    reasonable amount of time

10
FeelTrace
  • judge the emotional experience of the
    participants in the meetings on two dimensions:
    arousal and valence.
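
A rough, illustrative sketch of how such a dimensional trace could be represented as data: a sequence of timestamped (valence, arousal) points inside the unit circle. The representation and the sample values below are assumptions for illustration, not the actual FeelTrace file format.

    from dataclasses import dataclass

    @dataclass
    class TraceSample:
        """One cursor sample: time in seconds, valence and arousal in [-1.0, 1.0]."""
        time: float
        valence: float   # negative .. positive
        arousal: float   # passive .. active

    # A short hypothetical trace: the annotator moves from neutral towards
    # a positive, mildly active state (e.g., "cheerful").
    trace = [
        TraceSample(0.0, 0.00, 0.00),
        TraceSample(0.5, 0.20, 0.10),
        TraceSample(1.0, 0.45, 0.30),
    ]

    # FeelTrace presents the emotion space as a circle, so samples are
    # expected to stay inside (or on) the unit circle.
    def in_circle(s: TraceSample) -> bool:
        return s.valence ** 2 + s.arousal ** 2 <= 1.0

    assert all(in_circle(s) for s in trace)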

11
annotation with FeelTrace in AMI
  • survey on meeting specific landmarks
  • pilot annotations
  • re-implementation of FeelTrace

12
landmarks survey (1)
  • (main investigator Vincent Wan, University of
    Sheffield)
  • Method
  • list of 243 terms describing emotions
  • participants (37) had to select twenty emotions
    that they most frequently perceived in their
    meetings
  • participants from various companies and with
    various job descriptions, including lecturers,
    researchers, managers, secretaries and students.
  • the 243 terms were clustered by meaning into
    groups. The most frequently chosen one or two
    labels from each group were shortlisted. Taking
    some labels from each group ensures sufficient
    coverage of the emotion space.

13
landmarks survey (2)
  • list of 26 meeting-domain-specific emotional
    words
  • at ease, bored, joking, annoyed, nervous,
    satisfied, frustrated, amused, relaxed,
    interested, cheerful, uninterested, disappointed,
    agreeable, contemplative, encouraging,
    sceptical, friendly, attentive, confused,
    confident, decisive, impatient, concerned,
    serious, curious

14
landmarks survey (3)
  • second survey to determine where each of the
    shortlisted labels should appear in FeelTrace
    emotional space
  • participants were first presented with the five
    labels anger, irritation, sadness, happiness and
    contentment (so that participants unfamiliar
    with FeelTrace would get some minimal experience
    in its use).
  • emotional words were presented twice: the first
    time to allow the participant to gain additional
    training and the second time to collect data.
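
A hedged sketch of how the placements collected in this second survey could be turned into landmark coordinates: average each label's placements over participants in the valence-arousal plane. The label names come from the shortlist above, but the numeric placements and the simple averaging step are invented for illustration, not the survey's actual analysis.

    from statistics import mean

    # Hypothetical placements: label -> list of (valence, arousal) points,
    # one per survey participant (values invented for illustration).
    placements = {
        "bored":      [(-0.3, -0.6), (-0.4, -0.7), (-0.2, -0.5)],
        "cheerful":   [(0.6, 0.4), (0.7, 0.5), (0.5, 0.3)],
        "frustrated": [(-0.6, 0.5), (-0.5, 0.6)],
    }

    def landmark(points):
        """Mean position of one label in the valence-arousal plane."""
        return (mean(v for v, _ in points), mean(a for _, a in points))

    landmarks = {label: landmark(pts) for label, pts in placements.items()}
    for label, (v, a) in landmarks.items():
        print(f"{label:12s} valence={v:+.2f} arousal={a:+.2f}")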

15
example results 1 (figure)
16
example results 2 (figure)
17
example results 3 (figure)
18
resulting landmark distribution (figure)
19
annotation trials
  • we have 15-20 annotators available
  • coming weeks: first annotation trials
  • what do we want to learn?
  • inter-annotator agreement (see the sketch after
    this list)
  • distribution of emotional states in meeting data
    (how much is neutral)
  • annotator experience with tool
  • effect of using/not using landmarks
  • validation of the manual, annotation area, etc.
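
One way the inter-annotator agreement mentioned above could be quantified for continuous traces is to resample two annotators' traces onto a common time grid and correlate them per dimension. The sketch below illustrates that idea with invented values; it is not the evaluation procedure actually used in the trials.

    import math

    def pearson(xs, ys):
        """Pearson correlation between two equally long sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical valence traces of two annotators, already resampled
    # onto the same one-second grid for a short segment.
    annotator_a = [0.0, 0.1, 0.3, 0.4, 0.2, 0.0]
    annotator_b = [0.0, 0.2, 0.25, 0.5, 0.1, -0.1]

    print(f"valence agreement r = {pearson(annotator_a, annotator_b):.2f}")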

20
trial annotation set-up
  • set of 4 meetings, each of about 20 minutes, 4
    speakers per meeting
  • 10 minute segments (0-10,10-20)
  • (opt.) dummy pass with FeelTrace
  • official pass with FeelTrace
  • with and without landmarks
  • no categorical labeling (yet)

21
implementation in NXT
  • the Belfast implementation of the dimensional
    approach has some limitations
  • no cross-platform support
  • cannot easily be tailored to the specific needs
    of different stakeholders of the annotations
    (e.g., additional categorical labeling of longer
    segments is hard in the current setup)

22
stakeholders of annotation
  • corpus developer
  • defines the structure of the corpus, maintains
    the data, and takes care of a proper data/tool
    distribution (CVS, validation, time)
  • corpus annotator
  • needs to know as little as possible about
    configuration, installation and version control
    issues, only the annotation process itself is of
    concern for the annotator
  • data consumer
  • interested in the annotations for analysis and
    may want to configure the annotation process in
    detail (tool functionalities such as redo,
    fast-forward, landmarks/no-landmarks, etc.)
  • tool developer
  • creates the tool that serves the needs of the
    other users, keeping technical issues in mind

23
NXT
  • NITE XML Toolkit
  • defines a data storage format that can easily be
    shared across a multitude of annotation and
    analysis tools for the many different aspects of
    a multi-modal corpus.
  • the NXT libraries provide many ready-made
    components that facilitate easy development of
    new tools.
  • expertise readily available at University of
    Twente
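
For illustration only, the sketch below builds a small stand-off-style XML fragment of the kind NXT works with, using Python's standard library. The element and attribute names (emotion-sample, starttime, valence, ...) are assumptions chosen for readability, not the actual AMI/NXT schema.

    import xml.etree.ElementTree as ET

    # Illustrative stand-off emotion annotation for one speaker in one
    # observation (names are assumptions, not the real AMI/NXT schema).
    root = ET.Element("emotion-annotation",
                      {"observation": "meeting-x", "agent": "speaker-A"})
    ET.SubElement(root, "emotion-sample", {
        "id": "em.1",
        "starttime": "12.40",
        "endtime": "12.90",
        "valence": "0.45",
        "arousal": "0.30",
    })

    print(ET.tostring(root, encoding="unicode"))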

24
draft list of requirements
  • easy distribution of data (and tools) to and from
    the annotators (e.g., annotate certain segments
    of a file, CVS functionality, batch
    functionality)
  • easy management of multiple annotators, possibly
    working on the same data (e.g., CVS
    functionality)
  • validation functionality or possibilities to plug
    this into the process
  • as many input formats as possible on as many
    platforms as possible
  • customizable landmarks, dimensions (1D, 2D, 3D),
    shortcuts
  • categorical labeling options within the tool
  • video control (fast-forward/backward)
  • progress bar
  • replay annotation aligned with video
  • selection of multiple video signals (close up,
    wide angle)
  • color coding of the 2D space (provide a priori
    color feedback instead of feedback via a changing
    cursor color; see the sketch after this list)
  • easy configuration in general
  • open point: discuss requirements with
    AMI/HUMAINE researchers
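
As a sketch of the color-coding requirement above, one possible a priori mapping colors the valence-arousal circle itself: hue follows the angle around the circle and saturation follows the distance from the neutral centre, so a region's color is visible before the cursor reaches it. The specific mapping is an assumption, not the scheme used by FeelTrace.

    import colorsys
    import math

    def space_color(valence, arousal):
        """Map a point in the valence-arousal circle to an RGB color.

        Hue follows the angle around the circle, saturation the distance
        from the neutral centre (an illustrative choice, not FeelTrace's).
        """
        angle = math.atan2(arousal, valence)            # -pi .. pi
        hue = (angle + math.pi) / (2 * math.pi)         # 0 .. 1
        sat = min(1.0, math.hypot(valence, arousal))    # neutral centre stays unsaturated
        return colorsys.hsv_to_rgb(hue, sat, 1.0)

    print(space_color(0.0, 0.0))   # neutral -> white
    print(space_color(0.7, 0.5))   # active-positive -> a saturated hue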

25
roadmap
  • starting first trial annotations
  • investigate trial results
  • discuss re-implementation of FeelTrace in NXT
    with interested parties and create a first
    implementation version
  • follow-up trials, FeelTrace-NXT versions
  • monitor the process and discuss results with
    researchers in the field (AMI, HUMAINE)