Toward Decision-Focused Summarisation: Decision Point Detection, Discussion Segmentation, and Linking

1
Toward Decision-Focused Summarisation: Decision
Point Detection, Discussion Segmentation, and
Linking to Abstracts
  • Pei-Yun (Sabrina) Hsueh
  • Meeting Modelling Workshop
  • Enschede, Netherlands
  • June 11th, 2007

2
Meeting Summarisation
Abstractive Summarisation: 30 minutes (1,000 DAs) → 5 decision notes
Extractive Summarisation: 30 minutes (1,000 DAs) → 3 minutes (100 DAs)
3
Meeting Summarisation (Extractive)
  • Obstacles observed
  • Still too long
  • Semantic gap between unconnected extracted
    dialogue acts
  • Unexpected topic shifts
  • Unresolved anaphora

Extractive Summarisation: 30 minutes (1,000 DAs) → 3 minutes (100 DAs)
4
Meeting Summarisation (Abstractive)
  • Obstacles observed
  • Although self-contained, it is difficult to trace
    back where the relevant discussions have taken
    place.

Abstractive Summarisation: 30 minutes (1,000 DAs) → 5 decision notes
5
Why focus on argumentation?
  • For the extractive summary task
  • Produce a more focused version
  • For the abstractive summary task
  • Provide reference points for human-generated
    meeting minutes
  • Argumentation outcomes (e.g., decisions) are
    important to the re-use of meeting archives
    (Pallota et al., 2005; Rienks et al., 2005; AMI
    WP6 deliverable, 2005)

6
Pallota et al., ACL 2007
  • Recovering information about the argumentative
    process in a discussion (e.g., decision points) is
    difficult,
  • even when a standard keyword search utility is
    provided.

7
Decision Detection and Tracking: The Three Tasks
  • (1) Decision-related dialogue act recognition
  • (2) Decision extract contextualization
  • Decision discussion segmentation and labelling
  • (3) Decision summary linking
  • Link disambiguation to decision abstracts

Step 1: Decision DA Recognition (identify decision points)
Step 2: DECSEG Segmentation and Labelling (identify discussion topics)
Step 3: Decision Summary Linking
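The three-step pipeline can be sketched end to end as follows. Every class, function, cue word, and example utterance below is invented for illustration; this is a toy sketch, not the actual AMI implementation.

```python
from dataclasses import dataclass

@dataclass
class DialogueAct:
    text: str
    is_decision: bool = False   # set by step 1

@dataclass
class DecSeg:
    das: list                   # the dialogue acts in this segment
    linked_note: str = ""       # set by step 3

def recognise_decision_das(das):
    """Step 1 (toy rule): flag DAs containing decision cue words."""
    cues = ("decide", "agreed", "let's go with")
    for da in das:
        da.is_decision = any(c in da.text.lower() for c in cues)
    return [da for da in das if da.is_decision]

def segment_discussions(das, decision_das):
    """Step 2 (toy rule): one DECSEG of +/-1 DA around each decision DA."""
    return [DecSeg(das=das[max(0, das.index(d) - 1):das.index(d) + 2])
            for d in decision_das]

def link_to_abstract(segs, notes):
    """Step 3 (toy rule): link each DECSEG to the note sharing most words."""
    for seg in segs:
        words = set(" ".join(da.text.lower() for da in seg.das).split())
        seg.linked_note = max(
            notes, key=lambda n: len(words & set(n.lower().split())))
    return segs

das = [DialogueAct("What colour should the remote be?"),
       DialogueAct("We decided to make the remote yellow."),
       DialogueAct("Okay, sounds good.")]
decision_das = recognise_decision_das(das)
segs = link_to_abstract(segment_discussions(das, decision_das),
                        ["The remote will be yellow.", "Budget is fixed."])
print(segs[0].linked_note)
```

Each toy step stands in for a trained model in the real system; the point is only the data flow from DAs to segments to abstract links.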
8
My Approach (1): Decision-related DA Recognition
  • Further reduce the general DA extracts by 90% by
    identifying only those related to decisions.

Extractive Summarisation: 3 minutes (100 DAs) →
Decision DA Recognition → 30 seconds (14 DAs)
9
Proposed Solution (2): Contextualized Decision
Extracts
  • Decision Discussion Segmentation and Labelling
  • Bridge semantic gaps by providing just enough
    context
  • Detect topic shifts
  • Context for anaphora resolution

Extractive Summarisation: 3 minutes (100 DAs) →
Decision DA Recognition → Decision Discussion
Segmentation → 30 seconds (14 DAs)
10
Proposed Solution (3): Links to Decision Summary
  • Decision Summary Linking
  • Provide indicators to relevant decision
    discussion segments for each decision note.

Abstractive Summarisation: decision discussion
segments (DECSEGs) connected to decision abstracts
by decision links (DECLINKs)
11
Meeting Summarisation
Extractive Summarisation → Decision DA Recognition →
Decision Discussion Segmentation → Decision Summary
Linking → Abstractive Summarisation
30 seconds (14 DAs)
12
Research Issues
  • Multiple
  • Coverage
  • Will a more focused extractive summary help users
    to find information more efficiently and
    effectively?

13
Decision-Focused Summarisation Three Tasks
  • (1) Decision DA Recognition
  • Recognize where people are making decisions in a
    meeting

Step 1: Decision DA Recognition (identify decision points)
14
Decision-Focused Summarisation Three Tasks
  • (2) Discussion Segmentation and Labelling
  • Identify boundaries of DECSEGs
  • Determine the topic of the identified DECSEGs

Step 1: Decision DA Recognition (identify decision points)
Step 2: Discussion Segmentation and Labelling (identify discussion topics)
15
Decision-Focused Summarisation Three Tasks
  • (3) Decision Summary Linking
  • Link those identified discussion segments to
    their most closely related decision note in the
    abstractive summary

Step 1: Decision DA Recognition (identify decision points)
Step 2: DECSEG Segmentation and Labelling (identify discussion topics)
Step 3: Decision Summary Linking
16
Decision DA Recognition
  • Hsueh, P. and Moore, J. (2007). What Decisions
    Have You Made? Automatic Decision Detection in
    Conversational Speech. In Proceedings of
    NAACL/HLT 2007.

17
DECSEG Segmentation and Labelling
  • Motivation
  • Being able to detect decision-related DAs is not
    enough to interpret what a detected decision
    discussion is about.
  • E.g., "Um, we have decided not to worry about
    that for now."
  • → We need to know context!
  • However, how far ahead or behind we should look
    for context is a question left unanswered.

18
DECSEG Segmentation
  • DECSEG Annotation
  • The regions where participants are making the
    decisions that are summary-worthy.
  • E.g., the decision abstract of Meeting
    ES2008d

19
Annotation Guideline
  • Segmentation
  • Annotators go through the abstract and the
    meeting transcripts (along with audio/video
    recordings) to familiarize themselves with the
    meeting.
  • On the transcript, the previously annotated
    decision-related dialogue acts are highlighted.
  • Annotators determine the region in which the
    meeting participants are making the decisions
    described in the abstracts.

20
Why this round of annotation?
  • Train an ML classifier to identify segment
    boundaries automatically
  • Also, we are interested in
  • Analysing the functional roles (e.g., recap) of
    decision discussion segments
  • Analysing the difference between the discussion
    of decisions handed down by top management and
    the discussion of decisions made internally.

21
Why this round of annotation?
  • Find conventional expressions that speakers use
    to initiate, break, resume, and end a discussion.
  • Initiate, e.g., "Well", "Okay"
  • Break, e.g., bringing up another question
  • Resume, e.g., "so have we concluded on that
    design feature yet?"
  • End, e.g., more than two parties expressing
    "Okay", "Uh", "Umm".

22
Pilot Annotation
  • 2 coders, 2 series (8 meetings)
  • Percentage agreement
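For reference, percentage agreement on segment boundaries can be computed by comparing the two coders' choices at every potential boundary site (each gap between dialogue acts); the coder data below is invented, not the pilot data from these slides.

```python
def boundary_agreement(coder_a, coder_b, n_gaps):
    """Percentage agreement on segment boundaries: at each of the n_gaps
    potential boundary sites, do the two coders make the same choice?"""
    same = sum((i in coder_a) == (i in coder_b) for i in range(n_gaps))
    return same / n_gaps

# Invented example: boundary positions chosen by two coders over 10 gaps.
a = {2, 5, 8}
b = {2, 6, 8}
print(boundary_agreement(a, b, 10))  # the coders disagree only at gaps 5 and 6
```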

23
Pilot Annotation
  • Level: not-so-good agreement
  • Maybe this does not matter, as we are going to
    flatten the structure later on.
  • Segmentation: just-fine agreement on boundaries
  • Possibly improved by further restricting
    statement-like sub-segments.
  • Confusion over when a discussion segment should
    be broken into two sub-segments
  • We need better examples of what counts as
    disregardable discussion
  • Don't throw away sidetracked discussion segments
    about another decision

24
Pilot Annotation
  • Labelling: good agreement on which decisions an
    identified segment is associated with.
  • External decision vs. recapped decision
    segments: some confusion
  • Should a decision carried over from the
    conclusion of a previous meeting be an external
    decision or a recapped one?
  • We may consider merging the two categories later.
  • Overall: 18.9% (21.0%) external decision
    segments; 22.2% (19.8%) recap segments; 34.4%
    (28.4%) merged.

25
(No Transcript)
26
Two Major Areas of Confusion
  • (1) When should a discussion be separated into
    two discontinuous discussion segments?
  • (2) Whether to include the final part (such as
    clarification or argument refinement) of a
    discussion
  • If that discussion may have changed the decision,
    do include it.

27
Guideline modification needed
  • Always look for specific dialogue acts wherein
    speakers are trying to initiate, resume, or end a
    discussion.
  • Avoid marking one utterance as a segment.
  • Give some more examples of how to determine the
    starting and end points of a decision
    discussion.
  • Mark an end at the last response to a particular
    decision.
  • E.g., "Okay", "Yeah", "Yeah".
  • Whether to include the extended discussion of a
    decision?
  • E.g., technical difficulty of a particular design
    feature
  • Give examples of disregardable and
    non-disregardable sidetracked discussion.
  • Where some clarification of the decision is
    taking place? (What is the safety range to keep
    it?)
  • Where some other decision discussions are taking
    place?
  • Add more functional roles of segments
  • Initial proposal, counter proposal, any others?
  • Keep this for the next round of annotation?

28
Questions remaining
  • Are there really local features specific to
    decision discussion segment boundaries?
  • Or can we just use features that have been
    proposed previously for finding sub-topic segment
    boundaries?
  • Lexical, audio (e.g., pitch, energy), video
    (e.g., motion), context (e.g., dialogue act type,
    speaker role), conversational features (e.g.,
    speaker activity change, overlap rate, pause,
    lexical cohesion statistics)
  • Or some even finer-level discourse segments, such
    as those for anaphora resolution?

29
Decision Discussion vs. Topic Segments
  • 34.0% of the annotated boundaries in the ES2008
    series are near subtopic segment boundaries
    (18.9% with the beginning of segments and 24.5%
    with the end of segments)
  • Very few cases span multiple subtopic segments

30
Ground Truth Annotation
  • 1 annotator
  • 48 meetings
  • On average, 5.17 decision discussion segments
    (DECSEGs) per meeting.
  • 1.45 decision links per DECSEG
  • 1 minute per DECSEG (stddev 1.2 minutes)
  • External: 15
  • Recap: 4

31
Ground Truth Annotation
  • Overlap with topic boundaries
  • Fully corresponded: 0
  • Nearly corresponded (ANY) (<20s)
  • Near where a topic has been initiated (START)
  • Near where a topic has been concluded or
    interrupted (END)

32
Three goals
  • Find proper context for disambiguating decision
    links
  • Find uncaptured segments of decision discussion
  • Enable an automatic check of the incorrectly
    marked decision-related dialogue acts

33
(No Transcript)
34
Decision Discussion Segmentation
Feature Extraction (lexical, audio/video, and
interaction features) → Boundary Model →
Discussion Boundary / Non-Boundary
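As a sketch of one signal the boundary model could draw on, here is a minimal lexical-cohesion scorer in the spirit of TextTiling, not the actual AMI classifier: each gap between dialogue acts is scored by the cosine similarity of word counts in small windows on either side, and low-cohesion gaps become boundary candidates. The window size, threshold, and example DAs are all invented.

```python
import math
from collections import Counter

def cosine(c1, c2):
    """Cosine similarity between two word-count vectors."""
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def boundary_candidates(das, window=2, threshold=0.35):
    """Flag gaps whose lexical cohesion drops below the (invented) threshold."""
    gaps = []
    for i in range(window, len(das) - window + 1):
        left = Counter(w for da in das[i - window:i] for w in da.lower().split())
        right = Counter(w for da in das[i:i + window] for w in da.lower().split())
        if cosine(left, right) < threshold:
            gaps.append(i)
    return gaps

# Invented example: the topic shifts from colour to budget before DA 3.
das = ["we need a yellow remote", "yellow is the corporate colour",
       "okay yellow it is", "now the budget", "the budget is twelve euros"]
print(boundary_candidates(das))
```

A real boundary model would combine this with the audio/video and interaction features listed above in a trained classifier rather than a fixed threshold.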
35
Decision Discussion Labelling
  • Text classification task

Feature Extraction (lexical features) → Multi-Class
Topic Classification, with one language model per
topic (e.g., Marketing expert presentation, UI
expert presentation, Discussion, Budget, Target
Market, etc.)
E.g., a decision discussion about the target market
is labelled "Target Market".
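A minimal unigram language-model classifier in the spirit of this slide: the two topic labels appear on the slide, but the training snippets, the add-one smoothing choice, and the test segment are invented.

```python
import math
from collections import Counter

def train_lm(texts):
    """Unigram language model with add-one smoothing over the training texts."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total, vocab = sum(counts.values()), len(counts) + 1
    return lambda w: (counts[w] + 1) / (total + vocab)

# One language model per topic label; the training snippets are invented.
models = {
    "Target Market": train_lm(["young users fashion market",
                               "target market teenagers"]),
    "Budget": train_lm(["twelve fifty euros production cost",
                        "budget per remote"]),
}

def label_segment(text):
    """Assign the topic whose model gives the segment the highest
    log-probability over its (lexical) features."""
    words = text.lower().split()
    return max(models, key=lambda m: sum(math.log(models[m](w)) for w in words))

print(label_segment("the target market is young fashion-conscious users"))
```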
36
Decision Summary Linking
  • The task is similar to word sense disambiguation
    (WSD)
  • Disambiguating the link to the decision points.
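A Lesk-style word-overlap linker illustrates the WSD analogy: just as a word sense is chosen by overlap with its context, a discussion segment is "disambiguated" to the decision note it shares the most words with. The notes and segment text below are invented examples, not AMI data.

```python
def link_decseg(segment_text, decision_notes):
    """Lesk-style disambiguation: pick the decision note sharing the
    most words with the discussion segment."""
    seg_words = set(segment_text.lower().split())
    return max(decision_notes,
               key=lambda note: len(seg_words & set(note.lower().split())))

# Invented decision notes and segment transcript.
notes = ["The remote will use a rubber case.",
         "The target price is 12.50 euros."]
seg = "so we agreed the case should be rubber right yeah rubber"
print(link_decseg(seg, notes))
```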

37
Ultimate goal
38
(No Transcript)