1
Generation of Referring Expressions: Modeling Partner Effects
  • Surabhi Gupta
  • Advisor: Amanda Stent
  • Department of Computer Science

2
Outline
  • Introduction
  • Data
  • Previous work
  • Modeling partner effects
  • Generating NP postmodifiers
  • A little statistical experiment
  • Discussion and Future Work

3
Referring Expressions
  • A referring expression denotes (or points to) an object in the world of a discourse.
  • Examples of referring expressions include "the red chair", "the 400 dollar red chair" and "5 red chairs".
  • Referring expressions are usually noun phrases.
  • Improper construction of a referring expression can result in:
  • referring expressions that are ambiguous (e.g. "the book" when there are two books).
  • referring expressions that are too descriptive and lead to false implicatures (e.g. "the 400 dollar chair" when there is only one chair).

4
Structure of a Noun Phrase
  • A definite/indefinite noun phrase is constructed of:
  • an (optional) determiner or quantifier, e.g. "a", "three"
  • a number of premodifiers (adjectives, adverbs, noun modifiers), e.g. "red"
  • a number of postmodifiers (prepositional phrases, relative clauses), e.g. "worth 400 dollars", "that is red"
  • Other noun phrases include pronouns, proper nouns and deictics (a data-structure sketch follows below).
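
To make this structure concrete, here is a minimal Python sketch (the class and its field names are our own illustration, not from the slides):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NounPhrase:
    """A definite/indefinite noun phrase, e.g. "three red chairs worth 400 dollars"."""
    determiner: Optional[str] = None                         # e.g. "a", "three"
    premodifiers: List[str] = field(default_factory=list)    # e.g. ["red"]
    head: str = ""                                           # e.g. "chairs"
    postmodifiers: List[str] = field(default_factory=list)   # e.g. ["worth 400 dollars"]

    def render(self) -> str:
        parts = [self.determiner] if self.determiner else []
        parts += self.premodifiers + [self.head] + self.postmodifiers
        return " ".join(parts)

print(NounPhrase("three", ["red"], "chairs", ["worth 400 dollars"]).render())
# -> three red chairs worth 400 dollars
```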

5
Adaptation in Conversation
  • When people talk with each other, they adapt to the other's choice of referring expression (Clark 1996, Levinson 1983, Brennan 1987).
  • Example:
  • (A) Let's buy the 400 dollar red chair.
  • (B) That's a good idea. The chair matches the red table.
  • (A) The chair it is then.

6
Generation of Referring Expressions in Dialog
  • When a computer constructs human language, it is called generation (e.g. NewsBlaster summaries, or Google translation).
  • Generation for dialog must take the dialog partner (the human) into consideration.

7
Good Generation of Referring Expressions
  • The algorithm should generate a referring
    expression for which the human reader can
    identify the referent.
  • The algorithm should generate referring expressions that do not lead the human reader to make false implicatures (Grice 1975).
  • The algorithm should model how conversational
    partners adapt to each other.
  • The algorithm should be able to generate the
    whole range of referring expressions observed in
    discourse.
  • The algorithm should be computationally feasible.

8
Our Objective
  • We are building a model of referring expression
    generation that captures adaptation to partners
    in conversation.
  • Related work in this field does not model partner adaptation in dialog (Dale and Reiter 1995, Siddharthan and Copestake 2004).

9
Outline
  • Introduction
  • Data
  • Previous work
  • Modeling partner effects
  • Generating NP postmodifiers
  • A little statistical experiment
  • Discussion and Future Work

10
Data
  • Two corpora of spoken dialog rich in noun phrases:
  • Maptask: Speaker A gives Speaker B instructions for following a route on a map.
  • Coconut: two participants try to buy furniture using their combined inventories and money.
  • For each corpus, we:
  • automatically extracted the noun phrases
  • annotated the noun phrases by hand for referent (in a knowledge representation we built), type (noun phrase or pronoun), and whether the noun phrase was embedded in another noun phrase.

11
                         Coconut   Maptask
  Definite                   116      2118
  Indefinite                 967      1411
  1st person pronoun         440       563
  2nd person pronoun         165      1275
  3rd person pronoun          79       614
  Deictics                     0         0
  Proper nouns                 0         0
  Quantity nouns             291       160
  Mass nouns                   0         0
  No modifiers                13       113
  Not embedded               229      1633
  Embedded                   242        26
  Set constructions            0         0
  Not in KR                  612      1875
  NPs used                   471      1294
  Total                     1767      5986
12
Outline
  • Introduction
  • Data
  • Previous work
  • Modeling partner effects
  • Generating NP postmodifiers
  • A little statistical experiment
  • Discussion and Future Work

13
Algorithms Compared
  • Rule-based
  • Dale and Reiter 1995
  • With partner effects (x 2)
  • With postmodifier ordering (x 4)
  • Siddharthan and Copestake 2004
  • With partner effects (x 2)
  • With postmodifier ordering (x 4)
  • Statistical
  • Support Vector Machines

14
Rule-Based Algorithms
  • Terms used:
  • Contrast set: contains all the objects in the world of the discourse.
  • Preference list of attributes: the attributes that are known for the objects.
  • For Coconut: type, quantity, cost, color, state (e.g. "three green high tables worth 400").
  • Intended referent: the object in the world that we are trying to describe.

15
Dale and Reiter
  • Basic idea:
  • Specify the preference list by hand.
  • Repeat until all distractors in the contrast set are ruled out:
  • Add the intended referent's value for the next attribute in the preference list to the noun phrase being generated.

16
  • Example (a code sketch follows below):
  • Preference list: type, color, cost, quantity, state
  • Contrast set: 300 dollar red couch, 200 dollar green couch, 250 dollar brown table
  • Intended referent: 200 dollar green couch
  • Generated NP: "green couch"
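
A minimal Python sketch of this incremental selection (the dictionary representation of objects is our own illustration; Dale and Reiter 1995 give the full algorithm):

```python
def dale_and_reiter(referent, contrast_set, preference_list):
    """Incrementally add attribute values that rule out at least one distractor."""
    distractors = [obj for obj in contrast_set if obj is not referent]
    description = {}
    for attr in preference_list:
        if not distractors:                     # all competitors ruled out: stop
            break
        value = referent.get(attr)
        remaining = [d for d in distractors if d.get(attr) == value]
        if len(remaining) < len(distractors):   # attribute rules out a distractor
            description[attr] = value
            distractors = remaining
    return description

# The slide's example:
green_couch = {"type": "couch", "color": "green", "cost": 200}
red_couch   = {"type": "couch", "color": "red",   "cost": 300}
brown_table = {"type": "table", "color": "brown", "cost": 250}
world = [green_couch, red_couch, brown_table]
print(dale_and_reiter(green_couch, world, ["type", "color", "cost", "quantity", "state"]))
# -> {'type': 'couch', 'color': 'green'}, realized as "green couch"
```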

17
Siddharthan and Copestake
  • Basic idea: as in Dale and Reiter, but the preference list is reordered using synonyms and antonyms of the words in each attribute (a rough sketch follows below).
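
One rough way to sketch the lexical-contrast idea (our simplification, assuming NLTK's WordNet interface; this is not Siddharthan and Copestake's actual procedure) is to promote attributes whose value for the referent is a WordNet antonym of a distractor's value, since such attributes discriminate strongly:

```python
from nltk.corpus import wordnet as wn  # assumes nltk and its wordnet corpus are installed

def antonymous(w1, w2):
    """True if any WordNet sense of w1 lists w2 as an antonym."""
    return any(ant.name() == w2
               for syn in wn.synsets(w1)
               for lemma in syn.lemmas()
               for ant in lemma.antonyms())

def reorder(preference_list, referent, distractors):
    """Promote attributes whose referent value lexically contrasts with a distractor's."""
    def contrast(attr):
        return sum(antonymous(referent.get(attr, ""), d.get(attr, ""))
                   for d in distractors)
    # sorted() is stable, so attributes with equal contrast keep their original order
    return sorted(preference_list, key=contrast, reverse=True)
```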

18
Benefits of Rule-Based Algorithms
  • They reflect the way humans actually converse: humans use unnecessary attributes, and they begin producing a referring expression without scanning the entire list of distractors.
  • They do not attempt to find the optimal number of attributes; they simply go through the preference list and iteratively include those attributes that rule out at least one distractor from the contrast set.
  • There is no backtracking, and the head noun is always included.

19
Disadvantages of Rule-Based Algorithms
  • They don't generate the whole range of referring expressions:
  • ones with postmodifiers
  • pronouns
  • deictics
  • They don't model adaptation to partners.

20
Outline
  • Introduction
  • Data
  • Previous work
  • Modeling partner effects
  • Generating NP postmodifiers
  • A little statistical experiment
  • Discussion and Future Work

21
Adding Partner Effects
  • A rule-based algorithm.
  • Basic idea: see Dale and Reiter, and Siddharthan and Copestake.
  • The preference list is reordered to match the selection of attributes in previous mentions of the intended referent.
  • A variant additionally guarantees that attributes mentioned previously are included even if all the competitors have already been eliminated (a sketch follows below).
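
A minimal sketch of both versions (our illustration of the idea on the slide; `prior_mention_attrs` is a hypothetical record of which attributes earlier mentions of this referent used):

```python
def adapt_preference_list(preference_list, prior_mention_attrs):
    """Move attributes used in previous mentions of the referent to the front."""
    prior = [a for a in preference_list if a in prior_mention_attrs]
    rest  = [a for a in preference_list if a not in prior_mention_attrs]
    return prior + rest

def force_prior_attrs(description, referent, prior_mention_attrs):
    """Variant: include previously mentioned attributes even when all
    distractors have already been eliminated."""
    for attr in prior_mention_attrs:
        description.setdefault(attr, referent.get(attr))
    return description
```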

22
Evaluation
  • Metric: Correct / (Correct + Inserted + Deleted + Moved)
  • Example:
  • Human: "the big fat green cat"
  • Computer: "the green happy cat"
  • Correct: the, cat
  • Inserted: happy
  • Deleted: big, fat
  • Moved: green
  • Score: 2 / 6 (a scoring sketch follows below)
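
The metric can be computed directly; a small sketch reproducing the example above:

```python
def score(correct, inserted, deleted, moved):
    """Slide metric: correct tokens over all edit categories combined."""
    return correct / (correct + inserted + deleted + moved)

# Human: "the big fat green cat"; computer: "the green happy cat"
print(score(correct=2, inserted=1, deleted=2, moved=1))  # -> 0.3333... (2/6)
```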

23
Results
  • The variant of our partner effects algorithm performs significantly better than our baseline, Dale and Reiter, and Siddharthan and Copestake for both of the corpora used.

24
Outline
  • Introduction
  • Data
  • Previous work
  • Modeling partner effects
  • Generating NP postmodifiers
  • A little statistical experiment
  • Discussion and Future Work

25
Discussion and Conclusions
  • The corpus you choose makes a difference:
  • Maptask has few distractors, and there is no significant difference among the baseline, Dale and Reiter, and Siddharthan and Copestake.
  • Do partner effects make a difference?

26
References
  • Advaith Siddharthan and Ann Copestake. 2004. Generating referring expressions in open domains. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004), Barcelona, Spain.
  • H. P. Grice. 1975. Logic and conversation. In P. Cole and J. Morgan, editors, Syntax and Semantics, Vol. 3: Speech Acts, pages 43-58. New York: Academic Press.
  • Barbara Grosz and Candace Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.
  • Robert Dale and Ehud Reiter. 1995. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 19(2):233-263.

27
Acknowledgements
  • Dr. Amanda Stent, for all her time and effort during the last three years.
  • The Natural Language Processing Lab in Computer Science.
  • The Honors College, for giving me the chance to work on this year-long project.
  • NSF

28
Questions?
29
Outline
  • Introduction
  • Data
  • Previous work
  • Modeling partner effects
  • Generating NP postmodifiers
  • A little statistical experiment
  • Discussion and Future Work

30
Generating with Postmodifiers
  • Why? Because previous algorithms don't generate postmodifiers, yet they are a big part of the corpora we used.
  • Random: randomly decide whether the selected attribute should be realized as a postmodifier or a premodifier.
  • Unigrams: look at where the attribute occurs in relation to the type (the head noun) in the training data.
  • Bigrams: statistics over pairs of attributes, e.g. the probability of finding one attribute given another (a sketch of the unigram strategy follows below).
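
A minimal sketch of the unigram strategy (the counts below are invented placeholders; in the real system they would be gathered from the annotated corpora):

```python
from collections import Counter

# Hypothetical counts of where each attribute surfaced relative to the head noun.
position_counts = {
    "color": Counter(pre=180, post=12),
    "cost":  Counter(pre=35,  post=140),
}

def placement(attr):
    """Unigram strategy: realize the attribute on the side of the head noun
    where it was seen more often in training."""
    c = position_counts.get(attr, Counter())
    return "post" if c["post"] > c["pre"] else "pre"

print(placement("color"))  # -> "pre",  e.g. "green couch"
print(placement("cost"))   # -> "post", e.g. "couch worth 200 dollars"
```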

31
Results
32
Outline
  • Introduction
  • Data
  • Previous work
  • Modeling partner effects
  • Generating NP postmodifiers
  • A little statistical experiment
  • Discussion and Future Work

33
Support Vector Machines
  • SVMs are a family of machine learning algorithms for binary classification that have been applied to many NLP tasks.
  • We used a set of SVMs, one per attribute, each voting yes or no on using its attribute at this point in the noun phrase (a sketch follows below).
  • Maptask: 6 attributes; Coconut: 5 attributes.
  • We evaluated using:
  • 10-fold cross-validation for Maptask
  • 4-fold cross-validation for Coconut
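
A minimal sketch of the one-SVM-per-attribute setup using scikit-learn (the feature vectors and labels below are random placeholders; the slides do not specify the features that were actually used):

```python
import numpy as np
from sklearn.svm import SVC

attributes = ["type", "quantity", "cost", "color", "state"]  # Coconut's 5 attributes

# Placeholder training data: one feature vector per decision point in an NP,
# and a yes/no label per attribute (was the attribute used at that point?).
rng = np.random.default_rng(0)
X = rng.random((100, 8))
y = {a: rng.integers(0, 2, 100) for a in attributes}

classifiers = {a: SVC(kernel="linear").fit(X, y[a]) for a in attributes}

def vote(context_features):
    """Each per-attribute SVM votes on whether to include its attribute here."""
    x = np.asarray(context_features).reshape(1, -1)
    return [a for a, clf in classifiers.items() if clf.predict(x)[0] == 1]

print(vote(rng.random(8)))
```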

34
Evaluation
35
Results