1
Shallow Semantics
2
Semantics and Pragmatics
  • High-level Linguistics (the good stuff!)
  • Semantics: the study of meaning that can be
    determined from a sentence, phrase or word.
  • Pragmatics: the study of meaning as it depends
    on context (speaker, situation, dialogue history)

3
Language to (Simplistic) Logic
  • John went to the book store.
  • go(John, store1)
  • John bought a book.
  • buy(John,book1)
  • John gave the book to Mary.
  • give(John,book1,Mary)
  • Mary put the book on the table.
  • put(Mary,book1,on table1)
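A minimal sketch (not part of the original slides) of how these simplistic logical forms could be represented in code; the Fact tuple and the query helper are hypothetical names used only for illustration.

```python
# Each "fact" is a predicate name plus a tuple of arguments,
# mirroring the logical forms above.
from collections import namedtuple

Fact = namedtuple("Fact", ["predicate", "args"])

facts = [
    Fact("go",   ("John", "store1")),
    Fact("buy",  ("John", "book1")),
    Fact("give", ("John", "book1", "Mary")),
    Fact("put",  ("Mary", "book1", "on(table1)")),
]

def query(predicate_name, facts):
    """Return every fact whose predicate matches the given name."""
    return [f for f in facts if f.predicate == predicate_name]

print(query("give", facts))
# [Fact(predicate='give', args=('John', 'book1', 'Mary'))]
```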

4
What's missing?
  • Word sense disambiguation
  • Quantification
  • Coreference
  • Interpreting within a phrase
  • Many, many more issues
  • But it's still more than you get from parsing!

5
Some problems in shallow semantics
  • Identifying entities
    • noun-phrase chunking
    • named-entity recognition
    • coreference resolution
      (involves discourse/pragmatics too)
  • Identifying relationship names
    • Verb-phrase chunking
    • Predicate identification (step 0 of semantic role
      labeling)
    • Synonym resolution (e.g., get ≈ receive)
  • Identifying arguments to predicates
    • Information extraction
    • Argument identification (step 1 of semantic role
      labeling)
    • Assigning semantic roles (step 2 of semantic role
      labeling)
  • Sentiment classification
    • That is, does the relationship express an
      opinion?
    • If so, is the opinion positive or negative?

6
1. Identifying Entities
  • Named Entity Tagging: identify all the proper
    names in a text
  • Sally went to see "Up in the Air" at the local
    theater. (Sally → Person; "Up in the Air" → Film)
  • Noun Phrase Chunking: find all base noun phrases
  • (that is, noun phrases that don't have smaller
    noun phrases nested inside them)
  • Sally went to see "Up in the Air" at the local
    theater on Elm Street.
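As a concrete illustration of these two tasks, here is a short sketch using spaCy; the library, the en_core_web_sm model, and the exact entity labels it returns are assumptions of this example, not something the slides prescribe.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Sally went to see Up in the Air at the local theater on Elm Street.")

# Named-entity tagging: proper names plus coarse types (labels are model-dependent).
for ent in doc.ents:
    print(ent.text, ent.label_)

# Base noun-phrase chunking: noun phrases with no smaller NPs nested inside them.
for chunk in doc.noun_chunks:
    print(chunk.text)
```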

7
1. Identifying Entities (2)
  • Parsing: identify all phrase constituents, which
    will of course include all noun phrases.

[Parse tree for "Sally saw Up in the Air at the theater on Elm St.", showing
the S, VP, NP, and PP constituents; every noun phrase appears as a constituent.]
8
1. Identifying Entities (3)
  • Coreference Resolution: identify all references
    (a.k.a. mentions) of people, places and things in
    text, and determine which mentions are
    co-referential.
  • John stuck his foot in his mouth.

9
2. Identifying relationship names
  • Verb phrase chunking is the most common approach
  • Some issues:
  • 1. Often, prepositions/particles belong with the
    relation name
  • "You're ticking me off."
  • 2. Many relationships are expressed without a
    verb
  • "Jack Welch, CEO of GE, ..."
  • 3. Some verbs don't really express a meaningful
    relationship by themselves
  • "Jim is the father of 12 boys."
  • 4. Verb sense disambiguation
  • 5. Synonymy
  • "ticking off" ≈ "bothering"

10
2. Identifying relationship names (2)
  • Synonym Resolution
  • Discovery of Inference Rules from Text (DIRT)
    (Lin and Pantel, 2001)
  • 1. They collect millions of examples of
    (Subject, Verb, Object) triples by parsing a Web
    corpus.
  • 2. For a pair of verbs, v1 and v2, they compute
    mutual information scores between
    - the vector space model (VSM) for the subjects
      of v1 and the VSM for the subjects of v2
    - the VSM for the objects of v1 and the VSM for
      the objects of v2
  • 3. They cluster verbs with high MI scores
    between them
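A rough sketch of step 2 under simplifying assumptions: the triples below are toy data, and PMI-weighted vectors with cosine similarity stand in for the paper's exact mutual-information formulation.

```python
import math
from collections import Counter

# Toy (subject, verb, object) triples standing in for a parsed Web corpus.
triples = [("you", "give", "blood"), ("you", "donate", "blood"),
           ("members", "give", "money"), ("you", "donate", "money")]

def pmi_vector(verb, slot, triples):
    """PMI-weighted vector over the fillers of one slot (0=subject, 2=object) of a verb."""
    total = len(triples)
    verb_count = sum(1 for t in triples if t[1] == verb)
    fillers = Counter(t[slot] for t in triples if t[1] == verb)
    marginal = Counter(t[slot] for t in triples)
    return {w: math.log((c / total) / ((verb_count / total) * (marginal[w] / total)))
            for w, c in fillers.items()}

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Similarity of "give" and "donate": average over subject and object slots.
sim = (cosine(pmi_vector("give", 0, triples), pmi_vector("donate", 0, triples)) +
       cosine(pmi_vector("give", 2, triples), pmi_vector("donate", 2, triples))) / 2
print(round(sim, 3))
```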

[Table: sample subject and object fillers observed for "give" and "donate"
(e.g., you, members, gift, blood, money, car, hair, energy), illustrating the
overlapping contexts that make the two verbs look similar.]
See (Yates and Etzioni, JAIR 2009) for a more
recent approach using probabilistic models.
11
5. Sentiment Classification
  • Given a review (about a movie, hotel, Amazon
    product, etc.), a sentiment classification system
    tries to determine what opinions are expressed in
    the review.
  • Coarse-level objective: is the review positive,
    negative, or neutral overall?
  • Fine-grained objective: what are the positive
    aspects (according to the reviewer), and what are
    the negative aspects?
  • Question: what technique(s) would you use to
    solve these two problems? (One common option is
    sketched below.)
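One common answer to the coarse-level question is a bag-of-words classifier. The sketch below uses scikit-learn on a few made-up reviews; both the library choice and the toy data are assumptions of this example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled reviews, for illustration only.
reviews = ["great movie, loved the acting",
           "terrible plot and boring pacing",
           "the hotel room was clean and quiet",
           "rude staff and a dirty lobby"]
labels = ["pos", "neg", "pos", "neg"]

# Bag-of-words (unigrams + bigrams) features feeding a logistic regression.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["boring movie with terrible acting"]))
```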

12
Semantic Role Labeling
  • a.k.a. Shallow Semantic Parsing

13
Semantic Role Labeling
  • Semantic role labeling is the computational task
    of assigning semantic roles to phrases
  • It's usually divided into three subtasks:
  • Predicate identification
  • Argument identification
  • Argument classification (assigning semantic
    roles)

Example: "John broke the window with a hammer."
  John = Agent, broke = Pred, the window = Patient,
  with a hammer = Means (or instrument)
Token-level argument spans are marked with B-Arg/I-Arg (BIO-style) tags.
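A small sketch of how the token-level tags above can be turned into a predicate-argument structure; the role-per-span mapping is taken from the example, and the helper name is purely illustrative.

```python
tokens = ["John", "broke", "the", "window", "with", "a", "hammer"]
bio    = ["B-Arg", "Pred", "B-Arg", "I-Arg", "B-Arg", "I-Arg", "I-Arg"]
roles  = {0: "Agent", 2: "Patient", 4: "Means"}   # role of each B-Arg, keyed by token index

def collect_arguments(tokens, bio, roles):
    """Group B-Arg/I-Arg spans and attach the semantic role of each span."""
    args, span_start = {}, None
    for i, tag in enumerate(bio):
        if tag == "B-Arg":
            span_start = i
            args[roles[i]] = [tokens[i]]
        elif tag == "I-Arg" and span_start is not None:
            args[roles[span_start]].append(tokens[i])
    return {role: " ".join(words) for role, words in args.items()}

print(collect_arguments(tokens, bio, roles))
# {'Agent': 'John', 'Patient': 'the window', 'Means': 'with a hammer'}
```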
14
Same event - different sentences
  • John broke the window with a hammer.
  • John broke the window with the crack.
  • The hammer broke the window.
  • The window broke.

15
Same event - different syntactic frames
  • John broke the window with a hammer.
  • SUBJ VERB OBJ MODIFIER
  • John broke the window with the crack.
  • SUBJ VERB OBJ MODIFIER
  • The hammer broke the window.
  • SUBJ VERB OBJ
  • The window broke.
  • SUBJ VERB

16
Semantic role example
  • break(AGENT, INSTRUMENT, PATIENT)
  • John (AGENT) broke the window (PATIENT) with a
    hammer (INSTRUMENT).
  • The hammer (INSTRUMENT) broke the window
    (PATIENT).
  • The window (PATIENT) broke.
  • Fillmore '68, "The Case for Case"

17
  • John (AGENT, SUBJ) broke the window (PATIENT,
    OBJ) with a hammer (INSTRUMENT, MODIFIER).
  • The hammer (INSTRUMENT, SUBJ) broke the window
    (PATIENT, OBJ).
  • The window (PATIENT, SUBJ) broke.

18
Semantic roles
  • Semantic roles (or just "roles") are slots,
    belonging to a predicate, which arguments can
    fill.
  • There are different naming conventions, but
    one common set of names for semantic roles is
    agent, patient, means/instrument, ...
  • Some constraints:
  • 1. Only certain kinds of phrases can fill
    certain kinds of semantic roles
  • "with a crack" will never be an agent
  • But many are ambiguous
  • "hammer"? Patient or instrument?
  • 2. Syntax provides a clue, but it is not the
    full answer
  • Subject → Agent? Patient? Instrument?

19
Slot Filling
Argument classification fills the predicate's slots with phrases:
  broke → Pred, John → Agent, the window → Patient,
  with a hammer → Means (or instrument)
20
Slot Filling
Argument classification for "The hammer broke the window":
  broke → Pred, the hammer → Means (or instrument),
  the window → Patient; the Agent slot is unfilled
21
Slot Filling
Argument classification for "The window broke":
  broke → Pred, the window → Patient;
  the Agent and Means slots are unfilled
22
Slot Filling and Shallow Semantics
Filling the slots yields the shallow semantic form:
  John → Agent, the window → Patient, with a hammer → Means (or instrument)
  gives broke(John, the window, with a hammer)
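A minimal sketch (function and argument names are illustrative) of turning filled slots into the shallow semantic forms shown on this slide and the next, with ?x-style variables for unfilled slots.

```python
def shallow_form(predicate, slots, order=("Agent", "Patient", "Means")):
    """Render e.g. broke(Agent, Patient, Means), using ?x, ?y, ... for empty slots."""
    variables = iter("xyz")
    args = [slots.get(role) or "?" + next(variables) for role in order]
    return f"{predicate}({', '.join(args)})"

print(shallow_form("broke", {"Agent": "John", "Patient": "the window",
                             "Means": "with a hammer"}))
# broke(John, the window, with a hammer)
print(shallow_form("broke", {"Patient": "the window"}))
# broke(?x, the window, ?y)
```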
23
Slot Filling and Shallow Semantics
With only the window → Patient filled (and broke → Pred), the unfilled Agent
and Means slots become variables: broke(?x, the window, ?y)
24
Semantic Role Labeling Techniques
25
Semantic Role Labeling Techniques
  • We'll cover 3 approaches to SRL:
  • Basic (Gildea and Jurafsky, Comp. Ling. 2003)
  • Joint inference for argument structure (Toutanova
    et al., Comp. Ling. 2008)
  • Open-domain (Huang and Yates, ACL 2010)

26
1. Gildea and Jurafsky
Main idea: start with a parse tree, and try to
identify which constituents are arguments.
27
GJ (1)
  • Build a (probabilistic) classifier for
    predicting, for each constituent, which role it
    fills
  • Essentially a maximum-entropy classifier,
    although it's not described that way
  • Features for Argument Classification:
  • Phrase type of constituent
  • Governing category of NPs: S or VP
    (differentiates between subjects and objects)
  • Position w.r.t. predicate (before or after)
  • Voice of predicate (active or passive verb)
  • Head word of constituent
  • Parse tree path between predicate and constituent
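For concreteness, here is a sketch of what the argument-classification features for the constituent "with a hammer" might look like as a feature dictionary; the encoding (and the DictVectorizer suggestion) is an assumption of this example, not how Gildea and Jurafsky describe their system.

```python
# Features for one candidate constituent ("with a hammer") of the predicate "broke".
features = {
    "phrase_type": "PP",             # phrase type of the constituent
    "governing_category": "VP",      # S vs. VP distinguishes subjects from objects
    "position": "after",             # before or after the predicate
    "voice": "active",               # voice of the predicate
    "head_word": "hammer",           # head word of the constituent
    "path": "VB↑VP↓PP",              # parse tree path (see the next slide)
    "predicate": "broke",
}

# These categorical features would typically be one-hot encoded, e.g. with
# sklearn.feature_extraction.DictVectorizer, before training a maximum-entropy
# (logistic regression) classifier over role labels.
```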

28
GJ (2) Parse Tree Path Feature
The parse tree path (or just "path") feature
determines the syntactic relationship between the
predicate and the current constituent.
In this example, the path feature is VB ↑ VP ↑ S ↓ NP.
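A sketch of how the path feature can be computed from a constituency tree; NLTK's Tree class and the ↑/↓ notation are assumptions of this example (the slide's arrow characters did not survive the transcript), and the demo reproduces the predicate-to-subject path above.

```python
from nltk import Tree  # assumes nltk is installed

tree = Tree.fromstring(
    "(S (NP (NNP John)) (VP (VBD broke) (NP (DT the) (NN window)) "
    "(PP (IN with) (NP (DT a) (NN hammer)))))")

def tree_path(tree, from_pos, to_pos):
    """Labels going up from from_pos to the lowest common ancestor, then down to to_pos."""
    common = 0
    while (common < min(len(from_pos), len(to_pos))
           and from_pos[common] == to_pos[common]):
        common += 1
    up = [tree[from_pos[:i]].label() for i in range(len(from_pos), common - 1, -1)]
    down = [tree[to_pos[:i]].label() for i in range(common + 1, len(to_pos) + 1)]
    return "↑".join(up) + ("↓" + "↓".join(down) if down else "")

pred_pos = (1, 0)   # the VBD node for "broke"
subj_pos = (0,)     # the subject NP node
print(tree_path(tree, pred_pos, subj_pos))   # VBD↑VP↑S↓NP
```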
29
GJ (3)
4086 possible values of the Path feature in
training data. A sparse feature!
30
GJ (4)
  • Build a (probabilistic) classifier for
    predicting, for each constituent, whether it is
    an argument of the predicate at all
  • Essentially a maximum-entropy classifier,
    although it's not described that way
  • Features for Argument Identification:
  • Predicate word
  • Head word of constituent
  • Parse tree path between predicate and constituent

31
GJ (5) Results
Task                              Best Result
Argument Identification (only)    92% precision, 86% recall, 0.89 F1
Argument Classification (only)    78.5% assigned the correct role
32
2. Toutanova, Haghighi, and Manning
  • A Global Joint Model for SRL (Comp. Ling., 2008)
  • Main idea(s)
  • Include features that depend on multiple
    arguments
  • Use multiple parsers as input, for robustness

33
THM (1) Motivation
  • 1. The day that the ogre cooked the children is
    still remembered.
  • 2. The meal that the ogre cooked the children is
    still remembered.
  • Both sentences have identical syntax.
  • They differ in only one word ("day" vs. "meal").
  • If we classify arguments one at a time, "the
    children" will be labeled the same thing in both
    cases.
  • But in (1), "the children" is the Patient (the
    thing being cooked).
  • And in (2), "the children" is the Beneficiary
    (the people for whom the cooking is done).
  • Intuitively, we can't classify these arguments
    independently.

34
THM(2) Features
  • Features
  • Whole label sequence:
  • voice=active, Arg1, pred, Arg4, ArgM-TMP
  • voice=active, lemma=accelerated, Arg1, pred,
    Arg4, ArgM-TMP
  • voice=active, lemma=accelerated, Arg1, pred,
    Arg4 (no adjuncts)
  • voice=active, lemma=accelerated, Arg, pred, Arg
    (no adjuncts, no argument numbers)
  • Syntax and semantics in the label sequence:
  • voice=active, NP-Arg1, pred, PP-Arg4
  • voice=active, lemma=accelerated, NP-Arg1, pred,
    PP-Arg4
  • Repetition features: whether Arg1 (for example)
    appears multiple times
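A sketch of how whole-label-sequence features of this kind could be generated from one candidate labeling; the function, its input format, and the syntactic categories for the "accelerated" example are assumptions used only to make the feature templates concrete.

```python
def label_sequence_features(voice, lemma, labeled_args):
    """labeled_args: (syntactic category, role) pairs in sentence order,
    with ('pred', 'pred') marking the predicate position."""
    roles = [role for _, role in labeled_args]
    core = [r for r in roles if not r.startswith("ArgM")]              # drop adjuncts
    unnumbered = ["Arg" if r != "pred" else r for r in core]           # drop argument numbers
    with_syntax = [role if role == "pred" else f"{cat}-{role}"
                   for cat, role in labeled_args]
    return [
        f"voice={voice}, " + ", ".join(roles),
        f"voice={voice}, lemma={lemma}, " + ", ".join(roles),
        f"voice={voice}, lemma={lemma}, " + ", ".join(core),           # no adjuncts
        f"voice={voice}, lemma={lemma}, " + ", ".join(unnumbered),     # no adjuncts, no numbers
        f"voice={voice}, " + ", ".join(with_syntax),                   # syntax + semantics
    ]

for f in label_sequence_features(
        "active", "accelerated",
        [("NP", "Arg1"), ("pred", "pred"), ("PP", "Arg4"), ("ADVP", "ArgM-TMP")]):
    print(f)
```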

35
THM(3) Classifier
  • First, for each sentence, obtain the top-10 most
    likely parse tree/semantic role label outputs
    from GJ
  • Build a max-ent classifier to select from these
    10, using the features above
  • Also, include top-10 parses from the Charniak
    parser

36
THM(4) Results
  • These are on a different data set from GJ, so the
    results are not directly comparable. But the local
    model is similar to GJ's, so think of that as the
    comparison.

Model                  WSJ (ID + CLS)   Brown (ID + CLS)
Local                  78.00            65.55
Joint (1 parse)        79.71            67.79
Joint (top 5 parses)   80.32            68.81
Results show F1 scores for IDentification and
CLaSsification of arguments together. WSJ is the
Wall Street Journal test set, a collection of
approximately 4,000 news sentences. Brown is a
smaller collection of fiction stories. The system
is trained on a separate set of WSJ sentences.
37
3. Huang and Yates
  • "Open-Domain SRL by Modeling Word Spans," ACL 2010
  • Main Idea:
  • One of the biggest problems for SRL systems is
    that they need lexical features to classify
    arguments, but lexical features are sparse.
  • We build a simple SRL system that outperforms the
    previous state-of-the-art on out-of-domain data,
    by learning new lexical representations.

38
Simple, open-domain SRL
Baseline features for "Chris broke the window with a hammer":
  Words:                 Chris, broke, the, window, with, a, hammer
  POS tags:              Proper Noun, Verb, Det., Noun, Prep., Det., Noun
  Chunk tags:            B-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP
  Dist. from predicate:  -1, 0, 1, 2, 3, 4, 5
  SRL labels:            Breaker (Chris), Pred (broke), Thing Broken (the window),
                         Means (with a hammer)
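A sketch of how the baseline per-token features in the table above could be assembled; the dictionary encoding is an assumption of this example (the slides only list the feature types), and the POS tag names follow the slide rather than a standard tag set.

```python
tokens = ["Chris", "broke", "the", "window", "with", "a", "hammer"]
pos    = ["Proper Noun", "Verb", "Det.", "Noun", "Prep.", "Det.", "Noun"]
chunks = ["B-NP", "B-VP", "B-NP", "I-NP", "B-PP", "B-NP", "I-NP"]
pred_index = 1                                  # position of the predicate "broke"

def baseline_features(i):
    """Baseline feature dictionary for token i (the full model adds an HMM label)."""
    return {
        "word": tokens[i],
        "pos": pos[i],
        "chunk": chunks[i],
        "dist_from_pred": i - pred_index,
    }

print(baseline_features(6))
# {'word': 'hammer', 'pos': 'Noun', 'chunk': 'I-NP', 'dist_from_pred': 5}
```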
39
Simple, open-domain SRL
Baseline features plus an HMM label for each word of "Chris broke the window
with a hammer" (same words, POS tags, chunk tags, distances, and SRL labels as
the previous slide; the HMM label is a latent word class learned from unlabeled
text, and the per-word values are not preserved in this transcript).
40
The importance of paths
  • Chris [broke: predicate] [a hammer: thing broken].
  • Chris [broke: predicate] a window with [a hammer: means].
  • Chris [broke: predicate] the desk, so she fetched
    [a hammer: not an arg] and nails.

41
Simple, open-domain SRL
Word-path feature (the words strictly between the predicate and the current
token) for "Chris broke the window with a hammer":
  Chris → None, broke → None, the → None, window → the,
  with → the-window, a → the-window-with, hammer → the-window-with-a
42
Simple, open-domain SRL
Word-path and POS-path features for "Chris broke the window with a hammer":
  Word path:  Chris → None, broke → None, the → None, window → the,
              with → the-window, a → the-window-with, hammer → the-window-with-a
  POS path:   Chris → None, broke → None, the → None, window → Det,
              with → Det-Noun, a → Det-Noun-Prep, hammer → Det-Noun-Prep-Det
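A sketch of the word-path and POS-path features as laid out above: join the labels strictly between the predicate and the current token, or None when that span is empty. The function name and list inputs are illustrative.

```python
tokens = ["Chris", "broke", "the", "window", "with", "a", "hammer"]
pos    = ["Proper Noun", "Verb", "Det", "Noun", "Prep", "Det", "Noun"]
pred_index = 1                      # position of the predicate "broke"

def path_feature(labels, i, pred_index=pred_index):
    """Hyphen-join the labels strictly between the predicate and token i (None if empty)."""
    lo, hi = sorted((pred_index, i))
    between = labels[lo + 1:hi]
    return "-".join(between) if between else None

print(path_feature(tokens, 6))   # the-window-with-a
print(path_feature(pos, 4))      # Det-Noun
print(path_feature(tokens, 0))   # None
```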
43
Simple, open-domain SRL
Word-path, POS-path, and HMM-path features for "Chris broke the window with a
hammer": the HMM path strings together the HMM labels of the words between the
predicate and the current token (its per-word values are not preserved in this
transcript); like the other paths, it is None for Chris, broke, and the.
44
Experimental results: F1
All systems were trained on newswire text from
the Wall Street Journal (WSJ), and tested on WSJ
and fiction texts from the Brown corpus (Brown).
45
Experimental results: F1
All systems were trained on newswire text from
the Wall Street Journal (WSJ), and tested on WSJ
and fiction texts from the Brown corpus (Brown).
46
Span-HMMs
47
Span-HMM features
[Figure: the span-HMM feature for the word "hammer" in "Chris broke the window
with a hammer" (SRL labels: Breaker = Chris, Pred = broke, Thing Broken = the
window, Means = with a hammer).]
48
Span-HMM features
[Figure, continued: the span-HMM feature for "hammer".]
49
Span-HMM features
[Figure: the span-HMM feature for the word "a" in the same sentence.]
50
Span-HMM features
[Figure, continued: the span-HMM feature for "a".]
51
Span-HMM features
[Figure: span-HMM feature values for the remaining words; the feature is None
for the words at or before the predicate.]
52
Experimental results: SRL F1
All systems were trained on newswire text from
the Wall Street Journal (WSJ), and tested on WSJ
and fiction texts from the Brown corpus (Brown).
53
Experimental results: feature sparsity
54
Benefit grows with distance from predicate