Title: Transfer-based MT with Strong Decoding for a Miserly Data Scenario


1
Transfer-based MT with Strong Decoding for a
Miserly Data Scenario
  • Alon Lavie
  • Language Technologies Institute
  • Carnegie Mellon University
  • Joint work with
  • Stephan Vogel, Kathrin Probst, Erik Peterson, Ari
    Font-Llitjos, Lori Levin, Rachel Reynolds, Jaime
    Carbonell, Richard Cohen

2
Rationale and Motivation
  • Our Transfer-based MT approach is specifically
    designed for limited-data scenarios
  • The Hindi SLE was the first open-domain, large-scale
    test of our system, but Hindi turned out not to be a
    limited-data scenario
  • 1.5 million words of parallel text
  • Lessons learned by the end of the SLE:
  • The basic XFER system did not have a strong decoder
  • Noisy statistical lexical resources interfere
    with transfer rules in our basic XFER system

3
Rationale and Motivation
  • Research Questions
  • How would we do in a more realistic minority
    language scenario, with very limited resources?
    How does XFER compare with EBMT and SMT under
    such a scenario?
  • How well can we do when we add a strong decoder
    to our XFER system?
  • What is the effect of Multi-Engine combination
    when using a strong decoder?

4
A Limited Data Scenario for Hindi-to-English
  • Put together a scenario with miserly data
    resources
  • Elicited Data corpus: 17,589 phrases
  • Cleaned portion (top 12) of the LDC dictionary:
    2,725 Hindi words (23,612 translation pairs)
  • Manually acquired resources during the SLE:
  • 500 manual bigram translations
  • 72 manually written phrase transfer rules
  • 105 manually written postposition rules
  • 48 manually written time expression rules
  • No additional parallel text!!

5
Learning Transfer-Rules from Elicited Data
  • Rationale
  • Large bilingual corpora not available
  • Bilingual native informant(s) can translate and
    word align a well-designed elicitation corpus,
    using our elicitation tool
  • Controlled Elicitation Corpus designed to be
    typologically comprehensive and compositional
  • Significantly enhance the elicitation corpus
    using a new technique for extracting appropriate
    data from an uncontrolled corpus
  • Transfer-rule engine and learning approach
    support acquisition of generalized transfer-rules
    from the data

6
The CMU Elicitation Tool
7
Elicited Data Collection
  • Goal: Acquire high-quality, word-aligned
    Hindi-English data to support system development,
    especially grammar development and automatic
    grammar learning
  • Recruited team of 20 bilingual speakers
  • Extracted a corpus of phrases (NPs and PPs) from
    Brown Corpus section of Penn TreeBank
  • Extracted corpus divided into files and assigned
    to translators, here and in India
  • Controlled Elicitation Corpus also translated
    into Hindi
  • Resulting in a total of 17,589 word-aligned
    translated phrases

8
XFER System Architecture
(Architecture diagram: Run-Time Module with SL Input, SL Parser, Transfer
Engine, TL Generator, Decoder Module, TL Output, and User; Learning Module
with Elicitation Process, SVS Learning Process, and Transfer Rules.)
9
The Transfer Engine
10
Transfer Rule Formalism
SL: the man    TL: der Mann

NP::NP [DET N] -> [DET N]
(
 (X1::Y1)
 (X2::Y2)
 ((X1 AGR) = 3-SING)
 ((X1 DEF) = DEF)
 ((X2 AGR) = 3-SING)
 ((X2 COUNT) = +)
 ((Y1 AGR) = 3-SING)
 ((Y1 DEF) = DEF)
 ((Y2 AGR) = 3-SING)
 ((Y2 GENDER) = (Y1 GENDER))
)
  • Type information
  • Part-of-speech/constituent information
  • Alignments
  • x-side constraints
  • y-side constraints
  • xy-constraints,
  • e.g. ((Y1 AGR) (X1 AGR))
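
To make the formalism concrete, here is a minimal sketch (in Python, not the
actual XFER engine code) of how such a rule could be represented as a data
structure and how its feature constraints could be checked; all class and
function names are hypothetical.

# Hypothetical sketch of a transfer rule as a data structure; the class and
# function names are illustrative, not the XFER engine's actual API.
from dataclasses import dataclass, field

@dataclass
class TransferRule:
    sl_type: str                    # source-language constituent, e.g. "NP"
    tl_type: str                    # target-language constituent, e.g. "NP"
    sl_rhs: list                    # e.g. ["DET", "N"]
    tl_rhs: list                    # e.g. ["DET", "N"]
    alignments: list                # e.g. [(1, 1), (2, 2)] for (X1::Y1), (X2::Y2)
    x_constraints: list = field(default_factory=list)   # source-side constraints
    y_constraints: list = field(default_factory=list)   # target-side constraints
    xy_constraints: list = field(default_factory=list)  # e.g. ((Y1 AGR) = (X1 AGR))

def satisfies(feats, path, value):
    """Check one atomic feature constraint such as ((X1 AGR) = 3-SING)."""
    node = feats
    for key in path:
        node = node.get(key, {})
    return node == value

# "the man" -> "der Mann", with agreement constraints on both sides
rule = TransferRule(
    sl_type="NP", tl_type="NP",
    sl_rhs=["DET", "N"], tl_rhs=["DET", "N"],
    alignments=[(1, 1), (2, 2)],
    x_constraints=[(("X1", "AGR"), "3-SING"), (("X2", "AGR"), "3-SING")],
    y_constraints=[(("Y1", "AGR"), "3-SING"), (("Y2", "AGR"), "3-SING")],
)

x_feats = {"X1": {"AGR": "3-SING", "DEF": "DEF"}, "X2": {"AGR": "3-SING"}}
print(all(satisfies(x_feats, path, val) for path, val in rule.x_constraints))  # True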

11
Example Transfer Rule
PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
passive of 43 (7b)

{VP,28}
VP::VP [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)
12
Rule Learning - Overview
  • Goal: Acquire syntactic transfer rules
  • Use available knowledge from the source side
    (grammatical structure)
  • Three steps (the first is sketched in code after
    this list):
  • Flat Seed Generation: first guesses at transfer
    rules, no syntactic structure
  • Compositionality: use previously learned rules to
    add structure
  • Seeded Version Space Learning: refine rules by
    generalizing with validation (learn appropriate
    feature constraints)
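
A minimal sketch of what the first step (Flat Seed Generation) could look
like, assuming word-aligned, POS-tagged phrase pairs as input; the function
name and data format are illustrative, not the actual learning module.

# Illustrative sketch of Flat Seed Generation (step 1); the input format and
# the function name are assumptions, not the actual learning module.
def flat_seed_rule(sl_pos, tl_pos, alignments):
    """Turn one word-aligned SL/TL phrase pair into a flat transfer rule:
    the POS sequences become the rule's right-hand sides, the word alignments
    become the X::Y alignments, and no feature constraints are added yet.
    (Step 2 adds structure from previously learned rules; step 3 learns
    feature constraints via Seeded Version Space Learning.)"""
    return {
        "sl_rhs": sl_pos,            # e.g. ["N", "Postp", "N"]
        "tl_rhs": tl_pos,            # e.g. ["N", "Prep", "N"]
        "alignments": alignments,    # e.g. [(1, 3), (3, 1)]
        "constraints": [],
    }

# Example pair: "jIvana ke aXyAya" (life of chapter) <-> "chapter of life"
rule = flat_seed_rule(["N", "Postp", "N"], ["N", "Prep", "N"], [(1, 3), (3, 1)])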

13
Examples of Learned Rules (I)
14
Examples of Learned Rules (II)
15
Basic XFER System for Hindi
  • Three passes (sketched in code after this list):
  • Pass 1: match against phrase-to-phrase entries
    (full forms, no morphology)
  • Pass 2: morphologically analyze input words and
    match against the lexicon; matches are allowed to
    feed into higher-level transfer grammar rules
  • Pass 3: match the original word against the
    lexicon; provides only word-to-word translation,
    no feeding into grammar rules
  • Weak decoding: greedy left-to-right search that
    prefers longer input segments
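
A rough sketch of the three-pass lookup, assuming the phrase table and
lexicon are simple dictionaries and the morphological analyzer returns
candidate root forms; none of these names come from the actual system.

# Illustrative three-pass lookup; phrase_table and lexicon map strings to
# lists of English translations, analyzer maps a word to possible root forms.
def build_lattice_edges(words, phrase_table, lexicon, analyzer):
    edges = []  # (start, end, translation, source-of-edge)
    # Pass 1: match full-form input spans against phrase-to-phrase entries.
    for i in range(len(words)):
        for j in range(i + 1, len(words) + 1):
            for translation in phrase_table.get(" ".join(words[i:j]), []):
                edges.append((i, j, translation, "phrase"))
    # Pass 2: morphologically analyze each word and match the analyses against
    # the lexicon; these edges may feed higher-level transfer grammar rules.
    for i, word in enumerate(words):
        for root in analyzer(word):
            for translation in lexicon.get(root, []):
                edges.append((i, i + 1, translation, "lexical+morph"))
    # Pass 3: match the original surface word against the lexicon; word-to-word
    # translation only, never fed into grammar rules.
    for i, word in enumerate(words):
        for translation in lexicon.get(word, []):
            edges.append((i, i + 1, translation, "lexical"))
    return edges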

16
Manual Grammar Development
  • Manual grammar was developed only late in the SLE
    exercise, after morphology and lexical resource
    issues were resolved
  • Covers mostly NPs, PPs and VPs (verb complexes)
  • 70 grammar rules, covering basic and recursive
    NPs and PPs, verb complexes of main tenses in
    Hindi

17
Manual Transfer Rules Example
PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
passive of 43 (7b)

{VP,28}
VP::VP [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)
18
Manual Transfer Rules Example
NP1 ke NP2 -> NP2 of NP1

Example: jIvana ke eka aXyAya
         life   of (one) chapter
         => a chapter of life

{NP,12}
NP::NP [PP NP1] -> [NP1 PP]
(
 (X1::Y2)
 (X2::Y1)
 ((x2 lexwx) = 'kA')
)

{NP,13}
NP::NP [NP1] -> [NP1]
(
 (X1::Y1)
)

{PP,12}
PP::PP [NP Postp] -> [Prep NP]
(
 (X1::Y2)
 (X2::Y1)
)
19
Adding a Strong Decoder
  • XFER system produces a full lattice
  • Edges are scored using word-to-word translation
    probabilities, trained from the limited bilingual
    data
  • Decoder uses an English LM (70m words)
  • Decoder can also reorder words or phrases (up to
    4 positions ahead)
  • For XFER (strong), only edges from the basic XFER
    system are used!
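
An illustrative sketch of how lattice edges could be scored with word-to-word
translation probabilities and combined with an English LM score; the
probability table, the lm.score interface, and the additive combination are
assumptions, not the decoder's actual implementation.

import math

# p_trans maps (hindi, english) word pairs to probabilities trained from the
# limited bilingual data; lm is any object exposing a log-probability
# score(sentence) method (hypothetical API).
def edge_score(sl_words, tl_words, p_trans):
    """Log word-to-word translation score for one lattice edge."""
    total = 0.0
    for tl in tl_words:
        best = max((p_trans.get((sl, tl), 1e-9) for sl in sl_words), default=1e-9)
        total += math.log(best)
    return total

def hypothesis_score(chosen_edges, tl_sentence, p_trans, lm):
    """Translation score of the chosen edges plus the English LM score.
    A real decoder would also search over limited reorderings of the
    selected words/phrases (here, up to 4 positions ahead)."""
    trans = sum(edge_score(sl, tl, p_trans) for (sl, tl) in chosen_edges)
    return trans + lm.score(" ".join(tl_sentence))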

20
Testing Conditions
  • Tested on a section of the JHU-provided data: 258
    sentences with four reference translations
  • SMT system (stand-alone)
  • EBMT system (stand-alone)
  • XFER system (naïve decoding)
  • XFER system with strong decoder
  • No grammar rules (baseline)
  • Manually developed grammar rules
  • Automatically learned grammar rules
  • XFER+SMT with strong decoder (MEMT)

21
Results on JHU Test Set
22
Effect of Reordering in the Decoder

23
Observations and Lessons (I)
  • XFER with strong decoder outperformed SMT even
    without any grammar rules
  • SMT trained on elicited phrases, which are very
    short
  • SMT has insufficient data to train more
    discriminative translation probabilities
  • XFER takes advantage of morphology (a coverage
    sketch follows this list)
  • Token coverage without morphology: 0.6989
  • Token coverage with morphology: 0.7892
  • Manual grammar currently quite a bit better than
    automatically learned grammar
  • Learned rules did not use version-space learning
  • Large room for improvement on learning rules
  • Importance of effective well-founded scoring of
    learned rules
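
For the coverage figures above, a small sketch of one way token coverage
could be computed with and without morphological analysis; the lexicon and
analyzer interfaces are assumed, not taken from the system.

# Illustrative token-coverage computation: the fraction of input tokens for
# which at least one translation is available. lexicon is a set (or dict) of
# known Hindi forms; analyzer (optional) returns candidate root forms.
def token_coverage(tokens, lexicon, analyzer=None):
    covered = 0
    for tok in tokens:
        candidates = [tok] + (list(analyzer(tok)) if analyzer else [])
        if any(c in lexicon for c in candidates):
            covered += 1
    return covered / len(tokens) if tokens else 0.0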

24
Observations and Lessons (II)
  • Strong decoder for XFER system is essential, even
    with extremely limited data
  • XFER system with manual or automatically learned
    grammar outperforms SMT and EBMT in the extremely
    limited data scenario
  • where is the cross-over point?
  • MEMT based on strong decoder produced best
    results in this scenario
  • Reordering within the decoder provided very
    significant score improvements
  • Much room for more sophisticated grammar rules
  • Strong decoder can carry some of the reordering
    burden
  • Conclusion: transfer rules (both manual and
    learned) offer significant contributions that can
    complement existing data-driven approaches
  • Also in medium and large data settings?

25
Conclusions
  • Initial steps toward development of a statistically
    grounded transfer-based MT system with:
  • Rules that are scored based on a well-founded
    probability model
  • Strong and effective decoding that incorporates
    the most advanced techniques used in SMT decoding
  • Working from the opposite end of research on
    incorporating models of syntax into standard
    SMT systems (Knight et al.)
  • Our direction makes sense in the limited data
    scenario

26
Future Directions
  • Significant work on automatic rule learning
    (especially Seeded Version Space Learning)
  • Improved leveraging from manual grammar
    resources, interaction with bilingual speakers
  • Developing a well-founded model for assigning
    scores (probabilities) to transfer rules
  • Improving the strong decoder to better fit the
    specific characteristics of the XFER model
  • MEMT with improved:
  • Combination of output from different translation
    engines with different scorings
  • strong decoding capabilities

27
Debug Output with Sources
praXAnamaMwrI atalajI , rAjyapAla SrI BAI mahAvIra va muKyamaMwrI SrI
xigvijayasiMha sahiwa aneka newAoM ne Soka vyakwa kiyA hE

<the @unk,25> <, @unk,26> <governor mr. @np1,23> <brother @n,7575>
<the @unk,27> <and @lex,6762> <the @unk,28> <mr. @n,20629> <the @unk,29>
<accompanied by @postp,140> <grief by many leaders @np,12> <the @unk,30>
<act @v,411> <be @aux,12> <. @punct,2>

gyAwavya ho ki jile ke cAroM kRewroM meM mawaxAna wIna aktUbara ko honA hE

<the @unk,31> <be @aux,12> <that @lex,106>
<voting three in four areas of counties @np,12> <oct. @lex,9153>
<to @postp,8> <be @aux,12> <be @aux,12> <. @punct,2>
28
Main CMU Contributions to SLE Shared Resources
  • OFFICIAL CREDIT ON SLE WEBSITE "PROCESSED
    RESOURCES"
  • CMU Phrase Lexicon: Joyphrase.gz (Ying Zhang, 3.5 MB)
  • Cleaned IBM lexicon: ibmlex-cleaned.txt.gz (Ralf Brown, 1.5 MB)
  • CMU Aligned Sentences: CMU-aligned-sentences.tar.gz (Lori Levin, 1.3 MB)
  • Indian Government Parallel Text: ERDC.tgz (Raj Reddy and Alon Lavie, 338 MB)
  • CMU Phrases and sentences: CMU-phrasessentences.zip (Lori Levin, 468 KB)
  • Bilingual Named Entity List: IndiaTodayLPNETranslists.tar.gz (Fei Huang, 54 KB)
  • OFFICIAL CREDIT ON SLE WEBSITE "FOUND RESOURCES"
  • Osho: http://www.osho.com/Content.cfm?Language=Hindi

29
Other CMU Contributions to SLE Shared Resources
  • FOUND RESOURCES BUT NO CREDIT
  • From TidesSLList Archive website
  • Vogel email 6/2
  • Hindi Language Resources:
    http://www.cs.colostate.edu/malaiya/hindilinks.html
  • General Information on Hindi Script:
    http://www.latrobe.edu.au/indiangallery/devanagari.htm
  • Dictionaries at
    http://www.iiit.net/ltrc/Dictionaries/Dict_Frame.html
  • English to Hindi dictionary in different formats:
    http://sanskrit.gde.to/hindi/
  • A small English to Urdu dictionary:
    http://www.cs.wisc.edu/navin/india/urdu.dictionary
  • The Bible at http://www.gospelcom.net/ibs/bibles/
  • The Emille Project: http://www.emille.lancs.ac.uk/home.htm
  • Hardcopy phrasebook references
  • A Monthly Newsletter of Vigyan Prasar
  • http://www.vigyanprasar.com/dream/index.asp
  • Morphological Analyser:
    http://www.iiit.net/ltrc/morph/index.htm

30
Other CMU Contributions to SLE Shared Resources
  • FOUND RESOURCES BUT NO CREDIT (cont.)
  • From TidesSLList Archive website
  • Tribble email, via Vogel 6/2: possible parallel
    websites
  • http://www.bbc.co.uk (English)
  • http://www.bbc.co.uk/urdu/ (Hindi)
  • http://sify.com/news_info/news/
  • http://sify.com/hindi/
  • http://in.rediff.com/index.html (English)
  • http://www.rediff.com/hindi/index.html (Hindi)
  • http://www.indiatoday.com/itoday/index.html
  • http://www.indiatodayhindi.com
  • Vogel email 6/2
  • http://us.rediff.com/index.html
  • http://www.rediff.com/hindi/index.html (already listed)
  • http://www.niharonline.com/
  • http://www.niharonline.com/hindi/index.html
  • http://www.boloji.com/hindi/index.html
  • http://www.boloji.com/hindi/hindi/index.htm
  • The Gita Supersite: http://www.gitasupersite.iitk.ac.in/

31
Other CMU Contributions to SLE Shared Resources
  • FOUND RESOURCES BUT NO CREDIT (cont.)
  • From TidesSLList Archive website
  • 6/20: Parallel Hindi/English webpages
  • GAIL (Natural Gas Co.): http://gail.nic.in/ (UTF-8).
    Found by the CMU undergrad Web team; Mike Maxwell
    (LDC) found it at the same time.
  • SHARED PROCESSED RESOURCES NOT ON LDC WEBSITE
  • From TidesSLList Archive website
  • Frederking email 6/3 announced, 6/4 provided
  • Ralf Brown's idenc encoding classifier
  • Frederking email 6/5
  • PDF extractions from LanguageWeaver URLs:
    http://progress.is.cs.cmu.edu/surprise/Hindi/ParDoc/06-04-2003/English/
    and
    http://progress.is.cs.cmu.edu/surprise/Hindi/ParDoc/06-04-2003/Hindi/
  • Frederking email 6/5
  • Richard Wang's Perl ident.pl encoding classifier
    and ISCII-UTF8.pl converter
  • Frederking email 6/11
  • Erik Peterson here has put together a Perl wrapper
    for the IIIT Morphology package, so that the input
    can be UTF-8:
    http://progress.is.cs.cmu.edu/surprise/morph_wrapper.tar.gz

32
Other CMU Contributions to SLE Shared Resources
  • SHARED PROCESSED RESOURCES NOT ON LDC WEBSITE
    (cont.)
  • From TidesSLList Archive website
  • Levin email 6/13
  • Directory of Elicited Word-Aligned English-Hindi
    Translated Phrases:
    http://progress.is.cs.cmu.edu/surprise/Elicited-Data/
  • Frederking email 6/20
  • Undecoded but believed to be parallel webpages:
    http://progress.is.cs.cmu.edu/surprise/merged_urls.txt
  • PDF extractions from same:
    http://progress.is.cs.cmu.edu/surprise/merged_urls/
  • Frederking email 6/24
  • Several individual parallel webpages (sites may
    have more): www.commerce.nic.in/setup.htm,
    www.commerce.nic.in/hindi/setup.html,
    mohfw.nic.in/kk/95/books1.htm, mohfw.nic.in/oph.htm,
    www.mp.nic.in