1
Modeling Speech Acts and Joint Intentions in
Markov Logic
  • Henry Kautz
  • University of Washington

2
Goal
  • Infer and track the goals, intentions, and
    beliefs of the participants in a meeting
  • Sources of information
    • Communicative actions
    • Gestures
    • Documents (agenda, minutes, emails)
    • Commonsense background knowledge

3
Requirement
  • Handle
    • Uncertain observations
    • Incomplete user models
    • Non-categorical background knowledge
  • Integrate with the rest of CALO
  • Build a working prototype
    • In 12 months?!
  • Can leverage three key developments...

4
Key Pieces
  1. Joint intention theory
    • Unified logical theory of belief, intention, and
      communicative actions (sketched after this
      slide's bullets)
    • Result of 20 years of work by Phil Cohen, Ray
      Perrault, James Allen, and others
  2. U Texas CLib Knowledge Base
    • Commonsense background knowledge
    • Bruce Porter et al.
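
As a rough illustration of what such a unified theory provides, here is a
schematic paraphrase of Cohen and Levesque's joint persistent goal (simplified,
and not necessarily the exact formulation used in this project): agents x and y
jointly commit to p when they mutually believe p does not yet hold, mutually
have p as a goal, and each retains a weakened achievement goal until it is
mutually believed that p has been achieved or has become impossible:

    \mathrm{JPG}(x,y,p) \;\equiv\; \mathrm{MB}(x,y,\neg p) \,\wedge\, \mathrm{MG}(x,y,p)
      \,\wedge\, \mathrm{UNTIL}\big(\mathrm{MB}(x,y,p) \vee \mathrm{MB}(x,y,\Box\neg p),\; \mathrm{WAG}(x,y,p)\big)

Here MB, MG, and WAG denote mutual belief, mutual goal, and weak achievement
goal; the full theory also relativizes the commitment to an escape condition,
which is omitted here.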

5
3. CALO Core Inference
  • A general learning and probabilistic state
    estimation module
  • Basis: Markov Logic, a probabilistic
    generalization of clausal first-order logic (see
    the formula after these bullets)
  • Good support for importing rule-based background
    knowledge
  • Good algorithms for learning (MLE) and inference
  • Tomas Uribe et al. (SRI): main implementation
    support
  • Other team members: Pedro Domingos, Tom
    Dietterich, Stuart Russell, Alon Halevy, Henry
    Kautz
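
For reference, a Markov Logic network attaches a real-valued weight w_i to
each first-order clause F_i; together with a finite set of constants it
defines a distribution over possible worlds x in which n_i(x), the number of
true groundings of F_i in x, appears in the exponent (this is the standard
Richardson and Domingos formulation; the project-specific details above may
differ):

    P(X = x) \;=\; \frac{1}{Z} \exp\!\Big( \sum_i w_i\, n_i(x) \Big)

Imported rule-based background knowledge corresponds to clauses with very
large (effectively infinite) weights, while softer commonsense rules receive
finite weights.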

6
Work Plan
  • Encode a (simplified) version of Joint Intention
    Theory in Markov Logic (see the illustrative rule
    after this list)
    • Dynamic version of ML already exists
    • Adding modal operators is underway
  • Translate appropriate parts of CLib to ML
    • Can be largely automated
  • Define and implement inputs from vision and audio
  • Use existing (eventually, new) ML engine for
    parameter learning (from annotated scenarios) and
    state tracking
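
As an illustrative sketch only (the predicate names Propose, Accept, and
JointIntend are hypothetical, not taken from the actual encoding), a
simplified speech-act rule in Markov Logic pairs a weight with a clause, for
example stating that a proposal followed by an acceptance tends to establish
a joint intention:

    w_1 :\quad \mathrm{Propose}(a,b,task,t) \wedge \mathrm{Accept}(b,a,task,t') \wedge (t < t')
      \;\Rightarrow\; \mathrm{JointIntend}(a,b,task,t')

Given scenarios annotated with such predicates, the weights can be estimated
by maximizing (pseudo-)likelihood; the standard generative gradient for each
weight w_i is the observed minus expected count of true groundings,
n_i(x) - E_w[n_i(X)].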