Learning to Share Meaning in a Multi-Agent System (Part I) - PowerPoint PPT Presentation

About This Presentation
Title: Learning to Share Meaning in a Multi-Agent System (Part I)
Author: Ganesh Padmanabhan
Created Date: 4/28/2004 12:39:34 PM
Slides: 23

1
Learning to Share Meaning in a Multi-Agent System (Part I)
  • Ganesh Padmanabhan

2
Article
  • Williams, A.B., "Learning to Share Meaning in a
    Multi-Agent System", Journal of Autonomous Agents
    and Multi-Agent Systems, Vol. 8, No. 2, pp.
    165-193, March 2004. (Most downloaded article in
    the journal)

3
Overview
  • Introduction (part I)
  • Approach (part I)
  • Evaluation (part II)
  • Related Work (part II)
  • Conclusions and Future Work (part II)
  • Discussion

4
Introduction
  • One Common Ontology? Does that work?
  • If not, what issues do we face when agents have
    similar views of the world but different
    vocabularies?
  • Reconciling Diverse Ontologies so that Agents can
    communicate effectively when appropriate.

5
Diverse Ontology Paradigm: Questions Addressed
  • How do agents determine if they know the same
    semantic concepts?
  • How do agents determine if their different
    semantic concepts actually have the same
    meaning?
  • How can agents improve their interpretation of
    semantic concepts by recursively learning missing
    discriminating attributes?
  • How do these methods affect the group
    performance at a given collective task?

6
Ontologies and Meaning
  • Operational Definitions Needed
  • Conceptualization, ontology, universe of
    discourse, functional basis set, relational basis
    set, object, class, concept description, meaning,
    object constant, semantic concept, semantic
    object, semantic concept set, distributed
    collective memory

7
Conceptualization
  • All objects that an agent presumes to exist and
    their interrelationships with one another.
  • Tuple: ⟨Universe of Discourse, Functional Basis
    Set, Relational Basis Set⟩
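The conceptualization tuple on this slide can be sketched as a simple record. This is only an illustration of the tuple's shape; the field names and example members are assumptions, not taken from the paper.

```python
# Sketch (illustrative, not from the paper): a conceptualization as the
# tuple <Universe of Discourse, Functional Basis Set, Relational Basis Set>.
from collections import namedtuple

Conceptualization = namedtuple(
    "Conceptualization",
    ["universe_of_discourse", "functional_basis_set", "relational_basis_set"])

c = Conceptualization(
    universe_of_discourse={"page1", "page2"},  # objects presumed to exist
    functional_basis_set=set(),                # functions over those objects
    relational_basis_set={("page1", "links_to", "page2")})  # interrelationships
```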

8
Ontology
  • Specification of a conceptualization
  • Mapping of language symbols to an agent's
    conceptualization
  • Terms used to name objects
  • Functions to interpret objects
  • Relations in the agent's world

9
Object
  • Anything we can say something about
  • Concrete or Abstract → classes
  • Primitive or Composite
  • Fictional or non-fictional

10
UOD and ontology
  • The difference between the UOD and the ontology
    is that the UOD contains objects that exist, but
    until they are placed in an agent's ontology, the
    agent has no vocabulary with which to specify
    them.

11
Forming a Conceptualization
  • An agent's first step in looking at the world
  • Declarative Knowledge
  • Declarative Semantics
  • Interpretation Function: maps an object in a
    conceptualization to language elements

12
Distributed Collective Memory
13
Approach Overview
  • Assumptions
  • Agents' use of supervised inductive learning to
    learn representations for their ontologies.
  • Mechanics of discovering similar semantic
    concepts, translation, and interpretation.
  • Recursive Semantic Context Rule Learning for
    improved performance.

14
Key Assumptions
  • Agents live in a closed world represented by
    distributed collective memory.
  • The identities of the objects in this world are
    accessible to all agents and can be known by the
    agents.
  • Agents use a knowledge structure that can be
    learned using objects in the distributed
    collective memory.
  • The agents do not have any errors in their
    perception of the world even though their
    perceptions may differ.

15
Semantic Concept Learning
  • Individual Learning, i.e. learning one's own
    ontology
  • Group Learning, i.e. one agent learning that
    another agent knows a particular concept

16
WWW Example Domain
  • Web Page → specific semantic object
  • Groupings of Web Pages → semantic concept or
    class
  • Analogous to bookmark organization
  • Words and HTML tags are taken to be boolean
    features.
  • Web page represented by a boolean vector.
  • Concepts → Concept Vectors → Learner → Semantic
    Concept Description (rules)
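The boolean-vector representation of a web page can be sketched roughly as follows. The tokenizer, vocabulary, and example page here are illustrative assumptions, not the paper's actual feature extraction.

```python
# Sketch (not from the paper): represent a web page as a boolean vector
# over a fixed vocabulary of words and HTML tag names.
import re

def tokenize(html: str) -> set:
    """Extract word and HTML-tag features from a page."""
    tags = set(re.findall(r"</?([a-zA-Z][a-zA-Z0-9]*)", html))
    words = set(re.findall(r"[a-zA-Z]+", re.sub(r"<[^>]*>", " ", html)))
    return {t.lower() for t in tags | words}

def to_vector(html: str, vocabulary: list) -> list:
    """Boolean vector: True iff the feature occurs in the page."""
    present = tokenize(html)
    return [feat in present for feat in vocabulary]

page = "<html><body><h1>Jazz Links</h1><p>music reviews</p></body></html>"
vocab = ["music", "jazz", "h1", "table", "reviews"]
print(to_vector(page, vocab))  # [True, True, True, False, True]
```

Each page's vector over a shared vocabulary is then a training example for the learner that induces the semantic concept description.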

17
Ontology Learning
  • Supervised Inductive Learning
  • Output: Semantic Concept Descriptions (SCDs)
  • SCDs are rules whose LHS tests feature
    conditions and whose RHS names the concept.
  • Object instances are discriminated based on
    tokens contained within, sometimes resulting in
    a peculiar learned descriptor vocabulary.
  • Certainty Value
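A semantic concept description with its certainty value could be sketched as below. The class name, fields, and example rule are hypothetical; the paper specifies only that SCDs are rules with an LHS, an RHS, and a certainty value.

```python
# Sketch (illustrative): an SCD rule whose LHS tests boolean features and
# whose RHS names a concept, carrying a certainty value from training.
from dataclasses import dataclass

@dataclass(frozen=True)
class SCDRule:
    lhs: frozenset   # features that must all be present
    rhs: str         # concept this rule concludes
    certainty: float # e.g. fraction of matching training objects it got right

    def fires(self, features: set) -> bool:
        """True when every LHS feature appears in the object's features."""
        return self.lhs <= features

rule = SCDRule(lhs=frozenset({"jazz", "album"}), rhs="music", certainty=0.87)
print(rule.fires({"jazz", "album", "review"}))  # True
```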

18
Locating Similar Semantic Concepts
  1. Agent queries another agent for a concept by
    showing it examples.
  2. Second agent receives examples and uses its own
    conceptualization to determine if it knows the
    concept (K), maybe knows it (M), or doesn't know
    it (D).
  3. For cases, K and M, the second agent sends back
    examples of what it thinks is the concept that
    was queried.
  4. First agent receives the examples, and interprets
    those using its own conceptualization to verify
    that they are talking about the same concept.
  5. If verified, the querying agent then adds that
    the other agent knows its concept to its own
    knowledge base.
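The five steps above can be sketched as a message exchange between two agents. Everything here is an illustrative assumption under the slide's description: the class, the overlap-based `interpret`, and the threshold values are not the paper's actual algorithm.

```python
# Sketch of the locate-similar-concept exchange (names and thresholds
# are illustrative, not from the paper).
class Agent:
    def __init__(self, name, concepts):
        self.name = name
        self.concepts = concepts       # concept label -> set of object ids
        self.group_knowledge = {}      # my label -> (other agent, verdict)

    def interpret(self, examples):
        """Map incoming examples onto own concepts; return best label, overlap."""
        best, overlap = None, 0.0
        for label, members in self.concepts.items():
            score = len(examples & members) / len(examples)
            if score > overlap:
                best, overlap = label, score
        return best, overlap

    def answer_query(self, examples, know=0.9, maybe=0.5):
        """Steps 2-3: decide K, M, or D and send back own examples."""
        label, score = self.interpret(examples)
        verdict = "K" if score >= know else "M" if score >= maybe else "D"
        reply = self.concepts.get(label, set()) if verdict != "D" else set()
        return verdict, reply

    def query(self, other, my_label):
        """Steps 1, 4, 5: send examples, verify the reply, record the result."""
        verdict, reply = other.answer_query(self.concepts[my_label])
        if verdict != "D":
            back_label, score = self.interpret(reply)
            if back_label == my_label and score >= 0.5:
                self.group_knowledge[my_label] = (other.name, verdict)
        return verdict

a = Agent("A", {"jazz": {1, 2, 3, 4}})
b = Agent("B", {"music": {1, 2, 3, 9}})
print(a.query(b, "jazz"))  # "M": 3 of 4 examples overlap each way
```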

19
Concept Similarity Estimation
  • Assuming two agents know a particular concept, it
    is feasible, and given a large DCM probable, that
    their sets of concept-defining objects differ
    completely.
  • Cannot simply assume that the target functions
    generated by each agent using supervised
    inductive learning from example will be the same.
  • Need to define other ways to estimate similarity.

20
Concept Similarity Estimation Function
  • Input: a sample set of objects representing a
    concept in another agent
  • Output: Knows Concept (K), Might Know Concept
    (M), Don't Know Concept (D)
  • Set of Objects → tries mapping the set of
    objects to each of its concepts using description
    rules → each concept receives an interpretation
    value → interpretation value is compared with
    thresholds to make the K, M, or D determination
  • Interpretation value for one concept is the
    proportion of objects in the CBQ that were
    inferred to be this concept.
  • Positive Interpretation Threshold: how often
    this concept description correctly determined an
    object in the training set to belong to this
    concept
  • Negative Interpretation Threshold
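The interpretation value and threshold comparison can be sketched in isolation. The threshold names follow the slide, but the numeric values and function names are assumptions for illustration.

```python
# Sketch of the K/M/D determination (threshold values are illustrative).
def interpretation_value(cbq, inferred):
    """Proportion of queried objects inferred to belong to this concept."""
    return sum(1 for obj in cbq if obj in inferred) / len(cbq)

def classify(value, pos_threshold=0.9, neg_threshold=0.3):
    """Compare against the positive/negative interpretation thresholds."""
    if value >= pos_threshold:
        return "K"   # knows the concept
    if value >= neg_threshold:
        return "M"   # might know the concept
    return "D"       # doesn't know the concept

cbq = [1, 2, 3, 4, 5]          # concept being queried
inferred = {1, 2, 3, 4}        # objects this concept's rules matched
print(classify(interpretation_value(cbq, inferred)))  # "M" (value 0.8)
```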

21
  • Group Knowledge
  • Individual Knowledge
  • Verification

22
Translating Semantic Concepts
  • Same algorithm as for locating similar concepts
    in other agents.
  • Two concepts determined to be the same can be
    translated regardless of their labels in the
    ontologies.
  • Difference: after verification, knowledge is
    stored as "Agent B knows my semantic concept X as
    Y."
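The stored translation knowledge amounts to a mapping from (my concept, other agent) to that agent's label, which might look like this. The table layout and names are hypothetical.

```python
# Sketch (illustrative): a translation table recording how other agents
# label my semantic concepts, filled in after verification.
translations = {}  # (my concept, other agent) -> other agent's label
translations[("jazz", "AgentB")] = "music"  # "AgentB knows my jazz as music"

def translate(my_concept, other_agent):
    """Use the other agent's label if known; otherwise fall back to mine."""
    return translations.get((my_concept, other_agent), my_concept)

print(translate("jazz", "AgentB"))  # "music"
```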