1
Hybrid Human-Agent Teams for Simulation Based
Training
2
Part 1: The VR Environment
3
Background
  • The MRE (Mission Rehearsal Exercise)
  • Aims to create a virtual reality training
    environment in which various scenarios that may
    be encountered by military units on peace-keeping
    missions can be played out.

4
(No Transcript)
5
Introduction
  • Methodology in the design of the system:
  • Selecting state-of-the-art systems for different
    needs of the simulation.
  • Improving/modifying these systems where needed.
  • Developing specialized software for the
    simulation (the actor agents, and their AI).
  • Combining these different systems as seamlessly
    as possible.

6
Tweaking
  • Step-by-step improvements of the different
    components of the system strive to create a more
    believable and realistic experience for the
    trainee.
  • Even small improvements can take us a long way
    towards more convincing virtual reality.

7
System Components
  • Video
  • Panoramic (150°), semi-circular, 2.67-meter-high
    screen encompasses the user.
  • Environment and special effects rendered using
    commercial Vega program.
  • Characters are rendered using the commercial
    PeopleShop program.
  • Rendering is done in real-time.

8
System Components (2)
  • Audio
  • Complex multi-channel audio system, with speakers
    in front, sides, rear and overhead.
  • Provides all background sound, with strong
    spatial capabilities.
  • High realism.
  • Considerable effort went into synchronizing sound
    with events occurring on the screen.

9
Human Interface
  • Input (speaking to characters)
  • Speech recognition
  • Identifies keywords
  • No deeper understanding
  • Plans for the future include incorporating natural
    language understanding technologies.
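As an illustration of keyword spotting with no deeper understanding, here is a minimal sketch in Python (the keyword table and intent names are invented for this example, not taken from the MRE system):

    import re

    # Hypothetical keyword-to-intent table; the real system's
    # vocabulary is not given in this presentation.
    KEYWORD_INTENTS = {
        "medevac": "request-medevac",
        "secure": "secure-area",
        "sergeant": "address-sergeant",
    }

    def interpret(utterance):
        """Return intents triggered by keywords; no parsing, no semantics."""
        words = re.findall(r"[a-z]+", utterance.lower())
        return [KEYWORD_INTENTS[w] for w in words if w in KEYWORD_INTENTS]

    print(interpret("Sergeant, call a medevac now!"))
    # -> ['address-sergeant', 'request-medevac']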

10
Human Interface (2)
  • Output (Agents to humans)
  • Pre-recorded statements
  • Synthesized speech (in real-time)
  • The system can produce nuances in synthesized
    speech, which adds to believability.
  • Lips are synchronized with generated speech!

11
Eliminating know-it-alls
  • What would happen if the mother agent could know
    everything you whispered to your sergeant?
  • What if your sergeant criticized your actions
    based on facts he had no way of knowing?

12
Limiting Perception
  • Filters were placed on the agents' perceptions, so
    that they perceive only the parts of the
    simulation that a normal person would.
  • They don't have eyes in the back of their heads.
  • They can't hear things that are far away, or masked
    by too much background noise (a helicopter flying
    over), as sketched below.
  • The sergeant will cup his ear when he can't hear
    you over the noise!!
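A rough sketch of such perceptual filters, assuming a simple field-of-view test and a crude sound-falloff model (both thresholds are illustrative, not the MRE's):

    import math

    def can_see(agent_pos, heading_deg, event_pos, fov_deg=150.0):
        """True only if the event falls inside the agent's field of
        view: no eyes in the back of the head."""
        dx = event_pos[0] - agent_pos[0]
        dy = event_pos[1] - agent_pos[1]
        bearing = math.degrees(math.atan2(dy, dx))
        off_axis = (bearing - heading_deg + 180) % 360 - 180
        return abs(off_axis) <= fov_deg / 2

    def can_hear(distance_m, loudness_db, background_db):
        """True only if the sound beats the ambient noise at the
        agent's position (a helicopter overhead raises background_db)."""
        heard = loudness_db - 20 * math.log10(max(distance_m, 1.0))
        return heard > background_db

    print(can_see((0, 0), 0.0, (10, 2)))    # in front: True
    print(can_hear(30.0, 70.0, 90.0))       # drowned out: False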

13
The Script
  • We can't allow the trainee to do whatever he/she
    wants, both because of limits in technology and
    because of the training goals of the system.
  • On the other hand, if every attempt to deviate
    from the path is punished, the user will soon get
    frustrated, and won't actually learn to find the
    right solution.

14
StoryNet
  • The StoryNet model allows several plot nodes,
    with transitions between them.
  • Within each node, there is a relatively large area
    of free play.
  • Transitions between nodes are triggered by key
    actions or events, in a way that is transparent to
    the user.
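A StoryNet of this kind can be sketched as a small state machine; the node and event names below are invented for illustration:

    # Transition table: current node -> {key event -> next node}.
    TRANSITIONS = {
        "accident-scene": {"medic-arrives": "treatment"},
        "treatment": {"medevac-approved": "evacuation"},
        "evacuation": {},
    }

    class StoryNet:
        def __init__(self, start="accident-scene"):
            self.node = start

        def on_event(self, event):
            # Most actions stay inside the current node's free play;
            # only key events move the plot, transparently to the user.
            self.node = TRANSITIONS[self.node].get(event, self.node)

    net = StoryNet()
    net.on_event("trainee-looks-around")   # free play: no transition
    net.on_event("medic-arrives")          # key event: plot advances
    print(net.node)                        # treatment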

15
(No Transcript)
16
The Director
  • In the future, there are plans to implement a
    Director agent, which will be in charge of guiding
    events back onto the correct path if the student
    deviates too much.

17
Last but not least: the Cast
  • Three types of agents/actors:
  • Scripted: have predetermined behavior
    (movements/actions) which is triggered by the
    controller of the simulation, or by the actions
    of other agents.

18
The Cast (2)
  • AI: these include the main characters in the
    simulation, with which the user has direct
    contact.
  • These agents have a set of general,
    domain-independent capabilities, operating over a
    declarative representation of domain tasks.
  • The tasks are represented to the agents as a set
    of steps (primitive actions, or other tasks), a
    set of ordering constraints, and a set of causal
    links that describe what is accomplished by each
    step, and why it is needed (to achieve a
    precondition for another step), as sketched below.
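That declarative representation might be sketched as follows; the class layout is an illustrative assumption, not the MRE's actual data model:

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        name: str                        # primitive action or sub-task
        preconditions: set = field(default_factory=set)
        effects: set = field(default_factory=set)

    @dataclass
    class CausalLink:
        producer: str    # step whose effect achieves the condition
        condition: str   # the proposition being protected
        consumer: str    # step that needs it as a precondition

    @dataclass
    class Task:
        steps: dict      # name -> Step
        ordering: list   # (before, after) pairs of step names
        links: list      # CausalLinks recording why each step is needed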

19
  • This knowledge about the world and the tasks in it
    allows the AI agents to
  • generate a plan to fulfill their tasks,
  • change their plans when the world state changes,
  • and maintain a dialog with humans and teammates
    about the task.

20
The Cast (3)
  • AI agents with emotions:
  • Are built on top of the regular AI model.
  • Express emotion with gestures, and nuances of
    speech (!!).
  • Use their plan model to decide which emotions to
    express.
  • The emotional state influences the agent's
    beliefs, desires and intentions.

21
Expressing Emotions
  • The emotional agents use their plan model to
    decide which emotions to express.

22
Expressing Emotions (cont.)
  • If the mother believes that the commander plans
    to leave, and that that will make her task of
    helping her child impossible, she may express
    anger.
  • If she believes the commander plans to stay, she
    may express hope.

23
Emotions (2)
  • The emotional state influences the agents
    beliefs, desires and intentions
  • The sergeant is under stress because hes
    responsible for the boys injury.
  • He may seek to relieve the stress by
  • shifting the blame to others,
  • or by
  • forming an intention to help the boy get medical
    help.
  • This choice will depend on other, adjustable,
    character traits/states-of-mind.

24
Part 2: Dialogue
25
Layout
  • Structure of AI model
  • Domain representation
  • Planning
  • Dialog actions
  • Speech acts
  • Grounding
  • Stances
  • Negotiation proceedings
  • Future work

26
AI Model
  • The agents use a general reasoning algorithm to
    understand and plan in a declarative
    representation of team tasks.
  • The algorithm is a domain-independent non-linear
    STRIPS planner.
  • The task representation is domain specific, and
    encodes the given training scenario.

27
Task Representation
  • Each task is represented by a set of steps, each
    of which is either
  • a primitive action (physical or sensory),
  • or
  • an abstract action: a sub-task (which in turn is
    composed of a series of steps, etc.).

28
Task Representation (2)
  • In addition, there may be ordering constraints
    between the steps.
  • There can be interdependencies between steps,
    represented by causal links and threat relations.

29
Task Interdependency
  • Causal links specify that one step's goals are
    another's preconditions.
  • Threat relations: step A is a threat to step B
    if the completion of A causes the removal of one
    of the preconditions of B.
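With the illustrative Step/CausalLink layout sketched earlier, a threat check could look like this (encoding deleted conditions as "not <p>" strings is an assumption of the sketch):

    def threatens(step_effects, link_condition):
        """Step A threatens a causal link if one of A's effects
        removes the condition the link protects."""
        return ("not " + link_condition) in step_effects

    # Example: departing deletes "area-secure", threatening any step
    # whose precondition is "area-secure".
    print(threatens({"not area-secure"}, "area-secure"))   # True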

30
Refinements - Responsibility and Authority
  • Task steps are associated with the team members
    that are responsible for performing them, and,
    optionally, with the agent who has authority over
    that step.
  • The agent responsible for a step should not
    perform it until authorization is given by the
    teammate with that authority.
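A minimal sketch of this authorization rule (the step encoding is assumed for illustration):

    def may_execute(step, agent, authorizations):
        """The responsible agent may act only once the authorizing
        teammate (if any) has granted authorization for this step."""
        if agent != step["responsible"]:
            return False
        authority = step.get("authority")
        return authority is None or (authority, step["name"]) in authorizations

    step = {"name": "call-medevac", "responsible": "sergeant",
            "authority": "lieutenant"}
    print(may_execute(step, "sergeant", set()))                             # False
    print(may_execute(step, "sergeant", {("lieutenant", "call-medevac")}))  # True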

31
Planning
  • When the team is given a (top-level) task, each
    member recursively constructs a complete detailed
    task model.
  • Each member builds the model according to his/her
    knowledge, which may be incomplete or erroneous.
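A sketch of this recursive construction; the knowledge table is a toy stand-in for an agent's (possibly incomplete) task knowledge:

    def expand(task_name, knowledge):
        """Recursively expand a task into primitive steps, using only
        what this agent knows (which may be incomplete or wrong)."""
        sub_steps = knowledge.get(task_name)
        if sub_steps is None:
            return [task_name]          # primitive (or unknown) action
        plan = []
        for s in sub_steps:
            plan.extend(expand(s, knowledge))
        return plan

    knowledge = {"help-boy": ["treat-boy", "call-medevac"],
                 "call-medevac": ["radio-base", "secure-lz"]}
    print(expand("help-boy", knowledge))
    # -> ['treat-boy', 'radio-base', 'secure-lz']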

32
Re-planning
  • The agent's perception module constantly monitors
    the world, and sends messages to the cognition
    module about changes in the world state.
  • Whenever changes occur, the agent rechecks its
    full plan, and corrects it according to the
    changes.
  • For instance, if a goal in the plan has been
    achieved by a teammate, there is no longer a need
    for the agent to perform the corresponding task
    itself, as sketched below.
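One such correction can be sketched like this (data structures simplified for illustration):

    def prune_achieved(pending_steps, achieved):
        """Drop pending steps whose goals a teammate (or the world)
        has already achieved, as reported by the perception module."""
        return [s for s in pending_steps
                if not set(s["achieves"]) <= achieved]

    steps = [{"name": "treat-boy", "achieves": ["boy-treated"]},
             {"name": "secure-lz", "achieves": ["lz-secure"]}]
    print(prune_achieved(steps, {"boy-treated"}))
    # -> only "secure-lz" remains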

33
Re-planning (cont.)
  • Instead of creating the whole task plan from
    scratch whenever a change occurs, the agent uses
    its previous model as a guideline, and so cuts
    down on the number of alternatives it calculates.

34
Agent's Task Model
  • At any given time, each agent holds a complete
    plan that models the way in which he believes the
    whole team can accomplish the task.
  • This plan allows the agent to answer questions
    and reason about the task.

35
Courses Of Action (COA)
  • In order to negotiate, there is a need to support
    reasoning over several possible COAs.
  • Each COA is a high level task, as described
    above, but is not automatically marked as
    intended by the agent just because it achieves
    the goal.
  • Instead, if the agent reasons that the COA can
    achieve the goal, it is marked as relevant.

36
COAs (cont.)
  • Since different COAs may be mutually exclusive,
    threats and causal links between alternative COAs
    must be ignored.
  • On the other hand, the effects of a COA on other
    tasks are taken into account when calculating the
    utility of that COA.
  • A complex utility function is used to decide
    between the alternative COAs.
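A sketch of COA selection; the scalar utility below is a placeholder for the complex utility function the slide mentions:

    def choose_coa(coas, goal):
        """Mark COAs that can achieve the goal as relevant, then pick
        the relevant COA with the highest utility."""
        relevant = [c for c in coas if goal in c["achieves"]]
        return max(relevant, key=lambda c: c["utility"], default=None)

    coas = [
        {"name": "drive-to-hospital", "achieves": {"boy-safe"}, "utility": 0.4},
        {"name": "call-medevac", "achieves": {"boy-safe"}, "utility": 0.7},
    ]
    print(choose_coa(coas, "boy-safe")["name"])   # call-medevac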

37
Dialog
  • The information state represents the complete
    current context of the dialog.
  • This model is more complex than the "dialog
    state" of the standard plan-based approach.
  • It contains all the necessary data relevant to
    the current state of the dialog, and data needed
    to decide on future dialog acts.
  • The information state is updated via dialog acts,
    according to dialog rules.
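A toy sketch of an information state updated by dialog acts through update rules (the rule set and act encoding are invented for illustration):

    class InformationState:
        def __init__(self):
            self.commitments = set()      # social commitments so far
            self.under_discussion = []    # open topics

        def update(self, act):
            """Apply one dialog act according to simple update rules."""
            kind, speaker, content = act
            if kind in ("suggest", "request", "order"):
                self.under_discussion.append(content)
            elif kind == "assert":
                self.commitments.add((speaker, content))

    state = InformationState()
    state.update(("suggest", "lieutenant", "call-medevac"))
    state.update(("assert", "sergeant", "boy-is-hurt"))
    print(state.under_discussion, state.commitments)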

38
Speech Acts
  • Core speech acts:
  • influence the topic under discussion,
  • establish and remove commitments.

39
Speech Acts
  • Assert
  • Info-request
  • Suggest
  • Request
  • Order
  • These acts reflect social commitments of the
    speakers, but do not directly affect the BDI
    (beliefs, desires, intentions) of the person
    addressed, since they may be insincere.

40
Types Of Speech Acts
  • Speech acts can describe three things:
  • Action, which consists of:
  • Agent: the person performing the action.
  • Event: the action being performed.
  • Patient: the object on which the action is being
    performed.
  • State: indicates the state of some aspect of an
    object (for instance, whether the door of the
    house is open or closed).
  • Question: about one of the above.
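These three content types can be written down directly; the field layout is illustrative:

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Action:
        agent: str     # the person performing the action
        event: str     # the action being performed
        patient: str   # the object the action is performed on

    @dataclass
    class State:
        obj: str       # some object, e.g. the house door
        aspect: str    # the aspect in question, e.g. "open"

    @dataclass
    class Question:
        about: Union[Action, State]   # a question about either of the above

    q = Question(about=State(obj="house-door", aspect="open"))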

41
Grounding
  • In order for an act to be considered accepted
    into the common ground of the conversation, it
    needs to be acknowledged by the agent to whom it
    was addressed.
  • This is accomplished by grounding acts.
  • Core speech acts are not seen as having their
    full effects on the social state until they are
    grounded.

42
Grounding Acts
  • Request-acknowledge
  • Acknowledge
  • Request-repair
  • Repair
  • Initiate
  • Continue
  • Display
  • Cancel
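A sketch of how grounding can gate the effects of core speech acts, reusing the toy InformationState from the dialog sketch above (only initiate and acknowledge of the acts listed are modeled):

    class Grounder:
        """Core speech acts are held as ungrounded until acknowledged;
        only then do they take full effect on the social state."""
        def __init__(self, state):
            self.state = state
            self.pending = {}
            self.next_id = 0

        def initiate(self, act):
            act_id = self.next_id
            self.pending[act_id] = act
            self.next_id += 1
            return act_id

        def acknowledge(self, act_id):
            act = self.pending.pop(act_id)
            self.state.update(act)   # effects applied only after grounding

    # g = Grounder(InformationState())
    # i = g.initiate(("assert", "sergeant", "boy-is-hurt"))
    # g.acknowledge(i)   # only now does the commitment register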

43
Stances
  • The current state of team negotiation is
    represented by a sequence of stances.
  • Each stance is the outward expression of an
    agent's view on the issue.

44
Stances (2)
  • Each stance is composed of:
  • The agent who holds the stance.
  • The action that the stance is about.
  • The attitude (stance) of the agent towards the
    action.
  • The audience to whom the agent is presenting the
    stance.
  • The reason for the stance.
  • The time at which the stance was made.

45
Stances (3)
  • The actual attitude (stance) can be one of the
    following (from most positive to most negative):
  • Committed
  • Endorsed
  • Mentioned
  • Not mentioned
  • Disparaged
  • Rejected
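Combining the last two slides, a stance might be sketched as follows (illustrative layout):

    from dataclasses import dataclass
    from enum import IntEnum

    class Attitude(IntEnum):
        """Ordered from most negative to most positive."""
        REJECTED = 0
        DISPARAGED = 1
        NOT_MENTIONED = 2
        MENTIONED = 3
        ENDORSED = 4
        COMMITTED = 5

    @dataclass
    class Stance:
        agent: str           # who holds the stance
        action: str          # the action the stance is about
        attitude: Attitude   # the agent's attitude towards the action
        audience: str        # to whom the stance is presented
        reason: str          # why the stance is held
        time: float          # when the stance was made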

46
Negotiation
  • The sequence of negotiation stances shows how the
    negotiation progresses from proposals of action
    towards commitments.
  • Stances arise from core speech acts (for example,
    if A requests B to do something, A's stance is
    committed to that action).
  • There are also special negotiation acts: accept,
    reject, counterproposal, and explanation (for or
    against), each of which gives rise to the
    appropriate stance.

47
Initiating Negotiation
  • An agent's initiative characteristic determines
    whether the agent is likely to start a new
    negotiation.
  • If another agent starts a negotiation with an
    order or request, the agent addressed is required
    to respond in some fashion.

48
Appropriate Responses
  • The response to an action proposal is guided by
    the following factors:
  • The relevant party: the agent who is in charge
    of the next step in performing the action (the
    authorizer, the agent responsible for the action,
    or the addressee himself).
  • The dialog state: one of discussed,
    needs-discussion, or unmentioned.
  • The plan state: how the action relates to the
    agent's plan (good/bad, intended/not-intended,
    etc.).

49
Appropriate Responses (cont.)
  • Examples:
  • The agent will reject the proposal if
  • plan-state is one of bad, considered-bad,
    unknown, conflict, goals-satisfied, or
  • dialogue-state = needs-discussion.
  • The agent will accept (reluctantly) if
  • relevant-party = me,
  • plan-state = considered-bad, and
  • dialogue-state = discussed.
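These rules can be sketched as predicate checks; the values come from this slide, the encoding is assumed. Note the deliberate overlap, which the next slide discusses:

    def responses(relevant_party, dialogue_state, plan_state, me="me"):
        """Yield every response whose conditions hold."""
        if plan_state in {"bad", "considered-bad", "unknown",
                          "conflict", "goals-satisfied"} \
                or dialogue_state == "needs-discussion":
            yield "reject"
        if (relevant_party == me and plan_state == "considered-bad"
                and dialogue_state == "discussed"):
            yield "accept-reluctantly"

    print(list(responses("me", "discussed", "considered-bad")))
    # -> ['reject', 'accept-reluctantly']: overlapping conditions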

50
Appropriate Response (cont.)
  • Conditions may overlap, in which case several
    responses are possible.
  • The agent has to decide which responses to
    choose, and in what order.
  • The decision is based on practical considerations
    (which is the best, or most immediately
    relevant), and on social relations between the
    proposing party and the agent (superiority/rank).

51
Resolution
  • Negotiation between teammates proceeds until both
    agree to accept/reject, or until one drops the
    contrary stance.
  • It is also possible to agree to disagree,
    assuming it is possible to proceed with other
    actions despite the disagreement.

52
Team Action
  • In order for an action to be performed, it must
    be at least endorsed by the authorizing agent, and
    committed to by the responsible agent.
  • If the authorizing agent endorses the action in
    front of the responsible agent who has committed
    to the action, the responsible agent will be
    expected to either execute the action (by himself
    or with help) or explain why not.
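Using the Attitude scale from the stance sketch (here as plain integers), this execution condition might read:

    ENDORSED, COMMITTED = 4, 5   # positions on the attitude scale

    def may_proceed(attitudes, action, authorizer, responsible):
        """An action goes ahead once the authorizing agent at least
        endorses it and the responsible agent is committed to it."""
        return (attitudes.get((authorizer, action), 0) >= ENDORSED
                and attitudes.get((responsible, action), 0) >= COMMITTED)

    attitudes = {("lieutenant", "call-medevac"): ENDORSED,
                 ("sergeant", "call-medevac"): COMMITTED}
    print(may_proceed(attitudes, "call-medevac", "lieutenant", "sergeant"))  # True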

53
Plans for the Future - Explaining Responses
  • In a training simulation, it is important that
    the trainee understand why his proposals or
    actions are incorrect or inappropriate.
  • Work is being done on better methods of
    explaining and justifying the agents' responses
    to the trainee's proposals.

54
References
  • Audio, Video and VR:
  • J. Rickel et al., "Toward a New Generation of
    Virtual Humans for Interactive Experiences."
  • W. Swartout et al., "Toward the Holodeck:
    Integrating Graphics, Sound, Character and Story."
  • The Agent Models:
  • J. Rickel, W. L. Johnson, "Animated Agents for
    Procedural Training in Virtual Reality:
    Perception, Cognition, and Motor Control."
  • J. Rickel, "Extending Virtual Humans to Support
    Team Training in Virtual Reality."
  • D. McAllester, D. Rosenblitt, "Systematic
    Nonlinear Planning."
  • The Negotiation Model:
  • S. Larsson, D. Traum, "Information State and
    Dialogue Management in the TRINDI Dialogue Move
    Engine Toolkit."
  • D. Traum, "A Computational Theory of Grounding
    in Natural Language Conversation."
  • More articles (and other related material) can be
    found at the web page of the Institute for
    Creative Technologies.