Transcript and Presenter's Notes

Title: Altering the ICARUS Architecture


1
Altering the ICARUS Architecture to Model Social Cognition
Pat Langley
Institute for the Study of Learning and Expertise
Award Period 2/1/12 - 1/31/15
ONR Cognitive Science and Human-Robot Interaction 6.1 Program Review
June 25-28, 2013
2
Critique of the ICARUS Architecture
  • In previous work (Langley et al., 2009), we
    developed ICARUS, an architecture that, despite
    its accomplishments:
  • Relies on exhaustive, deductive inference
  • Emphasizes physical activities over mental ones
  • Cannot represent or reason about others' mental
    states
  • Has inflexible mechanisms for execution / problem
    solving

This project aims to address these drawbacks by
developing a radically new version of the
architecture.
3
Research Objectives
  • We aim to develop a unified theory of the human
    cognitive architecture that supports:
  • Representing and reasoning about others' mental
    states
  • Flexible inference and problem solving in this
    context
  • Structural learning that supports these
    processes
  • The research project's significance lies in its
    potential to:
  • Improve accounts of human reasoning and learning
  • Support agents/robots that interact effectively
    with humans
  • This effort addresses aspects of high-level
    cognition that have received little attention
    elsewhere.

4
Recent Accomplishments
  • During the past year, our team's accomplishments
    have included:
  • Developing new formalisms for
  • Beliefs and goals that refer to other agents'
    mental states
  • Concepts and skills that involve relations among
    mental states
  • Designing, implementing, and testing an approach
    to the incremental abduction of explanations
  • Adapting and applying this mechanism to
  • Understanding domain-level plans
  • Understanding stories in which agents reason
    about others
  • Explaining and judging behavior in moral contexts
  • Reimplementing / improving a flexible framework
    for problem solving that incorporates meta-level
    control rules

Together, these support our aims to produce a
more complete account of human cognitive
abilities.
5
Challenge: Plan Understanding
A basic task that involves reasoning about
others' mental states is plan understanding,
which we can define as:
  • Given A sequence S of actions agent A is
    observed to carry out
  • Given Knowledge about concepts and activities,
    organized hierarchically, that are available to
    agent A
  • Infer An explanation, E, in proof lattice form,
    that accounts for S in terms of A's goals,
    beliefs, and intentions.

This is analogous to language understanding in
that analysis produces a connected account of the
input. We distinguish it from plan recognition
(Goldman et al., 1999), which assigns observed
behavior to some known category.
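
As a rough illustration, the task's inputs and outputs might be encoded as follows; the class and function names are placeholders for this sketch, not UMBRA's actual interface.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass(frozen=True)
class Literal:
    predicate: str                 # e.g., "at-loc"
    args: Tuple[str, ...]          # e.g., ("dtruck1", "brightondump")

@dataclass
class Explanation:
    # Proof-lattice-style account: default assumptions about the agent's
    # beliefs, goals, and intentions, plus the rule instances linking them.
    assumptions: List[Literal] = field(default_factory=list)
    rule_instances: List[str] = field(default_factory=list)

def understand_plan(observations: List[Literal], knowledge) -> Explanation:
    """Given a sequence S of actions observed for agent A and hierarchical
    knowledge of concepts and activities available to A, infer an
    explanation E that accounts for S in terms of A's goals, beliefs,
    and intentions."""
    raise NotImplementedError   # placeholder; the abduction cycle is sketched later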
6
An Illustrative Example
Consider an action sequence from the Monroe
County corpus (Blaylock & Allen, 2005):
Truck driver tdriver1 navigates the dump truck
dtruck1 to the location brightondump, where a
hazard team ht2 climbs into the vehicle. Then
tdriver1 navigates dtruck1 to the gas station
texaco1, where ht2 loads a generator gen2 into
dtruck1.
Given such observations and knowledge about
possible goals / activities, we want to infer the
latter to explain events. In this case, we might
conclude the driver is collecting people and a
power source for some mission.
7
Plan Understanding as Abductive Inference
Our theoretical claims about plan understanding
are that it
  • Involves inference about the participating
    agents' mental states (beliefs / goals about
    activities and the environment)
  • Involves the abductive generation of explanations
    through the introduction of default assumptions
  • Operates in an incremental fashion to process
    observations that arrive sequentially
  • Proceeds in a data-driven manner because
    understanding arises from observations of the
    agents' activities

These four assumptions place constraints on our
computational account of this important process.
8
A Sample Explanation
get-to(ht2, texaco1)
  get-to(dtruck1, br-dump)
    drive-to(tdriver1, dtruck1, br-dump)
      at-loc(dtruck1, _), at-loc(tdriver1, _)
      navigate-vehicle(tdriver1, dtruck1, br-dump)
        person(tdriver1), vehicle(dtruck1),
        can-drive(tdriver1, dtruck1),
        at-loc(dtruck1, br-dump), at-loc(tdriver1, br-dump)
  get-in(ht2, dtruck1)
    not(non-ambulatory(ht2)), person(ht2)
    climb-in(ht2, dtruck1)
      at-loc(ht2, br-dump), at-loc(dtruck1, br-dump),
      fit-in(ht2, dtruck1), at-loc(ht2, dtruck1)
  get-to(dtruck1, texaco1)
    drive-to(tdriver1, dtruck1, texaco1)
      at-loc(dtruck1, br-dump), at-loc(tdriver1, br-dump)
      navigate-vehicle(tdriver1, dtruck1, texaco1)
        person(tdriver1), vehicle(dtruck1),
        can-drive(tdriver1, dtruck1),
        at-loc(dtruck1, texaco1), at-loc(tdriver1, texaco1)
  get-out(ht2, dtruck1)
    not(non-ambulatory(ht2)), person(ht2)
    climb-out(ht2, dtruck1)
      at-loc(ht2, dtruck1), at-loc(dtruck1, texaco1),
      at-loc(ht2, texaco1)
9
Representing Plan Knowledge
We represent knowledge about activities in a
notation similar to hierarchical task networks.
For example:
navigate_vehicle(Driver, Veh, Loc, T_Start, T_End):
    at_location(Veh, VLoc, T_1, T_Start),
    at_location(Driver, VLoc, T_3, T_Start),
    person(Driver), vehicle(Veh),
    can_drive(Driver, Veh, T_9, T_10),
    at_location(Veh, Loc, T_End, T_13),
    at_location(Driver, Loc, T_End, T_15),
    constraint(before(T_1, T_Start)),
    constraint(before(T_2, T_Start)),
    constraint(before(T_3, T_Start)),
    constraint(before(T_4, T_Start)),
    constraint(inside(T_Start, T_End, T_5, T_6)),
    constraint(before(T_End, T_14)),
    constraint(inside(T_Start, T_End, T_7, T_8)),
    constraint(before(T_End, T_13)),
    constraint(inside(T_Start, T_End, T_9, T_10)),
    constraint(before(T_End, T_15)),
    constraint(inside(T_Start, T_End, T_11, T_12)),
    constraint(before(T_End, T_16)).
This formalism separates conditions, effects, and
invariants in terms of temporal constraints on
antecedents.
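
As a sketch, an activity definition of this kind could be encoded as data for abductive matching roughly as below; the Literal structure and dictionary layout are illustrative, not the notation UMBRA uses internally.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Literal:
    predicate: str
    args: Tuple[str, ...]          # capitalized strings stand for variables

navigate_vehicle = {
    "head": Literal("navigate_vehicle",
                    ("Driver", "Veh", "Loc", "T_Start", "T_End")),
    "conditions": [
        Literal("at_location", ("Veh", "VLoc", "T_1", "T_Start")),
        Literal("at_location", ("Driver", "VLoc", "T_3", "T_Start")),
        Literal("person", ("Driver",)),
        Literal("vehicle", ("Veh",)),
        Literal("can_drive", ("Driver", "Veh", "T_9", "T_10")),
    ],
    "effects": [
        Literal("at_location", ("Veh", "Loc", "T_End", "T_13")),
        Literal("at_location", ("Driver", "Loc", "T_End", "T_15")),
    ],
    "constraints": [
        ("before", "T_1", "T_Start"),
        ("before", "T_End", "T_13"),
        ("inside", "T_Start", "T_End", "T_9", "T_10"),
        # ... remaining temporal constraints as listed on the slide
    ],
}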
10
The UMBRA Abduction System
We have developed UMBRA, an abductive inference
system that
  • Accepts observations and adds them to working
    memory
  • Incrementally extends an explanation by:
  • - Finding rules with antecedents that unify
      with working-memory elements
  • - Tentatively completing each rule
      instance's missing antecedents
  • - Selecting the rule instance R with the best
      evaluation score
  • - Adding R's inferred elements to memory as
      default assumptions
  • Continues until no further observations arrive

This data-driven strategy aims to produce a
coherent explanation in terms of available
knowledge; a sketch of the cycle appears below.
UMBRA is similar in spirit to AbRA
(Bridewell & Langley, 2011).
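
A minimal sketch of this incremental, data-driven cycle follows. The helper functions passed in (find_candidates, score, complete) are hypothetical stand-ins for unification, evaluation, and default-assumption completion, which the slide does not spell out.

def run_umbra(observations, rules, working_memory, explanation,
              find_candidates, score, complete):
    for obs in observations:                      # observations arrive sequentially
        working_memory.add(obs)                   # accept and store each one
        # find rule instances whose antecedents unify with working-memory elements
        candidates = find_candidates(rules, working_memory)
        if not candidates:
            continue
        # tentatively complete each instance's missing antecedents, then
        # select the rule instance R with the best evaluation score
        best = max(candidates, key=lambda r: score(r, working_memory))
        assumptions = complete(best, working_memory)
        # add R's inferred elements to memory as default assumptions and
        # link the instance into the growing explanation
        working_memory.update(assumptions)
        explanation.append((best, assumptions))
    return explanation                            # continue until no more observations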
11
Experiments on Plan Understanding
Experiments with UMBRA on the Monroe corpus show
that:
  • The system can reconstruct much of the
    higher-level plan structure
  • Even when only a fraction of agent actions are
    observed
  • Incremental abduction is nearly as effective as
    batch processing

12
Results on Plan Understanding
Precision and recall for each problem on ten
batch runs. Precision is very high on some tasks
but not as good on others; the differences are
due to features of problems in the Monroe domain.
Recall is mediocre for similar reasons.
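
For reference, the metrics could be computed roughly as below, assuming an inferred explanation is scored against a gold-standard explanation as a set of elements; the exact scoring used in these experiments may differ.

def precision_recall(inferred: set, gold: set):
    correct = inferred & gold                      # elements UMBRA got right
    precision = len(correct) / len(inferred) if inferred else 0.0
    recall = len(correct) / len(gold) if gold else 0.0
    return precision, recall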
13
Challenge: Social Understanding in Fables
A more challenging task involves reasoning about
plans that take others' mental states into
account. This ability is required to understand
Aesop-style fables like:
The Snake, the Lion, and the Sheep. The lion is
too old to chase down animals. The lion announces
he is sick. The sheep, believing he is harmless,
follows social convention and visits the lion's
cave to pay his respects. The lion kills and
devours the sheep. A snake watches these events
and understands the deception that occurred.
Explanations of such stories include beliefs and
goals about others' beliefs and goals. This
requires extensions to representations in both
working memory and long-term knowledge.
14
Extending Working Memory
UMBRA represents agents' mental states in terms
of embedded structures like:
  • belief(fox, has(crow, grapes, 0930, _), 0931, _)
  • goal(crow, acquire_edible_food(crow, _, _))
  • belief(snake, belief(lion, at_location(lion,
    river, 0900, _), 0902, _), 0902, _)
  • belief(snake, goal(fox, trade_food(crow, grapes,
    fox, grain, 0940, _), 0930, _), 0933, _)
  • goal(lion, belief(sheep, sick(lion, 0900, 2400),
    0945, _), 0900, _)

Elements of this sort provide building blocks for
explanations of scenarios that involve agents
reasoning about others.
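
One possible encoding of such nested elements, shown only as an illustration (UMBRA's internal representation is not given on the slide), is as plain nested tuples in working memory:

# belief(snake, belief(lion, at_location(lion, river, 0900, _), 0902, _), 0902, _)
nested_belief = (
    "belief", "snake",
    ("belief", "lion",
        ("at_location", "lion", "river", "0900", None),
     "0902", None),
    "0902", None,
)

# goal(lion, belief(sheep, sick(lion, 0900, 2400), 0945, _), 0900, _)
deceptive_goal = (
    "goal", "lion",
    ("belief", "sheep",
        ("sick", "lion", "0900", "2400"),
     "0945", None),
    "0900", None,
)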
15
Extending Knowledge about Activities
UMBRA also requires planning operators that
influence others' mental states, such as for
communicative actions:
announce_falsehood(Actor, Agent2, Content, T_S, T_END):
    neg(dead(Actor, T1, T2)),
    exists(Actor, T3, T4),
    belief(Actor, neg(Content), T5, T6),
    agent(Actor), agent(Agent2),
    announce_act(Actor, Agent2, Content, T_S, T_END),
    belief(Agent2, Content, T_END, T7),
    belief(Actor, belief(Agent2, Content, T_END, T8), T_END, T9),
    constraint(inside(T_S, T_END, T1, T2)),
    constraint(before(T_END, T8)),
    constraint(before(T_S, T_END)).
These structures, combined with domain knowledge,
support abductive construction of complex social
explanations.
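
As a sketch of how abducing an announce_falsehood instance would add nested-belief effects to working memory, consider the following; the function and variable names are hypothetical and serve only to illustrate the slide's operator.

def apply_announce_falsehood(wm, actor, agent2, content, t_s, t_end):
    # the hearer comes to believe the (false) content ...
    wm.add(("belief", agent2, content, t_end, None))
    # ... and the speaker believes that the hearer now believes it
    wm.add(("belief", actor,
            ("belief", agent2, content, t_end, None), t_end, None))

# Example from the fable: the lion announces he is sick so the sheep believes it.
# apply_announce_falsehood(wm, "lion", "sheep",
#                          ("sick", "lion", "0900", "2400"), "0900", "0905")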
16
A Testbed for Social Understanding
We have constructed a domain and test scenarios,
based largely on Aesop's fables, with knowledge
that includes
  • About 60 distinct skills / operators
  • Alternative decompositions
  • Many with overlapping conditions
  • Only ten percent used in any 'correct' fable
    explanation
  • About 500 domain-level conditions, excluding
    constraints
  • About 100 distinct domain-level predicates

Most of the six scenarios involve plans that
depend on one or more agents reasoning about the
mental states of others.
17
Results on Social Understanding
We have tested UMBRA on fable scenarios that
involve different levels of complexity beyond
basic plan understanding.
Nested understanding: The primary agent
interprets another agent's mental states and/or
plan based on observed behavior. Feeling hungry,
a crow travels to a barn and acquires grain by
opening a jar. A snake watches and understands
the crow solving her simple problem.
Deeply nested understanding: The primary agent
infers a secondary agent's inferences about a
third agent's mental states. A fox, watching the
snake watching the crow, imagines what the snake
thinks about the crow's situation.
Inferring mistakes in understanding: The primary
agent infers another agent's mistaken beliefs,
why they arise, and the true account. A lion is
proud of his mane. He passes by a river, sees his
reflection, and attacks the other lion. An
observing snake infers why he takes this action.
18
Results on Social Understanding
Reasoning about opportunism in understanding: The
primary agent understands how another agent
capitalizes upon another's false beliefs. A
hungry crow in possession of some sour grapes
trades them to a fox, who assumes they are sweet,
in return for delicious grain. A watching snake
explains the interaction.
Reasoning about deception in understanding: The
primary agent infers that another agent
deliberately engenders false beliefs in a third
agent in order to achieve some goal. A lion is
too old to chase down animals. The lion announces
he is sick. The sheep, believing he is harmless,
follows social convention and visits the lion's
cave to pay his respects. The lion kills and
devours the sheep. A snake watches these events
and understands the deception that occurred.
UMBRA constructs the desired explanations for
each scenario, some of which involve deeply
embedded mental models.
19
Complete Structure of a Fable Explanation
Legend: green = condition, yellow = effect, orange = invariant, blue = constraint, diamond = task / skill.
20
Portion of a Fable Explanation
Legend: green = condition, yellow = effect, orange = invariant, blue = constraint, diamond = task / skill.
21
One Element of a Fable Explanation
Legend: green = condition, yellow = effect, orange = invariant, blue = constraint, diamond = task / skill.
22
Challenge: Moral Judgement
An even more challenging cognitive task involves
complex moral judgement, which we can specify as
  • Given A sequence S of observed actions,
    including the agent(s) A who performed them
  • Given Knowledge about these and related events,
    including their relation to moral concepts
  • Infer An explanation E that accounts for S in
    terms of this knowledge and A's beliefs, goals,
    and intentions, and
  • Infer A moral evaluation of S that takes into
    account the explanation E.

This task combines plan understanding with
evaluation in terms of moral concepts.
23
Claims about Moral Judgement
We maintain that complex moral judgement is a
form of social plan understanding in that it
  • Focuses on the mental states of agents who
    interact in a given scenario
  • Depends on rules that abstract away from
    domain-specific details and focus on relations
    among mental states
  • Involves the linking of rule instances into some
    connected explanation of observed behavior.

However, the process also relies on calculating
numeric values on elements that reflect
evaluations of behavior.
24
A Sample Moral Explanation
Consider a scenario in which one agent (John)
causes another (Kelly) to feel pain by shoving
her.
We might infer that John carried out this action
deliberately so that Kelly would experience
distress.
25
Evaluations of Moral Explanations
We plan to extend UMBRA to support the evaluation
of moral explanations by:
  • Adding numeric annotations to long-term
    knowledge structures:
  • A default weight for each conceptual
    predicate
  • An upward factor for each rule's antecedent
  • A downward factor for each rule's antecedent
  • Calculating an evaluation for each element in an
    explanation by:
  • Multiplying the sum of upward factors by the
    default value and propagating the result upward
    to the root(s)
  • Multiplying downward factors by the accrued
    values at the root(s)

We also maintain that top-down influences account
for the effect of mitigating factors on judgement
scores.
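
A rough sketch of this evaluation scheme appears below, under the assumption that an explanation is a tree of elements whose rule instances carry per-antecedent upward and downward factors. The data layout and exact combination rule are hypothetical readings of the slide, not UMBRA's implemented method.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    default_weight: float            # default weight of the conceptual predicate
    upward_factors: List[float]      # one upward factor per antecedent of its rule
    downward_factor: float = 1.0     # downward factor for this element
    children: List["Element"] = field(default_factory=list)
    score: float = 0.0

def upward_pass(e: Element) -> float:
    # multiply the sum of upward factors by the default value,
    # then propagate the accumulated result toward the root(s)
    value = e.default_weight * sum(e.upward_factors)
    return value + sum(upward_pass(c) for c in e.children)

def downward_pass(e: Element, accrued: float) -> None:
    # multiply downward factors by the value accrued at the root(s),
    # so top-down (mitigating) influences can adjust judgement scores
    e.score = accrued * e.downward_factor
    for c in e.children:
        downward_pass(c, e.score)

def evaluate(root: Element) -> None:
    downward_pass(root, upward_pass(root))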
26
Problem Solving in ICARUS
The current ICARUS architecture incorporates a
distinct module for problem solving that:
  • Utilizes means-ends analysis
  • Carries out depth-first search
  • Interleaves tightly with skill execution
  • Cannot reason about others' mental states

These features do not reflect the character of
human problem solving, which is far more
flexible. Our new framework aims to support such
flexibility by using meta-level knowledge.
27
Flexible Problem Solving
We have redesigned and reimplemented our
meta-level approach to problem solving to support
different:
  • Search strategies (depth-first, breadth-first,
    iterative sampling)
  • Intention selection strategies (means-ends,
    forward search)
  • Intention application strategies (eager, delayed
    commitment)
  • Failure conditions (depth-limited, effort-limited,
    loops)
  • Solution conditions (single, multiple, all)

These behaviors are produced by differences among
meta-level, domain-independent control rules
associated with five modules. Soar (Laird, 2012)
takes a similar but finer-grained approach; our
framework is closer to that in Prodigy (Minton,
1988).
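
One way to picture how domain-independent control rules select among these options is sketched below; the option strings mirror the slide, while the configuration mechanism itself is hypothetical.

from dataclasses import dataclass

@dataclass
class ControlRules:
    search: str = "depth-first"                # or "breadth-first", "iterative-sampling"
    intention_selection: str = "means-ends"    # or "forward-search"
    intention_application: str = "eager"       # or "delayed-commitment"
    failure: str = "depth-limited"             # or "effort-limited", "loop-detection"
    solutions: str = "single"                  # or "multiple", "all"

# Swapping one rule set for another changes the solver's behavior without
# touching domain knowledge, e.g. a classical means-ends configuration:
means_ends_config = ControlRules(search="depth-first",
                                 intention_selection="means-ends",
                                 intention_application="eager")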
28
Organization of Problem Solving
Problem solving occurs in cycles, with meta-level
rules determining the system's behavior at each
successive stage.
29
Problem Decompositions
Problems serve as the central organizing
structure in our framework. Down subproblems have
the same state as their parents; right
subproblems have the same goals as their parents.
This organization is the same as that in
means-ends problem solving, but we use it to
support very different strategies.
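
A minimal sketch of this decomposition, with illustrative names: a "down" subproblem keeps its parent's state but takes new goals, while a "right" subproblem keeps its parent's goals but starts from a new state.

from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass
class Problem:
    state: FrozenSet[str]          # literals describing the current situation
    goals: FrozenSet[str]          # literals the solver must achieve
    parent: Optional["Problem"] = None

def down_subproblem(parent: Problem, new_goals: FrozenSet[str]) -> Problem:
    # same state as the parent, different goals
    return Problem(state=parent.state, goals=new_goals, parent=parent)

def right_subproblem(parent: Problem, new_state: FrozenSet[str]) -> Problem:
    # same goals as the parent, different state
    return Problem(state=new_state, goals=parent.goals, parent=parent)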
30
Plans for Future Research
Although we have made substantial progress toward
the project goals, we still need to:
  • Extend UMBRA to support belief revision when it
    decides its default assumptions are faulty
  • Augment the meta-level problem solver to support
    execution of plans in the environment
  • Integrate UMBRA's inference mechanism with our
    approach to flexible problem solving
  • Introduce mechanisms for learning structures from
    explanations
  • Carry out experiments to demonstrate these
    extensions' benefits

The resulting architecture should offer a more
complete account of high-level cognition in
humans.
31
Summary Remarks
In this talk, I presented elements of a new
cognitive architecture that addresses limitations
of ICARUS in that it:
  • Represents mental states in terms of embedded
    beliefs / goals
  • Incorporates an incremental approach to
    abductive inference
  • Combines these to support plan understanding:
  • Basic explanations of observed physical
    activities
  • Explanations that involve agents reasoning about
    other agents
  • Moral judgements that include inferences about
    agent intentions
  • Uses meta-level control to support flexible
    problem solving

When integrated, these should give us a new
version of ICARUS that has substantially greater
breadth and flexibility.
32
Publications and Presentations
Langley, P. (2012). The cognitive systems
paradigm. Advances in Cognitive Systems, 1, 3-13.
Langley, P. (2012). Intelligent behavior in
humans and machines. Advances in Cognitive
Systems, 2, 3-12.
MacLellan, C., Langley, P., & Walker, C. (2012).
A generative theory of problem solving. Poster
Collection, First Annual Conference on Advances
in Cognitive Systems, 1-18.
Meadows, B., Langley, P., & Emery, M. (in press).
Seeing beyond shadows: Incremental abductive
explanation for plan understanding. Proceedings
of the AAAI-2013 Workshop on Plan, Activity, and
Intent Recognition.
Liu, L., Langley, P., & Meadows, B. (in press). A
computational account of complex moral judgement.
Proceedings of the Annual Conference of the
International Association for Computing and
Philosophy.
The Cognitive Systems Paradigm. Presented at the
AAAI Fall Symposium on Advances in Cognitive
Systems, Arlington, VA, November 2011.
Intelligent Behavior in Humans and Machines.
Presented at the First Annual Conference on
Advances in Cognitive Systems, Palo Alto, CA,
December 2012.
33
Cooperative Development
Our research on this project has benefited from
results produced in a number of other efforts:
  • Commitments to hierarchical concepts / skills
    borrowed from the initial ICARUS architecture
    developed under ONR funding
  • Representation of mental states developed jointly
    with an ONR MURI project at CMU
  • Ideas on abductive inference co-developed with W.
    Bridewell in ONR MURI work at Stanford

These efforts have let us make more rapid
progress than would have been possible otherwise.
34
Transition Plan
Our research on computational social cognition
has clear uses in virtual agents and human-robot
interaction. In the longer term, we hope to
transition our results to applied settings like
  • Virtual medical assistants that interact with
    field medics to help them provide emergency care
  • Cognitive robots that interact with Navy
    personnel dealing with shipboard problems (e.g.,
    fighting fires)

We hope to take advantage of existing
relationships with NRL researchers to increase
the chances of successful transitions.
35
Project Budget
  • The research project's budget, by federal fiscal
    year, is:
  • FY2012: $118K
  • FY2013: $179K
  • FY2014: $182K
  • FY2014: $60K
  • No DURIP awards were made in relation to this
    project.

36
End of Presentation