Computational Discovery of Communicable Knowledge - PowerPoint PPT Presentation

Title: Computational Discovery of Communicable Knowledge
Author: Pat Langley
Created: 11/3/1999
Slides: 111
Learn more at: http://www.isle.org

Transcript and Presenter's Notes


1
A Cognitive Architecture for Complex Learning
Pat Langley
Institute for the Study of Learning and Expertise
Palo Alto, California
http://www.isle.org/
Thanks to D. Choi, K. Cummings, N. Nejati, S. Rogers, S. Sage, and D. Shapiro for their contributions. This talk reports research funded by grants from DARPA IPTO and the National Science Foundation, which are not responsible for its contents.
2
Reasons for Studying Cognitive Architectures
3
Cognitive Systems
  • The original goal of artificial intelligence was
    to design and implement computational artifacts
    that
  • handled difficult tasks that require cognitive
    processing
  • combined many capabilities into integrated
    systems
  • provided insights into the nature of mind and
    intelligence.

Instead, modern AI has divided into many
subfields that care little about cognition,
systems, or intelligence. But the challenge
remains and we need far more research on
cognitive systems.
4
The Fragmentation of AI Research
5
The Domain of In-City Driving
  • Consider driving a vehicle in a city, which
    requires
  • selecting routes
  • obeying traffic lights
  • avoiding collisions
  • being polite to others
  • finding addresses
  • staying in the lane
  • parking safely
  • stopping for pedestrians
  • following other vehicles
  • delivering packages
  • These tasks range from low-level execution to
    high-level reasoning.

6
Newell's Critique
In 1973, Allen Newell argued "You can't play twenty questions with nature and win." Instead, he proposed that we
  • move beyond isolated phenomena and capabilities
    to develop complete models of intelligent
    behavior
  • demonstrate our systems' intelligence on the same
    range of domains and tasks as humans can handle
  • view artificial intelligence and cognitive
    psychology as close allies with distinct but
    related goals
  • evaluate these systems in terms of generality and
    flexibility rather than success on a single class
    of tasks.
  • However, there are different paths toward
    achieving such systems.

7
A System with Communicating Modules
software engineering / multi-agent systems
8
A System with Shared Short-Term Memory
short-term beliefs and goals
blackboard architectures
9
Integration vs. Unification
  • Newell's vision for research on theories of
    intelligence was that
  • cognitive systems should make strong theoretical
    assumptions about the nature of the mind
  • theories of intelligence should change only
    gradually, as new structures or processes are
    determined necessary
  • later design choices should be constrained
    heavily by earlier ones, not made independently.

A successful framework is all about mutual
constraints, and it should provide a unified
theory of intelligent behavior. He associated
these aims with the idea of a cognitive
architecture.
10
A System with Shared Long-Term Memory
short-term beliefs and goals long-term
memory structures
cognitive architectures
11
A Constrained Cognitive Architecture
short-term beliefs and goals long-term
memory structures
12
Aspects of Cognitive Architectures
  • As traditionally defined and utilized, a
    cognitive architecture
  • specifies the infrastructure that holds constant
    over domains, as opposed to knowledge, which
    varies.
  • models behavior at the level of functional
    structures and processes, not the knowledge or
    implementation levels.
  • commits to representations and organizations of
    knowledge and processes that operate on them.
  • comes with a programming language for encoding
    knowledge and constructing intelligent systems.

Early candidates were cast as production system
architectures, but alternatives have gradually
expanded the known space.
13
The ICARUS Architecture
In this talk I will use one such framework, ICARUS, to illustrate the advantages of cognitive architectures. ICARUS incorporates a variety of assumptions from psychological theories; the most basic are that
  1. Short-term memories are distinct from long-term
    stores
  2. Memories contain modular elements cast as list
    structures
  3. Long-term structures are accessed through pattern
    matching
  4. Cognition occurs in retrieval/selection/action
    cycles
  5. Performance and learning compose elements in
    memory

These claims give ICARUS much in common with
other cognitive architectures like ACT-R, Soar,
and Prodigy.
14
Memories and Representations in the ICARUS
Architecture
15
Architectural Commitment to Memories
  • A cognitive architecture makes a specific
    commitment to
  • long-term memories that store knowledge and
    procedures
  • short-term memories that store beliefs and
    goals
  • sensori-motor memories that hold percepts and
    actions.
  • For each memory, a cognitive architecture also
    commits to
  • the encoding of contents in that memory
  • the organization of structures within the
    memory
  • the connections among structures across
    memories.

Each memory holds different content that the agent uses in its activities.
16
Ideas about Representation
Cognitive psychology makes important
representational claims
  • concepts and skills encode different aspects of
    knowledge that are stored as distinct cognitive
    structures
  • cognition occurs in a physical context, with
    concepts and skills being grounded in perception
    and action
  • many mental structures are relational in nature,
    in that they describe connections or interactions
    among objects
  • long-term memories have hierarchical
    organizations that define complex structures in
    terms of simpler ones
  • each element in a short-term memory is an active
    version of some structure in long-term memory.

ICARUS adopts these assumptions about the
contents of memory.
17
ICARUS Memories
Perceptual Buffer
Long-Term Conceptual Memory
Short-Term Belief Memory
Environment
Long-Term Skill Memory
Short-Term Goal Memory
Motor Buffer
18
Representing Long-Term Structures
ICARUS encodes two forms of general long-term
knowledge
  • Conceptual clauses: A set of relational inference
    rules with perceived objects or defined concepts
    in their antecedents
  • Skill clauses: A set of executable skills that
    specify
  • a head that indicates a goal the skill achieves
  • a single (typically defined) precondition
  • a set of ordered subgoals or actions for
    achieving the goal.

These define a specialized class of hierarchical task networks in which each task corresponds to a goal concept. ICARUS' syntax is very similar to Nau et al.'s SHOP2 formalism for hierarchical task networks.
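The two structure types can be sketched as simple records. The following is an illustrative Python rendering, not the actual ICARUS implementation (which uses Lisp-style list structures); all field and variable names simply mirror the slide's terminology.

```python
from dataclasses import dataclass, field

@dataclass
class ConceptClause:
    head: tuple                                    # the defined relation
    percepts: list = field(default_factory=list)   # perceived-object patterns
    relations: list = field(default_factory=list)  # subconcept patterns
    tests: list = field(default_factory=list)      # arithmetic tests

@dataclass
class SkillClause:
    head: tuple                                    # goal the skill achieves
    percepts: list = field(default_factory=list)
    start: list = field(default_factory=list)      # single precondition
    subgoals: list = field(default_factory=list)   # ordered subgoals
    actions: list = field(default_factory=list)    # primitive actions

# One of the driving skills from the next slides, as such a record.
rightmost = SkillClause(
    head=('in-rightmost-lane', '?self', '?line'),
    percepts=[('self', '?self'), ('line', '?line')],
    start=[('last-lane', '?line')],
    subgoals=[('driving-well-in-segment', '?self', '?seg', '?line')])
```

A clause is either decomposed further through its subgoals or bottoms out in primitive actions, which is what makes the skill memory a hierarchical task network.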
19
ICARUS Concepts for In-City Driving
((in-rightmost-lane ?self ?clane)
  percepts  ((self ?self) (segment ?seg) (line ?clane segment ?seg))
  relations ((driving-well-in-segment ?self ?seg ?clane)
             (last-lane ?clane)
             (not (lane-to-right ?clane ?anylane))))

((driving-well-in-segment ?self ?seg ?lane)
  percepts  ((self ?self) (segment ?seg) (line ?lane segment ?seg))
  relations ((in-segment ?self ?seg)
             (in-lane ?self ?lane)
             (aligned-with-lane-in-segment ?self ?seg ?lane)
             (centered-in-lane ?self ?seg ?lane)
             (steering-wheel-straight ?self)))

((in-lane ?self ?lane)
  percepts ((self ?self segment ?seg) (line ?lane segment ?seg dist ?dist))
  tests    ((gt ?dist -10) (lt ?dist 0)))

((in-segment ?self ?seg)
  percepts ((self ?self segment ?seg) (segment ?seg)))
20
ICARUS Skills for In-City Driving
((in-rightmost-lane ?self ?line)
  percepts ((self ?self) (line ?line))
  start    ((last-lane ?line))
  subgoals ((driving-well-in-segment ?self ?seg ?line)))

((driving-well-in-segment ?self ?seg ?line)
  percepts ((segment ?seg) (line ?line) (self ?self))
  start    ((steering-wheel-straight ?self))
  subgoals ((in-segment ?self ?seg)
            (centered-in-lane ?self ?seg ?line)
            (aligned-with-lane-in-segment ?self ?seg ?line)
            (steering-wheel-straight ?self)))

((in-segment ?self ?endsg)
  percepts ((self ?self speed ?speed)
            (intersection ?int cross ?cross)
            (segment ?endsg street ?cross angle ?angle))
  start    ((in-intersection-for-right-turn ?self ?int))
  actions  ((?steer 1)))
21
Representing Short-Term Beliefs/Goals
(current-street me A)
(current-segment me g550)
(lane-to-right g599 g601)
(first-lane g599)
(last-lane g599)
(last-lane g601)
(at-speed-for-u-turn me)
(slow-for-right-turn me)
(steering-wheel-not-straight me)
(centered-in-lane me g550 g599)
(in-lane me g599)
(in-segment me g550)
(on-right-side-in-segment me)
(intersection-behind g550 g522)
(building-on-left g288)
(building-on-left g425)
(building-on-left g427)
(building-on-left g429)
(building-on-left g431)
(building-on-left g433)
(building-on-right g287)
(building-on-right g279)
(increasing-direction me)
(buildings-on-right g287 g279)
22
Encoding Perceived Objects
(self me speed 5 angle-of-road -0.5 steering-wheel-angle -0.1)
(segment g562 street 1 dist -5.0 latdist 15.0)
(line g564 length 100.0 width 0.5 dist 35.0 angle 1.1 color white segment g562)
(line g565 length 100.0 width 0.5 dist 15.0 angle 1.1 color white segment g562)
(line g563 length 100.0 width 0.5 dist 25.0 angle 1.1 color yellow segment g562)
(segment g550 street A dist oor latdist nil)
(line g600 length 100.0 width 0.5 dist -15.0 angle -0.5 color white segment g550)
(line g601 length 100.0 width 0.5 dist 5.0 angle -0.5 color white segment g550)
(line g599 length 100.0 width 0.5 dist -5.0 angle -0.5 color yellow segment g550)
(intersection g522 street A cross 1 dist -5.0 latdist nil)
(building g431 address 99 street A c1dist 38.2 c1angle -1.4 c2dist 57.4 c2angle -1.0)
(building g425 address 25 street A c1dist 37.8 c1angle -2.8 c2dist 56.9 c2angle -3.1)
(building g389 address 49 street 1 c1dist 49.2 c1angle 2.7 c2dist 53.0 c2angle 2.2)
(sidewalk g471 dist 15.0 angle -0.5)
(sidewalk g474 dist 5.0 angle 1.07)
(sidewalk g469 dist -25.0 angle -0.5)
(sidewalk g470 dist 45.0 angle 1.07)
(stoplight g538 vcolor green hcolor red)
23
Hierarchical Structure of Long-Term Memory
ICARUS organizes both concepts and skills in a
hierarchical manner.
concepts
Each concept is defined in terms of other
concepts and/or percepts. Each skill is defined
in terms of other skills, concepts, and percepts.
skills
24
Hierarchical Structure of Long-Term Memory
ICARUS interleaves its long-term memories for
concepts and skills.
For example, the skill highlighted here refers
directly to the highlighted concepts.
25
Performance and Learning in the ICARUS
Architecture
26
Architectural Commitment to Processes
  • In addition, a cognitive architecture makes
    commitments about
  • performance processes for
  • retrieval, matching, and selection
  • inference and problem solving
  • perception and motor control
  • learning processes that
  • generate new long-term knowledge structures
  • refine and modulate existing structures

In most cognitive architectures, performance and
learning are tightly intertwined.
27
Ideas about Performance
Cognitive psychology makes clear claims about
performance
  • humans can handle multiple goals with different
    priorities, which can interrupt tasks to which
    attention returns later
  • conceptual inference, which typically occurs
    rapidly and unconsciously, is more basic than
    problem solving
  • humans often resort to means-ends analysis to
    solve novel, unfamiliar problems
  • mental problem solving requires greater cognitive
    resources than execution of automatized skills
  • problem solving often occurs in a physical
    context, with mental processing being interleaved
    with execution.

ICARUS embodies these ideas in its performance
mechanisms.
28
ICARUS Functional Processes
Perceptual Buffer
Short-Term Belief Memory
Long-Term Conceptual Memory
Conceptual Inference
Perception
Environment
Skill Retrieval and Selection
Short-Term Goal Memory
Long-Term Skill Memory
Skill Execution
Problem Solving Skill Learning
Motor Buffer
29
Cascaded Integration in ICARUS
ICARUS adopts a cascaded approach to system
integration in which lower-level modules produce
results for higher-level ones.
learning
problem solving
skill execution
conceptual inference
This contrasts sharply with multi-agent
approaches to building AI systems and reflects
the notion of a unified cognitive architecture.
30
ICARUS Inference-Execution Cycle
On each successive execution cycle, the ICARUS
architecture
  1. places descriptions of sensed objects in the
    perceptual buffer
  2. infers instances of concepts implied by the
    current situation
  3. finds paths through the skill hierarchy from
    top-level goals
  4. selects one or more applicable skill paths for
    execution
  5. invokes the actions associated with each selected
    path.

ICARUS agents are teleoreactive (Nilsson, 1994) in that they execute reactively but in a goal-directed manner.
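The five steps can be rendered as a toy loop. This is a hedged sketch: the rules, skills, and atoms below are invented for illustration, ground literals stand in for patterns, and real ICARUS matches structures with variables.

```python
def infer(percepts, concept_rules):
    # Steps 1-2: start from sensed atoms and repeatedly add concept
    # instances whose antecedents are already believed (ground rules only).
    beliefs = set(percepts)
    changed = True
    while changed:
        changed = False
        for head, body in concept_rules:
            if body <= beliefs and head not in beliefs:
                beliefs.add(head)
                changed = True
    return beliefs

def cycle(percepts, concept_rules, skills, goals):
    beliefs = infer(percepts, concept_rules)
    actions = []
    for goal in goals:                        # step 3: paths from top-level goals
        if goal in beliefs:
            continue                          # goal already satisfied
        for skill in skills.get(goal, []):    # step 4: select applicable skills
            if skill['start'] <= beliefs:
                actions.extend(skill['actions'])   # step 5: invoke the actions
                break
    return beliefs, actions

# Invented driving-flavored example.
rules = [(('in-segment', 'me', 'g550'),
          {('self', 'me'), ('segment', 'g550')})]
skills = {('in-rightmost-lane', 'me'):
          [{'start': {('in-segment', 'me', 'g550')},
            'actions': [('steer', 1)]}]}
beliefs, actions = cycle({('self', 'me'), ('segment', 'g550')},
                         rules, skills, [('in-rightmost-lane', 'me')])
```

Because beliefs are recomputed from percepts on every cycle, the agent reacts to the environment while its goals decide which skill paths are worth executing.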
31
Inference and Execution in ICARUS
ICARUS matches patterns to recognize concepts and
select skills.
concepts
Concepts are matched bottom up, starting from
percepts. Skill paths are matched top down,
starting from intentions.
skills
32
The Standard Theory of Problem Solving
  • Traditional theories claim that human problem
    solving occurs in response to unfamiliar tasks
    and involves
  • the mental inspection and manipulation of list
    structures
  • search through a space of states generated by
    operators
  • backward chaining from goals through means-ends
    analysis
  • a shift from backward to forward chaining with
    experience.

These claims characterize problem solving
accurately, but this does not mean they are
complete.
33
The Physical Context of Problem Solving
ICARUS is a cognitive architecture for physical,
embodied agents. On each successive
perception-execution cycle, the architecture
  1. places descriptions of sensed objects in the
    perceptual buffer
  2. infers instances of concepts implied by the
    current situation
  3. finds paths through the skill hierarchy from
    top-level goals
  4. selects one or more applicable skill paths for
    execution
  5. invokes the actions associated with each selected
    path.

Problem solving in ICARUS builds upon this basic
ability to recognize physical situations and
execute skills therein.
34
Abstraction from Physical Details
ICARUS typically pursues problem solving at an
abstract level
  • conceptual inference augments perceptions using
    high-level concepts that provide abstract state
    descriptions.
  • execution operates over high-level durative
    skills that serve as abstract problem-space
    operators.
  • both inference and execution occur in an
    automated manner that demands few attentional
    resources.

However, concepts are always grounded in primitive percepts, and skills always terminate in executable actions. ICARUS holds that cognition relies on a physical symbol system that utilizes mental models of the environment.
35
A Successful Problem-Solving Trace
[Figure: a successful problem-solving trace in the blocks world, linking the initial state to the goal through concept instances such as (clear C), (hand-empty), (unstackable C B), (clear B), (on C B), (unstackable B A), (clear A), (ontable A T), (holding C), (on B A), and (holding B), and the skill instances (unstack C B), (unstack B A), and (putdown C T).]
36
Interleaved Problem Solving and Execution
ICARUS includes a module for means-ends problem
solving that
  • chains backward off skills that would produce the
    goal
  • chains backwards off concepts if no skills are
    available
  • creates subgoals based on skill or concept
    conditions
  • pushes these subgoals onto a goal stack and
    recurses
  • executes any selected skill as soon as it is
    applicable.

Embedding execution within problem solving
reduces memory load and uses the environment as
an external store.
37
ICARUS Interleaves Execution and Problem Solving
[Diagram: a problem is addressed by reactive execution over the skill hierarchy; when an impasse arises, problem solving over primitive skills resolves it, yielding an executed plan.]
38
Interleaving Reactive Control and Problem Solving
Solve(G)
  Push the goal literal G onto the empty goal stack GS.
  On each cycle,
    If the top goal G of the goal stack GS is satisfied,
    Then pop GS.
    Else if the goal stack GS does not exceed the depth limit,
      Let S be the skill instances whose heads unify with G.
      If any applicable skill paths start from an instance in S,
      Then select one of these paths and execute it.
      Else let M be the set of primitive skill instances that have
           not already failed in which G is an effect.
        If the set M is nonempty,
        Then select a skill instance Q from M.
             Push the start condition C of Q onto goal stack GS.
        Else if G is a complex concept with the unsatisfied
             subconcepts H and with satisfied subconcepts F,
        Then if there is a subconcept I in H that has not yet failed,
             Then push I onto the goal stack GS.
             Else pop G from the goal stack GS and store information
                  about failure with G's parent.
        Else pop G from the goal stack GS.
             Store information about failure with G's parent.
This is traditional means-ends analysis, with three exceptions: (1) conjunctive goals must be defined concepts; (2) chaining occurs over both skills/operators and concepts/axioms; and (3) selected skills are executed whenever applicable.
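A stripped-down, runnable rendering of this loop is sketched below, under strong simplifying assumptions: ground literals only, chaining over skills but not concepts, no depth limit, and no failure memory. The blocks-world operators are invented for the example.

```python
def solve(goal, state, skills, max_cycles=100):
    """Means-ends loop: satisfy the top of the goal stack by executing a
    skill that achieves it as soon as one is applicable; otherwise push
    an unmet start condition as a subgoal. Each skill is a dict with
    'pre', 'adds', and 'dels' sets of ground literals."""
    state, stack = set(state), [goal]
    for _ in range(max_cycles):
        if not stack:
            return state                        # all goals satisfied
        top = stack[-1]
        if top in state:
            stack.pop()                         # top goal holds; pop it
            continue
        options = [s for s in skills if top in s['adds']]
        applicable = [s for s in options if s['pre'] <= state]
        if applicable:                          # execute eagerly
            s = applicable[0]
            state = (state - s['dels']) | s['adds']
        elif options:                           # subgoal on a start condition
            s = options[0]
            stack.append(next(c for c in s['pre'] if c not in state))
        else:
            raise ValueError(f'no skill achieves {top}')
    raise RuntimeError('cycle limit exceeded')

def unstack(x, y):
    return {'pre': {f'on-{x}-{y}', f'clear-{x}', 'hand-empty'},
            'adds': {f'holding-{x}', f'clear-{y}'},
            'dels': {f'on-{x}-{y}', f'clear-{x}', 'hand-empty'}}

def putdown(x):
    return {'pre': {f'holding-{x}'},
            'adds': {f'ontable-{x}', f'clear-{x}', 'hand-empty'},
            'dels': {f'holding-{x}'}}

skills = [unstack('C', 'B'), unstack('B', 'A'), putdown('C'), putdown('B')]
init = {'on-C-B', 'on-B-A', 'ontable-A', 'clear-C', 'hand-empty'}
final = solve('clear-A', init, skills)
```

Note that the sketch changes the state as soon as a skill becomes applicable, which is exception (3) above: execution is interleaved with, not deferred until after, the search.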
39
Restarting on Problems
Even when combined with backtracking, eager execution can lead problem solving into unrecoverable states. The ICARUS problem solver handles such untenable situations by
  • detecting when action has made backtracking
    impossible
  • storing the goal context to avoid repeating the
    error
  • physically restarting the problem in the initial
    situation
  • repeating this process until succeeding or giving
    up.

This strategy produces quite different behavior
from the purely mental systematic search assumed
by most models.
40
Claims about Learning
Cognitive psychology has also developed ideas
about learning
  • efforts to overcome impasses during problem
    solving can lead to the acquisition of new
    skills
  • learning can transform backward-chaining
    heuristic search into more informed
    forward-chaining behavior
  • learning is incremental and interleaved with
    performance
  • structural learning involves monotonic addition
    of symbolic elements to long-term memory
  • transfer to new tasks depends on the amount of
    structure shared with previously mastered tasks.

ICARUS incorporates these assumptions into its
basic operation.
41
Learning from Problem Solutions
ICARUS incorporates a mechanism for learning new
skills that
  • operates whenever problem solving overcomes an
    impasse
  • incorporates only information available from the
    goal stack
  • generalizes beyond the specific objects
    concerned
  • depends on whether chaining involved skills or
    concepts
  • supports cumulative learning and within-problem
    transfer.

This skill creation process is fully interleaved
with means-ends analysis and execution. Learned
skills carry out forward execution in the
environment rather than backward chaining in the
mind.
42
ICARUS Learns Skills from Problem Solving
[Diagram: reactive execution over primitive skills proceeds until an impasse; problem solving then resolves it, skill learning generalizes the result into new skills, and execution produces the plan.]
43
ICARUS Constraints on Skill Learning
  • What determines the hierarchical structure of
    skill memory?
  • The structure emerges from the subproblems that
    arise during problem solving, which, because operator
    conditions and goals are single literals, form a
    semilattice.
  • What determines the heads of the learned
    clauses/methods?
  • The head of a learned clause is the goal literal
    that the planner achieved for the subproblem that
    produced it.
  • What are the conditions on the learned
    clauses/methods?
  • If the subproblem involved skill chaining, they
    are the conditions of the first subskill clause.
  • If the subproblem involved concept chaining, they
    are the subconcepts that held at the subproblem's
    outset.
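Under these constraints, clause construction is mechanical. The sketch below is hypothetical; the real mechanism also generalizes specific objects to variables, which is omitted here, and the blocks-world literals merely echo the trace on the following slides.

```python
def skill_from_skill_chaining(goal, subskills):
    """Head is the goal achieved for the subproblem; the start condition
    comes from the first subskill clause; subgoals are the subskill heads."""
    return {'head': goal,
            'start': subskills[0]['start'],
            'subgoals': [s['head'] for s in subskills]}

def skill_from_concept_chaining(goal, satisfied, achieved):
    """Start condition is the subconcepts that held at the subproblem's
    outset; subgoals are the subconcepts in the order achieved."""
    return {'head': goal,
            'start': list(satisfied),
            'subgoals': list(achieved)}

# Ground blocks-world literals stand in for generalized patterns.
unstackable = skill_from_concept_chaining(
    ('unstackable', 'C', 'B'),
    satisfied=[('on', 'C', 'B'), ('hand-empty',)],
    achieved=[('clear', 'C'), ('hand-empty',)])
```

Because each new clause's head is a goal literal, learned clauses index directly into the goal-driven retrieval that execution already uses.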

44
Constructing Skills from a Trace
[Figure: the blocks-world problem-solving trace shown earlier, with subproblem 1 highlighted as a case of skill chaining.]
45
Constructing Skills from a Trace
[Figure: the same trace with subproblem 2 also highlighted, a second case of skill chaining.]
46
Constructing Skills from a Trace
[Figure: the same trace with subproblem 3 highlighted, a case of concept chaining.]
47
Constructing Skills from a Trace
[Figure: the same trace with subproblem 4 highlighted, a final case of skill chaining.]
48
Learned Skills in the Blocks World
(clear (?C)
  percepts ((block ?D) (block ?C))
  start    ((unstackable ?D ?C))
  skills   ((unstack ?D ?C)))

(clear (?B)
  percepts ((block ?C) (block ?B))
  start    ((on ?C ?B) (hand-empty))
  skills   ((unstackable ?C ?B) (unstack ?C ?B)))

(unstackable (?C ?B)
  percepts ((block ?B) (block ?C))
  start    ((on ?C ?B) (hand-empty))
  skills   ((clear ?C) (hand-empty)))

(hand-empty ( )
  percepts ((block ?D) (table ?T1))
  start    ((putdownable ?D ?T1))
  skills   ((putdown ?D ?T1)))

Hierarchical skills are generalized traces of successful means-ends problem solving.
49
Initial Results with ICARUS
50
Architectures as Programming Languages
  • Cognitive architectures come with a programming
    language that
  • includes a syntax linked to its representational
    assumptions
  • inputs long-term knowledge and initial short-term
    elements
  • provides an interpreter that runs the specified
    program
  • incorporates tracing facilities to inspect system
    behavior

Such programming languages ease construction and
debugging of knowledge-based systems. For this
reason, cognitive architectures support far more
efficient development of software for intelligent
systems.
51
Programming in ICARUS
  • The programming language associated with ICARUS
    comes with
  • a syntax for concepts, skills, beliefs, and
    percepts
  • the ability to load and parse such programs
  • an interpreter for inference, execution,
    planning, and learning
  • a trace package that displays system behavior
    over time

We have used this language to develop adaptive
intelligent agents in a variety of domains.
52
An ICARUS Agent for Urban Combat
53
Learning Skills for In-City Driving
We have also trained ICARUS to drive in our
in-city environment. We provide the system with
tasks of increasing complexity. Learning
transforms the problem-solving traces into
hierarchical skills. The agent uses these skills
to change lanes, turn, and park using only
reactive control.
54
Skill Clauses Learned for In-City Driving
((parked ?me ?g1152)
  percepts ((lane-line ?g1152) (self ?me))
  start    ( )
  subgoals ((in-rightmost-lane ?me ?g1152) (stopped ?me)))

((in-rightmost-lane ?me ?g1152)
  percepts ((self ?me) (lane-line ?g1152))
  start    ((last-lane ?g1152))
  subgoals ((driving-well-in-segment ?me ?g1101 ?g1152)))

((driving-well-in-segment ?me ?g1101 ?g1152)
  percepts ((lane-line ?g1152) (segment ?g1101) (self ?me))
  start    ((steering-wheel-straight ?me))
  subgoals ((in-lane ?me ?g1152)
            (centered-in-lane ?me ?g1101 ?g1152)
            (aligned-with-lane-in-segment ?me ?g1101 ?g1152)
            (steering-wheel-straight ?me)))
55
Learning Curves for In-City Driving
56
Cumulative Curves for Blocks World
57
Cumulative Curves for Blocks World
58
Cumulative Curves for FreeCell
59
Related Research
60
Intellectual Precursors
ICARUS' design has been influenced by many previous efforts
  • earlier research on integrated cognitive
    architectures
  • especially ACT, Soar, and Prodigy
  • earlier frameworks for reactive control of agents
  • research on belief-desire-intention (BDI)
    architectures
  • planning/execution with hierarchical transition
    networks
  • work on learning macro-operators and
    search-control rules
  • previous work on cumulative structure learning

However, the framework combines and extends ideas
from its various predecessors in novel ways.
61
Some Other Cognitive Architectures
ACT
Soar
PRODIGY
EPIC
RCS
GIPS
3T
APEX
CAPS
CLARION
Dynamic Memory
Society of Mind
62
Similarities to Previous Architectures
ICARUS has much in common with other cognitive
architectures like Soar (Laird et al., 1987) and
ACT-R (Anderson, 1993)
  1. Short-term memories are distinct from long-term
    stores
  2. Memories contain modular elements cast as
    symbolic structures
  3. Long-term structures are accessed through pattern
    matching
  4. Cognition occurs in retrieval/selection/action
    cycles
  5. Learning is incremental and interleaved with
    performance

These ideas all have their origin in theories of
human memory, problem solving, and skill
acquisition.
63
Distinctive Features of ICARUS
However, ICARUS also makes assumptions that
distinguish it from most other architectures
  1. Cognition is grounded in perception and action
  2. Categories and skills are separate cognitive
    entities
  3. Short-term elements are instances of long-term
    structures
  4. Inference and execution are more basic than
    problem solving
  5. Skill/concept hierarchies are learned in a
    cumulative manner

Some of these assumptions appear in Bonasso et al.'s 3T, Freed's APEX, and Sun et al.'s CLARION architectures.
These ideas have their roots in cognitive
psychology, but they are also effective in
building integrated intelligent agents.
64
Directions for Future Research
Future work on ICARUS should introduce additional
methods for
  • forward chaining and mental simulation of skills
  • learning expected utilities from skill execution
    histories
  • learning new conceptual structures in addition to
    skills
  • probabilistic encoding and matching of Boolean
    concepts
  • flexible recognition of skills executed by other
    agents
  • extension of short-term memory to store episodic
    traces.

Taken together, these features should make ICARUS
a more general and powerful cognitive
architecture.
65
Contributions of ICARUS
ICARUS is a cognitive architecture for physical
agents that
  • includes separate memories for concepts and
    skills
  • organizes both memories in a hierarchical
    fashion
  • modulates reactive execution with goal seeking
  • augments routine behavior with problem solving
    and
  • learns hierarchical skills in a cumulative manner.

These ideas have their roots in cognitive
psychology, but they are also effective in
building flexible intelligent agents.
For more information about the ICARUS architecture, see http://cll.stanford.edu/research/ongoing/icarus/
66
Transfer of Learned Knowledge
67
Generality in Learning
[Figure: general learning in multiple domains, with separate training items and test items for each domain.]
Humans exhibit general intelligence by their
ability to learn in many domains.
68
Generality and Transfer in Learning
[Figure: as before, general learning in multiple domains, now with transfer of learning across domains: training items in one domain also inform test items in another.]
Humans exhibit general intelligence by their
ability to learn in many domains.
Humans are also able to utilize knowledge learned
in one domain in other domains.
69
What is Transfer?
A learner exhibits transfer of learning from
task/domain A to task/domain B when, after it has
trained on A, it shows improved behavior on B.
[Figure: four plots of performance against experience: the learning curve for task A, then three forms of transfer to task B, namely a better intercept, a faster learning rate, and a better asymptote.]
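Given measured learning curves on task B with and without prior training on task A, the intercept and asymptote effects can be quantified directly. This helper is an illustration with invented numbers, not a standard metric suite.

```python
def transfer_effects(baseline, transfer):
    """Compare performance on task B without (baseline) and with
    (transfer) prior training on task A; sequences share a length."""
    assert len(baseline) == len(transfer)
    return {
        'intercept': transfer[0] - baseline[0],     # jump start on task B
        'asymptote': transfer[-1] - baseline[-1],   # better final level
        'mean_gain': sum(t - b for t, b in zip(transfer, baseline))
                     / len(baseline),               # overall advantage
    }

# Invented curves: training on A lifts both early and late performance on B.
effects = transfer_effects([0.2, 0.5, 0.7, 0.7], [0.5, 0.7, 0.8, 0.8])
```

Negative values from the same comparison would indicate negative transfer, the harmful case noted on the next slide.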
70
What is Transfer?
  • Transfer is a sequential phenomenon that occurs
    in settings which involve on-line learning.
  • Thus, multi-task learning does not involve
    transfer.
  • Transfer involves the reuse of knowledge
    structures.
  • Thus, it requires more than purely statistical
    learning.
  • Transfer can lead to improved behavior (positive
    transfer).
  • But it can also produce worse behavior (negative
    transfer).
  • Transfer influences learning but is not a form of
    learning.
  • Thus, "transfer learning" is an oxymoron, much
    like the phrase "learning performance."

71
Roots of Transfer in Psychology
  • The notion of transfer comes from psychology,
    where it has been studied for over a hundred
    years
  • benefits of Latin (Thorndike & Woodworth, 1901)
  • puzzle solving (Luchins & Luchins, 1970)
  • operating devices (Kieras & Bovair, 1986)
  • using text editors (Singley & Anderson, 1988)
  • analogical reasoning (Gick & Holyoak, 1983)
  • Some recent studies have included computational
    models that predict the transfer observed under
    different conditions.

72
Domain Classes that Exhibit Transfer
From: tsenator@darpa.mil
To: langley@csli.stanford.edu
Subject: site visit next week
Date: Nov 14, 2004

Pat, I am looking forward to hearing about your progress over the past year during my site visit next week. - Ted

From: noname@somewhere.com
To: langley@csli.stanford.edu
Subject: special offer!!!
Date: Nov 14, 2004

One week only! Buy viagra at half the price available in stores. Go now to http://special.deals.com

Classification tasks that involve assigning items to categories, such as recognizing types of vehicles or detecting spam (Which is an emergency vehicle? Which email is spam?). These are not very interesting.

Procedural tasks that involve execution of routinized skills, both cognitive (e.g., multi-column arithmetic: What are the problem answers?) and sensori-motor (e.g., flying an aircraft: What path should the plane take?).

Inference tasks that require multi-step reasoning to obtain an answer, such as solving physics word problems and aptitude/achievement tests (e.g., A block sits on an inclined plane but is connected to a weight by a string through a pulley. If the angle of the plane is 30 degrees and . . . Which ladder is safer to climb on?).

Problem-solving tasks that benefit from strategic choices and heuristic search, such as complex strategy games (Which jump should red make? What should the blue team do?).
73
Claims About Transfer
Transfer requires the ability to compose these
knowledge elements dynamically.
Transfer requires that knowledge be represented
in a modular fashion.
The degree of transfer depends on the structure
shared with the training tasks.
Transfer across domains requires
abstract relations among representations.
74
Dimensions of Knowledge Transfer
Knowledge transfer complexity is determined
primarily by differences in the knowledge content
and representation between the source and target
problems.
[Figure: a plane whose axes are difference in content and difference in representation between source and target problems, both increasing from 0. The representation axis distinguishes similar representations (e.g., within-domain transfer) from different representations (e.g., most cross-domain transfer). Four regimes appear:
  • Memorization (neither content nor representation differs): we have already solved these problems.
  • Knowledge Reuse / Isomorphism (representation differs): we know the solution to a similar problem with a different representation, possibly from another domain.
  • Problem Solver (content differs): we have not solved this before, but we know other pertinent information about this domain that uses the same representation.
  • First-Principles Reasoning (both differ): we have not solved similar problems, and are not familiar with this domain and problem representation.]
75
Memorization
Improvement in which the transfer tasks are the
same as those encountered during training.
target items
source items
E.g., solving the same geometry problems on a
homework assignment as were presented in class.
This is not very interesting.
76
Within-Domain Lateral Transfer
Improvement on related tasks of similar
difficulty within the same domain that share
goals, initial state, or other structure.
target items
source items
E.g., solving new physics problems that involve
some of the same principles but that also
introduce new ones.
77
Within-Domain Vertical Transfer
Improvement on related tasks of greater
difficulty within the same domain that build on
results from training items.
target items
source items
E.g., solving new physics problems that involve
the same principles but that also require more
reasoning steps.
78
Cross-Domain Lateral Transfer
Improvement on related tasks of similar
difficulty in a different domain that shares
either higher-level or lower-level structures.
target items
source items
E.g., solving problems about electric circuits
that involve some of the same principles as
problems in fluid flow but that also introduce
new ones.
79
Cross-Domain Vertical Transfer
Improvement on related tasks of greater
difficulty in a different domain that share
higher-level or lower-level structures.
target items
source items
E.g., solving physics problems that require
mastery of geometry and algebra or applying
abstract thermodynamic principles to a new
domain.
80
Approaches to Transfer Cumulative Learning
Methods for cumulative learning of
hierarchical skills and concepts define new
cognitive structures in terms of structures
learned on earlier tasks.
This approach is well suited to support vertical
transfer to new tasks of ever increasing
complexity.
Learning can operate on problem-solving traces,
observations of another agent's behavior,
and even on direct instructions.
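The idea of defining new structures in terms of earlier ones can be sketched in a few lines of Python. This is an illustration of the cumulative-learning principle, not ICARUS code; the skill names and tasks are invented for the example.

```python
# Sketch of cumulative skill learning: each new skill is defined as a
# sequence of previously acquired skills, so later tasks can reuse the
# whole hierarchy. Skill names here are illustrative, not from ICARUS.

skills = {"move": None, "turn": None}   # primitive skills have no parts

def learn(name, parts):
    """Define a new skill in terms of skills that already exist."""
    assert all(p in skills for p in parts), "can only build on known skills"
    skills[name] = parts

learn("cross-gateway", ["turn", "move"])           # acquired on an early task
learn("enter-region", ["move", "cross-gateway"])   # builds on the earlier skill

def expand(name):
    """Expand a hierarchical skill down to its primitive actions."""
    if skills[name] is None:
        return [name]
    return [p for part in skills[name] for p in expand(part)]

print(expand("enter-region"))  # ['move', 'turn', 'move']
```

Because `enter-region` reuses `cross-gateway` rather than re-deriving it, a new task that needs both benefits from the earlier learning, which is the essence of vertical transfer.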
81
Approaches to Transfer Analogical Reasoning
Methods for analogical reasoning store cognitive
structures that encode relations in training
problems.
Upon encountering a new problem, they retrieve
stored experiences with similar relational
structure.
Additional relations are then inferred based on
elements in the retrieved problem.
Analogical reasoning can operate over any stored
relational structure, but must map training
elements to transfer elements, which can benefit
from knowledge. This approach is well suited for
lateral transfer to tasks of similar difficulty.
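The retrieval step described above can be sketched very simply: score each stored case by how much relational structure it shares with the new problem. The cases, relations, and overlap measure below are invented for illustration; real analogical reasoners use far richer structure mapping.

```python
# Minimal sketch of retrieval by relational overlap. Stored cases are
# sets of (predicate, arg, arg) relations; we retrieve the case whose
# predicates overlap most with the probe. All names are illustrative.

cases = {
    "circuit": [("connects", "battery", "resistor"), ("drop", "V1", "V2")],
    "pulley":  [("connects", "string", "weight"), ("incline", "plane", "block")],
}

def predicates(relations):
    """The set of relation names appearing in a case."""
    return {r[0] for r in relations}

def retrieve(probe):
    """Return the stored case sharing the most relation names with the probe,
    a crude stand-in for full structure mapping."""
    return max(cases, key=lambda name: len(predicates(cases[name]) & predicates(probe)))

probe = [("connects", "pipe", "valve"), ("drop", "P1", "P2")]
print(retrieve(probe))  # circuit
```

Once the `circuit` case is retrieved, its extra relations (e.g., the law governing the drop) can be mapped onto the fluid-flow elements, which is the inference step the slide describes.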
82
Approaches to Transfer Mapping Representations
Transfer of learned knowledge across domains may
require mapping between their representations of
shared content.
Source domain: Electricity
  Knowledge: Ohm's law, relating the voltage drop
  from V1 to V2, the current I, and the
  electrical resistance R.
Mapping process: aligns the two representations
(V1/V2 with P1/P2, I with F, R with R).
Target domain: Fluid Flow
  Knowledge: Poiseuille's law, relating the
  pressure drop from P1 to P2, the flow F, and
  the resistance to flow R.
Q: If P1 = 3, P2 = 2, and R = 2, then what flow F
results, assuming we only know Ohm's law for
electric currents?
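The mapping can be made concrete as a symbol substitution: because Poiseuille's law has the same algebraic form as Ohm's law, the source-domain law answers the target-domain question once the symbols are renamed. The code below is an illustration of that idea, not part of any described system.

```python
# Cross-domain transfer as representation mapping: Ohm's law from the
# electrical source domain is reused, under a symbol renaming, to answer
# the fluid-flow question. Function names are ours, for illustration.

def ohms_law(v1, v2, r):
    """Source-domain knowledge: current I = (V1 - V2) / R."""
    return (v1 - v2) / r

# The mapping between domain vocabularies.
mapping = {"V1": "P1", "V2": "P2", "I": "F", "R": "R"}

def poiseuille_law(p1, p2, r):
    """Target-domain law, F = (P1 - P2) / R, obtained by applying the
    source-domain law under the mapping above (same algebraic form)."""
    return ohms_law(p1, p2, r)

# The question from the slide: P1 = 3, P2 = 2, R = 2.
print(poiseuille_law(3, 2, 2))  # 0.5
```

The point is that no new law needs to be learned in the target domain; only the mapping between representations does.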
83
Experimental Studies of Transfer
Transfer condition
Control condition
Compare results from transfer and control
conditions
84
Dependent Variables in Transfer Studies
  • Dependent variables for transfer experiments
    should include
  • Initial performance on the transfer tasks
  • Asymptotic performance on the transfer tasks
  • Rate of improvement on the transfer tasks
  • These require collecting learning curves over a
    series of tasks.
  • Such second-order variables build on basic
    metrics such as
  • Accuracy of response or solutions to tasks
  • Speed or efficiency of solutions to tasks
  • Quality or utility of solutions to tasks
  • Different basic measures are appropriate for
    different domains.
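The second-order variables listed above can be computed from a pair of learning curves, one per condition. The exact formulas used in the transfer studies are not given on this slide, so the snippet below uses simple, common operationalizations, and the curve data are made up.

```python
# Illustrative computation of transfer dependent variables from two
# learning curves (one performance value per trial). Formulas here are
# common simple choices, not the studies' official metrics.

transfer = [0.50, 0.62, 0.71, 0.78, 0.82]   # hypothetical transfer condition
control  = [0.30, 0.45, 0.58, 0.68, 0.75]   # hypothetical control condition

# Initial performance advantage on the transfer tasks ("jump start").
jump_start = transfer[0] - control[0]

# Advantage in asymptotic performance (last point of each curve).
asymptotic_advantage = transfer[-1] - control[-1]

# Average rate of improvement per trial in each condition.
rate_transfer = (transfer[-1] - transfer[0]) / (len(transfer) - 1)
rate_control  = (control[-1] - control[0]) / (len(control) - 1)

print(round(jump_start, 2), round(asymptotic_advantage, 2))  # 0.2 0.07
```

Note that the transfer condition here improves more slowly only because it starts higher; this is why all three variables, not just one, are needed to characterize transfer.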

85
Transfer of Knowledge in ICARUS
86
Transfer in ICARUS
  • What forms of knowledge does ICARUS transfer?
  • Hierarchical/relational skill and concept clauses
  • Where does the transferred knowledge originate?
  • It comes from experience on source problems and
    background knowledge
  • How does ICARUS know what to transfer?
  • Skills are indexed by goals they achieve, with
    preference for more recently learned structures

87
A Transfer Scenario from Urban Combat
Target Problem
Source Problem
Here the first part of the source route transfers
to the target, but the second part must be
learned to solve the new task.
88
Structures Transferred in Scenario
Target
Shared structures
89
Primitive Concepts for Urban Combat
((stopped ?self)
  percepts ((self ?self xvel ?xvel yvel ?yvel))
  tests    ((< (+ (* ?xvel ?xvel) (* ?yvel ?yvel)) 1)))

((in-region ?self ?region)
  percepts ((self ?self region ?region)))

((connected-region ?target ?gateway)
  percepts ((gateway ?gateway region ?target)))

((blocked-gateway ?gateway)
  percepts ((gateway ?gateway visible1 ?v1 visible2 ?v2))
  tests    ((and (equal ?v1 'B) (equal ?v2 'B))))

((first-side-blocked-gateway ?gateway)
  percepts ((gateway ?gateway type ?type visible1 ?v1 visible2 ?v2))
  tests    ((equal ?type 'WALK) (equal ?v1 'B) (equal ?v2 'C)))

((flag-captured ?self ?flag1)
  percepts ((self ?self holding ?flag1) (entity ?flag1))
  tests    ((not (equal ?flag1 NIL))))
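The semantics of a primitive concept can be illustrated in Python: variables are bound from a percept's attribute values, then the tests are evaluated. This sketch of the stopped concept is an illustration of the matching semantics, not ICARUS code, and the percept encoding is ours.

```python
# Sketch of evaluating a primitive concept such as (stopped ?self):
# bind ?xvel and ?yvel from the self percept, then check the test,
# which requires the squared speed to be below 1.

def stopped(percept):
    """True when xvel^2 + yvel^2 < 1, mirroring the concept's tests clause."""
    xvel, yvel = percept["xvel"], percept["yvel"]
    return xvel * xvel + yvel * yvel < 1

print(stopped({"xvel": 0.2, "yvel": 0.3}))  # True
print(stopped({"xvel": 2.0, "yvel": 0.0}))  # False
```

Nonprimitive concepts, shown on the next slide, add a relations field that refers to other concept instances rather than testing percept values directly.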
90
Nonprimitive Urban Combat Concepts
((not-stopped ?self)
  percepts  ((self ?self))
  relations ((not (stopped ?self))))

((clear-gateway ?gateway)
  percepts  ((self ?self)
             (gateway ?gateway type ?type visible1 ?v1 visible2 ?v2))
  relations ((not-stopped ?self))
  tests     ((equal ?type 'WALK) (equal ?v1 'C) (equal ?v2 'C)))

((stopped-in-region ?self ?region)
  percepts  ((self ?self))
  relations ((in-region ?self ?region) (stopped ?self)))

((crossable-region ?target)
  percepts  ((self ?self) (region ?target))
  relations ((connected-region ?target ?gateway)
             (clear-gateway ?gateway)))

((in-region-able ?self ?current ?region)
  percepts  ((self ?self) (region ?current) (region ?region))
  relations ((crossable-region ?region)
             (in-region ?self ?current)))
91
Primitive Skills for Urban Combat
((in-region ?self ?region)
  percepts ((self ?self) (region ?current) (region ?region)
            (gateway ?gateway region ?region dist1 ?dist1 angle1 ?angle1
                     dist2 ?dist2 angle2 ?angle2))
  start    ((in-region-able ?self ?current ?region))
  actions  ((move-toward (max ?dist1 ?dist2)
                         (mid-direction ?angle1 ?angle2))))

((clear-gateway ?gateway)
  percepts ((self ?self) (gateway ?gateway type WALK))
  start    ((stopped ?self))
  actions  ((move-toward 50 0)))

((clear-gateway ?gateway)
  percepts ((gateway ?gateway region ?region dist1 ?dist1 angle1 ?angle1
                     visible1 ?v1 dist2 ?dist2 angle2 ?angle2 visible2 ?v2))
  start    ((first-side-blocked-gateway ?gateway))
  actions  ((move-toward ?dist2 ?angle2)))

((flag-captured ?self ?flag)
  percepts ((self ?self) (entity ?flag dist ?dist angle ?angle))
  start    ((in-region ?self region107))
  actions  ((move-toward ?dist ?angle)))
92
Nonprimitive Skills for Urban Combat
((crossable-region ?target)
  percepts ((region ?target) (gateway ?gateway))
  start    ((connected-region ?target ?gateway))
  subgoals ((clear-gateway ?gateway)))

((in-region-able ?self ?current ?region)
  percepts ((self ?self) (region ?current) (region ?region))
  subgoals ((in-region ?self ?current)
            (crossable-region ?region)))

((stopped-in-region ?self ?region)
  percepts ((self ?self) (region ?region))
  subgoals ((in-region ?self ?region) (stopped ?self)))

((stopped-in-region ?self ?current)
  percepts ((self ?self))
  start    ((in-region ?self ?current))
  subgoals ((stopped ?self)))

((flag-captured ?self ?flag)
  percepts ((self ?self))
  subgoals ((in-region ?self region107)
            (flag-captured ?self ?flag)))
93
Route Knowledge for Urban Combat
((in-region ?self region105)
  percepts ((self ?self))
  subgoals ((in-region-able ?self region115 region105)
            (in-region ?self region105)))

((in-region ?self region114)
  percepts ((self ?self))
  subgoals ((in-region-able ?self region105 region114)
            (in-region ?self region114)))

((in-region ?self region110)
  percepts ((self ?self))
  subgoals ((in-region-able ?self region114 region110)
            (in-region ?self region110)))

((in-region ?self region116)
  percepts ((self ?self))
  subgoals ((in-region-able ?self region110 region116)
            (in-region ?self region116)))

((in-region ?self region107)
  percepts ((self ?self))
  subgoals ((in-region-able ?self region116 region107)
            (in-region ?self region107)))
94
Experimental Protocol
Transfer Condition
Non-Transfer Condition
Target Problems in Random Order
Source Problems in Random Order
Statistical Analysis
Agent/Human Transfer Ratios
95
Domain Performance Metrics and Goals
TL Level:  1 - 8
Metrics:   Time to completion (300 s max),
           plus penalties for health effects (2 s per minor hazard
           such as glass; 30 s per major hazard such as landmines)
           and for resource use (2 s per ammunition round)
Goals:     Find the IED; minimize score
  • Each minor hazard costs the time required to
    traverse two regions at a walking pace. Penalty
    chosen to equalize expected difficulty of
    navigation and problem solving tasks.
  • Major hazards and ammunition costs chosen to
    enhance realism.
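The scoring rule above is simple arithmetic over the completion time and the penalty counts. The function below is a sketch of that rule; the function and parameter names are ours.

```python
# The Urban Combat scoring rule from the table above: completion time
# (capped at 300 s) plus fixed per-event penalties. Lower is better.

def uct_score(time_s, minor_hazards=0, major_hazards=0, ammo_rounds=0):
    """Task score in seconds: capped time plus hazard and resource penalties."""
    return (min(time_s, 300)
            + 2 * minor_hazards      # e.g., broken glass
            + 30 * major_hazards     # e.g., landmines
            + 2 * ammo_rounds)       # each ammunition round used

print(uct_score(120, minor_hazards=3, major_hazards=1, ammo_rounds=5))  # 166
```

Since a minor hazard costs 2 s, the same as walking across two regions, an agent is indifferent between a short hazardous path and a slightly longer safe one, which is the equalization the slide describes.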

96
TL Level 1 Task Parameterization
Source
Target
Solution preserved, task parametrically changed
IED moved in region
Start
IED
  • Source Problem Find IED
  • Target Problem Find IED, location in goal region
    changed
  • Transferred knowledge
  • Route from source to goal region
  • Solution for surmounting obstacles, if any
  • Performance Goal Time to completion
  • Background Knowledge Primitive actions,
    exploration skills, relational concepts (e.g.,
    object close, path clear)

97
Transfer Level 1: ISLE Raw Curves, Urban Combat
98
TL Level 2 Task Extrapolation
Source
Target
Gap in wall removed
IED
Solution preserved up to obstacle, then changed
Start
  • Source Problem Find IED given obstacle
  • Target Problem Find IED, obstacle extended
  • Transferred knowledge
  • Route from source to goal region
  • Performance Goal Time to completion
  • Background Knowledge Primitive actions,
    exploration skills, relational concepts (e.g.,
    object close, path clear)

99
Transfer Level 2: ISLE Raw Curves, Urban Combat
100
TL Level 3 Task Restructuring
Source
Target
Role of pit and boxes reversed
Breakable Boxes
IED
Discover, then reuse action sequences in new order
Start
Unclimbable wall
Pit
Box
  • Source Problem Find IED given obstacles
  • Target Problem Find IED, obstacles rearranged
  • Transferred knowledge
  • Route from source to goal region
  • Solution for surmounting obstacles, if any
  • Performance Goal Time to completion
  • Background Knowledge Primitive actions,
    exploration skills, relational concepts (e.g.,
    object close, path clear)

101
Transfer Level 3: ISLE Raw Curves, Urban Combat
102
TL Level 4 Task Extending
Source
Target
Jumpable wall
Multiple walls
Start
Discover and repeatedly reuse solutions
IED
Unclimbable barrier
Multiple barriers
  • Source Problem Find IED given obstacles
  • Target Problem Find IED, same obstacles
    multiplied
  • Transferred knowledge
  • Route from source to goal region
  • Solution for surmounting obstacles, if any
  • Performance Goal Time to completion
  • Background Knowledge Primitive actions,
    exploration skills, relational concepts (e.g.,
    object close, path clear)

103
Transfer Level 4: ISLE Raw Curves, Urban Combat
104
ISLE Agent Transfer Scores and P-values for UCT

Metric                     | Level 1           | Level 2           | Level 3           | Level 4
                           | Score     P-value | Score     P-value | Score     P-value | Score     P-value
Jump start                 | 84.5000   0.0078  | 18.5000   0.3126  | 281.120   0.1198  | 256.578   0.1148
ARR (narrow)               | 0.0000    0.277   | 0         0.1612  | 1.4285    0.0134  | 0         0.31
ARR (wide)                 | 0.9913    0.0104  | 0.86889   0.1846  | 1.073     0.0018  | 0.725     0.1064
Ratio                      | 0.5129    1       | 0.7848    0.9992  | 0.802     0.8988  | 0.8316    0.99
Transfer ratio             | 7.3768    8.0E-04 | 3.6652    0.003   | 1.87      0.0628  | 2.7949    0.01
Truncated transfer ratio   | 7.2717    0.0106  | 4.0054    0.0282  | 1.87      0.1946  | 2.7949    0.08
Transfer difference        | 10565.6   0       | 4740.18   0.0012  | 11044.0   0.1946  | 7329.6    0.01
Scaled transfer difference | -111.569  1       | -30.567   0.9954  | -34.426   0.866   | -22.822   0.988
Asymptotic advantage       | 94.2000   0       | 39.5000   0.055   | 11.0      0.45    | 50.895    0.02

Trials per condition: TL1 50, TL2 37, TL3 35, TL4 45.
Data preprocessed by subtracting from 0.
105
Human Transfer Scores and P-values for UCT

Metric                     | Level 1           | Level 2           | Level 3           | Level 4
                           | Score     P-value | Score     P-value | Score     P-value | Score     P-value
Jump start                 | -0.6471   0.5448  | -42.333   0.8126  | -14.9474  0.6896  | 104.750   0.0014
ARR (narrow)               | 0.0000    0.0364  | -1E31     0.5818  | 0.0000    0.2204  | 0.0000    0.2720
ARR (wide)                 | -1E31     0.4028  | -1E31     0.2750  | -1E31     0.3334  | 0.8968    0.0656
Ratio                      | 0.3666    0.9980  | 0.7301    0.8882  | 0.6387    0.8850  | 0.5502    0.9354
Transfer ratio             | 6.4523    0.0028  | 2.2237    0.0898  | 2.4274    0.1678  | 2.2699    0.1670
Truncated transfer ratio   | 6.4523    0.0374  | 2.3207    0.0972  | 2.5123    0.2176  | 2.2699    0.1748
Transfer difference        | 163.088   0.0044  | 104.667   0.1240  | 193.974   0.1166  | 264.969   0.0748
Scaled transfer difference | -10.1186  0.9966  | -2.1188   0.8598  | -3.7492   0.9004  | -6.8823   0.9788
Asymptotic advantage       | 8.9412    0.1054  | 34.4000   0.0478  | 40.4211   0.0942  | 108.875   0.0014

Trials per condition: TL1 17, TL2 15, TL3 19, TL4 16.
Data preprocessed by subtracting from 0.
106
Other Transfer Results with ICARUS
  • We have also tested ICARUS in domains like
    FreeCell solitaire.

Experiments suggest that learned knowledge
transfers well here.
107
Key Ideas about Transfer in ICARUS
  • The most important transfer concerns
    goal-directed behavior that involves sequential
    actions aimed toward an objective.
  • Transfer mainly involves the reuse of knowledge
    structures.
  • Organizing structures in a hierarchy aids reuse
    and transfer.
  • Indexing skills by goals they achieve determines
    relevance.
  • One can learn hierarchical, relational,
    goal-directed skills by analyzing traces of
    expert behavior and problem solving.
  • Skill learning can build upon structures acquired
    earlier.
  • Successful transfer benefits from knowledge-based
    inference to recognize equivalent situations.

108
Open Research Problems
There remain many research issues that we must
still address
  • Goal transfer: across tasks with distinct but
    related objectives
  • Negative transfer: minimizing use of
    inappropriate knowledge
  • Context handling: avoiding catastrophic
    interference
  • Representation mapping:
  • Lateral: deep analogy that involves partial
    isomorphisms
  • Vertical: bootstrapped learning that builds on
    lower levels

These challenges should keep our field occupied
for some time.
109
Closing Remarks
Transfer of learned knowledge is an important
capability that
  • involves the sequential reuse of knowledge
    structures
  • takes many forms depending on source/target
    relationships
  • has been repeatedly examined within
    psychology/education
  • has received little attention in AI and machine
    learning
  • requires a fairly sophisticated experimental
    method

Transfer originated in psychology, and it is best
studied in the context of cognitive
architectures, which have similar roots.
110
End of Presentation