Title: Learning Agents Center
1. Disciple: Reasoning and Learning Agents
Gheorghe Tecuci with Mihai Boicu, Dorin Marcu, Bogdan Stanescu, Cristina Boicu, Marcel Barbulescu
Learning Agents Center, George Mason University
Symposium on Reasoning and Learning in Cognitive Systems, Stanford, CA, 20-21 May 2004
2. Overview
Research Problem, Approach, and Application
Problem Solving Method: Task Reduction
Learnable Knowledge Representation: Plausible Version Spaces
Multistrategy Learning during Problem Solving
Agent Development Experiments
Teaching and Learning Demo
Acknowledgements
3. Research Problem and Approach
Research Problem: Elaborate a theory, methodology, and family of systems for the development of knowledge-based agents by subject matter experts, with limited assistance from knowledge engineers.
Approach: Develop a learning agent that can be taught directly by a subject matter expert while solving problems in cooperation.
The agent learns from the expert, building, verifying, and improving its knowledge base.
The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person.
[Diagram: agent architecture — an Interface supports 1. mixed-initiative problem solving, 2. teaching and learning, and 3. multistrategy learning, with Problem Solving and Learning modules operating over a knowledge base of Ontology and Rules.]
4. Sample Domain: Center of Gravity Analysis
Centers of Gravity: primary sources of moral or physical strength, power, or resistance of the opposing forces in a conflict.
Application to current war scenarios (e.g., War on terror, Iraq) with state and non-state actors (e.g., Al Qaeda).
Identify COG candidates: identify potential primary sources of moral or physical strength, power, and resistance from: Government, Military, People, Economy, Alliances, etc.
Test COG candidates: test each identified COG candidate to determine whether it has all the necessary critical capabilities.
Which are the critical capabilities? Are the critical requirements of these capabilities satisfied? If not, eliminate the candidate. If yes, do these capabilities have any vulnerability?
5. Overview
Research Problem, Approach, and Application
Problem Solving Method: Task Reduction
Learnable Knowledge Representation: Plausible Version Spaces
Multistrategy Learning during Problem Solving
Agent Development Experiments
Teaching and Learning Demo
Acknowledgements
6. Problem Solving: Task Reduction
A complex problem solving task is performed by:
- successively reducing it to simpler tasks,
- finding the solutions of the simplest tasks,
- successively composing these solutions until the solution to the initial task is obtained.
[Diagram: task-reduction tree — task T1 is reduced, via question Q1 and answers A11…A1n, into subtasks T11…T1n; subtasks may be further reduced (e.g., T11a, T11b, down to T11b1…T11bm); their solutions (S11b1…S11bm, S11a, S11b, S11…S1n) are successively composed into the solution S1 of T1.]
Let T1 be the problem solving task to be
performed. Finding a solution is an iterative
process where, at each step, we consider some
relevant information that leads us to reduce the
current task to a simpler task or to several
simpler tasks. The question Q associated with
the current task identifies the type of
information to be considered. The answer A
identifies that piece of information and leads us
to the reduction of the current task.
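The iterative reduction-and-composition process described above can be sketched as a simple recursion. This is a minimal illustration, not Disciple's implementation; the helper names (`reduce`, `compose`, `is_elementary`, `solve_elementary`) and the toy "sum a list" task are assumptions made purely for the example.

```python
# Minimal sketch of task reduction (illustration only; helper names are
# assumptions, not Disciple's actual API).
def solve(task, reduce, compose, is_elementary, solve_elementary):
    """Solve `task` by successive reduction and solution composition.

    reduce(task) -> (question, answer, subtasks): the question identifies
    the type of information to consider; the answer justifies reducing
    the current task to the simpler subtasks.
    """
    if is_elementary(task):
        return solve_elementary(task)
    question, answer, subtasks = reduce(task)
    subsolutions = [solve(t, reduce, compose, is_elementary, solve_elementary)
                    for t in subtasks]
    return compose(task, subsolutions)

# Toy usage: a "sum a list" task reduces to its elements; composition is addition.
total = solve(("sum", [1, 2, 3]),
              reduce=lambda t: ("Which parts?", "its elements", list(t[1])),
              compose=lambda t, subs: sum(subs),
              is_elementary=lambda t: isinstance(t, int),
              solve_elementary=lambda t: t)
# total == 6
```

The recursion mirrors the tree in the diagram: each `reduce` call is one question/answer step, and `compose` walks the solutions back up to the root.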
7. COG Analysis: World War II at the time of Sicily, 1943
We need to
Identify and test a strategic COG candidate for
Sicily_1943
Which is an opposing_force in the Sicily_1943
scenario?
Allied_Forces_1943
Therefore we need to
Identify and test a strategic COG candidate for
Allied_Forces_1943
Is Allied_Forces_1943 a single_member_force or a
multi_member_force?
Allied_Forces_1943 is a multi_member_force
Therefore we need to
Identify and test a strategic COG candidate for
Allied_Forces_1943 which is a multi_member_force
What type of strategic COG candidate should I
consider for this multi_member_force?
I consider a candidate corresponding to a member
of the multi_member_force
Therefore we need to
Identify and test a strategic COG candidate
corresponding to a member of the
Allied_Forces_1943
Which is a member of Allied_Forces_1943?
US_1943
Therefore we need to
Identify and test a strategic COG candidate for
US_1943
8. Overview
Research Problem, Approach, and Application
Problem Solving Method: Task Reduction
Learnable Knowledge Representation: Plausible Version Spaces
Multistrategy Learning during Problem Solving
Agent Development Experiments
Teaching and Learning Demo
Acknowledgements
9. Knowledge Base = Object Ontology + Rules
Object Ontology:
- a hierarchical representation of the objects and types of objects;
- a hierarchical representation of the types of features.
10. Knowledge Base = Object Ontology + Rules
EXAMPLE OF REASONING STEP
We need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943.
Which is a member of Allied_Forces_1943? US_1943.
Therefore we need to: Identify and test a strategic COG candidate for US_1943.

LEARNED RULE
Informal structure:
IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1? Answer: ?O2
THEN: Identify and test a strategic COG candidate for ?O2

Formal structure:
IF: Identify and test a strategic COG candidate corresponding to a member of a force. The force is ?O1
Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force. The force is ?O2
11. Learnable Knowledge Representation
Use of the object ontology as an incomplete and evolving generalization language.
Use of plausible version spaces (PVS) to represent and use partially learned knowledge.
[Diagram: within the universe of instances, the target concept lies between the plausible lower bound and the plausible upper bound.]
Partially learned knowledge elements:
- rules with PVS conditions
- tasks with PVS conditions
- object features with PVS concepts (a feature's domain and range are each a PVS concept)
- task features with PVS concepts
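A plausible version space can be sketched as a pair of bound concepts over the ontology's generalization hierarchy. The sketch below is an illustration under assumed names: the toy `PARENT` hierarchy mirrors the force concepts used elsewhere in this deck, and the three-valued `classify` shows how an instance type can be covered by the lower bound, fall between the bounds, or fall outside the upper bound.

```python
# Sketch of a plausible version space (PVS) over a toy ontology fragment.
# The PARENT hierarchy and concept names are assumptions mirroring the
# force concepts used in this deck.
PARENT = {
    "equal_partners_multi_state_alliance": "multi_member_force",
    "multi_member_force": "force",
    "single_state_force": "force",
}

def is_a(concept, ancestor):
    """True if `concept` equals `ancestor` or lies below it in the hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = PARENT.get(concept)
    return False

def classify(instance_type, plausible_lower, plausible_upper):
    """Three-valued match of an instance type against a PVS."""
    if is_a(instance_type, plausible_lower):
        return "covered by lower bound (likely positive)"
    if is_a(instance_type, plausible_upper):
        return "between bounds (plausible; needs expert confirmation)"
    return "outside upper bound (negative)"
```

Instances between the bounds are exactly the ones on which the agent's conclusions are only plausible, which is what drives the refinement dialogue with the expert.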
12. Overview
Research Problem, Approach, and Application
Problem Solving Method: Task Reduction
Learnable Knowledge Representation: Plausible Version Spaces
Multistrategy Learning during Problem Solving
Agent Development Experiments
Teaching and Learning Demo
Acknowledgements
13. Control of Modeling, Learning and Problem Solving
[Diagram: control loop — the input task is solved by mixed-initiative problem solving over the ontology and rules; each generated reduction is either accepted or rejected (both triggering rule refinement) or replaced by a new reduction from the expert (triggering modeling, task refinement, and learning), until a solution is obtained.]
14. Disciple uses the learned rules in problem solving, and refines them based on the expert's feedback.
[Diagram: cycle of Modeling → Learning → Problem Solving → Refining.]
15. Rule Learning Method
[Diagram: from an example of a task reduction step, analogy- and hint-guided explanation (using the expert's guidance and hints, and analogy with the knowledge base) produces plausible explanations and an incomplete justification; analogy-based generalization then yields a plausible version space rule with a plausible upper bound (PUB) and a plausible lower bound (PLB).]
16. Find an Explanation of Why the Example Is Correct
I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943.
Which is a member of Allied_Forces_1943? US_1943.
Therefore I need to: Identify and test a strategic COG candidate for US_1943.
The explanation is the best possible approximation of the question and the answer, in the object ontology:
Allied_Forces_1943 has_as_member US_1943
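Finding such an explanation amounts to locating, in the ontology, a relation that links the object in the question to the object in the answer. A minimal sketch under assumed names: `FACTS` is a hypothetical toy fact base (the `Britain_1943` entry is invented here purely for illustration), and `explain` does a direct lookup rather than Disciple's hint-guided search.

```python
# Sketch: an explanation approximates the question/answer pair as an
# ontology relation linking the task's object to the answer. FACTS is a
# hypothetical toy fact base (Britain_1943 is invented for illustration).
FACTS = [
    ("Allied_Forces_1943", "has_as_member", "US_1943"),
    ("Allied_Forces_1943", "has_as_member", "Britain_1943"),
]

def explain(subject, answer):
    """Return the feature connecting `subject` to `answer`, if any."""
    for s, feature, o in FACTS:
        if s == subject and o == answer:
            return feature
    return None
```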
17. Generate the PVS Rule
Explanation: Allied_Forces_1943 has_as_member US_1943
Rewrite as a variable-based condition: ?O1 is Allied_Forces_1943, has_as_member ?O2; ?O2 is US_1943

IF: Identify and test a strategic COG candidate corresponding to a member of a force. The force is ?O1
Plausible Upper Bound Condition (most general generalization of the condition): ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
Plausible Lower Bound Condition (most specific generalization of the condition): ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force. The force is ?O2
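The two bounds above can be derived mechanically from the single example and its explanation: the lower bound takes the direct concepts of the instances (most specific generalization), while the upper bound takes the domain and range of the explaining feature (most general generalization). A sketch, where `TYPE_OF` and `FEATURE` are assumed toy tables standing in for the object ontology:

```python
# Sketch: forming the plausible bounds of a PVS rule from one example and
# its explanation. TYPE_OF and FEATURE are assumed toy tables; in Disciple
# this information comes from the object ontology.
TYPE_OF = {
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "US_1943": "single_state_force",
}
FEATURE = {"has_as_member": {"domain": "multi_member_force", "range": "force"}}

def pvs_from_example(o1, o2, feature):
    """Return (plausible_lower, plausible_upper) conditions on ?O1 and ?O2."""
    # Most specific generalization: the direct concepts of the instances.
    lower = {"?O1": TYPE_OF[o1], "?O2": TYPE_OF[o2]}
    # Most general generalization: the feature's domain and range.
    upper = {"?O1": FEATURE[feature]["domain"],
             "?O2": FEATURE[feature]["range"]}
    return lower, upper
```

Applied to the example on this slide, the sketch reproduces the two conditions shown above.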
18. Rule Refinement Method
Learning by analogy and experimentation: the agent generates examples of task reductions from the PVS rule; a correct example drives learning from examples, while an incorrect example, together with its failure explanation, drives learning from explanations over the knowledge base.
PVS rule schema:
IF <task>
  Condition: <condition 1>
  Except-when condition: <condition 2>
  ...
  Except-when condition: <condition n>
THEN <subtask 1> ... <subtask m>
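The two refinement directions can be sketched as follows: a correct example widens the plausible lower bound by minimal generalization (here, the least common ancestor in the hierarchy), while an incorrect example is recorded as an except-when condition. The `PARENT` hierarchy is an assumed toy fragment, and this deliberately simplifies Disciple's multistrategy refinement.

```python
# Sketch of rule refinement: a correct example widens the plausible lower
# bound (minimal generalization); an incorrect example becomes an
# except-when condition. PARENT is an assumed toy hierarchy.
PARENT = {
    "equal_partners_multi_state_alliance": "multi_state_force",
    "multi_state_force": "force",
    "single_state_force": "force",
}

def ancestors(concept):
    """Chain of concepts from `concept` up to the hierarchy root."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = PARENT.get(concept)
    return chain

def generalize(lower, positive_type):
    """Least common ancestor: minimal generalization covering the positive."""
    for candidate in ancestors(lower):
        if candidate in ancestors(positive_type):
            return candidate
    return None

def refine(rule, example_type, is_correct):
    """Refine a rule's condition from one classified example."""
    if is_correct:
        rule["lower"] = generalize(rule["lower"], example_type)
    else:
        rule.setdefault("except_when", []).append(example_type)
    return rule
```

Each refinement keeps the rule consistent with all examples seen so far, which is what gradually converges the plausible bounds toward the target concept.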
19. Overview
Research Problem, Approach, and Application
Problem Solving Method: Task Reduction
Learnable Knowledge Representation: Plausible Version Spaces
Multistrategy Learning during Problem Solving
Agent Development Experiments
Teaching and Learning Demo
Acknowledgements
20. Agent Development Methodology
21. Use of Disciple at the US Army War College
319jw Case Studies in Center of Gravity Analysis course
Disciple helps the students to perform a center of gravity analysis of an assigned war scenario. Disciple was taught based on the expertise of Prof. Comello in center of gravity analysis.
[Diagram: teaching and learning build the Disciple agent's KB; students then use the agent for problem solving.]
Global evaluations of Disciple by officers from the Spring 03 course:
- Disciple helped me to learn to perform a strategic COG analysis of a scenario.
- The use of Disciple is an assignment that is well suited to the course's learning objectives.
- Disciple should be used in future versions of this course.
22. Use of Disciple at the US Army War College
589jw Military Applications of Artificial Intelligence course
Students teach Disciple their COG analysis expertise, using sample scenarios (Iraq 2003, War on terror 2003, Arab-Israeli 1973). Students then test the trained Disciple agent on a new scenario (North Korea 2003).
Global evaluation of Disciple by officers during three experiments:
- I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer.
Experiments: Spring 2001 (COG identification); Spring 2002 (COG identification and testing); Spring 2003 (COG testing based on critical capabilities).
23. Parallel Development and Merging of KBs
Initial KB (for COG identification for leaders): 432 concepts and features, 29 tasks, 18 rules.
Domain analysis and ontology development (KE + SME): a knowledge engineer (KE) working with all subject matter experts (SMEs) acquired 37 concepts and features for COG testing, producing the Extended KB.
Parallel KB development (each SME assisted by a KE), using the training scenarios Iraq 2003, Arab-Israeli 1973, and War on Terror 2003. Five teams each taught their own DISCIPLE-COG agent a subset of the COG testing criteria (stay informed, be irreplaceable, communicate, be influential, have support, be protected, be driving force):
- Team 1: 5 features, 10 tasks, 10 rules
- Team 2: 14 tasks, 14 rules
- Team 3: 2 features, 19 tasks, 19 rules
- Team 4: 35 tasks, 33 rules
- Team 5: 3 features, 24 tasks, 23 rules
KB merging (KE): the learned features, tasks, and rules were merged into an Integrated KB (unified 2 features, deleted 4 rules, refined 12 rules).
Final KB: 9 features → 478 concepts and features; 105 tasks → 134 tasks; 95 rules → 113 rules.
Average training time per team: 5h 28min; average rule learning rate per team: 3.53.
Testing (COG identification and testing for leaders) with DISCIPLE-COG on a new scenario, North Korea 2003: correctness 98.15%.
24. Other Disciple Agents
Disciple-WA (1997-1998): estimates the best plan of working around damage to a transportation infrastructure, such as a damaged bridge or road. Demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base, capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert.
Disciple-COA (1998-1999): identifies strengths and weaknesses in a Course of Action, based on the principles of war and the tenets of army operations. Demonstrated the generality of its learning methods, which used an object ontology created by another group (TFS/Cycorp). Demonstrated that a knowledge engineer and a subject matter expert can jointly teach Disciple.
25. Overview
Research Problem, Approach, and Application
Problem Solving Method: Task Reduction
Learnable Knowledge Representation: Plausible Version Spaces
Multistrategy Learning during Problem Solving
Agent Development Experiments
Teaching and Learning Demo
Acknowledgements
26. Acknowledgements
This research was sponsored by the Defense Advanced Research Projects Agency, Air Force Research Laboratory, Air Force Materiel Command, USAF, under agreement number F30602-00-2-0546; by the Air Force Office of Scientific Research under grant number F49620-00-1-0072; and by the US Army War College.