Title: Learning Agents
1. Learning Agents
Prof. Gheorghe Tecuci
Learning Agents Laboratory, Computer Science Department
George Mason University
2. Overview
Learning strategies
Instructable agents: the Disciple approach
Basic bibliography
3. Learning strategies
Introduction
Inductive learning from examples
Deductive (explanation-based) learning
Analogical learning
Abductive learning
Multistrategy learning
4. What is Machine Learning?
Machine Learning is the domain of Artificial
Intelligence which is concerned with building
adaptive computer systems that are able to
improve their competence and/or efficiency
through learning from input data or from their
own problem solving experience.
What does it mean to improve competence?
What does it mean to improve efficiency?
5. Two complementary dimensions for learning
Competence
A system is improving its competence if it learns
to solve a broader class of problems, and to make
fewer mistakes in problem solving.
Efficiency
A system is improving its efficiency if it learns to solve the problems from its area of competence faster or by using fewer resources.
6. The architecture of a learning agent
Implements a general problem solving method that
uses the knowledge from the knowledge base to
interpret the input and provide an appropriate
output.
Implements learning methods for extending and refining the knowledge base to improve the agent's competence and/or efficiency in problem solving.
[Diagram: the Learning Agent contains a Problem Solving Engine, a Learning Engine, and a Knowledge Base (ontology; rules/cases/methods); it interacts with the user/environment through input/sensors and output/effectors.]
Data structures that represent the objects from
the application domain, general laws governing
them, actions that can be performed with them,
etc.
7. Learning strategies
A Learning Strategy is a basic form of learning
characterized by the employment of a certain type
of inference (like deduction, induction or
analogy) and a certain type of computational or
representational mechanism (like rules, trees,
neural networks, etc.).
8. Successful applications of Machine Learning
- Learning to recognize spoken words (all of the most successful systems use machine learning)
- Learning to drive an autonomous vehicle on a public highway
- Learning to classify new astronomical structures (by learning regularities in a very large database of image data)
- Learning to play games
- Automation of knowledge acquisition from domain experts
- Instructable agents.
9. Basic ontological elements: instances and concepts
An instance is a representation of a particular
entity from the application domain.
A concept is a representation of a set of
instances.
[Diagram: government_of_US_1943 and government_of_Britain_1943 are each linked by instance_of to state_government.]
instance_of is the relationship between an
instance and the concept to which it belongs.
state_government represents the set of all
entities that are governments of states. This set
includes government_of_US_1943 and
government_of_Britain_1943 which are called
positive examples.
An entity which is not an instance of a concept
is called a negative example of that concept.
10. Concept generality
A concept P is more general than another concept
Q if and only if the set of instances represented
by P includes the set of instances represented by
Q.
Example: [Diagram: democratic_government and totalitarian_government are subconcepts of state_government; representative_democracy and parliamentary_democracy are subconcepts of democratic_government.]
subconcept_of is the relationship between a
concept and a more general concept.
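Read operationally, this set-inclusion test is usually checked by following subconcept_of links. The Python sketch below is illustrative only (the dictionary encodes just the fragment of the hierarchy shown in the example, and the function name is not part of any Disciple API): a concept P is more general than a concept Q if Q reaches P by climbing subconcept_of links.

# Illustrative sketch: concept generality via the subconcept_of hierarchy.
subconcept_of = {
    "representative_democracy": "democratic_government",
    "parliamentary_democracy": "democratic_government",
    "democratic_government": "state_government",
    "totalitarian_government": "state_government",
}

def more_general_than(p, q):
    """True if concept p is more general than (or equal to) concept q."""
    while q is not None:
        if q == p:
            return True
        q = subconcept_of.get(q)   # climb one subconcept_of link
    return False

print(more_general_than("state_government", "parliamentary_democracy"))      # True
print(more_general_than("democratic_government", "totalitarian_government")) # False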
11. A generalization hierarchy
[Diagram: a generalization hierarchy rooted at governing_body, with concepts including ad_hoc_governing_body, established_governing_body, other_type_of_governing_body, state_government, group_governing_body, feudal_god_king_government, other_state_government, dictator, other_group_governing_body, democratic_government, monarchy, deity_figure, representative_democracy, parliamentary_democracy, democratic_council_or_board, autocratic_leader, totalitarian_government, chief_and_tribal_council, theocratic_government, military_dictatorship, police_state, fascist_state, religious_dictatorship, theocratic_democracy, and communist_dictatorship, and instances including government_of_Italy_1943, government_of_US_1943, government_of_Britain_1943, government_of_Germany_1943, and government_of_USSR_1943.]
12. Learning strategies
Introduction
Inductive learning from examples
Deductive (explanation-based) learning
Analogical learning
Abductive learning
Multistrategy learning
13. Empirical inductive concept learning from examples
Illustration
Given: positive examples of cups (P1, P2, ...) and negative examples of cups (N1, ...).
Learn: a description of the cup concept, e.g. has-handle(x), ...
Approach: Compare the positive and the negative
examples of a concept, in terms of their
similarities and differences, and learn the
concept as a generalized description of the
similarities of the positive examples. This
allows the agent to recognize other entities as
being instances of the learned concept.
14. The learning problem
Given:
- a language of instances
- a language of generalizations
- a set of positive examples (E1, ..., En) of a concept
- a set of negative examples (C1, ..., Cm) of the same concept
- a learning bias
- other background knowledge
Determine:
a concept description which is a generalization of the positive examples that does not cover any of the negative examples.
Purpose of concept learning: predict if an instance is an example of the learned concept.
15. Generalization and specialization rules
Indicate various generalizations of the following sentence: Students who have lived in Fairfax for 3 years.
- Learning a concept from examples is based on generalization and specialization rules.
- A generalization rule is a rule that transforms an expression into a more general expression.
- A specialization rule is a rule that transforms an expression into a less general expression.
16. Generalization (and specialization) rules
Turning constants into variables
Climbing the generalization hierarchy
Dropping condition
Generalizing numbers
Adding alternatives
17. Climbing/descending the generalization hierarchy
Generalizes an expression by replacing a concept
with a more general one.
[Diagram: representative_democracy and parliamentary_democracy are subconcepts of democratic_government.]

The set of single state forces governed by representative democracies:
?O1 is single_state_force, has_as_governing_body ?O2
?O2 is representative_democracy

generalization: representative_democracy → democratic_government
specialization: democratic_government → representative_democracy

The set of single state forces governed by democracies:
?O1 is single_state_force, has_as_governing_body ?O2
?O2 is democratic_government
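The climbing rule can be pictured as a small rewrite over such expressions. A hedged Python sketch (the triple representation and the function name are assumptions for illustration, not the representation used on the slides):

subconcept_of = {
    "representative_democracy": "democratic_government",
    "parliamentary_democracy": "democratic_government",
    "democratic_government": "state_government",
}

# An expression is a list of (subject, relation, value) triples.
expression = [
    ("?O1", "is", "single_state_force"),
    ("?O1", "has_as_governing_body", "?O2"),
    ("?O2", "is", "representative_democracy"),
]

def climb_hierarchy(expr, concept):
    """Generalize expr by replacing `concept` with its direct parent concept."""
    parent = subconcept_of[concept]
    return [(s, r, parent if v == concept else v) for (s, r, v) in expr]

generalized = climb_hierarchy(expression, "representative_democracy")
print(generalized)   # ?O2 is now democratic_government: a more general set of forces
# Descending the hierarchy (specialization) applies the inverse replacement.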
18. Basic idea of version space concept learning
Consider the examples E1, E2, ... in sequence.
19. The candidate elimination algorithm
1. Initialize S to the first positive example and G to its most general generalization.
2. Accept a new training instance I.
- If I is a positive example then:
  - remove from G all the concepts that do not cover I;
  - generalize the elements in S as little as possible to cover I but remain less general than some concept in G;
  - keep in S the minimally general concepts.
- If I is a negative example then:
  - remove from S all the concepts that cover I;
  - specialize the elements in G as little as possible to uncover I and be more general than at least one element from S;
  - keep in G the maximally general concepts.
3. Repeat 2 until S = G and they contain a single concept C (this is the learned concept).
20. Illustration of the candidate elimination algorithm
Language of generalizations: (shape, size), where shape ∈ {ball, brick, cube, any-shape} and size ∈ {large, small, any-size}.
Language of instances: (shape, size), where shape ∈ {ball, brick, cube} and size ∈ {large, small}.
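A compact Python sketch of the algorithm on this instance/generalization language (illustrative; "?" stands for any-shape/any-size, and the minimality pruning of S and G is omitted, since it is not needed for so small a language):

ANY = "?"   # stands for any-shape / any-size

def covers(h, x):
    """True if hypothesis h covers instance (or hypothesis) x."""
    return all(a in (ANY, b) for a, b in zip(h, x))

def generalize(h, x):
    """Minimal generalization of h that covers instance x."""
    return tuple(a if a == b else ANY for a, b in zip(h, x))

def specialize(h, x, values):
    """Minimal specializations of h that do not cover instance x."""
    out = []
    for i, a in enumerate(h):
        if a == ANY:
            out += [h[:i] + (v,) + h[i + 1:] for v in values[i] if v != x[i]]
    return out

def candidate_elimination(examples, values):
    first_pos = next(x for x, label in examples if label)
    S = [first_pos]                       # most specific boundary
    G = [(ANY,) * len(first_pos)]         # most general boundary
    for x, positive in examples:
        if positive:
            G = [g for g in G if covers(g, x)]
            S = [generalize(s, x) for s in S]
            S = [s for s in S if any(covers(g, s) for g in G)]
        else:
            S = [s for s in S if not covers(s, x)]
            G = ([h for g in G if covers(g, x)
                    for h in specialize(g, x, values)
                    if any(covers(h, s) for s in S)]
                 + [g for g in G if not covers(g, x)])
    return S, G

values = [("ball", "brick", "cube"), ("large", "small")]
examples = [(("ball", "large"), True),
            (("brick", "small"), False),
            (("ball", "small"), True)]
print(candidate_elimination(examples, values))   # ([('ball', '?')], [('ball', '?')])

On this example sequence, the two boundaries converge to the single concept (ball, any-size).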
21. General features of the empirical inductive methods
Require many examples.
Do not need much domain knowledge.
Improve the competence of the agent.
The version space method relies on an exhaustive
bi-directional search and is computationally
intensive. This limits its practical
applicability.
Practical empirical inductive learning methods
(such as ID3) rely on heuristic search to
hypothesize the concept.
22. Learning strategies
Introduction
Inductive learning from examples
Deductive (explanation-based) learning
Analogical learning
Abductive learning
Multistrategy learning
23. Explanation-based learning problem
Given
A training example: cup(o1) ← color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ...
A learning goal: a specification of the desirable features of the concept to be learned (e.g. the learned concept should have only features from the example).
Background knowledge: complete knowledge that allows proving that the training example represents the concept:
cup(x) ← liftable(x), stable(x), open-vessel(x).
liftable(x) ← light(x), graspable(x).
stable(x) ← has-flat-bottom(x).
Determine
A concept definition representing a deductive generalization of the training example that satisfies the learning goal:
cup(x) ← made-of(x, y), light-mat(y), has-handle(x), has-flat-bottom(x), up-concave(x).
Purpose of learning: improve the problem solving efficiency of the agent.
24. Explanation-based learning method
1. Construct an explanation that proves that the training example is an example of the concept to be learned.
2. Generalize the explanation as much as possible so that the proof still holds, and extract from it a concept definition that satisfies the learning goal.
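A small propositional Python sketch of these two steps for the cup example. Hedged: the rules for light, graspable and open_vessel are assumed here to complete the proof (the slide lists only the first three background rules), variables are omitted, and made-of/light-mat are collapsed into a single observable feature, so "generalizing the explanation" reduces to keeping the operational leaves of the proof.

rules = {
    "cup":         ["liftable", "stable", "open_vessel"],
    "liftable":    ["light", "graspable"],
    "stable":      ["has_flat_bottom"],
    "light":       ["made_of_light_material"],   # assumed rule, not on the slide
    "graspable":   ["has_handle"],               # assumed rule, not on the slide
    "open_vessel": ["up_concave"],               # assumed rule, not on the slide
}

# Operational (directly observable) features of the training example o1.
example_o1 = {"made_of_light_material", "has_handle",
              "has_flat_bottom", "up_concave", "white_color"}

def explain(goal, facts):
    """Backward-chain from goal; return the operational leaves of the proof."""
    if goal in facts:                 # operational: directly observable
        return {goal}
    leaves = set()
    for subgoal in rules[goal]:
        leaves |= explain(subgoal, facts)
    return leaves

learned_condition = explain("cup", example_o1)
print("cup(x) <-", ", ".join(sorted(learned_condition)))
# white_color plays no role in the proof, so it is correctly dropped; the
# learned one-step rule now recognizes a cup without rebuilding the proof tree.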
25. Discussion
The proof identifies the characteristic features of the training example; proof generalization generalizes them.
How does this learning method improve the
efficiency of the problem solving process?
A cup is recognized by using a single rule rather
than building a proof tree.
Do we need a training example to learn an
operational definition of the concept? Why?
The learner does not need a training example. It can simply build proof trees top-down, starting with an abstract definition of the
concept and growing the tree until the leaves are
operational features. However, without a training
example the learner will learn many operational
definitions. The training example focuses the
learner on the most typical example.
26. General features of explanation-based learning
Needs only one example
Requires complete knowledge about the concept
(which makes this learning strategy
impractical).
Improves agent's efficiency in problem solving
Shows the importance of explanations in learning
27. Learning strategies
Introduction
Inductive learning from examples
Deductive (explanation-based) learning
Analogical learning
Abductive learning
Multistrategy learning
28. Learning by analogy
Learning by analogy means acquiring new knowledge
about an input entity by transferring it from a
known similar entity.
29. Discussion
Examples of analogies
Pressure drop is like voltage drop. A variable in a programming language is like a box.
Provide other examples of analogies.
What is the central intuition supporting learning by analogy?
If two entities are similar in some respects then
they could be similar in other respects as well.
30. Learning by analogy: the learning problem
Given: a partially known target entity T and a goal concerning it; background knowledge containing known entities.
Find: new knowledge about T obtained from a source entity S belonging to the background knowledge.
Partially understood structure of the hydrogen
atom under study.
Knowledge from different domains, including
astronomy, geography, etc.
In a hydrogen atom the electron revolves around
the nucleus, in a similar way in which a planet
revolves around the sun.
31. Learning by analogy: the learning method
ACCESS: find a known entity that is analogous with the input entity.
MATCHING: match the two entities and hypothesize knowledge.
EVALUATION: test the hypotheses.
LEARNING: store or generalize the new knowledge.
Based on what is known about the hydrogen atom
and the solar system identify the solar system as
a source for the hydrogen atom.
One may map the nucleus to the sun and the
electron to the planet, allowing one to infer
that the electron revolves around the nucleus
because the nucleus attracts the electron and the
mass of the nucleus is greater than the mass of
the electron.
A specially designed experiment shows that indeed
the electron revolves around the nucleus.
Store that, in a hydrogen atom, the electron
revolves around the nucleus. By generalization
from the solar system and the hydrogen atom,
learn the abstract concept that a central force
can cause revolution.
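A toy Python sketch of the transfer step (the relation names and the hard-coded mapping are illustrative assumptions; a real matcher would compute the mapping from the shared relational structure during MATCHING):

source = {  # background knowledge about the solar system
    "attracts":        ("sun", "planet"),
    "greater_mass":    ("sun", "planet"),
    "revolves_around": ("planet", "sun"),
}

target = {  # what is already known about the hydrogen atom
    "attracts":     ("nucleus", "electron"),
    "greater_mass": ("nucleus", "electron"),
}

# MATCHING: the shared relations suggest the mapping sun -> nucleus, planet -> electron.
mapping = {"sun": "nucleus", "planet": "electron"}

# Transfer: hypothesize the source relations that are missing in the target.
hypotheses = {rel: tuple(mapping[arg] for arg in args)
              for rel, args in source.items() if rel not in target}
print(hypotheses)   # {'revolves_around': ('electron', 'nucleus')}

# EVALUATION would test the hypothesis experimentally; LEARNING would store it
# (and possibly generalize it, e.g. "a central force can cause revolution").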
32. Discussion
How could we define problem solving by analogy?
How does analogy help to solve new problems?
33. General features of analogical learning
Requires a huge amount of background knowledge
from a wide variety of domains.
Improves agent's competence through knowledge
base refinement.
Reduces the solution finding to solution
verification.
Is a very powerful reasoning method with
incomplete knowledge
34. Learning strategies
Introduction
Inductive learning from examples
Deductive (explanation-based) learning
Analogical learning
Abductive learning
Multistrategy learning
35. Abductive learning
The learning problem: find the hypothesis that best explains an observation (or collection of data) and assume it to be true.
36. Abduction (cont.)
Consider the observation "University Dr. is wet". Use abductive learning.
What are other potential explanations?
Provide other examples of abductive reasoning.
What real world applications of abductive
reasoning can you imagine?
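A minimal Python sketch of this "inference to the best explanation" (the candidate hypotheses and their plausibility scores are invented for illustration; in practice they would come from domain knowledge):

observation = "University Dr. is wet"

# Candidate hypotheses, each with the observations it would explain and a
# rough plausibility score standing in for background knowledge.
hypotheses = [
    ("it rained last night",   {"University Dr. is wet"}, 0.6),
    ("a water pipe broke",     {"University Dr. is wet"}, 0.1),
    ("the street was washed",  {"University Dr. is wet"}, 0.3),
]

explaining = [(name, score) for name, explains, score in hypotheses
              if observation in explains]
best = max(explaining, key=lambda h: h[1])
print("Assume:", best[0])   # the hypothesis adopted (defeasibly) by abduction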
37. Learning strategies
Introduction
Inductive learning from examples
Deductive (explanation-based) learning
Analogical learning
Abductive learning
Multistrategy learning
38. Multistrategy learning
Multistrategy learning is concerned with
developing learning agents that synergistically
integrate two or more learning strategies in
order to solve learning tasks that are beyond the
capabilities of the individual learning
strategies that are integrated.
39. Complementarity of learning strategies
[Table comparing learning from examples, explanation-based learning, and multistrategy learning along four dimensions: examples needed, knowledge needed, type of inference, and effect on the agent's behavior.]
40. Overview
Learning strategies
Instructable agents: the Disciple approach
Basic bibliography
41. Instructable agents: the Disciple approach
Introduction
Modeling experts' reasoning
Rule learning
Integrated modeling, learning, solving
Application and evaluation
Multi-agent problem solving and learning
42. How are agents built and why is it hard?
The knowledge engineer attempts to understand how
the subject matter expert reasons and solves
problems and then encodes the acquired expertise
into the system's knowledge base. This modeling
and representation of expert knowledge is long,
painful and inefficient.
Why?
43. The Disciple approach: problem statement
Elaborate a theory and methodology for the
mixed-initiative, end-to-end development of
knowledge bases and agents by subject matter
experts, with limited assistance from knowledge
engineers.
How does this approach address the knowledge acquisition bottleneck?
44. Vision on the evolution of computer systems
Mainframe Computers: software systems developed and used by computer experts.
45. General architecture of Disciple-RKF
[Architecture diagram listing the Disciple-RKF components: Intelligent User Interface, Mixed-Initiative Manager, Natural Language Generation, Scenario Elicitation, Modeling, Task Learning, Rule Learning, Rule Refinement, Ontology Editors and Browsers, Ontology Import, teaching/learning/problem solving modules, Mixed-Initiative Problem Solving, Autonomous Problem Solving, Knowledge Base Management, and the knowledge base (ontology, instances, rules).]
46. Knowledge base: object ontology and reasoning rules
Object ontology
47. Task reduction rule
IF: Test whether the will of the people can make a state accept the strategic goal of an opposing force
  The will of the people is ?O1
  The state is ?O2
  The opposing force is ?O3
  The goal is ?O4

Plausible Lower Bound Condition
  ?O1 is will_of_the_people_of_US_1943
  ?O2 is US_1943, has_as_people ?O6, has_as_governing_body ?O5
  ?O3 is European_Axis_1943
  ?O4 is Dominance_of_Europe_by_European_Axis
  ?O5 is government_of_US_1943, has_as_will ?O7
  ?O6 is people_of_US_1943, has_as_will ?O1
  ?O7 is will_of_the_government_of_US_1943, reflects ?O1

Plausible Upper Bound Condition
  ?O1 is will_of_agent
  ?O2 is force, has_as_people ?O6, has_as_governing_body ?O5
  ?O3 is (strategic_COG_relevant_factor, agent)
  ?O4 is force_goal
  ?O5 is representative_democracy, has_as_will ?O7
  ?O6 is people, has_as_will ?O1
  ?O7 is will_of_agent, reflects ?O1

THEN: Test whether the will of the people, that controls the government, can make a state accept the strategic goal of an opposing force
  The will of the people is ?O1
  The government is ?O5
  The state is ?O2
  The opposing force is ?O3
  The goal is ?O4
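One way to picture such a rule in memory is as a record holding the two tasks and the two bound conditions. The Python dataclass below is an assumed, simplified rendering (field names are illustrative and only two of the seven variables' conditions are shown), not the actual Disciple-RKF data structure.

from dataclasses import dataclass

@dataclass
class ReductionRule:
    if_task: str                 # task to be reduced
    then_task: str               # simpler task it reduces to
    variables: list              # ?O1 ... ?O7
    plausible_lower_bound: dict  # variable -> most specific condition
    plausible_upper_bound: dict  # variable -> most general condition

rule = ReductionRule(
    if_task="Test whether the will of the people can make a state accept "
            "the strategic goal of an opposing force",
    then_task="Test whether the will of the people, that controls the "
              "government, can make a state accept the strategic goal of "
              "an opposing force",
    variables=["?O1", "?O2", "?O3", "?O4", "?O5", "?O6", "?O7"],
    plausible_lower_bound={"?O1": "will_of_the_people_of_US_1943",
                           "?O5": "government_of_US_1943"},
    plausible_upper_bound={"?O1": "will_of_agent",
                           "?O5": "representative_democracy"},
)
print(rule.then_task)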
48. The developed Disciple approach
49. Instructable agents: the Disciple approach
Introduction
Modeling experts' reasoning
Rule learning
Integrated modeling, learning, solving
Application and evaluation
Multi-agent problem solving and learning
50. Application domain
Identification of strategic Center of Gravity
candidates in war scenarios
The center of gravity of an entity (state, alliance, coalition, or group) is "the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed" (Carl von Clausewitz, On War, 1832). If a combatant eliminates or influences the enemy's strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster.
51. Modeling the identification of center of gravity candidates
I need to
Identify and test a strategic COG candidate for
the Sicily_1943 scenario
What kind of scenario is Sicily_1943?
Sicily_1943 is a war scenario
Therefore I need to
Identify and test a strategic COG candidate for
Sicily_1943 which is a war scenario
Which is an opposing force in the Sicily_1943
scenario?
Allied_Forces_1943
Therefore I need to
Identify and test a strategic COG candidate for
Allied_Forces_1943
Is Allied_Forces_1943 a single-member force or a
multi-member force?
Allied_Forces_1943 is a multi-member force
Therefore I need to
Identify and test a strategic COG candidate for
Allied_Forces_1943 which is a multi-member force
Which is an opposing force in the Sicily_1943 scenario?
European_Axis_1943
Therefore I need to
Identify and test a strategic COG candidate for
European_Axis_1943
52. I need to
Test the will of the people of US_1943 which is a
strategic COG candidate with respect to the
people of US_1943
What is the strategic goal of European_Axis_1943?
Dominance of Europe by European Axis
Therefore I need to
Test whether the will of the people of US_1943
can make US_1943 accept the strategic goal of
European_Axis_1943 which is Dominance of Europe
by European Axis
Let us assume that the people of US_1943 would
accept Dominance of Europe by European Axis.
Could the people of US_1943 make the government
of US_1943 accept Dominance of Europe by
European Axis?
Yes, because US_1943 is a representative democracy and the will of the government of US_1943 reflects the will of the people of US_1943
Therefore I need to
Test whether the will of the people of US_1943
that controls the government of US_1943 can make
US_1943 accept the strategic goal of
European_Axis_1943 which is Dominance of Europe
by European Axis
Let us assume that the people of US_1943 would
accept Dominance of Europe by European Axis.
Could the people of US_1943 make the military of
US_1943 accept Dominance of Europe by European
Axis?
Yes, because US_1943 is a representative
democracy and the will of the military of US_1943
reflects the will of the people of US_1943
Therefore conclude that
The will of the people of US_1943 is a strategic
COG candidate that cannot be eliminated
53. Modeling advisor helping the expert to express his reasoning
54. Instructable agents: the Disciple approach
Introduction
Modeling experts' reasoning
Rule learning
Integrated modeling, learning, solving
Application and evaluation
Multi-agent problem solving and learning
55. Natural Language / Logic

Example (natural language):
I need to
Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Which is a member of Allied_Forces_1943?
US_1943
Therefore I need to
Identify and test a strategic COG candidate for US_1943

Rule learning produces:

INFORMAL STRUCTURE OF THE RULE
IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1? Answer: ?O2
THEN: Identify and test a strategic COG candidate for ?O2

FORMAL STRUCTURE OF THE RULE (logic)
IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2
56. Step 1: Formalize the tasks

Informal tasks:
I need to
Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Therefore I need to
Identify and test a strategic COG candidate for US_1943

Formalized tasks:
I need to
Identify and test a strategic COG candidate corresponding to a member of a force
  The force is Allied_Forces_1943
Therefore I need to
Identify and test a strategic COG candidate for a force
  The force is US_1943
57. Task formalization interface
58. Step 2: Find an explanation of why the example is correct
I need to
Identify and test a strategic COG candidate
corresponding to a member of the
Allied_Forces_1943
Which is a member of Allied_Forces_1943?
US_1943
Therefore I need to
Identify and test a strategic COG candidate for
US_1943
The explanation is the best possible
approximation of the question and the answer, in
the object ontology.
Allied_Forces_1943 has_as_member US_1943
59. Explanation generation and selection interface
60. Step 3: Generate the plausible version space rule
Example explanation: Allied_Forces_1943 has_as_member US_1943, rewritten as the specific condition:
  ?O1 is Allied_Forces_1943, has_as_member ?O2
  ?O2 is US_1943

Generated plausible version space rule:
IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition (most general generalization of the specific condition)
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition (most specific generalization of the specific condition)
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2

What is the justification for these generalizations?
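A hedged sketch of how the two bounds could be derived mechanically: the lower bound is the minimal generalization of the example instances (their direct concepts in the ontology), and the upper bound is the most general generalization allowed by the domain and range of the feature in the explanation. The dictionary values come from the slides; the function itself is illustrative, not the Disciple-RKF implementation.

instance_of = {          # direct concepts of the instances in the example
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "US_1943": "single_state_force",
}

feature_signature = {    # (domain, range) of the explanation's feature
    "has_as_member": ("multi_member_force", "force"),
}

explanation = ("Allied_Forces_1943", "has_as_member", "US_1943")

def plausible_version_space(expl):
    subj, feature, obj = expl
    domain, range_ = feature_signature[feature]
    # Lower bound: the minimal (most specific) generalization of the example.
    lower = {"?O1": instance_of[subj], "?O2": instance_of[obj]}
    # Upper bound: the maximal generalization allowed by the feature signature.
    upper = {"?O1": domain, "?O2": range_}
    return lower, upper

lower, upper = plausible_version_space(explanation)
print("Plausible Lower Bound:", lower)
print("Plausible Upper Bound:", upper)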
61. Analogical reasoning
62. Generalization by analogy
63. Instructable agents: the Disciple approach
Introduction
Modeling experts' reasoning
Rule learning
Integrated modeling, learning, solving
Application and evaluation
Multi-agent problem solving and learning
64. Control of modeling, learning and solving
[Flowchart with the elements: Input Task, Mixed-Initiative Problem Solving (over the Ontology and Rules), Generated Reduction, Accept Reduction, Reject Reduction, New Reduction, Modeling, Formalization, Learning, Task Refinement, Rule Refinement, Solution.]
66. Rule refinement with a positive example
Positive example that satisfies the upper bound:
I need to
Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
Therefore I need to
Identify and test a strategic COG candidate for Germany_1943
explanation: European_Axis_1943 has_as_member Germany_1943

Condition satisfied by the positive example:
  ?O1 is European_Axis_1943, has_as_member ?O2
  ?O2 is Germany_1943

Rule being refined:
IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition (less general than the upper bound)
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2
67. Minimal generalization of the plausible lower bound
Plausible Lower Bound Condition (from rule):
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force

Condition satisfied by the positive example:
  ?O1 is European_Axis_1943, has_as_member ?O2
  ?O2 is Germany_1943

Their minimal generalization is the New Plausible Lower Bound Condition:
  ?O1 is multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force

which is less general than (or at most as general as) the Plausible Upper Bound Condition:
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
68. Forces hierarchy
[Diagram: the forces generalization hierarchy (composition_of_forces), rooted at force, including multi_member_force, single_member_force, multi_state_force, multi_group_force, single_state_force, single_group_force, multi_state_alliance, multi_state_coalition, dominant_partner_multi_state_alliance, equal_partners_multi_state_alliance, dominant_partner_multi_state_coalition, and equal_partners_multi_state_coalition, with instances Germany_1943, US_1943, European_Axis_1943, and Allied_Forces_1943.]

multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.
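A Python sketch of the minimal generalization step over this hierarchy (the parent links below encode one plausible reading of the diagram, and European_Axis_1943 is recorded only as some kind of multi_state_alliance, which is all the slide requires): climb from the old lower bound one level at a time until the new positive example is covered, without passing the upper bound.

parent = {  # fragment of the forces hierarchy, as read from the diagram
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
    "multi_state_alliance": "multi_state_force",
    "multi_state_force": "multi_member_force",
    "multi_member_force": "force",
}
instance_of = {
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "European_Axis_1943": "multi_state_alliance",   # exact subconcept abstracted away
}

def covers(concept, instance):
    c = instance_of[instance]
    while c is not None:
        if c == concept:
            return True
        c = parent.get(c)
    return False

def minimal_generalization(lower_bound, new_instance, upper_bound):
    concept = lower_bound
    while not covers(concept, new_instance):
        concept = parent[concept]        # climb one level at a time
        if concept == upper_bound:
            break                        # never exceed the upper bound
    return concept

print(minimal_generalization("equal_partners_multi_state_alliance",
                             "European_Axis_1943",
                             "multi_member_force"))
# -> multi_state_alliance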
69. Refined rule
Rule before refinement:
IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition (less general than the upper bound)
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2

Refined rule (only the lower bound has changed):
IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition (less general than the upper bound)
  ?O1 is multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2
70. What are some learning strategies used by Disciple?
71. Instructable agents: the Disciple approach
Introduction
Modeling experts' reasoning
Rule learning
Integrated modeling, learning, solving
Application and evaluation
Multi-agent problem solving and learning
72. Use of Disciple in Army War College courses
319jw Case Studies in Center of Gravity Analysis
(COG), Term II and Term III
Students develop input scenarios
Students use Disciple as an intelligent assistant that supports them in developing a Center of Gravity analysis report for a war scenario.
589 Military Applications of Artificial Intelligence (MAAI), Term III
Students develop agents
Students act as subject matter experts that teach
personal Disciple agents their own reasoning in
Center of Gravity analysis.
73. COG Winter 2002: expert assessments
The use of Disciple is an assignment that is well
suited to the course's learning objectives
Disciple helped me to learn to perform a
strategic center of gravity analysis of a scenario
The use of Disciple was a useful learning
experience
Disciple should be used in future versions of
this course
74. 589 MAAI Spring 2002: SME assessment
I think that a subject matter expert can use
Disciple to build an agent, with limited
assistance from a knowledge engineer
75. Instructable agents: the Disciple approach
Introduction
Modeling experts' reasoning
Rule learning
Integrated modeling, learning, solving
Application and evaluation
Multi-agent problem solving and learning
76. Modeling, problem solving, and learning as collaborative agents
[Diagram: modeling, learning, and mixed-initiative problem solving as collaborating agents, built from components such as the Rule Analogy Engine, Explanation Generator, Rule Analyzer, Example Composer, Example Analyzer, Reasoning Tree Analyzer, and Solution Analyzer. They exchange: rules similar to the current example, new correct examples for learning, partially learned rules with informal and formal descriptions, new positive and negative examples for rule refinement, analyses of the instantiations of a rule, informal descriptions of the reasoning, new situations that need to be modeled, and reasoning trees similar to the current model.]
77. Question-answering based task reduction
A complex problem solving task is performed by:
- successively reducing it to simpler tasks
- finding the solutions of the simplest tasks
- successively composing these solutions until the solution to the initial task is obtained.

[Diagram: a task reduction tree in which task T1 is successively reduced, guided by question/answer pairs (Q1 with answers A11, ..., A1n; Q11b with answers A11b1, ..., A11bm), to simpler tasks (T11a, T11b, ..., T1n, T11b1, ..., T11bm), whose solutions (S11a, S11b1, ..., S11bm, ..., S1n) are composed bottom-up into the solution S1 of the initial task.]
Let T1 be the problem solving task to be
performed. Finding a solution is an iterative
process where, at each step, we consider some
relevant information that leads us to reduce the
current task to a simpler task or to several
simpler tasks. The question Q associated with
the current task identifies the type of
information to be considered. The answer A
identifies that piece of information and leads us
to the reduction of the current task.
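A minimal Python sketch of this reduce-then-compose control loop (the reduction table, the leaf solutions, and the composition by string formatting are placeholders for illustration; the question/answer pair associated with a task is shown but, in this toy version, the reduction it selects is fixed):

# Hypothetical reduction tree: task -> (question, answer, list of subtasks).
reductions = {
    "T1":   ("Q1", "A11", ["T11a", "T11b"]),
    "T11b": ("Q11b", "A11b1", ["T11b1"]),
}
# Solutions of the simplest (leaf) tasks.
leaf_solutions = {"T11a": "S11a", "T11b1": "S11b1"}

def solve(task):
    if task in leaf_solutions:                 # simplest task: solve directly
        return leaf_solutions[task]
    question, answer, subtasks = reductions[task]   # Q/A guide the reduction
    sub_solutions = [solve(t) for t in subtasks]
    # Compose the subtask solutions into the solution of the current task.
    return f"S({task}) from {', '.join(sub_solutions)}"

print(solve("T1"))   # S(T1) from S11a, S(T11b) from S11b1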
78. Intelligence Analysis
79. A possible multi-agent architecture for Disciple
[Diagram: each expert interacts with a personalized learning assistant comprising an Information-Interaction Agent, Support Agents, an Awareness Agent, an External-Expertise Agent, and an expert's model, each agent with its own KB; the assistants share a Local Shared KB and a Global KB containing the ontology and rules.]
80. Bibliography
Mitchell T.M., Machine Learning, McGraw Hill, 1997.
Shavlik J.W. and Dietterich T. (Eds.), Readings in Machine Learning, Morgan Kaufmann, 1990.
Buchanan B. and Wilkins D. (Eds.), Readings in Knowledge Acquisition and Learning: Automating the Construction and the Improvement of Programs, Morgan Kaufmann, 1992.
Langley P., Elements of Machine Learning, Morgan Kaufmann, 1996.
Michalski R.S., Carbonell J.G., and Mitchell T.M. (Eds.), Machine Learning: An Artificial Intelligence Approach, Morgan Kaufmann, 1983 (Vol. 1), 1986 (Vol. 2).
Kodratoff Y. and Michalski R.S. (Eds.), Machine Learning: An Artificial Intelligence Approach (Vol. 3), Morgan Kaufmann, 1990.
Michalski R.S. and Tecuci G. (Eds.), Machine Learning: A Multistrategy Approach (Vol. 4), Morgan Kaufmann, San Mateo, CA, 1994.
Tecuci G. and Kodratoff Y. (Eds.), Machine Learning and Knowledge Acquisition: Integrated Approaches, Academic Press, 1995.
Tecuci G., Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies, Academic Press, 1998.