Knowledge Acquisition and Problem Solving - PowerPoint PPT Presentation

About This Presentation
Title:

Knowledge Acquisition and Problem Solving

Description:

CS 785 Fall 2004, Gheorghe Tecuci (tecuci@gmu.edu, http://lac.gmu.edu/), Learning Agents Center and Computer Science Department, George Mason University – PowerPoint PPT presentation


Transcript and Presenter's Notes

Title: Knowledge Acquisition and Problem Solving


1
CS 785 Fall 2004
Knowledge Acquisition and Problem Solving
Mixed-initiative Problem Solving and Knowledge
Base Refinement
Gheorghe Tecuci, tecuci@gmu.edu, http://lac.gmu.edu/
Learning Agents Center and Computer Science
Department George Mason University
2
Overview
The Rule Refinement Problem and Method
Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning
Method
Demo Problem Solving and Rule Refinement
Recommended Reading
3
The rule refinement problem (definition)

GIVEN:
• a plausible version space rule;
• a positive or a negative example of the rule (i.e. a correct or an incorrect problem solving episode);
• a knowledge base that includes an object ontology and a set of problem solving rules;
• an expert who understands why the example is positive or negative, and can answer the agent's questions.

DETERMINE:
• an improved rule that covers the example if it is positive, or does not cover the example if it is negative;
• an extended object ontology (if needed for rule refinement).
4
Initial example from which the rule was learned

I need to
Identify and test a strategic COG candidate
corresponding to a member of the
Allied_Forces_1943
Which is a member of Allied_Forces_1943?
US_1943
Therefore I need to
Identify and test a strategic COG candidate for
US_1943
This is an example of a problem solving step from
which the agent will learn a general problem
solving rule.
5
Learned rule to be refined

INFORMAL STRUCTURE OF THE RULE

IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1?
Answer: ?O2
THEN: Identify and test a strategic COG candidate for ?O2

FORMAL STRUCTURE OF THE RULE

IF: Identify and test a strategic COG candidate corresponding to a member of a force
    The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force
Plausible Lower Bound Condition:
    ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
    The force is ?O2
6
The agent uses the partially learned rules in problem solving. The solutions generated by the agent when it uses the plausible upper bound condition have to be confirmed or rejected by the expert. We will now present how the agent improves (refines) its rules based on these examples. In essence, the plausible lower bound condition is generalized and the plausible upper bound condition is specialized, both conditions converging toward one another. The next slide illustrates the rule refinement process.

Initially the agent does not contain any task or rule in its knowledge base. The expert teaches the agent to reduce the task
    "Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943"
to the task
    "Identify and test a strategic COG candidate for US_1943".
From this task reduction the agent learns a plausible version space task reduction rule, as has been illustrated before. Now the agent can use this rule in problem solving, proposing to reduce the task
    "Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943"
to the task
    "Identify and test a strategic COG candidate for Germany_1943".
The expert accepts this reduction as correct, and the agent refines the rule. In the following we will show the internal reasoning of the agent that corresponds to this behavior.
7
Rule refinement method
Learning by Analogy And Experimentation
Knowledge Base
PVS Rule
Failure explanation
Example of task reductions generated by the agent
Incorrect example
Correct example
Learning from Explanations
Learning from Examples
8
Version space rule learning and refinement

Let E1 be the first task reduction from which the rule is learned. The agent learns a rule with a very specific lower bound condition (LB) and a very general upper bound condition (UB).

Let E2 be a new task reduction generated by the agent and accepted as correct by the expert. Then the agent generalizes LB as little as possible to cover it.

Let E3 be a new task reduction generated by the agent which is rejected by the expert. Then the agent specializes UB as little as possible to uncover it while remaining more general than LB.

After several iterations of this process LB may become identical with UB, and a rule with an exact condition is learned.
9
(No Transcript)
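The bound adjustments described above (generalize LB minimally on a positive example, specialize UB minimally on a negative one) can be sketched in code. The following is a minimal illustrative sketch, not the Disciple implementation: conditions are reduced to a single concept per rule, and the toy hierarchy and all helper names are assumptions for illustration.

```python
# Toy generalization hierarchy: child -> parent. Illustrative only.
PARENT = {
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
    "multi_state_alliance": "multi_state_force",
    "multi_state_force": "multi_member_force",
    "multi_group_force": "multi_member_force",
    "multi_member_force": "force",
}

def ancestors(c):
    """Return [c, parent(c), ..., root] along the hierarchy."""
    chain = [c]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def covers(concept, c):
    """A concept covers c iff c is the concept itself or a descendant of it."""
    return concept in ancestors(c)

def generalize_lb(lb, example):
    """Climb from LB just far enough to cover a new positive example."""
    for g in ancestors(lb):
        if covers(g, example):
            return g
    raise ValueError("no common generalization")

def specialize_ub(ub, positives, negative):
    """Descend from UB as little as possible: return the most general concept
    below UB that still covers all positives but no longer covers the negative."""
    for g in reversed(ancestors(positives[0])):  # most general candidates first
        if g == ub:
            continue
        if all(covers(g, p) for p in positives) and not covers(g, negative):
            return g
    return None

# E1 yields a very specific LB and a very general UB.
lb, ub = "equal_partners_multi_state_alliance", "force"
# E2 (positive, accepted by the expert): generalize LB minimally.
lb = generalize_lb(lb, "dominant_partner_multi_state_alliance")
# E3 (negative, rejected by the expert): specialize UB minimally.
ub = specialize_ub(ub, [lb], "multi_group_force")
print(lb, ub)  # multi_state_alliance multi_state_force
```

The two bounds move toward each other exactly as on the slide: LB climbs one level, UB drops one level, and each move is the minimal one that handles the new example.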
10
Rule refinement with a positive example

Positive example that satisfies the upper bound:
I need to: Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
Therefore I need to: Identify and test a strategic COG candidate for Germany_1943
explanation: European_Axis_1943 has_as_member Germany_1943

The rule being refined:
IF: Identify and test a strategic COG candidate corresponding to a member of a force
    The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force
Plausible Lower Bound Condition:
    ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
    The force is ?O2

Condition satisfied by the positive example (less general than the Plausible Upper Bound Condition):
    ?O1 is European_Axis_1943, has_as_member ?O2
    ?O2 is Germany_1943
11
The upper right side of this slide shows an
example generated by the agent. This example is
generated because it satisfies the plausible
upper bound condition of the rule (as shown by
the red arrows). This example is accepted as
correct by the expert. Therefore the plausible
lower bound condition is generalized to cover it
as shown in the following.
12
Minimal generalization of the plausible lower bound

Plausible Lower Bound Condition (from rule):
    ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force

Condition satisfied by the positive example:
    ?O1 is European_Axis_1943, has_as_member ?O2
    ?O2 is Germany_1943

minimal generalization of the two conditions above:

New Plausible Lower Bound Condition:
    ?O1 is multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force

which is less general than (or at most as general as) the Plausible Upper Bound Condition:
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force
13
The lower left side of this slide shows the plausible lower bound condition of the rule. The lower right side shows the condition corresponding to the generated positive example. These two conditions are generalized, as shown in the middle of the slide, by using the climbing generalization hierarchy rule. Notice, for instance, that equal_partners_multi_state_alliance and European_Axis_1943 are generalized to multi_state_alliance. This generalization is based on the object ontology, as illustrated in the following slide. Indeed, multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.
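The climbing generalization hierarchy rule can be sketched as a least-common-ancestor computation applied variable by variable. This is an illustrative sketch, not Disciple's code; the ontology fragment and helper names are assumptions, including the assumption that European_Axis_1943 is an instance of dominant_partner_multi_state_alliance (so that its minimal common generalization with equal_partners_multi_state_alliance is multi_state_alliance, as stated above).

```python
# Ontology fragment (child/instance -> parent). Illustrative assumption.
PARENT = {
    "US_1943": "single_state_force",
    "Germany_1943": "single_state_force",
    "single_state_force": "single_member_force",
    "single_member_force": "force",
    "European_Axis_1943": "dominant_partner_multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "multi_state_alliance": "multi_state_force",
    "multi_state_force": "multi_member_force",
    "multi_member_force": "force",
}

def chain(c):
    """Return [c, parent(c), ..., root] along the generalization hierarchy."""
    out = [c]
    while out[-1] in PARENT:
        out.append(PARENT[out[-1]])
    return out

def minimal_generalization(a, b):
    """Least common ancestor: the minimal concept covering both a and b."""
    bs = set(chain(b))
    for g in chain(a):
        if g in bs:
            return g
    raise ValueError("no common generalization")

def generalize_condition(lower_bound, example):
    """Climbing generalization hierarchy rule, applied variable by variable."""
    return {v: minimal_generalization(lower_bound[v], example[v])
            for v in lower_bound}

lb = {"?O1": "equal_partners_multi_state_alliance", "?O2": "single_state_force"}
ex = {"?O1": "European_Axis_1943", "?O2": "Germany_1943"}
print(generalize_condition(lb, ex))
# {'?O1': 'multi_state_alliance', '?O2': 'single_state_force'}
```

For ?O2 the existing concept single_state_force already covers Germany_1943, so it is unchanged; only ?O1 climbs, and only by one level.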
14
Forces (a fragment of the composition_of_forces generalization hierarchy)

force
  multi_member_force
    multi_state_force
      multi_state_alliance
        dominant_partner_multi_state_alliance (instance: European_Axis_1943)
        equal_partners_multi_state_alliance (instance: Allied_Forces_1943)
      multi_state_coalition
        dominant_partner_multi_state_coalition
        equal_partners_multi_state_coalition
    multi_group_force
  single_member_force
    single_state_force (instances: US_1943, Germany_1943)
    single_group_force

multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.
15
Refined rule

IF: Identify and test a strategic COG candidate corresponding to a member of a force
    The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition (unchanged):
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force
Plausible Lower Bound Condition (before refinement):
    ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
Plausible Lower Bound Condition (after generalization):
    ?O1 is multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
    The force is ?O2
16
Overview
The Rule Refinement Problem and Method
Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning
Method
Demo Problem Solving and Rule Refinement
Recommended Reading
17
The rule refinement method general presentation

Let R be a plausible version space rule, U its plausible upper bound condition, L its plausible lower bound condition, and E a new example of the rule.

1. If E is covered by U but not covered by L then:
   • If E is a positive example, then L needs to be generalized as little as possible to cover it, while remaining less general than or at most as general as U.
   • If E is a negative example, then U needs to be specialized as little as possible to no longer cover it, while remaining more general than or at least as general as L. Alternatively, both bounds need to be specialized.

2. If E is covered by L then:
   • If E is a positive example, then R need not be refined.
   • If E is a negative example, then both U and L need to be specialized as little as possible to no longer cover this example while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.

3. If E is not covered by U then:
   • If E is a positive example, then it represents a positive exception to the rule.
   • If E is a negative example, then no refinement is necessary.
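The three cases above can be made concrete using the simplest condition language the slides mention, a numeric interval (slide 19 refers to "specializing a numeric interval"). This is an illustrative toy, not Disciple's condition language; the class and method names are assumptions.

```python
# Toy plausible version space rule over integers: UB and LB are intervals,
# with LB inside UB; an example is a number. Illustrative sketch only.
class PVSRule:
    def __init__(self, lb, ub):
        self.lb, self.ub = lb, ub  # (low, high) tuples, lb within ub
        self.positives, self.pos_exceptions, self.neg_exceptions = [], [], []

    def _in(self, iv, x):
        return iv[0] <= x <= iv[1]

    def refine(self, x, positive):
        in_U, in_L = self._in(self.ub, x), self._in(self.lb, x)
        if in_U and not in_L:                       # case 1
            if positive:                            # generalize LB minimally
                self.lb = (min(self.lb[0], x), max(self.lb[1], x))
                self.positives.append(x)
            else:                                   # specialize UB minimally
                if x > self.lb[1]:
                    self.ub = (self.ub[0], x - 1)
                else:
                    self.ub = (x + 1, self.ub[1])
        elif in_L:                                  # case 2
            if positive:
                self.positives.append(x)            # nothing to refine
            else:
                # this toy does not try to shrink both bounds, so the
                # negative example is simply recorded as an exception
                self.neg_exceptions.append(x)
        else:                                       # case 3: outside UB
            if positive:
                self.pos_exceptions.append(x)
            # a negative example outside UB needs no refinement

rule = PVSRule(lb=(5, 5), ub=(0, 100))  # learned from a first example, 5
rule.refine(9, positive=True)           # case 1, positive: LB grows
rule.refine(40, positive=False)         # case 1, negative: UB shrinks
print(rule.lb, rule.ub)                 # (5, 9) (0, 39)
```

Each refinement is minimal: the positive example stretches LB only to 9, and the negative example pulls UB down only to 39, leaving everything between the bounds undecided, exactly as in case 1 above.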
18
The rule refinement method general presentation
  • If E is covered by U but it is not covered by L
    then
  • If E is a positive example then L needs to be
    generalized as little as possible to cover it
    while remaining less general than or at most as
    general as U.
19
The rule refinement method general presentation
  • If E is covered by U but it is not covered by L
    then
  • If E is a negative example then U needs to be
    specialized as little as possible to no longer
    cover it while remaining more general than or at
    least as general as L.
  • Alternatively, both bounds need to be
    specialized.

Strategy 1: Specialize UB by using a specialization rule (e.g. the descending the generalization hierarchy rule, or the specializing a numeric interval rule).
20
The rule refinement method general presentation
Strategy 2: Find a failure explanation EXw of why E is a wrong problem solving episode.

EXw identifies the features that make E a wrong problem solving episode. The inductive hypothesis is that the correct problem solving episodes should not have these features. EXw is taken as an example of a condition that the correct problem solving episodes should not satisfy: an Except-When condition. The Except-When condition also needs to be learned, based on additional examples. Based on EXw, an initial Except-When plausible version space condition is generated.
21
The rule refinement method general presentation

Strategy 3: Find an additional explanation EXw for the correct problem solving episodes, which is not satisfied by the current wrong problem solving episode.

Specialize both bounds of the plausible version space condition by:
• adding the most general generalization of EXw, corresponding to the examples encountered so far, to the upper bound;
• adding the least general generalization of EXw, corresponding to the examples encountered so far, to the lower bound.
22
The rule refinement method general presentation

2. If E is covered by L then: if E is a positive example, then R need not be refined.
23
The rule refinement method general presentation

2. If E is covered by L then: if E is a negative example, then both U and L need to be specialized as little as possible to no longer cover this example while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.

Strategy 1: Find a failure explanation EXw of why E is a wrong problem solving episode and create an Except-When plausible version space condition, as indicated before.
24
The rule refinement method general presentation
3. If E is not covered by U then If E is a
positive example then it represents a positive
exception to the rule. If E is a negative
example then no refinement is necessary.
25
Overview
The Rule Refinement Problem and Method
Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning
Method
Demo Problem Solving and Rule Refinement
Recommended Reading
26
Initial example from which a rule was learned

IF the task to accomplish is:
    Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Question: Who or what is a strategically critical industrial civilization element in US_1943?
Answer: Industrial_capacity_of_US_1943
THEN:
    Industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943
27
Learned PVS rule to be refined

INFORMAL STRUCTURE OF THE RULE

IF: Identify the strategic COG candidates with respect to the industrial civilization of ?O1
Question: Who or what is a strategically critical industrial civilization element in ?O1?
Answer: ?O2
THEN: ?O2 is a strategic COG candidate for ?O1

FORMAL STRUCTURE OF THE RULE

IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2, ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition:
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
Plausible Lower Bound Condition:
    ?O1 IS US_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_transports_of_US_1943
THEN: A strategic COG relevant factor is a strategic COG candidate for a force
    The force is ?O1
    The strategic COG relevant factor is ?O2
28
Positive example covered by the upper bound

Positive example that satisfies the upper bound:
IF the task to accomplish is:
    Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is Germany_1943
THEN accomplish the task:
    A strategic COG relevant factor is a strategic COG candidate for a force
    The force is Germany_1943
    The strategic COG relevant factor is Industrial_capacity_of_Germany_1943
explanation: Germany_1943 has_as_industrial_factor Industrial_capacity_of_Germany_1943, Industrial_capacity_of_Germany_1943 is_a_major_generator_of War_materiel_and_fuel_of_Germany_1943

The rule being refined (formal structure):
IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2, ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition:
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
Plausible Lower Bound Condition:
    ?O1 IS US_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_transports_of_US_1943
THEN: A strategic COG relevant factor is a strategic COG candidate for a force
    The force is ?O1
    The strategic COG relevant factor is ?O2

Condition satisfied by the positive example (less general than the Plausible Upper Bound Condition):
    ?O1 IS Germany_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_Germany_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_fuel_of_Germany_1943
29
Minimal generalization of the plausible lower bound

Plausible Lower Bound Condition (from rule):
    ?O1 IS US_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_transports_of_US_1943

Condition satisfied by the positive example:
    ?O1 IS Germany_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_Germany_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_fuel_of_Germany_1943

minimal generalization of the two conditions above:

New Plausible Lower Bound Condition:
    ?O1 IS Single_state_force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel

which is less general than (or at most as general as) the Plausible Upper Bound Condition:
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
30
Generalization hierarchy of forces (fragment)

<object>
  Force
    Group
    Opposing_force
    Multi_state_force — instances: Anglo_allies_1943, European_axis_1943
    Multi_group_force
    Single_state_force — instances: US_1943, Britain_1943, Germany_1943, Italy_1943
    Single_group_force

component_state relations: Anglo_allies_1943 has component states US_1943 and Britain_1943; European_axis_1943 has component states Germany_1943 and Italy_1943.
31
Generalized rule

IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2, ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition (unchanged):
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
Plausible Lower Bound Condition (before refinement):
    ?O1 IS US_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_transports_of_US_1943
Plausible Lower Bound Condition (after generalization):
    ?O1 IS Single_state_force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel
THEN: A strategic COG relevant factor is a strategic COG candidate for a force
    The force is ?O1
    The strategic COG relevant factor is ?O2
32
A negative example covered by the upper bound

Negative example that satisfies the upper bound:
IF the task to accomplish is:
    Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is Italy_1943
THEN accomplish the task:
    A strategic COG relevant factor is a strategic COG candidate for a force
    The force is Italy_1943
    The strategic COG relevant factor is Farm_implement_industry_of_Italy_1943
explanation: Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943, Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

The rule's bounds:
Plausible Upper Bound Condition:
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
Plausible Lower Bound Condition:
    ?O1 IS Single_state_force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel

Condition satisfied by the negative example (less general than the Plausible Upper Bound Condition):
    ?O1 IS Italy_1943, has_as_industrial_factor ?O2
    ?O2 IS Farm_implement_industry_of_Italy_1943, is_a_major_generator_of ?O3
    ?O3 IS Farm_implements_of_Italy_1943
33
Automatic generation of plausible explanations

IF: Identify the strategic COG candidates with respect to the industrial civilization of Italy_1943
Question: Who or what is a strategically critical industrial civilization element in Italy_1943?
Answer: Industrial_capacity_of_Italy_1943 (the answer Farm_implement_industry_of_Italy_1943 proposed by the rule is rejected by the expert: "No!")
THEN: Industrial_capacity_of_Italy_1943 is a strategic COG candidate for Italy_1943

explanation of the rejected step: Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943, Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

The agent generates a list of plausible explanations from which the expert has to select the correct one:
    Farm_implement_industry_of_Italy_1943 IS_NOT Industrial_capacity
    Farm_implements_of_Italy_1943 IS_NOT Strategically_essential_goods_or_materiel
34
Minimal specialization of the plausible upper bound

Plausible Upper Bound Condition (from rule):
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product

Condition satisfied by the negative example:
    ?O1 IS Italy_1943, has_as_industrial_factor ?O2
    ?O2 IS Farm_implement_industry_of_Italy_1943, is_a_major_generator_of ?O3
    ?O3 IS Farm_implements_of_Italy_1943

specialization to exclude the negative example:

New Plausible Upper Bound Condition:
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel

which remains more general than (or at least as general as) the New Plausible Lower Bound Condition:
    ?O1 IS Single_state_force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel
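This specialization step, descending the generalization hierarchy for ?O3 until the negative example is excluded while the known positives stay covered, can be sketched as follows. The hierarchy fragment is transcribed from the next slide; the helper names are illustrative assumptions, not Disciple's API.

```python
# Hierarchy fragment (child/instance -> parent) around Product. Illustrative.
PARENT = {
    "Strategically_essential_goods_or_materiel": "Product",
    "Non_strategically_essential_goods_or_services": "Product",
    "War_materiel_and_transports": "Strategically_essential_goods_or_materiel",
    "War_materiel_and_fuel": "Strategically_essential_goods_or_materiel",
    "Farm_implements_of_Italy_1943": "Non_strategically_essential_goods_or_services",
    "War_materiel_and_transports_of_US_1943": "War_materiel_and_transports",
    "War_materiel_and_fuel_of_Germany_1943": "War_materiel_and_fuel",
}

def chain(c):
    """Return [c, parent(c), ..., root] along the hierarchy."""
    out = [c]
    while out[-1] in PARENT:
        out.append(PARENT[out[-1]])
    return out

def covers(concept, c):
    return concept in chain(c)

def specialize_ub(ub, positives, negative):
    """Walk down from ub toward the positives, stopping at the most general
    concept that still covers every positive but not the negative example."""
    for g in reversed(chain(positives[0])):  # most general candidates first
        if g == ub:
            continue
        if all(covers(g, p) for p in positives) and not covers(g, negative):
            return g
    return None

new_ub = specialize_ub(
    "Product",
    ["War_materiel_and_transports_of_US_1943",
     "War_materiel_and_fuel_of_Germany_1943"],
    "Farm_implements_of_Italy_1943",
)
print(new_ub)  # Strategically_essential_goods_or_materiel
```

One step down from Product suffices: Strategically_essential_goods_or_materiel still covers both positive examples but no longer covers the Italian farm implements, which sit under Non_strategically_essential_goods_or_services.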
35
Fragment of the generalization hierarchy

(Figure: the specialization step descends from Product, the old upper bound (UB), to its subconcept Strategically_essential_goods_or_materiel, the new bound alongside the lower bound (LB). Concepts shown in the fragment: <object>; Resource_or_infrastructure_element; Product; Raw_material with subconcept Strategic_raw_material; Strategically_essential_resource_or_infrastructure_element; Strategically_essential_goods_or_materiel, with subconcepts War_materiel_and_transports and War_materiel_and_fuel; Strategically_essential_infrastructure_element, with subconcepts Main_airport, Main_seaport, Sole_airport, Sole_seaport; and Non-strategically_essential_goods_or_services, of which Farm_implements_of_Italy_1943 is an instance, so the negative example falls outside the specialized bound.)
36
Specialized rule

IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2, ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition (before refinement):
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
Plausible Upper Bound Condition (after specialization):
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel
Plausible Lower Bound Condition (unchanged):
    ?O1 IS Single_state_force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel
THEN: A strategic COG relevant factor is a strategic COG candidate for a force
    The force is ?O1
    The strategic COG relevant factor is ?O2
37
Overview
The Rule Refinement Problem and Method
Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning
Method
Demo Problem Solving and Rule Refinement
Recommended Reading
38
Control of modeling, learning and problem solving
Input Task
Mixed-Initiative Problem Solving
Ontology Rules
Generated Reduction
Reject Reduction
New Reduction
Accept Reduction
Solution
Rule Refinement
Task Refinement
Modeling
Rule Refinement
Formalization
Learning
39
  • This slide shows the interaction between the
    expert and the agent when the agent has already
    learned some rules.
  • This interaction is governed by the
    mixed-initiative problem solver.
  • The expert formulates the initial task.
  • Then the agent attempts to reduce this task by
    using the previously learned rules. Let us assume
    that the agent succeeds in proposing a reduction
    of the current task.
  • The expert has to accept the reduction if it is
    correct, or reject it if it is incorrect.
  • If the reduction proposed by the agent is
    accepted by the expert, the rule that generated
    it and its component tasks are generalized. Then
    the process resumes, the agent attempting to
    reduce the new task.
  • If the reduction proposed by the agent is
    rejected, then the agent will have to specialize
    the rule, and possibly its component tasks.
  • In this case the expert will have to indicate the
    correct reduction, going through the normal steps
    of modeling, formalization, and learning.
    Similarly, when the agent cannot propose a
    reduction of the current task, the expert will
    have to indicate it, again going through the
    steps of modeling, formalization and learning.
  • The control of this interaction is done by the
    mixed-initiative problem solver tool.

40
A systematic approach to agent teaching

(Figure: the reduction tree for "Identify and test a strategic COG candidate for the Sicily_1943 scenario". Allied_Forces_1943 and European_Axis_1943 are each analyzed as an alliance (branches 13 and 14) and as individual states (branches 11 and 12). The individual states US_1943, Britain_1943, Germany_1943, and Italy_1943 are each analyzed with respect to government, people, military, economy, and other factors (branches 1 through 10).)
41
  • This slide shows a recommended order of
    operations for teaching the agent:
  • Modeling for branches 1 through 5.
  • Rule Learning for branches 1 through 5.
  • Problem solving, Rule refinement, Modeling, and
    Rule Learning for branches 6 through 10. You
    will notice that several of the rules learned
    from branch 1 will apply to generate branch 6.
    One only needs to model and teach Disciple for
    those steps where the previously learned rules do
    not apply (i.e. for the aspects where there are
    significant differences between US_1943 and
    Britain_1943 with respect to their
    governments). Similarly, several of the rules
    learned from branch 2 will apply to generate
    branch 7, and so on.
  • Problem solving, Rule refinement, Modeling, and
    Rule Learning for branches 11 and 12. Again,
    many of the rules learned from branches 1
    through 10 will apply to branches 11 and 12.
  • Modeling for branch 13.
  • Rule Learning for branch 13.
  • Problem solving, Rule refinement, Modeling, and
    Rule Learning for branch 14.

42
Overview
The Rule Refinement Problem and Method
Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning
Method
Demo Problem Solving and Rule Refinement
Recommended Reading
43
Characterization of the PVS rule
44
This slide shows the relationship between the
plausible lower bound condition, the plausible
upper bound condition, and the exact
(hypothetical) condition that the agent is
attempting to learn. During rule learning, both
the upper bound and the lower bound are
generalized and specialized to converge toward
one another and toward the hypothetical exact
condition. This is different from the classical
version space method where the upper bound is
only specialized and the lower bound is only
generalized. Notice also that, as opposed to the
classical version space method (where the exact
condition is always between the upper and the
lower bound conditions), in Disciple the exact
condition may not include part of the plausible
lower bound condition, and may include a part
that is outside the plausible upper bound
condition. We say that the plausible lower bound
is, AS AN APPROXIMATION, less general than the
hypothetical exact condition. Similarly, the
plausible upper bound is, AS AN APPROXIMATION,
more general than the hypothetical exact
condition. These characteristics are a
consequence of the incompleteness of the
representation language (i.e. the incompleteness
of the object ontology), of the heuristic
strategies used to learn the rule, and of the
fact that the object ontology may evolve during
learning.
45
Characterization of the rule learning method

• Uses the explanation of the first positive example to generate a much smaller version space than the classical version space method.
• Conducts an efficient heuristic search of the version space, guided by explanations and by the maintenance of a single upper bound condition and a single lower bound condition.
• Will always learn a rule, even in the presence of exceptions.
• Learns from a few examples and an incomplete knowledge base.
• Uses a form of multistrategy learning that synergistically integrates learning from examples, learning from explanations, and learning by analogy, to compensate for the incomplete knowledge.
• Uses mixed-initiative reasoning to involve the expert in the learning process.
• Is applicable in complex real-world domains, being able to learn within a complex representation language.
46
Problem solving with PVS rules

(Figure: depending on whether an example satisfies the rule's main PVS condition and its Except-When PVS condition, the rule's conclusion is judged to be (most likely) correct, plausible, (most likely) incorrect, or not plausible.)
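One plausible way to operationalize these labels in code is shown below. The exact combination table is an illustrative reading of the slide, not a transcription of Disciple's algorithm; the function and parameter names are assumptions.

```python
def assess(main_lb, main_ub, except_lb, except_ub):
    """Grade a rule's conclusion from which bounds the example satisfies.
    Each argument is a bool: does the example satisfy that bound?
    main_* are the main PVS condition's bounds; except_* are the
    Except-When PVS condition's bounds. Illustrative sketch only."""
    if not main_ub:
        return "not plausible"            # main condition not satisfied
    if except_lb:
        return "(most likely) incorrect"  # firmly matches an Except-When case
    if main_lb and not except_ub:
        return "(most likely) correct"    # firmly inside, no Except-When match
    return "plausible"                    # only the plausible bounds apply

print(assess(main_lb=True, main_ub=True, except_lb=False, except_ub=False))
# (most likely) correct
```

On this reading, the lower bounds carry the "(most likely)" verdicts because they are the parts of each condition the agent is confident about, while examples that satisfy only the upper bounds remain merely plausible and need the expert's confirmation.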
47
Overview
The Rule Refinement Problem and Method
Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning
Method
Demo Problem Solving and Rule Refinement
Recommended Reading
48
DISCIPLE-RKF
Disciple-RKF/COG Integrated Modeling, Learning
and Problem Solving
49
Disciple uses the partially learned rules in problem solving and refines them based on the expert's feedback.
This is done in the Refining mode.
50
Disciple applies previously learned rules in other similar cases.
The expert can expand the "More" node to view the solution generated by the rule.
51
Disciple applies the rule learned from the Republican Guard Protection Unit to the System of Saddam's doubles.
The "?" indicates that Disciple is uncertain whether the reasoning step is correct.
52
The expert has to examine this step and indicate whether it is:
• correct but incompletely explained, by selecting "Explain Example";
• correct and completely explained, by selecting "Correct Example";
• incorrect, by selecting "Incorrect Example".
53
The expert has indicated that the reasoning step
is correct and Disciple has generalized the
plausible lower bound condition of the
corresponding rule, to cover this example.
54
Following the same procedure, Disciple
generalized the plausible lower bound condition
of the rule used to generate this elementary
solution.
55
Another protection means of Saddam Hussein is the
Complex of Bunkers of Iraq 2003. Since this means
of protection is different from the previously
identified ones, the learned rules do not apply.
The expert has to provide the modeling that
identifies the Complex of Bunkers of Iraq 2003 as
a means for protection of Saddam Hussein and to
test it for any significant vulnerabilities.
This is done with the Modeling tool.
56
Disciple starts the modeling tool with the
appropriate task and suggests the question to ask
57
The expert develops a complete modeling for the
Complex of Bunkers of Iraq 2003
When the modeling is completed, the expert
returns to the teaching tool
58
Disciple can now learn new rules for the Complex of Bunkers of Iraq 2003 as a means of protection for Saddam Hussein.
59
Recommended reading

Tecuci G., Boicu M., Boicu C., Marcu D., Stanescu B., Barbulescu M., "The Disciple-RKF Learning and Reasoning Agent," Research Report submitted for publication, Learning Agents Center, George Mason University, September 2004.

Tecuci G., Building Intelligent Agents, Academic Press, 1998, pp. 21-23, 27-32, 101-129, 198-228.

Tecuci G., Boicu M., Bowman M., and Marcu D., with a commentary by Murray Burke, "An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing," invited paper for the special IAAI issue of AI Magazine, Vol. 22, No. 2, Summer 2001, pp. 43-61. http://lac.gmu.edu/publications/data/2001/COA-critiquer.pdf

Boicu M., Tecuci G., Stanescu B., Marcu D., and Cascaval C., "Automatic Knowledge Acquisition from Subject Matter Experts," in Proceedings of the IEEE International Conference on Tools with Artificial Intelligence, Dallas, Texas, November 2001. http://lac.gmu.edu/publications/data/2001/ICTAI.doc