Title: Design of Multi-Agent Systems
1 Design of Multi-Agent Systems
- Teacher
- Bart Verheij
- Student assistants
- Albert Hankel
- Elske van der Vaart
- Web site
- http://www.ai.rug.nl/verheij/teaching/dmas/
- (Nestor contains a link)
2 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration
3 Deductive Reasoning Agents
- Decide what to do on the basis of a theory
  stating the best action to perform in any given
  situation: Δ ⊢ρ Do(a) with a ∈ Ac
- where
- ρ is such a theory (typically a set of rules)
- Δ is a logical database that describes the
  current state of the world
- Ac is the set of actions the agent can perform
4 Deductive Reasoning Agents
- But
- Theorem proving is in general neither fast nor
  efficient
- Calculative rationality (rationality with respect
  to the moment calculation started) requires a
  static environment
- Encoding the perceived environment into logical
  symbols isn't straightforward
- So
- Use a weaker logic
- Use a symbolic, not logic-based representation
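The rule-driven action selection above can be sketched as follows. This is a minimal sketch, not a theorem prover: the "theory" ρ is assumed to be a list of (premises, action) rules, and Δ a set of ground atoms; the rule names and atoms are illustrative, not from the slides.

```python
# Deductive action selection (toy version): derive Do(a) by finding a
# rule whose premises all hold in the database Δ.

def action(delta, rho, Ac):
    """Return the first action a in Ac for which Do(a) is derivable."""
    for premises, a in rho:
        if premises <= delta and a in Ac:   # all premises hold in Δ
            return a
    return None                             # no action derivable

# A hypothetical vacuum-world theory: two condition → action rules.
rho = [({"Dirt(here)"}, "suck"),
       ({"Clear(ahead)"}, "forward")]
Ac = {"suck", "forward", "turn"}

# action({"Dirt(here)", "Clear(ahead)"}, rho, Ac) → "suck"
```

Rule order matters here: the first derivable action wins, which is one simple way of encoding "the best action in any given situation".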
5 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration
6 Planning: STRIPS
- Only atoms and their negation
- Only represent changes
- Blocks world (blocks + a robot arm)
- Stack(x,y)
- Pre: Clear(y), Holding(x)
- Del: Clear(y), Holding(x)
- Add: ArmEmpty, On(x,y)
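Applying a STRIPS operator amounts to simple set operations, as in this sketch. States are assumed to be sets of ground atoms written as strings; the starting state is illustrative.

```python
# STRIPS operator application: check preconditions, remove the delete
# list, add the add list.

def applicable(state, pre):
    """An operator is applicable when all its preconditions hold."""
    return pre <= state

def apply_op(state, pre, del_list, add_list):
    """Apply a STRIPS operator to a state (a set of ground atoms)."""
    if not applicable(state, pre):
        raise ValueError("preconditions not satisfied")
    return (state - del_list) | add_list

# Stack(a, b): the arm holds a, and b is clear.
state = {"Clear(b)", "Holding(a)", "On(b,c)"}
pre   = {"Clear(b)", "Holding(a)"}
dels  = {"Clear(b)", "Holding(a)"}
adds  = {"ArmEmpty", "On(a,b)"}

new_state = apply_op(state, pre, dels, adds)
# new_state == {"On(b,c)", "ArmEmpty", "On(a,b)"}
```

Note how "only represent changes" shows up directly: On(b,c) survives untouched because it appears in neither the delete nor the add list.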
7 Problems with planning
- Frame problem
- Describe what does not change when an action is performed
- Qualification problem
- Describe all preconditions of an action
- Ramification problem
- Describe all consequences of an action
- Prediction problem
- Describe how long something remains true
8 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration
9 Agent-oriented programming
- Agent0 (Shoham)
- Key idea: directly programming agents in terms of
  intentional notions like belief, commitment, and
  intention
- In other words, the intentional stance is used as
  an abstraction tool for programming!
10 Agent-oriented programming
- Shoham suggested that a complete AOP system will
  have 3 components:
- a logic for specifying agents and describing
  their mental states
- an interpreted programming language for
  programming agents (example: Agent0)
- an agentification process, for converting
  neutral applications (e.g., databases) into
  agents
11 Agent-oriented programming
- Agents in Agent0 have four components:
- a set of capabilities (things the agent can do)
- a set of initial beliefs
- a set of initial commitments (things the agent
  will do)
- a set of commitment rules
12 Agent-oriented programming
- Each commitment rule contains
- a message condition
- a mental condition
- an action
- On each agent cycle
- The message condition is matched against the
  messages the agent has received
- The mental condition is matched against the
  beliefs of the agent
- If the rule fires, then the agent becomes
  committed to the action (the action gets added to
  the agent's commitment set)
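The agent cycle above can be sketched in a few lines. This is a hypothetical simplification, not Shoham's actual Agent0 interpreter: conditions are modeled as predicates, and the example rule, message format, and belief atoms are invented for illustration.

```python
# One Agent0-style cycle: a rule fires when its message condition matches
# some received message and its mental condition matches the beliefs;
# firing adds the rule's action to the commitment set.

def agent_cycle(messages, beliefs, commitments, rules):
    for msg_cond, mental_cond, act in rules:
        if any(msg_cond(m) for m in messages) and mental_cond(beliefs):
            commitments.add(act)          # the agent becomes committed
    return commitments

# Toy rule: if a friend requests "greet" and we can greet, commit to it.
rules = [(
    lambda m: m == ("friend", "REQUEST", "greet"),  # message condition
    lambda b: "CAN(greet)" in b,                    # mental condition
    "greet",                                        # action
)]

out = agent_cycle([("friend", "REQUEST", "greet")],
                  {"CAN(greet)"}, set(), rules)
# out == {"greet"}
```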
13 A commitment rule in Agent0

    COMMIT(
      ( agent, REQUEST, DO(time, action)
      ),                                 ;;; msg condition
      ( B,
        [now, Friend agent] AND
        CAN(self, action) AND
        NOT [time, CMT(self, anyaction)]
      ),                                 ;;; mental condition
      self,
      DO(time, action)
    )
14 A commitment rule in Agent0
- Meaning:
- If I receive a message from agent which requests
  me to do action at time, and I believe that
- agent is currently a friend
- I can do the action
- at time, I am not committed to doing any other
  action
- then I commit to doing action at time
15 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration
16 Concurrent METATEM
- Concurrent METATEM is a multi-agent language in
which each agent is programmed by giving it a
temporal logic specification of the behavior it
should exhibit - These specifications are executed directly in
order to generate the behavior of the agent - Temporal logic is classical logic augmented by
modal operators for describing how the truth of
propositions changes over time
17 Concurrent METATEM
- □important(agents): it is now, and will always be
  true that agents are important
- ◊important(ConcurrentMetateM): sometime in the
  future, ConcurrentMetateM will be important
- ⧫important(Prolog): sometime in the past it was
  true that Prolog was important
- (¬friends(us)) U apologize(you): we are not
  friends until you apologize
- ○apologize(you): tomorrow (in the next state), you
  apologize
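The operators above can be made concrete by evaluating them over a trace. This sketch uses a finite list of states (each a set of atomic propositions), which is a simplifying assumption: real temporal semantics are defined over infinite sequences, and the trace itself is invented for illustration.

```python
# Temporal operators evaluated over a finite trace of states.

def holds(p, trace, i):
    """Atom p is true at time i."""
    return p in trace[i]

def always(p, trace, i=0):
    """□p: p holds now and at every later state."""
    return all(holds(p, trace, j) for j in range(i, len(trace)))

def sometime(p, trace, i=0):
    """◊p: p holds at some present or future state."""
    return any(holds(p, trace, j) for j in range(i, len(trace)))

def until(p, q, trace, i=0):
    """p U q: p holds at every state until q becomes true."""
    for j in range(i, len(trace)):
        if holds(q, trace, j):
            return True
        if not holds(p, trace, j):
            return False
    return False

# (¬friends(us)) U apologize(you), encoded with plain atoms:
trace = [{"not_friends"}, {"not_friends"}, {"apologize"}]
# until("not_friends", "apologize", trace) → True
```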
18 Concurrent METATEM
- MetateM is a framework for directly executing
  temporal logic specifications
- The root of the MetateM concept is Gabbay's
  separation theorem: any arbitrary temporal logic
  formula can be rewritten in a logically
  equivalent "past ⇒ future" form
- This "past ⇒ future" form can be used as execution
  rules
19 Concurrent METATEM
- A MetateM program is a set of such rules
- Execution proceeds by a process of continually
matching rules against a history, and firing
those rules whose antecedents are satisfied - The instantiated future-time consequents become
commitments which must subsequently be satisfied
20 Concurrent METATEM
- Execution is thus a process of iteratively
  generating a model for the formula made up of the
  program rules
- The future-time parts of instantiated rules
  represent constraints on this model
- Example rule: ⧫ask(x) ⇒ ◊give(x), i.e., all "asks"
  at some time in the past are followed by a "give"
  at some time in the future
21 Concurrent METATEM
- Execution is thus a process of iteratively
generating a model for the formula made up of the
program rules - The future-time parts of instantiated rules
represent constraints on this model - ConcurrentMetateM provides an operational
framework through which societies of MetateM
processes can operate and communicate
23 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration
24 Practical reasoning
- Practical reasoning is reasoning directed towards
  actions: the process of figuring out what to do
- "Practical reasoning is a matter of weighing
  conflicting considerations for and against
  competing options, where the relevant
  considerations are provided by what the agent
  desires/values/cares about and what the agent
  believes." (Bratman)
- Practical reasoning is distinguished from
  theoretical reasoning: theoretical reasoning is
  directed towards beliefs
25 Practical reasoning
- Human practical reasoning consists of two
  activities:
- deliberation: deciding what state of affairs we
  want to achieve
- means-ends reasoning: deciding how to achieve
  these states of affairs
- The outputs of deliberation are intentions
26 Intentions in practical reasoning
- Intentions pose problems for agents, who need to
  determine ways of achieving them. If I have an
  intention to φ, you would expect me to devote
  resources to deciding how to bring about φ.
- Intentions provide a filter for adopting other
  intentions, which must not conflict. If I have an
  intention to φ, you would not expect me to adopt
  an intention ψ such that φ and ψ are mutually
  exclusive.
- Agents track the success of their intentions, and
  are inclined to try again if their attempts fail.
  If an agent's first attempt to achieve φ fails,
  then all other things being equal, it will try an
  alternative plan to achieve φ.
27 Intentions in practical reasoning
- Agents believe their intentions are possible.
  That is, they believe there is at least some way
  that the intentions could be brought about.
  Otherwise: intention-belief inconsistency.
- Agents do not believe they will not bring about
  their intentions. It would not be rational of me
  to adopt an intention to φ if I believed φ was
  not possible. Otherwise: intention-belief
  incompleteness.
- Under certain circumstances, agents believe they
  will bring about their intentions. It would not
  normally be rational of me to believe that I
  would bring my intentions about: intentions can
  fail. Moreover, it does not make sense that if I
  believe φ is inevitable I would adopt it as an
  intention.
28 Intentions in practical reasoning
- Agents need not intend all the expected side
  effects of their intentions. If I believe φ ⇒ ψ
  and I intend that φ, I do not necessarily intend ψ
  also. (Intentions are not closed under
  implication.) This last problem is known as the
  side effect or package deal problem.
29 Intentions in practical reasoning
- Intentions are stronger than mere desires:
- "My desire to play basketball this afternoon is
  merely a potential influencer of my conduct this
  afternoon. It must vie with my other relevant
  desires . . . before it is settled what I will do.
  In contrast, once I intend to play basketball
  this afternoon, the matter is settled: I normally
  need not continue to weigh the pros and cons.
  When the afternoon arrives, I will normally just
  proceed to execute my intentions." (Bratman, 1990)
30 Practical reasoning (abstract)
- Current beliefs and perception determine next
beliefs - Current beliefs and intentions determine next
desires - Current beliefs, desires and intentions determine
next intentions - Current beliefs, desires and available actions
determine a plan
31 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration
32 Implementing practical reasoning agents

    B := B_initial
    I := I_initial
    loop
        p := see()
        B := brf(B, p)        // update world model
        I := deliberate(B)
        π := plan(B, I)       // use means-ends reasoning
        execute(π)
    end
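One cycle of this loop can be sketched in runnable form. The brf, deliberate, and plan bodies here are toy stand-ins invented for illustration (a single-percept vacuum-style world); only the shape of the cycle follows the pseudocode.

```python
# One cycle of the basic practical reasoning loop: revise beliefs, then
# deliberate to get an intention, then plan to achieve it.

def brf(B, p):
    """Belief revision: fold the new percept into the belief set."""
    return B | {p}

def deliberate(B):
    """Pick an intention from beliefs (toy rule)."""
    return "clean" if "dirt" in B else "patrol"

def plan(B, I):
    """Means-ends reasoning: a canned plan per intention."""
    return ["goto(dirt)", "suck"] if I == "clean" else ["forward"]

def cycle(B, p):
    B = brf(B, p)              # update world model
    I = deliberate(B)
    pi = plan(B, I)            # use means-ends reasoning
    return B, I, pi            # executing pi is left out of the sketch

B, I, pi = cycle(set(), "dirt")
# I == "clean", pi == ["goto(dirt)", "suck"]
```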
33 Interaction between deliberation and planning
- Both deliberation and planning take time, perhaps
  too much time
- Even if deliberation is optimal (maximizes
  expected utility), the resulting intention may no
  longer be optimal when deliberation has finished
- (Calculative rationality)
34 Deliberation
- How does an agent deliberate?
- Option generation: the agent generates a set of
  possible alternatives
- Filtering: the agent chooses between competing
  alternatives, and commits to achieving them
35 Implementing practical reasoning agents

    B := B_initial
    I := I_initial
    loop
        p := see()
        B := brf(B, p)
        D := options(B, I)      // deliberate (1)
        I := filter(B, D, I)    // deliberate (2)
        π := plan(B, I)
        execute(π)
    end
36 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration
37 Commitment Strategies
- The following commitment strategies are commonly
  discussed in the literature on rational agents:
- Blind commitment: a blindly committed agent will
  continue to maintain an intention until it
  believes the intention has actually been
  achieved. Blind commitment is also sometimes
  referred to as fanatical commitment.
- Single-minded commitment: a single-minded agent
  will continue to maintain an intention until it
  believes that either the intention has been
  achieved, or else that it is no longer possible
  to achieve the intention.
- Open-minded commitment: an open-minded agent will
  maintain an intention as long as it is still
  believed possible.
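The three strategies can be contrasted as predicates deciding whether to drop an intention. This is a hypothetical encoding: beliefs are assumed to be tagged tuples like ("achieved", intention), which is not notation from the slides.

```python
# Each strategy as a "should I drop this intention?" test over beliefs.

def drop_blind(beliefs, intention):
    """Blind (fanatical): drop only once believed achieved."""
    return ("achieved", intention) in beliefs

def drop_single_minded(beliefs, intention):
    """Single-minded: drop when achieved or believed impossible."""
    return (("achieved", intention) in beliefs or
            ("impossible", intention) in beliefs)

def drop_open_minded(beliefs, intention):
    """Open-minded: keep only while still believed possible."""
    return ("possible", intention) not in beliefs

beliefs = {("impossible", "win_race")}
# drop_blind → False; drop_single_minded → True; drop_open_minded → True
```

The example beliefs show the difference: only the blind agent clings to an intention it believes impossible.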
38 Commitment Strategies
- An agent has commitment both to ends (i.e., the
  state of affairs it wishes to bring about) and to
  means (i.e., the mechanism via which the agent
  wishes to achieve the state of affairs)
- Currently, our agent control loop is
  overcommitted, both to means and to ends
- Modification: replan if ever a plan goes wrong
39

    B := B_initial
    I := I_initial
    loop
        p := see()
        B := brf(B, p)
        D := options(B, I)
        I := filter(B, D, I)
        π := plan(B, I)
        while not empty(π) do
            α := head(π)
            execute(α)           // start plan execution
            π := tail(π)
            p := see()
            B := brf(B, p)       // update world model
            if not sound(π, B, I) then
                π := plan(B, I)  // replan if necessary
            end
        end
    end
40 Commitment Strategies
- Still overcommitted to intentions: the agent never
  stops to consider whether or not its intentions
  are appropriate
- Modification: stop to determine whether intentions
  have succeeded or whether they are impossible
  (single-minded commitment)
41

    B := B_initial
    I := I_initial
    loop
        p := see()
        B := brf(B, p)
        D := options(B, I)
        I := filter(B, D, I)
        π := plan(B, I)
        while not (empty(π) or succeeded(B, I) or impossible(B, I)) do
                                 // check whether intentions succeeded
                                 // and are still possible
            α := head(π)
            execute(α)
            π := tail(π)
            p := see()
            B := brf(B, p)
            if not sound(π, B, I) then
                π := plan(B, I)
            end
        end
    end
42 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration
43 Intention Reconsideration
- Our agent gets to reconsider its intentions once
  every time around the outer control loop, i.e., when:
- it has completely executed a plan to achieve its
  current intentions, or
- it believes it has achieved its current
  intentions, or
- it believes its current intentions are no longer
  possible
- This is limited in the way that it permits an
  agent to reconsider its intentions
- Modification: reconsider intentions after
  executing every action
44

    B := B_initial
    I := I_initial
    loop
        p := see()
        B := brf(B, p)
        D := options(B, I)
        I := filter(B, D, I)
        π := plan(B, I)
        while not (empty(π) or succeeded(B, I) or impossible(B, I)) do
            α := head(π)
            execute(α)
            π := tail(π)
            p := see()
            B := brf(B, p)
            D := options(B, I)      // reconsider (1)
            I := filter(B, D, I)    // reconsider (2)
            if not sound(π, B, I) then
                π := plan(B, I)
            end
        end
    end
45 Intention Reconsideration
- But intention reconsideration is costly! A dilemma:
- an agent that does not stop to reconsider its
  intentions sufficiently often will continue
  attempting to achieve its intentions even after
  it is clear that they cannot be achieved, or that
  there is no longer any reason for achieving them
- an agent that constantly reconsiders its
  intentions may spend insufficient time actually
  working to achieve them, and hence runs the risk
  of never actually achieving them
- Solution: incorporate an explicit meta-level
  control component that decides whether or not to
  reconsider
46

    B := B_initial
    I := I_initial
    loop
        p := see()
        B := brf(B, p)
        D := options(B, I)
        I := filter(B, D, I)
        π := plan(B, I)
        while not (empty(π) or succeeded(B, I) or impossible(B, I)) do
            α := head(π)
            execute(α)
            π := tail(π)
            p := see()
            B := brf(B, p)
            if reconsider(B, I) then  // decide whether to reconsider or not
                D := options(B, I)
                I := filter(B, D, I)
            end
            if not sound(π, B, I) then
                π := plan(B, I)
            end
        end
    end
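The loop with meta-level control can be sketched end-to-end for a toy world. All component functions (brf, options, filter, plan, sound, reconsider) are hypothetical stand-ins: beliefs are reduced to a scripted "distance to goal" percept, and reconsider() deliberates only on arrival, so the cheap case dominates.

```python
# The final control loop in miniature: execute a step plan toward a goal,
# replanning when the plan is unsound and deliberating only when the
# meta-level control (reconsider) says it is worthwhile.

def brf(B, p):
    return p                                # beliefs = latest percept (toy)

def options(B, I):
    return {"at(goal)"}                     # one fixed desire

def filter_(B, D, I):
    return D                                # commit to all options

def plan(B, I):
    return ["step"] * B.get("dist", 0)      # one "step" per unit of distance

def sound(pi, B, I):
    return len(pi) == B.get("dist", 0)      # plan length must match distance

def reconsider(B, I):
    return B.get("dist", 0) == 0            # only deliberate on arrival

def run(percepts):
    B = brf({}, percepts[0])
    I = filter_(B, options(B, set()), set())
    pi, t = plan(B, I), 1
    while pi:
        pi = pi[1:]                         # execute(head), π := tail(π)
        B = brf(B, percepts[t]); t += 1
        if reconsider(B, I):                # meta-level control
            I = filter_(B, options(B, I), I)
        if not sound(pi, B, I):
            pi = plan(B, I)                 # replan if necessary
    return B, I

B, I = run([{"dist": 2}, {"dist": 1}, {"dist": 0}])
# terminates with B == {"dist": 0} and I == {"at(goal)"}
```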
47 Overview
- Deductive reasoning agents
- Planning
- Agent-oriented programming
- Concurrent MetateM
- Practical reasoning agents
- Practical reasoning: intentions
- Implementation: deliberation
- Implementation: commitment strategies
- Implementation: intention reconsideration