Title: Agents: Some Examples
1 Agents: Some Examples
- Fariba Sadri
- Imperial College London
- ICCL Summer School Dresden
- August 2008
2 Contents
- Teleo-Reactive agents
- Agent-0
- BDI/AgentSpeak(L)
3 Teleo-Reactive (TR) Programs
- Some references
- Nilsson, TR Programs for agent control, Journal of AI Research, 1994, 139-158
- Nilsson, Teleo-reactive programs and the triple-tower architecture, October 2001
4 TR-Programs
- They are named sequences of condition-action rules
- Program for Goal G:
- G → nil (i.e. do nothing)
- C1 → A1
- C2 → A2
- ...
- Cn → An.
5 TR-Programs
- They are intended to
- direct the agent towards a goal, while
- continuously taking into account changing perceptions of the environment.
- No declarative semantics
- Only procedural semantics
6 Demo
- http://www.robotics.stanford.edu/users/nilsson/trweb/TRTower/TRTower_links.html
- http://www.robotics.stanford.edu/users/nilsson/trweb/tr.html
7 TR-Programs
- The Ci are tests to be evaluated on the world model.
- The Ai are actions the agent can do.
- At each cycle
- observations are made
- the rules are checked from the top.
- The first rule with a true test fires, i.e. determines the action to be done next.
- The action is executed.
- Typically the actions of later rules are intended eventually to make the test of an earlier rule become true (the Regression Property).
- There is always a rule that will fire.
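The cycle above can be sketched as a short interpreter. The program layout, rule tests, and action names below are illustrative assumptions, not Nilsson's code:

```python
# A minimal sketch of the TR cycle: a TR program is an ordered list of
# (test, action) pairs, and on each cycle the first rule whose test
# holds in the current world model fires.

def tr_step(program, world):
    """Return the action of the first rule whose test is true in `world`."""
    for test, action in program:
        if test(world):
            return action
    raise RuntimeError("no rule fired; TR programs end with a T -> A rule")

# Hypothetical program: the goal condition first, a catch-all T rule last.
program = [
    (lambda w: w.get("at_goal", False), "nil"),        # G -> nil
    (lambda w: w.get("path_clear", False), "advance"),
    (lambda w: True, "clear_path"),                    # T -> A: always fires
]
```

Because the rules are checked from the top, the returned action always belongs to the highest true test, which is what gives TR programs their regression behaviour.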
8 TR-Program Examples
- Example (from Nilsson 2001)
- unpile(x): x is a block
- Clear(x) → nil
- On(y,x) → move-to-table(y)
- move-to-table(x): x is a block
- On(x,Ta) → nil
- Holding(y) → putdown(y,Ta)
- Clear(x) → pickup(x)
- T → unpile(x)
- putdown and pickup are primitive actions.
9 TR-Program Examples
- Example (from Nilsson 2001)
- move(x,y): x and y are blocks
- On(x,y) → nil
- Holding(x) ∧ Clear(y) → putdown(x,y)
- Holding(z) → putdown(z,Ta)
- Clear(x) ∧ Clear(y) → pickup(x)
- Clear(y) → unpile(x)
- T → unpile(y)
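The unpile program can be simulated directly. The fact encoding below is an assumption, and move-to-table is collapsed into a single primitive update for brevity (a real run would recursively clear the block being moved):

```python
# Simplified simulation of unpile(x): the world model is a set of ground
# facts like ("On", "A", "B"), and moving a block to the table is treated
# as one atomic update.

def on_top_of(world, x):
    """Return the block y with On(y, x), or None if x is clear."""
    for fact in world:
        if fact[0] == "On" and fact[2] == x:
            return fact[1]
    return None

def unpile(world, x):
    """Run unpile(x)'s rules until its first rule, Clear(x) -> nil, fires."""
    while True:
        y = on_top_of(world, x)
        if y is None:                    # Clear(x) -> nil
            return
        # On(y, x) -> move-to-table(y), simulated as a primitive here
        world.discard(("On", y, x))
        world.add(("On", y, "Ta"))

world = {("On", "C", "B"), ("On", "B", "A"), ("On", "A", "Ta")}
unpile(world, "A")
```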
10 TR Triple Tower Architecture
11 TR Triple Tower Architecture: Example
[Diagram: the environment feeds sensors; a model tower holds facts such as Clear(A), On(A,B), Holding(C); a rule tower derives further facts, e.g. ∃x On(x,Y) ∨ Holding(Y) → ¬Clear(Y); the TR-Program tower selects actions.]
12 TR-Programs
- The actions Ai may be
- primitive,
- sets of actions that can be executed simultaneously, or
- calls to other TR programs.
- A called TR program continues to run for as long as the condition that led to its call remains the highest true condition in the calling program.
13 TR-Programs
- New information from the environment deletes old information (as in a TMS).
- There is also forward reasoning to derive all provable facts.
14 TR-Programs
- Where do TR-programs fit within the agent classification given in the introduction?
15 Agent-0
- Reference
- Yoav Shoham, Agent0: A simple agent language and its interpreter, Proceedings AAAI-91, 1991, 704-709
- One of the early multi-agent models and programming languages.
- Fairly simple
- Motivation: partly to gain experience from implementing an agent model
16 Agent-0
- Agents send messages to each other
- Inform, Request, Unrequest
[Diagram: two agents, A1 and A2, exchanging messages]
17 Agent-0 Mental State
- Mental state is made up of
- Capabilities - fixed
- Commitment rules - fixed
- Beliefs - get updated
- Commitments - get updated
18 Agent-0 Capabilities
- cap(time, private action, mental condition)
- e.g.
- cap(T, rotate(Degree1),
-   not(cmtd(_, do(T, rotate(Degree2))) and
-   Degree1 \= Degree2))
- where Degree1, Degree2, T are variables.
- This says:
- The agent is able to rotate (something) by Degree1 degrees at some future time T if it does not already have a commitment to any agent to rotate (it) by a different number of degrees at the same time.
19 Agent-0 Commitment Rules
- commit(message pattern, mental condition, agent, action)
- The action can be a single action or a sequence of actions.
- e.g.
- commit((Ag, REQUEST(Act)), (_, myfriend(Ag)), Ag, Act)
- This says:
- The agent can (perhaps) commit to do Act for agent Ag if Ag has just requested Act and the agent believes Ag is a friend.
- No declarative semantics,
- just operational semantics.
20 Agent-0 Beliefs
- bel(Ag, F) where Ag is the agent who believes fact F
- Agent-0 agents trust one another. They believe anything they are told, incorporate it in their beliefs, and retract any older contradictory beliefs.
- Only atomic propositions or their negations are held as beliefs. This simplifies knowledge assimilation: it makes consistency checking trivial.
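Because beliefs are restricted to atomic literals, assimilation can be sketched in a few lines. The literal encoding below is an assumption, not Shoham's representation:

```python
# Sketch of Agent-0 style belief assimilation: beliefs are atomic
# propositions or their negations, so consistency checking reduces to
# removing the complementary literal before adding the new one.

def assimilate(beliefs, literal):
    """Add `literal`, retracting any older contradictory belief."""
    fact, positive = literal
    beliefs.discard((fact, not positive))  # drop the complement if present
    beliefs.add(literal)
    return beliefs

beliefs = {("door_open", True)}
assimilate(beliefs, ("door_open", False))  # newer information wins
```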
21 Agent-0 Commitments
- cmtd(agent, action)
- where the commitment is to agent.
- The set of commitments implicitly defines the future actions of the agent.
- Commitments are acted upon by executing the action when its time comes.
22 Agent-0 Time
- Agents measure time as cycle number (the number of cycle executions) and synchronise their cycle executions using a global clock.
- So the time of a committed-to action comes when the agent's cycle number equals the cycle time embedded in the action description.
23 Agent-0 Cycle
24 Agent-0 Cycle
- Initialisation
- Initialises the Capabilities, Commitment rules, Beliefs, and Commitments
- After that the agent is continually involved in
- Updating its beliefs
- Updating its commitments
- Honouring commitments whose time has come
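The repeating part of this cycle can be sketched as follows. The data layout and message shapes are illustrative assumptions, not Shoham's interpreter:

```python
# One pass of an Agent-0 style cycle: update beliefs from incoming
# inform messages, then honour the commitments whose time (cycle
# number) has come.

def agent0_cycle(state, now, incoming):
    for sender, fact in incoming:          # 1. update beliefs
        state["beliefs"].add(fact)
    due = [c for c in state["commitments"] if c["time"] == now]
    for c in due:                          # 2. honour due commitments
        state["done"].append(c["action"])
        state["commitments"].remove(c)
    return state

state = {"beliefs": set(),
         "commitments": [{"time": 3, "action": "rotate(90)"}],
         "done": []}
agent0_cycle(state, 3, [("a2", "sunny")])  # at cycle 3, rotate(90) is due
```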
25 Agent-0 Commitments
- Commitments are only to primitive actions, so the agent cannot commit to bringing about a state that requires any element of planning.
26 Agent-0 Actions
- Private
- Can be anything
- Communicative
- Inform(t, a, fact)
- Request(t, a, action)
- Unrequest(t, a, action)
- Refrain(action)
27 Agent-0 Actions
- Actions can be
- Conditional
- If mntlcond then action
- e.g. if at time t you believe F holds at time t, then at time t inform a that F holds at t
- Unconditional
28 BDI/AgentSpeak(L)
- References
- A. Rao, AgentSpeak(L): BDI agents speak out in a logical language, Springer LNCS 1038, 1996
- A. Rao, M. Georgeff, An abstract architecture for rational agents, Proceedings of the 3rd International Conference on Principles of Knowledge Representation and Reasoning, KR'92, Boston, 1992
- R. Bordini et al., Programming MAS in AgentSpeak using Jason, Wiley, 2007
29 BDI/AgentSpeak(L)
- Motivations
- BDI agents are traditionally specified in a modal logic with modal operators to represent Beliefs, Desires and Intentions.
- Their implementations (e.g. PRS, dMARS), however, have typically simplified their specifications and used non-logical procedural approaches.
- AgentSpeak is a programming language based on restricted FOL.
- AgentSpeak attempts to provide operational and proof-theoretic semantics for PRS and dMARS (and thus, by a roundabout way, for BDI agents).
30 BDI/AgentSpeak(L)
- Further Motivations
- To incorporate some practical reasoning
- Means-ends reasoning: deciding how to achieve goals
- Reaction to events, for example when something unexpected happens
- Choice deliberation: deciding what we want to achieve (our intention) from amongst our desires
31 BDI/AgentSpeak(L): Internal (Mental) State
- A set of beliefs (similar to Agent-0 beliefs)
- A set of current desires (or goals)
- typically of the form !b where b is a belief
- interpreted as a desire for a state of the world in which b holds.
- A set of pending events
- typically perceptions or messages interpreted as belief updates +b, -b, or as goals to be achieved !b
- including request messages from other agents, usually recorded as new belief events, perhaps as a new belief that the request has been made.
- A set of intentions (similar to Agent-0 commitments)
- A plan library. A plan has a triggering condition (an event), a mental state applicability condition, and a collection of sub-goals and actions (similar to ECA rules).
32 AgentSpeak(L) Beliefs and Event Terms
- No modal operators
- Beliefs: a conjunction of ground literals
- adjacent(room1, room2) ∧ loc(room1) ∧ empty(room1)
- Events: if b is an atomic belief then the following are event terms
- !b represents an achievement goal, e.g. !loc(room2)
- ?b represents a test goal, e.g. ?empty(room1)
- +b, -b, representing events of adding or deleting beliefs (events generated by messages)
- +!b, -!b
- +?b, -?b
- An agent can have explicit goals, given by events
33 AgentSpeak Agent Cycle
[Diagram: the agent sees the environment and updates its beliefs; beliefs, desires and events feed the plans, which generate new intentions; executing the next step of some intention produces an action on the environment.]
34 AgentSpeak Agent Cycle
- Notice external/internal changes
- Update beliefs and record them as events in the event store
- e.g. +!location(robot, b), +location(waste, a)
- Choose an event (from the event store) or a desire (from the desire store) for which there is at least one plan
- Select a plan; this becomes a new intention
- Drop intentions no longer believed viable
- Resume an intention
- Execute an action, or
- Post a subgoal as a new goal event
- Repeat cycle
35 AgentSpeak Plans
- Each agent has its own repertoire of (primitive) actions and its own plan library.
- Plans are ECA rules of the form
- e : b1, ..., bm <- h1; ...; hk
- e is an event term
- the bi are belief terms; b1, ..., bm is called the context
- the hi are goals or (primitive) actions
- Plans are used to respond to belief update events and new goal events
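Selecting a plan of the form e : b1, ..., bm <- h1; ...; hk involves two checks: the triggering event must match (relevance) and the context must hold in the beliefs (applicability). The string encoding of events and the ground, unification-free matching below are simplifying assumptions:

```python
# Sketch of plan selection: a plan is relevant if its triggering event
# equals the pending event, and applicable if its context is a subset
# of the current beliefs.

def select_plan(plans, event, beliefs):
    """Return the first plan triggered by `event` whose context holds."""
    for plan in plans:
        if plan["trigger"] == event and plan["context"] <= beliefs:
            return plan
    return None  # no relevant, applicable plan for this event

plans = [
    {"trigger": "+location(waste,a)",
     "context": {"location(robot,a)", "location(bin,b)"},
     "body": ["pick(waste)", "!location(robot,b)", "drop(waste)"]},
]
beliefs = {"location(robot,a)", "location(bin,b)"}
chosen = select_plan(plans, "+location(waste,a)", beliefs)
```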
36 AgentSpeak Plans: Examples
- +location(waste, X) :
-   location(robot, X), location(bin, Y) <-
-   pick(waste); !location(robot, Y);
-   drop(waste).
37 AgentSpeak Plans: Examples
- +location(waste, X) :
-   location(robot, X), location(bin, Y) <-
-   pick(waste); !location(robot, Y); drop(waste).
- Triggering event: the addition of a fact
- Context: location(robot, X), location(bin, Y)
- Body of the plan: pick(waste); !location(robot, Y); drop(waste).
38 AgentSpeak Plans: Examples
- +location(waste, X) :
-   location(robot, X), location(bin, Y) <-
-   pick(waste); !location(robot, Y); drop(waste).
- The intended reading of this is very similar to event-condition-action rules (except that the action part is more sophisticated):
- On the event of noticing waste at X, if the robot is at X and the bin is at Y, then (the robot should) pick up the waste, make its location Y, and drop the waste.
39 AgentSpeak Plans: Examples
- +!location(robot, X) :
-   location(robot, X) <- true.
- +!location(robot, X) :
-   location(robot, Y), not X = Y, adjacent(Y, Z), not location(car, Z) <-
-   move(Y, Z); !location(robot, X).
40 AgentSpeak Plans: Examples
- +!location(robot, X) :
-   location(robot, Y), not X = Y, adjacent(Y, Z), not location(car, Z) <-
-   move(Y, Z); !location(robot, X).
- The intended reading of this is similar to goal-reduction rules:
- To achieve a goal location(robot, X) ...
41 AgentSpeak Plan Example
- +!quench_thirst : have_glass <-
-   !have_soft_drink; fill_glass; drink.
- +!have_soft_drink : soft_drink_in_fridge <-
-   open_fridge; get_soft_drink.
42 AgentSpeak Plans: Some Statements from Anand Rao
- Rules in a pure logic program are not context-sensitive as plans are.
- ????
- Situation calculus and its many descendants - state context
- Event calculus - temporal context
- The conditions/preconditions of a plan provide context
43 AgentSpeak Plans: Some Statements from Anand Rao
- Rules execute successfully, returning a binding for unbound variables; however, the execution of plans generates a sequence of ground actions that affect the environment.
- Compare with abductive logic programs.
44
- location(robot, X) ←
-   current_location(robot, Y),
-   not X = Y,
-   adjacent(Y, Z),
-   not current_location(car, Z),
-   move(Y, Z),
-   location(robot, X).
45 AgentSpeak Plans: Some Statements from Anand Rao
- In a pure logic program there is no difference between a goal in the body of a rule and the head of a rule. In an agent program the head consists of a triggering event, rather than a goal.
- ... This allows both goal-directed and data-directed invocation of plans.
- Compare with abductive logic programs.
46
- location(robot, X) ←
-   current_location(robot, Y), not X = Y, adjacent(Y, Z), not current_location(car, Z),
-   move(Y, Z), location(robot, X).
- location(waste, X), not X = bin → pick(waste), drop(waste, bin)
47 AgentSpeak Plans: Some Statements from Anand Rao
- While a goal is being queried, the execution of that query cannot be interrupted in a logic program. However, the plans in an agent program can be interrupted.
- Compare with abductive logic programs run within an agent cycle.