1
74.419 Artificial Intelligence: Intelligent Agents 1
  • Russell and Norvig, Ch. 2

2
Outline
  • Agents and environments
  • Rationality
  • PEAS (Performance measure, Environment,
    Actuators, Sensors)
  • Environment types
  • Agent types

3
Agents
  • An agent is anything that can be viewed as
    perceiving its environment through sensors and
    acting upon that environment through actuators.
  • A human agent has, for example:
  • eyes, ears, and other organs as sensors;
  • hands, legs, mouth, and other body parts as actuators.
  • A robotic agent has, for example:
  • cameras and infrared range finders as sensors;
  • various motors as actuators.

4
Agent and Environment
5
The Vacuum-Cleaner Mini-World
  • Environment: squares A and B
  • Percepts: location and status, e.g., [A, Dirty]
  • Actions: Left, Right, Suck, and No-op

6
The Vacuum-Cleaner Mini-World
World State                  Action
[A, Clean]                   Right
[A, Dirty]                   Suck
[B, Clean]                   Left
[B, Dirty]                   Suck
[A, Dirty], [A, Clean]       Right
[A, Clean], [B, Dirty]       Suck
[A, Clean], [B, Clean]       No-op
...                          ...
7
Agent Function
  • The agent function maps from percept histories to actions:
    f : P* → A
  • An agent is completely specified by the agent function mapping
    percept sequences to actions.
  • The agent program runs on the physical architecture to produce f:
    agent = architecture + program
    (see the sketch below)
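
A minimal Python sketch of this distinction (not from the slides; the stub f is illustrative only): the agent function f sees a complete percept history, while the agent program sees one percept per step and must accumulate the history itself.

    # Sketch: agent function over whole histories vs. step-by-step program.
    def f(percept_history):
        """Agent function over complete percept histories (illustrative stub)."""
        location, status = percept_history[-1]
        return "Suck" if status == "Dirty" else "NoOp"

    def make_agent_program(agent_function):
        """Turns an agent function into a step-by-step agent program."""
        history = []
        def program(percept):
            history.append(percept)          # remember the new percept
            return agent_function(history)   # defer to f on the full history
        return program

    program = make_agent_program(f)
    print(program(("A", "Dirty")))   # -> Suck
    print(program(("A", "Clean")))   # -> NoOp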

8
The Vacuum-Cleaner Mini-World
  • function REFLEX-VACUUM-AGENT([location, status]) returns an action
      if status = Dirty then return Suck
      else if location = A then return Right
      else if location = B then return Left
  • Does not work this way: we need the full state space (table) or
    memory (see the simulation sketch below).
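
As a hedged sketch (not part of the slides), the reflex agent can be run in a tiny simulation of the two-square world, assuming dirt never reappears. It illustrates the last bullet: without memory the agent never emits No-op, even once everything is clean.

    # Minimal simulation of the two-square vacuum world.
    def reflex_vacuum_agent(location, status):
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    def simulate(world, location, steps=6):
        """world: dict such as {"A": "Dirty", "B": "Clean"}."""
        for _ in range(steps):
            action = reflex_vacuum_agent(location, world[location])
            print(f"at {location}, world={world}, action={action}")
            if action == "Suck":
                world[location] = "Clean"
            elif action == "Right":
                location = "B"
            else:  # Left
                location = "A"

    simulate({"A": "Dirty", "B": "Dirty"}, "A")  # keeps shuttling when clean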

9
The Vacuum-Cleaner Mini-World
World State                  Action
[A, Clean]                   Right
[A, Dirty]                   Suck
[B, Clean]                   Left
[B, Dirty]                   Suck
[A, Dirty], [A, Clean]       Right
[A, Clean], [B, Dirty]       Suck
[B, Dirty], [B, Clean]       Left
[B, Clean], [A, Dirty]       Suck
[A, Clean], [B, Clean]       No-op
[B, Clean], [A, Clean]       No-op
10
Rational Agents
  • Rational agent: for each possible percept sequence, a rational
    agent should select an action that is expected to maximize its
    performance measure, given the evidence provided by the percept
    sequence and whatever built-in knowledge the agent has.

11
Rationality
  • Rationality ≠ omniscience
  • An omniscient agent knows the actual outcome of its actions.
  • Rationality ≠ perfection
  • Rationality maximizes expected performance, while perfection
    maximizes actual performance.
  • An "ideal rational agent" always does "the right thing".

12
Rationality
  • The proposed definition requires:
  • information gathering/exploration, to maximize future rewards;
  • learning from percepts, extending prior knowledge;
  • agent autonomy, to compensate for incorrect prior knowledge.

13
Rationality
  • What is rational at a given time depends on:
  • the performance measure,
  • prior environment knowledge,
  • the available actions,
  • the percept sequence to date (sensors).

14
Task Environment
  • To design a rational agent, we must first specify its task
    environment.
  • PEAS description of the task environment:
  • Performance measure
  • Environment
  • Actuators
  • Sensors

15
Task Environment - Example
  • For example, a fully automated taxi driver.
  • PEAS description of the task environment:
  • Performance measure:
  • safety, destination, profits, legality, comfort, ...
  • Environment:
  • streets/freeways, other traffic, pedestrians, weather, ...
  • Actuators:
  • steering, accelerating, brake, horn, speaker/display, ...
  • Sensors:
  • video, sonar, speedometer, engine sensors, keyboard, GPS, ...

16
Examples of Agents (Norvig)
17
PEAS
  • Agent: medical diagnosis system
  • Performance measure: healthy patient, minimize costs, lawsuits
  • Environment: patient, hospital, staff
  • Actuators: screen display (questions, tests, diagnoses,
    treatments, referrals)
  • Sensors: keyboard (entry of symptoms, findings, patient's answers)

18
PEAS
  • Agent: part-picking robot
  • Performance measure: percentage of parts in correct bins
  • Environment: conveyor belt with parts, bins
  • Actuators: jointed arm and hand
  • Sensors: camera, joint angle sensors

19
PEAS
  • Agent: interactive English tutor
  • Performance measure: maximize student's score on test
  • Environment: set of students
  • Actuators: screen display (exercises, suggestions, corrections)
  • Sensors: keyboard

20
Classification of Environment Types
  • Fully observable (vs. partially observable): the agent's sensors
    give it access to the complete state of the environment at each
    point in time.
  • Deterministic (vs. stochastic): the next state of the environment
    is completely determined by the current state and the action
    executed by the agent. (If the environment is deterministic except
    for the actions of other agents, then the environment is
    strategic.)
  • Episodic (vs. sequential): the agent's experience is divided into
    atomic "episodes" (each episode consists of the agent perceiving
    and then performing a single action), and the choice of action in
    each episode depends only on the episode itself.

21
Environment types
  • Static (vs. dynamic): the environment is unchanged while the agent
    is deliberating. (The environment is semi-dynamic if the
    environment itself does not change with the passage of time but
    the agent's performance score does.)
  • Discrete (vs. continuous): a limited number of distinct, clearly
    defined percepts and actions.
  • Single-agent (vs. multi-agent): an agent operating by itself in an
    environment.

22
Task Environments (Norvig)
  • Agent design depends on the task environment:
  • deterministic vs. stochastic vs. non-deterministic
  • assembly line vs. weather vs. "odds gods"
  • episodic vs. non-episodic
  • assembly line vs. diagnostic repair robot, Flakey
  • static vs. dynamic
  • room without vs. with other agents
  • discrete vs. continuous
  • chess game vs. autonomous vehicle
  • single vs. multi-agent
  • solitaire game vs. soccer, taxi driver
  • fully observable vs. partially observable
  • video camera vs. infrared camera: colour?

23
Infrared Picture of an Unpleasant Situation
from www.indigosystems.com
24
Environment types
                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?
Deterministic?
Episodic?
Static?
Discrete?
Single-agent?
25
Environment types
Fully vs. partially observable: an environment is fully observable
when the sensors can detect all aspects that are relevant to the
choice of action.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?
Deterministic?
Episodic?
Static?
Discrete?
Single-agent?
26
Environment types
Fully vs. partially observable: an environment is fully observable
when the sensors can detect all aspects that are relevant to the
choice of action.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?
Episodic?
Static?
Discrete?
Single-agent?
27
Environment types
Deterministic vs. stochastic: if the next environment state is
completely determined by the current state and the executed action,
then the environment is deterministic.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?
Episodic?
Static?
Discrete?
Single-agent?
28
Environment types
Deterministic vs. stochastic: if the next environment state is
completely determined by the current state and the executed action,
then the environment is deterministic.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?
Static?
Discrete?
Single-agent?
29
Environment types
Episodic vs. sequential: in an episodic environment, the agent's
experience can be divided into atomic steps, where the agent perceives
and then performs a single action. The choice of action depends only
on the episode itself.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?
Static?
Discrete?
Single-agent?
30
Environment types
Episodic vs. sequential: in an episodic environment, the agent's
experience can be divided into atomic steps, where the agent perceives
and then performs a single action. The choice of action depends only
on the episode itself.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?        NO          NO           NO                  NO
Static?
Discrete?
Single-agent?
31
Environment types
Static vs. dynamic: if the environment can change while the agent is
choosing an action, the environment is dynamic. It is semi-dynamic if
the agent's performance score changes even when the environment
remains the same.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?        NO          NO           NO                  NO
Static?
Discrete?
Single-agent?
32
Environment types
Static vs. dynamic: if the environment can change while the agent is
choosing an action, the environment is dynamic. It is semi-dynamic if
the agent's performance score changes even when the environment
remains the same.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?        NO          NO           NO                  NO
Static?          YES         YES          SEMI                NO
Discrete?
Single-agent?
33
Environment types
Discrete vs. continuous: this distinction can be applied to the state
of the environment, the way time is handled, and to the
percepts/actions of the agent.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?        NO          NO           NO                  NO
Static?          YES         YES          SEMI                NO
Discrete?
Single-agent?
34
Environment types
Discrete vs. continuous: this distinction can be applied to the state
of the environment, the way time is handled, and to the
percepts/actions of the agent.

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?        NO          NO           NO                  NO
Static?          YES         YES          SEMI                NO
Discrete?        YES         YES          YES                 NO
Single-agent?
35
Environment types
Single vs. multi-agent: does the environment contain other agents that
are also maximizing some performance measure that depends on the
current agent's actions?

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?        NO          NO           NO                  NO
Static?          YES         YES          SEMI                NO
Discrete?        YES         YES          YES                 NO
Single-agent?
36
Environment types
Single vs. multi-agent: does the environment contain other agents that
are also maximizing some performance measure that depends on the
current agent's actions?

                 Solitaire   Backgammon   Internet shopping   Taxi
Observable?      FULL        FULL         PARTIAL             PARTIAL
Deterministic?   YES         NO           YES                 NO
Episodic?        NO          NO           NO                  NO
Static?          YES         YES          SEMI                NO
Discrete?        YES         YES          YES                 NO
Single-agent?    YES         NO           NO                  NO
37
Examples of Environment Types
                     Chess with clock   Chess w/o clock   Taxi driving
  Fully observable   Yes                Yes               No
  Deterministic      Strategic          Strategic        No
  Episodic           No                 No                No
  Static             Semi               Yes               No
  Discrete           Yes                Yes               No
  Single agent       No                 No                No

  • The real world is (of course) partially observable, stochastic,
    sequential, dynamic, continuous, and multi-agent.

38
Environment types
  • The simplest environment is:
  • fully observable, deterministic, episodic, static, discrete, and
    single-agent.
  • Most real situations are:
  • partially observable, stochastic, sequential, dynamic, continuous,
    and multi-agent.

39
Agent types
  • How does the inside of the agent work?
  • Agent = architecture + program
  • All agents have the same skeleton:
  • input: current percepts
  • output: action
  • program: manipulates the input to produce the output
  • Note the difference from the agent function.

40
Agent types
  • function TABLE-DRIVEN-AGENT(percept) returns an action
      static: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequences
      append percept to the end of percepts
      action ← LOOKUP(percepts, table)
      return action

This approach is doomed to failure (a Python sketch follows below).
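
A direct Python rendering of this pseudocode, as a sketch; the table is assumed (for illustration) to be a dict keyed by percept-sequence tuples.

    # Sketch of TABLE-DRIVEN-AGENT: the table maps complete percept
    # sequences to actions, so its size grows exponentially with the
    # history length -- which is why the approach is doomed to failure.
    def make_table_driven_agent(table):
        percepts = []                                  # static: percept sequence
        def program(percept):
            percepts.append(percept)                   # append percept to percepts
            return table.get(tuple(percepts), "NoOp")  # action <- LOOKUP(...)
        return program

    # Usage with a fragment of the vacuum-world table:
    agent = make_table_driven_agent({
        (("A", "Dirty"),): "Suck",
        (("A", "Dirty"), ("A", "Clean")): "Right",
    })
    print(agent(("A", "Dirty")))   # -> Suck
    print(agent(("A", "Clean")))   # -> Right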
41
Agent types
  • Four basic kinds of agent programs will be discussed:
  • simple reflex agents
  • model-based reflex agents
  • goal-based agents
  • utility-based agents
  • All of these can be turned into learning agents.

42
Simple Reflex Agents
  • Select an action on the basis of the current percept only.
  • E.g., the vacuum agent.
  • Large reduction in possible percept/action situations (next page).
  • Implemented through condition-action rules:
  • if dirty then suck

43
Simple Reflex Agent Example (Nilsson)
  • Robot in a maze
  • perceives the 8 squares around it
  • low-level percept: can the robot move to a square or not
  • higher-level percept: 2-unit segments
  • 4 basic actions: left (west), right (east), up (north), down
    (south)
  • task: move along a border
  • no 'tight' spaces: at least two free squares

44
Simple Reflex Agent - Example (Nilsson)
Note: the description of the left bottom agent seems to belong to this
agent. It will walk counter-clockwise around the object.
Note: the description of the left bottom agent seems to be wrong. This
agent will walk clockwise along the outside wall.
45
Simple Reflex Agent - Example
Behaviour routines:
  if x1 = 1 and x2 = 0 then move right
  if x2 = 1 and x3 = 0 then move down
  if x3 = 1 and x4 = 0 then move left
  if x4 = 1 and x1 = 0 then move up
  else move up
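
A minimal Python sketch of these ordered rules; how the binary features x1..x4 are derived from the 8 surrounding squares is abstracted away here, since the deck only shows the rules themselves.

    # Sketch of the boundary-following production rules above.
    def boundary_following_action(x1, x2, x3, x4):
        """Ordered condition-action rules; the first matching rule fires."""
        if x1 == 1 and x2 == 0:
            return "right"  # move east
        if x2 == 1 and x3 == 0:
            return "down"   # move south
        if x3 == 1 and x4 == 0:
            return "left"   # move west
        if x4 == 1 and x1 == 0:
            return "up"     # move north
        return "up"         # default action when no rule matches

    print(boundary_following_action(1, 0, 0, 0))  # -> right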
46
Simple Reflex Agent - Example
47
Simple Reflex Agents
  • function SIMPLE-REFLEX-AGENT(percept) returns an action
      static: rules, a set of condition-action rules
      state ← INTERPRET-INPUT(percept)
      rule ← RULE-MATCH(state, rules)
      action ← RULE-ACTION[rule]
      return action
  • Will only work if the environment is fully observable.
  • Otherwise, infinite loops may occur.
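
A hedged Python sketch of this program; the encoding of rules as (predicate, action) pairs is an assumption for illustration, not from the slides.

    # Sketch of SIMPLE-REFLEX-AGENT with rules as (condition, action) pairs.
    def make_simple_reflex_agent(rules, interpret_input):
        def program(percept):
            state = interpret_input(percept)   # state <- INTERPRET-INPUT(percept)
            for condition, action in rules:    # rule <- RULE-MATCH(state, rules)
                if condition(state):
                    return action              # action <- RULE-ACTION[rule]
            return "NoOp"
        return program

    # Usage with the vacuum world:
    vacuum_rules = [
        (lambda s: s["status"] == "Dirty", "Suck"),
        (lambda s: s["location"] == "A", "Right"),
        (lambda s: s["location"] == "B", "Left"),
    ]
    agent = make_simple_reflex_agent(
        vacuum_rules, lambda p: {"location": p[0], "status": p[1]})
    print(agent(("A", "Dirty")))  # -> Suck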

48
The Vacuum-Cleaner Mini-World
  • function REFLEX-VACUUM-AGENT([location, status]) returns an action
      if status = Dirty then return Suck
      else if location = A then return Right
      else if location = B then return Left
  • Does not work this way: we need the full state space (table) or
    memory.

49
Model/State-based Agents
  • To tackle partially observable environments:
  • maintain internal state;
  • over time, update the state using world knowledge:
  • how does the world change?
  • how do actions affect the world?
  • → a model of the world

50
Model/State-based Agents
  • function REFLEX-AGENT-WITH-STATE(percept) returns an action
      static: rules, a set of condition-action rules
              state, a description of the current world state
              (action, the most recent action)
      state ← UPDATE-STATE(state, (action,) percept)
      rule ← RULE-MATCH(state, rules)
      action ← RULE-ACTION[rule]
      return action
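
A sketch under the same assumptions as the previous one; here the internal state (a dict, by assumption) lets a vacuum agent stop once both squares are known to be clean, fixing the No-op problem noted earlier.

    # Sketch of REFLEX-AGENT-WITH-STATE: UPDATE-STATE folds the latest
    # action and percept into an internal world model.
    def make_model_based_agent(rules, update_state, initial_model):
        memory = {"model": initial_model, "action": None}
        def program(percept):
            memory["model"] = update_state(memory["model"],
                                           memory["action"], percept)
            for condition, action in rules:
                if condition(memory["model"]):
                    memory["action"] = action
                    return action
            memory["action"] = "NoOp"
            return "NoOp"
        return program

    # Vacuum-world model: remember location and each square's last status.
    def update_state(model, action, percept):  # `action` mirrors the slide
        location, status = percept
        return dict(model, location=location, **{location: status})

    rules = [
        (lambda m: m[m["location"]] == "Dirty", "Suck"),
        (lambda m: m.get("A") == "Clean" and m.get("B") == "Clean", "NoOp"),
        (lambda m: m["location"] == "A", "Right"),
        (lambda m: m["location"] == "B", "Left"),
    ]
    agent = make_model_based_agent(rules, update_state, {})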

51
Goal-based Agents
  • The agent needs a goal to know which situations are desirable.
  • Things become difficult when long sequences of actions are
    required to reach the goal.
  • Typically investigated in search and planning research.
  • Major difference: the future is taken into account.
  • More flexible, since knowledge is represented explicitly (to a
    certain degree) and can be manipulated.
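
As an illustrative sketch (the deck gives no code here), goal-based action selection can be realized by searching for an action sequence that reaches a goal state; the successor-function interface below is an assumption.

    # Sketch: goal-based selection as breadth-first search. States must be
    # hashable; successors(state) yields (action, next_state) pairs.
    from collections import deque

    def plan(start, is_goal, successors):
        """Returns a list of actions reaching a goal state, or None."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if is_goal(state):
                return actions
            for action, nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None  # no action sequence reaches the goal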

52
Utility-based Agents
  • Certain goals can be reached in different ways.
  • Some are better: they have a higher utility.
  • A utility function maps a (sequence of) state(s) onto a real
    number.
  • Improvements over goal setting:
  • selecting between conflicting goals;
  • selecting appropriately between several goals based on likelihood
    of success.
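
A minimal sketch of utility-based selection (not from the slides): `utility` maps a state to a real number and `outcomes(state, action)` yields (probability, next_state) pairs; both interfaces are assumptions for illustration.

    # Sketch: pick the action with the highest expected utility.
    def best_action(state, actions, outcomes, utility):
        def expected_utility(action):
            return sum(p * utility(s) for p, s in outcomes(state, action))
        return max(actions, key=expected_utility)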

53
Learning Agents
  • All previous agent programs describe methods for selecting
    actions.
  • Yet this does not explain the origin or development of these
    programs.
  • Learning mechanisms can be used for this.
  • The advantage is the robustness of the program towards unknown
    environments.

54
Learning Agents
  • Learning element: introduces improvements in the performance
    element.
  • Critic: provides feedback on the agent's performance based on
    fixed performance standards.
  • Performance element: selects actions based on percepts.
  • Corresponds to the previous agent programs.
  • Problem generator: suggests actions that will lead to new and
    informative experiences.
  • Exploration vs. exploitation.
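
A structural sketch of the four components wired into one agent; the class and method names are illustrative only, not a fixed API from the slides.

    # Sketch: the learning-agent architecture named above.
    class LearningAgent:
        def __init__(self, performance_element, critic,
                     learning_element, problem_generator):
            self.performance_element = performance_element  # selects actions
            self.critic = critic                            # judges behaviour
            self.learning_element = learning_element        # improves the p.e.
            self.problem_generator = problem_generator      # proposes experiments

        def step(self, percept):
            feedback = self.critic(percept)                 # fixed standard
            self.learning_element(self.performance_element, feedback)
            exploratory = self.problem_generator(percept)   # explore ...
            if exploratory is not None:
                return exploratory
            return self.performance_element(percept)        # ... else exploit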

55
Robotic Sensors
  • (digital) camera
  • infrared sensor
  • range finders, e.g. radar, sonar
  • GPS
  • tactile (whiskers, bump panels)
  • proprioceptive sensors, e.g. shaft decoders
  • force sensors
  • torque sensors

56
Robotic Effectors
  • limbs connected through joints
  • degrees of freedom: directions in which a limb can move (incl.
    rotation axes)
  • drives: wheels (land), propellers, turbines (air, water)
  • driven through electric motors, pneumatic (gas), or hydraulic
    (fluid) actuation
  • statically stable vs. dynamically stable