Intelligent Agents - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Intelligent Agents


1
Intelligent Agents
  • Marco Loog

2
...or rather Rational Agents
  • Marco Loog

3
First So... what is AI?
  • First type
  • Artificial reproduction of biological
    intelligence?
  • Computer game AI is an artificial version of human
    intelligence?
  • Learning, abstraction, imagination...
  • Second type
  • Set of academic techniques
  • AI is a branch of science that helps machines
    find solutions to complex problems in a more
    human-like fashion

4
More So... what is AI?
  • AI is not limited to recreating human
    intelligence
  • AI involves many subjects sharing a significant
    amount of common knowledge
  • CS, math, psychology,...
  • AI has been successful in many domains
  • We need AI engineers to make programmers
    superfluous
  • Programmers don't make the AI

5
And More...
  • So basically, we are not going to define here
    what AI actually is
  • Focus is on techniques, possibly academic, but
    hopefully widely applied some day also in
    computer games, that are generally accepted to
    be part of AI
  • Specifically learning AI

6
Computer Games AI
  • Computer game AI is about the result; it
    doesn't really matter how NPC intelligence is
    achieved
  • AI is in its first stage / its infancy
  • Scripted behavior and A* pathfinding compare
    to 2D graphics
  • One of the core questions nowadays: how can
    learning replace manual programming / scripting,
    etc.?

7
Computer Games AI
  • As a comfort to all of you...
  • ...a golden age in NPC AI is looming
  • ...so we better provide you with some knowledge
    on common, yet potentially powerful, AI techniques

8
Some Unnecessary Remarks
  • We will regularly talk about AI without
    necessarily referring to games and / or
    exemplifying everything based on games
  • Academia <> practice
  • This may ask a lot of the students' creativity,
    autonomy, imagination, etc.

9
Intelligent Agents
  • Agent: something that acts
  • Computer agent: operates under autonomous
    control, perceives its environment, adapts to
    change
  • Rational agent: acts so as to achieve the best
    expected outcome

10
Intelligent Agents
  • Ideally, an intelligent agent takes the best
    possible action
  • In this lecture, agents, environments, and the
    interactions / relations between them are
    discussed
  • We start out with a kind of taxonomy of possible
    agents and environments

11
Outline of Remainder
  • Agents and environments
  • Rationality
  • PEAS
  • Environment types
  • Agent types

12
Agents and Environments
  • Agent: anything that can be viewed as
  • perceiving its environment through sensors
  • acting upon the environment through actuators
  • Human agent: eyes, ears, and other organs for
    sensors; hands, legs, mouth, and other body parts
    for actuators
  • Robotic agent: cameras and infrared range
    finders for sensors; various motors for actuators

13
Agents and Environments
  • Agent: anything that can be viewed as
  • perceiving its environment through sensors
  • acting upon the environment through actuators
  • Percepts: the agent's perceptual inputs from the
    environment
  • Percept sequence: the complete history of percepts
  • Generally, the choice of action can depend on the
    entire percept sequence

14
Agents and Environments
  • If it is possible to specify the agent's action for
    every percept sequence, then the agent is fully
    defined
  • The agent's complete behavior is described by this
    specification, which is called the agent function

15
Agents and Environments
  • If it is possible to specify the agent's action for
    every percept sequence, then the agent is fully
    defined
  • The agent's complete behavior is described by this
    specification, which is called the agent function
  • Mathematically, the agent function f maps percept
    histories P to actions A: f: P -> A
  • Note that P's cardinality is often infinite and
    therefore tabulating it is impossible
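A minimal Python sketch of this mapping (the class and method names are illustrative, not from the slides): the agent accumulates its percept sequence and feeds it to its agent function f.

    # Minimal sketch: an agent stores its percept sequence and maps it to an
    # action via its agent function f: P -> A. Names here are illustrative.
    class Agent:
        def __init__(self, agent_function):
            self.percepts = []                     # percept sequence so far
            self.agent_function = agent_function   # f: P -> A

        def step(self, percept):
            self.percepts.append(percept)
            return self.agent_function(tuple(self.percepts))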

16
Agents and Environments
  • Agent function is actually implemented via an
    agent program, which runs on the agent
    architecture
  • Agent program runs on the physical architecture
    to produce f
  • Agent = agent function + agent architecture

17
Schematically...
18
Vacuum-Cleaner World
  • Percepts: location and contents, e.g., [A, Dirty]
  • Actions: Left, Right, Suck, NoOp (do nothing)
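One possible Python encoding of these percepts and actions, assuming the usual two-square world with locations A and B (the constant names are illustrative):

    # Vacuum-cleaner world: a percept is a (location, contents) pair.
    LOCATIONS = ("A", "B")
    CONTENTS = ("Clean", "Dirty")
    ACTIONS = ("Left", "Right", "Suck", "NoOp")

    example_percept = ("A", "Dirty")   # the percept [A, Dirty] from above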

19
A Vacuum-Cleaner Agent
20
?
  • What is the right way to fill out the previous
    percept sequence / action table?
  • What makes an agent good? What makes it bad?
  • How about intelligent?

21
Rational Agents
  • Agent should strive to do the right thing, using
  • What it can perceive
  • The actions it can perform
  • Right action: the one that will cause the agent to
    be most successful
  • Performance measure: an objective criterion for
    measuring the agent's success

22
General Rule
  • It is better to design performance measures
    according to what one actually wants in the
    environment, rather than according to how one
    thinks the agent should behave
  • True?
  • And for games? FSM?
  • One way or the other, this might be very
    difficult

23
For Our Vacuum-Cleaner Agent
  • Possible performance measures
  • Amount of dirt cleaned up?
  • Amount of time taken?
  • Amount of electricity consumed?
  • Amount of noise generated?
  • Etc.?
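One hedged way to combine criteria like those above into a single performance measure is a weighted score; the weights below are made-up example values, not prescribed ones:

    # Illustrative performance measure for the vacuum-cleaner agent: reward
    # cleaned dirt, penalize time, electricity, and noise (weights are arbitrary).
    def performance(dirt_cleaned, time_taken, electricity_used, noise_made):
        return (10.0 * dirt_cleaned
                - 1.0 * time_taken
                - 0.5 * electricity_used
                - 0.1 * noise_made)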

24
So...
  • What is rational for an agent depends on the
    following
  • Performance measure that defines success
  • Prior knowledge of the environment
  • Possible actions
  • Percept sequence to date

25
Definition of Rational Agent
  • For each possible percept sequence, a rational
    agent should select an action that is expected to
    maximize its performance measure, given the
    evidence provided by the percept sequence and
    whatever built-in, a priori knowledge the agent
    has

26
Some Necessary Remarks
  • Rationality is distinct from omniscience
  • Agents can perform actions to modify future
    percepts in order to obtain useful information
    (information gathering / exploration)
  • An agent is autonomous if its behavior is
    determined by its own experience with the
    ability to learn and adapt
  • Initial agent configuration could reflect prior
    knowledge, but may be modified and augmented
    based on its experience
  • Let a game agent adapt to a new environment...

27
More Agent Function
  • Successful agents split up the task of
    determining the agent function into three periods
  • Design: certain perception / action relations
    are directly available; f is partially
    pre-defined
  • In action, while functioning: need for
    additional calculations before deciding on an
    action; the right way to apply f
  • Learning from experience: the mapping between
    percept sequences and actions is reconsidered; f is
    altered

28
And More Autonomy
  • The more an agent relies on the prior knowledge of
    its designers, and the less on its percepts, the
    less autonomous it is
  • A rational agent should be autonomous: it should
    learn what it can to compensate for partial or
    incorrect prior knowledge

29
Autonomy in Practice?
  • In practice, an agent may start with some prior
    knowledge plus the ability to learn
  • Hence, the incorporation of learning allows one
    to design a single rational agent that will
    succeed in a vast variety of environments

30
Task Environments
  • Before actually building rational agents, we
    should consider their environment
  • Task environment: the problems to which
    rational agents are the solutions

31
Task Environment PEAS
  • PEAS: Performance measure, Environment,
    Actuators, Sensors
  • Must first specify these settings for intelligent
    agent design

32
PEAS
  • Consider the task of designing an automated taxi
    driver
  • Performance measure: safe, fast, legal,
    comfortable trip, maximize profits
  • Environment: roads, other traffic, pedestrians,
    customers
  • Actuators: steering wheel, accelerator, brake,
    signal, horn
  • Sensors: cameras, sonar, speedometer, GPS,
    odometer, engine sensors, keyboard
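A PEAS description is just four lists; a possible way to write the taxi example down in Python (the class and field names are illustrative):

    from dataclasses import dataclass

    # A PEAS task-environment description as plain data.
    @dataclass
    class PEAS:
        performance_measure: list
        environment: list
        actuators: list
        sensors: list

    taxi = PEAS(
        performance_measure=["safe", "fast", "legal", "comfortable trip",
                             "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
                 "engine sensors", "keyboard"],
    )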

33
Mo PEAS
  • Agent: medical diagnosis system
  • Performance measure: healthy patient, minimize
    costs, lawsuits
  • Environment: patient, hospital, staff
  • Actuators: screen display (questions, tests,
    diagnoses, treatments, referrals)
  • Sensors: keyboard (entry of symptoms, findings,
    patient's answers)

34
Environment Types
  • Fully observable vs. partially observable
  • Agent's sensors give access to the complete state
    of the environment at each point in time
  • Fully observable leads to cheating...
  • Deterministic vs. stochastic
  • Next state of the environment is completely
    determined by the current state and the action
    executed by the agent.
  • Strategic: deterministic except for the actions of
    other agents
  • Episodic vs. sequential
  • Agent's experience is divided into atomic
    episodes (perceiving and performing a single
    action); the choice of action in each episode
    depends only on the episode itself

35
Deterministic vs. Stochastic
  • In reality, situations are often so complex that
    they may be better treated as stochastic even
    if they are deterministic
  • We may call the latter nondeterministic
  • Luckily, we all like probability theory,
    statistics, etc., i.e., the appropriate tools for
    describing these environments

36
Environment Types
  • Static vs. dynamic
  • Environment is unchanged while an agent is
    deliberating
  • Semidynamic: the environment itself does not change
    over time, but the agent's performance score does
  • Discrete vs. continuous
  • Limited number of distinct, clearly defined
    percepts and actions
  • Not clear-cut in practice
  • Single agent vs. multi-agent
  • An agent operating by itself in an environment

37
Single vs. Multi-Agent
  • Some subtle issues
  • Which entities must be viewed as agents? The
    ones maximizing a performance measure, which
    depends on other agents
  • Competitive vs. cooperative environment
  • Pursuing the goal of maximizing one's performance
    measure implies minimizing some other agent's
    measure, e.g., as in chess

38
Environment Types E.g
  •                    Chess with a clock   Chess without a clock   Taxi driving
  • Fully observable   Yes                  Yes                     No
  • Deterministic      Strategic            Strategic               No
  • Episodic           No                   No                      No
  • Static             Semi                 Yes                     No
  • Discrete           Yes                  Yes                     No
  • Single agent       No                   No                      No
  • Environment type largely determines agent design
  • Real world is of course partially observable,
    stochastic, sequential, dynamic, continuous,
    multi-agent
  • Not always cut and dried / definition dependent
  • Chess... castling? En passant capture? Draws by
    repetition?

39
How About the Inside?
  • Earlier we mainly gave a description of the agent's
    behavior, i.e., the action that is performed
    following any percept sequence, i.e., the agent
    function
  • Agent = architecture + program
  • Job of AI: design the agent program that implements
    the agent function
  • The architecture is, more or less, assumed to be
    given; in games and other applications, design
    may take place simultaneously

40
Agent Functions and Programs
  • An agent is completely specified by the agent
    function mapping percept sequences to actions
  • One agent function (or a small equivalence class)
    is rational / optimal
  • Aim: find a way to implement the rational agent
    function concisely

41
Table-lookup Agent
  • Lookup table that relates every percept sequence
    to the appropriate action
  • Drawbacks
  • Huge table (really huge in many cases)
  • Takes a long time to build the table (virtually
    infinite in many cases)
  • No autonomy
  • Even with learning, need a long time to learn the
    table entries
  • However, it does what it should do...
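A Python sketch of such a lookup-table agent for the vacuum world, assuming the table is keyed on the whole percept sequence; only a few illustrative entries are shown, since a complete table would be astronomically large:

    # Table-lookup agent: behavior is one big dictionary from percept
    # sequences to actions (entries shown are only an illustrative fragment).
    TABLE = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }

    def table_lookup_agent(percept_sequence):
        return TABLE.get(percept_sequence, "NoOp")   # NoOp when no entry exists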

42
Key Challenge for AI
  • Find out how to write programs that, to the
    extent possible, produce rational behavior from a
    small amount of code (rules) rather than from a
    large number of table entries
  • We would like to go a bit further: to replace
    rules by learning

43
Four Basic Agent Types
  • Four basic types of agent programs embodying
    principles of almost every intelligent system
  • In order of increasing generality
  • Simple reflex
  • Model-based reflex
  • Goal-based
  • Utility-based

44
Simple Reflex
  • Agent selects an action on the basis of the current
    percept: if... then...

45
Program for a Vacuum-Cleaner
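A minimal Python sketch of a simple reflex vacuum-cleaner program, following the standard two-square example with locations A and B:

    # Simple reflex vacuum-cleaner program: the decision uses only the
    # current percept (location, status), never the percept history.
    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:
            return "Left"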
46
From Simple Reflex to...
  • Simple reflex agents work only if the environment
    is fully observable, because the decision must be
    made on the current percept alone

47
Model-Based Reflex
  • An effective way to handle partial observability is
    to keep track of the part of the environment the
    agent cannot see now
  • Some internal state should be maintained, which
    is based on the percept sequence, and which
    reflects some of the unobserved aspects of the
    current state

48
Internal State
  • Updating requires the encoding of two types of
    knowledge in the program
  • Knowledge about the evolution of the world,
    independent of the agent
  • How do the agent's own actions influence the
    environment?
  • Knowledge about the world = a model of the world
  • Model-based agent
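A minimal Python sketch of a model-based reflex agent; update_state and rules are placeholders for the world model and the condition-action rules and would have to be supplied:

    # Model-based reflex agent sketch: the internal state is updated from the
    # old state, the last action, and the new percept; rules then pick an action.
    class ModelBasedReflexAgent:
        def __init__(self, update_state, rules):
            self.state = {}                      # internal model of the world
            self.last_action = None
            self.update_state = update_state     # (state, last_action, percept) -> state
            self.rules = rules                   # state -> action

        def step(self, percept):
            self.state = self.update_state(self.state, self.last_action, percept)
            self.last_action = self.rules(self.state)
            return self.last_action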

49
Model-Based Reflex
50
From Model-Based Reflex to...
  • Knowledge about the current state of the world is
    not always enough
  • In addition, it may be necessary to define a goal
    / situation that is desirable

51
Goal-Based
52
From Goal-Based to...
  • For most environments, goals alone are not really
    enough to generate high-quality behavior
  • There are many ways in which one can achieve a
    goal...
  • Instead, measure utility (happiness)
  • Utility function: maps states onto real numbers
    describing the degree of happiness
  • Can perform a tradeoff if there are several ways to
    a goal, or if there are multiple goals, using
    likelihoods
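The tradeoff itself can be sketched as expected-utility action selection in Python; the outcome model and the utility function are assumed to be given:

    # Expected-utility action selection: outcomes(action) yields
    # (probability, resulting_state) pairs; utility(state) returns a real number.
    def best_action(actions, outcomes, utility):
        def expected_utility(action):
            return sum(p * utility(s) for p, s in outcomes(action))
        return max(actions, key=expected_utility)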

53
Utility-Based
54
A Remark on Utility
  • A rather theoretical result: any rational agent
    must behave as if it possesses a utility function
    whose expected value it tries to maximize

55
But How...
  • ...do these agent programs come into existence?
  • The particular method suggested by Turing (the
    one of that machine) is to build learning
    agents that should subsequently be taught
  • One of the advantages of these agents is that
    they can learn to deal with unknown environments

56
Learning Agent
  • Four conceptual components
  • Learning element: responsible for making
    improvements
  • Performance element: responsible for selecting
    external actions (previously considered equal to
    the entire agent)
  • Critic: provides feedback on how the agent is doing
    and determines how the performance element should
    be modified
  • Problem generator: responsible for suggesting
    actions leading to new and informative experiences
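A rough Python sketch of how the four components might be wired together; all component interfaces here are assumptions for illustration:

    # Learning agent skeleton: the critic scores the latest percept, the
    # learning element uses that feedback to improve the performance element,
    # and the problem generator may propose an exploratory action instead.
    class LearningAgent:
        def __init__(self, performance_element, learning_element, critic,
                     problem_generator):
            self.performance_element = performance_element
            self.learning_element = learning_element
            self.critic = critic
            self.problem_generator = problem_generator

        def step(self, percept):
            feedback = self.critic(percept)
            self.learning_element(self.performance_element, feedback)
            exploratory = self.problem_generator(percept)
            return exploratory or self.performance_element(percept)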

57
Learning Agent
58
Next Week?
  • More...

59
(No Transcript)
60
(No Transcript)