1
Artificial Intelligence
  • Lecture 3

2
Topics
Intelligent Agents
3
Last Lecture
  • Agent and Environment
  • Rationality
  • World Description

4
Overview
  • Review
  • Agent Types
  • Environment Types

5
Agent Environment
6
Rational Agent
  • Agent
  • An entity that perceives and acts
  • A function from percept histories to actions
  • f : P* → A (a sketch follows below)
  • Humans, robots, thermostats, smoke detectors, etc.
  • Rational Agent
  • For any given set of environments and actions, we
    seek the agent (or class of agent) with the best
    performance
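A minimal Python sketch of the agent-as-function view (the thermostat example and all names below are illustrative assumptions, not from the lecture):

from typing import Callable, List, Tuple

# A percept is assumed here to be a (temperature, status) pair.
Percept = Tuple[float, str]
Action = str

# An agent is a mapping f : P* -> A from percept histories to actions.
AgentFunction = Callable[[List[Percept]], Action]

def thermostat_agent(history: List[Percept]) -> Action:
    """Toy agent: acts only on the latest percept (the room temperature)."""
    temperature, _ = history[-1]
    return "heat_on" if temperature < 20.0 else "heat_off"

# The whole percept history is available, but this agent ignores all but
# the last entry.
print(thermostat_agent([(22.5, "ok"), (18.0, "ok")]))  # heat_on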

7
Vacuum-Cleaner World
  • Percepts: location and status, e.g., [A, Dirty]
  • Actions: Left, Right, Suck, NoOp

8
Vacuum-Cleaner Agent
  • Percept sequence
  • Everything the agent has observed so far
  • Possible percept sequences
  • Mapping percept to action

function Vacuum-Cleaner-Agent([L, S]) returns an action
  if S = Dirty then return Suck
  else if L = A then return Right
  else if L = B then return Left
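A runnable Python rendering of the same behavior (a sketch; the string encodings of locations, statuses, and actions are assumptions):

def vacuum_cleaner_agent(location: str, status: str) -> str:
    """Reflex vacuum agent: the percept is the pair (location, status)."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
    return "NoOp"

# In square A, which is dirty, the agent sucks.
assert vacuum_cleaner_agent("A", "Dirty") == "Suck"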
9
Rationality
10
Performance Measure
  • The criteria that determine how successful an
    agent is
  • Imposed by authority and measured in the long run

11
Performance Measure
  • 100 points for each piece of dirt vacuumed up
  • Minus 1 point for each action taken
  • Minus 1000 points for dumping the dirt in your
    neighbor's backyard

A rational agent maximizes the points given the
percept sequence.
Rational ≠ Omniscient
Rational ≠ Clairvoyant
Rational ≠ Successful
Rational ⇒ Exploration, learning, autonomy
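As a concrete illustration, the point scheme above can be computed from a log of the actions taken during a run (a sketch; the action names are invented for the example):

def vacuum_score(actions_taken):
    """+100 per piece of dirt vacuumed up, -1 per action taken,
    -1000 per load of dirt dumped in the neighbor's backyard."""
    score = 0
    for act in actions_taken:
        score -= 1                        # every action costs one point
        if act == "vacuumed_dirt":
            score += 100
        elif act == "dumped_in_neighbors_yard":
            score -= 1000
    return score

print(vacuum_score(["move", "vacuumed_dirt", "move"]))  # 97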
12
World Description
13
Task Environment
  • To design a rational taxi agent:
  • Performance measure
  • safety, destination, profits, legality, comfort, …
  • Environment
  • US streets/freeways, traffic, pedestrians,
    weather, …
  • Actuators
  • steering, accelerator, brake, horn,
    speaker/display, …
  • Sensors
  • video, accelerometers, gauges, engine sensors,
    keyboard, GPS, …

14
Rational Online Auction Agent
  • Performance measure?
  • Environment ?
  • Actuators?
  • Sensors?

15
Agent Types
16
Agent Types
  • Four basic types, in order of increasing
    generality:
  • Simple reflex agents
  • Reflex agents with state
  • Goal-based agents
  • Utility-based agents
  • All of these can be turned into learning agents

17
Simple Reflex Agent
  • Condition-action rule:
  • if condition then action

18
Simple Reflex Function
function Simple-Reflex-Agent(percept) returns action
  static: rules, a set of condition-action rules
  state  ← Interpret-Input(percept)
  rule   ← Rule-Match(state, rules)
  action ← Rule-Action[rule]
  return action
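A Python sketch of the same loop (representing the rules as (condition, action) pairs is an assumption made for illustration):

def simple_reflex_agent(percept, rules, interpret_input):
    """Return the action of the first rule whose condition matches
    the state derived from the current percept."""
    state = interpret_input(percept)      # state  <- Interpret-Input(percept)
    for condition, action in rules:       # rule   <- Rule-Match(state, rules)
        if condition(state):
            return action                 # action <- Rule-Action[rule]
    return "NoOp"

# Example rules for the vacuum world; conditions are predicates on the state.
vacuum_rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
interpret = lambda p: {"location": p[0], "status": p[1]}
print(simple_reflex_agent(("A", "Dirty"), vacuum_rules, interpret))  # Suck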
19
Reflex Agents with State
  • Keep track of the world

20
Reflex Function with Internal States
function Reflex-Agent-With-State(percept) returns action
  static: state, a description of the current world state
          rules, a set of condition-action rules
  state  ← Update-State(state, percept)
  rule   ← Rule-Match(state, rules)
  action ← Rule-Action[rule]
  state  ← Update-State(state, action)
  return action
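A sketch of the stateful version in Python, keeping the internal world description between calls (how Update-State is supplied, here via update_state, is an assumption):

class ReflexAgentWithState:
    """Reflex agent that maintains an internal description of the world."""

    def __init__(self, rules, update_state):
        self.rules = rules                # condition-action rules
        self.update_state = update_state  # folds a percept or an action into the model
        self.state = {}                   # internal description of the current world

    def __call__(self, percept):
        # state <- Update-State(state, percept)
        self.state = self.update_state(self.state, percept)
        for condition, action in self.rules:      # rule <- Rule-Match(state, rules)
            if condition(self.state):
                # state <- Update-State(state, action): record the action's effect
                self.state = self.update_state(self.state, action)
                return action
        return "NoOp"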
21
Goal-Based Agents
22
Utility-Based Agents
  • Utility: a function that maps a state to a real
    value, describing the agent's degree of
    satisfaction with that state
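A utility-based agent can then choose the action whose predicted outcome scores highest; a minimal sketch, assuming the designer supplies a world model result and a utility function:

def utility_based_action(state, actions, result, utility):
    """Pick the action leading to the state with the highest utility.

    result(state, action) -> predicted successor state (assumed model)
    utility(state)        -> real number, the degree of satisfaction
    """
    return max(actions, key=lambda a: utility(result(state, a)))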

23
Learning Agents

24
A Learning Taxi Driver
  • Performance element
  • Knowledge and procedures for driving
  • Learning element
  • Formulate goals like learning geography and how
    to drive on wet roads
  • Critic
  • Observes the world and passes information to the
    learning element
  • Problem generator
  • Suggests exploratory actions, e.g., trying a different
    route to see if it is quicker
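A minimal sketch of how the four components might be wired together (the interfaces below are assumptions for illustration, not the lecture's design):

class LearningAgent:
    """Skeleton of a learning agent built from the four components."""

    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # picks driving actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # judges behavior against a standard
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                  # how well are we doing?
        self.learning_element(feedback, self.performance_element)
        # Occasionally try something new (e.g., a different route);
        # otherwise let the performance element choose.
        return self.problem_generator() or self.performance_element(percept)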

25
Environment Types
26
Environment Types
  • Accessible vs. inaccessible
  • Whether sensory input provides access to the
    complete state of the environment
  • Accessible environment: no need for an internal
    state
  • Deterministic vs. nondeterministic
  • Next state of the environment is completely
    defined by the current state and the actions of
    the agent
  • No uncertainty in an accessible, deterministic
    environment
  • An inaccessible environment may appear
    nondeterministic
  • Episodic vs. non-episodic
  • Each episode consists of an agent perceiving and
    then acting
  • Subsequent episodes do not depend on the actions
    in previous episodes: no need to think ahead
  • A chess tournament is episodic: moves in one game
    don't carry over

27
Environment Types
  • Static vs. dynamic
  • Environment can change while the agent is
    deliberating or not
  • In a static environment, the agent doesn't need to
    keep observing the world while it deliberates
  • Semi-dynamic: the environment doesn't change with
    time, but the agent's performance score does
  • Discrete vs. continuous
  • Limited number of distinct, clearly defined
    percepts and actions
  • Chess is discrete as there is a fixed number of
    moves at each turn
  • The most difficult type of environment:
    inaccessible, non-deterministic, non-episodic,
    dynamic, and continuous

28
Environment Types
29
Environment Types
  • The environment type largely determines the
    agent design
  • The real world is (of course) partially
    observable, stochastic, sequential, dynamic,
    continuous, multi-agent

30
The Environment Program
procedure Run-Environment(state, Update-Fn, agents, termination)
  inputs: state,        the initial state of the environment
          Update-Fn,    function to modify the environment
          agents,       a set of agents
          termination,  a predicate to test when we are done
  repeat
    for each agent in agents do
      Percept[agent] ← Get-Percept(agent, state)
    end
    for each agent in agents do
      Action[agent] ← Program[agent](Percept[agent])
    end
    state ← Update-Fn(actions, agents, state)
  until termination(state)
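A Python sketch of this simulation loop (how percepts and agent programs are represented, here via get_percept and programs, is an assumption):

def run_environment(state, update_fn, agents, termination, get_percept, programs):
    """Each agent perceives, each agent acts, then the environment is
    updated; repeat until the termination test succeeds."""
    while True:
        percepts = {a: get_percept(a, state) for a in agents}    # Get-Percept
        actions = {a: programs[a](percepts[a]) for a in agents}  # Program[agent]
        state = update_fn(actions, agents, state)                # Update-Fn
        if termination(state):
            return state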
31
Performance of Agents
function Run-Eval-Environment(state, Update-Fn, agents,
                              termination, Performance-Fn) returns scores
  local variables: scores, a vector the same size as agents, all 0
  repeat
    for each agent in agents do
      Percept[agent] ← Get-Percept(agent, state)
    end
    for each agent in agents do
      Action[agent] ← Program[agent](Percept[agent])
    end
    state  ← Update-Fn(actions, agents, state)
    scores ← Performance-Fn(scores, agents, state)
  until termination(state)
  return scores
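The evaluating variant adds only the per-step score update; a sketch under the same assumptions as above:

def run_eval_environment(state, update_fn, agents, termination,
                         performance_fn, get_percept, programs):
    """Same loop as run_environment, but a per-agent score is accumulated."""
    scores = {a: 0 for a in agents}   # a vector the same size as agents, all 0
    while True:
        percepts = {a: get_percept(a, state) for a in agents}
        actions = {a: programs[a](percepts[a]) for a in agents}
        state = update_fn(actions, agents, state)
        scores = performance_fn(scores, agents, state)           # Performance-Fn
        if termination(state):
            return scores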
32
Environment Classes
  • Environment class
  • The agent must work across a range of different
    environments, not just a single one
  • A chess program should play against a wide
    collection of humans and other programs
  • Designing for a particular opponent can exploit
    specific weaknesses, but is not good for general
    play
  • The performance of an agent is averaged over the
    environment class
  • The agent is not allowed to consult the
    environment program.
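A sketch of averaging an agent's performance over an environment class (run_one, which runs the agent in a single environment and returns its score, is an assumed helper):

def average_performance(agent_program, environment_class, run_one):
    """Average the agent's score over a sample of environments drawn
    from the environment class."""
    scores = [run_one(agent_program, env) for env in environment_class]
    return sum(scores) / len(scores)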

33
Summary
  • Agent and Environment
  • Rationality
  • World Description
  • Agent Types
  • Environment Types

34
Possible Quiz Questions
  • If there is a quiz next time, it might cover
  • Define rationality
  • Difference between reflex agents and goal-based /
    utility-based agents
  • Components in a learning agent
  • Recognize the environment type
  • Relation between environment and agents