1
Problem Solving Agents
2
So Far
  • Traditional AI begins with some simple premises
  • An intelligent agent lives in a particular
    environment.
  • An intelligent agent has goals that it wants to
    achieve.
  • The environment in which an agent is expected to
    operate has a large effect on what sort of
    behaviors it will need and on what we should
    expect it to be able to do.

3
So Far
  • Simple reflex agent

4
Satisfying a goal through reflex
  • An agent can satisfy its goal with a very simple
    algorithm
  • Sense the world.
  • Act.
  • This approach assumes that the agent can react to
    the world based solely on what it senses from its
    environment.

5
Satisfying a goal through reflex
  • Do intelligent agents really use reflex?
  • You bet! Self-preservation is often a matter of
    good reflexes.
  • In fact, any behavior that is required often,
    that is easy to specify for all or most
    circumstances, and for which there is substantial
    cost to failure is a good candidate for reflex.

6
Satisfying a goal through reflex
  • But
  • Sometimes reflex fails because the world is
    complex.
  • And sometimes it isn't the right approach because
    the cost of failure is too high.
  • We can think of reflex as 'hard-wired', either in
    the agent's program or muscle memory, outside of
    the agent's conscious thought. Try changing a
    reflex some time... It's hard to do!

7
Satisfying a goal through deliberation
  • An agent can satisfy its goal with a very simple
    algorithm
  • Sense the world.
  • Choose an action.
  • Act.
  • Choosing an action can be as simple as a reflex
  • Look up the right action in a table.
  • Choosing an action can be arbitrarily complex
  • Plan ahead before choosing an action (sketched
    below).
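As a rough illustration of this sense/choose/act loop (not from the slides), here is a minimal Python sketch on an invented one-dimensional world; the world, goal, and action set are made up for the example.

    # Minimal sketch of sense -> choose -> act on an invented 1-D world.
    def choose_action(position, goal):
        """Deliberation at its simplest: imagine each action's effect and pick
        the one whose predicted result is closest to the goal."""
        best_action, best_predicted = None, None
        for action, effect in [("left", -1), ("right", +1), ("stay", 0)]:
            predicted = position + effect          # how would the world change if I did this?
            if best_predicted is None or abs(predicted - goal) < abs(best_predicted - goal):
                best_action, best_predicted = action, predicted
        return best_action

    def run_agent(position=0, goal=3, max_steps=10):
        for _ in range(max_steps):
            if position == goal:                   # sense the world: goal satisfied?
                return "goal reached"
            action = choose_action(position, goal) # choose an action by looking ahead
            position += {"left": -1, "right": +1, "stay": 0}[action]   # act
        return "gave up"

    print(run_agent())   # -> goal reached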

8
The costs of deliberation
  • Planning ahead has its own costs
  • The agent must be able to imagine the effect of
    each action that it can take.
  • How will the world change if I do X?
  • The agent must be able to keep in its own mind
    all the possibilities that it is considering while
    choosing its action.

9
Another of AI's basic premises
  • Agents are resource-limited
  • The size of an agent's memory is bounded.
  • The amount of time available is usually bounded.

10
Chapter 3: Problem Solving by Searching
  • Problem solving agents decide what to do by
    finding sequences of actions that lead to
    desirable states.
  • But how?
  • Keep in mind what an agent knows (PEAS)

11
Last Time: I gave you this problem
  • Three missionaries and three cannibals
  • Want to cross a river using one canoe.
  • Canoe can hold up to two people.
  • There can never be more cannibals than
    missionaries on either side of the river.
  • Aim: To get all safely across the river without
    any missionaries being eaten.
  • Did you solve it?

12
One Solution
  • Send over 2 Cannibals
  • Send one Cannibal back
  • Send over 2 Cannibals
  • Send one Cannibal back
  • Send over 2 Missionaries
  • Send one Cannibal and one Missionary back
  • Send over 2 Missionaries
  • Send one Cannibal back
  • Send over 2 Cannibals
  • Send one Cannibal back
  • Send over 2 Cannibals
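This plan can be checked mechanically. Below is a small breadth-first search sketch (my own encoding, not part of the slides) over states (missionaries on the left bank, cannibals on the left bank, boat on the left?); it confirms that 11 crossings suffice.

    from collections import deque

    # State: (missionaries on left bank, cannibals on left bank, boat on left bank?)
    START, GOAL = (3, 3, True), (0, 0, False)
    LOADS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]    # who the canoe carries (1 or 2 people)

    def safe(m, c):
        # Missionaries may never be outnumbered on either bank (no missionaries is fine).
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    def successors(state):
        m, c, boat_left = state
        sign = -1 if boat_left else +1                  # the canoe leaves the bank it is on
        for dm, dc in LOADS:
            nm, nc = m + sign * dm, c + sign * dc
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
                yield (nm, nc, not boat_left)

    def bfs(start, goal):
        frontier, parent = deque([start]), {start: None}
        while frontier:
            state = frontier.popleft()
            if state == goal:
                path = []
                while state is not None:
                    path.append(state)
                    state = parent[state]
                return path[::-1]
            for nxt in successors(state):
                if nxt not in parent:
                    parent[nxt] = state
                    frontier.append(nxt)

    print(len(bfs(START, GOAL)) - 1)   # 11 crossings, matching the plan above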

13
Problem Solving Agents
  • Example: Traveling in Romania
  • On holiday in Romania; currently in Arad. Flight
    leaves tomorrow from Bucharest.
  • Formulate goal: be in Bucharest
  • Formulate problem: states: various cities;
    actions: drive between cities
  • Find solution: sequence of cities, e.g., Arad,
    Sibiu, Fagaras, Bucharest
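Written down as data, this formulation might look like the fragment below (a sketch; the road distances follow the usual Russell & Norvig Romania map and should be treated as illustrative).

    # A fragment of the Romania road map: roads[city] = [(neighbouring city, distance in km), ...].
    # The implicit action for each pair is "drive to that neighbour".
    roads = {
        "Arad":           [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
        "Sibiu":          [("Arad", 140), ("Oradea", 151), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
        "Fagaras":        [("Sibiu", 99), ("Bucharest", 211)],
        "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
        "Pitesti":        [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
        "Bucharest":      [("Fagaras", 211), ("Pitesti", 101)],
    }

    initial_state = "Arad"

    def goal_test(city):
        return city == "Bucharest"

    def path_cost(path):
        # Additive path cost: sum of the driving distances along the route.
        return sum(dict(roads[a])[b] for a, b in zip(path, path[1:]))

    solution = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
    print(goal_test(solution[-1]), path_cost(solution))   # True 450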

14
Problem Solving Agents
15
Appropriate environment for Searching Agents
  • Observable?? Yes
  • Deterministic?? Yes
  • Episodic?? Either
  • Static?? Yes
  • Discrete?? Yes
  • Agents?? Either

16
Problem Types
  • Deterministic, fully observable → single-state
    problem
  • Agent knows exactly which state it will be in
  • Solution is a sequence
  • Non-observable → conformant problem
  • Agent may have no idea where it is
  • Solution (if any) is a sequence
  • Nondeterministic and/or partially observable →
    contingency problem
  • percepts provide new information about current
    state
  • solution is a tree or policy
  • often interleave search, execution
  • Unknown state space → exploration problem (online)

17
Problem Types
  • Example: vacuum world
  • Single-state, start in 5.
  • Solution??

Right, Suck
18
Problem Types
  • Deterministic, fully observable → single-state
    problem
  • Agent knows exactly which state it will be in
  • Solution is a sequence
  • Non-observable → conformant problem
  • Agent may have no idea where it is
  • Solution (if any) is a sequence
  • Nondeterministic and/or partially observable →
    contingency problem
  • percepts provide new information about current
    state
  • solution is a tree or policy
  • often interleave search, execution
  • Unknown state space → exploration problem (online)

19
Problem Types
  • Conformant, start in {1,2,3,4,5,6,7,8}.
  • Solution??

20
Problem Types
  • Conformant, start in {1,2,3,4,5,6,7,8}
  • e.g., Right goes to {2,4,6,8}.

Right, Suck, Left, Suck
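One way to see why this works: track the belief state (the set of states the agent might be in) and apply each action to every member. The sketch below uses my own encoding (robot location, dirt in A, dirt in B) rather than the slide's 1-8 numbering.

    from itertools import product

    # A vacuum-world state: (robot location, dirt in square A?, dirt in square B?).
    ALL_STATES = set(product("AB", [True, False], [True, False]))   # all 8 states

    def result(state, action):
        loc, dirt_a, dirt_b = state
        if action == "Left":
            return ("A", dirt_a, dirt_b)
        if action == "Right":
            return ("B", dirt_a, dirt_b)
        if action == "Suck":
            return (loc,
                    False if loc == "A" else dirt_a,
                    False if loc == "B" else dirt_b)
        return state

    def apply_plan(belief, plan):
        for action in plan:
            belief = {result(s, action) for s in belief}
        return belief

    final = apply_plan(ALL_STATES, ["Right", "Suck", "Left", "Suck"])
    # With no sensing at all, every possible start state ends up with both squares clean.
    print(all(not a and not b for _, a, b in final))   # True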
21
Problem Types
  • Deterministic, fully observable → single-state
    problem
  • Agent knows exactly which state it will be in
  • Solution is a sequence
  • Non-observable → conformant problem
  • Agent may have no idea where it is
  • Solution (if any) is a sequence
  • Nondeterministic and/or partially observable →
    contingency problem
  • percepts provide new information about current
    state
  • solution is a tree or policy
  • often interleave search, execution
  • Unknown state space → exploration problem (online)

22
Problem Types
  • Contingency, start in 5
  • Murphy's Law: Suck can dirty a clean carpet.
  • Local sensing: dirt, location only.
  • Solution??

Right, if dirt then Suck
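A rough sketch of how this conditional plan can be checked: model Suck as nondeterministic (it reliably cleans a dirty square but may dirty a clean one) and expand every possible outcome of "Right, then Suck only if the local percept shows dirt". The encoding and helper names are mine, and I take the start state 5 to be "robot on the left, left square clean, right square dirty", consistent with the earlier Right, Suck solution.

    # Murphy's Law: Suck reliably cleans a dirty square, but on a clean square it
    # may either leave it clean or dirty it. results() returns the set of possible outcomes.
    def results(state, action):
        loc, da, db = state                 # (robot location, dirt in A?, dirt in B?)
        if action == "Right":
            return {("B", da, db)}
        if action == "Left":
            return {("A", da, db)}
        if action == "Suck":
            if (da if loc == "A" else db):  # dirty square: always cleaned
                return {("A", False, db)} if loc == "A" else {("B", da, False)}
            dirtied = ("A", True, db) if loc == "A" else ("B", da, True)
            return {state, dirtied}         # clean square: may stay clean or become dirty
        return {state}                      # NoOp

    def policy(percept):
        """The slide's conditional step: Suck only if the local square is dirty."""
        loc, dirt_here = percept
        return "Suck" if dirt_here else "NoOp"

    def run(start):
        outcomes = set()
        for s1 in results(start, "Right"):              # first step of the plan
            loc, da, db = s1
            percept = (loc, db if loc == "B" else da)   # local sensing: dirt and location only
            outcomes |= results(s1, policy(percept))
        return outcomes

    print(run(("A", False, True)))   # {('B', False, False)}: both squares clean in every branch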
23
Problem Types
  • Deterministic, fully observable → single-state
    problem
  • Agent knows exactly which state it will be in
  • Solution is a sequence
  • Non-observable → conformant problem
  • Agent may have no idea where it is
  • Solution (if any) is a sequence
  • Nondeterministic and/or partially observable →
    contingency problem
  • percepts provide new information about current
    state
  • solution is a tree or policy
  • often interleave search, execution
  • Unknown state space → exploration problem (online)

24
Single-State Problem Formulation
  • For the time being, we are only interested in the
    single-state problem formulation
  • A problem is defined by four items:
  • initial state, e.g., "at Arad"
  • successor function S(x) = set of action-state
    pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, ...}
  • goal test, which can be explicit, e.g., x = "at
    Bucharest", or implicit, e.g., NoDirt(x)
  • path cost (additive), e.g., sum of distances,
    number of actions executed, etc.; C(x,a,y) is the
    step cost, assumed to be ≥ 0
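In code, these four items map onto a small interface; the class below is only an illustrative sketch (the names are mine, not the textbook's), with a trivial counting problem as a usage example.

    from dataclasses import dataclass
    from typing import Any, Callable, Iterable, Tuple

    @dataclass
    class Problem:
        initial_state: Any
        successors: Callable[[Any], Iterable[Tuple[Any, Any]]]   # state -> (action, next state) pairs
        goal_test: Callable[[Any], bool]
        step_cost: Callable[[Any, Any, Any], float]              # C(x, a, y), assumed >= 0

    def path_cost(problem, states, actions):
        """Additive path cost: sum of step costs along a candidate solution."""
        return sum(problem.step_cost(x, a, y)
                   for x, a, y in zip(states, actions, states[1:]))

    # Trivial usage: count from 0 up to 3, one increment per action, each costing 1.
    count_up = Problem(
        initial_state=0,
        successors=lambda s: [("inc", s + 1)],
        goal_test=lambda s: s == 3,
        step_cost=lambda x, a, y: 1,
    )
    print(path_cost(count_up, [0, 1, 2, 3], ["inc", "inc", "inc"]))   # 3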

25
Single-State Problem Formulation
  • A solution is a sequence of actions leading from
    the initial state to a goal state

26
Problem Formulation
  • Example: vacuum cleaner world state space graph
  • States?? Actions?? Goal test?? Path cost??

27
Problem Formulation
  • States?? Integer dirt and robot locations (ignore
    dirt amounts)
  • Actions?? Left, Right, Suck, NoOp
  • Goal test?? No dirt
  • Path cost?? 1 per action (0 for NoOp)
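Filled in as code, the same formulation looks roughly like this (a sketch using the (robot location, dirt in A, dirt in B) encoding from earlier, not the book's own representation):

    def successor_fn(state):
        """All (action, next state) pairs from a (location, dirt in A?, dirt in B?) state."""
        loc, da, db = state
        return [
            ("Left",  ("A", da, db)),
            ("Right", ("B", da, db)),
            ("Suck",  (loc, False if loc == "A" else da, False if loc == "B" else db)),
            ("NoOp",  state),
        ]

    def goal_test(state):
        _, da, db = state
        return not da and not db                 # goal: no dirt anywhere

    def step_cost(state, action, next_state):
        return 0 if action == "NoOp" else 1      # 1 per action, 0 for NoOp

    print([a for a, _ in successor_fn(("A", True, True))])   # ['Left', 'Right', 'Suck', 'NoOp']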

28
Example: the 8-puzzle
  • States?? Actions?? Goal test?? Path cost??

29
Example: The 8-puzzle
  • States?? Integer locations of tiles, 9!/2
    reachable states (ignore intermediate positions)
  • Actions?? Move blank left, right, up, down
  • Goal test?? = goal state (given)
  • Path cost?? 1 per move
  • Note: finding an optimal solution for the
    n-Puzzle family is NP-hard (the corresponding
    decision problem is NP-complete).
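A compact sketch of this formulation: a state is a 9-tuple of tile numbers with 0 for the blank, and an action slides the blank one cell (the encoding and names are mine).

    GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)           # the goal layout is simply "given"

    def successors(state):
        """Yield (action, next state) pairs by moving the blank (0) inside the 3x3 grid."""
        i = state.index(0)
        row, col = divmod(i, 3)
        for action, delta, blocked in [("Up", -3, row == 0), ("Down", +3, row == 2),
                                       ("Left", -1, col == 0), ("Right", +1, col == 2)]:
            if blocked:
                continue
            s = list(state)
            s[i], s[i + delta] = s[i + delta], s[i]   # swap the blank with its neighbour
            yield action, tuple(s)

    def goal_test(state):
        return state == GOAL

    # Path cost: 1 per move, so a solution's cost is just its length.
    print([a for a, _ in successors((1, 0, 2, 3, 4, 5, 6, 7, 8))])   # ['Down', 'Left', 'Right']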

30
Why do we look at toy problems?
  • The real world is absurdly complex → the state
    space must be abstracted for problem solving.
  • (Abstract) state = set of real states
  • (Abstract) action = complex combination of real
    actions, e.g., Arad → Zerind represents a complex
    set of possible routes, detours, rest stops, etc.
    For guaranteed realizability, any real state in
    Arad must get to some real state in Zerind.
  • (Abstract) solution = set of real paths that are
    solutions in the real world
  • Each abstract action should be easier than the
    original problem!