Artificial Intelligence: Planning - PowerPoint PPT Presentation


Transcript and Presenter's Notes

Title: Artificial Intelligence: Planning


1
Artificial Intelligence: Planning
  • Lecture 6
  • Problems with state space search
  • Planning Operators
  • A Simple Planning Algorithm
  • (Game Playing)

2
AI Planning
  • Planning concerns the problem of finding a
    sequence of actions to achieve some goal.
  • The action sequence will be the system's plan.
  • State-space search techniques, discussed in
    lecture 6, may be viewed as the simplest form of
    planning.
  • These are based on rules that specify, for the
    possible actions, how the problem state changes.
  • But we need to consider further how to represent
    these state-change rules.

3
Reminder of Robot Planning expressed as search..
[Diagram: search over problem states showing where Me, the Robot, and the Beer are, linked by actions such as "Robot picks up Me", "Robot opens door", "Robot moves to next room", etc.]
4
Problem State
  • How do we capture how the different possible
    actions change the state of world (problem
    state)?
  • For Jugs problem, problem state was just a pair
    of numbers, so could specify explicitly how it
    changed.
  • For more complex problems, representing the
    problem state requires specifying all the
    (relevant) things that are true.
  • This can be done using statements in predicate
    logic
  • in(john, room1)
  • door_open(room1, room2)
  • Note how we give objects in the world unique
    labels (e.g., room1).

5
Problem State
  • So, for the simple robot planning problem we might
    have an initial state described by
  • in(robot, room1)
  • door_closed(room1, room2)
  • in(john, room1)
  • in(beer, room2)
  • And the target state must include
  • in(beer, room1)
  • (But many different ways of formulating same
    problem.)
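
One concrete sketch (an assumption, anticipating the list-of-facts encoding used in the Prolog later in the deck; the names initial_state/1 and target_goals/1 are hypothetical):

    % A state is represented as a list of ground facts (assumption).
    initial_state([in(robot, room1), door_closed(room1, room2),
                   in(john, room1), in(beer, room2)]).
    % The goals that the target state must include.
    target_goals([in(beer, room1)]).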

6
Representing Actions
  • We now specify what the possible actions are
    (with capitals to indicate variables)
  • move(R1, R2)
  • carry(R1, R2, Object)
  • open(R1, R2) (open door between R1 and R2)
  • For each action, we need to specify precisely
  • When it is allowed.
  • E.g., can only pick something up when in the same
    room as that object.
  • What the change in the problem state will be.

7
Planning Operators
  • To do this we specify, for each action
  • A list of facts that must be true before the
    action is possible. (Preconditions)
  • A list of facts made true by the action. (Add
    list)
  • A list of facts made false by the action (Delete
    list).
  • E.g., carry(R1, R2, Object)
  • pre: door_open(R1, R2), in(robot, R1), in(Object, R1)
  • add: in(robot, R2), in(Object, R2)
  • delete: in(robot, R1), in(Object, R1)
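
The deck spells out only carry; the other two operators might look as follows, written here directly as Prolog op(Action, Preconds, Add, Delete) facts of the kind introduced on the "Towards an implementation" slide (both specifications are assumptions, not given in the deck):

    % Assumed operator: move between two rooms through an open door.
    op(move(R1, R2),
       [door_open(R1, R2), in(robot, R1)],   % Preconds
       [in(robot, R2)],                      % Add
       [in(robot, R1)]).                     % Delete
    % Assumed operator: open the door between R1 and R2.
    op(open(R1, R2),
       [door_closed(R1, R2), in(robot, R1)], % Preconds
       [door_open(R1, R2)],                  % Add
       [door_closed(R1, R2)]).               % Delete

(The later slides abbreviate robot to r; whichever constant is chosen just has to be used consistently.)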

8
Planning Operators
  • We can now check when an operator may be applied,
    and what the new state is.
  • Current state
  • in(robot, room1), door_open(room1, room2),
    in(beer, room1)
  • Action
  • carry(room1, room2, beer)
  • New state
  • in(robot, room2), door_open(room1, room2),
    in(beer, room2)

9
Searching for a Solution
  • How do we now search for a sequence of actions
    that gets us from initial to target state?
  • Can simply use standard search techniques
    discussed last week.
  • We can define a rule that lets us find possible
    successor nodes in our search tree.
  • To find successor NewState of State
  • Find operator with preconditions satisfied in
    State.
  • Add all the facts in Add list to State.
  • Delete all the facts in Delete list from State.
  • We then use standard depth-first or breadth-first
    search.

10
Towards an implementation
  • Express plan operators as Prolog facts like
  • op(carry(R1, R2, O),                           % Action
       [door_open(R1, R2), in(r, R1), in(O, R1)],  % Preconds
       [in(r, R2), in(O, R2)],                     % Add
       [in(r, R1), in(O, R1)]).                    % Delete
  • Define a successor rule.
  • successor(State, New) :-
       op(Action, Pre, Add, Delete),
       satisfied(Pre, State),
       additems(State, Add, Temp),
       delitems(Temp, Delete, New).

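
The predicates satisfied/2, additems/3 and delitems/3 are used above but never defined in the deck; one minimal set of definitions (an assumption) that fits the list-of-facts states is:

    % satisfied(Goals, State): every fact in Goals appears in State.
    satisfied([], _).
    satisfied([G|Gs], State) :- member(G, State), satisfied(Gs, State).

    % additems(State, Add, New): New is State with the Add-list facts included.
    additems(State, Add, New) :- append(Add, State, New).

    % delitems(State, Delete, New): New is State with the Delete-list facts removed.
    delitems(State, [], State).
    delitems(State, [F|Fs], New) :- delete(State, F, Rest), delitems(Rest, Fs, New).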
11
  • Now use a simple search algorithm.
  • The simplest just exploits Prolog's depth-first
    search
  • search(State, State).
  • search(Initial, Target) :- successor(Initial, Next),
    search(Next, Target).
  • ?- search([in(r, room1), ...],
    [in(r, room1), in(beer, room1)]).
  • Problems..
  • The order of facts in the target state is
    significant.
  • Doesn't yet tell us what the plan IS; it just says
    yes if a plan exists.
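
One way to fix the second problem (a sketch, not from the slides) is to carry the plan as an extra argument, recording the Action chosen at each step:

    % search(State, Target, Plan): Plan is the list of actions taking State to Target.
    % Still plain depth-first search, so it can loop, and the fact-order problem remains.
    search(State, State, []).
    search(Initial, Target, [Action|Plan]) :-
        op(Action, Pre, Add, Delete),
        satisfied(Pre, Initial),
        additems(Initial, Add, Temp),
        delitems(Temp, Delete, Next),
        search(Next, Target, Plan).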

12
Forwards versus Backwards..
  • Can search for a solution forwards (from start
    state) or backwards (from target).
  • Backwards search
  • search(Initial, Target) :- successor(Previous, Target),
    search(Initial, Previous).
  • Finds actions that get you to the Target.
  • Works out the state you'd have to be in for that
    action to apply.
  • Then searches for actions that get you to that
    intermediate state.

13
Problems with Simple Search
  • Search is blind: we consider every action that can
    be done in the current state, even if it is
    completely irrelevant to the goal.
  • E.g., if robot could clap, jump, and roll over,
    would consider paths in search tree starting with
    these actions, as well as opening door into the
    other room.
  • Backward search helps a little, but it considers
    ALL actions that end up in the target state,
    without focusing on those that start in a state
    more similar to the initial one.

14
Means-ends Analysis (MEA)
  • Early planning algorithm that attempted to
    address these issues.
  • Focus the search on actions that reduce the
    difference between current state and target.
  • Combine forward and backward reasoning.
  • It considers actions that can't immediately be
    applied in the current state.
  • Getting to a state where a useful action can be
    applied is set as a new subproblem to solve.

15
MEA algorithm
  • Find a useful action..
  • Then set as new subproblems: getting to a state
    where that action can apply, and getting to the
    target from the state resulting from that action.

[Diagram: Initial State --(preplan)--> Mid State 1 --(action)--> Mid State 2 --(postplan)--> Target State]
16
MEA Algorithm in detail
  • To find plan from Initial to Target
  • If all goals in Target are true in Initial,
    succeed.
  • Otherwise
  • Select an unsolved goal from the target state.
  • Find an Action that adds that goal (i.e., has it in
    its Add list).
  • Enable Action by finding a plan (preplan) that
    achieves Action's preconditions. Let midstate1 be
    the result of applying that plan to the initial
    state.
  • Apply Action to midstate1 to give midstate2.
  • Find a plan (postplan) from midstate2 to target
    state.
  • Return a plan consisting of preplan, action and
    postplan.
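
A minimal Prolog sketch of this loop (an assumption, loosely in the style of Bratko's means-ends planner, reusing op/4, satisfied/2, additems/3 and delitems/3 from the earlier slides):

    % mea(State, Goals, Plan, Final): Plan transforms State into Final, in which Goals hold.
    mea(State, Goals, [], State) :-
        satisfied(Goals, State).                   % all goals already true
    mea(State, Goals, Plan, Final) :-
        append(PrePlan, [Action|PostPlan], Plan),  % split the (still unbound) plan
        member(Goal, Goals),
        \+ member(Goal, State),                    % select an unsolved goal
        op(Action, Pre, Add, Delete),
        member(Goal, Add),                         % find an Action that adds it
        mea(State, Pre, PrePlan, Mid1),            % preplan: get to Mid1 where Action applies
        additems(Mid1, Add, Tmp),
        delitems(Tmp, Delete, Mid2),               % apply Action, giving Mid2
        mea(Mid2, Goals, PostPlan, Final).         % postplan: from Mid2 to the target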

17
A little on Game Playing
  • Search techniques may also be applied to game
    playing problems (e.g., board games).
  • Difference is that we have two players, each with
    opposing goals.
  • We can still express this as a search tree.
  • But the way we search has to be a bit different.

18
Search Tree
[Diagram: game tree with alternating levels of moves: Player 1's moves, Player 2's moves, Player 1's moves, etc.]
19
Game Playing
  • Essence of game playing is how to choose a move
    that will maximise your chances of winning on the
    assumption that your opponent will always make
    the move that is best for them.
  • One algorithm for this is minimax.
  • A form of best-first search, scoring game states
    according to how close they are to a solution, but
    with the assumption that your opponent will try to
    minimise your score.
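
A compact Prolog sketch of minimax (an assumption, in the style of Bratko; moves/2, staticval/2, min_to_move/1 and max_to_move/1 are game-specific predicates that would have to be supplied):

    % minimax(Pos, BestNextPos, Val): pick the successor of Pos with the best
    % backed-up value, assuming the opponent always plays optimally.
    minimax(Pos, BestNextPos, Val) :-
        moves(Pos, NextPosList), !,        % moves/2: the legal successor positions
        best(NextPosList, BestNextPos, Val).
    minimax(Pos, _, Val) :-
        staticval(Pos, Val).               % leaf: fall back to a static evaluation

    best([Pos], Pos, Val) :-
        minimax(Pos, _, Val), !.
    best([Pos1|PosList], BestPos, BestVal) :-
        minimax(Pos1, _, Val1),
        best(PosList, Pos2, Val2),
        betterof(Pos1, Val1, Pos2, Val2, BestPos, BestVal).

    % betterof/6: keep whichever position is better for the player who is choosing.
    betterof(Pos0, Val0, _Pos1, Val1, Pos0, Val0) :-
        min_to_move(Pos0), Val0 > Val1, !. % MAX is choosing here, prefers the higher value
    betterof(Pos0, Val0, _Pos1, Val1, Pos0, Val0) :-
        max_to_move(Pos0), Val0 < Val1, !. % MIN is choosing here, prefers the lower value
    betterof(_Pos0, _Val0, Pos1, Val1, Pos1, Val1).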

20
Summary
  • Planning: finding a sequence of actions to achieve
    a goal.
  • Actions are specified in terms of preconditions, an
    add list, and a delete list.
  • Can then use standard search techniques, or
    means-ends analysis, which focuses the search on
    actions that achieve goals in the target.
  • Game playing: we have to consider the opponent.