1
AI I: problem-solving and search
2
Outline
  • Problem-solving agents
    • A kind of goal-based agent
  • Problem types
    • Single state (fully observable)
    • Search with partial information
  • Problem formulation
  • Example problems
  • Basic search algorithms
    • Uninformed

3
Problem-solving agent
  • Four general steps in problem solving:
  • Goal formulation
  • What are the successful world states?
  • Problem formulation
  • What actions and states to consider, given the goal?
  • Search
  • Determine the possible sequences of actions that lead to states of known value, then choose the best sequence.
  • Execute
  • Given the solution, perform the actions.

4
Problem-solving agent
  • function SIMPLE-PROBLEM-SOLVING-AGENT(percept) return an action
  • static: seq, an action sequence, initially empty
  •   state, some description of the current world state
  •   goal, a goal, initially null
  •   problem, a problem formulation
  • state ← UPDATE-STATE(state, percept)
  • if seq is empty then
  •   goal ← FORMULATE-GOAL(state)
  •   problem ← FORMULATE-PROBLEM(state, goal)
  •   seq ← SEARCH(problem)
  • action ← FIRST(seq)
  • seq ← REST(seq)
  • return action
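
A minimal Python sketch of this agent loop (illustrative, not code from the slides): the caller supplies UPDATE-STATE, FORMULATE-GOAL, FORMULATE-PROBLEM and SEARCH as ordinary functions, and the agent merely wires them together.

def make_simple_problem_solving_agent(update_state, formulate_goal,
                                      formulate_problem, search):
    # seq and state play the role of the "static" variables in the pseudocode.
    state, seq = None, []

    def agent(percept):
        nonlocal state, seq
        state = update_state(state, percept)
        if not seq:                          # no plan left: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem) or []      # SEARCH may fail (return None)
        if not seq:
            return None                      # no action available
        action, seq = seq[0], seq[1:]        # FIRST(seq) and REST(seq)
        return action

    return agent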

5
Example Romania
6
Example Romania
  • On holiday in Romania; currently in Arad
  • Flight leaves tomorrow from Bucharest
  • Formulate goal:
  • Be in Bucharest
  • Formulate problem:
  • States: various cities
  • Actions: drive between cities
  • Find solution:
  • Sequence of cities, e.g. Arad, Sibiu, Fagaras, Bucharest
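
For concreteness, the state space can be written down as a weighted graph. The sketch below encodes a fragment of the familiar Romania road map (distances in km, as on the standard AIMA map; only the cities needed here) and the example solution path; it is purely illustrative.

romania_fragment = {
    "Arad":      {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu":     {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras":   {"Sibiu": 99, "Bucharest": 211},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

# The example solution: drive Arad -> Sibiu -> Fagaras -> Bucharest.
solution = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
cost = sum(romania_fragment[a][b] for a, b in zip(solution, solution[1:]))
print(cost)   # 140 + 99 + 211 = 450 km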

7
Problem types
  • Deterministic, fully observable → single-state problem
  • The agent knows exactly which state it will be in; the solution is a sequence.
  • Partial knowledge of states and actions:
  • Non-observable → sensorless or conformant problem
  • The agent may have no idea where it is; the solution (if any) is a sequence.
  • Nondeterministic and/or partially observable → contingency problem
  • Percepts provide new information about the current state; the solution is a tree or policy; often interleave search and execution.
  • Unknown state space → exploration problem (online)
  • When the states and actions of the environment are unknown.

8
Example vacuum world
  • Single state, start in 5. Solution??

9
Example vacuum world
  • Single state, start in 5. Solution??
  • Right, Suck

10
Example vacuum world
  • Single state, start in 5. Solution??
  • Right, Suck
  • Sensorless: start in {1,2,3,4,5,6,7,8}; e.g. Right goes to {2,4,6,8}. Solution??
  • Contingency: start in {1,3} (assume Murphy's Law: Suck can dirty a clean carpet) and local sensing: location and dirt only. Solution??

11
Problem formulation
  • A problem is defined by:
  • An initial state, e.g. Arad
  • A successor function S(x) = set of action-state pairs,
  • e.g. S(Arad) = {<Arad → Zerind, Zerind>, ...}
  • Initial state + successor function = state space
  • A goal test, which can be
  • explicit, e.g. x = "at Bucharest", or
  • implicit, e.g. checkmate(x)
  • A path cost (additive),
  • e.g. sum of distances, number of actions executed, ...
  • c(x,a,y) is the step cost, assumed to be > 0
  • A solution is a sequence of actions from the initial state to a goal state.
  • An optimal solution has the lowest path cost.
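
These four ingredients map naturally onto a small abstract class. The sketch below is illustrative (the field and method names are my own choice, not a fixed API); the search sketches later in the deck assume this interface.

class Problem:
    """A problem: initial state, successor function, goal test, step cost."""
    def __init__(self, initial_state, goal=None):
        self.initial_state = initial_state
        self.goal = goal

    def successors(self, state):
        """Return an iterable of (action, result_state) pairs, i.e. S(state)."""
        raise NotImplementedError

    def goal_test(self, state):
        """Explicit test by default; override for implicit tests like checkmate(x)."""
        return state == self.goal

    def step_cost(self, state, action, result):
        """c(x, a, y), assumed > 0; by default every action costs 1."""
        return 1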

12
Selecting a state space
  • The real world is absurdly complex.
  • The state space must be abstracted for problem solving.
  • (Abstract) state = set of real states.
  • (Abstract) action = complex combination of real actions,
  • e.g. Arad → Zerind represents a complex set of possible routes, detours, rest stops, etc.
  • The abstraction is valid if the path between two abstract states is reflected in the real world.
  • (Abstract) solution = set of real paths that are solutions in the real world.
  • Each abstract action should be easier than the real problem.

13
Example vacuum world
  • States??
  • Initial state??
  • Actions??
  • Goal test??
  • Path cost??

14
Example vacuum world
  • States?? Two locations, each with or without dirt: 2 x 2^2 = 8 states.
  • Initial state?? Any state can be initial
  • Actions?? Left, Right, Suck
  • Goal test?? Check whether squares are clean.
  • Path cost?? Number of actions to reach goal.
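
A sketch of this formulation on top of the illustrative Problem class from earlier; a state is the tuple (agent location, dirt in A, dirt in B), giving the 2 x 2 x 2 = 8 states.

class VacuumProblem(Problem):
    """Two-square vacuum world; path cost = number of actions (default step cost 1)."""
    def successors(self, state):
        loc, dirt_a, dirt_b = state
        yield ("Left",  ("A", dirt_a, dirt_b))
        yield ("Right", ("B", dirt_a, dirt_b))
        if loc == "A":
            yield ("Suck", ("A", False, dirt_b))   # clean the current square
        else:
            yield ("Suck", ("B", dirt_a, False))

    def goal_test(self, state):
        return not state[1] and not state[2]        # both squares clean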

15
Example 8-puzzle
  • States??
  • Initial state??
  • Actions??
  • Goal test??
  • Path cost??

16
Example 8-puzzle
  • States?? Integer location of each tile
  • Initial state?? Any state can be initial
  • Actions?? Left, Right, Up, Down (moves of the blank)
  • Goal test?? Check whether goal configuration is
    reached
  • Path cost?? Number of actions to reach goal
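
A sketch of the 8-puzzle successor function (the tuple representation and the goal configuration below are assumptions made for illustration; actions move the blank).

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)        # 0 marks the blank; row-major order

def puzzle_successors(state):
    """Yield (action, new_state) pairs for Left/Right/Up/Down moves of the blank."""
    i = state.index(0)                     # position of the blank
    row, col = divmod(i, 3)
    moves = {"Left": (0, -1), "Right": (0, 1), "Up": (-1, 0), "Down": (1, 0)}
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            tiles = list(state)
            tiles[i], tiles[j] = tiles[j], tiles[i]   # slide the tile into the blank
            yield action, tuple(tiles)

def puzzle_goal_test(state):
    return state == GOAL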

17
Example 8-queens problem
  • States??
  • Initial state??
  • Actions??
  • Goal test??
  • Path cost??

18
Example 8-queens problem
  • Incremental formulation vs. complete-state
    formulation
  • States??
  • Initial state??
  • Actions??
  • Goal test??
  • Path cost??

19
Example 8-queens problem
  • Incremental formulation
  • States?? Any arrangement of 0 to 8 queens on the
    board
  • Initial state?? No queens
  • Actions?? Add queen in empty square
  • Goal test?? 8 queens on board and none attacked
  • Path cost?? None
  • 3 x 10^14 possible sequences to investigate

20
Example 8-queens problem
  • Incremental formulation (alternative)
  • States?? n (0 ≤ n ≤ 8) queens on the board, one per column in the n leftmost columns, with no queen attacking another.
  • Actions?? Add a queen to the leftmost empty column such that it does not attack any other queen.
  • 2057 possible sequences to investigate. Yet this makes no difference when n = 100.
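
A sketch of this improved incremental formulation (names are illustrative): a state is a tuple of row indices, one for each filled column from the left, and an action places a non-attacking queen in the leftmost empty column.

def queens_successors(state, n=8):
    """Yield (action, new_state) pairs adding a safe queen to the next column."""
    col = len(state)                       # index of the leftmost empty column
    if col == n:
        return
    for row in range(n):
        safe = all(row != r and abs(row - r) != col - c   # same row / diagonal
                   for c, r in enumerate(state))
        if safe:
            yield (f"place queen in row {row}", state + (row,))

def queens_goal_test(state, n=8):
    return len(state) == n                 # n non-attacking queens placed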

21
Example robot assembly
  • States??
  • Initial state??
  • Actions??
  • Goal test??
  • Path cost??

22
Example robot assembly
  • States?? Real-valued coordinates of the robot joint angles; parts of the object to be assembled.
  • Initial state?? Any arm position and object
    configuration.
  • Actions?? Continuous motion of robot joints
  • Goal test?? Complete assembly (without robot)
  • Path cost?? Time to execute

23
Basic search algorithms
  • How do we find the solutions to the previous problems?
  • Search the state space (remember: the complexity of the space depends on the state representation).
  • Here: search through explicit tree generation.
  • Root = the initial state.
  • Nodes and leaves are generated through the successor function.
  • In general, search generates a graph (the same state can be reached through multiple paths).

24
Simple tree search example
  • function TREE-SEARCH(problem, strategy) return a solution or failure
  • Initialize the search tree to the initial state of the problem
  • do
  •   if there are no candidates for expansion then return failure
  •   choose a leaf node for expansion according to strategy   ← this choice determines the search process!!
  •   if the node contains a goal state then return the corresponding solution
  •   else expand the node and add the resulting nodes to the search tree
  • enddo

27
State space vs. search tree
  • A state is a (representation of) a physical
    configuration
  • A node is a data structure belonging to a search tree.
  • A node has a parent and children, and includes path cost, depth, ...
  • Here: node = <state, parent-node, action, path-cost, depth>
  • The FRINGE contains the generated nodes which are not yet expanded
  • (shown in the figure as white nodes with a black outline).

28
Tree search algorithm
  • function TREE-SEARCH(problem, fringe) return a solution or failure
  • fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  • loop do
  •   if EMPTY?(fringe) then return failure
  •   node ← REMOVE-FIRST(fringe)
  •   if GOAL-TEST[problem] applied to STATE[node] succeeds
  •     then return SOLUTION(node)
  •   fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

29
Tree search algorithm (2)
  • function EXPAND(node, problem) return a set of nodes
  • successors ← the empty set
  • for each <action, result> in SUCCESSOR-FN[problem](STATE[node]) do
  •   s ← a new NODE
  •   STATE[s] ← result
  •   PARENT-NODE[s] ← node
  •   ACTION[s] ← action
  •   PATH-COST[s] ← PATH-COST[node] + STEP-COST(node, action, s)
  •   DEPTH[s] ← DEPTH[node] + 1
  •   add s to successors
  • return successors
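
This pseudocode maps almost line for line onto Python. The sketch below is illustrative (it assumes the Problem interface sketched earlier); the remove_first argument is exactly what the following slides vary to obtain breadth-first and depth-first behaviour.

from collections import deque

class Node:
    """Search-tree node: <state, parent-node, action, path-cost, depth>."""
    def __init__(self, state, parent=None, action=None, path_cost=0, depth=0):
        self.state, self.parent, self.action = state, parent, action
        self.path_cost, self.depth = path_cost, depth

    def solution(self):
        """Follow parent links back to the root and return the action sequence."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

def expand(node, problem):
    """EXPAND: one child Node per (action, result) pair of the successor function."""
    return [Node(result, node, action,
                 node.path_cost + problem.step_cost(node.state, action, result),
                 node.depth + 1)
            for action, result in problem.successors(node.state)]

def tree_search(problem, remove_first):
    """TREE-SEARCH: remove_first(fringe) encodes the strategy (FIFO, LIFO, ...)."""
    fringe = deque([Node(problem.initial_state)])
    while fringe:
        node = remove_first(fringe)
        if problem.goal_test(node.state):
            return node.solution()
        fringe.extend(expand(node, problem))
    return None   # failure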

30
Search strategies
  • A strategy is defined by picking the order of
    node expansion.
  • Problem-solving performance is measured in four ways:
  • Completeness: does it always find a solution if one exists?
  • Optimality: does it always find the least-cost solution?
  • Time complexity: how many nodes are generated/expanded?
  • Space complexity: how many nodes are stored in memory during the search?
  • Time and space complexity are measured in terms of problem difficulty, defined by:
  • b - maximum branching factor of the search tree
  • d - depth of the least-cost solution
  • m - maximum depth of the state space (may be ∞)

31
Uninformed search strategies
  • (a.k.a. blind search) use only the information available in the problem definition.
  • When strategies can determine whether one non-goal state is better than another → informed search.
  • Categories defined by the expansion algorithm:
  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening search.
  • Bidirectional search

32
BF-search, an example
  • Expand the shallowest unexpanded node
  • Implementation: fringe is a FIFO queue

(Tree diagram, built up over four animation frames: A is expanded first, then B, adding D and E to the fringe, then C, adding F and G.)
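
With the tree_search sketch from earlier, breadth-first search is simply the FIFO choice of remove_first. The commented example reuses the vacuum-world sketch; reading the slides' state 5 as "agent in A, A clean, B dirty" is an assumption.

def breadth_first_search(problem):
    """Expand the shallowest unexpanded node: FIFO fringe (pop from the left)."""
    return tree_search(problem, remove_first=lambda fringe: fringe.popleft())

# breadth_first_search(VacuumProblem(("A", False, True)))  ->  ['Right', 'Suck']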
36
BF-search evaluation
  • Completeness:
  • Does it always find a solution if one exists?
  • YES,
  • if the shallowest goal node is at some finite depth d
  • and b is finite
  • (i.e. the maximum number of successors of any node is finite).

37
BF-search evaluation
  • Completeness
  • YES (if b is finite)
  • Time complexity:
  • Assume a state space where every state has b successors:
  • the root has b successors, each node at the next level again has b successors (b^2 in total), ...
  • Assume the solution is at depth d.
  • Worst case: expand all but the last node at depth d.
  • Total number of nodes generated: b + b^2 + ... + b^d + (b^(d+1) - b) = O(b^(d+1))

38
BF-search evaluation
  • Completeness
  • YES (if b is finite)
  • Time complexity:
  • total number of nodes generated is O(b^(d+1))
  • Space complexity:
  • the same, O(b^(d+1)), if each node is retained in memory

39
BF-search evaluation
  • Completeness
  • YES (if b is finite)
  • Time complexity: O(b^(d+1)) nodes generated
  • Space complexity: O(b^(d+1)) if each node is retained in memory
  • Optimality:
  • Does it always find the least-cost solution?
  • In general YES,
  • unless actions have different costs.

40
BF-search evaluation
  • Two lessons:
  • The memory requirements are a bigger problem than the execution time.
  • Exponential-complexity search problems cannot be solved by uninformed search methods for any but the smallest instances.

41
DF-search, an example
  • Expand the deepest unexpanded node
  • Implementation: fringe is a LIFO queue (stack)

(Tree diagram, built up over twelve animation frames: the search works down the leftmost branch first, expanding A, B, D, then H and I, backs up to E, expanding J and K, and only afterwards moves on to C's subtree.)
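
With the same tree_search sketch, depth-first search is the LIFO choice of remove_first (again only a sketch):

def depth_first_search(problem):
    """Expand the deepest unexpanded node: treat the fringe as a stack (pop right)."""
    return tree_search(problem, remove_first=lambda fringe: fringe.pop())

# Note: on an infinite (or merely very deep) state space this may never return,
# which matches the completeness discussion on the next slides.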
53
DF-search evaluation
  • Branching factor b
  • Maximum depth m
  • Completeness:
  • Does it always find a solution if one exists?
  • NO, unless the search space is finite.

54
DF-search evaluation
  • Completeness: NO, unless the search space is finite.
  • Time complexity: O(b^m)
  • Terrible if m is much larger than d (the depth of the optimal solution),
  • but if there are many solutions it can be faster than BF-search.

55
DF-search evaluation
  • Completeness: NO, unless the search space is finite.
  • Time complexity: O(b^m)
  • Space complexity: O(bm) - only the current path plus the unexpanded siblings are stored.
  • Backtracking search uses even less memory:
  • one successor at a time instead of all b.

56
DF-search evaluation
  • Completeness: NO, unless the search space is finite.
  • Time complexity: O(b^m)
  • Space complexity: O(bm)
  • Optimality: NO
  • Same issues as completeness:
  • assume nodes J and C contain goal states; DF-search finds the deep goal J before the shallow goal C.

57
Iterative deepening search
  • What?
  • A general strategy to find the best depth limit l:
  • the goal is found at depth d, the depth of the shallowest goal node.
  • Often used in combination with DF-search.
  • Combines the benefits of DF- and BF-search.

58
Iterative deepening search
  • function ITERATIVE-DEEPENING-SEARCH(problem) return a solution or failure
  • inputs: problem, a problem
  • for depth ← 0 to ∞ do
  •   result ← DEPTH-LIMITED-SEARCH(problem, depth)
  •   if result ≠ cutoff then return result
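
A sketch of depth-limited search and of the iterative deepening loop above, assuming the Node and expand helpers sketched earlier; the cutoff sentinel distinguishes "stopped by the depth limit" from genuine failure.

from itertools import count

CUTOFF = "cutoff"   # the limit, not exhaustion of the space, ended the search

def depth_limited_search(problem, limit):
    """Recursive depth-limited search on Nodes; returns actions, CUTOFF or None."""
    def recursive_dls(node):
        if problem.goal_test(node.state):
            return node.solution()
        if node.depth == limit:
            return CUTOFF
        cutoff_occurred = False
        for child in expand(node, problem):
            result = recursive_dls(child)
            if result == CUTOFF:
                cutoff_occurred = True
            elif result is not None:
                return result
        return CUTOFF if cutoff_occurred else None   # None = failure

    return recursive_dls(Node(problem.initial_state))

def iterative_deepening_search(problem):
    """for depth <- 0 to infinity: run DLS; return the first non-cutoff result."""
    for depth in count():                  # 0, 1, 2, ...
        result = depth_limited_search(problem, depth)
        if result != CUTOFF:
            return result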

59
ID-search, example
  • Limit = 0

60
ID-search, example
  • Limit = 1

61
ID-search, example
  • Limit = 2

62
ID-search, example
  • Limit = 3

63
ID search, evaluation
  • Completeness
  • YES (no infinite paths)

64
ID search, evaluation
  • Completeness
  • YES (no infinite paths)
  • Time complexity:
  • The algorithm seems costly due to the repeated generation of certain states.
  • Node generation (solution at depth d):
  • level d: generated once
  • level d-1: 2 times
  • level d-2: 3 times
  • ...
  • level 2: d-1 times
  • level 1: d times

Numerical comparison for b = 10 and d = 5, with the solution at the far right of the tree
65
ID search, evaluation
Total number of nodes generated: N(IDS) = (d)b + (d-1)b^2 + ... + (1)b^d = O(b^d)
Numerical comparison for b = 10 and d = 5, with the solution at the far right of the tree:
N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100
66
ID search, evaluation
  • Completeness
  • YES (no infinite paths)
  • Time complexity: O(b^d)
  • Space complexity: O(bd)
  • Cf. depth-first search.

67
ID search, evaluation
  • Completeness
  • YES (no infinite paths)
  • Time complexity: O(b^d)
  • Space complexity: O(bd)
  • Optimality:
  • YES, if step cost is 1.
  • Can be extended to iterative lengthening search:
  • same idea as uniform-cost search,
  • but increases overhead.

68
Bidirectional search
  • Two simultaneous searches, one forward from the start and one backward from the goal.
  • Motivation: b^(d/2) + b^(d/2) is much less than b^d.
  • Check whether a newly generated node belongs to the other fringe before expansion.
  • Space complexity is the most significant weakness.
  • Complete and optimal if both searches are BF.
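
As a concrete illustration (this is not from the slides), a bidirectional breadth-first search on a graph given as {node: {neighbour: cost}}, the same format as the Romania fragment sketched earlier. Each newly generated node is checked against the other search's tree before it is expanded.

from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Two BF searches, forward from start and backward from goal; returns a
    node list or None. Works best when the adjacency structure is symmetric."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def join(meet):
        # Stitch the two half-paths together at the meeting node.
        path, n = [], meet
        while n is not None:
            path.append(n); n = parents_f[n]
        path.reverse()
        n = parents_b[meet]
        while n is not None:
            path.append(n); n = parents_b[n]
        return path

    while frontier_f and frontier_b:
        forward = len(frontier_f) <= len(frontier_b)      # expand the smaller side
        frontier, parents, other = ((frontier_f, parents_f, parents_b) if forward
                                    else (frontier_b, parents_b, parents_f))
        for _ in range(len(frontier)):                    # one BFS layer
            node = frontier.popleft()
            for nbr in graph.get(node, {}):
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other:                      # the fringes meet
                        return join(nbr)
                    frontier.append(nbr)
    return None

# bidirectional_bfs(romania_fragment, "Arad", "Bucharest")
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']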

69
How to search backwards?
  • The predecessors of each node should be efficiently computable.
  • Easiest when actions are easily reversible.

70
Summary of algorithms

The table below collects the evaluations from the previous slides:

Criterion      BF-search          DF-search   ID-search             Bidirectional (two BF)
Complete?      YES (b finite)     NO          YES                   YES
Time           O(b^(d+1))         O(b^m)      O(b^d)                O(b^(d/2))
Space          O(b^(d+1))         O(bm)       O(bd)                 O(b^(d/2))
Optimal?       YES (step cost 1)  NO          YES (step cost 1)     YES
71
Repeated states
  • Failure to detect repeated states can turn a solvable problem into an unsolvable one.

72
Graph search, evaluation
  • Optimality:
  • GRAPH-SEARCH discards newly discovered paths to states already expanded.
  • This may result in a sub-optimal solution,
  • YET it is optimal for uniform-cost search, or for BF-search with constant step costs.
  • Time and space complexity:
  • proportional to the size of the state space
  • (which may be much smaller than O(b^d)).
  • DF- and ID-search with a closed list no longer have linear space requirements, since all nodes are stored in the closed list!!
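
A sketch of GRAPH-SEARCH: the tree_search loop from earlier plus a closed set of already expanded states (it assumes hashable states, as in the earlier problem sketches). As noted above, discarding new paths to closed states is only safe for optimality with uniform-cost or constant-step-cost BF expansion.

from collections import deque

def graph_search(problem, remove_first):
    """TREE-SEARCH plus a closed set: never expand the same state twice."""
    closed = set()                                # states already expanded
    fringe = deque([Node(problem.initial_state)])
    while fringe:
        node = remove_first(fringe)
        if problem.goal_test(node.state):
            return node.solution()
        if node.state not in closed:              # skip repeated states
            closed.add(node.state)
            fringe.extend(expand(node, problem))
    return None   # failure

# FIFO removal gives breadth-first graph search:
# graph_search(problem, lambda fringe: fringe.popleft())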

73
Search with partial information
  • Previous assumptions:
  • the environment is fully observable,
  • the environment is deterministic,
  • the agent knows the effects of its actions.
  • What if knowledge of states or actions is incomplete?

74
Search with partial information
  • Partial knowledge of states and actions:
  • Sensorless or conformant problem:
  • the agent may have no idea where it is; the solution (if any) is a sequence.
  • Contingency problem:
  • percepts provide new information about the current state; the solution is a tree or policy; often interleave search and execution.
  • The agent can obtain new information from its sensors after each action.
  • If the uncertainty is caused by the actions of another agent: adversarial problem.
  • Exploration problem:
  • when the states and actions of the environment are unknown.

75
Conformant problems
  • The agent knows the effects of its actions, but has no sensors.
  • Start in {1,2,3,4,5,6,7,8}; e.g. Right goes to {2,4,6,8}. Solution??
  • [Right, Suck, Left, Suck]
  • Right, Suck goes to {4,8}.
  • Right, Suck, Left, Suck goes to {7} = goal state.
  • When the world is not fully observable: reason about the set of states that might be reached
  • = the belief state.

76
Conformant problems
  • Search in the space of belief states.
  • A solution is a belief state all of whose members are goal states.
  • If the problem has S states, there are 2^S belief states.
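
A minimal sketch of searching in belief-state space for the sensorless vacuum world: a belief state is a frozenset of the physical states the agent might be in, and an action maps every member through a deterministic transition model (the Murphy's-law variant is not modelled here).

def vacuum_result(state, action):
    """Deterministic physical transition of the two-square vacuum world."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    # Suck: clean the square the agent is in
    return ("A", False, dirt_b) if loc == "A" else ("B", dirt_a, False)

def belief_successor(belief, action):
    return frozenset(vacuum_result(s, action) for s in belief)

def belief_goal_test(belief):
    """Solved only if every member of the belief state is a goal state."""
    return all(not dirt_a and not dirt_b for _, dirt_a, dirt_b in belief)

# Sensorless: start anywhere; [Right, Suck, Left, Suck] is guaranteed to work.
belief = frozenset((loc, da, db) for loc in "AB"
                   for da in (True, False) for db in (True, False))
for action in ["Right", "Suck", "Left", "Suck"]:
    belief = belief_successor(belief, action)
print(belief_goal_test(belief))   # True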

77
Belief state of vacuum-world
78
Contingency problems
  • Contingency: start in {1,3}.
  • The agent has a position sensor and a local dirt sensor, but no sensor for detecting dirt in other squares.
  • Local sensing: dirt and location only.
  • Percept [L, Dirty] → start in {1,3}
  • Suck → {5,7}
  • Right → {6,8}
  • Suck: in 6 → 8 (success),
  • BUT Suck in 8 → failure (Murphy's Law: it dirties the clean square).
  • Solution??
  • Belief-state: no fixed action sequence guarantees a solution.
  • Relax the requirement:
  • [Suck, Right, if [R, Dirty] then Suck]
  • Select actions based on the contingencies arising during execution.