1
Informed search algorithms
  • Chapter 4

2
Outline
  • Best-first search
  • Greedy best-first search
  • A* search
  • Heuristics
  • Local search algorithms
  • Hill-climbing search

3
Best-first search
  • Idea: use an evaluation function f(n) for each
    node
  • estimate of "desirability"
  • Expand the most desirable unexpanded node
  • Implementation:
  • Order the nodes in the fringe in decreasing
    order of desirability
  • Special cases:
  • greedy best-first search
  • A* search
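The loop above can be sketched with a priority queue ordered by f(n) (lower f = more desirable). Everything concrete below is illustrative: the toy graph, its step costs, and the heuristic values are made up for the sketch, not taken from the slides. Plugging in f = h gives greedy best-first search; f = g + h gives A*.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: always expand the fringe node
    with the best (lowest) f value.
    successors(state) yields (next_state, step_cost) pairs."""
    fringe = [(f(start, 0), start, 0, [start])]   # (f, state, g, path)
    while fringe:
        _, state, g, path = heapq.heappop(fringe)
        if goal_test(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(fringe, (f(nxt, g2), nxt, g2, path + [nxt]))
    return None  # failure: fringe exhausted

# Hypothetical toy graph: state -> [(successor, step cost)]
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h = {'S': 3, 'A': 4, 'B': 1, 'G': 0}  # made-up heuristic values

# Special case f = g + h (A*-style ordering) on the toy graph
path, cost = best_first_search('S', lambda s: s == 'G',
                               lambda s: graph[s],
                               lambda s, g: g + h[s])
```

The `f` argument is deliberately a parameter so the same skeleton covers both special cases on the outline.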

4
Romania with step costs in km
5
Greedy best-first search
  • Evaluation function f(n) = h(n) (heuristic)
  • = estimate of cost from n to goal
  • e.g., hSLD(n) = straight-line distance from n to
    Bucharest
  • Greedy best-first search expands the node that
    appears to be closest to the goal
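A minimal sketch of why "appears closest" can mislead: on the hypothetical map below (toy graph and made-up straight-line estimates, not the Romania data), greedy best-first follows the node with the smaller h and commits to a costlier route.

```python
import heapq

def greedy_best_first(start, goal, graph, h):
    """Greedy best-first: order the fringe by h(n) alone; g is ignored."""
    fringe = [(h[start], start, 0, [start])]   # (h, state, g, path)
    visited = set()
    while fringe:
        _, state, g, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        if state in visited:
            continue
        visited.add(state)
        for nxt, cost in graph[state]:
            heapq.heappush(fringe, (h[nxt], nxt, g + cost, path + [nxt]))
    return None

# Hypothetical map: A *looks* closer to the goal than B, but its road is longer
graph = {'S': [('A', 2), ('B', 3)], 'A': [('G', 6)], 'B': [('G', 3)], 'G': []}
h = {'S': 5, 'A': 1, 'B': 2, 'G': 0}  # made-up straight-line estimates

path, cost = greedy_best_first('S', 'G', graph, h)
# Greedy goes through A (smaller h) and pays 8,
# although the route S-B-G costs only 6.
```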

6
Greedy best-first search example
7
Greedy best-first search example
8
Greedy best-first search example
9
Greedy best-first search example
10
Properties of greedy best-first search
  • Complete? No: can get stuck in loops, e.g., Iasi
    → Neamt → Iasi → Neamt → …
  • Time? O(b^m), but a good heuristic can give
    dramatic improvement
  • Space? O(b^m) -- keeps all nodes in memory
  • Optimal? No

11
A* search
  • Idea: avoid expanding paths that are already
    expensive
  • Evaluation function f(n) = g(n) + h(n)
  • g(n) = cost so far to reach n
  • h(n) = estimated cost from n to goal
  • f(n) = estimated total cost of path through n to
    goal
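A minimal sketch of A* with f(n) = g(n) + h(n). The toy map and heuristic values are made up (the h chosen here never overestimates the true remaining cost); on it, A* finds the cheaper route that a purely greedy search would miss.

```python
import heapq

def a_star(start, goal, graph, h):
    """A* tree search: expand the fringe node with minimal f = g + h."""
    fringe = [(h[start], 0, start, [start])]   # (f, g, state, path)
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        for nxt, cost in graph[state]:
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None

# Hypothetical map with an admissible h (h never exceeds the true cost to G)
graph = {'S': [('A', 2), ('B', 3)], 'A': [('G', 6)], 'B': [('G', 3)], 'G': []}
h = {'S': 5, 'A': 4, 'B': 2, 'G': 0}

path, cost = a_star('S', 'G', graph, h)
# Returns the optimal route S-B-G with cost 6.
```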

12
A* search example
13
A* search example
14
A* search example
15
A* search example
16
A* search example
17
A* search example
18
Admissible heuristics
  • A heuristic h(n) is admissible if for every node
    n,
  • h(n) ≤ h*(n),
  • where h*(n) is the true cost to reach the goal
    state from n.
  • An admissible heuristic never overestimates the
    cost to reach the goal, i.e., it is optimistic
  • Example: hSLD(n) (never overestimates the actual
    road distance)
  • Theorem: If h(n) is admissible, A* using
    TREE-SEARCH is optimal

19
Optimality of A* (proof)
  • Suppose some suboptimal goal G2 has been
    generated and is in the fringe. Let n be an
    unexpanded node in the fringe such that n is on a
    shortest path to an optimal goal G.
  • f(G2) = g(G2) since h(G2) = 0 (G2 is a goal)
  • > g(G) since G2 is sub-optimal
  • g(G) = g(n) + d(n,G) since there is only one
    path from root to G (tree)
  • ≥ g(n) + h(n) since h is admissible
  • = f(n) by definition of f
  • Hence f(G2) > f(n), and A* will never select G2
    for expansion.
  • Recall that it is only when a node is picked for
    expansion that we check if it is a goal.

20
Properties of A*
  • Complete? Yes
  • Optimal? Yes
  • Let C* be the cost of the optimal solution.
  • A* expands all nodes with f(n) < C*
  • A* expands some nodes with f(n) = C*
  • A* expands no nodes with f(n) > C*

21
A* using Graph-Search
  • A* using Graph-Search can produce sub-optimal
    solutions, even if the heuristic function is
    admissible.
  • Sub-optimal solutions can be returned because
    Graph-Search can discard the optimal path to a
    repeated state if it is not the first one
    generated.

22
Consistent heuristics
  • A heuristic is consistent if for every node n,
    every successor n' of n generated by any action
    a,
  • c(n,a,n') + h(n') ≥ h(n)
  • If h is consistent, we have
  • f(n') = g(n') + h(n')
  • = g(n) + c(n,a,n') + h(n')
  • ≥ g(n) + h(n)
  • = f(n)
  • i.e., f(n) is non-decreasing along any path.
  • Theorem: If h(n) is consistent, A* using
    GRAPH-SEARCH is optimal
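The consistency condition c(n,a,n') + h(n') ≥ h(n) can be checked mechanically over every edge of a graph. The toy graph and the two heuristic tables below are made up to show one consistent and one inconsistent case.

```python
def is_consistent(graph, h):
    """Check c(n,a,n') + h(n') >= h(n) for every edge (n, n').
    A consistent h makes f = g + h non-decreasing along any path."""
    return all(cost + h[n2] >= h[n]
               for n, edges in graph.items()
               for n2, cost in edges)

# Hypothetical graph: state -> [(successor, step cost)]
graph = {'S': [('A', 2), ('B', 3)], 'A': [('G', 6)], 'B': [('G', 3)], 'G': []}
h_good = {'S': 5, 'A': 4, 'B': 2, 'G': 0}  # consistent on every edge
h_bad  = {'S': 5, 'A': 1, 'B': 5, 'G': 0}  # edge S->A: 2 + 1 < 5, so inconsistent
```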

23
Optimality under consistency
  • Recall: if h is consistent, then A* expands nodes
    in order of increasing f value.
  • The first goal node selected for expansion must
    be optimal.
  • Recall that a node is tested for being a goal
    state only when it is selected for expansion.
  • Why?
  • Because if a goal node G2 is selected for
    expansion later than G1, this means that
  • f(G1) ≤ f(G2)
  • g(G1) + h(G1) ≤ g(G2) + h(G2)
  • recall h(G1) = h(G2) = 0 since G1 and G2 are
    goals, hence
  • g(G1) ≤ g(G2)
  • which means that G2 is a sub-optimal solution.

24
Straight-Line Distance Heuristic is Consistent
  • Why?
  • We know that the general triangle inequality is
    satisfied when each side is measured by the
    straight-line distance.
  • And c(n,a,n') is greater than or equal to the
    straight-line distance between n.state and
    n'.state.
  • d(n.state, goalstate) ≤ d(n'.state, goalstate) +
    d(n.state, n'.state)
  • by the Euclidean triangle inequality.
  • Also recall that
  • d(n.state, goalstate) = h(n) and
  • d(n'.state, goalstate) = h(n'), and
  • d(n.state, n'.state) ≤ c(n,a,n') for any action
    a. Hence,
  • h(n) ≤ h(n') + d(n.state, n'.state) ≤ h(n') +
    c(n,a,n')
  • i.e., h is consistent.

25
Admissible heuristics
  • E.g., for the 8-puzzle:
  • h1(n) = number of misplaced tiles
  • h2(n) = total Manhattan distance = Σ |x1 - x2| +
    |y1 - y2|
  • (i.e., no. of squares from desired location of
    each tile)
  • h1(S) = ?
  • h2(S) = ?

26
Admissible heuristics
  • E.g., for the 8-puzzle:
  • h1(n) = number of misplaced tiles
  • h2(n) = total Manhattan distance = Σ |x1 - x2| +
    |y1 - y2|
  • (i.e., no. of squares from desired location of
    each tile)
  • h1(S) = 8
  • h2(S) = 3+1+2+2+2+3+3+2 = 18
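Both heuristics are easy to compute from a flat board encoding. The sketch below assumes the standard start state from the textbook figure (blank in the centre) and a goal that reads 0..8 left-to-right, top-to-bottom; with those assumptions it reproduces the slide's values h1(S) = 8 and h2(S) = 18.

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, tile 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square (3x3 board)."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue  # the blank does not contribute
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # assumed figure start state; 0 = blank
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal layout
```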

27
Why are they admissible?
  • Misplaced tiles: no move can get more than one
    misplaced tile into place, so this measure is a
    guaranteed underestimate and hence admissible.
  • Manhattan: in fact, each move can at best
    decrease by one the rectilinear distance of a
    tile from its goal, so the total is also a
    guaranteed underestimate.

28
Are they consistent?
  • c(n,a,n') = 1 for any action a
  • Claim: h1(n) ≤ h1(n') + c(n,a,n') = h1(n') + 1
  • Now, no move (action) can get more than one
    misplaced tile into place.
  • Also, no move can create more than one new
    misplaced tile.
  • Hence, the above follows, i.e., h1 is consistent.
  • Similar reasoning applies for h2 as well.

29
A* Graph-Search (without consistent heuristic)
  • The problem is that sub-optimal solutions can be
    returned because Graph-Search can discard the
    optimal path to a repeated state if it is not the
    first one generated.
  • When the heuristic function h is consistent, we
    proved that the optimal path to any repeated
    state is the first one to be followed.
  • When h is not consistent, we can still use
    Graph-Search if we discard the longer path. We
    store in the hashtable pairs (statekey, pathcost)
  • Graph-Search(problem, fringe)
  •   node = MakeNode(problem.initialstate)
  •   Insert(fringe, node)
  •   do
  •     if ( Empty(fringe) ) return null // failure
  •     node = Remove(fringe)
  •     if ( problem.GoalTest(node.state) ) return node
  •     key_cost_pair = Find( closed, node.state.getKey() )
  •     if ( key_cost_pair == null ||
         key_cost_pair.pathcost > node.pathcost )
  •       Insert( closed, (node.state.getKey(), node.pathcost) )
  •       InsertAll( fringe, Expand(node, problem) )
  •   while (true)
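A runnable sketch of the same idea in Python: the closed table maps each state key to the best path cost seen so far, and a repeated state is re-expanded whenever it is reached more cheaply, as in the pseudocode's pathcost check. The toy graph and the (deliberately inconsistent) heuristic values are made up.

```python
import heapq

def graph_search(start, goal_test, successors, f):
    """Graph search that keeps (state, best path cost) pairs and
    re-expands a repeated state reached by a strictly cheaper path."""
    fringe = [(f(start, 0), 0, start, [start])]   # (f, g, state, path)
    closed = {}  # state -> cheapest g at which it was expanded
    while fringe:
        _, g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path, g
        if state in closed and closed[state] <= g:
            continue  # an equal-or-cheaper path was already expanded
        closed[state] = g
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(fringe, (f(nxt, g2), g2, nxt, path + [nxt]))
    return None  # failure

# Hypothetical graph; h is inconsistent (h[A] = 3 > c(A,C) + h[C] = 1)
graph = {'S': [('A', 1), ('B', 1)], 'A': [('C', 1)],
         'B': [('C', 3)], 'C': [('G', 1)], 'G': []}
h = {'S': 0, 'A': 3, 'B': 0, 'C': 0, 'G': 0}

path, cost = graph_search('S', lambda s: s == 'G', lambda s: graph[s],
                          lambda s, g: g + h[s])
# C is first reached via B at g = 4, then re-opened via A at g = 2,
# so the optimal path S-A-C-G with cost 3 is still returned.
```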

30
A* using Graph-Search
  • Let's try it here

31
Dominance
  • If h2(n) ≥ h1(n) for all n (both admissible)
  • then h2 dominates h1
  • h2 is better for search
  • Let C* be the cost of the optimal solution.
  • A* expands all nodes with f(n) < C*
  • A* expands some nodes with f(n) = C*
  • A* expands no nodes with f(n) > C*
  • Hence, we want f(n) = g(n) + h(n) to be as big as
    possible. Since we can't do anything about g(n),
    we are interested in having h(n) as big as
    possible.
  • Typical search costs (average number of nodes
    expanded):
  • d = 12: IDS = 3,644,035 nodes; A*(h1) = 227
    nodes; A*(h2) = 73 nodes
  • d = 24: IDS = too many nodes; A*(h1) = 39,135
    nodes; A*(h2) = 1,641 nodes

32
The max of heuristics
  • If a collection of admissible heuristics h1, ...,
    hm is available for a problem, and none of them
    dominates any of the others, we create a compound
    heuristic as
  • h(n) = max { h1(n), ..., hm(n) }
  • Is it admissible?
  • Yes, because each component is admissible, so h
    won't over-estimate the distance to the goal.
  • If h1, ..., hm are consistent, is h consistent as
    well?
  • Yes. For any action (move) a:
  • c(n,a,n') + h(n')
  • = c(n,a,n') + max { h1(n'), ..., hm(n') }
  • = max { c(n,a,n') + h1(n'), ..., c(n,a,n') + hm(n') }
  • ≥ max { h1(n), ..., hm(n) } (from consistency of
    each hi)
  • = h(n)
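The compound heuristic is a one-liner. The two component heuristics below are tiny made-up lookup tables, just to show that neither dominates the other yet their pointwise max is well defined.

```python
def max_heuristic(*hs):
    """h(n) = max(h1(n), ..., hm(n)).
    Admissible if every hi is admissible; consistent if every hi is."""
    return lambda n: max(h(n) for h in hs)

# Hypothetical component heuristics: each is better on a different state
h1 = {'X': 2, 'Y': 5}.get
h2 = {'X': 4, 'Y': 3}.get
h = max_heuristic(h1, h2)
# h('X') takes its value from h2, h('Y') from h1
```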

33
Relaxed problems
  • A problem with fewer restrictions on the actions
    is called a relaxed problem
  • The cost of an optimal solution to a relaxed
    problem is an admissible heuristic for the
    original problem
  • If the rules of the 8-puzzle are relaxed so that
    a tile can move anywhere, then h1(n) gives the
    shortest solution
  • If the rules are relaxed so that a tile can move
    to any adjacent square, then h2(n) gives the
    shortest solution

34
Local search algorithms
  • In many optimization problems, the path to the
    goal is irrelevant; the goal state itself is the
    solution
  • State space = set of "complete" configurations
  • Find a configuration satisfying constraints,
    e.g., n-queens
  • In such cases, we can use local search
    algorithms, which keep a single "current" state
    and try to improve it.

35
Example n-queens
  • Put n queens on an n × n board with no two queens
    on the same row, column, or diagonal
  • How many successors from each state?
  • (n-1)·n

36
8-queens problem
  • h = number of pairs of queens that are attacking
    each other, either directly or indirectly
  • h = 17 for the above state
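The heuristic can be computed directly from the usual complete-state encoding, one queen per column. The helper name and the sample states below are illustrative (the slide's h = 17 board is not reproduced here).

```python
from itertools import combinations

def attacking_pairs(queens):
    """h for n-queens: queens[c] = row of the queen in column c.
    Counts pairs that attack each other along a row or a diagonal
    (with one queen per column, column attacks cannot occur)."""
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(queens), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

# Three queens on one row attack pairwise: C(3,2) = 3 pairs
conflicts = attacking_pairs((0, 0, 0))
# A solved 4-queens board has no attacking pair
solved = attacking_pairs((1, 3, 0, 2))
```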

37
Hill-climbing search
  • "Like climbing Everest in thick fog with amnesia"
  • Hill-Climbing chooses randomly among the set of
    best successors, if there is more than one.
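A minimal sketch of the climbing loop, on a made-up one-dimensional landscape rather than a board; for simplicity this deterministic steepest-ascent variant omits the random tie-breaking the slide mentions. It shows exactly the failure mode of the next slide: from one side of a dip the climb stalls on a local maximum.

```python
def hill_climb(state, value, neighbors):
    """Greedy ascent: move to the best neighbor while it improves the value."""
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state  # no neighbor is strictly better: a (possibly local) maximum
        state = best

# Hypothetical landscape: a local maximum at index 2, the global one at index 6
landscape = [1, 3, 5, 2, 4, 6, 9, 7, 2, 1]
value = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]

stuck = hill_climb(0, value, neighbors)   # climbs 0 -> 1 -> 2, trapped at the local max
best = hill_climb(4, value, neighbors)    # climbs 4 -> 5 -> 6, the global max
```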

38
Hill-climbing search
  • Problem: depending on the initial state, it can
    get stuck in local maxima

39
Hill-climbing search 8-queens problem
  • A local minimum with h = 1

40
Local Minima
  • Starting from a randomly generated state of the
    8-queens problem, hill-climbing gets stuck 86% of
    the time, solving only 14% of problem instances.
  • It works quickly, taking just 4 steps on average
    when it succeeds, and 3 when it gets stuck; not
    bad for a state space with 8^8 ≈ 17 million
    states.
  • Memory is constant, since we keep only one state.
  • Since it is so attractive, what can we do in
    order to not get stuck?

41
Random-restart hill climbing
  • Well-known adage: "If at first you don't succeed,
    try, try again."
  • It conducts a series of hill-climbing searches
    from randomly generated initial states, stopping
    when a goal is found.
  • If each hill-climbing search has a probability p
    of success, then the expected number of trials
    is 1/p.
  • For 8-queens, p ≈ 0.14, so we need roughly 7
    iterations to find a goal, i.e. ≈ 22 steps
    (3 steps for each of the ~6 failures and 4 for
    the success)
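The restart loop can be sketched end to end for 8-queens. This is a self-contained illustration under simplifying assumptions: steepest-descent on the attacking-pairs heuristic, no sideways moves, and a fixed random seed; the restart cap of 1000 is arbitrary and far above the ≈ 7 restarts expected at p ≈ 0.14.

```python
import random
from itertools import combinations

def attacking_pairs(queens):
    """h: number of attacking queen pairs (queens[c] = row of queen in column c)."""
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(queens), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

def successors(queens):
    """All states obtained by moving one queen to another row in its column."""
    n = len(queens)
    return [queens[:c] + (r,) + queens[c + 1:]
            for c in range(n) for r in range(n) if r != queens[c]]

def hill_climb(state, rng):
    """Steepest descent on h; ties broken randomly; stops at a local minimum."""
    while True:
        h = attacking_pairs(state)
        if h == 0:
            return state
        succs = successors(state)
        best_h = min(attacking_pairs(s) for s in succs)
        if best_h >= h:
            return state  # stuck at a local minimum
        state = rng.choice([s for s in succs if attacking_pairs(s) == best_h])

def random_restart(n, rng, max_restarts=1000):
    """Restart from fresh random states until a conflict-free board is found."""
    for restarts in range(1, max_restarts + 1):
        state = tuple(rng.randrange(n) for _ in range(n))
        result = hill_climb(state, rng)
        if attacking_pairs(result) == 0:
            return result, restarts
    return None, max_restarts

rng = random.Random(0)
solution, restarts = random_restart(8, rng)
```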