1
Heuristic (Informed) Search
R&N, Chap. 4, Sect. 4.1-3
2
Iterative Deepening A* (IDA*)
  • Idea: Reduce the memory requirement of A* by applying a
    cutoff on the values of f
  • Assumes a consistent heuristic function h
  • Algorithm IDA*:
  • Initialize the cutoff to f(initial-node)
  • Repeat:
  • Perform depth-first search, expanding all nodes
    N such that f(N) ≤ cutoff
  • Reset the cutoff to the smallest value of f among the
    non-expanded (leaf) nodes
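A minimal Python sketch of this loop, assuming unit step costs and hypothetical h, successors, and is_goal helpers (none of these names come from the slides):

    import math

    def ida_star(start, h, successors, is_goal):
        cutoff = h(start)                      # f(initial-node) = 0 + h(start)
        while True:
            next_cutoff = math.inf

            def dfs(state, g, path):
                nonlocal next_cutoff
                f = g + h(state)
                if f > cutoff:
                    # Track the smallest f among the pruned (leaf) nodes.
                    next_cutoff = min(next_cutoff, f)
                    return None
                if is_goal(state):
                    return path
                for s in successors(state):
                    if s not in path:          # avoid cycles on the current path
                        found = dfs(s, g + 1, path + [s])
                        if found is not None:
                            return found
                return None

            solution = dfs(start, 0, [start])
            if solution is not None:
                return solution
            if next_cutoff == math.inf:        # entire space exhausted
                return None
            cutoff = next_cutoff               # reset cutoff and repeat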

3-14
8-Puzzle
f(N) = g(N) + h(N), with h(N) = number of misplaced tiles
[Slides 3-7: search-tree figures tracing the depth-first expansion of all nodes with f(N) ≤ 4 (cutoff = 4). Slides 8-14: the trace restarted with the cutoff reset to 5, the smallest f among the previously unexpanded leaves.]
15
Advantages/Drawbacks of IDA*
  • Advantages:
  • Still complete and optimal
  • Requires less memory than A*
  • Avoids the overhead of sorting the fringe
  • Drawbacks:
  • Can't avoid revisiting states not on the current
    path
  • Essentially a DFS
  • Available memory is poorly used (→
    memory-bounded search, see R&N pp. 101-104)

16
Another approach
  • Local Search Algorithms
  • Hill-climbing or Gradient descent
  • Simulated Annealing
  • Genetic Algorithms, others

17
Local Search
  • Light-memory search method
  • No search tree: only the current state is
    represented!
  • Only applicable to problems where the path is
    irrelevant (e.g., 8-queen), unless the path is
    encoded in the state
  • Many similarities with optimization techniques

18
Hill-climbing search
  • If there exists a successor s of the current
    state n such that
  • h(s) < h(n), and
  • h(s) ≤ h(t) for all successors t of n,
  • then move from n to s; otherwise, halt at n.
  • Looks one step ahead to determine if any
    successor is better than the current state; if
    there is one, move to the best successor.
  • Similar to greedy search in that it uses h, but
    does not allow backtracking or jumping to an
    alternative path, since it doesn't remember
    where it has been.
  • Not complete, since the search will terminate at
    "local minima," "plateaus," and "ridges."

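This rule in Python, as a sketch (the h and successors helpers are assumed, as above):

    def hill_climb(n, h, successors):
        while True:
            succs = successors(n)
            if not succs:
                return n               # no successors at all: halt
            s = min(succs, key=h)      # best one-step-lookahead successor
            if h(s) < h(n):
                n = s                  # strictly better: move to it
            else:
                return n               # local minimum or plateau: halt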
19
Hill climbing on a surface of states
  • Height Defined by Evaluation Function

20
Robot Navigation
Local-minimum problem
f(N) = h(N) = straight-line distance to the goal
21
Drawbacks of hill climbing
  • Problems:
  • Local maxima: peaks that aren't the highest point
    in the space
  • Plateaus: the space has a broad flat region that
    gives the search algorithm no direction (random
    walk)
  • Ridges: flat like a plateau, but with drop-offs to
    the sides; steps to the north, east, south, and
    west may go down, but a step to the NW may go up.
  • Remedy:
  • Introduce randomness
  • Random restart
  • Some problem spaces are great for hill climbing
    and others are terrible.

22
Examples of problems with HC
  • http://www.ndsu.nodak.edu/instruct/juell/vp/cs724s00/hill_climbing/hill_climbing.html

23
Hill climbing example
[Figure: 8-puzzle hill-climbing trace from start to goal; successive states are labeled with values between -5 and 0]
f(n) = -(number of tiles out of place)
24
Example of a local maximum
[Figure: 8-puzzle example of a local maximum; every successor of the trapped state has value -5, so hill climbing halts before reaching the goal]
25
Steepest Descent
  • S ← initial state
  • Repeat:
  • S' ← arg min_{S' ∈ SUCCESSORS(S)} h(S')
  • if GOAL?(S') return S'
  • if h(S') < h(S) then S ← S' else return failure
  • Similar to:
  • - hill climbing with -h
  • - gradient descent over continuous space
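The pseudocode line by line in Python, with the same assumed helpers (a sketch, not a definitive implementation):

    def steepest_descent(S, h, successors, is_goal):
        while True:
            S_prime = min(successors(S), key=h)   # arg min over SUCCESSORS(S)
            if is_goal(S_prime):
                return S_prime
            if h(S_prime) < h(S):
                S = S_prime                       # strictly downhill: continue
            else:
                return None                       # stuck: failure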

26
Application: 8-Queens
  • Repeat n times:
  • Pick an initial state S at random, with one queen
    in each column
  • Repeat k times:
  • If GOAL?(S) then return S
  • Pick an attacked queen Q at random
  • Move Q within its column to minimize the number of
    attacking queens → new S (min-conflicts
    heuristic)
  • Return failure
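A sketch of this random-restart min-conflicts loop, using the usual encoding state[c] = row of the queen in column c (all names are illustrative):

    import random

    def conflicts(state, col):
        # Queens attacking the queen in `col`: same row or same diagonal.
        return sum(1 for c in range(len(state))
                   if c != col and (state[c] == state[col]
                                    or abs(state[c] - state[col]) == abs(c - col)))

    def min_conflicts_queens(n=8, restarts=50, k=100):
        for _ in range(restarts):                  # "Repeat n times"
            state = [random.randrange(n) for _ in range(n)]
            for _ in range(k):                     # "Repeat k times"
                attacked = [c for c in range(n) if conflicts(state, c) > 0]
                if not attacked:
                    return state                   # GOAL?(S)
                q = random.choice(attacked)        # pick an attacked queen
                # Move Q within its column to the least-conflicted row.
                state[q] = min(range(n),
                               key=lambda r: conflicts(state[:q] + [r] + state[q+1:], q))
        return None                                # failure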

27
Application: 8-Queens
  • Why does it work?
  • There are many goal states that are
    well-distributed over the state space
  • If no solution has been found after a few
    steps, it's better to start all over again
  • Building a search tree would be much less
    efficient because of the high branching factor
  • Running time is almost independent of the number
    of queens
  • Repeat n times:
  • Pick an initial state S at random, with one queen
    in each column
  • Repeat k times:
  • If GOAL?(S) then return S
  • Pick an attacked queen Q at random
  • Move Q within its column to minimize the number of
    attacking queens → new S

28
Steepest Descent
  • S ← initial state
  • Repeat:
  • S' ← arg min_{S' ∈ SUCCESSORS(S)} h(S')
  • if GOAL?(S') return S'
  • if h(S') < h(S) then S ← S' else return failure
  • May easily get stuck in local minima
  • Random restart (as in the n-queens example)
  • Monte Carlo descent

29
Monte Carlo Descent
  • S ← initial state
  • Repeat k times:
  • If GOAL?(S) then return S
  • S' ← successor of S picked at random
  • if h(S') ≤ h(S) then S ← S'
  • else
  • Δh = h(S') - h(S)
  • with probability exp(-Δh/T), where T is called
    the temperature, do S ← S'
    (Metropolis criterion)
  • Return failure
  • Simulated annealing lowers T over the k
    iterations: it starts with a large T and slowly
    decreases T
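One Monte Carlo descent at a fixed temperature T, as a sketch (helpers assumed as before):

    import math, random

    def monte_carlo_descent(S, h, successors, is_goal, k=1000, T=1.0):
        for _ in range(k):
            if is_goal(S):
                return S
            S_prime = random.choice(successors(S))   # random successor
            dh = h(S_prime) - h(S)
            # Downhill moves are always accepted; uphill moves with
            # probability exp(-dh/T) (Metropolis criterion).
            if dh <= 0 or random.random() < math.exp(-dh / T):
                S = S_prime
        return None                                  # failure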

30
Simulated annealing
  • Simulated annealing (SA) exploits an analogy
    between the way in which a metal cools and
    freezes into a minimum-energy crystalline
    structure (the annealing process) and the search
    for a minimum or maximum in a more general
    system.
  • SA can avoid becoming trapped at local minima.
  • SA uses a random search that accepts changes that
    increase the objective function f, as well as some
    that decrease it.
  • SA uses a control parameter T, which by analogy
    with the original application is known as the
    system temperature.
  • T starts out high and gradually decreases toward
    0.
  • Applet: http://www.heatonresearch.com/articles/64/page1.html

31
Simulated annealing (cont.)
  • A bad move from A to B is accepted with
    probability
  • e^((f(B) - f(A))/T)
  • The higher the temperature, the more likely it is
    that a bad move can be made.
  • As T tends to zero, this probability tends to
    zero, and SA becomes more like hill climbing.
  • If T is lowered slowly enough, SA is complete and
    admissible.

32
The simulated annealing algorithm
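The pseudocode figure on this slide did not survive the transcript; here is a minimal sketch of simulated annealing with a geometric cooling schedule (the schedule parameters T0, alpha, and T_min are illustrative assumptions, not from the slides):

    import math, random

    def simulated_annealing(S, f, successors, T0=10.0, alpha=0.995, T_min=1e-3):
        # Maximizes f; a bad move from A to B is accepted with
        # probability e^((f(B) - f(A))/T), as on the previous slide.
        T, best = T0, S
        while T > T_min:
            B = random.choice(successors(S))
            delta = f(B) - f(S)
            if delta >= 0 or random.random() < math.exp(delta / T):
                S = B
            if f(S) > f(best):
                best = S
            T *= alpha          # T gradually decreases toward 0
        return best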
33
Parallel Local Search Techniques
  • They perform several local searches concurrently,
    but not independently
  • Beam search
  • Genetic algorithms
  • See R&N, pp. 115-119

34
Local Beam Search
  • Idea: keep track of k states rather than just one
  • Start with k randomly generated states
  • Repeat:
  • At each iteration, all the successors of all k
    states are generated
  • If any one is a goal state, stop
  • Else, select the k best successors from the
    complete list and repeat
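The loop as a Python sketch (same assumed helpers; a random_state generator is also assumed):

    def local_beam_search(k, h, random_state, successors, is_goal):
        states = [random_state() for _ in range(k)]       # k random starts
        while True:
            pool = [s for st in states for s in successors(st)]
            if not pool:
                return None                               # dead end
            for s in pool:
                if is_goal(s):                            # any goal: stop
                    return s
            # Keep the k best successors from the complete pooled list.
            states = sorted(pool, key=h)[:k]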

35
Local Beam Search
  • Not the same as k searches run in parallel!
  • Searches that find good states recruit other
    searches to join them
  • Problem:
  • quite often, all k states end up on the same local
    hill
  • Solution:
  • choose k successors randomly, biased towards good
    ones
  • Close analogy to natural selection

36
Genetic Algorithm (GA)
  • GA = stochastic local beam search + generate
    successors from pairs of states
  • State = a string over a finite alphabet (e.g., a
    string of 0s and 1s)
  • E.g., for 8-queens, the position of the queen in
    each column is denoted by a number
  • Crossover and mutation
  • http://www.heatonresearch.com/articles/65/page1.html
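A compact sketch of these ideas for 8-queens, with the state encoded as a length-8 list of queen rows; population size, mutation rate, and generation count are illustrative assumptions:

    import random

    def fitness(state):
        # Non-attacking queen pairs; 28 (= C(8,2)) means a solution.
        n = len(state)
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if state[i] != state[j] and abs(state[i] - state[j]) != j - i)

    def genetic_8queens(pop_size=100, generations=1000, p_mut=0.1):
        pop = [[random.randrange(8) for _ in range(8)] for _ in range(pop_size)]
        for _ in range(generations):
            for s in pop:
                if fitness(s) == 28:
                    return s
            # Fitness-biased parent selection (+1 avoids all-zero weights).
            weights = [fitness(s) + 1 for s in pop]
            new_pop = []
            for _ in range(pop_size):
                a, b = random.choices(pop, weights=weights, k=2)
                cut = random.randrange(1, 8)
                child = a[:cut] + b[cut:]              # one-point crossover
                if random.random() < p_mut:            # random mutation
                    child[random.randrange(8)] = random.randrange(8)
                new_pop.append(child)
            pop = new_pop
        return None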

37
Genetic Algorithm (GA)
38
Genetic Algorithm (GA)
  • Crossover helps iff substrings are meaningful
    components

39
Search problems
Blind search
Heuristic search: best-first and A*
Variants of A*
Construction of heuristics
Local search