Title: Heuristic (Informed) Search
1. Heuristic (Informed) Search
R&N Chap. 4, Sect. 4.1–3
2. Iterative Deepening A* (IDA*)
- Idea: reduce the memory requirement of A* by applying a cutoff on the values of f
- Consistent heuristic function h
- Algorithm IDA* (a Python sketch follows below):
  - Initialize the cutoff to f(initial-node)
  - Repeat:
    - Perform depth-first search, expanding all nodes N such that f(N) ≤ cutoff
    - Reset the cutoff to the smallest value of f among the non-expanded (leaf) nodes
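A minimal Python sketch of this loop, not from the original slides; successors(node), is_goal(node), and the heuristic h(node) are assumed problem-specific helpers:

import math

def ida_star(start, is_goal, successors, h):
    """Iterative Deepening A*: depth-first search bounded by an f-cutoff."""
    cutoff = h(start)  # f(initial-node) = g + h = 0 + h(start)

    def dfs(node, g, cutoff, path):
        f = g + h(node)
        if f > cutoff:
            return f, None              # pruned: report f for the next cutoff
        if is_goal(node):
            return f, path
        smallest = math.inf             # smallest f among pruned (leaf) nodes
        for child, step_cost in successors(node):
            child_f, solution = dfs(child, g + step_cost, cutoff, path + [child])
            if solution is not None:
                return child_f, solution
            smallest = min(smallest, child_f)
        return smallest, None

    while True:
        smallest, solution = dfs(start, 0, cutoff, [start])
        if solution is not None:
            return solution
        if smallest == math.inf:
            return None                 # search space exhausted, no solution
        cutoff = smallest               # reset cutoff to smallest pruned f-value

Note that the sketch keeps no visited set, so it may revisit states; this is exactly the drawback discussed on slide 15.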
3–14. 8-Puzzle (IDA* trace)
f(N) = g(N) + h(N), with h(N) = number of misplaced tiles
Slides 3–7 step through the first depth-first iteration with cutoff = 4; slides 8–14 repeat the search with the cutoff raised to 5. [Search-tree figures omitted.]
15. Advantages/Drawbacks of IDA*
- Advantages:
  - Still complete and optimal
  - Requires less memory than A*
  - Avoids the overhead of sorting the fringe
- Drawbacks:
  - Can't avoid revisiting states not on the current path
  - Essentially a DFS
  - Available memory is poorly used (→ memory-bounded search, see R&N pp. 101–104)
16. Another approach
- Local search algorithms
  - Hill-climbing or gradient descent
  - Simulated annealing
  - Genetic algorithms, others
17. Local Search
- Light-memory search method
- No search tree: only the current state is represented!
- Only applicable to problems where the path is irrelevant (e.g., 8-queens), unless the path is encoded in the state
- Many similarities with optimization techniques
18. Hill-climbing search
- If there exists a successor s of the current state n such that
  - h(s) < h(n), and
  - h(s) ≤ h(t) for all successors t of n,
- then move from n to s; otherwise, halt at n.
- Looks one step ahead to determine if any successor is better than the current state; if there is one, moves to the best successor.
- Similar to greedy search in that it uses h, but does not allow backtracking or jumping to an alternative path, since it doesn't remember where it has been.
- Not complete, since the search will terminate at "local minima," "plateaus," and "ridges."
19. Hill climbing on a surface of states
Height = value of the evaluation function. [Surface figure omitted.]
20. Robot Navigation
Local-minimum problem
f(N) = h(N) = straight-line distance to the goal
21. Drawbacks of hill climbing
- Problems:
  - Local maxima: peaks that aren't the highest point in the space
  - Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk)
  - Ridges: flat like a plateau, but with drop-offs to the sides; steps to the North, East, South, and West may go down, but a step to the NW may go up
- Remedy:
  - Introduce randomness
  - Random restart
- Some problem spaces are great for hill climbing and others are terrible.
22. Examples of problems with HC
- http://www.ndsu.nodak.edu/instruct/juell/vp/cs724s00/hill_climbing/hill_climbing.html
23. Hill climbing example
f(n) = −(number of tiles out of place)
[8-puzzle diagram omitted: successive states from start to goal, each labeled with its f value; hill climbing moves to the successor with the best f at each step.]
24. Example of a local maximum
[8-puzzle diagram omitted: a state with f = −4 whose successors all have f = −5, while the goal has f = 0; hill climbing halts at the f = −4 state.]
25. Steepest Descent
- S ← initial state
- Repeat:
  - S' ← argmin over S' ∈ SUCCESSORS(S) of h(S')
  - if GOAL?(S') return S'
  - if h(S') < h(S) then S ← S' else return failure
- Similar to:
  - hill climbing with −h
  - gradient descent over continuous space
(See the Python sketch below.)
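A direct Python rendering of the pseudocode above, not from the original slides; successors(S), is_goal(S), and h(S) are assumed problem-specific helpers:

def steepest_descent(initial, is_goal, successors, h):
    S = initial
    while True:
        succ = list(successors(S))
        if not succ:
            return None                   # dead end
        S_prime = min(succ, key=h)        # argmin over all successors
        if is_goal(S_prime):
            return S_prime
        if h(S_prime) < h(S):
            S = S_prime                   # strictly downhill move
        else:
            return None                   # failure: local minimum or plateau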
26. Application: 8-Queens
- Repeat n times:
  - Pick an initial state S at random with one queen in each column
  - Repeat k times:
    - If GOAL?(S) then return S
    - Pick an attacked queen Q at random
    - Move Q within its column to minimize the number of attacking queens → new S (min-conflicts heuristic)
- Return failure
(A sketch of this procedure follows.)
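A Python sketch of this random-restart min-conflicts procedure, not from the original slides; a state is a list board where board[c] is the row of the queen in column c:

import random

def conflicts(board, col):
    """Number of queens attacking the queen in column col."""
    n = len(board)
    return sum(1 for c in range(n)
               if c != col and (board[c] == board[col]
                                or abs(board[c] - board[col]) == abs(c - col)))

def min_conflicts_queens(n=8, restarts=50, k=100):
    for _ in range(restarts):                            # "Repeat n times"
        board = [random.randrange(n) for _ in range(n)]  # one queen per column
        for _ in range(k):                               # "Repeat k times"
            attacked = [c for c in range(n) if conflicts(board, c) > 0]
            if not attacked:
                return board                             # GOAL?(S): no attacks
            col = random.choice(attacked)                # attacked queen at random
            # move it within its column to minimize the number of attacks
            board[col] = min(range(n),
                             key=lambda r: conflicts(board[:col] + [r] + board[col+1:], col))
    return None                                          # failure after all restarts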
27. Application: 8-Queens
- Why does it work?
  - There are many goal states that are well-distributed over the state space
  - If no solution has been found after a few steps, it's better to start all over again
  - Building a search tree would be much less efficient because of the high branching factor
  - Running time is almost independent of the number of queens
28. Steepest Descent
- S ← initial state
- Repeat:
  - S' ← argmin over S' ∈ SUCCESSORS(S) of h(S')
  - if GOAL?(S') return S'
  - if h(S') < h(S) then S ← S' else return failure
- May easily get stuck in local minima
  - Random restart (as in the n-queens example)
  - Monte Carlo descent
29. Monte Carlo Descent
- S ← initial state
- Repeat k times:
  - If GOAL?(S) then return S
  - S' ← successor of S picked at random
  - if h(S') ≤ h(S) then S ← S'
  - else
    - Δh = h(S') − h(S)
    - with probability exp(−Δh/T), where T is called the temperature, do S ← S' (Metropolis criterion)
- Return failure
- Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases T.
30. Simulated annealing
- Simulated annealing (SA) exploits an analogy between the way in which a metal cools and freezes into a minimum-energy crystalline structure (the annealing process) and the search for a minimum or maximum in a more general system.
- SA can avoid becoming trapped at local minima.
- SA uses a random search that accepts changes that increase the objective function f, as well as some that decrease it.
- SA uses a control parameter T, which by analogy with the original application is known as the system temperature.
- T starts out high and gradually decreases toward 0.
- Applet: http://www.heatonresearch.com/articles/64/page1.html
31. Simulated annealing (cont.)
- A bad move from A to B is accepted with probability e^((f(B) − f(A))/T)
- The higher the temperature, the more likely it is that a bad move will be made.
- As T tends to zero, this probability tends to zero, and SA becomes more like hill climbing.
- If T is lowered slowly enough, SA is complete and admissible.
32. The simulated annealing algorithm
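The slide's figure is not reproduced here; below is a minimal Python sketch of simulated annealing (maximizing f) under an assumed geometric cooling schedule, with random_successor(S) and the objective f as hypothetical problem-specific helpers:

import math
import random

def simulated_annealing(initial, f, random_successor, T0=1.0, alpha=0.995, steps=10000):
    S, T = initial, T0
    for _ in range(steps):
        S_prime = random_successor(S)
        delta = f(S_prime) - f(S)
        # accept good moves always, bad moves with probability e^((f(B)-f(A))/T)
        if delta >= 0 or random.random() < math.exp(delta / T):
            S = S_prime
        T *= alpha                    # slowly lower the temperature toward 0
    return S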
33. Parallel Local Search Techniques
- They perform several local searches concurrently, but not independently
- Beam search
- Genetic algorithms
- See R&N, pages 115–119
34. Local Beam Search
- Idea: keep track of k states rather than just one
- Start with k randomly generated states
- Repeat:
  - At each iteration, all the successors of all k states are generated
  - If any one is a goal state, stop
  - Else select the k best successors from the complete list and repeat
(A sketch follows.)
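A minimal Python sketch of this loop, not from the original slides; random_state(), successors(S), is_goal(S), and h(S) are assumed problem-specific helpers:

def local_beam_search(k, random_state, successors, is_goal, h, max_iters=1000):
    states = [random_state() for _ in range(k)]          # k random initial states
    for _ in range(max_iters):
        # generate all successors of all k current states
        candidates = [s2 for s in states for s2 in successors(s)]
        for s in candidates:
            if is_goal(s):
                return s                                 # stop on any goal state
        if not candidates:
            return None
        states = sorted(candidates, key=h)[:k]           # keep the k best successors
    return None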
35. Local Beam Search
- Not the same as k searches run in parallel!
  - Searches that find good states recruit other searches to join them
- Problem: quite often, all k states end up on the same local hill
- Solution: choose k successors randomly, biased towards good ones
- Close analogy to natural selection
36. Genetic Algorithm (GA)
- GA = stochastic local beam search + generating successors from pairs of states
- State = a string over a finite alphabet (e.g., a string of 0s and 1s)
- E.g., for 8-queens, the position of the queen in each column is denoted by a number
- Crossover and mutation
- http://www.heatonresearch.com/articles/65/page1.html
37. Genetic Algorithm (GA)
38. Genetic Algorithm (GA)
- Crossover helps iff substrings are meaningful components (a toy sketch follows)
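A toy Python sketch of a GA for 8-queens in the encoding above (digit i of the string = row of the queen in column i), not from the original slides; fitness counts non-attacking pairs, with 28 meaning a solution:

import random

def fitness(s):
    pairs = 0
    for i in range(8):
        for j in range(i + 1, 8):
            if s[i] != s[j] and abs(int(s[i]) - int(s[j])) != j - i:
                pairs += 1                     # non-attacking pair of queens
    return pairs

def crossover(a, b):
    cut = random.randrange(1, 8)               # single-point crossover
    return a[:cut] + b[cut:]

def mutate(s, rate=0.1):
    if random.random() < rate:
        i = random.randrange(8)
        s = s[:i] + str(random.randrange(8)) + s[i+1:]
    return s

def genetic_algorithm(pop_size=100, generations=1000):
    pop = [''.join(str(random.randrange(8)) for _ in range(8))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 28:
            return pop[0]                      # all 28 pairs non-attacking
        parents = pop[:pop_size // 2]          # fitness-based selection
        pop = [mutate(crossover(*random.sample(parents, 2)))
               for _ in range(pop_size)]
    return pop[0]                              # best individual found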
39. Search problems
Blind search
Heuristic search: best-first and A*
Variants of A*
Construction of heuristics
Local search