Problem Solving by Search, by Jin Hyung Kim, Computer Science Department, KAIST

1
Problem Solving by Search
by Jin Hyung Kim
Computer Science Department
KAIST
2
Example of Representation
  • Euler Path

3
Graph Theory
  • A graph consists of
  • A set of nodes (may be infinite)
  • A set of arcs (links)
  • Directed graph, underlying graph, tree
  • Notations
  • node, start node(root), leaf (tip node), root,
    path, ancestor, descendant, child(children, son),
    parent(father), cycle, DAG, connected, locally
    finite graph, node expansion

4
State Space Representation
  • Basic components
  • A set of states S
  • A set of operators o: S → S
  • A control strategy c: Sⁿ → O
  • State space graph
  • state → node
  • operator → arc
  • Four-tuple representation
  • [N, A, S, GD]; solution path

5
Examples of SSR
  • Tic-Tac-Toe
  • (n²-1)-puzzle
  • Traveling salesperson problem (TSP)

6
Search Strategies
  • A strategy is defined by picking the order of node expansion
  • Search directions
  • Forward searching (from start to goal)
  • Backward searching (from goal to start)
  • Bidirectional
  • Irrevocable vs. revocable
  • Irrevocable strategy: Hill-Climbing
  • Most popular in human problem solving
  • No shift of attention to suspended alternatives
  • May end up at a local maximum
  • Commutative assumption: applying an inappropriate operator may delay, but never prevents, the eventual discovery of solutions
  • Revocable strategy: tentative control
  • One alternative is chosen; the others are kept in reserve

7
Evaluation of Search Strategies
  • Completeness
  • Does it always find a solution if one exists?
  • Time complexity
  • Number of nodes generated/expanded
  • Space complexity
  • Maximum number of nodes held in memory
  • Optimality
  • Does it always find a least-cost solution?
  • An algorithm is admissible if it terminates with an optimal solution
  • Time and space complexity are measured by
  • b: maximum branching factor of the search tree
  • d: depth of the least-cost solution
  • m: maximum depth of the state space

8
Implementing Search Strategies
  • Uninformed search
  • The search does not depend on the nature of the solution
  • Systematic search methods
  • Breadth-First Search
  • Depth-First Search (backtracking)
  • Depth-limited Search
  • Uniform Cost Search
  • Iterative Deepening Search
  • Informed or heuristic search
  • Best-first Search
  • Greedy search (h only)
  • A* search (g + h)
  • Iterative deepening A* search

9
X-First Search Algorithm
[Flowchart: if OPEN is empty, fail; otherwise remove the first node n from OPEN; if n is a goal, succeed; otherwise expand n and put its successors on OPEN.]
10
Comparison of BFS and DFS
  • Selection strategy from OPEN
  • BFS: FIFO (queue)
  • DFS: LIFO (stack)
  • BFS always terminates if a goal exists
  • cf. DFS may never terminate on a locally finite but infinite tree
  • Guarantees the shortest path to the goal: BFS
  • Space requirement
  • BFS: exponential
  • DFS: linear,
  • keeps only the children of the nodes on the current path
  • Which is better, BFS or DFS?

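The only difference between the two is the discipline of the OPEN list; a minimal Python sketch of the generic loop (the adjacency-dict graph and node names are made up for illustration):

from collections import deque

def x_first_search(graph, start, goal, strategy="bfs"):
    """Generic X-first search: BFS uses a FIFO queue, DFS a LIFO stack."""
    open_list = deque([start])          # OPEN: frontier of unexpanded nodes
    closed = set()                      # CLOSED: already expanded nodes
    while open_list:
        n = open_list.popleft() if strategy == "bfs" else open_list.pop()
        if n == goal:
            return True                 # goal found
        if n in closed:
            continue
        closed.add(n)
        open_list.extend(graph.get(n, []))   # expand n, put successors on OPEN
    return False                        # OPEN exhausted: no solution

g = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(x_first_search(g, "A", "E", "bfs"), x_first_search(g, "A", "E", "dfs"))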
11
Depth-limited Search
  • Depth-first search with a depth limit
  • Nodes at the depth limit are treated as if they have no successors

12
Uniform Cost Search
  • A generalized version of Breadth-First Search
  • c(ni, nj): cost of going from ni to nj
  • g(n): (tentative minimal) cost of a path from s to n
  • Guaranteed to find the minimum-cost path
  • Dijkstra's algorithm

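A minimal sketch of uniform cost search with a priority queue ordered by g(n); the example graph and costs are hypothetical:

import heapq

def uniform_cost_search(graph, start, goal):
    """graph[n] is a list of (successor, edge_cost) pairs; returns minimal g(goal)."""
    open_heap = [(0, start)]            # OPEN ordered by tentative path cost g(n)
    best_g = {start: 0}
    while open_heap:
        g, n = heapq.heappop(open_heap)
        if n == goal:
            return g                    # first expansion of the goal is optimal
        if g > best_g.get(n, float("inf")):
            continue                    # stale OPEN entry
        for m, c in graph.get(n, []):
            if g + c < best_g.get(m, float("inf")):
                best_g[m] = g + c
                heapq.heappush(open_heap, (g + c, m))
    return None                         # goal unreachable

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("goal", 5)], "b": [("goal", 1)]}
print(uniform_cost_search(g, "s", "goal"))   # 4, via s-a-b-goal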
13
Uniform Cost Search Algorithm
[Flowchart: remove from OPEN the node n with the smallest g(n); if n is a goal, succeed; otherwise expand n, compute g of each successor, and put the successors on OPEN.]
14
Iterative Deepening Search
  • A compromise between BFS and DFS
  • Iterative deepening search: depth-first search with an increasing depth limit
  • Saves storage, guarantees the shortest path
  • The additional node expansion is negligible
  • Can you apply this idea to uniform cost search?

proc Iterative_Deepening_Search(Root)
begin
  Success := 0
  for (depth_bound := 1; Success != 1; depth_bound := depth_bound + 1)
    depth_first_search(Root, depth_bound)
    if goal found, Success := 1
end
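A small runnable Python version of the pseudocode above, assuming the graph is given as an adjacency dict (the example graph is made up):

def depth_limited_search(graph, node, goal, limit):
    """DFS that treats nodes at the depth limit as having no successors."""
    if node == goal:
        return True
    if limit == 0:
        return False
    return any(depth_limited_search(graph, m, goal, limit - 1)
               for m in graph.get(node, []))

def iterative_deepening_search(graph, root, goal, max_depth=50):
    for depth_bound in range(max_depth + 1):   # restart DFS with a growing limit
        if depth_limited_search(graph, root, goal, depth_bound):
            return depth_bound                 # depth of the shallowest goal
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["F"]}
print(iterative_deepening_search(g, "A", "F"))   # 3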
15
Iterative Deepening (l = 0)
16
Iterative Deepening (l = 1)
17
Iterative Deepening (l = 2)
18
Iterative Deepening (l = 3)
19
Properties of IDS
  • Complete??
  • Time complexity
  • d·b¹ + (d-1)·b² + ... + 1·b^d = O(b^d)
  • The bottom level is generated once, the top level d times
  • Space complexity: O(b·d)
  • Optimal?? Yes, if step cost = 1
  • Can it be modified to explore a uniform-cost tree?
  • Numerical comparison of speed (number of nodes expanded)
  • b = 10 and d = 5, solution at the far right
  • N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
  • N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100

20
Repeated States
  • Failure to detect repeated states can turn a linear problem into an exponential one!
  • Search on a graph

21
Summary of Algorithms
22
Informed Search
23
8-Puzzle Heuristics
  • Which is the best move among a, b, c ?

24
Road Map Problem
  • To get to Bucharest, which city do you choose to visit next from Arad: Zerind, Sibiu, or Timisoara?
  • What is your rationale?

25
Best-First Search
  • Idea: use an evaluation function for each node
  • Estimate of desirability
  • We use the notation f(·)
  • Special cases, depending on f(·)
  • Greedy search
  • Uniform cost search
  • A* search algorithm

26
Best-First Search Algorithm (for tree search)
[Flowchart: remove from OPEN the node n with the best f value; if n is a goal, succeed; otherwise expand n, evaluate f for each successor, and put the successors on OPEN.]
27
Generic form of Best-First Search
  • Best-first algorithm with f(n) = g(n) + h(n)
  • where
  • g(n): cost of the path from the start node to node n
  • h(n): heuristic estimate of the cost from n to a goal
  • h(G) = 0, h(n) ≥ 0
  • Also called the A algorithm
  • Uniform cost algorithm
  • When h(n) = 0 always
  • Greedy search algorithm
  • When g(n) = 0 always
  • Expands the node that appears to be closest to the goal
  • Complete??
  • May get stuck in a locally optimal state
  • Greedy search may not lead to optimality
  • Algorithm A* if h(n) ≤ h*(n)

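A compact sketch of the f = g + h best-first loop described above; the graph, costs, and heuristic table are illustrative assumptions, with h chosen to be admissible:

import heapq

def a_star(graph, h, start, goal):
    """Best-first search with f(n) = g(n) + h(n); optimal when h never overestimates."""
    open_heap = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)  # expand the node with minimum f
        if n == goal:
            return g, path
        for m, cost in graph.get(n, []):
            g2 = g + cost
            if g2 < best_g.get(m, float("inf")):
                best_g[m] = g2
                heapq.heappush(open_heap, (g2 + h(m), g2, m, path + [m]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}.get          # admissible estimates
print(a_star(graph, h, "S", "G"))                 # (5, ['S', 'B', 'G'])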
28
Examples of Admissible heuristics
  • h(n) ≤ h*(n) for all n
  • Air distance never overestimates the actual road distance
  • 8-tile problem
  • Number of misplaced tiles
  • Sum of Manhattan distances
  • TSP
  • Length of the minimum spanning tree


[Figure: a current 8-puzzle board and the goal board. Number of misplaced tiles = 4; sum of Manhattan distances = 1 + 2 + 0 + 0 + 0 + 2 + 0 + 1 = 6]
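Both 8-puzzle heuristics can be written in a few lines; the boards below are a hypothetical example (not necessarily the one in the original figure), with tiles stored row by row and 0 for the blank:

def misplaced_tiles(state, goal):
    """h1: number of tiles not in their goal position (the blank is excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal, width=3):
    """h2: sum over tiles of |row - goal_row| + |col - goal_col|."""
    total = 0
    for tile in state:
        if tile == 0:
            continue
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total

current = (2, 8, 3, 1, 6, 4, 7, 0, 5)   # hypothetical current board
goal    = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # hypothetical goal board
print(misplaced_tiles(current, goal), manhattan_distance(current, goal))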
29
Algorithm A*
  • f(n) = g(n) + h(n), where h(n) ≤ h*(n) for all n
  • Find the minimum f(n) to expand next
  • Role of h(n)
  • Directs the search toward the goal
  • Role of g(n)
  • Guards against roaming due to imperfect heuristics

[Diagram: start node --g(n)--> current node n --h(n)--> goal]
30
A* Search Example
Romania, with costs in km
31
(No Transcript)
32
(No Transcript)
33
(No Transcript)
34
(No Transcript)
35
(No Transcript)
36
Nodes not expanded (node pruning)
37
Algorithm A* is admissible
  • Suppose some suboptimal goal G2 has been generated and is in OPEN. Let n be an unexpanded node on a shortest path to an optimal goal G1
  • f(G2) = g(G2), since h(G2) = 0
  • > g(G1), since G2 is suboptimal
  • ≥ f(n), since h is admissible
  • Since f(G2) > f(n), A* will never select G2 for expansion

[Diagram: from the start node, one path leads through n to the optimal goal G1, another to the suboptimal goal G2]
38
A* expands nodes in order of increasing f value
  • Gradually adds f-contours of nodes (cf. breadth-first search adds layers)
  • Contour i contains all nodes with f = fi, where fi < fi+1
  • Any node with f(n) = g(n) + h(n) < C* will be expanded eventually
  • A* terminates even on a locally finite graph (completeness)

39
Monotonicity (consistency)
  • A heuristic function is monotone if
  • for all states ni and nj ∈ suc(ni)
  • h(ni) - h(nj) ≤ cost(ni, nj)
  • and h(goal) = 0
  • A monotone heuristic is admissible

40
Uniform Cost Search's f-contours
[Contour diagram from Start to Goal]
41
A*'s f-contours
[Contour diagram from Start to Goal]
42
Greedy's f-contours
[Contour diagram from Start to Goal]
43
More Informedness (Dominance)
  • For two admissible heuristics h1 and h2, h2 is more informed than h1 if
  • h1(n) ≤ h2(n) for all n
  • For the 8-tile problem
  • h1: number of misplaced tiles
  • h2: sum of Manhattan distances
  • Combining several admissible heuristics
  • h(n) = max{h1(n), ..., hn(n)}

[Graph: h*(n) bounding h2(n), which bounds h1(n), all above 0]
44
Semi-admissible Heuristics and Risky Heuristics
  • If h(n) ≤ h*(n)·(1 + ε), then C(n) ≤ (1 + ε)·C*(n)
  • A small sacrifice in cost saves a lot of search computation
  • Semi-admissible heuristics save a lot in difficult problems
  • in cases where the costs of solution paths are quite similar
  • ε-admissible
  • Use of non-admissible heuristics, with risk
  • Utilize heuristic functions which are admissible in most cases
  • Statistically obtained heuristics

Pearl, J., and Kim, J. H. Studies in semi-admissible heuristics. IEEE Trans. PAMI-4, 4 (1982), 392-399.
45
Dynamic Use of Heuristics
  • f(n) = g(n) + h(n) + ε·[1 - d(n)/N]·h(n)
  • d(n): depth of node n
  • N: (expected) depth of the goal
  • At shallow levels: depth-first excursion
  • At deep levels: assumes admissibility
  • Modify the heuristic during search
  • Utilize information obtained early in the search process to modify the heuristic used in the later part of the search

46
Inventing Admissible Heuristics: Relaxed Problems
  • An admissible heuristic is the exact solution cost of a relaxed problem
  • 8-puzzle
  • A tile can jump → number of misplaced tiles
  • A tile can move to an adjacent cell even if it is occupied → Manhattan distance heuristic
  • Automatic heuristic generator: ABSOLVER (Prieditis, 1993)
  • Traveling salesperson problem
  • Cost of the minimum spanning tree ≤ cost of the TSP tour
  • The minimum spanning tree can be computed in O(n²)

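A sketch of the relaxed-problem heuristic for TSP mentioned above: Prim's O(n²) algorithm computes the MST cost, which never overestimates the cost of an optimal tour; the distance matrix is hypothetical:

def mst_cost(dist):
    """Prim's algorithm in O(n^2); an admissible lower bound on the TSP tour cost."""
    n = len(dist)
    in_tree = [False] * n
    best = [float("inf")] * n           # cheapest edge connecting each node to the tree
    best[0], total = 0, 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v] = dist[u][v]
    return total

d = [[0, 2, 9, 10],                     # hypothetical symmetric distances for 4 cities
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
print(mst_cost(d))                      # 12.0: any TSP tour costs at least this much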
47
Inventing Admissible Heuristics: Subproblems
  • Solve subproblems
  • Take the max of the heuristics of the sub-problems in the pattern database
  • 1/1,000 of the nodes are generated in the 15-puzzle compared with the Manhattan heuristic
  • Disjoint sub-problems
  • 1/10,000 in the 24-puzzle compared with Manhattan

48
Iterative Deepening A* (IDA*)
  • The iterative deepening version of A*
  • Uses a threshold on f instead of a depth bound
  • Find a solution whose f(·) stays under the threshold
  • Increase the threshold to the minimum f(·) that exceeded it in the previous cycle
  • Still admissible
  • Same order of node expansion
  • Storage-efficient, practical
  • But suffers with real-valued f(·)
  • a large number of iterations

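A minimal recursive IDA* sketch following the description above (the threshold starts at h(s) and grows to the smallest f that exceeded it); the example graph and heuristic values are assumptions:

def ida_star(graph, h, start, goal):
    """IDA*: depth-first search bounded by an f = g + h threshold that grows each pass."""
    def dfs(node, g, threshold, path):
        f = g + h(node)
        if f > threshold:
            return f, None                    # report the smallest f exceeding the bound
        if node == goal:
            return f, path
        next_threshold = float("inf")
        for m, cost in graph.get(node, []):
            t, found = dfs(m, g + cost, threshold, path + [m])
            if found is not None:
                return t, found
            next_threshold = min(next_threshold, t)
        return next_threshold, None

    threshold = h(start)                      # initial threshold = h(s)
    while True:
        threshold, found = dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if threshold == float("inf"):
            return None                       # no solution

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}.get
print(ida_star(graph, h, "S", "G"))           # ['S', 'B', 'G']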
49
Iterative Deepening A* Search Algorithm (for tree search)
[Flowchart: set the threshold to h(s); repeatedly take a node n from OPEN and test whether n is a goal; otherwise expand n, compute f(·) for each successor, and put a successor on OPEN (with a pointer back to n) only if f(successor) ≤ the threshold; when OPEN is exhausted, set the threshold to the minimum f(·) value that exceeded it and restart.]
50
Memory-Bounded Heuristic Search
  • Recursive best-first search
  • A variation of depth-first search
  • Keeps track of the f-value of the best alternative path
  • Unwinds if the f-values of all children exceed the best alternative
  • When unwinding, stores the f-value of the best child as the node's f-value
  • When needed, the parent regenerates its children again
  • Memory-bounded A*
  • When OPEN is full, deletes the worst node from OPEN, storing its f-value in its parent
  • The deleted node is regenerated when all other candidates look worse than it

51
Performance Measure
  • Penetrance
  • How well the search algorithm focuses on the goal rather than wandering off in irrelevant directions
  • P = L / T
  • L: depth of the goal
  • T: total number of nodes expanded
  • Effective branching factor (B)
  • B + B² + B³ + ... + B^L = T
  • Less dependent on L

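The effective branching factor B has no closed form in general, but it can be found numerically from B + B² + ... + B^L = T; a small bisection sketch with made-up values of T and L:

def effective_branching_factor(T, L, tol=1e-6):
    """Solve B + B^2 + ... + B^L = T for B by bisection."""
    def total(B):
        return sum(B ** i for i in range(1, L + 1))
    lo, hi = 1.0 + 1e-9, float(T)       # the true B lies between 1 and T
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if total(mid) < T else (lo, mid)
    return (lo + hi) / 2

# Hypothetical run: goal at depth 5, 52 nodes expanded in total.
print(round(effective_branching_factor(T=52, L=5), 3))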
52
Local Search and Optimization
53
Local Search is irrevocable
  • Local search: irrevocable search
  • Less memory required
  • Gives reasonable solutions in large (continuous) space problems
  • Can be formulated as searching for an extreme value of an objective function
  • find i* = argmax_i Obj(pi)
  • where pi is a parameter setting

54
Search for Optimal Parameter
  • Deterministic methods
  • Step-by-step procedures
  • Hill-climbing search, gradient search
  • e.g., the error back-propagation algorithm
  • Finding the optimal weight matrix in neural network training
  • Stochastic methods
  • Iteratively improve the parameters
  • Make a pseudo-random change and retain it if it improves
  • Metropolis algorithm
  • Simulated annealing algorithm
  • Genetic algorithm

55
Hill Climbing Search
  • 1. Set n to be the initial node
  • 2. If obj(n) > max_i obj(child_i(n)), then exit
  • 3. Set n to be the highest-valued child of n
  • 4. Return to step 2
  • No previous state information
  • No backtracking
  • No jumping
  • Gradient search
  • Hill climbing with continuous, differentiable functions
  • Step width?
  • Slow near the optimum

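A minimal sketch of the hill-climbing loop in steps 1-4, for a toy one-dimensional objective (the objective and neighbor function are made up):

import random

def hill_climbing(obj, neighbors, start):
    """Irrevocable local search: always move to the best-valued neighbor."""
    n = start
    while True:
        best = max(neighbors(n), key=obj, default=None)
        if best is None or obj(best) <= obj(n):
            return n                    # no uphill move: local (maybe global) maximum
        n = best                        # no backtracking, no memory of alternatives

obj = lambda x: -(x - 7) ** 2           # single peak at x = 7
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(obj, neighbors, start=random.randint(0, 20)))   # 7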
56
State space landscape
Real World
57
Hill-climbing Drawbacks
  • Local maxima
  • Ridges
  • Straying on plateaus
  • Slow progress on plateaus
  • Determination of a proper step size
  • Cure
  • Random restart
  • Good when there are only a few local maxima

[Diagram: state-space landscape showing local maxima and the global maximum]
58
Local Beam Search
  • Keep track of the best k states instead of 1 as in hill-climbing
  • Full utilization of the given memory
  • Variation: stochastic beam search
  • Select the k successors randomly

59
Iterative Improvement Algorithm
  • Basic idea
  • Start with an initial setting
  • Generate a random solution
  • Iteratively improve its quality
  • Good for hard, practical problems
  • Because it keeps only the current state and does no look-ahead beyond its neighbors
  • Advanced implementations
  • Metropolis algorithm
  • Simulated annealing algorithm
  • Genetic algorithm

60
Metropolis algorithm
  • A Monte Carlo method (statistical simulation with random numbers)
  • The objective is to reach the state minimizing an energy function
  • Instead of always going downhill, try to go downhill most of the time
  • Escape local minima by allowing some bad moves
  • 1. Randomly generate a new state, Y, from state X
  • 2. If ΔE (the energy difference between Y and X) < 0
  • then move to Y (set X to Y) and go to 1
  • 3. Else
  • 3.1 select a random number ξ
  • 3.2 if ξ < exp(-ΔE / T)
  • then move to Y (set X to Y) and go to 1
  • 3.3 else go to 1

61
From Statistical Mechanics
  • In thermal equilibrium, the probability of state i is P(i) ∝ exp(-Ei / kT), where
  • Ei: energy of state i
  • T: absolute temperature
  • k: Boltzmann constant
  • In NN
  • define an analogous energy function

62
Probability of State Transition by ΔE
63
Simulated Annealing
  • Simulates the slow cooling of the annealing process
  • Applied to combinatorial optimization problems by S. Kirkpatrick (1983)
  • To overcome the local minima problem
  • Escapes local minima by allowing some bad moves, but gradually decreases their size and frequency
  • Widely used in VLSI layout, airline scheduling, etc.
  • What is annealing?
  • The process of slowly cooling down a compound or substance
  • Slow cooling lets the substance flow around → thermodynamic equilibrium
  • Molecules reach an optimal conformation

64
Simulated Annealing algorithm
  • function Simulated-Annealing(problem, schedule) returns a solution state
  • inputs: problem, a problem
  • local variables: current, a node
  •                  next, a node
  •                  T, a temperature controlling the probability of downward steps
  • current ← Make-Node(Initial-State[problem])
  • for t ← 1 to infinity do
  • T ← schedule[t]
  • if T = 0 then return current
  • next ← a randomly selected successor of current
  • ΔE ← Value[next] - Value[current]
  • if ΔE > 0 then current ← next
  • else current ← next only with probability e^(ΔE/T)

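A Python sketch corresponding to the pseudocode above, using the geometric cooling schedule T(t) = αT(t-1) from the following slides; the objective, successor function, and parameter values are illustrative assumptions:

import math
import random

def simulated_annealing(value, successor, initial, T0=10.0, alpha=0.95, steps=2000):
    """Accept uphill moves always, downhill moves with probability exp(dE/T),
    while the temperature T decays geometrically."""
    current, T = initial, T0
    for _ in range(steps):
        if T < 1e-9:
            break
        nxt = successor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt               # bad moves become rarer as T drops
        T *= alpha                      # cooling schedule T(t) = alpha * T(t-1)
    return current

# Hypothetical objective with local maxima; states are integers in [0, 40].
value = lambda x: math.sin(x / 3.0) + 0.05 * x
successor = lambda x: min(40, max(0, x + random.choice([-1, 1])))
print(simulated_annealing(value, successor, initial=0))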
65
Simulated Annealing parameters
  • Temperature T
  • Used to determine the acceptance probability
  • High T: large changes
  • Low T: small changes
  • Cooling schedule
  • Determines the rate at which the temperature T is lowered
  • If T is lowered slowly enough, the algorithm will find a global optimum
  • In the beginning, search alternatives aggressively; become conservative as time goes by

66
Simulated Annealing Cooling Schedule
[Graph: temperature T(t) decaying from T0 toward Tf over time t]
  • If T(t) is reduced too fast, the solution quality is poor
  • If T(t) ≥ T(0) / log(1 + t) (Geman)
  • the system will converge to the minimum configuration
  • T(t) = k / (1 + t) (Szu)
  • T(t) = α·T(t-1), where α is between 0.8 and 0.99

67
Tips for Simulated Annealing
  • To avoid entrainment in local minima
  • Set the annealing schedule by trial and error
  • Choice of the initial temperature
  • How many iterations are performed at each temperature
  • How much the temperature is decremented at each step as cooling proceeds
  • Difficulties
  • Determination of parameters
  • If cooling is too slow → too much time to get a solution
  • If cooling is too rapid → the solution may not be the global optimum

68
Iterative algorithm comparison
  • Simple iterative algorithm
  • 1. Find a solution s
  • 2. Make s', a variation of s
  • 3. If s' is better than s, keep s' as s
  • 4. Go to 2
  • Metropolis algorithm
  • 3': if (s' is better than s) or (accepted with some probability), then keep s' as s
  • With fixed T
  • Simulated annealing
  • T is reduced to 0 by a schedule as time passes
69
Simulated Annealing Local Maxima