Heuristic Optimization Athens 2004 - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Heuristic Optimization Athens 2004


1
Heuristic Optimization
Athens 2004
  • Department of Architecture and Technology
  • Universidad Politécnica de Madrid
  • Víctor Robles
  • vrobles@fi.upm.es

2
Teachers
  • Universidad Politécnica de Madrid
  • Víctor Robles (coordinator)
  • María S. Pérez
  • Vanessa Herves
  • Universidad del País Vasco
  • Pedro Larrañaga

3
Course outline and Class hours
  • Day 1 / 10:00-14:00 / Víctor
  • Introduction to optimization
  • Some optimization problems
  • About heuristics
  • Greedy algorithms
  • Hill climbing
  • Simulated Annealing

4
Course outline and Class hours
  • Day 2 / 9:30-13:30 / Víctor, María
  • Learn by practice: Simulated Annealing
  • Genetic Algorithms
  • Day 3 / 9:30-13:30 / Vanessa
  • Learn by practice: Genetic Algorithms
  • Day 4 / 10:00-13:30 / Pedro
  • Estimation of Distribution Algorithms

5
Introduce yourself
  • Name
  • University / Country
  • Optimization experience
  • Expectations of the course

6
Optimization
  • An optimization problem is a pair (S, f),
  • with S the search space (the set of all possible solutions) and f a function that assigns a value to every solution in S
  • A solution s* in S is optimal if f(s*) <= f(s) for every s in S (for a minimization problem)
  • Combinatorial optimization
  • Systematic search

7
State space landscape
  • Objective function defines state space landscape

8
Some optimization problems
  • TSP: the Travelling Salesman Problem
  • The assignment problem
  • SAT: the satisfiability problem
  • The 0-1 knapsack problem
  • Important tasks
  • Find a representation of the possible solutions
  • Be able to evaluate each possible solution → fitness function or objective function

9
TSP
  • A salesman has to find a route which visits each
    of n cities, and which minimizes the total
    distance travelled
  • Given an integer n and an n x n matrix D = (d_ij),
  • where each d_ij is a nonnegative integer: which cyclic permutation π of the integers 1 to n minimizes the sum d_π(1)π(2) + d_π(2)π(3) + ... + d_π(n)π(1)?

10
TSP representations
  • Binary representation
  • Each city is encoded as a string of log2(n) bits
  • Example: 8 cities → 3 bits
  • Path representation
  • A tour is represented as a list of n cities. If city i is the j-th element of the list, city i is the j-th city to be visited
  • Adjacency representation
  • City j is listed in position i if the tour leads from city i to city j
  • Example: (3 5 7 6 4 8 2 1) → tour 1-3-7-2-5-4-6-8 (see the sketch below)

11
The assignment problem
  • A set of n resources is available to carry out n tasks. If resource i is assigned to task j, it costs c_ij units
  • Find an assignment (a permutation π) that minimizes c_1π(1) + c_2π(2) + ... + c_nπ(n)
  • Solution: a permutation of the numbers 1, ..., n

12
SAT
  • The satisfiability problem consists of finding a truth assignment that satisfies a well-formed Boolean expression
  • Many applications: VLSI test and verification, consistency maintenance, fault diagnosis, etc.
  • MAX-SAT: find an assignment which satisfies the maximum number of clauses

13
SAT
  • Data sets in conjunctive normal form (cnf)
  • Example
  • Literals
  • Clauses
  • Representation? Fitness function? (one possible answer is sketched below)
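  • One possible answer, as a sketch: a truth assignment can be a bit vector, and the MAX-SAT fitness can count satisfied clauses. The DIMACS-style clause encoding below is an assumption, not part of the slides:

    # CNF formula: a list of clauses, each clause a list of literals.
    # Literal k (> 0) means "variable k is true"; -k means "variable k is false".
    formula = [[1, -2, 3], [-1, 2], [2, 3]]

    # Representation: one bit per variable.
    assignment = [1, 0, 1]   # x1 = True, x2 = False, x3 = True

    def satisfied_clauses(formula, assignment):
        """MAX-SAT fitness: number of clauses satisfied by the assignment."""
        def literal_true(lit):
            value = assignment[abs(lit) - 1]
            return value == 1 if lit > 0 else value == 0
        return sum(any(literal_true(lit) for lit in clause) for clause in formula)

    print(satisfied_clauses(formula, assignment))  # 2 of the 3 clauses are satisfied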

...
14
The 0-1 knapsack problem
  • A set of n items is available to be packed into a knapsack with capacity C units. Item i has value v_i and uses up c_i units of capacity. Determine the subset of items which should be packed to maximize the total value without exceeding the capacity
  • Representation? Fitness function? (one possible answer is sketched below)
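  • One possible answer, as a sketch with made-up item data: a bit vector marks the packed items, and infeasible selections get fitness 0 (a simple penalty; repair operators are another option):

    values = [60, 100, 120]   # v_i: value of item i (illustrative data)
    costs  = [10, 20, 30]     # c_i: capacity used by item i
    C = 50                    # knapsack capacity

    # Representation: bit vector, bit i = 1 if item i is packed.
    def knapsack_fitness(bits, values, costs, capacity):
        """Total value of the packed items, or 0 if the capacity is exceeded."""
        total_cost  = sum(c for b, c in zip(bits, costs)  if b)
        total_value = sum(v for b, v in zip(bits, values) if b)
        return total_value if total_cost <= capacity else 0

    print(knapsack_fitness([1, 1, 1], values, costs, C))  # 0   (cost 60 > capacity 50)
    print(knapsack_fitness([0, 1, 1], values, costs, C))  # 220 (feasible)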

15
Heuristics
  • Faster than exact mathematical optimization (branch & bound, simplex, etc.)
  • Well developed → good solutions for some problems
  • Special heuristics
  • Greedy algorithms: systematic
  • Hill-climbing (based on neighbourhood search): randomized

16
Greedy algorithms
  • Step-by-step algorithms
  • Sometimes work well for optimization problems
  • A greedy algorithm works in phases. At each
    phase
  • You take the best you can get right now, without
    regard for future consequences
  • You hope that by choosing a local optimum at each
    step, you will end up at a global optimum

17
Example: Counting money
  • Suppose you want to count out a certain amount of money, using the fewest possible bills and coins
  • A greedy algorithm to do this would be: at each step, take the largest possible bill or coin that does not overshoot
  • Example: to make $6.39, you can choose
  • a $5 bill
  • a $1 bill, to make $6
  • a 25¢ coin, to make $6.25
  • a 10¢ coin, to make $6.35
  • four 1¢ coins, to make $6.39
  • For US money, the greedy algorithm always gives the optimum solution (see the sketch below)
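  • A minimal sketch of that greedy rule, with amounts in cents to avoid floating-point rounding:

    # Greedy change-making: repeatedly take the largest denomination that does not overshoot.
    def greedy_change(amount_cents, denominations=(500, 100, 25, 10, 5, 1)):
        """Return the bills/coins (in cents) chosen greedily for amount_cents."""
        change = []
        for d in denominations:
            while amount_cents >= d:
                change.append(d)
                amount_cents -= d
        return change

    print(greedy_change(639))  # [500, 100, 25, 10, 1, 1, 1, 1], as on the slide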

18
A failure of the greedy algorithm
  • In some (fictional) monetary system, krons come
    in 1 kron, 7 kron, and 10 kron coins
  • Using a greedy algorithm to count out 15 krons,
    you would get
  • A 10 kron piece
  • Five 1 kron pieces, for a total of 15 krons
  • This requires six coins
  • A better solution would be to use two 7 kron
    pieces and one 1 kron piece
  • This only requires three coins
  • The greedy algorithm results in a solution, but
    not in an optimal solution

19
Practice
  • Develop a greedy algorithm for the knapsack problem. Work in groups of 2

20
Local search algorithms
  • Based on a neighbourhood system
  • Neighbourhood system:
  • with X the search space, a neighbourhood system N on X assigns to every solution x in X a set of neighbouring solutions N(x), a subset of X
  • Examples: TSP (2-opt), SAT, assignment and knapsack

21
Local search algorithms
  • Basic principles
  • Keep only a single (complete) state in memory
  • Generate only the neighbours of that state
  • Keep one of the neighbours and discard others
  • Key features
  • No search paths
  • Neither systematic nor incremental
  • Key advantages
  • Use very little memory (constant amount)
  • Find solutions in search spaces too large for
    systematic algorithms

22
TSP 2-opt
(Figure: a tour through cities A, B, C and D in which edges A-B and C-D are removed and edges A-D and B-C are added.)
New distance = Old distance + dist(A-D) + dist(B-C) - dist(A-B) - dist(C-D)
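  • A sketch of the 2-opt move on the path representation (0-based indices, symmetric distances assumed; the labels differ from the figure, but the bookkeeping is the same: two edges leave the tour, two edges enter):

    import random

    # Reversing tour[i..j] removes edges (tour[i-1], tour[i]) and (tour[j], tour[j+1])
    # and adds edges (tour[i-1], tour[j]) and (tour[i], tour[j+1]), indices taken cyclically.
    def two_opt_neighbour(tour):
        """Return a random 2-opt neighbour of the tour (a new list)."""
        i, j = sorted(random.sample(range(1, len(tour)), 2))   # keep position 0 fixed
        return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

    def two_opt_delta(tour, dist, i, j):
        """Change in tour length (new - old) if tour[i..j] is reversed."""
        n = len(tour)
        a, b = tour[i - 1], tour[i]           # removed edge (a, b)
        c, d = tour[j], tour[(j + 1) % n]     # removed edge (c, d)
        return dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
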
23
Neighbourhood search (Reeves93)
  • 1. (Initialization)
  • (i) Select a starting solution x_now in X
  • (ii) Record the current best: x_best = x_now, best_cost = f(x_best)
  • 2. (Choice and termination)
  • Choose a solution x_next in N(x_now). If the choice criteria cannot be satisfied, or if other termination criteria apply, then the method stops
  • 3. (Update)
  • Re-set x_now = x_next, and if f(x_now) < f(x_best), perform step 1(ii). Return to step 2

24
Hill climbing
  • Different procedures arise depending on the choice criteria and the termination criteria
  • Hill climbing only permits moves to neighbours that improve the current solution
  • (Choice and termination)
  • Choose x_next in N(x_now) such that f(x_next) < f(x_now),
  • and terminate if no such x_next can be found (see the sketch below)
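  • A minimal sketch of that rule for a minimization problem, assuming the caller supplies an objective f and a neighbours(x) generator (a steepest-descent variant):

    def hill_climbing(x0, f, neighbours):
        """Move to the best improving neighbour; stop when none improves f."""
        current, current_cost = x0, f(x0)
        while True:
            best, best_cost = None, current_cost
            for x in neighbours(current):
                cost = f(x)
                if cost < best_cost:          # choice criterion: f(x_next) < f(x_now)
                    best, best_cost = x, cost
            if best is None:                  # termination: no improving neighbour
                return current, current_cost
            current, current_cost = best, best_cost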

25
Hill-climbing: 8-Queens problem
  • Complete-state formulation
  • All 8 queens on the board, one per column
  • Neighbourhood: move one queen to a different square in the same column
  • Fitness function: number of pairs of queens that are attacking each other (sketched in code below)
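  • A sketch of this formulation in code (function names are illustrative): a state is a tuple of row positions, one per column, and the fitness counts attacking pairs, so a goal state has fitness 0:

    import random
    from itertools import combinations

    def random_state(n=8):
        """board[c] = row of the queen in column c."""
        return tuple(random.randrange(n) for _ in range(n))

    def attacking_pairs(board):
        """Fitness: pairs of queens sharing a row or a diagonal."""
        return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(board), 2)
                   if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

    def neighbours(board):
        """Move one queen to a different row in its own column."""
        for c in range(len(board)):
            for r in range(len(board)):
                if r != board[c]:
                    yield board[:c] + (r,) + board[c + 1:]

    # Usage with the hill_climbing sketch above:
    # hill_climbing(random_state(), attacking_pairs, neighbours)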

26
8-Queens problem: fitness values of the neighbourhood (figure)
27
8-Queens problem: a local minimum (figure)
28
Problematic landscapes
  • Local maximum: a peak that is higher than all its neighbours, but not a global maximum
  • Plateau: an area where the elevation is constant; it can be either
  • a flat local maximum, or
  • a shoulder, from which progress is still possible
  • Ridge: a long, narrow, almost plateau-like landscape

29
Random-Restart Hill-Climbing
  • Method
  • Conduct a series of hill-climbing searches from randomly generated initial states
  • Stop when a goal is found
  • Analysis
  • Expected number of restarts is about 1/p, where p is the probability that a single search succeeds
  • (1 success plus roughly (1 - p)/p failures; see the sketch below)
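  • A sketch of the restart loop, reusing the illustrative hill_climbing, random_state, attacking_pairs and neighbours helpers sketched earlier:

    def random_restart_hill_climbing(f, neighbours, random_state, goal_cost=0, max_restarts=1000):
        """Repeat hill climbing from random initial states until a goal is found."""
        best, best_cost = None, float("inf")
        for restart in range(1, max_restarts + 1):
            x, cost = hill_climbing(random_state(), f, neighbours)
            if cost < best_cost:
                best, best_cost = x, cost
            if best_cost <= goal_cost:        # goal found: stop
                return best, restart
        return best, max_restarts

    # For 8-Queens (p ~ 0.14), roughly 1/p ~ 7 hill-climbing runs are needed on average.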

30
Hill-Climbing Performance on the 8-Queen Problem
  • From randomly generated start state
  • Success rate
  • 86% gets stuck
  • 14% solves the problem
  • Average cost
  • 4 steps to succeed
  • 3 steps to get stuck

31
Hill-Climbing with Sideways Moves
  • Sideways moves: moves to neighbours with the same fitness
  • Must limit the number of sideways moves!
  • Performance on the 8-Queens problem
  • Success rate
  • 6% gets stuck
  • 94% solves the problem
  • Average cost
  • 21 steps to succeed
  • 64 steps to get stuck

32
Hill-climbing: further variants
  • Stochastic hill-climbing
  • Choose at random from among the uphill moves
  • First-choice hill-climbing
  • Generate the neighbourhood in random order
  • Move to the first generated neighbour that represents an uphill move (see the sketch below)
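  • A sketch of first-choice hill climbing under the same assumed interface (minimization; here the whole neighbourhood is shuffled rather than sampled lazily):

    import random

    def first_choice_hill_climbing(x0, f, neighbours):
        """Take the first improving neighbour found in a random order."""
        current, current_cost = x0, f(x0)
        while True:
            candidates = list(neighbours(current))
            random.shuffle(candidates)             # examine neighbours in random order
            for x in candidates:
                cost = f(x)
                if cost < current_cost:            # first improving move is taken
                    current, current_cost = x, cost
                    break
            else:                                  # no improving neighbour: local optimum
                return current, current_cost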

33
Practice
  • Develop a hill-climbing algorithm for the knapsack problem. Work in groups of 2

34
Shape of State Space Landscape
  • Success of hill-climbing depends on shape of
    landscape
  • Shape of landscape depends on problem formulation
    and fitness function
  • Landscapes for realistic problems often look like
    a worst-case scenario
  • NP-hard problems typically have an exponential number of local maxima
  • Despite the above, hill-climbers tend to have
    good performance

35
Simulated annealing
  • Failings of neighbourhood search
  • Propensity to deliver solutions which are only local optima
  • Solutions depend on the initial solution
  • Idea: reduce the chance of getting stuck in a local optimum by allowing moves to inferior solutions
  • Developed by Kirkpatrick (1983): simulation of the cooling of material in a heat bath can be used to search the feasible solutions of an optimization problem

36
Simulated annealing
  • If a move from one solution x to a neighbouring but inferior solution x'
  • results in a change in value delta = f(x') - f(x), the move to x' is still accepted if exp(-delta / T) > u
  • T (temperature): control parameter
  • u: uniform random number drawn from [0, 1)

37
Simulated annealing: Intuition
  • Minimization problem: imagine the state-space landscape laid out on a table
  • Drop a ping-pong ball at a random point → it rolls into a local minimum
  • Shake the table → the ball tends to find a different minimum
  • Shake hard at first (high temperature) but gradually reduce the intensity (lower temperature)

38
Simulated annealing: Algorithm
  • current ← problem.initialState
  • for t ← 1 to infinity
  • T ← schedule(t)
  • if T = 0 then return current
  • next ← a random neighbour of current
  • ΔE ← f(next) - f(current)
  • if ΔE > 0 then current ← next
  • else current ← next with probability e^(ΔE / T)
  • (a runnable sketch in code follows)
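  • A runnable sketch of the algorithm above for maximization; the geometric cooling schedule and its parameters are assumptions, since the slides leave schedule(t) open:

    import math
    import random

    def simulated_annealing(x0, f, random_neighbour, t0=10.0, alpha=0.95, t_min=1e-3):
        """Simulated annealing (maximization) with geometric cooling T <- alpha * T."""
        current, current_value = x0, f(x0)
        t = t0
        while t > t_min:                           # plays the role of "if T = 0 then return current"
            candidate = random_neighbour(current)
            delta = f(candidate) - current_value   # delta E
            if delta > 0 or random.random() < math.exp(delta / t):
                current, current_value = candidate, f(candidate)   # accept the move
            t *= alpha                             # T = schedule(t)
        return current, current_value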

39
Simulated annealing: Simple example
  • Maximize f(x) = x^3 - 60x^2 + 900x + 100
  • x coded as a 5-bit binary integer in [0, 31]
  • maximum at x = 10 (01010) → f(10) = 4100
  • With a greedy (hill-climbing) search we can find 3 local maxima

40
Simulated annealing: Simple example
The temperature is not high enough to move out of
this local optimum
41
Simulated annealing: Simple example
Optimum found!!!
42
Simulated annealing: Generic decisions
  • Initial temperature
  • Should be suitably high: most of the initial moves must be accepted (> 60%)
  • Cooling schedule
  • Temperature is reduced after every move
  • Two common methods (sketched in code below):
  • T ← aT, with a close to 1 (geometric cooling)
  • T ← T / (1 + bT), with b close to 0
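  • Assuming the two methods are the geometric rule and the T/(1+bT) rule reconstructed above, here they are as code (parameter values are illustrative):

    def geometric_cooling(T, a=0.95):
        """T <- aT with a close to 1."""
        return a * T

    def slow_cooling(T, b=0.001):
        """T <- T / (1 + bT) with b close to 0."""
        return T / (1 + b * T)
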
43
Simulated annealing: Generic decisions
  • Number of iterations
  • Other factors
  • Reannealing
  • Restricted neighbourhoods
  • Order of searching