POPULATION BASED METAHEURISTICS - PowerPoint PPT Presentation

1
POPULATION BASED METAHEURISTICS
  • Jacques A. Ferland
  • Département d'informatique et de recherche opérationnelle
  • Université de Montréal
  • ferland@iro.umontreal.ca
  • http://www.iro.umontreal.ca/ferland/
  • Spring School in Optimization
  • Ho Chi Minh

    March 2007

2
Population Based Techniques (PBT)
  • A Population Based Technique (PBT) can be seen as an iterative process starting with an initial population P0 of N feasible solutions; x* is the best solution of the population P0.
  • At each iteration (generation):
  • - During a collective process, the solutions of the current population are compared and combined to generate new offspring-solutions inheriting the prevailing characteristics of the parent-solutions.
  • - Then the new offspring-solutions evolve individually according to an individual process.
  • - The best solution x* is updated.
  • - Finally, a new population of size N is generated.
  • The procedure continues until some stopping criterion is satisfied.

3
POPULATION BASED
  • Genetic Algorithm
  • Ant Colony
  • Particle Swarm
  • http://www.iro.umontreal.ca/ferland/

4
Initial Population
5
Problem used to illustrate
  • General problem:
  • min f(x)
  • subject to x ∈ X
  • Assignment type problem: assignment of resources j to activities i
  • min f(x)
  • subject to Σ_{1 ≤ j ≤ m} x_ij = 1,  1 ≤ i ≤ n
  • x_ij = 0 or 1,  1 ≤ i ≤ n, 1 ≤ j ≤ m

6
Problem Formulation
  • Assignment type problem:
  • min f(x)
  • subject to Σ_{1 ≤ j ≤ m} x_ij = 1,  1 ≤ i ≤ n
  • x_ij = 0 or 1,  1 ≤ i ≤ n, 1 ≤ j ≤ m
  • Graph coloring problem: graph G = (V, E),
  • V = { i : 1 ≤ i ≤ n },  E = { (i, l) : (i, l) edge of G },
  • set of colors { j : 1 ≤ j ≤ m }
  • min Σ_{1 ≤ j ≤ m} Σ_{(i, l) ∈ E} x_ij x_lj
  • subject to Σ_{1 ≤ j ≤ m} x_ij = 1,  1 ≤ i ≤ n
  • x_ij = 0 or 1,  1 ≤ i ≤ n, 1 ≤ j ≤ m
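The double sum in the objective simply counts monochromatic edges: pairs of adjacent vertices assigned the same color. A minimal sketch of this evaluation in Python, using an illustrative triangle graph (the function name and data are assumptions, not from the slides):

```python
# Objective of the graph coloring formulation: for each color j, count
# the edges (i, l) with x_ij = x_lj = 1, i.e. adjacent vertices that
# share a color.
def coloring_cost(edges, color):
    """edges: list of (i, l) pairs; color[i]: color assigned to vertex i."""
    return sum(1 for (i, l) in edges if color[i] == color[l])

# Triangle on vertices 0, 1, 2 with only two colors: at least one
# edge must be monochromatic.
edges = [(0, 1), (1, 2), (0, 2)]
print(coloring_cost(edges, {0: 0, 1: 1, 2: 0}))  # edge (0, 2) clashes -> 1
```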

7
Heuristic Constructive Techniques
  • Values of the variables are determined sequentially: at each iteration, a variable is selected and its value is determined
  • The value of each variable is never modified once it is determined

10
Greedy method
  • Next variable to be fixed and its value are selected to optimize the objective function given the values of the variables already fixed
  • Graph coloring problem:
  • - Vertices are ordered in decreasing order of their degree
  • - Vertices are selected in that order
  • - For each vertex, select a color in order to reduce the number of pairs of adjacent vertices already colored with the same color
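The greedy rule for graph coloring can be sketched as follows; the adjacency-list representation and the first-least-conflict tie-breaking are assumptions for illustration:

```python
def greedy_coloring(adj, m):
    """adj[i]: set of neighbours of vertex i; m: number of colors.
    Vertices are processed in decreasing order of degree; each vertex
    takes a color that conflicts least with its already-colored
    neighbours."""
    order = sorted(adj, key=lambda i: len(adj[i]), reverse=True)
    color = {}
    for i in order:
        # For each candidate color, count neighbours already using it.
        conflicts = [sum(1 for l in adj[i] if color.get(l) == j)
                     for j in range(m)]
        color[i] = conflicts.index(min(conflicts))  # first least-conflict color
    return color

# 4-cycle: two colors suffice, and here the greedy order finds a
# conflict-free coloring.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
col = greedy_coloring(adj, 2)
```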

12
GRASP (Greedy Randomized Adaptive Search Procedure)
  • Next variable to be fixed is selected randomly among those inducing the smallest increase.
  • Referring to the general problem,
  • i) let J = { j : x_j is not fixed yet }
  •    and d_j be the increase induced by the best value that x_j can take (j ∈ J)
  • ii) denote d* = min_{j ∈ J} d_j, and let α ∈ (0, 1]
  • iii) select randomly j′ ∈ { j ∈ J : d_j ≤ (1/α) d* } and fix the value of x_j′
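Steps i)–iii) amount to drawing uniformly from a restricted candidate list. A sketch, assuming the increases d_j are already computed and nonnegative (function and parameter names are illustrative):

```python
import random

def grasp_select(d, alpha):
    """d: dict mapping each unfixed variable index j to the increase d_j
    of its best value; alpha in (0, 1]. Returns a random index from the
    restricted candidate list { j : d_j <= (1/alpha) * d_min }."""
    d_min = min(d.values())
    rcl = [j for j, dj in d.items() if dj <= d_min / alpha]
    return random.choice(rcl)

# With alpha = 1 only the greedy choices (d_j == d_min) qualify; as
# alpha decreases, the candidate list widens and the search diversifies.
```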

16
Genetic Algorithm
17
Encoding the solution
  • The phenotype form of the solution x ∈ R^n is encoded (represented) as a genotype form vector z ∈ R^m (or chromosome), where m may be different from n.
  • For example, in the assignment type problem, let x be the following solution: for each i, 1 ≤ i ≤ n,
  • x_{i j(i)} = 1
  • x_ij = 0 for all other j
  • x ∈ R^{n×m} can be encoded as z ∈ R^n where
  • z_i = j(i),  i = 1, 2, …, n
  • i.e., z_i is the index of the resource j(i) assigned to activity i
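The encoding z_i = j(i) and its inverse can be written directly; a sketch, where the list-of-lists 0-1 matrix representation is an assumption:

```python
def encode(x):
    """x: n x m 0-1 matrix (list of lists) with exactly one 1 per row.
    Returns the genotype z with z[i] = j(i), the index of the resource
    assigned to activity i."""
    return [row.index(1) for row in x]

def decode(z, m):
    """Inverse mapping: rebuild the 0-1 assignment matrix from z."""
    return [[1 if j == zi else 0 for j in range(m)] for zi in z]
```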

18
Genetic Algorithm (GA)
  • An initial population of size N is generated
  • At each iteration (generation), three different operators are first applied to generate a set of new (offspring) solutions using the N solutions of the current population:
  • - selection operator: selects from the current population the parent-solutions that reproduce themselves
  • - crossover (reproduction) operator: produces offspring-solutions from each pair of parent-solutions
  • - mutation operator: modifies (improves) individual offspring-solutions
  • A fourth operator (culling operator) is applied to determine a new population of size N by selecting among the solutions of the current population and the offspring-solutions according to some strategy

19
Two variants of GA
  • At each iteration of the Classical genetic algorithm:
  • - N parent-solutions are selected and paired two by two
  • - A crossover operator is applied to each pair of parent-solutions according to some probability to generate two offspring-solutions; otherwise the two parent-solutions become their own offspring-solutions
  • - A mutation operator is applied according to some probability to each offspring-solution
  • - The population of the next iteration includes the offspring-solutions
  • At each iteration of the Steady-state population genetic algorithm:
  • - An even number of parent-solutions are selected and paired two by two
  • - A crossover operator is applied to each pair of parent-solutions to generate two offspring-solutions
  • - A mutation operator is applied according to some probability to each offspring-solution
  • - The population of the next iteration includes the N best solutions among the current population and the offspring-solutions

20
Selection operator
  • This operator is used to select an even number (2, or 4, or …, or N) of parent-solutions.
  • Each parent-solution is selected from the current population according to some strategy or selection operator.
  • Note that the same solution can be selected more than once.
  • The parent-solutions are paired two by two to reproduce themselves.
  • Selection operators:
  • - Random selection operator
  • - Proportional (or roulette wheel) selection operator
  • - Tournament selection operator
  • - Diversity preserving selection operator

21
Random selection operator
  • Select randomly each parent-solution from the
    current (entire) population
  • Properties
  • Very straightforward
  • Promotes diversity of the population
    generated

22
Proportional (Roulette wheel) selection operator
  • Each parent-solution is selected as follows:
  • i) Consider any ordering of the solutions z^1, z^2, …, z^N in P
  • ii) Select a random number a in the interval [0, Σ_{1 ≤ k ≤ N} 1/f(z^k)]
  • iii) Let t be the smallest index such that Σ_{1 ≤ k ≤ t} 1/f(z^k) ≥ a
  • iv) Then z^t is selected
  • The chance of selecting z^k increases with its fitness 1/f(z^k)

For the problem min f(x), where x is encoded as z, 1/f(z) measures the fitness of the solution z.
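Steps i)–iv) translate directly to code. A sketch, assuming f(z) > 0 for every solution so that the fitness 1/f(z) is well defined:

```python
import random

def roulette_select(pop, f):
    """Proportional selection: z^k is chosen with probability
    proportional to its fitness 1/f(z^k)."""
    weights = [1.0 / f(z) for z in pop]
    a = random.uniform(0, sum(weights))
    acc = 0.0
    for z, w in zip(pop, weights):
        acc += w
        if acc >= a:   # smallest t whose partial sum reaches a
            return z
    return pop[-1]     # guard against floating-point round-off
```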
23
Tournament selection operator
  • Each parent-solution is selected as the best solution in a subset of randomly chosen solutions in P:
  • i) Select randomly N′ solutions one by one from P (i.e., the same solution can be selected more than once) to generate the subset P′
  • ii) Let z′ be the best solution in the subset P′:
  • z′ = argmin_{z ∈ P′} f(z)
  • iii) Then z′ is selected as a parent-solution
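A sketch of steps i)–iii), with the tournament size N′ passed as a parameter (the parameter name is an assumption):

```python
import random

def tournament_select(pop, f, size):
    """Draw `size` solutions from pop with replacement and return the
    best (smallest f) among them."""
    subset = [random.choice(pop) for _ in range(size)]
    return min(subset, key=f)
```

Larger tournament sizes make the operator more elitist; size 1 degenerates to random selection.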

24
Elitist selection
  • The main drawback of using elitist selection operators like the Roulette wheel and Tournament selection operators is premature convergence of the algorithm to a population of almost identical solutions far from being optimal.
  • Other selection operators have been proposed where the degree of elitism is in some sense proportional to the diversity of the population.

25
Diversity preserving selection operator
26
Crossover (recombination) operators
  • Crossover operator is used to generate new
    solutions including interesting components
    contained in different solutions of the current
    population.
  • The objective is to guide the search toward promising regions of the feasible domain X while maintaining some level of diversity in the population.
  • Pairs of parent-solutions are combined to
    generate offspring-solutions according to
    different crossover (recombination) operators.

27
One point crossover
  • The one point crossover generates two offspring-solutions from the two parent-solutions
  • z^1 = [z_1^1, z_2^1, …, z_m^1]
  • z^2 = [z_1^2, z_2^2, …, z_m^2]
  • as follows:
  • i) Select randomly a position (index) ρ, 0 ≤ ρ ≤ m.
  • ii) Then the offspring-solutions are specified as follows:
  • oz^1 = [z_1^1, z_2^1, …, z_ρ^1, z_{ρ+1}^2, …, z_m^2]
  • oz^2 = [z_1^2, z_2^2, …, z_ρ^2, z_{ρ+1}^1, …, z_m^1]
  • The first ρ components of offspring oz^1 (offspring oz^2) are the corresponding ones of parent 1 (parent 2), and the rest of the components are the corresponding ones of parent 2 (parent 1)
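With the genotype stored as a list, the cut-and-swap above is a pair of slices (a sketch; names are illustrative):

```python
import random

def one_point_crossover(z1, z2):
    """Cut both parents at a random position rho (0 <= rho <= m) and
    exchange the tails."""
    rho = random.randint(0, len(z1))
    return z1[:rho] + z2[rho:], z2[:rho] + z1[rho:]
```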

28
Two points crossover
  • The two points crossover generates two offspring-solutions from the two parent-solutions
  • z^1 = [z_1^1, z_2^1, …, z_m^1]
  • z^2 = [z_1^2, z_2^2, …, z_m^2]
  • as follows:
  • i) Select randomly two positions (indices) µ, ν, 1 ≤ µ ≤ ν ≤ m.
  • ii) Then the offspring-solutions are specified as follows:
  • oz^1 = [z_1^1, …, z_{µ-1}^1, z_µ^2, …, z_ν^2, z_{ν+1}^1, …, z_m^1]
  • oz^2 = [z_1^2, …, z_{µ-1}^2, z_µ^1, …, z_ν^1, z_{ν+1}^2, …, z_m^2]
  • The offspring oz^1 (offspring oz^2) has components µ, µ+1, …, ν of parent 2 (parent 1), and the rest of the components are the corresponding ones of parent 1 (parent 2)
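The same slicing idea handles the two-point variant: only the middle segment [µ, ν] is exchanged (a sketch; names are illustrative):

```python
import random

def two_point_crossover(z1, z2):
    """Exchange the segment of positions mu..nu (1-based, mu <= nu)
    between the two parents."""
    m = len(z1)
    mu = random.randint(1, m)
    nu = random.randint(mu, m)
    o1 = z1[:mu - 1] + z2[mu - 1:nu] + z1[nu:]
    o2 = z2[:mu - 1] + z1[mu - 1:nu] + z2[nu:]
    return o1, o2
```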

29
Uniform crossover
  • The uniform crossover requires a vector of bits (0 or 1) of dimension m to generate two offspring-solutions from the two parent-solutions
  • z^1 = [z_1^1, z_2^1, …, z_m^1],  z^2 = [z_1^2, z_2^2, …, z_m^2]
  • i) Generate randomly a vector of bits, for example [0, 1, 1, 0, …, 1, 0]
  • ii) Then the offspring-solutions are specified as follows:
  • parent 1:       [z_1^1, z_2^1, z_3^1, z_4^1, …, z_{m-1}^1, z_m^1]
  • parent 2:       [z_1^2, z_2^2, z_3^2, z_4^2, …, z_{m-1}^2, z_m^2]
  • vector of bits: [0, 1, 1, 0, …, 1, 0]
  • offspring oz^1: [z_1^1, z_2^2, z_3^2, z_4^1, …, z_{m-1}^2, z_m^1]
  • offspring oz^2: [z_1^2, z_2^1, z_3^1, z_4^2, …, z_{m-1}^1, z_m^2]

The ith component of oz^1 (oz^2) is the ith component of parent 1 (parent 2) if the ith component of the vector of bits is 0; otherwise, it is equal to the ith component of parent 2 (parent 1).
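The bit-vector rule above can be sketched as (names are illustrative):

```python
import random

def uniform_crossover(z1, z2):
    """Bit 0 keeps the ith components in place; bit 1 swaps them
    between the two offspring."""
    bits = [random.randint(0, 1) for _ in z1]
    o1 = [b if bit else a for a, b, bit in zip(z1, z2, bits)]
    o2 = [a if bit else b for a, b, bit in zip(z1, z2, bits)]
    return o1, o2
```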

30
Ad hoc crossover operator
  • The preceding crossover operators are sometimes too general to be efficient. Hence, whenever possible, we should rely on the structure of the problem to specify an ad hoc problem-dependent crossover operator in order to improve the efficiency of the algorithm.

31
Recovery procedure
  • Furthermore, whenever the structure of the
    problem is such that the offspring-solutions are
    not necessarily feasible, then an auxiliary
    procedure is required to recover feasibility.
    Such a procedure is used to transform the
    offspring-solution into a feasible solution in
    its neighborhood.

32
Mutation operator
  • Mutation operator is an individual process to modify offspring-solutions
  • In traditional implementations of the Genetic Algorithm, the mutation operator is used to modify each component z_i arbitrarily with a small probability:
  • For i = 1 to m
  •   Generate a random number ß ∈ [0, 1]
  •   If ß < ß_max then select randomly a new value for z_i
  • where ß_max is small enough to modify z_i with a small probability
  • Mutation operator simulates random events perturbing the natural evolution process
  • The mutation operator is not essential, but the randomness it introduces into the process promotes diversity in the current population and may prevent premature convergence to a bad local minimum
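The pseudocode above can be sketched as follows; passing the candidate value set as a parameter is an assumption for illustration:

```python
import random

def mutate(z, beta_max, values):
    """For each component z_i, draw beta in [0, 1); if beta < beta_max,
    replace z_i by a randomly chosen value (possibly equal to the old
    one)."""
    return [random.choice(values) if random.random() < beta_max else zi
            for zi in z]
```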