MOPSO : A Proposal for Multiple Objective Particle Swarm Optimization - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1
MOPSO A Proposal for Multiple Objective
Particle Swarm Optimization
  • Computational Intelligence 2002
  • Carlos A. Coello Coello
  • Maximino Salazar Lechuga
  • Pruet _at_ DSSG

2
Outline
  • Introduction
  • Multi-Objective Particle Swarm Optimization
    (MOPSO)
  • Evaluation

Bird Flocking, an example of natural swarming
Figure from http://www.aip.org/png/html/bird.htm
3
Introduction
  • To increase the efficiency of a multi-objective
    optimization algorithm,
  • the diversity of the results has to be maintained,
  • and small population sizes should be used.
  • Particle swarm optimization (PSO) is a heuristic
    inspired by the choreography of a bird flock.
  • Most work on PSO focuses on
    single-objective optimization.
  • The authors propose an algorithm called multiple
    objective particle swarm optimization (MOPSO).
  • MOPSO extends the PSO algorithm to deal
    with multiobjective optimization problems.

4
Introduction
  • Particle Swarm Optimization (PSO)
  • PSO can be seen as a distributed behavioral
    algorithm that performs multidimensional search.
  • The behavior of each individual is affected by
    either the local best or the global best.
  • The approach uses the concept of a population and a
    measure of performance similar to the fitness
    function used in evolutionary algorithms (EAs).
  • Moreover, the adjustments of individuals are
    analogous to the use of crossover operators.
  • However, PSO introduces the idea of flying
    potential solutions through hyperspace (used to
    accelerate convergence).
  • Also, PSO allows individuals to benefit from their
    past experiences, in contrast with EAs, which
    focus on the present experience, i.e., the current
    population pool.
  • Elitism or archiving can be seen as approaches to
    preserve the past best solutions, but they are
    pool-wise, not individual-wise.

5
MOPSO
  • In this paper, the concept of a Pareto ranking
    scheme is used for measuring the efficiency of a
    multiobjective optimization algorithm.
  • The authors suggest that the use of Pareto in
    multi-objective PSO makes sense because of the
    similarity between PSO and EA.
  • Multiple Objective Particle Swarm Optimization
    (MOPSO)
  • The historical record of best solution found by a
    particle could be used to store nondominated
    solutions generated in the past.
  • Therefore, the authors propose the idea of having
    a global repository in which every particle
    deposits its flight experiences after each flight
    cycle.
  • Additionally, the updates to the repository are
    performed considering a geographically-based
    system defined in terms of the objective function
    values of each individual.
  • To maintain diversity
  • This technique is inspired by PAES (Pareto
    Archived Evolution Strategy).
  • The repository is used by the particles to
    identify a leader that will guide the search.
  • A random search component is added to avoid
    getting trapped in local optima.
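The nondominated solutions kept in the repository are defined by Pareto dominance. A minimal dominance check (assuming minimization of all objectives) can be written as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

Two vectors where each wins on a different objective are mutually nondominated; the repository holds exactly such trade-off solutions.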

6
MOPSO Algorithm
  • Initialize the population POP
  • FOR i = 0 TO MAX  /* MAX = population size */
  •   initialize POP[i]  /* POP[i] is the position
      of particle i in the search space */
  • Initialize the speed of each particle
  • FOR i = 0 TO MAX
  •   VEL[i] = 0
  • Evaluate each of the particles in POP
  • Store the positions of the particles that are
    nondominated in the repository REP.
  • Generate hypercubes of the search space explored
    so far, and locate the particles using these
    hypercubes as a coordinate system (each axis
    corresponds to an objective function).
  • Initialize the memory of each particle (this
    memory serves as a guide to travel through the
    search space):
  • FOR i = 0 TO MAX
  •   PBESTS[i] = POP[i]
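The initialization steps above can be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation; the uniform-random initialization range and the list-based data layout are assumptions made here.

```python
import random

def dominates(a, b):  # Pareto dominance, minimization
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def initialize(objectives, dim, max_particles, bounds=(0.0, 1.0)):
    """Initialize POP, VEL, the repository REP, and PBESTS as in the slide."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]     # initialize POP[i]
           for _ in range(max_particles)]
    vel = [[0.0] * dim for _ in range(max_particles)]       # VEL[i] = 0
    fit = [tuple(f(p) for f in objectives) for p in pop]    # evaluate POP
    rep = [pop[i][:] for i in range(max_particles)          # nondominated -> REP
           if not any(dominates(fit[j], fit[i])
                      for j in range(max_particles) if j != i)]
    pbest = [p[:] for p in pop]                             # PBESTS[i] = POP[i]
    return pop, vel, fit, rep, pbest
```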

7
MOPSO Algorithm
  • WHILE the maximum number of cycles has not been
    reached DO
  • Compute the speed of each particle:
  •   VEL[i] = W × VEL[i] + R1 × (PBESTS[i] − POP[i])
      + R2 × (REP[h] − POP[i])
  • W is the inertia weight (less than 1)
  • R1 and R2 are random numbers in the range
    [0, 1]
  • PBESTS[i] is the best position so far of particle
    i
  • REP[h] is a value from the repository; h is
    selected as follows:
  •   hypercubes containing more than one
      particle are assigned a fitness equal to the
      result of dividing any number x > 1 by the number
      of particles that they contain. The result is
      then used in roulette-wheel selection to select a
      hypercube. A particle in the selected hypercube
      is chosen randomly as h.
  • POP[i] is the current position of particle i
  • Compute the new position of each particle:
      POP[i] = POP[i] + VEL[i]
  • Maintain the particles within the search space in
    case they go beyond its boundaries.
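The roulette-wheel leader selection over hypercubes and the flight update described above can be sketched as follows. This is illustrative only: the fitness constant x = 10, the inertia weight w = 0.4, and the grid-building details are assumed values, not taken from the slide.

```python
import random
from collections import defaultdict

def select_leader(rep_fit, n_div=30, x=10.0):
    """Pick index h into the repository: each occupied hypercube gets
    fitness x / (number of members), a hypercube is chosen by roulette
    wheel, and a random member of it becomes the leader."""
    n_obj = len(rep_fit[0])
    los = [min(f[k] for f in rep_fit) for k in range(n_obj)]
    his = [max(f[k] for f in rep_fit) for k in range(n_obj)]
    cubes = defaultdict(list)   # hypercube key -> member indices
    for i, f in enumerate(rep_fit):
        key = tuple(0 if his[k] == los[k] else
                    min(n_div - 1,
                        int((f[k] - los[k]) / (his[k] - los[k]) * n_div))
                    for k in range(n_obj))
        cubes[key].append(i)
    keys = list(cubes)
    weights = [x / len(cubes[k]) for k in keys]   # fewer members -> fitter cube
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for k, wgt in zip(keys, weights):             # roulette-wheel selection
        acc += wgt
        if r <= acc:
            return random.choice(cubes[k])
    return random.choice(cubes[keys[-1]])

def update_particle(pos, vel, pbest, leader, w=0.4):
    """VEL = W*VEL + R1*(PBEST - POP) + R2*(REP[h] - POP); POP = POP + VEL."""
    r1, r2 = random.random(), random.random()
    new_vel = [w * v + r1 * (pb - p) + r2 * (ld - p)
               for v, p, pb, ld in zip(vel, pos, pbest, leader)]
    new_pos = [p + v for p, v in zip(pos, new_vel)]
    return new_pos, new_vel
```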

8
MOPSO Algorithm
  • Evaluate each of the particles in POP
  • Update the contents of REP together with the
    contents of the hypercubes:
  • Insert nondominated particles into the
    repository.
  • Remove all dominated particles from the repository.
  • If the repository is full, particles in
    crowded hypercubes are removed first.
  • When the current position of a particle is
    better than the position in its PBEST, the
    particle updates PBESTS[i] = POP[i]
  • "better" here means better in terms of Pareto
    dominance.
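The repository update above can be sketched as follows. For brevity, particles are represented here by their objective vectors, and the eviction rule is a nearest-neighbour simplification of the hypercube-based crowding described above; both are assumptions of this sketch.

```python
def dominates(a, b):  # Pareto dominance, minimization
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_repository(rep, candidate, max_size=200):
    """Insert candidate if no archive member dominates it, remove members
    it dominates, and evict from the most crowded region when full."""
    if any(dominates(r, candidate) for r in rep):
        return rep                                   # dominated: reject
    rep = [r for r in rep if not dominates(candidate, r)]
    rep.append(candidate)
    if len(rep) > max_size:
        # simplified crowding: drop the member closest to its nearest neighbour
        def nn_dist(i):
            return min(sum((a - b) ** 2 for a, b in zip(rep[i], rep[j]))
                       for j in range(len(rep)) if j != i)
        rep.pop(min(range(len(rep)), key=nn_dist))
    return rep
```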

9
Evaluation
  • Two metrics:
  • Average running time of the algorithm.
  • Average distance to the Pareto-optimal set (M1):
  • Y′ ⊆ Y is the set of objective vectors that
    correspond to a set of pairwise nondominated
    decision vectors X′ ⊆ X, where X is the
    decision variable space of the problem.
  • Compared with:
  • NSGA-II
  •   population size = 200, crossover rate = 0.8,
      tournament selection, mutation rate =
      1/(number of decision variables of the problem)
  • PAES
  •   archive size = 200, mutation rate = 1/L, where
      L = length of the chromosome
  • MOPSO
  •   population size = 40, repository size = 200, 30
      divisions per grid.
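The M1 metric (average distance of the found solutions to the true Pareto-optimal set) can be sketched as follows, assuming a sampled reference front is available; the Euclidean distance in objective space is an assumption of this sketch.

```python
import math

def m1(found, reference):
    """Average Euclidean distance from each found objective vector to the
    nearest point of a sampled true Pareto front (lower is better)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(f, r) for r in reference) for f in found) / len(found)
```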

10
Test Function 1
  • Average running time (4000 cycles):
  • NSGA-II 2.402
  • PAES 2.08
  • MOPSO 0.076
  • M1 (avg/sd):
  • NSGA-II 0.002536/0.000138
  • PAES 0.002881/0.00213
  • MOPSO 0.002057/0.000286

11
Test Function 2
  • Average running time (1200 cycles):
  • NSGA-II 0.812
  • PAES 0.339
  • MOPSO 0.046
  • M1 (avg/sd):
  • NSGA-II 0.001594/0.000122
  • PAES 0.070003/0.158081
  • MOPSO 0.00147396/0.00020178

12
Test Function 3
  • Average running time (3200 cycles):
  • NSGA-II 2.1165
  • PAES 1.641
  • MOPSO 0.098
  • M1 (avg/sd):
  • NSGA-II 0.094644/0.117608
  • PAES 0.259664/0.57386
  • MOPSO 0.0011611/0.0007205

13
Conclusion
  • MOPSO performed reasonably well in terms of the
    average distance to the Pareto front, with lower
    computational times.
  • Since PSO is an unconstrained search technique, it
    is necessary to develop an additional mechanism to
    deal with constrained multiobjective optimization
    problems.