Title: Swarm intelligence and metaheuristics for engineering optimization: real-like applications on turbomachinery
- Seminar associated with the "Mimic learning" 3rd-level course, Prof. E. Piccolo and Prof. G. Squillero
Enrico Ampellio, PhD student in Aerospace Engineering, 2nd year, Polytechnic of Turin, Cycle XXVII
November 15th, 2013
Academic Tutor: Prof. F. Larocca
Avio Aero Tutor: Ing. F. Bertini
Contents
- Part I: Swarm Intelligence technicalities
  - General introduction
  - Swarm Intelligence (SI)
  - Particle Swarm Optimization (PSO)
  - Differential Evolution (DE)
  - Artificial Bee Colony (ABC)
  - Artificial super-Bee enhanced Colony (AsBeC)
  - ABC vs. AsBeC on benchmark test functions
- Part II: Engineering optimization
  - Contextualization of real-like problems on turbomachinery
  - Implementation of the bee colony for optimization purposes
  - Introduction of other techniques:
    - Gradient Descent (GD)
    - Interpolated Random Walk (IRW)
    - Genetic Algorithm (GeDEA)
    - Artificial Neural Network (ANN)
  - Overall comparisons
  - Conclusive remarks
Part I: Swarm Intelligence technicalities
Part I: General introduction (1)
- The applicative field of numerical optimization in engineering is normally characterized by simulation-based problems that are heavily time- and resource-consuming (CFD, FEM, non-linear models, etc.).
- There is a strong need for fast techniques that allow optimizing many parameters with very few function evaluations.
- Since the shape and properties of the simulated objective function are generally not well known, the most widespread techniques lie in the class of metaheuristic methods and Artificial Intelligence (AI).
- The most diffused and advanced are evolutionary algorithms (GA, ES and EP) and Artificial Neural Networks (ANN) as surrogate meta-models, but simpler approaches such as random path-based methods (Hill Climbing, Simulated Annealing) have been and are still used.
Part I: General introduction (2)
- Another new, nature-inspired strategy to be considered is the promising Swarm Intelligence (SI). At first sight, swarm methods might not appear suited, since a colony needs multiple function evaluations at every optimization step, without any guaranteed improvement of the solution. However, some researchers have demonstrated brilliant performance.
- The SI class gathers many different algorithms, among which Particle Swarm Optimization (PSO), Differential Evolution (DE) and especially very recent developments like the Artificial Bee Colony (ABC) seem to offer excellent qualities.
- Starting from ABC and limiting the total number of function evaluations, my research focused on modifying the original algorithm in order to increase its speed and solution accuracy. The final up-to-date version of ABC is the subject of an in-depth scientific paper and is called AsBeC.
Part I: Swarm Intelligence (1)
- By definition, SI is the collective behavior of decentralized, self-organized systems, natural or artificial. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems. In principle, it is a multi-agent self-organized system that shows some intelligent behavior.
- SI systems are population-based and typically consist of a collection of simple agents or bird-like objects (boids), interacting locally with one another and with their environment. The agents follow very simple rules to move within their neighborhood and, although there is no centralized control structure, the interactions between such agents let an "intelligent" global behavior emerge. The inspiration often comes from nature, especially biological systems. Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
Part I: Swarm Intelligence (2)
- Ant Colony Optimization (ACO)
- Artificial Bee Colony algorithm (ABC)
- Differential Evolution (DE)
- Gravitational Search Algorithm (GSA)
- Glowworm Swarm Optimization (GSO)
- Firefly Algorithm (FA)
- Intelligent Water Drops (IWD)
- River Formation Dynamics (RFD)
- Particle Swarm Optimization (PSO)
- Stochastic Diffusion Search (SDS)
Part I: PSO (1)
- First introduced by James Kennedy and Russell Eberhart in 1995, it is an algorithm capable of optimizing non-linear and multidimensional problems. It usually reaches good solutions efficiently while requiring minimal parameterization.
- The basic concept is to create a swarm of particles which move in the problem space searching for the place that best suits their fitness function. There are two main ideas behind its optimization properties (a code sketch follows the list):
  - A single particle can determine how good its current position is. It benefits not only from its own space-exploration knowledge but also from the knowledge shared by the other particles.
  - A stochastic factor in each particle's velocity makes the particles move through unknown regions of the problem space. This property, combined with a good initial distribution of the swarm, enables an extensive exploration of the problem space.
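- A minimal global-best PSO sketch in Python, for illustration only (not the code used in this work; parameter names are illustrative, and the inertia weight w corresponds to the variant discussed later):

```python
import numpy as np

def pso(f, bounds, n_particles=16, iters=100, w=0.9, c1=2.0, c2=2.0, seed=0):
    """Minimal global-best PSO sketch for minimization.
    f: objective; bounds: (low, high) arrays of length dim."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = low.size
    x = rng.uniform(low, high, (n_particles, dim))       # initial positions
    v = np.zeros_like(x)                                  # initial velocities
    pbest = x.copy()                                      # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()                # swarm (global) best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia + cognitive (own memory) + social (shared knowledge) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, low, high)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# usage example on a simple quadratic
best_x, best_f = pso(lambda z: float(np.sum(z**2)),
                     (np.full(5, -10.0), np.full(5, 10.0)))
```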
Part I: PSO (2)
Part I: PSO (3)
Part I: PSO (4)
- Good explorative skills but poor local search for refinement, hence a slow convergence rate and the possibility of getting trapped in local minima if the swarm clusters too early (premature convergence or collapse).
- Local Best
  - This variation reduces the sharing of information between particles to a smaller neighborhood, overlapping the congregations in order to enable convergence to the global best. This version is slower to converge but less susceptible to local minima.
- Inertia Weight
  - This variation aims to balance the exploitation of good solutions and the exploration of new areas, by multiplying the momentum component in the velocity formulation by a specific inertia weight 0.9 < w < 1.2.
Part I: PSO (5)
- Antennas
- Biomedical
- Control
- Design
- Distribution Networks
- Artificial Neural Networks
- Electronics and Electromagnetics
- Engines and Motors
- Fuzzy and Neuro-fuzzy
- Image, Graphics, Video and Visualization
- Metallurgy
- Power Systems and Plants
- Prediction and Forecasting
- Robotics
- Scheduling
- Signal Processing
Part I: DE (1)
- Differential Evolution optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It is a typical example of a metaheuristic:
  - It makes no assumptions about the problem being optimized
  - It can search very large spaces
  - It does not guarantee that an optimal solution is ever found
- DE is used for multidimensional real-valued functions but does not use the gradient. DE can therefore be applied to optimization problems that are discontinuous, noisy, change over time, etc.
- DE maintains a population of candidate solutions and creates new candidates by combining existing ones according to a simple formula (see the sketch below). If the new position of an agent is an improvement it is accepted and becomes part of the population, otherwise it is simply discarded.
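- A minimal DE/rand/1/bin sketch in Python, for illustration only (F and CR are the usual mutation and crossover constants; names are illustrative):

```python
import numpy as np

def de(f, bounds, pop_size=20, iters=100, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin sketch for minimization.
    f: objective; bounds: (low, high) arrays of length dim."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = low.size
    pop = rng.uniform(low, high, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        for i in range(pop_size):
            # three distinct donors, all different from the target i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), low, high)
            # binomial crossover, forcing at least one mutated coordinate
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:              # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = fit.argmin()
    return pop[best], fit[best]
```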
Part I: DE (2)
Part I: ABC (1)
- The algorithm was developed by Karaboga in 2005. It is one of the newest and most promising nature-inspired metaheuristics, combining ideas from PSO and DE. It reproduces the behavior of a honey bee colony searching for the best nectar source within a target area.
- Some bees (employees) are each assigned to a food source and search the space near it (exploration). Then they come back to the hive and communicate, by dancing, the position of the best food sources found to other bees (onlookers), which help the first ones in the most promising regions (exploitation). Nectar sources that reveal themselves to be non-productive are abandoned in favor of possible new fruitful positions, investigated by a travelling bee (scout).
- In the optimization context, food sources represent input configurations and the comparison among them is based on the objective function to be optimized; non-productive food sources represent configurations that have not improved for some time.
Part I: ABC (2)
Part I: ABC (3)
- The ABC algorithm tries to balance exploration and exploitation, offering worthy global and local search skills at once. Compared with other competitive methods (genetic algorithms, PSO and its variants, and also FA), ABC demonstrates high quality, speed, robustness and flexibility for a great variety of optimization problems.
- The main qualities of the algorithm are the following:
  - Simple and easy to implement
  - It can be parallelized
  - It can be hybridized
  - It needs few control parameters
  - It is flexible and robust to a wide range of problems
- Its deficiencies can be outlined as:
  - No exploitation of the history of points analyzed
  - Local search and refinement skills are less efficient than its global search attitude
Part I: ABC (4)
Part I: ABC (5)
- The bee movement for the food source j is based on the modification of a single parameter i, chosen randomly among all the possible ones. Another food source k ≠ j is chosen randomly and the new position x_j_new(i) for the bee associated with the food source j is
  x_j_new(i) = x_j(i) + phi * (x_j(i) - x_k(i)),  with phi a random number in [-1, 1]
- As regards onlookers, they are assigned to food sources by a stochastic rule, assuming a certain probability p_j related to a fitness value of the configuration x_j of the food source j:
  p_j = fit(x_j) / [ fit(x_1) + ... + fit(x_SN) ]
- Here SN is the number of food sources and fit(x_j) is inversely proportional to the objective function f(x_j). Usually fit is set as
  fit(x_j) = 1 / (1 + f(x_j)) if f(x_j) >= 0,  and fit(x_j) = 1 + |f(x_j)| otherwise
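- As a hedged illustration (not the exact code used in this work; function and variable names are mine), the employed-bee move and the Karaboga fitness/roulette-wheel rule can be sketched in Python as follows:

```python
import numpy as np

def abc_employed_phase(f, foods, trials, rng):
    """One employed-bee sweep of the standard ABC neighbour search (sketch).
    foods: (SN, dim) food-source positions; trials: abandonment counters."""
    sn, dim = foods.shape
    for j in range(sn):
        i = rng.integers(dim)                              # random parameter to modify
        k = rng.choice([n for n in range(sn) if n != j])   # random partner, k != j
        phi = rng.uniform(-1.0, 1.0)
        candidate = foods[j].copy()
        candidate[i] = foods[j, i] + phi * (foods[j, i] - foods[k, i])
        if f(candidate) < f(foods[j]):                     # greedy replacement
            foods[j], trials[j] = candidate, 0
        else:
            trials[j] += 1                                 # counts toward scout abandonment

def onlooker_probabilities(f_values):
    """Karaboga fitness transform and the roulette-wheel probabilities p_j."""
    f_values = np.asarray(f_values, float)
    fit = np.where(f_values >= 0, 1.0 / (1.0 + f_values), 1.0 + np.abs(f_values))
    return fit / fit.sum()
```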
Part I: AsBeC (1)
- Since the original paper by Karaboga, much research on the topic has been developed, but none highlights a performance gain when only a few function evaluations are allowed. This framework motivates the introduction and analysis of modifications that are effective with small bee colonies and few iterations.
- Some of the improvements applied here exploit basic principles brought into standard ABC by other authors, while others introduce novel ideas. These technologies make it possible to address the ABC deficiencies.
- The technologies presented try to speed up the improvement of the best solutions within their neighborhood, without clustering the swarm and leading to premature overall convergence. In fact, the aim of this work is to improve the local search skills of the original ABC (exploitation) without worsening its global attitude (exploration), especially during the first search phases.
Part I: AsBeC (2)
- The technologies have been classified into two main groups, which explain the name of the new algorithm: Artificial super-Bee enhanced Colony (AsBeC).
- 1. Enhancements
  - These are modifications that do not alter the architecture of the original ABC, but make it work in a slightly different way to match specific goals, such as improving speed over a short optimization period:
    - Each squad of bees can have more time to evolve its nectar sources (Postponed hive dance)
    - Exploration can be privileged by setting more than one parameter to change (Multiple parameter selection)
    - For small swarms, the exploitation of the best food sources can be privileged, always penalizing the worst ones (Strictly biased onlooker assignment)
    - The scout can be relocated in a range that depends on the position of the food sources (Smart scout repositioning)
Part I: AsBeC (3)
- 2. Hybridizations: the super-bee concept
  - These technologies alter the original pseudo-random movement of the bees, trying to accelerate the optimization process and improve its accuracy. With these modifications a bee assumes new abilities and is therefore called a super-bee:
    - The local behavior of the objective function can be estimated by linearity (Opposite principle)
    - A further evolution is to approximate the local concavity of the objective function (Second order interpolation), as illustrated by the sketch below
    - Data history can be used to make a prediction of the next best search direction (Prophet)
- All the possible combinations of technologies were tested in order to capture all the interactions between them. A statistical analysis of the results obtained on an extensive benchmark test bed allows selecting the best combination among the dominating solutions.
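- As an illustration only of what a second-order (parabolic) local model along one coordinate could look like (the exact AsBeC formulation is described in the related paper; names and logic below are assumptions):

```python
def parabolic_vertex(x1, f1, x2, f2, x3, f3):
    """Vertex (stationary point) of the parabola through three samples taken
    along one coordinate: a cheap model of the local concavity."""
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    if den == 0.0:                 # collinear samples: no curvature information
        return None
    return x2 - 0.5 * num / den

# e.g. samples of f(x) = (x - 1)**2 at x = 0, 2, 3 recover the vertex at x = 1
print(parabolic_vertex(0, 1, 2, 1, 3, 4))
```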
Part I: AsBeC (4)
Part I: ABC vs. AsBeC (1)
- A set of 10 analytical mathematical test functions has been selected as a benchmark. Even if this set is far from representing a good sample of real-world numerical optimizations, it tries to gather many characteristics that appear in engineering problems. It contains unimodal, multimodal, separable and non-separable functions with domain dimensions between 5 and 50. It includes functions with a few distant local minima, thousands of close local minima, stochastic noise and very narrow holes. (Standard definitions of a few of these functions are sketched after the table.)
Function name | Characteristics | Dimension | Range for each dimension
Sphere | US | 50 | -100 < xi < 100
Dixon-Price | UN | 20 | -10 < xi < 10
Schwefel | MS | 5 | -500 < xi < 500
Stochastic Styblinski-Tang (15% noise) | MS | 5 | -5 < xi < 5
Levy | MS | 10 | -100 < xi < 100
Rastrigin | MS | 10 | -10 < xi < 10
Perm | MN | 5 | -5 < xi < 5
Rosenbrock | MN | 10 | -5 < xi < 5
Ackley | MN | 10 | -20 < xi < 70
Griewank | MN | 30 | -600 < xi < 600
(Characteristics: U = unimodal, M = multimodal, S = separable, N = non-separable)
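For reference, the standard analytical forms of a few of these benchmarks are sketched below in Python; the exact variants, shifts and noise models used in the tests may differ:

```python
import numpy as np

def sphere(x):                    # unimodal, separable
    return float(np.sum(x ** 2))

def rastrigin(x):                 # multimodal, separable: many close local minima
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def ackley(x):                    # multimodal, non-separable
    d = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def griewank(x):                  # multimodal, non-separable
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)
```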
Part I: ABC vs. AsBeC (2) [plots: Sphere, Dixon-Price, Schwefel, Styblinski-Tang]
Part I: ABC vs. AsBeC (3) [plots: Levy, Rastrigin, Perm, Rosenbrock]
Part I: ABC vs. AsBeC (4) [plots: Ackley, Griewank]
Part I: ABC vs. AsBeC (5)
- For each test function and for each configuration of technologies, 300 runs were performed with a colony of 16 bees, a limit parameter equal to 10 and 100 overall iterations, corresponding to a maximum of 1600 function evaluations. MATLAB coding.
- We analyzed the gain G with respect to the standard ABC, intended as a delta performance estimator.
- Starting from the previous definition, it is possible to derive the Mean Logarithmic Gain (MLG) over all the benchmark functions.
Part I: ABC vs. AsBeC (6)
Part I: ABC vs. AsBeC (7) [chart legend: Postponed hive dance, Check3, Opposition principle, Second order interpolation, Strictly biased onlooker assignment, Prophet, Step 0.5]
Part I: ABC vs. AsBeC (8)
Part I: ABC vs. AsBeC (9)
- Since a modern workstation offers great calculation power thanks to numerous processing units, it is straightforward to take advantage of this technology even without making use of distributed computing or clusters. As a consequence, the serial AsBeC code has been modified into parallel versions.
- The number of onlookers, employees and food sources is taken equal to 8. The same optimization procedure can be carried out in up to 8 times less wall-clock time with a swarm of 16 elements. When the function evaluation is the bottleneck with respect to thread creation and communication, the parallelization factor is close to 8.
- Three possible parallelizations of bee-colony-based algorithms, already presented in the literature, are considered. They are implemented together with the AsBeC technologies.
Part I: ABC vs. AsBeC (10)
- Multi-Start parallel approach
  - It is the simplest way to take advantage of parallelization, consisting in running many independent instances of the optimization process in parallel, with different random seeds (see the sketch after this list).
- Multi-Swarm parallel approach
  - It is meant to be a better way to exploit the Multi-Start parallel approach for the same number of total function evaluations. Multi-Swarm includes communication among the different colonies that are running in parallel.
- Bee-by-Bee parallel approach
  - In the BbB approach half the colony moves all together in parallel. Losses in performance are expected, since there is no improvement communication during the 8 parallel runs and no sequential updating of upgraded food sources. The colony convergence slows down but its explorative skills are intensified. This approach is worthwhile when the time bottleneck is not in thread communication but in the function evaluation.
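- A minimal sketch of the Multi-Start idea with Python multiprocessing (the stand-in optimizer below is a random search, not AsBeC; objective, names and parameters are assumptions):

```python
import numpy as np
from multiprocessing import Pool

def objective(x):
    """Placeholder objective (sphere); stands in for the expensive simulation."""
    return float(np.sum(x ** 2))

def optimize(seed, dim=5, evals=200):
    """Placeholder serial optimizer (random search); in practice this would be
    one serial ABC/AsBeC run driven by its own random seed."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-5, 5, (evals, dim))
    vals = np.apply_along_axis(objective, 1, pts)
    i = vals.argmin()
    return pts[i], vals[i]

if __name__ == "__main__":
    with Pool(processes=8) as pool:                  # one worker per instance
        results = pool.map(optimize, range(8))       # 8 independent seeds
    best_x, best_f = min(results, key=lambda r: r[1])  # keep the overall best
    print(best_f)
```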
Part I: ABC vs. AsBeC (11)
Part I: ABC vs. AsBeC (12)
Part I: ABC vs. AsBeC (13)
Part I: ABC vs. AsBeC (14)
Part I: AsBeC (19)
- Tests with 10^5 function evaluations
Part I: AsBeC (20)
[Figure: Rastrigin function 2D, -2 < xi < 3; AsBeC vs. ABC]
Part II: Engineering optimization
Part II: Real-like LPT problems (1)
- Modern aeronautic Low Pressure gas Turbines (LPTs) are already characterized by high quality standards, thus they offer very narrow margins of improvement. The typical design process starts with a Concept Design (CD) phase, defined using mean-line 1D and other low-order tools, and evolves through a Preliminary Design (PD) phase, which allows the detailed geometric definition.
Part II: Real-like LPT problems (2)
- In this framework, the intensive application and tuning of multidisciplinary, high-performance and multi-objective optimization strategies is the only way to properly handle the complicated peculiarities of the design.
- Over the years, different strategies and algorithms have been implemented, from the simplest to the most advanced ones:
  - A basic gradient method
  - A path-based, semi-random, second-order method, the Interpolated Random Walk (IRW)
  - The multi-objective Genetic Diversity Evolutionary Algorithm (GeDEA, University of Padua, Prof. E. Benini and Dr. L. Dal Mas)
  - A multi-objective response surface approach based on Artificial Neural Networks (ANN) and Latin Optimal Hypercube (LOH)
  - The brand new AsBeC algorithm, based on bee-colony SI
  - Parallelization, speedup arrangements and hybrid strategies
Part II: Real-like LPT problems (3)
- The PD phase was selected as a real-like design benchmark to illustrate results. In this phase, the local geometries of the 3D blades (typically at 5%, 50% and 95% span) are refined by means of Q3D CFD simulations (from 15 seconds to 30 minutes per run).
Part II: Real-like LPT problems (4)
- In the PD framework, two different types of optimization problems have been addressed in a single-row environment:
- Fitting operations 3D/Q3D
  - It ensures a reliable geometry optimization, consisting in overlapping the isentropic 3D/Q3D Mach profiles. It is a challenging 5-dimensional, quasi mono-objective optimization problem, characterized by jagged, very large and not well-known boundaries. The solution may not be unique and the domain space usually presents many minima with close objective function values.
- Geometrical optimization
  - It is the core of the PD phase. It is inherently multi-objective with strongly contrasting targets, but with easily knowable boundaries to set for feasibility. Typically multidisciplinary, at least aero-mechanical, it is a 6-dimensional problem and 3 reference objectives are set: efficiency, target area and MachConvergence.
Part II: Real-like LPT problems (5)
[Figure: blade parameters for the Fitting 3D/Q3D and Geometrical optimization problems: Radius at Leading Edge, Axial Chord, Tangential Chord, Unguided Turning, Inlet Blade Angle, Inlet Wedge Angle, Leading Edge Radius, Exit Blade Angle, Radius at Trailing Edge, Number of Blades, Throat, Leading Edge Eccentricity, Trailing Edge Eccentricity]
Part II: Implementation of the bee colony (1)
- The HUB section is selected as a real-like example benchmark for the ABC vs. AsBeC comparisons. 24 identical runs were performed and averaged for the serial and MS versions; at least 8 runs were averaged for BbB.
- Fitting results are presented for 100 function evaluations (serial) and 550 (parallel), corresponding to 30 minutes of machine time.
Part II: Implementation of the bee colony (2)
- The fitting problem represents one of the severest test cases to be finely solved quickly by population-based algorithms, due to the boundary settings. Path-based algorithms (IRW) are advantaged.
- The bee colony's independence from the range setting is impressive, higher for AsBeC than for ABC, especially when compared to GeDEA. To prove this statement, three sets of boundaries have been considered and the optimization procedures re-performed, averaging the results.
Algorithm | Standard deviation of final ErrorRatio against range setting
ABC | 1.27
AsBeC | 0.23
ABC BbB |
AsBeC BbB |
GeDEA |

Range | Da | K33 | K66 | KTE | DPout | V
Large | 5 | 20 | 20 | 20 | 10 | 4.00E-03
Narrow | 3 | 5 | 5 | 5 | 1 | 3.75E-06
Custom | 5 | 15 | 10 | 15 | 3 | 3.38E-04