Title: Evolutionary Programming
Evolutionary Programming

EP quick overview
- Developed in the USA in the 1960s
- Early names: D. Fogel
- Typically applied to:
  - traditional EP: machine learning tasks by finite state machines
  - contemporary EP: (numerical) optimization
- Attributed features:
  - very open framework: any representation and mutation operator is OK
  - crossbred with ES (contemporary EP)
  - consequently, hard to say what "standard" EP is
- Special:
  - no recombination
  - self-adaptation of parameters is standard (contemporary EP)
EP technical summary
Historical EP perspective
- EP algorithms try to emulate the natural evolutionary behavior of RNA (ribonucleic acid) coded entities such as viruses.
- They try to replicate the fact that viruses adapt very fast to environmental changes.
- This is dissimilar to DNA (deoxyribonucleic acid) coded creatures (such as ourselves), which rely on mating (crossover) for evolutionary adaptation.
- Viruses rely on heavy mutation to evolve.
- So, even though memory of the evolutionary past is lost, a highly developed (and especially fast) evolution scheme is adopted.
Historical EP perspective
- Initial EP algorithms evolved finite state machines.
- Fogel (1966) described finite state automata that were evolved to predict symbol strings generated from Markov processes and non-stationary time series.
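Fogel's setup can be illustrated with a toy sketch. The FSM encoding, scoring, and mutation operator below are simplified assumptions for illustration, not his exact 1966 formulation (his machines had richer state-change, add-state, and delete-state mutations):

```python
import random

# Illustrative sketch: an FSM maps (state, input symbol) -> (next state,
# predicted next symbol). Fitness is the fraction of correct predictions.

def make_random_fsm(n_states, alphabet):
    return {(s, a): (random.randrange(n_states), random.choice(alphabet))
            for s in range(n_states) for a in alphabet}

def prediction_score(fsm, sequence):
    """Fraction of symbols the machine predicts correctly, starting in state 0."""
    state, correct = 0, 0
    for cur, nxt in zip(sequence, sequence[1:]):
        state, predicted = fsm[(state, cur)]
        correct += (predicted == nxt)
    return correct / (len(sequence) - 1)

def mutate(fsm, n_states, alphabet):
    """Point mutation: redirect one transition or change one predicted symbol."""
    child = dict(fsm)
    key = random.choice(list(child))
    nxt, out = child[key]
    if random.random() < 0.5:
        child[key] = (random.randrange(n_states), out)
    else:
        child[key] = (nxt, random.choice(alphabet))
    return child

random.seed(0)
seq = "01" * 30                      # a perfectly periodic, learnable string
fsm = make_random_fsm(2, "01")
for _ in range(500):                 # simple (1+1)-style evolutionary loop
    child = mutate(fsm, 2, "01")
    if prediction_score(child, seq) >= prediction_score(fsm, seq):
        fsm = child
print(prediction_score(fsm, seq))
```

A machine whose transitions match the period of the sequence scores a perfect 1.0, which is what the loop converges towards.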
EP versus ES
- EP and ES are very similar, although the two approaches developed independently.
- Main differences between ES and EP:
  - Selection: EP typically uses stochastic tournament selection.
  - Recombination: traditionally EP did not use crossover; however, many hybrid EP/ES algorithms now exist.
  - The choice of EP genotype representation, and of which variation operators to use, is always problem dependent.
Example: Economic dispatch of isolated power systems using EP
- EP applied as an advanced control technique for isolated power networks (e.g. Europe's island communities) with integrated renewable energy sources (e.g. wind power).
- Economic dispatch: planning the contribution of each generating unit in a power network in order to meet customer demand at the lowest possible production cost.
- Operational constraint: use renewable energy sources (e.g. wind, which is highly intermittent and unpredictable) whenever possible, in order to minimize operating costs.
Economic dispatch of isolated power systems using EP
- EP algorithms in economic dispatch have clear advantages over traditional methods (and GAs):
  - They do not need any special coding of individuals.
  - Since the desired outcome in economic dispatch is the optimal operating point of each dispatched unit (a real number), each individual can be directly represented as a set of real numbers, each one being the power produced by the unit it concerns.
  - Since each individual codes within itself its own mutation rate, and since that rate is itself mutated, the EP algorithm provides a self-regulating adaptive scheme.
EP algorithm for economic dispatch

Fitness function

EP algorithm input
- User-defined properties of the EP algorithm:
  - Population size: 10
  - Number of generations: 200
  - Penalty for overload (parameter of the fitness function)
  - Penalty for power losses (parameter of the fitness function)
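The slides do not show the fitness function itself, but a penalty-based formulation along these lines is plausible; the linear cost model, loss fraction, and default penalty weights below are illustrative assumptions, not the values from the case study:

```python
def dispatch_fitness(powers, demand, cost_coeffs, capacity,
                     overload_penalty=1000.0, loss_penalty=100.0,
                     loss_fraction=0.03):
    """Illustrative penalty-based fitness for economic dispatch (to minimize).

    powers      : proposed output of each unit (MW) -- the individual
    demand      : total customer demand (MW)
    cost_coeffs : per-unit (a, b) for cost a + b * p -- assumed linear costs
    capacity    : per-unit maximum output (MW)
    """
    production_cost = sum(a + b * p for (a, b), p in zip(cost_coeffs, powers))
    # Penalize units driven beyond their capacity (overload).
    overload = sum(max(0.0, p - cmax) for p, cmax in zip(powers, capacity))
    # Crude loss model: a fixed fraction of generated power is lost.
    losses = loss_fraction * sum(powers)
    # Penalize failure to meet demand after losses.
    shortfall = max(0.0, demand - (sum(powers) - losses))
    return (production_cost
            + overload_penalty * (overload + shortfall)
            + loss_penalty * losses)

# Two units, 100 MW demand: the cheaper unit should carry more load.
print(dispatch_fitness([60.0, 45.0], 100.0,
                       [(5.0, 2.0), (5.0, 3.0)], [80.0, 80.0]))
```

The overload and loss penalties correspond to the two user-defined penalty parameters listed above; raising them tightens the respective constraints.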
Case study: Power system of Crete
- The power network includes:
  - 25 generator buses
  - 5 synchronous generators (on bars 0, 1, 2, 3, 4, 5, 7)
  - 8 asynchronous generators (on bars 17, 18, 19, 20, 21, 22, 23, 34)
  - each with a capacitor bank (not shown), and 6 transmission lines.
Case study: Results
- Able to evolve an effective solution in real time (142 seconds).
- No special requirements for the objective function and constraints.
- Robust solutions in a complex domain containing multiple local optima, multiple objectives, and non-convex, non-differentiable functions.
Best solution for the dispatch minimizing power losses.
Modern EP
- In general: no predefined representation.
- Thus no predefined mutation operator (mutation must match the representation).
- Often self-adaptation of mutation parameters.
- Here we present one EP variant, not "the" canonical EP.
Representation
- For continuous parameter optimization.
- Chromosomes consist of two parts:
  - Object variables: x1, …, xn
  - Mutation step sizes: σ1, …, σn
- Full chromosome: ⟨x1, …, xn, σ1, …, σn⟩
Mutation
- Chromosomes: ⟨x1, …, xn, σ1, …, σn⟩
- σi′ = σi · (1 + α · N(0,1))
- xi′ = xi + σi′ · Ni(0,1)
- α ≈ 0.2
- Boundary rule: σ′ < ε0 ⇒ σ′ = ε0
- Other variants proposed and tried:
  - Lognormal scheme as in ES
  - Other distributions, e.g. Cauchy instead of Gaussian
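The mutation scheme above can be sketched directly; the value of ε0 below is an assumption (the slide only states the boundary rule, not the bound):

```python
import random

ALPHA = 0.2    # step-size perturbation factor (slide value)
EPS0 = 1e-6    # lower bound epsilon_0 for step sizes (assumed value)

def ep_mutate(x, sigma):
    """EP mutation of a chromosome <x1..xn, sigma1..sigman>.

    sigma_i' = sigma_i * (1 + ALPHA * N(0,1)), bounded below by EPS0
    x_i'     = x_i + sigma_i' * N_i(0,1)
    """
    sigma_new = [max(EPS0, s * (1 + ALPHA * random.gauss(0, 1)))
                 for s in sigma]
    x_new = [xi + si * random.gauss(0, 1) for xi, si in zip(x, sigma_new)]
    return x_new, sigma_new

random.seed(1)
child_x, child_sigma = ep_mutate([0.0, 0.0], [1.0, 1.0])
print(child_x, child_sigma)
```

Note that the step sizes are mutated first and the new values are used to perturb the object variables, so a child is always sampled with its own (already mutated) step sizes.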
Recombination
- Traditionally none.
- Rationale: a point in the search space stands for a species, not for an individual, and there can be no crossover between species.
- Much historical debate: mutation vs. crossover.
- A pragmatic approach seems to prevail today.
Parent selection
- Each individual creates one child by mutation.
- Thus:
  - Deterministic (each parent is mutated to produce one child).
  - Not biased by fitness (as is typically the case in ES).
Survivor selection
- P(t): μ parents, P′(t): μ offspring
- Pairwise competitions in round-robin format:
  - Each solution x from P(t) ∪ P′(t) is evaluated against q other randomly chosen solutions.
  - For each comparison, a "win" is assigned if x is better than its opponent.
  - The μ solutions with the greatest number of wins are retained to be parents of the next generation.
- Parameter q allows tuning selection pressure.
- Typically q = 10.
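The round-robin scheme can be sketched as follows; maximization of fitness is assumed here, and this simple version may occasionally draw a solution itself as one of its opponents:

```python
import random

def ep_survivor_selection(parents, offspring, fitness, mu, q=10, rng=random):
    """Round-robin (q-opponent) survivor selection, as on the slide.

    Each solution in the joint pool meets q randomly chosen opponents and
    scores a win whenever its fitness is higher. The mu solutions with the
    most wins survive to parent the next generation.
    """
    pool = parents + offspring
    wins = []
    for x in pool:
        opponents = [rng.choice(pool) for _ in range(q)]
        wins.append(sum(fitness(x) > fitness(o) for o in opponents))
    # Rank the pool by win count, best first, and keep the top mu.
    ranked = sorted(zip(wins, range(len(pool))), key=lambda t: -t[0])
    return [pool[i] for _, i in ranked[:mu]]

random.seed(2)
fitness = lambda x: -abs(x - 3.0)          # maximize closeness to 3
survivors = ep_survivor_selection([0.0, 1.0, 5.0], [2.9, 4.0, -1.0],
                                  fitness, mu=3)
print(survivors)
```

A larger q makes the win counts track fitness more faithfully (higher selection pressure); a small q keeps selection noisy and more exploratory.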
Example: Co-evolution of predator-prey strategies
- Co-evolving two robot controllers in competition with each other.
- Using an EP/GA hybrid to derive weight vectors for effective neural network controllers.
- The evolutionary process is very different when two populations are co-evolved in competition with each other, since the performance of each robot depends on the performance of the other robot.
Example: Co-evolution of predator-prey strategies
- Predator robot: vision system with a 36-degree field of view.
- Prey robot: simple sensors for detecting an object at up to 2 cm distance, but twice as fast as the predator.
- The robots were co-evolved in a square arena, and each pair of predator and prey robots was let free to move for 2 minutes (or less if the predator could catch the prey).
Co-evolution of predator-prey strategies: Algorithm specifics
- Population 1 (Predator): 100 genotypes, each of 8 × (30 input-output neuron connections + 2 output unit thresholds) bits.
- Population 2 (Prey): 100 genotypes, each of 8 × (20 input-output neuron connections + 2 output unit thresholds) bits.
- Genotypes encoded as bit strings (each weight variable in the vector encoded using 8 bits).
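The 8-bits-per-weight encoding can be sketched as below; the weight range [-1, 1] and the linear mapping are illustrative assumptions, since the slides only state that each weight uses 8 bits:

```python
def decode_weights(bits, n_weights, lo=-1.0, hi=1.0):
    """Decode a genotype bit string into real-valued network weights.

    Each weight occupies 8 bits, interpreted as an integer 0..255 and
    mapped linearly onto [lo, hi] (assumed range and mapping).
    """
    assert len(bits) == 8 * n_weights
    weights = []
    for i in range(n_weights):
        byte = bits[8 * i: 8 * (i + 1)]
        value = int(byte, 2)                 # 0 .. 255
        weights.append(lo + (hi - lo) * value / 255)
    return weights

# A 2-weight genotype: all-zeros byte -> lo, all-ones byte -> hi.
print(decode_weights("00000000" + "11111111", 2))  # -> [-1.0, 1.0]
```

Under this encoding, the per-bit mutation described on the next slide perturbs weights at varying granularity, since flipping a high-order bit moves a weight much further than flipping a low-order one.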
Co-evolution of predator-prey strategies: Algorithm specifics
- Mutation: bit substitution (applied to each bit) with probability 0.02.
- No crossover (no difference noticed in comparisons with previous experiments).
- Evolutionary run length: 100 generations.
- Each genotype tested against the 10 best competitors, from within the same generation (initially) and from previous generations (later); survivor selection via tournaments between each genotype and the fittest genotypes.
- Fitness function: 1 for the predator and 0 for the prey if the predator was able to catch the prey; conversely, 0 for the predator and 1 for the prey if it was able to escape the predator.
Co-evolved strategies: Following the prey
- Circa 20 generations: predators developed the ability to search for the prey and follow it.
- However, since the prey was twice as fast, this strategy did not always pay off for predators (prey is white, predator is black).
Co-evolved strategies: Anticipating the prey's trajectory
- Circa 45 generations: predators watched the prey from afar and eventually attacked it, anticipating its trajectory.
- As a consequence, the prey began to move so fast along the walls that the predator often missed the prey and crashed into the wall.
Co-evolved strategies: Spider strategy
- Circa 70 generations: predators developed a "spider strategy".
- Instead of attempting to go after the prey, the predator moved towards a wall and waited there for the prey, which moved so fast (along the walls) that it could not detect the predator early enough to avoid it!
Initial strategies transferred to predator and prey robots