Title: Introduction to Genetic Algorithms
1. Introduction to Genetic Algorithms
- For CSE 848 and ECE 802/601, Introduction to Evolutionary Computation
- Prepared by Erik Goodman, Professor, Electrical and Computer Engineering, Michigan State University, and Co-Director, Genetic Algorithms Research and Applications Group (GARAGe)
- Based on and accompanying Darrell Whitley's Genetic Algorithms Tutorial
2. Genetic Algorithms
- Are a method of search, often applied to optimization or learning
- Are stochastic, but are not random search
- Use an evolutionary analogy, "survival of the fittest"
- Are not fast in some senses, but are sometimes more robust and scale relatively well, so they can be useful
- Have extensions including Genetic Programming (GP) (LISP-like function trees), learning classifier systems (evolving rules), linear GP (evolving ordinary programs), and many others
3. The Canonical or Classical GA
- Maintains a set or population of strings at each stage
- Each string is called a chromosome, and encodes a candidate solution -- CLASSICALLY as a binary string (and now in almost any conceivable representation)
4. Criterion for Search
- Goodness (fitness) or optimality of a string's solution determines its FUTURE influence on the search process -- survival of the fittest
- Solutions which are good are used to generate other, similar solutions which may also be good (even better)
- The POPULATION at any time stores ALL we have learned about the solution, at any point
- Robustness (efficiency in finding good solutions in difficult searches) is key to GA success
5. Classical GA: The Representation
- 1011101010 -- a possible 10-bit string representing a possible solution to a problem
- A bit or subset of bits might represent the choice of some feature. For example, "4WD, 2-door, 4-cylinder, closed cargo area, blue" might be the meaning of the chromosome above, to be evaluated as the new standard vehicle for the US Post Office
- Each position (or each set of positions that encodes some feature) is called a LOCUS (plural LOCI)
- Each possible value at a locus is called an ALLELE
6. How Does a GA Operate?
- For ANY chromosome, we must be able to determine a FITNESS (a measure of performance toward an objective)
- The objective may be maximized or minimized; we usually say fitness is to be maximized, and if the objective is to be minimized, we define fitness from it as something to maximize (see the sketch below)
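- For concreteness, a minimal Python sketch of that trick (the names and the constant are illustrative):

      def objective(x):
          return (x - 3.0) ** 2            # a toy objective to be MINIMIZED

      def fitness(x, c_max=100.0):
          # fitness = C_max - objective, clipped at 0, so smaller
          # objective values map to larger fitness values
          return max(0.0, c_max - objective(x))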
7. GA Operators: Classical Mutation
- Operates on ONE parent chromosome
- Produces an offspring with changes
- Classically, toggles one bit in a binary representation
- So, for example, 1101000110 could mutate to 1111000110
- Each bit has the same probability of mutating
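- A minimal Python sketch of classical bitwise mutation (parameter names are illustrative):

      import random

      def mutate(bits, p_m=0.01):
          """Flip each bit independently with the same probability p_m."""
          return [b ^ 1 if random.random() < p_m else b for b in bits]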
8. Classical Crossover
- Operates on two parent chromosomes
- Produces one or two children or offspring
- Classical crossover occurs at 1 or 2 points
- For example (1-point on the left, 2-point on the right):

        1111111111        1111111111
      X 0000000000      X 0000000000
        ----------        ----------
        1110000000        1110000011
    and 0001111111    and 0001111100
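- A minimal Python sketch of both operators (cut points chosen uniformly at random):

      import random

      def one_point(p1, p2):
          """1-point crossover: swap the tails after a random cut."""
          cut = random.randint(1, len(p1) - 1)
          return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

      def two_point(p1, p2):
          """2-point crossover: swap the middle segment."""
          a, b = sorted(random.sample(range(1, len(p1)), 2))
          return (p1[:a] + p2[a:b] + p1[b:],
                  p2[:a] + p1[a:b] + p2[b:])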
9. Canonical GA: Differences from Other Search Methods
- Maintains a set or population of solutions at each stage (see blackboard)
- The classical or canonical GA always uses a crossover or recombination operator (whose domain is PAIRS of solutions (sometimes more))
- All we have learned up to time t is represented by time t's POPULATION of solutions
10. Contrast with Other Search Methods
- indirect -- setting derivatives to 0
- direct -- hill climber (already described)
- enumerative -- already described
- random -- already described
- simulated annealing -- already described
- Tabu -- already described
- RSM (response surface methodology) -- fits an approximate surface to a set of points, avoiding full evaluations during local search
11. GA -- When Might It Be Any Good?
- Highly multimodal functions
- Discrete or discontinuous functions
- High-dimensionality functions, including many combinatorial ones
- Nonlinear dependencies on parameters (interactions among parameters) -- "epistasis" makes it hard for other methods
- DON'T USE a GA if a hill-climber, etc., will work well.
12. Genetic Algorithm -- Meaning?
- Classical or canonical GA -- Holland ('60s, book in '75) -- binary chromosome, population, selection, crossover (recombination), low-rate mutation
- More general GA: population, selection, (+ recombination), (+ mutation) -- may be hybridized with LOTS of other stuff
13. Representation
- The problem is represented as a string, classically binary, known as an individual or chromosome
- What's on the chromosome is the GENOTYPE
- What it means in the problem context is the PHENOTYPE (e.g., a binary sequence may map to integers or reals, or an order of execution, etc.)
- Each position is called a locus or gene; a particular value for a locus is an allele. So locus 3 might now contain allele 5, and the "thickness" gene (which might be locus 8) might be set to allele 2, meaning its second-thinnest value.
14. Optimization Formulation
- Not all GAs are used for optimization -- also learning, etc.
- Commonly formulated as: given F(X1,...,Xn), find the set of Xi's (in a range) that extremizes F, often also subject to additional constraint equations (equality or inequality), Gi(X1,...,Xn) < Li, that must also be satisfied.
- Encoding obviously depends on the problem being solved
15. Discretization, etc.
- If the domain is real-valued, discretize to binary -- typically powers of 2 (need not be, in GALOPPS), with lower and upper limits, linear/exp/log scaling, etc.
- The end result (classically) is a bit string
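- A minimal Python sketch of decoding a bit string to a real with linear scaling (names are illustrative):

      def decode(bits, lo, hi):
          """Map a binary string linearly onto the interval [lo, hi]."""
          n = int(''.join(map(str, bits)), 2)
          return lo + (hi - lo) * n / (2 ** len(bits) - 1)

      print(decode([1, 0, 1, 1, 1, 0, 1, 0], 0.0, 10.0))   # 186/255 * 10 ~= 7.29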
16. Defining Objective/Fitness Functions
- Problem-specific, of course
- Many involve using a simulator
- Don't need to possess derivatives
- May be stochastic
- Need to evaluate thousands of times, so can't be TOO COSTLY
17. The "What" Function?
- In problem-domain form -- the absolute or raw fitness, or the evaluation, performance, or objective function
- The relative (to the population), possibly inverted and/or offset, scaled fitness is usually called the fitness function. Fitness should be MAXIMIZED, whereas the objective function might need to be MAXIMIZED OR MINIMIZED.
18. Selection
- Based on fitness, choose the set of individuals (the "intermediate" population) to:
- survive untouched, or
- be mutated, or
- in pairs, be crossed over and possibly mutated,
- forming the next population
- One individual may appear several times in the intermediate population (or the next population)
19. Types of Selection
- Using relative fitness (examples):
- roulette wheel -- classical Holland -- chunk of wheel proportional to relative fitness
- stochastic uniform sampling -- better sampling -- integer parts GUARANTEED
- Not requiring relative fitness:
- tournament selection (see the sketch below)
- rank-based selection (proportional or cutoff)
- elitist (mu, lambda) or (mu+lambda) from ES
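- A minimal Python sketch of tournament selection (k and the names are illustrative):

      import random

      def tournament_select(pop, fitness, k=2):
          """Pick k individuals at random; the fittest wins.
          Needs no scaled/relative fitness at all."""
          return max(random.sample(pop, k), key=fitness)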
20. Scaling of Relative Fitnesses
- Trouble: as evolution progresses, relative fitness differences get smaller (as the population members get more similar to each other). It is often helpful to SCALE relative fitnesses to keep about the same ratio of best guy/average guy, for example (see the sketch below).
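- A minimal Python sketch of linear fitness scaling in that spirit (the target best/average ratio c_mult is illustrative):

      def linear_scale(fits, c_mult=2.0):
          """Scale so the average is unchanged and the best becomes
          c_mult * average, keeping selection pressure roughly constant."""
          avg, best = sum(fits) / len(fits), max(fits)
          if best == avg:                  # population fully converged
              return list(fits)
          a = (c_mult - 1.0) * avg / (best - avg)
          b = avg * (1.0 - a)
          return [max(0.0, a * f + b) for f in fits]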
21. Recombination or Crossover
- On parameter-encoded representations:
- 1-pt example
- 2-pt example
- uniform example
- Linkage: loci nearby on the chromosome are not usually disrupted by a given crossover operator (cf. 1-pt, 2-pt, uniform re linkage)
- But use OTHER crossover operators for reordering problems (later)
22. Mutation
- On parameter-encoded representations:
- a single-bit flip is fine for true binary encodings
- a single-bit flip is NOT good for binary-mapped integers/reals -- Hamming cliffs
- For binary-mapped ints: use Gray codes and bit-flips (a sketch follows), or use random legal ints, or use 0-mean Gaussian changes, etc.
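- A minimal Python sketch of Gray coding (standard binary-reflected form):

      def bin_to_gray(n):
          """Adjacent integers differ in exactly one bit -- no Hamming cliffs."""
          return n ^ (n >> 1)

      def gray_to_bin(g):
          """Invert the Gray coding."""
          n = 0
          while g:
              n ^= g
              g >>= 1
          return n

      print(bin_to_gray(3), bin_to_gray(4))   # 2 (010) and 6 (110): one bit apart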
23. What is a GA DOING? -- Schemata and Hyperstuff
- Schema -- * added to the alphabet, means "don't care" / any value
- One schema, two schemata (forgive occasional misuse in Whitley)
- Definition: the ORDER of schema H, o(H), is the number of non-*'s
- Definition: the DEFINING LENGTH of a schema, D(H), is the distance between the first and last non-* in the schema; for example, D(1*0*10****) = 5 (= number of positions where 1-pt crossover can disrupt it)
- (NOTE: different crossover => different relationship to defining length)
- Strings or chromosomes or individuals or solutions are order-L schemata, where L is the length of the chromosome (in bits or loci). Chromosomes are INSTANCES (or members) of lower-order schemata
24. Cube and Hypercube
- Vertices are order-? schemata
- Edges are order-? schemata
- Planes are order-? schemata
- Cubes (a type of hyperplane) are order-? schemata
- 8 different order-1 schemata (cubes): 0***, 1***, *0**, *1**, **0*, **1*, ***0, ***1
25. Hypercubes, Hyperplanes, etc.
- (See pictures in Whitley tutorial or blackboard)
- Vertices correspond to order-L schemata (strings)
- Edges are order L-1 schemata, like 10* or 101*
- Faces are order L-2 schemata
- Etc., for hyperplanes of various orders
- A string is an instance of 2^L - 1 schemata, or a member of that many hyperplane partitions (-1 because all *'s, the whole space, is not counted as a schema, per Holland)
- List them, for L = 3
26. GA Sampling of Hyperplanes
- So, in general, a string of length L is an instance of 2^L - 1 schemata
- But how many schemata are there in the whole search space? (How many choices at each locus?)
- Since one string instances 2^L - 1 schemata, how much does a population tell us about schemata of various orders?
- Implicit parallelism: one string's fitness tells us something about the relative fitnesses of more than one schema (see the sketch below).
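- A minimal Python sketch that enumerates every schema a given string instances, confirming the 2^L - 1 count:

      from itertools import combinations

      def schemata_of(s):
          """All schemata string s matches (excluding the all-*'s schema)."""
          L, out = len(s), []
          for order in range(1, L + 1):            # number of fixed positions
              for fixed in combinations(range(L), order):
                  out.append(''.join(s[i] if i in fixed else '*'
                                     for i in range(L)))
          return out

      print(len(schemata_of('101')))   # 7 == 2**3 - 1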
27. Fitness and Schema/Hyperplane Sampling
- Whitley's illustration of various partitions of fitness hyperspace: plot fitness versus one variable, discretized as a K = 4-bit binary number -- then what do we get?
- The first graph shades 0***; the second superimposes *1**, so the crosshatched region is ?; the third superimposes 0*10.
28. How Do Schemata Propagate? Proportional Selection Favors Better Schemata
- Select the INTERMEDIATE population, the parents of the next generation, via fitness-proportional selection
- Let M(H,t) be the number of instances (samples) of schema H in the population at time t. Then fitness-proportional selection yields the expectation shown below.
- In an example, the actual number of instances of schemata (next page) in the intermediate generation tracked the expected number pretty well, in spite of the small pop size
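- Written out (with f(H,t) the mean fitness of H's instances and \bar{f}(t) the population mean fitness), the expectation is:

      \mathbb{E}\left[ M(H,\, t_{\mathrm{intermediate}}) \right] \;=\; M(H,t)\,\frac{f(H,t)}{\bar{f}(t)}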
29. Results of an example run (Whitley), showing that the observed numbers of instances of schemata track the expected numbers pretty well
30. Crossover Effect on Schemata
- One-point crossover examples (blackboard): 11******** and 1********1
- Two-point crossover examples (blackboard): (rings)
- The closer together loci are, the less likely they are to be disrupted by crossover, right? A "compact" representation is one that tends to keep alleles together under a given form of crossover (minimizes probability of disruption).
31. Linkage and Defining Length
- Linkage -- "coadapted alleles" (a generalization of a compact representation with respect to schemata)
- Example, convincing you that the probability of disruption (by 1-pt crossover) of schema H with defining length D(H) is D(H)/(L-1)
32. The Fundamental Theorem of Genetic Algorithms -- The Schema Theorem
- Holland published it in 1975, but had taught it much earlier (by 1968, for example, when I started my Ph.D. at UM)
- It provides a lower bound on the change in sampling rate of a single schema from generation t to t+1. We'll derive it in several steps, starting from the change caused by selection alone
33. Schema Theorem Derivation (cont.)
- Now we want to add the effect of crossover
- A fraction pc of the pop undergoes crossover, so:
- We will make a conservative assumption that crossover within the defining length of H is always disruptive to H, and will ignore gains (we're after a LOWER bound -- it won't be as tight, but simpler). Then (see below):
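- Written out, the lower bound at this step (selection plus disruptive 1-point crossover) is:

      M(H,\, t+1) \;\ge\; M(H,t)\,\frac{f(H,t)}{\bar{f}(t)} \left[ 1 - p_c\,\frac{D(H)}{L-1} \right]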
34. Schema Theorem Derivation (cont.)
- Whitley considers one non-disruption case that Holland didn't, originally
- If we cross H with an instance of itself, anywhere, we get no disruption. The chance of doing that, drawing the second parent at random, is P(H,t) = M(H,t)/popsize, so the probability of disruption by crossover is as shown below.
- Then we can simplify the inequality, dividing by popsize and rearranging with respect to pc
- This version ignores mutation and assumes the second parent is chosen at random. But it's usable already!
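- With that discount, the probability of disruption by crossover becomes:

      p_{\mathrm{disrupt}} \;\le\; p_c\,\frac{D(H)}{L-1}\,\bigl[ 1 - P(H,t) \bigr],
      \qquad P(H,t) = M(H,t)/\mathrm{popsize}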
35. Schema Theorem Derivation (cont.)
- Now, let's recognize that we'll choose the second parent for crossover based on fitness, too
- Now, let's add mutation's effects. What is the probability that a mutation affects schema H?
- (Assuming mutation always flips the bit or changes the allele)
- Each fixed bit of the schema (o(H) of them) changes with probability pm, so they ALL stay UNCHANGED with the probability shown below.
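- That is, all o(H) fixed positions of H survive mutation with probability:

      (1 - p_m)^{o(H)}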
36. Schema Theorem Derivation (cont.)
- Now we have a more comprehensive schema theorem (below)
- (This is where Whitley stops. We can use this, but...)
- Holland earlier generated a simpler, but less accurate, bound, first approximating the mutation loss factor as (1 - o(H)pm), assuming pm << 1.
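- Assembling the pieces above gives the comprehensive bound:

      M(H,\, t+1) \;\ge\; M(H,t)\,\frac{f(H,t)}{\bar{f}(t)}
        \left[ 1 - p_c\,\frac{D(H)}{L-1}\bigl(1 - P(H,t)\bigr) \right] (1 - p_m)^{o(H)}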
37. Schema Theorem Derivation (cont.)
- That yields a bound with a small pc-pm cross-product term; but, since pm << 1, we can ignore small cross-product terms and get the form shown below.
- That is what many people recognize as the classical form of the schema theorem.
- What does it tell us?
38. Using the Schema Theorem
- Even a simple form helps balance initial selection pressure, crossover and mutation rates, etc.
- Say the relative fitness of H is 1.2, pc = 0.5, pm = 0.05, and L = 20. What happens to H if H is long? Short? High order? Low order? (see the check below)
- Pitfalls: slow progress, random search, premature convergence, etc.
- Problem with the Schema Theorem: important at the beginning of search, but less useful later...
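- A quick Python check of the classical bound with the slide's numbers (the example D and o values are illustrative):

      def growth_bound(rel_fit, p_c, p_m, d, o, L=20):
          """Classical schema-theorem lower bound on H's growth factor."""
          return rel_fit * (1 - p_c * d / (L - 1) - o * p_m)

      print(growth_bound(1.2, 0.5, 0.05, d=2,  o=2))   # ~1.02: short/low-order H grows
      print(growth_bound(1.2, 0.5, 0.05, d=15, o=8))   # ~0.25: long/high-order H shrinks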
39. Building Block Hypothesis
- Define a building block as a "short, low-order, high-fitness schema"
- BB Hypothesis: "Short, low-order, and highly fit schemata are sampled, recombined, and resampled to form strings of potentially higher fitness... we construct better and better strings from the best partial solutions of past samplings." -- David Goldberg, 1989
- (GAs can be good at assembling BBs, but GAs are also useful for many problems for which BBs are not available)
40. Lessons (Not Always Followed)
- For newly discovered building blocks to be nurtured (made available for combination with others), but not allowed to take over the population (why?):
- Mutation rate should be ___ (but contrast with SA, ES, (1+lambda), ...)
- Crossover rate should be ___
- Selection should be able to ___
- Population size should be ___ (oops, what can we say about this, so far?)
41. A Traditional Way to Do GA Search
- Population large
- Mutation rate (per locus) pm = 1/L
- Crossover rate moderate (< 0.3)
- Selection scaled (or rank/tournament, etc.) such that the Schema Theorem allows new BBs to grow in number, but not lead to premature convergence
42. Schema Theorem and Representation/Crossover Types
- If we use a different type of representation or a different crossover operator, we must formulate a different schema theorem, using the same ideas about disruption of schemata.
- See Whitley (Fig. 4) for paths through search space under crossover
43. Uniform Crossover and Linkage
- 2-pt crossover is superior to 1-point
- Uniform crossover chooses the allele for each locus at random from either parent (see the sketch below)
- Uniform crossover is thus more disruptive than 1-pt or 2-pt crossover
- BUT uniform is unbiased relative to linkage
- If all you need is small populations and a rapid scramble to find good solutions, uniform xover sometimes works better... but is this what you need a GA for? Hmmmm...
- Otherwise, try to lay out the chromosome for good linkage, and use 2-pt crossover (or Booker's 1987 "reduced surrogate" crossover, described below)
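- A minimal Python sketch of uniform crossover:

      import random

      def uniform_crossover(p1, p2):
          """Each locus's alleles are swapped between the parents with
          probability 0.5 -- unbiased w.r.t. linkage, but disruptive."""
          c1, c2 = list(p1), list(p2)
          for i in range(len(c1)):
              if random.random() < 0.5:
                  c1[i], c2[i] = c2[i], c1[i]
          return c1, c2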
44. Inversion: An Idea to Try to Improve Linkage
- Tries to re-order loci on the chromosome WITHOUT changing the meaning of the loci in the process
- Means we must treat each locus as an (index, value) pair. We can then reshuffle the pairs at random, and let crossover work with them in the order they APPEAR on the chromosome, but have the fitness function keep the association of values with the indices of fields, unchanged.
45. Classical Inversion Operator
- Example: reverses field pairs i through k on the chromosome
- (a,va), (b,vb), (c,vc), (d,vd), (e,ve), (f,vf), (g,vg)
- After inversion of positions 2-4, yields:
- (a,va), (d,vd), (c,vc), (b,vb), (e,ve), (f,vf), (g,vg)
- Now fields a, d are more closely linked, and 1-pt or 2-pt crossover is less likely to separate them (see the sketch below)
- In practice, seldom used: must run the problem for an enormous time for such a second-level effect to be useful. Need to do it on the population level, or tag each inversion pattern (and force mates to have matching tags), or do repairs to crossovers to keep chromosomes legal, i.e., possessing one pair of each type.
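- A minimal Python sketch of the operator on (index, value) pairs:

      def invert(chrom, i, k):
          """Reverse fields i..k. Meaning is preserved because fitness
          reads values by index, not by position on the chromosome."""
          return chrom[:i] + chrom[i:k + 1][::-1] + chrom[k + 1:]

      pairs = [('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', 5)]
      print(invert(pairs, 1, 3))   # [('a',1), ('d',4), ('c',3), ('b',2), ('e',5)]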
46. Inversion is NOT a Reordering Operator
- In contrast, if trying to solve for the best permutation of [0, N], use the other reordering crossovers we'll discuss later. That's NOT inversion!
47. Crossover Between Similar Individuals
- As search progresses, more individuals tend to resemble each other
- When two similar individuals are crossed, the chances of yielding children different from the parents are lower for 1-pt and 2-pt than for uniform
- Can counter this with "reduced surrogate" crossover (1-pt, 2-pt)
48. Reduced Surrogates
- Given 0001111011010011 and 0001001010010010, drop the matching positions, getting:
- ----11---1-----1 and
- ----00---0-----0, the "reduced surrogates"
- If we pick crossover points IGNORING THE DASHES, 1-pt and 2-pt still search similarly to uniform (see the sketch below).
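- A minimal Python sketch of finding the reduced-surrogate positions (crossover points would then be drawn only from these):

      def differing_positions(p1, p2):
          """Positions where the parents differ -- the non-dash loci."""
          return [i for i, (a, b) in enumerate(zip(p1, p2)) if a != b]

      print(differing_positions('0001111011010011',
                                '0001001010010010'))   # [4, 5, 9, 15]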
49. The Case for Binary Alphabets
- Deals with the efficiency of sampling schemata
- Minimal alphabet => maximum number of hyperplanes directly available in the encoding for schema processing, and a higher rate of sampling of low-order schemata than with a larger alphabet
- (See p. 20, Whitley, for tables)
- Half of a random initial pop. samples each order-1 schema, a quarter samples each order-2 schema, etc.
- If we use an alphabet of size 10, many schemata of order 2 will not be sampled in an initial population of 50. (Of course, each order-1 schema sampled gives us info about a more-than-3-bit allele.)
50. The Case Against
- Antonisse raises counter-arguments on a theoretical basis, and the question of effectiveness is really open.
- But we often don't want to treat the chromosome as a bit string: we encode ints, allow crossover only between int fields, not at bit boundaries, and use problem-specific representations.
- Losses in schema search efficiency may be outweighed by gains in naturalness of the mapping, keeping fields legal, etc.
- So we will most often use non-binary strings
- (GALOPPS lets you go either way)
51. The N^3 Argument (Implicit or Intrinsic Parallelism)
- Assertion: a GA with pop size N can usefully process on the order of N^3 hyperplanes (schemata) in a generation.
- (WOW! If N = 100, N^3 = 1 million)
- Derivation -- assume:
- a random population of size N;
- we need f instances of a schema to claim we are processing it in a statistically significant way in one generation.
52. The N^3 Argument (cont.)
- Example: to have 8 samples (on average) of 2nd-order schemata in a pop. (there are 4 distinct (CONFLICTING) schemata in each 2-position pair -- for example, 00..., 01..., 10..., 11..., with *'s elsewhere), we'd need 4 bit patterns x 8 instances = 32 = popsize.
- In general, the highest ORDER of schema, theta, that is processed is theta = log(N/f); in our case, log(32/8) = log(4) = 2. (log means log base 2)
53. The N^3 Argument (cont.)
- But the number of distinct schemata of order theta is C(L, theta) * 2^theta: the number of ways to pick theta different positions, times all possible assignments of binary values to those positions.
- So we are trying to argue that C(L, theta) * 2^theta >= N^3; since theta = log(N/f), i.e., N = f * 2^theta, this is the claim that C(L, theta) * 2^theta >= (f * 2^theta)^3.
54. The N^3 Argument (cont.)
- Rather than proving anything general, Fitzpatrick & Grefenstette ('88) argued as follows:
- Assume 2^6 <= N <= 2^20 and L >= 64
- Pick f = 8, which implies 2^3 <= N/f <= 2^17
- By inspection (plug in N's, get theta's, etc.), the number of schemata processed is greater than N^3. So, as long as our population size is REASONABLE (64 to a million) and L is large enough (problem hard enough), the argument holds.
- But this deals with the initial population, and it does not necessarily hold for the latter stages of evolution. Still, it may help to explain why GAs can work so well (see the check below)
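- A quick Python check of the count for one (N, L) pair:

      from math import comb, log2

      def schemata_processed(N, L, f=8):
          """C(L, theta) * 2**theta, with theta = log2(N/f)."""
          theta = int(log2(N / f))
          return comb(L, theta) * 2 ** theta

      print(schemata_processed(64, 64), 64 ** 3)   # 333312 > 262144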
55. Exponentially Increasing Sampling and the K-Armed Bandit Problem
- The Schema Theorem says M(H, t+1) >= k * M(H, t)
- (if we neglect certain changes)
- That is, H's instances in the population grow exponentially, as long as M(H,t) is small relative to the pop size and k > 1 (H is a building block).
- Is this a good way to allocate trials to schemata? There is an argument that we SHOULD devote an exponentially increasing fraction of trials to schemata that have performed better in the samples so far
56. Two-Armed Bandit Problem (from Goldberg, '89)
- 1-armed bandit: a slot machine
- 2-armed bandit: a slot machine with 2 handles, NOT necessarily yielding the same payoff odds (2 different slot machines)
- If we can make a total of N pulls, how should we proceed so as to maximize the expected final total payoff? Ideas???
57. Two-Armed Bandit (cont.)
- Assume LEFT pays with (unknown to us) expected value m1 and variance s1^2, and RIGHT pays m2, with variance s2^2.
- The DILEMMA: must EXPLORE while EXPLOITING. Clearly a tradeoff must be made. Given that one arm seems to be paying off better than the other SO FAR, how many trials should be given to the BETTER (so far) arm, and how many to the POORER (so far) arm?
58. Two-Armed Bandit (cont.)
- Classical approach: SEPARATE EXPLORATION from EXPLOITATION. If we will do N trials, start by allocating n trials to each arm (2n < N) to decide WHICH arm appears to be better, and then allocate ALL remaining (N - 2n) trials to it.
- DeJong calculated the expected loss (compared to the OPTIMUM) of using this strategy:
- L(N, n) = |m1 - m2| * [(N - n) * q(n) + n * (1 - q(n))], where q(n) is the probability that the WORST arm is the OBSERVED BEST arm after n trials on each machine.
59. Two-Armed Bandit (cont.)
- This q(n) is well approximated by the tail of the normal distribution: q(n) ~= (1/sqrt(2 pi)) * e^(-x^2/2) / x, where x = ((m1 - m2)/sqrt(s1^2 + s2^2)) * sqrt(n)
- (x is the signal-difference-to-noise ratio times sqrt(n).)
- (Let's call the signal-difference-to-noise ratio c, so x = c * sqrt(n).)
60. Two-Armed Bandit (cont.)
- The LARGER x becomes, the SMALLER q(n) becomes (i.e., smaller chance of error). You can see that q(n) (the chance of error) DECLINES as n is picked larger, as the difference in expected values INCREASES, or as the sum of the variances DECREASES.
- The equation shows two sources of expected loss:
- L(N, n) = |m1 - m2| * [(N - n) * q(n) + n * (1 - q(n))]
- The first term is the loss due to playing the wrong arm after exploration; the second is the loss due to trials on the wrong arm during exploration.
61. Two-Armed Bandit (cont.)
- For any N, solve for the optimal experiment size n by setting the derivative of the loss equation to 0. The graph (after Fig. 2.2 in Goldberg, '89) shows the optimal n as a function of the total number of trials, N, and c, the ratio of signal difference to noise.
- [Graph omitted: optimal experiment size n vs. total trials N, for several values of c.]
- From the graph, we see that the total number of experiments N grows as a greater-than-exponential function of the ideal number of trials n in the exploration period. That means, according to classical decision theory, that we should be allocating trials to the BETTER (higher measured fitness during the exploration period) of the two arms at a GREATER THAN EXPONENTIAL RATE.
62. Two-Armed Bandit, K-Armed Bandit
- Now, let our "arms" represent competing schemata. Then the future sampling of the better one (to date) should increase at a larger-than-exponential rate. A GA, using selection, crossover, and mutation, does that (when set properly, according to the schema theorem). If there are K competing schemata over a set of positions, then it's a K-armed bandit.
- But at any time, MANY different schemata are being processed, with each competing set representing a K-armed bandit scenario. So maybe the GA's way of allocating trials to schemata is pretty good!
63. Early Theory for GAs
- Vose and Liepins ('91) produced the most well-known GA theory model
- The main elements:
- a vector of size 2^L containing the proportion of the population with genotype i at time t (before selection), P(S_i, t); the whole vector is denoted p_t
- a matrix r_ij(k) of probabilities that crossing strings i and j will produce string k
- Then (see the sketch below):
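- A sketch of the form that equation takes (the notation here is approximate, not necessarily Vose & Liepins' exact formulation): with s_t the selection-weighted version of p_t, the expected proportion of string k in the next generation is

      \mathbb{E}\left[ p_{t+1}(k) \right] \;=\; \sum_{i,j} s_t(i)\, s_t(j)\, r_{ij}(k)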
64. Vose & Liepins (cont.)
- r is used to construct M, the "mixing matrix" that tells, for each possible string, the probability that it is created from each pair of parent strings. Mutation can also be included, to generate a further huge matrix that, in theory, could be used, with an initial population, to calculate each successive step in evolution.
65. Vose & Liepins (cont.)
- The problem is that not many theoretical results with practical implications can be obtained, because for interesting problems the matrices are too huge to be usable, and the effects of selection are difficult to estimate. More recent work in a statistical-mechanics approach to GA theory seems to me to hold far more interest.
66. What are Common Problems when Using GAs in Practice?
- Hitchhiking: in BB1.BB2.junk.BB3.BB4, "junk" adjacent to building blocks tends to get fixed -- can be a problem
- Deception: e.g., a 3-bit deceptive function
- Epistasis: nonlinear effects, more difficult to capture if spread out on the chromosome
67. In PRACTICE, GAs Do a JOB
- DOESN'T necessarily mean finding the global optimum
- DOES mean trying to find better approximate answers than other methods do, within the time available!
- People use any "dirty tricks" that work:
- Hybridize with local search operations
- Use multiple populations/multiple restarts, etc.
- Use problem-specific representations and operators
- The GOALS:
- Minimize the # of function evaluations needed
- Balance exploration/exploitation so we get the best answer we can during the time available (AVOIDING premature convergence)
68. Different Forms of GA
- Generational vs. Steady-State
- Generation gap: 1.0 means replace ALL by newly generated children; at the lower extreme, generate 1 (or 2) offspring per "generation" (called steady-state)
- (GALOPPS allows either, by setting crossover rates)
69. Different Forms of GA
- Replacement policy:
- 1) Offspring replace parents
- 2) K offspring replace the K worst ones
- 3) Offspring replace random individuals in the intermediate population
- 4) Offspring are "crowded" in
- (GALOPPS allows 1, 3, 4 easily; 2 takes mods)
70. Crowding
- Crowding (DeJong) helps form niches and avoid premature takeover by fit individuals
- For each child:
- pick K candidates for replacement, at random, from the intermediate population
- calculate the (pseudo-)Hamming distance from the child to each
- replace the individual most similar to the child (see the sketch below)
- Effect?
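- A minimal Python sketch of that replacement step (names are illustrative):

      import random

      def crowding_replace(child, pop, distance, k=3):
          """Replace, among k random candidates, the one most similar
          to the child (distance is e.g. Hamming distance)."""
          idxs = random.sample(range(len(pop)), k)
          victim = min(idxs, key=lambda i: distance(child, pop[i]))
          pop[victim] = child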
71. Elitism
- Artificially protects the fittest K members of the population against replacement in the next generation
- Often useful, but beware if using multiple subpopulations
- K is often 1; it may be larger, even large
- (ES often keeps the k best of the offspring, or of the offspring and parents, and throws away the rest)
72. Example GA Packages: GENITOR (Whitley)
- Steady-state GA
- Child replaces the least-fit individual
- Fitness is assigned according to rank (so no scaling is needed)
- (Elitism is automatic)
- (Can be done in GALOPPS except for worst replacement; the user must rewrite that part)
73. Example GA Packages: CHC (Eshelman)
- Elitism -- (mu+lambda) from ES: generate lambda offspring from mu parents, keep the best mu of the mu+lambda parents and children.
- Uses "incest prevention" (reduction): picks mates on the basis of their Hamming dissimilarity
- HUX form of uniform crossover: highly disruptive
- Rejuvenates with "cataclysmic mutation" when the population starts converging, which is often (small populations are used)
- GALOPPS allows the last three, not the first one
- I don't favor it except for relatively easy problem spaces
74. Hybridizing GAs: a Good Idea!
- IDEA: combine a GA with local or problem-specific search algorithms
- HOW: typically, for some or all individuals, start from the GA solution, take one or more steps according to another algorithm, and use the resulting fitness as the fitness of the chromosome.
- If we also change the genotype, it is Lamarckian; if we don't, Baldwinian (preserves schema processing) (see the sketch below)
- Helpful in many constrained optimization problems: "repair" infeasible solutions to nearby feasible ones
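- A minimal Python sketch of the two hybrid flavors (all function arguments are illustrative placeholders):

      def evaluate_hybrid(chrom, decode, encode, local_search, raw_fitness,
                          lamarckian=False):
          """Take local-search steps from the GA's solution and use the
          resulting fitness as the chromosome's fitness."""
          improved = local_search(decode(chrom))
          fit = raw_fitness(improved)
          if lamarckian:
              chrom = encode(improved)   # Lamarckian: write improvement back
          return chrom, fit              # Baldwinian: genotype untouched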
75. Other Representations/Operators: Permutation/Optimal Ordering
- The chromosome has EXACTLY ONE copy of each int in [0, N-1]
- Must find the optimal ordering of those ints
- 1-pt, 2-pt, and uniform crossover are ALL useless
- Mutations: swap 2 loci, scramble K adjacent loci, shuffle K arbitrary loci, etc. (see the sketches below)
- (See blackboard for example)
76. Crossover Operators for Permutation Problems
- What properties do we want?
- 1) Want each child to combine building blocks from both parents in a way that preserves high-order schemata in as meaningful a way as possible, and
- 2) Want all solutions generated to be feasible solutions.
77. Example Operators for Permutation-Based Representations, Using TSP Example: PMX -- Partially Matched Crossover
- 2 sites are picked; the intervening section specifies the cities to interchange between the parents:
- A  = 9 8 4 | 5 6 7  | 1 3 2 10
- B  = 8 7 1 | 2 3 10 | 9 5 4 6
- A' = 9 8 4 | 2 3 10 | 1 6 5 7
- B' = 8 10 1 | 5 6 7 | 9 2 4 3
- (i.e., swap 5 with 2, 6 with 3, and 7 with 10 in both children.)
- Thus, some ordering information from each parent is preserved, and no infeasible solutions are generated (see the sketch below).
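- A minimal Python sketch of the swap interpretation above (it reproduces this slide's example; a full PMX also resolves chained swaps, which this example doesn't contain):

      def pmx(a, b, lo, hi):
          """Interchange the values matched between a[lo:hi] and b[lo:hi]
          throughout BOTH parents, so the children remain permutations."""
          swap = {}
          for x, y in zip(a[lo:hi], b[lo:hi]):
              swap[x], swap[y] = y, x
          return ([swap.get(v, v) for v in a],
                  [swap.get(v, v) for v in b])

      A = [9, 8, 4, 5, 6, 7, 1, 3, 2, 10]
      B = [8, 7, 1, 2, 3, 10, 9, 5, 4, 6]
      print(pmx(A, B, 3, 6))   # matches A' and B' above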
78. Example Operators for Permutation-Based Representations: Order Crossover
- A = 9 8 4 | 5 6 7  | 1 3 2 10 (segments of A and B as before)
- B = 8 7 1 | 2 3 10 | 9 5 4 6
- => B = 8 H 1 2 3 10 9 H 4 H (replace 5, 6, 7 with H's)
- => B = 2 3 10 H H H 9 4 8 1 (gather the H's into the segment, appending the rest in order, with wrap-around)
- => B' = 2 3 10 5 6 7 9 4 8 1 (fill the H's with the other parent's segment)
- Similarly, A' = 5 6 7 2 3 10 1 9 8 4
- Order crossover preserves more information about RELATIVE ORDER than does PMX, but less about the ABSOLUTE POSITION of each city (for the TSP example).
79. Example Operators for Permutation-Based Representations: Cycle Crossover
- Cycle crossover forces the city in each position to come from that same position in one of the two parents:
- C = 9 8 2 1 7 4 5 10 6 3
- D = 1 2 3 4 5 6 7 8 9 10
-    9 - - - - - - - - -
- => 9 - - 1 - - - - - -
- => 9 - - 1 - 4 - - 6 -, which completes the 1st cycle; then (depending on whose cycle crossover you choose), (i) start from the first unassigned position in D and perform another cycle, or (ii) just fill in the rest of the numbers from chromosome D
- (i) yields => 9 2 - 1 - 4 - 8 6 10
-           => 9 2 3 1 - 4 - 8 6 10
-           => C' = 9 2 3 1 7 4 5 8 6 10; D' is done similarly.
- (ii) yields => C' = 9 2 3 1 5 4 7 8 6 10; D' is done similarly.
80. Example Operators for Permutation-Based Representations: Uniform Order-Based Crossover
- (from Lawrence Davis, Handbook of Genetic Algorithms)
- Analogous to uniform crossover for ordinary list-based chromosomes. Uniform crossover effectively acts as if many one- or two-point crossovers were performed at once on a pair of chromosomes, combining the parents' genes on a locus-by-locus basis, so it is quite disruptive of longer schemata. (I don't like it much, as it jumbles information and is too disruptive for effectiveness with many problems, I believe. But it works quite well for some others.)
- A = 1 2 3 4 5 6 7 8
- B = 8 6 4 2 7 5 3 1
- Binary template = 0 1 1 0 1 1 0 0 (random)
- => - 2 3 - 5 6 - -
- (then, reordering the rest of A's nodes to the order in which THEY appear in B)
- => A' = 8 2 3 4 5 6 7 1
- (and similarly for B': => 8 4 5 2 6 7 3 1)
81. Parallel GAs: Independent of Hardware (My Bias)
- Three primary models: coarse-grain (island), fine-grain (cellular), and micro-grain (trivial)
- Trivial (not really a parallel GA, just a parallel implementation of a single-population GA): pass out individuals to separate processors for evaluation (or run lots of local tournaments, no master); still acts like one large population
- (The GALOPPS micro-grain release is not current)
82. Coarse-Grain (Island) Parallel GA
- N independent subpopulations, acting as if running in parallel (timeshared or actually on multiple processors)
- Occasionally, migrants go from one to another, in pre-specified patterns
- Strong capability for avoiding premature convergence while exploiting good individuals, if migration rates/patterns are well chosen
83. GALOPPS: An Island Parallel GA
- Can run 1-99 subpopulations
- Can run all in one process
- Can run any number in separate processes on one uni- or multi-processor
- Can run any number of subpopulations on each of K processors; they need only share a common DISK directory
84. Migrant Selection Policy
- Who should migrate?
- The best guy?
- One random guy?
- The best and some random guys?
- A guy very different from the best of the receiving subpop? (incest reduction)
- If we migrate in a large fraction of the population each generation, it acts like one big population, but with extra replacements -- could actually SPEED premature convergence
85. Migrant Replacement Policy
- Who should a migrant replace? (see the sketch below)
- A random individual?
- The worst individual?
- The most similar individual (in the Hamming sense)?
- A similar individual, via crowding?
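- A minimal Python sketch of one policy pair (best-migrates, random-is-replaced, on a ring of islands; all names are illustrative):

      import random

      def migrate_ring(subpops, fitness):
          """Each island sends a copy of its best individual to the
          next island on the ring; the migrant replaces a random one."""
          best = [max(sp, key=fitness) for sp in subpops]
          for i, sp in enumerate(subpops):
              sp[random.randrange(len(sp))] = best[(i - 1) % len(subpops)]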
86. How Many Subpopulations? (Crude Rule of Thumb)
- How many total evaluations can you afford?
- Total population size, number of generations, and generation gap determine run time
- What should the minimum subpopulation size be?
- Smaller than 40-50 USUALLY spells trouble: rapid convergence of the subpop; 100-200 is better for some problems
- Divide to get how many subpopulations you can afford
87. Fine-Grain Parallel GAs
- Individuals are distributed on cells in a tessellation, one or a few per cell (often, a toroidal checkerboard)
- Mating is typically among near neighbors, in some defined neighborhood
- Offspring are typically placed near their parents
- Can help to maintain spatial niches, thereby delaying premature convergence
- Interesting to view as a cellular automaton
88. Refined Island Models: Heterogeneous/Hierarchical GAs
- For many problems, it is useful to use different representations/levels of refinement/types of models, and allow them to exchange "nuggets"
- GALOPPS was the first package to support this
- The Injection Island architecture arose from this, now used in HEEDS, etc.
- Hierarchical Fair Competition is the newest development (Jianjun Hu), breaking populations into fitness bands
89. Multi-Level GAs
- Pioneering work: DAGA2, MSU (based on GALOPPS)
- Island GA populations are on the lower level; their parameters/operators/neighborhoods are on the chromosome of a single higher-level population that controls the evolution of the subpopulations
- Excellent performance: reproducible trajectories through operator space, for example
90. Examples of Population-to-Population Differences in a Heterogeneous GA
- Different GA parameters (pop size, crossover type/rate, mutation type/rate, etc.)
- 2-level, or without a master pop
- Examples of representation differences:
- Hierarchy: one-way migration from the least refined representation to the most refined
- Different models in different subpopulations
- Different objectives/constraints in different subpops (sometimes used in Evolutionary Multiobjective Optimization (EMOO)) (someone pick an EMOO paper?)
91. Additional GA Topics to Come
- MOGA: multi-objective optimization using GAs
- Differential Evolution: GA with a twist
- PCX: Parent-Centered Crossover
- CMA-ES? (maybe)