Transcript and Presenter's Notes

Title: Integer Programming


1
Integer Programming
  • Linear Programming (Simplex method)
  • Integer Programming (Branch & Bound method)
  • Lasdon, L.S., Optimization Theory for Large
    Systems, MacMillan, 1970 (Section 1.2)
  • CPLEX Manual
  • Cornuejols, G., Trick, M.A., and Saltzman, M.J.,
    A Tutorial on Integer Programming, Summer 1995,
    http://mat.gsia.cmu.edu/orclass/integer/integer.html


2
A problem and its solution
  • maximise z = x1 + 3x2
  • subject to -x1 + x2 ≤ 1
  • x1 + x2 ≤ 2
  • x1 ≥ 0 , x2 ≥ 0

[Figure: the feasible region in the (x1, x2) plane, bounded by -x1 + x2 = 1, x1 + x2 = 2 and the axes; the objective lines x1 + 3x2 = c are drawn for c = 0, 3, 5, and the maximum c = 5 is attained at the extreme point (1/2, 3/2).]
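This small LP can be checked numerically. The sketch below is an illustration, not part of the original slides; it uses SciPy's linprog, which minimises, so the objective coefficients are negated. It should recover the extreme point (1/2, 3/2) with z = 5.

# Sketch: solving the slide's LP with SciPy (linprog minimises, so c is negated).
from scipy.optimize import linprog

c = [-1, -3]                      # maximise x1 + 3*x2  ->  minimise -x1 - 3*x2
A_ub = [[-1, 1],                  # -x1 + x2 <= 1
        [1, 1]]                   #  x1 + x2 <= 2
b_ub = [1, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # expected: [0.5 1.5] 5.0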
3
Linear Programme in standard form
  • Indices
  • j = 1,2,...,n  variables
  • i = 1,2,...,m  equality constraints
  • Constants
  • c = (c1,c2,...,cn)  cost coefficients
  • b = (b1,b2,...,bm)  constraint right-hand sides
  • A = (aij)  m × n matrix of constraint
    coefficients
  • Variables
  • x = (x1, x2,...,xn)

SIMPLEX
  • Linear programme
  • maximise
  • z = Σj=1,2,...,n cjxj
  • subject to
  • Σj=1,2,...,n aijxj = bi , i = 1,2,...,m
  • xj ≥ 0 , j = 1,2,...,n
  • Linear programme (matrix form)
  • maximise
  • cx
  • subject to
  • Ax = b
  • x ≥ 0

n > m , rank(A) = m
4
Transformation of LPs to the standard form
  • slack variables
  • Σj=1,2,...,n aijxj ≤ bi  to  Σj=1,2,...,n
    aijxj + xn+i = bi , xn+i ≥ 0
  • Σj=1,2,...,n aijxj ≥ bi  to  Σj=1,2,...,n
    aijxj - xn+i = bi , xn+i ≥ 0
  • nonnegative variables
  • xk with unconstrained sign: xk = xk′ - xk″ , xk′
    ≥ 0 , xk″ ≥ 0
  • Exercise: transform the following LP to the
    standard form (one possible answer is sketched below)
  • maximise z = x1 + x2
  • subject to 2x1 + 3x2 ≤ 6
  • x1 + 7x2 ≥ 4
  • x1 + x2 = 3
  • x1 ≥ 0 , x2 unconstrained in sign
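One possible answer to the exercise (a sketch; x3 denotes the added slack variable, x4 the added surplus variable, and the sign-unconstrained x2 is split as x2 = x2′ - x2″):

maximise z = x1 + x2′ - x2″
subject to 2x1 + 3x2′ - 3x2″ + x3 = 6
x1 + 7x2′ - 7x2″ - x4 = 4
x1 + x2′ - x2″ = 3
x1 , x2′ , x2″ , x3 , x4 ≥ 0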

5
Basic facts of Linear Programming
standard form
  • feasible solution - a solution satisfying the constraints
  • basis matrix - a non-singular m × m submatrix of
    A
  • basic solution to an LP - the unique vector
    determined by a basis matrix: the n-m variables
    associated with columns of A not in the basis
    matrix are set to 0, and the remaining m
    variables result from the square system of
    equations
  • basic feasible solution - a basic solution with all
    variables nonnegative (at most m variables can be
    positive)
  • Theorem 1.
  • The objective function, z, assumes its maximum
    at an extreme point of the constraint set.
  • Theorem 2.
  • A vector x (x1, x2,...,xn) is an extreme point
    of the constraint set if and only if x is a basic
    feasible solution.
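The correspondence in Theorem 2 can be illustrated on the slide-2 example brought to standard form (a sketch; the slack variables x3 and x4 and the enumeration code are additions, not from the slides): every non-singular choice of two columns of A gives a basic solution, and the feasible ones are exactly the corners of the feasible region.

# Sketch: basic feasible solutions of the slide-2 LP in standard form
#   max x1 + 3*x2   s.t.  -x1 + x2 + x3 = 1,  x1 + x2 + x4 = 2,  x >= 0
# (x3, x4 are slack variables added here for illustration).
from itertools import combinations
import numpy as np

A = np.array([[-1.0, 1.0, 1.0, 0.0],
              [ 1.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 3.0, 0.0, 0.0])

for cols in combinations(range(4), 2):          # candidate basis matrices
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-9:            # singular: not a basis
        continue
    x = np.zeros(4)
    x[list(cols)] = np.linalg.solve(B, b)       # basic solution
    if (x >= -1e-9).all():                      # basic feasible solution
        print(cols, x[:2], "z =", c @ x)        # (x1, x2) is an extreme point

# The best value, z = 5, occurs at (x1, x2) = (0.5, 1.5), as in the figure.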

6
Integer Programming
  • original task
  • (IP) maximise cx
  • subject to Ax = b
  • x ≥ 0 and integer
  • linear relaxation
  • (LR) maximise cx
  • subject to Ax = b
  • x ≥ 0

paper 4
  • The optimal objective value for (LR) is greater
    than or equal to the optimal objective for (IP).
  • If (LR) is infeasible then so is (IP).
  • If (LR) is optimised by integer variables, then
    that solution is feasible and optimal for (IP).
  • If the cost coefficients c are integer, then the
    optimal objective for (IP) is less than or equal
    to the round-down of the optimal objective for
    (LR).

application to network design paper 2, section
4.1
7
Branch and Bound (a Knapsack Problem)
  • maximise 8x1 + 11x2 + 6x3 + 4x4
  • subject to 5x1 + 7x2 + 4x3 + 3x4 ≤ 14
  • xj ∈ {0,1} , j = 1,2,3,4
  • (LR) solution: x1 = 1, x2 = 1, x3 = 0.5, x4 = 0,
    z = 22
  • no integer solution will have value greater than
    22
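For a knapsack with a single constraint, the (LR) optimum can be reproduced with the classical greedy rule: fill items in decreasing value/weight ratio and take a fraction of the first item that no longer fits. A minimal sketch (function name and layout are illustrative):

# Sketch: LP relaxation of the 0/1 knapsack by the greedy ratio rule.
values, weights, capacity = [8, 11, 6, 4], [5, 7, 4, 3], 14

def knapsack_relaxation(values, weights, capacity):
    x = [0.0] * len(values)
    order = sorted(range(len(values)),
                   key=lambda j: values[j] / weights[j], reverse=True)
    left = capacity
    for j in order:
        take = min(1.0, left / weights[j])     # fractional amount of item j
        x[j], left = take, left - take * weights[j]
        if left <= 0:
            break
    return x, sum(v * xi for v, xi in zip(values, x))

print(knapsack_relaxation(values, weights, capacity))
# expected: x = [1, 1, 0.5, 0], z = 22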

[Tree: branching on x3 — the x3 = 0 subproblem has LR solution x1 = 1, x2 = 1, x3 = 0, x4 = 0.667; the x3 = 1 subproblem has LR solution x1 = 1, x2 = 0.714, x3 = 1, x4 = 0.]
8
Branch and Bound cntd.
  • we know that the optimal integer solution is not
    greater than 21.85 (21 in fact)
  • we will take a subproblem and branch on one of
    its variables
  • - we choose an active subproblem (here: one not
    chosen before)
  • - we choose the subproblem with the highest solution
    value

[Tree: branching the x3 = 1 subproblem on x2 — the x2 = 1 subproblem has LR solution x1 = 0.6, x2 = 1, x3 = 1, x4 = 0; the x2 = 0 subproblem has the integer LR solution x1 = 1, x2 = 0, x3 = 1, x4 = 1 (no further branching, not active).]
9
Branch and Bound cntd.
[Tree: branching the x2 = 1, x3 = 1 subproblem on x1 — the x1 = 0 subproblem has the integer LR solution x1 = 0, x2 = 1, x3 = 1, x4 = 1 with z = 21 (optimal); the x1 = 1 subproblem (x1 = 1, x2 = 1, x3 = 1, x4 = ?) is infeasible. There is no better solution than 21, so the remaining subproblems are fathomed.]
10
Branch & Bound - summary
  • Solve the linear relaxation of the problem. If
    the solution is integer, then we are done.
    Otherwise create two new subproblems by branching
    on a fractional variable.
  • A subproblem is not active when any of the
    following occurs
  • you have already used the subproblem to branch on
  • all variables in the solution are integer
  • the subproblem is infeasible
  • you can fathom the subproblem by a bounding
    argument.
  • Choose an active subproblem and branch on a
    fractional variable. Repeat until there are no
    active subproblems.
  • Remarks
  • If x is restricted to integer (but not
    necessarily to 0 or 1), then if x = 4.27 you
    would branch with the constraints x ≤ 4 and
    x ≥ 5.
  • If some variables are not restricted to integer,
    you do not branch on them.
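The whole procedure can be written compactly. The sketch below is an illustration, not the presentation's code; it applies the summary above to the knapsack of slides 7-9, using SciPy's linprog for the linear relaxations and fixing variables through their bounds when branching.

# Sketch: branch and bound for the 0/1 knapsack of slides 7-9,
# using LP relaxations (SciPy linprog) and branching on a fractional variable.
import math
from scipy.optimize import linprog

values, weights, capacity = [8, 11, 6, 4], [5, 7, 4, 3], 14
best_z, best_x = -math.inf, None
active = [{}]                                   # subproblems = fixed variables

while active:
    fixed = active.pop()
    bounds = [(fixed.get(j, 0), fixed.get(j, 1)) for j in range(4)]
    res = linprog([-v for v in values], A_ub=[weights], b_ub=[capacity],
                  bounds=bounds)
    if not res.success:                         # infeasible: fathom
        continue
    z = -res.fun
    if z <= best_z:                             # bounding argument: fathom
        continue
    frac = [j for j in range(4) if abs(res.x[j] - round(res.x[j])) > 1e-6]
    if not frac:                                # integer solution
        best_z, best_x = z, [round(v) for v in res.x]
        continue
    j = frac[0]                                 # branch on a fractional variable
    active.append({**fixed, j: 0})
    active.append({**fixed, j: 1})

print(best_x, best_z)                           # expected: [0, 1, 1, 1] 21.0

For brevity this sketch picks the active subproblem depth-first (last in, first out), whereas the slides pick the active subproblem with the highest solution value.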

11
Basic approaches to combinatorial optimisation
Combinatorial Optimisation Problem: given a
finite set S (solution space) and an evaluation
function F: S → R, find i ∈ S minimising F(i)
  • Local Search
  • Simulated Annealing
  • Simulated Allocation
  • Evolutionary Algorithms

12
Local Search
Combinatorial Optimisation Problem: given a
finite set S (solution space) and an evaluation
function F: S → R, find i ∈ S minimising F(i);
N(i) ⊆ S - neighbourhood of a feasible
point (configuration) i ∈ S
A modification: steepest descent
begin
  choose an initial solution i ∈ S
  repeat
    choose a neighbour j ∈ N(i)
    if F(j) < F(i) then i := j
  until ∀ j ∈ N(i), F(j) ≥ F(i)
end
Unable to leave local minima. The quality (rather
poor) of LS depends on the range of the
neighbourhood relationship and on the initial
solution.
13
Local Search for the Knapsack Problem
Solution space: S = {x : wx ≤ W, x ≥ 0, x integer}
Problem: find x ∈ S maximising cx
Neighbourhood: N(x) = {y ∈ S : y is obtained
from x by adding or exchanging one
object}
Suspicious!
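A minimal sketch of such a local search for the knapsack (steepest-ascent variant; the neighbourhood follows the description above, everything else is illustrative):

# Sketch: local search (steepest ascent) for the integer knapsack of this slide.
# Neighbours of x: add one unit of an object, or exchange one unit of one
# object for one unit of another, keeping wx <= W.
def local_search_knapsack(c, w, W, x=None):
    n = len(c)
    x = list(x) if x is not None else [0] * n           # initial solution
    def value(y): return sum(ci * yi for ci, yi in zip(c, y))
    def weight(y): return sum(wi * yi for wi, yi in zip(w, y))
    improved = True
    while improved:
        improved = False
        best_y, best_v = x, value(x)
        neighbours = []
        for i in range(n):                               # add one object
            neighbours.append([xi + (j == i) for j, xi in enumerate(x)])
        for i in range(n):                               # exchange one object
            for k in range(n):
                if i != k and x[i] > 0:
                    y = list(x)
                    y[i] -= 1
                    y[k] += 1
                    neighbours.append(y)
        for y in neighbours:
            if weight(y) <= W and value(y) > best_v:
                best_y, best_v, improved = y, value(y), True
        x = best_y
    return x, value(x)

# With the slide-7 data this stops at x = [0, 2, 0, 0], value 22.
print(local_search_knapsack([8, 11, 6, 4], [5, 7, 4, 3], 14))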
14
Simulated Annealing
Combinatorial Optimisation Problem: given a
finite set S (solution space) and an evaluation
function F: S → R, find i ∈ S minimising F(i);
N(i) ⊆ S - neighbourhood of a feasible
point (configuration) i ∈ S
  • uphill moves are permitted, but only with a
    certain (decreasing), temperature-dependent
    probability, according to the so-called
  • Metropolis test
  • neighbours are selected at random

paper 5, section 3
Johnson, D.S., et al., Optimization by Simulated
Annealing: An Experimental Evaluation, Operations
Research, Vol. 39, No. 1, 1991
15
Simulated Annealing - algorithm
begin
  choose an initial solution i ∈ S
  select an initial temperature T > 0
  while stopping criterion not true
    count := 0
    while count < L
      choose randomly a neighbour j ∈ N(i)
      ∆F := F(j) - F(i)
      if ∆F ≤ 0 then i := j
      else if random(0,1) < exp(-∆F / T) then i := j
      count := count + 1
    end while
    reduce temperature
  end while
end

Metropolis test: the "else if" acceptance rule above
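The same loop, written as a generic Python function (a sketch; the geometric cooling schedule, the parameters T, L, alpha, T_min and the toy usage example are assumptions, not taken from the slides):

# Sketch: the simulated annealing loop above as a generic Python function.
import math, random

def simulated_annealing(initial, neighbour, F, T=10.0, L=100,
                        alpha=0.95, T_min=1e-3):
    i = initial                                  # current solution
    while T > T_min:                             # stopping criterion
        for _ in range(L):
            j = neighbour(i)                     # random neighbour from N(i)
            delta = F(j) - F(i)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                i = j                            # accept (Metropolis test)
        T *= alpha                               # reduce temperature
    return i

# Example use: minimise F(x) = (x - 3)**2 over the integers; typically prints 3.
print(simulated_annealing(0, lambda x: x + random.choice([-1, 1]),
                          lambda x: (x - 3) ** 2))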
16
Simulated Annealing - limit theorem
  • limit theorem: the global optimum will be found
  • for fixed T, after a sufficiently large number of steps:
  • Prob{X = i} = exp(-F(i)/T) / Z(T)
  • Z(T) = Σj∈S exp(-F(j)/T)
  • for T → 0, Prob{X = i} remains greater than 0
    only for optimal configurations i ∈ S

this is not a very practical result: too many
moves (number of states squared) would have to
be made to approach the limit sufficiently closely
17
Simulated Annealing applied to the Travelling
Salesman Problem (TSP)
Combinatorial Optimisation Problem: given a
finite set S (solution space) and an evaluation
function F: S → R, find p ∈ S minimising
F(p); N(p) ⊆ S - neighbourhood of a feasible
point (configuration) p ∈ S
TSP: S = set of all permutations p of the n
cities consisting of a single cycle of length n;
p(i) - city visited just after city no. i;
F(p) = Σi=1,2,...,n ci,p(i) (ci,p(i) - distance
between city i and p(i));
N(p) - neighbourhood of p (next slide)
application to network design paper 2, section
4.2
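A small sketch of these definitions (the distance matrix, and the choice of a segment-reversal move as the neighbour operation, are assumptions for illustration; the deck's own neighbourhood is shown on the next slide):

# Sketch: tour cost F(p) in the slide's notation (p[i] = city visited after i)
# and a simple 2-exchange neighbour.
import random

def tour_cost(p, dist):
    return sum(dist[i][p[i]] for i in range(len(p)))

def two_exchange(order):
    # Reverse a random segment of the tour given as a visiting order.
    n = len(order)
    a, b = sorted(random.sample(range(n), 2))
    return order[:a] + order[a:b + 1][::-1] + order[b + 1:]

def successor(order):
    # Convert a visiting order into the permutation p of the slide.
    p = [0] * len(order)
    for k, city in enumerate(order):
        p[city] = order[(k + 1) % len(order)]
    return p

# tiny symmetric example with 4 cities (distances are made up for illustration)
dist = [[0, 2, 9, 4],
        [2, 0, 6, 3],
        [9, 6, 0, 5],
        [4, 3, 5, 0]]
order = [0, 1, 2, 3]
print(tour_cost(successor(order), dist))                  # 0-1-2-3-0: 2+6+5+4 = 17
print(tour_cost(successor(two_exchange(order)), dist))    # cost of a random neighbour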
18
TSP - neighbourhood relation
[Figure: two tours p and q drawn as circuits through cities i and j, illustrating the neighbourhood move.]

A move turns one Hamiltonian circuit into another
Hamiltonian circuit; any p is reachable from any q.
19
Evolutionary algorithms - basic notions
  • population: a set of µ chromosomes
  • generation: a consecutive population
  • chromosome: a sequence of genes
  • individual: a solution (a point of the solution
    space)
  • genes: represent the internal structure of a solution
  • fitness function: the cost function

Michalewicz, Z., Genetic Algorithms + Data
Structures = Evolution Programs, Springer, 1996
20
Genetic operators
  • mutation
  • is performed over a chromosome with certain (low)
    probability
    it perturbs the values of the chromosome's
    genes
  • crossover
  • exchanges genes between two parent chromosomes to
    produce an offspring
  • in effect the offspring has genes from both
    parents
  • chromosomes with a better fitness value have a
    greater chance to become parents

In general, the operators are problem-dependent
21
(µ + λ) - Evolutionary Algorithm
begin
  n := 0
  initialise(Pn)
  while stopping criterion not true
    On := ∅
    for i := 1 to λ do On := On ∪ crossover(Pn)
    for each o ∈ On do mutate(o)
    Pn+1 := select_best(On ∪ Pn)
    n := n + 1
  end while
end
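A compact sketch of this loop for the 0/1 knapsack of slide 7 (all parameters and operators below are illustrative; in particular, parents are picked uniformly at random here rather than with the fitness-biased choice mentioned on the previous slide):

# Sketch: a (mu + lambda) evolutionary algorithm for the 0/1 knapsack,
# following the loop on this slide (crossover, mutation, then select_best
# over offspring plus parents).
import random

values, weights, capacity = [8, 11, 6, 4], [5, 7, 4, 3], 14
MU, LAMBDA, GENERATIONS, P_MUT = 6, 12, 30, 0.1

def fitness(x):                      # infeasible chromosomes get fitness 0
    if sum(w * xi for w, xi in zip(weights, x)) > capacity:
        return 0
    return sum(v * xi for v, xi in zip(values, x))

def crossover(population):           # uniform crossover of two random parents
    a, b = random.sample(population, 2)
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x):                       # flip each gene with probability P_MUT
    return [1 - g if random.random() < P_MUT else g for g in x]

population = [[random.randint(0, 1) for _ in values] for _ in range(MU)]
for _ in range(GENERATIONS):
    offspring = [mutate(crossover(population)) for _ in range(LAMBDA)]
    population = sorted(offspring + population, key=fitness, reverse=True)[:MU]

print(population[0], fitness(population[0]))   # usually finds [0, 1, 1, 1], 21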
22
Evolutionary Algorithm for the flow problem
  • Chromosome: x = (x1,x2,...,xD)
  • Gene:
  • xd = (xd1,xd2,...,xdm(d)) - flow pattern for
    demand d
  • [Figure: an example chromosome shown as a table of genes, one flow pattern per demand]

paper 7
23
Evolutionary Algorithm for the flow problem cntd.
  • crossover of two chromosomes
  • each gene of the offspring is taken from one of
    the parents
  • for each d = 1,2,...,D: xd = xd(1) with probability
    0.5
  • xd = xd(2) with probability 0.5
  • better fitted chromosomes have a greater chance to
    become parents
  • mutation of a chromosome
  • for each gene, shift some flow from one path to
    another
  • everything at random
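A sketch of these two operators in Python (the chromosome layout and the toy numbers are illustrative assumptions; flows here are moved in integer units, one unit per mutation):

# Sketch of the two operators described above for flow chromosomes:
# a chromosome is a list of genes, one per demand, and each gene is a list
# of flows assigned to that demand's candidate paths.
import random

def crossover(parent1, parent2):
    # Each gene is taken from one of the parents with probability 0.5.
    return [random.choice((g1, g2)) for g1, g2 in zip(parent1, parent2)]

def mutate(chromosome, amount=1):
    # For each gene, shift some flow from one path to another, at random.
    mutated = []
    for gene in chromosome:
        gene = list(gene)
        src = random.randrange(len(gene))
        dst = random.randrange(len(gene))
        shift = min(amount, gene[src])          # cannot move more than is there
        gene[src] -= shift
        gene[dst] += shift
        mutated.append(gene)
    return mutated

# toy example with D = 2 demands (path flows are made-up numbers)
p1 = [[5, 2, 3], [3, 1, 4]]
p2 = [[1, 2, 0], [0, 3, 5]]
child = crossover(p1, p2)
print(child, mutate(child))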