ESI 6448 Discrete Optimization Theory - PowerPoint PPT Presentation

Title: ESI 6448 Discrete Optimization Theory
Slides: 35
Provided by: Min26

Transcript and Presenter's Notes
1
ESI 6448 Discrete Optimization Theory
  • Section number 5643
  • Lecture 11

2
Last class
  • Polynomial reduction
  • P, Q ∈ NP and P ≤p Q:
  • Q ∈ P ⇒ P ∈ P
  • P ∈ NPC ⇒ Q ∈ NPC
  • More examples
  • Co-NP
  • Strong NP-completeness
  • Pseudo-polynomial algorithms
  • Generalizations and special cases
  • NP-hardness

3
0-1 KNAPSACK
  • 0-1 KNAPSACK
  • Given integers cj, j = 1, …, n, and K, is there a
    subset S of {1, …, n} s.t. ∑j∈S cj = K?
  • 3-EXACT COVER ≤p 0-1 KNAPSACK
  • Convert F into integers cj and K.
  • Write each set in F as a bit-vector of length 3m
  • {u1, u5, u6} → 100011, {u2, u4, u6} → 010101
  • Interpret the bit-vectors as integers in base (n+1).
  • cj = ∑ui∈Sj (n+1)^(i−1)
  • K = ∑j=0..3m−1 (n+1)^j, corresponding to 11…1
  • There is an exact cover C of {u1, …, u3m} iff
    there is a subset of the cj's summing to K.
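The base-(n+1) encoding above can be sketched in a few lines; the set family below is an illustrative example, not an instance from the slides, and the helper name `encode` is hypothetical:

```python
def encode(sets, universe_size):
    """Encode each set as a base-(n+1) integer, where n = number of sets.
    Element i contributes the digit (n+1)^(i-1); since at most n sets can
    be chosen, digits never carry, so a subset sums to K = 11...1 in base
    (n+1) iff it covers every element exactly once."""
    n = len(sets)
    c = [sum((n + 1) ** (i - 1) for i in s) for s in sets]
    K = sum((n + 1) ** j for j in range(universe_size))  # 11...1 base n+1
    return c, K

# Universe {1, ..., 6}; the exact cover {1,5,6}, {2,3,4} exists.
sets = [{1, 5, 6}, {2, 4, 6}, {2, 3, 4}]
c, K = encode(sets, 6)
assert c[0] + c[2] == K   # the exact cover corresponds to a subset summing to K
```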

4
PARTITION
  • PARTITION
  • Given integers c1, …, cn, is there a subset S ⊆
    {1, …, n} s.t. ∑j∈S cj = ∑j∉S cj?
  • 0-1 KNAPSACK ≤p PARTITION
  • Convert c1, …, cn, K into c1, …, cn, cn+1, cn+2
  • M = ∑j=1..n cj (> K)
  • cn+1 = 2M, cn+2 = 3M − 2K
  • There is a subset S s.t. ∑j∈S cj = K iff there is
    a subset S′ s.t. ∑j∈S′ cj = ∑j∉S′ cj.
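The transformation can be sketched as follows; the brute-force partition checker is exponential and only illustrates the equivalence, and both helper names are hypothetical:

```python
from itertools import combinations

def knapsack_to_partition(c, K):
    """Map a 0-1 KNAPSACK instance (c, K) to a PARTITION instance by
    appending the two elements 2M and 3M - 2K, where M = sum(c)."""
    M = sum(c)
    assert M > K  # otherwise the knapsack instance is trivially "no"
    return c + [2 * M, 3 * M - 2 * K]

def has_partition(d):
    """Brute-force PARTITION check (for illustration only)."""
    total = sum(d)
    if total % 2:
        return False
    idx = range(len(d))
    return any(sum(d[i] for i in s) == total // 2
               for r in range(len(d) + 1)
               for s in combinations(idx, r))

# {3, 4} sums to K = 7, so the PARTITION instance must be a yes-instance:
d = knapsack_to_partition([3, 4, 5], 7)
assert has_partition(d)
```

Each side of a partition sums to 3M − K; the two new elements cannot share a side (2M + 3M − 2K > 3M − K), so the side holding 3M − 2K contains exactly a subset of the original cj summing to K.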

5
INTEGER KNAPSACK
  • INTEGER KNAPSACK
  • Given integers cj, j = 1, …, n, and K, are there
    nonnegative integers yj s.t. ∑j=1..n cjyj = K?
  • 0-1 KNAPSACK ≤p INTEGER KNAPSACK
  • Convert (c1, …, cn, K) into (d1, …, d2n, L)
  • M = 2n(n+1)K (> K)
  • L = nM^(n+1) + ∑j=1..n M^j + K
  • dj = M^(n+1) + M^j + cj if j ≤ n,
    dj = M^(n+1) + M^(j−n) o.w.
  • There are xj ∈ {0, 1} s.t. ∑j=1..n cjxj = K iff
    there are nonnegative integers yj s.t.
    ∑j=1..2n djyj = L.

6
co-NP
  • If A is a problem in P, then the complement Ā of
    A is also in P.
  • The class co-NP is the class of all problems that
    are complements of problems in NP.
  • If the complement of an NP-complete problem is in
    NP, then NP = co-NP.
  • If P ∈ NP and P ∈ co-NP, then presumably P ∉ NPC
    (otherwise NP = co-NP).
  • COMPOSITE NUMBERS
  • Given an integer K, are there integers m, n > 1
    s.t. K = mn?
  • PRIMES
  • Given an integer K, is K a prime number?
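A brute-force composite test is easy to write, but its running time is exponential in the number of bits of K, so it does not by itself place COMPOSITE in P (PRIMES, and hence COMPOSITE, is in fact now known to be in P via the AKS test). A sketch, assuming K ≥ 2:

```python
def is_composite(K):
    """Are there integers m, n > 1 with K = m*n?  Trial division up to
    sqrt(K): polynomial in K itself, but exponential in log K."""
    return any(K % m == 0 for m in range(2, int(K ** 0.5) + 1))

assert is_composite(91)       # 91 = 7 * 13
assert not is_composite(97)   # 97 is prime
```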

7
Pseudo-polynomial algorithms
  • There is a path from 0 to K in G(c1, , cn, K)
    iff the instance (c1, , cn, K) of INTEGER
    KNAPSACK has a solution.
  • 3x1 + 7x2 = 13?

[Figure: nodes 0, 1, …, 13 on a line, with arcs of length 3 and 7; a
directed path from node 0 to node 13 corresponds to a solution of
3x1 + 7x2 = 13.]
  • Any instance (c1, …, cn, K) of INTEGER KNAPSACK
    can be solved in O(nK) time.
  • not polynomial (K is exponential in its encoding
    length, log K)
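The O(nK) reachability computation on G(c1, …, cn, K) can be sketched as follows; this is a minimal illustration, not code from the lecture:

```python
def integer_knapsack(c, K):
    """Pseudo-polynomial O(nK) test: can K be written as a nonnegative
    integer combination of c1, ..., cn?  reachable[v] holds iff node v
    of the graph G(c1, ..., cn, K) is reachable from node 0."""
    reachable = [False] * (K + 1)
    reachable[0] = True                 # the empty combination
    for v in range(1, K + 1):
        # an arc of length cj enters node v from node v - cj
        reachable[v] = any(cj <= v and reachable[v - cj] for cj in c)
    return reachable[K]

assert integer_knapsack([3, 7], 13)     # 13 = 2*3 + 1*7
assert not integer_knapsack([3, 7], 5)
```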

8
Pseudo-polynomial algorithms
  • I: instance of a problem.  number(I): the
    largest integer appearing in I
  • I = (c1, …, cn, K) ⇒ number(I) = K
  • An algorithm for a problem A is pseudo-polynomial
    if it solves any instance I of A in time bounded
    by a polynomial in |I| and number(I).
  • The O(nK) algorithm for INTEGER KNAPSACK is a
    pseudo-polynomial algorithm.

9
Strong NP-completeness
  • A a problem, f a function mapping ℕ to ℕ.
    Af = A restricted to instances I for which
    number(I) ≤ f(|I|).  Then A is strongly
    NP-complete if Ap is NP-complete for some
    polynomial p(n).
  • CLIQUE: strongly NP-complete
  • INTEGER KNAPSACK: not
  • Unless P = NP, there can be no pseudo-polynomial
    algorithm for any strongly NP-complete problem.

10
Generalizations
  • The more general a problem is, the harder it is
    to solve.
  • DIRECTED HAMILTON CIRCUIT
  • generalization of HAMILTON CIRCUIT
  • G = (V, E) is a special case of D = (V, A) s.t.
    whenever {u, v} ∈ E, both (u, v) and (v, u) ∈ A
  • HAMILTON CIRCUIT ≤p DIRECTED HAMILTON CIRCUIT
    is trivial.
  • SUBGRAPH ISOMORPHISM
  • Given two graphs G and G′, is there a subgraph of
    G that is isomorphic to G′?
  • CLIQUE and HAMILTON CIRCUIT are special cases

11
Restrictions
  • Special cases of NP-complete problems need not be
    hard.
  • CLIQUE vs. PLANAR CLIQUE
  • A planar graph cannot have a clique with 5 or
    more nodes.
  • PLANAR CLIQUE can be solved in O(|V|^4) time.
  • HAMILTON CIRCUIT for a graph s.t.
  • each node has degree 4 or less, or each node has
    degree exactly 3
  • still NP-complete

12
NP-hard
  • A is NP-hard if all NP problems polynomially
    reduce to A.  (A ∈ NP is not required.)
  • Optimization problems whose decision versions
    lie in NPC are NP-hard.
  • Kth HEAVIEST SUBSET
  • Given integers c1, …, cn, K and L, are there K
    distinct subsets S1, …, SK s.t. ∑j∈Si cj ≥ L for
    i = 1, …, K?
  • PARTITION ≤p Kth HEAVIEST SUBSET
  • A solver A for Kth HEAVIEST SUBSET can be used
    to solve PARTITION.
  • Running A n times in a binary search solves
    PARTITION.
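One way to realize this reduction, with a brute-force stand-in for the solver A; all helper names are hypothetical, and the oracle here is exponential only because it simulates A:

```python
from itertools import combinations

def kth_oracle(c, K, L):
    """Stand-in for a Kth HEAVIEST SUBSET solver:
    are there K distinct subsets S with sum(S) >= L?"""
    count = 0
    idx = range(len(c))
    for r in range(len(c) + 1):
        for s in combinations(idx, r):
            if sum(c[i] for i in s) >= L:
                count += 1
                if count >= K:
                    return True
    return False

def count_subsets_at_least(c, L):
    """Number of subsets with sum >= L, found by binary search on K:
    about n oracle calls, since the answer lies in [0, 2^n]."""
    lo, hi = 0, 2 ** len(c)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if kth_oracle(c, mid, L):
            lo = mid
        else:
            hi = mid - 1
    return lo

def partition_via_oracle(c):
    total = sum(c)
    if total % 2:
        return False
    half = total // 2
    # a subset summing exactly to half exists iff strictly more subsets
    # reach sum >= half than reach sum >= half + 1
    return count_subsets_at_least(c, half) > count_subsets_at_least(c, half + 1)

assert partition_via_oracle([3, 4, 5, 2])    # {3, 4} vs {5, 2}
assert not partition_via_oracle([1, 2, 4])
```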

13
Linear algebra review
  • A finite collection of vectors x1, …, xk ∈ ℝn
    is linearly independent if the unique solution to
    ∑i=1..k λixi = 0 is λi = 0, i = 1, …, k.
    Otherwise, the vectors are linearly dependent.
  • A finite collection of vectors x1, …, xk ∈ ℝn
    is affinely independent if the unique solution to
    ∑i=1..k λixi = 0, ∑i=1..k λi = 0, is λi = 0,
    i = 1, …, k.
  • x1, …, xk ∈ ℝn are affinely independent iff
    x2 − x1, …, xk − x1 are linearly independent
    iff (x1, 1), …, (xk, 1) ∈ ℝn+1 are linearly
    independent
  • If {x ∈ ℝn : Ax = b} ≠ ∅, the maximum number of
    affinely independent solutions to Ax = b is
    n + 1 − rank(A).

14
Linear algebra review
  • A nonempty subset H ⊆ ℝn is called a subspace if
    αx + βy ∈ H ∀x, y ∈ H and ∀α, β ∈ ℝ.
  • A linear combination of a collection of vectors
    x1, …, xk ∈ ℝn is any vector y ∈ ℝn s.t.
    y = ∑i=1..k λixi for some λ ∈ ℝk.
  • The span of a collection of vectors x1, …, xk ∈
    ℝn is the set of all linear combinations of those
    vectors.
  • Given a subspace H ⊆ ℝn, a collection of linearly
    independent vectors whose span is H is called a
    basis of H.  The number of vectors in the basis is
    the dimension of the subspace.

15
Linear algebra review
  • The span of the columns of a matrix A is a
    subspace called the column space or the range,
    denoted range(A).
  • The span of the rows of a matrix A is a subspace
    called the row space.
  • rank(A) = dimension of the column space =
    dimension of the row space
  • Clearly, rank(A) ≤ min{m, n}.  If rank(A) =
    min{m, n}, then A is said to have full rank.
  • The set {x ∈ ℝn : Ax = 0} is called the nullspace
    of A (null(A)) and has dimension n − rank(A).

16
Polyhedra
  • A polyhedron is a set of the form {x ∈ ℝn :
    Ax ≤ b}, where A ∈ ℝm×n and b ∈ ℝm.
  • A polyhedron P ⊆ ℝn is bounded if there exists a
    constant K s.t. |xi| < K ∀x ∈ P, ∀i ∈ {1, …, n}.
  • A bounded polyhedron is called a polytope.
  • Let a ∈ ℝn and b ∈ ℝ be given.
  • The set {x ∈ ℝn : aTx = b} is called a hyperplane.
  • The set {x ∈ ℝn : aTx ≤ b} is called a half-space.

17
Convex
  • A set S ⊆ ℝn is convex if ∀x, y ∈ S, λ ∈ [0, 1],
    we have λx + (1 − λ)y ∈ S.
  • Let x1, …, xk ∈ ℝn and λ ≥ 0, λ ∈ ℝk, be given
    such that λT1 = 1.  Then
  • the vector ∑i=1..k λixi is said to be a convex
    combination of x1, …, xk.
  • the convex hull of x1, …, xk is the set of all
    convex combinations of these vectors.
  • A set is convex iff for any two points in the
    set, the line segment joining those two points
    lies entirely in the set.
  • All polyhedra are convex.

18
Dimensions
  • A polyhedron P is of dimension k, denoted dim(P)
    = k, if the maximum number of affinely
    independent points in P is k + 1.
  • A polyhedron P ⊆ ℝn is full-dimensional if
    dim(P) = n.
  • Let
  • M = {1, …, m},
  • M= = {i ∈ M : aix = bi ∀x ∈ P} (the equality
    set),
  • M≤ = M \ M= (the inequality set).
  • (A=, b=), (A≤, b≤) be the corresponding rows of
    (A, b).
  • If P ⊆ ℝn, then dim(P) + rank(A=, b=) = n.

19
Inner (interior) points
  • x ∈ P is called an inner point of P if aix < bi
    ∀i ∈ M≤.
  • x ∈ P is called an interior point of P if aix <
    bi ∀i ∈ M.
  • Every nonempty polyhedron has an inner point.
  • A polyhedron has an interior point iff it is
    full-dimensional.

20
Valid inequalities
  • The inequality denoted by (π, π0) is called a
    valid inequality for P if πx ≤ π0 ∀x ∈ P.
  • (π, π0) is a valid inequality iff P lies in the
    half-space {x ∈ ℝn : πx ≤ π0} iff
    max{πx : x ∈ P} ≤ π0.
  • If (π, π0) is a valid inequality for P and F =
    {x ∈ P : πx = π0}, F is called a face of P and we
    say that (π, π0) represents or defines F.
  • A face is said to be proper if F ≠ ∅ and F ≠ P.
  • The face represented by (π, π0) is nonempty iff
    max{πx : x ∈ P} = π0.
  • If the face F is nonempty, we say it supports P.
  • The set of optimal solutions to an LP is always a
    face of the feasible region.

21
Descriptions
  • If P = {x ∈ ℝn : Ax ≤ b}, then the inequalities
    corresponding to the rows of (A, b) are called a
    description of P.
  • Every polyhedron has an infinite number of
    descriptions.
  • We assume that all inequalities are supporting.
  • If (π, π0) and (μ, μ0) are two valid inequalities
    for a polyhedron P ⊆ ℝn, we say (π, π0)
    dominates (μ, μ0) if there exists u > 0 such that
    π ≥ uμ and π0 ≤ uμ0.
  • A valid inequality (π, π0) is redundant in the
    description of P if there exists a linear
    combination of the inequalities in the
    description that dominates (π, π0).

22
Facets
  • A face F is said to be a facet of P if dim(F) =
    dim(P) − 1.
  • Facets are all we need to describe polyhedra.
  • If F is a facet of P, then in any description of
    P, there exists some inequality representing F.

23
Representations
  • Every full-dimensional polyhedron P has a unique
    (to within scalar multiplication) representation
    that consists of one inequality representing each
    facet of P.
  • If dim(P) = n − k with k > 0, then P is described
    by a maximal set of linearly independent rows of
    (A=, b=), as well as one inequality representing
    each facet of P.
  • If a facet F of P is represented by (π, π0), then
    the set of all representations of F is obtained
    by taking scalar multiples of (π, π0) plus linear
    combinations of the equality set of P.

24
Extreme points
  • x is an extreme point of P if there do not exist
    x1, x2 ∈ P, x1 ≠ x2, s.t. x = 1/2 x1 + 1/2 x2.
  • x is an extreme point of P iff x is a
    zero-dimensional face of P.
  • If (A, b) is a description of P ≠ ∅, and
    rank(A) = n − k, then P has a face of dimension k
    and no proper face of lower dimension.
  • P has an extreme point iff rank(A) = n.

25
Extreme rays
  • Let P0 be {r ∈ ℝn : Ar ≤ 0}.  r ∈ P0 \ {0} is
    called a ray of P.
  • r is an extreme ray of P if there do not exist
    rays r1 and r2 of P, r1 ≠ λr2 for all λ > 0,
    s.t. r = 1/2 r1 + 1/2 r2.
  • If P ≠ ∅, then r is an extreme ray of P iff
    {λr : λ ∈ ℝ+} is a one-dimensional face of P0.
  • A polyhedron has a finite number of extreme
    points and extreme rays.

26
Polarity
  • Π = {(π, π0) ∈ ℝn+1 : πTx ≤ π0 ∀x ∈ P} is the
    polar of the polyhedron P = {x ∈ ℝn : Ax ≤ b}.
  • Let P ⊆ ℝn be a polyhedron with extreme points
    {xk}k∈K and extreme rays {rj}j∈J.  Then Π is a
    (polyhedral) cone whose elements (π, π0) satisfy
  • πTxk − π0 ≤ 0 ∀k ∈ K
  • πTrj ≤ 0 ∀j ∈ J

27
Polarity
  • Duality between P and Π
  • dim(P) = n, rank(A) = n
  • The facets of P are the extreme rays of the polar
    of P
  • πTx ≤ π0 defines a facet of Π iff x is an extreme
    point of P
  • πTr ≤ 0 defines a facet of Π iff r is an extreme
    ray of P

28
Polarity
  • If aTx ≤ b is a valid inequality for P with b > 0:
  • Scale each inequality by the RHS and rewrite the
    polytope as P = {x ∈ ℝn : Ax ≤ 1}.
  • The 1-polar of P is Π1 = {π ∈ ℝn : πTxk ≤ 1
    ∀k ∈ K}
  • If P = {x ∈ ℝn : Ax ≤ 1} is a full-dimensional
    polytope, then Π1 is a full-dimensional polytope
    and P is the 1-polar of Π1.

29
Polarity
  • If P is full-dimensional and bounded, and 0 is an
    interior point of P, then
  • P = {x : πtTx ≤ 1 for t ∈ T}, where {πt}t∈T are
    the extreme points of Π1
  • Π1 = {π : πTxk ≤ 1 for k ∈ K}, where {xk}k∈K are
    the extreme points of P
  • x ∈ P iff max{πTx : π ∈ Π1} ≤ 1
  • π ∈ Π1 iff max{πTx : x ∈ P} ≤ 1
  • Given a linear program, if we can solve the
    optimization problem in polynomial time, then we
    can solve the separation problem in polynomial
    time using polarity.

30
Ellipsoid algorithm
  • First polynomial-time algorithm for linear
    programming
  • Computationally impractical, but provides a
    connection between separation and optimization
    problems
  • Efficient Optimization Property
  • For a given class of optimization problems (P)
    max{cx : x ∈ X ⊆ ℝn}, there exists an efficient
    (polynomial) algorithm.
  • Efficient Separation Property
  • There exists an efficient algorithm for the
    separation problem associated with the problem
    class.

31
Ellipsoid Property
  • Ellipsoid w/ center y: E = E(D, y) = {x ∈ ℝn :
    (x − y)TD−1(x − y) ≤ 1}, where D is an n×n
    positive definite matrix.
  • Ellipsoid property
  • Given an ellipsoid E = E(D, y), the
    half-ellipsoid H = E ∩ {x ∈ ℝn : dx ≤ dy}
    obtained by cutting with any inequality dx ≤ dy
    through its center is contained in an ellipsoid
    E′ with the property that
    vol(E′) / vol(E) ≤ e^(−1/(2(n+1))).
32
Ellipsoid algorithm
  • 1. Find an ellipsoid E0 ⊇ P
  • 2. Find the center x0 of E0
  • 3. Test if x0 ∈ P
  • 4. If x0 ∈ P, stop.  O.w. find a violated
    inequality (π, π0) passing through x0
  • 5. From (π, π0), get a half-ellipsoid HE ⊇ P
  • 6. Find a new ellipsoid E1 ⊇ HE s.t. vol(E1) /
    vol(E0) ≤ e^(−1/(2(n+1))) < 1
  • 7. E0 ← E1.  Go to 2.
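The loop above can be sketched with the standard central-cut update formulas. This is a minimal illustration: the instance, the bounding radius R, and the iteration cap are assumptions, and none of the bit-complexity issues of the real algorithm are handled.

```python
import math

def ellipsoid_feasibility(A, b, R=10.0, max_iter=200):
    """Central-cut ellipsoid sketch: find x with A x <= b (componentwise),
    assuming a feasible point lies in the ball of radius R around 0."""
    n = len(A[0])
    y = [0.0] * n                                   # center of E0
    D = [[R * R * (i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_iter):
        # steps 2-3: test whether the current center is in P
        viol = next((i for i in range(len(A))
                     if sum(A[i][j] * y[j] for j in range(n)) > b[i]), None)
        if viol is None:
            return y                                # step 4: x0 in P
        a = A[viol]                                 # violated inequality
        Da = [sum(D[i][j] * a[j] for j in range(n)) for i in range(n)]
        alpha = math.sqrt(sum(a[i] * Da[i] for i in range(n)))
        # steps 5-7: shrink to an ellipsoid containing the half-ellipsoid
        y = [y[i] - Da[i] / ((n + 1) * alpha) for i in range(n)]
        f = n * n / (n * n - 1.0)
        D = [[f * (D[i][j] - 2.0 * Da[i] * Da[j] / ((n + 1) * alpha * alpha))
              for j in range(n)] for i in range(n)]
    return None                                     # P may be empty

# Find a point with x1 >= 1, x2 >= 1, x1 + x2 <= 3
A = [[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]]
b = [-1.0, -1.0, 3.0]
x = ellipsoid_feasibility(A, b)
```

Since the feasible triangle has positive area and each step shrinks the volume by a constant factor, the center must land in P after a bounded number of cuts.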

33
Ellipsoid method
  • Shrinking ellipsoid
  • In a polynomial number of steps, we can find
  • a point x in P, or
  • a proof that P is empty
  • Given a linear program, if we can solve the
    separation problem in polynomial time, then we
    can solve the optimization problem in polynomial
    time using the ellipsoid algorithm.

34
Equivalence of separation and optimization

  • Ellipsoid
  • Separate over P in P ⇒ Solve LP over P in P

  • Polarity
  • Solve LP over P in P ⇒ Separate over Π1 in P

  • Ellipsoid
  • Separate over Π1 in P ⇒ Solve LP over Π1 in P

  • Polarity
  • Solve LP over Π1 in P ⇒ Separate over P in P