1
Improving Performance of the Interior Point Method by Preconditioning
  • Project Proposal by Ken Ryals
  • For AMSC 663-664
  • Fall 2007-Spring 2008

2
Background
  • The Interior Point Method (IPM) solves a sequence of constrained
    optimization problems such that the sequence of solutions approaches
    the true solution from within the valid (feasible) region. As the
    constraints are relaxed (µ → 0) and the problem is re-solved, the
    numerical properties of the problem often become more interesting
    (i.e., ill-conditioned).
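As a concrete one-variable illustration of the µ → 0 sequence (a toy example of my own, not from the slides):

```python
# Toy illustration: minimize x subject to x >= 0, approximated by the
# log-barrier subproblem  minimize  x - mu*ln(x)  over x > 0.
# Setting the derivative 1 - mu/x to zero gives the exact minimizer x = mu,
# so the subproblem solutions approach the true solution x* = 0 from
# strictly inside the feasible region as the barrier parameter mu -> 0.

def barrier_minimizer(mu):
    return mu  # argmin of x - mu*ln(x), from 1 - mu/x = 0

central_path = [barrier_minimizer(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```

Each subproblem solution stays strictly feasible, and the sequence marches toward the constrained optimum as µ shrinks.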
3
Application
  • Why is the IPM of interest?
  • It applies to a wide range of problem types:
  • Linear constrained optimization
  • Semidefinite problems
  • Second-order cone problems
  • Once in the good region of a solution to the set of problems along
    the solution path (of µ's):
  • Convergence properties are great (quadratic).
  • It keeps the iterates in the valid region.
  • Specific research problem:
  • Optimization of Distributed Command and Control

4
Optimization Problem
  • The linear optimization problem can be formulated as follows:
  • inf cᵀx,  Ax = b.
  • The search direction is implicitly defined by the system
  • Δx + p Δz = r
  • A Δx = 0
  • Aᵀ Δy + Δz = 0.
  • For this, the Reduced Equation is
  • A p Aᵀ Δy = −A r (= b)
  • From Δy we can get Δx = r − p Δz (= r − p(−Aᵀ Δy)).
  • Note: The Nesterov-Todd direction corresponds to p = D(·)D, where p z = x,
  • i.e., D is the metric geometric mean of X and Z⁻¹:
    ⇒ D = X½ (X½ Z X½)⁻½ X½

x is the unknown, y is the dual of x, and z is the slack.
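In the LP case the scaling operator p reduces to the diagonal matrix D² = X Z⁻¹, so the system above can be assembled directly. A minimal sketch with random data (all names and dimensions are mine, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))      # constraint matrix, m constraints
x = rng.uniform(0.5, 2.0, n)         # primal iterate, x > 0
z = rng.uniform(0.5, 2.0, n)         # dual slack iterate, z > 0
r = rng.standard_normal(n)           # residual in  dx + p dz = r

# LP case: the Nesterov-Todd scaling is diagonal, p = D^2 = X Z^{-1}
p = x / z

# Reduced Equation:  A p A^T dy = -A r   (dx and dz eliminated)
dy = np.linalg.solve(A @ (p[:, None] * A.T), -A @ r)

# Back-substitution:  dz = -A^T dy,  dx = r - p dz
dz = -A.T @ dy
dx = r - p * dz
```

Substituting the recovered (Δx, Δy, Δz) back into the three original equations confirms the elimination.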
5
The Problem
  • From these three equations, the Reduced Equation for Δy is
  • A p Aᵀ Δy = −A r (= b)
  • The optimization problem is reduced to solving a system of linear
    equations to generate the next solution estimate.
  • If p cannot be evaluated with sufficient accuracy, solving these
    equations becomes pointless due to error propagation.
  • Aside: Numerically, evaluating r − p Δz can also be a challenge.
    Namely, if the iterates converge, then Δx (= r − p Δz) approaches
    zero, but r does not; hence the accuracy in Δx can suffer from
    problems, too.

6
Example A Poorly-Conditioned Problem
  • Consider a simple problem.
  • Let's change A(1,1) to make it ill-conditioned.

[Figure: the constraint matrix A]
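A toy stand-in for this experiment (the slide's actual matrix is not reproduced in this transcript, so the numbers below are mine): shrinking one entry makes the first row nearly a multiple of the second, and the condition number of A D² Aᵀ explodes.

```python
import numpy as np

def normal_eq_cond(A, d2):
    """Condition number of A D^2 A^T for a diagonal scaling d2 > 0."""
    return np.linalg.cond(A @ (d2[:, None] * A.T))

A = np.array([[1.0, 2.0, 6.0],
              [0.0, 1.0, 3.0]])
d2 = np.ones(3)                       # neutral scaling for illustration

cond_before = normal_eq_cond(A, d2)   # modest (a few hundred)

A_bad = A.copy()
A_bad[0, 0] = 1e-6                    # "change A(1,1)": rows nearly dependent now
cond_after = normal_eq_cond(A_bad, d2)  # huge (order 1e14)
```

The near-dependence of the rows enters the normal equations squared, which is why the condition number lands in the 10¹⁴ range rather than 10⁷.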
7
Observations
  • The condition number of A D² Aᵀ exhibits an interesting dip around
    iterations 4-5, when the solution enters the region of the answer.
  • How can we exploit whatever caused the dip?
  • The standard approach is to use factorization to improve numerical
    performance.
  • The Cholesky factorization is Uᵀ U = A p Aᵀ.
  • Factoring a matrix into two components often trades one matrix with
    a condition number of M for two matrices with condition numbers of √M.
  • My conjecture is that A Aᵀ and D² interacted to lessen the impact of
    the ill-conditioning in A.
  • ⇒ Can we precondition with A Aᵀ somehow?
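The √M claim can be checked directly: since Uᵀ U = M, the singular values of U are the square roots of M's eigenvalues, so cond(U) = √cond(M) exactly for an SPD matrix. A small numerical confirmation (toy matrix of my own, not the slide's):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
M = B @ B.T + 1e-6 * np.eye(4)   # symmetric positive definite test matrix

# numpy returns the lower-triangular factor L with M = L L^T; take U = L^T
U = np.linalg.cholesky(M).T

cond_M = np.linalg.cond(M)
cond_U = np.linalg.cond(U)       # equals sqrt(cond_M) up to roundoff
```

So solving with the Cholesky factors works with condition √M at each triangular solve instead of M in one shot.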

8
Conjecture - Math
  • We are solving A p Aᵀ Δy = −A r.
  • A is not square, so it isn't invertible, but A Aᵀ is.
  • What if we pre-multiplied by its inverse?
  • (A Aᵀ)⁻¹ A p Aᵀ Δy = −(A Aᵀ)⁻¹ A r

Note: Conceptually, we have
  (A Aᵀ)⁻¹ A p Aᵀ Δy = −(A Aᵀ)⁻¹ A r (= −(A Aᵀ)⁻¹ b)
  (A Aᵀ)⁻¹ A p Aᵀ Δy ≈ (Aᵀ)⁻¹ p Aᵀ Δy
Since this looks like a similarity transform, it might have nice properties.
9
Conjecture - Numbers
  • Revisit the ill-conditioned simple problem:
  • Condition of A p Aᵀ (p is D²): 4.2e14
  • Condition of (A Aᵀ): 4.0e14
  • Condition of (A Aᵀ)⁻¹ A D² Aᵀ: 63.053
  • (which is a little less than 10¹⁴)
  • How much would it cost?
  • A Aᵀ is m by m (m constraints).
  • Neither A Aᵀ nor (A Aᵀ)⁻¹ is likely to be sparse.
  • Let's try it anyway.
  • (If it behaves nicely, it might be worth figuring out how to do it
    efficiently.)
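The conjecture can be replayed on toy data (my own matrices, chosen to mimic the slide's conditions, not its actual problem): both A D² Aᵀ and A Aᵀ are badly conditioned, yet the product (A Aᵀ)⁻¹ A D² Aᵀ is tame.

```python
import numpy as np

A = np.array([[1e-6, 2.0, 6.0],     # first row nearly 2x the second
              [0.0,  1.0, 3.0]])
d2 = np.array([1.0, 0.5, 2.0])      # mild diagonal scaling standing in for D^2

M = A @ (d2[:, None] * A.T)         # reduced-system matrix A D^2 A^T
P = np.linalg.inv(A @ A.T)          # preconditioner (A A^T)^{-1}: dense, m-by-m

cond_plain = np.linalg.cond(M)      # huge: ill-conditioning of A enters squared
cond_precond = np.linalg.cond(P @ M)  # modest: the (A A^T)^{-1} factor cancels it
```

The ill-conditioning lives in A itself, so it appears in both factors and cancels in the product, just as the slide's 4.2e14 / 4.0e14 → 63 figures suggest.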

10
Experiment - Results
  • It does work!
  • The condition number stays low (< 1000) instead of hovering in the
    10¹⁴ range.
  • It costs more.
  • Need the inverse of A Aᵀ once.
  • (A Aᵀ)⁻¹ gets used every iteration.
  • The inverse is needed later, rather than early, in the process;
    thus, it could be developed iteratively during the iteration process.

[Figure caption: Solution enters region of Newton convergence]
11
Project Proposal
  • Develop a system for preconditioned IPM.
  • Create a Matlab version to define the structure of the system.
  • This permits validation against SeDuMi and SDPT3.
  • Using C, transition from Matlab to pure C.
  • Create MEX modules from the C code for use in Matlab.
  • Apply the technique to the Distributed C2 problem.
  • Modify the system to develop (A Aᵀ)⁻¹ iteratively, or to solve the
    system of equations iteratively.
  • Can we use something like the Sherman-Morrison-Woodbury formula?
  • (A − Z Vᵀ)⁻¹ = A⁻¹ + A⁻¹ Z (I − Vᵀ A⁻¹ Z)⁻¹ Vᵀ A⁻¹
  • Can the system be solved using the Preconditioned Conjugate Gradient
    Method?
  • Time permitting, parallelize the system:
  • an inverse-generation branch, and
  • an iteration branch using the current version of the inverse.
  • Testing: Many test optimization problems can be found online.
  • AFIRO is a good start (used in AMSC 607, Advanced Numerical
    Optimization).
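The Sherman-Morrison-Woodbury idea above can be sanity-checked numerically; the sketch below (random data, my own sizes) shows that updating a known A⁻¹ after a rank-k change only requires a small k×k inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2                           # rank-k update of an n-by-n matrix
A = rng.standard_normal((n, n)) + 10 * np.eye(n)   # keep A safely invertible
Z = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

Ainv = np.linalg.inv(A)               # assumed already available

# Woodbury: (A - Z V^T)^{-1} = A^{-1} + A^{-1} Z (I - V^T A^{-1} Z)^{-1} V^T A^{-1}
small = np.linalg.inv(np.eye(k) - V.T @ Ainv @ Z)   # only a k-by-k inverse
woodbury = Ainv + Ainv @ Z @ small @ V.T @ Ainv

direct = np.linalg.inv(A - Z @ V.T)   # the expensive way, for comparison
```

This is the mechanism that could let (A Aᵀ)⁻¹ be maintained cheaply as the problem data are updated between iterations, rather than recomputed from scratch.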