Title: Paper review of ENGG6140 Optimization Techniques
1. Paper Review of ENGG6140 Optimization Techniques
Paper Review -- Interior-Point Methods for LP
Yanrong Hu, Ce Yu, Mar. 11, 2003
2. Outline
- General introduction
- The original algorithm description
- A variant of Karmarkar's algorithm -- a detailed simple example
- Another example
- More issues
3. General introduction
- Interior-point methods for linear programming were the most dramatic new development in operations research (linear programming) during the 1980s.
- The field was opened in 1984 by a young mathematician at AT&T Bell Labs, N. Karmarkar.
- Karmarkar's algorithm is a polynomial-time algorithm with great potential for solving huge linear programming problems beyond the reach of the simplex method.
- Karmarkar's method stimulated development in both interior-point and simplex methods.
4. General introduction
Difference between interior-point methods and the simplex method:
The big difference lies in the nature of the trial solutions. The simplex method moves along the boundary of the feasible region from one corner-point feasible solution to the next, whereas interior-point methods generate trial solutions that lie strictly inside the feasible region.
5. Karmarkar's original Algorithm
Karmarkar assumes that the LP is given in the canonical form of the problem and satisfies the assumptions stated below.
To apply the algorithm to an LP problem in standard form,
  Min Z = CX
  s.t.  AX ≥ b
        X ≥ 0
a transformation is needed.
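The canonical form and the assumptions referred to above, as stated in Karmarkar's paper [1], are:

  Min z = CX
  s.t.  AX = 0
        1X = 1    (i.e., x1 + x2 + ... + xn = 1)
        X ≥ 0

  Assumption 1: the center of the simplex, X0 = (1/n, ..., 1/n), is feasible, i.e., AX0 = 0.
  Assumption 2: the optimal objective value of the canonical problem is zero.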
6. Karmarkar's original Algorithm
An example of transformation from standard form to canonical form:

Standard form:
  Min z = y1 + y2                (z = cy)
  s.t.  y1 + 2y2 ≥ 2             (AY ≥ b)
        y1, y2 ≥ 0

Canonical form:
  Min z = 5x1 + 5x2
  s.t.  3x1 + 8x2 + 3x3 - 2x4 = 0     (AX = 0)
        x1 + x2 + x3 + x4 = 1         (1X = 1)
        xj ≥ 0,  j = 1,2,3,4
7. Karmarkar's original Algorithm
The principal idea: the algorithm creates a sequence of points x^(0), x^(1), x^(2), ... having decreasing values of the objective function. In the k-th step, the point x^(k) is brought into the center of the simplex by a projective transformation.
8. Karmarkar's original Algorithm
The steps of Karmarkar's algorithm are:
Step 0. Start with the initial solution point x^(0) (the center of the simplex) and compute the step-length parameters.
Step k. Define D_k = diag{x^(k)} and compute the next iterate: x^(k) is brought to the center of the simplex by a projective transformation, a step is taken in the transformed space, and the result is mapped back to the original space (see the sketch below).
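A sketch of the formulas behind Step k (a reconstruction following the statement of the algorithm in Karmarkar's paper [1]; the function name and the default step length alpha below are illustrative choices, not taken from the slides):

    import numpy as np

    def karmarkar_step(x, A, c, alpha=0.25):
        # One projective step for the canonical problem:
        #   Min c^T x  s.t.  A x = 0,  sum(x) = 1,  x >= 0
        n = len(x)
        D = np.diag(x)                          # D_k = diag{x^(k)}
        B = np.vstack([A @ D, np.ones(n)])      # constraints as seen in the transformed space
        P = np.eye(n) - B.T @ np.linalg.inv(B @ B.T) @ B
        cp = P @ (D @ c)                        # projected (scaled) cost vector
        r = 1.0 / np.sqrt(n * (n - 1))          # radius of the sphere inscribed in the simplex
        y = np.ones(n) / n - alpha * r * cp / np.linalg.norm(cp)   # step away from the center
        return D @ y / (np.ones(n) @ (D @ y))   # inverse projective transformation back to x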
9. A Variant
The Affine Variant of Karmarkar's Algorithm
Concept 1: Shoot through the interior of the feasible region toward an optimal solution.
Concept 2: Move in a direction that improves the objective function value at the fastest feasible rate.
Concept 3: Transform the feasible region to place the current trial solution near its center, thereby enabling a large improvement when Concept 2 is implemented.
10. An Example
The problem:
  Max Z = x1 + 2x2
  s.t.  x1 + x2 ≤ 8
        x1, x2 ≥ 0
The optimal solution is (x1,x2) = (0,8) with Z = 16.
11. An Example
- The algorithm begins with an initial solution that lies in the interior of the feasible region.
- Arbitrarily choose (x1,x2) = (2,2) to be the initial solution.

Concept 1: Shoot through the interior of the feasible region toward an optimal solution.
  Max Z = x1 + 2x2
  s.t.  x1 + x2 ≤ 8
        x1, x2 ≥ 0

Concept 2: Move in a direction that improves the objective function value at the fastest feasible rate.
- The direction is perpendicular to (and toward) the objective function line: (3,4) = (2,2) + (1,2), where the vector (1,2) is the gradient of the objective function.
- The gradient is the vector of coefficients of the objective function.
12. An Example
The algorithm actually operates on the augmented form, which can be written in general as
  Max Z = c^T X
  s.t.  AX = b
        X ≥ 0
For this example (with slack variable x3): the initial solution (2,2) becomes (2,2,4), and the gradient of the obj. fn. (1,2) becomes (1,2,0). (This setup is written out numerically below.)
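A minimal numpy setup of this augmented example (the variable names are illustrative; the later sketches reuse them):

    import numpy as np

    # Augmented form of the example:  Max Z = x1 + 2x2  s.t.  x1 + x2 + x3 = 8,  x >= 0
    A = np.array([[1.0, 1.0, 1.0]])     # single constraint row (x3 is the slack variable)
    b = np.array([8.0])
    c = np.array([1.0, 2.0, 0.0])       # gradient of the objective function
    x0 = np.array([2.0, 2.0, 4.0])      # initial trial solution, strictly interior

    assert np.allclose(A @ x0, b) and np.all(x0 > 0)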
13. An Example
Using the projected gradient to implement Concepts 1 and 2
(The augmented form shown graphically: the feasible region is the triangle x1 + x2 + x3 = 8, x ≥ 0.)
- Adding the gradient to the initial solution leads to (3,4,4) = (2,2,4) + (1,2,0), which is infeasible.
- To remain feasible, the algorithm projects the point (3,4,4) down onto the feasible triangle.
- To do so, the projected gradient -- the gradient projected onto the feasible region -- is used.
- Then the next trial solution moves in the direction of the projected gradient.
- Feasible region: the triangle. Optimum: (0,8,0). Initial solution: (2,2,4). Gradient of obj. fn.: cT = (1, 2, 0).
14. An Interior-point Algorithm
Using the projected gradient to implement Concepts 1 and 2
- A formula is available for computing the projected gradient:
  projection matrix  P = I - A^T (A A^T)^-1 A
  projected gradient cp = Pc
- Now we are ready to move from (2,2,4) in the direction of cp to a new point.
- α determines how far we move: a large α takes the point too close to the boundary, while a small α means more iterations.
- We have chosen α = 0.5, so the new trial solution moves to x = (2, 3, 3) (checked numerically below).
15. An Interior-point Algorithm
Centering scheme for implementing Concept 3
Why -- The centering scheme keeps turning the direction of the projected gradient to point more nearly toward an optimal solution as the algorithm converges toward this solution.
How -- Simply change the scale (units) of each variable so that the trial solution becomes equidistant from the constraint boundaries in the new coordinate system. Define D = diag{x}, which brings x to the center in the new coordinates.

Concept 3: Transform the feasible region to place the current trial solution near its center, thereby enabling a large improvement when Concept 2 is implemented.
16. An Interior-point Algorithm
Initial trial solution: (x1,x2,x3) = (2,2,4), so D = diag{2,2,4}.
In this new coordinate system, the problem becomes the rescaled LP sketched below.
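Writing x~ = D^-1 x for the rescaled variables (a reconstruction following the centering scheme in Hillier and Lieberman [4]), the rescaled problem the slide refers to is

  Max Z = 2 x~1 + 4 x~2                 (c~ = Dc = (2, 4, 0))
  s.t.  2 x~1 + 2 x~2 + 4 x~3 = 8       (A~ = AD = (2, 2, 4))
        x~1, x~2, x~3 ≥ 0

and the trial solution (2,2,4) becomes x~ = (1,1,1), the center of the rescaled feasible region.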
17. Summary and Illustration of the Algorithm
Iteration 1. Given the initial solution (x1,x2,x3) = (2,2,4):

Summary of the General Procedure
Step 1. Given the current trial solution (x1,x2,...,xn), set D = diag{x1,x2,...,xn}.
x is brought to the center (1,1,1) by the change of variables x~ = D^-1 x.
18. Summary and Illustration of the Algorithm
Iteration 1 (cont.)
Summary of the General Procedure (cont.)
Step 2. To compute the projected gradient, first rescale the problem: A~ = AD, c~ = Dc.
19. Summary and Illustration of the Algorithm
Iteration 1 (cont.)
Summary of the General Procedure (cont.)
Step 3. Compute the projected gradient: P = I - A~^T (A~ A~^T)^-1 A~, cp = P c~.
20. Summary and Illustration of the Algorithm
Iteration 1 (cont.)
Step 4. Determine how far to move by identifying v, then make the move by calculating the next point.
Define v as the absolute value of the negative component of cp having the largest absolute value, so that here v = |-2| = 2. In these coordinates, the algorithm moves from the current trial solution (1,1,1) to the next one, (1,1,1) + (α/v) cp.
21. Summary and Illustration of the Algorithm
Iteration 1 (cont.)
Summary of the General Procedure (cont.)
Step 5. Back to the original coordinates by calculating x = D x~.
In the original coordinates, the solution is x = (2.5, 3.5, 2).
This completes iteration 1 (a numeric sketch of Steps 1-5 follows below). The new solution will be used to start the next iteration.
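A compact numpy sketch of Steps 1-5 above, reusing A, c, and x0 from the earlier setup and taking α = 0.5 (the helper name affine_step is an illustrative assumption); in the first iteration it produces cp = (1, 3, -2) and v = 2, matching Step 4, and ends at the solution reported for iteration 1:

    def affine_step(x, A, c, alpha=0.5):
        D = np.diag(x)                                  # Step 1: D = diag{x}
        A_t, c_t = A @ D, D @ c                         # Step 2: rescale the problem
        P = np.eye(len(x)) - A_t.T @ np.linalg.inv(A_t @ A_t.T) @ A_t
        cp = P @ c_t                                    # Step 3: projected gradient
        v = abs(cp[cp < 0].min())                       # Step 4: largest negative component
        x_t = np.ones(len(x)) + (alpha / v) * cp        #         move away from the center
        return D @ x_t                                  # Step 5: back to original coordinates

    x1 = affine_step(x0, A, c)
    print(x1)       # [2.5 3.5 2. ]  -- the trial solution at the end of iteration 1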
22. Summary and Illustration of the Algorithm
Iteration 2. Given the current trial solution x = (2.5, 3.5, 2):
x is brought to the center (1,1,1) in the new coordinates, and moves to (0.83, 1.4, 0.5), which corresponds to (2.08, 4.92, 1.0) in the original coordinates.
23. Summary and Illustration of the Algorithm
Iteration 3. Given the current trial solution x = (2.08, 4.92, 1.0):
x is brought to the center (1,1,1) in the new coordinates, and moves to (0.54, 1.30, 0.50), which corresponds to (1.13, 6.37, 0.5) in the original coordinates.
24. An Example
Effect of rescaling at each iteration: the rescaling slides the optimal solution toward (1,1,1) while the other BF (basic feasible) solutions tend to slide away.
25. Summary and Illustration of the Algorithm
More iterations: starting from the current trial solution x and following the same steps, x moves toward the optimum (0,8). When the trial solution is virtually unchanged from the preceding one, the algorithm has virtually converged to an optimal solution, so it stops (see the sketch below).
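A minimal stopping-rule sketch (reusing affine_step and the problem data from the earlier sketches; the tolerance 1e-4 is an illustrative assumption):

    x = np.array([2.0, 2.0, 4.0])
    for k in range(1, 100):
        x_next = affine_step(x, A, c)
        if np.linalg.norm(x_next - x) < 1e-4:    # virtually unchanged from the preceding one
            break
        x = x_next
    print(k, x)      # the trial solution has crept close to the optimum (0, 8, 0)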
26. Another Example
The problem:
  Max Z = 5x1 + 4x2
  s.t.  6x1 + 4x2 ≤ 24
         x1 + 2x2 ≤ 6
        -x1 +  x2 ≤ 1
               x2 ≤ 2
        x1, x2 ≥ 0

Augmented form:
  Max Z = 5x1 + 4x2
  s.t.  6x1 + 4x2 + x3 = 24
         x1 + 2x2 + x4 = 6
        -x1 +  x2 + x5 = 1
               x2 + x6 = 2
        xj ≥ 0,  j = 1,2,...,6
27. Another Example
Starting from an initial solution x = (1, 1, 14, 3, 1, 1), the trial solution at each iteration moves through the interior toward the optimum (a sketch reproducing the iterations follows below).
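A self-contained numpy sketch of the same procedure applied to this problem (again assuming α = 0.5 and the illustrative helper affine_step); it starts from (1, 1, 14, 3, 1, 1) and prints the trial solution at each iteration:

    import numpy as np

    def affine_step(x, A, c, alpha=0.5):
        D = np.diag(x)                                   # Step 1
        A_t, c_t = A @ D, D @ c                          # Step 2
        P = np.eye(len(x)) - A_t.T @ np.linalg.inv(A_t @ A_t.T) @ A_t
        cp = P @ c_t                                     # Step 3: projected gradient
        v = abs(cp[cp < 0].min())                        # Step 4
        return D @ (np.ones(len(x)) + (alpha / v) * cp)  # Step 5

    # Augmented form of this example
    A = np.array([[ 6.0, 4.0, 1.0, 0.0, 0.0, 0.0],
                  [ 1.0, 2.0, 0.0, 1.0, 0.0, 0.0],
                  [-1.0, 1.0, 0.0, 0.0, 1.0, 0.0],
                  [ 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
    c = np.array([5.0, 4.0, 0.0, 0.0, 0.0, 0.0])
    x = np.array([1.0, 1.0, 14.0, 3.0, 1.0, 1.0])        # the interior starting point above

    for k in range(1, 16):
        x = affine_step(x, A, c)
        print(k, np.round(x, 3))                         # trial solution at each iteration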
28. More issues
- Interior-point methods are designed for dealing with big problems. Although the claim that they are much faster than the simplex method is controversial, many tests on huge LP problems show that they outperform it.
- After Karmarkar's paper, many related methods have been developed to extend the applicability of Karmarkar's algorithm, e.g.:
- Infeasible interior-point methods -- remove the assumption that there always exists a nonempty interior.
- Methods applying to LP problems in standard form.
29. More issues
- Methods for finding an initial solution and for estimating the optimal solution.
- Methods working with primal-dual problems.
- Studies of the moving step: long versus short steps.
- Studies of efficient implementation and of the complexity of various methods.
- Karmarkar's paper not only started the development of interior-point methods, but also encouraged rapid improvement of simplex methods.
30. References
[1] N. Karmarkar, A New Polynomial-Time Algorithm for Linear Programming, Combinatorica 4 (4), 1984, pp. 373-395.
[2] M.J. Todd, Theory and Practice for Interior-Point Methods, ORSA Journal on Computing 6 (1), 1994, pp. 28-31.
[3] I. Lustig, R. Marsten, D. Shanno, Interior-Point Methods for Linear Programming: Computational State of the Art, ORSA Journal on Computing 6 (1), 1994, pp. 1-14.
[4] Hillier, Lieberman, Introduction to Operations Research (7th edition), pp. 320-334.
[5] Taha, Operations Research: An Introduction (6th edition), pp. 336-345.
[6] E.R. Barnes, A Variation on Karmarkar's Algorithm for Solving Linear Programming Problems, Mathematical Programming 36, 1986, pp. 174-182.
[7] R.J. Vanderbei, M.S. Meketon and B.A. Freeman, A Modification of Karmarkar's Linear Programming Algorithm, Algorithmica: An International Journal in Computer Science 1 (4), 1986, pp. 395-407.
[8] D. Gay, A Variant of Karmarkar's Linear Programming Algorithm for Problems in Standard Form, Mathematical Programming 37, 1987, pp. 81-90.
... and more.
31. Paper Review of ENGG6140 Optimization Techniques
Thank You!
Yanrong Hu, Ce Yu, Mar. 11, 2003