Tier I: Mathematical Methods of Optimization (Transcript)
2
Tier I: Mathematical Methods of Optimization
  • Section 2
  • Linear Programming

3
Linear Programming (LP)
  • Linear programming (linear optimization) is the area of optimization dealing with problems whose objective functions and constraints are linear
  • Example:
  • minimize f(x) = 6x1 + 5x2 + 2x3 + 7x4
  • subject to 2x1 + 8x3 + x4 = 20
  • x1 + 5x2 + 2x3 + 3x4 = -5

4
Linear Programming cont
  • None of the variables are multiplied by another
    variable, raised to a power, or used in a
    nonlinear function
  • Because the objective function and constraints
    are linear, they are convex. Thus, if an optimal
    solution to an LP problem is found, it is the
    global optimum

5
LP Standard Form
  • LP standard form:
  • minimize f = cx
  • subject to Ax = b
  • xi ≥ 0, i = 1, …, n
  • where c is called the cost vector (1 by n), x is
    the vector of variables (n by 1), A is the
    coefficient matrix (m by n), and b is an m by 1
    vector of given constants.

6
Standard Form Basics
  • For a maximization problem, we can transform
    using
  • max(f(x)) ⇔ min(-f(x))
  • For inequality constraints, use slack
    variables: 2x1 + 3x2 ≤ 5 ⇔ 2x1 + 3x2 + s1 = 5,
    where s1 ≥ 0

7
Using Slack Variables
  • When we transform the inequality
  • 2x1 + 3x2 ≤ 5 into 2x1 + 3x2 + s1 = 5:
  • If the left-hand side (LHS), 2x1 + 3x2, is less
    than the right-hand side (RHS), 5, then s1 takes
    a positive value to make the equality true.
    The nearer the value of the LHS is to the RHS,
    the smaller the value of s1. If the LHS equals
    the RHS, s1 = 0. s1 cannot be negative, because
    the LHS cannot be greater than the RHS (a quick
    numeric check follows below).
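As a quick numeric check (ours, not the slides'), here is a minimal Python sketch of the slack value s1 = 5 - (2x1 + 3x2); the sample points are made up for illustration:

  def slack(x1, x2):
      # s1 = 5 - (2*x1 + 3*x2) for the constraint 2*x1 + 3*x2 <= 5
      return 5.0 - (2.0 * x1 + 3.0 * x2)

  for x1, x2 in [(0.0, 0.0), (1.0, 0.5), (1.0, 1.0)]:
      s1 = slack(x1, x2)
      print(f"x = ({x1}, {x2})  s1 = {s1}  feasible = {s1 >= 0}")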

8
Standard Form Example
  • Example:
  • Write in standard form:
  • maximize f = x1 + x2
  • subject to 2x1 + 3x2 ≤ 6
  • x1 + 7x2 ≤ 4
  • x1 + x2 = 3
  • x1 ≥ 0, x2 ≥ 0
  • Define slack variables x3 ≥ 0 and x4 ≥ 0

9
Example Problem Rewritten
  • The problem can now be written:
  • minimize g = -x1 - x2
  • subject to 2x1 + 3x2 + x3 = 6
  • x1 + 7x2 + x4 = 4
  • x1 + x2 = 3
  • x1, x2, x3, x4 ≥ 0 (a solver cross-check of this
    standard form appears below)
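As a cross-check (not from the slides), the same problem can be handed to an off-the-shelf LP solver. A minimal sketch using scipy's linprog, assuming the constraint senses reconstructed above:

  from scipy.optimize import linprog

  # minimize g = -x1 - x2 (equivalent to maximizing f = x1 + x2)
  res = linprog(c=[-1.0, -1.0],
                A_ub=[[2.0, 3.0],    # 2x1 + 3x2 <= 6 (assumed sense)
                      [1.0, 7.0]],   # x1 + 7x2 <= 4 (assumed sense)
                b_ub=[6.0, 4.0],
                A_eq=[[1.0, 1.0]],   # x1 + x2 = 3
                b_eq=[3.0],
                bounds=[(0, None), (0, None)],
                method="highs")
  print(res.x, res.fun)   # expect x = (3, 0), g = -3 under this reading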

10
Linear Algebra Review
  • The next few slides review several concepts from
    linear algebra that are the basis of the methods
    used to solve linear optimization problems

11
Vectors Linear Independence
  • Vectors
  • A k-vector is a row or column array of k numbers.
    It has a dimension of k.
  • Linear Independence (LI)
  • A collection of vectors a1, a2, …, ak, each of
    dimension n, is called linearly independent if
    the equation λ1a1 + λ2a2 + … + λkak = 0 implies
    that λj = 0 for j = 1, 2, …, k

12
Linear Independence cont
  • In other words, a set of vectors is linearly
    independent if no vector can be written as a
    linear combination of the other vectors (a
    mechanical rank check is sketched below).
  • The maximum number of LI vectors in an
    n-dimensional space is n.
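A rank computation makes the independence check mechanical. A minimal numpy sketch with made-up vectors (the dependent pair mirrors the x2 = 5x1 example on the next slide):

  import numpy as np

  a1, a2 = np.array([1.0, 2.0]), np.array([5.0, 10.0])     # a2 = 5*a1
  print(np.linalg.matrix_rank(np.column_stack([a1, a2])))  # 1 -> dependent

  b1, b2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
  print(np.linalg.matrix_rank(np.column_stack([b1, b2])))  # 2 -> independent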

13
Linear Independence cont
  • For example, in a 2-dimensional space:
  • Two vectors x1 and x2 with x2 = 5x1 (as pictured
    on the slide) are not linearly independent,
    because one is a multiple of the other.
  • Two other pictured vectors are LI because there
    is no constant you can multiply one by to get
    the other.


14
Spanning Sets
  • A set of vectors a1, a2, …, ak in an n-dimensional
    space is said to span the space if any other
    vector in the space can be written as a linear
    combination of the vectors
  • In other words, for any vector b, there must
    exist scalars λ1, λ2, …, λk such that
    b = λ1a1 + λ2a2 + … + λkak

15
Bases
  • A set of vectors is said to be a basis for an
    n-dimensional space if:
  • The vectors span the space
  • If any of the vectors is removed, the set will
    no longer span the space
  • A basis of an n-dimensional space must have
    exactly n vectors
  • There may be many different bases for a given
    space

16
Bases cont
  • An example of a basis is the set of coordinate
    axes of a graph. For a 2-D graph, you cannot
    remove one of the axes and still reach every
    point with just the remaining axis.
  • Likewise, you cannot have three axes in a 2-D
    plot, because you can always represent the third
    using the other two. (Expressing a vector in a
    given basis is a small linear solve, sketched
    below.)
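Expressing a vector b in a given basis amounts to solving Bλ = b for the coordinates λ. A minimal sketch with a made-up 2-D basis:

  import numpy as np

  B = np.column_stack([[1.0, 0.0], [1.0, 1.0]])  # basis vectors as columns
  b = np.array([3.0, 2.0])
  lam = np.linalg.solve(B, b)   # coordinates of b in this basis
  print(lam)                    # [1, 2]: b = 1*(1,0) + 2*(1,1)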

17
Systems of Equations
  • Linear algebra can be used to solve a system of
    equations
  • Example:
  • 2x1 + 4x2 = 8
  • 3x1 - 2x2 = 11
  • This can be written as the augmented matrix
    [2 4 | 8; 3 -2 | 11]

18
Systems of Equations cont
  • Row operations may be performed on the matrix
    without changing the result
  • Valid row operations include the following
  • Multiplying a row by a constant
  • Interchanging two rows
  • Adding one row to another

19
Solving SOEs
  • In the previous example, we want to change the A
    matrix to be upper triangular
  • Multiply the top row by 1/2:
    [1 2 | 4; 3 -2 | 11]
  • Add -3 times the top row to the bottom row:
    [1 2 | 4; 0 -8 | -1]

20
Solving SOEs cont
  • Multiply the bottom row by -1/8:
    [1 2 | 4; 0 1 | 1/8]
  • From the upper triangular augmented matrix, we
    can easily see that x2 = 1/8 and use this to get
    x1:
  • x1 = 4 - 2 · (1/8) = 15/4
  • The same elimination is written out in code below.
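The same elimination written out in Python, using the numbers as reconstructed above:

  import numpy as np

  M = np.array([[2.0, 4.0, 8.0],     # augmented matrix [A | b]
                [3.0, -2.0, 11.0]])
  M[0] /= 2.0                        # multiply the top row by 1/2
  M[1] -= 3.0 * M[0]                 # add -3 times the top row to the bottom
  M[1] /= M[1, 1]                    # scales the bottom row by -1/8
  x2 = M[1, 2]                       # 0.125 = 1/8
  x1 = M[0, 2] - M[0, 1] * x2        # back-substitute: 4 - 2*(1/8) = 15/4
  print(x1, x2)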

21
Matrix Inversion
  • The inverse of a matrix can be found by using row
    operations
  • Example:
  • Form the augmented matrix (A | I)
  • Transform it to (I | A⁻¹) using row operations
    (see the Gauss-Jordan sketch below)
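A minimal Gauss-Jordan sketch of the (A | I) → (I | A⁻¹) procedure; the slide's own example matrix was not preserved, so the matrix from the previous example is reused:

  import numpy as np

  A = np.array([[2.0, 4.0], [3.0, -2.0]])
  aug = np.hstack([A, np.eye(2)])            # form (A | I)
  for i in range(2):
      aug[i] /= aug[i, i]                    # scale the pivot row
      for k in range(2):
          if k != i:
              aug[k] -= aug[k, i] * aug[i]   # clear column i in other rows
  A_inv = aug[:, 2:]                         # now (I | A^-1)
  print(np.allclose(A @ A_inv, np.eye(2)))   # True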

22
Optimization Equations
  • We have seen that the constraints can be written
    in the form Ax = b.
  • We should have more variables than equations so
    that we have some degrees of freedom to optimize.
  • If the number of equations is greater than or
    equal to the number of variables, the values of
    the variables are already specified.

23
General Solution to SOEs
  • Given a system of equations in the form Ax = b
  • Assume m (the number of equations) < n (the
    number of variables) ⇒ an underspecified system
  • We can split the system into (n - m) independent
    variables and m dependent variables. The values
    of the dependent variables will depend on the
    values we choose for the independent variables.

24
General Solution cont
  • We call the dependent variables the basic
    variables, because their A-matrix coefficients
    will form a basis. The independent variables
    will be called the nonbasic variables.
  • By changing the variables in the basis, we can
    change bases. It will be shown that this allows
    us to examine different possible optimum points.

25
General Solution cont
  • Separate the A matrix in the following way:
    A = [B N], so that Ax = BxB + NxN
  • Or, BxB + NxN = b

26
General Solution cont
  • Define the matrices B and N as the following:
    B = [aB1 aB2 … aBm] and N = [aN1 aN2 … aN(n-m)]
  • where B is an m by m matrix, N is an m by (n - m)
    matrix, and aj is the jth column of the A matrix
  • B is called the basic matrix and N is called
    the nonbasic matrix

27
General Solution cont
  • The B matrix contains the columns of the A-matrix
    that correspond to the x-variables that are in
    the basis. Order must be maintained.
  • So, if x4 is the second variable of the basis, a4
    must be the second column of the B-matrix
  • The N matrix is just the columns of the A-matrix
    that are left over.

28
General Solution cont
  • Similarly, define x = (xB, xN), where xB holds
    the basic variables and xN the nonbasic variables
  • We will see later how to determine which
    variables to put into the basis. This is an
    important step in examining all possible optimal
    solutions.

29
General Solution cont
  • Now, we have BxB + NxN = b
  • Multiply both sides by B⁻¹: xB + B⁻¹NxN = B⁻¹b
  • So, xB = B⁻¹b - B⁻¹NxN

30
Basic Solution
  • We can choose any values for the (n - m)
    variables in xN and then solve for the remaining
    m variables in xB
  • If we choose xN = 0, then xB = B⁻¹b. This is
    called a basic solution to the system (a
    numerical sketch follows below).
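A numerical sketch of a basic solution, using a made-up system with m = 2 equations and n = 4 variables:

  import numpy as np

  A = np.array([[1.0, 1.0, 1.0, 0.0],
                [2.0, 1.0, 0.0, 1.0]])
  b = np.array([5.0, 4.0])
  basis = [2, 3]               # choose x3, x4 (0-indexed) as the basic variables
  B = A[:, basis]              # basic matrix
  xB = np.linalg.solve(B, b)   # xB = B^-1 b, with xN = 0
  x = np.zeros(4)
  x[basis] = xB
  print(x)                     # [0, 0, 5, 4]: a basic (and feasible) solution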

31
Basic Feasible Solutions
  • Now we have a solution to Ax = b. But that was
    just one of two sets of constraints for the
    optimization problem. The other was xi ≥ 0,
    i = 1, …, n (non-negativity)
  • A basic feasible solution (BFS) is a basic
    solution where every xi is non-negative
  • A BFS satisfies all of the constraints of the
    optimization problem

32
Extreme Points
  • A point is called an extreme point (EP) if it
    cannot be represented as a strict (0 < λ < 1)
    convex combination of two other feasible points.
  • Remember, a convex combination of two points is a
    point on the line segment between them.
  • So, an EP cannot lie on a line between two other
    feasible points.

33
Extreme Points (Graphical)
  • Given a feasible region, an extreme point cannot
    lie on a line between two other feasible points
    (it must be at a corner)
  • In an n-dimensional space, an extreme point is
    located at the intersection of n constraints

[Figure: a feasible region whose corner points are extreme points; points in the interior or along the edges are not extreme points]
34
Optimum Extreme Points
[Figure: feasible region with the objective vector c and a starting point marked]
  • We have a maximization problem, so we want to go
    as far in the direction of the c (objective
    function) vector as we can
  • Can we determine anything about the location of
    the optimum point?

35
Optimum Extreme Points
[Figure: the same region; from the starting point, moving along an edge in the direction of c leads to a corner]
  • If we start on a line, we can move along that
    line in the direction of the objective function
    until we get to a corner
  • In fact, for any c vector, the optimum point will
    always be on a corner

36
Basic Feasible Solutions
  • In an n-dimensional space, a BFS is formed by the
    intersection of n equations.
  • In 2-D:

[Figure: two constraint lines intersecting at a corner of the feasible region, marked as a basic feasible solution]
  • But, we just saw that an extreme point is also
    a corner point. So, a BFS corresponds to an EP.

37
Tying it Together
  • We just saw that a basic feasible solution
    corresponds to an extreme point.
  • This is very important, because for LP problems
    the optimum point is always at an extreme point.
  • Thus, if we could solve for all of the BFSs
    (EPs), we could compare them to find the optimum.
  • Unfortunately, enumerating them all takes far
    too much time.

38
Simplex Method Introduction
  • The simplex method is the most common method for
    solving LP problems.
  • It works by finding a BFS, determining whether it
    is optimal, and, if it isn't, moving to a
    better BFS until the optimum is reached.
  • This way, we don't have to calculate every
    solution.

39
Simplex Method Algebra
  • Recall xB = B⁻¹b - B⁻¹NxN = B⁻¹b - Σj∈N B⁻¹aj xj,
    where the sum runs over all nonbasic variables
  • Objective function: f = cBxB + cNxN
    = cBxB + Σj∈N cj xj
  • Substitute the expression for xB into the
    objective function
40
Simplex Method Algebra
  • Multiplying through and collecting the xj terms
    gives f = cBB⁻¹b + Σj∈N (cj - zj) xj
  • where zj = cBB⁻¹aj
41
Simplex Method Equations
  • So, the problem is: minimize
    f = cBB⁻¹b + Σj∈N (cj - zj) xj
  • subject to xB = B⁻¹b - Σj∈N B⁻¹aj xj ≥ 0 and
    xj ≥ 0 for j ∈ N
  • If (cj - zj) ≥ 0 for all j ∈ N, then the current
    BFS is optimal for a minimization problem.
  • Because, if it were < 0 for some j, that nonbasic
    variable, xj, could enter the basis and reduce
    the objective function (see the reduced-cost
    sketch below).
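A minimal sketch of the reduced-cost check cj - zj, reusing the made-up system from the basic-solution sketch and an illustrative cost vector:

  import numpy as np

  A = np.array([[1.0, 1.0, 1.0, 0.0],
                [2.0, 1.0, 0.0, 1.0]])
  c = np.array([-1.0, -1.0, 0.0, 0.0])   # illustrative costs
  basis, nonbasic = [2, 3], [0, 1]
  B_inv = np.linalg.inv(A[:, basis])
  cB = c[basis]
  for j in nonbasic:
      z_j = cB @ B_inv @ A[:, j]
      print(j, c[j] - z_j)   # negative -> x_j could enter and reduce f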
42
Entering Variables
  • A nonbasic variable may enter the basis and
    replace one of the basic variables
  • Since xN = 0 and we have non-negativity
    constraints, the entering variable must increase
    in value.
  • The entering variable's value will increase,
    reducing the objective function, until a
    constraint is reached.

43
Entering Variable Equation
  • The quantity that determines which variable
    enters is cj - zj. Calculate it for all nonbasic
    indices j
  • For a minimization problem, choose the index j
    for which cj - zj is the most negative
  • If cj - zj ≥ 0 for all j, the solution is optimal
  • For a maximization problem, choose the index j
    for which cj - zj is the most positive
  • If cj - zj ≤ 0 for all j, the solution is optimal

44
Leaving Variables
  • As the value of the entering variable increases,
    the value of at least one basic variable will
    usually decrease
  • If not, the problem is called unbounded and the
    value of the minimum objective function is -∞
  • The variable whose value reaches zero first will
    be the variable that leaves the basis

45
Entering Leaving Variables
  • Example: x1 is entering the basis while x2, x3,
    and x4 are the current basic variables
  • As soon as x2 reaches zero, we must stop because
    of the non-negativity constraints. But now
    x2 = 0, so it is a nonbasic variable, and x1 > 0,
    so it is a basic variable. So, x2 leaves the
    basis and x1 enters the basis.

[Figure: values of the basic variables x2, x3, x4 plotted against the increasing entering variable x1; x2 reaches zero first]
46
Leaving Variable Equation
  • Let j be the index of the variable that is
    entering the basis and i be the index of the
    variable that is leaving the basis
  • Then i is chosen by the ratio test: minimize
    (B⁻¹b)i / (B⁻¹aj)i over the basic indices i with
    (B⁻¹aj)i > 0
  • Meaning, for every index i that is in the basis
    and has (B⁻¹aj)i > 0, calculate
    (B⁻¹b)i / (B⁻¹aj)i. The index of the value that
    is the minimum is the index of the leaving
    variable (sketched as a function below).
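The ratio test as a small Python function (a sketch; the tolerance and tie-breaking rule are simplified):

  import numpy as np

  def leaving_variable(B_inv, A, b, basis, j):
      d = B_inv @ A[:, j]   # B^-1 a_j
      rhs = B_inv @ b       # B^-1 b
      ratios = [(rhs[i] / d[i], i) for i in range(len(d)) if d[i] > 1e-12]
      if not ratios:
          return None       # no positive entries -> unbounded
      _, i = min(ratios)    # row attaining the minimum ratio
      return basis[i]       # index of the leaving variable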

47
Leaving Variable Equation
  • The previous expression is obtained from the
    equation (B⁻¹b)i - (B⁻¹aj)i xj = 0,
  • which applies when a constraint is reached

48
The Example Revisited
  • x2, x3, and x4 start out at (B⁻¹b)i (i = 2, 3, 4)
    and have slopes of -(B⁻¹aj)i (i = 2, 3, 4), where
    j = 1 because 1 is the index of the entering
    variable (x1)
  • Thus, the distance we can go before a basic
    variable reaches zero is (B⁻¹b)i / (B⁻¹a1)i for
    (B⁻¹a1)i > 0. But if (B⁻¹a1)i < 0 (like x3), it
    won't ever reach zero.

49
The Example Revisited
[Figure: the same plot of x2, x3, x4 against the entering variable x1]
  • We can also see that if none of the basic
    variables decreased, we could keep increasing x1
    and improving the objective function without ever
    reaching a constraint. This gives an unbounded
    solution.

50
Example Problem
  • Minimize f = -x1 - x2
  • Subject to x1 + x2 ≤ 5
  • 2x1 + x2 ≤ 4
  • x1 ≤ 3, x1, x2 ≥ 0
  • Given: the starting basis is x1, x2, x3.
  • Insert slack variables x3, x4, x5.

51
Example
  • Minimize f = -x1 - x2
  • Subject to x1 + x2 + x3 = 5
  • 2x1 + x2 + x4 = 4
  • x1 + x5 = 3
  • x1, x2, x3, x4, x5 ≥ 0

52
Example
  • 1st Iteration

53
Example
  • Now, check optimality:
  • x4: c4 - z4 < 0
  • x5: c5 - z5 > 0
54
Example
  • So, x4 enters the basis, since its optimality
    indicator is < 0.

55
Example
  • So, x3 is the leaving variable

56
Example
  • 2nd Iteration: a4 has been substituted for a3 in
    the basis matrix

57
Example
  • Optimality Indicators:
  • x3: c3 - z3 ≥ 0
  • x5: c5 - z5 ≥ 0
58
Example Solution
  • All of the optimality indicators are ≥ 0, so this
    is the optimal solution.
  • So, the optimum is the current basic feasible
    solution (its values were shown on the slide).

59
Simplex Algorithm Steps
  • With the chosen basis, get B and solve xB = B⁻¹b
    and f = cBxB.
  • Calculate cj - zj for all nonbasic variables j.
  • For a min. problem, if all the cj - zj's are ≥ 0,
    the current solution is optimal. If not, choose
    the index with the most negative cj - zj.
  • For a max. problem, if all the cj - zj's are ≤ 0,
    the current solution is optimal. If not, choose
    the index with the most positive cj - zj.

60
Simplex Algorithm Steps
  • Using the ratio-test equation
    min{(B⁻¹b)i / (B⁻¹aj)i : (B⁻¹aj)i > 0}, choose
    the leaving variable.
  • If all the (B⁻¹aj)i's are ≤ 0, then the solution
    is unbounded
  • Let xj enter the basis and xi leave the basis.
    Obtain the new B matrix and start again with step
    1. The complete loop is sketched in code below.
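Putting the steps together, here is a compact teaching sketch of the simplex loop for a minimization problem (dense linear algebra, no degeneracy handling), run on Example 2 from the slides that follow:

  import numpy as np

  def simplex(c, A, b, basis):
      # Minimize c @ x subject to A @ x = b, x >= 0, starting from a
      # given basic feasible basis. A teaching sketch, not production code.
      m, n = A.shape
      basis = list(basis)
      while True:
          B_inv = np.linalg.inv(A[:, basis])           # step 1: B and xB
          xB = B_inv @ b
          cB = c[basis]
          nonbasic = [j for j in range(n) if j not in basis]
          red = {j: c[j] - cB @ B_inv @ A[:, j] for j in nonbasic}  # step 2
          j = min(red, key=red.get)                    # most negative c_j - z_j
          if red[j] >= -1e-12:                         # step 3: all >= 0 -> optimal
              x = np.zeros(n)
              x[basis] = xB
              return x, c @ x
          d = B_inv @ A[:, j]                          # step 4: ratio test
          ratios = [(xB[i] / d[i], i) for i in range(m) if d[i] > 1e-12]
          if not ratios:
              raise ValueError("problem is unbounded")
          _, i = min(ratios)
          basis[i] = j                                 # x_j enters, row i's variable leaves

  # Example 2 (next slides): min -x1 - 3x2 s.t. 2x1 + 3x2 <= 6, -x1 + x2 <= 1
  A = np.array([[2.0, 3.0, 1.0, 0.0],
                [-1.0, 1.0, 0.0, 1.0]])
  b = np.array([6.0, 1.0])
  c = np.array([-1.0, -3.0, 0.0, 0.0])
  print(simplex(c, A, b, basis=[2, 3]))   # x = (3/5, 8/5, 0, 0), f = -27/5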

61
Choosing A Starting Basis
  • In the example, we were given a starting basis.
    How do we come up with one on our own?
  • Case 1: a max (or min) problem with
  • Ax ≤ b (all inequalities) and
  • all entries of the b vector ≥ 0.
  • Insert slack variables into the constraint
    equations and use the resulting identity matrix
    as the starting basis

62
Choosing A Starting Basis
  • Let s = the vector of slack variables
  • The problem will become:
  • minimize cx
  • subject to Ax + Is = b, x, s ≥ 0
  • where I = the identity matrix
63
Choosing A Starting Basis
  • Choose the slack variables to be the starting
    basis
  • The starting basis matrix B is the matrix of
    coefficients of the slack variables, which
    happens to be the identity matrix.
  • We can see that the starting basis is feasible
    (xB = B⁻¹b = b ≥ 0)

64
Example Problem 2
  • Minimize -x1 - 3x2
  • Subject to 2x1 + 3x2 ≤ 6
  • -x1 + x2 ≤ 1, x1, x2 ≥ 0
  • Insert slack variables:
  • 2x1 + 3x2 + x3 = 6
  • -x1 + x2 + x4 = 1
  • x1, x2, x3, x4 ≥ 0

65
Example 2
  • Use the slacks as the starting basis


66
Example 2
  • Optimality Indicators:
  • j = 1: c1 - z1 = -1
  • j = 2: c2 - z2 = -3

c2 - z2 is the most negative, so x2 enters the basis
67
Example 2
x2 is entering the basis
68
Example 2
So, x4 is the leaving variable.
69
Example 2
  • 2nd Iteration

x2 replaced x4
70
Example 2
  • Optimality Indicators:
  • j = 1: c1 - z1 = -4
  • j = 4: c4 - z4 = 3
  • So, x1 enters the basis

71
Example 2
  • Leaving Variable
  • So, x3 leaves the basis and x1 replaces it.

72
Example 2
  • 3rd Iteration

73
Example 2
  • Optimality Indicators:
  • j = 3: c3 - z3 = 4/5
  • j = 4: c4 - z4 = 3/5
  • Both cj - zj's are ≥ 0, so the current solution,
    x = (3/5, 8/5) with f = -27/5, is optimal
    (cross-checked below)
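The same answer from an off-the-shelf solver, as a cross-check (the solver choice is ours, not the slides'):

  from scipy.optimize import linprog

  res = linprog(c=[-1.0, -3.0],
                A_ub=[[2.0, 3.0], [-1.0, 1.0]],
                b_ub=[6.0, 1.0],
                bounds=[(0, None), (0, None)],
                method="highs")
  print(res.x, res.fun)   # approximately [0.6, 1.6] and -5.4 = -27/5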

74
Example 2
  • This graph shows the path taken.
  • The dashed lines are perpendicular to the cost
    vector, c.

[Figure: the feasible region in the (x1, x2) plane, the simplex path from (0, 0) to (0, 1) to (3/5, 8/5), and dashed lines of increasing c]
75
Example 2
  • Since we were minimizing, we went in the opposite
    direction from the cost vector

[Figure: the same graph, showing movement opposite to the direction of increasing c]
76
More on Starting Bases
  • Case 2: a max (or min) problem with
  • Ax ≥ b (at least some ≥ constraints) and
  • all entries of the b vector ≥ 0
  • Adding slacks makes the problem become
  • Ax - Is = b, x, s ≥ 0.
  • We cannot use the same trick as before, because
    now we would have a negative identity matrix as
    our B matrix.

77
Case 2 cont
  • 2-Phase Method
  • Introduce artificial variables (y) where needed
    to get an identity matrix. If all constraints
    were ≥, the problem will become
  • Ax - Is + Iy = b, x, s, y ≥ 0

78
Artificial Variables
  • Artificial variables are not real variables.
  • We use them only to get a starting basis, so we
    must get rid of them.
  • To get rid of them, we solve an extra
    optimization problem before we start solving the
    regular problem.

79
2-Phase Method
  • Phase 1
  • Solve the following LP, starting with B = I and
    xB = y = b:
  • Minimize Σi yi
  • Subject to Ax - Is + Iy = b, x, s, y ≥ 0
  • If y ≠ 0 at the optimum, stop: the problem is
    infeasible. If y = 0, then use the current basis
    and continue on to Phase 2.

80
2-Phase Method cont
  • Phase 2
  • Using the objective function from the original
    problem, change the c vector and continue solving
    from the current basis.
  • Minimize (or Maximize) cx
  • Subject to Ax - Is = b, x, s ≥ 0 (a compact
    sketch of both phases appears below)
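A compact sketch of the two-phase setup on made-up data, with scipy's linprog standing in for the simplex iterations of each phase (a real implementation would instead warm-start Phase 2 from the Phase 1 basis):

  import numpy as np
  from scipy.optimize import linprog

  # Two-phase setup for: min c@x s.t. A@x >= b, x >= 0, with b >= 0
  A = np.array([[1.0, 2.0], [3.0, 1.0]])
  b = np.array([4.0, 5.0])
  c = np.array([2.0, 3.0])
  m, n = A.shape
  A_eq = np.hstack([A, -np.eye(m), np.eye(m)])   # columns: [x | -I s | I y]
  # Phase 1: minimize the sum of the artificials y
  c1 = np.concatenate([np.zeros(n + m), np.ones(m)])
  p1 = linprog(c1, A_eq=A_eq, b_eq=b, method="highs")
  if p1.fun > 1e-9:
      raise ValueError("original problem is infeasible")
  # Phase 2: original cost on x, artificials pinned to zero
  c2 = np.concatenate([c, np.zeros(2 * m)])
  bounds = [(0, None)] * (n + m) + [(0, 0)] * m
  p2 = linprog(c2, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
  print(p2.x[:n], p2.fun)   # optimal x and objective of the original problem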

81
Artificial vs. Slack Variables
  • Slack variables are real variables that may be
    positive in an optimum solution, meaning that
    their constraint is a strict (< or >) inequality
    at the optimum.
  • Artificial variables are not real variables.
    They are only inserted to give us a starting
    basis to begin the simplex method. They must
    become zero to have a feasible solution to the
    original problem.

82
Artificial Variable Example 1
  • Consider the constraints
  • x1 + 2x2 ≥ 4
  • -3x1 + 4x2 ≥ 5
  • 2x1 + x2 ≤ 6, x1, x2 ≥ 0
  • Introduce slack variables:
  • x1 + 2x2 - x3 = 4
  • -3x1 + 4x2 - x4 = 5
  • 2x1 + x2 + x5 = 6

83
AV Example 1
  • We can see that we cannot get an identity matrix
    in the coefficients with positive numbers on the
    right-hand side. We need to add artificial
    variables:
  • x1 + 2x2 - x3 + y1 = 4
  • -3x1 + 4x2 - x4 + y2 = 5
  • 2x1 + x2 + x5 = 6

84
AV Example 1
  • Now we have an identity matrix, made up of the
    coefficient columns of y1, y2, and x5.
  • We would solve the problem with the objective of
    minimizing y1 + y2 to get rid of the artificial
    variables, then take whatever basis we have and
    continue solving with the original objective
    function.

85
Artificial Variable Example 2
  • Consider the constraints
  • x1 + 2x2 - 5x3 ≥ -4
  • 3x1 + x2 + 3x3 ≤ 2
  • -x1 - x2 - x3 = -1, x1, x2, x3 ≥ 0
  • Introduce slack variables:
  • x1 + 2x2 - 5x3 - x4 = -4
  • 3x1 + x2 + 3x3 + x5 = 2
  • -x1 - x2 - x3 = -1

86
AV Example 2
  • We don't have to add an artificial variable to
    the 1st constraint if we multiply it by -1.
  • When we multiply the first and last constraints
    by -1 and add an artificial variable to the last,
    we have
  • -x1 - 2x2 + 5x3 + x4 = 4
  • 3x1 + x2 + 3x3 + x5 = 2
  • x1 + x2 + x3 + y1 = 1
  • x1, x2, x3, x4, x5, y1 ≥ 0

87
Constraint Manipulation
  • So, after adding slacks, we must make the
    right-hand side numbers positive. Then we add
    artificial variables where needed.

88
Artificial Variable Example 3
  • Consider the problem
  • Maximize -x1 + 8x2
  • Subject to x1 + x2 ≥ 1
  • -x1 + 6x2 ≤ 3
  • x2 ≤ 2, x1, x2 ≥ 0

89
AV Example 3
  • Insert slacks:
  • x1 + x2 - x3 = 1
  • -x1 + 6x2 + x4 = 3
  • x2 + x5 = 2
  • So, we need an artificial variable in the 1st
    constraint.

90
AV Example 3
  • Insert an artificial variable:
  • x1 + x2 - x3 + y1 = 1
  • -x1 + 6x2 + x4 = 3
  • x2 + x5 = 2

91
AV Example 3
  • So, Phase 1 is:
  • Minimize y1
  • Subject to x1 + x2 - x3 + y1 = 1
  • -x1 + 6x2 + x4 = 3
  • x2 + x5 = 2
  • Our starting basis is y1, x4, x5.

92
AV Example 3

93
AV Example 3
  • Optimality Indicators:
  • j = 1: c1 - z1 = -1
  • j = 2: c2 - z2 = -1
  • j = 3: c3 - z3 = 1
94
AV Example 3
  • It's a tie between x1 and x2; pick x1 to enter
    the basis

x1 is entering the basis
95
AV Example 3
So, x1 replaces y1 in the basis
96
AV Example 3
97
AV Example 3
  • Optimality Indicators:
  • j = 2: c2 - z2 = 0
  • j = 3: c3 - z3 = 0
  • j = 6 (the artificial y1): c6 - z6 = 1
98
AV Example 3
  • All of the optimality indicators are ≥ 0, so this
    is an optimum solution (and y1 = 0, as required).
  • So, we keep this basis and change the objective
    function to the original one:
  • Maximize -x1 + 8x2
  • Our basis is still x1, x4, x5.

99
AV Example 3
  • Back to the original problem

The basis remains the same
100
AV Example 3
  • Optimality Indicators:
  • j = 2
  • j = 3

Since we are maximizing now, we want the most
positive indicator. So, x2 enters the basis.
101
AV Example 3
x2 is entering the basis
102
AV Example 3
The minimum ratio occurs in x4's row, so x4 leaves the basis
103
AV Example 3
104
AV Example 3
  • Optimality Indicators:
  • j = 3
  • j = 4

105
Artificial Variable Example 3
  • All of the optimality indicators are ≤ 0, so this
    is the optimum solution

106
KKT Conditions
  • The Karush-Kuhn-Tucker (KKT) Conditions can be
    used to see optimality graphically
  • We will use them more in nonlinear programming
    later, but we can use a simplified version here

107
KKT Conditions for LP
  • Change the constraints so that they are all ≥
    constraints.
  • The optimum point is the point where the gradient
    of the objective function lies within the cone
    formed by the vectors normal to the intersecting
    constraints.

108
KKT Conditions
  • Reminder:
  • The gradient (∇f) of a function f with n
    variables is calculated as
    ∇f = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn)
  • Example: for the cost function f = -x1 - 3x2 used
    below, ∇f = (-1, -3)
109
KKT Constraints Example
  • Example: in example problem 2, we had the problem
  • Minimize f = -x1 - 3x2
  • Subject to 2x1 + 3x2 ≤ 6
  • -x1 + x2 ≤ 1
  • x1, x2 ≥ 0
  • The gradient of the cost function, -x1 - 3x2, is
    ∇f = (-1, -3)
110
KKT Example
  • Previously, we saw that this problem looks like:

[Figure: the feasible region bounded by constraint 1 and constraint 2, with extreme points (0, 0), (3, 0), (0, 1), and (3/5, 8/5)]
111
KKT Example
  • Change the constraints to all ≥ 0 form:
  • g1 = -2x1 - 3x2 + 6 ≥ 0
  • g2 = x1 - x2 + 1 ≥ 0
  • g3 = x1 ≥ 0
  • g4 = x2 ≥ 0

112
KKT Example
  • The gradients of the four constraints (counting
    the non-negativity constraints), g1, …, g4, are
  • ∇g1 = (-2, -3), ∇g2 = (1, -1), ∇g3 = (1, 0),
    ∇g4 = (0, 1)
113
KKT Example
  • The graph of the problem with the normals of the
    constraints becomes:

[Figure: the feasible region with the gradient ∇gi of each constraint drawn at its boundary; each ∇gi is perpendicular to constraint i, and pairs of gradients form cones at the extreme points (0, 0), (3, 0), (0, 1), and (3/5, 8/5)]
114
KKT Example
  • c = (-1, -3) looks like this: a vector pointing
    down and to the left
  • So, whichever cone this vector fits into
    corresponds to the optimal extreme point.
115
KKT Example
[Figure: the vector c = (-1, -3) fits into the cone formed by ∇g1 and ∇g2 at the extreme point (3/5, 8/5)]
116
KKT Example
  • So, we get the same optimal point as when we used
    the simplex method
  • This can also be done for problems with three
    variables in a 3-D space
  • With four or more variables, visualization is not
    possible and it is necessary to use the
    mathematical definition

117
Mathematical Definition of KKT Conditions for LP
  • Given an LP minimization problem
  • Modify the constraints so that we have
    gi(x) ≥ 0, i = 1, …, m
  • where gi(x) is the linear constraint equation i.
    The bi that was on the right side of the
    inequality sign is moved to the left side and
    included in gi.
118
Mathematical Definition of KKT Conditions for LP
  • If there exists a solution for x and the λi's
    satisfying the conditions below, then x is the
    global optimum

Equation 1: ∇f(x) = Σi λi ∇gi(x)
Equation 2: λi gi(x) = 0 for all i
Equation 3: gi(x) ≥ 0 for all i
Equation 4: λi ≥ 0 for all i
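A numerical check of these conditions (ours, not the slides') at x = (3/5, 8/5) for the earlier example, using the gradients from slide 112; only the two active constraints enter the cone:

  import numpy as np

  # min f = -x1 - 3x2 with g_i(x) >= 0 form; at x = (3/5, 8/5) the active
  # constraints are g1 = -2x1 - 3x2 + 6 and g2 = x1 - x2 + 1.
  grad_f = np.array([-1.0, -3.0])
  G = np.column_stack([[-2.0, -3.0],   # grad g1
                       [1.0, -1.0]])   # grad g2
  lam = np.linalg.solve(G, grad_f)     # Equation 1 on the active set
  print(lam, np.all(lam >= 0))         # [0.8, 0.6], True -> KKT satisfied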
119
Explanation of Equation 1
  • Equation 1 mathematically states that the
    objective function vector must lie inside the
    cone formed by the vectors normal to the active
    constraints at the optimal point

120
Explanation of Equation 2
  • Equation 2 forces λi to be zero for all of the
    inactive constraints; this is called the
    complementary slackness condition
  • If the constraint is active, gi(x) = 0, so λi
    may be positive and ∇gi will be part of the cone
    in Equation 1.
  • If the constraint is inactive, gi(x) ≠ 0, so λi
    must be zero. ∇gi will not be included in the
    cone in Equation 1 because it is multiplied
    by zero.

121
Explanation of Equations 3 4
  • Equation 3 ensures that x is feasible
  • Equation 4 ensures that the direction of the cone
    is correct.
  • If the λi's were negative, the cone would point
    in the opposite direction. So, this equation
    prevents that from happening.

122
KKT Conditions Summary
  • The KKT Conditions are not very useful in solving
    for optimal points, but they can be used to check
    for optimality and they help us visualize
    optimality
  • We will use them frequently when we deal with
    nonlinear optimization problems in the next
    section

123
Automated LP Solvers
  • There are many software programs available that
    will solve LP problems numerically
  • Microsoft Excel is one program that solves LP
    problems
  • To see the default Excel examples for
    optimization problems, search for and open the
    file solvsamp.xls (it should be included in a
    standard installation of Microsoft Office)

124
Excel LP Example 1
  • Let's solve the first example problem in this
    chapter with Excel
  • The problem was:
  • Minimize f = -x1 - x2
  • Subject to x1 + x2 ≤ 5
  • 2x1 + x2 ≤ 4
  • x1 ≤ 3, x1, x2 ≥ 0

125
Excel LP Example 1
  • Here is the Excel spreadsheet with the necessary
    data

In the spreadsheet, A2 is the cell reference for
x1 and B2 is the reference for x2
126
Excel LP Example 1
  • You can see that under the "value" heading for
    the constraints and objective function, we simply
    use the given functions to calculate the value of
    each function

127
Excel LP Example 1
  • On the right side of the constraints, in the
    "limit" column, we write the value of bi
    for that constraint
  • Obviously, the objective function doesn't have a
    limit

128
Excel LP Example 1
  • So, the spreadsheet looks like this

129
Excel LP Example 1
  • Now, we need to use the Excel Solver feature
  • Look under the Tools menu for Solver
  • If it is not there, go to Add-Ins under Tools
    and select the Solver Add-In

130
Excel LP Example 1
  • The Solver toolbox should look something like
    this

131
Excel LP Example 1
  • This is a minimization problem, so select "Min"
    and set the target cell as the objective function
    value
  • The variables are x1 and x2, so in the "By
    Changing Cells" box, select A2 and B2

132
Excel LP Example 1
  • Now add the constraints
  • For the "Cell Reference," use the value of the
    constraint function, and for the "Constraint,"
    use the number in the "Limit" column
  • The constraints are all ≤, so make sure that "<="
    is showing between the "Cell Reference" and
    "Constraint" boxes

133
Excel LP Example 1
  • Now, the Solver window should look like this

134
Excel LP Example 1
  • Finally, click the "Options" button
  • All of the variables are specified as being
    non-negative, so check the "Assume Non-Negative"
    box

135
Excel LP Example 1
  • Since this is an LP problem, check the "Assume
    Linear Model" box
  • Finally, the default tolerance of 5% is usually
    much too large. Unless the problem is very
    difficult, a tolerance of 1% or even 0.1% is
    usually fine

136
Excel LP Example 1
  • Click "Solve" and the "Solver Results" box should
    appear
  • Under "Reports," select the "Answer" report and
    click "OK"
  • A new worksheet that contains the Answer Report
    is added to the file

137
Excel LP Example 1
  • The spreadsheet with the optimum values should
    look like this

138
Excel LP Example 1
  • The values for x1 and x2 are the same as when we
    solved the problem using the simplex method
  • Also, if you look at the Answer Report, you
    can see that all of the slack variables are equal
    to zero, which is also what we obtained with the
    simplex method

139
Excel Example 2
  • Let's solve one more LP problem with Excel
  • Maximize 5x1 + 2x2 + x3
  • Subject to 2x1 + 4x2 + x3 ≤ 6
  • 2x1 + x2 + 3x3 ≥ 2
  • x1, x2 ≥ 0
  • x3 unrestricted in sign (a solver cross-check
    appears below)
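For comparison (not part of the slides), the same LP in scipy's linprog, with x3 left free through its bounds; the constraint senses follow the reconstruction above:

  from scipy.optimize import linprog

  # max 5x1 + 2x2 + x3  ->  min -(5x1 + 2x2 + x3)
  res = linprog(c=[-5.0, -2.0, -1.0],
                A_ub=[[2.0, 4.0, 1.0],       # 2x1 + 4x2 + x3 <= 6
                      [-2.0, -1.0, -3.0]],   # 2x1 + x2 + 3x3 >= 2, flipped
                b_ub=[6.0, -2.0],
                bounds=[(0, None), (0, None), (None, None)],  # x3 free
                method="highs")
  print(res.x, -res.fun)   # expect x = (4, 0, -2), f = 18 under this reading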

140
Excel Example 2
  • Entering the equations into the spreadsheet
    should give

141
Excel Example 2
  • Unlike last time, not all of the variables are
    specified as being non-negative, so we cannot use
    the "Assume Non-negative" option for all of the
    variables.
  • So we have to manually specify x1 and x2 to be
    non-negative by adding two more constraints

142
Excel Example 2
  • Now, the formulas in the spreadsheet should look
    like this

143
Excel Example 2
  • Now, open the Solver toolbox and specify:
  • The target cell,
  • The range of variable cells,
  • Maximization problem
  • The constraints
  • The first is a ≤ constraint and the rest are ≥
    constraints.

144
Excel Example 2
  • Click the "Options" button and check the "Assume
    Linear Model" box.
  • Remember: since x3 is unrestricted in sign, do
    not check the "Assume Non-negative" box
  • You can reduce the tolerance if you would like

145
Excel Example 2
  • The Solver window should look like this

146
Excel Example 2
  • After solving, the spreadsheet should look like
    this

147
Excel Example 2
  • Notice that because x3 was unrestricted in sign,
    it was able to take a negative value, and this
    improved the solution
  • To see how much of a difference this made in the
    solution, re-solve the problem with the "Assume
    Non-negative" option selected

148
Solving LP Problems With Excel
  • From these examples, you can see that Excel can
    be an efficient tool for solving LP optimization
    problems
  • The method for solving the problems outlined here
    is just one way; the user should feel free to
    experiment to find their own style

149
References
  • Bazaraa, Mokhtar S., John J. Jarvis, and Hanif D.
    Sherali. Linear Programming and Network Flows.
  • Edgar, Thomas F., David M. Himmelblau, and Leon
    S. Lasdon. Optimization of Chemical Processes,
    2nd ed.
  • El-Halwagi, Mahmoud M. Pollution Prevention
    Through Process Integration.