Mathematical Modeling and Optimization: Summary of Big Ideas
2. A schematic view of the modeling/optimization process
- Real-world problem → (assumptions, abstraction, data, simplifications) → Mathematical model
- Mathematical model → (optimization algorithm) → Solution to model
- Solution to model → (interpretation) → Solution to real-world problem
- Does the solution make sense? If not, change the model or the assumptions and repeat.
3. Mathematical models in Optimization
- The general form of an optimization model:
  - min or max f(x1, …, xn)   (objective function)
  - subject to gi(x1, …, xn) ≤ 0   (functional constraints)
  - (x1, …, xn) ∈ S   (set constraints)
- x1, …, xn are called decision variables.
- In words, the goal is to find x1, …, xn that
  - satisfy the constraints, and
  - achieve the minimum (maximum) objective function value (a tiny concrete instance follows below).
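To make the general form concrete, here is a minimal sketch (not from the slides; the particular f, g, and S are invented) of a tiny model with two decision variables, solved by brute force over a small finite set S.

```python
# Hypothetical illustration: a tiny model in the general form
#   min f(x1, x2)  subject to  g(x1, x2) <= 0,  (x1, x2) in S,
# solved by brute force over the finite set S.
from itertools import product

def f(x1, x2):            # objective function
    return 3 * x1 + 2 * x2

def g(x1, x2):            # functional constraint, written as g(x) <= 0
    return 5 - (x1 + x2)  # i.e., x1 + x2 >= 5

S = list(product(range(6), repeat=2))  # set constraint: x1, x2 in {0, ..., 5}

feasible = [(x1, x2) for (x1, x2) in S if g(x1, x2) <= 0]
best = min(feasible, key=lambda x: f(*x))
print("optimal decision variables:", best, "objective value:", f(*best))
```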
4Types of Optimization Models
Stochastic (probabilistic information on data)
Deterministic (data are certain)
Discrete, Integer (S Zn)
Continuous (S Rn)
Linear (f and g are linear)
Nonlinear (f and g are nonlinear)
5. What is Discrete Optimization?
- Discrete Optimization is a field of applied mathematics that combines techniques from combinatorics and graph theory, linear programming, and the theory of algorithms to solve optimization problems over discrete structures.
6. Solution Methods for Discrete Optimization Problems
- Integer Programming
- Network Algorithms
- Dynamic Programming
- Approximation Algorithms
7. Integer Programming
- "Programming" means "planning" in this context.
- In a survey of Fortune 500 firms, 85% of those responding said that they had used linear or integer programming.
- Why is it so popular?
  - Many different real-life situations can be modeled as IPs.
  - There are efficient algorithms to solve IPs.
8. Topics in this class about Integer Programming
- Modeling real-life situations as integer programs
- Applications of integer programming
- Solution methods (algorithms) for integer programs
- (maybe) Using software (called AMPL) to solve integer programs
9. IP modeling techniques
- Using binary variables
- Restrictions on the number of options
- Contingent decisions
- Variables (functions) with k possible values
- Either-Or Constraints
- Big M method
- Balance constraints
- Fixed Charges (see the sketch after this list)
- Making choices with non-binary variables
- Piecewise linear functions
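As an illustration of two of the devices listed above (binary variables and the big-M / fixed-charge trick), the following sketch uses invented data: a binary variable y[i] indicates whether source i is used, and the constraint x[i] <= M*y[i] forces the fixed charge to be paid whenever x[i] > 0. A brute-force search stands in for a real IP solver.

```python
# Hypothetical fixed-charge example (numbers invented for illustration):
# choose integer production amounts x1, x2 >= 0 to meet a demand of 8 units;
# using source i incurs a fixed charge K[i], modeled with a binary y[i] and
# the big-M linking constraint x[i] <= M * y[i].
from itertools import product

c = [3, 2]        # unit production costs
K = [10, 30]      # fixed (setup) charges
M = 8             # big M: no source ever needs to produce more than the demand
demand = 8

best_cost, best_plan = None, None
for x1, x2, y1, y2 in product(range(M + 1), range(M + 1), (0, 1), (0, 1)):
    x, y = (x1, x2), (y1, y2)
    if sum(x) < demand:
        continue                                   # demand constraint
    if any(x[i] > M * y[i] for i in range(2)):
        continue                                   # big-M linking constraint
    cost = sum(c[i] * x[i] + K[i] * y[i] for i in range(2))
    if best_cost is None or cost < best_cost:
        best_cost, best_plan = cost, (x, y)

print("cheapest plan:", best_plan, "cost:", best_cost)
```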
10. IP applications
- Facility Location Problem
- Knapsack Problem
- Multi-period production planning
- Inventory management
- Fair representation in electoral systems
- Consultant hiring
- Bin Packing Problem
- Pairing Problem
- Traveling Salesman Problem
11. Difficulties of real-life modeling
- The problems that you will encounter in the real world are always a lot messier than the "clean" problems we looked at in this class; there are always side constraints that complicate getting even a feasible solution.
- Most real-world problems have multiple objectives, and it is hard to choose from among them.
- In the real world you must gain experience with how to adapt the idealized models of academia to each new problem you are asked to solve.
12. Utilizing the relationship between problems
- An important modeling skill.
- Suppose you know how to model Problems A1, …, Ap, and you need to solve Problem B.
- Notice the similarities between Problems Ai and B.
- Build a model for Problem B, using the model for Problem Ai as a prototype.
- Examples:
  - A version of the facility location problem as a special case of the knapsack problem.
  - Solving the committee assignment problem by graph coloring.
13. Complexity of Solving Discrete Optimization Problems
- Two classes:
  - Class 1 problems have polynomial-time algorithms for solving the problems optimally.
    - Examples: Minimum Spanning Tree Problem, Assignment Problem.
  - Class 2 problems (NP-hard problems):
    - No polynomial-time algorithm is known, and most likely there is none.
    - Examples: Traveling Salesman Problem, Graph Coloring Problem.
- Most discrete optimization problems are in the second class.
14. Three main directions to solve NP-hard discrete optimization problems:
- Integer programming techniques
- Approximation algorithms
- Heuristics
- Important observation: Any solution method suggests a tradeoff between time and accuracy.
- On the time-accuracy tradeoff scale, from most accuracy and worst time to least accuracy and best time: Brute force, Integer programming, Approximation algorithms, Heuristics.
15. Solving Integer Programs (IP) vs. solving Linear Programs (LP)
- LP algorithms:
  - Simplex Method
  - Interior-point methods
- IP algorithms use the above-mentioned LP algorithms as subroutines (see the sketch below).
- The algorithms for solving LPs are much more time-efficient than the algorithms for IPs.
- Important modeling consideration: Whenever possible, avoid integer variables in your model.
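For instance, the LP-relaxation of a small IP can be handed to an LP solver directly; the sketch below uses scipy's linprog (the example numbers are invented) and typically returns a fractional solution whose value is an upper bound on the IP optimum.

```python
# Hedged sketch (example data invented): solving the LP-relaxation of a small IP
#   max 5*x1 + 4*x2   s.t.  6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x1, x2 >= 0 integer
# with scipy's LP solver.  linprog minimizes, so we negate the objective.
from scipy.optimize import linprog

c = [-5, -4]                       # negate to turn max into min
A_ub = [[6, 4], [1, 2]]
b_ub = [24, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("LP-relaxation solution:", res.x)      # typically fractional, e.g. x2 = 1.5
print("LP-relaxation value   :", -res.fun)   # an upper bound on the IP optimum (21 here)
```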
16. LP-relaxation-based solution methods for Integer Programs
- Branch-and-Bound technique
- Cutting Plane algorithms
17. Basic Concepts of Branch-and-Bound
- The basic concept underlying the branch-and-bound technique is to divide and conquer.
- Since the original large problem is hard to solve directly, it is divided into smaller and smaller subproblems until these subproblems can be conquered.
- The dividing (branching) is done by partitioning the entire set of feasible solutions into smaller and smaller subsets.
- The conquering (fathoming) is done partially by
  - (i) giving a bound for the best solution in the subset, and
  - (ii) discarding the subset if the bound indicates that it can't contain an optimal solution.
18. Summary of branch-and-bound
- Steps for each iteration (see the sketch after this list):
  - Branching: Among the unfathomed subproblems, select the one that was created most recently. (Break ties according to which has the larger LP value.) Choose a variable xi which has a noninteger value xi* in the LP solution of the subproblem. Create two new subproblems by adding the respective constraints xi ≤ ⌊xi*⌋ and xi ≥ ⌈xi*⌉.
  - Bounding: Solve the new subproblems and record their LP solutions. Based on the LP values, update the incumbent and the lower and upper bounds for OPT(IP) if necessary.
  - Fathoming: For each new subproblem, apply the three fathoming tests. Discard the subproblems that are fathomed.
  - Optimality test: If there are no unfathomed subproblems left, then return the current incumbent as the optimal solution (if there is no incumbent, then the IP is infeasible). Otherwise, perform another iteration.
19. Importance of tight lower and upper bounds in branch-and-bound
- Having tight lower and upper bounds on the IP optimal value might significantly reduce the number of branch-and-bound iterations.
- For a maximization problem:
  - A lower bound can be found by applying a fast heuristic algorithm to the problem.
  - An upper bound can be found by solving a relaxation of the problem (e.g., the LP-relaxation).
- If the current lower and upper bounds are close enough, we can stop the branch-and-bound algorithm and return the current incumbent solution (it can't be too far from the optimum).
20. General Idea of the Cutting Plane Technique
- Add new constraints (cutting planes) to the problem such that
  - (i) the set of feasible integer solutions remains the same, i.e., we still have the same integer program;
  - (ii) the new constraints cut off some of the fractional solutions, making the feasible region of the LP-relaxation smaller.
- A smaller feasible region might result in a better LP value (i.e., closer to the IP value), thus making the search for the optimal IP solution more efficient.
- Each integer program might have many different formulations.
- Important modeling skill: Give as tight a formulation as possible.
  - How? Find cutting planes that make the formulation of the original IP tighter (see the sketch below).
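As a small illustration of points (i) and (ii), the sketch below (data invented) adds a knapsack "cover inequality" as a cutting plane and verifies by enumeration that it keeps every integer solution feasible while cutting off a fractional LP point.

```python
# Hedged illustration (data invented): a cover inequality as a cutting plane.
# Binary knapsack constraint: 3*x1 + 4*x2 + 5*x3 <= 6, x binary.
# Since 3 + 4 > 6, x1 and x2 cannot both be 1, so x1 + x2 <= 1 is valid for
# every integer solution, yet it cuts off fractional LP points such as
# x = (1, 0.75, 0), which satisfies the knapsack constraint but not the cut.
from itertools import product

weights, capacity = [3, 4, 5], 6

def knapsack_ok(x):
    return sum(w * v for w, v in zip(weights, x)) <= capacity

def cut_ok(x):                       # the cover inequality x1 + x2 <= 1
    return x[0] + x[1] <= 1

# (i) every feasible integer solution also satisfies the cut ...
assert all(cut_ok(x) for x in product((0, 1), repeat=3) if knapsack_ok(x))

# (ii) ... but a fractional LP solution can violate it.
x_frac = (1.0, 0.75, 0.0)
print("knapsack constraint holds:", knapsack_ok(x_frac))   # True
print("cover cut holds          :", cut_ok(x_frac))        # False -> cut off
```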
21. Methods of getting Cutting Planes
- Exploit the special structure of the problem to get cutting planes (e.g., bin packing problem, pairing problem).
  - Such cuts can often be hard to get and are a topic of intensive research.
- More general methods are also available; they can be used automatically for many problems (e.g., knapsack-type constraints).
22. Branch-and-cut algorithms
- Integer programs are rarely solved based solely on the cutting plane method.
- More often, cutting planes and branch-and-bound are combined to provide a powerful algorithmic approach for solving integer programs.
- Cutting planes are added to the subproblems created in branch-and-bound to achieve tighter bounds and thus accelerate the solution process.
- Methods of this kind are known as branch-and-cut algorithms.
23. Network Models
- Minimum Spanning Tree Problem
- Assignment Problem
- Traveling Salesman Problem
- Coloring Problem
- The Minimum Spanning Tree and Assignment Problems are in Class 1 (they have polynomial-time algorithms for solving them optimally).
- The Traveling Salesman Problem and Graph Coloring are in Class 2 (NP-hard problems).
24. Methods for solving NP-hard problems
- Three main directions to solve NP-hard discrete optimization problems:
  - Integer programming techniques
  - Heuristics
  - Approximation algorithms
- We gave examples of all three methods for the TSP.
- The 2-approximation algorithm for the TSP was given and analyzed in detail.
25. Some Characteristics of Approximation Algorithms
- Time-efficient (though sometimes not as efficient as heuristics).
- Don't guarantee an optimal solution.
- Guarantee a good solution within some factor of the optimum.
- Rigorous mathematical analysis is used to prove the approximation guarantee.
- Often use algorithms for related problems as subroutines.
  - The 2-approximation algorithm for the TSP used the algorithm for finding a minimum spanning tree as a subroutine (see the sketch below).
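Here is a hedged sketch of that MST-based 2-approximation (the double-tree / shortcutting idea) on invented Euclidean points: build a minimum spanning tree, traverse it in DFS preorder, and close the resulting tour.

```python
# Hedged sketch of the MST-based 2-approximation for the metric TSP
# (points and coordinates are invented for illustration).
import math

points = [(0, 0), (2, 1), (3, 4), (1, 3), (5, 2)]   # cities in the plane
n = len(points)

def dist(i, j):
    return math.dist(points[i], points[j])          # Euclidean metric

# 1. Build a minimum spanning tree with Prim's algorithm.
in_tree, parent = {0}, {}
while len(in_tree) < n:
    i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
               key=lambda e: dist(*e))
    parent[j] = i
    in_tree.add(j)

children = {i: [] for i in range(n)}
for j, i in parent.items():
    children[i].append(j)

# 2. Visit the tree in DFS preorder; repeated vertices are shortcut automatically.
tour, stack = [], [0]
while stack:
    v = stack.pop()
    tour.append(v)
    stack.extend(reversed(children[v]))

# 3. Close the tour; under the triangle inequality its length is at most 2 * OPT.
length = sum(dist(tour[k], tour[(k + 1) % n]) for k in range(n))
print("tour:", tour, "length:", round(length, 2))
```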
26. Performance of TSP algorithms in practice
- A more sophisticated algorithm (which again uses the MST algorithm as a subroutine) guarantees a solution within a factor of 1.5 of the optimum (Christofides).
- For many discrete optimization problems, there are benchmarks of instances on which algorithms are tested.
- For the TSP, such a benchmark is TSPLIB.
- On TSPLIB instances, the Christofides algorithm outputs solutions which are on average 1.09 times the optimum.
- For comparison, the nearest neighbor algorithm outputs solutions which are on average 1.26 times the optimum.
- A good approximation factor often leads to good performance in practice.
27. Dynamic Programming
- Dynamic programming is a widely used mathematical technique for solving problems that can be divided into stages and where decisions are required in each stage.
- The goal of dynamic programming is to find a combination of decisions that optimizes a certain quantity associated with the system.
28. General characteristics of Dynamic Programming
- The problem structure is divided into stages.
- Each stage has a number of states associated with it.
- Making a decision at one stage transforms a state of the current stage into a state in the next stage.
- Given the current state, the optimal decisions for the remaining stages do not depend on the previous states or decisions. This is known as the principle of optimality for dynamic programming.
- The principle of optimality allows the problem to be solved stage by stage, recursively.
29. Problems that can be solved by Dynamic Programming
- Shortest path problems
- Multi-period production planning (see the sketch after this list)
- Resource allocation problems
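As a concrete instance of the stage/state structure described on the previous slide, here is a hedged sketch of a dynamic program for a small multi-period production planning problem; all data (demands, capacities, costs) are invented.

```python
# Hedged sketch (all data invented): a dynamic program for small multi-period
# production planning.  Stages = periods, state = inventory carried into the
# period, decision = how much to produce.  The recursion works backwards:
#   V_t(s) = min over feasible p of  cost(t, s, p) + V_{t+1}(s + p - demand[t]).
from functools import lru_cache

demand   = [3, 2, 4, 2]   # demand in each period
capacity = 4              # production capacity per period
max_inv  = 4              # warehouse capacity
setup, unit_cost, hold = 5, 1, 1   # cost parameters

@lru_cache(maxsize=None)
def best_cost(t, inventory):
    """Minimum cost of meeting demand in periods t, t+1, ... given the inventory."""
    if t == len(demand):
        return 0.0
    best = float("inf")
    for produce in range(capacity + 1):
        end_inv = inventory + produce - demand[t]
        if 0 <= end_inv <= max_inv:                 # meet demand, respect storage
            cost = (setup if produce > 0 else 0) + unit_cost * produce + hold * end_inv
            best = min(best, cost + best_cost(t + 1, end_inv))
    return best

print("minimum total cost:", best_cost(0, 0))
```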
30. Advantages of Dynamic Programming
- More time-efficient than integer programming (but large-scale real-life problems might require a very large number of states).
- Can solve a variety of problems (though integer programming is a more universal technique).
- Can handle stochastic data (for example, stochastic demand in multi-period production planning).