Improving Search - PowerPoint PPT Presentation

Provided by: jonahC
Transcript and Presenter's Notes



1
Improving Search
  • Material from Chapter 3 of Optimization in
    Operations Research by Rardin
  • 6/16/05

2
Topic Plan
  • Improving Search, Local and Global Optima
  • Search with Improving and Feasible Directions
  • Algebraic Conditions for Improving and Feasible
    Directions
  • Unimodal and Convex Model Forms
  • Searching and Starting Feasible Solutions
  • iSIGHT Formulation and Gradients

3
Improving Search
A solution is a choice of values for all decision
variables. If a model has n decision variables,
then solutions are n-dimensional and can be
represented with a vector, a one-dimensional
array of n components.
Length or norm of a vector:
||x|| = sqrt(x · x) = sqrt(x1² + ... + xn²)
Dot product of two vectors:
x · y = x1y1 + ... + xnyn
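Both quantities can be computed in a few lines of Python; a minimal sketch (the helper names are mine, not the slides'):

```python
import math

def norm(x):
    # Length (norm): square root of the sum of squared components
    return math.sqrt(sum(c * c for c in x))

def dot(x, y):
    # Dot product: sum of componentwise products
    return sum(a * b for a, b in zip(x, y))

x = [3.0, 4.0]
y = [1.0, 2.0]
print(norm(x))   # 5.0
print(dot(x, y)) # 11.0
```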
4
Sam's Club Location
Choose a location for the next Sam's Club
department store. Dots on the map below show the 3
population centers of the areas to be served.
Population center 1 has 60,000 persons, center 2
has 20,000, and center 3 has 30,000. Locate the
store to maximize business from the three
populations. The store can be located anywhere
except in the congested areas within 0.5 mile of
a population center.
Experience shows that business attracted from any
population follows a gravity pattern:
proportional to population and inversely
proportional to 1 + the square of its distance
from the chosen location.
5
Sam's Club Optimization Model
Let's assume a starting point of x1 = -5 and x2 = 0.
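A minimal sketch of the gravity patronage objective in Python. The slide's map is not reproduced in this transcript, so the center coordinates below are hypothetical placeholders; the populations (in thousands) and the "1 + distance squared" form are from the slides:

```python
# Hypothetical population-center coordinates (the slide's map is not
# reproduced here); populations are from the slide, in thousands.
centers = [(-1.0, 3.0), (1.0, 3.0), (0.0, -4.0)]
pops = [60.0, 20.0, 30.0]

def patronage(x1, x2):
    # Gravity pattern: proportional to population, inversely
    # proportional to 1 + squared distance from the store.
    total = 0.0
    for (a, b), p in zip(centers, pops):
        d2 = (x1 - a) ** 2 + (x2 - b) ** 2
        total += p / (1.0 + d2)
    return total

print(patronage(-5.0, 0.0))  # objective value at the starting point
```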
6
3D View of Sam's Club Patronage Function
7
Improving Searches
Improving searches are numerical algorithms that
begin at a feasible solution to a given
optimization model and advance along a search
path of feasible points with ever-improving
objective function value.
8
Neighborhood Perspective
The neighborhood of a current solution x(t)
consists of all nearby points, i.e., all points
within a small distance of x(t).
9
Local Optima
A solution is a local optimum (a local maximum
for a maximize problem or a local minimum for a
minimize problem) if it is feasible and if
sufficiently small neighborhoods surrounding it
contain no points that are both feasible and
superior in objective value.
Improving searches stop if they encounter a local
optimum.
10
Local vs Global Optima
A solution is a global optimum (a global maximum
for a maximize problem or a global minimum for a
minimize problem) if it is feasible and no other
feasible solution has a superior objective value.
Global optima are always local optima.
11
Local versus Global Optima
Local optima may not be global optima
12
Dealing with Local Optima
Improving searches can guarantee no more than a
local optimum because they stop whenever one is
encountered. How do we know if we have a global
optimum?
Certain mathematical forms assure that every
local optimum is a global optimum.
When models have local optima that are not
global, the most satisfactory available analysis
is often to run several independent improving
searches and accept the best local optimum
discovered as a heuristic or approximate
optimum.
How do you run from multiple starting points in
iSIGHT?
13
2. Search with Improving and Feasible Directions
Direction-Step Paradigm: Improving searches
advance from current solution x(t) to new
solution x(t+1) as
x(t+1) = x(t) + λΔx
where vector Δx defines a move direction of
solution change at x(t) and step size multiplier
λ > 0 determines how far to pursue the direction.
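The update is a one-liner; a minimal sketch, using the numbers from the direction example on the next slide:

```python
def move(x, dx, step):
    # x(t+1) = x(t) + lambda * dx, componentwise
    return [xi + step * di for xi, di in zip(x, dx)]

x0 = [-5.0, 0.0]
dx = [5.5, -2.75]
print(move(x0, dx, 1.0))  # [0.5, -2.75]
```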
14
Direction Example
Two slides back, on the Local versus Global
Optimum plot, the first move took us from
x(0) = (-5, 0) to x(1) = (0.5, -2.75). Most search
algorithms will calculate a direction vector Δx
and then apply a suitable step size multiplier λ.
In the case above, the vector and step size could be
Δx = x(1) - x(0) = (0.5, -2.75) - (-5, 0) = (5.5, -2.75)
and a step size of λ = 1 yields the move
x(1) = x(0) + λΔx = (-5, 0) + 1(5.5, -2.75) = (0.5, -2.75)
15
Sample Exercise
An improving search beginning at solution
w(0) = (5, 1, -1, 11) employs first move direction
Δw(1) = (0, 1, 1, 3) for step λ1 = 1/3, then
Δw(2) = (2, 0, 1/4, -1) for step λ2 = 4, and
finally Δw(3) = (1, -1/3, 0, 2) for step λ3 = 1.
Determine the solutions visited.
16
Sample Exercise Solution
17
Improving Directions
Vector Δx is an improving direction at current
solution x(t) if the objective function value at
x(t) + λΔx is superior to that of x(t) for all
λ > 0 sufficiently small.
18
Non-Improving Search Directions
19
Sample Exercise Recognizing Improving Directions
Graphically
Figure plots contours of a minimize objective
over decision variables y1 and y2. Determine
graphically whether each of the following
directions improves at the given point.
  • Δy = (1, -1) at y(1) = (1, 1)
  • Δy = (0, 1) at y(1) = (1, 1)
  • Δy = (0, 1000) at y(1) = (1, 1)
  • Δy = (-1, 0) at y(2) = (5, 3)

20
Feasible Directions
Vector Δx is a feasible direction at current
solution x(t) if point x(t) + λΔx violates no
model constraint when λ > 0 is sufficiently small.
21
Step Size How Far?
Once you have found a feasible move direction
from the current solution, how far should it be
followed?
Improving searches normally apply the maximum
step λ for which the selected move direction
continues to retain feasibility and improve the
objective function.
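For linear constraints, the maximum feasible step reduces to a ratio test over the constraints the move direction pushes toward. A sketch under that assumption (the box constraints below are illustrative, borrowed from the Two Crude model's bounds):

```python
def max_step(x, dx, constraints):
    # constraints: list of (a, b) meaning a . x <= b.
    # The move x + lam*dx stays feasible while, for every constraint,
    # a . x + lam * (a . dx) <= b; only constraints with a . dx > 0
    # can become binding, which gives the classic ratio test.
    lam = float("inf")
    for a, b in constraints:
        ax = sum(ai * xi for ai, xi in zip(a, x))
        adx = sum(ai * di for ai, di in zip(a, dx))
        if adx > 0:
            lam = min(lam, (b - ax) / adx)
    return lam

# Illustrative box constraints x1 <= 9, x2 <= 6 at x = (4, 5):
cons = [([1.0, 0.0], 9.0), ([0.0, 1.0], 6.0)]
print(max_step([4.0, 5.0], [1.0, 1.0], cons))  # 1.0 (x2 <= 6 binds first)
```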
22
Sample Exercise Determining Maximum Step Size
Suppose that we are searching for an optimal
solution to the mathematical program.
For current point w(19) = (4, 5), determine the
maximum step in improving feasible direction
Δw = (-3, -8).
23
Sample Exercise Solution
24
Many authors call this a usable, feasible
direction.
25
When Improving Search Stops
No optimization model solution at which an
improving feasible direction is available can be
a local optimum.
When a continuous improving search terminates at
a solution admitting no improving feasible
direction, and mild assumptions hold, the point
is a local optimum.
26
3. Algebraic Conditions for Improving and
Feasible Directions
What distinguishes one implementation of an
improving search algorithm from another is the
process used to identify feasible directions at
step 2 (or to prove that none exists).
If the optimization model is smooth (i.e.,
differentiable with respect to all decision
variables), then gradients are used.
The gradient of f(x) = f(x1, ..., xn), denoted
∇f(x), is the vector of partial derivatives
∇f(x) = (∂f/∂x1, ..., ∂f/∂xn) evaluated at x.
27
Determine the Gradient of the Sam's Club
Optimization Model at (2, 0)
max
28
Gradients of Sam's Club
Gradients are shown graphically as vectors
perpendicular to the contours of the objective
function; they point in the direction of the most
rapid objective value increase.
29
Gradient Conditions for Improving Directions
Direction Δx is improving for a maximize
objective function at point x if ∇f(x) · Δx > 0.
On the other hand, if ∇f(x) · Δx < 0, Δx does not
improve at x.
Direction Δx is improving for a minimize
objective function at point x if ∇f(x) · Δx < 0.
On the other hand, if ∇f(x) · Δx > 0, Δx does not
improve at x.
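These dot-product tests are easy to mechanize; a minimal sketch (the gradient value here is illustrative, not taken from the Sam's Club model):

```python
def grad_dot(grad, dx):
    # Dot product of the gradient with the candidate direction
    return sum(g * d for g, d in zip(grad, dx))

def improves(grad, dx, maximize=True):
    # Maximize: dx improves at x when grad(f)(x) . dx > 0;
    # minimize: dx improves when grad(f)(x) . dx < 0.
    s = grad_dot(grad, dx)
    return s > 0 if maximize else s < 0

grad = [2.0, -1.0]  # illustrative gradient at the current point
print(improves(grad, [1.0, 0.0], maximize=True))   # True
print(improves(grad, [0.0, 1.0], maximize=True))   # False
```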
30
Exercise
31
Exercise Solution
32
Objective Function Gradients as Move Directions
33
Exercise
34
Active Constraints and Feasible Directions
Whether a direction is feasible at a solution x
depends on whether it would lead to immediate
violation of any active constraint at x, i.e.,
any constraint satisfied as equality at x.
35
Exercise
36
Conditions for Feasible Directions with Linear
Constraints
Direction Δx = (Δx1, ..., Δxn) is feasible for a
linearly constrained optimization model at
solution x = (x1, ..., xn) if and only if
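The condition itself is an image in the original transcript; the standard textbook form is: at solution x, Δx is feasible iff a · Δx ≥ 0 for every active a · x ≥ b constraint, a · Δx ≤ 0 for every active a · x ≤ b constraint, and a · Δx = 0 for every equality constraint. A sketch under that reading:

```python
def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

def feasible_direction(x, dx, ges, les, eqs, tol=1e-9):
    # ges: (a, b) with a . x >= b ; les: a . x <= b ; eqs: a . x == b.
    # Only constraints ACTIVE at x (a . x == b) restrict the direction;
    # inactive constraints impose nothing for small enough steps.
    for a, b in ges:
        if abs(dot(a, x) - b) <= tol and dot(a, dx) < -tol:
            return False
    for a, b in les:
        if abs(dot(a, x) - b) <= tol and dot(a, dx) > tol:
            return False
    for a, b in eqs:
        if abs(dot(a, dx)) > tol:
            return False
    return True

# x2 <= 6 is active at x = (4, 6): moving further up is infeasible.
print(feasible_direction([4.0, 6.0], [0.0, 1.0], [], [([0.0, 1.0], 6.0)], []))   # False
print(feasible_direction([4.0, 6.0], [1.0, -1.0], [], [([0.0, 1.0], 6.0)], []))  # True
```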
37
4. Unimodal and Convex Model Forms Tractable For
Improving Search
Tractability in a model means convenience for
analysis.
The models considered most tractable for
improving search are ones where every local
optimum is necessarily global.
An objective function f(x) is unimodal (one hump)
if the straight-line direction from every point
in its domain to every better point is an
improving direction. That is, for every x(1) and
every x(2) with a better objective function
value, direction Δx = (x(2) - x(1)) should be
improving at x(1).
38
(No Transcript)
39
Linear Objective Functions
Linear objective functions are unimodal in both
maximize and minimize optimization models.
If the objective function of an optimization
model is unimodal, every unconstrained local
optimum is an unconstrained global optimum.
40
Constraints and Local Optima
41
Convex Feasible Sets
The feasible set of an optimization problem is
convex if the line segment between every pair of
feasible points falls entirely within the
feasible region.
42
Convex Sets
Discrete feasible sets are never convex (except
in the trivial case where there is only one
feasible point).
The line segment between vector solutions x(1)
and x(2) consists of all points of the form
x(1) + λ(x(2) - x(1)) with 0 ≤ λ ≤ 1.
If all constraints of an optimization model are
linear (both main and variable-type), its
feasible space is convex.
If the objective function of an optimization
model is unimodal and the constraints produce a
convex feasible set, every local optimum of the
model is a global optimum.
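The segment definition suggests a simple numerical spot-check of convexity along one segment (sampling is evidence, not a proof). The feasibility test below uses the Two Crude constraints from the searching-for-feasible-solutions slides:

```python
def segment_point(x1, x2, lam):
    # Point x(1) + lam * (x(2) - x(1)), with 0 <= lam <= 1
    return [a + lam * (b - a) for a, b in zip(x1, x2)]

def segment_feasible(x1, x2, feasible, samples=101):
    # Sample the segment between two feasible points and check that
    # every sampled point is feasible.
    return all(
        feasible(segment_point(x1, x2, k / (samples - 1)))
        for k in range(samples)
    )

# The Two Crude feasible set is defined by linear constraints,
# so it is convex and every sampled segment point should pass.
def feasible(x):
    x1, x2 = x
    return (0.3 * x1 + 0.4 * x2 >= 2.0 and 0.4 * x1 + 0.2 * x2 >= 1.5
            and 0.2 * x1 + 0.3 * x2 >= 0.5 and x1 <= 9 and x2 <= 6
            and x1 >= 0 and x2 >= 0)

print(segment_feasible([2.0, 4.0], [9.0, 6.0], feasible))  # True
```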
43
Searching for Feasible Solutions
We have assumed that we always start at a
feasible starting point. What if the initial
solution is infeasible?
  • Two possible methods considered
  • Two-phase
  • Big-M

44
Searching for Starting Feasible Solutions
45
Phase 1
Phase 1 constraints are derived from those of the
original model by considering each in relation
to the starting solution chosen. Satisfied
constraints simply become part of the Phase 1
model. Violated ones are augmented with a
nonnegative artificial variable.
Two Crude Model Revisited (assume (0,0) starting
point)
min 20x1 + 15x2
s.t. 0.3x1 + 0.4x2 ≥ 2.0
     0.4x1 + 0.2x2 ≥ 1.5
     0.2x1 + 0.3x2 ≥ 0.5
     x1 ≤ 9
     x2 ≤ 6
     x1, x2 ≥ 0
46
Add Artificial Variables
0.3x1 + 0.4x2 + x3 ≥ 2.0
0.4x1 + 0.2x2 + x4 ≥ 1.5
0.2x1 + 0.3x2 + x5 ≥ 0.5
x1 ≤ 9
x2 ≤ 6
x1, x2 ≥ 0
x3, x4, x5 ≥ 0
47
Phase 1 Objective Function
The Phase 1 objective function minimizes the sum
of the artificial variables:
min x3 + x4 + x5
s.t. 0.3x1 + 0.4x2 + x3 ≥ 2.0
     0.4x1 + 0.2x2 + x4 ≥ 1.5
     0.2x1 + 0.3x2 + x5 ≥ 0.5
     x1 ≤ 9
     x2 ≤ 6
     x1, x2 ≥ 0
     x3, x4, x5 ≥ 0
48
Starting Artificial Solution
After fixing the original variables at their
arbitrarily chosen values, each artificial
variable is initialized at the smallest value
still needed to achieve feasibility in the
corresponding constraint.
0.3x1 + 0.4x2 + x3 ≥ 2.0
0.4x1 + 0.2x2 + x4 ≥ 1.5
0.2x1 + 0.3x2 + x5 ≥ 0.5
x1 ≤ 9
x2 ≤ 6
x1, x2 ≥ 0
x3, x4, x5 ≥ 0
x1 = 0, x2 = 0, x3 = 2, x4 = 1.5, x5 = 0.5 is a
starting feasible solution for Phase 1.
49
Phase 1 Outcomes
If Phase 1 terminates with a solution having
(Phase 1) objective function value = 0, the
components of the Phase 1 solution corresponding
to original variables provide a feasible solution
for the original model.
If Phase 1 terminates with a global minimum
having (Phase 1) objective function value > 0,
the original model is infeasible.
If Phase 1 terminates with a local minimum that
may not be global but has (Phase 1) objective
function value > 0, we can conclude nothing. The
Phase 1 search should be repeated from a new
starting solution.
50
Exercise
51
Big M
  • Two-phase dealt with feasibility and optimality
    separately
  • Phase 1 tests feasibility
  • Phase 2 proceeds to the optimum
  • Big M combines feasibility and optimality
    considerations.

52
Big M
Uses a large positive multiplier, M, to combine
feasibility and optimality in a single objective
function of the form
max (original objective) - M(artificial variable sum)
for an original maximize problem, or
min (original objective) + M(artificial variable sum)
for an original minimize problem.
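The composite objective is a one-liner; a sketch (the value of M and the sample numbers are illustrative):

```python
def big_m_objective(original_value, artificials, M=1e6, maximize=False):
    # Maximize model: original - M * (sum of artificials)
    # Minimize model: original + M * (sum of artificials)
    penalty = M * sum(artificials)
    return original_value - penalty if maximize else original_value + penalty

# Two Crude (a minimize model) at x1 = x2 = 0 with
# artificials (2, 1.5, 0.5) and an illustrative M = 1000:
cost = 20 * 0 + 15 * 0
print(big_m_objective(cost, [2.0, 1.5, 0.5], M=1000.0))  # 4000.0
```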
53
Sam's Club Example
54
Big M Outcomes
If a Big M search terminates with a locally
optimal solution having all artificial
variables = 0, the components of the solution
corresponding to original variables form a
locally optimal solution for the original model.
If M is sufficiently large and the Big M search
terminates with a global optimum having some
artificial variables > 0, the original model is
infeasible.
If Big M terminates at a local optimum with some
artificial variables > 0, either M is not large
enough or we can conclude nothing. Increase M or
try a new starting point.
55
Lab
  • Answer questions 1b,2a,3a,b paper and pencil
  • Advanced Lab on Sams Club (Lab 2 problem 2 and 3)
    manual and DOE multiple starting points
  • Optional questions 4,5,6 paper and pencil

56
Lab Question 1
57
Lab Question 1 Solution
58
Lab Question 2
59
Lab Question 2a Solution
60
Lab Question 3
61
Lab Question 3 Solutions
62
Lab Question 4
63
Lab Question 4 - Solution
64
Lab Question 5
65
Lab Question 6
66
Lab 6 Solution
67
Advanced Lab - Sam's Club
  • The objective of this lab is to demonstrate the
    ability to automate the invocation of an iSIGHT
    optimization from multiple starting points to
    ensure that we have obtained a global optimum.
  • Task 1. Manually run two optimization tasks to
    see if they converge to the same optimal point.
  • In iSIGHT create a task called SamsClub.
    Implement the model as shown on slide 5. Create a
    task plan with only one technique, Generalized
    Reduced Gradient.
  • What is the optimal point achieved when you start
    at (-5, 0)?
  • Unset the best state from the Main Menu Execution
    pulldown menu. Change the starting point to (0,
    -6) and run the optimization. What is the
    optimal point achieved?
  • Task 2. Automate the running of multiple
    optimization tasks by creating a task called
    AutomateExploration. This task should call the
    task SamsClub (i.e., a two-level hierarchy).
    AutomateExploration should have the same inputs
    and outputs as SamsClub.
  • The TaskPlan for AutomateExploration should be a
    Full Factorial DOE. Have the two inputs vary over
    the three levels -10, 0 and 10.
  • Ensure that you call api_UnsetBestRunInfo in the
    prologue of the Optimization Plan for SamsClub.
  • How many of the nine cases achieved the same
    optimal point?

Lab continued on next page
68
Advanced Lab Continued
Task 3. A drawback of Task 2 is that you had to
create a Task Hierarchy. The Task Hierarchy is
very powerful but takes some time to set up. The
iSIGHT task plan will allow you to write a Tcl
block to evaluate multiple optimization runs from
the same task. Create a Tcl block to run the
optimization from the same nine points that were
used for Task 2. Ensure that you use
api_UnsetBestRunInfo in the prologue of the
Optimization Plan.
Task 4. A drawback of Task 2 is that you had to
create a Task Hierarchy. The Task Hierarchy is
very powerful but takes some time to set up. The
iSIGHT rule system will allow you to evaluate
multiple optimization runs from the same task.
Create rules to run the optimization from the
same nine points that were used for Task 2.
Ensure that you use api_UnsetBestRunInfo in the
prologue of the Optimization Plan.
69
iSIGHT Asides (Optional)
  • Gradients
  • Default step sizes
  • Internal Optimization Model
  • Active Constraints in iSIGHT
  • Overall iSIGHT Penalty

70
Gradient Approximations in iSIGHT
Forward difference formula:
∂f/∂xi ≈ (f(x + ε ei) - f(x)) / ε
Central difference formula:
∂f/∂xi ≈ (f(x + ε ei) - f(x - ε ei)) / (2ε)
What should ε be?
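Both approximations can be sketched in a few lines (the test function here is illustrative, not the Sam's Club model):

```python
def forward_diff_grad(f, x, eps=1e-6):
    # (f(x + eps*e_i) - f(x)) / eps : one extra evaluation per variable
    fx = f(x)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        grad.append((f(xp) - fx) / eps)
    return grad

def central_diff_grad(f, x, eps=1e-6):
    # (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps) : more accurate,
    # but two extra evaluations per variable
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

f = lambda x: x[0] ** 2 + 3 * x[1]  # exact gradient at (1, 1) is (2, 3)
print(forward_diff_grad(f, [1.0, 1.0]))
print(central_diff_grad(f, [1.0, 1.0]))
```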
71
Gradient Approximations in iSIGHT
Defaults for iSIGHT algorithms which use
gradients:
Technique  Relative Gradient  Min Abs Grad Step
Conmin     .01                .01
ADS        .01                .001
LSGRG      .0001
NLPQL      .001               .0001
MOST       .001
72
iSIGHT Internal Optimization Model
Scaling of design variables, objective, and constraints
73
Active Constraints in iSIGHT
iSIGHT considers an inequality constraint active
if the solution is exactly dead on it. iSIGHT
treats inequality constraints as violated if
> 0.0; this can be overridden with
api_SetDeltaForInEqualityConstraintViolation.
iSIGHT treats equality constraints as violated if
outside ±0.00001. This can be overridden with
api_SetDeltaForEqualityConstraintViolation.
74
An Overall Penalty is always calculated by iSIGHT
75
iSIGHT Objective and Penalty
ObjectiveAndPenalty is the Penalty term from the
previous page added on to the Objective.
If no constraints are violated, then
Objective = ObjectiveAndPenalty.
ObjectiveAndPenalty is used by ASA and
Multi-Island GA.
All penalty terms can be set through APIs:
api_SetPenaltyBase
api_SetPenaltyMultiplier
api_SetPenaltyViolationExponent