MATLAB Optimization Toolbox
1
PART I
2
Optimization Tree
Figure 1 Optimization tree.
3
What is Optimization?
  • Optimization is an iterative process by which a desired solution
    (max/min) of the problem is found while satisfying all of its
    constraints or bounded conditions.

Figure 2 Optimum solution is found while satisfying its constraints
(the derivative must be zero at the optimum).
  • An optimization problem can be linear or non-linear.
  • Non-linear optimization is accomplished by numerical search
    methods.
  • Search methods are applied iteratively until a solution is
    reached.
  • The search procedure is termed an algorithm.

4
What is Optimization? (Cont.)
  • Linear problems are solved by the Simplex or graphical methods.
  • The solution of a linear problem lies on the boundary of the
    feasible region.

Figure 3 Solution of a linear problem
Figure 4 Three-dimensional solution of a non-linear problem
  • The solution of a non-linear problem lies within and on the
    boundaries of the feasible region.

5
Fundamentals of Non-Linear Optimization
  • Single objective function, f(x)
  • Maximization
  • Minimization
  • Design variables, xi, i = 0, 1, 2, 3, ...
  • Constraints
  • Inequality
  • Equality

Figure 5 Example of design variables and constraints used in
non-linear optimization.
  • Optimal points
  • Local minimum/maximum points: a point or solution x* is a local
    optimum if there is no other x in its neighborhood with a smaller
    (for minimization) function value.
  • Global minimum/maximum points: a point or solution x* is the
    global optimum if there is no other x in the entire search space
    with a smaller function value.

6
Fundamentals of Non-Linear Optimization (Cont.)
Figure 6 Global versus local optimization.
Figure 7 Local point is equal to global point if
the function is convex.
7
Fundamentals of Non-Linear Optimization (Cont.)
  • A function f is convex if f(Xa) is less than the value of the
    corresponding point on the line joining f(X1) and f(X2).
  • Convexity condition: the Hessian (2nd-order derivative) matrix of
    function f must be positive semi-definite (eigenvalues +ve or
    zero).

Figure 8 Convex and nonconvex set
Figure 9 Convex function
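As a quick numerical check (a sketch, not from the slides; the
quadratic f and its Hessian below are a made-up example), convexity
can be tested in MATLAB through the eigenvalues of the Hessian:

% Hypothetical example: f(x) = x1^2 + x1*x2 + 2*x2^2 has the constant
% Hessian H below; f is convex iff H is positive semi-definite.
H = [2 1; 1 4];      % Hessian of f
e = eig(H);          % eigenvalues
if all(e >= 0)
    disp('f is convex (Hessian positive semi-definite)')
else
    disp('f is not convex')
end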
8
Mathematical Background
  • The slope or gradient of the objective function f represents the
    direction in which the function decreases/increases most rapidly.
  • Taylor series expansion
  • Jacobian matrix: the matrix of gradients of f with respect to the
    several variables.
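As an illustration (a minimal sketch, not from the slides; it reuses
the Example6_1 function that appears later in Part I), the gradient
can be estimated numerically by forward differences:

% Forward-difference estimate of the gradient of f at x0.
f  = @(x) 3*(x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;   % Example6_1
x0 = [0.5; 0.5];
h  = 1e-6;                        % perturbation size
g  = zeros(size(x0));
for i = 1:numel(x0)
    xp    = x0;
    xp(i) = xp(i) + h;
    g(i)  = (f(xp) - f(x0))/h;    % df/dx_i
end
disp(g')   % direction of steepest increase; -g is steepest descent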

9
Mathematical Background (Cont.)
  • First-order condition (FOC): the gradient of f vanishes at a
    stationary point.
  • Hessian: the matrix of second derivatives of f with respect to the
    several variables.
  • Second-order condition (SOC) for a minimum:
  • Eigenvalues of H(X) are all positive, or
  • Determinants of all lower-order minors of H(X) are +ve.
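A small sketch of checking the FOC and SOC (assumes the Symbolic Math
Toolbox; the function f is a made-up example):

% FOC: gradient = 0 locates a stationary point.
% SOC: positive Hessian eigenvalues confirm a minimum.
syms x1 x2
f = x1^2 + x1*x2 + 2*x2^2 - 4*x1;
g = gradient(f, [x1 x2]);
s = solve(g == 0, [x1 x2]);                  % stationary point (FOC)
H = hessian(f, [x1 x2]);
eig(double(subs(H, [x1 x2], [s.x1 s.x2])))   % all > 0, so a minimum (SOC)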

10
Optimization Algorithm
  • Deterministic: specific rules are used to move from one iteration
    to the next (gradient, Hessian).
  • Stochastic: probabilistic rules are used for the subsequent
    iteration.
  • Optimal design: engineering design based on an optimization
    algorithm.
  • Lagrangian method: the sum of the objective function and a linear
    combination of the constraints.

11
Optimization Methods
  • Deterministic
  • Direct search: uses objective function values to locate the
    minimum.
  • Gradient based: uses first or second derivatives of the objective
    function.
  • Minimization: the objective function f(x) is used; with a -ve
    sign, -f(x), for a maximization problem.
  • Single variable
  • Newton-Raphson is a gradient-based technique (FOC).
  • Golden section: a step-size-reducing iterative method (see the
    sketch after this list).
  • Multivariable techniques (make use of single-variable techniques,
    especially golden section)
  • Unconstrained optimization
  • a.) Powell's method: fits a quadratic (degree 2) polynomial to the
    objective function; non-gradient based.
  • b.) Gradient based: steepest descent (FOC) or least mean square
    (LMS) minimum.
  • c.) Hessian based: conjugate gradient (FOC) and BFGS (SOC).
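A minimal golden-section sketch (an illustration under an assumed
bracketing interval and test function, not code from the slides):

% Golden-section search for a single-variable minimum on [a, b];
% the interval shrinks by the golden ratio each iteration.
f = @(x) (x - 2).^2 + 1;      % hypothetical unimodal function
a = 0; b = 5;
tau = (sqrt(5) - 1)/2;        % golden ratio factor, ~0.618
tol = 1e-6;
while (b - a) > tol
    x1 = b - tau*(b - a);     % interior points
    x2 = a + tau*(b - a);
    if f(x1) < f(x2)
        b = x2;               % minimum lies in [a, x2]
    else
        a = x1;               % minimum lies in [x1, b]
    end
end
xmin = (a + b)/2              % ~2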

12
Optimization Methods: Constrained
  • Constrained optimization
  • a.) Indirect approach: transform into an unconstrained problem.
  • b.) Exterior penalty function (EPF) and augmented Lagrange
    multiplier methods.
  • c.) Direct methods: sequential linear programming (SLP), SQP, and
    the generalized reduced gradient method (GRG).

Figure 10 Steepest descent gradient or LMS
13
Optimization Methods (Cont.)
  • Global optimization: stochastic techniques
  • Simulated annealing (SA): based on the minimum-energy principle of
    a cooling metal's crystalline structure.
  • Genetic algorithm (GA): based on the survival-of-the-fittest
    principle of evolutionary theory.

14
Optimization Methods (Example)
Multivariable gradient-based optimization. J is the cost function to
be minimized in two dimensions; the contours of the J paraboloid
shrink as it decreases.

function retval = Example6_1(x)
% example 6.1
retval = 3*(x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;

>> SteepestDescent('Example6_1', [0.5 0.5], 20, 0.0001, 0, 1, 20)

where [0.5 0.5] - initial guess value
      20        - no. of iterations
      0.0001    - golden search tolerance
      0         - initial step size
      1         - step interval
      20        - scanning step

>> ans = 2.7585  1.8960

Figure 11 Multivariable gradient-based optimization
Figure 12 Steepest descent
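SteepestDescent above is a helper supplied with the course/textbook
material, not a built-in MATLAB routine. A self-contained sketch of
the same idea (the simple step-halving line search is an assumption):

% Steepest descent with a forward-difference gradient and a
% step-halving line search, applied to Example6_1.
f = @(x) 3*(x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;
x = [0.5; 0.5];                      % initial guess
for k = 1:20                         % no. of iterations
    h = 1e-6; fx = f(x); g = zeros(2,1);
    for i = 1:2                      % numerical gradient
        xp = x; xp(i) = xp(i) + h;
        g(i) = (f(xp) - fx)/h;
    end
    a = 1;                           % halve the step until f decreases
    while f(x - a*g) >= fx && a > 1e-12
        a = a/2;
    end
    x = x - a*g;
end
disp(x')    % approaches the true minimizer [3 2]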
15
MATLAB Optimization Toolbox
PART II
16
Presentation Outline
  • Introduction
  • Function Optimization
  • Optimization Toolbox
  • Routines / Algorithms available
  • Minimization Problems
  • Unconstrained
  • Constrained
  • Example
  • The Algorithm Description
  • Multiobjective Optimization
  • Optimal PID Control Example

17
Function Optimization
  • Optimization concerns the minimization or maximization of
    functions.
  • Standard optimization problem:

    minimize f(x)

Subject to
Equality constraints:    h_j(x) = 0
Inequality constraints:  g_j(x) <= 0
Side constraints:        x_i^L <= x_i <= x_i^U

Where
f(x) is the objective function, which measures and evaluates the
performance of a system. In a standard problem, we minimize the
function. Maximization is equivalent to minimizing the -ve of the
objective function.
x is a column vector of design variables, which can affect the
performance of the system.
18
Function Optimization (Cont.)
  • Constraints: limitations on the design space; they can be linear
    or nonlinear, explicit or implicit functions.

Equality constraints:    h_j(x) = 0
Inequality constraints:  g_j(x) <= 0   (most algorithms require
                         "less than"!!!)
Side constraints:        x_i^L <= x_i <= x_i^U
19
Optimization Toolbox
  • A collection of functions that extend the capability of MATLAB.
  • The toolbox includes routines for:
  • Unconstrained optimization
  • Constrained nonlinear optimization, including goal attainment
    problems, minimax problems, and semi-infinite minimization
    problems
  • Quadratic and linear programming
  • Nonlinear least squares and curve fitting
  • Solving nonlinear systems of equations
  • Constrained linear least squares
  • Specialized algorithms for large-scale problems

20
Minimization Algorithm
21
Minimization Algorithm (Cont.)
22
Equation Solving Algorithms
23
Least-Squares Algorithms
24
Implementing Opt. Toolbox
  • Most of these optimization routines require the definition of an
    M-file containing the function, f, to be minimized.
  • Maximization is achieved by supplying the routines with -f.
  • Optimization options passed to the routines change the
    optimization parameters.
  • Default optimization parameters can be changed through an options
    structure.
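For instance (a made-up function, not from the slides), to maximize
f(x) = 5 - (x - 3)^2, minimize its negative:

f    = @(x) 5 - (x - 3).^2;   % function to maximize
negf = @(x) -f(x);            % supply the routine with -f
xmax = fminunc(negf, 0);      % xmax ~ 3
fmax = f(xmax)                % fmax ~ 5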

25
Unconstrained Minimization
  • Consider the problem of finding a set of values [x1 x2]' that
    solves

    min f(x) = exp(x1)*(4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)

  • Steps:
  • Create an M-file that returns the function value (objective
    function). Call it objfun.m.
  • Then invoke the unconstrained minimization routine. Use fminunc.

26
Step 1: Obj. Function (objfun.m)

function f = objfun(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
27
Step 2: Invoke Routine

Starting with a guess:

x0 = [-1, 1];                              % input arguments
options = optimset('LargeScale', 'off');   % optimization parameter settings
[xmin, feval, exitflag, output] = fminunc('objfun', x0, options)
                                           % output arguments
28
Results
xmin =
    0.5000   -1.0000          <- minimum point of design variables
feval =
    1.3028e-010               <- objective function value
exitflag =
    1                         <- tells whether the algorithm converged;
                                 if exitflag > 0, a local minimum was
                                 found
output =                      <- some other information
       iterations: 7
        funcCount: 40
         stepsize: 1
    firstorderopt: 8.1998e-004
        algorithm: 'medium-scale: Quasi-Newton line search'
29
More on fminunc - Input

[xmin, feval, exitflag, output, grad, hessian] =
    fminunc(fun, x0, options, P1, P2, ...)

fun:         name of (or handle to) the objective function.
x0:          the initial guess; it must be a vector whose size is the
             number of design variables.
options:     sets some of the optimization parameters (more in a few
             slides).
P1, P2, ...: additional parameters to pass to the function.
30
More on fminunc - Output

[xmin, feval, exitflag, output, grad, hessian] =
    fminunc(fun, x0, options, P1, P2, ...)

  • xmin: vector of the minimum point (optimal point). The size is the
    number of design variables.
  • feval: the objective function value at the optimal point.
  • exitflag: a value that shows whether the optimization routine
    terminated successfully (converged if > 0).
  • output: this structure gives more details about the optimization.
  • grad: the gradient value at the optimal point.
  • hessian: the Hessian value at the optimal point.

31
Options Setting: optimset

options = optimset('param1', value1, 'param2', value2, ...)

  • The routines in the Optimization Toolbox have a set of default
    optimization parameters.
  • However, the toolbox allows you to alter some of those parameters,
    for example the tolerances, the step size, the gradient or Hessian
    values, the max. number of iterations, etc.
  • There is also a list of features available, for example displaying
    the values at each iteration, comparing user-supplied gradients or
    Hessians, etc.
  • You can also choose the algorithm you wish to use.

32
Options Setting (Cont.)

options = optimset('param1', value1, 'param2', value2, ...)

  • Type help optimset in the command window and a list of the
    available option settings will be displayed.
  • How to read it? For example:

LargeScale - Use large-scale algorithm if possible [ {on} | off ]

LargeScale is the parameter (param1), on/off are its values (value1),
and the default is the one shown in braces.
33
Options Setting (Cont.)

options = optimset('param1', value1, 'param2', value2, ...)

LargeScale - Use large-scale algorithm if possible [ {on} | off ]

Since the default is on, if we would like to turn it off we just type

options = optimset('LargeScale', 'off')

and pass it to the input of fminunc.
34
Useful Option Settings
Highly recommended to use!!!
  • Display - level of display [ off | iter | notify | final ]
  • MaxIter - maximum number of iterations allowed [ positive
    integer ]
  • TolCon - termination tolerance on the constraint violation
    [ positive scalar ]
  • TolFun - termination tolerance on the function value [ positive
    scalar ]
  • TolX - termination tolerance on X [ positive scalar ]
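For example (the particular values here are illustrative assumptions):

% Show progress every iteration, cap the iterations, and tighten
% the stopping tolerances.
options = optimset('Display', 'iter', ...
                   'MaxIter', 200, ...
                   'TolFun', 1e-8, ...
                   'TolX',   1e-8);
[xmin, feval] = fminunc('objfun', [-1, 1], options);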

35
fminunc and fminsearch
  • fminunc uses algorithms with gradient and Hessian information.
  • Two modes:
  • Large-scale: interior-reflective Newton
  • Medium-scale: quasi-Newton (BFGS)
  • Not preferred for solving highly discontinuous functions.
  • This function may only give local solutions.
  • fminsearch is generally less efficient than fminunc for problems
    of order greater than two. However, when the problem is highly
    discontinuous, fminsearch may be more robust.
  • It is a direct search method that does not use numerical or
    analytic gradients, as fminunc does.
  • This function may only give local solutions.
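Both are called the same way; a quick side-by-side sketch (the
Rosenbrock test function is a standard example, not from the slides):

rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;   % minimum at [1 1]
x0 = [-1.2, 1];
xg = fminunc(rosen, x0)      % gradient-based quasi-Newton
xd = fminsearch(rosen, x0)   % derivative-free Nelder-Mead simplex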

36
Constrained Minimization

[xmin, feval, exitflag, output, lambda, grad, hessian] =
    fmincon(fun, x0, A, B, Aeq, Beq, LB, UB, NONLCON, options, P1, P2, ...)

lambda: vector of the Lagrange multipliers at the optimal point
37
Example

function f = myfun(x)
f = -x(1)*x(2)*x(3);

Subject to 0 <= x1 + 2*x2 + 2*x3 <= 72, side constraints
0 <= xi <= 30, and one nonlinear inequality constraint (next slide).
38
Example (Cont.)
For the nonlinear constraint, create a function called nonlcon which
returns the two constraint vectors [C, Ceq]:

function [C, Ceq] = nonlcon(x)
C   = 2*x(1)^2 + x(2);   % nonlinear inequality, C(x) <= 0
Ceq = [];

Remember to return a null matrix if the constraint does not apply.
39
Example (Cont.)
Initial guess (3 design variables):

x0 = [10; 10; 10];
A  = [-1 -2 -2; 1 2 2];
B  = [0; 72];
LB = [0 0 0]';
UB = [30 30 30]';
[x, feval] = fmincon(@myfun, x0, A, B, [], [], LB, UB, @nonlcon)

CAREFUL!!!
fmincon(fun, x0, A, B, Aeq, Beq, LB, UB, NONLCON, options, P1, P2, ...)
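The same problem as one self-contained script using anonymous
functions (the nonlinear constraint C(x) is as read from the previous
slide, so treat it as an assumption):

fun = @(x) -x(1)*x(2)*x(3);
nlc = @(x) deal(2*x(1)^2 + x(2), []);   % returns [C, Ceq]
x0  = [10; 10; 10];
A   = [-1 -2 -2; 1 2 2];  B = [0; 72];
LB  = zeros(3,1);         UB = 30*ones(3,1);
[x, feval] = fmincon(fun, x0, A, B, [], [], LB, UB, nlc)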
40
Example (Cont.)
41
Multiobjective Optimization
  • Previous examples involved problems with a single objective
    function.
  • Now let us look at solving a problem with a multiobjective
    function using lsqnonlin.
  • The example chosen is data curve fitting.
  • In a curve-fitting problem the error is reduced at each data
    point, producing a multiobjective (vector-valued) function.

42
lsqnonlin in MATLAB: Curve fitting

% recfit.m
clc
clear
global data
data = [0.6000 0.999;  0.6500 0.998;  0.7000 0.997;
        0.7500 0.995;  0.8000 0.982;  0.8500 0.975;
        0.9000 0.932;  0.9500 0.862;  1.0000 0.714;
        1.0500 0.520;  1.1000 0.287;  1.1500 0.134;
        1.2000 0.0623; 1.2500 0.0245; 1.3000 0.0100;
        1.3500 0.0040; 1.4000 0.0015; 1.4500 0.0007;
        1.5000 0.0003];   % experimental data: 1st column x, 2nd column R
x    = data(:,1);
Rexp = data(:,2);
plot(x, Rexp, 'ro')       % plot the experimental data
hold on
b0 = [1.0 1.0];           % start values for the parameters
b  = lsqnonlin('recfun', b0)   % run lsqnonlin with start value b0;
                               % returned parameter values stored in b
Rcal = 1./(1 + exp(1.0986/b(1)*(x - b(2))));   % fitted values with parameter b
plot(x, Rcal, 'b')        % plot the fitted values on the same graph

% recfun.m
function y = recfun(b)
global data
x    = data(:,1);
Rexp = data(:,2);
Rcal = 1./(1 + exp(1.0986/b(1)*(x - b(2))));   % values calculated from the model
y    = Rcal - Rexp;       % vector of differences between calculated and
                          % experimental values; lsqnonlin minimizes the
                          % sum of their squares

Find b1 and b2:
>> recfit
b = 0.0603  1.0513

Source: Short tutorial: Model Fitting, last edited 26 October 2003,
westlake.che.gatech.edu
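The same fit can be written without globals by putting the residual in
an anonymous function (a sketch; x, Rexp, and b0 as defined above):

b = lsqnonlin(@(b) 1./(1 + exp(1.0986/b(1)*(x - b(2)))) - Rexp, b0)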

43
Simulink Example
Jeff_fly basket.mdl
Shooting a flying box:
  • Eq. of ball motion in z (horizontal) direction
  • Eq. of ball motion in h (vertical) direction
  • Aerodynamic drag force
  • Angle of ball
44
Simulink example: shooting ball

% Start_flyBasketBall.m
InitialGuess = pi/2.5;
X = fminsearch('Distflysim', InitialGuess)*180/pi;
fprintf('\nShoot at %f deg \n', X)

function P = Distflysim(theta_0)   % theta_0 in rad
F0        = 25.0;    % N
cart_mass = 2;       % kg
x_dot_max = 50;      % m/sec
ro_air    = 1.224;   % kg/m^3
h0        = 0.5;     % m
z0        = 0;
Cd        = 1;
r_ball    = 0.05;    % m
A_ball    = pi*r_ball^2;
ball_mass = 0.1;     % kg
g         = -9.8;    % m/sec^2
V0        = 50;      % m/sec
F0        = 15.0;    % N
AeroFac   = Cd*A_ball*ro_air/2;
assignin('base','F0',F0)
assignin('base','cart_mass',cart_mass)
assignin('base','x_dot_max',x_dot_max)
assignin('base','AeroFac',AeroFac)
assignin('base','ball_mass',ball_mass)
assignin('base','g',g)
assignin('base','V0',V0)
assignin('base','theta_0',theta_0)
Newrtp = rsimgetrtp('jeff_basket');
save ShotParams.mat Newrtp
!jeff_basket -p ShotParams.mat
load jeff_basket
[t,x,y] = sim('jeff_flybasket', [0 10]);
np = max(size(y));
xf = y(np,1);
zf = y(np,2);
hf = y(np,3);
P  = (xf - zf)^2 + (hf - 25)^2;

% BasketflyBallInit1.m
F0        = 25.0;    % N
cart_mass = 1;       % kg
x_dot_max = 50;      % m/sec
ro_air    = 1.224;   % kg/m^3
Cd        = 1;
r_ball    = 0.05;    % m
A_ball    = pi*r_ball^2;
ball_mass = 0.05;    % kg
g         = -9.8;    % m/sec^2
theta_0   = pi/2.5;  % rad
V0        = 50;      % m/sec
AeroFac   = Cd*A_ball*ro_air/2;
45
REFERENCES
1. Optimization Toolbox for Use with MATLAB, User's Guide, The
   MathWorks Inc., 2006.
2. Applied Optimization with MATLAB Programming, P. Venkataraman,
   Wiley Interscience, 2002.
3. Optimization for Engineering Design, Kalyanmoy Deb, Prentice Hall,
   1996.
4. http://mathdemos.gcsu.edu/mathdemos/maxmin/max_min.html
5. http://www.math.ucdavis.edu/~kouba/CalcOneDIRECTORY/maxmindirectory/MaxMin.html
6. http://users.powernet.co.uk/kienzle/octave/optim.html
7. http://www.cse.uiuc.edu/eot/modules/optimization/SteepestDescent/