Title: Optimization
1. Optimization: Quantifying Management Decisions
- Goals
  - Describe the use of formal optimization methods in management decisions
    - analytical
    - graphical
    - fixed-form
    - non-linear searches
    - dynamic programming
  - Identify situations that favour each of these methods
2. Is this a review?
- Previous lectures/labs discussed optimization of parameter values
  - Least Squares
  - Maximum Likelihood
- Now examine optimization of control variables in management
  - Strategies and tactics
    - yield levels
    - regulation limits
    - timing of events
  - Equilibrium solutions
    - "the answer"
    - directly comparable to parameter estimation
  - Feedback policies
    - dynamic, responsive
3. Optimize
- Search for
  - Global Maximum
  - Global Minimum
  - Some Desired Value
- Objective defined within the context of the problem
  - remember SS and maximum likelihood
4. Analytic Methods
- Based on calculus
  - define the objective function
  - find the value of the control variable(s) where the first derivatives are 0 (worked example below)
- Optimization of
  - equilibrium policies
    - like MSY
    - classic, but done already
  - feedback policies
    - difficult and subject to special conditions
- Cannot easily handle
  - age structure
  - differing currencies (fish and dollars)
  - stochastic stock dynamics
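A worked illustration of the calculus approach (a standard surplus-production case, not taken from the slides): for the Schaefer model, equilibrium yield is a function of biomass, and setting its first derivative to zero gives the MSY policy.

$$Y(B) = rB\left(1 - \frac{B}{K}\right), \qquad \frac{dY}{dB} = r - \frac{2rB}{K} = 0 \;\Rightarrow\; B^{*} = \frac{K}{2}, \quad \mathrm{MSY} = Y(B^{*}) = \frac{rK}{4}$$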
5. Graphical Methods
- Not really optimization
- Plot of the objective function
  - ideally 1 or 2 control variables
  - optimized by eye
- Advantages
  - easily accommodates complex models
  - results easily understood
  - many different objectives can be quickly compared
- Construction (see the sketch below)
  - run the simulation with fixed control values
  - store the objective value
  - repeat for new controls
  - fill in a grid of control values
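A minimal Python sketch of this construction, assuming an illustrative yield-per-recruit objective (the growth and mortality values are placeholders, not from the lecture): evaluate the objective over a grid of two controls and plot contours to optimize by eye.

import numpy as np
import matplotlib.pyplot as plt

def objective(F, tc):
    """Illustrative yield-per-recruit objective for fishing mortality F
    and age at recruitment to the fishery tc (all values are placeholders)."""
    M = 0.2                                         # assumed natural mortality
    ages = np.arange(tc, 15)                        # ages present in the fishery
    weight = (1.0 - np.exp(-0.3 * ages)) ** 3       # assumed weight-at-age curve
    survival = np.exp(-M * ages - F * (ages - tc))  # survival to each age
    return float(np.sum(F * survival * weight))     # yield per recruit

F_grid = np.linspace(0.05, 1.5, 60)
tc_grid = np.arange(1, 11)

# Fill in the grid: one model run per combination of control values
Z = np.array([[objective(F, tc) for F in F_grid] for tc in tc_grid])

plt.contour(F_grid, tc_grid, Z, levels=15)
plt.xlabel("Fishing mortality F")
plt.ylabel("Age at recruitment")
plt.title("Objective (yield per recruit)")
plt.show()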
6. Classic Example: Yield per Recruit
- Objective: yield per recruit
  - indicated by contours
- Controls: F, year of recruitment
- Consider
  - the optimal F for t = 5
  - the optimal t for F = 0.73
7. Multiple Objectives
(Figure: objective contours over the coho size limit and the chinook size limit)
- Controls: size limits
  - coho vs. chinook
- Objectives
  - escapement
  - catch
  - subjective weights
    - can be quantified
8. Fixed-Form Optimization
- Fixed-Form
  - the relationship between control variables is expressed as a mathematical function
  - the function represents the policy
  - its parameters are the control variables
- Example from last lecture (see the sketch below)
  - CATCH = f(STOCK)
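A minimal Python sketch of fixed-form optimization, assuming a threshold policy CATCH = a(STOCK - b) and Ricker stock dynamics with made-up parameter values (none of this comes from the lecture); the policy parameters a and b are the control variables being optimized.

import numpy as np
from scipy.optimize import minimize

def mean_catch(params, years=200, seed=1):
    """Average catch from simulating the fixed-form policy CATCH = max(0, a * (STOCK - b)).
    The Ricker dynamics and all parameter values are illustrative assumptions."""
    a, b = params
    rng = np.random.default_rng(seed)
    stock, total = 1000.0, 0.0
    for _ in range(years):
        catch = min(stock, max(0.0, a * (stock - b)))   # the policy; a, b are the controls
        escapement = stock - catch
        # Ricker recruitment with lognormal process error
        stock = escapement * np.exp(1.0 * (1.0 - escapement / 1500.0) + rng.normal(0.0, 0.3))
        total += catch
    return total / years

# Maximize the average catch over the policy parameters (minimize the negative)
result = minimize(lambda p: -mean_catch(p), x0=[0.5, 500.0],
                  bounds=[(0.0, 1.0), (0.0, 3000.0)], method="L-BFGS-B")
print("best (a, b):", result.x, "  average catch:", -result.fun)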
9. Search Procedures
- Topology of objectives can be complex
  - many local maxima and minima
  - especially if the objective is a function of a number of indicators
- Many methods available
  - differ in
    - speed
    - ability to avoid local optima
  - types
    - grid searches (iterative)
    - derivative-based methods
    - derivative-free methods
    - random searches
    - stochastic approximation
    - genetic algorithms
- Area of active mathematical research
- Limited by your own computing skills
10. Grid Search (Brute Force)
(Figure: a grid over the 1st and 2nd control variables; the value of the objective function is the 3rd dimension)
- a.k.a. bracketing, bisection
- conceptualize graphically
  - difficult to plot in many dimensions
- very time-consuming or intractable for larger problems
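A minimal Python sketch of a brute-force grid search with bracketing-style refinement (the test objective and the refinement rule are illustrative assumptions):

import numpy as np

def objective(x, y):
    """Illustrative test surface with a single peak at (0.3, 0.7)."""
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)

def grid_search(lo, hi, n=11, rounds=4):
    """Evaluate the objective on an n-by-n grid, then shrink the bracket
    around the best point and repeat."""
    (x_lo, y_lo), (x_hi, y_hi) = lo, hi
    best = None
    for _ in range(rounds):
        xs, ys = np.linspace(x_lo, x_hi, n), np.linspace(y_lo, y_hi, n)
        best = max((objective(x, y), x, y) for x in xs for y in ys)
        _, bx, by = best
        dx, dy = (x_hi - x_lo) / n, (y_hi - y_lo) / n
        x_lo, x_hi = bx - dx, bx + dx            # bracket the best point more tightly
        y_lo, y_hi = by - dy, by + dy
    return best                                   # (objective value, x, y)

print(grid_search((0.0, 0.0), (1.0, 1.0)))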
11. Newton's Method
- basis of other derivative methods
- uses the slope to adjust the step size and converge on the optimum more quickly
- can get stuck on local optima
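A minimal Python sketch of Newton's method in one dimension (the objective is an illustrative assumption): find where the first derivative is zero by iterating x <- x - f'(x)/f''(x).

def newton_optimize(f_prime, f_double_prime, x0, tol=1e-8, max_iter=100):
    """Locate a point where f'(x) = 0 (a candidate optimum)."""
    x = x0
    for _ in range(max_iter):
        step = f_prime(x) / f_double_prime(x)   # slope scaled by curvature sets the step size
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative objective f(x) = -(x - 2)^2 + 3, maximized at x = 2
f_prime = lambda x: -2.0 * (x - 2.0)
f_double_prime = lambda x: -2.0
print(newton_optimize(f_prime, f_double_prime, x0=0.0))   # converges to 2.0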
12. Other Search Methods
- Available in software libraries
  - as components of
    - spreadsheets
    - math systems (Mathematica)
  - know the assumptions before use!!! NOW!!
- Genetic algorithms (see the sketch below)
  - relatively recent technique
  - employ evolutionary principles for optimization
  - program the numerical problem as a series of genes
  - selection explores the topology
  - cross-overs
    - non-linear exploration of strategy sets or values
  - mutations (avoid local peaks)
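A minimal Python sketch of a genetic algorithm (the objective surface, encoding, and settings are illustrative assumptions): each candidate solution is a vector of "genes"; selection keeps the fittest, cross-overs mix parent genes, and mutations perturb them to avoid local peaks.

import numpy as np

rng = np.random.default_rng(0)

def objective(genes):
    """Illustrative multi-peaked surface; the global maximum is near (1, 1)."""
    x, y = genes
    return -(x - 1.0) ** 2 - (y - 1.0) ** 2 + 0.3 * np.cos(6 * x) * np.cos(6 * y)

pop = rng.uniform(-3, 3, size=(50, 2))              # initial population of gene vectors
for generation in range(200):
    fitness = np.array([objective(g) for g in pop])
    parents = pop[np.argsort(fitness)[-25:]]        # selection: keep the fittest half
    picks = rng.integers(0, 25, size=(50, 2))       # two random parents for each child
    # Cross-over: each gene comes from one of the two parents
    children = np.where(rng.random((50, 2)) < 0.5,
                        parents[picks[:, 0]], parents[picks[:, 1]])
    children += rng.normal(0.0, 0.1, size=children.shape)   # mutation: escape local peaks
    pop = children

best = max(pop, key=objective)
print("best genes:", best, "objective:", objective(best))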
13. Explicit Feedback Policies by Stochastic Dynamic Programming
- Optimize decisions based upon
  - the current state (discrete)
    - stock size, debt, hold fullness
  - time relative to the time horizon
  - the last move
- Work backwards from the final time period
- Allows the easy integration of
  - different currencies
    - fish abundance
    - values and costs in dollars
  - random factors
    - weather
    - search
- Result is a set of optimal decisions by state
  - these vary with time
  - but typically only diverge toward the terminal time T
14. A Simple Example: A Child Goes Fishing
- Times for fishing
  - morning
  - afternoon
  - MUST be home for supper!!!
- Probabilities involved
  - catch a fish (LUCK)
  - fall in and lose everything (RISK)
- Decisions
  - go fishing
  - relax (no risk)
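A minimal Python sketch of the backward recursion for this example. The dynamics are inferred, not stated on the slides: going fishing risks falling in with probability RISK (all fish lost), otherwise a fish is caught with probability LUCK, and the terminal value is the number of fish brought home to supper; a maximum of two fish is assumed to keep the state space small. With LUCK = 0.1 and RISK = 0.1 this recursion reproduces the output shown on the next slide.

def child_sdp(luck=0.1, risk=0.1, max_fish=2, n_decisions=2):
    """Backward induction: decisions at breakfast and lunch; supper is the terminal time."""
    states = range(max_fish + 1)                    # number of fish in hand
    value = {s: float(s) for s in states}           # terminal value = fish at supper
    policy = []
    for _ in range(n_decisions):                    # work backwards from supper
        new_value, decision = {}, {}
        for s in states:
            relax = value[s]                        # relax: no change, no risk
            # Go fishing: fall in (lose everything) with probability risk,
            # otherwise catch one more fish with probability luck
            fish = (risk * value[0]
                    + (1 - risk) * (luck * value[min(s + 1, max_fish)]
                                    + (1 - luck) * value[s]))
            decision[s] = "GO FISH" if fish > relax else "RELAX"
            new_value[s] = max(fish, relax)
        value = new_value
        policy.insert(0, decision)                  # earlier decision times go first
    return policy, value

policy, value = child_sdp(luck=0.1, risk=0.1)
print(policy)   # optimal decision by state, at breakfast and at lunch
print(value)    # expected fish at supper, by state at breakfast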
15. Dynamic Program Output

LUCK = 0.1, RISK = 0.1

Optimal decision for each state:
  STATE      BREAKFAST   LUNCH     SUPPER
  NO FISH    GO FISH     GO FISH   RELAX
  ONE FISH   RELAX       RELAX     RELAX
  TWO FISH   RELAX       RELAX     RELAX

Value of states:
  STATE      BREAKFAST   LUNCH      SUPPER
  NO FISH    0.171900    0.090000   0.000000
  ONE FISH   1.000000    1.000000   1.000000
  TWO FISH   2.000000    2.000000   2.000000

LUCK = 0.2, RISK = 0.1

Optimal decision for each state:
  STATE      BREAKFAST   LUNCH     SUPPER
  NO FISH    GO FISH     GO FISH   RELAX
  ONE FISH   GO FISH     GO FISH   RELAX
  TWO FISH   RELAX       RELAX     RELAX

Value of states:
  STATE      BREAKFAST   LUNCH      SUPPER
  NO FISH    0.342000    0.180000   0.000000
  ONE FISH   1.155600    1.080000   1.000000
  TWO FISH   2.000000    2.000000   2.000000
16. A Fishery Problem: Setting Optimal Escapement
- State variable
  - stock size (S)
- Decision variable
  - harvest (H)
  - matched to the stock discretization
- State transitions
  - matrix $P(S_{t+1} \mid S_t - H_t)$
  - calculated from a stock-recruitment relationship?
  - based on observations?
- Value (terminal time)
  - calculated by state i
  - $V_{i,t}$ = value of state i at time t
  - limiting case: all fish caught at the last time
17. The Backward Recursion

$V_{i,T} = i \times (\text{value per fish}), \text{ for each state } i$

$V_{i,T-1} = \max_k \Big[ H_k + \sum_j P_{ij}(H_k)\, V_{j,T} \Big]$

- The value at T-1 given a harvest $H_k$
  - is the harvest plus the value of the resulting final stock size
  - a weighted mean if more than one transition is possible
- Use the k that gives the best V
- Repeat for all states; repeat for T-1, T-2, ...
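A minimal Python sketch of this recursion for the escapement problem. The discretization, transition probabilities, and per-fish value are illustrative assumptions (the lecture leaves them to a stock-recruitment relationship or observations):

import numpy as np

n_states = 5                          # discretized stock sizes 0..4 (illustrative)
value_per_fish = 1.0                  # illustrative
T = 10                                # time horizon

def transition(escapement):
    """Illustrative P(S_{t+1} | escapement): escapement roughly doubles, with some spread."""
    centre = min(2 * escapement, n_states - 1)
    p = np.zeros(n_states)
    for nxt, w in ((centre, 0.6),
                   (min(centre + 1, n_states - 1), 0.2),
                   (max(centre - 1, 0), 0.2)):
        p[nxt] += w
    return p

V = value_per_fish * np.arange(n_states, dtype=float)       # terminal values: all fish caught
policy = []
for t in range(T - 1, -1, -1):                               # repeat for T-1, T-2, ...
    V_new = np.zeros(n_states)
    best_h = np.zeros(n_states, dtype=int)
    for i in range(n_states):                                # repeat for all states
        best_q = -np.inf
        for h in range(i + 1):                               # cannot harvest more than the stock
            q = h * value_per_fish + transition(i - h) @ V   # harvest + weighted future value
            if q > best_q:
                best_q, best_h[i] = q, h                     # keep the harvest giving the best V
        V_new[i] = best_q
    V = V_new
    policy.insert(0, best_h)

print("optimal harvest for each stock size, by time period:")
print(np.array(policy))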
18. Caveats of SDP
- Numerically intensive
  - program in a compiled language
  - access to a fast machine
  - speed limits the number of
    - states
    - values of states
    - times
- MANY parameters to estimate
  - potential variability in results
  - more optimizations
- Probability distributions must be carefully discretized
- Result is a state-time matrix of decisions
  - not all states are equally likely
  - interpret expected system behavior from forward iteration of the result