1
Stochastic Nonlinear Programming with Linear Constraints by Monte-Carlo Estimators
Leonidas Sakalauskas
Institute of Mathematics and Informatics
Akademijos st. 4, 08663 Vilnius, Lithuania
Fax: +370 5 2109323
E-mail: ltsakal@ktl.mii.lt
2
OUTLINE
  • Introduction
  • Unconstrained nonlinear stochastic optimization by the Monte-Carlo method
  • Optimization with nonlinear stochastic constraints
  • ε-feasible solutions for linearly constrained stochastic problems
  • Application to portfolio optimization
  • Conclusions

3
1. Introduction
  • We consider the nonlinear stochastic programming problem with linear constraints, denoted (1) below (a standard written-out form follows this list), in which:
  • the objective function is a mathematical expectation of a random function,
  • the probability measure P is absolutely continuous with a density,
  • the feasible set defined by the linear constraints is a bounded set,
  • and A is the constraint matrix.
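A standard written-out form of problem (1), with assumed notation (the symbols f, ξ, p, D, A, b are illustrative, not taken from the slide):

```latex
% One standard form of problem (1); the notation is an assumption.
\min_{x \in D} \; F(x) \;=\; \mathbf{E}\bigl[\, f(x,\xi) \,\bigr],
\qquad
D \;=\; \{\, x \in \mathbb{R}^{n} : A x \le b \,\},
% where \xi \in \Omega \subseteq \mathbb{R}^{s} is a random vector whose distribution P
% is absolutely continuous with density p(\cdot), D is a bounded set,
% and A is an (m \times n) matrix.
```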

4
  • The methods of stochastic approximation were first proposed to solve stochastic optimization problems:
    Robbins-Monro (1951), Mikhalevitch et al. (1987), Kushner (1997), Han-Fu Chen (2002), Ermoliev et al. (2003), etc.
  • Problems:
  • the rate of convergence of stochastic approximation slows down for constrained problems (Polyak (1987), Uriasyev (1990));
  • the gradient-type projection method, usually applied here, can fail to converge when the constraints are linear, due to zigzagging or jamming (see Bertsekas (1982), Polyak (1987), etc.);
  • procedures for optimality testing are not developed, etc.

5
  • Application of the Monte-Carlo method, frequently used for stochastic problems (Prekopa (1999), Ermoliev et al. (2003), etc.), is based on replacing the objective function, which is a mathematical expectation, by Monte-Carlo estimators (see, e.g., Shapiro (1989), etc.).
  • Problems:
  • choice of the sample size N;
  • numerical complexity;
  • optimality testing and accuracy of the solution;
  • zigzagging or jamming.

6
2. Unconstrained nonlinear stochastic optimization by the Monte-Carlo method
2.1. Stochastic differentiation
(Rubinstein (1983), Prekopa (1999), Ermolyev et al. (2003), Uriasyev (1994), etc.)
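The basis of stochastic differentiation is that, under regularity conditions, differentiation and expectation may be interchanged, so the gradient is itself an expectation and can be estimated by sampling (notation as assumed above):

```latex
% Interchange of gradient and expectation (regularity conditions assumed);
% the stochastic gradient \nabla_x f(x,\xi) is then estimated by Monte-Carlo sampling.
\nabla F(x) \;=\; \nabla_x\, \mathbf{E}\bigl[\, f(x,\xi) \,\bigr]
            \;=\; \mathbf{E}\bigl[\, \nabla_x f(x,\xi) \,\bigr].
```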
7
Assume the following possibilities:
I) to simulate Monte-Carlo samples of the random variable;
II) to compute the corresponding stochastic gradient vectors;
8
III) Denote the sampling statistics: the sample mean of the objective, the sample mean of the stochastic gradients, and their sampling covariance matrix.
Remark. These statistics are asymptotically normal.
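A minimal sketch of these sampling statistics in Python (NumPy); the functions f(x, xi) and grad_f(x, xi) are assumed to be supplied by the user, and all names are illustrative:

```python
import numpy as np

def mc_statistics(x, sample_xi, f, grad_f):
    """Monte-Carlo estimators at the point x: sample objective, its standard
    error, sample gradient and the covariance matrix of the gradient estimate.
    sample_xi : array of shape (N, s) with simulated realizations of xi."""
    N = len(sample_xi)
    values = np.array([f(x, xi) for xi in sample_xi])       # f(x, y^j)
    grads = np.array([grad_f(x, xi) for xi in sample_xi])   # stochastic gradients
    F_hat = values.mean()                                   # sample objective
    F_se = values.std(ddof=1) / np.sqrt(N)                  # its standard error
    G_hat = grads.mean(axis=0)                              # sample gradient
    cov_G = np.cov(grads, rowvar=False) / N                 # covariance of G_hat
    return F_hat, F_se, G_hat, cov_G
```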
9
2.2. Optimality testing
  • if the hypothesis that the gradient is equal to zero is rejected,
  • or the confidence interval of the objective function is longer than the admissible value,
then there is no reason to accept the optimality of the current decision.
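One way to carry out these two tests (sketched under the assumption that a standard multivariate test is acceptable; the slide's exact statistic is not reproduced): test the hypothesis "gradient = 0" with Hotelling's T² statistic and interval-estimate the objective with the usual t-interval.

```python
import numpy as np
from scipy import stats

def optimality_not_rejected(G_hat, grads, F_se, eps_obj, alpha=0.05):
    """Accept the current decision as statistically optimal only if
    (i) the hypothesis 'gradient = 0' is NOT rejected (Hotelling T^2 test) and
    (ii) the confidence interval of the objective is shorter than eps_obj."""
    N, n = grads.shape
    S = np.cov(grads, rowvar=False)                  # sample covariance of gradients
    T2 = N * G_hat @ np.linalg.solve(S, G_hat)       # Hotelling's T^2 statistic
    F_stat = (N - n) / (n * (N - 1)) * T2            # ~ F(n, N - n) under H0
    p_value = stats.f.sf(F_stat, n, N - n)
    ci_length = 2 * stats.t.ppf(1 - alpha / 2, N - 1) * F_se
    return p_value > alpha and ci_length <= eps_obj
```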
10
Test functions used in the numerical experiments (an objective minimized subject to constraints).
11
Fig. 1. Probability of accepting the solution at distance r from the optimum.
12
2.3. Optimization by Monte-Carlo estimators
Almost sure convergence of this procedure to the optimal solution, at a linear rate, is proved by a martingale approach (Sakalauskas, Informatica, 2001). Since the sample size can be taken small at the first iterations (about 20-100), adjusting the sample size substantially decreases the volume of computation needed to reach the optimal decision with admissible accuracy.
13
Due to the linear rate of convergence, the total amount of sampling grows only by a constant factor (see the sketch below). Thus, if a certain computational resource suffices to compute one value of the objective or constraint function with admissible accuracy, then the optimization in fact requires only several times more computation.
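The argument can be made explicit with a short calculation, assuming the sample size grows roughly geometrically up to the final size N_T required for the admissible accuracy:

```latex
% If N_t \approx N_T q^{\,T-t} with some 0 < q < 1, the total sampling effort is
\sum_{t=0}^{T} N_t \;\approx\; N_T \sum_{k=0}^{T} q^{k} \;\le\; \frac{N_T}{1-q},
% i.e. only a constant factor 1/(1-q) times the cost of a single evaluation
% of the objective (or constraint) function with the admissible accuracy.
```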
14
2.4. The general algorithm
Step 0. Let an initial decision and an initial step size be given.
Step 1. Simulate the Monte-Carlo sample of the current size at the current decision.
Step 2. Test the hypothesis that the gradient is equal to zero.
Step 3. Test the confidence interval of the objective function.
Step 4. If optimality cannot be accepted, make a gradient-type step to the next decision, adjust the sample size, and return to Step 1.
Step 5. Otherwise, terminate the algorithm.
(A code sketch of this loop is given below.)
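A minimal end-to-end sketch of such a loop in Python; the step length rho, the sample-size rule and the bounds N_min, N_max are illustrative, not the exact rule of the paper, and optimality_not_rejected() is the helper sketched above:

```python
import numpy as np

def stochastic_gradient_loop(x0, simulate_xi, f, grad_f, rho=0.1,
                             N_min=20, N_max=10000, eps_obj=1e-2,
                             alpha=0.05, max_iter=200):
    """Gradient-type Monte-Carlo optimization loop (illustrative sketch)."""
    x, N = np.asarray(x0, dtype=float), N_min
    for _ in range(max_iter):
        sample = simulate_xi(N)                               # Step 1: simulate the sample
        values = np.array([f(x, xi) for xi in sample])
        grads = np.array([grad_f(x, xi) for xi in sample])
        G_hat = grads.mean(axis=0)                            # Monte-Carlo gradient estimate
        F_se = values.std(ddof=1) / np.sqrt(N)                # std. error of the objective
        # Steps 2-3 and 5: statistical optimality tests, terminate if accepted
        if optimality_not_rejected(G_hat, grads, F_se, eps_obj, alpha):
            return x, values.mean(), N
        x = x - rho * G_hat                                   # Step 4: gradient-type step
        # grow the sample as the gradient estimate shrinks (illustrative rule)
        N = int(min(N_max, max(N_min, 2.0 / (rho * (G_hat @ G_hat) + 1e-12))))
    return x, values.mean(), N
```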
15
3. Optimization with nonlinear stochastic constraints
3.1. Problem statement
16
3.2. Lagrange method by Monte-Carlo estimators
Sakalauskas, EJOR (2002)
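A generic stochastic Lagrange (primal-dual) iteration with Monte-Carlo estimates, written with assumed notation; this is a sketch of the idea, not necessarily the exact scheme of the paper:

```latex
% Lagrange function for expectation constraints E f_i(x,\xi) <= 0 and a generic
% primal-dual iteration with Monte-Carlo estimators \tilde{\nabla}_x L and \tilde{G}
% (assumed notation, not necessarily the exact scheme of Sakalauskas, EJOR (2002)).
L(x,\lambda) \;=\; \mathbf{E}\, f_0(x,\xi) \;+\; \lambda^{\top} \mathbf{E}\, f(x,\xi),
\qquad
x^{t+1} = x^{t} - \rho\, \tilde{\nabla}_x L(x^{t},\lambda^{t}),
\qquad
\lambda^{t+1} = \max\!\bigl(0,\; \lambda^{t} + \rho\, \tilde{G}(x^{t})\bigr).
```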
17
3.3. Optimality testing
Together with the hypothesis of a zero Lagrange-function gradient and the admissible confidence interval of the objective function, the validity of the constraint and its confidence interval are tested.
18
Fig. 2. Change of the objective, the constraint, the sample size, and the solution during the iterations.
19
4. ε-feasible solutions for linearly constrained stochastic problems
4.1. The feasible set
The necessary optimality condition (a standard KKT form is sketched below).
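For linear constraints the necessary optimality condition takes the usual Karush-Kuhn-Tucker form, written here assuming the feasible set D = {x : Ax ≤ b}:

```latex
% Necessary (KKT) optimality condition, assuming D = { x : A x <= b }.
\nabla F(x^{*}) + A^{\top}\lambda^{*} = 0,
\qquad \lambda^{*} \ge 0,
\qquad \lambda^{*}_{i}\,\bigl(a_{i}^{\top}x^{*} - b_{i}\bigr) = 0, \quad i = 1,\dots,m.
```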
20
4.2. The ε-feasible set
21
4.3. Optimization by ε-feasible solutions and the Monte-Carlo method
The iterative procedure (2) updates the current decision along the ε-feasible gradient projection (Sakalauskas, Informatica (2004)).
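A minimal sketch of one way to compute a feasible descent direction for linear constraints, treating constraints within ε of being active as active; this illustrates the idea only and is not the exact ε-feasible projection of the paper:

```python
import numpy as np

def epsilon_feasible_direction(x, grad, A, b, eps=1e-3):
    """Project the negative gradient onto the null space of the eps-active
    constraints of {x : A x <= b}, so that a small step along the returned
    direction keeps the iterate (eps-)feasible.  Illustrative sketch only."""
    slack = b - A @ x
    active = slack <= eps                            # eps-active constraints
    if not active.any():
        return -grad                                 # unconstrained step
    A_act = A[active]
    # orthogonal projector onto the null space of the active constraint rows
    P = np.eye(len(x)) - A_act.T @ np.linalg.pinv(A_act @ A_act.T) @ A_act
    return -P @ grad
```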
22
Theorem 1. Let the function F be differentiable and its gradient be Lipschitzian with a constant L. Assume that the feasible set is bounded and has more than one element, and that A is the constraint matrix. Then, starting from any initial approximation in the feasible set, formulae (2) define a sequence of ε-feasible decisions, and there exist values of the method parameters such that the sequence converges a.s. to a decision satisfying the necessary optimality condition as the number of iterations increases.
23
4.4. Application to the portfolio problem
Financial planning under uncertainty can frequently be reduced to stochastic nonlinear optimization with linear constraints (D. Duffie and J. Pan (1997), R. Mansini et al. (2003)). Let us consider an application of the developed approach to the optimization of a portfolio on the Lithuanian Stock Market with n = 4 securities (Table 1). We analyse the daily returns of the following assets:
ENRG - joint stock company Lietuvos energija (power industry);
MAZN - joint stock company Mazeikiu Nafta (oil refinery);
ROKS - joint stock company Rokiskio suris (dairy products);
RST - joint stock company Rytu skirstomieji tinklai (power industry).
24
Table 1
The problem is to maximize the probability that the portfolio return exceeds a desired threshold R, under the assumption of lognormally distributed assets, subject to a simple set of linear constraints on the portfolio weights.
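A sketch of the Monte-Carlo estimator of this criterion for a fixed weight vector; the parameters mu, Sigma and the threshold R are illustrative, and lognormal asset returns are simulated as exponentials of a multivariate normal vector:

```python
import numpy as np

def prob_return_exceeds(weights, mu, Sigma, R, N=10_000, seed=0):
    """Monte-Carlo estimate of P(portfolio return >= R) under lognormally
    distributed assets; weights lie on the simplex (nonnegative, sum to 1).
    mu, Sigma : parameters of the underlying normal log-returns (assumed)."""
    rng = np.random.default_rng(seed)
    log_returns = rng.multivariate_normal(mu, Sigma, size=N)  # normal log-returns
    asset_returns = np.exp(log_returns) - 1.0                 # lognormal returns
    portfolio_returns = asset_returns @ weights
    return np.mean(portfolio_returns >= R)

# Example with n = 4 assets and equal weights (all numbers are illustrative):
w = np.full(4, 0.25)
print(prob_return_exceeds(w, mu=np.zeros(4), Sigma=0.0004 * np.eye(4), R=0.001))
```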
25
Table 2. Results of the optimization.
26
5. Conclusions
  • A numerical methodology for stochastic nonlinear programming by Monte-Carlo estimators has been developed, based on gradient-type algorithms.
  • The methodology distinguishes itself by the following peculiarities:
  • the optimality of the solution is tested with respect to statistical criteria;
  • the Monte-Carlo sample size is adjusted iteratively so as to guarantee convergence to the optimal solution at a linear rate and to estimate the objective function with an admissible confidence after a finite number of iterations.
  • The method of ε-feasible approximations for stochastic nonlinear problems with linear constraints has been developed within the framework of the general method.

27
THANK YOU !