Title: High-Performance and Parallel Optimization
1. High-Performance and Parallel Optimization
- Mark P. Wachowiak, Ph.D.
- Department of Computer Science and Mathematics
- Nipissing University
- June 3, 2007
2. Objectives
- Brief overview of optimization and applications.
- Parallel computing and parallel optimization.
- Specific examples of parallel optimization.
- Future directions.
3. Optimization
- Find the best, or optimum, value of an objective (cost) function.
- Very large research area.
- Multitude of applications.
4. Applications of Optimization
Engineering design
Business and industry
Biology and medicine
Radiotherapy planning
Economics
Design of materials
Manufacturing design
Bioinformatics / Proteomics
Finance
Systems biology
Management
Simulation and modeling
Image registration
5. New Applications of Optimization
- Simulation-based optimization
- Objective function values are the results of experiments and/or simulations.
- Analytic forms for the objective function are unavailable.
- Black-box cost function computation.
- Evaluating f is extremely costly.
- PDE-constrained optimization.
- Very large, complex, and high-dimensional.
- Requires massive computing resources.
- Inverse problems.
- Costly computation of the objective function.
6. Applications of PDE-Constrained Optimization
- Electromagnetics.
- Aerodynamics.
- Cardiac electrophysiology.
- Image processing.
- Reservoir simulation.
- Compressible and incompressible flows.
- Earthquake modeling.
7. Example: PDE-Constrained Optimization
Ground-truth model of the distribution of wave speed in an elastic earth medium (a portion of the Los Angeles Basin).
Inverted model from solving an optimization problem with 17 million inversion parameters. The solution required 24 hours using 2048 CPUs at the PSC.
O. Ghattas, 2003
8. Parameter Estimation
- For phenomena modeled by systems of ODEs and PDEs.
- Optimization is used to find the system parameters that best fit experimental and/or observed data.
- Applications
- Biochemical pathways.
- Environmental modeling.
- Economics.
9. Definitions
- Objective/cost function
- Search space
- Minimizer
- Global optimization
- Local optimization
- Convergence criteria
- Stationary point
- Deterministic method
- Stochastic method
- Multiobjective optimization
- Constrained
- Unconstrained
10. Journals on Optimization
- SIAM Journal on Optimization
11. Combinatorial Optimization
- A linear or nonlinear function is defined over a very large finite set of solutions.
- Categories
- Network problems.
- Scheduling.
- Transportation.
12. Combinatorial Optimization (2)
- If the function is piecewise linear, it can be optimized exactly with mixed-integer programming methods using branch and bound.
- Approximate solutions can be obtained with heuristic methods:
- Simulated annealing.
- Tabu search.
- Genetic algorithms.
13. General Unconstrained Problems
- A nonlinear function is defined over the real numbers.
- The function is not subject to constraints, or has simple bound constraints.
- Partitioning strategies utilize a priori knowledge of how rapidly the function varies (e.g., the Lipschitz constant).
- Interval methods can be used if the objective function can be expressed analytically.
14. General Unconstrained Problems: Statistical Methods
- Stochastic in nature.
- Use partitioning to decompose the search space.
- Use a priori information or assumptions to model the objective function.
- Usually inexact.
15. General Unconstrained Problems: Statistical Methods (Examples)
- Simulated annealing.
- Genetic algorithms.
- Continuation methods:
- Transform the function into a smoother function with fewer local minimizers.
- Apply a local minimization procedure to trace the minimizers back to the original function.
16. General Constrained Problems
- A nonlinear function is defined over the real numbers.
- The optimization is subject to constraints.
- These problems are usually solved by adapting techniques for unconstrained problems to address the constraints.
17. Unconstrained Nonlinear Optimization
- Unconstrained minimization: find x that minimizes a function f(x) (the objective function).
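The formula shown on the original slide is not preserved in this text; the standard statement of the unconstrained problem is:
\[
\min_{x \in \mathbb{R}^D} f(x), \qquad f : \mathbb{R}^D \rightarrow \mathbb{R}.
\]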
18. Bound-Constrained Nonlinear Optimization
- Find x that minimizes a function f(x),
- subject to simple bound constraints (lower and upper bounds on each variable).
19. General Constrained Nonlinear Optimization
- Constrained minimization: minimize f(x),
- subject to equality constraints and inequality constraints.
- Either f(x) and/or the c(x) are nonlinear.
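The constraint equations on the original slide are not preserved here; a standard statement of the general constrained problem is:
\[
\min_{x} f(x) \quad \text{subject to} \quad c_i(x) = 0,\; i \in \mathcal{E} \ \text{(equality constraints)}; \qquad c_j(x) \ge 0,\; j \in \mathcal{I} \ \text{(inequality constraints)}.
\]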
20. General Constrained Nonlinear Optimization (2)
- In general, the problem is to minimize f over a compact subset Ω of R^D.
21. Nonlinear Equations Problem
- Determine a root of a system of D nonlinear equations in D variables: F(x) = 0, where F: R^D → R^D.
22. Example
23. Global Optimization
- Relatively new vis-à-vis local methods.
- Many important objective functions are
- Non-convex.
- Irregular.
- Derivatives are not available or cannot be easily computed.
- The high computational cost of many global methods precludes their use in time-critical applications.
24. Local and Global Optimization
[Figure: search paths with marked start and end points for local and global optimization]
25. Optimization Techniques
Global
Local
Stochastic
Deterministic
Derivative
Derivative-free
Simulated annealing
Gradient descent
Nelder-Mead simplex
Interval analysis
Genetic algorithms
Trust region methods
Powell's direction set
Homotopy methods
Evolutionary strategies
Newton-based methods
Pattern search
Response surface techniques
Particle swarm
DIRECT
Multidirectional search
26. Optimization in Biology and Biomedicine
Engineering design
Fitting models to experimental data
Optimizing relevant functions
Radiotherapy planning
Design of biomaterials
Soft tissue biomechanics
Biomechanics
Bioinformatics / Proteomics
Systems biology
Biomedical Imaging
Biochemical pathways
27. Optimization in Tomotherapy and Intensity-Modulated Radiotherapy
- Particle swarm (Li et al., 2005).
- Cimmino algorithm (Xiao et al., 2004).
- Exhaustive search (Wang et al., 2004).
- Simulated annealing (Rosen et al., 2005).
- Gradient descent (Rosen et al., 2005).
- Costlets (Kessler et al., 2005).
http://www.lrcc.on.ca/research/tomo/index.xml
28. Derivative-Based Optimization
- If derivatives can be computed or accurately estimated (e.g., by finite differences), a derivative-based method is preferable.
- If second derivatives are available, Newton-based approaches should be considered.
- Generally, these are the most successful and efficient approaches.
- They are the best-known methods for local optimization.
29. Problems with Derivative-Based Approaches
- Derivatives may not always be available.
- Finite-difference approximations are too expensive or inaccurate.
- There is no a priori information about f.
- High-dimensional problems preclude accurate estimates of the gradient.
30. Automatic Differentiation
- A technique to compute derivatives based on the computer code for f.
- A new program is generated wherein analytical derivatives are computed based on the original code.
- Utilizes the chain rule and differentials to generate the differentiation code.
- Can be useful for gradient-based optimization of difficult functions.
31. Automatic Differentiation: Disadvantages
- Derivatives may not be accurate due to truncation error in functions involving solutions to PDEs.
- Black-box function code contains branching.
- Black-box code may not be available.
- Usually requires the original program to be written in a specified manner.
- Interfacing the generated code with the original program may be difficult.
32. Limited Impact of Optimization
- Computational difficulties
- High computational cost (especially G.O.).
- Plethora of methods (which to use?).
- Disconnect between CAD and CAE.
- Realism of the underlying model.
http://www.cas.mcmaster.ca/mopta/mopta01/abstracts.html#jones
33. Parallel and High-Performance Computing
- Parallel computing
- The use of multiple computer resources (usually processors or execution threads) to improve the efficiency of computationally expensive algorithms.
- High-performance computing
- Parallel computing, or
- Optimizing algorithm performance by explicitly taking advantage of computational abilities inherent in the system hardware and/or software.
34. HPC and Parallel Computing
www.sharcnet.ca
35. Limits to Improvement via Parallelization
- Not all useful algorithms are inherently parallel.
- They may not have a natural parallel implementation.
- Overhead required by parallelization negatively affects performance:
- Cost of creating and joining threads.
- Synchronization of threads and/or processes.
- Communication overhead and/or network issues.
- Inefficiencies resulting from introducing parallelism into an optimal serial algorithm.
36. Parallel Computing Terminology
- Coarse-grained
- Each processor has a large amount of work; a relatively small number of large computations is performed.
- Fine-grained
- Each processor works on a small subtask.
- A relatively large number of small computations is performed.
37. Parallel Computing Terminology
- Distributed memory
- Processors each have their own memory.
- Processors communicate through passing messages.
- Useful for coarse-grained algorithms.
- Relatively difficult to program (e.g., MPI, PVM).
- Shared memory
- All processors share one memory.
- Inter-processor communication is less important.
- Useful for fine-grained algorithms, as well as for some coarse-grained implementations.
- Relatively simple to program (e.g., OpenMP, Pthreads).
38. Speedup and Efficiency
P = number of processors.
Scalability: for a specified algorithm, the increase in speedup as a function of the number of processors P. In general, distributed-memory systems are more scalable than shared-memory systems.
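The speedup and efficiency formulas on the original slide are not preserved in this text; the standard definitions, with T_1 the run time on one processor and T_P the run time on P processors, are:
\[
S(P) = \frac{T_1}{T_P}, \qquad E(P) = \frac{S(P)}{P}.
\]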
39. Amdahl's Law
- Quantifies the efficiency benefits of parallelization for multiprocessing systems.
- Assumption: the problem size is fixed.
- Notation
- P = number of processors
- s = fraction of intrinsically serial code (0 to 1)
- 1 − s = fraction of code that can be parallelized
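With this notation, the standard form of Amdahl's law (a reconstruction of the formula that appeared on the slide) is:
\[
\mathrm{Speedup}(P) = \frac{1}{s + \dfrac{1 - s}{P}}.
\]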
40. Amdahl's Law (2)
[Plot: Speedup(P) versus P for s = 0.01, 0.05, 0.2, and 0.5]
41. Gustafson's Law
- Quantifies massive parallelism.
- Underlying principle: the parallel or vector part of a program scales with the problem size.
- Serial components, including start-up, program loading, serial bottlenecks, and I/O, do not grow with problem size (they are constant).
- If a problem is run on n processors, speedup is linear.
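The slide's formula is not preserved; the usual statement of Gustafson's scaled speedup, with s the serial fraction of the run time on n processors, is:
\[
S(n) = s + n(1 - s) = n - s(n - 1).
\]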
42. Gustafson's Law (2)
- The most efficient way to use a parallel computer is to have each processor perform similar work, but on different sections of the data (Hillis, 1998).
- In this way, the fixed, sequential portion of the code can be considered constant.
43. Three Levels for Introducing Parallelism into Optimization
- Parallelize the function, gradient, and constraint evaluations.
- Parallelize the numerical computations (linear algebra) and numerical libraries.
- Parallelize the optimization algorithm at a high level, i.e., adopt new optimization approaches.
Schnabel, 1995
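As an illustration of the first level (parallel function evaluations), here is a minimal Python sketch using the standard multiprocessing module; the objective function, helper names, and pool size are illustrative choices, not taken from the talk:

```python
from multiprocessing import Pool
import numpy as np

def objective(x):
    """Stand-in for an expensive black-box objective (illustrative only)."""
    return float(np.sum(np.asarray(x) ** 2))

def parallel_evaluate(points, processes=4):
    """Level-1 parallelism: evaluate the objective at many candidate points at once."""
    with Pool(processes=processes) as pool:
        return pool.map(objective, points)

if __name__ == "__main__":
    candidates = [np.random.uniform(-5, 5, size=3) for _ in range(16)]
    values = parallel_evaluate(candidates)
    print(min(values))
```

This coarse-grained pattern maps directly onto optimizers such as DIRECT or population-based methods, where many independent function evaluations are available per iteration.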
44. Example: Image Registration
- Aligning and combining images from the same or different imaging modalities.
- Linear
- Nonlinear/elastic
- Useful in simulation, modeling, and in planning surgical procedures.
- Employs concepts from probability theory, information theory, geometry, topology, optimization, parallel computing, and many other areas.
- Once the images are registered, they are fused together to provide complementary information.
45. Registration and Fusion
[Image panels: MRI; ultrasound; MRI + ultrasound; PET; histology/cryosection]
46. Similarity Metric
Small x, y, and z translation errors
Correct registration.
Rotation errors around all three axes.
Large translation and rotation errors.
Wachowiak et al., Proc. SPIE Medical Imaging, 2003
47. Entropy and Mutual Information
Entropy
Mutual Information (MI)
Normalized Mutual Information (NMI)
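The equations on the original slide are not preserved in this text; the standard definitions, for images A and B with marginal densities p(a), p(b) and joint density p(a, b), are:
\[
H(A) = -\sum_a p(a)\log p(a), \qquad
MI(A,B) = H(A) + H(B) - H(A,B), \qquad
NMI(A,B) = \frac{H(A) + H(B)}{H(A,B)}.
\]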
48. Parallel Optimization: Linear Registration
[Diagram: coarse-grained parallelism (DIRECT) and fine-grained parallelism (Powell, MDS) over the similarity-metric pipeline:]
- Compute transformation matrix.
- Apply transformation to all voxels.
- Determine valid coordinates.
- Interpolate image intensities.
- Compute marginal and joint densities.
- Calculate the similarity metric.
Wachowiak and Peters, 2005, 2006
49. Examples of Parallel Optimization Approaches: Deterministic Methods
- DIRECT (Dividing Rectangles) (Jones et al., 1993; Wachowiak et al., 2006).
- Parallel trust-region methods (Hough and Meza, 2002).
- Asynchronous pattern search (local method) (Kolda, 2003).
- Interval analysis (Eriksson and Lindström, 1995).
- Parallel Newton-Krylov methods for PDE-constrained optimization (Biros and Ghattas, 1999).
- Various response surface techniques.
50. Examples of Parallel Optimization Approaches: Stochastic Methods
- Various parallel simulated annealing approaches.
- Parallel evolutionary strategies and genetic algorithms.
- Parallel particle swarm optimization (Schutte et al., 2004).
51. Response Surface Methods
- Model the unknown function f as an analytic approximation.
- Response surface.
- Surrogate model.
- The model is optimized over Ω.
- Assume a deterministic function of D variables sampled at n points.
Regis and Shoemaker, 2007
52. Response Surface
Linear regression model: the response is a weighted sum of (linear or nonlinear) functions of x with unknown coefficients to be estimated, plus normally distributed error terms N(0, σ²).
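The regression equation on the slide is not preserved; a standard linear-regression response surface of this kind, with basis functions f_j (linear or nonlinear in x) and unknown coefficients β_j, is:
\[
y(x) = \sum_{j=1}^{k} \beta_j f_j(x) + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2).
\]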
53. 2D Response Surface: Fitting Polynomials
[Plot: f(x) and polynomial response surfaces of degree d = 4, 6, and 8]
54. Radial Basis Functions
- Radially symmetric basis functions for modeling scattered data.
- Used in many response surface approaches.
- RBFs have nice mathematical properties that facilitate analysis (Powell, 1992).
55. Scattered Point Interpolation with Radial Basis Functions
[Figure panels: original point cloud from segmented contours in a CT volume; enhanced point cloud; surface normals; radial basis interpolation; final RBF model]
Courtesy Derek Cool, Robarts Research Imaging Laboratories
56. Radial Basis Functions
Space of polynomials in D variables of degree ≤ m.
57. Examples of Radial Basis Functions
(defined for r ≥ 0)
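The specific basis functions shown on the slide are not preserved; commonly used RBFs (not necessarily the ones plotted in the original) include:
\[
\phi(r) = e^{-\gamma r^2} \ \text{(Gaussian)}, \qquad
\phi(r) = \sqrt{r^2 + \gamma^2} \ \text{(multiquadric)}, \qquad
\phi(r) = r^2 \log r \ \text{(thin-plate spline)}, \qquad
\phi(r) = r^3 \ \text{(cubic)}.
\]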
58. Compact Support RBFs
(defined piecewise: one expression for r within the support radius, zero otherwise)
Compact support → positive definite matrices.
59. Example RBFs
[Plot: φ(r) versus r for a Gaussian RBF with γ = 2 and CSRBFs of degree 5, 6, and 8]
60. Radial Basis Functions
Matrix of evaluated radial basis functions
61. Radial Basis Functions
- The RBF that interpolates (xi, f(xi)) for i = 1, …, n is obtained by solving a linear system.
- The polynomial term is not required if CSRBFs are used.
- The coefficients λ and c can be found by inversion.
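A minimal sketch of this interpolation step, omitting the polynomial term (which is valid for positive-definite basis functions such as the Gaussian used here); the function names, γ value, and test function are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def rbf_fit(X, f, gamma=1.0):
    """Fit a Gaussian-RBF interpolant to scattered data.

    X : (n, D) array of sample points, f : (n,) array of function values.
    Returns the coefficient vector lambda solving Phi @ lam = f.
    """
    # Pairwise squared distances between sample points.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-gamma * d2)          # matrix of evaluated radial basis functions
    lam = np.linalg.solve(Phi, f)      # coefficients found by solving the linear system
    return lam

def rbf_eval(X, lam, Xnew, gamma=1.0):
    """Evaluate the fitted RBF model at new points Xnew."""
    d2 = np.sum((Xnew[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2) @ lam

# Example: interpolate f(x, y) = sin(x) + cos(y) from 50 scattered samples.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(50, 2))
f = np.sin(X[:, 0]) + np.cos(X[:, 1])
lam = rbf_fit(X, f, gamma=2.0)
print(rbf_eval(X, lam, np.array([[0.5, 0.5]]), gamma=2.0))
```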
62. Determining New Points
- Resample in areas having promising function values as predicted by the model (Sóbester et al., 2004).
- The next point selected maximizes the expected improvement in the objective function (Jones et al., 1998).
- Search in areas of high predicted approximation error, using stochastic techniques (Sóbester et al., 2004).
- Select a guess fn for the global minimum, and choose a new point xnew such that some quality measure on the function snew interpolating all (xi, fi) and (xnew, fn) is optimized, e.g., minimizing the bumpiness of snew (Gutmann, 2001; Regis and Shoemaker, 2007).
63. Constrained Optimization Using Response Surfaces
- Choose points so that they are at a maximum distance from previously evaluated points.
- Define the maximum distance between any unevaluated point and the evaluated points as the maximin distance.
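The slide's formula is not preserved; consistent with the definition above, the maximin distance over n evaluated points x_1, …, x_n can be written as:
\[
\Delta_n = \max_{x \in \Omega} \; \min_{1 \le j \le n} \lVert x - x_j \rVert .
\]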
Regis and Shoemaker, 2005.
64. Constrained Optimization Using Response Surfaces (2)
- Let βi ∈ [0, 1] denote a user-defined distance factor.
- The next point chosen is required to have a distance of at least βi Δi from all previously evaluated points.
65. Auxiliary Constrained Minimization Subproblem
- The auxiliary problem of selecting the next point for evaluation is expressed as a constrained minimization of the surrogate model subject to the distance requirement above.
- k denotes the initial number of points evaluated.
- The auxiliary subproblem is generally nonconvex.
66. Solving the Auxiliary Subproblem
- The βi typically follow a search pattern balancing global and local search:
- β1 = 1 → global exploratory search.
- β_{Ns+1} = 0 → greedy local search.
- The RBF model can be differentiated:
- Gradient-based optimization software.
- Multistart nonlinear programming solver.
- Global optimization techniques, such as constrained DIRECT.
67. Parallelization
- Each search pattern contains Ns + 1 values, where Ns is the cycle length.
- If P ≤ Ns + 1, the search pattern is cycled through by the available processors.
- If P > Ns + 1, more values are added to the search pattern.
- E.g., with β = ⟨0.9, 0.75, 0.25, 0.05, 0.03, 0⟩ (Ns = 5),
- P = 8 → β = ⟨0.9, 0.9, 0.75, 0.25, 0.05, 0.03, 0.03, 0⟩
68. Full Algorithm
- Select initial points S1 (Latin hypercubes, etc.).
- Evaluate f at the points in S1 and select the best value.
- Fit or update the RBF model using the data points x ∈ Si.
- Select the next candidate point x_{k+i} by solving the constrained auxiliary subproblem; 0 ≤ βi ≤ 1 is set by the user.
- Evaluate f at x_{k+i} and update the best function value thus far.
- Update S_{i+1} ← S_i ∪ {x_{k+i}}, D_{i+1} ← D_i ∪ {(x_{k+i}, f(x_{k+i}))}, i ← i + 1.
- If not converged, repeat from the model-fitting step; otherwise, end.
69. Example
[Plots: initial RBF surface over (x, y), and RBF surfaces after iterations 1, 2, and 3]
70. DIRECT
- Relatively new method for bound-constrained global optimization (Jones et al., 1993).
- Lipschitzian approach, but the Lipschitz constant need not be explicitly specified.
- Balance of global and local search.
- Inherently parallel.
Wachowiak and Peters, 2006; Dru, Wachowiak, and Peters, 2006.
71. DIRECT Algorithm
- Normalize the search space.
- Sample at points around centers along dimensions of maximum side length.
- Evaluate function values at the sampled points.
- Divide rectangles according to function value.
- Group rectangles by their diameters.
- Identify potentially optimal rectangles.
- If not converged, repeat from the sampling step; otherwise, end.
72. Potentially Optimal Hyperboxes
Potentially optimal HBs have centers that define the bottom of the convex hull of a scatter plot of rectangle diameters versus f(xi), over all HB centers xi.
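The selection condition on the slide is not preserved; in the standard formulation (Jones et al., 1993), a hyperbox j with center c_j and diameter d_j is potentially optimal if there exists K > 0 such that:
\[
f(c_j) - K d_j \le f(c_i) - K d_i \ \ \forall i, \qquad
f(c_j) - K d_j \le f_{\min} - \varepsilon \lvert f_{\min} \rvert .
\]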
[Plots: lower convex hull selection for ε = 0.001 and ε = 0.1]
73. Example
74. Example
Normalize the search space and evaluate the
center point of a D-dimensional rectangle.
75. Example
Evaluate points around the center.
Iteration 1
76. Example
Divide the rectangle according to the function
values.
77. Example
Use Lipschitz conditions and the rectangle
diameters to determine which rectangles should be
divided next.
78. Division of the Search Space
Potentially optimal rectangles with these centres are divided in the next iteration.
Estimate of Lipschitz constant
Locality parameter
79. Example
Evaluate points in the potentially optimal
rectangles.
Iteration 2
80. Example
Iterate (evaluate, divide, find potentially
optimal rectangles) until stopping criteria are
met.
81. Example
82. Example
Iteration 3
83. Example
[Panels: iterations 4, 5, 10, 20, 50, and 110]
84. Conclusion
- Many new applications of optimization need methods that do not rely on derivatives.
- Computing f is often extremely costly.
- Parallel computation, particularly coarse-grained parallelism, provides the efficiency needed to solve large black-box problems.
85. Future Directions
- The new focus is on deterministic global approaches.
- New global optimization algorithms should be inherently parallel to maximize scalability.
- Convergence proofs
- Easier for derivative-based methods.
- Especially important for stochastic methods.
86. Future Directions (2)
- Load balancing
- Asynchronous versions of parallel techniques.
- Selecting the appropriate methods for specific problems:
- If derivatives are available, use gradient-based methods.
- Otherwise, parallel direct methods should be used.
- If all else fails, consider parallel stochastic methods.
87. Thank you.
88. Radial Basis Functions
- The RBF that interpolates (xi, f(xi)) for i = 1, …, n is obtained by solving a linear system.
- The polynomial term is not required if CSRBFs are used.
- The coefficients λ and c can be found by inversion.
89. Radial Basis Functions
90. Response Surface Methods
- Model the unknown function f as an analytic approximation.
- Response surface.
- Surrogate model.
- The model is optimized over Ω.
- Assume a deterministic function of D variables sampled at n points.
- Denote sampled point i by xi = (xi1, …, xiD), i = 1, …, n.
- The associated function value is yi = y(xi).
Regis and Shoemaker, 2007
91. DIRECT: Step 1
Normalize the search space and evaluate the
center point of an n-dimensional rectangle.
92. DIRECT: Step 2
Evaluate points around the center.
93. DIRECT: Step 3
Divide the rectangle according to the function
values.
94. Division of the Search Space
Potentially optimal rectangles with these centres are divided in the next iteration.
Estimate of Lipschitz constant
Locality parameter
95. DIRECT: Step 4
Use Lipschitz conditions and the rectangle
diameters to determine which rectangles should be
divided next.
96. DIRECT: Iteration
Repeat steps 2-4 (evaluate, find potentially optimal rectangles, divide).
97. DIRECT: Convergence
Based on rectangle clustering, number of
non-improving iterations, and function value
tolerance.
98.
- Suppose that xi, i = 1, …, n are previously evaluated points.
99. Steepest Descent
- Derivative-based local optimization method.
- Simple to implement.
- Update rule: x_{k+1} = x_k − α ∇f(x_k), where x_k is the current x, x_{k+1} is x in the next iteration, ∇f(x_k) is the gradient of f at the current x, and α is the step size.
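A minimal sketch of this update rule in Python; the function names, tolerance, and test function are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def steepest_descent(grad, x0, alpha=0.1, tol=1e-6, max_iter=1000):
    """Minimal steepest-descent sketch: x_{k+1} = x_k - alpha * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop when the gradient is (nearly) zero
            break
        x = x - alpha * g             # step in the direction of steepest descent
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 10 * y^2.
grad_f = lambda x: np.array([2 * (x[0] - 1), 20 * x[1]])
print(steepest_descent(grad_f, x0=[5.0, 3.0], alpha=0.05))
```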
100. Steepest Descent: Disadvantages
- Unreliable convergence properties.
- The new gradient at the minimum point of any line minimization is orthogonal to the direction just traversed.
- → Potentially many small steps are taken, leading to slow convergence.
101. Conjugate Gradient Methods
- Assume that both f(x) and ∇f(x) can be computed.
- Assume that f can be approximated by a quadratic form.
- The optimality condition follows from setting the gradient of the quadratic form to zero.
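The quadratic form and optimality condition on the original slide are not preserved; the standard forms, with A a symmetric matrix, are:
\[
f(x) \approx c - b^{\mathsf T} x + \tfrac{1}{2}\, x^{\mathsf T} A x,
\qquad \nabla f(x) = A x - b,
\qquad \text{optimality:} \ A x = b.
\]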
102. Conjugates
- Given that A is a symmetric matrix, two vectors g and h are said to be conjugate w.r.t. A if gᵀAh = 0.
- Orthogonal vectors are a special case of conjugate vectors: gᵀh = 0.
- Therefore, the solution to the n × n quadratic problem can be expressed in terms of n mutually conjugate directions.
103. Conjugate Gradient Method
- Select successive direction vectors as a conjugate version of the successive gradients obtained as the method progresses.
- Conjugate directions are generated as the algorithm proceeds.
http://www.srl.gatech.edu/education/ME6103/NLP-Unconstrained-Multivariable.ppt
104. Conjugate Gradient Method (2)
- Choose x0; compute g0 = ∇f(x0); set h0 = −g0.
- Using a line search, find αk that minimizes f(xk + αk hk).
- Set x_{k+1} = x_k + αk hk.
- Compute βk (Fletcher-Reeves or Polak-Ribière formula).
- Compute the new conjugate direction h_{k+1} = −g_{k+1} + βk hk.
- If the stopping criteria are met, return x; otherwise, repeat the line-search step.
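A minimal sketch of this loop with the Fletcher-Reeves choice of βk and a simple backtracking line search; the helper names, tolerances, and test function are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=200):
    """Minimal nonlinear conjugate-gradient sketch (Fletcher-Reeves beta)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    h = -g                                    # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search for a step alpha that decreases f.
        alpha, fx = 1.0, f(x)
        while f(x + alpha * h) > fx and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * h
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves formula
        h = -g_new + beta * h                 # new conjugate direction
        x, g = x_new, g_new
    return x

# Example: minimize the quadratic f(x, y) = x^2 + 10 y^2.
f = lambda x: x[0] ** 2 + 10 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
print(conjugate_gradient(f, grad, x0=[3.0, 2.0]))
```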
105. Branch and Bound
106. Clustering Methods
- Used for unconstrained real-valued functions.
- Multistart procedure: local searches are performed from multiple points distributed over the search space.
- The same local minimum may be identified by multiple points. Clustering methods attempt to avoid this inefficiency by careful selection of the points at which the local search is initiated.
107. Clustering Methods: General Algorithm
- Sample points in the search space.
- Transform the sampled points to group them around the local minima.
- Apply a clustering technique to identify groups representing convergence basins of local minima.
108. Clustering Methods: Disadvantages
- Because a potentially large number of points are randomly sampled to identify the clusters in neighborhoods of local minima, the objective function must be relatively inexpensive to compute.
- Clustering methods are not suited to high-dimensional problems (more than a few hundred variables).
- However, coarse-grained parallelism may be useful in improving efficiency.
109. Genetic Algorithms and Evolutionary Strategies
110. Genetic Algorithms
- Initialize the population.
- Evaluate the function values of the population.
- Perform competitive selection.
- Generate new solutions from genetic operators.
- If the stopping criteria are not met, repeat from the evaluation step; otherwise, end.
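As an illustration of this loop, a minimal real-coded GA sketch; the operator choices (tournament selection, uniform crossover, Gaussian mutation), helper names, and test function are my own assumptions, not from the talk:

```python
import numpy as np

def genetic_algorithm(f, bounds, pop_size=40, generations=100, mut_sigma=0.1):
    """Minimal real-coded GA sketch following the loop above (illustrative only)."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))     # initialize the population
    for _ in range(generations):
        fit = np.array([f(x) for x in pop])                 # evaluate function values
        children = []
        for _ in range(pop_size):
            # Competitive (tournament) selection of two parents.
            i, j, k, l = rng.integers(0, pop_size, 4)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            p2 = pop[k] if fit[k] < fit[l] else pop[l]
            # Genetic operators: uniform crossover + Gaussian mutation.
            mask = rng.random(len(lo)) < 0.5
            child = np.where(mask, p1, p2) + rng.normal(0, mut_sigma, len(lo))
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    fit = np.array([f(x) for x in pop])
    return pop[np.argmin(fit)]

# Example: minimize the 2-D Rastrigin function on [-5.12, 5.12]^2.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(genetic_algorithm(rastrigin, bounds=[(-5.12, 5.12)] * 2))
```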
112. Global and local optimization
113. Global and local optimization
114. Local Optimization
Start
115. Local Optimization
End
116. Global Optimization
Start
117. Global Optimization
End
118. Particle Swarm Optimization (PSO)
- Relatively new GO technique (Kennedy & Eberhart, 1995).
- Iterative, population-based GO method.
- Based on a co-operative model of individual (particle) interactions.
- In contrast to the generally competitive model of genetic algorithms and evolutionary strategies.
119. Particle Swarm Optimization
- Each individual, or particle, has memory of the best position it has found so far (best response), and of the best position found by any particle.
- The next move of the particle is determined by its velocity, v, which is a function of its personal-best and global-best locations.
120. Position Update in PSO
The velocity of particle j, parameter i, at the (t+1)-th iteration is the sum of:
- the previous velocity,
- the effect of the best position found by particle j (personal best), and
- the effect of the best position found by any particle during the entire search (global best).
The position update then adds the new velocity to the current position.
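The equations on the original slide are not preserved; the standard PSO update, consistent with the terms above and the parameters on the next slide (with p_j particle j's personal best and g the global best), is:
\[
v_{ij}^{\,t+1} = w\, v_{ij}^{\,t} + C_1 \varphi_1 \left( p_{ij} - x_{ij}^{\,t} \right) + C_2 \varphi_2 \left( g_{i} - x_{ij}^{\,t} \right),
\qquad
x_{ij}^{\,t+1} = x_{ij}^{\,t} + v_{ij}^{\,t+1}.
\]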
121. PSO Parameters
- Maximum velocity vmax
- Prevents particles from moving outside the region of feasible solutions.
- Inertial weight w
- Determines the degree to which the particle should stay in the same direction as in the last iteration.
- C1, C2
- Control constants.
- φ1, φ2
- Random numbers that add stochasticity to the update.
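A minimal PSO sketch using the update and parameters described above; the parameter values, velocity clamp, helper names, and test function are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization sketch (illustrative only)."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                   # velocities
    vmax = 0.2 * (hi - lo)                             # maximum velocity
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        phi1, phi2 = rng.random((2, n_particles, dim)) # random numbers (stochasticity)
        v = w * v + c1 * phi1 * (pbest - x) + c2 * phi2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, lo, hi)                     # position update
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest

# Example: minimize the 2-D Rastrigin function.
rastrigin = lambda p: 10 * len(p) + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))
print(pso(rastrigin, bounds=[(-5.12, 5.12)] * 2))
```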
122. Particle Swarm Example
[Panels: particle positions at iterations 1, 5, 10, 20, 30, and the last iteration (41)]
123. Proposed Adaptation of PSO to Biomedical Image Registration
- Incorporation of the constriction factor c (Clerc & Kennedy, 2002).
- Registration functions using generalized similarity metrics and function stretching (Parsopoulos et al., 2001) in the global search.
- Addition of a new memory term, that of the initial orientation, as the initial position is usually carefully chosen.
124. Proposed Adaptation of PSO to Biomedical Image Registration
The modified velocity update combines, under the constriction factor, the previous velocity, a personal-best term, a global-best term, and an initial-orientation term (the initial orientation).
φ1, φ2, φ3: control parameters; u1, u2, u3: uniformly distributed random numbers.
125. Proposed Adaptation of PSO to Biomedical Image Registration
126. Parameter Estimation
- Given a set of experimental data, calibrate the model so as to reproduce the experimental results in the best possible way (Moles et al., 2003).
- Inverse problem.
- Solutions to these problems are instrumental in the development of dynamic models that promote functional understanding at the systems level.
127. Parameter Estimation (2)
- Goal: minimize an objective function that measures the goodness of fit of the model with respect to a given experimental data set, subject to the dynamics of the system (constraints).
http://www.genome.org/cgi/content/full/13/11/2467
128. Example: Biochemical Pathways
- Nonlinear programming problem with differential-algebraic constraints.
- The problems are frequently nonconvex and multimodal; therefore, global methods are required.
- Multistart strategies do not work in this case: multiple starts converge to the same local minimum.
129. Biochemical Pathways (2)
130. Grid Computing
131. Parallel Optimization
[Diagram (as on slide 48): coarse-grained parallelism (DIRECT) and fine-grained parallelism (Powell, MDS) over the similarity-metric pipeline:]
- Compute transformation matrix.
- Apply transformation to all voxels.
- Determine valid coordinates.
- Interpolate image intensities.
- Compute marginal and joint densities.
- Calculate the similarity metric.
Wachowiak and Peters, IEEE TITB, 2006; Wachowiak and Peters, IEEE HPCS, 2005, 50-56.
132. Optimization Software
- OPT (http://hpcrd.lbl.gov/meza/projects/opt/OPTdoc/html/index.html)
133. References
- http://www.cs.sandia.gov/opt/survey/
134. Example
[Figure: two x-y plots]