1
CS5321 Numerical Optimization
  • 05 Conjugate Gradient Methods

2
Conjugate gradient methods
  • For convex quadratic problems,
  • the steepest descent method converges slowly;
  • Newton's method is expensive, since it solves Ax = b directly;
  • the conjugate gradient method solves Ax = b iteratively.
  • Outline
  • Conjugate directions
  • Linear conjugate gradient method
  • Nonlinear conjugate gradient method

3
Quadratic optimization problem
  • Consider the quadratic optimization problem
    min f (x) = (1/2) xT A x − bT x
  • where A is symmetric positive definite.
  • The optimal solution is at ∇f (x) = 0.
  • Define r(x) = ∇f (x) = Ax − b (the residual).
  • Goal: solve Ax = b without inverting A (an iterative method).
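A small numeric check of these definitions (not from the slides; the helper names matvec, dot, residual and the 2-by-2 example are my own):

```python
# Evaluate f(x) = (1/2) x^T A x - b^T x and the residual r(x) = A x - b
# for a small symmetric positive definite example, using plain Python lists.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]

def f(x):
    # f(x) = (1/2) x^T A x - b^T x
    return 0.5 * dot(x, matvec(A, x)) - dot(b, x)

def residual(x):
    # r(x) = grad f(x) = A x - b
    return [ax - bi for ax, bi in zip(matvec(A, x), b)]

x_star = [1.0 / 11.0, 7.0 / 11.0]   # exact solution of A x = b for this A, b
print(residual(x_star))             # vanishes (up to rounding) at the minimizer
```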

4
Steepest descent / line search
  • Given an initial guess x0.
  • The search direction: pk = −∇fk = −rk = b − Axk
  • The optimal step length: αk = argmin f (xk + α pk) = (rkT rk) / (rkT A rk)
  • Update xk+1 = xk + αk pk. Go to step 2.
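The steps above can be sketched in plain Python (a sketch for the quadratic model only; all names are my own):

```python
# Steepest descent with exact step length for f(x) = (1/2) x^T A x - b^T x:
#   p_k = -r_k = b - A x_k,   alpha_k = (r_k^T r_k) / (r_k^T A r_k)

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10000):
    x = list(x0)
    for k in range(max_iter):
        r = [ax - bi for ax, bi in zip(matvec(A, x), b)]   # residual A x - b
        if dot(r, r) ** 0.5 < tol:
            return x, k
        p = [-ri for ri in r]                        # steepest descent direction
        alpha = dot(r, r) / dot(p, matvec(A, p))     # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
    return x, max_iter

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x, iters = steepest_descent(A, b, [0.0, 0.0])
```

For this well-conditioned A the iteration converges quickly; for ill-conditioned A the characteristic zig-zag makes it slow, which is the motivation for conjugate directions on the next slides.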

5
Conjugate direction
  • For a symmetric positive definite matrix A, one
    can define the A-inner-product as ⟨x, y⟩A = xT A y.
  • The A-norm is defined as ‖x‖A = (xT A x)^(1/2).
  • Two vectors x and y are A-conjugate for a
    symmetric positive definite matrix A if xT A y = 0,
  • i.e., x and y are orthogonal under the A-inner-product.
  • The conjugate directions are a set of search
    directions p0, p1, p2, …, such that piT A pj = 0 for
    any i ≠ j.
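A quick numeric illustration (the example matrix and vectors are my own): two vectors can be A-conjugate without being orthogonal in the ordinary sense.

```python
# x and y are A-conjugate iff x^T A y = 0, i.e. orthogonal in the
# A-inner-product <x, y>_A = x^T A y.

def a_inner(A, x, y):
    Ay = [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(A))]
    return sum(xi * ai for xi, ai in zip(x, Ay))

A = [[2.0, 0.0], [0.0, 1.0]]
x = [1.0, 2.0]
y = [1.0, -1.0]

print(a_inner(A, x, y))          # x^T A y = 2 - 2 = 0: A-conjugate
print(x[0] * y[0] + x[1] * y[1]) # ordinary x^T y = -1: not orthogonal
```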

6
Example
7
Conjugate gradient
  • A better result can be obtained if the current
    search direction is combined with the previous one:
    pk+1 = −rk+1 + βk+1 pk
  • Requiring pk+1 to be A-conjugate to pk, i.e.
    pk+1T A pk = 0, gives
    βk+1 = (rk+1T A pk) / (pkT A pk)

8
The linear CG algorithm
  • With some linear algebra, the algorithm can be
    simplified as
  • Given x0, set r0 = Ax0 − b, p0 = −r0
  • For k = 0, 1, 2, …, until rk = 0:
    αk = (rkT rk) / (pkT A pk)
    xk+1 = xk + αk pk
    rk+1 = rk + αk A pk
    βk+1 = (rk+1T rk+1) / (rkT rk)
    pk+1 = −rk+1 + βk+1 pk
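The linear CG iteration can be transcribed directly into plain Python (helper names are mine; the update formulas in the comments are the standard linear CG recurrences):

```python
# Linear conjugate gradient:
#   r0 = A x0 - b, p0 = -r0, then per step
#   alpha_k = r_k^T r_k / p_k^T A p_k
#   x_{k+1} = x_k + alpha_k p_k
#   r_{k+1} = r_k + alpha_k A p_k
#   beta_{k+1} = r_{k+1}^T r_{k+1} / r_k^T r_k
#   p_{k+1} = -r_{k+1} + beta_{k+1} p_k

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, x0, tol=1e-12):
    x = list(x0)
    r = [ax - bi for ax, bi in zip(matvec(A, x), b)]   # r0 = A x0 - b
    p = [-ri for ri in r]                              # p0 = -r0
    rr = dot(r, r)
    for _ in range(len(b)):        # at most n steps in exact arithmetic
        if rr ** 0.5 < tol:
            break
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri + alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        beta = rr_new / rr
        p = [-ri + beta * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

Note that only the product Ap is ever needed, so A can be supplied implicitly as a function rather than a stored matrix.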

9
Properties of linear CG
  • One matrix-vector multiplication per iteration.
  • Only four vectors need be stored: xk, rk, pk, Apk.
  • Matrix A can be stored implicitly (only the
    product Apk is required).
  • In exact arithmetic, CG converges in at most r
    iterations, where r is the number of distinct
    eigenvalues of A.
  • If A has eigenvalues λ1 ≤ λ2 ≤ … ≤ λn, then
    ‖xk+1 − x*‖A ≤ ((λn−k − λ1) / (λn−k + λ1)) ‖x0 − x*‖A
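The distinct-eigenvalue property can be checked numerically (my own diagonal example; the compact CG below exploits the diagonal structure only in the matrix-vector product):

```python
# diag(1, 1, 2, 2) has r = 2 distinct eigenvalues, so CG should solve
# A x = b in 2 iterations (up to rounding), even though n = 4.

def cg_iterations(diag, b, tol=1e-10):
    n = len(b)
    x = [0.0] * n
    r = [-bi for bi in b]            # r0 = A*0 - b = -b
    p = [bi for bi in b]             # p0 = -r0
    rr = sum(ri * ri for ri in r)
    k = 0
    while rr ** 0.5 > tol and k < n:
        Ap = [d * pi for d, pi in zip(diag, p)]    # A p for diagonal A
        alpha = rr / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri + alpha * api for ri, api in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        beta = rr_new / rr
        p = [-ri + beta * pi for ri, pi in zip(r, p)]
        rr = rr_new
        k += 1
    return x, k

x, k = cg_iterations([1.0, 1.0, 2.0, 2.0], [1.0, 2.0, 3.0, 4.0])
print(k)   # 2: one iteration per distinct eigenvalue
```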

10
CG for nonlinear optimization
  • The Fletcher-Reeves method
  • Given x0. Set p0 = −∇f0.
  • For k = 0, 1, …, until ∇fk = 0:
  • Compute the optimal step length αk and set
    xk+1 = xk + αk pk
  • Evaluate ∇fk+1
  • Set βk+1 = (∇fk+1T ∇fk+1) / (∇fkT ∇fk) and
    pk+1 = −∇fk+1 + βk+1 pk
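A minimal Fletcher-Reeves sketch. The test function, the Armijo backtracking line search (standing in for the slide's "optimal step length"), and the restart safeguards are my own choices, not from the slides; only the β formula is Fletcher-Reeves.

```python
import math

def f(x):
    # smooth, strictly convex test function with minimizer (1, -2)
    return math.cosh(x[0] - 1.0) + math.cosh(x[1] + 2.0)

def grad(x):
    return [math.sinh(x[0] - 1.0), math.sinh(x[1] + 2.0)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def armijo(x, p, g, c=1e-4, rho=0.5):
    # backtracking line search: shrink alpha until sufficient decrease holds
    alpha, fx, gp = 1.0, f(x), dot(g, p)
    while f([xi + alpha * pi for xi, pi in zip(x, p)]) > fx + c * alpha * gp:
        alpha *= rho
    return alpha

def fletcher_reeves(x0, tol=1e-6, max_iter=1000):
    x = list(x0)
    g = grad(x)
    p = [-gi for gi in g]
    gg = dot(g, g)
    n = len(x)
    for k in range(max_iter):
        if gg ** 0.5 < tol:
            break
        if k % n == 0 or dot(g, p) >= 0.0:
            p = [-gi for gi in g]          # periodic / non-descent restart
        alpha = armijo(x, p, g)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        g_new = grad(x)
        beta = dot(g_new, g_new) / gg      # Fletcher-Reeves beta
        p = [-gi + beta * pi for gi, pi in zip(g_new, p)]
        g, gg = g_new, dot(g_new, g_new)
    return x

x = fletcher_reeves([0.0, 0.0])
```

The restarts are a common practical safeguard: without a guaranteed descent direction, a backtracking line search need not terminate.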

11
Other choices of ?
  • Polak-Ribière:
    βk+1 = ∇fk+1T(∇fk+1 − ∇fk) / (∇fkT ∇fk)
  • Hestenes-Stiefel:
    βk+1 = ∇fk+1T(∇fk+1 − ∇fk) / ((∇fk+1 − ∇fk)T pk)
  • Y. Dai and Y. Yuan (1999):
    βk+1 = (∇fk+1T ∇fk+1) / ((∇fk+1 − ∇fk)T pk)
  • W. Hager and H. Zhang (2005)
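A small comparison of two of these formulas (the example gradient vectors are my own). With yk = ∇fk+1 − ∇fk, Polak-Ribière differs from Fletcher-Reeves only by the cross term ∇fk+1T∇fk, so the two coincide whenever successive gradients are orthogonal, as in linear CG with exact line search:

```python
# FR: beta = g_{k+1}^T g_{k+1} / g_k^T g_k
# PR: beta = g_{k+1}^T (g_{k+1} - g_k) / g_k^T g_k

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def beta_fr(g_new, g_old):
    return dot(g_new, g_new) / dot(g_old, g_old)

def beta_pr(g_new, g_old):
    y = [a - b for a, b in zip(g_new, g_old)]
    return dot(g_new, y) / dot(g_old, g_old)

g_old = [1.0, 0.0]
g_new = [0.0, 2.0]       # orthogonal to g_old
print(beta_fr(g_new, g_old), beta_pr(g_new, g_old))   # both equal 4.0
```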