Transcript and Presenter's Notes

Title: CSE 245: Computer Aided Circuit Simulation and Verification


1
CSE 245: Computer Aided Circuit Simulation and Verification
Matrix Computations: Iterative Methods (II)
Chung-Kuan Cheng
2
Outline
  • Introduction
  • Direct Methods
  • Iterative Methods
  • Formulations
  • Projection Methods
  • Krylov Space Methods
  • Preconditioned Iterations
  • Multigrid Methods
  • Domain Decomposition Methods

3
Introduction
  • Direct Method: LU Decomposition. General and robust, but can be complicated if N > 1M.
  • Iterative Methods:
  • Stationary: Jacobi, Gauss-Seidel
  • Krylov subspace: Conjugate Gradient, GMRES
  • Preconditioning
  • Multigrid
  • Domain Decomposition
  • Excellent choice for SPD matrices; remains an art for arbitrary matrices.
4
Formulation
  • Error in A norm
  • Matrix A is SPD (Symmetric and Positive Definite)
  • Minimal Residual
  • Matrix A can be arbitrary

5
Formulation: Error in A norm
  • Min (x - x*)^T A (x - x*), where Ax* = b, and A is SPD.
  • Min E(x) = 1/2 x^T A x - b^T x
  • Search space: x = x0 + Vy,
  • where x0 is an initial solution,
  • matrix V (n x m) holds m basis vectors of the subspace K,
  • vector y (m x 1) contains the m variables.

6
Solution: Error in A norm
  • Min E(x) = 1/2 x^T A x - b^T x
  • Search space: x = x0 + Vy, r = b - Ax
  • V^T A V is nonsingular if A is SPD and V has full rank
  • y = (V^T A V)^-1 V^T r0
  • x = x0 + V (V^T A V)^-1 V^T r0
  • V^T r = 0
  • E(x) = E(x0) - 1/2 r0^T V (V^T A V)^-1 V^T r0
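A minimal numpy sketch of this projection step, with made-up data for the SPD matrix A, right-hand side b, and basis V (none of these come from the slides):

    import numpy as np

    # Example SPD system and a 2-dimensional search subspace (arbitrary data).
    n, m = 6, 2
    rng = np.random.default_rng(0)
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)          # SPD by construction
    b = rng.standard_normal(n)
    x0 = np.zeros(n)
    V = rng.standard_normal((n, m))      # m basis vectors of the subspace K

    r0 = b - A @ x0
    y = np.linalg.solve(V.T @ A @ V, V.T @ r0)   # y = (V^T A V)^-1 V^T r0
    x = x0 + V @ y

    r = b - A @ x
    print(V.T @ r)    # ~0: the new residual is orthogonal to the subspace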

7
Solution: Error in A norm
  • For any V' = VW, where V is n x m and W is a nonsingular m x m matrix, the solution x remains the same.
  • Search space: x = x0 + V'y, r = b - Ax
  • V'^T A V' is nonsingular if A is SPD and V has full rank
  • x = x0 + V' (V'^T A V')^-1 V'^T r0
  •     = x0 + V (V^T A V)^-1 V^T r0
  • V'^T r = W^T V^T r = 0
  • E(x) = E(x0) - 1/2 r0^T V (V^T A V)^-1 V^T r0

8
Steepest Descent: Error in A norm
  • Min E(x) = 1/2 x^T A x - b^T x
  • Gradient: Ax - b = -r
  • Set x = x0 + y r0
  • r0^T A r0 is nonsingular if A is SPD
  • y = (r0^T A r0)^-1 r0^T r0
  • x = x0 + r0 (r0^T A r0)^-1 r0^T r0
  • r0^T r = 0
  • E(x) = E(x0) - 1/2 (r0^T r0)^2 / (r0^T A r0)
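For illustration, a small numpy sketch of repeated steepest-descent steps in this notation (the example system, tolerance, and iteration cap are assumptions):

    import numpy as np

    def steepest_descent(A, b, x0, tol=1e-8, max_iter=500):
        """Minimize E(x) = 1/2 x^T A x - b^T x for SPD A by line search along r."""
        x = x0.copy()
        for _ in range(max_iter):
            r = b - A @ x                      # current residual (negative gradient)
            if np.linalg.norm(r) < tol:
                break
            alpha = (r @ r) / (r @ (A @ r))    # y = (r0^T A r0)^-1 r0^T r0
            x = x + alpha * r
        return x

    # Tiny example (arbitrary SPD system).
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(steepest_descent(A, b, np.zeros(2)))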

9
Lanczos: Error in A norm
  • Min E(x) = 1/2 x^T A x - b^T x
  • Set x = x0 + Vy
  • v1 = r0
  • vi is in K(r0, A, i)
  • V = [v1, v2, ..., vm] is orthogonal
  • AV = V Hm + v_{m+1} e_m^T
  • V^T A V = Hm
  • Note: since A is SPD, Hm = Tm is tridiagonal
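A sketch of the Lanczos recurrence behind these relations, in numpy; the helper name and the test data are made up for illustration:

    import numpy as np

    def lanczos(A, r0, m):
        """Build orthonormal V (n x m) and tridiagonal T (m x m) with V^T A V = T."""
        n = len(r0)
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m)                 # beta[0] is unused
        V[:, 0] = r0 / np.linalg.norm(r0)
        for j in range(m):
            w = A @ V[:, j]
            if j > 0:
                w -= beta[j] * V[:, j - 1]     # three-term recurrence
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j + 1 < m:
                beta[j + 1] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j + 1]
        T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
        return V, T

    # Example: check V^T A V equals the tridiagonal T for a small SPD matrix.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((8, 8))
    A = M @ M.T + 8 * np.eye(8)
    r0 = rng.standard_normal(8)
    V, T = lanczos(A, r0, 4)
    print(np.allclose(V.T @ A @ V, T, atol=1e-6))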

10
Conjugate Gradient: Error in A norm
  • Min E(x) = 1/2 x^T A x - b^T x
  • Set x = x0 + Vy
  • v1 = r0
  • vi is in K(r0, A, i)
  • V = [v1, v2, ..., vm] is orthogonal in the A norm, i.e. V^T A V = diag(vi^T A vi)
  • yi = (vi^T A vi)^-1 vi^T r0

11
Formulation: Residual
  • Min ||r||_2 = ||b - Ax||_2, for an arbitrary square matrix A
  • Min R(x) = (b - Ax)^T (b - Ax)
  • Search space: x = x0 + Vy,
  • where x0 is an initial solution,
  • matrix V (n x m) holds m basis vectors of the subspace K,
  • vector y (m x 1) contains the m variables.

12
Solution: Residual
  • Min R(x) = (b - Ax)^T (b - Ax)
  • Search space: x = x0 + Vy
  • V^T A^T A V is nonsingular if A is nonsingular and V has full rank.
  • y = (V^T A^T A V)^-1 V^T A^T r0
  • x = x0 + V (V^T A^T A V)^-1 V^T A^T r0
  • V^T A^T r = 0
  • R(x) = R(x0) - r0^T A V (V^T A^T A V)^-1 V^T A^T r0

13
Steepest Descent: Residual
  • Min R(x) = (b - Ax)^T (b - Ax)
  • Gradient: -2 A^T (b - Ax) = -2 A^T r
  • Let x = x0 + y A^T r0
  • V^T A^T A V is nonsingular if A is nonsingular, where V = A^T r0.
  • y = (V^T A^T A V)^-1 V^T A^T r0
  • x = x0 + V (V^T A^T A V)^-1 V^T A^T r0
  • V^T A^T r = 0
  • R(x) = R(x0) - r0^T A V (V^T A^T A V)^-1 V^T A^T r0

14
GMRES: Residual
  • Min R(x) = (b - Ax)^T (b - Ax)
  • Gradient: -2 A^T (b - Ax) = -2 A^T r
  • Set x = x0 + Vy
  • v1 = r0
  • vi is in K(r0, A, i)
  • V = [v1, v2, ..., vm] is orthogonal
  • AV = V Hm + v_{m+1} e_m^T = V_{m+1} Hm_bar, where Hm_bar is the (m+1) x m extended Hessenberg matrix
  • x = x0 + V (V^T A^T A V)^-1 V^T A^T r0
  •     = x0 + V (Hm_bar^T Hm_bar)^-1 Hm_bar^T e1 ||r0||_2
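A minimal numpy sketch of GMRES via the Arnoldi process, following the relations above; the function name, example matrix, and the choice m = 6 are illustrative assumptions (no breakdown or restart handling):

    import numpy as np

    def gmres(A, b, x0, m):
        """One unrestarted GMRES sweep of m Arnoldi steps; minimizes ||b - Ax||_2
        over x0 + K_m(A, r0). Sketch only, no lucky-breakdown check."""
        n = len(b)
        r0 = b - A @ x0
        beta = np.linalg.norm(r0)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))              # extended Hessenberg matrix Hm_bar
        V[:, 0] = r0 / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        # Least-squares problem: min_y || beta*e1 - Hm_bar y ||_2
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H, e1, rcond=None)
        return x0 + V[:, :m] @ y

    # Example with an arbitrary nonsymmetric matrix.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((8, 8)) + 8 * np.eye(8)
    b = rng.standard_normal(8)
    x = gmres(A, b, np.zeros(8), m=6)
    print(np.linalg.norm(b - A @ x))          # reduced residual after 6 Arnoldi steps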

15
Conjugate Residual: Residual
  • Min R(x) = (b - Ax)^T (b - Ax)
  • Gradient: -2 A^T (b - Ax) = -2 A^T r
  • Set x = x0 + Vy
  • v1 = r0
  • vi is in K(r0, A, i)
  • (AV)^T AV = D, a diagonal matrix
  • x = x0 + V (V^T A^T A V)^-1 V^T A^T r0
  •     = x0 + V D^-1 V^T A^T r0

16
Conjugate Gradient Method
  • Steepest Descent
  • Repeats search directions
  • Why take exactly one step for each direction?

[Figure: search directions of the steepest descent method]
17
Orthogonal Direction
Pick orthogonal search directions. We would like to leave the error of x_{i+1} orthogonal to the search direction di, i.e. di^T e_{i+1} = 0.
  • We don't know the error e_i!

18
Orthogonal -> A-orthogonal
  • Instead of orthogonal search directions, we make the search directions A-orthogonal (conjugate).

19
Search Step Size
  • αi = (di^T ri) / (di^T A di), the exact line-search step along di
20
Iteration finishes in n steps
Expand the initial error in the A-orthogonal directions. The error component along direction dj is eliminated at step j. After n steps, all error components are eliminated.
21
Conjugate Search Direction
  • How to construct A-orthogonal search directions, given a set of n linearly independent vectors?
  • Since the residual vectors in the steepest descent method are orthogonal, they are a good candidate to start with.

22
Construct Search Direction - 1
  • In the Steepest Descent Method, the new residual is just a linear combination of the previous residual and A times the previous residual.
  • The search directions therefore span a Krylov subspace: repeatedly applying a matrix to a vector, K_i = span{r0, A r0, ..., A^{i-1} r0}.
23
Construct Search Direction - 2
  • Let d0 = r0.
  • For i > 0, construct di from ri by making it A-orthogonal (conjugate) to the previous directions.
24
Construct Search Direction - 3
  • We can get the next direction from the previous one, without saving them all:
  • let βi = (ri^T ri) / (r_{i-1}^T r_{i-1}); then di = ri + βi d_{i-1}.
25
Conjugate Gradient Algorithm
Given x0, iterate until the residual is smaller than the error tolerance.
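As a sketch, the loop this slide refers to might look as follows in numpy (tolerance, iteration cap, and test system are illustrative choices, not from the slides):

    import numpy as np

    def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
        """Conjugate gradient for SPD A: iterate until the residual is below tol."""
        x = x0.copy()
        r = b - A @ x
        d = r.copy()                              # first direction = residual
        rs_old = r @ r
        max_iter = max_iter or len(b)
        for _ in range(max_iter):
            Ad = A @ d
            alpha = rs_old / (d @ Ad)             # exact step along d
            x = x + alpha * d
            r = r - alpha * Ad
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            beta = rs_new / rs_old                # makes the new d A-orthogonal to old ones
            d = r + beta * d
            rs_old = rs_new
        return x

    # Example on a small SPD system (arbitrary data).
    rng = np.random.default_rng(3)
    M = rng.standard_normal((10, 10))
    A = M @ M.T + 10 * np.eye(10)
    b = rng.standard_normal(10)
    x = conjugate_gradient(A, b, np.zeros(10))
    print(np.linalg.norm(b - A @ x))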
26
Conjugate Gradient: Convergence
  • In exact arithmetic, CG converges in n steps (completely unrealistic!!)
  • Accuracy after k steps of CG is related to:
  • consider polynomials of degree k that are equal to 1 at 0;
  • how small can such a polynomial be at all the eigenvalues of A?
  • Eigenvalues close together are good.
  • Condition number: κ(A) = ||A||_2 ||A^-1||_2 = λmax(A) / λmin(A)
  • Residual is reduced by a constant factor by O(κ(A)^{1/2}) iterations of CG.
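The standard bound behind the last statement (a known result for CG with SPD A, not spelled out on the slide) is

    \| e_k \|_A \le 2 \left( \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^{k} \| e_0 \|_A

so reducing the A-norm of the error by a fixed factor takes O(κ(A)^{1/2}) iterations.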

27
Other Krylov subspace methods
  • Nonsymmetric linear systems:
  • GMRES: for i = 1, 2, 3, . . . find xi in Ki(A, b) such that ri = (A xi - b) is orthogonal to Ki(A, b). But there is no short recurrence, so old vectors must be saved, which costs much more space. (Usually restarted every k iterations to use less space.)
  • BiCGStab, QMR, etc.: two spaces Ki(A, b) and Ki(A^T, b) with mutually orthogonal bases. Short recurrences, so O(n) space, but less robust.
  • Convergence and preconditioning are more delicate than for CG.
  • Active area of current research.
  • Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric)

28
Preconditioners
  • Suppose you had a matrix B such that
  • (1) the condition number κ(B^-1 A) is small, and
  • (2) Bz = y is easy to solve.
  • Then you could solve (B^-1 A) x = B^-1 b instead of Ax = b.
  • B = A is great for (1), not for (2).
  • B = I is great for (2), not for (1).
  • Domain-specific approximations sometimes work.
  • B = diagonal of A sometimes works.
  • Better: blend in some direct-methods ideas. . .
29
Preconditioned conjugate gradient iteration
x0 0, r0 b, d0 B-1 r0, y0
B-1 r0 for k 1, 2, 3, . . . ak
(yTk-1rk-1) / (dTk-1Adk-1) step length xk
xk-1 ak dk-1 approx
solution rk rk-1 ak Adk-1
residual yk B-1 rk
preconditioning
solve ßk (yTk rk) / (yTk-1rk-1)
improvement dk yk ßk dk-1
search direction
  • One matrix-vector multiplication per iteration
  • One solve with preconditioner per iteration
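A numpy sketch of this loop, using a Jacobi (diagonal) preconditioner B = diag(A) as one concrete stand-in for the generic B; the helper name and example data are assumptions:

    import numpy as np

    def pcg(A, b, B_diag, tol=1e-10, max_iter=200):
        """Preconditioned CG; B_diag holds the diagonal of the preconditioner B."""
        x = np.zeros_like(b)
        r = b.copy()                      # r0 = b - A x0 with x0 = 0
        y = r / B_diag                    # y0 = B^-1 r0 (diagonal solve)
        d = y.copy()
        ry_old = y @ r
        for _ in range(max_iter):
            Ad = A @ d
            alpha = ry_old / (d @ Ad)     # step length
            x = x + alpha * d             # approximate solution
            r = r - alpha * Ad            # residual
            if np.linalg.norm(r) < tol:
                break
            y = r / B_diag                # preconditioning solve
            ry_new = y @ r
            beta = ry_new / ry_old        # improvement
            d = y + beta * d              # search direction
            ry_old = ry_new
        return x

    # Example: SPD matrix with an uneven diagonal, Jacobi preconditioner B = diag(A).
    rng = np.random.default_rng(4)
    M = rng.standard_normal((12, 12))
    A = M @ M.T + np.diag(np.arange(1, 13) ** 2)
    b = rng.standard_normal(12)
    x = pcg(A, b, np.diag(A))
    print(np.linalg.norm(b - A @ x))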

30
Outline
  • Iterative Methods
  • Stationary Iterative Methods (SOR, GS, Jacobi)
  • Krylov Methods (CG, GMRES)
  • Multigrid Method

31
What is multigrid?
  • A multilevel iterative method to solve Ax = b
  • Originated in PDEs on geometric grids
  • Extends to unstructured problems: Algebraic MG
  • Geometric multigrid is used here to present the basic ideas of the multigrid method.

32
The model problem
Ax = b
33
Simple iterative method
  • x(0) -> x(1) -> ... -> x(k)
  • Jacobi iteration
  • Matrix form: x(k) = Rj x(k-1) + Cj
  • General form: x(k) = R x(k-1) + C   (1)
  • Stationary: x* = R x* + C   (2)
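A small numpy sketch of the Jacobi form of (1), with a made-up diagonally dominant test matrix:

    import numpy as np

    def jacobi(A, b, x0, num_iters):
        """Stationary iteration x(k) = R x(k-1) + C with R = D^-1 (D - A), C = D^-1 b."""
        D = np.diag(A)
        R = -(A - np.diag(D)) / D[:, None]     # R = I - D^-1 A off the diagonal
        C = b / D
        x = x0.copy()
        for _ in range(num_iters):
            x = R @ x + C
        return x

    # Diagonally dominant example so the iteration converges (spectral radius of R < 1).
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(jacobi(A, b, np.zeros(3), 50))
    print(np.linalg.solve(A, b))               # exact solution for comparison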

34
Error and Convergence
  • Definition: error e = x* - x   (3)
  • residual r = b - Ax   (4)
  • e, r relation: Ae = r   (5)  (from (3), (4))
  • e(1) = x* - x(1) = (R x* + C) - (R x(0) + C) = R e(0)
  • Error equation: e(k) = R^k e(0)   (6)  (from (1), (2), (3))
  • Convergence: e(k) -> 0 for any e(0) iff the spectral radius ρ(R) < 1

35
Error of different frequency
  • Wavenumber k and frequency ω
  • ω = kπ/n
  • High frequency error is more oscillatory between points

36
Iterations do not reduce low frequency error efficiently
  • Smoothing iterations reduce high frequency error efficiently, but not low frequency error.

[Figure: error vs. iterations for wavenumbers k = 1, 2, 4]
37
Multigrid: a first glance
  • Two levels: coarse and fine grid

  Fine grid Ωh:    Ah xh = bh    (the original Ax = b)
  Coarse grid Ω2h: A2h x2h = b2h
38
Idea 1: the V-cycle iteration
  • Also called the nested iteration

  Start with A2h x2h = b2h on the coarse grid Ω2h and iterate;
  prolongation moves the result up to the fine grid Ωh, restriction moves back down;
  iterate on Ah xh = bh to get the fine-grid solution.

Question 1: Why do we need the coarse grid?
39
Prolongation
  • Prolongation (interpolation) operator I_2h^h
  • xh <- x2h

40
Restriction
  • Restriction operator I_h^2h
  • x2h <- xh
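A sketch of the usual 1D choices for these two operators (linear interpolation for prolongation, full weighting for restriction); the specific stencils are the standard textbook ones and are an assumption, since the slides only show them as figures:

    import numpy as np

    def prolongation_1d(n_coarse):
        """Linear interpolation I_2h^h: maps n_coarse interior points to 2*n_coarse + 1."""
        n_fine = 2 * n_coarse + 1
        P = np.zeros((n_fine, n_coarse))
        for j in range(n_coarse):
            P[2 * j,     j] = 0.5        # left fine neighbor
            P[2 * j + 1, j] = 1.0        # coincident fine point
            P[2 * j + 2, j] = 0.5        # right fine neighbor
        return P

    def restriction_1d(n_coarse):
        """Full-weighting restriction I_h^2h = 1/2 * (I_2h^h)^T."""
        return 0.5 * prolongation_1d(n_coarse).T

    # Example: interpolate a coarse vector to the fine grid and restrict it back.
    xc = np.array([1.0, 2.0, 3.0])
    P = prolongation_1d(3)
    R = restriction_1d(3)
    print(P @ xc)          # fine-grid interpolant
    print(R @ (P @ xc))    # back on the coarse grid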

41
Smoothing
  • The basic iterations at each level
  • In Ωh: xh_old -> xh_new
  • The iteration reduces the error and makes the error geometrically smooth.
  • So the iteration is called smoothing.

42
Why multilevel?
  • Coarse level iteration is cheap.
  • More than this:
  • Coarse level smoothing reduces the error more efficiently than the fine level, in some sense.
  • Why? (Question 2)

43
Error restriction
  • Mapping the error to the coarse grid makes the error more oscillatory.

[Figure: the same error mode, k = 4, shown at frequency ω = π and at ω = π/2]
44
Idea 2: Residual correction
  • Known: the current solution x
  • Solving Ax = b is equivalent to solving the residual equation Ae = r
  • MG does NOT map x directly between levels
  • Map the residual equation to the coarse level
  • Calculate rh
  • b2h = I_h^2h rh   (Restriction)
  • eh = I_2h^h x2h   (Prolongation)
  • xh = xh + eh

45
Why residual correction?
  • The error is smooth at the fine level, but the actual solution may not be.
  • Prolongation results in a smooth correction at the fine level, which is supposed to be a good approximation of the fine level error.
  • If the solution is not smooth at the fine level, prolongating it directly would introduce more high frequency error.

46
Revised V-cycle with idea 2
  Fine grid Ωh -> coarse grid Ω2h (restriction) and back (prolongation):
  • Smoothing on xh
  • Calculate rh
  • b2h = I_h^2h rh   (Restriction)
  • Smoothing on x2h
  • eh = I_2h^h x2h   (Prolongation)
  • Correct: xh = xh + eh
47
What is A2h?
  • Galerkin condition: A2h = I_h^2h Ah I_2h^h
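Putting the Galerkin condition together with the revised V-cycle above, a minimal two-grid correction sketch; the 1D Poisson model matrix and the weighted-Jacobi smoother are illustrative assumptions:

    import numpy as np

    def poisson_1d(n):
        """Standard 1D Poisson matrix on n interior points: tridiag(-1, 2, -1)."""
        return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    def weighted_jacobi(A, b, x, sweeps=3, w=2.0 / 3.0):
        D = np.diag(A)
        for _ in range(sweeps):
            x = x + w * (b - A @ x) / D
        return x

    def two_grid_cycle(A_h, b_h, x_h, P, R):
        A_2h = R @ A_h @ P                      # Galerkin condition: A2h = I_h^2h Ah I_2h^h
        x_h = weighted_jacobi(A_h, b_h, x_h)    # pre-smoothing on the fine grid
        r_h = b_h - A_h @ x_h                   # fine-grid residual
        b_2h = R @ r_h                          # restriction of the residual
        e_2h = np.linalg.solve(A_2h, b_2h)      # coarse-grid solve (or recurse)
        x_h = x_h + P @ e_2h                    # prolongation and correction
        return weighted_jacobi(A_h, b_h, x_h)   # post-smoothing

    # Example: 7 fine / 3 coarse interior points, with the standard 1D operators.
    n_c = 3
    n_f = 2 * n_c + 1
    P = np.zeros((n_f, n_c))
    for j in range(n_c):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    R = 0.5 * P.T
    A = poisson_1d(n_f)
    b = np.ones(n_f)
    x = two_grid_cycle(A, b, np.zeros(n_f), P, R)
    print(np.linalg.norm(b - A @ x))            # reduced residual after one cycle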

48
Going to multilevels
  • V-cycle and W-cycle
  • Full Multigrid V-cycle

[Figure: level schedules over grids h, 2h, 4h (V-cycle, W-cycle) and h, 2h, 4h, 8h (full multigrid)]
49
Performance of Multigrid
  • Complexity comparison

  Gaussian elimination     O(N^2)
  Jacobi iteration         O(N^2 log ε)
  Gauss-Seidel             O(N^2 log ε)
  SOR                      O(N^(3/2) log ε)
  Conjugate gradient       O(N^(3/2) log ε)
  Multigrid (iterative)    O(N log ε)
  Multigrid (FMG)          O(N)
50
Summary of MG ideas
  • Important ideas of MG
  • Hierarchical iteration
  • Residual correction
  • Galerkin condition
  • Smoothing the error
  • high frequency fine grid
  • low frequency coarse grid

51
AMG for unstructured grids
  • Ax = b, with no regular grid structure
  • The fine grid is defined from A

52
Three questions for AMG
  • How to choose the coarse grid?
  • How to define the smoothness of errors?
  • How are interpolation and prolongation done?

53
How to choose the coarse grid
  • Idea:
  • C/F splitting
  • As few coarse grid points as possible
  • For each F-node, at least one of its neighbors is a C-node
  • Choose nodes with strong coupling to other nodes as C-nodes

[Figure: example graph with nodes 1-6 illustrating a C/F splitting]
54
How to define the smoothness of error
  • AMG fundamental concept:
  • Smooth error = small residuals
  • ||r|| << ||e||

55
How are prolongation and restriction done?
  • Prolongation is based on smooth error and strong connections
  • Common practice I

56
AMG Prolongation (2)
57
AMG Prolongation (3)
  • Restriction (in common practice, the transpose of the prolongation operator)

58
Summary
  • Multigrid is a multilevel iterative method.
  • Advantage: scalable
  • If no geometric grid is available, try the Algebraic multigrid method.

59
The landscape of Solvers

[Figure: solver landscape chart, arranged along the axes "more robust" and "less storage (if sparse)"]
60
References
  • G.H. Golub and C.F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins University Press, 1996.
  • Y. Saad, Iterative Methods for Sparse Linear Systems, Second Edition, SIAM, 2003.