1
Iterative Solution of Linear Systems: Jacobi Method
while not converged do
  for each i: x_new[i] = (b[i] - Σ_{j≠i} A[i,j] · x_old[j]) / A[i,i]
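A minimal dense-matrix sketch of this loop in Python (NumPy, the zero initial guess, the tolerance, and the iteration cap are illustrative assumptions, not from the slides):

import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=500):
    # Jacobi iteration: every component update uses only the previous iterate.
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)                  # diagonal of A
    R = A - np.diagflat(D)          # off-diagonal part, L + U
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # x_new = D^{-1}(b - (L+U)x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x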

2
Gauss-Seidel Method
while not converged do
  for each i: x[i] = (b[i] - Σ_{j≠i} A[i,j] · x[j]) / A[i,i], using updated values of x[j] as soon as they are available
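A matching sketch of Gauss-Seidel (same illustrative assumptions as the Jacobi sketch); the only change is that each update uses the components of x already computed in the current sweep:

import numpy as np

def gauss_seidel(A, b, tol=1e-8, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds this sweep's updated values
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x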

3
Stationary Iterative Methods
  • A stationary iterative method can be expressed as
  • x_new = c + M x_old, where M is an iteration matrix.
  • Jacobi
  • Ax = b, where A = L + D + U, i.e.,
  • (L + D + U)x = b  =>  Dx = b - (L + U)x
  •  =>  x = D^{-1}(b - (L + U)x) = D^{-1}b - D^{-1}(L + U)x
  • x_{n+1} = D^{-1}(b - (L + U)x_n) = c + M x_n
  • Gauss-Seidel
  • (L + D + U)x = b  =>  (L + D)x = b - Ux
  •  =>  x_{n+1} = (L + D)^{-1}(b - U x_n) = (L + D)^{-1}b - (L + D)^{-1}U x_n
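A sketch of the Jacobi splitting in this matrix form (illustrative only: in practice M is applied to vectors, never formed explicitly; the iteration converges iff the spectral radius of M is below 1):

import numpy as np

def jacobi_iteration_matrix(A, b):
    # Return (M, c) with x_{n+1} = c + M x_n for the splitting A = L + D + U.
    D_inv = np.diag(1.0 / np.diag(A))
    LU = A - np.diagflat(np.diag(A))   # L + U, i.e. all off-diagonal entries
    M = -D_inv @ LU
    c = D_inv @ b
    return M, c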

4
Conjugate Gradient Method
  • A non-stationary iterative method that is very
    effective for symmetric positive definite
    matrices.
  • The method was derived in the context of quadratic function optimization:
  • f(x) = x^T A x - 2 b^T x has its minimum where Ax = b.
  • The algorithm starts with an initial guess and proceeds along a set of A-conjugate (A-orthogonal) search directions in successive steps.
  • It is guaranteed to reach the solution (in exact arithmetic) in at most n steps for an n x n system, but in practice it gets close enough to the solution in far fewer iterations.

5
Conjugate Gradient Algorithm
  • Steps of the CG algorithm for solving the system Ax = y:
  • s_0 = r_0 = y - A x_0
  • a_k = (r_k^T r_k) / (s_k^T A s_k)
  • x_{k+1} = x_k + a_k s_k
  • r_{k+1} = r_k - a_k A s_k
  • b_{k+1} = (r_{k+1}^T r_{k+1}) / (r_k^T r_k)
  • s_{k+1} = r_{k+1} + b_{k+1} s_k
  • s is the search direction, r is the residual vector, and x is the solution vector; a and b are scalars.
  • a represents the extent of the move along the search direction.
  • The new search direction is the new residual plus a fraction b of the old search direction.
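A direct transcription of these update rules in Python (NumPy assumed; the tolerance-based early exit is an illustrative addition to the slide's fixed recurrences):

import numpy as np

def conjugate_gradient(A, y, x0=None, tol=1e-10, max_iter=None):
    # CG for symmetric positive definite A, solving Ax = y.
    n = len(y)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = y - A @ x                   # r_0
    s = r.copy()                    # s_0 = r_0
    max_iter = max_iter or n        # at most n steps in exact arithmetic
    for _ in range(max_iter):
        As = A @ s
        a = (r @ r) / (s @ As)      # a_k: extent of move along s_k
        x = x + a * s
        r_new = r - a * As
        if np.linalg.norm(r_new) < tol:
            break
        b = (r_new @ r_new) / (r @ r)
        s = r_new + b * s           # new residual plus fraction b of old direction
        r = r_new
    return x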

6
Pre-conditioning
  • The convergence rate of an iterative method depends on the spectral properties of the matrix, i.e., the range of its eigenvalues. Convergence is not always guaranteed: for some systems the solution may diverge.
  • Often, it is possible to improve the rate of convergence (or obtain convergence for a diverging system) by solving an equivalent system with better spectral properties.
  • Instead of solving Ax = b, solve MAx = Mb, where M is chosen to be close to A^{-1}. The closer MA is to the identity matrix, the faster the convergence.
  • The product MA is not explicitly computed; its effect is incorporated via an additional matrix-vector multiplication or a triangular solve.
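A minimal sketch of one common, domain-agnostic choice of M, the diagonal (Jacobi) preconditioner (illustrative; make_jacobi_preconditioner is a hypothetical helper name):

import numpy as np

def make_jacobi_preconditioner(A):
    # M = diag(A)^{-1}: cheap to build, and exact when A is diagonal.
    d_inv = 1.0 / np.diag(A)
    return lambda v: d_inv * v      # applying M is an elementwise scaling

# Inside an iterative loop, precondition the residual; MA is never formed:
#   M = make_jacobi_preconditioner(A)
#   z = M(b - A @ x)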

7
Communication Requirements
  • Each iteration of an iterative linear system solver requires a sparse matrix-vector multiplication Ax. A processor needs x_i iff any of its rows has a nonzero in column i (see the sketch after the figure).

(Figure: a sparse matrix with its rows block-partitioned across processors P0-P3.)
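A sketch of that rule, computing the x entries a processor must receive, given a row-to-processor mapping (SciPy CSR assumed; needed_x_indices and the conformal partitioning of x with the rows are illustrative assumptions):

import numpy as np
from scipy.sparse import csr_matrix

def needed_x_indices(A_csr, row_owner, p):
    # Indices i such that some row owned by processor p has a nonzero in
    # column i, minus the entries p already owns.
    rows = np.where(row_owner == p)[0]
    cols = set()
    for r in rows:
        start, end = A_csr.indptr[r], A_csr.indptr[r + 1]
        cols.update(A_csr.indices[start:end].tolist())
    return sorted(cols - set(rows.tolist()))   # remote x values to receive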
8
Communication Requirements
  • The associated graph of a sparse matrix is very useful in determining the communication requirements for parallel sparse matrix-vector multiply; the sketch after the figure scores a given mapping.

(Figure: the graph of the matrix partitioned across P0-P3. Original mapping: communication of 8 values; alternate mapping: 5 values.)
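A sketch that scores a mapping the way the figure does, counting how many values must cross processor boundaries (the graph as an adjacency dict and the vertex-to-processor mapping are illustrative inputs):

def comm_volume(adj, owner):
    # Each vertex's x value is sent once to every other processor that
    # owns at least one of its neighbors.
    volume = 0
    for v, neighbors in adj.items():
        dest_procs = {owner[u] for u in neighbors} - {owner[v]}
        volume += len(dest_procs)
    return volume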
9
Minimizing Communication
  • Communication for parallel sparse matrix-vector
    multiplication can be minimized by solving a
    graph partitioning problem.
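A sketch of that reduction with an off-the-shelf partitioner (pymetis is an illustrative choice; any partitioner returning a vertex-to-part mapping would do):

import pymetis

# Adjacency list of the matrix's associated graph: i and j are neighbors
# iff A_ij (or A_ji) is nonzero, i != j. The values here are illustrative.
adjacency = [[1, 2], [0, 2, 3], [0, 1], [1]]

# Partition into 2 parts, minimizing edges cut and hence the x values
# exchanged in parallel matrix-vector multiplication.
n_cuts, membership = pymetis.part_graph(2, adjacency=adjacency)
print(n_cuts, membership)   # membership[i] = processor assigned to vertex i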

10
Communication for Direct Solvers
  • The communication needed for a parallel direct
    sparse solver is very different from that for an
    iterative solver.
  • If rows are mapped to processors, communication is required between the processors owning rows j and k (k > j) iff A_kj is nonzero (see the sketch after this list).
  • The associated graph of the matrix is not very useful in producing a load-balanced partitioning, since it does not capture the temporal dependences in the elimination process.
  • A different graph structure called the
    elimination tree is useful in determining a
    load-balanced low-communication mapping.
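A direct transcription of that nonzero rule (illustrative; F stands for the filled matrix pattern and direct_solver_comm_pairs is a hypothetical helper):

def direct_solver_comm_pairs(F, row_owner):
    # Pairs of processors that must communicate: owners of rows j and k
    # (k > j) whenever F[k, j] is nonzero in the filled matrix.
    pairs = set()
    n = F.shape[0]
    for j in range(n):
        for k in range(j + 1, n):
            if F[k, j] != 0 and row_owner[k] != row_owner[j]:
                pairs.add((row_owner[j], row_owner[k]))
    return pairs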

11
Elimination Tree
  • The e-tree is a tree data structure that succinctly captures the essential temporal dependences between rows during the elimination process.
  • The parent of node j in the tree is the row index of the first nonzero below the diagonal in column j (using the filled-in matrix); see the sketch after this list.
  • If row j updates row k, then k must be an ancestor of j in the e-tree.
  • Row k can only be updated by nodes in its subtree.
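A direct transcription of that parent rule (illustrative; production codes derive the e-tree from A's pattern without forming the filled matrix, and etree_from_filled is a hypothetical helper):

import numpy as np

def etree_from_filled(F):
    # Parent of j = row index of the first nonzero below the diagonal in
    # column j of the filled matrix F; -1 marks a root.
    n = F.shape[0]
    parent = np.full(n, -1)
    for j in range(n):
        below = np.nonzero(F[j + 1:, j])[0]
        if below.size:
            parent[j] = j + 1 + below[0]
    return parent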

12
Using the E-Tree for Mapping
  • Recursive mapping strategy.
  • Subtrees that are mapped entirely to one processor need no communication between their rows.
  • Subtrees that are mapped among a subset of processors need communication only within that group, e.g., rows 3-6 need communication only from P1.

13
Iterative vs. Direct Solvers
  • Direct solvers
  • Robust: not sensitive to the spectral properties of the matrix.
  • The user can apply the solver effectively without much understanding of the algorithm or the properties of the matrix.
  • Best for 1D problems; very effective for many 2D problems.
  • Significant increase in fill-in for 3D problems.
  • More difficult to parallelize than iterative solvers; poorer scalability.
  • Iterative solvers
  • No fill-in problem, no explosion of operation count for 3D problems, and excellent scalability for large sparse problems.
  • But convergence depends on the eigenvalues of the matrix.
  • Preconditioners are very important; good ones are usually domain-specific.
  • Effective use of iterative solvers may require a good understanding of the mathematical properties of the equations in order to derive good preconditioners.