Title: Iterative Solution of Linear Equations
1. Iterative Solution of Linear Equations
- Iterative solution of the linear equations that arise in the finite difference solution of PDEs.
- Relaxation methods: Jacobi, Gauss-Seidel (GS), SOR.
2. General remarks on direct and iterative methods for linear systems
- Until fairly recently, in many real applications, direct methods were often preferred to iterative methods because of their robustness and predictable behaviour.
- However, the increasing need to solve very large sparse systems, together with substantial progress in developing new efficient iterative techniques, has led to a rapid shift towards iterative methods.
3. General remarks (continued)
- In the early days, iterative methods were often specialised, built for a particular application.
- Their efficiency tended to rely on careful choices of a number of parameters.
- Recently, iterative methods have started to approach the quality and robustness of direct solvers.
4. General remarks (continued)
- In 3D (or large 2D) models, iterative methods are inevitable.
- Both the memory and the computational requirements of such problems are serious challenges for the best direct solvers currently available.
- Iterative methods offer better prospects on both counts and are easier to implement on modern high-performance (parallel) architectures.
5. General remarks (continued)
- Iterative methods are based on computing a sequence of iterates x^{(0)}, x^{(1)}, x^{(2)}, \dots such that x^{(k)} \to x = A^{-1} b as k \to \infty.
- In practice, the iteration continues until the approximate solution reaches the true solution within some prescribed tolerance, e.g.
  \| x^{(k)} - x \| \le \varepsilon.
6. Classification of iterative methods
- Relaxation methods (Stationary Iterations)
- Jacobi
- Gauss-Seidel
- SOR, SSOR
- ADI
- Non-stationary Iterations
- Projection (minimisation) methods (COMP60092).
- steepest descent.
- minimal residual.
- Krylov subspace methods (COMP60092).
- Multigrid method (COMP60092).
7. Relaxation Methods
- These were the first iterative techniques used for the solution of large sparse systems.
- Starting with a given initial solution, these methods modify one (or a few) components of the solution at a time until convergence is reached.
- Each of the modifications (relaxation steps) annihilates one (or a few) components of a residual vector.
- Relaxation methods are rarely used today as stand-alone methods; they are often combined with more sophisticated iterative techniques (like Krylov iterative methods) to form very successful methods.
8. Relaxation Methods
- Iterative methods can be viewed as an attempt to approximate the inverse of the matrix A by the inverse of a part of A that is easy to compute.
- Consider the matrix decomposition (a small illustration follows below)
  A = D + E + F,
  where D = diag(A) is the diagonal part of A, E is the strictly lower triangular part of A, and F is the strictly upper triangular part of A.
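- As a rough sketch of this splitting (assuming NumPy; the small matrix A below is made up purely for the example), D, E and F can be extracted as follows:

```python
import numpy as np

# Small example matrix, chosen only for illustration.
A = np.array([[ 4., -1.,  0.],
              [-1.,  4., -1.],
              [ 0., -1.,  4.]])

D = np.diag(np.diag(A))   # diagonal part of A
E = np.tril(A, k=-1)      # strictly lower triangular part of A
F = np.triu(A, k=1)       # strictly upper triangular part of A

assert np.allclose(A, D + E + F)   # the decomposition A = D + E + F
```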
9. Jacobi's Method
- Consider the system of linear equations
  A x = b, \quad A \in \mathbb{R}^{n \times n}, \quad x, b \in \mathbb{R}^{n}.
- The i-th equation is given by
  \sum_{j=1}^{n} a_{ij} x_j = b_i.
10. Jacobi's Method
- Determine the i-th component of the new approximation so as to annihilate the i-th component of the residual vector r = b - A x:
  x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \ne i} a_{ij} x_j^{(k)} \right).
- Repeat this step for all components of x.
11. Jacobi's Method Applied to Laplace's Equation
- Finite difference approximation to Laplace's equation (standard 5-point stencil on a uniform grid):
  u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4 u_{i,j} = 0.
- Hence Jacobi's method is (sketched in code below)
  u_{i,j}^{(k+1)} = \frac{1}{4} \left( u_{i-1,j}^{(k)} + u_{i+1,j}^{(k)} + u_{i,j-1}^{(k)} + u_{i,j+1}^{(k)} \right).
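- A minimal sketch of one Jacobi sweep for this stencil, assuming the solution is stored as a NumPy array `u` whose boundary entries hold the Dirichlet data (the array name, grid size and boundary values are illustrative, not from the slides):

```python
import numpy as np

def jacobi_sweep(u):
    """One Jacobi sweep of the 5-point Laplace stencil over the interior points."""
    u_new = u.copy()
    # Each interior point becomes the average of its four neighbours,
    # using only values from the previous iterate.
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u_new

# Illustrative use: 6 x 6 grid, u = 1 on the top boundary, 0 elsewhere.
u = np.zeros((6, 6))
u[0, :] = 1.0
for _ in range(50):
    u = jacobi_sweep(u)
```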
12. Vector Form of Jacobi's Method
- Return to the general system of linear equations A x = b.
- Jacobi's method is given by
  x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \ne i} a_{ij} x_j^{(k)} \right), \quad i = 1, \dots, n.
- This can be written in matrix-vector form as (see the sketch below)
  D x^{(k+1)} = b - (E + F) x^{(k)}, \quad \text{i.e.} \quad x^{(k+1)} = D^{-1} \left( b - (E + F) x^{(k)} \right).
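- A sketch of this matrix-vector form with NumPy; the function name, tolerance and iteration limit are illustrative, and the stopping test uses the relative residual:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Jacobi iteration x^{k+1} = D^{-1} (b - (E + F) x^k) with a relative residual test."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    d = np.diag(A)            # diagonal of A
    R = A - np.diag(d)        # off-diagonal part E + F
    for _ in range(max_iter):
        x = (b - R @ x) / d
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```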
13. Gauss-Seidel Iteration
- Determine the i-th component of the new approximation so as to annihilate the i-th component of the residual vector, using the latest (most up-to-date) data to determine the residual:
  x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right).
- Repeat this step for all components of x.
14. Gauss-Seidel Iteration
15. Gauss-Seidel Iteration Applied to Laplace's Equation
- Finite difference approximation to Laplace's equation:
  u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4 u_{i,j} = 0.
- Hence the Gauss-Seidel iteration, sweeping the grid points in lexicographic order, is (sketched in code below)
  u_{i,j}^{(k+1)} = \frac{1}{4} \left( u_{i-1,j}^{(k+1)} + u_{i,j-1}^{(k+1)} + u_{i+1,j}^{(k)} + u_{i,j+1}^{(k)} \right).
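- A minimal sketch of one in-place Gauss-Seidel sweep for this stencil, under the same illustrative grid assumptions as the Jacobi sketch above:

```python
def gauss_seidel_sweep(u):
    """One in-place Gauss-Seidel sweep of the 5-point Laplace stencil."""
    ni, nj = u.shape
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            # Points (i-1, j) and (i, j-1) have already been updated in this
            # sweep, so the newest available values are used automatically.
            u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                              u[i, j - 1] + u[i, j + 1])
    return u
```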
16. Vector Form of the Gauss-Seidel Iteration
- Return to the general system of linear equations A x = b.
- The Gauss-Seidel iteration is given by
  x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right), \quad i = 1, \dots, n.
- This can be written in matrix-vector form as (see the sketch below)
  (D + E) x^{(k+1)} = b - F x^{(k)}.
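- A sketch of this matrix form, solving the lower-triangular system with SciPy's solve_triangular (SciPy is an assumption here; a hand-written forward substitution would serve equally well):

```python
import numpy as np
from scipy.linalg import solve_triangular

def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Gauss-Seidel iteration (D + E) x^{k+1} = b - F x^k with a relative residual test."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    DE = np.tril(A)          # D + E: lower triangle of A including the diagonal
    F = np.triu(A, k=1)      # strictly upper triangular part of A
    for _ in range(max_iter):
        x = solve_triangular(DE, b - F @ x, lower=True)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```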
17. Jacobi and Gauss-Seidel Iterations
- Both the Jacobi and Gauss-Seidel iterations are of the form
  M x^{(k+1)} = N x^{(k)} + b,
- where A = M - N is a splitting of A.
- In Jacobi's case we have M = D and N = -(E + F).
- In Gauss-Seidel's case we have M = D + E and N = -F.
18. SOR (Successive Over-Relaxation)
- The Gauss-Seidel iteration is attractive because of its simplicity. Unfortunately, if the spectral radius of the Gauss-Seidel iteration matrix (D + E)^{-1} F is close to 1, the method can be prohibitively slow, because the error tends to 0 like \rho\left( (D + E)^{-1} F \right)^{k}. To rectify this slow convergence, consider the following modification of the Gauss-Seidel iteration:
  x_i^{(k+1)} = (1 - \omega) x_i^{(k)} + \frac{\omega}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right),
  where \omega is a relaxation parameter.
19. SOR Applied to Laplace's Equation
- Finite difference approximation to Laplace's equation:
  u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4 u_{i,j} = 0.
- Hence SOR is (sketched in code below)
  u_{i,j}^{(k+1)} = (1 - \omega) u_{i,j}^{(k)} + \frac{\omega}{4} \left( u_{i-1,j}^{(k+1)} + u_{i,j-1}^{(k+1)} + u_{i+1,j}^{(k)} + u_{i,j+1}^{(k)} \right).
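- A minimal sketch of one in-place SOR sweep for this stencil, again under the illustrative grid assumptions used above; omega is the relaxation parameter:

```python
def sor_sweep(u, omega):
    """One in-place SOR sweep of the 5-point Laplace stencil."""
    ni, nj = u.shape
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            gs = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                         u[i, j - 1] + u[i, j + 1])     # Gauss-Seidel value
            u[i, j] = (1.0 - omega) * u[i, j] + omega * gs
    return u
```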
20. SOR
- The SOR iteration can be viewed as taking a weighted average of the current iterate and the next Gauss-Seidel iterate:
  x_i^{(k+1)} = (1 - \omega) x_i^{(k)} + \omega \hat{x}_i^{(k+1)},
  where \hat{x}_i^{(k+1)} denotes the Gauss-Seidel update.
- Equivalently, SOR can be viewed as using the step to the next Gauss-Seidel iterate as a search direction along which to update the current approximation:
  x_i^{(k+1)} = x_i^{(k)} + \omega \left( \hat{x}_i^{(k+1)} - x_i^{(k)} \right).
21. SOR
- The previous formula defines Successive Over-Relaxation (SOR). In matrix terms, the SOR iteration is given by
  M_\omega x^{(k+1)} = N_\omega x^{(k)} + b,
- where M_\omega = \frac{1}{\omega} D + E
- and N_\omega = \frac{1 - \omega}{\omega} D - F.
22. SOR
- Thus SOR can be written as
  x^{(k+1)} = \left( \frac{1}{\omega} D + E \right)^{-1} \left( \left( \frac{1 - \omega}{\omega} D - F \right) x^{(k)} + b \right).
23. SOR
- For a few structured (but important) problems (like the discrete Poisson equation), the optimal value of the relaxation parameter \omega is known (see the sketch below). In more complicated problems, it is necessary to perform a fairly sophisticated eigenvalue analysis of the iteration matrix in order to determine an appropriate value for \omega.
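- For the model problem (the 5-point discrete Poisson/Laplace equation on the unit square with mesh size h), the classical result is \rho_J = \cos(\pi h) for the Jacobi iteration and \omega_{\text{opt}} = 2 / (1 + \sqrt{1 - \rho_J^2}) = 2 / (1 + \sin(\pi h)). A small sketch, with the grid size chosen purely for illustration:

```python
import numpy as np

def optimal_omega(n_interior):
    """Optimal SOR parameter for the 5-point Laplacian on an n x n interior grid."""
    h = 1.0 / (n_interior + 1)         # mesh size on the unit square
    rho_jacobi = np.cos(np.pi * h)     # spectral radius of the Jacobi iteration
    return 2.0 / (1.0 + np.sqrt(1.0 - rho_jacobi ** 2))

print(optimal_omega(31))   # roughly 1.82 for a 31 x 31 interior grid
```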
24. Convergence of iterative methods
- Important questions:
- Does our method converge to a true solution?
- How fast does our method converge?
25. Splittings and Convergence
- The Jacobi and Gauss-Seidel iterations are typical members of a large family of iterations of the form
  M x^{(k+1)} = N x^{(k)} + b, \qquad (*)
- where A = M - N
- is a so-called splitting of the matrix A.
26. Splittings and Convergence
- Whether or not the iteration (*) converges to
  x = A^{-1} b
- depends on the eigenvalues of the iteration matrix
  G = M^{-1} N.
- Definition: the spectral radius of the matrix A is given by (a numerical check is sketched below)
  \rho(A) = \max \{ |\lambda| : \lambda \text{ is an eigenvalue of } A \}.
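- A sketch of checking this numerically with NumPy: form the iteration matrix G = M^{-1} N for a given splitting and compute its spectral radius (dense eigenvalue computation, so only sensible for small test matrices; the example matrix is illustrative):

```python
import numpy as np

def spectral_radius(M, N):
    """Spectral radius of the iteration matrix G = M^{-1} N."""
    G = np.linalg.solve(M, N)                 # G = M^{-1} N
    return max(abs(np.linalg.eigvals(G)))

# Illustrative use with the Jacobi splitting M = D, N = -(E + F).
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
M = np.diag(np.diag(A))
N = M - A
print(spectral_radius(M, N))   # about 0.35 < 1, so Jacobi converges for this A
```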
27. Splittings and Convergence
- Theorem
- Assume that A is non-singular, so that x = A^{-1} b is well-defined.
- If M is non-singular and the spectral radius of M^{-1} N satisfies \rho(M^{-1} N) < 1, then the iterates x^{(k)} defined by (*) converge to x = A^{-1} b for any starting vector x^{(0)}.
28. Convergence of Jacobi's Method
- Jacobi's method:
  x^{(k+1)} = D^{-1} \left( b - (E + F) x^{(k)} \right).
- We can show that \rho\left( D^{-1} (E + F) \right) < 1, provided that the matrix A is strictly diagonally dominant by rows (a simple check of this property is sketched below).
- In general, the more diagonally dominant A is, the more rapid the convergence, but there are counterexamples to this rule.
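- A small sketch of the row-wise strict diagonal dominance test, |a_{ii}| > \sum_{j \ne i} |a_{ij}| for every row, assuming NumPy:

```python
import numpy as np

def strictly_diagonally_dominant_by_rows(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row of A."""
    diag = np.abs(np.diag(A))
    off_diag_row_sums = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag_row_sums))
```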
29. Convergence of the Gauss-Seidel Iteration
- Gauss-Seidel iteration:
  (D + E) x^{(k+1)} = b - F x^{(k)}.
- We can show that, provided A is symmetric and positive definite, the Gauss-Seidel iteration converges for any starting vector x^{(0)}.
30. Convergence of SOR
- SOR iteration:
  \left( \frac{1}{\omega} D + E \right) x^{(k+1)} = \left( \frac{1 - \omega}{\omega} D - F \right) x^{(k)} + b.
- We can also show that, provided A is symmetric and positive definite and 0 < \omega < 2, SOR converges for any starting vector x^{(0)}.
31. Non-Stationary Iterations
- A symmetric, positive definite:
  - Conjugate Gradient (CG) method,
  - Preconditioned CG method.
- A nonsymmetric or indefinite:
  - GMRES, generalized minimal residual method,
  - QMR, quasi-minimal residual method,
  - CGS, conjugate gradients squared method,
  - BiCG, biconjugate gradient method,
  - Bi-CGSTAB, biconjugate gradients stabilized method.
32. References for further reading
- Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing, Boston, 1995.
- G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 1996.
- K. Eriksson, D. Estep, P. Hansbo and C. Johnson, Computational Differential Equations, Cambridge University Press, 1996.