1
Engineering Computation
  • Part 3

2
Learning Objectives for Lecture
  1. Motivate the study of systems of equations, particularly systems of linear equations.
  2. Review the steps of Gaussian elimination.
  3. Examine how roundoff error can enter and be magnified in Gaussian elimination.
  4. Introduce pivoting and scaling as defenses against roundoff.
  5. Consider what an engineer can do to generate well-formulated problems.
3
Systems of Equations
  • In Part 2 we tried to determine the value x satisfying f(x) = 0. In this part we try to obtain the values x1, x2, ..., xn satisfying a system of equations.
  • These systems can be linear or nonlinear, but in this part we deal with linear systems.

4
Systems of Equations
  • where the a's and b's are constant coefficients and n is the number of equations.
  • Many fundamental engineering equations are based on conservation laws. In mathematical terms, these principles lead to balance or continuity equations relating the system behavior to the amount of the quantity being modelled and the external stimuli acting on the system.

5
Systems of Equations
  • Matrices are rectangular sets of elements represented by a single symbol. If the set is horizontal it is called a row vector; if it is vertical, it is called a column vector.

(Figure: a matrix with Row 2 and Column 3 highlighted, alongside a column vector and a row vector.)
6
Systems of Equations
  • There are some special types of matrices:
    - Identity matrix
    - Symmetric matrix
    - Diagonal matrix
    - Upper triangular matrix
7
Systems of Equations
    - Lower triangular matrix
    - Banded matrix
All elements are null except those in a band centered around the main diagonal; the half band width measures the band's extent. The matrix shown has a band width of 3 and is called tridiagonal.
8
Systems of Equations
Linear Algebraic Equations:
  a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
  a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
  ...
  an1x1 + an2x2 + an3x3 + ... + annxn = bn
where all the aij's and bi's are constants. In matrix form: (n x n)(n x 1) = (n x 1), or simply Ax = b.
9
Systems of Equations
  • Matrix representation of a system

(Figure: the matrix product Ax = b and the resulting dimensions.)
10
Systems of Equations
  • Graphic solution: the equations of a system are hyperplanes (straight lines, planes, etc.). The solution of the system is the intersection of these hyperplanes.

Compatible and determined system. Vectors are
linearly independent. Unique solution.
Determinant of A is non-null.
11
Systems of Equations
Incompatible system. Linearly dependent vectors. Null determinant of A. There is no solution.
Compatible but undetermined system. Linearly
dependent vectors. Null determinant of A. There
exists an infinite number of solutions.
12
Systems of Equations
Compatible and determined system. Linearly
independent vectors. Nonnull determinant of A,
but close to zero. There exists a solution but it
is difficult to find precisely. It is an ill
conditioned system leading to numerical errors.
13
Gauss elimination
  • Naive Gauss elimination method: the Gauss method has two phases, forward elimination and back substitution. In the first, the system is reduced to an upper triangular system.
  • First, the unknown x1 is eliminated. To this end, the first row is multiplied by -a21/a11 and added to the second row. The same is done with all other successive rows (n-1 times) until only the first equation contains the first unknown x1.

(Figure: the pivot equation and pivot element, and the row subtraction step.)
14
Gauss elimination
  • This operation is repeated with all variables xi until an upper triangular matrix is obtained.
  • Next, the system is solved by back substitution.
  • The number of operations (FLOPs) used in the Gauss method is counted pass by pass (Pass 1, Pass 2, ...); the totals are derived in the operation-counting slides below.
15
Gauss elimination
1. Forward Elimination (Row Manipulation)
a. Form the augmented matrix [A | b].
b. By elementary row manipulations, reduce [A | b] to [U | b'], where U is an upper triangular matrix:

  DO i = 1 to n-1
    DO k = i+1 to n
      Row(k) = Row(k) - (aki/aii)*Row(i)
    ENDDO
  ENDDO
16
Gauss elimination
2. Back Substitution
Solve the upper triangular system U x = b':

  x(n) = b'(n) / u(n,n)
  DO i = n-1 to 1 by (-1)
    x(i) = (b'(i) - sum of u(i,j)*x(j) for j = i+1 to n) / u(i,i)
  ENDDO
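As a concrete reference, here is a minimal NumPy sketch of the two phases above. The function name and the small diagonally dominant test system are illustrative assumptions, not part of the original slides.

# A minimal sketch of naive Gauss elimination (no pivoting), assuming a square,
# nonsingular system Ax = b; the test system below is a hypothetical example.
import numpy as np

def gauss_naive(A, b):
    """Solve Ax = b by forward elimination and back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination: reduce [A | b] to an upper triangular system.
    for i in range(n - 1):              # pivot row i
        for k in range(i + 1, n):       # rows below the pivot
            factor = A[k, i] / A[i, i]
            A[k, i:] -= factor * A[i, i:]
            b[k] -= factor * b[i]
    # Back substitution on the upper triangular system U x = b'.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[50.0, 1.0, 2.0], [1.0, 40.0, 6.0], [2.0, 6.0, 30.0]]
b = [1.0, 2.0, 3.0]
print(gauss_naive(A, b))          # agrees with np.linalg.solve(A, b)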
17
Gauss elimination (example)
Consider the system of equations
To 2 significant figures, the exact solution is
We will use 2 decimal digit arithmetic with
rounding.
18
Gauss elimination (example)
Start with the augmented matrix.
Multiply the first row by 1/50 and add it to the second row.
Multiply the first row by 2/50 and add it to the third row.
Multiply the second row by 6/40 and add it to the third row.
19
Gauss elimination (example)
Now backsolve:
  x3 = … (vs. 0.091, εt = 2.2%)
  x2 = … (vs. 0.041, εt = 2.5%)
  x1 = … (vs. 0.016, εt = 0%)
20
Gauss elimination (example)
Consider an alternative solution, interchanging rows. After forward elimination, we obtain:
Now backsolve:
  x3 = 0.095 (vs. 0.091, εt = 4.4%)
  x2 = 0.020 (vs. 0.041, εt = 50%)
  x1 = 0.000 (vs. 0.016, εt = 100%)
Apparently, the order of the equations matters!
21
Gauss elimination (example)
  • WHAT HAPPENED?
  • When we used 50x1 + 1x2 + 2x3 = 1 to solve for x1, there was little change in the other equations.
  • When we used 2x1 + 6x2 + 30x3 = 3 to solve for x1, it made BIG changes in the other equations. Some coefficients of the other equations were lost!
  • The second equation has little to do with x1.
  • It has mainly to do with x3.
  • As a result we obtained LARGE numbers in the table, significant roundoff error occurred, and information was lost.
  • Things didn't go well!
  • If the scaling factors aji/aii are ≤ 1, then the effect of roundoff errors is diminished.

22
Gauss elimination (example)
  • Effect of diagonal dominance
  • As a first approximation, the roots are
  • xi ≈ bi / aii
  • Consider the previous examples

23
Gauss elimination (example)
Goals:
  1. Best accuracy (i.e. minimize error)
  2. Parsimony (i.e. minimize effort)
Possible Problems:
  A. Zero on a diagonal term ⇒ division by zero.
  B. Many floating point operations (FLOPs) cause numerical precision problems and propagation of errors.
  C. The system may be ill-conditioned: det(A) ≈ 0.
  D. No solution or an infinite number of solutions: det(A) = 0.
Possible Remedies:
  A. Carry more significant figures (double precision).
  B. Pivot when the diagonal is close to zero.
  C. Scale to reduce round-off error.
24
Gauss elimination (pivoting)
  • PIVOTING
  • A. Row pivoting (Partial Pivoting)
  • In any good routine, at each step i, find maxk |aki| for k = i, i+1, i+2, ..., n.
  • Move the corresponding row to the pivot position.
  • (i) Avoids zero aii.
  • (ii) Keeps numbers small and minimizes round-off.
  • (iii) Uses an equation with large |aki| to find xi.
  • Maintains diagonal dominance.
  • Row pivoting does not affect the order of the variables.
  • Included in any good Gaussian elimination routine (a minimal sketch follows below).
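A minimal sketch of the same elimination with row (partial) pivoting added; the argmax step below is the "find maxk |aki| for k ≥ i" search described in the list above, and the function name is an illustrative assumption.

# A minimal sketch of Gauss elimination with partial (row) pivoting.
import numpy as np

def gauss_partial_pivot(A, b):
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for i in range(n - 1):
        # Row pivoting: bring the row with the largest |a_ki| (k >= i) up to row i.
        p = i + np.argmax(np.abs(A[i:, i]))
        if p != i:
            A[[i, p]] = A[[p, i]]
            b[[i, p]] = b[[p, i]]
        for k in range(i + 1, n):
            factor = A[k, i] / A[i, i]
            A[k, i:] -= factor * A[i, i:]
            b[k] -= factor * b[i]
    # Back substitution as before.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x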

25
Gauss elimination (pivoting)
B. Column pivoting - Reorder the remaining variables xj for j = i, ..., n so as to get the largest |aji|. Column pivoting changes the order of the unknowns xi and thus leads to complexity in the algorithm. Not usually done.
C. Complete or full pivoting: performing both row pivoting and column pivoting. (If A is symmetric, this is needed to preserve symmetry.)
26
Gauss elimination (pivoting)
How to fool pivoting: multiply the third equation by 100; performing pivoting will then yield:
Forward elimination then yields (2-digit arithmetic):
Backsolution yields
  x3 = 0.095 (vs. 0.091, εt = 4.4%)
  x2 = 0.020 (vs. 0.041, εt = 50.0%)
  x1 = 0.000 (vs. 0.016, εt = 100%)
The order of the rows is still poor!!
27
Gauss elimination (scaling)
SCALING
A. Express all equations (and variables) in comparable units so all elements of A are about the same size.
B. If that fails, and maxj |aij| varies widely across the rows, replace each row i by aij ← aij / maxj |aij|. This makes the largest coefficient |aij| of each equation equal to 1, and the largest element of A equal to 1 or -1.
NOTE: Routines generally do not scale automatically; scaling can cause round-off error too!
SOLUTIONS: Don't actually scale, but use hypothetical scaling factors to determine what pivoting is necessary. Scale only by powers of 2: no roundoff or division required.
28
Gauss elimination (scaling)
How to fool scaling: a poor choice of units can undermine the value of scaling. Begin with our original example.
If the units of x1 were expressed in µg instead of mg, the matrix might read:
Scaling then yields:
Which equation is used to determine x1? Why bother to scale?
29
Gauss elimination (operation counting)
OPERATION COUNTING
In numerical scientific calculations, the number of multiplies and divides often determines CPU time. (This represents the numerical effort!)
One floating point multiply or divide (plus any associated adds or subtracts) is called a FLOP. (The adds/subtracts use little time compared to the multiplies/divides.)
FLOP = FLoating point OPeration.
Examples: a·x + b and a/x + b each count as one FLOP.
30
Gauss elimination (operation counting)
Useful identities in counting FLOPs:
  Σ (i = 1 to n) 1 = n
  Σ (i = 1 to n) i = n(n+1)/2 = n²/2 + O(n)
  Σ (i = 1 to n) i² = n(n+1)(2n+1)/6 = n³/3 + O(n²)
O(mn) means that there are terms of order mn and lower.
31
Gauss elimination (operation counting)
Simple Example of Operation Counting:

  DO i = 1 to n
    Y(i) = X(i)/i + 1
  ENDDO

X(i) and Y(i) are arrays whose values change when i changes. In each iteration, X(i)/i + 1 represents one FLOP because it requires one division (plus the associated addition). The DO loop extends over i from 1 to n, so the total is n FLOPs.
32
Gauss elimination (operation counting)
Another Example of Operation Counting:

  DO i = 1 to n
    Y(i) = X(i)*X(i) + 1
    DO j = i to n
      Z(j) = Y(j)/X(i) + Y(j)*X(i)
    ENDDO
  ENDDO

With nested loops, always start from the innermost loop. Y(j)/X(i) + Y(j)*X(i) represents 2 FLOPs.
33
Gauss elimination (operation counting)
For the outer i-loop, X(i)*X(i) + 1 represents 1 FLOP. Summing the inner-loop counts over the outer loop gives

  3n + 2n² - n² - n = n² + 2n = n² + O(n)  FLOPs in total.
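A small empirical check of this count in Python, under the FLOP convention above (one multiply or divide per FLOP); the array names mirror the pseudocode and the value of n is an arbitrary assumption.

# Count FLOPs for the nested loop above and compare with n^2 + 2n.
n = 50
X = [float(i) for i in range(1, n + 1)]
Y = [0.0] * n
Z = [0.0] * n
flops = 0
for i in range(1, n + 1):
    Y[i - 1] = X[i - 1] * X[i - 1] + 1.0                       # 1 FLOP (one multiply)
    flops += 1
    for j in range(i, n + 1):
        Z[j - 1] = Y[j - 1] / X[i - 1] + Y[j - 1] * X[i - 1]   # 2 FLOPs
        flops += 2
print(flops, n * n + 2 * n)   # both print 2600 for n = 50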
34
Gauss elimination (operation counting)
Forward Elimination:

  DO k = 1 to n-1
    DO i = k+1 to n
      r = A(i,k)/A(k,k)
      DO j = k+1 to n
        A(i,j) = A(i,j) - r*A(k,j)
      ENDDO
      B(i) = B(i) - r*B(k)
    ENDDO
  ENDDO
35
Gauss elimination (operation counting)
Operation Counting for Gaussian Elimination: Back Substitution

  X(n) = B(n)/A(n,n)
  DO i = n-1 to 1 by -1
    SUM = 0
    DO j = i+1 to n
      SUM = SUM + A(i,j)*X(j)
    ENDDO
    X(i) = (B(i) - SUM)/A(i,i)
  ENDDO
36
Gauss elimination (operation counting)
Operation Counting for Gaussian Elimination: Forward Elimination
Inner loop and second loop (for a fixed k, summing over i = k+1, ..., n):
  (n² + 2n) - 2(n + 1)k + k²
37
Gauss elimination (operation counting)
Operation Counting for Gaussian Elimination: Forward Elimination (cont'd)
Outer loop: summing over k = 1, ..., n-1 gives n³/3 + O(n²).
38
Gauss elimination (operation counting)
Operation Counting for Gaussian Elimination: Back Substitution
Inner loop: about n - i FLOPs for row i; summing over the outer loop gives n²/2 + O(n).
39
Gauss elimination (operation counting)
Total FLOPs:
  Forward Elimination    n³/3 + O(n²)
  Back Substitution      n²/2 + O(n)
  Total                  ≈ n³/3 + O(n²)

To convert (A, b) to (U, b') requires n³/3, plus terms of order n² and smaller, FLOPs. To back solve requires 1 + 2 + 3 + 4 + ... + n = n(n+1)/2 FLOPs. Grand total: the entire effort requires n³/3 + O(n²) FLOPs altogether.
40
Gauss-Jordan Elimination
Diagonalization by both forward and backward elimination in each column: perform elimination both backwards and forwards until only the diagonal remains.
The operation count for Gauss-Jordan is n³/2 + O(n²) (slower than Gauss elimination).
41
Gauss-Jordan Elimination
Example (two-digit arithmetic):
  x1 = 0.015 (vs. 0.016, εt = 6.3%)
  x2 = 0.041 (vs. 0.041, εt = 0%)
  x3 = 0.093 (vs. 0.091, εt = 2.2%)
42
Gauss-Jordan Matrix Inversion
The solution of Ax = b is x = A⁻¹b, where A⁻¹ is the inverse matrix of A. Consider A·A⁻¹ = I:
  1) Create the augmented matrix [A | I]
  2) Apply Gauss-Jordan elimination ⇒ [I | A⁻¹]
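A minimal sketch of these two steps in NumPy; partial pivoting is added for robustness, and the function name and test matrix are illustrative assumptions.

# Matrix inversion by Gauss-Jordan elimination on the augmented matrix [A | I].
import numpy as np

def gauss_jordan_inverse(A):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])            # step 1: form [A | I]
    for i in range(n):
        p = i + np.argmax(np.abs(aug[i:, i]))  # pivot row with the largest |a_ki|
        aug[[i, p]] = aug[[p, i]]
        aug[i] /= aug[i, i]                    # normalize the pivot row
        for k in range(n):                     # eliminate column i in all other rows
            if k != i:
                aug[k] -= aug[k, i] * aug[i]
    return aug[:, n:]                          # step 2: right half is A^-1

A = [[4.0, 7.0], [2.0, 6.0]]
print(gauss_jordan_inverse(A))     # compare with np.linalg.inv(A)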
43
Gauss-Jordan Matrix Inversion
Gauss-Jordan Matrix Inversion (with 2-digit arithmetic)

(Figure: the elimination steps on [A | I] and the resulting matrix inverse A⁻¹.)
44
Gauss-Jordan Matrix Inversion
CHECK: A·A⁻¹ = I
Then A⁻¹·b = x; compare with the solution from Gaussian elimination.
45
LU decomposition
  • LU decomposition - The LU decomposition is a method that uses elimination techniques to transform the matrix A into a product of triangular matrices. This is especially useful for solving systems with several different vectors b, because the same decomposition of the matrix A can be reused to evaluate every case efficiently by forward and backward substitution.

46
LU decomposition
47
LU decomposition
  • LU decomposition is very much related to the Gauss method, because the upper triangular matrix sought in the LU decomposition is the same one produced by Gauss elimination. Thus, only the lower triangular matrix L is still needed.
  • Surprisingly, this matrix L is already obtained during the Gauss elimination procedure, but one is usually not aware of the fact: the factors we use to get zeroes below the main diagonal are the elements of this matrix L (see the sketch below).

(Figure: the subtraction step of the elimination and the factors stored in L.)
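A minimal sketch making this point explicit: storing the elimination factors directly fills L (unit diagonal), while the reduced matrix is U. The function name and test matrix are illustrative assumptions.

# Doolittle-style LU factorization: the Gauss elimination factors become the
# entries of L below the diagonal, and the reduced matrix is U, so A = L @ U.
import numpy as np

def lu_doolittle(A):
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for i in range(n - 1):
        for k in range(i + 1, n):
            L[k, i] = U[k, i] / U[i, i]        # the elimination factor
            U[k, i:] -= L[k, i] * U[i, i:]     # same row operation as Gauss elimination
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_doolittle(A)
print(np.allclose(L @ U, A))   # True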
48
LU decomposition
49
LU decomposition (Complexity)
Basic Approach: consider Ax = b.
  a) Gauss-type "decomposition" of A into L·U: n³/3 FLOPs. Ax = b becomes L U x = b; let U x ≡ d.
  b) First solve L d = b for d by forward substitution: n²/2 FLOPs.
  c) Then solve U x = d for x by back substitution: n²/2 FLOPs.
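A minimal sketch of this reuse with SciPy (assuming SciPy is available): the factorization is done once, and each additional right-hand side only costs the forward and back substitutions. The test matrix and vectors are illustrative assumptions.

# Reuse one LU factorization for several right-hand sides.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 1.0],
              [0.0, 2.0, 5.0]])
lu, piv = lu_factor(A)                      # decomposition (~n^3/3), done once
for b in (np.array([1.0, 0.0, 0.0]), np.array([7.0, 10.0, 7.0])):
    x = lu_solve((lu, piv), b)              # forward + back substitution (~n^2)
    print(x)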
50
LU Decomposition notation
  A = L U0
  A = L0 U
  A = L0 U0 D
  A = L1 U
  A = L U1
51
LU decomposition
LU Decomposition Variations:
  Doolittle   L1·U    general A
  Crout       L·U1    general A
  Cholesky    L·Lᵀ    positive definite symmetric A
Cholesky works only for positive definite symmetric matrices.
Doolittle versus Crout: Doolittle just stores the Gaussian elimination factors, whereas Crout uses a different series of calculations (see CC 10.1.4). Both decompose A into L and U in n³/3 FLOPs; they differ in the location of the diagonal of 1's. Crout uses each element of A only once, so the same array can be used for A and L\U, saving computer memory!
52
LU decomposition
Matrix Inversion
Definition of a matrix inverse: A·A⁻¹ = I, so if Ax = b then x = A⁻¹b.
First Rule: Don't do it. (It is a numerically unstable calculation.)
53
LU decomposition
Matrix Inversion: if you really must --
  1) Gaussian elimination: [A | I] ⇒ [U | B'] ⇒ A⁻¹
  2) Gauss-Jordan: [A | I] ⇒ [I | A⁻¹]
Inversion will take n³ + O(n²) FLOPs if one is careful about where the zeros are (taking advantage of the sparseness of the matrix). Naive applications (without optimization) take 4n³/3 + O(n²) FLOPs. For example, LU decomposition requires n³/3 + O(n²) FLOPs, and back solving twice with the n unit vectors ei costs 2·n·(n²/2) = n³ FLOPs. Altogether: n³/3 + n³ = 4n³/3 + O(n²) FLOPs.
54
FLOP Counts for Linear Algebraic Equations
Summary of FLOP Counts for Linear Algebraic Equations, Ax = b
  Gaussian Elimination (1 r.h.s.)           n³/3 + O(n²)
  Gauss-Jordan (1 r.h.s.)                   n³/2 + O(n²)
  LU decomposition                          n³/3 + O(n²)
  Each extra LU right-hand side             n²
  Cholesky decomposition (symmetric A)      n³/6 + O(n²)
  Inversion (naive Gauss-Jordan)            4n³/3 + O(n²)
  Inversion (optimal Gauss-Jordan)          n³ + O(n²)
  Solution by Cramer's Rule                 n!
55
Errors in Solutions to Systems of Linear Equations
Objective: solve Ax = b.
Problem: round-off errors may accumulate and even be exaggerated by the solution procedure. Errors are often exaggerated if the system is ill-conditioned.
Possible remedies to minimize this effect:
  1. Partial or complete pivoting
  2. Work in double precision
  3. Transform the problem into an equivalent system of linear equations by scaling or equilibrating
56
Errors in Solutions to Systems of Linear Equations
  • Ill-conditioning
  • A system of equations is singular if det(A) = 0.
  • If a system of equations is nearly singular, it is ill-conditioned.
  • Systems which are ill-conditioned are extremely sensitive to small changes in the coefficients of A and b. These systems are inherently sensitive to round-off errors.
  • Question: can we develop a means for detecting these situations?

57
Errors in Solutions to Systems of Linear Equations
Ill-conditioning of Ax = b: consider the graphical interpretation for a 2-equation system. We can plot the two linear equations on a graph of x1 vs. x2.
58
Errors in Solutions to Systems of Linear Equations
Ill-conditioning of Ax = b: the same graphical interpretation for a 2-equation system, plotting the two linear equations on a graph of x1 vs. x2.

(Figure: two panels, well-conditioned and ill-conditioned; in the ill-conditioned case the uncertainty in x2 is much larger.)
59
Errors in Solutions to Systems of Linear Equations
Ways to detect ill-conditioning:
  1. Calculate x, make a small change in A or b, and determine the change in the solution x.
  2. After forward elimination, examine the diagonal of the upper triangular matrix. If aii << ajj, i.e. there is a relatively small value on the diagonal, this may indicate ill-conditioning.
  3. Compare x computed in single precision with x computed in double precision.
  4. Estimate the "condition number" of A.
Note: substituting the calculated x into Ax and checking it against b will not always work!!!
60
Errors in Solutions to Systems of Linear Equations
  • Ways to detect ill-conditioning
  • If det(A) = 0, the matrix is singular
  • ⇒ the determinant may be an indicator of conditioning
  • If det(A) is near zero, is the matrix ill-conditioned?
  • Consider:
  • After scaling:
  • ⇒ det(A) will provide an estimate of conditioning if it is normalized by the "magnitude" of the matrix.

61
Norms
Norms and the Condition Number
We need a quantitative measure of ill-conditioning. This measure will then directly reflect the possible magnitude of round-off effects. To do this we need to understand norms.
Norm: a scalar measure of the magnitude of a matrix or vector ("how big" a vector is). Not to be confused with the dimension of a matrix.
62
Vector Norms
Vector Norms: scalar measures of the magnitude of a vector.
Here are some vector norms for n x 1 vectors x with typical elements xi. Each is a special case of the general p-norm
  ||x||p = ( Σi |xi|^p )^(1/p)
  1. Sum of the magnitudes:  ||x||1 = Σi |xi|
  2. Magnitude of the largest element (infinity norm):  ||x||∞ = maxi |xi|
  3. Length or Euclidean norm:  ||x||2 = ( Σi xi² )^(1/2)
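A quick numeric check of the three vector norms listed above, using NumPy; the sample vector is an arbitrary assumption.

import numpy as np

x = np.array([3.0, -4.0, 1.0])
print(np.linalg.norm(x, 1))        # sum of magnitudes: 8.0
print(np.linalg.norm(x, np.inf))   # largest magnitude (infinity norm): 4.0
print(np.linalg.norm(x, 2))        # Euclidean length: sqrt(26) ≈ 5.10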
63
Norms
Vector Norms: required properties of a vector norm
  1. ||x|| ≥ 0, and ||x|| = 0 if and only if x = 0
  2. ||kx|| = k ||x||, where k is any positive scalar
  3. ||x + y|| ≤ ||x|| + ||y||   (Triangle Inequality)
For the Euclidean vector norm we also have
  4. ||x·y|| ≤ ||x|| ||y||, because the dot (inner) product satisfies x·y = ||x|| ||y|| cos(θ) ≤ ||x|| ||y||.
64
Matrix Norms
Matrix Norms: scalar measures of the magnitude of a matrix.
Matrix norms corresponding to the vector norms above are defined by the general relationship ||A|| = max over x ≠ 0 of ||Ax|| / ||x||.
  1. Largest column sum (column-sum norm):  ||A||1 = maxj Σi |aij|
  2. Largest row sum (row-sum norm, infinity norm):  ||A||∞ = maxi Σj |aij|
65
Matrix norms
  3. Spectral norm:  ||A||2 = (µmax)^(1/2), where µmax is the largest eigenvalue of AᵀA. If A is symmetric, (µmax)^(1/2) = |λ|max, the largest eigenvalue of A in magnitude. (Note: this is not the same as the Euclidean or Frobenius norm, which is seldom used.)
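A quick numeric check of the matrix norms named above, using NumPy; the sample matrix is an arbitrary assumption.

import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(np.linalg.norm(A, 1))        # largest column sum: 2 + 4 = 6
print(np.linalg.norm(A, np.inf))   # largest row sum: 3 + 4 = 7
print(np.linalg.norm(A, 2))        # spectral norm: sqrt of the largest eigenvalue of A^T A
print(np.linalg.norm(A, 'fro'))    # Frobenius norm (not the same as the spectral norm)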
66
Matrix norms
  • Matrix Norms
  • For matrix norms to be useful we require that
  • 0. ||A x|| ≤ ||A|| ||x||
  • General properties of any matrix norm:
  • 1. ||A|| ≥ 0, and ||A|| = 0 iff A = 0
  • 2. ||k A|| = k ||A||, where k is any positive scalar
  • 3. ||A + B|| ≤ ||A|| + ||B||   ("Triangle Inequality")
  • 4. ||A B|| ≤ ||A|| ||B||
  • Why are norms important?
  • Norms permit us to express the accuracy of the solution x in terms of ||x||.
  • Norms allow us to bound the magnitude of the product A x and the associated errors.

67
Error Analysis
  • Forward and backward error analysis can estimate
    the effect of truncation and roundoff errors on
    the precision of a result. The two approaches
    are alternative views
  • Forward (a priori) error analysis tries to trace
    the accumulation of error through each process of
    the algorithm, comparing the calculated and exact
    values at every stage.
  • Backward (a posteriori) error analysis views the
    final solution as the exact solution to a
    perturbed problem. One can consider how
    different the perturbed problem is from the
    original problem.
  • Here we use the condition number of a matrix A
    to specify the amount by which relative errors in
    A and/or b due to input, truncation, and
    rounding can be amplified by the linear system in
    the computation of x.

68
Error Analysis
Backward Error Analysis of Ax = b for errors in b
Suppose the coefficients b are not precisely represented. What might be the effect on the calculated value x + δx?
Lemma: Ax = b yields ||A|| ||x|| ≥ ||b||, or ||x|| ≥ ||b|| / ||A||.
Now an error in b yields a corresponding error in x:
  A (x + δx) = b + δb
  A x + A δx = b + δb
Subtracting Ax = b yields A δx = δb, so δx = A⁻¹ δb.
69
Error Analysis
Backward Error Analysis of Ax = b for errors in b
Taking norms we have ||δx|| ≤ ||A⁻¹|| ||δb||, and using the lemma we then have
  ||δx|| / ||x|| ≤ ||A⁻¹|| ||A|| ||δb|| / ||b||
Define the condition number as κ = cond(A) ≡ ||A⁻¹|| ||A|| ≥ 1.
If κ ≈ 1 or κ is small, the system is well-conditioned; if κ >> 1, the system is ill-conditioned.
Note that 1 = ||I|| = ||A⁻¹ A|| ≤ ||A⁻¹|| ||A|| = κ = cond(A).
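A minimal numeric check of the bound ||δx||/||x|| ≤ cond(A)·||δb||/||b|| with NumPy; the nearly singular matrix and the perturbation below are hypothetical examples.

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)

db = np.array([1e-6, -1e-6])                 # small perturbation of b
dx = np.linalg.solve(A, b + db) - x

kappa = np.linalg.cond(A)                    # ||A|| * ||A^-1||, about 4e4 here
print(np.linalg.norm(dx) / np.linalg.norm(x))              # actual relative error in x
print(kappa * np.linalg.norm(db) / np.linalg.norm(b))      # bound from the condition number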
70
Error Analysis
Backward Error Analysis of Ax = b for errors in A
If the coefficients in A are not precisely represented, what might be the effect on the calculated value x + δx?
  (A + δA)(x + δx) = b
  A x + A δx + δA (x + δx) = b
Subtracting Ax = b yields A δx = -δA (x + δx), or δx = -A⁻¹ δA (x + δx).
Taking norms and multiplying by ||A|| / ||A|| yields
  ||δx|| / ||x + δx|| ≤ cond(A) ||δA|| / ||A||
71
(Figure from Linfield & Penny, 1999.)
72
(Figure from Linfield & Penny, 1999.)
73
Error Analysis
Estimate of Loss of Significance
Consider the possible impact of errors δA on the precision of x.
  • If A is known to about p decimal figures and x is good to about s decimal figures, the bound ||δx|| / ||x + δx|| ≤ κ ||δA|| / ||A|| implies that 10^(-s) ≈ κ · 10^(-p).
  • Or, taking the log of both sides: s > p - log10(κ).
  • log10(κ) is the loss in decimal precision, i.e., we start with p decimal figures and end up with s decimal figures.
  • It is not always necessary to find A⁻¹ to estimate κ = cond(A). Instead, use an estimate based upon iteration of the inverse matrix using LU decomposition. (A small numeric sketch follows.)
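A rough numeric sketch of this estimate, assuming about p = 16 digits in double precision and reusing the same hypothetical ill-conditioned matrix as before.

import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])
kappa = np.linalg.cond(A)
p = 16                                      # roughly the digits carried in double precision
lost = np.log10(kappa)                      # expected loss in decimal precision
print(f"log10(kappa) = {lost:.1f} digits lost; about {p - lost:.1f} digits remain")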

74
Iterative Solution Methods
  • Impetus for Iterative Schemes
  • 1. May be more rapid if coefficient matrix is
    "sparse"
  • 2. May be more economical with respect to memory
  • 3. May also be applied to solve nonlinear systems
  • Disadvantages
  • 1. May not converge or may converge slowly
  • 2. Not appropriate for all systems
  • Error bounds apply to solutions obtained by both direct and iterative methods, because they address the specification of δA and δb.

75
Iterative Solution Methods
Basic Mechanics: starting with
  a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
  a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
  a31x1 + a32x2 + a33x3 + ... + a3nxn = b3
  ...
  an1x1 + an2x2 + an3x3 + ... + annxn = bn
solve each equation for one variable:
  x1 = [b1 - (a12x2 + a13x3 + ... + a1nxn)] / a11
  x2 = [b2 - (a21x1 + a23x3 + ... + a2nxn)] / a22
  x3 = [b3 - (a31x1 + a32x2 + ... + a3nxn)] / a33
  ...
  xn = [bn - (an1x1 + an2x2 + ... + an,n-1xn-1)] / ann
76
Iterative Solution Methods
  • Start with an initial estimate x⁰.
  • Substitute into the right-hand side of all the equations.
  • Generate the new approximation x¹.
  • This is a multivariate one-point iteration: x^(j+1) = g(x^j).
  • Repeat the process until the maximum number of iterations is reached or until
    ||x^(j+1) - x^j|| ≤ δ = ε ||x^(j+1)||

77
Convergence
To solve Ax = b, separate A into A = Lo + D + Uo, where
  D  = diagonal (aii)
  Lo = lower triangular with 0's on the diagonal
  Uo = upper triangular with 0's on the diagonal
Rewrite the system:
  Ax = (Lo + D + Uo)x = b
  Dx = b - (Lo + Uo)x
Iterate:
  D x^(j+1) = b - (Lo + Uo) x^j
  x^(j+1) = D⁻¹ b - D⁻¹ (Lo + Uo) x^j
Iterations converge if ||D⁻¹ (Lo + Uo)|| < 1 (sufficient if the equations are diagonally dominant).
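A minimal NumPy sketch of this iteration, x^(j+1) = D⁻¹(b - (Lo + Uo)x^j), assuming a diagonally dominant A so that it converges; the function name, tolerances, and test system are illustrative assumptions.

import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    D = np.diag(A)                     # the diagonal entries a_ii
    R = A - np.diagflat(D)             # Lo + Uo: everything off the diagonal
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) <= tol * np.linalg.norm(x_new, np.inf):
            return x_new
        x = x_new
    return x

A = [[10.0, 1.0, 2.0], [1.0, 8.0, 1.0], [2.0, 1.0, 12.0]]
b = [13.0, 10.0, 15.0]
print(jacobi(A, b))     # agrees with np.linalg.solve(A, b)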
78
Iterative Solution Methods: the Jacobi Method
79
Iterative Solution Methods -- Gauss-Seidel
In most cases, using the newest values within the right-hand-side equations will provide better estimates of the next value. If this is done, we are using the Gauss-Seidel Method:
  (Lo + D) x^(j+1) = b - Uo x^j
or, written out explicitly, each new xi is computed using the already-updated values x1, ..., xi-1 (see the sketch below).
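A minimal NumPy sketch of Gauss-Seidel: the same splitting as Jacobi, but each new xi is used immediately in the updates that follow within the same sweep. The function name and tolerances are illustrative assumptions.

import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # already-updated values x[:i], previous-sweep values x_old[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) <= tol * np.linalg.norm(x, np.inf):
            break
    return x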
80
Iterative Solution Methods -- Gauss-Seidel
If either method is going to converge, Gauss-Seidel will converge faster than Jacobi. Why use Jacobi at all? Because the n equations can be separated into n independent tasks, Jacobi is very well suited to computers with parallel processors.
81
Convergence of Iterative Solution Methods
Rewrite the given system as Ax = (B + E)x = b, where B is diagonal or triangular, so we can solve By = g quickly. Thus,
  B x^(j+1) = b - E x^j
which is effectively
  x^(j+1) = B⁻¹ (b - E x^j)
The true solution xc satisfies
  xc = B⁻¹ (b - E xc)
Subtracting yields
  xc - x^(j+1) = B⁻¹ E (xc - x^j),  so  ||xc - x^(j+1)|| ≤ ||B⁻¹ E|| ||xc - x^j||
Iterations converge linearly if ||B⁻¹ E|| < 1:
  ⇒ ||(D + Lo)⁻¹ Uo|| < 1 for Gauss-Seidel
  ⇒ ||D⁻¹ (Lo + Uo)|| < 1 for Jacobi
82
Convergence of Iterative Solution Methods
Iterative methods will not converge for all systems of equations, nor for all possible rearrangements. If the system is diagonally dominant, i.e. |aii| > |aij| for i ≠ j, then the coefficients aij/aii appearing in the rewritten equations are all < 1.0 in magnitude, i.e. small slopes.
83
Convergence of Iterative Solution Methods
A sufficient condition for convergence exists: strict diagonal dominance,
  |aii| > Σ (j ≠ i) |aij|  for every row i.
Notes:
  1. If the above does not hold, the iteration may still converge.
  2. This looks similar to the infinity norm of A.
84
Improving Rate of Convergence of G-S Iteration
Relaxation Schemes: blend the new iterate with the old one,
  xi(new) = λ·xi(Gauss-Seidel) + (1 - λ)·xi(old),  where 0.0 < λ ≤ 2.0
(Usually the value of λ is close to 1.)
Underrelaxation (0.0 < λ < 1.0): more weight is placed on the previous value. Often used to make a non-convergent system convergent, or to expedite convergence by damping out oscillations.
Overrelaxation (1.0 < λ ≤ 2.0): more weight is placed on the new value. It assumes that the new value is heading in the right direction, and hence pushes the new value closer to the true solution.
The choice of λ is highly problem-dependent and empirical, so relaxation is usually only used for often-repeated calculations of a particular class. (A minimal sketch follows.)
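A minimal sketch of relaxation applied to Gauss-Seidel (successive over-relaxation): each new value is the weighted blend described above, with weight lam in (0, 2]. The function name, default lam, and tolerances are illustrative assumptions.

import numpy as np

def gauss_seidel_relaxed(A, b, lam=1.2, tol=1e-10, max_iter=500):
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x_gs = (b[i] - s) / A[i, i]                 # plain Gauss-Seidel update
            x[i] = lam * x_gs + (1.0 - lam) * x_old[i]  # blend with the old value
        if np.linalg.norm(x - x_old, np.inf) <= tol * np.linalg.norm(x, np.inf):
            break
    return x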
85
Why Iterative Solutions?
We often need to solve Ax = b where n is in the thousands: the description of a building or airframe, or finite-difference approximations to PDE's. Most of A's elements will be zero; a finite-difference approximation to Laplace's equation will have only five aij ≠ 0 in each row of A.
Direct method (Gaussian elimination):
  Requires n³/3 flops (say n = 5000: n³/3 ≈ 4 x 10¹⁰ flops)
  Fills in many of the n² - 5n zero elements of A
Iterative methods (Jacobi or Gauss-Seidel):
  Never store A in full (say n = 5000: A would need 4n² ≈ 100 Mb)
  Only need to compute (A - B)x and to solve B x^(t+1) = b - (A - B)x^t
86
Why Iterative Solutions?
Effort: suppose B is diagonal.
  Solving B v = b:        n flops
  Computing (A - B)x:     about 4n flops (five nonzeros per row, one of which belongs to B)
  For m iterations:       about 5mn flops
  For n = m = 5000:       5mn = 1.25 x 10⁸ flops; at worst O(n²).