CS110 Computational Engineering Part 3 - PowerPoint PPT Presentation

1
CS-110 Computational Engineering Part 3
  • A. Avudainayagam
  • Department of Mathematics

2
Root Finding: f(x) = 0
  • Method 1: The Bisection Method
  • Theorem: If f(x) is continuous in [a, b] and f(a)f(b) < 0, then there is at least one root of f(x) = 0 in (a, b).

[Figure: two sketches of y = f(x) on [a, b] with f(a) < 0 and f(b) > 0, the bisection iterates x1, x2, ... marked. Left: multiple roots in (a, b). Right: a single root in (a, b).]
3
The Method
  • Find an interval [x0, x1] so that f(x0)f(x1) < 0. (This may not be easy. Use your knowledge of the physical phenomenon which the equation represents.)
  • Cut the interval length into half with each iteration, by examining the sign of the function at the mid point x2 = (x0 + x1)/2.
  • If f(x2) = 0, x2 is the root.
  • If f(x2) ≠ 0 and f(x0)f(x2) < 0, the root lies in [x0, x2].
  • Otherwise the root lies in [x2, x1].
  • Repeat the process until the interval shrinks to a desired level.

4
Number of Iterations and Error Tolerance
  • Length of the interval (where the root lies) after n iterations: (x1 - x0)/2ⁿ.
  • We can fix the number of iterations so that the root lies within an interval of chosen length ε (error tolerance): choose n with (x1 - x0)/2ⁿ ≤ ε.
  • If n satisfies this, the midpoint of the final interval lies within a distance ε/2 of the actual root.

5
  • Though the root lies in a small interval, |f(x)| may not be small if f(x) has a large slope.
  • Conversely, if |f(x)| is small, x may not be close to the root if f(x) has a small slope.
  • So we use both these facts for the termination criterion: we first choose an error tolerance ε on f(x), stopping when |f(x)| < ε, and m, the maximum number of iterations.
6
Pseudo code (Bisection Method)
1. Input ε > 0, m > 0, x1 > x0 so that f(x0) f(x1) < 0.
   Compute f0 = f(x0).
   k = 1 (iteration count)
2. Do (a) Compute x2 = (x0 + x1)/2 and f2 = f(x2).
      (b) If f2 f0 < 0, set x1 = x2;
          otherwise set x0 = x2 and f0 = f2.
      (c) Set k = k + 1.
3. While |f2| > ε and k ≤ m.
4. Set x = x2, the root.
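The pseudo code above can be sketched in Python; the function name and default tolerances are my choices:

```python
def bisect(f, x0, x1, tol=1e-10, max_iter=100):
    """Bisection: halve the bracketing interval, keeping the sign change."""
    f0 = f(x0)
    if f0 * f(x1) > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(max_iter):
        x2 = 0.5 * (x0 + x1)
        f2 = f(x2)
        if abs(f2) < tol:          # |f(x2)| small enough: stop
            break
        if f0 * f2 < 0:            # root lies in [x0, x2]
            x1 = x2
        else:                      # root lies in [x2, x1]
            x0, f0 = x2, f2
    return x2
```

For example, `bisect(lambda x: x*x - 5, 2, 3)` brackets and finds √5 ≈ 2.2360679.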
7
False Position Method (Regula Falsi)
  • Instead of bisecting the interval [x0, x1], we choose as x2 the point where the straight line through the end points meets the x-axis, and bracket the root with [x0, x2] or [x2, x1] depending on the sign of f(x2).

8
False Position Method
[Figure: the straight line through (x0, f0) and (x1, f1) on the curve y = f(x) meets the x-axis at the new end point x2.]
9
False Position Method (Pseudo Code)
1. Choose ε > 0 (tolerance on |f(x)|),
   m > 0 (maximum number of iterations),
   k = 1 (iteration count),
   x0, x1 (so that f0 f1 < 0).
2. a. Compute x2 = x1 - f1 (x1 - x0)/(f1 - f0) and f2 = f(x2).
   b. If f0 f2 < 0, set x1 = x2, f1 = f2; otherwise set x0 = x2, f0 = f2.
   c. k = k + 1.
3. While (|f2| ≥ ε) and (k ≤ m).
4. x = x2, the root.
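A minimal Python sketch of regula falsi, using the chord intersection x2 = x1 - f1(x1 - x0)/(f1 - f0); names and defaults are mine:

```python
def false_position(f, x0, x1, tol=1e-10, max_iter=200):
    """Regula falsi: replace the midpoint by the chord's x-axis crossing."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("root not bracketed")
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # chord meets the x-axis
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        if f0 * f2 < 0:            # root in [x0, x2]
            x1, f1 = x2, f2
        else:                      # root in [x2, x1]
            x0, f0 = x2, f2
    return x2
```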
10
Newton-Raphson Method / Newton's Method
  • At an approximation xk to the root, the curve is approximated by the tangent to the curve at xk, and the next approximation xk+1 is the point where the tangent meets the x-axis.

[Figure: tangents to y = f(x) at the successive iterates x0, x1, x2 approaching the root α.]
11
Tangent at (xk, fk): y = f(xk) + f'(xk)(x - xk).
This tangent cuts the x-axis at xk+1 = xk - f(xk)/f'(xk).
  • Warning: if f'(xk) is very small, the method fails.
  • Two function evaluations per iteration (f and f').

12
Newton's Method - Pseudo code
1. Choose ε > 0 (function tolerance |f(x)| < ε),
   m > 0 (maximum number of iterations),
   x0 (initial approximation),
   k = 1 (iteration count).
   Compute f0 = f(x0).
2. Do q = f'(x0) (evaluate the derivative at x0)
      x1 = x0 - f0/q
      x0 = x1
      f0 = f(x0)
      k = k + 1
3. While (|f0| ≥ ε) and (k ≤ m)
4. x = x1, the root.

13
Getting caught in a cycle of Newton's Method
[Figure: the iterates jump back and forth between xk and xk+1.]
  • Alternate iterations fall at the same point.
  • No convergence.

14
Newton's Method for finding the square root of a number: x = √a
f(x) = x² - a = 0
Example: a = 5, initial approximation x0 = 2.
x1 = 2.25
x2 = 2.236111111
x3 = 2.236067978
x4 = 2.236067978
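The iteration xk+1 = xk - f(xk)/f'(xk) can be sketched as follows (function name and defaults are mine); applied to f(x) = x² - 5 with x0 = 2 it reproduces the iterates above:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent at x down to the x-axis."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)         # fails if df(x) is very small
    return x

root = newton(lambda x: x*x - 5, lambda x: 2*x, 2.0)   # ≈ 2.2360679
```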
15
The Secant Method
  • Newton's Method requires 2 function evaluations per iteration (f, f').
  • The Secant Method requires only 1 function evaluation per iteration and converges nearly as fast as Newton's Method at a simple root.
  • Start with two points x0, x1 near the root (no need to bracket the root as in the Bisection Method or Regula Falsi Method).
  • xk-1 is dropped once xk+1 is obtained.

16
The Secant Method (Geometrical Construction)
[Figure: the secant line through (x0, f0) and (x1, f1) on y = f(x) meets the x-axis at x2, near the root α.]
  • Two initial points x0, x1 are chosen.
  • The next approximation x2 is the point where the straight line joining (x0, f0) and (x1, f1) meets the x-axis.
  • Take (x1, x2) and repeat.

17
The Secant Method (Pseudo Code)
1. Choose ε > 0 (function tolerance |f(x)| ≤ ε),
   m > 0 (maximum number of iterations),
   x0, x1 (two initial points near the root),
   f0 = f(x0), f1 = f(x1),
   k = 1 (iteration count).
2. Do x2 = x1 - f1 (x1 - x0)/(f1 - f0)
      x0 = x1, f0 = f1
      x1 = x2, f1 = f(x2)
      k = k + 1
3. While (|f1| ≥ ε) and (k ≤ m)
4. x = x1, the root.
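A Python sketch of the secant iteration (names and defaults mine); note how xk-1 is dropped each step, so only one new function evaluation is needed:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with the derivative replaced
    by the slope of the line through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1            # drop the oldest point
        x1, f1 = x2, f(x2)
    return x1
```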
18
General remarks on Convergence
  • The false position method in general converges faster than the bisection method (but not always; a counterexample follows).
  • The bisection method and the false position method are guaranteed to converge.
  • The secant method and the Newton-Raphson method are not guaranteed to converge.

19
Order of Convergence
  • A measure of how fast an algorithm converges.
  • Let α be the actual root: f(α) = 0.
  • Let xk be the approximate root at the kth iteration; the error at the kth iteration is ek = |xk - α|.
  • The algorithm converges with order p if there exists a constant λ such that e(k+1) ≤ λ ek^p.

20
Order of Convergence of
  • Bisection method: p = 1 (linear convergence).
  • False position: generally superlinear (1 < p < 2).
  • Secant method: p = (1 + √5)/2 ≈ 1.618 (superlinear).
  • Newton-Raphson method: p = 2 (quadratic).

21
Machine Precision
  • The machine epsilon εM is the smallest positive float that can be added to one and produce a sum that is greater than one.

22
Pseudo code to find Machine Epsilon
1. Set εM = 1
2. Do εM = εM / 2
      x = 1 + εM
3. While (x > 1)
4. εM = 2 εM
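The halving loop translates directly to Python (function name mine); on IEEE-754 double hardware the result is 2⁻⁵² ≈ 2.22 × 10⁻¹⁶:

```python
def machine_epsilon():
    """Halve eps until 1 + eps rounds to 1, then back up one step."""
    eps = 1.0
    while 1.0 + eps > 1.0:     # eps still changes the sum
        eps /= 2.0
    return 2.0 * eps           # last eps that made 1 + eps > 1
```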
23
Solutions of Linear Algebraic Systems
AX = B
A : n × n matrix, B : n × 1 vector, X : n × 1 vector (unknown)
  • If det A ≠ 0, the system has a unique solution.
  • If det A = 0, the matrix A is called a singular matrix.
  • Direct Methods
  • Iterative Methods
24
The Augmented Matrix
[A | B]
25
Elementary row operations
1. Interchange rows j and k.
2. Multiply row k by λ.
3. Add λ times row j to row k.
  • Use elementary row operations to transform the equations so that the solution becomes apparent.

26
Diagonal Matrix (D), Upper Triangular Matrix (U), Lower Triangular Matrix (L)
[Figure: the patterns of nonzero entries for diagonal, upper triangular, and lower triangular matrices.]
27
Gauss Elimination Method
Using row operations, the matrix is reduced to an upper triangular matrix and the solution is obtained by back substitution.
28
Partial Pivot Method
Step 1: Look for the largest element in absolute value (called the pivot) in the first column, and interchange its row with the first row.
Step 2: Multiply row I by an appropriate number and subtract it from row II, so that the first element of the second row becomes zero. Do this for the remaining rows, so that all the elements in the first column except the topmost one are zero.
29
Now look for the pivot in the second column below the first row and interchange its row with the second row. Multiply row II by appropriate numbers and subtract from the remaining rows (below the second row), so that all entries of the second column below the diagonal become zero.
30
Now look for the pivot in the third column below the second row and interchange its row with the third row. Proceed until the matrix becomes upper triangular. The solution is then obtained by back substitution.
31
Gauss Elimination (an example)
[The augmented matrix of the example system is shown.]
32
[Successive row-reduction steps are shown.]
33
Solution by back substitution
x3 = 5
x2 = (2 - 5)/1.5 = -2
x1 = (2 - 8(5) + 2)/4 = -9
So x1 = -9, x2 = -2, x3 = 5.
34
Gauss Elimination (Pseudo Code)
1. Input the matrix A of order n × n and the RHS vector B, and form the augmented matrix [A|B].
2. For j = 1 to n do
   (a) Compute the pivot index j ≤ p ≤ n such that |Apj| is the largest of |Aij|, i ≥ j.
   (b) If Apj = 0, print "singular matrix" and exit.
   (c) If p > j, interchange rows p and j.
   (d) For each i > j, subtract Aij/Ajj times row j from row i.
3. Now the augmented matrix is [C|D] where C is upper triangular.
4. Solve by back substitution: for j = n down to 1 compute
   Xj = (Dj - Σ(k > j) Cjk Xk) / Cjj.
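The pseudo code above can be sketched in plain Python lists (function name mine, no NumPy assumed):

```python
def gauss_solve(A, B):
    """Solve AX = B by Gauss elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b] for row, b in zip(A, B)]        # augmented matrix [A|B]
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))   # pivot index
        if M[p][j] == 0:
            raise ValueError("singular matrix")
        M[j], M[p] = M[p], M[j]                       # row interchange
        for i in range(j + 1, n):
            m = M[i][j] / M[j][j]
            for k in range(j, n + 1):
                M[i][k] -= m * M[j][k]                # eliminate below the pivot
    X = [0.0] * n
    for j in range(n - 1, -1, -1):                    # back substitution
        X[j] = (M[j][n] - sum(M[j][k] * X[k] for k in range(j + 1, n))) / M[j][j]
    return X
```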
35
Determinant of a Matrix
Some properties of the determinant:
1. det(Aᵀ) = det(A)
2. det(AB) = det(A) det(B)
3. det(U) = U11 U22 U33 ... Unn for triangular U
4. det(E1(k,j) A) = -det(A) (row interchange)
5. det(E2(k,λ) A) = λ det(A) (row scaling)
6. det(E3(k,j,λ) A) = det(A) (adding λ times row j to row k)
36
Matrix Determinant
The Gauss elimination procedure uses two types of row operations:
1. Interchange of rows for pivoting.
2. Add λ times row j to row k.
The first operation changes the sign of the determinant, while the second has no effect on the determinant. To obtain the matrix determinant, follow Gauss elimination to the upper triangular form U, keeping count r of the interchanges of rows.
Then det A = (-1)^r U11 U22 ... Unn.
37
Matrix Determinant (Pseudo Code)
1. Set r = 0.
2. For j = 1 to n do
   (a) Compute the pivot index j ≤ p ≤ n such that |Apj| is largest.
   (b) If Apj = 0, print "singular matrix" and exit.
   (c) If p > j, interchange rows p and j and set r = r + 1.
   (d) For each i > j, subtract Aij/Ajj times row j from row i.
3. det = (-1)^r A11 A22 ... Ann.
38
Matrix Determinant (example)
[The reduction steps of the example matrix A are shown.] After one row interchange (r = 1), the triangular form has diagonal entries 1, 4, 5.5, so
det = (-1)¹ (1)(4)(5.5) = -22.
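A Python sketch of the determinant pseudo code, tracking the sign flip on each row interchange (function name mine):

```python
def det(A):
    """Determinant via Gauss elimination: det A = (-1)^r * product of pivots."""
    n = len(A)
    M = [row[:] for row in A]
    sign = 1.0
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))
        if M[p][j] == 0:
            return 0.0                     # singular matrix
        if p != j:
            M[j], M[p] = M[p], M[j]
            sign = -sign                   # each interchange flips the sign
        for i in range(j + 1, n):
            m = M[i][j] / M[j][j]
            for k in range(j, n):
                M[i][k] -= m * M[j][k]
    d = sign
    for j in range(n):
        d *= M[j][j]                       # product of the diagonal
    return d
```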
39
LU Decomposition
  • When Gauss elimination is used to solve a linear
    system, the row operations are applied
    simultaneously to the RHS vector.
  • If the system is solved again for a different RHS
    vector, the row operations must be repeated.
  • The LU decomposition eliminates the need to repeat the row operations each time a new RHS vector is used.

40
The matrix A (in AX = B) is written as a product of a lower triangular matrix L and an upper triangular matrix U. The diagonal elements of the upper triangular matrix are chosen to be 1 to get a consistent system of equations.
For n = 3, A = LU gives 9 equations for the 9 unknowns L11, L21, L22, L31, L32, L33, U12, U13, U23.
41

First column of L: (L11, L21, L31) = (A11, A21, A31).
First row of U: (U11, U12, U13) = (1, A12/L11, A13/L11).
Second column of L: (L22, L32) = (A22 - L21 U12, A32 - L31 U12).
Second row of U: (U22, U23) = (1, (A23 - L21 U13)/L22).
Third column of L: L33 = A33 - L31 U13 - L32 U23.
42
LU Decomposition (Pseudo Code)
Li1 = Ai1, i = 1, 2, ..., n
U1j = A1j/L11, j = 2, 3, ..., n
For j = 2, 3, ..., n:
   Lij = Aij - Σ(k < j) Lik Ukj, i = j, j+1, ..., n
   Ujk = (Ajk - Σ(m < j) Ljm Umk)/Ljj, k = j+1, ..., n
43
LU Decomposition (Example)

44
The values of U are stored in the zero space of L: Q = L \ U.
After each element of A is employed, it is not needed again, so Q can be stored in place of A.
45
After LU decomposition of the matrix A, the system AX = B is solved in 2 steps: (1) forward substitution, (2) backward substitution.
AX = LUX = B. Put UX = Y, so LY = B.
Forward substitution:
Y1 = B1/L11
Y2 = (B2 - L21 Y1)/L22
Y3 = (B3 - L31 Y1 - L32 Y2)/L33
46
UX = Y.
Back substitution:
x3 = Y3
x2 = Y2 - x3 U23
x1 = Y1 - x2 U12 - x3 U13
47
  • As in Gauss elimination, LU
    decomposition must employ pivoting to
    avoid division by zero and to minimize round off
    errors. The pivoting is done immediately after
    computing each column.
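The column-by-column factorization and the two substitution sweeps can be sketched in Python (function names mine; pivoting omitted for brevity, so this sketch assumes nonzero Ljj):

```python
def crout_lu(A):
    """Factor A = L U with unit diagonal on U, as on the slides."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):                          # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for k in range(j + 1, n):                      # row j of U
            U[j][k] = (A[j][k] - sum(L[j][m] * U[m][k] for m in range(j))) / L[j][j]
    return L, U

def lu_solve(L, U, B):
    """Solve L(UX) = B: forward substitution for Y, then back substitution."""
    n = len(L)
    Y = [0.0] * n
    for i in range(n):                                 # LY = B
        Y[i] = (B[i] - sum(L[i][k] * Y[k] for k in range(i))) / L[i][i]
    X = [0.0] * n
    for i in range(n - 1, -1, -1):                     # UX = Y (unit diagonal)
        X[i] = Y[i] - sum(U[i][k] * X[k] for k in range(i + 1, n))
    return X
```

Once `crout_lu` has run, `lu_solve` can be called repeatedly for different RHS vectors without refactoring A.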

48
Matrix Inverse Using the LU Decomposition
  • LU decomposition can be used to obtain the
    inverse of the original coefficient matrix.
  • Each column j of the inverse is determined
    by using a unit vector (with 1 in the jth
    row ) as the RHS vector

49
Matrix inverse using LU decomposition (an example)
[The factors L and U of the example matrix A are shown.] The first column of the inverse of A is obtained by solving with the RHS vector (1, 0, 0); the second and third columns are obtained by taking the RHS vector as (0, 1, 0) and (0, 0, 1).
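The column-by-column idea (solve once per unit vector) can be sketched compactly by carrying all the unit vectors at once, i.e. Gauss-Jordan on [A | I]; this shortcut is my substitution for the slide's LU route, and the function name is mine:

```python
def inverse(A):
    """Invert A by reducing [A | I] to [I | A^-1] with partial pivoting."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]                  # augmented [A | I]
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))
        M[j], M[p] = M[p], M[j]                       # pivot row to the top
        piv = M[j][j]
        M[j] = [v / piv for v in M[j]]                # scale pivot row to 1
        for i in range(n):
            if i != j:
                m = M[i][j]
                M[i] = [vi - m * vj for vi, vj in zip(M[i], M[j])]
    return [row[n:] for row in M]                     # right half is A^-1
```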
50
Iterative Methods for the Solution of Linear Systems
  • Jacobi iteration
  • Gauss - Seidel iteration

51
Jacobi Iteration
AX = B; for n = 3:
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3
Preprocessing:
1. Arrange the equations so that the diagonal terms of the coefficient matrix are not zero.
2. Make row transformations if necessary to make the diagonal elements as large as possible.
52
Rewrite the equations as
x1 = (b1 - a12 x2 - a13 x3)/a11
x2 = (b2 - a21 x1 - a23 x3)/a22
x3 = (b3 - a31 x1 - a32 x2)/a33
Jacobi Pseudo Code
  • Choose an initial approximation vector (x1, x2, x3). If no prior information is available, (0, 0, 0) will do.
  • Iterate: xi(new) = (bi - Σ(j ≠ i) aij xj)/aii, 1 ≤ i ≤ n.
  • Stop when the change between successive iterates falls below a chosen tolerance.
53
Diagonally Dominant Matrices
|aii| > Σ(j ≠ i) |aij| for each row i, i.e., the diagonal element of each row is larger than the sum of the absolute values of the other elements in the row. Diagonal dominance is a sufficient but not necessary condition for the convergence of the Jacobi or Gauss-Seidel iteration.
54
The Gauss-Seidel iteration
  • In the Jacobi iteration, all the components of the new estimate of the vector are computed from the current estimate.
  • However, when xi is computed, the updated estimates x1, ..., x(i-1) are already available.
  • The Gauss-Seidel iteration takes advantage of the latest information available when updating an estimate.

55
Jacobi vs Gauss-Seidel
Jacobi, each iteration (all right-hand sides use the old estimates):
x1 = (b1 - a12 x2 - a13 x3) / a11
x2 = (b2 - a21 x1 - a23 x3) / a22
x3 = (b3 - a31 x1 - a32 x2) / a33
Gauss-Seidel, each iteration (each right-hand side uses the newest values available):
x1 = (b1 - a12 x2 - a13 x3) / a11
x2 = (b2 - a21 x1 - a23 x3) / a22   (uses the new x1)
x3 = (b3 - a31 x1 - a32 x2) / a33   (uses the new x1 and x2)
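Both iterations can be sketched in a few lines of Python (names, tolerance, and stopping rule are mine); the only difference is whether updated components are used immediately:

```python
def jacobi(A, B, x0=None, tol=1e-10, max_iter=500):
    """Jacobi: build the whole new vector from the old one."""
    n = len(A)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(max_iter):
        new = [(B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
               for i in range(n)]
        if max(abs(new[i] - x[i]) for i in range(n)) < tol:
            return new
        x = new
    return x

def gauss_seidel(A, B, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: overwrite each component as soon as it is computed."""
    n = len(A)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            new = (B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new                      # latest information used immediately
        if delta < tol:
            break
    return x
```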
56
Matrix Condition Number
  • Vector and matrix norms: a norm is a real-valued function that gives a measure of the size or length of a vector or matrix.
  • Example: the length √(a² + b² + c²) of the vector A = (a, b, c) is a norm.
For the n-dimensional vector A = (x1, x2, x3, ..., xn), ||A|| = √(Σ xi²); for the matrix [Aij], ||A|| = √(Σij Aij²). These are called Euclidean norms.
57
Other Useful Norms
For vectors:
  • ||x||p = (Σ |xi|^p)^(1/p) (the p-norm)
  • ||x||1 = Σ |xi| (sum of the absolute values of the elements)
  • ||x||∞ = max |xi| (maximum magnitude norm)
For matrices:
  • ||A||1 = max over j of Σi |Aij| (column sum norm)
  • ||A||∞ = max over i of Σj |Aij| (row sum norm)
58
Properties of norms
1. ||x|| > 0 for x ≠ 0
2. ||x|| = 0 for x = 0
3. ||λx|| = |λ| ||x||
4. ||x + y|| ≤ ||x|| + ||y||
59
Ill Conditioning
Consider the system [coefficient matrix not reproduced in this transcript] with exact solution x1 = 201, x2 = 100. Consider the slightly perturbed system in which A11 is increased by 1 percent and A22 is decreased by 0.25 percent (approx.). The solution of the perturbed system is x1 = 50, x2 = 24.75: nearly a 75 percent change in the solution.
60
  • When the solution of a system is highly sensitive to the values of the coefficient matrix or the RHS vector, we say that the system (or the matrix) is ill-conditioned.
  • Ill-conditioned systems will result in large accumulated round-off errors.

61
Condition number of a Matrix
  • k(A) = ||A|| ||A⁻¹||
  • This is a measure of the relative sensitivity of the solution to small changes in the RHS vector.
  • If k is large, the system is regarded as ill-conditioned.
  • The row-sum norm is usually employed to find k(A).

62
Condition Number of the Matrix (example)
Row-sum norm of A: ||A|| = max(300, 601) = 601.
Row-sum norm of A⁻¹: ||A⁻¹|| = max(6.01, 3) = 6.01.
Condition number: k(A) = 601 (6.01) ≈ 3612 (large), so A is ill-conditioned.
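A sketch of the row-sum-norm condition number for a 2 × 2 matrix (function names are mine, and the nearly singular test matrix below is an illustrative choice, not the slide's example):

```python
def row_sum_norm(A):
    """Infinity norm: the largest row sum of absolute values."""
    return max(sum(abs(a) for a in row) for row in A)

def inverse_2x2(A):
    """Explicit 2x2 inverse via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def condition_number(A):
    """k(A) = ||A|| * ||A^-1|| in the row-sum norm."""
    return row_sum_norm(A) * row_sum_norm(inverse_2x2(A))
```

For the nearly singular matrix [[1, 2], [2, 4.001]] this gives a condition number above 10⁴, flagging ill-conditioning.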
63
  • Equations that are not properly scaled may give rise to large condition numbers.
Consider [a poorly scaled 2 × 2 system]: k(A) = ||A|| ||A⁻¹|| = 2000 (0.5005) = 1001 (large).
64
Equilibration: scale each row of A by its largest element. For the same system this gives k(A) = ||A|| ||A⁻¹|| = 2(1) = 2, a great improvement in the condition number.
65
  • Checking the answer is not enough when the condition number is high.
  • Consider the system 34x + 55y = 21, 55x + 89y = 34. Given x = -0.11 and y = 0.45:
34(-0.11) + 55(0.45) - 21 = 0.01 ≈ 0
55(-0.11) + 89(0.45) - 34 = 0.00 ≈ 0
yet (x, y) may still be far from the exact solution.
66
Interpolation
[Figure: data points P1, P2, P3, P4 in the xy-plane with a curve passing through them.]
Given the data (xk, yk), k = 1, 2, 3, ..., n, find a function f passing through each of the data samples:
f(xk) = yk, 1 ≤ k ≤ n.
Once f is determined, we can use it to predict the value of y at points other than the samples.
67
Piecewise Linear Interpolation
  • A straight line segment is used between each adjacent pair of the data points.
[Figure: data at x1, ..., x5 joined by straight line segments.]
f(x) = yk + (y(k+1) - yk)(x - xk)/(x(k+1) - xk) for xk ≤ x ≤ x(k+1), 1 ≤ k ≤ n - 1.
68
Polynomial Interpolation
For the data set (xk, yk), k = 1, ..., n, we find a polynomial of degree (n - 1), which has n constants:
f(x) = c1 + c2 x + ... + cn x^(n-1).
The n interpolation constraints f(xk) = yk give n linear equations for the n constants.
  • Not feasible for large data sets, since the condition number of this system increases rapidly with increasing n.

69
Lagrange Interpolating Polynomial
The Lagrange interpolating polynomial Qk(x) is constructed as follows:
Qk(x) = Π(j ≠ k) (x - xj)/(xk - xj).
70
  • Each Qk(x) is a polynomial of degree (n - 1).
  • Qk(xj) = 1 for j = k, and 0 for j ≠ k.
  • The polynomial curve that passes through the data set (xk, yk), k = 1, 2, ..., n is
    f(x) = y1 Q1(x) + y2 Q2(x) + ... + yn Qn(x).
  • The polynomial is written directly, without having to solve a system of equations.

71
Lagrange interpolation (Pseudo Code)
Choose x, the point where the function value is required.
y = 0
Do for i = 1 to n
   product = yi
   Do for j = 1 to n
      if (i ≠ j) product = product × (x - xj)/(xi - xj)
   End Do
   y = y + product
End Do
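The pseudo code above translates directly to Python (function name mine); each inner product is one Lagrange basis term yi Qi(x):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    n = len(xs)
    y = 0.0
    for i in range(n):
        product = ys[i]                    # start the term at y_i
        for j in range(n):
            if i != j:
                product *= (x - xs[j]) / (xs[i] - xs[j])
        y += product
    return y
```

For the three samples (0, 0), (1, 1), (2, 4) lying on y = x², evaluating at x = 3 recovers 9.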
72
Least Squares Fit
If the number of samples is large, or if the dependent variable contains measurement noise, it is often better to find a function which minimizes an error criterion such as
E = Σ(k = 1 to n) (f(xk) - yk)².
A function that minimizes E is called the least squares fit.
73
Straight Line Fit
  • To fit a straight line through the n points (x1, y1), (x2, y2), ..., (xn, yn):
  • Assume f(x) = a1 + a2 x.
  • Error: E = Σ (a1 + a2 xk - yk)².
  • Find a1, a2 which minimize E.
74
[The normal equations for a1 and a2 are shown.]
75
Straight Line Fit (an example)
Fit a straight line through the five points (0, 2.10), (1, 2.83), (2, 1.10), (3, 3.20), (4, 3.90), using n = 5 in the normal equations.
Result: a1 = 1.84, a2 = 0.395, so f(x) = 1.84 + 0.395x.
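The closed-form solution of the two normal equations can be sketched as follows (function name mine):

```python
def line_fit(xs, ys):
    """Least-squares straight line f(x) = a1 + a2*x through the samples."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a2 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a1 = (sy - a2 * sx) / n                          # intercept
    return a1, a2
```

On exactly collinear data, e.g. (0, 1), (1, 3), (2, 5), it recovers the line y = 1 + 2x.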
76
Data Representation
Integers: Fixed Point Numbers
Decimal system (base 10) uses 0, 1, 2, ..., 9:
(396)10 = (6 × 10⁰) + (9 × 10¹) + (3 × 10²)
Binary system (base 2) uses 0, 1:
(11001)2 = (1 × 2⁰) + (0 × 2¹) + (0 × 2²) + (1 × 2³) + (1 × 2⁴) = (25)10
77
Decimal to Binary Conversion
Convert (39)10 to binary (base 2) by repeated division by 2:
39 ÷ 2 = 19, remainder 1
19 ÷ 2 = 9, remainder 1
9 ÷ 2 = 4, remainder 1
4 ÷ 2 = 2, remainder 0
2 ÷ 2 = 1, remainder 0
1 ÷ 2 = 0, remainder 1
Put the remainders in reverse order:
(100111)2 = (1 × 2⁰) + (1 × 2¹) + (1 × 2²) + (0 × 2³) + (0 × 2⁴) + (1 × 2⁵) = (39)10
78
Largest number that can be stored in m digits:
base 10: (99...9) = 10^m - 1
base 2: (11...1) = 2^m - 1
For m = 3: (999) = 10³ - 1; (111) = 2³ - 1.
Limitation: memory cells consist of multiples of 8 bits (1 byte), each position containing 1 binary digit.
79
  • Common cell lengths for integers: k = 16 bits, k = 32 bits.
Sign-Magnitude Notation
The first bit is used for the sign:
0 = non-negative number, 1 = negative number.
The remaining bits are used to store the binary magnitude of the number.
Limit of a 16-bit cell: (32767)10.
Limit of a 32-bit cell: (2 147 483 647)10.
80
Two's Complement Notation
Definition: the two's complement of a negative integer I in a k-bit cell is 2^k + I.
Example: the two's complement of (-3)10 in a 3-bit cell is (2³ - 3)10 = (5)10 = (101)2, so (-3)10 will be stored as 101.
81
Storage scheme for storing an integer I in a k-bit cell in two's complement notation:
Stored value C = I, if I ≥ 0 (first bit 0);
C = 2^k + I, if I < 0.
82
The two's complement notation admits one more negative number than the sign-magnitude notation.
83
Example: take a 2-bit cell (k = 2).
Range in sign-magnitude notation: -(2¹ - 1) to 2¹ - 1, i.e., -1 to +1 (-1 stored as 11, +1 as 01).
Range in two's complement notation:
two's complement of -1: 2² - 1 = (3)10 = (11)2
two's complement of -2: 2² - 2 = (2)10 = (10)2
two's complement of -3: not possible,
so the range is -2 to +1.
84
Floating Point Numbers
A number has an integer part and a fractional part:
Decimal (base 10): 235.7846, with fractional part (0.7846)10.
Binary (base 2): 10011.11101, with fractional part (0.11101)2.
85
Binary Fraction → Decimal Fraction
(10.11)2:
Integer part (10)2 = 0 × 2⁰ + 1 × 2¹ = 2
Fractional part (0.11)2 = 1 × 2⁻¹ + 1 × 2⁻² = 0.75
Decimal value: (2.75)10
86
Decimal Fraction → Binary Fraction
Convert (0.9)10 to a binary fraction by repeated multiplication by 2:
0.9 × 2 = 1.8, integer part 1
0.8 × 2 = 1.6, integer part 1
0.6 × 2 = 1.2, integer part 1
0.2 × 2 = 0.4, integer part 0
0.4 × 2 = 0.8, integer part 0
then 0.8 repeats, so
(0.9)10 = (0.11100 1100 1100 ...)2, a repeating binary fraction.
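Both conversion procedures (repeated division for the integer part, repeated multiplication for the fractional part) can be sketched in Python; function names are mine:

```python
def int_to_binary(n):
    """Repeated division by 2; remainders read in reverse order."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # prepend, so the order is already reversed
        n //= 2
    return bits or "0"

def fraction_to_binary(f, digits):
    """Repeated multiplication by 2; the integer parts are the binary digits."""
    bits = ""
    for _ in range(digits):
        f *= 2
        bits += str(int(f))
        f -= int(f)
    return bits
```

`int_to_binary(39)` gives "100111" and `fraction_to_binary(0.9, 5)` gives "11100", matching the worked examples above.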
87
Scientific Notation (Decimal)
0.0000747 = 7.47 × 10⁻⁵
31.4159265 = 3.14159265 × 10¹
9,700,000,000 = 9.7 × 10⁹
Binary:
(10.01)2 = (1.001)2 × 2¹
(0.110)2 = (1.10)2 × 2⁻¹
88
The computer stores a binary approximation to x:
x ≈ ± q × 2ⁿ
q = mantissa, n = exponent.
(-39.9)10 = (-100111.11100 1100 ...)2 = (-1.0011111100 1100 ...)2 × (2⁵)10
89
Decimal value of the stored number (-39.9)10:
stored mantissa -1.00111 11100 1100 1100 1100 1 (23 bits), exponent 5.
Of the 32 bits: the first bit is for the sign, the next 8 bits for the exponent, and 23 bits for the mantissa.
The stored value is (-39.900001525878906)10, not exactly -39.9.
90
Round-off Errors can be reduced by Efficient Programming Practice
  • The number of operations (multiplications and additions) must be kept to a minimum (complexity theory).

91
An Example of Efficient Programming
Problem: evaluate the polynomial
P(x) = a4x⁴ + a3x³ + a2x² + a1x + a0
at a given x.
Evaluating each power separately requires 13 multiplications and 4 additions. Bad programming!
92
An Efficient Method (Horner's method)
P(x) = a4x⁴ + a3x³ + a2x² + a1x + a0 = a0 + x(a1 + x(a2 + x(a3 + x a4)))
Requires only 4 multiplications and 4 additions.
Pseudo code for an nth degree polynomial:
Input a(i), n, x
p = 0
For i = n, n-1, ..., 0: p = x p + a(i)
End
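The nested form translates to a one-loop Python function (name and coefficient ordering are my choices):

```python
def horner(coeffs, x):
    """Evaluate a_n x^n + ... + a_1 x + a_0 with n multiplications.
    coeffs is given highest degree first: [a_n, ..., a_1, a_0]."""
    p = 0.0
    for a in coeffs:
        p = x * p + a      # one multiply and one add per coefficient
    return p
```

For P(x) = 2x² + 3x + 1 at x = 2, `horner([2, 3, 1], 2)` returns 15.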
93
Numerical Integration
  • When do we need numerical integration?
  • When the antiderivative of f(x) is difficult to find.
  • When f(x) is available only in tabular form.
94
Trapezoidal Rule
∫(a to b) f(x) dx ≈ (h/2)[f(x0) + 2f(x1) + ... + 2f(x(n-1)) + f(xn)], with h = (b - a)/n.
95
Simpson's Rule
∫(a to b) f(x) dx ≈ (h/3)[f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + ... + 4f(x(n-1)) + f(xn)], with h = (b - a)/n and n even.
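The formulas for these two slides were lost in the transcript; the standard composite rules can be sketched as follows (function names mine):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even.
    Interior points alternate weights 4, 2, 4, 2, ..."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3
```

Simpson's rule is exact for polynomials up to degree 3, so even n = 2 integrates x² over [0, 1] exactly to 1/3.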
96
Euler's Method - Pseudocode
For y' = f(x, y), y(x0) = y0: step with y(k+1) = yk + h f(xk, yk), x(k+1) = xk + h.
97
Euler's Method - Pseudo code (continued)
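The Euler pseudo code itself did not survive in this transcript; the stepping rule y(k+1) = yk + h f(xk, yk) can be sketched as (function name mine):

```python
def euler(f, x0, y0, h, n):
    """Euler's method for y' = f(x, y): take n steps of size h from (x0, y0)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # follow the slope at the current point
        x += h
    return y
```

For y' = y with y(0) = 1, ten steps of h = 0.1 give (1.1)¹⁰ ≈ 2.5937, a first-order approximation to e ≈ 2.7183.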
98
A Second Order Method: Taylor Series Method
99
Taylor's method
100
Taylor Series Method: An Example
101
Taylor Series solution at x = 0.1
102
Finite Difference Method for Boundary Value Problems