
Transcript and Presenter's Notes

Title: PHYS100 Lecture 16: Numerical Differentiation, 11/03/09


1
PHYS100 Lecture 16: Numerical Differentiation
11/03/09
2
  • 5) Numerical Differentiation (Kiusalaas, p. 182)
  • Given a function, y = f(x), one must determine the derivative, df(x)/dx, at x = xk
  • "Given" means either
  • a) We have an algorithm for computing f(x)
  • Used for checking the algorithm
  • b) We possess a set of data points (xi, yi), i = 1..n
  • The primary use

3
  • Numerical differentiation is related to interpolation
  • Subject to a trade-off between round-off error, created by 16-digit machine precision, and approximation (truncation) error
  • The error of a numerical derivative is usually significantly greater than eps
  • Primary tool → Taylor series approximation
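  • The balance can be modeled roughly (this model is not on the slide; C is a formula-dependent constant and eps is machine epsilon, about 2.2e-16): a difference formula with truncation error of order p has total error about
    E(h) \approx C h^{p} + \frac{\epsilon}{h}, \qquad h^{*} \approx \left(\frac{\epsilon}{p\,C}\right)^{1/(p+1)}
  • Minimizing over h indicates why higher-order formulas reach their smallest error at larger step sizes, as the minima quoted later in the lecture illustrate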

4
  • 5.2) Finite Difference Approximation

5
  • Continuing
  • Consider a grid of function values given at n points x1 .. xn
  • Central differences use function values on both sides of a given x
  • Thus central differences cannot be employed at x1 and xn

6
  • Eq. a →
  • Eq. b →
  • Truncation error is O(h)
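  • Presumably (standard Taylor-series derivation, consistent with the 2nd-order forward formula two slides ahead) Eqs. a and b are the expansions of f(x + h) and f(x - h), which, solved for f'(x), give the first forward and backward differences:
    f(x+h) = f(x) + h f'(x) + \tfrac{h^{2}}{2} f''(x) + O(h^{3}) \quad \text{(a)}
    f(x-h) = f(x) - h f'(x) + \tfrac{h^{2}}{2} f''(x) + O(h^{3}) \quad \text{(b)}
    f'(x) = \frac{f(x+h) - f(x)}{h} + O(h), \qquad f'(x) = \frac{f(x) - f(x-h)}{h} + O(h)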

7
  • Eq. c minus 4 times Eq. a →
  • Additional algebra produces
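  • Presumably, with Eq. c the expansion of f(x + 2h), the combination below reproduces the 2nd-order forward formula coded on the next slide:
    f(x+2h) = f(x) + 2h f'(x) + 2h^{2} f''(x) + \tfrac{4h^{3}}{3} f'''(x) + \cdots \quad \text{(c)}
    f(x+2h) - 4 f(x+h) = -3 f(x) - 2h f'(x) + \tfrac{2h^{3}}{3} f'''(x) + \cdots
    \Rightarrow\ f'(x) = \frac{-f(x+2h) + 4 f(x+h) - 3 f(x)}{2h} + O(h^{2})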

8
  • Example: 2nd order df/dx
  • Uniform grid x1 to xn
  • forward
  •   df_dx(1) = (-f(x(3)) + 4*f(x(2)) - 3*f(x(1)))/(2*h)
  • central
  •   for k = 2:n-1
  •     df_dx(k) = (f(x(k+1)) - f(x(k-1)))/(2*h)
  •   end
  • backward
  •   df_dx(n) = (f(x(n-2)) - 4*f(x(n-1)) + 3*f(x(n)))/(2*h)
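  • The next slide calls first_derivative(@sin, x); below is a minimal MATLAB sketch consistent with the formulas above (the course's actual file is not shown, so the signature and the spacing taken from x are assumptions):

    function df_dx = first_derivative(f, x)
    % Second-order finite-difference derivative of f on the uniform grid x
    n = length(x);
    h = x(2) - x(1);                       % assumes uniform spacing
    df_dx = zeros(size(x));
    % 2nd-order forward difference at the left end
    df_dx(1) = (-f(x(3)) + 4*f(x(2)) - 3*f(x(1)))/(2*h);
    % 2nd-order central differences at interior points
    for k = 2:n-1
        df_dx(k) = (f(x(k+1)) - f(x(k-1)))/(2*h);
    end
    % 2nd-order backward difference at the right end
    df_dx(n) = (f(x(n-2)) - 4*f(x(n-1)) + 3*f(x(n)))/(2*h);
    end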

9
  • >> h = pi/200;
  • >> x = (0:h:pi)';
  • >> df_dx = first_derivative(@sin, x);
  • >> plot(x, abs(df_dx - cos(x)), 'r')

10
  • Investigate the trade-off between truncation and round-off
  • FD1: minimum error of 1.378e-010 at h = 8.315e-009
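  • A sketch of the kind of sweep behind these minimum-error figures, here for the 1st-order forward difference (FD1); the test function sin and the evaluation point x = 1 are assumptions, so the exact numbers will differ:

    % Error of the 1st-order forward difference over a range of step sizes
    f  = @(x) sin(x);
    df = @(x) cos(x);                      % exact derivative for comparison
    x  = 1.0;
    h  = logspace(-16, 0, 400);            % step sizes from 1e-16 to 1
    err = abs((f(x + h) - f(x))./h - df(x));
    [emin, imin] = min(err);
    fprintf('Minimum error of %.3e at h = %.3e\n', emin, h(imin));
    loglog(h, err), xlabel('h'), ylabel('|error|')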

11
  • CD2: minimum error of 5.126e-013 at h = 1.170e-006

12
  • CD4: minimum error of 1.188e-014 at h = 9.682e-004

13
  • CD6: minimum error of 3.331e-016 at h = 8.869e-003

14
  • CD8: minimum error of 1.110e-016 at h = 1.748e-003

15
  • 5.3) Richardson Extrapolation (Kiusalaas, p. 188)
  • Simple, elegant method for improving finite-difference accuracy
  • Exact value represented by G
  • Step-size-dependent approximation represented by g(h)
  • Step-size-dependent error represented by E(h), whose form is
  • E(h) = c*h^p, with c and p constant
  • Thus
  • G = g(h) + E(h)
  • Start with h = h1

16
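  • Presumably the elimination step: write the approximation at two step sizes, h1 and h2 = h1/2, and eliminate the unknown constant c (this yields the p = 2 combination used in the example two slides ahead)
    G = g(h_1) + c h_1^{p}, \qquad G = g(h_2) + c h_2^{p}, \qquad h_2 = \tfrac{h_1}{2}
    \Rightarrow\ G \approx \frac{2^{p} g(h_2) - g(h_1)}{2^{p} - 1}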

17
  • Central difference error:
  • E(h) = c1*h^2 + c2*h^4 + c3*h^6 + ... = O(h^2)
  • Eliminate the dominant error term with p = 2
  • Example of Richardson extrapolation:
  • Approximate the 1st derivative of e^(-x) at x = 1
  • 2nd order central difference
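  • With p = 2 the Richardson formula reduces to the combination evaluated in the MATLAB session on the next slide
    G \approx \frac{4\, g(h_2) - g(h_1)}{3}, \qquad h_2 = \tfrac{h_1}{2}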

18
  • >> df_dxc = @(f, x, h) (f(x + h) - f(x - h))./(2*h);
  • >> f = @(x) exp(-x);
  • >> h1 = 0.1; h2 = h1/2;
  • >> x = 1.0;
  • >> gh1 = df_dxc(f, x, h1);
  • >> gh2 = df_dxc(f, x, h2);
  • >> G = (4*gh2 - gh1)/3
  • G = -3.6788e-001
  • >> exact = -f(x)
  • exact = -3.6788e-001
  • >> error = G - exact
  • error = 7.6664e-008
  • >> error1 = gh1 - exact
  • error1 = -6.1344e-004
  • >> error2 = gh2 - exact
  • error2 = -1.5330e-004