Computational Methods in Physics PHYS 3437

Transcript and Presenter's Notes

1
Computational Methods in Physics PHYS 3437
  • Dr Rob Thacker
  • Dept of Astronomy & Physics (MM-301C)
  • thacker@ap.smu.ca

2
Today's Lecture
  • Numerical solutions of ordinary differential
    equations
  • Euler method
  • Runge-Kutta
  • Adaptive step size selection
  • You can skip this lecture if you have a
    significant amount of experience with numerical
    ODEs

3
Simplest approach to ODEs
  • To begin, consider a general first-order ordinary
    differential equation, y'(x) = f(x, y)
  • Using a Taylor expansion of the solution y(x):
    y(x + h) = y(x) + h y'(x) + O(h^2) = y(x) + h f(x, y) + O(h^2)
  • This suggests we predict forward function values
    from a starting point (x_0, y_0) via y_{n+1} = y_n + h f(x_n, y_n)
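A minimal sketch of this forward Euler update in Python (the function name euler and the test problem y' = y are illustrative choices, not from the slides):

    import numpy as np

    def euler(f, x0, y0, h, n_steps):
        """Forward Euler: y_{n+1} = y_n + h * f(x_n, y_n)."""
        xs, ys = [x0], [y0]
        for _ in range(n_steps):
            ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
            xs.append(xs[-1] + h)
        return np.array(xs), np.array(ys)

    # Example: y' = y with y(0) = 1; exact solution is e^x.
    xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
    print(ys[-1], np.e)  # Euler estimate vs exact value at x = 1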

4
Crude (forward) Euler solver graphically
[Figure: forward Euler steps of width h from x_0 to x_1 to x_2; the computed values y_0, y_1, y_2 drift progressively further from the true solution y(x) = F(x).]
5
Useful notation
  • n = step number, so that x_n = x_0 + n h
  • Δx = h = width of the interval; y_n denotes the
    numerical approximation to the exact value y(x_n)

6
Numerical errors
  • Discretization error
    • Error resulting from the computation of
      quantities using an approximation to the true
      solution, e.g. from neglecting higher-order terms
    • This error is sometimes called truncation error
    • An unavoidable problem in numerical integration
      work: it would occur even in the presence of
      infinite precision
  • Round-off error
    • The result of finite-precision arithmetic

7
Discretization & round-off errors
  • Suppose the exact solution at a given value of x_n
    is y = F(x_n); the (accumulated) discretization
    error is then defined as E_n = F(x_n) - y_n
  • It is caused by
    • the approximate formula used to estimate y_{n+1}
    • input data at the start of a step that do not
      necessarily correspond to the exact solution
  • The accumulated round-off error is defined as
    R_n = y_n - Y_n, where Y_n is the value we actually
    calculate after round-off rather than the true value y_n
  • So the absolute value of the total error is given
    by the inequality |F(x_n) - Y_n| <= |E_n| + |R_n|

8
Problems with the forward Euler method
  • This method relies upon the derivative at the
    beginning of each interval to predict forward to
    the end of that interval
  • Any errors in the solution tend to get amplified
    quickly
  • The calculated solution therefore diverges from
    the true solution as the error grows
  • We can calculate the error in a single step
    precisely, as follows

9
Local discretization error in Euler Method
The easiest way to calculate the local error is
to use a Taylor expansion:

y(x_{n+1}) = y(x_n) + h y'(x_n) + (h^2/2) y''(ξ_n),  ξ_n in (x_n, x_{n+1})

The first two terms on the right correspond to
Euler's method, so the local discretization error in
the approximation at one step is given by

e_n = (h^2/2) y''(ξ_n) = O(h^2)
10
Error over a series of intervals
Be VERY careful about the distinction between the
local discretization error e_i and the global error
E_n accumulated over a series of intervals. In many
books and lectures a clear definition of which error
is being discussed isn't always given. Roughly
speaking, if the local error is O(h^p) then the
global error will be O(h^(p-1)). This happens
because the number of steps is n = (x_n - x_0)/h and
you sum the local error over n intervals.
Unfortunately, calculating E_n usually requires
fairly complicated derivations that we don't have
time to do.
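As a quick sketch of why one power of h is lost (treating the local error coefficient as a single constant K):

\[
E_n \;\approx\; \sum_{i=1}^{n} e_i \;\approx\; n \, K h^{p}
    \;=\; \frac{x_n - x_0}{h}\, K h^{p}
    \;=\; K \,(x_n - x_0)\, h^{p-1}
\]

So a local error of O(h^p) per step accumulates to a global error of O(h^(p-1)); for the forward Euler method p = 2, giving a global error of O(h).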
[Figure: the local errors committed on each interval x_0, x_1, ..., x_n accumulate into the global error E_n.]
11
Modified Euler Method (extrapolate-integrate)
  • The Euler method is flawed because we use the
    beginning of the interval to predict to the end
  • If we could use the midpoint derivative, that
    would usually be expected to be a better
    approximation
  • If the slope is changing rapidly across an
    interval, using the midpoint slope usually gives
    us something closer to the average over the
    interval (of course it doesn't always have to be
    better, just on average)
  • However, since y'_mid = f(x_mid, y_mid) and we don't
    know y_mid (only y_0), we have to estimate it

12
Modified Euler Method
  • Let y'(x) = f(x, y) and y(x_0) = y_0
  • Let the step size be given by h = x_1 - x_0, with
    f_0 = f(x_0, y_0), x_{1/2} = x_0 + h/2, x_{3/2} = x_1 + h/2, etc.
  • Predict to the midpoint with an Euler half-step, then
    integrate across the whole interval using the midpoint slope:
    y_{1/2} = y_0 + (h/2) f_0
    f_{1/2} = f(x_{1/2}, y_{1/2})
    y_1 = y_0 + h f_{1/2}
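A minimal Python sketch of one such step (the function name modified_euler_step is an illustrative choice):

    def modified_euler_step(f, x, y, h):
        """One modified Euler (midpoint) step."""
        y_half = y + 0.5 * h * f(x, y)   # Euler predictor to the midpoint
        f_half = f(x + 0.5 * h, y_half)  # slope estimated at the midpoint
        return y + h * f_half            # full step using the midpoint slope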

13
Modified Euler Method graphically
In this method the slope is estimated at the
mid-point and y(x) is integrated forward using that
slope.

[Figure: from y_0, a half-step predicts the solution at x_{1/2}; the slope f_{1/2} there carries the solution from x_0 all the way to y_1 at x_1.]

One can show that the errors in this method are
O(h^3) locally, i.e. O(h^2) globally.
14
Improved Euler Method
  • Similar in concept to the trapezoid rule in
    quadrature: by utilizing an average of the start-
    and end-point derivatives we can better
    approximate the change in y over the interval:
    predictor: ŷ_1 = y_0 + h f(x_0, y_0)
    corrector: y_1 = y_0 + (h/2) [f(x_0, y_0) + f(x_1, ŷ_1)]
  • This is the simplest example of the so-called
    predictor-corrector methods (which include the
    Runge-Kutta methods); a code sketch follows below
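A minimal Python sketch of one improved Euler (Heun) step, matching the predictor-corrector pair above:

    def improved_euler_step(f, x, y, h):
        """One improved Euler (Heun) step."""
        f0 = f(x, y)                    # slope at the start point
        y_pred = y + h * f0             # Euler predictor for y(x + h)
        f1 = f(x + h, y_pred)           # slope estimate at the end point
        return y + 0.5 * h * (f0 + f1)  # correct with the average slope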

15
Improved Euler method graphically
[Figure: the improved Euler step from (x_n, y_n) to x_{n+1}, advancing with the average of the slopes at the two end points.]

One can show that the errors in this method are also
O(h^3) locally, i.e. O(h^2) globally.
16
Higher order scheme preliminaries
  • As a first motivation, consider repeating the
    Improved Euler method but with two substeps of
    size h/2

[Figure: two h/2 substeps carry the solution from (x_0, y_0) through x_{1/2} to x_1, compared against the true solution y(x) = F(x). The notation y_{1,h/2} indicates that y_1 was obtained with two h/2 steps.]
17
General points about higher order methods
  • Higher-order accurate schemes will use more
    substep evaluations
  • We need to ensure these are chosen optimally, so
    as to produce the most accuracy for the least
    number of function evaluations
  • Let's go back to the Improved Euler method on a
    series of steps and apply a Richardson-
    extrapolation-style approach

18
4th order Runge-Kutta
  • For a single Improved Euler step the local error
    is O(h^3)
  • A global evaluation with n steps will therefore
    produce an error E_n of order O(h^2)
  • If we halve the size of h, then the global error
    produced with the smaller step size must be 1/4
    that of the previous evaluation
  • That must correspond to the error produced by
    using the two-substep formula y_{1,h/2} over the
    n steps

19
Subtraction step: defining a new y_1
  • Since we know the global error term produced by
    using y_{1,h/2}: Err(y_{1,h/2}) = K (h/2)^2 = K h^2 / 4  (E)
  • The global error term for y_{1,h}: Err(y_{1,h}) = K h^2  (F)
  • Subtract (F) from 4(E) and define a new y_1:
    y_1 = (4 y_{1,h/2} - y_{1,h}) / 3
This formula will actually be globally 4th order
accurate!
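The algebra behind this, written out (a sketch of the step the slide implies):

\[
y_{1,h} = y^{\mathrm{exact}} + K h^{2} + \cdots, \qquad
y_{1,h/2} = y^{\mathrm{exact}} + \tfrac{1}{4} K h^{2} + \cdots
\]
\[
y_1 \equiv \frac{4\, y_{1,h/2} - y_{1,h}}{3}
    = y^{\mathrm{exact}} + (\text{higher-order terms})
\]

The leading K h^2 error terms cancel exactly; as the slide states, the surviving global error turns out to be O(h^4).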
20
Comments
  • There are an infinite number of ways of choosing
    slopes and substeps
  • Mathematically, all these methods are
    parameterized and constrained by writing down the
    appropriate Taylor expansions and matching
    coefficients
  • See C. W. Gear, Numerical Initial Value Problems
    in Ordinary Differential Equations, 1971, pp. 27-36
  • However, one must always make an arbitrary choice
    to derive a specific algorithm
  • Many numerical analysts have considered all the
    possible choices, and 4th-order Runge-Kutta is
    considered a very good method

21
Classical Runge-Kutta method
Equivalent to a five-term Taylor expansion. The
classical 4th-order update is

k_1 = f(x_n, y_n)
k_2 = f(x_n + h/2, y_n + (h/2) k_1)
k_3 = f(x_n + h/2, y_n + (h/2) k_2)
k_4 = f(x_n + h, y_n + h k_3)
y_{n+1} = y_n + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4)

Interpret the weighted sum over the interior points
as an average slope. The local discretization error
is proportional to h^5, so halving h leads to a 1/32
reduction in the local discretization error. This is
a very widely used method.

No proof here: the algebra is extremely lengthy!
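A minimal Python sketch of the classical RK4 step written out above:

    def rk4_step(f, x, y, h):
        """One classical 4th-order Runge-Kutta step."""
        k1 = f(x, y)
        k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(x + h, y + h * k3)
        return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)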
22
Runge-Kutta graphically
[Figure: the four RK4 slope evaluations: one at x_0, two at the midpoint x_{1/2}, and one at x_1.]
23
Comparison of methods
Let's compare solutions of the following system:

x     Euler (h=0.1)   Improved Euler (h=0.1)   RK (h=0.2)   RK (h=0.1)   Exact
0.0   1.0             1.0                      1.0          1.0          1.0
0.2   1.5             1.595                    2.5016       2.5050062    2.5053299
0.4   4.4774          5.6099494                5.7776358    5.7927853    5.7942260
0.6   8.9038240       12.442193                12.997178    13.047713    13.052522
0.8   17.537495       27.348020                28.980768    29.130609    29.144880
1.0   34.411490       59.938223                64.441579    64.858107    64.897803
24
Convergence with changes in h
Solutions now taken at x = 1:

h       Euler        Improved Euler   RK          Exact
0.2     -            -                64.441579   64.897803
0.1     34.411490    59.938223        64.858107   64.897803
0.05    45.588400    63.424698        64.894875   64.897803
0.025   53.807866    64.497931        64.897604   64.897803
0.01    60.037126    64.830722        64.897798   64.897803

The RK relative error falls from 0.7% (h = 0.2) to
0.06% (h = 0.1) to roughly 10^-7 (h = 0.01).

RK with 10 steps is better than improved Euler with
100 steps!
26
Adapting the step size
  • What value of h is optimal?
  • If we consider functions that are smooth in some
    regions and rapidly changing in others, then the h
    for one region is not appropriate in another
  • h ought to vary from one region to another
  • Let's examine an algorithm to vary h
  • We'll use the relative error on a given interval
    as the controlling parameter

27
Algorithm to adapt h
  • 1) Let ε = fractional error tolerance (say 10^-4)
  • 2) Let D = x_max - x_min be the domain over which
    y' = f(x, y) is to be solved
  • 3) Initially let h = 0.1 ε D
  • 4) Use R-K to find y_1 ≈ y(x_min + h) in one step
    of size h
  • 5) Next use two R-K steps of size h/2 to find a
    new estimate y_1' of y(x_min + h) (most accurate
    estimate yet)
  • (We now have two values to which we can apply
    Richardson extrapolation)

28
Algorithm to adapt h
  • 6) Compute y_1'' ≈ y(x_min + h) from y_1 and y_1'
    using Richardson extrapolation
    • Note: since the error is proportional to h^4 we
      must use the 1/16 multiplier: y_1'' = (16 y_1' - y_1)/15
  • 7) Estimate the relative error, err, from the
    difference between the two estimates
  • 8) If err < 0.1 ε → h is too small; increase it by
    a factor of 1.5 for the next step
    • If err > ε → h is too big; halve h and repeat
      the iteration
    • If 0.1 ε <= err <= ε → h is OK
  • 9) When the y value is acceptable, increment all x
    values to do the next zone
  • 10) Stop when x = x_max
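A minimal Python sketch of this loop, built on the rk4_step above (the concrete relative-error expression used for err is an assumption, since the slide's exact formula did not survive; the thresholds follow steps 1-10):

    def solve_adaptive(f, x_min, x_max, y0, eps=1e-4):
        """Adaptive RK4 via step doubling + Richardson extrapolation."""
        x, y = x_min, y0
        h = 0.1 * eps * (x_max - x_min)            # step 3: initial h
        while x < x_max:
            h = min(h, x_max - x)                  # don't overshoot x_max
            y1 = rk4_step(f, x, y, h)              # step 4: one h step
            y_half = rk4_step(f, x, y, 0.5 * h)    # step 5: two h/2 steps
            y1p = rk4_step(f, x + 0.5 * h, y_half, 0.5 * h)
            y1pp = (16.0 * y1p - y1) / 15.0        # step 6: Richardson
            err = abs(y1pp - y1p) / max(abs(y1pp), 1e-30)  # step 7 (assumed form)
            if err > eps:                          # step 8: h too big
                h *= 0.5
                continue
            x, y = x + h, y1pp                     # step 9: accept, advance
            if err < 0.1 * eps:                    # step 8: h too small
                h *= 1.5
        return y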

29
Notes
  • The Richardson extrapolation ensures that even
    though we appear to do 3 R-K steps per zone of
    width h, we actually get a much faster convergence
    rate than R-K with a step of h/3
  • The adaptive grid is a necessary part of the error
    estimation process
  • Without local adaptation you always have to choose
    the h for the hardest regions in your problem, so
    you may well be wasting time in regions that are
    easy to solve
  • It doesn't make any sense to use R-K without
    adaptive stepping!

30
Summary
  • As with numerical integration, low-order methods
    do not exhibit high accuracy
  • RK methods provide a good compromise between
    computational cost and stability
  • Adaptive RK methods are extremely powerful, and
    convergence tests based on Richardson
    extrapolation ensure optimal usage of compute
    cycles

31
Next lecture
  • Issues with the adaptive step size algorithm
  • Second-order ODEs
  • Nth-order ODEs