Adaptive Stepsize Control
(Transcript and Presenter's Notes)
1
Adaptive Stepsize Control
Goal of adaptive stepsize control:
Achieve some predetermined accuracy in the
solution with minimum computational effort!
Implementing an adaptive stepsize control
requires a stepping algorithm that constantly
monitors the solution and returns information
about its performance.
Most important is an estimate of the truncation
error.
This error estimate is used to control the
stepsize!
2
Adaptive Stepsize Control
A simple implementation of adaptive stepsize
control is step doubling:
Take each step twice:
1. As one full step
2. As two half steps
The difference between the two answers, y_big(x + 2h)
and y_small(x + 2h), is used to estimate the local
truncation error.
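The comparison above can be sketched as follows (a minimal sketch in Python, assuming a fourth-order Runge-Kutta stepper as on the later slides; the function names are illustrative, not from the slides):

```python
import numpy as np

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step from x to x + h."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def step_doubling(f, x, y, h):
    """Take the step from x to x + 2h twice: once as a full step of
    size 2h, once as two half steps of size h.  The difference of the
    two answers estimates the local truncation error."""
    y_big = rk4_step(f, x, y, 2*h)           # one full step
    y_mid = rk4_step(f, x, y, h)             # first half step
    y_small = rk4_step(f, x + h, y_mid, h)   # second half step
    err = np.abs(y_small - y_big)            # local error estimate
    return y_small, err
```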
3
Adaptive Stepsize Control
How can we use this information to control the
stepsize?
Suppose that at the current stepsize h_c we found
the error to be Δ_c.
We want this error to be less than or equal to our
specified ideal error, call it Δ_i.
Since the local truncation error of a fourth-order
method scales as h^5, we can estimate the required
stepsize to be
  h_est = h_c * (Δ_i / Δ_c)^(1/5)
4
Some Practical Considerations
h_est is only an estimate --> set the actual new
stepsize a little smaller than the estimated value:
  h_new = S1 * h_est
A typical value for S1 is S1 = 0.9.
A second safety factor, S2 > 1, is often used to
ensure that the program does not raise or lower
the stepsize too enthusiastically:
  h_new <= S2 * h_c   (prevents a too-large increase)
  h_new >= h_c / S2   (prevents a too-large decrease)
Reasonable variations of this scheme exist.
A typical value for S2 is S2 = 4.0.
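The two safety factors can be combined into a single stepsize update (a sketch assuming a fifth-order local error, as for RK4 with step doubling; the names are illustrative):

```python
def new_stepsize(h_c, err_c, err_i, S1=0.9, S2=4.0, order=5):
    """Propose a new stepsize from the current error estimate err_c
    and the desired error err_i.  h_est is only an estimate, so it is
    shrunk by S1 < 1, and the overall change is clamped to the window
    [h_c / S2, S2 * h_c] to prevent too-enthusiastic adjustments."""
    h_est = S1 * h_c * (err_i / err_c) ** (1.0 / order)
    return min(max(h_est, h_c / S2), S2 * h_c)
```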
5
Some Practical Considerations
Our notation of Δ is a little misleading.
For a set of n 1st-order ODEs, Δ is a vector of
the desired accuracies, one for each equation in
the set of ODEs.
In general, all equations need to be within their
respective accuracies.
We need to scale our stepsize according to the
worst-offender equation.
6
Some Practical Considerations
How to choose Δ_i?
Often we say something like: "The solution has to
be good to within one part in 10^6."
In this case you might want to use fractional
errors,
  Δ_i = eps * |y_i|
where eps is, for example, a number like 10^-6.
If your solution, however, passes through zero,
then this scheme can become problematic.
In this case a useful trick is to include the
derivative in the error scale.
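The zero-crossing trick can be written as a per-equation error scale. A sketch; the combination |y| + |h * dy/dx| is one standard choice for such a scale, not necessarily the exact formula on the original slide:

```python
import numpy as np

def error_scale(y, dydx, h, eps=1e-6):
    """Per-equation tolerance for fractional error control.  Using
    eps * |y| alone breaks down when a component of y passes through
    zero; adding h * |dy/dx| keeps the scale finite there."""
    return eps * (np.abs(y) + np.abs(h * dydx))
```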
7
Global Error Constraints
How can we choose a cautious stepsize adjustor
that will control the global error?
The local error due to one step is related to the
stepsize by
  Δ_local ~ h^5
If we take m = (x2 - x1) / h_est steps of size h_est
across the entire interval [x1, x2], the global error
accumulates to roughly m times the local error, i.e.
it scales as h^4.
We can therefore estimate the required stepsize to
achieve global error control as
  h_est = h_c * (Δ_i / Δ_c)^(1/4)
Slightly more stringent than local error control.
8
Pseudo Code for adaptive RK (evaluates one single step)
Subroutine Adaptive_Runge_Kutta
  Inputs:  x, y(n), h_initial, Δ_i, n
  Outputs: xout, yout(n), h_new
  Set initial variables (e.g., max number of attempts, S1, S2, ...)
  Loop over maximum number of attempts to satisfy error bounds
    - Take the two small steps (call RK4)
    - Take the single big step (call RK4)
    - Compute the estimated truncation error
    - Estimate the new stepsize h_new
    - If error is acceptable, return computed values
  Issue error message if error bound is never satisfied
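The pseudocode might look as follows in Python (a sketch under the slides' assumptions: RK4 stepping, step doubling, safety factors S1 and S2; the scalar tolerance is a simplification of the per-equation vector control, and all names are illustrative):

```python
import numpy as np

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def adaptive_rk_step(f, x, y, h, eps=1e-6, S1=0.9, S2=4.0, max_attempts=20):
    """Evaluate one adaptive step: shrink h until the step-doubling
    error estimate satisfies the (simplified, scalar) error bound."""
    for _ in range(max_attempts):
        y_half = rk4_step(f, x, y, h/2)            # two small steps
        y_small = rk4_step(f, x + h/2, y_half, h/2)
        y_big = rk4_step(f, x, y, h)               # one big step
        err = np.max(np.abs(y_small - y_big))      # worst offender
        tol = eps * (np.max(np.abs(y)) + 1e-30)    # fractional tolerance
        h_new = S1 * h * (tol / (err + 1e-300)) ** 0.2  # local error ~ h^5
        h_new = min(max(h_new, h / S2), S2 * h)    # safety clamp
        if err <= tol:
            return x + h, y_small, h_new           # step accepted
        h = h_new                                  # retry with smaller h
    raise RuntimeError("error bound never satisfied")
```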
9
The Midpoint Method
A few days ago, we introduced the 2nd-order
Runge-Kutta methods, starting with the Midpoint
Method:
  y_{n+1} = y_n + Δx * f(x_n + Δx/2, y_n + (Δx/2) * f(x_n, y_n))
The Midpoint Method is a second-order method. The
local error in the estimate of y_{n+1} is O(Δx^3).
Let's modify this method a little bit.
10
Modified Midpoint Method
Advance the solution of an ODE from x to x+H via
a sequence of n substeps of length h = H/n.
First step: explicit Euler
  z_1 = z_0 + h * f(x, z_0)
2nd to (n-1)st steps: Midpoint Method
  z_{m+1} = z_{m-1} + 2h * f(x + m*h, z_m)
Last step: combination of Midpoint and Euler
  y(x + H) ~ (z_n + z_{n-1} + h * f(x + H, z_n)) / 2
n+1 function evaluations are needed.
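The three stages can be sketched as follows (assuming the standard modified-midpoint formulas; z_m denotes the intermediate solution after m substeps):

```python
import numpy as np

def modified_midpoint(f, x, y, H, n):
    """Advance y from x to x + H using n substeps of size h = H/n:
    one Euler start, n-1 midpoint steps, and an averaged final value.
    Uses exactly n+1 evaluations of f."""
    h = H / n
    z_prev = y
    z_cur = y + h * f(x, y)                 # first step: Euler
    for m in range(1, n):                   # midpoint steps
        z_prev, z_cur = z_cur, z_prev + 2*h * f(x + m*h, z_cur)
    # last step: combination of midpoint and Euler
    return 0.5 * (z_prev + z_cur + h * f(x + H, z_cur))
```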
11
Modified Midpoint Method
The Modified Midpoint Method is useful for two
important reasons:
1. It is second-order accurate, even though only
n+1 function evaluations are needed.
2. It has an error series that can be expressed
in even powers of h.
We can play our usual trick of combining steps
with different h-values to eliminate error terms.
12
Modified Midpoint Method
Using two repeated crossings of the interval with
stepsizes h and h/2,
  y(x + H) ~ (4 * y_{h/2} - y_h) / 3
we can eliminate the h^2 term.
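Because the error series contains only even powers of h, this single Richardson-style combination gains two orders at once. A sketch (the modified-midpoint routine is repeated so the block is self-contained):

```python
import numpy as np

def modified_midpoint(f, x, y, H, n):
    """Modified midpoint crossing of [x, x + H] in n substeps."""
    h = H / n
    z_prev, z_cur = y, y + h * f(x, y)
    for m in range(1, n):
        z_prev, z_cur = z_cur, z_prev + 2*h * f(x + m*h, z_cur)
    return 0.5 * (z_prev + z_cur + h * f(x + H, z_cur))

def extrapolated_step(f, x, y, H, n=2):
    """Cross [x, x + H] with substeps h = H/n and h/2 = H/(2n), then
    combine the answers to cancel the h^2 error term:
        y ~ (4 * y_{h/2} - y_h) / 3
    leaving a fourth-order accurate estimate."""
    y_h = modified_midpoint(f, x, y, H, n)
    y_h2 = modified_midpoint(f, x, y, H, 2 * n)
    return (4.0 * y_h2 - y_h) / 3.0
```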
13
Modified Midpoint Method
The estimate is fourth-order accurate, the same
as fourth-order Runge-Kutta.
However: Modified Midpoint requires about 1.5
evaluations per step; fourth-order Runge-Kutta
requires 4 evaluations per step.
14
Remember Romberg Integration?
For Romberg Integration we had tabulated the
results in a triangular extrapolation table.
Only the first column in the table required an
evaluation of the integrand function, using the
composite trapezoid rule (CTR).
We can use the same scheme here to extrapolate
the solution of our ODE to higher-order accuracy.
Only the first column in the table requires an
evaluation of the ODE, using the Modified Midpoint
Method.

--> Bulirsch-Stoer Method
15
Bulirsch-Stoer Method
The Bulirsch-Stoer Method is for differential
equations representing smooth functions.
A single Bulirsch-Stoer step takes us from x to
x+H. This step consists of many substeps of the
modified midpoint method.
The interval from x to x+H is crossed in
separate attempts with an increasing number n of
substeps.
After each successive n, we can calculate the
extrapolated value and an error estimate.

16
Practical Considerations
  n = 2, 4, 6, 8, 10, 12, 14, 16, ...,  n_j = 2j, ...
After each successive n, we can calculate the
extrapolated value and an error estimate.
Question: How far do we push this scheme?
Remember: Have some mistrust of extrapolation.
Typically: If no acceptable solution is found
after eight crossings of the interval, STOP and
reduce the interval size H.
There might be an obstacle in the way, and we
should not extrapolate across it.

Remember: Bulirsch-Stoer is BEST for smooth
functions.
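A compact sketch of such a step, using polynomial extrapolation in h^2 via Neville's algorithm over the even sequence n = 2, 4, 6, ...; the tolerance handling and names are illustrative, not the slides' exact scheme:

```python
import numpy as np

def modified_midpoint(f, x, y, H, n):
    """Modified midpoint crossing of [x, x + H] in n substeps."""
    h = H / n
    z_prev, z_cur = y, y + h * f(x, y)
    for m in range(1, n):
        z_prev, z_cur = z_cur, z_prev + 2*h * f(x + m*h, z_cur)
    return 0.5 * (z_prev + z_cur + h * f(x + H, z_cur))

def bulirsch_stoer_step(f, x, y, H, eps=1e-10, max_cross=8):
    """Cross [x, x + H] with n = 2, 4, 6, ... modified-midpoint
    substeps and extrapolate the results to substep size h -> 0 with
    a polynomial in h^2 (Neville's algorithm).  Give up after
    max_cross crossings; the caller should then reduce H."""
    h2s, prev_row = [], None
    for k in range(max_cross):
        n = 2 * (k + 1)
        h2 = (H / n) ** 2
        row = [modified_midpoint(f, x, y, H, n)]
        for j in range(k):                  # fill tableau row k
            ratio = h2s[k - 1 - j] / h2     # (h_old / h_new)^2 > 1
            row.append(row[j] + (row[j] - prev_row[j]) / (ratio - 1.0))
        h2s.append(h2)
        if k > 0:
            err = np.max(np.abs(row[-1] - row[-2]))  # error estimate
            if err <= eps:
                return x + H, row[-1]
        prev_row = row
    raise RuntimeError("no acceptable solution after %d crossings; "
                       "reduce the interval size H" % max_cross)
```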