Section 1.4 Variable step size

Sometimes the function \(f(t,x)\) on the right-hand side of an ODE changes wildly in some parameter window. In this case, a fixed step size \(h\) is not optimal: either it gives a very rough approximation of the solution (if \(h\) is chosen too large with respect to the fluctuations of the function), or it makes the computation take unnecessarily long (if \(h\) is chosen small enough to resolve the fluctuations everywhere).

In these cases, it is better to implement an algorithm that adapts the step size to the region of phase space the solution is currently passing through. Below we present an elementary example of an adaptive method:
  • First, we'll use two fixed-step methods at the same time, so that at every step \(t_n\to t_{n+1}=t_n+h\text{,}\) we have two outcomes \(x_{n+1,1}\) and \(x_{n+1,2}\text{.}\) Notice that, in order to be efficient, we choose the methods so that all evaluations of \(f\) needed by the first are also used by the second.
  • Second, we fix a tolerance parameter \(\epsilon>0\text{.}\)

In this way we get the following algorithm (a code sketch follows the list):

  1. Evaluate the "heuristic" error \(e_n = |x_{n+1,1}-x_{n+1,2}|\text{;}\)
  2. If \(e_n<\epsilon\cdot h\text{,}\) accept the current \(h\) and set \(x_{n+1} = x_{n+1,2}\text{,}\) or some better approximation (depending on the case; see the example below);
  3. Otherwise, decrease \(h\) through some function of \(e_n\) and start again from \(t_n\text{.}\)
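
In code, the accept/reject loop looks roughly as follows. This is a minimal Python sketch: the names step1 and step2 are placeholders for the two fixed-step methods, not part of any particular library, and the shrinking rule anticipates the one derived in the example below.

def adaptive_step(f, t, x, h, eps, step1, step2):
    """Attempt steps from (t, x) until one is accepted; return (t_new, x_new, e_n)."""
    while True:
        x1 = step1(f, t, x, h)      # outcome of the first method
        x2 = step2(f, t, x, h)      # outcome of the second method
        e = abs(x1 - x2)            # the "heuristic" error e_n
        if e < eps * h:             # tolerance test: accept the current h
            return t + h, x2, e     # caller may replace x2 by a better combination
        h = 0.9 * eps * h**2 / e    # otherwise shrink h and retry from t_n

On acceptance the sketch returns \(x_{n+1,2}\text{;}\) in the toy example below one returns the better combination \(2x_{n+1,2}-x_{n+1,1}\) instead.
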
A Toy Example: Adaptive Euler

We follow here a nice toy model by Joel Feldman. As the first of the two methods, we use the explicit Euler method, namely
\begin{equation*} x_{n+1,1} = x_n + h f(t_n,x_n). \end{equation*}
Remember that the local error of explicit Euler is
\begin{equation*} \ell_n = K h^2 + O(h^3), \end{equation*}
or, in other words,
\begin{equation*} x(t_n+h) = x_{n+1,1} + K h^2 + O(h^3), \end{equation*}
where \(x(t)\) is the solution of \(x'=f(t,x)\) with \(x(t_n)=x_n\text{.}\)

As the second method, we use two steps of the explicit Euler method with half step size, namely
\begin{equation*} x_{n+1,2} = x_n + \frac{h}{2} f(t_n,x_n) + \frac{h}{2} f(t_n+\frac{h}{2},x_n+\frac{h}{2}f(t_n,x_n)). \end{equation*}
In this case, the local error is
\begin{equation*} \ell_n = K \left[\frac{h}{2}\right]^2 + O(h^3) + K \left[\frac{h}{2}\right]^2 + O(h^3) = K\frac{h^2}{2}+O(h^3). \end{equation*}
Hence
\begin{equation*} e_n = |x_{n+1,1}-x_{n+1,2}| = K\frac{h^2}{2}+O(h^3). \end{equation*}
The key idea of this method is that we can now use these two estimates to measure the rate at which error is being introduced close to \((t_n,x_n)\text{:}\)
\begin{equation*} \frac{e_n}{h} \simeq \frac{K}{2}h. \end{equation*}
If \(e_n<\epsilon\cdot h\text{,}\) we accept \(h\) and set
\begin{equation*} x_{n+1} = 2x_{n+1,2}-x_{n+1,1}. \end{equation*}

Notice indeed that
\begin{equation*} x(t_n+h) = 2x_{n+1,2}-x_{n+1,1} + O(h^3) \end{equation*}
because the second-order terms cancel out!
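
To see the cancellation explicitly, write the two local-error expansions side by side:
\begin{equation*} x(t_n+h) = x_{n+1,1} + K h^2 + O(h^3) \quad\text{and}\quad x(t_n+h) = x_{n+1,2} + K\frac{h^2}{2} + O(h^3); \end{equation*}
multiplying the second identity by \(2\) and subtracting the first removes the \(Kh^2\) terms and leaves exactly \(x(t_n+h) = 2x_{n+1,2}-x_{n+1,1} + O(h^3)\text{.}\)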

If, instead, \(e_n\geq\epsilon\cdot h\text{,}\) then we choose a new step size \(h'\) so that
\begin{equation*} \frac{K}{2}h'<\epsilon, \end{equation*}
for instance
\begin{equation*} h'=0.9\epsilon\frac{h^2}{e_n}, \end{equation*}
and we repeat the process.

Finally, at every new step we start with the trial step size
\begin{equation*} h_{n+1} = 0.9\epsilon \frac{h_n^2}{e_n}\text{.} \end{equation*}
Notice that, since \(e_n<\epsilon\cdot h_n\) whenever the previous step was accepted, we have \(h_{n+1}>0.9\,h_n\text{,}\) so the step size can also grow in regions where the error is small.
We present below an implementation of this algorithm to solve our main ODE example:
\begin{equation*} x' = -x\cos t,\;x(0)=1 \end{equation*}
We plot the solution for \(0\leq t\leq 25\text{.}\)
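
The following is a minimal Python sketch of the method, using NumPy and Matplotlib; the initial step size \(h_0=0.1\text{,}\) the tolerance \(\epsilon=10^{-3}\text{,}\) and the small floor guarding the division are illustrative choices. Since the equation is separable, the exact solution \(x(t)=e^{-\sin t}\) is plotted for comparison.

import numpy as np
import matplotlib.pyplot as plt

def f(t, x):
    return -x * np.cos(t)

def adaptive_euler(f, t0, x0, t_end, h0=0.1, eps=1e-3):
    """Adaptive explicit Euler with step halving, as described above."""
    ts, xs = [t0], [x0]
    t, x, h = t0, x0, h0
    while t < t_end:
        h = min(h, t_end - t)                     # do not step past t_end
        while True:
            x1 = x + h * f(t, x)                  # one full Euler step
            xm = x + (h / 2) * f(t, x)            # first half step
            x2 = xm + (h / 2) * f(t + h / 2, xm)  # second half step
            e = abs(x1 - x2)                      # heuristic error e_n
            if e < eps * h:                       # accept the current h
                break
            h = 0.9 * eps * h**2 / e              # shrink h, retry from t_n
        t += h
        x = 2 * x2 - x1                           # the h^2 error terms cancel
        ts.append(t)
        xs.append(x)
        h = 0.9 * eps * h**2 / max(e, 1e-15)      # trial step for the next step
    return np.array(ts), np.array(xs)

ts, xs = adaptive_euler(f, 0.0, 1.0, 25.0)
plt.plot(ts, xs, ".-", label="adaptive Euler")
plt.plot(ts, np.exp(-np.sin(ts)), "--", label=r"exact solution $e^{-\sin t}$")
plt.xlabel(r"$t$")
plt.ylabel(r"$x(t)$")
plt.legend()
plt.show()

The accepted step sizes should shrink where the solution varies quickly and grow again where it is nearly flat.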