Section 1.5 Higher Order Methods: Runge-Kutta
Even with variable step size, the Euler method converges too slowly to the solution as \(h\to0\text{.}\) There are several ways to increase the global order of convergence and we will see three of them in this and the next two chapters. The one devised by Runge and Kutta can be thought of as extending to ODEs the Newton-Cotes quadrature methods for the numerical evaluation of definite integrals. Concretely, the method uses the integral version of the IVP
\begin{equation*}
\dot x = f(t,x),\;x(t_0)=x_0
\end{equation*}
namely
\begin{equation*}
x_{n+1} = x_n + \int_{t_n}^{t_{n+1}} f(t,x(t))dt,
\end{equation*}
and uses a combination of Newton-Cotes methods and the explicit Euler method to evaluate the integral on the right-hand side. These methods (or, more precisely, their variable step size variants) are among the most used in Scientific Computing and they are implemented in all main languages and ODE libraries.
Subsection 1.5.1 Heun Method
As mentioned before, every numerical quadrature method translates into a method for the numerical solution of ODEs. For instance, the explicit Euler method corresponds to the left Riemann sum; we will see later that the implicit Euler method corresponds to the right Riemann sum. The Heun method we illustrate here corresponds to the trapezoidal rule. If \(f(t,x)\) depended on \(t\) only, the "local" version of the method would be
\begin{equation*}
x_{n+1}=x_n+\frac{h}{2}\left(f(t_n)+f(t_{n+1})\right).
\end{equation*}
Naively, one would want to use
\begin{equation*}
x_{n+1}=x_n+\frac{h}{2}\left(f(t_n,x_n)+f(t_{n+1},x_{n+1})\right)
\end{equation*}
but we have \(x_{n+1}\) on both sides! Of course we do not know \(x_{n+1}\text{;}\) that is exactly what we are trying to evaluate. A way to overcome this problem is to evaluate \(x_{n+1}\) a first time using Euler's method and then use that value in the trapezoidal rule. The method obtained this way is an example of a "two-step method", since two steps are needed to evaluate \(x_{n+1}\text{,}\) and it is called the Heun method:
- \(\displaystyle \hat x_{n+1} = x_n + h f(t_n,x_n)\)
- \(\displaystyle x_{n+1} = x_n + \frac{h}{2}\left(f(t_n,x_n)+f(t_{n+1},\hat x_{n+1})\right)\)
The implementation below finds a numerical approximation of the IVP
\begin{equation*}
\dot x = -x\cos t,\;\;x(0)=1
\end{equation*}
by subdividing the time interval \([0,25]\) into 100 subintervals, namely \(h=0.25\text{.}\) The code saves all intermediate steps and plots them together with the corresponding values of the exact solution \(x(t)=e^{-\sin t}\text{.}\)
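For concreteness, here is a minimal Python sketch of such an implementation; the use of numpy and matplotlib, and all variable names, are our own choices rather than anything fixed by the text.

```python
import numpy as np
import matplotlib.pyplot as plt

# Right-hand side of the IVP  x' = -x*cos(t),  x(0) = 1
def f(t, x):
    return -x * np.cos(t)

t0, t1, n = 0.0, 25.0, 100       # time interval [0, 25], 100 subintervals
h = (t1 - t0) / n                # step size h = 0.25

t = np.linspace(t0, t1, n + 1)
x = np.empty(n + 1)
x[0] = 1.0                       # initial condition

for i in range(n):
    x_hat = x[i] + h * f(t[i], x[i])                                # step 1: Euler predictor
    x[i + 1] = x[i] + h / 2 * (f(t[i], x[i]) + f(t[i + 1], x_hat))  # step 2: trapezoidal corrector

plt.plot(t, x, "o", label="Heun")                 # all intermediate steps
plt.plot(t, np.exp(-np.sin(t)), label="exact")    # exact solution e^{-sin t}
plt.legend()
plt.show()
```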
Just as for the trapezoidal rule, one can prove that the Heun method's local error is \(O(h^3)\) and its global error is \(O(h^2)\text{.}\) Below we verify this fact numerically:
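A quick way to check the order is to repeatedly halve the step size and watch the error at the final time shrink: for a second order method, halving \(h\) should divide the global error by about \(2^2=4\text{.}\) Below is a minimal Python sketch of such a test; the helper name heun and the choice of checkpoints are ours.

```python
import numpy as np

def f(t, x):
    return -x * np.cos(t)

def heun(n):
    """Solve x' = -x*cos(t), x(0) = 1 on [0, 25] with n subintervals;
    return the absolute error at the final time."""
    h = 25.0 / n
    t, x = 0.0, 1.0
    for _ in range(n):
        x_hat = x + h * f(t, x)                        # Euler predictor
        x = x + h / 2 * (f(t, x) + f(t + h, x_hat))    # trapezoidal corrector
        t += h
    return abs(x - np.exp(-np.sin(25.0)))

# The error ratios should approach 2**2 = 4 as h is halved.
for n in (100, 200, 400, 800):
    print(n, heun(n), heun(n) / heun(2 * n))
```

Subsection 1.5.2 The Runge-Kutta 4 Method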
The Heun algorithm above shows the general idea behind Runge-Kutta methods: each method looks exactly like the explicit Euler method, except that the slope used to pass from \((t_n,x_n)\) to \((t_{n+1},x_{n+1})\) is not the value of \(f(t,x)\) at the single point \((t_n,x_n)\) but rather a weighted average of the values of \(f(t,x)\) at several points \((t^{(k)}_n,x^{(k)}_n)\) with \(t_n\leq t^{(k)}_n\leq t_{n+1}\text{.}\) If you want to know more about all these coefficients and their relations, you can find them on the Wikipedia page linked above. Here we illustrate a single but significant example called Runge-Kutta 4 (RK4 for short). As the name suggests, the final slope will be a weighted average of four slopes. Suppose we are at the \(n\)-th step of the algorithm and we have reached the point \((t_n,x_n)\text{.}\) The first slope is just the slope at the starting point:
\begin{equation*}
k_1 = f(t_n,x_n).
\end{equation*}
The second slope is the slope at the point we reach by moving half a time step from \((t_n,x_n)\) with Euler's method:
\begin{equation*}
k_2 = f(t_n+\frac{h}{2},x_n+\frac{h}{2}k_1).
\end{equation*}
The third slope is again the slope at the point we reach by moving half a time step with Euler's method, but this time using the second slope \(k_2\) rather than the slope \(k_1\) at the starting point:
\begin{equation*}
k_3 = f(t_n+\frac{h}{2},x_n+\frac{h}{2}k_2).
\end{equation*}
The fourth slope is the slope at the point we reach by moving a full time step with Euler's method, this time using the third slope \(k_3\):
\begin{equation*}
k_4 = f(t_n+h,x_n+hk_3).
\end{equation*}
Finally, the RK4 rule is
\begin{equation*}
x_{n+1} = x_n + h\frac{k_1+2k_2+2k_3+k_4}{6}.
\end{equation*}
If you are puzzled about where the weights \(1,2,2,1\) of the slopes come from, consider the case when \(f\) does not depend on \(x\text{.}\) In that case clearly \(k_2=k_3\) and RK4 reduces to
\begin{equation*}
x_{n+1} = x_n + h\frac{f(t_n)+4f\left(\frac{t_n+t_{n+1}}{2}\right)+f(t_{n+1})}{6}.
\end{equation*}
Yes, this is exactly Simpson's rule of numerical integration! So RK4 is nothing but the adaptation to ODEs of Simpson's rule. Since in general \(f\) also depends on \(x\text{,}\) the middle term \(f\left(\frac{t_n+t_{n+1}}{2}\right)\) splits "evenly" into two terms, both evaluated at the same \(t\) but at two different values of \(x\text{.}\) We can check this reduction numerically, as in the sketch below.
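As a quick sanity check, the following Python sketch (with our own hypothetical helper rk4_step) verifies that when \(f\) ignores \(x\text{,}\) one RK4 step and one application of Simpson's rule produce the same value up to rounding.

```python
import numpy as np

def rk4_step(f, t, x, h):
    """One step of RK4 from (t, x) with step size h."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

g = lambda t, x: np.exp(t)   # depends on t only, so k2 == k3
t, h = 0.0, 0.5
simpson = h * (g(t, 0) + 4 * g(t + h / 2, 0) + g(t + h, 0)) / 6
print(rk4_step(g, t, 0.0, h), simpson)   # the two values agree
```

The implementation below finds a numerical approximation of our usual IVP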
\begin{equation*}
\dot x = -x\cos t,\;\;x(0)=1
\end{equation*}
by subdividing the time interval \([0,25]\) into 100 subintervals, namely \(h=0.25\text{.}\) The code saves all intermediate steps and plots them together with the corresponding values of the exact solution \(x(t)=e^{-\sin t}\text{.}\)
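Again for concreteness, here is a minimal Python sketch of such an implementation, with the same caveats as before about names and library choices.

```python
import numpy as np
import matplotlib.pyplot as plt

# Right-hand side of the IVP  x' = -x*cos(t),  x(0) = 1
def f(t, x):
    return -x * np.cos(t)

t0, t1, n = 0.0, 25.0, 100       # time interval [0, 25], 100 subintervals
h = (t1 - t0) / n                # step size h = 0.25

t = np.linspace(t0, t1, n + 1)
x = np.empty(n + 1)
x[0] = 1.0                       # initial condition

for i in range(n):
    k1 = f(t[i], x[i])                         # slope at the starting point
    k2 = f(t[i] + h / 2, x[i] + h / 2 * k1)    # slope after half a step along k1
    k3 = f(t[i] + h / 2, x[i] + h / 2 * k2)    # slope after half a step along k2
    k4 = f(t[i] + h, x[i] + h * k3)            # slope after a full step along k3
    x[i + 1] = x[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

plt.plot(t, x, "o", label="RK4")                  # all intermediate steps
plt.plot(t, np.exp(-np.sin(t)), label="exact")    # exact solution e^{-sin t}
plt.legend()
plt.show()
```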
One can prove that the RK4 method's local error is \(O(h^5)\) and its global error is \(O(h^4)\text{.}\) Below we verify this fact numerically:
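As with the Heun method, a minimal Python sketch of such a test (the helper name rk4 is ours) halves the step size repeatedly; for a fourth order method the error at the final time should shrink by about \(2^4=16\) each time.

```python
import numpy as np

def f(t, x):
    return -x * np.cos(t)

def rk4(n):
    """Solve x' = -x*cos(t), x(0) = 1 on [0, 25] with n subintervals;
    return the absolute error at the final time."""
    h = 25.0 / n
    t, x = 0.0, 1.0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return abs(x - np.exp(-np.sin(25.0)))

# The error ratios should approach 2**4 = 16 as h is halved.
for n in (50, 100, 200, 400):
    print(n, rk4(n), rk4(n) / rk4(2 * n))
```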