Section 1.11 Other implicit methods
The same idea used in the Implicit Euler algorithm can be applied to Runge-Kutta and multistep algorithms. Here we present a few elementary examples of each.

Subsection 1.11.1 Runge-Kutta
A general explicit Runge-Kutta algorithm has the form
\begin{equation*}
x_{n+1} = x_n + h\sum_{i=1}^s b_i k_i,
\end{equation*}
where \(s\) is some positive integer and
\begin{equation*}
k_i = f\left(t_n+c_ih,x_n+h\sum_{j=1}^{i-1} a_{ij}k_j\right),\;0\leq c_i\leq1,\;0\leq a_{ij}\leq 1.
\end{equation*}
For instance, in Euler we have that
\begin{equation*}
s=1, b_1 = 1, c_1 = 0, a_{11}=0
\end{equation*}
while in RK4 we have that
\begin{equation*}
s=4, b_1 = b_4 = \frac{1}{6}, b_2=b_3=\frac{1}{3}, c_1=0, c_2=c_3=\frac{1}{2}, c_4=1
\end{equation*}
and the only non-zero \(a_{ij}\) coefficients are
\begin{equation*}
a_{21} = a_{32} = 1/2,\;a_{43}= 1.
\end{equation*}
The implicit Runge-Kutta methods can be obtained by allowing the summation in the expression for \(k_i\) to run up to \(i\text{,}\) namely
\begin{equation*}
k_i = f\left(t_n+c_ih,x_n+h\sum_{j=1}^{i} a_{ij}k_j\right).
\end{equation*}
At this point, each \(k_i\) can potentially depend on itself too and so it must be obtained by solving a (usually highly nonlinear) equation.
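When \(f\) is Lipschitz and \(h\) is small enough, one simple way to solve such a self-referential equation is fixed-point iteration. The following Python sketch (function names and tolerances are our own choices) illustrates this on a single stage \(k_i = f(t_n+c_ih,\,x_n+h(\text{known terms}+a_{ii}k_i))\), using the stage of the implicit Euler method (\(s=1\), \(a_{11}=1\), \(c_1=1\)) as a test case.

```python
def solve_stage(f, t, c_i, x, h, a_ii, known_sum, tol=1e-12, max_iter=100):
    """Fixed-point iteration for k = f(t + c_i*h, x + h*(known_sum + a_ii*k)).
    known_sum collects the already-computed a_ij * k_j terms with j < i."""
    k = f(t + c_i * h, x + h * known_sum)  # explicit initial guess
    for _ in range(max_iter):
        k_new = f(t + c_i * h, x + h * (known_sum + a_ii * k))
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

# single stage of implicit Euler for x' = -5x at x = 1, h = 0.1:
# k solves k = -5(1 + 0.1 k), whose exact solution is k = -10/3
k = solve_stage(lambda t, x: -5 * x, 0.0, 1.0, 1.0, 0.1, 1.0, 0.0)
```

The iteration converges because its contraction factor \(h\,a_{ii}\,|\partial_x f| = 0.5\) is below 1; for stiff problems or larger \(h\) one would switch to Newton's method instead.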
Example: the implicit trapezoidal algorithm. Consider the formula that led us to the Heun method:
\begin{equation*}
x_{n+1} = x_n + h\frac{f(t_n,x_n)+f(t_{n+1},x_{n+1})}{2}.
\end{equation*}
If, rather than replacing \(x_{n+1}\) with its value found via the explicit Euler method, we insist on leaving \(x_{n+1}\) in place, we obtain a second-order implicit Runge-Kutta method called the trapezoidal method. We can write this method in the form above as follows:
\begin{gather*}
x_{n+1} = x_n + h\left(\frac{1}{2}k_1+\frac{1}{2}k_2\right),\\
k_1 = f(t_n,x_n),\\
k_2 = f\left(t_{n+1},x_n+h\left(\frac{1}{2}k_1+\frac{1}{2}k_2\right)\right).
\end{gather*}
Hence here the coefficients are
\begin{equation*}
s=2, b_1=b_2=\frac{1}{2}, c_1=0, c_2=1,
\end{equation*}
and the only non-zero \(a_{ij}\) coefficients are
\begin{equation*}
a_{21} = a_{22} = 1/2.
\end{equation*}
Notice that in explicit methods we necessarily have \(a_{ij}=0\) when \(i\leq j\text{.}\)

Subsection 1.11.2 Multistep
In this case the pattern is even simpler: just shift forward by one the set of \(k+1\) points used to interpolate, namely use
\begin{equation*}
(t_{n+1},f_{n+1}),\,(t_{n},f_{n}),\dots,\,(t_{n-k+1},f_{n-k+1}).
\end{equation*}
This way, \(x_{n+1}\) appears on both sides of the algorithm step. These algorithms are called Adams-Moulton methods.
Example: the Adams-Moulton method with \(k=1\). We repeat here the steps we followed for the Adams-Bashforth method of the same order. The two points used to interpolate \(f(t,x(t))\) are now \((t_{n+1},f_{n+1})\) and \((t_{n},f_{n})\text{.}\) The corresponding Lagrange polynomials are
\begin{gather*}
L_0(t) = \frac{t-t_{n}}{t_{n+1}-t_{n}}=\frac{1}{h}(t-t_{n}),\\
L_1(t) = \frac{t-t_{n+1}}{t_{n}-t_{n+1}}=-\frac{1}{h}(t-t_n-h)
\end{gather*}
and so the interpolating polynomial is
\begin{equation*}
p(t) = f_{n+1}L_0(t) + f_{n}L_1(t) = \frac{f_{n+1}}{h}(t-t_{n})
- \frac{f_{n}}{h}(t-t_n-h).
\end{equation*}
Hence the \(k=1\) Adams-Moulton algorithm step is given by
\begin{gather*}
x_{n+1} = x_n + \int_{t_{n}}^{t_n+h} p(t)\,dt = x_n + \frac{f_{n+1}}{2h}(t-t_{n})^2\bigg|_{t_{n}}^{t_n+h} - \frac{f_{n}}{2h}(t-t_n-h)^2\bigg|_{t_{n}}^{t_n+h}\\
= x_n + h\left(\frac{f_{n+1}}{2} + \frac{f_n}{2}\right).
\end{gather*}
More explicitly,
\begin{equation*}
x_{n+1} = x_n + h\frac{f(t_{n+1},x_{n+1})+f(t_{n},x_{n})}{2},
\end{equation*}
namely the Adams-Moulton method with \(k=1\) coincides with the implicit trapezoidal method.
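As a sanity check, we can verify numerically that this method is second order: halving the step size should divide the global error by roughly four. A Python sketch (the test problem \(x'=-x\) and all names are our own choices), solving the implicit step by fixed-point iteration:

```python
import math

def am1_step(f, t, x, h, iters=50):
    """One Adams-Moulton k=1 (implicit trapezoidal) step,
    with the implicit equation solved by fixed-point iteration."""
    x_new = x + h * f(t, x)  # explicit Euler predictor
    for _ in range(iters):
        x_new = x + 0.5 * h * (f(t, x) + f(t + h, x_new))
    return x_new

def global_error(h):
    """Integrate x' = -x, x(0) = 1 up to t = 1; compare with e^{-1}."""
    x = 1.0
    for i in range(round(1 / h)):
        x = am1_step(lambda t, x: -x, i * h, x, h)
    return abs(x - math.exp(-1))

# second order: the error ratio under step halving should be close to 4
ratio = global_error(0.02) / global_error(0.01)
```

The measured ratio sits very close to \(4 = 2^2\), consistent with the order-two convergence of the trapezoidal method.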