
Section 1.8 Convergence, Consistency and Stability

The three concepts in the section's title are fundamental properties of a numerical method. First we give a somewhat abstract definition of what an ODE algorithm is:
Definition 1.8.1.
By a fixed-stepsize \((k+1)\)-step algorithm \(\Phi_h\) to solve a first-order ODE \(\dot x=f(t,x)\) on \(\bR^n\) we mean a map
\begin{equation*} \Phi_h:\bR^{n(k+1)+1}\to\bR^n. \end{equation*}
The subscript \(h\) above is the fixed time-step. Given a set of \(k+1\) initial points
\begin{equation*} x_{-k},\dots,x_{-1},x_0, \end{equation*}
the numerical solution of the ODE is defined recursively by
\begin{equation*} x_{n+1} = \Phi_h(t_n,x_n,\dots,x_{n-k}),\;t_{n+1}=t_n+h. \end{equation*}
The map corresponding to the Euler method applied to \(\dot x=f(t,x)\) is
\begin{equation*} \Phi_h(t,x) = x + h f(t,x). \end{equation*}
The one corresponding to the Heun method is
\begin{equation*} \Phi_h(t,x) = x + \frac{h}{2}\left( f(t,x) + f(t+h, x+h f(t,x))\right). \end{equation*}
The one corresponding to the Adams-Bashforth method of order 2 is
\begin{equation*} \Phi_h(t,x,y) = x + \frac{h}{2}\left( 3f(t,x) - f(t-h, y)\right). \end{equation*}
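The three maps above can be sketched in code as follows (an illustrative sketch for a scalar ODE \(\dot x=f(t,x)\); the function names are our own):

```python
def euler_step(f, h, t, x):
    # Euler: Phi_h(t, x) = x + h f(t, x)
    return x + h * f(t, x)

def heun_step(f, h, t, x):
    # Heun: Phi_h(t, x) = x + (h/2) (f(t, x) + f(t+h, x + h f(t, x)))
    k1 = f(t, x)
    k2 = f(t + h, x + h * k1)
    return x + 0.5 * h * (k1 + k2)

def ab2_step(f, h, t, x, y):
    # Adams-Bashforth of order 2: Phi_h(t, x, y) = x + (h/2) (3 f(t, x) - f(t-h, y))
    # where x = x_n and y = x_{n-1}
    return x + 0.5 * h * (3 * f(t, x) - f(t - h, y))
```

The numerical solution is then produced by the recursion \(x_{n+1}=\Phi_h(t_n,x_n,\dots)\), feeding each step's output back in as the new \(x\) (and, for Adams-Bashforth, the old \(x\) as the new \(y\)).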
Perhaps the most important property an ODE algorithm can have is to converge to the exact solution of the ODE:
Definition 1.8.3.
A fixed-stepsize ODE algorithm \(\Phi_h\) is convergent if, given any IVP
\begin{equation*} \dot x=f(t,x),\;x(t_0)=x_0, \end{equation*}
where \(f(t,x)\) satisfies the conditions for the existence and uniqueness of the solution, and any \(t_f>t_0\) for which there exists a solution, the numerical solution \(\{x_1(h),\dots,x_N(h)\}\) converges to the exact solution \(x(t)\) as \(h\to0\text{,}\) namely
\begin{equation*} \lim_{h\to0}\max_{1\leq k\leq N}\{\|x_k(h)-x(t_0+kh)\|\}=0. \end{equation*}
Remark 1. Since \(N\) must be an integer, one actually sets \(h=(t_f-t_0)/N\), so the limit can also be thought of as \(N\to\infty\text{.}\)

Remark 2. The convergence of an ODE algorithm is equivalent to the fact that its global error goes to zero for \(h\to0\text{.}\)

Remark 3. All ODE algorithms presented so far are convergent because both their local and global errors are \(O(h^p)\) for some \(p\geq1\) and so go to 0 with \(h\text{.}\)
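As a quick numerical illustration of convergence (a sketch under our own naming; the test IVP is \(\dot x=x\text{,}\) \(x(0)=1\text{,}\) whose exact solution is \(e^t\)), one can check that halving \(h\) roughly halves Euler's global error, consistent with a global error \(O(h)\):

```python
import math

def f(t, x):
    return x  # test ODE x' = x, exact solution x(t) = e^t

def euler_max_error(N, t0=0.0, tf=1.0, x0=1.0):
    # Global error max_{1 <= k <= N} |x_k - x(t0 + k h)| with h = (tf - t0)/N
    h = (tf - t0) / N
    t, x, err = t0, x0, 0.0
    for _ in range(N):
        x = x + h * f(t, x)
        t = t + h
        err = max(err, abs(x - math.exp(t)))
    return err

# Doubling N (i.e. halving h) roughly halves the global error
e1, e2 = euler_max_error(100), euler_max_error(200)
```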
Definition 1.8.4.
A fixed-stepsize ODE algorithm \(\Phi_h\) is consistent if, given any ODE \(\dot x=f(t,x)\) satisfying the conditions for the existence and uniqueness of the solution, the error of a single step of the algorithm goes to 0 as \(h\to0\text{.}\)
For instance, an algorithm is consistent if its local error is \(O(h^\alpha)\) for some \(\alpha>0\text{.}\) This is the case of all algorithms we covered in this chapter.

There are relations between consistency and convergence. In particular, for one-step methods, consistency (together with a Lipschitz condition on \(f\)) implies convergence. Notice that this is false for multistep methods (see below).

A third fundamental feature of an algorithm is stability, namely how sensitive the algorithm is to small perturbations (for instance, small changes in the initial conditions of an IVP).
Definition 1.8.7.
A fixed-stepsize ODE algorithm \(\Phi_h\) is stable if, given any ODE \(\dot x=f(t,x)\) satisfying the conditions for the existence and uniqueness of the solution and two points \(x_0,x'_0\text{,}\) the two corresponding numerical solutions \(\{x_1(h),\dots,x_N(h)\},\{x'_1(h),\dots,x'_N(h)\}\text{,}\) where \(Nh=t_f-t_0\text{,}\) are such that
\begin{equation*} \|x_N(h)-x'_N(h)\|\leq C\|x_0-x'_0\| \end{equation*}
for each small enough \(h\) and some constant \(C\) independent of \(h\text{.}\)
This means that numerical solutions corresponding to close initial conditions will not end up arbitrarily far from each other for \(h\to0\text{.}\) In other words, as a discrete dynamical system, the algorithm is not chaotic.
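The bound above can be checked numerically (an illustrative sketch with our own function names; we take \(f(t,x)=\sin x\text{,}\) which is Lipschitz with constant \(L=1\text{,}\) so \(C=e^{L(t_f-t_0)}\) works uniformly in \(h\)):

```python
import math

def euler_final(f, t0, x0, tf, N):
    # Run Euler's method with step h = (tf - t0)/N and return the final value x_N
    h = (tf - t0) / N
    t, x = t0, x0
    for _ in range(N):
        x = x + h * f(t, x)
        t = t + h
    return x

f = lambda t, x: math.sin(x)   # Lipschitz with constant L = 1
d0 = 1e-6                      # perturbation of the initial condition
# Final separations |x_N - x'_N| for several stepsizes h = 1/N:
# all bounded by C * d0 with C = e^{L (tf - t0)} = e, independent of h
seps = [abs(euler_final(f, 0.0, 1.0, 1.0, N) - euler_final(f, 0.0, 1.0 + d0, 1.0, N))
        for N in (50, 100, 200, 400)]
```

Of course a numerical check of a few stepsizes is not a proof; for Euler the bound follows from \(\|x_{n+1}-x'_{n+1}\|\leq(1+hL)\|x_n-x'_n\|\) and \((1+hL)^N\leq e^{L(t_f-t_0)}\text{.}\)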

Studying the stability of an algorithm is quite complicated, so often one only studies the following weaker version:
Definition 1.8.9.
A fixed-stepsize ODE algorithm \(\Phi_h\) is linearly stable (or A-stable, or asymptotically stable) if, given any linear ODE \(\dot x=Kx\) with \(Re(K)<0\text{,}\) for any initial condition \(x_0\) we have that the corresponding numerical solution \(\{x_n(h)\}\text{,}\) for any fixed value of \(h\text{,}\) converges to 0 for \(n\to\infty\text{.}\)
Remark 1.8.10.
This concept of stability is not necessarily about how close \(\{x_n(h)\}\) gets to the exact solution but rather about whether, as a discrete dynamical system, \(\Phi_h\) is stable in a neighborhood of the 0 point. Indeed, the constant function \(x(t)=0\) is always a solution of a linear ODE and so a linearly stable method \(\Phi_h\) has the property that the orbit of any point converges asymptotically to this solution.
Now consider a linear ODE
\begin{equation*} \dot x = K x,\;K\in\bC \end{equation*}
with \(Re(K)<0\text{.}\) In this case all solutions go to zero asymptotically, and so any linearly stable ODE algorithm will reproduce this behavior correctly. We will see in the next section that, when \(Re(K)<0\text{,}\) the explicit Euler method is linearly stable only for \(h\) small enough, while the implicit Euler method is so for every \(h\text{.}\)
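This contrast is easy to see in code (a sketch with our own function names, for real \(K=-10\)): for explicit Euler the orbit is \(x_{n+1}=(1+hK)x_n\text{,}\) which decays only when \(|1+hK|<1\text{,}\) i.e. \(h<2/|K|=0.2\text{,}\) while for implicit Euler solving \(x_{n+1}=x_n+hKx_{n+1}\) gives \(x_{n+1}=x_n/(1-hK)\text{,}\) which decays for every \(h>0\text{:}\)

```python
K = -10.0  # Re(K) < 0: all exact solutions decay to 0

def explicit_orbit(h, n, x0=1.0):
    # Explicit Euler on x' = K x: x_{n+1} = (1 + h K) x_n
    x = x0
    for _ in range(n):
        x = (1 + h * K) * x
    return x

def implicit_orbit(h, n, x0=1.0):
    # Implicit Euler on x' = K x: x_{n+1} = x_n / (1 - h K)
    x = x0
    for _ in range(n):
        x = x / (1 - h * K)
    return x
```

For instance, with \(h=0.05<0.2\) both orbits decay, while with \(h=0.3>0.2\) the explicit orbit blows up (the amplification factor is \(1+hK=-2\)) even though the implicit one still decays.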