
Section 1.8 Convergence, Consistency and Stability

The three concepts in this section's title are fundamental properties that a numerical method should possess. We first give a somewhat abstract definition of what an ODE algorithm is:
Definition 1.8.1.
By a fixed-stepsize (k+1)-step algorithm A_h to solve a first order ODE \dot x=f(t,x) on \mathbb{R}^n we mean a map \Phi_h:\mathbb{R}^{n(k+1)}\to\mathbb{R}^n\text{.} Here h is the fixed time-step. Given a set of k+1 initial points
\begin{equation*} x_{-k},\dots,x_{-1},x_0, \end{equation*}
the numerical solution of the ODE is defined recursively by
\begin{equation*} x_{n+1}=\Phi_h(x_n,\dots,x_{n-k})\text{.} \end{equation*}
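As a concrete instance of this definition (a sketch, not part of the text), the explicit Euler method for an autonomous ODE is a one-step (k=0) algorithm with \Phi_h(x)=x+hf(x)\text{:}

```python
import numpy as np

def euler_step(f, h, x):
    """The map Phi_h of the explicit Euler method for an autonomous
    ODE x' = f(x): a one-step (k = 0) algorithm as in Definition 1.8.1."""
    return x + h * f(x)

def solve(f, h, x0, N):
    """Iterate x_{n+1} = Phi_h(x_n) starting from x0 for N steps."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(N):
        xs.append(euler_step(f, h, xs[-1]))
    return xs

# Example: x' = -x with x_0 = 1, so x_{n+1} = (1 - h) x_n.
xs = solve(lambda x: -x, 0.1, np.array([1.0]), 10)
```

Here `euler_step` and `solve` are illustrative names chosen for this sketch; any (k+1)-step method fits the same pattern with a map taking the last k+1 points.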
Perhaps the most important property an ODE algorithm can have is to converge to the exact solution of the ODE:
Definition 1.8.2.
A fixed-stepsize ODE algorithm A_h is convergent if, given any IVP
\begin{equation*} \dot x=f(t,x),\;x(t_0)=x_0, \end{equation*}
where f(t,x) satisfies the conditions for the existence and uniqueness of the solution, and any t_f>t_0 for which the solution exists, the numerical solution \{x_1(h),\dots,x_N(h)\}\text{,} where Nh=t_f-t_0\text{,} converges to the exact solution x(t) as h\to0\text{,} namely
\begin{equation*} \lim_{h\to0}\,\max_{1\leq n\leq N}\|x_n(h)-x(t_0+nh)\|=0\text{.} \end{equation*}
Remark 1. Since N must be an integer, one actually sets h=(t_f-t_0)/N and so the limit can also be thought of as N\to\infty\text{.}

Remark 2. The convergence of an ODE algorithm is equivalent to the fact that its global error goes to zero for h\to0\text{.}

Remark 3. All ODE algorithms presented so far are convergent: their global errors are O(h^p) for some p\geq1 and so go to 0 with h\text{.}
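Convergence can be observed numerically (a sketch under the assumption that explicit Euler, whose global error is O(h)\text{,} stands for the methods above): halving h should roughly halve the global error at a fixed final time.

```python
import math

def euler_solve(f, t0, x0, tf, N):
    """Explicit Euler with N steps of size h = (tf - t0)/N; returns x_N."""
    h = (tf - t0) / N
    t, x = t0, x0
    for _ in range(N):
        x = x + h * f(t, x)
        t += h
    return x

# IVP: x' = -x, x(0) = 1; exact solution at t_f = 1 is e^{-1}.
exact = math.exp(-1.0)
errors = [abs(euler_solve(lambda t, x: -x, 0.0, 1.0, 1.0, N) - exact)
          for N in (100, 200, 400)]
# Global error is O(h): doubling N (halving h) roughly halves the error,
# consistent with the error going to 0 as h -> 0.
```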
Definition 1.8.3.
A fixed-stepsize ODE algorithm A_h is consistent if, given any ODE \dot x=f(t,x) satisfying the conditions for the existence and uniqueness of the solution, the error of a single step of the algorithm goes to 0 as h\to0\text{.}
For instance, an algorithm is consistent if its local error is O(h^\alpha) for some \alpha>0\text{.} This is the case for all the algorithms we covered in this chapter.
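For example (a sketch, assuming again the explicit Euler method), the error of a single Euler step for \dot x=-x starting at x(0)=1 is |(1-h)-e^{-h}|=O(h^2)\text{,} so it goes to 0 with h and the method is consistent:

```python
import math

def euler_one_step_error(h):
    """Error of a single explicit Euler step for x' = -x, x(0) = 1:
    the step gives 1 - h, the exact solution gives e^{-h}."""
    return abs((1.0 - h) - math.exp(-h))

errs = [euler_one_step_error(h) for h in (0.1, 0.05, 0.025)]
# Halving h divides the one-step error by about 4 (order h^2),
# so in particular the local error goes to 0 as h -> 0: consistency.
```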

There are relations between consistency and convergence. In particular, for one-step methods, consistency implies convergence. Notice that this is false for multistep methods (see below).

A third fundamental feature of an algorithm is stability, namely how sensitive the algorithm is to small perturbations (for instance, small changes in the initial conditions of an IVP).
Definition 1.8.6.
A fixed-stepsize ODE algorithm A_h is stable if, given any ODE \dot x=f(t,x) satisfying the conditions for the existence and uniqueness of the solution and two points x_0,x'_0\text{,} the two corresponding numerical solutions \{x_1(h),\dots,x_N(h)\},\{x'_1(h),\dots,x'_N(h)\}\text{,} where Nh=t_f-t_0\text{,} are such that
\begin{equation*} \|x_N(h)-x'_N(h)\|\leq C\|x_0-x'_0\| \end{equation*}
for each small enough h and some constant C independent of h\text{.}
This means that numerical solutions corresponding to close initial conditions will not end up arbitrarily far from each other as h\to0\text{.} In other words, as a discrete dynamical system, the algorithm is not chaotic.
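A quick numerical illustration (a sketch, again using explicit Euler as a stand-in): for the linear ODE \dot x=-x the gap between two numerical solutions evolves as |1-h|^N times the initial gap, so the stability bound holds with C=1\text{.}

```python
def euler_traj(f, h, x0, N):
    """Final point x_N of the explicit Euler recursion for x' = f(x)."""
    x = x0
    for _ in range(N):
        x = x + h * f(x)
    return x

h, N = 0.01, 100
x0, x0p = 1.0, 1.0 + 1e-6          # two close initial conditions
gap0 = abs(x0p - x0)
gapN = abs(euler_traj(lambda x: -x, h, x0p, N)
           - euler_traj(lambda x: -x, h, x0, N))
# For this linear ODE, gap_N = |1 - h|^N * gap_0 <= gap_0:
# the bound ||x_N - x'_N|| <= C ||x_0 - x'_0|| holds with C = 1.
```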

Studying the stability of an algorithm is quite complicated, so often one only studies the following weaker version:
Definition 1.8.8.
A fixed-stepsize ODE algorithm A_h is linearly stable (or A-stable, or asymptotically stable) if, given any linear ODE \dot x=Kx with \mathrm{Re}(K)<0 and any initial condition x_0\text{,} the corresponding numerical solution \{x_n(h)\}\text{,} for any fixed value of h\text{,} converges to 0 as n\to\infty\text{.}
Remark 1.8.9.
This concept of stability is not necessarily about how close \{x_n(h)\} gets to the exact solution but rather about whether, as a discrete dynamical system, A_h is stable in a neighborhood of the 0 point. Indeed, the constant function x(t)=0 is always a solution of a linear ODE and so a linearly stable method A_h has the property that the orbit of any point converges asymptotically to this solution.
Now consider a linear ODE
\begin{equation*} \dot x = K x,\;K\in\mathbb{C} \end{equation*}
with \mathrm{Re}(K)<0\text{.} In this case all solutions go to zero asymptotically, and so any stable ODE algorithm will correctly reproduce this behavior. We will see in the next section that, when \mathrm{Re}(K)<0\text{,} the explicit Euler method is linearly stable only for h small enough, while the implicit Euler method is so for every h\text{.}
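This claim can be checked directly (a sketch ahead of the next section's discussion, with the real value K=-10 chosen for illustration): explicit Euler iterates x_{n+1}=(1+hK)x_n\text{,} which decays only when |1+hK|<1 (here h<2/|K|=0.2), while implicit Euler iterates x_{n+1}=x_n/(1-hK)\text{,} which decays for every h>0\text{.}

```python
K = -10.0   # Re(K) < 0: all exact solutions of x' = Kx decay to 0

def explicit_euler(h, x0, N):
    """x_{n+1} = x_n + h K x_n = (1 + h K) x_n"""
    x = x0
    for _ in range(N):
        x = (1.0 + h * K) * x
    return x

def implicit_euler(h, x0, N):
    """Solve x_{n+1} = x_n + h K x_{n+1}, i.e. x_{n+1} = x_n / (1 - h K)."""
    x = x0
    for _ in range(N):
        x = x / (1.0 - h * K)
    return x

h = 0.3  # larger than the explicit stability threshold 2/|K| = 0.2
xe = explicit_euler(h, 1.0, 50)   # |1 + hK| = 2 > 1: the iterates blow up
xi = implicit_euler(h, 1.0, 50)   # |1/(1 - hK)| = 1/4 < 1: decays to 0
```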