
Section 9.12 Finite Differences Method

The idea of this method is to discretize time and approximate all derivatives with finite differences to transform the ODE into a (very large) algebraic system of equations.

We mention here several ways to accomplish this, some of which we have already encountered:
First derivative
Forward difference: \(\dot x(t) = \frac{x(t+h)-x(t)}{h} + O(h)\text{.}\)
Backward difference: \(\dot x(t) = \frac{x(t)-x(t-h)}{h} + O(h)\text{.}\)
Symmetric difference: \(\dot x(t) = \frac{x(t+h)-x(t-h)}{2h} + O(h^2)\text{.}\)
Second derivative
Symmetric difference: \(\ddot x(t) = \frac{x(t+h)-2x(t)+x(t-h)}{h^2} + O(h^2)\text{.}\)
5-point stencil: \(\ddot x(t) = \frac{-x(t+2h)+16x(t+h)-30x(t)+16x(t-h)-x(t-2h)}{12h^2} + O(h^4)\text{.}\)
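These formulas, and their error orders, can be checked numerically. Below is a minimal sketch using NumPy with \(x(t)=\sin t\) (the helper name <code>check_orders</code> is just an illustrative choice):

```python
import numpy as np

def check_orders(x, dx, ddx, t=1.0, h=1e-1):
    """Return the absolute error of each finite-difference formula
    against the exact first (dx) and second (ddx) derivatives at t."""
    fwd = (x(t + h) - x(t)) / h                       # O(h)
    bwd = (x(t) - x(t - h)) / h                       # O(h)
    sym = (x(t + h) - x(t - h)) / (2 * h)             # O(h^2)
    sym2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2    # O(h^2)
    st5 = (-x(t + 2*h) + 16*x(t + h) - 30*x(t)
           + 16*x(t - h) - x(t - 2*h)) / (12 * h**2)  # O(h^4)
    return (abs(fwd - dx(t)), abs(bwd - dx(t)), abs(sym - dx(t)),
            abs(sym2 - ddx(t)), abs(st5 - ddx(t)))

# Halving h should divide the errors by roughly 2, 2, 4, 4 and 16.
e1 = check_orders(np.sin, np.cos, lambda t: -np.sin(t), h=1e-1)
e2 = check_orders(np.sin, np.cos, lambda t: -np.sin(t), h=5e-2)
print([round(a / b, 1) for a, b in zip(e1, e2)])
```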

Now, divide the interval \([t_0,t_f]\) into \(N=\frac{t_f-t_0}{h}\) subintervals of length \(h\)
\begin{equation*} [t_0,t_0+h],\,[t_0+h,t_0+2h],\,\dots,\,[t_0+(N-1)h,t_f=t_0+Nh] \end{equation*}
and set \(t_i = t_0+i h\) and \(x_i = x(t_i)\text{.}\)
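In NumPy this grid takes one line; the values of \(t_0\text{,}\) \(t_f\) and \(N\) below are just for illustration:

```python
import numpy as np

t0, tf, N = 0.0, 1.0, 10            # illustrative values
h = (tf - t0) / N
t = np.linspace(t0, tf, N + 1)      # t[i] = t0 + i*h for i = 0, ..., N
```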

Then, with respect to the discretized time \(t_0,t_1,\dots,t_{N-1},t_N=t_f\text{,}\) we can approximate the infinite-dimensional linear differentiation operators \(d/dt: x(t)\mapsto \dot x(t)\) and \(d^2/dt^2: x(t)\mapsto \ddot x(t)\) by finite-dimensional linear operators.

Below, we focus on the symmetric differences. The first derivative becomes the matrix
\begin{equation*} \begin{pmatrix}x_0\\ x_1\\ \vdots\\ x_{N-1}\\ x_N\end{pmatrix} \mapsto \begin{pmatrix}\dot x_1\\ \dot x_2\\ \vdots\\ \dot x_{N-2}\\ \dot x_{N-1}\end{pmatrix} =\frac{1}{2h} \begin{pmatrix}-1&0&1&&\\ &-1&0&1&&\\ &&\ddots&\ddots&\ddots\\ &&&-1&0&1\end{pmatrix} \begin{pmatrix}x_0\\ x_1\\ \vdots\\ x_{N-1}\\ x_N\end{pmatrix} \end{equation*}
The second derivative becomes the matrix
\begin{equation*} \begin{pmatrix}x_0\\ x_1\\ \vdots\\ x_{N-1}\\ x_N\end{pmatrix} \mapsto \begin{pmatrix}\ddot x_1\\ \ddot x_2\\ \vdots\\ \ddot x_{N-2}\\ \ddot x_{N-1}\end{pmatrix} =\frac{1}{h^2} \begin{pmatrix}1&-2&1&&\\ &1&-2&1&&\\ &&\ddots&\ddots&\ddots\\ &&&1&-2&1\end{pmatrix} \begin{pmatrix}x_0\\ x_1\\ \vdots\\ x_{N-1}\\ x_N\end{pmatrix} \end{equation*}
Notice that both matrices take an \((N+1)\)-dimensional vector and return an \((N-1)\)-dimensional one. The two operators above are a good model to use for BVPs since the boundary conditions "lock" the values \(x_0\) and \(x_N=x_f\text{,}\) so that on the remaining unknowns \(x_1,\dots,x_{N-1}\) the first and second derivatives are both represented by square \((N-1)\times(N-1)\) matrices.
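The two rectangular matrices can be sketched in NumPy as follows (the helper name <code>diff_matrices</code> is ours). Since the symmetric differences are exact on quadratics, applying them to \(x(t)=t^2\) recovers \(\dot x = 2t\) and \(\ddot x = 2\) exactly:

```python
import numpy as np

def diff_matrices(N, h):
    """(N-1) x (N+1) symmetric-difference matrices; row i gives the
    approximation at the interior node t_i, i = 1, ..., N-1."""
    D1 = np.zeros((N - 1, N + 1))
    D2 = np.zeros((N - 1, N + 1))
    for i in range(N - 1):
        D1[i, i:i+3] = [-1, 0, 1]
        D2[i, i:i+3] = [1, -2, 1]
    return D1 / (2 * h), D2 / h**2

N, h = 8, 0.1
t = h * np.arange(N + 1)
D1, D2 = diff_matrices(N, h)
x = t**2
print(np.allclose(D1 @ x, 2 * t[1:-1]), np.allclose(D2 @ x, 2.0))  # True True
```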

The first derivative then becomes the affine map
\begin{equation*} \begin{pmatrix}\dot x_1\\ \dot x_2\\ \vdots\\ \dot x_{N-2}\\ \dot x_{N-1}\end{pmatrix} =\frac{1}{2h} \begin{pmatrix} 0&1&&\\ -1&0&1&&\\ &\ddots&\ddots&\ddots\\ &&-1&0&1\\ &&&-1&0 \end{pmatrix} \begin{pmatrix}x_1\\ x_2\\ \vdots\\ x_{N-2}\\ x_{N-1}\end{pmatrix} + \frac{1}{2h} \begin{pmatrix}-x_0\\ 0\\ \vdots\\ 0\\ x_f\end{pmatrix} \end{equation*}
while the second derivative becomes the affine map
\begin{equation*} \begin{pmatrix}\ddot x_1\\ \ddot x_2\\ \vdots\\ \ddot x_{N-2}\\ \ddot x_{N-1}\end{pmatrix} =\frac{1}{h^2} \begin{pmatrix} -2&1&&\\ 1&-2&1&&\\ &\ddots&\ddots&\ddots\\ &&1&-2&1\\ &&&1&-2 \end{pmatrix} \begin{pmatrix}x_1\\ x_2\\ \vdots\\ x_{N-2}\\ x_{N-1}\end{pmatrix} + \frac{1}{h^2} \begin{pmatrix}x_0\\ 0\\ \vdots\\ 0\\ x_f\end{pmatrix} \end{equation*}
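Both affine maps can be sanity-checked on a function with known derivatives, e.g. \(x(t)=\sin t\) on \([0,1]\) (an illustrative choice); the errors should be of order \(h^2\text{:}\)

```python
import numpy as np

t0, tf, N = 0.0, 1.0, 100
h = (tf - t0) / N
t = np.linspace(t0, tf, N + 1)
x = np.sin(t)
x0, xf = x[0], x[-1]
xi = x[1:-1]                       # interior values x_1, ..., x_{N-1}

# (N-1) x (N-1) symmetric-difference matrices plus boundary vectors
I = np.eye(N - 1)
D1 = (np.diag(np.ones(N - 2), 1) - np.diag(np.ones(N - 2), -1)) / (2 * h)
D2 = (np.diag(np.ones(N - 2), 1) - 2 * I + np.diag(np.ones(N - 2), -1)) / h**2
b1 = np.zeros(N - 1); b1[0], b1[-1] = -x0 / (2 * h), xf / (2 * h)
b2 = np.zeros(N - 1); b2[0], b2[-1] = x0 / h**2, xf / h**2

err1 = np.max(np.abs(D1 @ xi + b1 - np.cos(t[1:-1])))  # dot x = cos t
err2 = np.max(np.abs(D2 @ xi + b2 + np.sin(t[1:-1])))  # ddot x = -sin t
print(err1, err2)                  # both of size ~h^2
```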
Example. Consider again the problem
\begin{equation*} \ddot x = -\dot x\cos t+x\sin t,\,x(0)=1,\,x(15\pi/2)=e, \end{equation*}
that we already studied with the shooting method.

The code below implements a general algorithm to solve the second-order linear BVP
\begin{equation*} \ddot x = u(t) + v(t) x +w(t) \dot x,\,x(t_0)=x_0,\,x(t_f)=x_f. \end{equation*}
In the example above, \(u(t)=0\text{,}\) \(v(t)=\sin t\) and \(w(t)=-\cos t\text{.}\)