Section 9.3 Explicit Euler Method
This is the simplest numerical method to solve the IVP
\begin{equation*}
\dot x=f(t,x),\; x(t_0)=x_0.
\end{equation*}
It is just a direct consequence of Taylor's series expansion formula:
\begin{gather*}
x(t+h) = x(t) + \dot x(t)h + \frac{\ddot x(\tau)}{2}h^2=\\
= x(t) + f(t,x(t))h + \frac{1}{2}\left(f_t(\tau,x(\tau))+ f(\tau,x(\tau))f_x(\tau,x(\tau))\right)h^2
\end{gather*}
for some \(t\leq\tau\leq t+h\text{,}\) where in the second line we used that, along the solution, \(\ddot x=\frac{d}{dt}f(t,x(t))=f_t(t,x)+f(t,x)f_x(t,x)\) by the chain rule. The step of explicit Euler's method is precisely
\begin{equation*}
\begin{aligned}
x_{n+1} &= x_n + h f(t_n,x_n)\\
t_{n+1} &= t_n+h\\
\end{aligned}
\end{equation*}
and it is clearly of local order 2, since the error made in a single step is of order \(h^2\text{.}\) Below is an implementation of the Euler method used to evaluate numerically \(x(1.5)\) for the solution of the ODE
\begin{equation*}
x' = \frac{\pi}{2x}\cos(\pi t),\;x(0)=\sqrt{5}/2.
\end{equation*}
This particular implementation subdivides the "time" interval \([0,1.5]\) into 10 subintervals, so that \(h=0.15\text{.}\) The code saves all intermediate steps and plots them together with the corresponding values of the exact solution \(x(t)=\sqrt{1.25+\sin(\pi t)}\text{.}\)
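A minimal sketch of such an implementation in Python (illustrative code: the variable names and plotting details are assumptions, not the original listing):
```python
import numpy as np
import matplotlib.pyplot as plt

# Right-hand side of the ODE  x' = pi/(2x) * cos(pi t)
def f(t, x):
    return np.pi / (2 * x) * np.cos(np.pi * t)

# Explicit Euler: 10 steps of size h = 0.15 on [0, 1.5]
t0, tf, N = 0.0, 1.5, 10
h = (tf - t0) / N
t = np.zeros(N + 1)
x = np.zeros(N + 1)
t[0], x[0] = t0, np.sqrt(5) / 2

for n in range(N):
    x[n + 1] = x[n] + h * f(t[n], x[n])   # Euler step
    t[n + 1] = t[n] + h

# Exact solution for comparison
exact = np.sqrt(1.25 + np.sin(np.pi * t))

plt.plot(t, x, "o-", label="explicit Euler, h = 0.15")
plt.plot(t, exact, "s--", label="exact solution")
plt.xlabel("t")
plt.ylabel("x")
plt.legend()
plt.show()
```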
Subsection 9.3.1 Euler method \(\Longleftrightarrow\) left Riemann sums
When \(f\) depends on \(t\) only,
\begin{equation*}
\begin{aligned}
x_1 &= x_0 + hf(t_0)\\
x_2 &= x_0 + hf(t_0) + hf(t_0+h)\\
\dots&\\
x_n &= x_0 + hf(t_0) + hf(t_0+h) + \dots + hf(t_0+(n-1)h),\\
\end{aligned}
\end{equation*}
namely
\begin{equation*}
x_n-x_0 = \left[f(t_0) + f(t_0+h) + \dots + f(t_0+(n-1)h)\right]h
\end{equation*}
(compare with the left Riemann sum formula in Section 8.1).
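As a quick numerical check (a sketch, assuming Python with NumPy; the function and variable names are illustrative), one can verify that, for a right-hand side depending on \(t\) only, the Euler increment \(x_n-x_0\) coincides with the corresponding left Riemann sum up to rounding:
```python
import numpy as np

# A right-hand side that depends on t only
def f(t):
    return np.cos(np.pi * t)

t0, x0, h, N = 0.0, 1.0, 0.1, 20

# Explicit Euler for x' = f(t)
x = x0
for n in range(N):
    x = x + h * f(t0 + n * h)

# Left Riemann sum of f on [t0, t0 + N*h]
riemann = h * sum(f(t0 + k * h) for k in range(N))

print(x - x0, riemann)   # the two values agree up to rounding
```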
Global error in the Explicit Euler method.
Consider a numerical IVP, namely the evaluation of \(x(t_f)\) for the solution of \(\dot x=f(t,x),\; x(t_0)=x_0\text{.}\) Notice first of all that, in order for this problem to make sense, we must assume that the unique solution \(\hat x(t)\) of the IVP above is defined at \(t=t_f\text{.}\) The element \(x_N\text{,}\) obtained by applying \(N\) times the Euler algorithm with \(h=(t_f-t_0)/N\text{,}\) is a numerical estimate of \(\hat x(t_f)\text{.}\) The distance between \(x_N\) and \(\hat x(t_f)\) is called the global error of the method. When the right-hand side \(f(t,x)\) does not depend on \(x\text{,}\) we already know that the method is of order 1 (see Section 8.1). What happens when it does depend on \(x\text{?}\)
Recall, first of all, that we are assuming the existence of a smooth solution \(\hat x\) defined on \([t_0,t_f]\text{.}\) Hence, \(\hat x(t)\) has a minimum \(a\) and a maximum \(b\text{,}\) since it is a continuous function on a closed and bounded interval. In other words, the graph of the solution lies inside the rectangle \(R=[t_0,t_f]\times[a,b]\text{.}\) We do not know how large \(R\) is, but we do know that it is bounded, and this will be enough for our purposes. Now, set
\begin{equation*}
e_n = \hat x(t_n)-x_n.
\end{equation*}
Then
\begin{gather*}
e_{n+1} = \hat x(t_{n+1})-x_{n+1} = \\
= \hat x(t_n)+\dot{\hat x}(t_n)h+\frac{\ddot{\hat x}(\tau)}{2}h^2-\left(x_n+f(t_n,x_n)h\right)=\\
= e_n+(f(t_n,\hat x(t_n))-f(t_n,x_n))h+\frac{\ddot{\hat x}(\tau)}{2}h^2=\\
= e_n+(f(t_n,x_n)+f_x(t_n,\xi)e_n-f(t_n,x_n))h+\frac{\ddot{\hat x}(\tau)}{2}h^2=\\
= e_n\left(1+f_x(t_n,\xi)h\right)+\frac{\ddot{\hat x}(\tau)}{2}h^2,
\end{gather*}
where, by the mean value theorem, \(\xi\) lies between \(x_n\) and \(\hat x(t_n)\text{,}\) and \(t_n\leq\tau\leq t_n+h\text{.}\) Denote by \(M\) and \(K\text{,}\) respectively, the maxima of \(|f_t(t,x)+f(t,x)f_x(t,x)|\) and \(|f_x(t,x)|\) in \(R\text{.}\) Since \(R\) is closed and bounded (i.e. compact), these maxima are finite. Then
\begin{equation*}
\|e_1\|\leq\frac{M}{2}h^2
\end{equation*}
and
\begin{equation*}
\|e_{n}\|\leq\|e_{n-1}\|\cdot(1+Kh)+\frac{M}{2}h^2\leq\dots\leq\frac{M}{2}h^2\left(1+(1+Kh)+\dots+(1+Kh)^{n-1}\right).
\end{equation*}
Hence
\begin{equation*}
\|e_{N}\|\leq\frac{M}{2}h^2\,\frac{(1+Kh)^N-1}{Kh}\leq\frac{M}{2K}\left(e^{K(t_f-t_0)}-1\right)h=Ah,
\end{equation*}
where \(A\) is a constant that does not depend on \(h\text{,}\) since \((1+Kh)^N\leq e^{KNh}=e^{K(t_f-t_0)}\text{.}\) In other words, the explicit Euler method has global order 1. Below we illustrate this bound by evaluating numerically, for different values of \(h\text{,}\) the value of \(x(1.5)\) for the ODE solved numerically above, and plotting the corresponding global errors in a log-log plot.
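A sketch of such an experiment (illustrative Python code, not the original listing): for each \(h=1.5/N\) we run Euler up to \(t=1.5\), compare with the exact value \(x(1.5)=\sqrt{1.25+\sin(1.5\pi)}=0.5\), and plot the global error against \(h\) on logarithmic axes; the points should align with a line of slope 1.
```python
import numpy as np
import matplotlib.pyplot as plt

def f(t, x):
    return np.pi / (2 * x) * np.cos(np.pi * t)

t0, tf, x0 = 0.0, 1.5, np.sqrt(5) / 2
exact = np.sqrt(1.25 + np.sin(np.pi * tf))   # = 0.5

hs, errors = [], []
for N in [10, 20, 40, 80, 160, 320]:
    h = (tf - t0) / N
    t, x = t0, x0
    for _ in range(N):
        x = x + h * f(t, x)   # explicit Euler step
        t = t + h
    hs.append(h)
    errors.append(abs(x - exact))   # global error at t = 1.5

plt.loglog(hs, errors, "o-", label="global error at t = 1.5")
plt.loglog(hs, hs, "--", label="reference slope 1")
plt.xlabel("h")
plt.ylabel("error")
plt.legend()
plt.show()
```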