for some $\epsilon>0$.
It is just a re-writing of the definition of derivative:
$$ x'(t) = \frac{x(t+h)-x(t)}{h}+O(h)\,, $$ so that $$ x(t+h) = x(t)+h\,x'(t)+O(h^2)\,. $$
Since $$ x_{n+1} - x(t_{n+1}) = O(h^2) $$
we say that this method has local order 2.
The number of steps to pass from a time $t_0$ to $t_1$
increases as $1/h$ for $h\to0$
and so the global
order of convergence is 1
[since $\frac{1}{h}\cdot O(h^2)=O(h)$].
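As a quick check of these orders, here is a minimal Python sketch. It uses the IVP $\dot x=-x\cos t$, $x(0)=1$, one of the examples below, whose exact solution is $e^{-\sin t}$: halving $h$ should roughly halve the global error.

```python
import math

# Explicit Euler: advance the IVP x' = f(t, x) from t0 to t1 in n steps.
def euler(f, t0, x0, t1, n):
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x, t = x + h * f(t, x), t + h
    return x

# Example IVP: x' = -x cos t, x(0) = 1, exact solution x(t) = exp(-sin t).
# Doubling n (halving h) roughly halves the error: global order 1.
f = lambda t, x: -x * math.cos(t)
exact = math.exp(-math.sin(1.0))
for n in (100, 200, 400):
    print(n, abs(euler(f, 0.0, 1.0, 1.0, n) - exact))
```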
whose analytical solution (use separation of variables!) is $$ x(t) = e^{-\sin t}\,. $$
whose analytical solution is $$ x(t) = \cos t-\sin t. $$
whose analytical solution is $$ x(t) = \sqrt{1-t^2}. $$
whose analytical solution is $$ x(t) = \frac{5}{t}\,. $$ See if you can spot a funny pattern in the numerical evolution of the solution by trying several different values of $h$.
Apply the Explicit Euler method to the linear problem $$ \dot x = Kx\,,\quad x(t_0)=x_0\,. $$ The method's basic step for this equation is $$ x_{n+1} = x_n + h K x_n = (1+hK)x_n\,, $$ so that $$ x_{n} = (1+hK)^n x_0\,. $$
Consider for instance the IVP
whose analytical solution is $$ x(t) = e^{-15t}\,. $$
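A short Python sketch of why this example is troublesome: for $\dot x=-15x$ the Explicit Euler update is $x_{n+1}=(1-15h)x_n$, so the iterates decay only when $|1-15h|<1$, i.e. $h<2/15$; for larger $h$ they oscillate with growing amplitude.

```python
# Explicit Euler on the linear problem x' = Kx: each step multiplies by (1 + hK).
# For the stiff example K = -15 the iterates decay only when |1 - 15h| < 1.
def euler_linear(K, h, n, x0=1.0):
    x = x0
    for _ in range(n):
        x = (1 + h * K) * x
    return x

print(euler_linear(-15.0, 0.01, 100))  # h < 2/15: decays toward 0
print(euler_linear(-15.0, 0.2, 100))   # h > 2/15: blows up
```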
$$ x'(t+h) = \frac{x(t+h)-x(t)}{h}+O(h), $$
from which we get the Implicit Euler step $$ x_{n+1} = x_n + h\,f(t_{n+1},x_{n+1})\,. $$
This method is again of local order 2 but the value of $x_{n+1}$ is now given only implicitly and in general must be found numerically, e.g. with the Newton method.
As in the explicit case, the global order of convergence is 1. The reason it can be convenient to use this more complicated method rather than the explicit one is that, as we will see below, it also works for stiff equations, where the explicit one fails.
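A minimal Python sketch of the implicit method. The Newton update here approximates the derivative by a finite difference; this is an illustration, not a prescribed implementation.

```python
# Implicit Euler: at each step solve x_{n+1} = x_n + h f(t_{n+1}, x_{n+1})
# with a few Newton iterations on g(y) = y - x_n - h f(t_{n+1}, y).
def implicit_euler(f, t0, x0, t1, n, newton_iters=8):
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        tn = t + h
        g = lambda y: y - x - h * f(tn, y)  # x_{n+1} is the root of g
        y = x                               # Newton initial guess
        for _ in range(newton_iters):
            d = 1e-8
            gp = (g(y + d) - g(y)) / d      # finite-difference g'(y)
            y = y - g(y) / gp
        t, x = tn, y
    return x

# Stiff test: x' = -15 x with the large step h = 0.2 (where Explicit Euler
# blows up) stays stable: each step multiplies x by 1 / (1 + 15 h).
print(implicit_euler(lambda t, x: -15.0 * x, 0.0, 1.0, 1.0, 5))
```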
whose analytical solution is $$ x(t) = e^{-\sin t}\,. $$
whose analytical solution is $$ x(t) = \cos t-\sin t. $$
whose analytical solution is $$ x(t) = \sqrt{1-t^2}. $$
whose analytical solution is $$ x(t) = \frac{5}{t}. $$ See if you can find a reason for what you see.
We analyze below the round-off error in the Explicit Euler method and show a way to keep it at bay.
Since $h$ is usually quite small with respect to $x_n$,
the round-off absolute error in evaluating $x_{n+1}$ is just the error on $x_n$,
and therefore is of the order of
$$
|x_n|\epsilon_m\,,
$$
where $\epsilon_m$ is the 'epsilon-machine' value.
Hence, an upper bound for the global error of the Explicit Euler method has the following expression: $$ e(h) = \frac{C}{h}+C'h\,. $$
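Making the consequence explicit (a one-line computation): the bound $e(h)=\frac{C}{h}+C'h$ is minimized at a strictly positive step size, so refining $h$ beyond that point makes the total error worse: $$ e'(h) = -\frac{C}{h^2}+C' = 0 \quad\Longrightarrow\quad h^\ast=\sqrt{C/C'}\,,\qquad e(h^\ast)=2\sqrt{CC'}\,. $$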
In this method, in order to sum $n$ elements of the array $x = [x(1),\dots,x(n)]$,
we use the following algorithm:
```
s = 0; e = 0;
for i = 1:n
    temp = s
    y = x(i) + e
    s = temp + y
    e = (temp - s) + y
end
```
The exact sum is $10.67$, which rounds to $10.7$ after dropping the fourth digit.
In the naive summation following the index order, though, we do not get the correct result:
```
s = x(1) + x(2) = 10.0 + 0.23 = 10.2
s = s + x(3)   = 10.2 + 0.44 = 10.64 = 10.6
```
```
temp = s = 0
y = x(1) + e    = 10.0 + 0 = 10.0
s = temp + y    = 0 + 10.0 = 10.0
e = (temp-s) + y = (0 - 10.0) + 10.0 = 0
temp = s = 10.0
y = x(2) + e    = 0.23 + 0 = 0.23
s = temp + y    = 10.0 + 0.23 = 10.2
e = (temp-s) + y = (10.0 - 10.2) + 0.23 = -0.2 + 0.23 = 0.03
temp = s = 10.2
y = x(3) + e    = 0.44 + 0.03 = 0.47
s = temp + y    = 10.2 + 0.47 = 10.67 = 10.7
e = (temp-s) + y = (10.2 - 10.7) + 0.47 = -0.5 + 0.47 = -0.03
```
In short, the variable $e$ keeps the quantity that was discarded in the sum, and this quantity is then added back to the next summand.
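The same algorithm in Python (a direct transcription of the pseudocode above), on an example where the naive sum loses all the small terms:

```python
# Kahan (compensated) summation: the variable e carries the low-order part
# lost by each floating-point addition and feeds it into the next summand.
def kahan_sum(xs):
    s, e = 0.0, 0.0
    for xi in xs:
        temp = s
        y = xi + e
        s = temp + y
        e = (temp - s) + y
    return s

# 1e16 is so large that adding 1.0 to it naively has no effect in double
# precision, so the plain sum loses all thousand ones; Kahan recovers them.
xs = [1e16] + [1.0] * 1000
print(sum(xs) - 1e16)        # 0.0: the ones are lost
print(kahan_sum(xs) - 1e16)  # 1000.0
```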
Notice that, in order to be efficient, we choose the two methods so that every evaluation of $f$ made for the first is also used by the second.
Second, we fix a tolerance parameter $\epsilon>0$.
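A minimal Python sketch of such a pair. An assumption for illustration only, since the slides do not fix the two methods: here Euler (order 1) is embedded in Heun (order 2), both reuse the evaluation $k_1$, and their difference estimates the local error against the tolerance $\epsilon$.

```python
# Adaptive step-size sketch with an embedded Euler/Heun pair.
def adaptive_euler_heun(f, t0, x0, t1, h, eps):
    t, x = t0, x0
    while t1 - t > 1e-12:
        h = min(h, t1 - t)
        k1 = f(t, x)                    # shared evaluation of f
        k2 = f(t + h, x + h * k1)
        x_low = x + h * k1              # Euler (order 1)
        x_high = x + h * (k1 + k2) / 2  # Heun (order 2)
        err = abs(x_high - x_low)
        if err <= eps:                  # accept the step, keep the better value
            t, x = t + h, x_high
        # grow or shrink h; exponent 1/2 matches the order of the error estimate
        factor = (eps / err) ** 0.5 if err > 0 else 2.0
        h *= 0.9 * min(2.0, max(0.2, factor))
    return x
```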
Algorithm:

but then $x(\pi/2) = B\sin(\pi/2)=B$
and so there is in this case a unique solution: $$ x(t)=0. $$
Call $F(v)$ the value at $t=t_e$ of the solution of the IVP $$x''=f(t,x,x')\,,\;x(t_b)=x_b\,,\;x'(t_b)=v\,.$$ This is a function $F:\Bbb R\to\Bbb R$
and the $v$ we are looking for is a solution of the equation $$ F(v) = x_e\,. $$ One can now use numerical methods of equation solving to find $v$. The most well known of these methods is the Newton method.

| Derivative | Scheme | Formula |
|---|---|---|
| 1st der. | Forward difference | $\dot x(t) = \frac{x(t+h)-x(t)}{h} + O(h)$ |
| 1st der. | Backward difference | $\dot x(t) = \frac{x(t)-x(t-h)}{h} + O(h)$ |
| 1st der. | Symmetric difference | $\dot x(t) = \frac{x(t+h)-x(t-h)}{2h} + O(h^2)$ |
| 2nd der. | Symmetric difference | $\ddot x(t) = \frac{x(t+h)-2x(t)+x(t-h)}{h^2} + O(h^2)$ |
| 2nd der. | 5-points stencil | $\ddot x(t) = \frac{-x(t+2h)+16x(t+h)-30x(t)+16x(t-h)-x(t-2h)}{12h^2} + O(h^4)$ |
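A Python sketch of the shooting method just described. Bisection is used in place of Newton only to keep the sketch self-contained; the test BVP $x''=-x$, $x(0)=0$, $x(1)=1$ has exact slope $v=1/\sin 1$.

```python
import math

# F(v) integrates the IVP x'' = f(t, x, x'), x(t_b) = x_b, x'(t_b) = v with
# Explicit Euler on the first-order system and returns x(t_e).
def F(f, tb, xb, te, v, n=2000):
    h = (te - tb) / n
    t, x, xp = tb, xb, v
    for _ in range(n):
        x, xp, t = x + h * xp, xp + h * f(t, x, xp), t + h
    return x

# Bisection on F(v) = x_e, assuming F(v_lo) - x_e and F(v_hi) - x_e
# have opposite signs.
def shoot(f, tb, xb, te, xe, v_lo, v_hi, iters=60):
    for _ in range(iters):
        v = (v_lo + v_hi) / 2
        if (F(f, tb, xb, te, v) - xe) * (F(f, tb, xb, te, v_lo) - xe) <= 0:
            v_hi = v
        else:
            v_lo = v
    return (v_lo + v_hi) / 2

# Test BVP: x'' = -x, x(0) = 0, x(1) = 1, exact slope v = 1/sin(1).
v = shoot(lambda t, x, xp: -x, 0.0, 0.0, 1.0, 1.0, 0.0, 5.0)
print(v, 1.0 / math.sin(1.0))
```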
Below, we focus on the case of symmetric differences.
The first derivative then becomes the affine map $$ \begin{pmatrix}x'_1\cr x'_2\cr\vdots\cr x'_{N-2}\cr x'_{N-1}\end{pmatrix} =\frac{1}{2h} \begin{pmatrix} 0&1&&\cr -1&0&1&&\cr &\ddots&\ddots&\ddots\cr &&-1&0&1\cr &&&-1&0\cr \end{pmatrix} \begin{pmatrix}x_1\cr x_2\cr\vdots\cr x_{N-2}\cr x_{N-1}\end{pmatrix} + \frac{1}{2h} \begin{pmatrix}-x_b\cr 0\cr \vdots\cr 0\cr x_e\end{pmatrix} $$
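The same affine map written componentwise in Python (a sketch that applies the matrix row by row rather than storing it):

```python
# Symmetric-difference first derivative: given interior samples x_1..x_{N-1}
# on a grid of spacing h, plus the boundary values x_b = x(t_0) and
# x_e = x(t_N), return (x_{i+1} - x_{i-1}) / (2h) at each interior node.
def symmetric_derivative(x, xb, xe, h):
    ext = [xb] + list(x) + [xe]   # x_0, x_1, ..., x_{N-1}, x_N
    return [(ext[i + 1] - ext[i - 1]) / (2 * h) for i in range(1, len(ext) - 1)]

# Sanity check on x(t) = t^2 over [0, 1] with N = 10: the symmetric difference
# is exact on quadratics, so we recover x'(t) = 2t at the interior nodes.
N = 10
h = 1.0 / N
xs = [(i * h) ** 2 for i in range(1, N)]
print(symmetric_derivative(xs, 0.0, 1.0, h))
```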
Since $f$ is Lipschitz in $x$, $$ \left|f(t_n,x(t_n))-f(t_n,x_n)\right|\leq L_n |x(t_n)-x_n|=L_n|e_n|\,. $$
Then $$ |e_{n+1}|\leq (1+hL)|e_n|+\ell\,. $$ Hence $$ |e_{n}|\leq (1+hL)^n|e_0|+\ell\sum_{k=0}^{n-1}(1+hL)^k\,. $$
Then: $$ |e_{n}|\leq \frac{h}{2L}(e^{Lt_f}-1)\sup|x''(t)|\,, $$ where $t_f$ is the final time.