
Section 2.2 Finite Differences Method: Hyperbolic PDEs

Before analyzing algorithms for solving the advection PDE numerically, we illustrate a concrete physical problem from which this PDE arises.

Let \(R\subset\Bbb R^3\) be a region bounded by a closed surface \(\partial R\) containing a mass of fluid \(M_R\) with density \(u\text{.}\) In particular, this means that the quantity of fluid contained in a subregion \(P\subset R\) is given by
\begin{equation*} M_P = \iiint_P u(\mathbf x) dV. \end{equation*}
Now, suppose that the fluid in some neighborhood of \(R\) moves with velocity \({\mathbf v}(t,{\mathbf x})\text{.}\) Then we have the following continuity equation:
\begin{equation*} \frac{dM_R(t)}{dt} = \Sigma(t) - \iint_{\partial R} u(t,{\mathbf x})\,{\mathbf v}(t,{\mathbf x})\cdot {\mathbf dS}({\mathbf x}), \end{equation*}
where \(\Sigma\) is the contribution given by sources and sinks inside \(R\) and the last term represents the net amount of fluid leaving (or entering) the region \(R\text{:}\) for every point \({\mathbf x}\in\partial R\text{,}\) the amount, per time unit, that is entering/leaving \(R\) through \({\mathbf x}\) is equal to
\begin{equation*} u(t,{\mathbf x})\,{\mathbf v}(t,{\mathbf x})\cdot {\mathbf dS} \end{equation*}
(recall that \({\mathbf dS}\) at \({\mathbf x}\) is perpendicular to \(\partial R\) at that point).

In the absence of sources/sinks, we can use the divergence theorem
\begin{equation*} \iint_{\partial V} {\mathbf F}\cdot {\mathbf{dS}} = \iiint_V {\mathbf\nabla}\cdot{\mathbf F} dV \end{equation*}
to convert the integral continuity equation into its differential form:
\begin{equation*} \frac{\partial u}{\partial t} + {\mathbf\nabla}\cdot\left(u {\mathbf v}\right) = 0. \end{equation*}
Finally, in case of an incompressible fluid (i.e. \({\mathbf\nabla}\cdot{\mathbf v}=0\)) this equation reduces to the advection equation
\begin{equation*} \frac{\partial u}{\partial t} + {\mathbf v} \cdot {\mathbf\nabla} u = 0. \end{equation*}

Subsection 2.2.1 The 1D Advection equation

We consider in particular the simple case of a 1-dimensional fluid with constant velocity field \(c\text{,}\) in which case the advection equation reduces to
\begin{equation*} u_t+cu_x=0. \end{equation*}

The most natural initial condition for such equation is specifying the shape of the solution at \(t=0\text{,}\) namely
\begin{equation*} u(0,{\mathbf x})=f({\mathbf x}). \end{equation*}

Characteristics. The analytical solution of the advection equation can be determined by the method of characteristics, which in this case reduces to the following observation.

Re-write the PDE as
\begin{equation*} (\partial_t+c\partial_x) u(t,x) = 0,\; u(0,x)=f(x). \end{equation*}
Hence \(u\) is constant on the integral lines of the vector field \(\partial_t+c\partial_x\text{.}\) These are the solutions of the ODE
\begin{equation*} \begin{cases} \dot t(s) = 1\cr \dot x(s) = c\cr \end{cases} \end{equation*}
namely \(x(t)=x_0+ct\text{.}\)

This means that \(u(t,x)\) is equal to the value of \(f\) at the point \(x_0\) such that \(x_0+c t = x\text{,}\) namely \(x_0=x-ct\text{.}\) In other words, the solution is
\begin{equation*} u(t,x) = f(x-ct), \end{equation*}
namely the profile of the solution at time \(t\) is the same as the profile at the initial time shifted by \(ct\) to the right.
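This transport behavior is easy to check numerically. The snippet below (the Gaussian profile is our own illustrative choice) evaluates the exact solution \(f(x-ct)\) on a grid:

```python
import numpy as np

# Exact solution of u_t + c u_x = 0 by the method of characteristics:
# the initial profile f is transported rigidly with velocity c.
def advection_exact(f, c, t, x):
    return f(x - c * t)

# A hypothetical Gaussian initial profile, for illustration.
f = lambda x: np.exp(-100 * (x - 0.5) ** 2)
x = np.linspace(0.0, 1.0, 101)

u = advection_exact(f, 1.0, 0.2, x)  # profile at t = 0.2, shifted right by 0.2
```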

Below we solve numerically the 1D advection equation with several different instances of the Finite Differences Method (Section 1.17) to test the weak and strong points of this method for hyperbolic PDEs.

Recall that the basic idea of this method is replacing the "phase space" \([0,T]\times[0,L]\) with the uniform lattice
\begin{equation*} t_n = n\Delta t,\;0\leq n\leq N;\;\;x_m = m\Delta x,\;0\leq m\leq M \end{equation*}
and \(u_t\) and \(u_x\) with one of their discrete approximations.
Figure 2.2.1. Discretization of the rectangle \([0,T]\times[0,L]\text{.}\) The time segment \([0,T]\) is divided into \(N\) subintervals of length \(h=T/N\) and the space segment \([0,L]\) into \(M\) subintervals of length \(k=L/M\text{.}\) Correspondingly, the rectangle \([0,T]\times[0,L]\) is replaced by a grid (or, equivalently, lattice) of \((N+1)\cdot(M+1)\) points with coordinates \((ih,jk),\,0\leq i\leq N,\,0\leq j\leq M\text{.}\)

Definition 2.2.2.
The quantity
\begin{equation*} c_0=\frac{\Delta x}{\Delta t} \end{equation*}
has the dimension of a velocity. We call \(c_0\) the critical velocity of the lattice.

Subsubsection 2.2.1.1 FTBS (Downwind Method), \(O(\Delta t,\Delta x)\)


In this first method, we use the forward Euler approximation for time derivatives and the backward one for space derivatives (Forward in Time, Backward in Space). Consider a function \(u(t,x)\) and set
\begin{equation*} u_{n,m} = u(t_n,x_m). \end{equation*}
Then, from the Taylor expansion formula,
\begin{equation*} u_t(t_n,x_m)=\frac{u_{n+1,m}-u_{n,m}}{\Delta t}+O(\Delta t) \end{equation*}
and
\begin{equation*} u_x(t_n,x_m)=\frac{u_{n,m}-u_{n,m-1}}{\Delta x}+O(\Delta x). \end{equation*}

Hence, after dropping the \(O(\Delta t)\) and \(O(\Delta x)\) remainder terms and substituting these formulae into the advection PDE, we get the linear system of equations
\begin{equation*} u_{n+1,m} = u_{n,m} - \nu\left(u_{n,m}-u_{n,m-1}\right), \end{equation*}
with
\begin{equation*} 0\leq n\leq N-1,\;1\leq m\leq M,\;\nu=c/c_0. \end{equation*}
Notice that we must require \(n<N\) (because the last index of the grid is \(N\)) and \(m>0\) (because there are no grid points with space index \(-1\)).

FTBS Diagram. This scheme can be summarized by the following diagram:
FTBS Initial Conditions. The initial condition of the Advection PDE
\begin{equation*} u(0,x)=f(x) \end{equation*}
becomes here the vector
\begin{equation*} u_{0,0}=f(0),\dots,u_{0,m}=f(x_m),\dots,u_{0,M}=f(L). \end{equation*}
Notice that the solution of the FTBS system cannot give the values of the variables \(u_{n,0}\text{,}\) \(n\geq1\text{,}\) namely the values of the solution at the left endpoint at all times, since they would involve the coefficients \(u_{n-1,-1}\text{!}\)

This is due to the fact that the PDE is defined for all \(x\in\Bbb R\) but numerical methods can only solve the problem within a finite box. The impossibility of getting the \(u_{n,0}\text{,}\) \(n\geq1\text{,}\) corresponds to the fact that the characteristics of the PDE hit the boundary at all points \((t,0)\text{.}\)

We therefore need to provide an extra boundary condition on the segment \((t,0)\text{,}\) \(0\leq t\leq T\text{,}\) corresponding to the values reached by the solution of the PDE on that segment.

Remark 1: this extra boundary condition is not really "extra". The point is that some of the characteristics meet the time axis before the space axis (see Figure 2.2.1) and so the initial condition must be assigned also on that side. This is not a restriction in any way: the values on the x axis are anyway independent of each other (as long as the function is continuous).

Remark 2: different schemes will entail different kinds of extra boundary conditions!

FTBS local order. Let us set \(h=\Delta t\) and \(k=\Delta x\) and let \(u(t,x)\) be a solution of \(u_t+cu_x=0\) and set \(u_{n,m}=u(t_n,x_m)\text{.}\) From the Taylor expansion formula we know that
\begin{equation*} u(t+h,x) = u(t,x) + u_t(t,x)h + \frac{1}{2}u_{tt}(\tau,x)h^2 \end{equation*}
Finally, the values \(u_{n+1,m}\) dictated by the FTBS method are
\begin{equation*} u_{n+1,m} = u_{n,m} - \nu\left(u_{n,m}-u_{n,m-1}\right). \end{equation*}
Hence, the difference between the exact and the numerical solutions is given by
\begin{equation*} \begin{aligned} u(t_n+h,x_m) - u_{n+1,m} &= u(t_n,x_m) + u_t(t_n,x_m)h + \frac{h^2}{2}u_{tt}(\tau,x_m) - u_{n,m} + \nu\left(u_{n,m}-u_{n,m-1}\right)\\ &= hu_t(t_n,x_m) + \frac{h^2}{2}u_{tt}(\tau,x_m) + \nu\left(u_{n,m}-u_{n,m-1}\right)\\ &= -chu_x(t_n,x_m) + \frac{h^2}{2}u_{tt}(\tau,x_m) + \nu\left(u_{n,m}-u_{n,m-1}\right). \end{aligned} \end{equation*}
Now, notice that
\begin{equation*} u_x(t_n,x_m) = \frac{u(t_n,x_m) - u(t_n,x_m-k)}{k} + \frac{k}{2}u_{xx}(t_n,\xi) = \frac{u_{n,m} - u_{n,m-1}}{k} + \frac{k}{2}u_{xx}(t_n,\xi) \end{equation*}
so that
\begin{equation*} \begin{aligned} u(t_n+h,x_m) - u_{n+1,m} &= -ch\frac{u_{n,m} - u_{n,m-1}}{k} - ch\frac{k}{2}u_{xx}(t_n,\xi) + \frac{h^2}{2}u_{tt}(\tau,x_m) + \nu\left(u_{n,m}-u_{n,m-1}\right)\\ &= \frac{h^2}{2}\left(u_{tt}(\tau,x_m) - \frac{c^2}{\nu}u_{xx}(t_n,\xi)\right), \end{aligned} \end{equation*}
since \(\nu=ch/k\text{.}\) Hence, the FTBS method is of local order 2.

Remark. Notice that the solutions of the Advection PDE satisfy the relation \(u_{tt}=c^2 u_{xx}\) so that for \(\nu=1\text{,}\) the method becomes of local order 3.
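The local order computed above can be verified numerically: performing a single FTBS step from exact data for the smooth solution \(\sin(2\pi(x-ct))\) while keeping \(\nu\) fixed, the one-step error should drop by a factor of about \(4=2^2\) when \(h\) is halved. A sketch (the values \(c=0.8\text{,}\) \(\nu=0.5\) are our own illustrative choices):

```python
import numpy as np

# One FTBS step starting from exact data, for the smooth solution
# u(t,x) = sin(2*pi*(x - c*t)); the Courant number nu is kept fixed
# while the step h is halved, so the error should scale like h^2.
def ftbs_local_error(h, c=0.8, nu=0.5):
    k = c * h / nu                                   # dx giving Courant number nu
    x = np.linspace(0.0, 1.0, int(round(1.0 / k)) + 1)
    u0 = np.sin(2 * np.pi * x)                       # exact solution at t = 0
    u1 = u0[1:] - nu * (u0[1:] - u0[:-1])            # one FTBS step
    exact = np.sin(2 * np.pi * (x[1:] - c * h))      # exact solution at t = h
    return np.max(np.abs(u1 - exact))

e1 = ftbs_local_error(1e-3)
e2 = ftbs_local_error(5e-4)
ratio = e1 / e2   # close to 4 for a method of local order 2
```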

FTBS consistency: the CFL condition. Courant, Friedrichs and Lewy formulated the following simple necessary condition for the convergence of a finite difference scheme.

Definition 2.2.3.
For any variable \(u_{n,m}\) we define numerical and exact domains of dependence as follows:

The numerical domain of dependence of \(u_{n,m}\) is the convex envelope of all coefficients \(u_{i,j}\) that \(u_{n,m}\) depends upon, explicitly or implicitly, in the linear system determined by the method.

Its exact domain of dependence is the set of all points that lie on characteristic curves passing through \(u_{n,m}\text{.}\)
For instance, in the picture below we show, in cyan, the numerical domain of dependence of the FTBS method. In green, we show the characteristic of an advection PDE with \(0<c<c_0\text{.}\) In this case, the exact domain of dependence of the PDE is a subset of its numerical one.

We give here an elementary proof for the case of advection PDEs with constant coefficients. Since we will do calculations in a floating point system, we can always assume that \(c/c_0\) is rational. This means that, for some large enough time (namely for large enough \(n\)), the characteristic curve passing through the node with index \((n,m)\) will also pass through some other node \((i,j)\text{,}\) with \(i<n\text{,}\) and so \(u_{n,m}=u_{i,j}\text{.}\) If the exact domain of dependence of \((n,m)\) is outside of the numerical domain of dependence, then \(u_{i,j}\) does not appear in the expression for \(u_{n,m}\) given by the FTBS system. Hence, when writing explicitly the values of \(u_{n,m}\) and \(u_{i,j}\) as functions of the values of \(u\) at the initial time, we get two expressions depending on two disjoint sets of values \(u_{0,j}\text{.}\) Such expressions cannot coincide in general because the values \(u_{0,j}\) are all independent of each other.

Recall now that characteristics for the Advection PDE \(u_t+cu_x=0\) are
\begin{equation*} x(t) = x_0+ct. \end{equation*}
Hence, in case of the FTBS scheme, there cannot be convergence for \(c<0\) while, for \(c\geq0\text{,}\) we must also have \(c/c_0\leq1\text{,}\) namely \(0\leq\nu\leq1\text{.}\)

Remark: the CFL condition is necessary but not sufficient!

FTBS convergence: the von Neumann condition. Consider a solution of the form
\begin{equation*} u_{n,m} = \lambda^n e^{i\xi m\Delta x} \end{equation*}
(this is called a Fourier mode of the solution). Notice that, under suitable conditions, every PDE solution can be written as the Fourier transform of some function and so it is important to verify that no elementary Fourier mode diverges exponentially. To rule this out, we must verify that every solution of the form above has \(|\lambda|\leq1\text{.}\)

By substituting the expression above in the algorithm, we get that
\begin{equation*} u_{n+1,m} = (1-\nu)u_{n,m} +\nu u_{n,m-1}, \end{equation*}
namely that
\begin{equation*} \lambda^{n+1} e^{i\xi m\Delta x} = (1-\nu)\lambda^n e^{i\xi m\Delta x} + \nu\lambda^n e^{i\xi(m-1)\Delta x}\text{,} \end{equation*}

and so that
\begin{equation*} \lambda(\xi) = 1 - \nu + \nu e^{-i\xi\Delta x}. \end{equation*}

Hence, the FTBS algorithm is stable when
\begin{equation*} |\lambda(\xi)|^2 = 1 - 4\nu(1-\nu)\sin^2\frac{\xi\Delta x}{2}\leq1 \end{equation*}
for every \(\xi\in\Bbb R\) and \(\Delta x\in\Bbb R\text{.}\) This happens precisely when \(0\leq\nu\leq1\text{,}\) since
\begin{equation*} \max_{0\leq\nu\leq1}4\nu(1-\nu)=1. \end{equation*}
This is precisely the same range in which the method satisfies the CFL condition, so it gives us no extra condition.
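The computation of \(\lambda(\xi)\) above is easy to check numerically; a sketch (the sample values \(\nu=0.8\) and \(\nu=1.2\) are our own choices):

```python
import numpy as np

# Amplification factor lambda(xi) = 1 - nu + nu*exp(-i*xi*dx) of FTBS,
# written in terms of theta = xi*dx.
def ftbs_amplification(nu, theta):
    return 1 - nu + nu * np.exp(-1j * theta)

theta = np.linspace(0.0, 2 * np.pi, 1001)
max_inside  = np.max(np.abs(ftbs_amplification(0.8, theta)))  # nu in [0,1]: stable
max_outside = np.max(np.abs(ftbs_amplification(1.2, theta)))  # nu > 1: unstable
```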

Implementation in Python. The code below uses FTBS to solve numerically the advection PDE
\begin{equation*} u_t(t,x)+cu_x(t,x)=0,\;u(0,x)=u_0(x),\;(t,x)\in[0,1]\times[0,1], \end{equation*}
whose exact solution is
\begin{equation*} u(t,x) = u_0(x-ct). \end{equation*}
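The interactive cell itself is not reproduced in this text, so here is a minimal sketch of an FTBS solver in the spirit described above; the parameter values and the boundary treatment (feeding the exact inflow value at \(x=0\)) are our own choices, not necessarily those of the original code:

```python
import numpy as np

# A sketch of an FTBS solver for u_t + c u_x = 0 on [0,1]x[0,1].
# The extra boundary value at x = 0 is taken from the exact solution.
def ftbs(u0, c, T=1.0, L=1.0, N=400, M=400):
    dt, dx = T / N, L / M
    nu = c * dt / dx                    # Courant number nu = c/c0
    x = dx * np.arange(M + 1)
    u = u0(x)                           # profile at t = 0
    for n in range(N):
        u[1:] = u[1:] - nu * (u[1:] - u[:-1])
        u[0] = u0(-c * (n + 1) * dt)    # extra boundary condition at x = 0
    return x, u

u0 = lambda x: np.exp(-100 * (x - 0.25) ** 2)
x, u = ftbs(u0, 0.5)
# Compare with the exact solution u0(x - c*T): the error is dominated
# by the diffusion of the scheme.
err = np.max(np.abs(u - u0(x - 0.5)))
```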
FTBS global error order and "code vectorization". We argued above that the FTBS method has local order 2, due to the fact that we approximate both first partial derivatives with expressions that have a local error of order 1 (the local error of the method refers to the local error of the solution, whose order is one more than that of its derivatives).

Based on what happens in the case of ODEs, we expect the global error of the FTBS method to be of order 1. Below, we verify this fact numerically. Notice that in this case we have two parameters, \(\Delta t\) and \(\Delta x\text{,}\) and so the asymptotics depends on how they get small relative to each other. Since, as we discussed above, the extent of the diffusion phenomenon depends on the Courant parameter \(\nu\text{,}\) it makes sense to let \(\Delta t\) and \(\Delta x\) get small in such a way as to keep \(\nu\) constant. We do so in the code below.

Running the code at different values of \(c\) shows the following:
  • when \(0<c\ll1\text{,}\) the global error decreases initially with order \(1/2\) and only for \(\Delta t\) and \(\Delta x\) small enough becomes, as expected from ODE theory, of order 1.
  • when \(c\lesssim1\text{,}\) the global error is of order 1 from the beginning.
  • when \(c=1\text{,}\) the behavior of the algorithm is quite peculiar: the error is more than one order of magnitude smaller than for smaller values of \(c\) and it does not depend on the resolution in time or space.
  • when \(c>1\) or \(c<0\text{,}\) the algorithm is unstable and so there is nothing to discuss.
We present three slightly different versions of the same code, each being an improvement of the previous.

Code 1. The one below is a vanilla version, namely we make no effort to make the code either faster or lighter (i.e. to use less RAM). This code is the most readable of the three but far from being the most efficient:
  • Due to its slowness, given the default choice of parameters, it won't run on this page for \(N>5\text{,}\) namely for more than 1280 subdivisions in time and space.
  • At \(N=5\text{,}\) the code takes about 23 seconds to evaluate the numerical solution.
Code 2. The one below is a vectorized version of the code above. By vectorization is meant the replacement of explicit loops with array expressions. Both MATLAB and NumPy are capable of vectorization. The rationale of this operation is that, rather than letting the programmer write their own code, via loops, to perform vector operations (for instance a scalar product), these languages provide alternate calls that perform the same operation in ways that are implemented internally and are highly optimized. In short, if you can avoid an explicit loop in your code, do avoid it!

For instance, in the code below we replace lines 38-41 of the code above
for m in range(xSteps+1): 
   x[m] = m*dx
   u[0,m]  = u0(x[m])
   ue[0,m] = u[0,m]
with the following implicit loops:
x = dx*np.arange(xSteps+1)
u[0,:]  = u0(x[:])
ue[0,:] = u[0,:]
Similarly, we replace lines 53,55
for m in range(1,xSteps+1):
   u[n+1,m] = u[n,m] - nu*( u[n,m] - u[n,m-1] )
by the one-liner
u[n+1,1:] = u[n,1:] - nu*( u[n,1:] - u[n,:-1] )
Notice that it is not possible to eliminate also the loop in n.

By running the code below, you will be able to notice that, after vectorization, the case \(N=5\) is now executed in about \(0.03s\text{,}\) an improvement of almost three orders of magnitude!

Thanks to this, we can run this code up to \(N=8\text{.}\) What stops us from using larger values of \(N\) here is not speed but storage: from \(N=9\) on, the code uses more than 3GB and this is not allowed when running from SageMathCell (on your PC, the limit would be roughly its amount of RAM).
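The speed-up from vectorization can be seen already on a single FTBS step; the snippet below (array size and timing method are our own choices) times the explicit loop against the sliced version:

```python
import time
import numpy as np

# One FTBS time step, written first with an explicit Python loop and then
# with NumPy slicing; both produce the same profile.
def step_loop(u, nu):
    v = u.copy()
    for m in range(1, len(u)):
        v[m] = u[m] - nu * (u[m] - u[m - 1])
    return v

def step_vectorized(u, nu):
    v = u.copy()
    v[1:] = u[1:] - nu * (u[1:] - u[:-1])
    return v

x = np.linspace(0.0, 1.0, 100001)
u = np.exp(-100 * (x - 0.5) ** 2)

t0 = time.perf_counter(); a = step_loop(u, 0.5);       t_loop = time.perf_counter() - t0
t0 = time.perf_counter(); b = step_vectorized(u, 0.5); t_vec  = time.perf_counter() - t0
```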
Code 3. There is an obvious improvement we can make to the code above in order to save RAM. Indeed, in the implementations above we kept the profiles of the numerical solution at all times, which is completely useless unless one plans to plot the solutions (which we do not do here). Hence, below we define the array u as a 1D array (rather than a 2D one as above) and we replace the line
u[n+1,1:] = u[n,1:] - nu*( u[n,1:] - u[n,:-1] )
with
u[1:] = u[1:] - nu*( u[1:] - u[:-1] )
This way, each new solution will overwrite the old one and precious space won't be wasted pointlessly.

With this version, which is both fast and space-savvy, we can go up to \(N=11\text{,}\) after which the running time again becomes too large for a SageMathCell job.
Animation. The following python code produces an animation showing the time evolution of the solution. Below are a few interesting parameters to check:
  1. c=0.4 The numerical solution moves with the correct speed but there are quite evident signs of diffusion: the Gaussians get lower and wider. See Subsubsection 2.2.1.4 for an explanation of this phenomenon.
  2. c=1 As explained above, in this case the method becomes of local order 3. This is why there are no visible signs of diffusion.
  3. c=1.1 Look at what happens to the evolution of the tail of the main Gaussian. This is an effect of the von Neumann instability.

Subsubsection 2.2.1.2 FTFS (Upwind), \(O(\Delta t,\Delta x)\)

If we use forward differences also for space, we get the algorithm

\begin{equation*} u_{n+1,m} = u_{n,m} - \nu\left(u_{n,m+1}-u_{n,m}\right). \end{equation*}
Below we show the numerical domain of dependence.
Comparing with the previous method, it is clear that in this case there is consistency only for \(-1\leq\nu\leq0\text{.}\)

Ultimately, we showed that every Advection PDE can be solved numerically with either the FTBS algorithm or with the FTFS algorithm as long as we choose \(\Delta t\) and \(\Delta x\) so that \(|c|\leq c_0\text{.}\)

Subsubsection 2.2.1.3 FTCS (Centered), \(O(\Delta t,\Delta x^2)\)

Based on what we have seen above, a good candidate for an algorithm working for both positive and negative values of \(c\) is one where the centered differences approximation is used for the space derivative:
\begin{equation*} u_x(t_n,x_m)\simeq\frac{u_{n,m+1}-u_{n,m-1}}{2\Delta x}. \end{equation*}
In particular, this means that this method is of second order in space.

The advection PDE \(u_t+cu_x=0\) becomes this way the linear system
\begin{equation*} u_{n+1,m} = u_{n,m} - \frac{\nu}{2}\left(u_{n,m+1}-u_{n,m-1}\right)\,, \end{equation*}
with \(0\leq n\leq N-1\text{,}\) \(1\leq m\leq M-1\text{,}\) \(\nu=c/c_0\text{.}\)

Notice that for FTCS we need to set boundary conditions at both \(x=0\) and \(x=L\text{.}\)

FTCS Diagram. This scheme can be summarized by the following diagram:

CFL and von Neumann conditions. As expected, the numerical domain of dependence of \(u_{n,m}\) in FTCS is a triangle symmetric with respect to the vertical direction and therefore we get a weaker bound from the CFL condition: for both positive and negative values of \(c\text{,}\) the characteristics are inside the domain of dependence as long as \(|\nu|\leq1\text{.}\)

The von Neumann condition, though, gives us that
\begin{equation*} |\lambda|^2=1+\nu^2\sin^2(\xi\Delta x)\geq1! \end{equation*}
We say that the FTCS algorithm is unconditionally unstable, namely unstable for any value of \(\Delta t\) and \(\Delta x\text{.}\)
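The unconditional instability is easy to observe in practice: even a smooth, well-resolved initial profile eventually blows up, since round-off errors seed the high-frequency Fourier modes that FTCS amplifies at every step. A sketch (parameters and the frozen-endpoint boundary treatment are our own choices):

```python
import numpy as np

# FTCS applied to a smooth, well-resolved Gaussian. Round-off errors seed
# high-frequency Fourier modes, which FTCS amplifies at every step
# (|lambda| > 1), so the numerical solution eventually blows up.
def ftcs_peak(u0, c, T=1.0, L=1.0, N=1000, M=1000):
    dt, dx = T / N, L / M
    nu = c * dt / dx
    x = dx * np.arange(M + 1)
    u = u0(x)
    for n in range(N):
        u[1:-1] = u[1:-1] - 0.5 * nu * (u[2:] - u[:-2])
        # endpoints kept frozen at their initial (essentially zero) values
    return np.max(np.abs(u))

u0 = lambda x: np.exp(-100 * (x - 0.5) ** 2)
peak = ftcs_peak(u0, 0.5)   # far larger than the initial maximum 1
```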

Subsubsection 2.2.1.4 The Lax Method, \(O(\Delta t,\Delta x^2)\)

It turns out that there is a simple way to fix FTCS's problem, namely replacing the \(u_{n,m}\) term in the FTCS system with the average over its two neighbours:
\begin{equation*} u_{n+1,m} = \frac{u_{n,m+1}+u_{n,m-1}}{2} - \nu\frac{u_{n,m+1}-u_{n,m-1}}{2}\,. \end{equation*}

The Lax Diagram. This scheme can be summarized by the following diagram:

Lax's CFL and Von Neumann conditions. The CFL condition for this case, as expected, holds for both positive and negative values of \(c\) as long as \(|\nu|\leq1\text{.}\)

Moreover, in this case
\begin{equation*} |\lambda|^2 = 1-(1-\nu^2)\sin^2(\xi\Delta x) \end{equation*}
so the method is stable independently of \(\Delta x\) as long as \(\nu^2\leq1\) (conditional stability).

Implementation in Python. The code below uses the Lax algorithm to solve numerically the advection PDE
\begin{equation*} u_t(t,x)+cu_x(t,x)=0,\;u(0,x)=u_0(x),\;(t,x)\in[0,1]\times[0,1], \end{equation*}
whose exact solution is
\begin{equation*} u(t,x) = u_0(x-ct). \end{equation*}
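Again the interactive cell is not reproduced here; a minimal sketch of a Lax solver along these lines (parameters and boundary treatment are our own choices) is:

```python
import numpy as np

# A sketch of a Lax solver for u_t + c u_x = 0. Boundary treatment:
# exact inflow value on the left, copy-out on the right.
def lax(u0, c, T=1.0, L=1.0, N=400, M=400):
    dt, dx = T / N, L / M
    nu = c * dt / dx
    x = dx * np.arange(M + 1)
    u = u0(x)
    for n in range(N):
        u[1:-1] = 0.5 * (u[2:] + u[:-2]) - 0.5 * nu * (u[2:] - u[:-2])
        u[0] = u0(-c * (n + 1) * dt)   # inflow value from the exact solution
        u[-1] = u[-2]                  # simple outflow condition
    return x, u

u0 = lambda x: np.exp(-100 * (x - 0.25) ** 2)
x, u = lax(u0, 0.5)
# The error is dominated by diffusion: the peak drops noticeably.
err = np.max(np.abs(u - u0(x - 0.5)))
```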
The diffusion problem. The Lax method works nicely from the point of view of the "phase", in the sense that the velocity of the profiles is close to the constant velocity of the exact solution. What is much less satisfying is the evident damping problem: the profiles tend to become lower and wider, namely the numerical algorithm has a diffusion effect, a qualitative behavior that is absent in the exact solution.

A way to explain this phenomenon is to re-write the Lax algorithm as

\begin{equation*} u_{n+1,m}-u_{n,m} + \frac{\nu}{2}(u_{n,m+1}-u_{n,m-1}) = \frac{1}{2}(u_{n,m+1}-2u_{n,m}+u_{n,m-1}) \end{equation*}

and then as

\begin{equation*} \frac{u_{n+1,m}-u_{n,m}}{\Delta t} + \frac{\nu\Delta x}{\Delta t}\frac{u_{n,m+1}-u_{n,m-1}}{2\Delta x} = \frac{\Delta x^2}{2\Delta t}\frac{u_{n,m+1}-2u_{n,m}+u_{n,m-1}}{\Delta x^2} \end{equation*}

Recall that \(\nu=c\Delta t/\Delta x\text{.}\) Hence, this linear system is a discrete approximation of the PDE

\begin{equation*} u_t + c u_x = \frac{(\Delta x)^2}{2\Delta t}u_{xx} \end{equation*}

The combination \((\Delta x)^2/\Delta t\) does not necessarily go to zero for \(\Delta x,\Delta t\to0\text{,}\) in which case the Lax method is also a solver for a diffusion PDE (see the section about the heat PDE) and so the diffusion phenomenon is inherently contained inside the algorithm.

Subsubsection 2.2.1.5 The Leapfrog Method, \(O(\Delta t^2,\Delta x^2)\)

So far we used a first-order approximation for the time derivative. A natural way to improve a method's accuracy is passing to second order, namely using
\begin{equation*} \begin{aligned} u_t &\simeq \frac{u_{n+1,m}-u_{n-1,m}}{2\Delta t}\\ u_x &\simeq \frac{u_{n,m+1}-u_{n,m-1}}{2\Delta x}.\\ \end{aligned} \end{equation*}
In this case we get the system
\begin{equation*} u_{n+1,m} = u_{n-1,m} - \nu\left(u_{n,m+1}-u_{n,m-1}\right). \end{equation*}
Leapfrog diagram. This scheme can be summarized by the following diagram:

Leapfrog's CFL and Von Neumann conditions. From the picture above it is clear that the CFL condition for this case holds for \(|\nu|\leq1\text{.}\)

Substituting a solution of the form \(u_{n,m}=\lambda^n e^{i\xi m\Delta x}\) into the method gives
\begin{equation*} \lambda^{n+1} = \lambda^{n-1} - \nu\lambda^n\left(e^{i\xi\Delta x}-e^{-i\xi\Delta x}\right), \end{equation*}
so that \(\lambda\) satisfies the quadratic equation
\begin{equation*} \lambda^2 + 2i\nu\sin(\xi\Delta x)\,\lambda - 1 = 0, \end{equation*}
whose roots are
\begin{equation*} \lambda = -i\nu\sin(\xi\Delta x)\pm\sqrt{1-\nu^2\sin^2(\xi\Delta x)}. \end{equation*}
For \(|\nu|\leq1\) the square root is real and both roots satisfy \(|\lambda|=1\text{.}\) Within the CFL range the method is therefore stable; moreover, since \(|\lambda|=1\) exactly, it introduces no numerical dissipation.
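The two roots of the quadratic satisfied by \(\lambda\) (obtained by dividing the relation above by \(\lambda^{n-1}e^{i\xi m\Delta x}\)) can be checked numerically; a sketch with the illustrative value \(\nu=0.9\):

```python
import numpy as np

# Roots of lambda^2 + 2i*nu*sin(theta)*lambda - 1 = 0, with theta = xi*dx,
# coming from the leapfrog recursion for a Fourier mode.
def leapfrog_lambdas(nu, theta):
    s = nu * np.sin(theta)
    disc = np.sqrt(1 - s ** 2 + 0j)   # real for |nu| <= 1
    return -1j * s + disc, -1j * s - disc

theta = np.linspace(0.0, 2 * np.pi, 181)
l1, l2 = leapfrog_lambdas(0.9, theta)
moduli = np.abs(np.concatenate([l1, l2]))   # all equal to 1 when |nu| <= 1
```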

There is unfortunately a major flaw in this method: mesh points can be sorted into two subgrids that are completely de-coupled:

Subsubsection 2.2.1.6 The Lax-Wendroff Method, \(O(\Delta t^2,\Delta x^2)\)

The methods we illustrated so far all use the same idea: given the exact solution \(\hat u(t,x)\) of the advection PDE \(u_t+cu_x=0\text{,}\) one expands \(\hat u\) in Taylor series in \(t\) about \(t_0\) and gets

\begin{equation*} \hat u(t,x) = \hat u(t_0,x) + \Delta t\, \hat u_t(t_0,x) + O((\Delta t)^2), \end{equation*}

where \(\Delta t=t-t_0\text{.}\) Then, after replacing \(\hat u_t\) with \(-c\hat u_x\) and using either backward or central differencing for the derivative with respect to \(x\text{,}\) one gets, respectively, the FTBS and FTCS algorithms (and the Lax one, which is a deformation of FTCS).

A natural way to improve the precision of the solution is using one more term of the Taylor expansion:

\begin{equation*} \hat u(t,x) = \hat u(t_0,x) + \Delta t\, \hat u_t(t_0,x) + \frac{(\Delta t)^2}{2} \hat u_{tt}(t_0,x) + O((\Delta t)^3) = \end{equation*}
\begin{equation*} = \hat u(t_0,x) - c\Delta t\, \hat u_x(t_0,x) + c^2\frac{(\Delta t)^2}{2} \, \hat u_{xx}(t_0,x) + O((\Delta t)^3). \end{equation*}

The Lax-Wendroff algorithm consists in dropping all terms beyond the second order and replacing the first space derivative with the central difference and the second with the centered second-difference approximation:

\begin{equation*} u_{n+1,m} = u_{n,m} - \nu\frac{u_{n,m+1}-u_{n,m-1}}{2} + \nu^2\frac{u_{n,m+1}-2u_{n,m}+u_{n,m-1}}{2} \end{equation*}
\begin{equation*} = \alpha u_{n,m+1} + \beta u_{n,m} + \gamma u_{n,m-1}, \end{equation*}

where

\begin{equation*} \alpha = -\frac{\nu(1-\nu)}{2},\; \beta = 1-\nu^2,\; \gamma = \frac{\nu(1+\nu)}{2}. \end{equation*}

Lax-Wendroff CFL and Von Neumann conditions. The CFL condition for this case is the same as for the Lax method, so a necessary condition for its consistency is that \(|c|\leq c_0\text{.}\) Moreover, in this case

\begin{equation*} |\lambda|^2 = 1-\nu^2(1-\nu^2)(1-\cos(\xi\Delta x))^2 \end{equation*}

so, just like the Lax method, it is stable independently of \(\Delta x\) as long as \(\nu^2\leq1\text{.}\) Unlike the Lax method, though, the diffusion in the Lax-Wendroff algorithm is much weaker.

Implementation in Python. The code below uses the Lax-Wendroff algorithm to solve numerically the advection PDE
\begin{equation*} u_t(t,x)+cu_x(t,x)=0,\;u(0,x)=u_0(x),\;(t,x)\in[0,1]\times[0,1], \end{equation*}
whose exact solution is
\begin{equation*} u(t,x) = u_0(x-ct). \end{equation*}
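As above, the interactive cell is not reproduced here; a minimal Lax-Wendroff sketch using the coefficients \(\alpha,\beta,\gamma\) (parameters and boundary treatment are our own choices):

```python
import numpy as np

# A sketch of a Lax-Wendroff solver for u_t + c u_x = 0, written with
# the coefficients alpha, beta, gamma of the three-point stencil.
def lax_wendroff(u0, c, T=1.0, L=1.0, N=400, M=400):
    dt, dx = T / N, L / M
    nu = c * dt / dx
    alpha = -0.5 * nu * (1 - nu)
    beta = 1 - nu ** 2
    gamma = 0.5 * nu * (1 + nu)
    x = dx * np.arange(M + 1)
    u = u0(x)
    for n in range(N):
        u[1:-1] = alpha * u[2:] + beta * u[1:-1] + gamma * u[:-2]
        u[0] = u0(-c * (n + 1) * dt)   # inflow from the exact solution
        u[-1] = u[-2]                  # simple outflow condition
    return x, u

u0 = lambda x: np.exp(-100 * (x - 0.25) ** 2)
x, u = lax_wendroff(u0, 0.5)
# Much smaller error than with the Lax method: the diffusion is far weaker.
err = np.max(np.abs(u - u0(x - 0.5)))
```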
Lax-Wendroff global error order.
Animation. The following python code produces an animation showing the time evolution of the solution. Below are a few interesting parameters to check:
  1. c=0.4 The numerical solution moves with the correct speed and, even though the diffusion is much weaker than with the FTBS method, its effect is still visible on the smaller of the two Gaussians.
  2. c=1 Just as in the case of the FTBS method, in this case there are no visible signs of diffusion.
  3. c=1.1 In this case, the effects of the von Neumann instability are much more visible!

Subsection 2.2.2 The 1D Wave PDE

Here we discuss the numerical solution of the PDE
\begin{equation} u_{tt}=c^2u_{xx}.\label{eq-wave1d}\tag{2.2.1} \end{equation}
The most natural initial conditions for this PDE are given by
\begin{equation*} u(0,x)=f(x),\;u_t(0,x)=g(x). \end{equation*}
As seen for ODEs, we can lower the order of the PDE by setting \(u_t=p\) and \(u_x=q/c\text{,}\) so that we get the system
\begin{equation*} \left\{ \begin{aligned} u_t & = p\\ u_x & = q/c\\ p_t & = cq_x\\ \end{aligned} \right.,\;\;u(0,x)=f(x),\;p(0,x)=g(x),\;q(0,x)=cf'(x). \end{equation*}
After differentiating by \(x\) the first equation and by \(t\) the second, we see that we can replace the second PDE above by \(q_t = cp_x\text{,}\) so that we get the following final system
\begin{equation} \left\{ \begin{aligned} u_t & = p\\ p_t & = cq_x\\ q_t & = cp_x\\ \end{aligned} \right.,\;\;u(0,x)=f(x),\;p(0,x)=g(x),\;q(0,x)=cf'(x).\label{eq-wave1stOrder}\tag{2.2.2} \end{equation}
Concretely, we first solve numerically the advection system
\begin{equation} \left\{ \begin{aligned} p_t & = cq_x\\ q_t & = cp_x\\ \end{aligned} \right.,\;\;p(0,x)=g(x),\;q(0,x)=cf'(x).\label{eq-advection2b2}\tag{2.2.3} \end{equation}
and then we evaluate \(u\) using the fact that \(u_t=p\text{.}\)

Subsubsection 2.2.2.1 FTBS (Downwind Method), \(O(\Delta t,\Delta x)\)


By applying the FTBS approximation to system (2.2.3) we get the following algorithm:
\begin{equation*} \left\{ \begin{aligned} p_{n+1,m} & = p_{n,m} + \nu( q_{n,m} - q_{n,m-1} )\\ q_{n+1,m} & = q_{n,m} + \nu( p_{n,m} - p_{n,m-1} )\\ u_{n+1,m} & = u_{n,m} + \Delta t\,p_{n,m}\\ \end{aligned} \right. \end{equation*}
We know that this method is stable only for positive velocities, while the wave equation always has solutions that are superpositions of profiles moving with positive and negative speeds. It is to be expected, then, that this method is always unstable for the wave PDE, as the code below shows.

Subsubsection 2.2.2.2 The Lax Method, \(O(\Delta t,\Delta x^2)\)


By applying the Lax method to system (2.2.3) we get the following algorithm:
\begin{equation*} \left\{ \begin{aligned} p_{n+1,m} & = \frac{p_{n,m+1}+p_{n,m-1}}{2} + \nu\frac{q_{n,m+1} - q_{n,m-1}}{2}\\ q_{n+1,m} & = \frac{q_{n,m+1}+q_{n,m-1}}{2} + \nu\frac{p_{n,m+1} - p_{n,m-1}}{2}\\ u_{n+1,m} & = u_{n,m} + \Delta t\,p_{n,m}\\ \end{aligned} \right. \end{equation*}
We know from the previous section that this method is stable for \(|\nu|\leq1\text{.}\) As the results of the numerical implementation below show, though, this method suffers greatly from a diffusion problem: the two generated waves, moving with velocities \(\pm c\text{,}\) get lower and wider.
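A minimal sketch of this Lax scheme for the system (our own parameter and boundary choices; the boundary values are simply kept frozen, which is adequate as long as the waves do not reach the endpoints):

```python
import numpy as np

# A sketch of the Lax scheme for the system p_t = c q_x, q_t = c p_x,
# with u then updated via u_t = p (forward Euler in time).
# fp is the derivative f' of the initial profile f.
def wave_via_lax(f, fp, g, c=0.5, T=0.5, L=1.0, N=400, M=400):
    dt, dx = T / N, L / M
    nu = c * dt / dx
    x = dx * np.arange(M + 1)
    u, p, q = f(x), g(x), c * fp(x)
    for n in range(N):
        u = u + dt * p                  # u_{n+1,m} = u_{n,m} + dt * p_{n,m}
        p_new, q_new = p.copy(), q.copy()
        p_new[1:-1] = 0.5 * (p[2:] + p[:-2]) + 0.5 * nu * (q[2:] - q[:-2])
        q_new[1:-1] = 0.5 * (q[2:] + q[:-2]) + 0.5 * nu * (p[2:] - p[:-2])
        p, q = p_new, q_new
    return x, u

f = lambda x: np.exp(-100 * (x - 0.5) ** 2)
fp = lambda x: -200 * (x - 0.5) * f(x)
g = lambda x: np.zeros_like(x)
x, u = wave_via_lax(f, fp, g)
```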

Subsubsection 2.2.2.3 The Leapfrog Method, \(O(\Delta t^2,\Delta x^2)\)


By applying the Leapfrog method to system (2.2.3) we get the following algorithm:
\begin{equation*} \left\{ \begin{aligned} p_{n+1,m} & = p_{n-1,m} + \nu( q_{n,m+1} - q_{n,m-1} )\\ q_{n+1,m} & = q_{n-1,m} + \nu( p_{n,m+1} - p_{n,m-1} )\\ u_{n+1,m} & = u_{n,m} + \Delta t\,\frac{p_{n+1,m}+p_{n,m}}{2}\\ \end{aligned} \right., \end{equation*}
where we replaced in the last equation \(p_{n,m}\) with its "forward in time" average in order to maintain the time approximation of \(u_{n+1,m}\) of local order 2.

We know from the previous section that this method is stable for \(|\nu|<1\text{.}\) As the results of the numerical implementation below show, though, this method too suffers from a diffusion problem: the two generated waves, moving with velocities \(\pm c\text{,}\) get lower and wider.

Subsubsection 2.2.2.4 A new method


Of course we can act directly on the 2nd order PDE and approximate the second derivatives of \(u\) with
\begin{equation*} u_{tt} = \frac{u_{n+1,m}-2u_{n,m}+u_{n-1,m}}{\Delta t^2} + O(\Delta t^2),\;\; u_{xx} = \frac{u_{n,m+1}-2u_{n,m}+u_{n,m-1}}{\Delta x^2} + O(\Delta x^2). \end{equation*}
This way we get the following method:
\begin{equation*} u_{n+1,m} = 2u_{n,m} - u_{n-1,m} + \nu^2 (u_{n,m+1}-2u_{n,m}+u_{n,m-1}). \end{equation*}
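A minimal sketch of this scheme (our own parameters; the first step is obtained from the Taylor expansion \(u(\Delta t,x)\approx f(x)+\Delta t\,g(x)+\frac{c^2\Delta t^2}{2}f''(x)\text{,}\) and the boundary values are kept at zero, which is adequate for a profile that vanishes near the endpoints):

```python
import numpy as np

# A sketch of the three-level scheme for u_tt = c^2 u_xx with
# u(0,x) = f(x), u_t(0,x) = g(x). First step via Taylor expansion;
# boundary values held at zero.
def wave_1d(f, g, c=0.25, T=1.0, L=1.0, N=400, M=400):
    dt, dx = T / N, L / M
    nu = c * dt / dx
    x = dx * np.arange(M + 1)
    u_prev = f(x)
    u = u_prev.copy()
    u[1:-1] = (u_prev[1:-1] + dt * g(x[1:-1])
               + 0.5 * nu ** 2 * (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]))
    for n in range(1, N):
        u_next = np.zeros_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + nu ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return x, u

f = lambda x: np.exp(-100 * (x - 0.5) ** 2)
g = lambda x: np.zeros_like(x)
x, u = wave_1d(f, g)
# For g = 0, d'Alembert's formula gives u(t,x) = (f(x-ct) + f(x+ct))/2
# away from the boundary.
err = np.max(np.abs(u - 0.5 * (f(x - 0.25) + f(x + 0.25))))
```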

Subsection 2.2.3 The 2D Wave PDE

Below we solve the Wave PDE
\begin{equation*} u_{tt} = c^2\left(u_{xx}+u_{yy}\right). \end{equation*}
Extending the method above to the case of 2 space dimensions, we find the algorithm
\begin{equation*} u_{n+1,m,l} = 2u_{n,m,l} - u_{n-1,m,l} + \nu^2 (u_{n,m+1,l}+u_{n,m,l+1}-4u_{n,m,l}+u_{n,m-1,l}+u_{n,m,l-1}). \end{equation*}
The code below solves the wave equation above with initial conditions
\begin{equation*} u(0,x,y) = e^{-100((x-0.5)^2+(y-0.5)^2)},\;u_t(0,x,y)=0 \end{equation*}
and produces an animation of the numerical solution.
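The animation code itself is not reproduced here; a minimal sketch of the 2D scheme (our own parameters, homogeneous boundary values; note that in 2D the stability bound becomes \(\nu\leq1/\sqrt2\)) is:

```python
import numpy as np

# A sketch of the 2D three-level scheme with the initial conditions above.
# nu = c*dt/h must satisfy the 2D stability bound nu <= 1/sqrt(2).
def wave_2d(f, c=1.0, T=0.5, L=1.0, N=200, M=100):
    dt, h = T / N, L / M
    nu = c * dt / h
    x = h * np.arange(M + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u_prev = f(X, Y)
    # five-point discrete Laplacian on the interior nodes
    lap = lambda v: (v[2:, 1:-1] + v[:-2, 1:-1] + v[1:-1, 2:] + v[1:-1, :-2]
                     - 4 * v[1:-1, 1:-1])
    # first step, using u_t(0,.) = 0
    u = u_prev.copy()
    u[1:-1, 1:-1] = u_prev[1:-1, 1:-1] + 0.5 * nu ** 2 * lap(u_prev)
    for n in range(1, N):
        u_next = np.zeros_like(u)
        u_next[1:-1, 1:-1] = (2 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                              + nu ** 2 * lap(u))
        u_prev, u = u, u_next
    return u

f = lambda x, y: np.exp(-100 * ((x - 0.5) ** 2 + (y - 0.5) ** 2))
u = wave_2d(f)
```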