Poisson Equation | $u_{xx}+u_{yy}=f$ | $u$ = potential of a distribution of charges $f$ |
Advection Equation | $u_{t}=c u_{x}$ | rate of change of the number of particles = flux of the associated current |
Wave Equation | $u_{tt}=u_{xx}+u_{yy}$ | evolution of mechanical and e.m. waves |
Klein-Gordon Eq. | $u_{tt}=\alpha^2 u_{xx}-\mu^2u$ | evolution of a relativistic quantum particle |
Sine-Gordon Equation | $u_{tt}=\alpha^2 u_{xx}-\mu^2\sin u$ | chains of pendula |
Heat Equation | $u_{t}=\alpha^2\left(u_{xx}+u_{yy}\right)$ | evolution of heat in a medium |
Schrödinger Equation | $i u_{t}=\alpha^2\left(u_{xx}+u_{yy}\right)+V(x,y)u$ | evolution of a non-relativistic quantum particle |
This theorem, though, fails if any of the functions involved is merely smooth
rather than analytic!
See
https://en.wikipedia.org/wiki/Lewy's_example.
Basic examples:
Elliptic | $u_{xx}+u_{yy}=0$ |
Hyperbolic | $u_{tt}=u_{xx}$ |
Parabolic | $u_{t}=u_{xx}$ |
$$t_n = n\Delta t,\;0\leq n\leq N;\;\;x_m = m\Delta x,\;0\leq m\leq M$$
and the derivatives $u_t$ and $u_{xx}$ with discrete approximations, e.g. $u_t(t_n,x_j)\to\frac{u_{n+1,j}-u_{n,j}}{\Delta t}$ and
$u_{xx}(t_n,x_j)\to\frac{u_{n,j+1}-2u_{n,j}+u_{n,j-1}}{\left(\Delta x\right)^2}$
so the equation translates into
$$ u_{n+1,j} = u_{n,j} + \nu\left(u_{n,j+1}-2u_{n,j}+u_{n,j-1}\right)\,, $$
where $\nu=\frac{c\Delta t}{\left(\Delta x\right)^2}$
(implemented in p2a.m )
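As a complement to the MATLAB code, here is a minimal Python sketch of this explicit update (the function name and the tiny test grid are my own; Dirichlet boundary values are held at zero):

```python
import numpy as np

def heat_explicit_step(u, nu):
    """One forward-Euler step for the heat equation; nu = c*dt/dx^2
    as in the text. The endpoints (Dirichlet data) are left untouched.
    Recall this scheme is stable only for nu <= 1/2."""
    v = u.copy()
    v[1:-1] = u[1:-1] + nu * (u[2:] - 2.0*u[1:-1] + u[:-2])
    return v

# usage: a concentrated peak spreads to its neighbours in one step
u0 = np.array([0., 0., 1., 0., 0.])
u1 = heat_explicit_step(u0, 0.25)
```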
$$ u_{n,j} = u_{n-1,j} + \nu\left(u_{n,j+1}-2u_{n,j}+u_{n,j-1}\right)\,, $$
that can be solved "backwards":
$$ u_{n-1,j} = u_{n,j} - \nu\left(u_{n,j+1}-2u_{n,j}+u_{n,j-1}\right)\,, $$
Namely, $\vec u_{n-1}$, the vector of values of $u$ at $t=t_{n-1}$, is equal to $M\vec u_{n}$,
where the matrix $M$ is tridiagonal with $1+2\nu$ on the diagonal
and $-\nu$ in the upper/lower diagonal terms.
Hence
$\vec u_{n} = (M^{-1})^n\cdot\vec u_{0}$
(implemented in p2b.m )
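A Python sketch of one implicit step (a dense solve for readability; in practice one would exploit the tridiagonal structure; names are mine):

```python
import numpy as np

def heat_implicit_step(u, nu):
    """One backward-Euler step for the heat equation: solve
    M u_new = u_old, with M tridiagonal, 1+2*nu on the diagonal
    and -nu above/below it (interior nodes; u=0 at the ends)."""
    n = len(u) - 2
    M = ((1 + 2*nu) * np.eye(n)
         - nu * np.eye(n, k=1)
         - nu * np.eye(n, k=-1))
    v = np.zeros_like(u)
    v[1:-1] = np.linalg.solve(M, u[1:-1])
    return v

# usage: no blow-up even with a large nu, unlike the explicit scheme
u0 = np.array([0., 0., 1., 0., 0.])
u1 = heat_implicit_step(u0, 10.0)
```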
so that $$ G(k) = \frac{1}{1+4\nu\sin^2\frac{k\Delta x}{2}} $$
This method is therefore stable for all $\Delta t$ and $\Delta x$!
$u_{xx}(t_n,x_j)\to \frac{1}{2}\left(\frac{u_{n+1,j+1}-2u_{n+1,j}+u_{n+1,j-1}}{\left(\Delta x\right)^2} + \frac{u_{n,j+1}-2u_{n,j}+u_{n,j-1}}{\left(\Delta x\right)^2}\right)$
The Heat equation now translates into $$ -\nu u_{n+1,j+1}+2(1+\nu)u_{n+1,j}-\nu u_{n+1,j-1} = \nu u_{n,j+1}+2(1-\nu)u_{n,j}+\nu u_{n,j-1} $$
which writes as $M\vec u_{n+1}=N\vec u_{n}$
for two tridiagonal matrices $M,N$,
so that
$\vec u_{n} = (M^{-1}N)^n\cdot\vec u_{0}$
$$ G(k) = \frac{1-2\nu\sin^2\frac{k\Delta x}{2}}{1+2\nu\sin^2\frac{k\Delta x}{2}} $$
This method is therefore also unconditionally stable
and, moreover,
is second order in both time and space.
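A Python sketch of one Crank-Nicolson step under the same conventions (dense matrices for clarity, zero Dirichlet boundaries; names are mine):

```python
import numpy as np

def heat_cn_step(u, nu):
    """One Crank-Nicolson step: solve M u_new = N u_old, where
    M has 2(1+nu) on the diagonal and -nu off it, and
    N has 2(1-nu) on the diagonal and +nu off it."""
    n = len(u) - 2
    T = -2.0*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # second difference
    M = 2.0*np.eye(n) - nu*T
    N = 2.0*np.eye(n) + nu*T
    v = np.zeros_like(u)
    v[1:-1] = np.linalg.solve(M, N @ u[1:-1])
    return v

# usage: the peak decays smoothly in one step
u0 = np.array([0., 0., 1., 0., 0.])
u1 = heat_cn_step(u0, 0.5)
```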
$$ u_{xx} + u_{yy} = \rho\;\;\hbox{ in }S\,,\;\;u|_{\partial S}=f, $$
is called the Poisson equation or, when $\rho=0$, the Laplace equation.
In the case of static electric fields, the two Maxwell laws
$$ \vec\nabla\cdot\vec E = \frac{\rho}{\varepsilon} \,,\;\; \vec\nabla\times\vec E = -\frac{\partial\vec B}{\partial t}=\vec 0\,, $$
amount to
$$ \vec E=-\vec\nabla V \,,\;\; \Delta V=-\frac{\rho}{\varepsilon}\,, $$
where $V$ is the electric potential function
associated with the (conservative) electric vector field $\vec E$.
$V_{xx}(x_i,y_j)\to\frac{V_{i+1,j}-2V_{i,j}+V_{i-1,j}}{\left(\Delta x\right)^2}$, $V_{yy}(x_i,y_j)\to\frac{V_{i,j+1}-2V_{i,j}+V_{i,j-1}}{\left(\Delta y\right)^2}$,
so that the equation translates into
$ \frac{V_{i+1,j}-2V_{i,j}+V_{i-1,j}}{\left(\Delta x\right)^2} + \frac{V_{i,j+1}-2V_{i,j}+V_{i,j-1}}{\left(\Delta y\right)^2} = -\frac{\rho_{i,j}}{\varepsilon} \;,\;\; 1\leq i,j\leq N $
where $\rho_{i,j}$ is equal to 1 at $(N/2,N/2)$ and zero everywhere else.
$$ M = \begin{pmatrix} B&-I&&&0\\ -I&B&-I&&\\ &\ddots&\ddots&\ddots&\\ &&-I&B&-I\\ 0&&&-I&B\\ \end{pmatrix} $$
is an $N^2\times N^2$ block tridiagonal matrix, where
$$ B = \begin{pmatrix} 4&-1&&&0\\ -1&4&-1&&\\ &\ddots&\ddots&\ddots&\\ &&-1&4&-1\\ 0&&&-1&4\\ \end{pmatrix} $$
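The block matrix $M$ is conveniently assembled with Kronecker products; a Python sketch (dense for readability, `scipy.sparse.kron` would be the realistic choice):

```python
import numpy as np

def laplacian_2d(N):
    """Assemble the N^2 x N^2 matrix M above: kron(I, T) fills each
    diagonal block with the 1D part (2 on the diagonal, -1 off it),
    and kron(T, I) adds the -I off-diagonal blocks plus 2 more on
    the diagonal, producing the 4s of the block B."""
    I = np.eye(N)
    T = 2.0*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    return np.kron(I, T) + np.kron(T, I)

M = laplacian_2d(3)
```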
The complexity of this method is $O\left(N^3\right)$ for a dense $N\times N$ matrix. For banded matrices of bandwidth $d$, the complexity goes down to $O\left(d^2N\right)$.
with $D$ the diagonal part of $M$, $U$ its strictly upper triangular and $L$ its strictly lower triangular part.
The solution is obtained iteratively by
$$ \vec V_{k+1} = -D^{-1}(L+U)\vec V_{k} + D^{-1}\vec R $$
This converges to the solution
iff the spectral radius of $D^{-1}(L+U)$ is $<1$.
This happens, for example, for diagonally dominant matrices.
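A Python sketch of the Jacobi iteration (names are mine; the example system is diagonally dominant, so convergence is guaranteed):

```python
import numpy as np

def jacobi(M, R, iters=200):
    """Jacobi: V_{k+1} = D^{-1}(R - (L+U) V_k), i.e. every component
    is updated using only the previous iterate."""
    D = np.diag(M)            # diagonal entries
    LU = M - np.diag(D)       # L + U
    V = np.zeros_like(R, dtype=float)
    for _ in range(iters):
        V = (R - LU @ V) / D
    return V

# usage on a small diagonally dominant system
M = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
R = np.array([1., 0., 1.])
V = jacobi(M, R)
```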
with $D$ the diagonal part of $M$, $U$ its strictly upper triangular and $L$ its strictly lower triangular part.
The solution is obtained iteratively by
$$ \vec V_{k+1} = -(L+D)^{-1}U\vec V_{k} + (L+D)^{-1}\vec R $$
The GS method converges when $M$ is symmetric positive-definite or strictly diagonally dominant. For tridiagonal matrices its spectral radius is the square of Jacobi's, so it converges exactly when Jacobi does, roughly twice as fast.
with $D$ the diagonal part of $M$, $U$ its strictly upper triangular and $L$ its strictly lower triangular part.
The solution is obtained iteratively by
$$ \vec V_{k+1} = (D+\omega L)^{-1}((1-\omega)D-\omega U)\vec V_{k} + \omega(D+\omega L)^{-1}\vec R $$
where $0 < \omega < 2$
(this condition is necessary for convergence; when $M$ is symmetric and positive-definite it is also sufficient).
In general, the optimal $\omega$ depends on the boundary conditions.
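A component-wise Python sketch of the standard SOR sweep (with $\omega=1$ it reduces to Gauss-Seidel; the function name and test system are mine):

```python
import numpy as np

def sor(M, R, omega=1.5, iters=100):
    """SOR: sweep through the components, mixing the old value with
    the Gauss-Seidel update by the relaxation factor omega."""
    n = len(R)
    V = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = R[i] - M[i, :i] @ V[:i] - M[i, i+1:] @ V[i+1:]
            V[i] = (1 - omega) * V[i] + omega * s / M[i, i]
    return V

# usage on a small symmetric positive-definite system
M = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
R = np.array([1., 0., 1.])
V = sor(M, R)
```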
We want to solve the equation $$ A v = r\,. $$
Recall that every finite dimensional vector space admits bases $\{e_1,\dots,e_n\}$, $n=\dim V$,
so that every $v\in V$ can be written in a unique way as $$v=\sum_{i=1}^n v_i e_i\,.$$
$\langle A v,e_i\rangle = \langle r,e_i\rangle\,$, for all $1\leq i\leq n$.
In turn, this is the same as asking
$\langle A v,s\rangle = \langle r,s\rangle\,$, for all $s\in V$.
While in finite dimensions these two formulations are equivalent,
in infinite dimensions
the latter is weaker,
namely in principle it has more solutions than the original one.
The completion of $V$ with respect to $\langle,\rangle$ is called $L^2[0,1]$ and is the space of all measurable functions $u$ such that $$\|u\|^2=\int_0^1|u(x)|^2 dx<\infty\,.$$
$$ -u''(x)+r(x)u(x)=f(x)\,,\;x\in[0,1]\,,\;\;u(0)=u(1)=0\,, $$
with $f,r\in V$. This is a linear equation in $V$ and can be written as $$ Au=f\,,\;u\in C^\infty_0[0,1] $$ where $A$ is the linear operator from $V$ to itself $$ -\frac{d^2}{dx^2}+r(x) $$ and $C^\infty_0[0,1]$ is the subset of smooth functions that are zero at $x=0,1$.
In this equation all functions appear under the integral sign,
so there is no actual need to require
so much regularity:
it is enough to ask that both $u$ and $s$ belong to $H^1_0[0,1]$,
the space
of square-integrable functions whose first derivative
is also square-integrable and that converge
to $0$ for $x\to0^+,1^-$.
If $\{\phi_i\}$ is a basis for the subspace $H^1_0[0,1]$ of all functions of $L^2[0,1]$ with square-integrable derivative and which tend to zero for $x\to0^+$ and $x\to1^-$, then
$\langle u',\phi'_i\rangle+\langle r\,u,\phi_i\rangle=\langle f,\phi_i\rangle$
for all $i$. Since $u=\sum_{i=1}^\infty u_i\phi_i$ and $f=\sum_{i=1}^\infty f_i\phi_i$ we have that
$\sum_{j=1}^\infty u_j\left( \langle \phi'_j,\phi'_i\rangle + \langle r\,\phi_j,\phi_i\rangle\right) = \sum_{j=1}^\infty f_j\langle \phi_j,\phi_i\rangle$
By truncating this sum to some $N$, we get that $$ AU=F $$ for some matrix $A$ and vector $F$.
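A Python sketch of this truncation for the 1D problem, using hat functions on a uniform grid and a constant $r$; the stiffness and mass entries are the standard $\langle\phi'_j,\phi'_i\rangle$ and $\langle\phi_j,\phi_i\rangle$ values for hats, and all names are mine:

```python
import numpy as np

def fem_1d(f_nodes, r, h):
    """Solve -u'' + r u = f, u(0)=u(1)=0, with piecewise-linear hat
    functions on the interior nodes of a uniform grid of spacing h."""
    n = len(f_nodes)
    K = (2.0*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h       # <phi'_j, phi'_i>
    Mass = (4.0*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * h/6  # <phi_j, phi_i>
    A = K + r * Mass
    F = Mass @ f_nodes        # f expanded on the same hat functions
    return np.linalg.solve(A, F)

# usage: -u'' = 1 has exact solution x(1-x)/2, whose maximum is 1/8
N = 19; h = 1.0 / (N + 1)
u = fem_1d(np.ones(N), 0.0, h)
```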
Again, $V$ is an infinite dimensional vector space with scalar product
$$ \langle f,g \rangle = \displaystyle\iint_S\!\!\!f(x,y)g(x,y)dxdy\,. $$
$$ -u_{xx}(x,y)-u_{yy}(x,y)=f(x,y)\,,\;(x,y)\in S\,,\;\;u=0\;\hbox{ on }\partial S\,, $$ where $f\in V$ and $\partial S$ is the boundary of $S$.
This is a linear equation in $V$ and can be written as $$ Au=f\,,\;u\in C^\infty_0(S) $$ where $A$ is the linear operator from $V$ to itself $$ -(\partial_{xx}+\partial_{yy}) $$ and $C^\infty_0(S)$ is the subset of smooth functions that are zero at the boundary of $S$.
Then we integrate by parts the lhs terms as above: $$ \langle u_{xx},s\rangle = -\langle u_x,s_x\rangle\,,\;\langle u_{yy},s\rangle = -\langle u_y,s_y\rangle $$ The equation now is
$$ \langle\nabla u,\nabla s\rangle = \langle f,s\rangle\,,\;\hbox{ for all }s\in C^\infty_0(S) $$
Again, in this equation all functions appear under the integral sign so there is no actual need to require so much regularity: it is enough to ask that both $u$ and $s$ belong to $H^1_0(S)$, the space of $L^2$ functions on $S$ whose gradient is also $L^2$ and that converge to 0 on any sequence of points converging to the boundary.
If $\{\phi_i\}$ is a basis for $H^1_0(S)$, then
$\langle \nabla u,\nabla \phi_i\rangle=\langle f,\phi_i\rangle$
for all $i$. Since $u=\sum_{i=1}^\infty u_i\phi_i$ and $f=\sum_{i=1}^\infty f_i\phi_i$ we have that
$\sum_{j=1}^\infty u_j \langle \nabla \phi_j,\nabla\phi_i\rangle = \sum_{j=1}^\infty f_j\langle\phi_j,\phi_i\rangle$
By truncating this sum to some $N$, we get that $$ AU=F $$ for some matrix $A$ and vector $F$.
This is accomplished mainly with the following two points:
A great starting point for generating 2D and 3D meshes with any sort of shape is Persson's Distmesh package. A uniform triangular mesh of the unit square can also be built directly:
m = 10; n = 10;
[x,y] = ndgrid((0:m-1)/(m-1),(0:n-1)/(n-1));
pts = [x(:),y(:)];
tri = [1,2,m+2; 1,m+2,m+1];
tri = kron(tri,ones(m-1,1))+kron(ones(size(tri)),(0:m-2)');
tri = kron(tri,ones(n-1,1))+kron(ones(size(tri)),(0:n-2)'*m);
trimesh(tri,pts(:,1),pts(:,2),zeros(m*n,1))
In this type of mesh, a reasonable set of $\phi_i$ is the set of piece-wise linear functions that, on any triangle $T$ of the mesh, are either identically zero or linear.
Let $(x_i,y_i)$, $i=1,2,3$ be the vertices of $T$ and $\psi_i=a_i+b_ix+c_iy$, $i=1,2,3$, the restriction to $T$ of the three basis functions that are not identically zero on $T$ and such that $\psi_i(x_j,y_j)=\delta_{ij}$.
namely, writing $M_T$ for the matrix whose $j$-th row is $(1,\,x_j,\,y_j)$, the conditions $\psi_i(x_j,y_j)=\delta_{ij}$ give
$$ \begin{pmatrix} a_1&a_2&a_3\cr b_1&b_2&b_3\cr c_1&c_2&c_3\cr \end{pmatrix} = M^{-1}_T $$
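A quick Python check of this formula on a concrete triangle (the vertices are chosen by me):

```python
import numpy as np

# Columns of the inverse of M_T (whose rows are (1, x_j, y_j)) give
# the coefficients (a_i, b_i, c_i) of the local basis functions psi_i.
verts = np.array([[0., 0.], [1., 0.], [0., 1.]])   # vertices of T
MT = np.column_stack([np.ones(3), verts])
C = np.linalg.inv(MT)                              # column i = (a_i, b_i, c_i)

def psi(i, x, y):
    a, b, c = C[:, i]
    return a + b*x + c*y
```

Evaluating each $\psi_i$ at the vertices reproduces the Kronecker delta property.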
For example, consider $-u''(x)+u(x)=f(x)$. In momentum space the ODE just becomes the algebraic equation
$$ k^2\hat u(k)+\hat u(k)=\hat f(k) $$
so that $\hat u(k)=\hat f(k)/(1+k^2)$ and finally we get the ODE solution $$ u(x) = \frac{1}{2\pi}\int\limits_{-\infty}^\infty\frac{\hat f(k)}{1+k^2}\,e^{ikx}\,dk\,. $$
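The same computation can be done numerically with the FFT on a periodic grid; a Python sketch (the right-hand side is chosen by me so that the exact solution is a single Fourier mode):

```python
import numpy as np

# Solve -u'' + u = f on [0, 2*pi) with periodic boundary conditions:
# in Fourier space, u_hat(k) = f_hat(k) / (1 + k^2).
n = 64
x = 2*np.pi*np.arange(n)/n
f = 5.0*np.sin(2*x)              # then u(x) = sin(2x), since (4+1)sin(2x) = f
k = np.fft.fftfreq(n, d=1.0/n)   # integer wavenumbers 0, 1, ..., -2, -1
u = np.real(np.fft.ifft(np.fft.fft(f) / (1 + k**2)))
```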
$$ u(x) = \sum_{k=-\infty}^\infty \hat u_k e^{2\pi ikx/L} $$
and
$$ \hat u_k = \frac{1}{L}\int\limits_{0}^L u(x) e^{-2\pi ikx/L}dx $$ so that $\widehat{(u')}_k = \frac{2\pi ik}{L}\,\hat u_k$.
Under the Fourier transform, any linear constant-coefficient DE becomes an algebraic equation: we can solve it and then use the inverse transform to get the solution in $x$, after truncating the infinite series at some $N$.
Suppose we want to solve a Dirichlet problem on $[0,L]$, so that $$ u(x)=\sum_{k=1}^\infty a_k \sin(\pi kx/L) $$
Then (Discrete Fourier Transform) $$ u_l=u(l h)=\sum_{k=1}^N a_k \sin(\pi kl/(N+1))\,,\;1\leq l\leq N\,,\;h=\frac{L}{N+1}\,, $$
is an $N\times N$ system in $a_1,\dots,a_N$.
Its solution is the Inverse Discrete Fourier Transform $$ a_k=\frac{2}{N+1}\sum_{l=1}^N u_l\sin(\pi kl/(N+1))\,. $$
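These two formulas can be checked directly in Python: written as matrix products, the sine-transform matrix $S$ satisfies $S^2=\frac{N+1}{2}I$, which is exactly the inversion formula (an FFT-based version would cost $O(N\log N)$):

```python
import numpy as np

N = 8
kl = np.outer(np.arange(1, N+1), np.arange(1, N+1))
S = np.sin(np.pi * kl / (N + 1))     # S[k-1, l-1] = sin(pi*k*l/(N+1))

u = np.linspace(0.1, 1.0, N)         # sample values u_l
a = (2.0/(N + 1)) * S @ u            # coefficients a_k (inverse transform)
u_back = S @ a                       # forward sum reconstructs u_l
```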
Hence $|G(k)|>1$ for any value of $r$, namely this numerical method is unstable for any choice of $\Delta t$ and $\Delta x$!
$$ \frac{u_{n,j+1}+u_{n,j-1}}{2}\,, $$
so that the system becomes
$$ u_{n+1,j} = \frac{u_{n,j+1}+u_{n,j-1}}{2} + \frac{r}{2}\left(u_{n,j+1}-u_{n,j-1}\right)\,, $$
Substituting the von Neumann ansatz $u_{n,j} = G(k)^n e^{ikj\Delta x}$ we find that
$$ G(k) = \cos\left({k\Delta x}\right) + i\,r\sin\left({k\Delta x}\right) $$
so that $|G(k)|\leq1$ when $c\Delta t\leq\Delta x$.
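A Python sketch of one Lax-Friedrichs step on a periodic grid (names are mine); at the CFL limit $r=1$ the scheme shifts the profile by exactly one cell:

```python
import numpy as np

def lax_friedrichs_step(u, r):
    """One step of u_{n+1,j} = (u_{j+1}+u_{j-1})/2 + (r/2)(u_{j+1}-u_{j-1})
    for u_t = c u_x, with r = c*dt/dx, on a periodic grid. Stable for r <= 1."""
    up = np.roll(u, -1)   # u_{n,j+1}
    um = np.roll(u, 1)    # u_{n,j-1}
    return 0.5*(up + um) + 0.5*r*(up - um)

# usage: with r = 1 the peak is transported by one grid cell
u0 = np.array([0., 0., 1., 0., 0., 0.])
u1 = lax_friedrichs_step(u0, 1.0)
```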