Section 8.2 Newton-Cotes Quadrature
Of course we would be happier with convergence to zero faster than linear. Interpolation is a way to achieve this goal: we approximate \(f(x)\) with some polynomial \(p_n(x)\text{,}\) so that
\begin{equation*}
\int_a^bf(x)dx\simeq\int_a^bp_n(x)dx
\end{equation*}
A quadrature rule based on an interpolant polynomial \(p_n\) at \(n+1\) equally spaced points is called a Newton-Cotes formula of order \(n\text{.}\) We denote by \(Q_n(f,[a,b])\) this approximation of \(\int_a^bf(x)dx\text{.}\) In this case the Lagrange basis is very convenient:
\begin{equation*}
x_k=a+kh\,,\;h=(b-a)/n
\,,\;
p_n(x)=\sum^n_{k=0} f(x_k)L_k(x)
\end{equation*}
so that
\begin{equation*}
Q_n(f,[a,b]) = \int_a^bp_n(x)dx = \sum_{k=0}^n w_k f(x_k)
\end{equation*}
The coefficients
\begin{equation*}
w_k=\int_a^b L_k(x)dx
\end{equation*}
are called the quadrature weights.
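As an illustration, the weights can be computed numerically by requiring the rule to be exact on the monomials \(1,x,\dots,x^n\) (which is equivalent to integrating each Lagrange basis polynomial \(L_k\)). Below is a minimal sketch in Python with NumPy; the function name is our own choice, not a standard library routine:

```python
import numpy as np

def newton_cotes_weights(n, a=0.0, b=1.0):
    """Weights w_k of the order-n Newton-Cotes rule on [a, b].

    Obtained by requiring sum_k w_k x_k^j = int_a^b x^j dx for
    j = 0..n, which is equivalent to w_k = int_a^b L_k(x) dx."""
    x = np.linspace(a, b, n + 1)                 # equally spaced nodes
    # Vandermonde system: row j states exactness on the monomial x^j
    V = np.vander(x, n + 1, increasing=True).T
    moments = np.array([(b**(j + 1) - a**(j + 1)) / (j + 1)
                        for j in range(n + 1)])
    return np.linalg.solve(V, moments)

# Trapezoid (n = 1) on [0, 1]: weights 1/2, 1/2
# Simpson   (n = 2) on [0, 1]: weights 1/6, 4/6, 1/6
print(newton_cotes_weights(1))
print(newton_cotes_weights(2))
```

For small \(n\) this reproduces the classical weights; for large \(n\) the Vandermonde system becomes ill-conditioned, which foreshadows the drawback discussed below.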
Estimate of error. Recall that the polynomial interpolation error with \(n+1\) nodes \(x_0,\dots,x_n\) is bounded by
\begin{equation*}
\max_{x\in[a,b]}|f(x)-p_n(x)|\leq\frac{\displaystyle\max_{x\in[a,b]}\left|\prod_{i=0}^{n}(x-x_i)\right|}{(n+1)!}
\max_{x\in[a,b]}|f^{(n+1)}(x)|.
\end{equation*}
Correspondingly, integrating the interpolation error over \([a,b]\) and substituting \(x=a+sh\) (so that \(x-x_i=h(s-i)\) and \(dx=h\,ds\)), we get the following bound on the Newton-Cotes quadrature:
\begin{equation*}
\left|\int_a^b\!\!f(x)dx - Q_n(f,[a,b])\right|\leq\frac{h^{n+2}}{(n+1)!}\cdot
\max_{x\in[a,b]}|f^{(n+1)}(x)|\cdot\displaystyle\int_0^n \left|s(s-1)\cdots(s-n)\right|ds.
\end{equation*}
REMARK: for even \(n\text{,}\) the signed integral \(\int_0^n s(s-1)\cdots(s-n)\,ds\) vanishes, by symmetry about \(s=n/2\text{.}\) This does not mean that the error is zero: it means that the leading error term cancels, and a finer analysis, carried out with \(f^{(n+2)}\) in place of \(f^{(n+1)}\text{,}\) shows that the rule gains one extra order of accuracy. This is why Simpson's rule (\(n=2\)) turns out to be fourth order.
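For instance, for \(n=2\) the cancellation can be verified directly:
\begin{equation*}
\int_0^2 s(s-1)(s-2)\,ds=\int_0^2\left(s^3-3s^2+2s\right)ds
=\left[\frac{s^4}{4}-s^3+s^2\right]_0^2=4-8+4=0.
\end{equation*}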
Newton-Cotes drawback. Since \(h=(b-a)/n\text{,}\) it seems that, by taking \(n\) large, we should be able to make the error as small as we please. Unfortunately this is not so, due to the Runge phenomenon (see Subsection 7.3.2): for equally spaced nodes, the interpolation error can grow with \(n\text{.}\) This means that this naive version of the method does not work.
Newton-Cotes fix. An easy fix for this problem is to keep \(n\) low and improve accuracy by dividing, as in the case of the Riemann sum, the domain \([a,b]\) into \(N\) small equal-width intervals \([x_k,x_{k+1}]\text{,}\) with \(x_k=a+k(b-a)/N\text{,}\) and then summing up all contributions:
\begin{equation*}
\int_a^b\!\!f(x)dx = \sum_{k=0}^{N-1} \int_{x_k}^{x_{k+1}}\!\!\!\!f(x)dx.
\end{equation*}
This way, each single summand has an accuracy given by the formula above:
\begin{equation*}
\left|\int_{x_k}^{x_{k+1}}\!\!\!\!f(x)dx - Q_n(f,[x_k,x_{k+1}])\right|\leq\frac{h^{n+2}}{(n+1)!}
\max_{x\in[a,b]}|f^{(n+1)}(x)|\displaystyle\int_0^n \left|s(s-1)\cdots(s-n)\right|ds,
\end{equation*}
where \(h=(b-a)/N\text{.}\) After summing all contributions, we get the following approximation for the integral of \(f(x)\) from \(a\) to \(b\text{:}\)
\begin{equation*}
Q_{n,h}(f,[a,b]) \bydef \sum_{k=0}^{N-1}Q_n(f,[x_k,x_{k+1}])
\end{equation*}
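The definition of \(Q_{n,h}\) translates directly into code: compute the weights of the basic rule once on the reference interval \([0,1]\text{,}\) then scale them to each subinterval and sum. A sketch assuming Python/NumPy, with a hypothetical function name:

```python
import numpy as np

def composite_newton_cotes(f, a, b, N, n=2):
    """Q_{n,h}(f,[a,b]): the order-n Newton-Cotes rule (n >= 1, nodes
    including the endpoints) applied on each of N equal subintervals."""
    # Weights of the basic rule on [0, 1], via exactness on monomials:
    # sum_k w_k t_k^j = 1/(j+1) for j = 0..n.
    t = np.linspace(0.0, 1.0, n + 1)
    V = np.vander(t, n + 1, increasing=True).T
    w = np.linalg.solve(V, 1.0 / np.arange(1, n + 2))
    h = (b - a) / N
    total = 0.0
    for k in range(N):
        xk = a + k * h
        # Rule scaled to [x_k, x_{k+1}]: nodes x_k + t*h, weights h*w
        total += h * np.dot(w, f(xk + t * h))
    return total

# Simpson (n = 2) with N = 10 subintervals on a smooth integrand;
# the exact value of the integral of sin on [0, pi] is 2.
approx = composite_newton_cotes(np.sin, 0.0, np.pi, 10, n=2)
```

(The \(n=0\) midpoint rule places its single node at the centre of each subinterval rather than at an endpoint, so it is not covered by this particular sketch.)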
Summing all \(N\) error terms and using \(Nh=b-a\text{,}\) we get the following bound on the error:
\begin{equation*}
\left|\int_a^b\!\!f(x)dx - Q_{n,h}(f,[a,b])\right|\leq\frac{(b-a)\,h^{n+1}}{(n+1)!}\max_{x\in[a,b]}|f^{(n+1)}(x)|\displaystyle\int_0^n \left|s(s-1)\cdots(s-n)\right|ds.
\end{equation*}
Since the number of summands is \(N=\frac{b-a}{h}\text{,}\) the error in the computation of the integral goes as \(h\) to one power less than the error on the single step.
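This order can be checked empirically. The sketch below (assuming Python/NumPy) uses the composite trapezoid rule (\(n=1\)), for which the bound predicts an \(O(h^2)\) error: halving \(h\) should divide the error by about four.

```python
import numpy as np

def composite_trapezoid(f, a, b, N):
    """Composite trapezoid rule (n = 1) with N equal-width subintervals."""
    x = np.linspace(a, b, N + 1)
    y = f(x)
    h = (b - a) / N
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

exact = 2.0  # integral of sin(x) on [0, pi]
err_N  = abs(composite_trapezoid(np.sin, 0.0, np.pi, 50)  - exact)
err_2N = abs(composite_trapezoid(np.sin, 0.0, np.pi, 100) - exact)
ratio = err_N / err_2N   # close to 2^2 = 4 for an O(h^2) method
```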
Newton-Cotes formulae. In the next sections we go over, in detail, the formulae below, together with their error bounds:
\begin{equation*}
\begin{array}{c|c|c}
\text{Midpoint }(n=0) & \text{Trapezoid }(n=1) & \text{Simpson }(n=2)\\[2pt]\hline
\rule{0pt}{24pt}\displaystyle h\sum_{k=0}^{N-1}f\left(\frac{x_k+x_{k+1}}{2}\right) &
\displaystyle h\sum_{k=0}^{N-1}\frac{f(x_k)+f(x_{k+1})}{2} &
\displaystyle h\sum_{k=0}^{N-1}\frac{f(x_k)+4f\left(\frac{x_k+x_{k+1}}{2}\right)+f(x_{k+1})}{6}\\[12pt]
\displaystyle h^2\frac{b-a}{24}\max_{[a,b]}|f''(x)| &
\displaystyle h^2\frac{b-a}{12}\max_{[a,b]}|f''(x)| &
\displaystyle h^4\frac{b-a}{90}\max_{[a,b]}|f^{(4)}(x)|
\end{array}
\end{equation*}
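The three composite formulae in the table translate line by line into code. A sketch assuming Python/NumPy (function names are our own); on a smooth integrand, Simpson should be markedly more accurate than the other two:

```python
import numpy as np

def midpoint(f, a, b, N):
    """Composite midpoint rule (n = 0)."""
    x = np.linspace(a, b, N + 1)
    h = (b - a) / N
    return h * np.sum(f((x[:-1] + x[1:]) / 2))

def trapezoid(f, a, b, N):
    """Composite trapezoid rule (n = 1)."""
    x = np.linspace(a, b, N + 1)
    h = (b - a) / N
    return h * np.sum((f(x[:-1]) + f(x[1:])) / 2)

def simpson(f, a, b, N):
    """Composite Simpson rule (n = 2)."""
    x = np.linspace(a, b, N + 1)
    h = (b - a) / N
    mid = (x[:-1] + x[1:]) / 2
    return h * np.sum((f(x[:-1]) + 4 * f(mid) + f(x[1:])) / 6)

# Compare absolute errors on sin over [0, pi], whose integral is 2:
for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, abs(rule(np.sin, 0.0, np.pi, 20) - 2.0))
```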