Section 8.2 Newton-Cotes Quadrature

Of course we would be happier with an error converging to zero faster than linearly. Interpolation is a way to achieve this goal: we approximate \(f(x)\) with some polynomial \(p_n(x)\text{,}\) so that
\begin{equation*} \int_a^bf(x)dx\simeq\int_a^bp_n(x)dx \end{equation*}
A quadrature rule based on an interpolant polynomial \(p_n\) at \(n+1\) equally spaced points is called a Newton-Cotes formula of order \(n\text{.}\)

In this case the Lagrange basis is very convenient:
\begin{equation*} x_k=a+kh\,,\;h=(b-a)/n \,,\; p_n(x)=\sum^n_{k=0} f(x_k)L_k(x) \end{equation*}
so that
\begin{equation*} Q_n(f) = \int_a^bp_n(x)dx = \sum_{k=0}^n w_k f(x_k) \end{equation*}
The coefficients
\begin{equation*} w_k=\int_a^b L_k(x)dx \end{equation*}
are called the quadrature weights.
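As an illustrative sketch (not part of the text): the weights can be computed numerically by requiring the rule to be exact on the monomials \(1, x, \dots, x^n\text{,}\) which is equivalent to integrating the Lagrange basis; the helper name `newton_cotes_weights` is ours.

```python
import numpy as np

def newton_cotes_weights(a, b, n):
    """Weights w_k of the order-n Newton-Cotes rule on [a, b].

    Found by requiring exactness on 1, x, ..., x^n:
        sum_k w_k x_k^j = integral_a^b x^j dx,   j = 0..n,
    which is equivalent to integrating the Lagrange basis L_k."""
    x = a + np.arange(n + 1) * (b - a) / n       # equally spaced nodes
    V = np.vander(x, n + 1, increasing=True).T   # row j: [x_0^j, ..., x_n^j]
    rhs = np.array([(b**(j + 1) - a**(j + 1)) / (j + 1) for j in range(n + 1)])
    return np.linalg.solve(V, rhs)

# n = 1 recovers the trapezoidal rule, n = 2 Simpson's rule:
w1 = newton_cotes_weights(0.0, 1.0, 1)   # ≈ [1/2, 1/2]
w2 = newton_cotes_weights(0.0, 1.0, 2)   # ≈ [1/6, 4/6, 1/6]
```

Solving the small Vandermonde system is fine for the low orders used in practice; for large \(n\) it becomes ill-conditioned, which foreshadows the drawbacks discussed below.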

Estimate of error. Recall that the polynomial interpolation error with \(n+1\) nodes is given by
\begin{equation*} \max_{x\in[a,b]}|f(x)-p_n(x)|\leq\frac{\displaystyle\max_{x\in[a,b]}\left|\prod_{k=0}^{n}(x-x_k)\right|}{(n+1)!} \max_{x\in[a,b]}|f^{(n+1)}(x)|. \end{equation*}
Substituting \(x=a+sh\text{,}\) so that \(\prod_{k=0}^n(x-x_k)=h^{n+1}\prod_{k=0}^n(s-k)\) and \(dx=h\,ds\text{,}\) and integrating the interpolation error, we get the following bound on the Newton-Cotes quadrature:
\begin{equation*} |I(f) - Q_n(f)|\leq\frac{h^{n+2}}{(n+1)!}\cdot \max_{x\in[a,b]}|f^{(n+1)}(x)|\cdot\displaystyle\int_0^n \left|s(s-1)\cdots(s-n)\right|ds, \end{equation*}
where \(h=(b-a)/n\text{.}\)
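A quick numerical sanity check (our example, not from the text): for \(n=1\) on \([0,1]\text{,}\) taking the product in absolute value inside the integral gives \(\int_0^1|s(s-1)|\,ds=1/6\text{,}\) so the bound reads \(|I(f)-Q_1(f)|\leq \frac{h^3}{12}\max|f''|\text{.}\) We verify it on \(f(x)=e^x\text{:}\)

```python
import numpy as np

# Error bound for n = 1 (trapezoidal rule) on f(x) = e^x over [0, 1]:
#   |I - Q_1| <= h^3/2! * max|f''| * ∫_0^1 |s(s-1)| ds = h^3/12 * max|f''|.
a, b = 0.0, 1.0
h = b - a
I = np.exp(b) - np.exp(a)              # exact integral of e^x
Q1 = h * (np.exp(a) + np.exp(b)) / 2   # order-1 Newton-Cotes rule
bound = h**3 / 12 * np.exp(b)          # max of f''(x) = e^x on [0, 1] is e
err = abs(I - Q1)                      # actual error, below the bound
```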

REMARK: for even \(n\text{,}\) the signed integral \(\int_0^n s(s-1)\cdots(s-n)\,ds\) is zero. This does not mean that the error is zero, only that the bound above is not sharp: a finer analysis, carried one order further as in the case \(n+1\text{,}\) shows that the rule gains an extra order of accuracy. Simpson's rule (\(n=2\)), for instance, integrates cubics exactly.
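The vanishing of the signed integral for even \(n\) can be checked by expanding \(\prod_{k=0}^n(s-k)\) and integrating the polynomial exactly; the helper name `pi_integral` is ours.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def pi_integral(n):
    """Signed integral  ∫_0^n s(s-1)···(s-n) ds,
    computed by expanding the product and integrating term by term."""
    coeffs = np.array([1.0])
    for k in range(n + 1):
        coeffs = P.polymul(coeffs, [-k, 1.0])   # multiply by (s - k)
    anti = P.polyint(coeffs)                     # antiderivative (constant 0)
    return P.polyval(n, anti) - P.polyval(0, anti)

# Vanishes for even n (integrand antisymmetric about s = n/2), not for odd n:
vals = {n: pi_integral(n) for n in range(1, 5)}
```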

Newton-Cotes Drawbacks. Since \(h=(b-a)/n\text{,}\) it would seem that, by taking \(n\) large, we can make the error as small as we please.

Unfortunately it is not so, due to the Runge phenomenon (see Subsection 7.3.2).

An easy fix for this problem is to keep \(n\) low and improve accuracy by dividing the domain, as with Riemann sums, into small subintervals and then summing up all the contributions. This way, each single summand has the accuracy given by the formula above.

Since the number of summands is \(N=\frac{b-a}{h}\text{,}\) the total error loses one power of \(h\) with respect to the single-step error: a composite method built on an interpolation of order \(n\) converges with an error of order \(h^{n+1}\text{.}\)
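The predicted order can be observed numerically. As a sketch (our example, not from the text), the composite rule for \(n=1\) is the composite trapezoidal rule, whose error should be \(O(h^2)\text{:}\) halving \(h\) should divide the error by about four.

```python
import numpy as np

def composite_trapezoid(f, a, b, N):
    """Composite order-1 Newton-Cotes (trapezoidal) rule with N subintervals."""
    x = np.linspace(a, b, N + 1)
    y = f(x)
    h = (b - a) / N
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Error on ∫_0^1 e^x dx: doubling N (halving h) should quarter the error.
exact = np.exp(1) - 1
errs = [abs(composite_trapezoid(np.exp, 0.0, 1.0, N) - exact)
        for N in (10, 20, 40)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]   # each ≈ 4
```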