Section 3.1 Bisection Method

The Bisection Method is based on the following fundamental property of continuous functions. Notation: we say that a continuous function is a \(C^0\) function.

Subsection 3.1.1 The Algorithm

If \(f(x)\) is a \(C^0\) function and \(f(a)\) and \(f(b)\) have opposite signs, namely if \(f(a)f(b) \lt 0\text{,}\) there must be at least one root \(x_0\) of \(f(x)\) between \(a\) and \(b\text{.}\)

We have no information about where exactly the root is located within the interval \((a,b)\text{,}\) so which number should we choose as our solution?

If we choose \(a\) or \(b\text{,}\) in the worst case the error is \(|b-a|\text{.}\)

If instead we choose the midpoint \(m=(a+b)/2\text{,}\) then the error cannot be larger than \(|b-a|/2\text{.}\)

This is the optimal choice.

In the same spirit, we can get arbitrarily close to a root by repeatedly applying the following elementary step:

Define \(m=(a+b)/2\text{.}\) If \(f(a)\) and \(f(m)\) have opposite signs, replace the interval \([a,b]\) with \([a,m]\text{;}\) otherwise, replace it with \([m,b]\text{.}\)

This way, at every step we are left with an interval that is guaranteed to contain a root of \(f(x)\) and whose length is half that of the previous interval.

Hence, after \(k\) steps the root is confined to an interval of length \(|b-a|/2^k\text{,}\) and taking the midpoint of that interval as our estimate gives an error of at most \(|b-a|/2^{k+1}\text{.}\)
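In code, the elementary step might read as follows (a minimal Python sketch, assuming the arguments satisfy the sign condition \(f(a)f(b)\lt0\)):

def bisection_step(f, a, b):
    # One elementary step: halve the interval and keep the half whose
    # endpoints have opposite signs, so that it still contains a root.
    m = (a + b) / 2
    if f(a) * f(m) < 0:
        return a, m  # the root is in [a, m]
    return m, b      # the root is in [m, b]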

Subsection 3.1.2 A worked-out example

Let us solve numerically the equation \(\cos x=x.\)

This problem is equivalent to finding the root(s) of the function \(f(x)=\cos x-x\text{.}\)

In this case it is easy to see that there is a single root. Indeed, since \(|\cos x|\leq 1\text{,}\) there cannot be any root for \(x>1\) or \(x\lt-1\text{.}\) Since \(\cos x\gt0\) for \(-\pi/2 \lt x \lt 0\text{,}\) there cannot be any root in that interval either. Finally, in the interval \((0,1)\text{,}\) \(\cos x\) is monotonically decreasing while \(x\) is monotonically increasing, so their graphs can meet there only once.

Hence, now we know that there is a root of \(f(x)=\cos x-x\) between 0 and 1.

Let's verify that \(f(0)f(1) \lt 0\text{:}\)
\begin{equation*} f(0) = \cos0-0=1\gt0,\;\;f(1)=\cos(1)-1\lt0 \end{equation*}
because we know that \(\cos0=1\) and that \(\cos x\) decreases for \(0 \lt x \lt \pi\text{.}\)

Hence we set \(a=0\) and \(b=1\text{.}\) At this point, we can already say that the solution of \(\cos x=x\) is \(x_0=(a+b)/2=0.5\) with an error \(\Delta x=|b-a|/2=0.5\text{.}\) Let us now take one step of the bisection algorithm:
\begin{equation*} m = (a+b)/2 = 0.5,\;f(0.5)=\cos(0.5)-0.5\simeq0.38\gt0 \end{equation*}
(just use a calculator or Python to evaluate \(f(0.5)\))
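For instance, in Python (using the standard math module):

import math
print(math.cos(0.5) - 0.5)  # ≈ 0.38 > 0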

Since \(f(0.5)\gt0\) and \(f(1)\lt0\text{,}\) we now know that the solution must be between \(0.5\) and \(1\text{.}\) Hence our new interval is \([0.5,1]\) (equivalently, we can say that we redefined \(a=0.5\)).

At this point, our best estimate of the root is \(x_0=0.75\) with an error \(\Delta x=0.25\text{.}\)

Let us take a second step of the bisection algorithm:
\begin{equation*} m = (a+b)/2 = 0.75,\;f(0.75)=\cos(0.75)-0.75\simeq-0.02\lt0 \end{equation*}
(just use a calculator or Python to evaluate \(f(0.75)\))

Since \(f(0.5)\gt0\) and \(f(0.75)\lt0\text{,}\) we now know that the solution must be between \(0.5\) and \(0.75\text{.}\) Hence our new interval is \([0.5,0.75]\) (equivalently, we can say that we set \(b=0.75\)).

At this point, our best estimate of the root is
\begin{equation*} x_0=0.625 \end{equation*}
with an error \(\Delta x=0.125\text{.}\)
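For reference, the two steps above can also be reproduced with a short Python loop (a sketch; the variable names and the printout format are chosen here for illustration):

import math

f = lambda x: math.cos(x) - x
a, b = 0.0, 1.0
for k in range(2):
    m = (a + b) / 2
    if f(a) * f(m) < 0:
        b = m  # the root is in [a, m]
    else:
        a = m  # the root is in [m, b]
    print(f"step {k+1}: interval [{a}, {b}], estimate {(a+b)/2}, error {(b-a)/2}")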

By iterating this algorithm enough times, we can get an estimate of the root with any desired precision.

For instance, in order to get an estimate of the root with an error not larger than \(0.001\text{,}\) it is enough to use a number of iterations \(k\) such that
\begin{equation*} |1-0|/2^k \lt 0.001, \end{equation*}
namely
\begin{equation*} 1000 \lt 2^k. \end{equation*}
You can solve this inequality either with your calculator, noticing that \(k>\log_2(1000)\simeq9.97\text{,}\) or by recalling that \(2^{10}=1024\) (a very useful relation to remember).
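In Python, for instance:

import math
print(math.log2(1000))             # 9.965784284662087
print(math.ceil(math.log2(1000)))  # 10 iterations suffice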

Hence, after \(k=10\) iterations, the length of the interval will be smaller than \(0.001\text{,}\) and so, a fortiori, will the error on the estimate of the root.

In general, this means that the first two decimal digits of the numerical solution are exact, while on the third decimal digit there is an uncertainty of 1 unit.

In case you are curious, the root is equal to
\begin{equation*} x_0=0.739085133\pm0.000000001, \end{equation*}
which is of course compatible with our rough estimate
\begin{equation*} x_0=0.625\pm0.125. \end{equation*}

Subsection 3.1.3 An implementation of the Bisection algorithm

In the code below we evaluate numerically the square root of 2 by seeking the positive root of the polynomial \(f(x)=x^2-2\text{.}\)
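A minimal sketch of one possible implementation in Python follows; the starting interval \([1,2]\) (which brackets the root, since \(f(1)=-1\lt0\) and \(f(2)=2\gt0\)) and the tolerance \(10^{-6}\) are chosen here for illustration.

import math

def bisection(f, a, b, tol):
    # Find a root of f in [a, b] by bisection, assuming f(a)*f(b) < 0.
    # Stop when the half-length of the interval drops below tol.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) < 0:
            b = m  # the root is in [a, m]
        else:
            a = m  # the root is in [m, b]
    return (a + b) / 2

# The square root of 2 is the positive root of f(x) = x**2 - 2.
root = bisection(lambda x: x**2 - 2, 1.0, 2.0, 1e-6)
print(root, math.sqrt(2))  # the two values agree to within 1e-6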