Section 3.5 Secant method

The algorithm of Newton's method suggests a modification that makes it work again, at least in principle, for maps that are merely continuous: replacing \(f'(x_n)\) by the elementary approximation
\begin{equation*} \frac{f(x_n)-f(x_{n-1})}{x_n-x_{n-1}}\text{,} \end{equation*}
we get the so-called secant method:
\begin{equation*} x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{f(x_n)-f(x_{n-1})}{x_n-x_{n-1}}} = x_n - f(x_n)\,\frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})}. \end{equation*}
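As an illustration (a worked computation not in the original text), take \(f(x)=x^2-2\) with \(x_0=1\) and \(x_1=2\), so that \(f(x_0)=-1\) and \(f(x_1)=2\text{;}\) one step of the method gives
\begin{equation*} x_2 = x_1 - f(x_1)\,\frac{x_1-x_0}{f(x_1)-f(x_0)} = 2 - 2\cdot\frac{2-1}{2-(-1)} = \frac{4}{3}\simeq 1.33, \end{equation*}
already a fair approximation of the root \(\sqrt{2}\simeq 1.414\text{.}\)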

Using this method comes at a price: we must choose two initial points, just as in the bisection method. Geometrically, the method approximates the tangent line at \(x_n\) by the secant line through the last two points produced by the algorithm, which explains the name.

Even for functions that are differentiable, the secant method is a nice alternative to Newton's method, since it does not require the first derivative as input and so requires fewer calculations per step. In some cases, this makes it even faster than Newton's method.

Under the same regularity conditions as for Newton's method, the error of the secant method evolves as

\begin{equation*} e_n = O(e_{n-1}^\phi), \end{equation*}

where \(\phi=(1+\sqrt{5})/2\simeq1.618\) is the so-called golden ratio. Heuristically, this exponent comes from the relation \(e_{n+1}\simeq C\,e_n e_{n-1}\text{:}\) assuming \(e_{n+1}\simeq e_n^p\) forces \(p^2=p+1\text{,}\) whose positive solution is \(\phi\text{.}\)

An implementation of the secant algorithm. The code below evaluates the square root of 2 with the secant method. The algorithm takes as input two initial values \(x_0\) and \(x_1\) and a function \(f\text{.}\)
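Since the original interactive code cell is not reproduced here, the following is a minimal sketch of such an implementation, written to match the description above and the discussion below; the function name `secant`, the lambda, and the fixed loop of 10 iterations are illustrative choices. Note that in plain Python the failure discussed below manifests as a `ZeroDivisionError` rather than a warning.

```python
# A minimal sketch of the secant iteration (illustrative names).
def secant(f, x0, x1):
    for i in range(10):
        # Divided-difference approximation of f'(x_n); once x0 == x1
        # at machine precision, this division fails (see below).
        slope = (f(x1) - f(x0)) / (x1 - x0)
        # Secant update: discard the oldest point.
        x0, x1 = x1, x1 - f(x1) / slope
        print(i, x1)
    return x1

# sqrt(2) is the positive root of f(x) = x^2 - 2.
secant(lambda x: x**2 - 2, 1.0, 2.0)
```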

A more robust implementation. So far our code did not check for any of the "bad things" that can happen. For instance, in the code above we eventually get a "division by zero" failure: after a few steps the algorithm saturates all significant digits of the solution, \(x_0\) and \(x_1\) become equal, and the evaluation of the slope fails.

We can avoid this kind of problem by replacing the fixed loop "for i in range(10)" with a while loop that stops as soon as the distance between \(x_n\) and \(x_{n-1}\) becomes smaller than some given positive number \(\epsilon\) (the desired relative precision) multiplied by \(|x_n|\text{:}\)
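A sketch of this more robust variant, under the same illustrative assumptions as above (the tolerance `eps` and its default value are assumptions, not from the original cell):

```python
# Robust secant sketch: stop once |x1 - x0| <= eps * |x1|.
def secant(f, x0, x1, eps=1e-12):
    while abs(x1 - x0) > eps * abs(x1):
        slope = (f(x1) - f(x0)) / (x1 - x0)
        x0, x1 = x1, x1 - f(x1) / slope
    return x1

print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # close to 1.4142135623730951
```

Because the stopping test runs before each update, the loop exits before the two iterates coincide, so the division by zero above no longer occurs. A production version would also guard against a zero slope and cap the number of iterations.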