MATH 164 Lectures Log

Lecture 1, August 22

Topics

  • Introduction to the course.
  • Why do we need Numerical Analysis: very few equations have a simple expression for their solution, starting from polynomial equations.
  • Rational and irrational numbers: why very few numbers can be represented exactly on any computer. Important consequence: even just setting a variable to some value (even a rational one) most of the time introduces an error, i.e. the value stored in the variable is not exactly equal to the one we gave as input!
  • An elementary way to prove that $1<\sqrt{2}<2$ and that $\sqrt{2}$ is irrational.

Readings

Homework #1a

Due date: Aug. 29

  1. Write in decimal form the following numbers: $\frac{34}{10}, \frac{1}{10}, \frac{1}{9}, \frac{1}{8}$.
  2. Explain why the number $0.999...$ is equal to 1.
  3. Let $x=0.777...$. Set $y=10x$ and write down the value of $y-x$. Conclude that $x=\frac{7}{9}$. Use the same idea to show that $0.767676...=\frac{76}{99}$ and $0.1767676...=\frac{176-1}{990}$.
  4. Write in fraction form (i.e. as ratio of two integers) the following numbers: 32.01, 0.15, 0.0123, 0.111..., 0.333..., 0.123123123....
  5. Describe which fractions can be written with a finite number of digits in the decimal system and explain why.

Lecture 2, August 24

Topics

  • What are the main goals of Numerical Analysis? Besides keeping track of computational errors, the main goal of NA is finding algorithms (as stable, fast and lean as possible!) to solve continuous mathematical problems (see this article by L. Trefethen).
  • How do we represent numbers: positional systems.
  • Representing real numbers on a computer: scientific notation and floating point systems (if you are brave enough, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg).
    A floating point system consists of representing numbers in scientific notation (in some base, usually base 2 on today's computers) while keeping only a fixed number of digits for both the number itself and its exponent.
  • Quirks of floating point systems (see the sketch after this list).
  • Absolute and relative error.
  • A toy model: the floating point system D3.
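
Not the D3 toy model, but a minimal sketch (plain Python, standard double precision) of one such quirk: 0.1 and 0.2 have no finite binary expansion, so an error appears in the very first sum.

  x = 0.1 + 0.2
  print(x)             # 0.30000000000000004
  print(x == 0.3)      # False
  print(abs(x - 0.3))  # the absolute error, about 5.5e-17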

Readings

Extra Readings

  • You can find much more on floating point arithmetic in Section 1.3 of the Okten textbook. Either read this section or use it as a reference throughout this course.

Homework #1b

Due date: Aug 29

Solve problems 6 and 7 in 1.5.

Lecture 3, August 29

Topics

  • Binary numbers
  • What is Numerical Analysis
  • A first concrete visualization of the effects of the floating-point error: explaining the behavior of the absolute error as a function of $h$ in the forward differencing $(f(x+h)-f(x))/h$.
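
A minimal sketch of this behavior (assuming NumPy and Matplotlib, not the exact code from Sec. 2.2) for $f(x)=\sin x$ at $x_0=1$: the error first shrinks with $h$ and then grows again once floating-point cancellation takes over.

  import numpy as np
  import matplotlib.pyplot as plt

  x0 = 1.0
  hs = 10.0 ** -np.arange(1, 16)               # h = 1e-1, ..., 1e-15
  fd = (np.sin(x0 + hs) - np.sin(x0)) / hs     # forward differences
  err = np.abs(fd - np.cos(x0))                # exact derivative is cos(1)

  plt.loglog(hs, err, 'o-')
  plt.xlabel('h'); plt.ylabel('absolute error')
  plt.show()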

Readings

Links

Decimal to single-precision converter

Homework #2a

Due date: Sep. 5

Solve problems 3 and 5 in 1.5.

Lecture 4, August 31

Topics

  • Review of Binary numbers
  • Single-precision numbers
  • Evaluating numerically the derivative of a function: what if we need more precision? Find a better algorithm! A quick look at centered differencing (see the sketch after this list).
  • The bisection method
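
A minimal sketch (assuming NumPy; the function, point and step size are chosen only for illustration) comparing forward and centered differencing on $f(x)=\sin x$ at $x_0=1$: the centered formula has error of order $h^2$ instead of $h$.

  import numpy as np

  f, df, x0, h = np.sin, np.cos, 1.0, 1e-4
  forward  = (f(x0 + h) - f(x0)) / h
  centered = (f(x0 + h) - f(x0 - h)) / (2 * h)
  print(abs(forward  - df(x0)))   # error of order h     (about 4e-5)
  print(abs(centered - df(x0)))   # error of order h**2  (about 9e-10)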

Readings

Homework #2b

Due date: Sep. 5

  1. Solve problem 1 in 2.5.
  2. Write the Taylor series of the function $\cos(x)$ at $x_0=0$ up to order 3. Use your result to evaluate $\cos(0.3)$. Use the truncation error formula in Thm 2.2.1 to estimate an upper bound for the error and verify with Python or a calculator that the upper bound is indeed not smaller than the actual error.
  3. Modify the code in Sec. 2.2 to visualize the absolute error under the forward differencing approximation for the function $f(x)=x^3$ at $x_0=2$.

Lecture 5, September 5

Topics

  • Bisection method:
    • An example of how to use it.
    • How to write the numerical approximation to a root obtained via this method.
    • Error bounds.
    • An implementation in Python.
  • Newton's method (see the sketch after this list):
    • Geometric view of the method.
    • Getting the method via Taylor series expansion.
    • Error asymptotics: Newton's method has "speed 2".
    • An implementation in Python.
  • What complex numbers are.
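
A minimal sketch of Newton's method (a toy version, not the implementation discussed in class), applied to $f(x)=x^2-2$ so that the iterates converge to $\sqrt{2}$.

  import math

  def newton(f, df, x0, tol=1e-12, maxit=50):
      # Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)
      x = x0
      for _ in range(maxit):
          step = f(x) / df(x)
          x -= step
          if abs(step) < tol:
              break
      return x

  root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
  print(root, abs(root - math.sqrt(2)))   # root and its absolute error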

Readings

Suggested Extra Readings

Extra material

Homework #3a

Due date: Sep. 12

  1. Modify the bisection code in Sec. 3.1 as follows:
    1. At every step of the for loop, print the error. Recall that the "worst case scenario" error is given, at every step, by $(b-a)/2$, i.e. half the length of the current interval. Store the error at step i in an array element named err[i].
    2. Add lines to plot the array versus its index. Hint: a simple way to achieve this is to look at the code in Sec. 2.2. Just cut and paste the instructions needed to plot and slightly adjust them to fit with the new code.
    3. Notice that the graph you get is the graph of an exponential function. Replace the plot command with semilogy and see what happens. Explain why this happens.
    4. Evaluate "by hand" the slope of the straight line you get in the previous point in a similar way we did in the code in Sec. 2.2.
  2. Consider the equation $e^x=x^2$:
    1. Find some way to prove that there is only one solution to this equation.
    2. Explain why the solution must necessarily lie between $-1$ and 0.
    3. Use any implementation of the bisection method to find the unique solution of the equation above with six correct digits.
    4. Evaluate (before running the algorithm) how many steps will be necessary to achieve the required precision when starting the algorithm with the interval $[-1,0]$.
    5. Now, use Newton's method to find the same root. How many steps are needed to achieve the desired precision when starting from $x_0=-1$?

    Lecture 6, September 7

    Topics

    • How to multiply and divide complex numbers.
    • Complex polynomials.
    • Newton's method:
      • Newton's method applied to complex polynomials: Newton's fractals.
      • What can go wrong with Newton's method.
    • Secant method:
      • Geometric view of the method.
      • The order of the secant method.
      • A Python implementation and a lesson: a loop with a fixed number of iterations is often not a wise choice.
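
    A minimal sketch of the secant method (a toy version, not the code shown in class) that follows the lesson above: it stops on a tolerance rather than after a fixed number of iterations.

      def secant(f, x0, x1, tol=1e-12, maxit=100):
          # Iterate until two consecutive approximations are close enough.
          it = 0
          while abs(x1 - x0) > tol and it < maxit:
              x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
              it += 1
          return x1

      print(secant(lambda x: x**2 - 2, 1.0, 2.0))   # approximately sqrt(2)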

    Readings

    Suggested Extra Readings

    Extra material

    Lecture 7, September 12

    Topics

    • The SciPy root command.
    • Conditionals and anonymous functions in Python (see the sketch after this list).
    • Definition of Linear spaces.
    • Bases and dimension of a linear space.
    • Row vectors and column vectors.
    • Matrix multiplication.
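
    A minimal sketch (assuming SciPy; the equation $x^2-2=0$ is just an example) of the Python ingredients above: an anonymous function, a call to scipy.optimize.root, and a conditional on the result.

      from scipy.optimize import root

      f = lambda x: x**2 - 2          # anonymous function
      sol = root(f, x0=1.0)           # start the search at x = 1
      if sol.success:                 # conditional on the solver's status flag
          print(sol.x[0])             # approximately sqrt(2)
      else:
          print("solver failed:", sol.message)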

    Readings

    • Sections 3.5 and 3.6 in the textbook.
    • Section 4.1 until Linear maps.

    Suggested Extra Readings

    Extra material

    Homework #4a

    Due date: Sep. 19

    Solve problems 1,2,3 in Sec. 4.10.

    Lecture 8, September 14

    Topics

    • Linear maps and matrices.
    • Determinant of a matrix.
    • Why the abstract algebraic definition of determinant is not useful in computations.
    • Solving a system with Gaussian elimination.
    • The LU decomposition.
    • Implementations of the LU decomposition.
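
    One quick way to check a hand-written LU code is against SciPy's own factorization; a minimal sketch (scipy.linalg.lu returns P, L, U with A = P L U under partial pivoting):

      import numpy as np
      from scipy.linalg import lu

      A = np.array([[4.0, 3.0],
                    [6.0, 3.0]])
      P, L, U = lu(A)                    # A = P @ L @ U
      print(L)                           # unit lower triangular factor
      print(U)                           # upper triangular factor
      print(np.allclose(A, P @ L @ U))   # True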

    Readings

    Suggested Extra Readings

    Extra material

    Homework #4b

    Due date: Sep. 19

    Solve problems 6,8,9 in Sec. 4.10.

    Lecture 9, September 21

    Topics

    • Computational complexity of the LU decomposition.
    • Solving a linear system with the LU decomposition method.

    Readings

    Suggested Extra Readings

    Extra material

    Homework #5a

    Due date: Sep. 26

    Solve problems 10,13,14 in Sec. 4.10.

    Lecture 10, September 23

    Topics

    • Worked-out examples of the LU decomposition.
    • Main problem of the LU decomposition algorithm: large entries in $L$.
    • How to fix the problem: LU decomposition with pivoting.

    Readings

    Suggested Extra Readings

    Extra material

    Homework #5b

    Due date: Sep. 26

    Solve problem 12 in Sec. 4.10.

    Lecture 11, September 26

    Topics

    • Chapter 1 review.
      • Fermat's last theorem and floating point calculations.
      • Some real-life examples of catastrophic cancellation.
    • Chapter 2 review.
      • A close look at Taylor's series theorem.

    Readings

    Suggested Extra Readings

    Extra material

    Lecture 12, September 26

    Topics

    • Review of the LU method.
    • Norms on a linear space.
    • Norms of matrices.
    • Propagation of errors in the direct solution of linear systems.
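
    A minimal sketch (assuming NumPy; the matrix is the one from Homework #6) of computing matrix norms and the condition number $k(A)$ that controls how errors propagate:

      import numpy as np

      A = np.array([[4.1, 2.8],
                    [9.7, 6.7]])
      print(np.linalg.norm(A, 1))        # matrix 1-norm (largest column sum)
      print(np.linalg.norm(A, np.inf))   # matrix infinity-norm (largest row sum)
      print(np.linalg.cond(A, 2))        # condition number k(A) in the 2-norm (large: A is nearly singular)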

    Readings

    • Section 4.5, up to the Residual.

    Suggested Extra Readings

    Homework #6

    Due date: Oct. 3

    Write Python code (or modify code in 4.4) to do the following:
    1. Input into a variable A the matrix $\begin{pmatrix}4.1&2.8\cr 9.7&6.7\cr\end{pmatrix}$.
    2. Input into a variable b the vector $(4.1,9.7)$.
    3. Use the LU method to solve the linear system $Av=b$. Denote the solution you find by $v1$.
    4. Input into a variable $db$ the vector $(0.01,0)$.
    5. Use the LU method to solve the linear system $Av=b+db$. Denote the solution you find by $v2$.
    6. Evaluate, in any norm you choose, $\|v1-v2\|/\|v1\|$ and show that this relative change of the solution is bounded from above by the quantity $k(A)\cdot\|db\|/\|b\|$, where $k(A)$ is the condition number of the matrix $A$.

    Lecture 13, October 3

    Topics

    • Conditioning.
    • Conditioning of function evaluation.
    • Conditioning of derivative evaluation.
    • Conditioning of the root-finding problem.
    • An example of propagation of errors in the direct solution of linear systems.
    • Iterative methods to solve linear systems: Jacobi and Gauss-Seidel.
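
    A minimal sketch of the Jacobi iteration (a toy example on a small diagonally dominant system, not the code used in class): $x^{(k+1)} = D^{-1}\big(b-(A-D)x^{(k)}\big)$, with $D$ the diagonal part of $A$.

      import numpy as np

      A = np.array([[4.0, 1.0],
                    [2.0, 5.0]])
      b = np.array([1.0, 2.0])

      x = np.zeros(2)
      D = np.diag(A)                      # diagonal of A as a vector
      for _ in range(50):
          x = (b - (A @ x - D * x)) / D   # keep only the off-diagonal part of A @ x
      print(x, A @ x - b)                 # solution and (tiny) residual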

    Readings

    Suggested Extra Readings

    Lecture 14, October 5

    Topics

    • An application of numerical linear algebra: using time discretization to transform ODE boundary value problems into (large) linear systems.
    • Comparing the speed of our naive implementation of the LU method with SciPy's linalg.solve function.

    Readings

    Suggested Extra Readings

    Take-Home Midterm #1

    Due Oct. 10th
    1. Catastrophic Cancellation [30]

      Consider the quadratic formula $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$ of the quadratic equation $ax^2+bx+c=0$.
      1. Explain in which cases this formula can suffer from catastrophic cancellation in a floating point system.
      2. Denote by $D_4$ a decimal floating point system with 4 significant digits. Set a=1.000, b=-62.74, c=0.3596 and use the quadratic formula above to solve the equation. Then evaluate the absolute and relative errors of these solutions by comparing them with the solutions you find using double precision.
      3. Prove that solutions of the quadratic equation can also be evaluated using the formula $$x=\frac{2c}{-b\mp\sqrt{b^2-4ac}}$$.
      4. Use the new formula above in $D_4$ and compare the relative errors using the "standard" formula and this new one.
      5. Find under which conditions it is better not to use the standard formula and under which conditions it is better not to use the new one.
    2. A hybrid root-finding method [40]

      1. Modify our bisection code in Section 3.2 so that at each step the new point is not evaluated as the mid-point of the segment but rather as the point obtained by applying the secant method to the current segment endpoints.
      2. Use your implementation to find the first correct 10 digits of any solution of the equation $e^x=2\cos x$ (for instance make the relative error smaller than $10^{-11}$).
      3. Do the same using the bisection method and compare the number of iterations needed to achieve the result.
    3. Conditioning [30]

      Use the code in Section 4.4 to solve, in double precision, the system at the end of Section 4.5. Check the order of magnitude of the relative error of the double-precision solution and provide an explanation for it.
    4. Solution of a linear system [20]

      Consider the $n\times n$ linear system $Av=b$, where $$ A = \begin{pmatrix} 1&0&\dots&0&1\cr -1&1&\ddots&\vdots&\vdots\cr \vdots&-1&\ddots&0&1\cr \vdots&\vdots&\ddots&1&1\cr -1&-1&\dots&-1&1\cr \end{pmatrix} \;\;\;\;\text{ and }\;\;\;\; b = \begin{pmatrix}1\cr \vdots\cr\vdots\cr1\cr2\cr1\cr\end{pmatrix}. $$ What solution will be computed using the LU decomposition in a floating point system where $2^{n-2}>1/\varepsilon_m$, where $\varepsilon_m$ is the "epsilon machine"? Recall that, by definition, the "epsilon machine" of a floating point system is the smallest number $\varepsilon_m$ such that $1+\varepsilon_m>1$.

      You might find helpful using Python to find the numerical solution of the system for several values of $n$ but in order to get full credit you must find a reason for your guess. Below is the code to initialize the matrix $A$ in Python:

      import numpy as np

      n = 5
      A = np.eye(n)                # start from the identity matrix

      for i in range(n):
          for j in range(n):
              if j < i:
                  A[i, j] = -1     # strictly lower-triangular entries
              if j == n - 1:
                  A[i, j] = 1      # last column is all ones

    Lecture 15, October 10

    Topics

    • How to prove that the space of continuous functions in one variable is a vector space.
    • Eigenvalues and eigenvectors.
    • Important example 1: how a matrix changes under a change of vector basis.
    • Important example 2: how to use eigenvalues to classify $2\times2$ matrices.

    Readings

    Suggested Extra Readings

    Extra material

    Lecture 16, October 12

    Topics

    • Eigenvalues, eigenvectors and matrix diagonalization.
    • The Power method (see the sketch after this list).
    • Implementation of the power method.
    • The QR decomposition.
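
    A minimal sketch of the power method (a toy version on a 2x2 symmetric matrix, not the code in 5.2): normalize the iterate at each step and estimate the dominant eigenvalue with the Rayleigh quotient.

      import numpy as np

      A = np.array([[2.0, 1.0],
                    [1.0, 3.0]])
      v = np.array([1.0, 0.0])
      for _ in range(50):
          w = A @ v
          v = w / np.linalg.norm(w)   # keep the iterate normalized
      lam = v @ A @ v                 # Rayleigh quotient estimate of the dominant eigenvalue
      print(lam)                      # about 3.618 = (5 + sqrt(5)) / 2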

    Readings

    Suggested Extra Readings

    Extra material

    Lecture 17, October 17

    Topics

    • The LU and QR methods.
    • What do we mean by "optimization".
    • Gradient methods

    Readings

    Suggested Extra Readings

    Homework #7

    Due date: Oct. 24

    1. Error behavior in the Power Method.

      Consider the matrix $A=\begin{pmatrix}1&2&3\\ 4&5&6\\ 7&8&8\\ \end{pmatrix}$, whose spectrum is $$\lambda_1 = 15.55528260941107,\; \lambda_2=-1.41940876406481,\; \lambda_3=-0.13587384534627,$$ and implement in Python the following tasks:
      1. Modify the power method code in 5.2 to evaluate the eigenvalue of largest modulus of $A$.
      2. Modify the code above and run the power method algorithm for 11 cycles. Store into an array $x_n$ the power method estimate of $\lambda_1$ at the step $n$ and into an array $d_n$ the error at each step of the cycle and display a semilogy plot of $d_n$ versus the iteration number. Here by error I mean the actual error, namely $|x_n-\lambda_1|$ (use the value of $\lambda_1$ I gave you above).
      3. Display, in the same plot, a semilogy plot of the graph of $\left|\frac{\lambda_2}{\lambda_1}\right|^k, k=0,\dots,10$ and compare it with the graph of the error above.
      This problem shows you that the power method's error at the step $k$, as it was already clear from the discussion in 5.2, goes to zero asymptotically as $\left|\frac{\lambda_2}{\lambda_1}\right|^k$, namely $$x_n\simeq \lambda_1+c\cdot\left[\frac{\lambda_2}{\lambda_1}\right]^n$$ for some constant $c$.
    2. Aitken acceleration. Continuing the problem above, let us try to speed up the convergence to $\lambda_1$ of the power method in the following way.
      1. Consider a sequence $x_n=a+br^n+cs^n$, with $0 < s < r < 1$, for some positive constants $b,c$. Clearly $x_n\to a$ for $n\to\infty$. Show that $(x_n-a)/r^n$ converges, for $n\to\infty$, to a non-zero number while $(x_n-a)/s^n\to\infty$, namely $x_n$ converges to $a$ with "speed $r^n$" while $s^n$ goes to zero faster than $x_n-a$.
      2. Now consider the new sequence $$y_n = x_n - \frac{(x_{n+1}-x_n)^2}{x_{n+2}-2x_{n+1}+x_n}.$$ Show that $y_n\to a$, namely $y_n$ has the same limit as $x_n$, but the convergence speed of $y_n$ is now $s^n$.

        Hint: set $x_n = x(n) = a+br^n+cs^n$ and notice that, for large $n$, since $x_n$ is converging, $$\frac{(x_{n+1}-x_n)^2}{x_{n+2}-2x_{n+1}+x_n}\simeq\frac{[x'(n)]^2}{x''(n)}.$$
      3. The sequence $y_n$ defined above is called the Aitken acceleration of $x_n$. Use the Aitken acceleration to speed up the convergence of the power method sequence $x_n$ above. Verify that $y_n$ converges faster to $\lambda_1$ than $x_n$ by adding to the plot you did in #1c the graph of the error of $y_n$, $n=0,\dots,8$.
      4. Finally, add to the plot the graph of $\left|\frac{\lambda_3}{\lambda_1}\right|^n, n=0,\dots,8$. This will show you that $$y_n\simeq \lambda_1+c'\cdot\left[\frac{\lambda_3}{\lambda_1}\right]^n$$ for some constant $c'$.
      Remarks:
      1. This acceleration works effectively only when $|\lambda_3|\ll|\lambda_2|$, as in this case.
      2. If we had a matrix at least $4\times4$, in principle we could also accelerate $y_n$ and get a new sequence $z_n\simeq\lambda_1+c''\cdot\left[\frac{\lambda_4}{\lambda_1}\right]^n$ for some constant $c''$, and so on. By repeating this process too many times, though, floating point errors soon accumulate too much.
    3. Symmetric Matrices. Use the code you wrote for problem #1 with the symmetric matrix $$A=\begin{pmatrix}1&2&3\\ 2&5&6\\ 3&6&8\\ \end{pmatrix},$$ whose spectrum is $$ \lambda_1 = 13.70276226741504466 ,\;\lambda_2 = 0.45694589062748164,\;\lambda_3=-0.15970815804252028, $$ and verify that, for symmetric matrices, the convergence of the power method to $\lambda_1$ is faster: $$ x_n\simeq\lambda_1 + c\cdot\left[\frac{\lambda_2}{\lambda_1}\right]^{2n}. $$ Remark: this is due to the fact that the eigendirections relative to distinct eigenvalues are mutually orthogonal.

    Lecture 18, October 19

    Topics

    • Solution of the first 3 problems of the midterm.

    Readings

    • All readings before this lecture!

    Lecture 19, October 24

    Topics

    • Solution of the 4th midterm problem.
    • The steepest descent methods: fixed stepsize, decreasing stepsize, line search.
    • Implementations of the steepest descent method.
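
    A minimal sketch of steepest descent with a fixed stepsize (a toy quadratic example, not one of the implementations discussed in class):

      import numpy as np

      def grad(p):                    # gradient of f(x, y) = (x-1)**2 + (y-2)**2
          return 2 * (p - np.array([1.0, 2.0]))

      p = np.zeros(2)                 # starting point
      alpha = 0.1                     # fixed stepsize
      for _ in range(200):
          p = p - alpha * grad(p)     # move against the gradient
      print(p)                        # converges to the minimizer (1, 2)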

    Readings

    Suggested Extra Readings

    Extra material

    Lecture 20, October 26

    Topics

    • An implementation of code useful to explore problem 4 in the midterm.
    • Convergence and rate of convergence of the steepest descent methods.
    • Testing the properties above with numerical experiments.
    • Newton's method.
    • Implementation of Newton's method.

    Readings

    Suggested Extra Readings

    Extra material

    Lecture 21, October 31

    Topics

    • More numerical experiments with Newton's method.
    • A SymPy-based implementation of Newton's method.
    • Scalar products on a linear space.

    Readings

    Suggested Extra Readings

    Lecture 22, November 2

    Topics

    • The conjugate gradient method.
    • SciPy's own minimize function (see the sketch after this list).
    • Dynamics of the Newton method on the plane.
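
    A minimal sketch (assuming scipy.optimize.minimize; the quadratic function is just an example) of calling SciPy's minimize with the nonlinear conjugate gradient method:

      import numpy as np
      from scipy.optimize import minimize

      f = lambda p: (p[0] - 1)**2 + (p[1] - 2)**2
      res = minimize(f, x0=np.zeros(2), method='CG')   # nonlinear conjugate gradient
      print(res.x, res.fun)                            # approximately [1, 2] and 0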

    Readings

    Suggested Extra Readings

    Homework #8

    Due date: Nov. 13

    Effects of the floating point calculations
    1. Find the minimum of the quadratic function $f(x,y) = (x-1)^2 + (y-2)^2$.
    2. Use any method to evaluate numerically the position of the minimum with the smallest error you can. As the exit condition for the optimization loop, use the second of the conditions in Sec. 6.3.3 (check when the value of the function stabilizes).
    3. Use the same method with the function $g(x,y) = (x-1)^2 + (y-2)^2 + 7$. How small can you make the error now?
    4. For a more dramatic effect, repeat steps a-c with the functions $F(x,y) = (x-1)^4 + (y-2)^4$ and $G(x,y) = (x-1)^4 + (y-2)^4 + 7$.
    5. Explain why there is such a big difference between minimizing $f$ and minimizing $g$ (or, equivalently, between $F$ and $G$).
    6. Repeat the exercise using the norm of the gradient as condition for exiting the optimization loop. Do you notice any difference now between minimizing $f,F$ and $g,G$? Can you explain why?

    Lecture 23, November 7

    Topics

    • Some nice interactive applets on optimization.
    • What do we mean by interpolation.
    • Polynomial interpolation.
    • How good can polynomial interpolation be?

    Readings

    Suggested Extra Readings

    Extra Material

    Lecture 24, November 9

    Topics

    • The Runge phenomenon.
    • Chebyshev points (see the sketch after this list).
    • Functions without explicit antiderivative in terms of elementary functions.
    • Taylor series at $x=0$ of three very important functions: $e^x$, $\sin x$ and $\cos x$.
    • Riemann sums.
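
    A minimal sketch of one common convention for the $n+1$ Chebyshev points on $[a,b]$, namely $x_k=\frac{a+b}{2}+\frac{b-a}{2}\cos\frac{k\pi}{n}$ (the textbook may use the Chebyshev roots instead):

      import numpy as np

      def chebyshev_points(n, a=-1.0, b=1.0):
          # Chebyshev (extrema) points mapped from [-1, 1] to [a, b].
          k = np.arange(n + 1)
          return (a + b) / 2 + (b - a) / 2 * np.cos(k * np.pi / n)

      print(chebyshev_points(5))   # points cluster near the endpoints, which tames the Runge phenomenon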

    Readings

    Extra Material

    Lecture 25, November 14

    Topics

    • Newton-Cotes type methods.
    • Midpoint Method: algorithm, error, implementation.
    • Trapezoid Method: algorithm, error, implementation.
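
    A minimal sketch of the composite trapezoid rule (a toy version, not the implementation discussed in class), tested on $\int_0^\pi \sin x\,dx=2$:

      import numpy as np

      def trapezoid(f, a, b, n):
          # Composite trapezoid rule on n subintervals; the error is O(h**2), h = (b-a)/n.
          x = np.linspace(a, b, n + 1)
          y = f(x)
          h = (b - a) / n
          return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

      print(trapezoid(np.sin, 0.0, np.pi, 1000))   # approximately 2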

    Readings

    Extra Readings

    Lecture 26, November 16

    Topics

    • Simpson Method: algorithm, error, implementation.
    • Existence and uniqueness of ODEs' Initial Value Problems.

    Readings

    Extra Readings

    Take-Home Midterm #2

    Due Nov. 22nd
    1. Polynomial Interpolation [30]

      1. Find the polynomial interpolation for the data shown in Sec. 7.1 using first the basis of monomials $x^k$ (Sec. 7.2.1) and then the Lagrange basis (Sec. 7.2.2).
      2. Evaluate both solutions at each of the three interpolating points and explain the result you see (i.e. why are they equal or why one is more precise than the other, depending on your results).
    2. A new method of numerical integration [70]

      We saw in class that polynomial interpolation does not suffer from the Runge problem when Chebyshev points are used. Hence, one can evaluate $\int_a^bf(x)dx$ proceeding as follows:
      1. fix an integer $n$;
      2. evaluate the polynomial interpolation $p_n(x)$ of $f(x)$ using the $n+1$ Chebyshev points;
      3. evaluate the integral $\int_a^b p_n(x)dx$.
      Notice that this last integral can be evaluated exactly because it is the integral of a polynomial!

      Below I will walk you through implementing this method with Sympy to evaluate numerically $$\displaystyle\int\limits_0^1\frac{dx}{1+x^2}.$$ Steps:
      1. Evaluate exactly the integral above.
      2. Modify the code in the book (or implement your own!) to write a code able to produce the polynomial interpolation of $\frac{1}{1+x^2}$ between 0 and 1 using $N$ Chebyshev points.
        You will use this polynomial interpolation to evaluate $\int_0^1\frac{dx}{1+x^2}$.
      3. Use Sympy to modify the code you wrote above so as to create a symbolic version of the polynomial approximation of $\frac{1}{1+x^2}$.
        Hint: to see how sympy works, you can take a look at the Newton method code in Sec. 7.3. Just cut and paste the lines you need into your code and make a few adjustments.
        As an example, the following lines produce a symbolic version of the polynomial $1+x^3$ (the print line is there just to let you see that p is indeed not a number but a symbolic function).
        from sympy import Symbol
        x = Symbol('x')
        p = 1+x**3
        print(p)
      4. Use the integrate Sympy function to find the integral of the polynomial approximation of $\frac{1}{1+x^2}$ you found above. The following line evaluates and returns the integral of $p$ from 0 to 1:
        integrate(p,(x,0,1))
        (remember to import the function from sympy!).
      5. Now that you have the whole code, find the absolute error for all values of $N$ from 2 to 10 and make some comment on how the error depends on $N$.
      6. Make a semilogy plot of the error vs. $N$ and find roughly the slope of the line.

    Readings

    Lecture 27, November 21

    Topics

    • Euler method (see the sketch after this list).
    • Heun method.
    • RK4 method.
    • Stiff ODEs.
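
    A minimal sketch of the explicit Euler method (a toy version, not the code used in class) for the test problem $y'=-y$, $y(0)=1$, whose exact solution at $t=1$ is $e^{-1}$:

      import numpy as np

      def euler(f, y0, t0, tf, h):
          # Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n); the global error is O(h).
          t, y = t0, y0
          while t < tf - 1e-12:
              y = y + h * f(t, y)
              t = t + h
          return y

      approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1e-3)
      print(approx, abs(approx - np.exp(-1)))   # value and absolute error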

    Readings

    Extra Readings

    Extra Material

    Homework #9

    Due date: Dec. 1

    Richardson's extrapolation. We already saw in a past homework a way to accelerate the convergence of sequences (Aitken's acceleration). In this homework we meet a method to improve the convergence of a numerical method.
    The idea, in its simplest formulation, is the following. Assume that a method to evaluate a quantity $M$ is of order $h^1$ and denote by $N(h)$ the numerical estimate of $M$ obtained with step-size $h$, so that $$ M - N(h) = c_1 h + c_2 h^2 + \dots. $$ We can eliminate the first-order error term -- and so obtain a second-order approximation of $M$! -- by simply evaluating $M$ numerically first with a given step-size $h$ (so we obtain $N(h)$), then with half the step-size $h/2$ (so we obtain $N(h/2)$), and finally using the combination $2N\left(h/2\right) - N(h)$ as the numerical approximation for $M$.
    Indeed, $$ M - \left( 2N\left(h/2\right) - N(h) \right) = d_2 h^2 + \dots. $$ Problems.
    1. [20] Prove the above claim that $M - \left( 2N\left(h/2\right) - N(h) \right)$ is of order $h^2$.
    2. [40] Implement in Python Richardson's extrapolation to improve the convergence of the left Riemann method in Section 8.1.
    3. [40] Verify that the method you get is in fact of order $h^2$. You might find helpful using the code at the bottom of Section 8.1.
    Extra credit:
    1. [20] Use Richardson's extrapolation to improve the convergence of the second-order method you found above by eliminating the second-order error term.
    2. [20] Find numerically the order of convergence of the method you find.

    Lecture 28, November 28

    Topics

    • Implicit Euler method.
    • Euler method in the plane: explicit, implicit and symplectic.
    • Boundary value problems.
    • Shooting method.

    Readings

    • Sections 9.7, 9.8, 9.9, 9.10, 9.11.

    Lecture 29, November 30

    Topics

    • Review.

    Readings

    Final Exam

    Due date: Dec. 7

    In 2017 Francesco Calogero, a renowned Italian physicist, came up with a new root-finding method based on solving a system of Ordinary Differential Equations (ODEs). Unlike all the methods we studied in this course, this method is able to find all the roots of a polynomial at once! Your goal in the final test below will be to apply his method to a concrete case and verify numerically the convergence speed of the method.

    The Method. Let $p(x)$ be a polynomial of degree $N$. Assume for simplicity that all of its roots $r_1,\dots,r_N$ are real. Consider the system of ODEs $$ \begin{cases} \ddot x_1 = \frac{2\dot x_1\dot x_2}{x_1-x_2} + \frac{2\dot x_1\dot x_3}{x_1-x_3} + \dots + \frac{2\dot x_1\dot x_N}{x_1-x_N}\\ \phantom{.}\\ \ddot x_2 = \frac{2\dot x_2\dot x_1}{x_2-x_1} + \frac{2\dot x_2\dot x_3}{x_2-x_3} + \dots + \frac{2\dot x_2\dot x_N}{x_2-x_N}\\ \dots\\ \ddot x_N = \frac{2\dot x_N\dot x_1}{x_N-x_1} + \frac{2\dot x_N\dot x_2}{x_N-x_2} + \dots + \frac{2\dot x_N\dot x_{N-1}}{x_N-x_{N-1}}\\ \end{cases}. $$ As initial conditions, choose arbitrarily the values of $x_1(0),x_2(0),\dots,x_N(0)$ (as long as they are all distinct from each other!) and then use the following formulae for the initial velocities: $$ \begin{cases} \dot x_1(0) = -\frac{p(x_1(0))}{(x_1(0)-x_2(0))(x_1(0)-x_3(0))\cdots(x_1(0)-x_N(0))}\\ \dot x_2(0) = -\frac{p(x_2(0))}{(x_2(0)-x_1(0))(x_2(0)-x_3(0))\cdots(x_2(0)-x_N(0))}\\ \dots\\ \dot x_N(0) = -\frac{p(x_N(0))}{(x_N(0)-x_1(0))(x_N(0)-x_2(0))\cdots(x_N(0)-x_{N-1}(0))}\\ \end{cases}. $$ The values of $x_1(1),x_2(1),\dots,x_N(1)$ will give you precisely the roots of $p(x)$!

    The problems you have to solve.
    Remark: you can either write the code from scratch or cut&paste code from the examples in the book.
    1. [5] Consider the polynomial $p(x)=x^2-x-1$ and find its exact roots. Notice that one of them is the so-called golden ratio.
    2. [5] Setup the Calogero ODEs system of second-order ODEs for this polynomial, including the initial conditions. You will get two second-order ODEs.
    3. [10] Re-write the Calogero ODEs system as a system of first-order ODEs, as shown in Sec. 9.8. You will get 4 first-order ODEs.
    4. [20] Write a code to solve the system of ODEs you found above with the explicit Euler method from $t_0=0$ to $t_f=1$, as done in Sec. 9.9. Use $h=10^{-3}$. The values of $x_1(1)$ and $x_2(1)$ are numerical approximations of the two roots of $p(x)$. Print their values and the corresponding absolute errors.
    5. [20] Repeat the point above using the Heun method.
    6. [40] Repeat each of the two points above for 20 values of $h$, starting from $h=0.1$ and decreasing $h$ at every step by a factor 2 (e.g. look at the code at the bottom of Sec. 9.3). Then do a loglog plot, for both the Euler and Heun methods, of the error vs. h and evaluate somehow the slopes to have an idea of the order of the method. Make some comment on your results.