MATH 164 Lectures Log

Lecture 1, Jan 9

Topics

  • Introduction to the course.
  • Rational and irrational numbers: why very few numbers can be represented exactly on any computer. An important consequence: even just setting a variable to some value (even a rational one) usually introduces an error, i.e. the value of the variable is not exactly equal to the one we gave as input!
  • An elementary way to prove that $\sqrt{2}$ is irrational.
  • How do we represent numbers: positional systems.
  • Representing real numbers on a computer: scientific notation and floating point systems (if you are brave enough, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg).
    A floating point system represents numbers in scientific notation (in some base, usually base 2 on today's computers) while keeping only a fixed number of digits for both the number itself (the significand) and the exponent.
  • A toy model: the floating point system D3.
  • Quirks of floating point systems.
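
A minimal demonstration of such quirks, using Python's built-in double-precision floats (the specific numbers are just illustrative):

    # Classic floating-point quirks in Python (whose floats are
    # double-precision binary floating-point numbers).
    print(0.1 + 0.2)             # 0.30000000000000004: 0.1 and 0.2 are not exact in base 2
    print(0.1 + 0.2 == 0.3)      # False, for the same reason
    print(1.0 + 1e-16 == 1.0)    # True: 1e-16 is below machine precision near 1
    print(f"{0.1:.20f}")         # the value actually stored for 0.1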

Readings

Homework #1a

Due date: Jan. 18

  1. Write in decimal form the following numbers: $\frac{34}{10}, \frac{1}{10}, \frac{1}{9}, \frac{1}{8}$.
  2. Explain why the number $0.999...$ is equal to 1.
  3. Let $x=0.777...$. Set $y=10x$ and write down the value of $y-x$. Conclude that $x=\frac{7}{9}$. Use the same idea to show that $0.767676...=\frac{76}{99}$ and $0.1767676...=\frac{176-1}{990}$.
  4. Write in fraction form (i.e. as ratio of two integers) the following numbers: 32.01, 0.15, 0.0123, 0.111..., 0.333..., 0.123123123....
  5. Describe which fractions can be written with a finite number of digits in the decimal system and explain why.

Lecture 2, Jan 11

Topics

  • Quirks of floating point systems.
  • Binary numbers.
  • Double precision floating point numbers.
  • Absolute and relative error.

Python Programming

  • Basic facts about Python as a calculator and printing.
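
For instance (a minimal, illustrative session):

    # Python as a calculator, plus basic printing.
    print(2 ** 10)                        # exponentiation: 1024
    print(7 / 2)                          # true division: 3.5
    print(7 // 2, 7 % 2)                  # integer quotient and remainder: 3 1
    print(f"pi is about {3.14159:.2f}")   # formatted printing: pi is about 3.14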

Readings

Extra Readings

  • You can find much more on floating point arithmetic in Section 1.3 of the Okten textbook. Either read this section or use it as a reference throughout this course.

Links

Decimal to single-precision converter

Homework #1b

Due date: Jan. 18

Solve problems 6 and 7 in 1.5.

Lecture 3, Jan 16

Topics

  • What are the main goals of Numerical Analysis? Besides keeping track of computational errors, the main goal of NA is finding algorithms (as stable, fast and lean as possible!) to solve continuous mathematical problems (see this article by L. Trefethen).
  • A first example of a numerical analysis problem: evaluating the values of functions.
  • A fundamental review topic: Taylor series expansions.
  • A first concrete visualization of the effects of the floating-point error: explaining the behavior of the absolute error as function of $h$ in the forward differencing $(f(x+h)-f(x))/h$.
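
A minimal sketch of this experiment, with the arbitrary choices $f=\exp$ and $x=1$ (any smooth function shows the same behavior): the error first decreases like $h$ (truncation error) and then grows like $\varepsilon/h$ once cancellation dominates.

    # Absolute error of the forward difference (f(x+h) - f(x)) / h as a
    # function of h, for f = exp at x = 1 (exact derivative: exp(1)).
    import numpy as np
    import matplotlib.pyplot as plt

    x = 1.0
    hs = np.logspace(-16, 0, 200)
    approx = (np.exp(x + hs) - np.exp(x)) / hs
    err = np.abs(approx - np.exp(x))

    plt.loglog(hs, err)            # log-log plot: the slopes show the error orders
    plt.xlabel("h")
    plt.ylabel("absolute error")
    plt.show()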

Readings

Homework #2a

Due date: Jan. 25

Solve problems 6,7,8 in 1.5.

Lecture 4, Jan 18

Topics

  • A first concrete visualization of the effects of the floating-point error: explaining the behavior of the absolute error as function of $h$ in the forward differencing $(f(x+h)-f(x))/h$.
  • Error and conditioning.
  • Conditioning of evaluating a function.

Python Programming

  • Putting labels in plots.
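
A minimal example (names and values are arbitrary):

    # Axis labels, a title, a per-curve label, and a legend.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 100)
    plt.plot(x, np.sin(x), label="sin(x)")   # the label shows up in the legend
    plt.xlabel("x")
    plt.ylabel("y")
    plt.title("A labeled plot")
    plt.legend()
    plt.show()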

Readings

Homework #2b

Due date: Jan. 25

Solve problems 1,2,3 in 2.5.

Lecture 5, Jan 23

Topics

  • Conditioning of evaluating a function.
  • Conditioning of evaluating a function's derivative.
  • The bisection method.

Python Programming

  • Putting labels in plots.

Readings

Lecture 6, Jan 25

Topics

  • Bisection method:
    • An example of how to use it.
    • How to write the numerical approximation to a root obtained via this method.
    • Error bounds.
    • An implementation in Python (a sketch follows this list).
  • Iterating functions:
    1. Convergence of iterates.
    2. Using iterates to solve equations.
    3. A Python example of solving an equation with iterations.
  • Newton's method:
    • Geometric view of the method.
    • Getting the method via Taylor series expansion.
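
A minimal bisection sketch (not necessarily the code used in class); it assumes $f(a)$ and $f(b)$ have opposite signs, so that $[a,b]$ brackets a root:

    # Bisection: halve the bracketing interval until it is shorter than
    # twice the tolerance, so the midpoint is within tol of a root.
    def bisection(f, a, b, tol=1e-10):
        fa = f(a)
        while (b - a) / 2 > tol:
            c = (a + b) / 2
            fc = f(c)
            if fa * fc <= 0:        # sign change in [a, c]: the root is there
                b = c
            else:                   # otherwise the root is in [c, b]
                a, fa = c, fc
        return (a + b) / 2

    # Example: the positive root of x^2 - 2, i.e. sqrt(2).
    print(bisection(lambda x: x * x - 2, 1.0, 2.0))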

Readings

Suggested Extra Readings

Extra material

Homework #3

Due date: Feb. 2

  1. Let $p(x) = 2x^4-x^2-2$.
    1. Prove that $p(x)$ has a unique positive root;
    2. Find a bounded interval that contains the positive root and estimate the number of iterations of the bisection method needed to find the root with an error not larger than $10^{-5}$;
    3. Write Python code to find the root numerically with an absolute error of $10^{-5}$ and verify that the number of iterations found in the previous point is indeed enough to achieve the task (you can re-use any code in the book).
  2. Use the iteration method to find the root above. In other words, find a polynomial $q(x)$ such that each fixed point of $q$ is a root of $p(x)$ and vice versa. As we did in class, use iterates to find the positive root of $p(x)$ with the same error as above. If the iterates of the $q(x)$ you chose do not converge, modify $q(x)$ as I did in class until you find a $q(x)$ whose iterates do converge to the root.
  3. Apply two steps of the Newton method to the polynomial $p(x)$ using the starting point $x=1.5$. Evaluate the error at each step and verify that the error at a given step is roughly equal to the square of the previous error.

Lecture 7, Jan 30

Topics

  • Newton's method:
    • Order of convergence of Newton's method.
    • An implementation in Python.
    • When the Newton method fails.
  • Secant method:
    • Geometric view of the method.
    • Getting the method via Taylor series expansion.
    • Python implementation and a lesson: a loop with a fixed number of iterations is often not a wise choice.
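
A minimal secant sketch illustrating that lesson (not the code from class): iterate until a tolerance is met, but keep a maximum iteration count as a safety net.

    # Secant method: stop when the update is below tol, never loop forever.
    def secant(f, x0, x1, tol=1e-12, max_iter=100):
        for _ in range(max_iter):
            f0, f1 = f(x0), f(x1)
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant step
            if abs(x2 - x1) < tol:
                return x2
            x0, x1 = x1, x2
        raise RuntimeError("secant method did not converge")

    print(secant(lambda x: x * x - 2, 1.0, 2.0))   # approximates sqrt(2)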

Readings

Suggested Extra Readings

Extra material

Lecture 8, Feb 1

Topics

  • SciPy's root_scalar function (see the example after this list).
  • Conditioning of the root-finding problem.
  • Newton fractals.
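
A minimal usage example of root_scalar (the function and bracket are arbitrary):

    # Bracketing solve of x^2 - 2 = 0 with scipy.optimize.root_scalar.
    from scipy.optimize import root_scalar

    sol = root_scalar(lambda x: x * x - 2, bracket=[1, 2], method="brentq")
    print(sol.root, sol.iterations, sol.converged)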

Suggested Extra Readings

Homework #4

Due date: Feb. 9

  1. A hybrid root-finding method. Write a python code implementing the following "hybrid" method to evaluate numerically roots of a function $f(x)$:
    1. Set a tolerance $\varepsilon$ and start with an initial interval $[x_0,x_1]$ that contains a root of $f(x)$ (how can you guarantee the presence of a root in the interval?);
    2. Apply a step of the secant method to $f(x)$ using as points $x_0$ and $x_1$ and denote the number obtained by $c$.
    3. If $c$ is inside $[x_0,x_1]$, set $x_2=c$; otherwise set $x_2=(x_0+x_1)/2$.
    4. Repeat the steps above until the tolerance is reached.
    Test the method by evaluating the root of $x^2=2$ with an accuracy $\varepsilon=10^{-13}$.
  2. Comparing methods. Find the root of $\sin x=0.5625$ with an accuracy $\varepsilon=10^{-12}$ using the following methods: bisection, secant, Newton and the hybrid method above. For the bisection and hybrid methods, start from the interval $[0,\pi/2]$. For the secant and Newton methods, start from the point $0$. For each method, print the number of steps that were needed to get the desired precision.
  3. Conditioning. Find the root of the polynomial $p(x) = x^5 - 8 x^4 + 25.6 x^3 - 40.96 x^2 + 32.768 x -10.48576$ between 0 and 3 using Newton's method. Assume that, in the range $[0,3]$, $p(x)$ is known with an accuracy of $\Delta p\simeq10^{-12}$ and evaluate the absolute error on the numerical value of the root due to the conditioning of $p(x)$ at that value.
Remark: feel free to re-use any code or part of code from the textbook.

Lecture 9, Feb 6

Topics

  • Basics about complex numbers.
  • Linear/vector spaces.
  • Linear functions.
  • Linear maps.
  • Determinant of a linear map.

Readings

Suggested Extra Readings

Extra material

Lecture 10, Feb 8

Topics

  • Determinant of a general linear map.
  • Exponential vs factorial.
  • Determinant of a diagonal matrix.
  • Determinant of an upper triangular matrix.
  • Determinant of a lower triangular matrix.
  • Gaussian elimination.
  • LU decomposition of a matrix.
  • Python implementation of the LU decomposition.
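
A minimal LU sketch without pivoting (Doolittle style; the implementation shown in class may differ in its details):

    # LU decomposition: returns L (unit lower triangular) and U (upper
    # triangular) with A = L U. Fails if a zero pivot is encountered.
    import numpy as np

    def lu(A):
        n = A.shape[0]
        L = np.eye(n)
        U = A.astype(float).copy()
        for k in range(n - 1):
            for i in range(k + 1, n):
                L[i, k] = U[i, k] / U[k, k]      # multiplier
                U[i, k:] -= L[i, k] * U[k, k:]   # eliminate entry (i, k)
        return L, U

    A = np.array([[2.0, 1.0], [4.0, 5.0]])
    L, U = lu(A)
    print(np.allclose(L @ U, A))   # True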

Readings

Suggested Extra Readings

Extra material

Homework #5

Due date: Feb. 15

Problems 1,3,7,8,10 in 4.10

Lecture 11, Feb 13

Topics

  • Computational complexity of the LU algorithm.
  • LU decomposition with pivoting.

Python Programming

  • Vectorized calculations in NumPy.
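
A minimal example of the idea: replace an explicit Python loop with one whole-array expression.

    # Vectorized NumPy: the expression below acts on a million entries at
    # once, with the loop running in compiled code instead of in Python.
    import numpy as np

    x = np.linspace(0.0, 1.0, 1_000_000)
    y = np.sin(x) ** 2 + np.cos(x) ** 2   # no explicit loop needed
    print(y.min(), y.max())               # both ~1.0, up to rounding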

Readings

Lecture 12, Feb 15

Topics

  • Implementation of LU decomposition with pivoting.
  • Using the LU decomposition to solve a linear system (see the example after this list).
  • Norms in a vector space.
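
A minimal example using SciPy's lu_factor / lu_solve (which include partial pivoting); the matrix and right-hand side are arbitrary:

    # Factor A once, then solve A x = b; the factorization can be reused
    # cheaply for many right-hand sides b.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([9.0, 8.0])
    lu_piv = lu_factor(A)
    x = lu_solve(lu_piv, b)
    print(x)                 # [2. 3.]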

Readings

Extra Readings

Lecture 13, Feb 20

Topics

  • Conditioning of a matrix.
  • Propagation of error in the solution of a linear system.
  • Residual.
  • Iterative methods to solve linear systems.

Readings

Homework #6

Due date: Feb. 24

  1. Solve Problems 13,14,15 in 4.10.
  2. Solve, in D3, the system $$ \begin{pmatrix}0.913&0.659\\ 0.457&0.330\\\end{pmatrix}\begin{pmatrix}x\\ y\\\end{pmatrix} = \begin{pmatrix}0\\ 1\\\end{pmatrix}. $$ The matrix is the same as in Sec. 4.5. Verify that, in this case, the residual is larger than the one we found in Sec. 4.5, while the solution is actually much more accurate. This shows that the "worst case scenario" does not necessarily take place, and reminds you that the residual is not necessarily a measure of how accurate the solution of a linear system is.

Lecture 14, Feb 22

Topics

  • Iterative methods to solve linear systems: the Jacobi and Gauss-Seidel methods (a Jacobi sketch follows this list).
  • Application: using linear systems to solve a boundary value problem.
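
A minimal Jacobi sketch (not necessarily the class code); it assumes $A$ is such that the iteration converges, e.g. strictly diagonally dominant:

    # Jacobi iteration: split A into its diagonal D and the rest R, then
    # iterate x_{k+1} = D^{-1} (b - R x_k) until the update is small.
    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=1000):
        D = np.diag(A)                   # diagonal of A, as a vector
        R = A - np.diag(D)               # off-diagonal part
        x = np.zeros_like(b)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new
            x = x_new
        raise RuntimeError("Jacobi iteration did not converge")

    A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant
    b = np.array([6.0, 12.0])
    print(jacobi(A, b))                      # ~[1. 2.]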

Readings

MIDTERM #1: an exploration of Newton's method in the real and complex lines

Write a python code that implements the following tasks.

Consider the real polynomial $f(x) = x^4+2x^3-x^2-2x$. For any initial point one passes to it, Newton's method will either converge to one of the four roots $r_1,r_2,r_3,r_4$ of $f(x)$ or fail to converge. We say that a point $x_0$ belongs to the basin of attraction of the root $r_i$ if Newton's method starting at $x_0$ converges to $r_i$. Points where the algorithm does not converge are "rare" (the probability of getting one by picking a real number at random is zero), so we can disregard this case.

The main question of this midterm is: how are the four basins (one for each root) located in the real line?

Answer this question numerically by doing the following:
  1. Fix an interval $I=[xmin,xmax]$. For instance, you can start with $[-3,3]$.
  2. Subdivide $I$ into $N$ subintervals and, for each of the corresponding $N+1$ endpoints, use Newton's method to see to which root the endpoint converges, and paint that point a different color depending on the root.
    Since single points are difficult to see, I suggest painting a whole vertical line instead, so that ultimately your picture will be a strip painted in four colors. For instance, plt.axvline(x = 3.1, color = 'g', label = 'axvline - full height') will draw a green vertical line at $x=3.1$. You will need to set $N$ to about 1000 to see continuously colored strips.
  3. After you are able to display the case above, play with $xmin,xmax$ and find some interval of width $0.01$ that contains points converging to all four roots.
    This shows why you should choose your initial point as close as possible to the root you are looking for: far enough from the root, there are regions where a tiny change in the initial point can send you to any other root of the function!
  4. Finally, slightly modify the code above to make it work with complex numbers, namely use Newton's method on the complex polynomial $f(z) = z^4+2z^3-z^2-2z$. This time you will select a region of the plane, initially the square $-3\leq x=Re(z)\leq 3$, $-3\leq y=Im(z)\leq 3$. By subdividing the $y$ axis into $N$ subintervals as well, you will get a lattice of points $(x_i,y_j)$ in the square. For each of these points, determine to which root it converges (the complex polynomial of course has the very same roots as its real counterpart) and paint that point the corresponding color.
    In this case, use a command like plt.plot(x, y, 'ro') (here 'r' stands for the color red and 'o' produces a visible dot). To define a complex number from its components, use z = complex(x, y); the cmath module provides the complex versions of the usual math functions.

Lecture 15, Feb 27

Topics

  • Application: using linear systems to solve a boundary value problem.
  • SciPy's linalg.solve function.
  • Eigenvalues and eigenvectors.
  • The power method.
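
A minimal power-method sketch (the matrix and iteration count are arbitrary):

    # Power method: repeated multiplication by A pulls a generic vector
    # toward the eigenvector of the eigenvalue of largest modulus.
    import numpy as np

    def power_method(A, n_iter=100):
        x = np.ones(A.shape[0])
        for _ in range(n_iter):
            x = A @ x
            x /= np.linalg.norm(x)      # renormalize to avoid overflow
        return x @ A @ x, x             # Rayleigh quotient, eigenvector

    A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 3 and 1
    lam, v = power_method(A)
    print(lam)                               # ~3.0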

Readings

Suggested Extra Readings

Extra material

Lecture 16, Feb 29

Topics

  • Power method.
  • QR decomposition.
  • The LR and QR methods.
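
A minimal sketch of the (unshifted) QR method on an arbitrary symmetric matrix: factor $A_k=Q_kR_k$, then set $A_{k+1}=R_kQ_k$; under suitable hypotheses the $A_k$ approach a triangular matrix whose diagonal carries the eigenvalues.

    # Unshifted QR iteration: each A_k is similar to A, so the eigenvalues
    # are preserved while the off-diagonal entries decay.
    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    Ak = A.copy()
    for _ in range(50):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    print(np.diag(Ak))   # ~[3. 1.], the eigenvalues of A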

Readings

Suggested Extra Readings

Extra material

Lecture 17, Mar 12

Topics

  • What do we mean by "optimizing a function"?
  • Minima and maxima of a function of many variables.
  • The role of the gradient and of the Hessian.

Readings

Lecture 18, Mar 14

Topics

  • Iterative methods.
  • Steepest Descent method (a sketch follows this list).
  • Steepest Descent method with line search.
  • Important theorems on the convergence of the Steepest Descent method (without proofs).
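
A minimal steepest-descent sketch with a constant step size (no line search) on the arbitrarily chosen quadratic $f(x,y)=x^2+5y^2$:

    # Steepest descent: step against the gradient with fixed size alpha.
    import numpy as np

    def grad_f(p):                     # gradient of f(x, y) = x^2 + 5 y^2
        return np.array([2 * p[0], 10 * p[1]])

    p = np.array([4.0, 1.0])           # arbitrary starting point
    alpha = 0.1                        # constant step size (small enough here)
    for _ in range(200):
        p = p - alpha * grad_f(p)
    print(p)                           # ~[0. 0.], the unique minimizer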

Readings

Lecture 19, Mar 19

Topics

  • Python implementation of the Steepest Descent method, with and without line search.
  • Numerical verification of the largest constant step size that can be used in the Steepest Descent method.
  • Taylor series for functions in more than one variable.
  • Newton's method for the search of critical points of a function.

Readings

Suggested Extra Readings

Extra material

Lecture 20, Mar 21

Topics

  • Newton's method for the search of critical points of a function.
  • Scalar products in linear spaces.
  • Conjugate gradient method.
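
A minimal conjugate-gradient sketch for $Ax=b$ with $A$ symmetric positive definite (in exact arithmetic it terminates in at most $n$ steps); the matrix and right-hand side are arbitrary:

    # Conjugate gradient: the search directions are made A-conjugate, and
    # the step along each direction is exact for the quadratic form.
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10):
        x = np.zeros_like(b)
        r = b - A @ x                   # residual
        d = r.copy()                    # first search direction
        while np.linalg.norm(r) > tol:
            Ad = A @ d
            alpha = (r @ r) / (d @ Ad)  # exact line search along d
            x = x + alpha * d
            r_new = r - alpha * Ad
            d = r_new + (r_new @ r_new) / (r @ r) * d
            r = r_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))     # ~[0.0909 0.6364]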

Readings

Suggested Extra Readings

Extra material

Homework #7

Due date: Mar. 28

Solve Problems 1,3,5 in 6.8.

Lecture 21, Mar 26

Topics

  • SciPy's minimize function.
  • What is interpolation?
  • Polynomials of degree at most $n$ form a vector space of dimension $n+1$.
  • The basis of monomials is not a good basis for interpolating.
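
A minimal numerical illustration of this point: the Vandermonde matrix of the monomial basis becomes severely ill-conditioned as the degree grows (nodes chosen arbitrarily in $[0,1]$).

    # Condition number of the Vandermonde matrix on n+1 equispaced nodes.
    import numpy as np

    for n in (5, 10, 20):
        x = np.linspace(0, 1, n + 1)
        V = np.vander(x)                 # monomial-basis interpolation matrix
        print(n, np.linalg.cond(V))      # grows explosively with n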

Readings

Lecture 22, Mar 28

Topics

  • Lagrange polynomials (an evaluation sketch follows this list).
  • How good can a polynomial approximation be?
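
A minimal sketch of evaluating the Lagrange form of the interpolating polynomial, $p(t)=\sum_i y_i\,\ell_i(t)$, where $\ell_i$ is $1$ at $x_i$ and $0$ at the other nodes (data chosen arbitrarily):

    # Direct evaluation of the Lagrange interpolating polynomial.
    import numpy as np

    def lagrange_eval(x, y, t):
        p = 0.0
        for i in range(len(x)):
            li = 1.0                         # build the basis polynomial l_i(t)
            for j in range(len(x)):
                if j != i:
                    li *= (t - x[j]) / (x[i] - x[j])
            p += y[i] * li
        return p

    x = np.array([0.0, 1.0, 2.0])
    y = x ** 2                               # sample y = x^2 at the nodes
    print(lagrange_eval(x, y, 1.5))          # 2.25, exact for a quadratic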

Readings

Lecture 23, Apr 2

Topics

  • How good can a polynomial approximation be?
  • The Runge phenomenon.
  • Chebyshev polynomials.
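
A minimal sketch of the phenomenon on the classical Runge function $1/(1+25x^2)$: the interpolant on equispaced nodes oscillates wildly near the endpoints, while the one on Chebyshev nodes does not (the degree is an arbitrary choice).

    # Interpolating the Runge function on equispaced vs Chebyshev nodes.
    import numpy as np
    import matplotlib.pyplot as plt

    f = lambda x: 1 / (1 + 25 * x ** 2)
    n = 12
    x_eq = np.linspace(-1, 1, n + 1)
    x_ch = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))

    t = np.linspace(-1, 1, 500)
    for nodes, lab in ((x_eq, "equispaced"), (x_ch, "Chebyshev")):
        p = np.polyfit(nodes, f(nodes), n)   # degree-n interpolant
        plt.plot(t, np.polyval(p, t), label=lab)
    plt.plot(t, f(t), "k--", label="f")
    plt.legend()
    plt.show()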

Readings

Lecture 24, Apr 4

Topics

  • Numerical integration.
  • The left Riemann sum.
  • Convergence order of the left Riemann sum.
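
A minimal sketch of the rule and a numerical check of its first-order convergence (the test integral $\int_0^1 e^x\,dx=e-1$ is an arbitrary choice):

    # Left Riemann sum: sample f at the left endpoint of each subinterval.
    import numpy as np

    def left_riemann(f, a, b, n):
        h = (b - a) / n
        x = a + h * np.arange(n)        # left endpoints
        return h * np.sum(f(x))

    for n in (10, 100, 1000):
        err = abs(left_riemann(np.exp, 0, 1, n) - (np.e - 1))
        print(n, err)                   # error shrinks ~10x per 10x n: order 1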

Readings

Lecture 25, Apr 9

Topics

  • Verifying the order of the left Riemann sum graphically.
  • The Newton-Cotes method.
  • The midpoint method.
  • The trapezoid method.
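
Minimal sketches of both rules on the same arbitrary test integral $\int_0^1 e^x\,dx$; both are second order, so the error drops roughly 100-fold when $n$ grows tenfold:

    # Midpoint and trapezoid rules on [a, b] with n subintervals.
    import numpy as np

    def midpoint(f, a, b, n):
        h = (b - a) / n
        x = a + h * (np.arange(n) + 0.5)   # subinterval midpoints
        return h * np.sum(f(x))

    def trapezoid(f, a, b, n):
        x = np.linspace(a, b, n + 1)
        h = (b - a) / n
        return h * (np.sum(f(x)) - (f(a) + f(b)) / 2)

    exact = np.e - 1
    for n in (10, 100):
        print(n, abs(midpoint(np.exp, 0, 1, n) - exact),
              abs(trapezoid(np.exp, 0, 1, n) - exact))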

Readings

Lecture 26, Apr 11

Topics

  • Simpson's method (a sketch follows this list).
  • ODEs.
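
A minimal sketch of the composite Simpson rule ($n$ even; the test integral is again an arbitrary choice):

    # Simpson's rule: weights 1, 4, 2, 4, ..., 2, 4, 1 times h/3.
    import numpy as np

    def simpson(f, a, b, n):
        x = np.linspace(a, b, n + 1)
        h = (b - a) / n
        w = np.ones(n + 1)
        w[1:-1:2] = 4                    # odd-index interior nodes
        w[2:-1:2] = 2                    # even-index interior nodes
        return h / 3 * np.sum(w * f(x))

    # Fourth-order accurate: already ~1e-6 error with only n = 10.
    print(abs(simpson(np.exp, 0, 1, 10) - (np.e - 1)))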

Readings