
Section 4.2 The LU Method

The idea behind the LU method is that, while the determinant of a large general matrix is far too expensive to evaluate directly from its definition (a cofactor expansion requires on the order of \(n!\) operations), there are kinds of matrices for which, on the contrary, this calculation poses no problem.

Subsection 4.2.1 Upper and lower triangular matrices

Consider, for instance, the case of diagonal matrices. In dimension 3, a diagonal matrix has the form
\begin{equation*} A=\begin{pmatrix}\alpha&0&0\cr 0&\beta&0\cr 0&0&\gamma\cr\end{pmatrix}. \end{equation*}
Clearly then
\begin{equation*} A\begin{pmatrix}1\cr0\cr0\cr\end{pmatrix} = \begin{pmatrix}\alpha\cr0\cr0\cr\end{pmatrix} ,\; A\begin{pmatrix}0\cr1\cr0\cr\end{pmatrix} = \begin{pmatrix}0\cr\beta\cr0\cr\end{pmatrix} ,\; A\begin{pmatrix}0\cr0\cr1\cr\end{pmatrix} = \begin{pmatrix}0\cr0\cr\gamma\cr\end{pmatrix}, \end{equation*}
namely the image of the unit cube is a rectangular parallelepiped with sides \(\alpha,\beta,\gamma\text{.}\) Hence, \(\det A=\alpha\beta\gamma\text{.}\) More generally, the determinant of any diagonal matrix is just the product of its diagonal entries.

Simple geometric considerations show that the same result holds whenever a matrix has only zeros either above or below the diagonal. The former are called lower triangular matrices, the latter upper triangular matrices.

For instance, the determinant of the upper triangular matrix
\begin{equation*} \begin{pmatrix} 3&2&-1\cr 0&-1&7\cr 0&0&13\cr \end{pmatrix} \end{equation*}
is \(3\cdot(-1)\cdot13=-39\text{:}\)
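We can check this with a couple of lines of MATLAB (a minimal sketch using the built-in det):

U = [3 2 -1; 0 -1 7; 0 0 13];
det(U)              % built-in determinant: returns -39
prod(diag(U))       % product of the diagonal entries: also -39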
It would therefore be very convenient, in order to solve the equation \(Av=b\text{,}\) to write a matrix \(A\) as the product of a lower and an upper triangular matrix: once \(A=LU\text{,}\) the system reduces to the two triangular systems \(Lw=b\) and \(Uv=w\text{,}\) each of which can be solved by simple substitution, one variable at a time.
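Here is a minimal sketch of how the two triangular solves could be coded in MATLAB (the helper name solve_lu is our own, not part of the text):

function v = solve_lu(L, U, b)
% Solve (L*U)*v = b by a forward substitution followed by a backward one.
n = length(b);
w = zeros(n, 1);
for i = 1:n                              % forward substitution: L*w = b
    w(i) = (b(i) - L(i, 1:i-1) * w(1:i-1)) / L(i, i);
end
v = zeros(n, 1);
for i = n:-1:1                           % backward substitution: U*v = w
    v(i) = (w(i) - U(i, i+1:n) * v(i+1:n)) / U(i, i);
end
end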

It turns out that this is possible. In fact, this is exactly one of the methods taught in college algebra to solve linear systems: Gaussian elimination!

Subsection 4.2.2 Gaussian elimination

Let us illustrate the Gaussian elimination with the following system:
\begin{equation*} \begin{pmatrix} 1&2&3\cr 4&5&6\cr 7&8&8\cr \end{pmatrix} \begin{pmatrix} x\cr y\cr z\cr \end{pmatrix} = \begin{pmatrix} 1\cr 1\cr 0\cr \end{pmatrix}. \end{equation*}
The Gaussian elimination technique consists in using the first row to set to zero the first coefficients in rows 2 and 3, and then using the second row to set to zero the second coefficient of the third row. The principle is that replacing a row with a linear combination of itself (with non-zero coefficient) and the other rows does not change the set of solutions.

After multiplying the first row by \(-4\) and adding it to the second, we get
\begin{equation*} \begin{pmatrix} 1&2&3\cr 0&-3&-6\cr 7&8&8\cr \end{pmatrix} \begin{pmatrix} x\cr y\cr z\cr \end{pmatrix} = \begin{pmatrix} 1\cr -3\cr 0\cr \end{pmatrix}. \end{equation*}
After multiplying the first row by \(-7\) and adding it to the third, we get
\begin{equation*} \begin{pmatrix} 1&2&3\cr 0&-3&-6\cr 0&-6&-13\cr \end{pmatrix} \begin{pmatrix} x\cr y\cr z\cr \end{pmatrix} = \begin{pmatrix} 1\cr -3\cr -7\cr \end{pmatrix}. \end{equation*}
After multiplying the second row by \(-2\) and adding it to the third, we finally get
\begin{equation*} \begin{pmatrix} 1&2&3\cr 0&-3&-6\cr 0&0&-1\cr \end{pmatrix} \begin{pmatrix} x\cr y\cr z\cr \end{pmatrix} = \begin{pmatrix} 1\cr -3\cr -1\cr \end{pmatrix}. \end{equation*}
The matrix above is upper triangular, so the system can now be solved by back substitution: the third row gives \(z=1\text{,}\) then the second gives \(y=-1\text{,}\) and the first gives \(x=0\text{.}\) In fact, this algorithm ultimately produces exactly a decomposition of
\begin{equation*} A = \begin{pmatrix} 1&2&3\cr 4&5&6\cr 7&8&8\cr \end{pmatrix} \end{equation*}
into the product \(L\cdot U\) of a lower triangular matrix times an upper triangular one. The upper factor is the matrix above:
\begin{equation*} U = \begin{pmatrix} 1&2&3\cr 0&-3&-6\cr 0&0&-1\cr \end{pmatrix}. \end{equation*}
The lower factor has ones on the diagonal and is built from the multipliers used in the algorithm (\(4\text{,}\) \(7\) and \(2\)):
\begin{equation*} L = \begin{pmatrix} 1&0&0\cr 4&1&0\cr 7&2&1\cr \end{pmatrix}. \end{equation*}
Let us verify:
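In MATLAB, for instance, the check could be carried out as follows (a minimal sketch):

L = [1 0 0; 4 1 0; 7 2 1];
U = [1 2 3; 0 -3 -6; 0 0 -1];
L * U               % returns [1 2 3; 4 5 6; 7 8 8], i.e. the original A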

Subsection 4.2.3 Code implementations

The code below implements the algorithm we just used in the general case of \(n\times n\) systems:
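A direct implementation with three nested for loops might look as follows (the function name lu_basic is our own choice):

function [L, U] = lu_basic(A)
% LU decomposition without pivoting: returns L (unit lower triangular)
% and U (upper triangular) such that A = L*U.
n = size(A, 1);
L = eye(n);
U = A;
for i = 1:n-1                            % loop over the pivots
    for j = i+1:n                        % loop over the rows below the pivot
        L(j, i) = U(j, i) / U(i, i);     % multiplier for row j
        for k = 1:n                      % loop over the columns
            U(j, k) = U(j, k) - L(j, i) * U(i, k);
        end
    end
end
end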
The implementation below is a bit fancier: we check whether \(A\) is a square matrix and whether the pivot element we are dividing by is non-zero. Moreover, we avoid the innermost for loop by using MATLAB's vectorized array operations.
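A sketch of what such an implementation might look like (again with names of our own choosing):

function [L, U] = lu_checked(A)
% LU decomposition without pivoting, with basic sanity checks.
[n, m] = size(A);
if n ~= m
    error('A must be a square matrix')
end
L = eye(n);
U = A;
for i = 1:n-1
    if U(i, i) == 0
        error('zero pivot encountered: cannot divide')
    end
    for j = i+1:n
        L(j, i) = U(j, i) / U(i, i);
        % update the whole row at once: no loop over the column index
        U(j, i:n) = U(j, i:n) - L(j, i) * U(i, i:n);
    end
end
end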
In this last implementation we skip yet another for loop:
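A possible version (our reconstruction, since the original listing is not shown): the loop over the rows j is replaced by vector operations, so only the loops over i and k remain.

function [L, U] = lu_cols(A)
% LU decomposition without pivoting; the row loop is vectorized.
n = size(A, 1);
L = eye(n);
U = A;
for i = 1:n-1
    L(i+1:n, i) = U(i+1:n, i) / U(i, i);  % all multipliers of step i at once
    for k = 1:n                           % loop over the column index
        U(i+1:n, k) = U(i+1:n, k) - L(i+1:n, i) * U(i, k);
    end
end
end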
Notice that the loop in the variable k ranges over all possible values of the column index. In this case there is an even simpler way to write the loop: replace the index with the symbol :, which stands for "all possible values that can appear here":
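The elimination loop then reduces to the following (a sketch, using the same function skeleton as above):

for i = 1:n-1
    L(i+1:n, i) = U(i+1:n, i) / U(i, i);
    % ':' selects every column, so one line replaces the whole k loop
    U(i+1:n, :) = U(i+1:n, :) - L(i+1:n, i) * U(i, :);
end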

Subsection 4.2.4 Computational complexity

We want to evaluate the order of magnitude of how many calculations are needed to perform an LU decomposition of an \(n\times n\) matrix \(A\text{.}\) This is called the computational complexity of the algorithm.

In general, the answer is a polynomial in \(n\) of some degree. We do not need the exact number, only the degree of the polynomial; namely, we want to know whether this number scales like \(n^2\text{,}\) like \(n^3\text{,}\) and so on.

According to the algorithm, at the first step (i=1) we perform the following operations:
  1. \(n-1\) divisions (to find the multipliers L(j,1), \(j=2,\dots,n\)),
  2. \(n(n-1)\) multiplications (when we multiply the first row by each multiplier),
  3. \(n(n-1)\) subtractions (when we subtract the 'normalized' first row from each of the remaining \(n-1\) rows).

This is a total of \((n-1)(2n+1)=2n^2-n-1\) operations. We are interested only in the leading order as \(n\to\infty\) so we can disregard the lower order terms.

At the second step (\(i=2\)) we repeat the algorithm but now on a smaller \((n-1)\times(n-1)\) matrix, so the leading term of the number of operations now scales as \(2(n-1)^2\text{.}\) We keep going like this until, at the last step (\(i=n-1\text{,}\) when we act on a \(2\times2\) matrix), we perform one division, two multiplications and two subtractions.

Hence, the number of operations increases with \(n\) as
\begin{equation*} 2(n^2+(n-1)^2+(n-2)^2+\dots+2^2). \end{equation*}
One can prove (e.g., see Brilliant) that the sum of the squares of the first \(n\) positive integers is equal to
\begin{equation*} \frac{n(n+1)(2n+1)}{6}. \end{equation*}
Hence the sum above grows like \(2n^3/3\) as \(n\to\infty\text{,}\) and the LU algorithm's computational complexity is of order \(n^3\text{.}\)

The code below counts the number of operations needed for the LU decomposition of a matrix for \(n\) between 1 and 20 and plots the result in a log-log plot.
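Such a counting script might look like this (our sketch; the counts follow the bookkeeping above):

ns = 1:20;
ops = zeros(size(ns));
for idx = 1:numel(ns)
    n = ns(idx);
    count = 0;
    for i = 1:n-1
        m = n - i + 1;                   % size of the active submatrix at step i
        count = count + (m - 1);         % divisions (the multipliers)
        count = count + 2*m*(m - 1);     % multiplications and subtractions
    end
    ops(idx) = count;
end
loglog(ns, ops, 'o-')                    % slope ~3 in a log-log plot means ~n^3
hold on
loglog(ns, ns.^3, '--')
legend('operation count', 'n^3', 'Location', 'northwest')
xlabel('n'), ylabel('number of operations')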