
Section 5.1 Eigenvalues and Eigenvectors

How many essentially different behaviors can a linear operator \(f\) have? The answer to this question passes through the concepts of eigenvalue, eigenvector, and eigendirection.

Consider for example the simplest case, namely dimension 2. Remember that an operator \(f\) is completely determined by what it does on the basis vectors \(\{e_1,e_2\}\text{:}\)
\begin{equation*} f(\lambda e_1+\mu e_2) = f(\lambda e_1)+f(\mu e_2) = \lambda f(e_1)+\mu f(e_2) \end{equation*}
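Concretely, in coordinates the matrix of \(f\) has the images of the basis vectors as its columns: if \(f(e_1)=ae_1+ce_2\) and \(f(e_2)=be_1+de_2\text{,}\) then \(f\) is represented by
\begin{equation*} \begin{pmatrix}a&b\cr c&d\cr\end{pmatrix}. \end{equation*}
This standard convention is the one used below.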

The simplest behavior that \(f\) can have on a vector \(v\neq0\) (what happens to the vector 0 is trivial: since \(f\) is linear, \(f(0)=0\)) is that \(f(v)\) be proportional to \(v\text{:}\)
\begin{equation*} f(v)=\lambda v. \end{equation*}
When this happens, \(\lambda\) is called an eigenvalue of \(f\) and \(v\) is an eigenvector. The straight line containing \(v\) is called an eigendirection. Notice that, since \(f(\alpha v)=\alpha f(v)\text{,}\) all non-zero vectors belonging to the same eigendirection are eigenvectors with the same eigenvalue.
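For instance, if \(f\) is represented by the matrix
\begin{equation*} \begin{pmatrix}3&0\cr0&2\cr\end{pmatrix}, \end{equation*}
then \(f(e_1)=3e_1\) and \(f(e_2)=2e_2\text{,}\) so \(3\) and \(2\) are eigenvalues of \(f\text{,}\) the vectors \(e_1\) and \(e_2\) are corresponding eigenvectors, and the two coordinate axes are eigendirections.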

Definition 5.1.1.
We denote by \(\sigma(A)\) the set of all eigenvalues of an operator (or matrix) \(A\text{.}\) We call \(\sigma(A)\) the spectrum of \(A\text{.}\)

If \(\lambda\in\sigma(f)\text{,}\) there is a vector \(v\neq0\) such that
\begin{equation*} (f-\lambda\bI)v=0 \end{equation*}
and therefore we must have also that
\begin{equation*} \det(f-\lambda\bI)=0 \end{equation*}
since, if \(f-\lambda\bI\) were invertible, it could not send non-zero vectors into the zero vector (recall that any linear map sends 0 into 0).

In particular, since \(\det(A-\lambda\bI)\) is a polynomial of degree \(n\) in \(\lambda\) (the characteristic polynomial of \(A\)), an \(n\times n\) matrix has at most \(n\) eigenvalues.
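For example, for
\begin{equation*} A=\begin{pmatrix}2&1\cr1&2\cr\end{pmatrix},\qquad \det(A-\lambda\bI)=(2-\lambda)^2-1=(\lambda-1)(\lambda-3), \end{equation*}
so \(\sigma(A)=\{1,3\}\text{;}\) indeed \(A(e_1-e_2)=e_1-e_2\) and \(A(e_1+e_2)=3(e_1+e_2)\text{.}\)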

To see which behaviors are possible, and to prepare for the next sections, we first discuss the two-dimensional case in full detail.

Discussion of the case \(n=2\). If \(f:\Bbb R^2\to\Bbb R^2\text{,}\) in coordinates \(f\) is represented by some \(2\times2\) matrix and so
\begin{equation*} \det(f-\lambda\bI)=\det\begin{pmatrix}a-\lambda&b\cr c&d-\lambda\end{pmatrix}= (a-\lambda)(d-\lambda)-bc \end{equation*}
is a polynomial of degree 2 in \(\lambda\text{.}\)

Hence \(f\) can have two distinct real eigenvalues, two coincident real eigenvalues, or no real eigenvalue at all, depending on whether the discriminant \((a-d)^2+4bc\) of this polynomial is positive, zero, or negative.

Correspondingly, there are just 3 possibilities (concrete examples follow the list): the map \(f\) can either leave invariant
  1. two directions; or
  2. one direction; or
  3. none at all.
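For instance, the three possibilities are realized, respectively, by the matrices
\begin{equation*} \begin{pmatrix}2&0\cr0&3\cr\end{pmatrix},\qquad \begin{pmatrix}1&1\cr0&1\cr\end{pmatrix},\qquad \begin{pmatrix}0&-1\cr1&\phantom{-}0\cr\end{pmatrix}\text{:} \end{equation*}
the first leaves both coordinate axes invariant; the second leaves only the axis of \(e_1\) invariant (one checks that, since \(f(e_2)=e_1+e_2\text{,}\) no other direction is preserved); the third is the rotation by \(\pi/2\) and moves every direction.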

1. Two invariant directions. Choose \(e_1\) and \(e_2\) as vectors pointing in those two directions. Then
\begin{equation*} f(e_1)=\lambda_1e_1\hbox{ and }f(e_2)=\lambda_2e_2 \end{equation*}

In other words, in this basis \(f\) is represented by the matrix

\begin{equation*} \begin{pmatrix}\lambda_1&0\cr0&\lambda_2\cr\end{pmatrix} \end{equation*}
The numbers \(\lambda_1\) and \(\lambda_2\) are called the eigenvalues of \(f\text{,}\) and \(e_1\text{,}\) \(e_2\) the corresponding eigenvectors.

IMPORTANT: eigenvalues are uniquely determined, while eigenvectors are determined only up to a non-zero multiplicative constant; what is uniquely determined is the eigendirection.

2. Only one invariant direction. Choose \(e_1\) as a vector pointing in the invariant direction and \(e_2\) as any vector not proportional to \(e_1\text{.}\) Then
\begin{equation*} f(e_1)=\lambda e_1\hbox{ and }f(e_2)=\alpha e_1+\beta e_2. \end{equation*}

If \(\beta\neq\lambda\text{,}\) then the direction of \(e'_2=\frac{\alpha}{\beta-\lambda}e_1+e_2\) would also be invariant (check it!), against our assumption, so actually
\begin{equation*} f(e_1)=\lambda e_1\hbox{ and }f(e_2)=\alpha e_1+\lambda e_2. \end{equation*}
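Here is the check requested above: if \(\beta\neq\lambda\) and \(e'_2=\frac{\alpha}{\beta-\lambda}e_1+e_2\text{,}\) then
\begin{equation*} f(e'_2)=\frac{\alpha\lambda}{\beta-\lambda}\,e_1+\alpha e_1+\beta e_2 =\frac{\alpha\lambda+\alpha(\beta-\lambda)}{\beta-\lambda}\,e_1+\beta e_2 =\beta\left(\frac{\alpha}{\beta-\lambda}\,e_1+e_2\right)=\beta e'_2, \end{equation*}
so the direction of \(e'_2\) would indeed be invariant, with eigenvalue \(\beta\text{.}\)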
Note that \(\alpha\neq0\text{,}\) for otherwise \(f\) would be \(\lambda\bI\) and every direction would be invariant. Moreover, with a further change of basis (replace \(e_2\) by \(e_2/\alpha\)) we can always arrange \(\alpha=1\text{,}\) namely in such a basis \(f\) is represented by the matrix

\begin{equation*} \begin{pmatrix}\lambda&1\cr0&\lambda\cr\end{pmatrix} \end{equation*}

3. No invariant direction. You already know a kind of linear transformation that leaves no direction invariant: rotations (by an angle which is not a multiple of \(\pi\))!

In fact, in some appropriate basis, every linear operator \(f\) with no invariant direction is represented by a matrix of the form

\begin{equation*} \lambda\begin{pmatrix}\cos\theta&-\sin\theta\cr\sin\theta&\phantom{-}\cos\theta\cr\end{pmatrix} \end{equation*}

namely it is a rotation by an angle \(\theta\) composed with a stretch by a factor \(\lambda\text{.}\)

In complex notation, the action of this matrix on the plane is multiplication by the complex number \(\lambda e^{i\theta}\text{.}\)
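This matches the algebra: the characteristic polynomial of the matrix above is
\begin{equation*} \det\begin{pmatrix}\lambda\cos\theta-\mu&-\lambda\sin\theta\cr\lambda\sin\theta&\lambda\cos\theta-\mu\cr\end{pmatrix} =\mu^2-2\lambda\mu\cos\theta+\lambda^2, \end{equation*}
whose roots \(\mu=\lambda e^{\pm i\theta}\) are non-real exactly when \(\theta\) is not a multiple of \(\pi\text{:}\) this is why there is no real eigendirection.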

The \(n\)-dimensional case. These three cases actually represent all that can happen to a linear operator in any dimension, namely:

given any operator \(f\) on \(\Bbb R^n\text{,}\) we can always split \(\Bbb R^n\) into smaller invariant subspaces (an example follows the list) on which \(f\) either:
  1. is diagonal (namely has a number of eigendirections equal to the dimension of the subspace, like case 1);
  2. has a single eigendirection (like case 2);
  3. has no eigendirection (and then it is like case 3).
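For example, the operator on \(\Bbb R^3\) represented by the matrix
\begin{equation*} \begin{pmatrix}\cos\theta&-\sin\theta&0\cr\sin\theta&\phantom{-}\cos\theta&0\cr0&0&2\cr\end{pmatrix} \end{equation*}
splits \(\Bbb R^3\) into two invariant subspaces: the \(xy\)-plane, on which it acts as a rotation with no eigendirection (case 3, for \(\theta\) not a multiple of \(\pi\)), and the \(z\) axis, on which it is diagonal with eigenvalue \(2\) (case 1).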

Some important facts about the decomposition of linear maps:
  1. Since multidimensional rotations always break down into compositions of 2-dimensional rotations, an operator on an odd-dimensional vector space always has at least one real eigenvalue. (Indeed, its characteristic polynomial has odd degree, and every real polynomial of odd degree has at least one real root.)
  2. Symmetric matrices always have all real eigenvalues (a quick check for \(n=2\) follows the list).
  3. More generally, normal operators, namely those whose matrix \(A\) satisfies \(AA^T=A^TA\text{,}\) never present case 2: they always split into invariant subspaces of types 1 and 3 only. Their eigenvalues, however, need not all be real: rotations, for instance, are normal.
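A quick check of fact 2 in dimension 2: the characteristic polynomial of the symmetric matrix \(\begin{pmatrix}a&b\cr b&d\cr\end{pmatrix}\) is \(\lambda^2-(a+d)\lambda+(ad-b^2)\text{,}\) whose discriminant
\begin{equation*} (a+d)^2-4(ad-b^2)=(a-d)^2+4b^2 \end{equation*}
is never negative, so both roots are real.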