Eigenvalues and Eigenvectors

Table of Contents

1. Eigenvalues and Eigenvectors
  1.1. Checking Eigenvalues
  1.2. Finding Eigenvalues
  1.3. Properties of Eigenvalues and Eigenvectors
2. Nonreal Eigenvalues
  2.1. Matrix Powers

Say \(A\) is some \(n \times n\) matrix. For most vectors \(\mathbf{x}\), the product \(A\mathbf{x}\) results in a change of direction: the product does not lie on the same line through the origin as \(\mathbf{x}\). However, certain exceptional nonzero vectors, called eigenvectors, are mapped by \(A\) to scalar multiples of themselves. Thus we can write the following equation:

\begin{align} A\mathbf{x} = \lambda \mathbf{x} \notag \end{align}

where \(\lambda\) is an eigenvalue of \(A\).

This equation isn't in a very useful form for computation. Subtracting \(\lambda \mathbf{x} = \lambda I \mathbf{x}\) from both sides and factoring out \(\mathbf{x}\), we can write it as the following homogeneous system:

\begin{align} \boxed{(A - \lambda I)\mathbf{x} = \mathbf{0}} \end{align}

This gives us a new way to look at eigenvalues and eigenvectors. The question is: does the matrix \(A - \lambda I\) have a nontrivial nullspace? If so, that nullspace is called the eigenspace of \(\lambda\), and all nonzero vectors in that eigenspace are the eigenvectors corresponding to \(\lambda\).
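For a concrete example, multiplying the matrix below by \(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\) just scales the vector by \(3\), so \(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\) is an eigenvector with eigenvalue \(\lambda = 3\):

\begin{align} \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \notag \end{align}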

1.1. Checking Eigenvalues

To check if some \(\lambda\) is an eigenvalue of \(A\), we can check whether the system defined by (1) has any nontrivial solutions. This can be done either by row reducing \(A - \lambda I\) or by checking whether \(\det(A - \lambda I) = 0\).
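As a numerical illustration, here is a minimal NumPy sketch (the matrix and the candidate eigenvalue are my own examples, reusing the matrix from above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0  # candidate eigenvalue to check

# lam is an eigenvalue iff A - lam*I is singular,
# i.e. det(A - lam*I) == 0 (up to floating-point error)
d = np.linalg.det(A - lam * np.eye(2))
print(np.isclose(d, 0.0))  # True: 3 is an eigenvalue of A
```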

1.2. Finding Eigenvalues

The condition for \(\lambda\) to be an eigenvalue is that the system defined by (1) has nontrivial solutions; in other words, the matrix \(A - \lambda I\) cannot be invertible. Therefore, its determinant must be zero:

\begin{align} \det (A-\lambda I) = 0 \end{align}

This is known as the characteristic equation, and \(\det (A-\lambda I)\) is the characteristic polynomial. To find the eigenvalues of \(A\), we solve the characteristic equation for \(\lambda\).
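A minimal SymPy sketch of this procedure (the \(2 \times 2\) matrix is again my running example):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 1],
               [1, 2]])

# characteristic polynomial det(A - lambda*I)
p = (A - lam * sp.eye(2)).det()

print(sp.expand(p))                # lambda**2 - 4*lambda + 3
print(sp.solve(sp.Eq(p, 0), lam))  # [1, 3]
```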

1.3. Properties of Eigenvalues and Eigenvectors

  1. If \(\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_p\}\) is a set of eigenvectors of \(A\), each corresponding to a distinct eigenvalue, then that set of vectors is linearly independent.
  2. If a square matrix \(A\) is a triangular matrix, then its eigenvalues are just its diagonal entries. This is because \(A - \lambda I\) is also triangular, so \(\det(A - \lambda I)\) is the product of its diagonal entries \((a_{11} - \lambda)(a_{22} - \lambda) \cdots (a_{nn} - \lambda)\), which is zero exactly when \(\lambda\) equals a diagonal entry of \(A\) (a quick numerical check follows this list).
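
As promised, a short NumPy check of property 2 (the triangular matrix is an arbitrary example):

```python
import numpy as np

# An upper-triangular matrix: its eigenvalues should be its diagonal entries.
T = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 7.0]])

print(np.sort(np.linalg.eigvals(T)))  # [2. 3. 7.]
print(np.sort(np.diag(T)))            # [2. 3. 7.]
```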

2. Nonreal Eigenvalues

Consider a \(2 \times 2\) real matrix \(A\) with nonreal eigenvalues. This is possible because a polynomial with real coefficients can have nonreal roots; when the characteristic polynomial does, we cannot diagonalize \(A\) over \(\mathbb{R}\).

The key point here is that, since the characteristic polynomial has real coefficients, nonreal eigenvalues come in conjugate pairs \(a \pm bi\). Additionally, for a \(2 \times 2\) real matrix, when the eigenvalues are conjugates of each other, the corresponding eigenvectors are also conjugates of each other.
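A quick NumPy illustration of both points (the rotation matrix is my own example):

```python
import numpy as np

# A 90-degree rotation: a real matrix with no real eigenvalues
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

vals, vecs = np.linalg.eig(A)
print(vals)  # [0.+1.j 0.-1.j], a conjugate pair
print(vecs)  # the two eigenvector columns are conjugates of each other
```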

2.1. Matrix Powers

Since matrices with nonreal eigenvalues cannot be diagonalized over \(\mathbb{R}\), we have to find another way to compute their powers. Fortunately, every such \(2 \times 2\) matrix is similar to a matrix whose powers are easy to compute.

Consider \(C = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}, \: b \neq 0\). Then the eigenvalues of this matrix are \(\lambda = a \pm bi\). We can factor this matrix into two transformations by writing \(a\) and \(b\) in polar form, with \(a = r \cos \phi\), \(b = r \sin \phi\), and \(r = \sqrt{a^2 + b^2}\):

\begin{align} C = \begin{bmatrix} a & -b \\ b & a \end{bmatrix} = \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} \begin{bmatrix} \cos \phi & -\sin \phi \\ \sin \phi & \cos \phi \end{bmatrix} \notag \end{align}

Geometrically, this is applying a scaling transformation and then a rotation. Because the scaling matrix is a scalar multiple of the identity, the two factors commute: scaling then rotating is the same as rotating then scaling. Then, taking \(C^k\):

\begin{align} \begin{bmatrix} a & -b \\ b & a \end{bmatrix} ^k = \left ( \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} \begin{bmatrix} \cos \phi & -\sin \phi \\ \sin \phi & \cos \phi \end{bmatrix} \right ) ^k \notag \end{align}

Because the factors commute, the scaling factors collect into \(r^k\) and the rotations compose. Since applying a rotation by \(\phi\) a total of \(k\) times is the same as applying one rotation by \(k\phi\), we can write this as:

\begin{align} \boxed{\begin{bmatrix} a & -b \\ b & a \end{bmatrix} ^k = \begin{bmatrix} r^k & 0 \\ 0 & r^k \end{bmatrix} \begin{bmatrix} \cos (k\phi) & -\sin (k\phi) \\ \sin (k\phi) & \cos (k\phi) \end{bmatrix}} \end{align}
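
A minimal NumPy sketch verifying (3) numerically (the values of \(a\), \(b\), and \(k\) are arbitrary):

```python
import numpy as np

a, b, k = 1.0, 1.0, 5
C = np.array([[a, -b],
              [b,  a]])

# polar form: r = |a + bi|, phi = arg(a + bi)
r = np.hypot(a, b)
phi = np.arctan2(b, a)

direct = np.linalg.matrix_power(C, k)
rotation = np.array([[np.cos(k * phi), -np.sin(k * phi)],
                     [np.sin(k * phi),  np.cos(k * phi)]])
via_polar = r**k * rotation

print(np.allclose(direct, via_polar))  # True
```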

Critically, for a \(2 \times 2\) real matrix \(A\) with nonreal eigenvalues \(\lambda = a \pm bi, \: b \neq 0\), if for eigenvalue \(a - bi\) we have eigenvector \(\begin{bmatrix} c + di \\ e + fi \end{bmatrix}\), then \(A\) is similar to \(C\):

\begin{align} A = PCP^{-1} \end{align}

where \(P = \begin{bmatrix} c & d \\ e & f \end{bmatrix}\) and \(C = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}\).
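
A NumPy sketch of this factorization (the matrix \(A\) below is an arbitrary example with nonreal eigenvalues):

```python
import numpy as np

A = np.array([[0.5, -0.6],
              [0.75, 1.1]])  # real 2x2 matrix with eigenvalues 0.8 +/- 0.6i

vals, vecs = np.linalg.eig(A)
# pick the eigenvalue a - bi with b > 0 (negative imaginary part)
i = int(np.argmin(vals.imag))
lam, v = vals[i], vecs[:, i]

a, b = lam.real, -lam.imag
P = np.column_stack([v.real, v.imag])  # columns are Re(v) = [c, e] and Im(v) = [d, f]
C = np.array([[a, -b],
              [b,  a]])

print(np.allclose(A, P @ C @ np.linalg.inv(P)))  # True
```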

Last modified: 2025-10-13 09:50