4.4 Eigendecomposition and Diagonalization

Definition 4.8 A diagonal matrix has zeros on all off-diagonal elements: \[ \mathbf{D} = \begin{bmatrix} c_1 & 0 & \cdots & 0 \\ 0 & c_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & c_n \end{bmatrix}. \]

Lemma 4.1 Let \(\mathbf{D}\) be a diagonal matrix. Then \(\mathbf{D}\) has the following properties:

  • \(\det(\mathbf{D}) = \prod_i c_i\)
  • \(\mathbf{D}^k = \text{diag}(c_1^k, \dots, c_n^k)\)
  • \(\mathbf{D}^{-1} = \text{diag}(1/c_1, \dots, 1/c_n)\) (if all \(c_i \neq 0\))
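The three properties in Lemma 4.1 are straightforward to check numerically. The sketch below uses NumPy with an arbitrary diagonal matrix; the entries are illustrative, not taken from the text.

```python
# Numerically verify the properties of Lemma 4.1 for a sample
# diagonal matrix (entries chosen for illustration).
import numpy as np

c = np.array([1.5, -2.0, 5.0])
D = np.diag(c)

# det(D) is the product of the diagonal entries
assert np.isclose(np.linalg.det(D), np.prod(c))

# D^k = diag(c_1^k, ..., c_n^k)
assert np.allclose(np.linalg.matrix_power(D, 3), np.diag(c**3))

# D^{-1} = diag(1/c_1, ..., 1/c_n), valid because every c_i != 0
assert np.allclose(np.linalg.inv(D), np.diag(1.0 / c))
```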

Example 4.17 Consider the diagonal matrix \[ \mathbf{D} = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix}. \]

The determinant is the product of the diagonal entries: \[ \det(\mathbf{D}) = 2 \cdot (-1) \cdot 3 \cdot 4 = -24. \]

The inverse is obtained by taking the reciprocal of each diagonal entry: \[ \mathbf{D}^{-1} = \begin{bmatrix} \frac{1}{2} & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & \frac{1}{3} & 0 \\ 0 & 0 & 0 & \frac{1}{4} \end{bmatrix}. \]

Powers of a diagonal matrix are computed by raising each diagonal entry to the given power: \[ \mathbf{D}^5 = \begin{bmatrix} 2^5 & 0 & 0 & 0 \\ 0 & (-1)^5 & 0 & 0 \\ 0 & 0 & 3^5 & 0 \\ 0 & 0 & 0 & 4^5 \end{bmatrix} = \begin{bmatrix} 32 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 243 & 0 \\ 0 & 0 & 0 & 1024 \end{bmatrix}. \]
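The computations of Example 4.17 can be confirmed numerically; a short NumPy check:

```python
# Verify the determinant, inverse, and fifth power from Example 4.17.
import numpy as np

D = np.diag([2.0, -1.0, 3.0, 4.0])

assert np.isclose(np.linalg.det(D), -24.0)                 # determinant
assert np.allclose(np.linalg.inv(D),
                   np.diag([0.5, -1.0, 1.0 / 3.0, 0.25]))  # inverse
assert np.allclose(np.linalg.matrix_power(D, 5),
                   np.diag([32.0, -1.0, 243.0, 1024.0]))   # fifth power
```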

4.4.1 Diagonalizable Matrices

Definition 4.9 A matrix \(\mathbf{A} \in \mathbb{R}^{n \times n}\) is diagonalizable if there exists an invertible matrix \(\mathbf{P}\) such that: \[ \mathbf{D} = \mathbf{P}^{-1} \mathbf{A} \mathbf{P}, \] where \(\mathbf{D}\) is diagonal.

A matrix \(\mathbf{A}\) is diagonalizable if and only if it has \(n\) linearly independent eigenvectors: \[ \mathbf{A}\mathbf{P} = \mathbf{P}\mathbf{D} \quad \Leftrightarrow \quad \mathbf{A} \mathbf{p}_i = \lambda_i \mathbf{p}_i, \quad i=1,\dots,n. \] Here, the columns \(\mathbf{p}_i\) of \(\mathbf{P}\) are eigenvectors of \(\mathbf{A}\), and the diagonal entries of \(\mathbf{D}\) are the corresponding eigenvalues \(\lambda_i\).
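The equivalence \(\mathbf{A}\mathbf{P} = \mathbf{P}\mathbf{D}\) can be checked column-by-column; a sketch in NumPy, using an example matrix chosen for illustration:

```python
# Check A P = P D, and equivalently A p_i = lambda_i p_i for each column.
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)

# A P = P D as a single matrix equation
assert np.allclose(A @ P, P @ D)

# each column separately: A p_i = lambda_i p_i
for i in range(len(eigvals)):
    assert np.allclose(A @ P[:, i], eigvals[i] * P[:, i])
```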

4.4.2 Eigendecomposition Theorems

Theorem 4.6 Let \(\mathbf{A} \in \mathbb{R}^{n \times n}\). Then there exist a diagonal matrix \(\mathbf{D}\) and an invertible matrix \(\mathbf{P}\) whose columns are eigenvectors of \(\mathbf{A}\) such that \[ \mathbf{A} = \mathbf{P} \mathbf{D} \mathbf{P}^{-1} \] if and only if the eigenvectors of \(\mathbf{A}\) form a basis of \(\mathbb{R}^n\). In other words, only non-defective matrices (those with \(n\) linearly independent eigenvectors) are diagonalizable.

Theorem 4.7 A symmetric matrix \(\mathbf{S} \in \mathbb{R}^{n \times n}\) is always diagonalizable. By the spectral theorem, the eigenvectors can be chosen to form an orthonormal basis (ONB), giving \[ \mathbf{D} = \mathbf{P}^\top \mathbf{S} \mathbf{P}, \] with \(\mathbf{P}\) orthogonal (so \(\mathbf{P}^{-1} = \mathbf{P}^\top\)).
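For symmetric matrices, NumPy's `np.linalg.eigh` returns eigenvectors that already form an ONB. A sketch of Theorem 4.7 with an example symmetric matrix:

```python
# For symmetric S, eigh returns an orthogonal P; then P^T S P is diagonal.
import numpy as np

S = np.array([[4.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eigh(S)

# P is orthogonal: P^T P = I, so P^{-1} = P^T
assert np.allclose(P.T @ P, np.eye(2))

# D = P^T S P is diagonal with the eigenvalues on the diagonal
D = P.T @ S @ P
assert np.allclose(D, np.diag(eigvals))
```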

Geometrically, eigendecomposition represents a change of basis to the eigenbasis: \(\mathbf{P}^{-1}\) maps standard-basis coordinates into the eigenbasis, \(\mathbf{D}\) scales along each eigenvector by the corresponding eigenvalue \(\lambda_i\), and \(\mathbf{P}\) maps the scaled vectors back to the standard basis.

Example 4.18 Consider the matrix \[ \mathbf{A} = \begin{bmatrix} 4 & 1 \\ 1 & 2 \end{bmatrix}. \]

The characteristic polynomial is \[ \det(\mathbf{A} - \lambda \mathbf{I}) = \begin{vmatrix} 4 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = (4 - \lambda)(2 - \lambda) - 1 = \lambda^2 - 6\lambda + 7 = 0. \] Solving, \[ \lambda_{1,2} = 3 \pm \sqrt{2}. \]
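The roots of the characteristic polynomial \(\lambda^2 - 6\lambda + 7\) can be confirmed numerically:

```python
# Check that lambda^2 - 6 lambda + 7 has roots 3 +/- sqrt(2).
import numpy as np

roots = np.roots([1.0, -6.0, 7.0])
expected = [3.0 - np.sqrt(2.0), 3.0 + np.sqrt(2.0)]
assert np.allclose(sorted(roots), sorted(expected))
```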

The eigenvector for \(\lambda_1 = 3 + \sqrt{2}\) is found by solving \((\mathbf{A} - \lambda_1 \mathbf{I})\mathbf{v} = \mathbf{0}\): \[ \begin{bmatrix} 1 - \sqrt{2} & 1 \\ 1 & -1 - \sqrt{2} \end{bmatrix} \mathbf{v} = \mathbf{0}. \] A corresponding eigenvector is \[ \mathbf{v}_1 = \begin{bmatrix} 1 \\ \sqrt{2} - 1 \end{bmatrix}. \]

The eigenvector for \(\lambda_2 = 3 - \sqrt{2}\) is found by solving \((\mathbf{A} - \lambda_2 \mathbf{I})\mathbf{v} = \mathbf{0}\): \[ \begin{bmatrix} 1 + \sqrt{2} & 1 \\ 1 & -1 + \sqrt{2} \end{bmatrix} \mathbf{v} = \mathbf{0}. \] A corresponding eigenvector is \[ \mathbf{v}_2 = \begin{bmatrix} 1 \\ - (1 + \sqrt{2}) \end{bmatrix}. \]
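Both eigenvectors can be verified directly against the definition \(\mathbf{A}\mathbf{v} = \lambda\mathbf{v}\):

```python
# Check that v1 and v2 are eigenvectors of A for lambda_1 and lambda_2.
import numpy as np

s = np.sqrt(2.0)
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
v1 = np.array([1.0, s - 1.0])
v2 = np.array([1.0, -(1.0 + s)])

assert np.allclose(A @ v1, (3.0 + s) * v1)   # A v1 = lambda_1 v1
assert np.allclose(A @ v2, (3.0 - s) * v2)   # A v2 = lambda_2 v2
```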

Let \[ \mathbf{P} = \begin{bmatrix} 1 & 1 \\ \sqrt{2} - 1 & -(1 + \sqrt{2}) \end{bmatrix}, \quad \mathbf{D} = \begin{bmatrix} 3 + \sqrt{2} & 0 \\ 0 & 3 - \sqrt{2} \end{bmatrix}. \] Then the eigendecomposition of \(\mathbf{A}\) is \[ \mathbf{A} = \mathbf{P}\mathbf{D}\mathbf{P}^{-1}. \]
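As a final check, \(\mathbf{P}\mathbf{D}\mathbf{P}^{-1}\) does reconstruct \(\mathbf{A}\):

```python
# Numerical check of Example 4.18: P D P^{-1} reconstructs A.
import numpy as np

s = np.sqrt(2.0)
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
P = np.array([[1.0, 1.0],
              [s - 1.0, -(1.0 + s)]])
D = np.diag([3.0 + s, 3.0 - s])

assert np.allclose(P @ D @ np.linalg.inv(P), A)
```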

Eigendecomposition is defined only for square matrices. For general (possibly non-square) matrices, the singular value decomposition (SVD) is used instead.
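To illustrate the contrast, the SVD \(\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top\) exists even for a rectangular matrix; a minimal sketch with an example \(2 \times 3\) matrix:

```python
# The SVD exists for any matrix, square or not; here a 2x3 example.
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# U Sigma V^T reconstructs A exactly
assert np.allclose(U @ np.diag(sigma) @ Vt, A)
```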


Exercises

Exercise 4.27 Discuss two reasons why we might want to compute an eigendecomposition.

Exercise 4.28 Find an eigendecomposition for \[\mathbf{A} = \begin{bmatrix}0&1\\ 3&2 \end{bmatrix}.\] Use this to compute \(\mathbf{A}^4\).

Exercise 4.29 Find an eigendecomposition for \[\mathbf{A} = \begin{bmatrix}0&2\\ 2&3 \end{bmatrix}.\] Use this to compute \(\mathbf{A}^5\).

Exercise 4.30 Find an eigendecomposition for \[\mathbf{A} = \begin{bmatrix}2&3\\ 2&1 \end{bmatrix}.\] Use this to compute \(\mathbf{A}^3\).

Exercise 4.31 Show that, if \(\mathbf{A}\) has an eigendecomposition, then \(\det(\mathbf{A}) = \prod_{i=1}^n \lambda_{i}\), where the product runs over the eigenvalues of \(\mathbf{A}\).