Commutative Algebra/The Cayley–Hamilton theorem and Nakayama's lemma

Determinants within a commutative ring
We shall now derive the notion of a determinant in the setting of a commutative ring.

We shall later see that there exists exactly one determinant. Throughout, we refer to the axioms of definition 7.1: a determinant is a function $$\det: R^{n \times n} \to R$$, written on columns as $$\det(\mathbf a_1, \ldots, \mathbf a_n)$$, such that

1. $$\det(I_n) = 1$$,
2. $$\det A = 0$$ whenever two adjacent columns of $$A$$ coincide, and
3. $$\det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j + c \mathbf b_j, \mathbf a_{j+1}, \ldots, \mathbf a_n) = \det (\mathbf a_1, \ldots, \mathbf a_n) + c \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf b_j, \mathbf a_{j+1}, \ldots, \mathbf a_n)$$ for all $$c \in R$$ and all columns $$\mathbf b_j \in R^n$$.

Proofs (of theorem 7.2):

1. Let $$A = (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j, \mathbf a_{j+1}, \ldots, \mathbf a_n)$$, where the $$j$$-th column $$\mathbf a_j$$ is the zero vector. Then by axiom 3 for determinants, setting $$c = -1$$ and $$\mathbf b_j = \mathbf a_j$$,
 * $$\det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j, \mathbf a_{j+1}, \ldots, \mathbf a_n) = \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j - \mathbf a_j, \mathbf a_{j+1}, \ldots, \mathbf a_n) = \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j, \mathbf a_{j+1}, \ldots, \mathbf a_n) - \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j, \mathbf a_{j+1}, \ldots, \mathbf a_n) = 0$$.

Alternatively, we may also set $$c = 1$$ and $$\mathbf b_j = \mathbf a_j = \mathbf 0$$ to obtain
 * $$\det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j + c \mathbf b_j, \mathbf a_{j+1}, \ldots, \mathbf a_n) = (1 + c) \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j, \mathbf a_{j+1}, \ldots, \mathbf a_n)$$,

from which the theorem follows by subtracting $$\det A$$ from both sides.

These proofs correspond to the usual proofs that $$T0 = 0$$ for a linear map $$T$$ (in whatever context).

2. If we set $$\mathbf b_j = \mathbf a_{j+1}$$ or $$\mathbf b_j = \mathbf a_{j-1}$$ (depending on whether we add the column to the right or the column to the left of the current one), then axiom 3 gives us
 * $$\det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j + c \mathbf b_j, \mathbf a_{j+1}, \ldots, \mathbf a_n) = \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j, \mathbf a_{j+1}, \ldots, \mathbf a_n) + c \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf b_j, \mathbf a_{j+1}, \ldots, \mathbf a_n)$$,

where the latter determinant is zero by axiom 2, since it has two adjacent equal columns.

3. Consider the two matrices $$A := (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j, \mathbf a_{j+1}, \ldots, \mathbf a_n)$$ and $$B := (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_{j+1}, \mathbf a_j, \ldots, \mathbf a_n)$$. By 7.2, 2. and axiom 3 for determinants, we have
 * $$\begin{align} \det B & = \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_{j+1} + \mathbf a_j, \mathbf a_j, \ldots, \mathbf a_n) \\ & = \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_{j+1} + \mathbf a_j, -\mathbf a_{j+1}, \ldots, \mathbf a_n) \\ & = \det (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j, -\mathbf a_{j+1}, \ldots, \mathbf a_n) \\ & = - \det A, \end{align}$$ where the last step uses axiom 3 together with 1. to pull the factor $$-1$$ out of the $$j+1$$-th column.

4. We exchange the $$j$$-th and $$k$$-th column by first moving the $$j$$-th column successively to spot $$k$$ (using $$|j - k|$$ swaps) and the $$k$$-th column, which is now one step closer to the $$j$$-th spot, to spot $$j$$ using $$|j - k| - 1$$ swaps. In total, we used an odd number of swaps, and all the other columns are in the same place since they moved once to the right and once to the left. Hence, 4. follows from applying 3. to each swap.

5. Let's say we want to add $$c \cdot \mathbf a_k$$ to the $$j$$-th column. Then we first use 4. to put the $$j$$-th column adjacent to $$\mathbf a_k$$, then use 2. to do the addition without change to the determinant, and then use 4. again to put the $$j$$-th column back to its place. In total, the only change our determinant has suffered was twice multiplication by $$-1$$, which cancels even in a general ring.

6. Let's say that the $$j$$-th column and the $$k$$-th column are equal, $$k \neq j$$. Then by 5. we may subtract column $$j$$ from column $$k$$ (or, indeed, the other way round) without changing the determinant, obtain a matrix with a zero column, and apply 1.

7. Write $$\sigma$$ as a product of transpositions, apply 4. to each of them, and use further that $$\sgn$$ is a group homomorphism.

Note that we have only used axioms 2 & 3 in the preceding proofs.
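The properties just proven lend themselves to a mechanical check. The following Python sketch (purely illustrative; the ring $$\mathbb Z / 6 \mathbb Z$$, the test matrix and all identifiers are arbitrary choices) implements the determinant via the permutation formula that we shall derive in theorem 7.4 and verifies 4. and 6. over a ring which is not even an integral domain:

```python
from itertools import permutations
from math import prod

M = 6  # we compute in the commutative ring Z/6Z, which has zero divisors

def sign(p):
    # sgn(p) = (-1)^(number of inversions of p)
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    # permutation formula: det A = sum over sigma of sgn(sigma) a_{1,sigma(1)} ... a_{n,sigma(n)}
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n))) % M

def swap_cols(A, j, k):
    # exchange the j-th and k-th columns
    B = [row[:] for row in A]
    for row in B:
        row[j], row[k] = row[k], row[j]
    return B

A = [[1, 1, 0], [0, 2, 1], [1, 0, 2]]

# 4.: exchanging two columns multiplies the determinant by -1
assert det(swap_cols(A, 0, 2)) == -det(A) % M

# 6.: a matrix with two equal columns has determinant zero
assert det([[row[0], row[0], row[2]] for row in A]) == 0
```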

The following lemma will allow us to prove the uniqueness of the determinant, and also the formula $$\det(AB) = \det A \det B$$.

Lemma 7.3:

Let $$A = (a_{i, j})_{1 \le i, j \le n}$$ and $$B = (b_{i, j})_{1 \le i, j \le n}$$ be two $$n \times n$$ matrices with entries in a commutative ring $$R$$. Then
 * $$\det(AB) = \det A \sum_{\sigma \in S_n} \sgn(\sigma) b_{1, \sigma(1)} \cdots b_{n, \sigma(n)}$$.

Proof:

The matrix $$AB$$ has $$k$$-th column $$\sum_{\nu = 1}^n b_{\nu, k} \mathbf a_\nu$$. Hence, by axiom 3 for determinants and theorem 7.2, 7. and 6., we obtain, denoting $$AB =: C = (c_{i, j})_{1 \le i, j \le n} = (\mathbf c_1, \ldots, \mathbf c_n)$$:
 * $$\begin{align} \det(AB) &= \sum_{\nu_1 = 1}^n b_{\nu_1, 1} \det (\mathbf a_{\nu_1}, \mathbf c_2, \ldots, \mathbf c_n) \\ & = \sum_{\nu_1 = 1}^n \sum_{\nu_2 = 1}^n b_{\nu_1, 1} b_{\nu_2, 2} \det (\mathbf a_{\nu_1}, \mathbf a_{\nu_2}, \mathbf c_3, \ldots, \mathbf c_n) \\ & = \cdots = \sum_{\nu_1, \ldots, \nu_n = 1}^n b_{\nu_1, 1} \cdots b_{\nu_n, n} \det(\mathbf a_{\nu_1}, \ldots, \mathbf a_{\nu_n}) \\ & = \det A \sum_{\sigma \in S_n} \sgn(\sigma) b_{1, \sigma(1)} \cdots b_{n, \sigma(n)}, \end{align}$$ where in the last step all summands with a repeated index vanish by 6., the remaining summands correspond to the permutations $$\sigma: k \mapsto \nu_k$$ and satisfy $$\det(\mathbf a_{\sigma(1)}, \ldots, \mathbf a_{\sigma(n)}) = \sgn(\sigma) \det A$$ by 7., and finally we reindex $$\sigma \mapsto \sigma^{-1}$$ using $$\sgn(\sigma) = \sgn(\sigma^{-1})$$.

Proof (of theorem 7.4):

Let $$C \in R^{n \times n}$$ be an arbitrary matrix, and set $$A = I_n$$ and $$B = C$$ in lemma 7.3. Then we obtain by axiom 1 for determinants (the first time we use that axiom)
 * $$\det C = \det(I_n C) = 1 \cdot \sum_{\sigma \in S_n} \sgn(\sigma) c_{1, \sigma(1)} \cdots c_{n, \sigma(n)}$$.
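For instance, for $$n = 2$$ and $$n = 3$$ this formula spells out to
 * $$\det \begin{pmatrix} c_{1,1} & c_{1,2} \\ c_{2,1} & c_{2,2} \end{pmatrix} = c_{1,1} c_{2,2} - c_{1,2} c_{2,1}$$
and
 * $$\det \begin{pmatrix} c_{1,1} & c_{1,2} & c_{1,3} \\ c_{2,1} & c_{2,2} & c_{2,3} \\ c_{3,1} & c_{3,2} & c_{3,3} \end{pmatrix} = c_{1,1} c_{2,2} c_{3,3} + c_{1,2} c_{2,3} c_{3,1} + c_{1,3} c_{2,1} c_{3,2} - c_{1,3} c_{2,2} c_{3,1} - c_{1,1} c_{2,3} c_{3,2} - c_{1,2} c_{2,1} c_{3,3}$$,

one summand per permutation of $$S_2$$ resp. $$S_3$$; no division and no limit process occurs, which is why the formula makes sense over an arbitrary commutative ring.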

Proof (of theorem 7.5):

From lemma 7.3 and theorem 7.4 we may infer
 * $$\det(AB) = \det(A) \sum_{\sigma \in S_n} \sgn(\sigma) b_{1, \sigma(1)} \cdots b_{n, \sigma(n)} = \det(A) \det(B)$$.
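This identity, too, can be checked mechanically; the following Python sketch (illustrative only; the modulus and the matrices are arbitrary choices) verifies it for a pair of matrices over $$\mathbb Z / 6 \mathbb Z$$:

```python
from itertools import permutations
from math import prod

M = 6  # the commutative ring Z/6Z

def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    # permutation formula of theorem 7.4, reduced mod M
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n))) % M

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % M for j in range(n)] for i in range(n)]

A, B = [[1, 2], [3, 4]], [[5, 1], [2, 2]]
assert det(matmul(A, B)) == det(A) * det(B) % M  # theorem 7.5
```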

Proof (of theorem 7.6):

First of all, $$I_n$$ has zero entries everywhere except on the diagonal. Hence, if $$I_n = (a_{i, j})_{1 \le i, j \le n}$$, then the product $$a_{1, \sigma(1)} \cdots a_{n, \sigma(n)}$$ vanishes unless $$\sigma(1) = 1, \ldots, \sigma(n) = n$$, i.e. unless $$\sigma$$ is the identity. Hence $$\det(I_n) = 1$$.

Now let $$A$$ be a matrix whose $$j$$-th and $$j+1$$-th columns are equal, and let $$\tau \in S_n$$ denote the transposition of $$j$$ and $$j+1$$. The function
 * $$f: S_n \to S_n, f(\sigma) := \tau \circ \sigma = k \mapsto \begin{cases} \sigma(k) & \sigma(k) \notin \{j, j+1\} \\ j & \sigma(k) = j+1 \\ j+1 & \sigma(k) = j \end{cases}$$ is bijective, since the inverse is given by $$f$$ itself. Furthermore, since $$f$$ amounts to composing $$\sigma$$ with another swap, it is sign-reversing. Hence, we have
 * $$\begin{align} \det(A) &= \sum_{\sigma \in S_n} \sgn(\sigma) a_{1, \sigma(1)} \cdots a_{n, \sigma(n)} \\ &= \sum_{\sgn \sigma = 1} a_{1, \sigma(1)} \cdots a_{n, \sigma(n)} - \sum_{\sgn \sigma = -1} a_{1, \sigma(1)} \cdots a_{n, \sigma(n)} \\ & = \sum_{\sgn \sigma = 1} a_{1, \sigma(1)} \cdots a_{n, \sigma(n)} - \sum_{\sgn \sigma = 1} a_{1, f(\sigma)(1)} \cdots a_{n, f(\sigma)(n)} \end{align}$$. Now since the $$j$$-th and $$j+1$$-th columns of $$A$$ are identical, $$a_{k, \sigma(l)} = a_{k, f(\sigma)(l)}$$ for all $$k, l \in \{1, \ldots, n\}$$, as the two column indices differ at most by exchanging $$j$$ for $$j+1$$ or vice versa. Hence $$\det A = 0$$.

Linearity follows from the linearity of each summand:
 * $$\sum_{\sigma \in S_n} \sgn(\sigma) a_{1, \sigma(1)} \cdots (a_{\sigma^{-1}(j), j} + c b_{\sigma^{-1}(j), j}) \cdots a_{n, \sigma(n)} = \sum_{\sigma \in S_n} \sgn(\sigma) a_{1, \sigma(1)} \cdots a_{\sigma^{-1}(j), j} \cdots a_{n, \sigma(n)} + c \sum_{\sigma \in S_n} \sgn(\sigma) a_{1, \sigma(1)} \cdots b_{\sigma^{-1}(j), j} \cdots a_{n, \sigma(n)}$$.

Proof (of theorem 7.7):

Observe that inversion is a bijection on $$S_n$$ which is its own inverse ($$(\sigma^{-1})^{-1} = \sigma$$). Further observe that $$\sgn(\sigma) = \sgn(\sigma^{-1})$$, since we just apply all the transpositions in reverse order. Hence,
 * $$\det A = \sum_{\sigma \in S_n} \sgn(\sigma) a_{1, \sigma(1)} \cdots a_{n, \sigma(n)} = \sum_{\sigma^{-1} \in S_n} \sgn(\sigma^{-1}) a_{1, \sigma^{-1}(1)} \cdots a_{n, \sigma^{-1}(n)} = \sum_{\sigma \in S_n} \sgn(\sigma) a_{\sigma(1), 1} \cdots a_{\sigma(n), n} = \det A^t$$.

Proof 1 (of theorem 7.8):

We prove the theorem from the explicit formula for the determinant given by theorems 7.4 and 7.6.

Let $$k \in \{1, \ldots, n\}$$ be fixed. For each $$\nu \in \{1, \ldots, n\}$$, we define
 * $$f_\nu: S_{n-1} \to S_n, f_\nu(\sigma) := m \mapsto \begin{cases} k & m = \nu \\ \sigma(m) & m < \nu \wedge \sigma(m) < k \\ \sigma(m) + 1 & m < \nu \wedge \sigma(m) \ge k \\ \sigma(m-1) & m > \nu \wedge \sigma(m-1) < k \\ \sigma(m-1) + 1 & m > \nu \wedge \sigma(m-1) \ge k \end{cases}$$; that is, $$f_\nu(\sigma)$$ sends $$\nu$$ to $$k$$ and otherwise acts like $$\sigma$$ after relabelling the rows $$\neq \nu$$ and the columns $$\neq k$$. One checks that $$\sgn(f_\nu(\sigma)) = (-1)^{\nu + k} \sgn(\sigma)$$, and that every $$\pi \in S_n$$ arises as $$\pi = f_\nu(\sigma)$$ for exactly one pair $$(\nu, \sigma)$$, namely $$\nu = \pi^{-1}(k)$$. Then
 * $$\begin{align} \sum_{\nu=1}^n (-1)^{\nu + k} a_{\nu, k} \det A_{\nu, k} & = \sum_{\nu=1}^n (-1)^{\nu + k} a_{\nu, k} \sum_{\sigma \in S_{n-1}} \sgn(\sigma) a_{1, f_\nu(\sigma)(1)} \cdots a_{\nu-1, f_\nu(\sigma)(\nu-1)} a_{\nu+1, f_\nu(\sigma)(\nu+1)} \cdots a_{n, f_\nu(\sigma)(n)} \\ & = \sum_{\pi \in S_n} \sgn(\pi) a_{1, \pi(1)} \cdots a_{n, \pi(n)} = \det A. \end{align}$$
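The bookkeeping performed by the $$f_\nu$$ is perhaps easier to digest alongside a computation. The sketch below (illustrative Python; the test matrix is an arbitrary choice) evaluates the determinant once by the permutation formula and once by recursive expansion along a column, as in the theorem:

```python
from itertools import permutations
from math import prod

def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def minor(A, nu, k):
    # A_{nu,k}: cross out the nu-th row and the k-th column (0-indexed here)
    return [row[:k] + row[k + 1:] for r, row in enumerate(A) if r != nu]

def det_laplace(A, k=0):
    # expansion along the k-th column: sum over nu of (-1)^(nu+k) a_{nu,k} det A_{nu,k}
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** (nu + k) * A[nu][k] * det_laplace(minor(A, nu, k)) for nu in range(len(A)))

A = [[1, 1, 0], [0, 2, 1], [1, 0, 2]]
assert det_laplace(A) == det_leibniz(A) == 5
```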

Proof 2:

We note that all of the above derivations could have been done with rows instead of columns (which amounts to nothing more than exchanging $$a_{i, j}$$ with $$a_{j, i}$$ each time), and would have ended up with the same formula for the determinant since
 * $$\sum_{\sigma \in S_n} \sgn(\sigma) a_{1, \sigma(1)} \cdots a_{n, \sigma(n)} = \sum_{\sigma^{-1} \in S_n} \sgn(\sigma^{-1}) a_{1, \sigma^{-1}(1)} \cdots a_{n, \sigma^{-1}(n)} = \sum_{\sigma \in S_n} \sgn(\sigma) a_{\sigma(1), 1} \cdots a_{\sigma(n), n}$$

as argued in theorem 7.7.

Hence, we prove that the function $$R^{n \times n} \to R$$ given by the formula $$\sum_{\nu=1}^n (-1)^{\nu + k} a_{\nu, k} \det A_{\nu, k}$$ satisfies axioms 1–3 of definition 7.1 with rows instead of columns, and then apply theorem 7.4 with rows instead of columns.

1.

Set $$A = I_n$$ to obtain
 * $$\sum_{\nu=1}^n a_{\nu, k} (-1)^{\nu + k} \det A_{\nu, k} = (-1)^{2k} a_{k, k} \det A_{k, k} = 1 \cdot 1 = 1$$, since $$a_{\nu, k} = 0$$ for $$\nu \neq k$$ and $$A_{k, k} = I_{n-1}$$.

2.

Let $$A$$ have two equal adjacent rows, the $$j$$-th and $$j+1$$-th, say. Then
 * $$\sum_{\nu=1}^n a_{\nu, k} (-1)^{\nu + k} \det A_{\nu, k} = (-1)^{j + k} a_{j, k} \det A_{j, k} + (-1)^{j + 1 + k} a_{j+1, k} \det A_{j + 1, k} = 0$$,

since each of the $$A_{\nu, k}$$ has two equal adjacent rows except for $$\nu = j$$ and $$\nu = j+1$$, which is why, by theorem 7.6, the determinant is zero in all those cases; further $$a_{j, k} = a_{j+1, k}$$ (the two rows are equal) and $$A_{j, k} = A_{j + 1, k}$$, since in both we deleted "the same" row.

3.

Define $$B := (b_{i, j})_{1 \le i, j \le n} := (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf a_j + c \mathbf b_j, \mathbf a_{j+1}, \ldots, \mathbf a_n)^t$$, and for each $$\nu, k \in \{1, \ldots, n\}$$ define $$C_{\nu, k}$$ as the matrix obtained by crossing out the $$\nu$$-th row and the $$k$$-th column from the matrix $$C := (c_{i, j})_{1 \le i, j \le n} := (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf b_j, \mathbf a_{j+1}, \ldots, \mathbf a_n)^t$$. Then by theorem 7.6 and axiom 3 for the determinant,
 * $$\begin{align} \sum_{\nu=1}^n b_{\nu, k} (-1)^{\nu + k} \det B_{\nu, k} & = \sum_{\nu=1}^{j-1} a_{\nu, k} (-1)^{\nu + k} (\det A_{\nu, k} + c \det C_{\nu, k}) + (-1)^{j + k} (a_{j, k} + c b_{j, k}) \det A_{j, k} \\ & \qquad + \sum_{\nu = j+1}^n a_{\nu, k} (-1)^{\nu + k} (\det A_{\nu, k} + c \det C_{\nu, k}) \\ & = \sum_{\nu=1}^n a_{\nu, k} (-1)^{\nu + k} \det A_{\nu, k} + c \sum_{\nu=1}^n c_{\nu, k} (-1)^{\nu + k} \det C_{\nu, k}, \end{align}$$ where we used that $$B_{j, k} = A_{j, k} = C_{j, k}$$ and $$c_{j, k} = b_{j, k}$$. Hence linearity in the rows follows.

For the sake of completeness, we also note the following lemma:

Lemma 7.9:

Let $$A$$ be an invertible matrix. Then $$\det (A)$$ is invertible.

Proof:

Indeed, $$\det(A)^{-1} = \det(A^{-1})$$ due to the multiplicativity of the determinant.

The converse is also true and will be proven in the next subsection.
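For example, over $$R = \mathbb Z$$ the units are exactly $$\pm 1$$, so an integer matrix has an inverse with integer entries if and only if its determinant is $$\pm 1$$: the matrix $$\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$$ has determinant $$1$$ and inverse $$\begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$$, whereas $$\begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$$ has determinant $$2$$ and is invertible over $$\mathbb Q$$, but not over $$\mathbb Z$$.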

Exercises

 * Exercise 7.1.1: Argue that the determinant, seen as a map from the set of all square matrices to itself (where scalars are $$1 \times 1$$-matrices), is idempotent.

Cramer's rule in the general case
Proof 1 (of theorem 7.10):

Let $$j \in \{1, \ldots, n\}$$ be arbitrary but fixed. The determinant of $$A$$ is linear in the $$j$$-th column, and hence gives rise to a linear map $$L_j: R^n \to R$$ mapping any vector to the determinant of $$A$$ with the $$j$$-th column replaced by that vector. If $$\mathbf a_j$$ is the $$j$$-th column of $$A$$, then $$L_j(\mathbf a_j) = \det(A)$$. Furthermore, if we insert a different column $$\mathbf a_k$$ ($$k \neq j$$) into $$L_j$$, we obtain zero, since we obtain the determinant of a matrix in which the column $$\mathbf a_k$$ appears twice. We now consider the system of equations
 * $$\begin{cases} a_{1, 1} x_1 + \cdots + a_{1, n} x_n & = b_1 \\ & \vdots \\ a_{n, 1} x_1 + \cdots + a_{n, n} x_n & = b_n, \end{cases}$$ where $$(x_1, \ldots, x_n)^T$$ is the unique solution of the system $$Ax = b$$, which exists and is given by $$A^{-1} b$$, since $$A$$ is invertible. Since $$L_j$$ is linear, we find a $$1 \times n$$ matrix $$(c_1, \ldots, c_n)$$ such that for all $$\mathbf v \in R^n$$
 * $$(c_1, \ldots, c_n) \cdot \mathbf v = L_j(\mathbf v)$$;

in fact, due to theorem 7.8, $$c_k = (-1)^{j+k} \det(A_{k, j})$$. We now add up the rows of the linear equation system above in the following way: we take $$c_1$$ times the first row, add $$c_2$$ times the second row and so on. Since $$c_1 a_{1, l} + \cdots + c_n a_{n, l} = L_j(\mathbf a_l)$$, which equals $$\det(A)$$ for $$l = j$$ and zero otherwise, this yields the result
 * $$\det(A) x_j = L_j(\mathbf b)$$.

Due to lemma 7.9, $$\det(A)$$ is invertible. Hence, we get
 * $$x_j = (\det(A))^{-1} L_j(\mathbf b) = (\det(A))^{-1} \det(A_j)$$,

where $$A_j$$ denotes the matrix $$A$$ with the $$j$$-th column replaced by $$\mathbf b$$; hence the theorem.

Proof 2:

For all $$j \in \{1, \ldots, n\}$$, we define the matrix
 * $$X_j := \begin{pmatrix} 1 & 0 & \cdots & 0 & x_1 & 0 & \cdots & & 0 \\ 0 & 1 & \cdots & 0 & x_2 & 0 & \cdots & & 0 \\ \vdots & & \ddots & & \vdots & & & & \vdots \\ \vdots & & & & \vdots & \ddots & & & \vdots \\ 0 & & \cdots & 0 & x_{n-1} & 0 & \cdots & 1 & 0 \\ 0 & & \cdots & 0 & x_n & 0 & \cdots & 0 & 1 \end{pmatrix};$$ this matrix is the identity matrix with the $$j$$-th column replaced by the vector $$(x_1, \ldots, x_n)^T$$. By expanding along the $$j$$-th column (theorem 7.8), we find that the determinant of this matrix is given by $$\det(X_j) = x_j$$.

We now note that if $$A = (\mathbf a_1, \ldots, \mathbf a_n)$$, then $$A X_j = (\mathbf a_1, \ldots, \mathbf a_{j-1}, A \mathbf x, \mathbf a_{j+1}, \ldots, \mathbf a_n) = (\mathbf a_1, \ldots, \mathbf a_{j-1}, \mathbf b, \mathbf a_{j+1}, \ldots, \mathbf a_n) = A_j$$, and hence $$X_j = A^{-1} A_j$$. Hence
 * $$x_j = \det(A^{-1} A_j) = \det(A^{-1}) \det(A_j) = \det(A)^{-1} \det (A_j)$$,

where the last equality follows as in lemma 7.9.
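To stress that no field is needed, here is a Python sketch of the procedure (illustrative only; the modulus $$10$$ and the system are arbitrary choices, and `pow(d, -1, M)`, available from Python 3.8 on, computes the inverse of a unit modulo `M`):

```python
from itertools import permutations
from math import prod

M = 10  # Z/10Z; all we need is that det(A) is a unit in this ring

def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n))) % M

def cramer(A, b):
    # x_j = det(A)^(-1) det(A_j), where A_j is A with the j-th column replaced by b
    n = len(A)
    d_inv = pow(det(A), -1, M)  # inverse of the unit det(A) in Z/MZ
    return [d_inv * det([row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]) % M
            for j in range(n)]

A, b = [[3, 1], [2, 3]], [1, 2]  # det(A) = 7, a unit in Z/10Z
x = cramer(A, b)
assert all(sum(A[i][j] * x[j] for j in range(2)) % M == b[i] for i in range(2))
```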

Proof (of theorem 7.11):

For $$j \in \{1, \ldots, n\}$$, we set $$\mathbf b_j := e_j = (0, \ldots, 0, 1, 0, \ldots, 0)^T$$, where the one is at the $$j$$-th place. Further, we set $$L_j$$ to be the linear function from proof 1 of theorem 7.10, and $$M_j$$ its matrix. Then $$\operatorname{adj}(A)$$ is given by
 * $$\operatorname{adj}(A) = \begin{pmatrix} - M_1 - \\ \vdots \\ - M_n - \end{pmatrix}$$ due to theorem 7.8. Hence,
 * $$\operatorname{adj}(A) A = \begin{pmatrix} - M_1 A - \\ \vdots \\ - M_n A - \end{pmatrix} = \begin{pmatrix} \det(A) & 0 & \cdots & 0 \\ 0 & \det(A) & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \det(A) \end{pmatrix},$$ where we used the properties of $$L_j$$ established in proof 1 of theorem 7.10.
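The identity can again be tested mechanically; the following Python sketch (illustrative; the matrix is an arbitrary choice) builds $$\operatorname{adj}(A)$$ entrywise from $$\operatorname{adj}(A)_{j, k} = (-1)^{j+k} \det(A_{k, j})$$ and confirms $$\operatorname{adj}(A) A = \det(A) I_n$$ over $$\mathbb Z$$:

```python
from itertools import permutations
from math import prod

def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def adj(A):
    # row j of adj(A) is the matrix M_j of the linear form L_j from the proof above
    n = len(A)
    return [[(-1) ** (j + k) * det(minor(A, k, j)) for k in range(n)] for j in range(n)]

A = [[1, 2, 0], [3, 1, 4], [2, 2, 1]]
n, B, d = len(A), adj(A), det(A)
BA = [[sum(B[i][k] * A[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
assert BA == [[d if i == j else 0 for j in range(n)] for i in range(n)]  # adj(A) A = det(A) I_n
```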

The theorems
Now we may finally apply the machinery we have set up to prove the following two fundamental theorems.

Note that the polynomial in $$\phi$$ is monic, that is, the leading coefficient is $$1$$, the unit of the ring in question.

Proof (of theorem 7.12): Assume that $$\{m_1, \ldots, m_n\}$$ is a generating set for $$M$$. Since $$\phi(M) \subseteq I M$$, we may write
 * $$\phi(m_j) = \sum_{k=1}^n b_{j, k} m_k, ~j \in \{1, \ldots, n\}$$ (*),

where $$b_{j, k} \in I$$ for all $$j, k$$. We now define a new commutative ring as follows:
 * $$\tilde R := R[\phi] = \{r_0 + r_1 \phi + \cdots + r_d \phi^d \mid d \in \mathbb N, r_0, \ldots, r_d \in R\}$$,

where we regard each element $$r$$ of $$R$$ as the endomorphism $$m \mapsto rm$$ on $$M$$. That is, $$\tilde R$$ is the subring of the endomorphism ring of $$M$$ (where multiplication is given by composition) generated by $$\phi$$ together with these scalar endomorphisms. Since $$\phi$$ is $$R$$-linear, $$\tilde R$$ is commutative.

Now to every $$n \times n$$ matrix $$A$$ with entries in $$\tilde R$$ we may associate a function
 * $$A: M^n \to M^n, A\left((x_1, \ldots, x_n)^T\right) := \left( \sum_{k=1}^n a_{1, k}(x_k), \ldots, \sum_{k=1}^n a_{n, k}(x_k) \right)^T$$.

By exploiting the linearities of all functions involved, it is easy to see that for another $$n \times n$$ matrix with entries in $$\tilde R$$ called $$B$$, the associated function of $$AB$$ equals the composition of the associated functions of $$A$$ and $$B$$; that is, $$(AB)(x) = A(B(x))$$.

Now with this in mind, we may rewrite the system (*) as follows:
 * $$A(x) = 0$$,

where $$x := (m_1, \ldots, m_n)^T$$ and $$A$$ has $$j, k$$-th entry $$\delta_{j, k} \phi - b_{j, k} \in \tilde R$$. Now define $$B := \operatorname{adj}(A)$$. From Cramer's rule (theorem 7.11) we obtain that
 * $$BA = I_n \det(A)$$,

which is why
 * $$((\det A)(m_1), \ldots, (\det A)(m_n))^T = (BA)(x) = B(A(x)) = B(0) = \mathbf 0$$, the zero vector.

Hence, $$\det A \in \tilde R$$ is the zero mapping, since it is $$R$$-linear and sends all generators to zero. Now further, as can be seen e.g. from the representation given in theorem 7.4 (the identity permutation contributes $$(\phi - b_{1, 1}) \cdots (\phi - b_{n, n})$$, and every other permutation contributes a product containing at least two factors from $$I$$), it has the form
 * $$\phi^n + a_{n-1} \phi^{n-1} + \cdots + a_1 \phi + a_0$$

for suitable $$a_{n-1}, \ldots, a_0 \in I$$.
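In the special case $$R = \mathbb Z$$, $$M = \mathbb Z^2$$, $$I = R$$ and $$\phi$$ given by a $$2 \times 2$$ matrix $$A$$, the theorem reduces to the classical statement that $$A$$ satisfies its characteristic polynomial $$\phi^2 - \operatorname{tr}(A) \phi + \det(A)$$. A minimal Python check of this instance (the matrix is an arbitrary choice):

```python
A = [[1, 2], [3, 4]]
tr = A[0][0] + A[1][1]                     # trace of A
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant of A

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A2 = matmul(A, A)
# A^2 - tr(A) A + det(A) I_2 is the zero matrix, as Cayley-Hamilton predicts
assert [[A2[i][j] - tr * A[i][j] + (d if i == j else 0) for j in range(2)]
        for i in range(2)] == [[0, 0], [0, 0]]
```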

Proof (of theorem 7.13):

Choose $$\phi = \operatorname{Id}_M$$ in theorem 7.12, which is admissible since $$\phi(M) = M = IM$$ by assumption, to obtain for $$m \in M$$ that
 * $$\phi^n(m) + a_{n-1} \phi^{n-1}(m) + \cdots + a_1 \phi(m) + a_0 m = (1 + a_{n-1} + \cdots + a_1 + a_0)m = 0$$

for suitable $$a_{n-1}, \ldots, a_0 \in I$$, since every power of the identity is the identity.
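As a concrete illustration of the conclusion, take $$R = \mathbb Z$$, $$I = 2 \mathbb Z$$ and $$M = \mathbb Z / 5 \mathbb Z$$. Then $$IM = M$$, since multiplication by $$2$$ is surjective on $$\mathbb Z / 5 \mathbb Z$$ (as $$2 \cdot 3 \equiv 1 \mod 5$$), and indeed an element of $$1 + I$$ annihilates $$M$$: for instance $$5 = 1 + 4$$ with $$4 \in I$$.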