2025-07-27

1221: Laplace Expansion of Determinant of Square Matrix over Commutative Ring And Its Corollary

<The previous article in this series | The table of contents of this series | The next article in this series>

description/proof of that Laplace expansion of determinant of square matrix over commutative ring and its corollary

Topics


About: matrices space

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that the Laplace expansion of the determinant of any square matrix over any commutative ring holds, and of its corollary.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
\(R\): \(\in \{\text{ the commutative rings }\}\)
\(n\): \(\in \mathbb{N} \setminus \{0, 1\}\)
\(M\): \(\in \{\text{ the } n \times n R \text{ matrices }\}\)
//

Statements:
\(\forall j \in \{1, ..., n\} (det M = \sum_{l \in \{1, ..., n\}} M^j_l M_{j, l})\)
\(\land\)
\(\forall l \in \{1, ..., n\} (det M = \sum_{j \in \{1, ..., n\}} M^j_l M_{j, l})\)
\(\land\)
\(\forall j, j' \in \{1, ..., n\} \text{ such that } j \neq j' (\sum_{l \in \{1, ..., n\}} M^j_l M_{j', l} = 0)\)
\(\land\)
\(\forall l, l' \in \{1, ..., n\} \text{ such that } l \neq l' (\sum_{j \in \{1, ..., n\}} M^j_l M_{j, l'} = 0)\)
//

After all, \(det M \delta^j_{j'} = \sum_{l \in \{1, ..., n\}} M^j_l M_{j', l}\) and \(det M \delta^l_{l'} = \sum_{j \in \{1, ..., n\}} M^j_l M_{j, l'}\).
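For example, just as a check and not as a part of the description, when \(n = 2\), the cofactors are \(M_{1, 1} = M^2_2\), \(M_{1, 2} = - M^2_1\), \(M_{2, 1} = - M^1_2\), and \(M_{2, 2} = M^1_1\), so the 1st statement for \(j = 1\) reads \(det M = M^1_1 M^2_2 - M^1_2 M^2_1\), and the 3rd statement for \(j = 1, j' = 2\) reads \(M^1_1 (- M^1_2) + M^1_2 M^1_1 = 0\).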


2: Proof


Whole Strategy: Step 1: see that \(det M = \sum_{l \in \{1, ..., n\}} M^j_l M_{j, l}\); Step 2: see that \(det M = \sum_{j \in \{1, ..., n\}} M^j_l M_{j, l}\); Step 3: see that \(\sum_{l \in \{1, ..., n\}} M^j_l M_{j', l} = 0\); Step 4: see that \(\sum_{j \in \{1, ..., n\}} M^j_l M_{j, l'} = 0\).

Step 1:

For each \(j \in \{1, ..., n\}\), let us see that \(det M = \sum_{l \in \{1, ..., n\}} M^j_l M_{j, l}\).

\(det M = \sum_{\sigma \in S_n} sgn \sigma M^1_{\sigma_1} ... M^n_{\sigma_n}\), by the definition.

As each term has the \(M^j_{\sigma_j}\) factor, the sum can be expressed as \(\sum_{l \in \{1, ..., n\}} M^j_l N^l\), where \(N^l \in R\).

We are going to see that \(N^l = M_{j, l}\).

\(N^l = \sum_{\sigma \in \{\sigma \in S_n \vert \sigma_j = l\}} sgn \sigma M^1_{\sigma_1} ... \widehat{M^j_{\sigma_j}} ... M^n_{\sigma_n}\), where the hat means that the factor is missing.

Let \(\sigma_0 \in S_n\) be the permutation such that \({\sigma_0}_j = l\) and \(({\sigma_0}_1, ..., \widehat{{\sigma_0}_j}, ..., {\sigma_0}_n) = (1, ..., \widehat{l}, ..., n)\).

\(sgn \sigma_0 = (-1)^{j + l}\), because when \(l \le j\), it is \((1, ..., l, ..., j, ..., n) \mapsto (1, ..., \widehat{l}, ..., j, l, j + 1, ..., n)\), which means moving \(l\) to the \(j\)-th position, switching \(l\) with \(l + 1\), ..., and switching \(l\) with \(j\), doing \(j - l\) switches, but \((-1)^{j - l} = (-1)^{j - l} * 1 = (-1)^{j - l} * (-1)^{2 l} = (-1)^{j + l}\); when \(j \lt l\), it is \((1, ..., j, ..., l, ..., n) \mapsto (1, ..., j - 1, l, j, ..., \widehat{l}, ..., n)\), which means moving \(l\) to the \(j\)-th position, switching \(l\) with \(l - 1\), ..., and switching \(l\) with \(j\), doing \(l - 1 - (j - 1) = l - j\) switches, but \((-1)^{l - j} = (-1)^{l - j} * 1 = (-1)^{l - j} * (-1)^{2 j} = (-1)^{j + l}\).
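For example, with \(n = 4\), \(j = 3\), and \(l = 1\), \(\sigma_0 = (2, 3, 1, 4)\): \((1, 2, 3, 4) \mapsto (2, 1, 3, 4) \mapsto (2, 3, 1, 4)\) takes \(j - l = 2\) switches, and indeed, \(sgn \sigma_0 = (-1)^2 = 1 = (-1)^{3 + 1}\).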

\(\sigma = \sigma' \circ \sigma_0\), where \(\sigma'\) is the corresponding permutation of \((1, ..., \widehat{l}, ..., j, j + 1, ..., n)\) or \((1, ..., j - 1, j, ..., \widehat{l}, ..., n)\).

\(sgn \sigma = sgn \sigma' sgn \sigma_0 = (-1)^{j + l} sgn \sigma'\).

When \(\sigma\) goes round \(\{\sigma \in S_n \vert \sigma_j = l\}\), \(\sigma'\) goes round all the permutations of \((1, ..., \widehat{l}, ..., j, j + 1, ..., n)\) or \((1, ..., j - 1, j, ..., \widehat{l}, ..., n)\), which is practically \(S_{n - 1}\): I said "practically" because the definition of the \((n - 1)\)-symmetric group requires that the underlying set be \(\{1, ..., n - 1\}\), but this is just a renaming of the elements.

So, \(N^l = \sum_{\sigma' \in S_{n - 1}} (-1)^{j + l} sgn \sigma' M^1_{\sigma'_1} ... \widehat{M^j_l} ... M^n_{\sigma'_n} = (-1)^{j + l} \sum_{\sigma' \in S_{n - 1}} sgn \sigma' M^1_{\sigma'_1} ... \widehat{M^j_l} ... M^n_{\sigma'_n} = M_{j, l}\), the \((j, l)\)-cofactor.

So, \(det M = \sum_{l \in \{1, ..., n\}} M^j_l M_{j, l}\).
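As a sanity check that is not part of the proof, here is a minimal Python sketch of Step 1, taking the commutative ring to be the integers and using 0-based indices (the parity of \(j + l\) is unaffected by the index shift); the sample matrix and the helper names 'sign', 'det', and 'cofactor' are assumptions of this sketch, not of this article.

from itertools import permutations
from math import prod

def sign(p):
    # sign of the permutation given as a tuple of 0-based images, via the inversion count
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def det(M):
    # the permutation-sum definition of the determinant used in this proof
    n = len(M)
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def cofactor(M, j, l):
    # the (j, l)-cofactor: (-1)^(j + l) times the determinant of the minor
    n = len(M)
    minor = [[M[r][c] for c in range(n) if c != l] for r in range(n) if r != j]
    return (-1) ** (j + l) * det(minor)

M = [[2, -1, 0, 3], [1, 4, -2, 5], [0, 7, 1, -1], [3, 2, 6, 0]]
n = len(M)

# Step 1: the expansion along each row, j, equals det M
for j in range(n):
    assert det(M) == sum(M[j][l] * cofactor(M, j, l) for l in range(n))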

Step 2:

For each \(l \in \{1, ..., n\}\), let us see that \(det M = \sum_{j \in \{1, ..., n\}} M^j_l M_{j, l}\).

By the Note for the definition of determinant of square matrix over ring, \(det M = \sum_{\sigma \in S_n} sgn \sigma M^{\sigma_1}_1 ... M^{\sigma_n}_n\).

As is expected, the logic is parallel to Step 1.

As each term has the \(M^{\sigma_l}_l\) factor, the sum can be expressed as \(\sum_{j \in \{1, ..., n\}} M^j_l N_j\), where \(N_j \in R\).

We are going to see that \(N_j = M_{j, l}\).

\(N_j = \sum_{\sigma \in \{\sigma \in S_n \vert \sigma_l = j\}} sgn \sigma M^{\sigma_1}_1 ... \widehat{M^{\sigma_l}_l} ... M^{\sigma_n}_n\), where the hat means that the factor is missing.

Let \(\sigma_0 \in S_n\) be the permutation such that \({\sigma_0}_l = j\) and \(({\sigma_0}_1, ..., \widehat{{\sigma_0}_l}, ..., {\sigma_0}_n) = (1, ..., \widehat{j}, ..., n)\).

\(sgn \sigma_0 = (-1)^{j + l}\), as before.

\(\sigma = \sigma' \circ \sigma_0\), where \(\sigma'\) is the corresponding permutation of \((1, ..., \widehat{j}, ..., l, l + 1, ..., n)\) or \((1, ..., l - 1, l, ..., \widehat{j}, ..., n)\).

\(sgn \sigma = sgn \sigma' sgn \sigma_0 = (-1)^{j + l} sgn \sigma'\).

When \(\sigma\) goes round \(\{\sigma \in S_n \vert \sigma_l = j\}\), \(\sigma'\) goes round all the permutations of \((1, ..., \widehat{j}, ..., l, l + 1, ..., n)\) or \((1, ..., l - 1, l, ..., \widehat{j}, ..., n)\), which is practically \(S_{n - 1}\).

So, \(N_j = \sum_{\sigma' \in S_{n - 1}} (-1)^{j + l} sgn \sigma' M^{\sigma'_1}_1 ... \widehat{M^j_l} ... M^{\sigma'_n}_n = (-1)^{j + l} \sum_{\sigma' \in S_{n - 1}} sgn \sigma' M^{\sigma'_1}_1 ... \widehat{M^j_l} ... M^{\sigma'_n}_n = M_{j, l}\), the \((j, l)\)-cofactor.

So, \(det M = \sum_{j \in \{1, ..., n\}} M^j_l M_{j, l}\).
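The column expansion of Step 2 can be checked likewise, reusing 'det', 'cofactor', 'M', and 'n' from the sketch at the end of Step 1:

# Step 2: the expansion along each column, l, equals det M
for l in range(n):
    assert det(M) == sum(M[j][l] * cofactor(M, j, l) for j in range(n))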

Step 3:

Let \(j, j' \in \{1, ..., n\}\) be any such that \(j \neq j'\).

Let \(M'\) be the matrix made by replacing the \(j'\)-th row of \(M\) with the \(j\)-th row.

As is well known, \(det M' = 0\), because it has 2 identical rows.

But by Step 1, expanding by the \(j'\)-th row, \(det M' = \sum_{l \in \{1, ..., n\}} M'^{j'}_l M'_{j', l} = \sum_{l \in \{1, ..., n\}} M^j_l M_{j', l}\), because \(M'_{j', l} = M_{j', l}\) and \(M'^{j'}_l = M^j_l\).

So, \(\sum_{l \in \{1, ..., n\}} M^j_l M_{j', l} = 0\).
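The Step 3 argument itself can be mirrored in the same sketch (again reusing the helpers from the Step 1 sketch): replacing the \(j'\)-th row with the \(j\)-th row gives a matrix with 2 identical rows, whose determinant vanishes and whose expansion along the \(j'\)-th row is exactly the sum in question.

# Step 3: M_ is M with the j_-th row replaced by the j-th row; expand M_ along the j_-th row
for j in range(n):
    for j_ in range(n):
        if j == j_:
            continue
        M_ = [row[:] for row in M]
        M_[j_] = M[j][:]
        assert det(M_) == 0
        assert sum(M[j][l] * cofactor(M, j_, l) for l in range(n)) == 0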

Step 4:

Let \(l, l' \in \{1, ..., n\}\) be any such that \(l \neq l'\).

Let \(M'\) be the matrix made by replacing the \(l'\)-th column of \(M\) with the \(l\)-th column.

As is well known, \(det M' = 0\), because it has 2 identical columns.

But by Step 2, expanding by the \(l'\)-th column, \(det M' = \sum_{j \in \{1, ..., n\}} M'^j_{l'} M'_{j, l'} = \sum_{j \in \{1, ..., n\}} M^j_l M_{j, l'}\), because \(M'_{j, l'} = M_{j, l'}\) and \(M'^j_{l'} = M^j_l\).

So, \(\sum_{j \in \{1, ..., n\}} M^j_l M_{j, l'} = 0\).
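Tying the 4 statements together in the same sketch, the "After all" identities say, entry by entry, that \(M A = A M = (det M) I\), where \(I\) is the identity matrix and \(A\) is the matrix whose \((l, j)\) entry is the cofactor \(M_{j, l}\) (commonly called the adjugate of \(M\)); the names below are, again, only this sketch's.

# the corollary, packaged as M A = A M = (det M) I with A[l][j] = cofactor(M, j, l)
d = det(M)
A = [[cofactor(M, j, l) for j in range(n)] for l in range(n)]
MA = [[sum(M[j][l] * A[l][j_] for l in range(n)) for j_ in range(n)] for j in range(n)]
AM = [[sum(A[l][j] * M[j][l_] for j in range(n)) for l_ in range(n)] for l in range(n)]
I = [[d if a == b else 0 for b in range(n)] for a in range(n)]
assert MA == I and AM == I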


References


<The previous article in this series | The table of contents of this series | The next article in this series>