2024-10-27

836: For Invertible Square Matrix, from Top Row Downward Through Any Row, Each Row Can Be Changed to Have 1 1 Component and 0 Others Without Duplication to Keep Matrix Invertible

<The previous article in this series | The table of contents of this series | The next article in this series>

description/proof of that for invertible square matrix, from top row downward through any row, each row can be changed to have 1 1 component and 0 others without duplication to keep matrix invertible

Topics


About: matrix

The table of contents of this article


Starting Context



Target Context


• The reader will have a description and a proof of the proposition that for any invertible square matrix, from the top row downward through any row, each row can be changed to have a single 1 component and 0 for the others, without any duplication of the 1 positions, to keep the matrix invertible.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$F$: $\in \{\text{ the fields }\}$
$M$: $\in \{\text{ the } n \times n \text{ } F \text{ invertible matrices }\}$, with the $j$-th row denoted as $M^j$
//

Statements:
$M^1$ can be changed to $(0, ..., 0, 1, 0, ..., 0)$ where the $1$ is at the $j_1$-th column, with the new matrix, $N_1$, invertible.
...
$M^k$ can be changed to $(0, ..., 0, 1, 0, ..., 0)$ where the $1$ is at the $j_k$-th column, with the new matrix, $N_k$, invertible.

$\{j_1, ..., j_k\}$ is distinct.
//
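As an illustration of the statements (not part of the structured description above; the example matrix and the function name, greedy_unit_rows, are my own, and the field is assumed to be the real numbers), here is a minimal Python sketch of the cumulative procedure: each $N_k$ keeps the rows already changed for $N_{k - 1}$ and changes the $k$-th row to a unit row at a not-yet-used column, checking invertibility numerically instead of via the minor determinants used in the proof below (the 2 conditions are equivalent).

```python
import numpy as np

def greedy_unit_rows(M, tol=1e-9):
    """From the top row downward, change each row of the invertible matrix M
    into a unit row (one 1 component and 0 others), choosing a not-yet-used
    column each time so that the matrix stays invertible.

    Returns the final matrix and the chosen column indices (0-based)."""
    N = np.array(M, dtype=float)
    n = N.shape[0]
    used = set()
    columns = []
    for k in range(n):
        for j in range(n):
            if j in used:
                continue
            candidate = N.copy()
            candidate[k, :] = 0.0
            candidate[k, j] = 1.0
            if abs(np.linalg.det(candidate)) > tol:  # the candidate is still invertible
                N = candidate
                used.add(j)
                columns.append(j)
                break
        else:
            raise ValueError("no admissible column found; was M invertible?")
    return N, columns

M = [[1, 2, 3],
     [0, 1, 4],
     [5, 6, 0]]  # det M = 1, so M is invertible
N, columns = greedy_unit_rows(M)
print(columns)  # [0, 2, 1]: distinct columns, one per row
print(N)        # a reordered identity matrix, still invertible
```

The proposition is what guarantees that the inner loop always finds some admissible column, so the procedure never gets stuck; the tolerance is only a floating-point convenience for this small integer example.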


2: Note


This proposition directly talks only about "from the top row downward", but of course, some middle rows can be changed, because we can just reorder the rows to move the to-be-changed rows to the top rows, change the top rows, and reorder the rows to restore the original order, without changing the invertibility.

This proposition directly talks only about rows, but of course, columns can be changed instead, because we can just transpose the matrix, change the rows, and transpose the result back, without changing the invertibility.
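As a small demonstration of the reordering argument (a sketch with an example matrix of my own, over the real numbers), the following changes only the middle row of an invertible matrix by moving it to the top, changing it there, and restoring the original row order; row swaps only flip the sign of the determinant, so invertibility is preserved throughout.

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])   # det M = 1
perm = [1, 0, 2]                  # move the 2nd row to the top
P = np.eye(3)[perm]               # the corresponding permutation matrix
A = P @ M                         # reorder the rows
A[0] = [1.0, 0.0, 0.0]            # change the (now) top row to a unit row
                                  # (for this example, column 1 happens to keep the determinant nonzero)
B = np.linalg.inv(P) @ A          # restore the original row order
print(np.linalg.det(B))           # nonzero: only the 2nd row of M was changed, and B is invertible
```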


3: Proof


Whole Strategy: Step 1: see that any square matrix is invertible if and only if the determinant is nonzero; Step 2: Laplace expand $\det M$ with respect to the 1st row, change $M^1$ to $(0, ..., 0, 1, 0, ..., 0)$ where the $1$ is at the $j_1$-th column with the new matrix, $N_1$, and see that $\det N_1 = 1 (-1)^{s_1} \det m_1 (M) \neq 0$, where $s_1$ is the appropriate natural number and $m_1$ is the operation that takes the appropriate minor; Step 3: Laplace expand $\det m_1 (M)$ with respect to the 1st row, change $M^2$ to $(0, ..., 0, 1, 0, ..., 0)$ where the $1$ is at the $j_2$-th column with the new matrix, $N_2$, and see that $\det N_2 = 1 (-1)^{s_1} 1 (-1)^{s_2} \det m_2 m_1 (M) \neq 0$, where $s_2$ and $m_2$ are likewise; Step 4: suppose that $M^1, ..., M^{l - 1}$ have been already changed, Laplace expand $\det m_{l - 1} ... m_1 (M)$ with respect to the 1st row, change $M^l$ to $(0, ..., 0, 1, 0, ..., 0)$ where the $1$ is at the $j_l$-th column with the new matrix, $N_l$, and see that $\det N_l = 1 (-1)^{s_1} ... 1 (-1)^{s_l} \det m_l ... m_1 (M) \neq 0$, where $s_l$ and $m_l$ are likewise.

Step 1:

Any square matrix is invertible if and only if the determinant is nonzero, by the proposition that any square matrix is invertible if and only if its determinant is nonzero.

So, all we need to check is that $\det N_k \neq 0$.

Step 2:

Let us Laplace expand $\det M$ with respect to the 1st row.

$\det M = \sum_j M^1_j (-1)^{1 + j} \det m_{1, j} (M)$, where $m_{1, j}$ is the operator that takes the $(1, j)$ minor.

As $\det M \neq 0$, at least one $\det m_{1, j} (M)$ is nonzero, because otherwise, $\det M = \sum_j M^1_j (-1)^{1 + j} 0 = 0$, a contradiction.

Let us take any such $j_1$ and change $M^1$ to $(0, ..., 0, 1, 0, ..., 0)$ where the $1$ is at the $j_1$-th column, with the new matrix, $N_1$.

Then, $\det N_1 = 1 (-1)^{1 + j_1} \det m_{1, j_1} (M) \neq 0$: $m_{1, j_1} (N_1) = m_{1, j_1} (M)$, because the minor does not contain the 1st row, which is the only changed row.

Let us denote $s_1 := 1 + j_1$ and $m_1 := m_{1, j_1}$, and $\det N_1 = 1 (-1)^{s_1} \det m_1 (M) \neq 0$.
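As a worked instance of Step 2 (the matrix is an illustrative example of my own, over the real numbers, not part of the original argument): $\det M = 1 \cdot (-24) - 2 \cdot (-20) + 3 \cdot (-5) = 1 \neq 0$, and $\det m_{1, 1} (M) = -24 \neq 0$, so $j_1 = 1$ can be taken, with $s_1 = 1 + j_1 = 2$ and $\det N_1 = 1 \cdot (-1)^{2} \cdot (-24) = -24 \neq 0$.

```latex
M = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix}, \quad
m_{1, 1} (M) = \begin{pmatrix} 1 & 4 \\ 6 & 0 \end{pmatrix}, \quad
N_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix}
```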

Step 3:

Let us Laplace expand $\det m_1 (M)$ with respect to the 1st row.

The 1st row consists of $M^2$ without the $j_1$-th column.

There is an $M^2_{j_2} \neq 0$ whose corresponding minor, $m_2 m_1 (M)$, has a nonzero determinant, because otherwise, each term of the expansion would be zero and the total, $\det m_1 (M)$, would be zero, a contradiction against $\det N_1 \neq 0$.

$j_2 \neq j_1$, because $m_1 (M)$ has excluded the $j_1$-th column.

Let us change $M^2$ to $(0, ..., 0, 1, 0, ..., 0)$ where the $1$ is at the $j_2$-th column, with the new matrix, $N_2$.

Then, $\det N_2 = 1 (-1)^{s_1} \det m_1 (N_2) = 1 (-1)^{s_1} 1 (-1)^{s_2} \det m_2 m_1 (N_2) = 1 (-1)^{s_1} 1 (-1)^{s_2} \det m_2 m_1 (M) \neq 0$, where $s_2$ is $1$ plus the position of the $j_2$-th column of $M$ within $m_1 (M)$ and $m_2$ is the operation that takes the corresponding minor: $m_2 m_1 (N_2) = m_2 m_1 (M)$, because neither side contains any part of $M^1, M^2$, the only changed rows.
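Continuing the same illustrative example from Step 2: the 1st row of $m_1 (M)$ is $M^2 = (0, 1, 4)$ without the $j_1 = 1$-st column, i.e., $(1, 4)$, and $\det m_1 (M) = 1 \cdot (-1)^{1 + 1} \cdot 0 + 4 \cdot (-1)^{1 + 2} \cdot 6 = -24$, whose nonzero term sits at the original column $j_2 = 3$; so $M^2_{j_2} = 4 \neq 0$, $\det m_2 m_1 (M) = 6 \neq 0$, $s_2 = 1 + 2 = 3$ (the $j_2$-th column of $M$ is the 2nd column of $m_1 (M)$), and $\det N_2 = 1 \cdot (-1)^{2} \cdot 1 \cdot (-1)^{3} \cdot 6 = -6 \neq 0$.

```latex
m_1 (M) = \begin{pmatrix} 1 & 4 \\ 6 & 0 \end{pmatrix}, \quad
m_2 m_1 (M) = \begin{pmatrix} 6 \end{pmatrix}, \quad
N_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 5 & 6 & 0 \end{pmatrix}
```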

Step 4:

Let us suppose that $M^1, ..., M^{l - 1}$ have been already changed with $\{j_1, ..., j_{l - 1}\}$ distinct and $\det N_{l - 1} = 1 (-1)^{s_1} ... 1 (-1)^{s_{l - 1}} \det m_{l - 1} ... m_1 (M) \neq 0$.

Let us Laplace expand $\det m_{l - 1} ... m_1 (M)$ with respect to the 1st row.

The 1st row consists of $M^l$ without the $j_1, ..., j_{l - 1}$-th columns.

There is an $M^l_{j_l} \neq 0$ whose corresponding minor, $m_l m_{l - 1} ... m_1 (M)$, has a nonzero determinant, because otherwise, each term of the expansion would be zero and the total, $\det m_{l - 1} ... m_1 (M)$, would be zero, a contradiction against $\det N_{l - 1} \neq 0$.

$\{j_1, ..., j_l\}$ is distinct, because $m_{l - 1} ... m_1 (M)$ has excluded the $j_1, ..., j_{l - 1}$-th columns.

Let us change $M^l$ to $(0, ..., 0, 1, 0, ..., 0)$ where the $1$ is at the $j_l$-th column, with the new matrix, $N_l$.

Then, $\det N_l = 1 (-1)^{s_1} \det m_1 (N_l) = 1 (-1)^{s_1} 1 (-1)^{s_2} \det m_2 m_1 (N_l) = ... = 1 (-1)^{s_1} ... 1 (-1)^{s_l} \det m_l ... m_1 (N_l) = 1 (-1)^{s_1} ... 1 (-1)^{s_l} \det m_l ... m_1 (M) \neq 0$: $m_l ... m_1 (N_l) = m_l ... m_1 (M)$, because neither side contains any part of $M^1, ..., M^l$, the only changed rows.
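In the same illustrative example, the last step, $l = 3$, takes the only remaining column, $j_3 = 2$: the 1st (and only) row of $m_2 m_1 (M) = (6)$ is $M^3 = (5, 6, 0)$ without the 1st and 3rd columns, so $M^3_{j_3} = 6 \neq 0$, $s_3 = 1 + 1 = 2$, and, with the usual convention that the determinant of the empty minor is $1$, $\det N_3 = 1 \cdot (-1)^{2} \cdot 1 \cdot (-1)^{3} \cdot 1 \cdot (-1)^{2} \cdot 1 = -1 \neq 0$; $N_3$ is a reordered identity matrix, as Step 5 below notes.

```latex
N_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad
\det N_3 = -1 \neq 0
```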

Step 5:

So, for each $1 \le k \le n$, $\det N_k \neq 0$.

We can go all the way down to $k = n$, and $N_n$ will be a reordered identity matrix.


References


<The previous article in this series | The table of contents of this series | The next article in this series>