2025-09-21

1315: For Lie Group Homomorphism, if Differential at Identity Is 0, Map Is Constant over Connected Component


description/proof of that for Lie group homomorphism, if differential at identity is 0, map is constant over connected component

Topics


About: C^\infty manifold

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that for any Lie group homomorphism, if the differential at the identity is 0, the map is constant over each connected component.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$G_1$: $\in$ { the Lie groups }
$G_2$: $\in$ { the Lie groups }
$f$: $: G_1 \to G_2$, $\in$ { the Lie group homomorphisms }
$df_1$: $: T_1 G_1 \to T_1 G_2$, $=$ the differential at $1$
//

Statements:
$df_1 = 0$
$\implies$
$\forall T \in$ { the connected components of $G_1$ } ($f\vert_T \in$ { the constant maps })
//


2: Note


In particular, for the connected component that contains $1$, $T_1$, $f\vert_{T_1} = 1$, because $f\vert_{T_1}(1) = f(1) = 1$.


3: Proof


Whole Strategy: Step 1: see that $f$ is a constant rank map; Step 2: conclude the proposition.

Step 1:

Any Lie group homomorphism has a constant rank, by the proposition that for any 2 Lie group homomorphisms between any Lie groups, the map as the multiplication of the 1st homomorphism and the multiplicative inverse of the 2nd homomorphism has a constant rank: the immediate corollary mentioned in the Note.

So, $f$ has a constant rank, but as $f$ has the rank $0$ at $1$ (because $df_1 = 0$), $f$ has the constant rank $0$.

Step 2:

Let $T$ be any connected component of $G_1$.

By the proposition that for any $C^\infty$ map between any $C^\infty$ manifolds, if the map has the constant rank $0$, the map is constant over each connected component, $f\vert_T$ is constant.
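
The following is a minimal numerical sketch (an addition to this article, not part of the original proof), in Python with NumPy, of a hypothetical instance: $G_1 = O(2)$, which has 2 connected components, $G_2 = (\mathbb{R} \setminus \{0\}, \cdot)$, and $f = \det$, a Lie group homomorphism with $df_1 = 0$ (the tangent space at $1$ consists of the skew-symmetric matrices, on which the differential of $\det$ is the trace, which vanishes); $f$ is indeed constant over each connected component.

import numpy as np

# Hypothetical illustration: G1 = O(2), G2 = (R \ {0}, *), f = det.

def rotation(t):
    # a curve through the identity inside the identity component of O(2)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

reflection = np.diag([1.0, -1.0])  # an element of the non-identity component of O(2)

# df_1 along the curve rotation(t): d/dt det(rotation(t)) at t = 0 is ~ 0
eps = 1e-6
d_at_identity = (np.linalg.det(rotation(eps)) - np.linalg.det(rotation(-eps))) / (2 * eps)
print(d_at_identity)  # ~ 0.0, consistent with df_1 = 0

# f is constant over each connected component
ts = np.linspace(0.0, 6.0, 7)
print([round(float(np.linalg.det(rotation(t))), 6) for t in ts])               # all 1.0
print([round(float(np.linalg.det(reflection @ rotation(t))), 6) for t in ts])  # all -1.0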


References



1314: For 2 Lie Group Homomorphisms Between Lie Groups, Map as Multiplication of 1st Homomorphism and Multiplicative Inverse of 2nd Homomorphism Has Constant Rank


description/proof of that for 2 Lie group homomorphisms between Lie groups, map as multiplication of 1st homomorphism and multiplicative inverse of 2nd homomorphism has constant rank

Topics


About: C^\infty manifold

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that for any 2 Lie group homomorphisms between any Lie groups, the map as the multiplication of the 1st homomorphism and the multiplicative inverse of the 2nd homomorphism has a constant rank.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$G_1$: $\in$ { the Lie groups }
$G_2$: $\in$ { the Lie groups }
$f$: $: G_1 \to G_2$, $\in$ { the Lie group homomorphisms }
$f'$: $: G_1 \to G_2$, $\in$ { the Lie group homomorphisms }
$h$: $: G_1 \to G_2, g \mapsto f(g) (f'(g))^{-1}$
//

Statements:
$\forall g \in G_1 (Rank(dh_g) = Rank(dh_1))$
//


2: Note


$h$ is not any Lie group homomorphism in general, because $h(g g') = f(g g') (f'(g g'))^{-1} = f(g) f(g') (f'(g) f'(g'))^{-1} = f(g) f(g') (f'(g'))^{-1} (f'(g))^{-1} = f(g) h(g') (f'(g))^{-1}$, which is not guaranteed to equal $h(g) h(g') = f(g) (f'(g))^{-1} f(g') (f'(g'))^{-1}$, unless $G_2$ is Abelian.

As an immediate corollary, $f$ has a constant rank, because $f'$ can be taken to be $f' = 1$, which is a Lie group homomorphism, and then $h(g) = f(g) (f'(g))^{-1} = f(g) 1^{-1} = f(g) 1 = f(g)$, so, $h = f$.


3: Proof


Whole Strategy: Step 1: see that $h(g_0 g) = R_{(f'(g_0))^{-1}} \circ L_{f(g_0)} \circ h (g)$ and take $h': G_1 \to G_2, g \mapsto h(g_0 g)$, $= R_{(f'(g_0))^{-1}} \circ L_{f(g_0)} \circ h = h \circ L_{g_0}$; Step 2: see that $dh'_1 = d(R_{(f'(g_0))^{-1}})_{f(g_0)} \circ d(L_{f(g_0)})_1 \circ dh_1$ and $dh'_1 = dh_{g_0} \circ d(L_{g_0})_1$; Step 3: conclude the proposition.

Step 1:

$h$ is $C^\infty$, because $f$ and $f'$ are $C^\infty$, and the multiplicative inverse map and the multiplication map are $C^\infty$: it is $: G_1 \to G_1 \times G_1 \to G_2 \times G_2 \to G_2 \times G_2 \to G_2$, $g \mapsto (g, g) \mapsto (f(g), f'(g)) \mapsto (f(g), (f'(g))^{-1}) \mapsto f(g) (f'(g))^{-1}$.

For each $g_0 \in G_j$, let $L_{g_0}: G_j \to G_j, g \mapsto g_0 g$ and $R_{g_0}: G_j \to G_j, g \mapsto g g_0$ be the left translation and the right translation by $g_0$, which are some diffeomorphisms.

For each $g_0, g \in G_1$, let us see that $h(g_0 g) = R_{(f'(g_0))^{-1}} \circ L_{f(g_0)} \circ h (g)$.

$h(g_0 g) = f(g_0 g) (f'(g_0 g))^{-1} = f(g_0) f(g) (f'(g_0) f'(g))^{-1} = f(g_0) f(g) (f'(g))^{-1} (f'(g_0))^{-1} = f(g_0) h(g) (f'(g_0))^{-1} = L_{f(g_0)}(h(g)) (f'(g_0))^{-1} = R_{(f'(g_0))^{-1}}(L_{f(g_0)}(h(g)))$.

So, let us take $h': G_1 \to G_2, g \mapsto h(g_0 g)$, $= R_{(f'(g_0))^{-1}} \circ L_{f(g_0)} \circ h = h \circ L_{g_0}$, which is $C^\infty$ as a composition of $C^\infty$ maps.

Step 2:

$dh'_1 = d(R_{(f'(g_0))^{-1}})_{f(g_0) h(1)} \circ d(L_{f(g_0)})_{h(1)} \circ dh_1$ and $dh'_1 = dh_{g_0 1} \circ d(L_{g_0})_1$.

As $h(1) = f(1) (f'(1))^{-1} = 1 1^{-1} = 1 1 = 1$, $dh'_1 = d(R_{(f'(g_0))^{-1}})_{f(g_0)} \circ d(L_{f(g_0)})_1 \circ dh_1$ and $dh'_1 = dh_{g_0} \circ d(L_{g_0})_1$.

Step 3:

$d(R_{(f'(g_0))^{-1}})_{f(g_0)}: T_{f(g_0)} G_2 \to T_{f(g_0) (f'(g_0))^{-1}} G_2$, which is a 'vectors spaces - linear morphisms' isomorphism, because $R_{(f'(g_0))^{-1}}$ is a diffeomorphism.

$d(L_{f(g_0)})_1: T_1 G_2 \to T_{f(g_0)} G_2$, which is a 'vectors spaces - linear morphisms' isomorphism, because $L_{f(g_0)}$ is a diffeomorphism.

$dh_1: T_1 G_1 \to T_1 G_2$.

$dh_{g_0}: T_{g_0} G_1 \to T_{h(g_0)} G_2$.

$d(L_{g_0})_1: T_1 G_1 \to T_{g_0} G_1$, which is a 'vectors spaces - linear morphisms' isomorphism, because $L_{g_0}$ is a diffeomorphism.

By the proposition that for any linear map between any vectors spaces, any 'vectors spaces - linear morphisms' isomorphism onto the domain of the linear map, and any 'vectors spaces - linear morphisms' isomorphism from any superspace of the codomain of the linear map, the composition of the linear map after the 1st isomorphism and before the 2nd isomorphism has the rank of the linear map, $Rank(dh_1) = Rank(dh'_1) = Rank(dh_{g_0})$.
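
As an illustration (an addition, not from the original article), here is a minimal numerical sketch in Python with NumPy, assuming the hypothetical choice $G_1 = (\mathbb{R}, +)$, $G_2 = SO(2)$, $f(t) = R(a t)$, and $f'(t) = R(b t)$, where $R(\theta)$ is the rotation by $\theta$; then $h(t) = f(t) (f'(t))^{-1} = R((a - b) t)$, and the numerically estimated rank of $dh_t$ is the same at every sampled point.

import numpy as np

# Hypothetical illustration: G1 = (R, +), G2 = SO(2),
# f(t) = R(a t), f'(t) = R(b t) (both Lie group homomorphisms),
# h(t) = f(t) f'(t)^{-1} = R((a - b) t).

def R(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

a, b = 2.0, 0.5

def h(t):
    return R(a * t) @ R(b * t).T  # the inverse of a rotation is its transpose

def rank_of_dh(t, eps=1e-6):
    # numerical Jacobian of the 4 matrix entries of h with respect to t
    J = (h(t + eps) - h(t - eps)).reshape(-1, 1) / (2 * eps)
    return np.linalg.matrix_rank(J, tol=1e-4)

print([rank_of_dh(t) for t in [-3.0, -1.0, 0.0, 1.0, 2.5]])  # the same rank at every sampled point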


References



1313: For C^\infty Map Between C^\infty Manifolds, if Map Has Constant Rank 0, Map Is Constant over Connected Component


description/proof of that for C^\infty map between C^\infty manifolds, if map has constant rank 0, map is constant over connected component

Topics


About: C^\infty manifold

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that for any $C^\infty$ map between any $C^\infty$ manifolds, if the map has the constant rank $0$, the map is constant over each connected component.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$M_1$: $\in$ { the $C^\infty$ manifolds }
$M_2$: $\in$ { the $C^\infty$ manifolds }
$f$: $: M_1 \to M_2$, $\in$ { the $C^\infty$ maps }
//

Statements:
$\forall m \in M_1 (Rank(f)_m = 0)$
$\implies$
$\forall T \in$ { the connected components of $M_1$ } ($f\vert_T \in$ { the constant maps })
//


2: Note


$M_1$ and $M_2$ are required to be without boundary for this proposition, because this proposition uses the rank theorem for $C^\infty$ map between $C^\infty$ manifolds, which requires the domain and the codomain to be without boundary.

As an immediate corollary, when $M_1$ is connected, $f$ is constant.


3: Proof


Whole Strategy: Step 1: see that around each $m \in M_1$, there is an open neighborhood, $U_m$, over which $f$ is constant; Step 2: conclude the proposition.

Step 1:

Let $m \in M_1$ be any.

As $f$ has the constant rank $0$, by the rank theorem for $C^\infty$ map between $C^\infty$ manifolds, there are a chart around $m$, $(U_m \subseteq M_1, \phi_m)$, and a chart around $f(m)$, $(U_{f(m)} \subseteq M_2, \phi_{f(m)})$, such that $\phi_{f(m)} \circ f \circ \phi_m^{-1} \vert_{\phi_m(U_m)}: \phi_m(U_m) \to \phi_{f(m)}(U_{f(m)})$ is $(x^1, ..., x^{d_1}) \mapsto (0, ..., 0)$.

That means that $f$ is constant over $U_m$.

Step 2:

Let $T \subseteq M_1$ be any connected component of $M_1$.

$T$ is a connected topological subspace of $M_1$, by the proposition that any connected topological component is exactly any connected topological subspace that cannot be made larger.

$f\vert_T: T \to M_2$ is a map that is anywhere locally constant over a connected topological space: around each $t \in T$, there is an open neighborhood of $t$ on $M_1$, $U_t \subseteq M_1$, over which $f$ is constant by Step 1, and $U_t \cap T \subseteq T$ is an open neighborhood of $t$ on $T$ over which $f\vert_T$ is constant.

By the proposition that any map that is anywhere locally constant on any connected topological space is globally constant, $f\vert_T$ is globally constant.
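
As an illustration (an addition, not from the original article), here is a minimal sketch in Python with NumPy, assuming the hypothetical example $M_1 = \mathbb{R} \setminus \{0\}$ (2 connected components), $M_2 = \mathbb{R}$, and the $C^\infty$ map $f$ with $f(x) = 0$ for $x < 0$ and $f(x) = 1$ for $x > 0$, which has the constant rank $0$ and is constant over each connected component although not globally constant.

import numpy as np

# Hypothetical illustration: M1 = R \ {0} (2 connected components), M2 = R,
# f(x) = 0 for x < 0 and f(x) = 1 for x > 0, a C-infinity map with constant rank 0.

def f(x):
    return np.where(x < 0, 0.0, 1.0)

eps = 1e-6
xs_neg = np.array([-3.0, -1.0, -0.2])
xs_pos = np.array([0.2, 1.0, 3.0])

# the 1x1 Jacobian vanishes at every sampled point of the domain, so the rank is 0 there
for x in np.concatenate([xs_neg, xs_pos]):
    print(float((f(x + eps) - f(x - eps)) / (2 * eps)))  # ~ 0.0

# and f is constant over each connected component, though not globally constant
print(f(xs_neg))  # [0. 0. 0.]
print(f(xs_pos))  # [1. 1. 1.]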


References



1312: Rank of C^\infty Map Between C^\infty Manifolds with Boundary at Point


definition of rank of C^\infty map between C^\infty manifolds with boundary at point

Topics


About: C^\infty manifold

The table of contents of this article


Starting Context



Target Context


  • The reader will have a definition of rank of $C^\infty$ map between $C^\infty$ manifolds with boundary at point.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$M_1$: $\in$ { the $C^\infty$ manifolds with boundary }
$M_2$: $\in$ { the $C^\infty$ manifolds with boundary }
$f$: $: M_1 \to M_2$, $\in$ { the $C^\infty$ maps }
$m$: $\in M_1$
$df_m$: $: T_m M_1 \to T_{f(m)} M_2$, $=$ the differential of $f$ at $m$
$Rank(f)_m$: $= Rank(df_m)$
//

Conditions:
//


2: Note


In general, the rank of $f$ changes from point to point.

When the rank of $f$ is the same at all the points of $M_1$, $f$ is said to have "constant rank".

When at each point of $M_1$, there is a neighborhood of the point over which $f$ has the same rank, $f$ is said to have "locally constant rank".
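
As an illustration of the definition (an addition, not from the original article), here is a minimal sketch in Python with NumPy, assuming the hypothetical map $f: \mathbb{R}^2 \to \mathbb{R}^2, (x, y) \mapsto (x^2 - y^2, 2 x y)$: the numerically estimated $Rank(f)_m$ is $0$ at the origin and $2$ elsewhere, so the rank indeed changes from point to point.

import numpy as np

# Hypothetical illustration: f: R^2 -> R^2, f(x, y) = (x^2 - y^2, 2 x y).
# Rank(f)_m = Rank(df_m) changes from point to point: 0 at the origin, 2 elsewhere.

def f(p):
    x, y = p
    return np.array([x**2 - y**2, 2 * x * y])

def jacobian(p, eps=1e-6):
    # numerical differential df_p as a 2x2 matrix
    cols = []
    for i in range(2):
        e = np.zeros(2)
        e[i] = eps
        cols.append((f(p + e) - f(p - e)) / (2 * eps))
    return np.column_stack(cols)

for p in [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.3, -2.0])]:
    print(p, np.linalg.matrix_rank(jacobian(p), tol=1e-4))  # 0 at the origin, 2 elsewhere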


References



1311: For Linear Map Between Vectors Spaces, 'Vectors Spaces - Linear Morphisms' Isomorphism onto Domain of Linear Map, and 'Vectors Spaces - Linear Morphisms' Isomorphism from Superspace of Codomain of Linear Map, Composition of Linear Map After 1st Isomorphism and Before 2nd Isomorphism Has Rank of Linear Map


description/proof of that for linear map between vectors spaces, 'vectors spaces - linear morphisms' isomorphism onto domain of linear map, and 'vectors spaces - linear morphisms' isomorphism from superspace of codomain of linear map, composition of linear map after 1st isomorphism and before 2nd isomorphism has rank of linear map

Topics


About: vectors space

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that for any linear map between any vectors spaces, any 'vectors spaces - linear morphisms' isomorphism onto the domain of the linear map, and any 'vectors spaces - linear morphisms' isomorphism from any superspace of the codomain of the linear map, the composition of the linear map after the 1st isomorphism and before the 2nd isomorphism has the rank of the linear map.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$F$: $\in$ { the fields }
$V_1$: $\in$ { the $F$ vectors spaces }
$V_2$: $\in$ { the $F$ vectors spaces }
$f_1$: $: V_1 \to V_2$, $\in$ { the linear maps }
$V_0$: $\in$ { the $F$ vectors spaces }
$f_0$: $: V_0 \to V_1$, $\in$ { the 'vectors spaces - linear morphisms' isomorphisms }
$V'_2$: $\in$ { the $F$ vectors spaces }, such that $V_2 \in$ { the vectors subspaces of $V'_2$ }
$V_3$: $\in$ { the $F$ vectors spaces }
$f_2$: $: V'_2 \to V_3$, $\in$ { the 'vectors spaces - linear morphisms' isomorphisms }
//

Statements:
$Rank(f_2 \circ f_1 \circ f_0) = Rank(f_1)$
//


2: Note


The codomain of $f_0$ is $V_1$ with the vectors space structure of $V_1$ and $V_2$ is a vectors subspace of $V'_2$ (not just a subset), which are the points.

Refer to the proposition that for any linear map between any vectors spaces and any 'vectors spaces - linear morphisms' isomorphism, the composition of the linear map after the isomorphism does not necessarily have the rank of the linear map.


3: Proof


Whole Strategy: Step 1: see that $f_2 \circ f_1 \circ f_0$ is linear; Step 2: see that $Ran(f_1 \circ f_0) = Ran(f_1)$; Step 3: see that $f_2\vert_{Ran(f_1)}: Ran(f_1) \to f_2(Ran(f_1))$ is a 'vectors spaces - linear morphisms' isomorphism, see that $f_2 \circ f_1 \circ f_0 = f_2\vert_{Ran(f_1)} \circ f_1 \circ f_0$, and see that $Dim(Ran(f_2 \circ f_1 \circ f_0)) = Dim(Ran(f_1 \circ f_0))$.

Step 1:

$f_2 \circ f_1 \circ f_0$ is linear, which is because the codomain of $f_0$ is the domain of $f_1$ (more generally, a vectors subspace of the domain of $f_1$) and the codomain of $f_1$ is a vectors subspace of the domain of $f_2$: just $V_1 \subseteq V_1$ and $V_2 \subseteq V'_2$ sets-wise does not guarantee the linearity.

We needed to check the linearity, because otherwise, $Rank(f_2 \circ f_1 \circ f_0)$ would not be defined.

Step 2:

Let us see that $Ran(f_1 \circ f_0) = Ran(f_1)$.

For each $v \in Ran(f_1 \circ f_0)$, $v = f_1(f_0(v_0))$ for a $v_0 \in V_0$, but $v_1 := f_0(v_0) \in V_1$ and $v = f_1(v_1) \in Ran(f_1)$.

For each $v \in Ran(f_1)$, $v = f_1(v_1)$ for a $v_1 \in V_1$, but as $f_0$ is surjective, there is a $v_0 \in V_0$ such that $v_1 = f_0(v_0)$, so, $v = f_1(v_1) = f_1(f_0(v_0)) \in Ran(f_1 \circ f_0)$.

That implies that $Rank(f_1 \circ f_0) = Dim(Ran(f_1 \circ f_0)) = Dim(Ran(f_1)) = Rank(f_1)$.

Step 3:

$Ran(f_1)$ is a vectors subspace of $V_2$, by the proposition that the range of any linear map between any vectors spaces is a vectors subspace of the codomain.

$Ran(f_1)$ is a vectors subspace of $V'_2$, because $V_2$ is a vectors subspace of $V'_2$.

$f_2\vert_{Ran(f_1)}: Ran(f_1) \to f_2(Ran(f_1))$ is a 'vectors spaces - linear morphisms' isomorphism, by the proposition that for any 'vectors spaces - linear morphisms' isomorphism, its restriction on any subspace domain and the corresponding range codomain is a 'vectors spaces - linear morphisms' isomorphism.

$f_2 \circ f_1 \circ f_0 = f_2\vert_{Ran(f_1)} \circ f_1 \circ f_0$.

So, $Dim(Ran(f_2 \circ f_1 \circ f_0)) = Dim(Ran(f_2\vert_{Ran(f_1)} \circ f_1 \circ f_0))$.

But as $Ran(f_1 \circ f_0) = Ran(f_1)$, $Ran(f_2\vert_{Ran(f_1)} \circ f_1 \circ f_0) = Ran(f_2\vert_{Ran(f_1)})$, so, $Dim(Ran(f_2\vert_{Ran(f_1)} \circ f_1 \circ f_0)) = Dim(Ran(f_2\vert_{Ran(f_1)}))$, but $Dim(Ran(f_2\vert_{Ran(f_1)})) = Dim(Ran(f_1))$, by the proposition that for any 'vectors spaces - linear morphisms' isomorphism, the image of any linearly independent subset or any basis of the domain is linearly independent or a basis on the codomain, so, $Rank(f_2 \circ f_1 \circ f_0) = Dim(Ran(f_2 \circ f_1 \circ f_0)) = Dim(Ran(f_2\vert_{Ran(f_1)} \circ f_1 \circ f_0)) = Dim(Ran(f_1)) = Rank(f_1)$.
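
As a numerical illustration (an addition, not from the original article), here is a minimal sketch in Python with NumPy, assuming $F = \mathbb{R}$, finite dimensions, randomly chosen matrices, and $V'_2 = V_2$ for simplicity: composing a linear map with an isomorphism onto its domain and an isomorphism from its codomain does not change the rank.

import numpy as np

# A minimal, hypothetical instance: f1: R^5 -> R^4 linear, f0 and f2 isomorphisms.

rng = np.random.default_rng(0)

f1 = rng.standard_normal((4, 5))     # a linear map R^5 -> R^4
f1[3] = f1[0] + f1[1]                # force the rank below 4 (here, rank 3)

f0 = rng.standard_normal((5, 5))     # invertible with probability 1: an isomorphism onto V1
f2 = rng.standard_normal((4, 4))     # invertible with probability 1: an isomorphism from V2

print(np.linalg.matrix_rank(f1))             # 3
print(np.linalg.matrix_rank(f2 @ f1 @ f0))   # 3, the same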


References



1310: For Linear Map Between Vectors Spaces and 'Vectors Spaces - Linear Morphisms' Isomorphism, Composition of Linear Map after Isomorphism Does Not Necessarily Have Rank of Linear Map


description/proof of that for linear map between vectors spaces and 'vectors spaces - linear morphisms' isomorphism, composition of linear map after isomorphism does not necessarily have rank of linear map

Topics


About: vectors space

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that for a linear map between some vectors spaces and a 'vectors spaces - linear morphisms' isomorphism, the composition of the linear map after the isomorphism does not necessarily have the rank of the linear map.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$F$: $\in$ { the fields }
$V_1$: $\in$ { the $F$ vectors spaces }
$V_2$: $\in$ { the $F$ vectors spaces }
$f_1$: $: V_1 \to V_2$, $\in$ { the linear maps }
$V_0$: $\in$ { the $F$ vectors spaces }
$V'_1$: $\in$ { the $F$ vectors spaces } such that $V'_1 \subseteq V_1$
$f_0$: $: V_0 \to V'_1$, $\in$ { the 'vectors spaces - linear morphisms' isomorphisms }
//

Statements:
Not necessarily $Rank(f_1 \circ f_0) = Rank(f_1)$
//


2: Note


$V'_1$ is not presupposed to be a vectors subspace of $V_1$ and $V'_1$ is not presupposed to equal $V_1$ (the composition is valid if $V'_1 \subseteq V_1$), which are the points.

This proposition is a warning not to jump to the conclusion without checking some things: refer to the proposition that for any linear map between any vectors spaces, any 'vectors spaces - linear morphisms' isomorphism onto the domain of the linear map, and any 'vectors spaces - linear morphisms' isomorphism from any vectors superspace of the codomain of the linear map, the composition of the linear map after the 1st isomorphism and before the 2nd isomorphism has the rank of the linear map.


3: Proof


Whole Strategy: Step 1: see that $f_1 \circ f_0$ is not guaranteed to be linear; Step 2: see a counterexample that even if $V'_1$ is a vectors subspace of $V_1$, $Rank(f_1 \circ f_0) < Rank(f_1)$.

Step 1:

$V'_1 \subseteq V_1$ means just that $V'_1$ is a subset of $V_1$, not that $V'_1$ is a vectors subspace of $V_1$.

For each $v, v' \in V_0$ and each $r, r' \in F$, $f_1 \circ f_0 (r v + r' v') = f_1(f_0(r v + r' v')) = f_1(r f_0(v) + r' f_0(v'))$, but the issue here is that $r f_0(v) + r' f_0(v')$ is by the vectors space structure of $V'_1$, not of $V_1$, so, $= r f_1(f_0(v)) + r' f_1(f_0(v'))$ is not guaranteed.

So, $f_1 \circ f_0$ is not guaranteed to be linear.

Without being linear, $Rank(f_1 \circ f_0)$ is not defined.

Step 2:

Let us suppose that $V'_1$ is a vectors subspace of $V_1$.

Let $V_1 = \mathbb{R}^2$ as the Euclidean vectors space, $V_2 = \mathbb{R}^2$ as the Euclidean vectors space, $f_1 = id_{\mathbb{R}^2}: V_1 \to V_2$ as the identity map, $V_0 = \mathbb{R} \times \{0\}$ where $\mathbb{R}$ is the Euclidean vectors space, $V'_1 = \mathbb{R} \times \{0\}$ where $\mathbb{R}$ is the Euclidean vectors space, and $f_0 = id_{\mathbb{R} \times \{0\}}: V_0 \to V'_1$ as the identity map.

Certainly, $V'_1 \subseteq V_1$, where $V'_1$ is 1-dimensional, because, for example, $\{(1, 0)\}$ is a basis.

Certainly, $f_0$ is a 'vectors spaces - linear morphisms' isomorphism.

But $Ran(f_1) = \mathbb{R}^2$ and $Rank(f_1) = Dim(Ran(f_1)) = 2$, while $Ran(f_1 \circ f_0) = \mathbb{R} \times \{0\}$ and $Rank(f_1 \circ f_0) = Dim(Ran(f_1 \circ f_0)) = 1$.

So, $Rank(f_1 \circ f_0) < Rank(f_1)$.
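
Here is the Step 2 counterexample checked numerically in Python with NumPy (an addition, not from the original article); $f_0$ is represented by the inclusion matrix $\mathbb{R} \to \mathbb{R}^2, r \mapsto (r, 0)$, whose image is $V'_1 = \mathbb{R} \times \{0\}$.

import numpy as np

# V1 = V2 = R^2, f1 = identity; V0 identified with R, f0 = the inclusion r |-> (r, 0).

f1 = np.eye(2)                     # the identity map on R^2
f0 = np.array([[1.0], [0.0]])      # R -> R^2, r |-> (r, 0): an isomorphism onto V1' = R x {0}

print(np.linalg.matrix_rank(f1))        # 2
print(np.linalg.matrix_rank(f1 @ f0))   # 1, so Rank(f1 o f0) < Rank(f1)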


References



1309: For 'Vectors Spaces - Linear Morphisms' Isomorphism, Its Restriction on Subspace Domain and Corresponding Range Codomain Is 'Vectors Spaces - Linear Morphisms' Isomorphism


description/proof of that for 'vectors spaces - linear morphisms' isomorphism, its restriction on subspace domain and corresponding range codomain is 'vectors spaces - linear morphisms' isomorphism

Topics


About: vectors space

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that for any 'vectors spaces - linear morphisms' isomorphism, its restriction on any subspace domain and the corresponding range codomain is a 'vectors spaces - linear morphisms' isomorphism.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$F$: $\in$ { the fields }
$V_1$: $\in$ { the $F$ vectors spaces }
$V_2$: $\in$ { the $F$ vectors spaces }
$f$: $: V_1 \to V_2$, $\in$ { the 'vectors spaces - linear morphisms' isomorphisms }
$V'_1$: $\in$ { the vectors subspaces of $V_1$ }
$V'_2$: $= f(V'_1)$
$f'$: $= f\vert_{V'_1}: V'_1 \to V'_2$
//

Statements:
$f' \in$ { the 'vectors spaces - linear morphisms' isomorphisms }
//


2: Proof


Whole Strategy: Step 1: see that $V'_2$ is a vectors subspace of $V_2$; Step 2: see that $f'$ is linear; Step 3: see that $f'$ is bijective; Step 4: conclude the proposition.

Step 1:

$V'_2 := f(V'_1)$ is a vectors subspace of $V_2$, by the proposition that the range of any linear map between any vectors spaces is a vectors subspace of the codomain, which means that $V'_2$ is an $F$ vectors space.

So, $f'$ is a map from an $F$ vectors space into an $F$ vectors space.

Step 2:

$f'$ is linear, because for each $v_1, v_2 \in V'_1$ and each $r_1, r_2 \in F$, $f'(r_1 v_1 + r_2 v_2) = f(r_1 v_1 + r_2 v_2) = r_1 f(v_1) + r_2 f(v_2) = r_1 f'(v_1) + r_2 f'(v_2)$.

Step 3:

$f'$ is bijective, because for each $v_1, v_2 \in V'_1$ such that $v_1 \neq v_2$, $f'(v_1) = f(v_1) \neq f(v_2) = f'(v_2)$, because $f$ is injective, and $f'$ is obviously surjective.

Step 4:

By the proposition that any bijective linear map between any vectors spaces is a 'vectors spaces - linear morphisms' isomorphism, $f'$ is a 'vectors spaces - linear morphisms' isomorphism.
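
As a numerical illustration (an addition, not from the original article), here is a minimal sketch in Python with NumPy, assuming $F = \mathbb{R}$, $V_1 = V_2 = \mathbb{R}^3$, a randomly chosen isomorphism $f$, and a randomly chosen 2-dimensional subspace $V'_1$: the restriction $f' = f\vert_{V'_1}$ is injective, so it is bijective onto $V'_2 = f(V'_1)$.

import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal((3, 3))    # invertible with probability 1: an isomorphism

B = rng.standard_normal((3, 2))    # columns: a basis of a 2-dimensional subspace V1'
fB = f @ B                         # images of the basis vectors, spanning V2' = f(V1')

print(np.linalg.matrix_rank(B))    # 2: the chosen vectors are linearly independent
print(np.linalg.matrix_rank(fB))   # 2: f' = f|_{V1'} is injective, hence bijective onto V2'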


References



1308: For Linear Map Between Finite-Dimensional Vectors Spaces, Rank of Map Is Rank of Representative Matrix w.r.t. Bases


description/proof of that for linear map between finite-dimensional vectors spaces, rank of map is rank of representative matrix w.r.t. bases

Topics


About: vectors space

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that for any linear map between any finite-dimensional vectors spaces, the rank of the map is the rank of the representative matrix with respect to any bases.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$F$: $\in$ { the fields }
$V_1$: $\in$ { the $d_1$-dimensional $F$ vectors spaces }
$V_2$: $\in$ { the $d_2$-dimensional $F$ vectors spaces }
$f$: $: V_1 \to V_2$, $\in$ { the linear maps }
$B_1$: $\in$ { the bases for $V_1$ }
$B_2$: $\in$ { the bases for $V_2$ }
$M$: $=$ the representative matrix for $f$ with respect to $B_1$ and $B_2$
//

Statements:
$Rank(f) = Rank(M)$
//


2: Proof


Whole Strategy: Step 1: see that $V_2$ is 'vectors spaces - linear morphisms' isomorphic to the components vectors space and see that the rank of $f$ is the rank of the linear map by $M$; Step 2: suppose that $Rank(M) = d$ and the top-left $d \times d$ submatrix has a nonzero determinant; Step 3: see that $Rank(f) = d$; Step 4: see that when $Rank(f) = d$, $Rank(M) = d$.

Step 1:

$V_2$ is 'vectors spaces - linear morphisms' isomorphic to the components vectors space, by the proposition that for any finite-dimensional vectors space and any basis, the vectors space is 'vectors spaces - linear morphisms' isomorphic to the components vectors space with respect to the basis.

That means that any subspace of $V_2$ has the same dimension as the corresponding subspace of the components vectors space, by the proposition that for any 'vectors spaces - linear morphisms' isomorphism, the image of any linearly independent subset or any basis of the domain is linearly independent or a basis on the codomain: any basis of the subspace of $V_2$ is mapped to a basis of the corresponding subspace of the components vectors space: the restriction of the 'vectors spaces - linear morphisms' isomorphism is obviously a 'vectors spaces - linear morphisms' isomorphism.

So, $Rank(f)$ is the dimension of the range of the linear map by $M$ between the components vectors spaces.

Step 2:

Let us suppose that $Rank(M) = d$.

Let the top-left $d \times d$ submatrix, $N$, have a nonzero determinant, which is possible by reordering $B_1$ and $B_2$.

Step 3:

Let us take the set of $d$-dimensional column vectors, $\{(1, 0, ..., 0)^t, ..., (0, ..., 0, 1)^t\}$, which is obviously linearly independent.

$\{N (1, 0, ..., 0)^t, ..., N (0, ..., 0, 1)^t\}$, which is a set of $d$-dimensional column vectors, is linearly independent, by the proposition that for any 'vectors spaces - linear morphisms' isomorphism, the image of any linearly independent subset or any basis of the domain is linearly independent or a basis on the codomain: the linear map by $N$ is a 'vectors spaces - linear morphisms' isomorphism, because $det N \neq 0$.

Let us think of the $d_1$-dimensional expansion of $\{(1, 0, ..., 0)^t, ..., (0, ..., 0, 1)^t\}$, $\{(1, 0, ..., 0, 0, ..., 0)^t, ..., (0, ..., 0, 1, 0, ..., 0)^t\}$, and $\{M (1, 0, ..., 0, 0, ..., 0)^t, ..., M (0, ..., 0, 1, 0, ..., 0)^t\}$.

$\{M (1, 0, ..., 0, 0, ..., 0)^t, ..., M (0, ..., 0, 1, 0, ..., 0)^t\}$ is a $d_2$-dimensional expansion of $\{N (1, 0, ..., 0)^t, ..., N (0, ..., 0, 1)^t\}$, because the 1st $d$ components are not changed, and so, is linearly independent, by the proposition that for any finite-dimensional columns or rows module and any linearly independent subset, any expansion of the subset into any larger-dimensional columns or rows module is linearly independent.

So, the range of the map by $M$ is equal to or larger than $d$-dimensional.

Let us suppose that the range was $d'$-dimensional where $d < d'$.

There would be a basis for the range, and some $d'$ components could be chosen such that the shrunk basis was still linearly independent, by the proposition that for any finite-dimensional columns or rows vectors space and any linearly independent subset, the subset can be shrunk into a number-of-elements-dimensional columns or rows vectors space by choosing some components.

Let us move the chosen $d'$ components to the top $d'$ rows by reordering $B_2$, calling the result $M$ hereafter.

That would mean that the shrunk range (the projection to the 1st $d'$ components) was $d'$-dimensional.

Let $M'$ be the top $d' \times d_1$ submatrix of $M$.

$\{M' (1, 0, ..., 0)^t, ..., M' (0, ..., 0, 1)^t\}$ would span (generate) the shrunk range, obviously.

But it would be nothing but the set of the columns of $M'$.

So, some $d'$ columns of $M'$ could be chosen to be a basis for the shrunk range, by the proposition that for any vectors space, any finite generator can be reduced to be a basis and the proposition that for any vectors space, the bases have the same cardinality.

Let us move the corresponding $d'$ columns of $M$ to the left by reordering $B_1$, calling the result $M$ hereafter.

Then, the top-left $d' \times d'$ submatrix would have a nonzero determinant, by the proposition that the determinant of any square matrix over any field is nonzero if and only if the set of the columns or the rows is linearly independent, so, $d < d' \leq Rank(M)$, a contradiction.

So, the range is $d$-dimensional.

So, the rank of the linear map by $M$ is $d$, so, $Rank(f) = d$.

Step 4:

Let us suppose that $Rank(f) = d$.

If $Rank(M) = d'$, $Rank(f) = d'$, by Step 3, so, $d' = d$.
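
As a numerical illustration (an addition, not from the original article), here is a minimal sketch in Python with NumPy, assuming $F = \mathbb{R}$, $V_1 = \mathbb{R}^4$, $V_2 = \mathbb{R}^3$, and that $A$ is the matrix of $f$ with respect to the standard bases: the representative matrix with respect to any other bases $B_1$, $B_2$ is $M = C_2^{-1} A C_1$, where the columns of $C_1$ and $C_2$ are the basis vectors in standard coordinates, and $Rank(M) = Rank(A) = Rank(f)$.

import numpy as np

rng = np.random.default_rng(2)

A = rng.standard_normal((3, 4))
A[2] = A[0] - A[1]                  # force Rank(A) = 2

C1 = rng.standard_normal((4, 4))    # columns: a basis B1 of V1 (invertible with probability 1)
C2 = rng.standard_normal((3, 3))    # columns: a basis B2 of V2 (invertible with probability 1)

M = np.linalg.inv(C2) @ A @ C1      # the representative matrix of f with respect to B1 and B2

print(np.linalg.matrix_rank(A))     # 2
print(np.linalg.matrix_rank(M))     # 2, the same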


References



1307: Determinant of Square Matrix over Field Is Nonzero iff Set of Columns or Rows Is Linearly Independent


description/proof of that determinant of square matrix over field is nonzero iff set of columns or rows is linearly independent

Topics


About: matrices space

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that the determinant of any square matrix over any field is nonzero if and only if the set of the columns or the rows is linearly independent.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$F$: $\in$ { the fields }
$M$: $\in$ { the $n \times n$ $F$ matrices }
//

Statements:
(
$det M \neq 0$
$\iff$
{ the columns of $M$ } $\in$ { the linearly independent subsets of the $n$-dimensional $F$ columns vectors space }
)
$\land$
(
$det M \neq 0$
$\iff$
{ the rows of $M$ } $\in$ { the linearly independent subsets of the $n$-dimensional $F$ rows vectors space }
)
//


2: Proof


Whole Strategy: apply Cramer's rule for any system of linear equations; Step 1: take $M (c_1, ..., c_n)^t = 0$ and conclude for the columns; Step 2: take $M^t (c_1, ..., c_n)^t = 0$ and conclude for the rows.

Step 1:

Let the $j$-th column of $M$ be denoted as $M_j$.

Let us take $M (c_1, ..., c_n)^t = 0$.

It is nothing but $c_1 M_1 + ... + c_n M_n = 0$.

So, the columns' being linearly independent is nothing but $M (c_1, ..., c_n)^t = 0$'s having only the $0$ solution.

By Cramer's rule for any system of linear equations, that is nothing but $det M \neq 0$: if $det M = 0$, the rank would be smaller than $n$, and at least 1 of the $c_j$ s could be taken arbitrarily.

So, $det M \neq 0$ if and only if the set of the columns is linearly independent.

Step 2:

By Step 1, $det M^t \neq 0$ if and only if the set of the columns of $M^t$ is linearly independent.

But the set of the columns of $M^t$ is linearly independent if and only if the set of the rows of $M$ is linearly independent, obviously.

On the other hand, $det M^t = det M$.

So, $det M \neq 0$ if and only if the set of the rows of $M$ is linearly independent.
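
As a numerical illustration (an addition, not from the original article), here is a minimal check in Python with NumPy over $F = \mathbb{R}$ with 2 hypothetical matrices: one with a nonzero determinant and linearly independent columns, and one with determinant $0$ and linearly dependent columns.

import numpy as np

M1 = np.array([[2.0, 1.0],
               [1.0, 3.0]])     # det = 5 != 0; the columns are linearly independent
M2 = np.array([[1.0, 2.0],
               [2.0, 4.0]])     # det = 0; the 2nd column is 2 times the 1st column

for M in (M1, M2):
    # det != 0 exactly when the rank is n = 2, i.e. the columns are linearly independent
    print(np.linalg.det(M), np.linalg.matrix_rank(M))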


References



1306: For Finite-Dimensional Columns or Rows Vectors Space and Linearly Independent Subset, Subset Can Be Shrunk into Number-of-Elements-Dimensional Columns or Rows Vectors Space by Choosing Components


description/proof of that for finite-dimensional columns or rows vectors space and linearly independent subset, subset can be shrunk into number-of-elements-dimensional columns or rows vectors space by choosing components

Topics


About: vectors space

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the proposition that for any finite-dimensional columns or rows vectors space and any linearly independent subset, the subset can be shrunk into a number-of-elements-dimensional columns or rows vectors space by choosing some components.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
$F$: $\in$ { the fields }
$d$: $\in \mathbb{N}$
$V_c$: $= \{(r_1, ..., r_d)^t \vert r_j \in F\}$, $\in$ { the $F$ vectors spaces }
$V_r$: $= \{(r_1, ..., r_d) \vert r_j \in F\}$, $\in$ { the $F$ vectors spaces }
$S_c$: $= \{(r_{1, 1}, ..., r_{1, d})^t, ..., (r_{n, 1}, ..., r_{n, d})^t\}$, $\in$ { the linearly independent subsets of $V_c$ }
$S_r$: $= \{(r_{1, 1}, ..., r_{1, d}), ..., (r_{n, 1}, ..., r_{n, d})\}$, $\in$ { the linearly independent subsets of $V_r$ }
$V'_c$: $= \{(r_1, ..., r_n)^t \vert r_j \in F\}$, $\in$ { the $F$ vectors spaces }
$V'_r$: $= \{(r_1, ..., r_n) \vert r_j \in F\}$, $\in$ { the $F$ vectors spaces }
//

Statements:
$\exists \{j_1, ..., j_n\} \subseteq \{1, ..., d\} (S'_c := \{(r_{1, j_1}, ..., r_{1, j_n})^t, ..., (r_{n, j_1}, ..., r_{n, j_n})^t\} \in$ { the linearly independent subsets of $V'_c$ })
$\land$
$\exists \{j_1, ..., j_n\} \subseteq \{1, ..., d\} (S'_r := \{(r_{1, j_1}, ..., r_{1, j_n}), ..., (r_{n, j_1}, ..., r_{n, j_n})\} \in$ { the linearly independent subsets of $V'_r$ })
//


2: Note


It is not that any $\{j_1, ..., j_n\} \subseteq \{1, ..., d\}$ would do: for example, when $F = \mathbb{R}$, $d = 3$, and $S_c = \{(0, 1, 0)^t, (0, 0, 1)^t\}$, $\{1, 2\} \subseteq \{1, 2, 3\}$ does not do, because $\{(0, 1)^t, (0, 0)^t\}$ is not linearly independent, while $\{2, 3\} \subseteq \{1, 2, 3\}$ does.


3: Proof


Whole Strategy: Step 1: take $c_1 (r_{1, 1}, ..., r_{1, d})^t + ... + c_n (r_{n, 1}, ..., r_{n, d})^t = 0$ and see that that equals $\begin{pmatrix} r_{1, 1} & ... & r_{n, 1} \\ ... & ... & ... \\ r_{1, d} & ... & r_{n, d} \end{pmatrix} \begin{pmatrix} c_1 \\ ... \\ c_n \end{pmatrix} = 0$; Step 2: apply Cramer's rule for any system of linear equations to conclude that the rank of the matrix is $n$; Step 3: choose a linearly independent $S'_c$; Step 4: conclude for $S_r$.

Step 1:

Let us take $c_1 (r_{1, 1}, ..., r_{1, d})^t + ... + c_n (r_{n, 1}, ..., r_{n, d})^t = 0$.

That equals $\begin{pmatrix} r_{1, 1} & ... & r_{n, 1} \\ ... & ... & ... \\ r_{1, d} & ... & r_{n, d} \end{pmatrix} \begin{pmatrix} c_1 \\ ... \\ c_n \end{pmatrix} = 0$, obviously.

$S_c$'s being linearly independent equals that equation's having only the $(c_1, ..., c_n)^t = 0$ solution.

Step 2:

By Cramer's rule for any system of linear equations, the rank of the matrix is $n$: if the rank was smaller than $n$, at least 1 of the $c_j$ s could be taken arbitrarily, a contradiction.

Step 3:

So, the matrix can be rearranged such that the top-left $n \times n$ submatrix has a nonzero determinant.

Then, the $n$ columns of the submatrix are linearly independent, by the proposition that the determinant of any square matrix over any field is nonzero if and only if the set of the columns or the rows is linearly independent.

The $n$ columns form an $S'_c$.

Step 4:

As for $S_r$, we can just take the transpose of $S_r$, $S_c \subseteq V_c$, take an $S'_c$, and take the transpose of $S'_c$, $S'_r \subseteq V'_r$.
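
As an illustration (an addition, not from the original article), here is a minimal sketch in Python with NumPy that brute-forces the choice of components for the Note's example ($F = \mathbb{R}$, $d = 3$, $S_c = \{(0, 1, 0)^t, (0, 0, 1)^t\}$): it confirms that $\{2, 3\}$ does, as the Note says, while $\{1, 2\}$ and $\{1, 3\}$ do not.

import itertools
import numpy as np

# The elements of S_c as the columns of a d x n matrix.
S = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

d, n = S.shape
for rows in itertools.combinations(range(d), n):
    shrunk = S[list(rows), :]                    # keep only the chosen components
    ok = np.linalg.matrix_rank(shrunk) == n      # i.e. the n x n submatrix is nonsingular
    print(tuple(j + 1 for j in rows), ok)
# (1, 2) False, (1, 3) False, (2, 3) True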


References

