description/proof of that for 'vectors spaces - linear morphisms' isomorphism, image of linearly independent subset or basis of domain is linearly independent subset or basis of codomain
Topics
About: vectors space
The table of contents of this article
- Starting Context
- Target Context
- Orientation
- Main Body
- 1: Structured Description
- 2: Natural Language Description
- 3: Note
- 4: Proof
Starting Context
- The reader knows a definition of %field name% vectors space.
- The reader knows a definition of %category name% isomorphism.
- The reader knows a definition of linearly independent subset of module.
- The reader knows a definition of basis of module.
Target Context
- The reader will have a description and a proof of the proposition that for any 'vectors spaces - linear morphisms' isomorphism, the image of any linearly independent subset or any basis of the domain is a linearly independent subset or a basis of the codomain, respectively.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\(F\): \(\in \{\text{ the fields }\}\)
\(V_1\): \(\in \{\text{ the } F \text{ vectors spaces }\}\)
\(V_2\): \(\in \{\text{ the } F \text{ vectors spaces }\}\)
\(f\): \(: V_1 \to V_2\), \(\in \{\text{ the 'vectors spaces - linear morphisms' isomorphisms }\}\)
\(S\): \(\subseteq V_1\), \(\in \{\text{ the linearly independent subsets of } V_1\}\)
\(B\): \(\subseteq V_1\), \(\in \{\text{ the bases of } V_1\}\)
//
Statements:
\(f (S) \in \{\text{ the linearly independent subsets of } V_2\}\)
\(\land\)
\(f (B) \in \{\text{ the bases of } V_2\}\)
//
2: Natural Language Description
For any field, \(F\), any \(F\) vectors spaces, \(V_1, V_2\), any 'vectors spaces - linear morphisms' isomorphism, \(f: V_1 \to V_2\), any linearly independent subset, \(S \subseteq V_1\), and any basis, \(B \subseteq V_1\), \(f (S)\) is a linearly independent subset of \(V_2\) and \(f (B)\) is a basis of \(V_2\).
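For example (a concrete illustration, not part of the proposition itself; the names \(M, e_1, e_2\) are introduced only for this illustration), take \(F = \mathbb{R}\), \(V_1 = V_2 = \mathbb{R}^2\), and the isomorphism, \(f\), determined by an invertible matrix, \(M\); then, the image of the standard basis under \(f\) is again a basis:
\[
M = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad B = \{e_1, e_2\}, \quad f (B) = \{M e_1, M e_2\} = \left\{\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \end{pmatrix}\right\},
\]
which is linearly independent and spans \(\mathbb{R}^2\), because \(M\) is invertible.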
3: Note
The vectors spaces do not need to be finite-dimensional; neither \(S\) nor \(B\) needs to be finite.
This proposition is usually regarded as obvious, with \(V_2\) treated as "the same" as \(V_1\), but let us rest assured that we have indeed proved it once and for all.
4: Proof
Whole Strategy: Step 1: take any finite subset, \(S' \subseteq f (S)\), suppose that \(\sum_{p_j \in S'} r^j p_j = 0, r^j \in F\), and see that \(r^j = 0\), by taking \(f^{-1} (\sum_{p_j \in S'} r^j p_j) = f^{-1} (0)\); Step 2: take any point, \(p \in V_2\), and see that \(p\) is a linear combination of a finite subset of \(f (B)\), by taking \(f^{-1} (p)\) and taking a finite subset, \(S' \subseteq B\), and some \(r^j\) s such that \(f^{-1} (p) = \sum_{p_j \in S'} r^j p_j\).
Step 1:
Let \(S' \subseteq f (S)\) be any finite subset.
Let \(\sum_{p_j \in S'} r^j p_j = 0\) for some \(r^j \in F\) s.
What we need to see is that for each \(j\), \(r^j = 0\).
\(f^{-1} (\sum_{p_j \in S'} r^j p_j) = f^{-1} (0)\). As \(f^{-1}\) is linear, \(f^{-1} (\sum_{p_j \in S'} r^j p_j) = \sum_{p_j \in S'} r^j f^{-1} (p_j)\) and \(f^{-1} (0) = 0\), so, \(\sum_{p_j \in S'} r^j f^{-1} (p_j) = 0\). As \(f^{-1}\) is injective, the \(f^{-1} (p_j)\) s are distinct, and \(\{f^{-1} (p_j)\}\) is a finite subset of \(S\), so, each \(r^j = 0\), because \(S\) is linearly independent.
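With the concrete example after the Natural Language Description (an illustration, not part of the proof), Step 1 reads: if \(r^1 (1, 0)^t + r^2 (1, 1)^t = 0\), applying \(f^{-1}\), which is determined by \(M^{-1}\), yields
\[
r^1 M^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + r^2 M^{-1} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = r^1 e_1 + r^2 e_2 = 0,
\]
and so, \(r^1 = r^2 = 0\), because \(\{e_1, e_2\}\) is linearly independent.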
Step 2:
By Step 1 applied to \(B\), which is a linearly independent subset of \(V_1\), we already know that \(f (B)\) is a linearly independent subset of \(V_2\).
What we need to see is that for each point, \(p \in V_2\), \(p\) is a linear combination of a finite subset of \(f (B)\).
Let us take \(f^{-1} (p) \in V_1\). As \(B\) is a basis, there is a finite subset, \(S' \subseteq B\), and some \(r^j\) s, such that \(f^{-1} (p) = \sum_{p_j \in S'} r^j p_j\). Then, \(p = f (f^{-1} (p)) = f (\sum_{p_j \in S'} r^j p_j) = \sum_{p_j \in S'} r^j f (p_j)\). As \(\{f (p_j)\}\) is a finite subset of \(f (B)\), that is what we needed to see.
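Likewise, with the same concrete example (an illustration, not part of the proof), any \(p = (a, b)^t \in \mathbb{R}^2\) satisfies
\[
f^{-1} (p) = M^{-1} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} a - b \\ b \end{pmatrix} = (a - b) e_1 + b e_2,
\]
and so, \(p = f (f^{-1} (p)) = (a - b) f (e_1) + b f (e_2) = (a - b) (1, 0)^t + b (1, 1)^t\), which is a linear combination of the finite subset \(\{(1, 0)^t, (1, 1)^t\} \subseteq f (B)\).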