description/proof of that for finite-dimensional vectors space, subset that spans space can be reduced to be basis
Topics
About: vectors space
The table of contents of this article
- Starting Context
- Target Context
- Orientation
- Main Body
- 1: Structured Description
- 2: Natural Language Description
- 3: Proof
Starting Context
Target Context
- The reader will have a description and a proof of the proposition that for any finite-dimensional vectors space, any subset that spans the space can be reduced to be a basis.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\(F\): \(\in \{\text{ the fields }\}\)
\(V\): \(\in \{\text{ the } d \text{ -dimensional } F \text{ vectors spaces }\}\)
\(S\): \(\subseteq V\)
//
Statements:
\(\forall v \in V (\exists S' \in \{\text{ the finite subsets of } S\}, \exists r^j \in F (v = \sum_{v_j \in S'} r^j v_j))\)
\(\implies\)
\(\exists B \in \{\text{ the } d \text{ -ordered subsets of } S\} (B \in \{\text{ the bases of } V\})\)
//
2: Natural Language Description
For any field, \(F\), any \(d\)-dimensional \(F\) vectors space, \(V\), and any subset, \(S \subseteq V\), such that \(\forall v \in V (\exists S' \in \{\text{ the finite subsets of } S\}, \exists r^j \in F (v = \sum_{v_j \in S'} r^j v_j))\), there is a \(d\)-ordered subset, \(B \subseteq S\), that is a basis of \(V\).
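As a concrete illustration (a hypothetical example of mine, not part of the proposition): in the \(2\)-dimensional \(\mathbb{Q}\) vectors space \(\mathbb{Q}^2\), the subset \(S = \{(1, 0), (2, 0), (0, 1)\}\) spans the space, and the \(2\)-ordered subset \(B = \{(1, 0), (0, 1)\} \subseteq S\) is a basis. A minimal runnable check:

```python
# Hypothetical illustration: in the 2-dimensional space Q^2,
# S = {(1, 0), (2, 0), (0, 1)} spans, and B = {(1, 0), (0, 1)} <= S is a basis.
S = [(1, 0), (2, 0), (0, 1)]
B = [(1, 0), (0, 1)]

def combine(coeffs, vectors):
    """The linear combination sum_j r^j v_j, computed componentwise."""
    return tuple(sum(r * x for r, x in zip(coeffs, col)) for col in zip(*vectors))

# Every element of S is a combination of B, so B spans whatever S spans:
assert combine((1, 0), B) == (1, 0)
assert combine((2, 0), B) == (2, 0)
assert combine((0, 1), B) == (0, 1)
```

The redundant element \((2, 0)\) is exactly the kind of element the proof below discards.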
3: Proof
Whole Strategy: Step 1: deal with the case, \(V = \{0\}\), and suppose otherwise thereafter; Step 2: choose any nonzero element, \(e_1 \in S\); Step 3: choose another element, \(e_2 \in S \setminus \{e_1\}\), such that \(\{e_1, e_2\}\) is linearly independent; Step 4: continue likewise to get a linearly independent \(\{e_1, ..., e_d\}\); Step 5: see that \(\{e_1, ..., e_d\}\) is a basis.
Step 1:
Let us suppose that \(V = \{0\}\).
Inevitably, \(S \subseteq \{0\}\), so \(S = \emptyset\) or \(S = \{0\}\).
Either way, \(B = \emptyset \subseteq S\) will do, because the empty set is a basis of the \(0\)-dimensional space \(\{0\}\).
Let us suppose otherwise hereafter.
Step 2:
Let us choose any nonzero element, \(e_1 \in S\). That is possible, because if \(S\) contained no nonzero element, \(S\) could not span \(V\), which is not \(\{0\}\).
Step 3:
If \(2 \le d\), let us choose another element, \(e_2 \in S \setminus \{e_1\}\), such that \(\{e_1, e_2\}\) is linearly independent. That is possible, because otherwise, for each \(v \in S \setminus \{e_1\}\), \(\{e_1, v\}\) would be linearly dependent, which would mean that \(c^1 e_1 + c^2 v = 0\) would have a not-all-zero \((c^1, c^2)\); but \(c^2 \neq 0\), because otherwise, \(c^1 e_1 = 0\) with \(c^1 \neq 0\), which would mean that \(e_1 = 0\), a contradiction; so, \(v = - {c^2}^{-1} c^1 e_1\), which would mean that each element of \(S\) was a scalar multiple of \(e_1\); as \(S\) spanned \(V\), \(\{e_1\}\) would span \(V\) and would be a basis, a contradiction against \(V\)'s being \(d\)-dimensional with \(2 \le d\).
Step 4:
And so on, and we get a linearly independent \(\{e_1, ..., e_d\}\). That is possible, because supposing that we have already chosen a linearly independent \(\{e_1, ..., e_j\}\) for a \(j \lt d\), we can choose an element, \(e_{j + 1} \in S \setminus \{e_1, ..., e_j\}\), such that \(\{e_1, ..., e_{j + 1}\}\) is linearly independent; otherwise, for each \(v \in S \setminus \{e_1, ..., e_j\}\), \(\{e_1, ..., e_j, v\}\) would be linearly dependent, which would mean that \(c^1 e_1 + ... + c^j e_j + c^{j + 1} v = 0\) would have a not-all-zero \((c^1, ..., c^{j + 1})\); but \(c^{j + 1} \neq 0\), because otherwise, \(c^1 e_1 + ... + c^j e_j = 0\) would hold with a not-all-zero \((c^1, ..., c^j)\), which would contradict \(\{e_1, ..., e_j\}\)'s being linearly independent; so, \(v = - {c^{j + 1}}^{-1} (c^1 e_1 + ... + c^j e_j)\), which would mean that each element of \(S\) was in the span of \(\{e_1, ..., e_j\}\); as \(S\) spanned \(V\), \(\{e_1, ..., e_j\}\) would span \(V\) and would be a basis, a contradiction against \(V\)'s being \(d\)-dimensional with \(j \lt d\).
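Steps 2 through 4 amount to a greedy selection: scan \(S\) and keep each vector that is linearly independent of the ones kept so far. A minimal runnable sketch of that construction, assuming \(V = \mathbb{Q}^d\) represented with exact `Fraction` arithmetic (the `rank` helper and the sample data are illustrative choices of mine, not from the proposition):

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors (tuples of rationals), via Gaussian elimination."""
    rows = [list(map(Fraction, v)) for v in vectors]
    r = 0
    cols = len(rows[0]) if rows else 0
    for c in range(cols):
        # Find a pivot row with a nonzero entry in column c.
        pivot = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def reduce_to_basis(S, d):
    """Greedily keep vectors of S that raise the rank, mirroring Steps 2-4."""
    B = []
    for v in S:
        if len(B) == d:
            break
        if rank(B + [v]) == len(B) + 1:  # v is independent of B
            B.append(v)
    assert len(B) == d  # guaranteed by the proof when S spans a d-dimensional V
    return B

# Example in Q^2: (0, 0) and (2, 4) are discarded, (1, 2) and (0, 1) are kept.
print(reduce_to_basis([(0, 0), (1, 2), (2, 4), (0, 1), (3, 3)], 2))
```

The termination of the scan with exactly \(d\) vectors is exactly what Step 4 argues: as long as fewer than \(d\) independent vectors have been kept, \(S\) must still contain a vector outside their span.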
Step 5:
\(\{e_1, ..., e_d\}\) is a basis, by the proposition that for any finite-dimensional vectors space, any linearly independent subset with dimension number of elements is a basis.