<The previous article in this series | The table of contents of this series | The next article in this series>
definition of symmetric-tensors space w.r.t. field and \(q\) same vectors spaces and vectors space over field
Topics
About:
vectors space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a definition of symmetric-tensors space with respect to field and \(q\) same vectors spaces and vectors space over field.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\( F\): \(\in \{\text{ the fields }\}\)
\( \{V, W\}\): \(\subseteq \{\text{ the } F \text{ vectors spaces }\}\)
\( L (V, ..., V: W)\): \(= \text{ the tensors space }\), where \(V\) appears \(q\) times
\(*\Sigma_q (V: W)\): \(= \{t \in L (V, ..., V: W) \vert t \in \{\text{ the symmetric-tensors }\}\}\), \(\in \{\text{ the } F \text{ vectors spaces }\}\)
//
Conditions:
//
\(\Sigma_q (V: W)\) is called "the \(q\)-symmetric-tensors space of \(V\) into \(W\)".
\(\Sigma_q (V: F)\) is called "the \(q\)-symmetric-tensors space of \(V\)".
Any element of \(\Sigma_q (V: F)\) is called "\(q\)-symmetric-tensor".
2: Note
Being symmetric means that for any \(\sigma \in S^q\) where \(S^q\) is the \(q\)-symmetric group, \(t (v_{\sigma_1}, ..., v_{\sigma_q}) = t (v_1, ..., v_q)\).
All the \(\{V_1, ..., V_q\}\) are required to be the same \(V\), because otherwise, \(t (v_{\sigma_1}, ..., v_{\sigma_q})\) would not make sense.
Let us see that \(\Sigma_q (V: W)\) is indeed an \(F\) vectors space.
1) for any elements, \(t_1, t_2 \in \Sigma_q (V: W)\), \(t_1 + t_2 \in \Sigma_q (V: W)\) (closed-ness under addition): \(t_1 + t_2 \in L (V, ..., V: W)\), because \(t_j \in L (V, ..., V: W)\) and \(L (V, ..., V: W)\) is an \(F\) vectors space; \((t_1 + t_2) (v_{\sigma_1}, ..., v_{\sigma_q}) = t_1 (v_{\sigma_1}, ..., v_{\sigma_q}) + t_2 (v_{\sigma_1}, ..., v_{\sigma_q}) = t_1 (v_1, ..., v_q) + t_2 (v_1, ..., v_q) = (t_1 + t_2) (v_1, ..., v_q)\).
2) for any elements, \(t_1, t_2 \in \Sigma_q (V: W)\), \(t_1 + t_2 = t_2 + t_1\) (commutativity of addition): it holds on the ambient \(L (V, ..., V: W)\).
3) for any elements, \(t_1, t_2, t_3 \in \Sigma_q (V: W)\), \((t_1 + t_2) + t_3 = t_1 + (t_2 + t_3)\) (associativity of additions): it holds on the ambient \(L (V, ..., V: W)\).
4) there is a 0 element, \(0 \in \Sigma_q (V: W)\), such that for any \(t \in \Sigma_q (V: W)\), \(t + 0 = t\) (existence of 0 vector): the \(0\) map, \(t_0 \in L (V, ..., V: W)\), is in \(\Sigma_q (V: W)\), because \(t_0 (v_{\sigma_1}, ..., v_{\sigma_q}) = 0 = t_0 (v_1, ..., v_q)\), and \(t + 0 = t\) holds because it holds on the ambient \(L (V, ..., V: W)\).
5) for any element, \(t \in \Sigma_q (V: W)\), there is an inverse element, \(t' \in \Sigma_q (V: W)\), such that \(t' + t = 0\) (existence of inverse vector): \(t' := - t \in L (V, ..., V: W)\); \((- t) (v_{\sigma_1}, ..., v_{\sigma_q}) = - (t (v_{\sigma_1}, ..., v_{\sigma_q})) = - t (v_1, ..., v_q) = (- t) (v_1, ..., v_q)\), so, \(- t \in \Sigma_q (V: W)\); \(- t + t = 0\) holds because it holds on the ambient \(L (V, ..., V: W)\).
6) for any element, \(t \in \Sigma_q (V: W)\), and any scalar, \(r \in F\), \(r . t \in \Sigma_q (V: W)\) (closed-ness under scalar multiplication): \(r . t \in L (V, ..., V: W)\); \((r . t) (v_{\sigma_1}, ..., v_{\sigma_q}) = r (t (v_{\sigma_1}, ..., v_{\sigma_q})) = r (t (v_1, ..., v_q)) = (r . t) (v_1, ..., v_q)\), which means that \(r . t \in \Sigma_q (V: W)\).
7) for any element, \(t \in \Sigma_q (V: W)\), and any scalars, \(r_1, r_2 \in F\), \((r_1 + r_2) . t = r_1 . t + r_2 . t\) (scalar multiplication distributability for scalars addition): it holds on the ambient \(L (V, ..., V: W)\).
8) for any elements, \(t_1, t_2 \in \Sigma_q (V: W)\), and any scalar, \(r \in F\), \(r . (t_1 + t_2) = r . t_1 + r . t_2\) (scalar multiplication distributability for vectors addition): it holds on the ambient \(L (V, ..., V: W)\).
9) for any element, \(t \in \Sigma_q (V: W)\), and any scalars, \(r_1, r_2 \in F\), \((r_1 r_2) . t = r_1 . (r_2 . t)\) (associativity of scalar multiplications): it holds on the ambient \(L (V, ..., V: W)\).
10) for any element, \(t \in \Sigma_q (V: W)\), \(1 . t = t\) (identity of 1 multiplication): it holds on the ambient \(L (V, ..., V: W)\).
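As a concrete illustration (a sketch with an arbitrarily chosen example, not part of the definition): for \(q = 2\), \(V = \mathbb{R}^3\), and \(W = \mathbb{R}\), a \(2\)-tensor can be represented by a \(3 \times 3\) matrix, \(A\), via \(t (u, v) = u^t A v\), and \(t\) is symmetric if and only if \(A = A^t\); the closed-ness checked in 1) and 6) above then corresponds to the fact that the symmetric matrices are closed under addition and scalar multiplication.

```python
import numpy as np

# For q = 2, V = R^3, W = R: a 2-tensor t(u, v) = u^T A v is symmetric
# exactly when A equals its transpose. The matrices below are arbitrary
# symmetric examples.
A1 = np.array([[1., 2., 0.], [2., 3., 1.], [0., 1., 4.]])
A2 = np.array([[0., 1., 1.], [1., 0., 2.], [1., 2., 5.]])

def is_symmetric(A):
    return np.array_equal(A, A.T)

# closed-ness under addition and under scalar multiplication
print(is_symmetric(A1 + A2), is_symmetric(2.5 * A1))  # True True
```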
References
<The previous article in this series | The table of contents of this series | The next article in this series>
<The previous article in this series | The table of contents of this series | The next article in this series>
definition of \(\omega\)-accumulation point of subset of topological space
Topics
About:
topological space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a definition of \(\omega\)-accumulation point of subset of topological space.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\( T\): \(\in \{\text{ the topological spaces }\}\)
\( S\): \(\subseteq T\)
\(*t\): \(\in T\)
//
Conditions:
\(\forall U_t \in \{\text{ the open neighborhoods of } t\} (U_t \cap (S \setminus \{t\}) \in \{\text{ the infinite sets }\})\)
//
References
<The previous article in this series | The table of contents of this series | The next article in this series>
<The previous article in this series | The table of contents of this series | The next article in this series>
definition of accumulation point of subset of topological space
Topics
About:
topological space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a definition of accumulation point of subset of topological space.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\( T\): \(\in \{\text{ the topological spaces }\}\)
\( S\): \(\subseteq T\)
\(*t\): \(\in T\)
//
Conditions:
\(\forall U_t \in \{\text{ the open neighborhoods of } t\} (U_t \cap (S \setminus \{t\}) \neq \emptyset)\)
//
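The condition can be checked mechanically on a finite example (a sketch; the space, the topology, and the subset below are arbitrary choices for illustration, not part of the definition):

```python
# T = {a, b, c} with the topology {∅, {a}, {a, b}, T}; S = {a}.
T = {'a', 'b', 'c'}
opens = [set(), {'a'}, {'a', 'b'}, {'a', 'b', 'c'}]
S = {'a'}

def accumulation_points(T, opens, S):
    # t is an accumulation point of S iff every open neighborhood U_t of t
    # satisfies U_t ∩ (S \ {t}) ≠ ∅
    return {t for t in T
            if all(U & (S - {t}) for U in opens if t in U)}

# a is not an accumulation point (S \ {a} = ∅), while b and c are
print(accumulation_points(T, opens, S))
```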
References
<The previous article in this series | The table of contents of this series | The next article in this series>
<The previous article in this series | The table of contents of this series | The next article in this series>
definition of \(T_1\) topological space
Topics
About:
topological space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a definition of \(T_1\) topological space.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\(*T\): \(\in \{\text{ the topological spaces }\}\)
//
Conditions:
\(\forall \{t_1, t_2\} \subseteq T \text{ such that } t_1 \neq t_2 (\exists U_{t_1} \in \{\text{ the open neighborhoods of } t_1\} (t_2 \notin U_{t_1}))\)
//
As the condition holds also for \(\{t_2, t_1\} \subseteq T\), there is inevitably also a \(U_{t_2}\) such that \(t_1 \notin U_{t_2}\): the condition requires both \(U_{t_1}\) and \(U_{t_2}\), not merely one of them.
2: Note
This definition is equivalent to the condition that each point is a closed subset of \(T\).
Let us see that fact.
Let us suppose this definition.
Let \(t_1 \in T\) be any.
Let \(t_2 \in T \setminus \{t_1\}\) be any.
\(t_1 \neq t_2\).
So, there is an open neighborhood of \(t_2\), \(U_{t_2} \subseteq T\), such that \(t_1 \notin U_{t_2}\), which means that \(U_{t_2} \subseteq T \setminus \{t_1\}\).
So, \(T \setminus \{t_1\} \subseteq T\) is an open subset, by the local criterion for openness.
So, \(\{t_1\}\) is a closed subset of \(T\).
Let us suppose that each point is a closed subset of \(T\).
Let \(\{t_1, t_2\} \subseteq T\) be any such that \(t_1 \neq t_2\).
\(t_1 \in T \setminus \{t_2\}\).
\(T \setminus \{t_2\}\) is open, so, there is an open neighborhood of \(t_1\), \(U_{t_1} \subseteq T\), such that \(U_{t_1} \subseteq T \setminus \{t_2\}\), by the local criterion for openness.
That means that \(t_2 \notin U_{t_1}\).
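On a finite topological space, the equivalence can be verified by brute force (a sketch; the two example topologies on \(\{0, 1\}\) are arbitrary choices for illustration):

```python
def is_T1(T, opens):
    # for each ordered pair of distinct points, some open set contains
    # the 1st point but not the 2nd
    return all(any(t1 in U and t2 not in U for U in opens)
               for t1 in T for t2 in T if t1 != t2)

def each_point_closed(T, opens):
    # {t} is closed iff its complement T \ {t} is open
    return all((T - {t}) in opens for t in T)

discrete = [set(), {0}, {1}, {0, 1}]   # the discrete topology on {0, 1}: T_1
sierpinski = [set(), {0}, {0, 1}]      # the Sierpinski topology: not T_1
for opens in (discrete, sierpinski):
    print(is_T1({0, 1}, opens) == each_point_closed({0, 1}, opens))  # True both times
```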
References
<The previous article in this series | The table of contents of this series | The next article in this series>
<The previous article in this series | The table of contents of this series | The next article in this series>
description/proof of Sylvester's law of inertia of signature of Hermitian matrix
Topics
About:
matrices space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a description and a proof of Sylvester's law of inertia of signature of Hermitian matrix.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\(M\): \(\in \{\text{ the } d \times d \text{ Hermitian matrices }\}\)
\((p, n, z)\): \(= \text{ the signature of } M\)
//
Statements:
\(\forall N \in \{\text{ the invertible matrices }\} \text{ such that } N^* M N = P \text{ where } P \text{ is the diagonal matrix with any diagonal elements, } (\rho_1, ..., \rho_d) ((\rho_1, ..., \rho_d) \text{ has } p \text{ positives, } n \text{ negatives, and } z \text{ } 0 \text{ s })\)
//
2: Note
In fact, some such \(N\)s exist, because any Hermitian matrix can be diagonalized with real diagonal elements by an invertible matrix, as is well known.
As an immediate corollary, for \(M' = N'^* M N'\) where \(N'\) is any invertible matrix (\(M'\) is Hermitian, because \(M'^* = (N'^* M N')^* = N'^* M^* {N'^*}^* = N'^* M N' = M'\)), \(M'\) has the signature of \(M\), because for any diagonalization, \(\Lambda' = O'^* M' O'\), \(\Lambda' = O'^* N'^* M N' O' = (N' O')^* M N' O'\), so, \(\Lambda'\) has some \(p\) positives, some \(n\) negatives, and some \(z\) \(0\) s, by this proposition, which means that \(M'\) has the signature, \((p, n, z)\), by this proposition.
3: Proof
Whole Strategy: Step 1: see that the elements of \((\rho_1, ..., \rho_d)\) are all reals; Step 2: see that we can assume without loss of generality that \((\rho_1, ..., \rho_{p'})\) are positive, \((\rho_{p' + 1}, ..., \rho_{p' + n'})\) are negative, \((\rho_{p' + n' + 1}, ..., \rho_{p' + n' + z'})\) are \(0\); Step 3: let the eigenvalues of \(M\) be \((\lambda_1, ..., \lambda_p, \lambda_{p + 1}, ..., \lambda_{p + n}, \lambda_{p + n + 1}, ..., \lambda_{p + n + z})\) and see that there is an invertible, \(O\), such that \(O^* M O = \Lambda\) where \(\Lambda\) is the diagonal matrix with the diagonal elements, \((\lambda_1, ..., \lambda_d)\); Step 4: see that \(z' = z\); Step 5: suppose that \(p \lt p'\), and find a contradiction, by taking the matrix, \(S := (O_1, ..., O_p, N_{p' + 1}, ..., N_{p' + n'})\), the map, \(f: \mathbb{C}^d \to \mathbb{C}^{p + n'}, v \mapsto S^* M v\), and a \(v_0 \in Ker (f)\), and seeing \({v_0}^* M v_0\) is negative and positive.
Step 1:
Let us see that the elements of \((\rho_1, ..., \rho_d)\) are indeed all reals.
\(P^* = (N^* M N)^* = N^* M^* {N^*}^*\), by the proposition that the Hermitian conjugate of the product of any complex matrices is the product of the Hermitian conjugates of the constituents in the reverse order, \(= N^* M N = P\).
Especially, \({P^*}^j_j = P^j_j\), which means that \(\overline{\rho_j} = \rho_j\), which means that \(\rho_j\) is real.
Step 2:
If \((\rho_1, ..., \rho_d)\) is not in any order such that \((\rho_1, ..., \rho_{p'})\) are positive, \((\rho_{p' + 1}, ..., \rho_{p' + n'})\) are negative, \((\rho_{p' + n' + 1}, ..., \rho_{p' + n' + z'})\) are \(0\), there is a permutation, \(\sigma: \{1, ..., d\} \to \{1, ..., d\}\), such that \((\rho_{\sigma_1}, ..., \rho_{\sigma_d})\) is in such an order.
Let us see that there is an invertible matrix, \(N'\), such that \((N N')^* M N N' = P'\) where \(P'\) is the diagonal matrix with the diagonal elements, \((\rho_{\sigma_1}, ..., \rho_{\sigma_d})\).
Let \(N'\) be such that \(N'^j_l = \delta_{j, \sigma_l}\), which means that the \(l\)-th column is such that only the \(\sigma_l\)-th row is \(1\) with the others \(0\); \(N'\) is invertible, because, iteratively using the proposition that the Laplace expansion of the determinant of any square matrix over any commutative ring holds and its corollary, as the 1st column of \(N'\) has the single \(1\), \(det N'\) is \(\pm\) the \((\sigma_1, 1)\) minor; as the 1st column of that minor again has a single \(1\), the minor is expanded likewise, and so on; after all, the last minor is \(1\), so, \(det N'\) is \(1\) or \(-1\).
\((N N')^* M N N' = N'^* N^* M N N' = N'^* P N'\), and \((N'^* P N')^j_l = {N'^*}^j_k P^k_m N'^m_l = \delta_{k, \sigma_j} P^k_m \delta_{m, \sigma_l} = P^{\sigma_j}_{\sigma_l} = \rho_{\sigma_j} \delta_{j, l}\) (the summation indices are written \(k, m\) in order not to clash with the signature components), which means that \((N N')^* M N N' = P'\).
So, if \(P\) exists, \(P'\) exists, and if the proposition holds for \(P'\), the proposition holds for \(P\), because \((\rho_1, ..., \rho_d)\) is just a permutation of \((\rho_{\sigma_1}, ..., \rho_{\sigma_d})\).
So, we can prove the proposition only for the case such that \((\rho_1, ..., \rho_{p'})\) are positive, \((\rho_{p' + 1}, ..., \rho_{p' + n'})\) are negative, \((\rho_{p' + n' + 1}, ..., \rho_{p' + n' + z'})\) are \(0\).
Step 3:
Let the eigenvalues of \(M\) be positive \((\lambda_1, ..., \lambda_p)\), negative \((\lambda_{p + 1}, ..., \lambda_{p + n})\), and \(0\) \((\lambda_{p + n + 1}, ..., \lambda_{p + n + z})\).
There is an invertible matrix, \(O\), such that \(O^* M O = \Lambda\) where \(\Lambda\) is the diagonal matrix with the diagonal elements, \((\lambda_1, ..., \lambda_d)\), because while there is a unitary matrix, \(U\), such that \(U^* M U\) is a diagonal matrix with the diagonal elements as an order of \((\lambda_1, ..., \lambda_d)\), as is well known, the order can be changed to \((\lambda_1, ..., \lambda_d)\) by an invertible matrix, as in Step 2.
Step 4:
So, we have \(N^* M N = P\) and \(O^* M O = \Lambda\), and \(P = N^* {O^*}^{-1} \Lambda O^{-1} N = N^* {O^{-1}}^* \Lambda O^{-1} N\), by the proposition that for any invertible complex matrix, the inverse of the Hermitian conjugate of the matrix is the Hermitian conjugate of the inverse of the matrix, \(=(O^{-1} N)^* \Lambda O^{-1} N\).
So, the rank of \(P\) equals the rank of \(\Lambda\), by the proposition that the rank of any matrix over any field is conserved by multiplying any invertible matrices from left and right.
But \(Rank (P) = p' + n'\) and \(Rank (\Lambda) = p + n\), and as \(d = p' + n' + z' = p + n + z\), \(z = z'\).
Step 5:
Let us suppose that \(p \lt p'\).
Let us take the \(d \times (p + n')\) matrix, \(S := (O_1, ..., O_p, N_{p' + 1}, ..., N_{p' + n'})\) where \(O_j\) is the \(j\)-th column of \(O\) and \(N_j\) is the \(j\)-th column of \(N\).
\(S^* = \begin{pmatrix} {O_1}^* \\ ... \\ {O_p}^* \\ {N_{p' + 1}}^* \\ ... \\ {N_{p' + n'}}^* \end{pmatrix}\).
Let us take the map, \(f: \mathbb{C}^d \to \mathbb{C}^{p + n'}, v \mapsto S^* M v\).
\(f\) is linear.
So, \(Nullity (f) = d - Rank (f)\), by the rank-nullity law for linear map between finite-dimensional vectors spaces.
But \(Rank (f) \le p + n' \lt p' + n'\).
So, \(z = z' = d - (p' + n') \lt d - Rank (f) = Nullity (f)\).
As \(O\) is invertible, \(\{O_1, ..., O_d\}\) is a basis for \(\mathbb{C}^d\), by the proposition that for any vectors space over any field and any square matrix over the field with dimension equal to or smaller than the dimension of the vectors space, the matrix is invertible if it maps a linearly-independent set of vectors to a linearly-independent set of vectors, and if the matrix is invertible, it maps any linearly-independent set of vectors to a linearly-independent set of vectors: \((O_1, ..., O_d)\) is the image of \((\begin{pmatrix} 1 \\ 0 \\ ... \\ 0 \end{pmatrix}, ..., \begin{pmatrix} 0 \\ 0 \\ ... \\ 1 \end{pmatrix})\), which is obviously linearly independent.
As \(N\) is invertible, \(\{N_1, ..., N_d\}\) is a basis for \(\mathbb{C}^d\), likewise.
So, for each \(v \in \mathbb{C}^d\), \(v = r^j O_j = s^j N_j\).
There is a nonzero \(v_0 = r^j O_j = s^j N_j \in Ker (f)\) such that \(r^j \neq 0\) for a \(1 \le j \le p + n\) and \(s^j \neq 0\) for a \(1 \le j \le p' + n'\): otherwise, each element of \(Ker (f)\) would be in \(Ker (f) \cap Span (\{O_{p + n + 1}, ..., O_d\})\) or in \(Ker (f) \cap Span (\{N_{p' + n' + 1}, ..., N_d\})\); neither of the 2 subspaces is the whole \(Ker (f)\), because \(Ker (f) \subseteq Span (\{O_{p + n + 1}, ..., O_d\})\) would mean that \(Nullity (f) \le z\), a contradiction, and likewise for \(Span (\{N_{p' + n' + 1}, ..., N_d\})\); but no vectors space is the union of any 2 proper subspaces, so, such a \(v_0\) exists.
\(f (v_0) = 0\), but \(f (v_0) = S^* M v_0 = S^* M r^j O_j = r^j S^* M O_j = \sum_{j \in \{1, ..., d\}} r^j \begin{pmatrix} {O_1}^* M O_j \\ ... \\ {O_p}^* M O_j \\ {N_{p' + 1}}^* M O_j \\ ... \\ {N_{p' + n'}}^* M O_j \end{pmatrix} = \sum_{j \in \{1, ..., d\}} r^j \begin{pmatrix} \lambda_j \delta_{1, j} \\ ... \\ \lambda_j \delta_{p, j} \\ {N_{p' + 1}}^* M O_j \\ ... \\ {N_{p' + n'}}^* M O_j \end{pmatrix}\).
So, for each \(1 \le l \le p\), \(\sum_{j \in \{1, ..., d\}} r^j \lambda_j \delta_{l, j} = 0\), but \(= r^l \lambda_l\), but as \(0 \lt \lambda_l\), \(r^l = 0\).
As \(r^j \neq 0\) for a \(1 \le j \le p + n\), \(r^j \neq 0\) for a \(p + 1 \le j \le p + n\).
Likewise, \(0 = f (v_0) = S^* M v_0 = S^* M s^j N_j = s^j S^* M N_j = \sum_{j \in \{1, ..., d\}} s^j \begin{pmatrix} {O_1}^* M N_j \\ ... \\ {O_p}^* M N_j \\ {N_{p' + 1}}^* M N_j \\ ... \\ {N_{p' + n'}}^* M N_j \end{pmatrix} = \sum_{j \in \{1, ..., d\}} s^j \begin{pmatrix} {O_1}^* M N_j \\ ... \\ {O_p}^* M N_j \\ \rho_j \delta_{p' + 1, j} \\ ... \\ \rho_j \delta_{p' + n', j} \end{pmatrix}\).
So, for each \(1 \le l \le n'\), \(\sum_{j \in \{1, ..., d\}} s^j \rho_j \delta_{p' + l, j} = 0\), but \(= s^{p' + l} \rho_{p' + l}\), but as \(\rho_{p' + l} \lt 0\), \(s^{p' + l} = 0\).
As \(s^j \neq 0\) for a \(1 \le j \le p' + n'\), \(s^j \neq 0\) for a \(1 \le j \le p'\).
Now, take \({v_0}^* M v_0 = (r^j O_j)^* M (r^l O_l) = \overline{r^j} r^l {O_j}^* M O_l\), but \({O_j}^* M O_l\) is in fact the \((j, l)\) component of \(O^* M O = \Lambda\), which is \(\lambda_j \delta_{j, l}\), so, \(= \sum_{j, l} \overline{r^j} r^l \lambda_j \delta_{j, l} = \sum_{j \in \{1, ..., d\}} \overline{r^j} r^j \lambda_j\), but as \(r^j = 0\) for each \(1 \le j \le p\), \(= \sum_{j \in \{p + 1, ..., d\}} \overline{r^j} r^j \lambda_j\), which is negative.
On the other hand, \({v_0}^* M v_0 = (s^j N_j)^* M (s^l N_l) = \overline{s^j} s^l {N_j}^* M N_l\), but \({N_j}^* M N_l\) is in fact the \((j, l)\) component of \(N^* M N = P\), which is \(\rho_j \delta_{j, l}\), so, \(= \sum_{j, l} \overline{s^j} s^l \rho_j \delta_{j, l} = \sum_{j \in \{1, ..., d\}} \overline{s^j} s^j \rho_j\), but as \(s^j = 0\) for each \(p' + 1 \le j \le p' + n'\), \(= \sum_{j \in \{1, ..., p'\}} \overline{s^j} s^j \rho_j\), which is positive.
So, we have found a contradiction.
So, \(p' \le p\).
But, by symmetry, \(p \le p'\).
So, \(p' = p\).
So, \(n' = n\).
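The proposition (together with the corollary in the Note) can be sanity-checked numerically (a sketch with numpy; the random matrices, the seed, and the tolerance are assumptions for illustration, and this is of course no substitute for the proof):

```python
import numpy as np

def signature(M, tol=1e-9):
    # eigvalsh returns the (real) eigenvalues of a Hermitian matrix
    ev = np.linalg.eigvalsh(M)
    return (int((ev > tol).sum()), int((ev < -tol).sum()),
            int((np.abs(ev) <= tol).sum()))

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
M = (A + A.conj().T) / 2  # a random Hermitian matrix
N = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))  # almost surely invertible
# N* M N is Hermitian and has the signature of M
print(signature(M) == signature(N.conj().T @ M @ N))  # True
```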
References
<The previous article in this series | The table of contents of this series | The next article in this series>
<The previous article in this series | The table of contents of this series | The next article in this series>
definition of signature of Hermitian matrix
Topics
About:
matrices space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a definition of signature of Hermitian matrix.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\( d\): \(\in \mathbb{N} \setminus \{0\}\)
\( M\): \(\in \{\text{ the } d \times d \text{ Hermitian matrices }\}\)
\( (\lambda_1, ..., \lambda_d)\): \(= \text{ the eigenvalues of } M \text{ including any duplicated roots }\)
\(*(p, n, z)\): \(p = \text{ the number of the positive eigenvalues including any duplicated roots }, n = \text{ the number of the negative eigenvalues including any duplicated roots }, z = \text{ the number of the zero eigenvalues including any duplicated roots }\)
//
Conditions:
//
2: Note
The dimension is denoted \(d\) here in order not to clash with the \(n\) of the signature.
The definition is well-defined, because the eigenvalues of \(M\) are all real, as a well-known fact.
There are indeed exactly \(d\) roots including any duplicated roots, as a well-known fact.
So, \(d = p + n + z\).
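For example (a minimal numerical sketch; the matrix and the tolerance are arbitrary choices):

```python
import numpy as np

M = np.array([[2., 0., 0.],
              [0., -1., 0.],
              [0., 0., 0.]])       # Hermitian (here, real symmetric)
ev = np.linalg.eigvalsh(M)         # the eigenvalues, all real
p = int((ev > 1e-9).sum())
n_neg = int((ev < -1e-9).sum())    # named n_neg to avoid clashing with the dimension
z = len(ev) - p - n_neg
print((p, n_neg, z))  # (1, 1, 1)
```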
References
<The previous article in this series | The table of contents of this series | The next article in this series>
<The previous article in this series | The table of contents of this series | The next article in this series>
definition of eigenvalues of square matrix over ring
Topics
About:
matrices space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a definition of eigenvalues of square matrix over ring.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\( R\): \(\in \{\text{ the rings }\}\)
\( n\): \(\in \mathbb{N} \setminus \{0\}\)
\( M\): \(\in \{\text{ the } n \times n R \text{ matrices }\}\)
\(*S\): \(= \text{ the set of the roots for } \lambda \text{ of } det (M - \lambda I) = 0\)
//
Conditions:
//
2: Note
When \(R\) is any field, \(S\) has at most \(n\) elements including any duplicate roots, by the proposition that over any field, any n-degree polynomial has at most n roots: \(det (M - \lambda I)\) is an \(n\)-degree polynomial.
When \(R = \mathbb{C}\), \(S\) has exactly \(n\) elements including any duplicate roots, as a well-known fact.
When \(R = \mathbb{C}\) and \(M\) is a Hermitian matrix, \(S \subseteq \mathbb{R}\), as a well-known fact.
When \(R = \mathbb{R}\) and \(M\) is a symmetric matrix, \(S\) has exactly \(n\) elements including any duplicate roots: regarding \(R = \mathbb{C}\), \(S\) has exactly \(n\) complex elements including any duplicate elements, as is mentioned above, but \(M\) is Hermitian, so, the elements are all real, so, \(S\) has exactly \(n\) real elements including any duplicate elements.
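A small numerical illustration (a sketch; the matrix is an arbitrary choice): for \(R = \mathbb{R}\) with the symmetric matrix below, the \(2\) roots of \(det (M - \lambda I) = \lambda^2 - 1 = 0\) are real, as claimed above.

```python
import numpy as np

M = np.array([[0., 1.],
              [1., 0.]])  # real symmetric
# the eigenvalues are the roots of det(M - λI) = λ² - 1
ev = sorted(np.linalg.eigvals(M).real)
print([round(x, 6) for x in ev])  # [-1.0, 1.0]
```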
References
<The previous article in this series | The table of contents of this series | The next article in this series>
<The previous article in this series | The table of contents of this series | The next article in this series>
description/proof of that rank of matrix over field is conserved by multiplying invertible matrices from left and right
Topics
About:
matrices space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a description and a proof of the proposition that the rank of any matrix over any field is conserved by multiplying any invertible matrices from left and right.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\(F\): \(\in \{\text{ the fields }\}\)
\(M\): \(\in \{\text{ the } m \times n F \text{ matrices }\}\)
\(M_1\): \(\in \{\text{ the invertible } m \times m F \text{ matrices }\}\)
\(M_2\): \(\in \{\text{ the invertible } n \times n F \text{ matrices }\}\)
//
Statements:
\(Rank (M) = Rank (M_1 M M_2)\)
//
2: Proof
Whole Strategy: use the proposition that for any linear map between any finite-dimensional vectors spaces, the rank of the map is the rank of the representative matrix with respect to any bases; Step 1: think of the linear map, \(f: F^n \to F^m\), of which \(M\) is the canonical representative matrix, and see that the rank of \(f\) is \(Rank (M)\); Step 2: think of the linear map, \(f_1: F^m \to F^m\), of which \(M_1\) is the canonical representative matrix, think of the linear map, \(f_2: F^n \to F^n\), of which \(M_2\) is the canonical representative matrix, think of the linear map, \(f': F^n \to F^m\), of which \(M_1 M M_2\) is the canonical representative matrix, and see that the rank of \(f'\) is \(Rank (M_1 M M_2)\); Step 3: see that \(Rank (f') = Rank (f)\).
Step 1:
Let us think of the linear map, \(f: F^n \to F^m\), of which \(M\) is the canonical representative matrix, which means that for each \(v \in F^n\), \(f (v) = M v^t\).
\(Rank (f) = Rank (M)\), by the proposition that for any linear map between any finite-dimensional vectors spaces, the rank of the map is the rank of the representative matrix with respect to any bases.
Step 2:
Let us think of the linear map, \(f_1: F^m \to F^m\), of which \(M_1\) is the canonical representative matrix, which means that for each \(v \in F^m\), \(f_1 (v) = M_1 v^t\).
Let us think of the linear map, \(f_2: F^n \to F^n\), of which \(M_2\) is the canonical representative matrix, which means that for each \(v \in F^n\), \(f_2 (v) = M_2 v^t\).
Let us think of the linear map, \(f': F^n \to F^m\), of which \(M_1 M M_2\) is the canonical representative matrix, which means that for each \(v \in F^n\), \(f' (v) = M_1 M M_2 v^t\).
\(Rank (f') = Rank (M_1 M M_2)\), by the proposition that for any linear map between any finite-dimensional vectors spaces, the rank of the map is the rank of the representative matrix with respect to any bases.
Step 3:
Let us see that \(Rank (f') = Rank (f)\).
\(f' = f_1 \circ f \circ f_2\).
As \(M_1\) is invertible, \(f_1\) is a 'vectors spaces - linear morphisms' isomorphism.
As \(M_2\) is invertible, \(f_2\) is a 'vectors spaces - linear morphisms' isomorphism.
So, \(Rank (f') = Rank (f)\), by the proposition that for any linear map between any vectors spaces, any 'vectors spaces - linear morphisms' isomorphism onto the domain of the linear map, and any 'vectors spaces - linear morphisms' isomorphism from any superspace of the codomain of the linear map, the composition of the linear map after the 1st isomorphism and before the 2nd isomorphism has the rank of the linear map.
So, \(Rank (M) = Rank (M_1 M M_2)\).
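A numerical sanity check of the proposition (a sketch with numpy; the example matrix and the random invertible matrices are assumptions for illustration):

```python
import numpy as np

M = np.array([[1., 2., 3.],
              [2., 4., 6.]])            # rank 1: the 2nd row is 2 × the 1st
rng = np.random.default_rng(1)
M1 = rng.standard_normal((2, 2))        # almost surely invertible
M2 = rng.standard_normal((3, 3))        # almost surely invertible
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(M1 @ M @ M2))  # 1 1
```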
References
<The previous article in this series | The table of contents of this series | The next article in this series>
<The previous article in this series | The table of contents of this series | The next article in this series>
description/proof of that for bounded interval on \(\mathbb{R}\) with Borel \(\sigma\)-algebra and Lebesgue measure and \(L^2\) space, integral of absolute function is equal to or smaller than square-root of measure of interval times square-root of integral of squared absolute function
Topics
About:
vectors space
The table of contents of this article
Starting Context
Target Context
-
The reader will have a description and a proof of the proposition that for any bounded interval on \(\mathbb{R}\) with the Borel \(\sigma\)-algebra of the Euclidean subspace topology and the Lebesgue measure and the \(L^2\) space over the measure space, the integral of any absolute function is equal to or smaller than the square-root of the measure of the interval times the square-root of the integral of the squared absolute function.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\(\mathbb{R}\): \(= \text{ the Euclidean topological space }\)
\((I, A, \lambda)\): \(I \in \{\text{ the bounded intervals of } \mathbb{R}\}\), \(= (l, h)\), \((l, h]\), \([l, h)\), or \([l, h]\), with the subspace topology, \(A = \text{ the Borel } \sigma \text{-algebra}\), and \(\lambda = \text{ the Lebesgue measure }\)
\(\mathbb{C}\): \(= \text{ the complex Euclidean topological space }\) with the Borel \(\sigma\)-algebra
\(\mathbb{R}\): \(= \text{ the Euclidean topological space }\) with the Borel \(\sigma\)-algebra
\(F\): \(\in \{\mathbb{C}, \mathbb{R}\}\)
\(L^2 (I, A, \lambda, F)\): \(= \text{ the } L^2 \text{ space over } (I, A, \lambda) \text{ into } F\)
\(f\): \(\in L^2 (I, A, \lambda, F)\)
//
Statements:
\(\int_I \vert f \vert d \lambda \le \sqrt{h - l} \sqrt{\int_I \vert f \vert^2 d \lambda}\)
//
2: Proof
Whole Strategy: Step 1: see that \(L^2 (I, A, \lambda, F)\) is a Hilbert space; Step 2: apply the Cauchy-Schwarz inequality to \(1, \vert f \vert \in L^2 (I, A, \lambda, F)\).
Step 1:
\(L^2 (I, A, \lambda, F)\) is a Hilbert space with the inner product, \(\langle f, g \rangle = \int_I f \overline{g} d \lambda\), by the proposition that \(L^2\) over any measure space with the canonical inner product is a Hilbert space.
Step 2:
\(1 \in L^2 (I, A, \lambda, F)\), where \(1\) is the constantly-1 function, because \(1\) is measurable (the preimage of any measurable subset of \(F\) is \(I\) or \(\emptyset\)) and \(\int_I \vert 1 \vert^2 d \lambda = h - l \lt \infty\).
\(\vert f \vert \in L^2 (I, A, \lambda, F)\) is a well-known fact.
Let us apply the Cauchy-Schwarz inequality for any real or complex inner-producted vectors space to \(1, \vert f \vert \in L^2 (I, A, \lambda, F)\): \(\vert \langle 1, \vert f \vert \rangle \vert \le \sqrt{\langle 1, 1 \rangle} \sqrt{\langle \vert f \vert, \vert f \vert \rangle}\).
\(\vert \langle 1, \vert f \vert \rangle \vert = \int_I 1 \vert f \vert d \lambda = \int_I \vert f \vert d \lambda\).
\(\sqrt{\langle 1, 1 \rangle} \sqrt{\langle \vert f \vert, \vert f \vert \rangle} = \sqrt{\int_I 1 1 d \lambda} \sqrt{\int_I \vert f \vert^2 d \lambda} = \sqrt{h - l} \sqrt{\int_I \vert f \vert^2 d \lambda}\).
So, \(\int_I \vert f \vert d \lambda \le \sqrt{h - l} \sqrt{\int_I \vert f \vert^2 d \lambda}\).
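A quadrature check (a sketch; the interval \(I = [0, 1]\) and the function \(f (x) = x\) are arbitrary choices): \(\int_I \vert f \vert d \lambda = 1 / 2\), while \(\sqrt{h - l} \sqrt{\int_I \vert f \vert^2 d \lambda} = \sqrt{1 / 3} \approx 0.577\).

```python
import numpy as np

l, h = 0.0, 1.0

def f(x):
    return x

k = 100000
x = l + (np.arange(k) + 0.5) * (h - l) / k  # midpoints for a Riemann sum
dx = (h - l) / k
lhs = np.sum(np.abs(f(x))) * dx                               # ≈ ∫_I |f| dλ = 1/2
rhs = np.sqrt(h - l) * np.sqrt(np.sum(np.abs(f(x))**2) * dx)  # ≈ √(1/3)
print(lhs <= rhs)  # True
```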
References
<The previous article in this series | The table of contents of this series | The next article in this series>