description/proof of that for linear map from Euclidean-normed Euclidean vectors space into itself, map is orthogonal iff representative matrix is orthogonal
Topics
About: vectors space
The table of contents of this article
Starting Context
- The reader knows a definition of Euclidean-normed Euclidean vectors space.
- The reader knows a definition of orthogonal linear map.
- The reader knows a definition of canonical representative matrix of linear map between finite-product-of-copies-of-field vectors spaces.
- The reader knows a definition of orthogonal matrix.
- The reader admits the proposition that for any polynomial over any field with any finite number of variables, if the polynomial is constantly \(0\), the coefficients are \(0\).
Target Context
- The reader will have a description and a proof of the proposition that for any linear map from any Euclidean-normed Euclidean vectors space into itself, the map is orthogonal if and only if the canonical representative matrix is orthogonal.
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\(d\): \(\in \mathbb{N} \setminus \{0\}\)
\(f\): \(: \mathbb{R}^d \to \mathbb{R}^d\), \(\in \{\text{ the linear maps }\}\)
\(B\): \(= \text{ the canonical basis for } \mathbb{R}^d\), \(= \{b_1, ..., b_d\}\)
\(M\): \(= \text{ the representative matrix of } f \text{ with respect to } B\)
//
Statements:
\(f \in \{\text{ the orthogonal maps }\}\)
\(\iff\)
\(M \in \{\text{ the orthogonal matrices }\}\)
//
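Before the proof, here is a quick numerical illustration of the statements: a minimal sketch in Python with NumPy, not part of the proof; the dimension \(d = 2\), the rotation and shear matrices, and the test vector are arbitrary choices for the illustration. For a rotation matrix, \(M^t M = I\) and the norm is preserved; for a shear matrix, both fail.

```python
import numpy as np

# A minimal numerical illustration (not part of the proof), with d = 2 and
# f represented by M with respect to the canonical basis, so f(v) = M v.

theta = 0.7
M_rotation = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])  # orthogonal
M_shear = np.array([[1.0, 2.0],
                    [0.0, 1.0]])                           # not orthogonal

v = np.array([3.0, -1.0])

# M^t M = I holds for the orthogonal matrix ...
print(np.allclose(M_rotation.T @ M_rotation, np.eye(2)))  # True
print(np.allclose(M_shear.T @ M_shear, np.eye(2)))        # False

# ... and exactly then the represented map preserves the Euclidean norm.
print(np.isclose(np.linalg.norm(M_rotation @ v), np.linalg.norm(v)))  # True
print(np.isclose(np.linalg.norm(M_shear @ v), np.linalg.norm(v)))     # False
```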
2: Proof
Whole Strategy: apply the proposition that for any polynomial over any field with any finite number of variables, if the polynomial is constantly \(0\), the coefficients are \(0\); Step 1: suppose that \(f\) is an orthogonal map; Step 2: see that \(M\) is an orthogonal matrix; Step 3: suppose that \(M\) is an orthogonal matrix; Step 4: see that \(f\) is an orthogonal map.
Step 1:
Let us suppose that \(f\) is an orthogonal map.
Step 2:
Let us see that \(M\) is an orthogonal matrix.
Let \(v \in \mathbb{R}^d\) be any.
\(\Vert f (v) \Vert^2 = (M v^t)^t (M v^t)\), by the definition of Euclidean norm, \(= v M^t M v^t\); on the other hand, \(\Vert f (v) \Vert^2 = \Vert v \Vert^2\), because \(f\) is orthogonal, and \(\Vert v \Vert^2 = v v^t = \sum_{j \in \{1, ..., d\}} v^j v^j = \sum_{l \in \{1, ..., d\}, j \in \{1, ..., d\}} v^l \delta^l_j v^j\).
Let \(N := M^t M\).
\(v N v^t = \sum_{l \in \{1, ..., d\}, j \in \{1, ..., d\}} v^l N^l_j v^j\).
So, \(\sum_{l \in \{1, ..., d\}, j \in \{1, ..., d\}} v^l N^l_j v^j = \sum_{l \in \{1, ..., d\}, j \in \{1, ..., d\}} v^l \delta^l_j v^j\), and \(\sum_{l \in \{1, ..., d\}, j \in \{1, ..., d\}} (v^l N^l_j v^j - v^l \delta^l_j v^j) = 0\), so, \(\sum_{l \in \{1, ..., d\}, j \in \{1, ..., d\}} (N^l_j - \delta^l_j) v^l v^j = 0\).
That holds constantly with respect to \((v^1, ..., v^d)\), so, the left hand side is a polynomial in \((v^1, ..., v^d)\) that is constantly \(0\), and by the proposition that for any polynomial over any field with any finite number of variables, if the polynomial is constantly \(0\), the coefficients are \(0\), each coefficient is \(0\): for each \(j\), the coefficient of \((v^j)^2\) is \(N^j_j - \delta^j_j = 0\); for each \(l \neq j\), the coefficient of the monomial \(v^l v^j\) is \((N^l_j - \delta^l_j) + (N^j_l - \delta^j_l) = 0\), but \(N = M^t M\) is symmetric and \(\delta^l_j = \delta^j_l\), so, \(2 (N^l_j - \delta^l_j) = 0\); so, in either case, \(N^l_j = \delta^l_j\).
So, \(M^t M = I\).
So, \(M\) is an orthogonal matrix.
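The coefficient-reading in Step 2 can be checked symbolically; here is a minimal sketch with SymPy for \(d = 2\), where the symbol names \(m11, ..., m22, v1, v2\) are hypothetical names chosen for the illustration: expanding \(v (M^t M - I) v^t\) and reading off the coefficients of \((v^1)^2\), \(v^1 v^2\), \((v^2)^2\) yields exactly the conditions \(N^l_j = \delta^l_j\), with the cross term carrying the factor \(2\) from the symmetry of \(N\).

```python
import sympy as sp

# A symbolic sketch of Step 2 for d = 2 (symbol names are arbitrary choices):
# expand v (M^t M - I) v^t as a polynomial in v1, v2 and read off coefficients.
v1, v2 = sp.symbols('v1 v2')
m11, m12, m21, m22 = sp.symbols('m11 m12 m21 m22')

M = sp.Matrix([[m11, m12], [m21, m22]])
v = sp.Matrix([[v1, v2]])               # v as a row vector, as in the article
N = M.T * M                             # N = M^t M, symmetric

p = sp.expand((v * (N - sp.eye(2)) * v.T)[0, 0])
poly = sp.Poly(p, v1, v2)

print(poly.coeff_monomial(v1**2))   # m11**2 + m21**2 - 1    = N^1_1 - 1
print(poly.coeff_monomial(v1*v2))   # 2*m11*m12 + 2*m21*m22  = 2 N^1_2
print(poly.coeff_monomial(v2**2))   # m12**2 + m22**2 - 1    = N^2_2 - 1

# If the polynomial is constantly 0, all these coefficients are 0, i.e. N = I.
```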
Step 3:
Let us suppose that \(M\) is an orthogonal matrix.
Step 4:
Let us see that \(f\) is an orthogonal map.
Let \(v \in \mathbb{R}^d\) be any.
\(\Vert f (v) \Vert^2 = \Vert M v^t \Vert^2 = (M v^t)^t (M v^t) = v M^t M v^t = v I v^t\), because \(M\) is orthogonal, \(= v v^t = \Vert v \Vert^2\).
So, \(f\) is an orthogonal map.
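The chain of equalities in Step 4 can also be checked numerically; here is a minimal sketch with NumPy, where the dimension \(d = 3\) and the use of a QR factorization to produce an orthogonal matrix are arbitrary choices for the illustration.

```python
import numpy as np

# A numerical sketch of the chain in Step 4: take an orthogonal M (the Q factor
# of a QR factorization is orthogonal) and compare the terms of the chain.
rng = np.random.default_rng(0)
M, _ = np.linalg.qr(rng.standard_normal((3, 3)))
v = rng.standard_normal(3)

lhs = np.linalg.norm(M @ v) ** 2        # ||f(v)||^2 = ||M v^t||^2
mid = v @ (M.T @ M) @ v                 # v M^t M v^t
rhs = np.linalg.norm(v) ** 2            # v v^t = ||v||^2

print(np.isclose(lhs, mid), np.isclose(mid, rhs))   # True True
```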