description/proof of that for any linear combination of sines and cosines with any distinct angular velocities, if it is constant, the coefficients are \(0\)
Topics
About: field
The table of contents of this article
Starting Context
- The reader knows a definition of sine function.
- The reader knows a definition of cosine function.
- The reader knows a definition of Vandermonde determinant.
- The reader admits Cramer's rule for any system of linear equations.
Target Context
- The reader will have a description and a proof of the proposition that for any linear combination of sines and cosines with any distinct angular velocities, if it is constant, the coefficients are \(0\).
Orientation
There is a list of definitions discussed so far in this site.
There is a list of propositions discussed so far in this site.
Main Body
1: Structured Description
Here are the rules of Structured Description.
Entities:
\(\{\omega_1, ..., \omega_n\}\): \(\subseteq \mathbb{R}\), such that it is distinct and \(0 \lt \omega_j\)
\(\{c_1, ..., c_n\}\): \(\subseteq \mathbb{R}\)
\(\{c'_1, ..., c'_n\}\): \(\subseteq \mathbb{R}\)
\(c\): \(\in \mathbb{R}\)
\(\{\theta_1, ..., \theta_n\}\): \(\subseteq \mathbb{R}\), such that \(0 \le \theta_j \lt 2 \pi\)
\(\{\theta'_1, ..., \theta'_n\}\): \(\subseteq \mathbb{R}\), such that \(0 \le \theta'_j \lt 2 \pi\) and \(\vert \theta'_j - \theta_j \vert \neq \pi / 2, 3 \pi / 2\)
\(f\): \(: \mathbb{R} \to \mathbb{R}, t \mapsto c_1 cos (\omega_1 t + \theta_1) + ... + c_n cos (\omega_n t + \theta_n) + c'_1 sin (\omega_1 t + \theta'_1) + ... + c'_n sin (\omega_n t + \theta'_n)\)
//
Statements:
\(f \equiv c\)
\(\implies\)
\(\{c_1, ..., c_n, c'_1, ..., c'_n, c\} = \{0\}\)
//
2: Note
\(cos (\omega_j t + \theta_j)\) and \(sin (\omega_j t + \theta'_j)\) do not really need to appear in pairs: if only \(c_j cos (\omega_j t + \theta_j)\) appears, that is just \(c_j cos (\omega_j t + \theta_j) + c'_j sin (\omega_j t + \theta'_j)\) with \(c'_j = 0\).
The condition, \(\vert \theta'_j - \theta_j \vert \neq \pi / 2, 3 \pi / 2\), is required, because otherwise, \(sin (\omega_j t + \theta'_j) = cos (\omega_j t + \theta_j) \text{ or } - cos (\omega_j t + \theta_j)\), and practically, the combination of \(c_j cos (\omega_j t + \theta_j)\) and \(c'_j sin (\omega_j t + \theta'_j)\) is a duplication.
This proposition seems obvious intuitively, but let us prove it for sure.
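As a quick numerical sanity check before the proof (not part of the argument), here is a minimal Python sketch; the choice of \(n = 2\), the angular velocities, and the coefficients are arbitrary illustrative assumptions: a combination with nonzero coefficients is visibly far from constant.

```python
# Minimal numerical sanity check (not part of the proof).
# Arbitrary illustrative choices: n = 2, distinct positive angular velocities,
# nonzero coefficients, all phases 0.
import numpy as np

omegas = np.array([1.0, 2.3])          # omega_1, omega_2 (distinct, positive)
c = np.array([0.7, -1.1])              # c_1, c_2
c_prime = np.array([0.4, 0.9])         # c'_1, c'_2

t = np.linspace(0.0, 10.0, 1001)
f = (c[:, None] * np.cos(omegas[:, None] * t)).sum(axis=0) \
    + (c_prime[:, None] * np.sin(omegas[:, None] * t)).sum(axis=0)

# f is far from constant: the spread of its sampled values is clearly positive.
print(f.max() - f.min())
```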
3: Proof
Whole Strategy: Step 1: 1st, suppose that \(\{\theta_1, ..., \theta_n\} = \{\theta'_1, ..., \theta'_n\} = \{0\}\); Step 2: take \(d f / d t (0) = ... = d^{2 n} f / d t^{2 n} (0) = 0\); Step 3: conclude that \(\{c_1, ..., c_n, c'_1, ..., c'_n, c\} = \{0\}\) for the Step 1 case; Step 4: then, reduce the general case to the Step 1 case; Step 5: conclude that \(\{c_1, ..., c_n, c'_1, ..., c'_n, c\} = \{0\}\) for the general case.
Step 1:
1st, let us suppose that \(\{\theta_1, ..., \theta_n\} = \{\theta'_1, ..., \theta'_n\} = \{0\}\).
Step 2:
\(d f / d t (0) = 0\), but \(d f / d t = - c_1 \omega_1 sin (\omega_1 t) - ... - c_n \omega_n sin (\omega_n t) + c'_1 \omega_1 cos (\omega_1 t) + ... + c'_n \omega_n cos (\omega_n t)\), and \(d f / d t (0) = - c_1 \omega_1 sin (0) - ... - c_n \omega_n sin (0) + c'_1 \omega_1 cos (0) + ... + c'_n \omega_n cos (0) = c'_1 \omega_1 + ... + c'_n \omega_n = 0\).
\(d^2 f / d t^2 (0) = 0\), but \(d^2 f / d t^2 = - c_1 {\omega_1}^2 cos (\omega_1 t) - ... - c_n {\omega_n}^2 cos (\omega_n t) - c'_1 {\omega_1}^2 sin (\omega_1 t) - ... - c'_n {\omega_n}^2 sin (\omega_n t)\), and \(d^2 f / d t^2 (0) = - c_1 {\omega_1}^2 cos (0) - ... - c_n {\omega_n}^2 cos (0) - c'_1 {\omega_1}^2 sin (0) - ... - c'_n {\omega_n}^2 sin (0) = - c_1 {\omega_1}^2 - ... - c_n {\omega_n}^2 = 0\).
\(d^3 f / d t^3 (0) = 0\), but \(d^3 f / d t^3 = c_1 {\omega_1}^3 sin (\omega_1 t) + ... + c_n {\omega_n}^3 sin (\omega_n t) - c'_1 {\omega_1}^3 cos (\omega_1 t) - ... - c'_n {\omega_n}^3 cos (\omega_n t)\), and \(d^3 f / d t^3 (0) = c_1 {\omega_1}^3 sin (0) + ... + c_n {\omega_n}^3 sin (0) - c'_1 {\omega_1}^3 cos (0) - ... - c'_n {\omega_n}^3 cos (0) = - c'_1 {\omega_1}^3 - ... - c'_n {\omega_n}^3 = 0\).
And so on, until \(d^{2 n} f / d t^{2 n} (0) = 0\).
After all, we have gotten, \({\omega_1}^{2 (1)} c_1 + ... + {\omega_n}^{2 (1)} c_n = 0\), ..., \({\omega_1}^{2 (n)} c_1 + ... + {\omega_n}^{2 (n)} c_n = 0\) and \(c'_1 {\omega_1}^{2 (0) + 1} + ... + c'_n {\omega_n}^{2 (0) + 1} = 0\), ..., \(c'_1 {\omega_1}^{2 (n - 1) + 1} + ... + c'_n {\omega_n}^{2 (n - 1) + 1} = 0\).
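As a check of the pattern above (not part of the proof), here is a small symbolic sketch with sympy for the hypothetical case \(n = 2\); the symbol names are arbitrary, with \(d_1, d_2\) standing for \(c'_1, c'_2\).

```python
# Symbolic check of the derivatives at 0 used in Step 2 (not part of the proof).
# Hypothetical case n = 2 with all phases 0; d1, d2 stand for c'_1, c'_2.
import sympy as sp

t = sp.symbols('t')
w1, w2 = sp.symbols('omega_1 omega_2', positive=True)
c1, c2, d1, d2 = sp.symbols('c_1 c_2 d_1 d_2')

f = c1*sp.cos(w1*t) + c2*sp.cos(w2*t) + d1*sp.sin(w1*t) + d2*sp.sin(w2*t)

for k in range(1, 5):  # k = 1, ..., 2 n
    print(k, sp.diff(f, t, k).subs(t, 0))
# odd k:  +/- (d_1 omega_1^k + d_2 omega_2^k)
# even k: +/- (c_1 omega_1^k + c_2 omega_2^k)
```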
Step 3:
Let us take \(M := \begin{pmatrix} {\omega_1}^{2 (1)} & ... & {\omega_n}^{2 (1)} \\ ... \\ {\omega_1}^{2 (n)} & ... & {\omega_n}^{2 (n)} \end{pmatrix}\) and \(M' := \begin{pmatrix} {\omega_1}^{2 (0) + 1} & ... & {\omega_n}^{2 (0) + 1} \\ ... \\ {\omega_1}^{2 (n - 1) + 1} & ... & {\omega_n}^{2 (n - 1) + 1} \end{pmatrix}\).
\(M \begin{pmatrix} c_1 \\ ... \\ c_n \end{pmatrix} = 0\) and \(M' \begin{pmatrix} c'_1 \\ ... \\ c'_n \end{pmatrix} = 0\).
But \(det M = {\omega_1}^{2 (1)} det \begin{pmatrix} 1 & ... & {\omega_n}^{2 (1)} \\ {\omega_1}^{2 (1)} & ... & {\omega_n}^{2 (2)} \\ ... \\ {\omega_1}^{2 (n - 1)} & ... & {\omega_n}^{2 (n)} \end{pmatrix}\), by the linearity of the determinant in each column, and likewise, \(= ... = {\omega_1}^{2 (1)} ... {\omega_n}^{2 (1)} det \begin{pmatrix} 1 & ... & 1 \\ {\omega_1}^{2 (1)} & ... & {\omega_n}^{2 (1)} \\ ... \\ {\omega_1}^{2 (n - 1)} & ... & {\omega_n}^{2 (n - 1)} \end{pmatrix} = {\omega_1}^{2 (1)} ... {\omega_n}^{2 (1)} D_n ({\omega_1}^2, ..., {\omega_n}^2)\), where \(D_n\) is the Vandermonde determinant, and as \(\{\omega_1, ..., \omega_n\}\) is distinct and \(0 \lt \omega_j\), \(\{{\omega_1}^2, ..., {\omega_n}^2\}\) is distinct, and \(det M \neq 0\).
\(det M' = \omega_1 det \begin{pmatrix} 1 & ... & \omega_n \\ {\omega_1}^{2 (1)} & ... & {\omega_n}^{2 (1) + 1} \\ ... \\ {\omega_1}^{2 (n - 1)} & ... & {\omega_n}^{2 (n - 1) + 1} \end{pmatrix}\), by the linearity of the determinant in each column, and likewise, \(= ... = \omega_1 ... \omega_n det \begin{pmatrix} 1 & ... & 1 \\ {\omega_1}^{2 (1)} & ... & {\omega_n}^{2 (1)} \\ ... \\ {\omega_1}^{2 (n - 1)} & ... & {\omega_n}^{2 (n - 1)} \end{pmatrix} = \omega_1 ... \omega_n D_n ({\omega_1}^2, ..., {\omega_n}^2)\), and \(det M' \neq 0\), likewise.
By Cramer's rule for any system of linear equations, \(\{c_1, ..., c_n\} = \{c'_1, ..., c'_n\} = \{0\}\).
Inevitably, \(c = 0\), because \(c = f (0) = c_1 + ... + c_n = 0\).
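As a numerical check of the determinant factorization above (not part of the proof), here is a small sketch with numpy; the choice of \(n = 3\) and the angular velocities are arbitrary illustrative assumptions.

```python
# Numerical check of det M = omega_1^2 ... omega_n^2 * D_n(omega_1^2, ..., omega_n^2)
# (not part of the proof). Arbitrary illustrative choice: n = 3.
import numpy as np

omegas = np.array([0.7, 1.3, 2.9])     # distinct, positive
n = len(omegas)

# M has (i, j) entry omega_j^{2 i} for i = 1, ..., n and j = 1, ..., n.
M = np.array([[w**(2 * i) for w in omegas] for i in range(1, n + 1)])

sq = omegas**2
# Vandermonde determinant D_n(omega_1^2, ..., omega_n^2) = prod_{i < j} (sq_j - sq_i)
vandermonde = np.prod([sq[j] - sq[i] for i in range(n) for j in range(i + 1, n)])

print(np.linalg.det(M))                # the determinant computed directly
print(np.prod(sq) * vandermonde)       # the factorized form; the two agree
```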
Step 4:
Then, let us deal with the general case.
\(c_j cos (\omega_j t + \theta_j) + c'_j sin (\omega_j t + \theta'_j) = c_j (cos (\omega_j t) cos (\theta_j) - sin (\omega_j t) sin (\theta_j)) + c'_j (sin (\omega_j t) cos (\theta'_j) + cos (\omega_j t) sin (\theta'_j)) = (c_j cos (\theta_j) + c'_j sin (\theta'_j)) cos (\omega_j t) + (- c_j sin (\theta_j) + c'_j cos (\theta'_j)) sin (\omega_j t)\).
So, the general case is reduced to the Step 1 case with \(c_j cos (\theta_j) + c'_j sin (\theta'_j)\) and \(- c_j sin (\theta_j) + c'_j cos (\theta'_j)\) instead of \(c_j\) and \(c'_j\).
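As a check of this reduction identity (not part of the proof), here is a small symbolic sketch with sympy; the symbol names are arbitrary.

```python
# Symbolic check of the reduction identity of Step 4 (not part of the proof).
import sympy as sp

t, w, th, thp = sp.symbols('t omega theta theta_prime', real=True)
c, cp = sp.symbols('c c_prime', real=True)

lhs = c * sp.cos(w * t + th) + cp * sp.sin(w * t + thp)
rhs = (c * sp.cos(th) + cp * sp.sin(thp)) * sp.cos(w * t) \
    + (-c * sp.sin(th) + cp * sp.cos(thp)) * sp.sin(w * t)

print(sp.simplify(sp.expand_trig(lhs) - rhs))  # prints 0
```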
Step 5:
By Step 3, \(c_j cos (\theta_j) + c'_j sin (\theta'_j) = 0\) and \(- c_j sin (\theta_j) + c'_j cos (\theta'_j) = 0\).
By multiplying the 1st equation by \(sin (\theta_j)\), multiplying the 2nd equation by \(cos (\theta_j)\), and adding the 2 equations, \(c'_j sin (\theta'_j) sin (\theta_j) + c'_j cos (\theta'_j) cos (\theta_j) = 0\), but the left hand side is \(c'_j (sin (\theta'_j) sin (\theta_j) + cos (\theta'_j) cos (\theta_j))\).
The condition, \(\vert \theta'_j - \theta_j \vert \neq \pi / 2, 3 \pi / 2\), implies that \(sin (\theta'_j) sin (\theta_j) + cos (\theta'_j) cos (\theta_j) \neq 0\), because \(sin (\theta'_j) sin (\theta_j) + cos (\theta'_j) cos (\theta_j) = cos (\theta'_j - \theta_j)\), which is \(0\) only if \(\vert \theta'_j - \theta_j \vert = \pi / 2 \text{ or } 3 \pi / 2\), as \(\vert \theta'_j - \theta_j \vert \lt 2 \pi\).
So, \(c'_j = 0\).
So, \(c_j = 0\), because the 2 equations become \(c_j cos (\theta_j) = 0\) and \(c_j sin (\theta_j) = 0\), and \(cos (\theta_j)\) and \(sin (\theta_j)\) are not both \(0\).
So, inevitably, \(c = 0\).
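As a check of the elimination in this step (not part of the proof), here is a small symbolic sketch with sympy: the coefficient matrix of the 2 equations of Step 5 has determinant \(cos (\theta'_j - \theta_j)\), so the system has only the zero solution exactly when \(\vert \theta'_j - \theta_j \vert \neq \pi / 2, 3 \pi / 2\); the symbol names are arbitrary.

```python
# Symbolic check that the 2 x 2 system of Step 5 is nonsingular
# when |theta'_j - theta_j| != pi/2, 3 pi/2 (not part of the proof).
import sympy as sp

th, thp = sp.symbols('theta theta_prime', real=True)

# Coefficient matrix of the equations
#   c_j cos(theta) + c'_j sin(theta') = 0
#   - c_j sin(theta) + c'_j cos(theta') = 0
A = sp.Matrix([[sp.cos(th), sp.sin(thp)],
               [-sp.sin(th), sp.cos(thp)]])

# prints cos(theta - theta_prime), which equals cos(theta_prime - theta)
print(sp.simplify(A.det()))
```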