2022-02-13

27: Local Unique Solution Existence for Euclidean-Normed Euclidean Vectors Space ODE with Initial Condition

<The previous article in this series | The table of contents of this series | The next article in this series>

description/proof of the local unique solution existence for Euclidean-normed Euclidean vectors space ODE with initial condition

Topics


About: vectors space

The table of contents of this article


Starting Context



Target Context


  • The reader will have a description and a proof of the local unique solution existence for a closed interval domain for a Euclidean-normed Euclidean vectors space ordinary differential equation with an initial condition with a clarification on the solution domain area.

Orientation


There is a list of definitions discussed so far in this site.

There is a list of propositions discussed so far in this site.


Main Body


1: Structured Description


Here are the rules of Structured Description.

Entities:
\(\mathbb{R}^d\): \(= \text{ the Euclidean-normed Euclidean vectors space }\)
\(\mathbb{R}\): \(= \text{ the Euclidean-normed Euclidean vectors space }\)
\(x_0\): \(\in \mathbb{R}^d\)
\(B_{x_0, K}\): \(\subseteq \mathbb{R}^d\)
\(r_0\): \(\in \mathbb{R}\)
\(J\): \(= [r_0 - \epsilon_1, r_0 + \epsilon_2] \subseteq \mathbb{R}\)
\(f\): \(: B_{x_0, K} \times J \to \mathbb{R}^d\), \(\in \{\text{ the } C^0 \text{ maps }\}\), such that \(\forall x_1, x_2 \in B_{x_0, K}, \forall r \in J (\Vert f (x_1, r) - f (x_2, r) \Vert \le L \Vert x_1 - x_2 \Vert)\)
\(M\): \(\in \mathbb{R}\), such that \(Sup (\{\Vert f (x, r) \Vert \vert x \in B_{x_0, K}, r \in J\}) \le M\)
\(p\): \(\in \mathbb{R}\), such that \(0 \le p \lt 1\)
\(B_{x_0, p K}\): \(\subseteq \mathbb{R}^d\)
\(x'_0\): \(\in B_{x_0, p K}\)
//

Statements:
\(\epsilon_j \le (1 - p) K / M \land \epsilon_j \lt L^{-1}\)
\(\implies\)
\(d x / d r = f (x, r)\) with \(x (r_0) = x'_0\) has the unique \(C^1\) solution, \(x: J \to B_{x_0, K}\)
//
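As a concrete, hypothetical 1-dimensional instance of these hypotheses (not part of the statement), take \(f (x, r) = x\) with \(x_0 = 1\) and \(K = 1\), so \(B_{x_0, K} = [0, 2]\); then \(L = 1\) and \(M = 2\) work, and an admissible \(\epsilon_j\) can be computed:

```python
# Hypothetical 1-d instance: f(x, r) = x, x_0 = 1, K = 1, so B_{x_0, K} = [0, 2].
# L = 1 is a Lipschitz constant and M = 2 upper-bounds |f| on the ball; with p = 0.5,
# any eps_j with eps_j <= (1 - p) K / M = 0.25 and eps_j < 1 / L = 1 is admissible.

import random

X0, K, p = 1.0, 1.0, 0.5

def f(x, r):
    return x

L = 1.0
M = max(abs(f(X0 - K, 0.0)), abs(f(X0 + K, 0.0)))  # sup of |x| over [0, 2] is 2

# Spot-check the Lipschitz estimate on random points of the ball
random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(X0 - K, X0 + K), random.uniform(X0 - K, X0 + K)
    assert abs(f(x1, 0.0) - f(x2, 0.0)) <= L * abs(x1 - x2)

eps = min((1 - p) * K / M, 1.0 / L) / 2            # a strictly admissible choice
assert eps <= (1 - p) * K / M and eps < 1.0 / L
```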


2: Note


It is called a "local solution" because usually, \(f\) is originally defined on a wider area, and \(B_{x_0, K}\) and \(J\) are chosen for the solution.

When \(f\) satisfies the Lipschitz estimate on \(B_{x_0, K} \times J\) and \(\Vert f (x, r) \Vert\) is upper-bounded there, the \(\epsilon_j\) s can always be chosen to satisfy the inequalities: if the inequalities do not hold for a \(J\) after \(L\) and \(M\) have been chosen, a narrower interval \(J'\) can be chosen to satisfy them with the same \(L\) and \(M\), because \(L\) and \(M\) do not need to be larger for the narrower domain.
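That shrinking argument can be sketched with hypothetical numbers: a first choice of \(\epsilon\) may violate the inequalities, but since \(L\) and \(M\) remain valid on any narrower \(J'\), \(\epsilon\) can simply be reduced:

```python
# Hypothetical numbers: the first eps fails the bounds, but since L and M stay valid
# on any narrower interval J', shrinking eps (i.e., shrinking J) makes both bounds hold.

K, M, L, p = 1.0, 4.0, 3.0, 0.5

eps = 0.5                                  # first attempt: violates both inequalities
assert not (eps <= (1 - p) * K / M and eps < 1.0 / L)

eps = 0.9 * min((1 - p) * K / M, 1.0 / L)  # narrow J; L and M are unchanged
assert eps <= (1 - p) * K / M and eps < 1.0 / L
```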

\(J\) depends on the choices of \(x_0\) and \(K\), because they determine \(M\) and \(L\), and \(K\) itself appears in the inequalities.

That is the reason why the local existence at every point in an interval does not guarantee the global solution existence for the entire interval (refer to another proposition).

Once \(x_0\), \(K\), and \(p\) have been determined, \(J\) does not depend on the choice of \(x'_0\): \(K\), \(M\), \(L\), and \(p\) are not changed by the choice of \(x'_0\).


3: Proof


Whole Strategy: Step 1: see that \(d x / d r = f (x, r)\) with \(x (r_0) = x'_0\) equals \(x (r) = x'_0 + \int^r_{r_0} f (x (s), s) d s\); Step 2: define \(Y := \{y: J \to B_{x_0, K} \in \{\text{ the } C^0 \text{ maps }\} \vert y (r_0) = x'_0\}\) with the metric, \(dist: Y \times Y \to \mathbb{R}, (y_1, y_2) \mapsto Sup (\{\Vert y_1 (r) - y_2 (r) \Vert \vert r \in J\})\), and see that \(Y\) is a complete metric space; Step 3: define \(g: Y \to Y, y \mapsto x'_0 + \int^r_{r_0} f (y (s), s) d s\), and see that \(g\) is a contraction, and take the unique fixed element, \(x\); Step 4: see that \(x\) is the unique solution.

Step 1:

Let us see that \(d x / d r = f (x, r)\) with \(x (r_0) = x'_0\) equals \(x (r) = x'_0 + \int^r_{r_0} f (x (s), s) d s\).

The claim is that if any \(x: J \to B_{x_0, K}\) satisfies one, \(x\) satisfies the other.

Let us suppose that \(d x / d r = f (x, r)\) with \(x (r_0) = x'_0\).

As \(d x / d r\) exists, \(x (r)\) is continuous, and as \(f\) is continuous, \(f (x (r), r)\) is continuous with respect to \(r\), so, we can take the integral, \(\int^s_{r_0} d x / d r d r = \int^s_{r_0} f (x (r), r) d r\), and the left hand side is \(x (s) - x (r_0) = x (s) - x'_0\), so, \(x (s) = x'_0 + \int^s_{r_0} f (x (r), r) d r\), which is nothing but \(x (r) = x'_0 + \int^r_{r_0} f (x (s), s) d s\).

Let us suppose that \(x (r) = x'_0 + \int^r_{r_0} f (x (s), s) d s\).

\(x (r)\) is differentiable, and \(d x (r) / d r = d (x'_0 + \int^r_{r_0} f (x (s), s) d s) / d r = f (x (r), r)\), and \(x (r_0) = x'_0 + \int^{r_0}_{r_0} f (x (s), s) d s = x'_0\).
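The equivalence can be spot-checked numerically for a hypothetical concrete case (not part of the proof): \(f (x, r) = x\) with \(x'_0 = 1\) and \(r_0 = 0\), whose solution is \(x (r) = e^r\); both the differential form and the integral form hold:

```python
# Hypothetical spot-check: f(x, r) = x, x'_0 = 1, r0 = 0, solution x(r) = exp(r).
import math

def f(x, r):
    return x

def x(r):
    return math.exp(r)

H = 1e-6
for r in (0.1, 0.2, 0.3):
    # differential form: dx/dr = f(x(r), r), via a central difference
    dxdr = (x(r + H) - x(r - H)) / (2 * H)
    assert abs(dxdr - f(x(r), r)) < 1e-6

def integral(r, n=10000):
    # trapezoid approximation of the integral of f(x(s), s) over [0, r]
    h = r / n
    return sum(0.5 * h * (f(x(i * h), i * h) + f(x((i + 1) * h), (i + 1) * h))
               for i in range(n))

for r in (0.1, 0.2, 0.3):
    # integral form: x(r) = x'_0 + the integral of f(x(s), s) over [0, r]
    assert abs(x(r) - (1.0 + integral(r))) < 1e-6
```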

Step 2:

Let us define \(Y := \{y: J \to B_{x_0, K} \in \{\text{ the } C^0 \text{ maps }\} \vert y (r_0) = x'_0\}\).

Let us give \(Y\) a metric, \(dist: Y \times Y \to \mathbb{R}, (y_1, y_2) \mapsto Sup (\{\Vert y_1 (r) - y_2 (r) \Vert \vert r \in J\})\), which is well-defined (the supremum is finite), because each \(y_j\) is into the bounded set \(B_{x_0, K}\).

Let us see that \(dist\) is indeed a metric.

Let \(y_1, y_2, y_3 \in Y\) be any.

1) \(0 \le dist (y_1, y_2)\) with the equality holding if and only if \(y_1 = y_2\): \(0 \le Sup (\{\Vert y_1 (r) - y_2 (r) \Vert \vert r \in J\})\); if \(y_1 = y_2\), \(Sup (\{\Vert y_1 (r) - y_2 (r) \Vert \vert r \in J\}) = Sup (\{\Vert 0 \Vert \vert r \in J\}) = Sup (\{0 \vert r \in J\}) = 0\); if \(Sup (\{\Vert y_1 (r) - y_2 (r) \Vert \vert r \in J\}) = 0\), \(\Vert y_1 (r) - y_2 (r) \Vert = 0\) for each \(r \in J\), so, \(y_1 (r) - y_2 (r) = 0\) for each \(r \in J\), so, \(y_1 = y_2\).

2) \(dist (y_1, y_2) = dist (y_2, y_1)\): \(Sup (\{\Vert y_1 (r) - y_2 (r) \Vert \vert r \in J\}) = Sup (\{\Vert y_2 (r) - y_1 (r) \Vert \vert r \in J\})\).

3) \(dist (y_1, y_3) \le dist (y_1, y_2) + dist (y_2, y_3)\): \(dist (y_1, y_3) = Sup (\{\Vert y_1 (r) - y_3 (r) \Vert \vert r \in J\}) = Sup (\{\Vert y_1 (r) - y_2 (r) + y_2 (r) - y_3 (r) \Vert \vert r \in J\}) \le Sup (\{\Vert y_1 (r) - y_2 (r) \Vert + \Vert y_2 (r) - y_3 (r) \Vert \vert r \in J\}) \le Sup (\{\Vert y_1 (r) - y_2 (r) \Vert \vert r \in J\}) + Sup (\{\Vert y_2 (r) - y_3 (r) \Vert \vert r \in J\})\), by the proposition that for any partially-ordered ring, any finite number of subsets with any same index set, and the subset as the sum of the subsets with the same index set, if the supremums of the subsets exist, the sum of the supremums of the subsets is an upper bound of the subset and if furthermore the supremum of the subset exists, the supremum of the subset is equal to or smaller than the sum of the supremums of the subsets and if the infimums of the subsets exist, the sum of the infimums of the subsets is a lower bound of the subset and if furthermore the infimum of the subset exists, the infimum of the subset is equal to or larger than the sum of the infimums of the subsets, \(= dist (y_1, y_2) + dist (y_2, y_3)\).

Let us see that \(Y\) with \(dist\) is complete.

Let \(s: \mathbb{N} \to Y\) be any Cauchy sequence.

Let \(\epsilon \in \mathbb{R}\) be any such that \(0 \lt \epsilon\).

There is an \(N \in \mathbb{N}\) such that for each \(j, l \in \mathbb{N}\) such that \(N \lt j, l\), \(dist (s (j), s (l)) \lt \epsilon\).

For each \(r \in J\), \(\Vert s (j) (r) - s (l) (r) \Vert \le Sup (\{\Vert s (j) (r') - s (l) (r') \Vert \vert r' \in J\}) = dist (s (j), s (l)) \lt \epsilon\), which means that \(s_r: \mathbb{N} \to \mathbb{R}^d, j \mapsto s (j) (r)\) is a Cauchy sequence, and as \(\mathbb{R}^d\) with the metric induced by the norm is complete (as is well known), \(s_r\) converges to a point of \(\mathbb{R}^d\), which induces the map, \(y: J \to \mathbb{R}^d\).

For each \(r \in J\) and each \(j\), \(\Vert y (r) - x_0 \Vert = \Vert y (r) - s (j) (r) + s (j) (r) - x_0 \Vert \le \Vert y (r) - s (j) (r) \Vert + \Vert s (j) (r) - x_0 \Vert \le \Vert y (r) - s (j) (r) \Vert + K\), and as \(\Vert y (r) - s (j) (r) \Vert\) can be made smaller than any \(\epsilon' \in \mathbb{R}\) such that \(0 \lt \epsilon'\) by taking \(j\) large, \(\Vert y (r) - x_0 \Vert \le K\). So, \(y\) is into \(B_{x_0, K}\).

\(\Vert s (j) (r) - s (l) (r) \Vert \le Sup (\{\Vert s (j) (r') - s (l) (r') \Vert \vert r' \in J\}) \lt \epsilon\) for each \(r \in J\), which means that \(s\) is a uniformly Cauchy sequence, so, \(s\) converges uniformly, by the proposition that any uniformly Cauchy sequence of maps from any set into any complete metric space converges uniformly, so, \(y\) is continuous, by the proposition that for any uniformly convergent sequence of continuous maps from any topological space into any metric space with the induced topology, the limit is continuous and the proposition that for any map between any normed vectors spaces, the map is continuous if and only if the map is continuous as the map between the topological spaces induced by the metrics induced by the norms.

\(y (r_0) = x'_0\), because \(s (j) (r_0) = x'_0\) for each \(j\).

So, \(y \in Y\).

\(s\) converges to \(y\) with respect to \(dist\): for any \(\epsilon \in \mathbb{R}\) such that \(0 \lt \epsilon\), there is an \(N \in \mathbb{N}\) such that for each \(j \in \mathbb{N}\) such that \(N \lt j\), \(\Vert s (j) (r) - y (r) \Vert \lt \epsilon / 2\) for each \(r \in J\), because \(s\) converges to \(y\) uniformly, so, \(dist (s (j), y) = Sup (\{\Vert s (j) (r) - y (r) \Vert \vert r \in J\}) \le \epsilon / 2 \lt \epsilon\).

So, \(Y\) is complete.
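A small numeric illustration of the sup metric \(dist\) (with hypothetical maps, sampled on a grid, not part of the proof): the Taylor partial sums of \(e^r\) on \(J = [0, 0.5]\) form a sequence whose sup-distance to the limit decreases to 0, i.e., a uniformly convergent sequence:

```python
# Hypothetical illustration of dist(y1, y2) = Sup({|y1(r) - y2(r)| : r in J}) on a grid:
# the Taylor partial sums of exp converge to exp uniformly on J = [0, 0.5].
import math

GRID = [0.5 * i / 1000 for i in range(1001)]

def partial_sum(n):
    # the n-th Taylor partial sum of exp, sampled on the grid
    return [sum(r ** k / math.factorial(k) for k in range(n + 1)) for r in GRID]

def dist(y1, y2):
    # the sup metric, approximated by the max over the grid
    return max(abs(a - b) for a, b in zip(y1, y2))

limit = [math.exp(r) for r in GRID]
ds = [dist(partial_sum(n), limit) for n in range(1, 8)]
assert all(b < a for a, b in zip(ds, ds[1:]))  # sup-distance strictly decreases
assert ds[-1] < 1e-5                           # and is already tiny at n = 7
```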

Step 3:

Let us define \(g: Y \to Y, y \mapsto x'_0 + \int^r_{r_0} f (y (s), s) d s\).

\(g\) is certainly into \(Y\), because \(\Vert (g (y)) (r) - x_0 \Vert = \Vert x'_0 - x_0 + \int^r_{r_0} f (y (s), s) d s \Vert \le \Vert x'_0 - x_0 \Vert + \Vert \int^r_{r_0} f (y (s), s) d s \Vert \lt p K + \vert \int^r_{r_0} \Vert f (y (s), s) \Vert d s \vert \le p K + \vert \int^r_{r_0} M d s \vert \le p K + M \epsilon_j \le p K + M (1 - p) K / M = K\); \(g (y)\) is continuous; and \(g (y) (r_0) = x'_0 + \int^{r_0}_{r_0} f (y (s), s) d s = x'_0\).

Let \(y_1, y_2 \in Y\) be any.

\(\Vert g (y_1) (r) - g (y_2) (r) \Vert = \Vert x'_0 + \int^r_{r_0} f (y_1 (s), s) d s - (x'_0 + \int^r_{r_0} f (y_2 (s), s) d s) \Vert = \Vert \int^r_{r_0} (f (y_1 (s), s) - f (y_2 (s), s)) d s \Vert \le \vert \int^r_{r_0} \Vert f (y_1 (s), s) - f (y_2 (s), s) \Vert d s \vert \le \vert \int^r_{r_0} L \Vert y_1 (s) - y_2 (s) \Vert d s \vert \le \vert \int^r_{r_0} L Sup (\{\Vert y_1 (s) - y_2 (s) \Vert \vert s \in J\}) d s \vert = L Sup (\{\Vert y_1 (s) - y_2 (s) \Vert \vert s \in J\}) \vert \int^r_{r_0} 1 d s \vert \le L Sup (\{\Vert y_1 (s) - y_2 (s) \Vert \vert s \in J\}) \epsilon_j = L \epsilon_j dist (y_1, y_2)\) where \(L \epsilon_j \lt 1\).

So, \(dist (g (y_1), g (y_2)) = Sup (\{\Vert g (y_1) (r) - g (y_2) (r) \Vert \vert r \in J\}) \le L \epsilon_j dist (y_1, y_2)\), so, \(g\) is a contraction.

So, there is the unique fixed element, \(x \in Y\), such that \(g (x) = x\), by the contraction mapping principle.

That means that \(x (r) = x'_0 + \int^r_{r_0} f (x (s), s) d s\).

\(x\) is \(C^1\), because \(d x / d r = f (x (r), r)\), which is continuous.
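Step 3 can be sketched numerically (a hypothetical 1-dimensional case, discretized on a grid, not part of the proof): with \(f (x, r) = x\), \(x'_0 = 1\), and \(J = [0, 0.5]\) (so \(L \epsilon_j = 0.5 \lt 1\)), iterating \(g\) contracts successive sup-distances by at least the factor \(L \epsilon_j\) and converges to the solution \(e^r\):

```python
# Hypothetical discretized Picard iteration: f(x, r) = x, x'_0 = 1, J = [0, 0.5],
# so the contraction factor is L * eps_j = 1 * 0.5 = 0.5; the fixed point is exp(r).
import math

N = 1000
H = 0.5 / N
GRID = [i * H for i in range(N + 1)]
X0P = 1.0

def f(x, r):
    return x

def g(y):
    # One Picard step: (g y)(r) = x'_0 + the integral of f(y(s), s) over [0, r],
    # computed cumulatively with the trapezoid rule.
    out, acc = [X0P], 0.0
    for i in range(1, N + 1):
        acc += 0.5 * H * (f(y[i - 1], GRID[i - 1]) + f(y[i], GRID[i]))
        out.append(X0P + acc)
    return out

def dist(y1, y2):
    return max(abs(a - b) for a, b in zip(y1, y2))

y, prev = [X0P] * (N + 1), None         # start from the constant map in Y
for _ in range(30):
    y_next = g(y)
    d = dist(y_next, y)
    if prev is not None:
        assert d <= 0.5 * prev + 1e-12  # successive gaps shrink by the factor 0.5
    y, prev = y_next, d

exact = [X0P * math.exp(r) for r in GRID]
assert dist(y, exact) < 1e-6            # the iterates converge to the solution exp(r)
```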

Step 4:

\(x\) is the unique solution, because any solution, \(\tilde{x}\), needs to satisfy \(\tilde{x} (r) = x'_0 + \int^r_{r_0} f (\tilde{x} (s), s) d s\), by Step 1, which means that \(g (\tilde{x}) = \tilde{x}\), so, \(\tilde{x}\) needs to be a fixed element, but the fixed element is unique, so, \(\tilde{x} = x\).


References


<The previous article in this series | The table of contents of this series | The next article in this series>