Ordinary Differential Equations/Peano's theorem

In this section, we drop the requirement of Lipschitz continuity. Even then, we still obtain the existence of solutions, but uniqueness is no longer guaranteed: one can construct very simple examples where $$f$$ is continuous but not Lipschitz continuous and the initial value problem has several solutions. On the other hand, it is also possible that $$f$$ is not Lipschitz continuous and uniqueness nevertheless holds; Lipschitz continuity is sufficient for uniqueness, not necessary.
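To make the non-uniqueness claim concrete, the following sketch checks a standard example (the concrete choice $$f(t, x) = 2\sqrt{|x|}$$ with $$x(0) = 0$$ is ours, not fixed by the text): both $$x \equiv 0$$ and $$x(t) = t^2$$ solve the same initial value problem.

```python
import math

# A classic continuous but non-Lipschitz right-hand side (our choice of
# example; any sqrt-type singularity at x = 0 behaves the same way):
#   f(t, x) = 2 * sqrt(|x|),  initial condition x(0) = 0.
def f(t, x):
    return 2.0 * math.sqrt(abs(x))

def residual(x, dx, ts):
    """Max of |x'(t) - f(t, x(t))| over the sample points ts."""
    return max(abs(dx(t) - f(t, x(t))) for t in ts)

ts = [k / 100.0 for k in range(101)]  # sample points in [0, 1]

# Candidate 1: x(t) = 0, with x'(t) = 0.
r1 = residual(lambda t: 0.0, lambda t: 0.0, ts)
# Candidate 2: x(t) = t^2, with x'(t) = 2t; note 2t = 2*sqrt(t^2) for t >= 0.
r2 = residual(lambda t: t * t, lambda t: 2.0 * t, ts)

print(r1, r2)  # both residuals vanish (up to rounding): two distinct solutions
```

Both candidates satisfy the differential equation and the initial condition, so the solution through $$(0, 0)$$ is not unique.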

Proof:

We aim to reduce the proof of this theorem to an application of the Arzelà–Ascoli theorem, a version of which we proved (without using the axiom of choice) in the beginning (theorem 2.3). Thus, we first define a set of functions which is uniformly bounded and equicontinuous, then pick a suitable sequence within that set and apply the Arzelà–Ascoli theorem to obtain a convergent subsequence. The limit of this subsequence will then be the desired function, as we will show by verifying all the required properties.

For each $$r \in (0, \gamma)$$, we define a function $$x_r$$ on $$[t_0 - r, t_0 + \gamma]$$ as follows: We set
 * $$x_r(t) := \begin{cases} x_0 & t \in [t_0 - r, t_0] \\ x_0 + \int_{t_0}^t f(s, x_r(s - r)) \, ds & t \ge t_0 \end{cases}$$

This formula inductively defines $$x_r$$ on all of $$[t_0 - r, t_0 + \gamma]$$: if we know $$x_r$$ on the interval $$[t_0 - r, t_0 + kr]$$, then for $$t \le t_0 + (k+1)r$$ the integrand only involves these known values (since $$s - r \le t_0 + kr$$ for $$s \le t$$), so we can compute $$x_r$$ on the interval $$[t_0 - r, t_0 + (k+1)r]$$. Furthermore, also by induction on $$k$$, we can prove that $$x_r(t)$$ is in fact contained within $$B_C(x_0)$$ for $$t \in [t_0 - r, t_0 + kr]$$; we need this in order for the integral formula to make sense in the first place.
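The inductive definition above is effectively an algorithm, and can be sketched numerically. The following is a minimal sketch in the scalar case: the integral is approximated by left-endpoint Riemann sums, and the step size $$h$$ and the concrete example ($$f(t, x) = x$$, $$x_0 = 1$$) are our assumptions, not part of the proof.

```python
import math

def delayed_approximation(f, t0, x0, r, T, h):
    """Tabulate x_r on [t0 - r, t0 + T], where x_r = x0 on [t0 - r, t0] and
    x_r(t) = x0 + integral_{t0}^{t} f(s, x_r(s - r)) ds for t >= t0.
    r and T must be integer multiples of the grid width h."""
    n_past = round(r / h)      # grid points in [t0 - r, t0)
    n_future = round(T / h)    # grid steps in [t0, t0 + T]
    xs = [x0] * (n_past + 1)   # x_r is constantly x0 on [t0 - r, t0]
    integral = 0.0
    for k in range(n_future):
        s = t0 + k * h
        # x_r(s - r) sits exactly n_past grid points in the past of s,
        # i.e. at index (n_past + k) - n_past = k: already computed.
        integral += f(s, xs[k]) * h
        xs.append(x0 + integral)
    ts = [t0 + (i - n_past) * h for i in range(len(xs))]
    return ts, xs

# Example: x' = x, x(0) = 1 on [0, 1]; the exact solution is exp(t).
ts, xs = delayed_approximation(lambda t, x: x, 0.0, 1.0, 0.01, 1.0, 0.001)
print(abs(xs[-1] - math.e))  # small: the delay r and the grid h each add O(r) + O(h) error
```

The key point mirrored from the proof: the value at each new grid point only uses values at least $$r$$ in the past, so no equation ever has to be solved, only evaluated.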

This we do as follows: assume the claim is true for some $$k$$, and let $$t \in [t_0, t_0 + (k+1)r]$$ (for $$t \in [t_0 - r, t_0]$$ we have $$x_r(t) = x_0$$ and there is nothing to prove). For $$s \in [t_0, t]$$ we have $$s - r \le t_0 + kr$$, so by the induction hypothesis $$x_r(s - r) \in B_C(x_0)$$ and hence $$\|f(s, x_r(s - r))\| \le M$$. Then
 * $$\begin{align} \left\| x_0 - x_r(t) \right\| = \left\| \int_{t_0}^t f(s, x_r(s - r)) \, ds \right\| & \le \int_{t_0}^t \|f(s, x_r(s - r))\| \, ds \\ & \le \int_{t_0}^t M \, ds = (t - t_0) M \\ & \le \gamma M \le \frac{C}{M} \cdot M = C, \end{align}$$

using $$\gamma \le C/M$$. Next, we extend $$x_r$$ to $$[t_0 - \gamma, t_0]$$ as follows:
 * $$x_r(t) := \begin{cases} x_0 & t \in [t_0 - r, t_0] \\ x_0 - \int_t^{t_0 - r} f(s, x_r(s + r)) \, ds & t \in [t_0 - \gamma, t_0 - r]. \end{cases}$$

This is a continuous extension of the "old" $$x_r$$: both the old and the new parts of the function are continuous, they coincide on all of $$[t_0 - r, t_0]$$, and the integral branch tends to $$x_0$$ as $$t \uparrow t_0 - r$$. The very same arguments for well-definedness apply (this time proceeding backwards in steps of length $$r$$, since the integrand only involves values of $$x_r$$ at least $$r$$ in the future), and we get the same estimate of $$\|x_0 - x_r(t)\|$$ as above, which hence holds on all of $$[t_0 - \gamma, t_0 + \gamma]$$. Thus, by the triangle inequality, we obtain uniform boundedness as follows:
 * $$\|x_r\|_\infty \le \|x_r - x_0\|_\infty + \|x_0\|_\infty \le C + \|x_0\|$$,

where we identified $$x_0$$ with the vector-valued function that is constantly $$x_0$$ on $$[t_0 - \gamma, t_0 + \gamma]$$; note that this bound does not depend on $$r$$.

We now prove equicontinuity. Let $$\epsilon > 0$$, and let $$t_1 \le t_2$$ with $$|t_1 - t_2| \le \delta$$, where $$\delta$$ is to be specified later. There are three cases to consider; we will only treat the first two, as the third is analogous.
 * 1) $$t_1, t_2 \in [t_0 - \gamma, t_0]$$
 * 2) $$t_1 \in [t_0 - \gamma, t_0]$$, $$t_2 \in [t_0, t_0 + \gamma]$$
 * 3) $$t_1, t_2 \in [t_0, t_0 + \gamma]$$

Case 1:

In this case,
 * $$\begin{align} \left\| x_r(t_1) - x_r(t_2) \right\| & \le \int_{t_1}^{t_2} \|f(s, x_r(s + r))\| \, ds \\ & \le |t_2 - t_1| M. \end{align}$$

Hence, choosing
 * $$\delta \le \epsilon / M$$

suffices to get $$\left\| x_r(t_1) - x_r(t_2) \right\| \le \epsilon$$.

Case 2:

In this case,
 * $$\begin{align} \left\| x_r(t_1) - x_r(t_2) \right\| & \le \left\| \int_{t_1}^{t_0} f(s, x_r(s + r)) \, ds \right\| + \left\| \int_{t_0}^{t_2} f(s, x_r(s - r)) \, ds \right\| \\ & \le \int_{t_1}^{t_2} \left( \| f(s, x_r(s + r)) \| + \| f(s, x_r(s - r)) \| \right) ds \\ & \le |t_2 - t_1| \, 2M, \end{align}$$

where we replaced $$f(s, x_r(s + r))$$ or $$f(s, x_r(s - r))$$ respectively by zero wherever the corresponding integral formula does not apply. Hence, choosing
 * $$\delta \le \epsilon / (2M)$$

suffices to get $$\left\| x_r(t_1) - x_r(t_2) \right\| \le \epsilon$$.
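The content of these estimates is a bound $$M |t_1 - t_2|$$ (respectively $$2M |t_1 - t_2|$$) that is uniform in $$r$$. This can be observed numerically on a concrete example; the sketch below uses our own choices $$f(t, x) = x$$, $$x_0 = 1$$, $$t \in [0, 1]$$ (forward direction only), where the tabulated values stay below $$e < 3$$, so $$M = 3$$ works.

```python
def tabulate_x_r(r, h=0.001):
    """Delayed approximation for f(t, x) = x, x0 = 1 on [0, 1],
    tabulated on a grid of width h (left-endpoint Riemann sums)."""
    n_past, n_future = round(r / h), round(1.0 / h)
    xs = [1.0] * (n_past + 1)
    integral = 0.0
    for k in range(n_future):
        integral += xs[k] * h  # f(s, x) = x, with delayed argument x_r(s - r)
        xs.append(1.0 + integral)
    return xs, h

# The difference quotient between any two grid points is an average of
# single-step quotients, so the largest single-step quotient bounds all
# of them; check it stays below M = 3 for several delays r at once.
M = 3.0
for r in (0.2, 0.1, 0.05):
    xs, h = tabulate_x_r(r)
    slope = max(abs(xs[k + 1] - xs[k]) / h for k in range(len(xs) - 1))
    assert slope <= M, (r, slope)
print("equicontinuity bound holds uniformly in r")
```

This is exactly what equicontinuity of the family requires: one modulus of continuity, valid for every $$r$$ simultaneously.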

This establishes equicontinuity; note that $$\delta$$ was chosen independently of $$r$$, as equicontinuity of the family requires. Now we seek to apply the Arzelà–Ascoli theorem. To this end, we define
 * $$r_n := \frac{1}{n + N}$$,

where $$N$$ is sufficiently large that $$r_n \in (0, \gamma)$$ for all $$n \in \mathbb N$$. The Arzelà–Ascoli theorem then yields a subsequence of the sequence $$x_{r_n}$$ which converges uniformly to a limit function $$x(t)$$ (which must hence be continuous, as uniform convergence preserves continuity). Denote this subsequence by $$y_k := x_{r_{n_k}}$$, so that $$y_k \to x$$. For all $$t \in [t_0, t_0 + \gamma]$$ we get the equation
 * $$y_k(t) = x_0 + \int_{t_0}^t f \left( s, y_k \left( s - \frac{1}{n_k + N} \right) \right) ds$$

and if we pass to the limit $$k \to \infty$$, we obtain that $$x$$ solves the problem on $$[t_0, t_0 + \gamma]$$. Indeed, $$y_k \left( s - \frac{1}{n_k + N} \right) \to x(s)$$ uniformly (combine the uniform convergence $$y_k \to x$$ with the equicontinuity of the family), hence $$f \left( s, y_k \left( s - \frac{1}{n_k + N} \right) \right) \to f(s, x(s))$$ uniformly by the uniform continuity of $$f$$ on compact sets, and theorem 2.5 allows us to interchange the limit and the integral. In the same manner, we get that $$x$$ solves the problem on $$[t_0 - \gamma, t_0]$$, and hence we have indeed constructed a solution on $$[t_0 - \gamma, t_0 + \gamma]$$.
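The passage to the limit can also be observed numerically: as $$r \to 0$$, the delayed approximations approach an actual solution. The sketch below measures the sup-distance on $$[0, 1]$$ between the delayed approximation and the exact solution $$e^t$$ of $$x' = x$$, $$x(0) = 1$$; the example and the grid width are our assumptions, not part of the proof.

```python
import math

def x_r_values(r, h=0.001):
    """Delayed approximation for f(t, x) = x, x0 = 1; returns the
    tabulated values of x_r on [0, 1] (grid of width h)."""
    n_past, n_future = round(r / h), round(1.0 / h)
    xs = [1.0] * (n_past + 1)
    integral = 0.0
    for k in range(n_future):
        integral += xs[k] * h  # left Riemann sum with delayed argument
        xs.append(1.0 + integral)
    return xs[n_past:]         # drop the constant part on [-r, 0]

h = 0.001
errors = []
for r in (0.2, 0.1, 0.05):
    xs = x_r_values(r, h)
    sup_dist = max(abs(x - math.exp(k * h)) for k, x in enumerate(xs))
    errors.append(sup_dist)

print(errors)  # decreasing: the smaller the delay r, the closer x_r is to exp
```

Of course, the proof above extracts only a subsequence and makes no claim about the rate of convergence; the monotone decrease seen here is a feature of this particular well-behaved example.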