Partial Differential Equations/Test functions

Motivation
Before we dive into the chapter, let us first motivate the notion of a test function. Consider two functions which are piecewise constant on the intervals $$[0, 1), [1, 2), [2, 3), [3, 4), [4, 5)$$ and zero elsewhere (see the two example step functions in the figures Example_for_step_functions_1.svg and Example_for_step_functions_2.svg). Let's call the left function $$f_1$$ and the right function $$f_2$$.

Of course we can easily see that the two functions are different: they differ on the interval $$[4, 5)$$. However, let's pretend that we are blind and that our only way of finding out something about either function is to evaluate the integrals
 * $$\int_{\mathbb R} \varphi(x) f_1(x) dx$$ and $$\int_{\mathbb R} \varphi(x) f_2(x) dx$$

for functions $$\varphi$$ in a given set of functions $$\mathcal X$$.

We now choose $$\mathcal X$$ cleverly enough that five evaluations of each integral suffice to show that $$f_1 \neq f_2$$. To do so, we first introduce the characteristic function. Let $$A \subseteq \mathbb R$$ be any set. The characteristic function of $$A$$ is defined as
 * $$\chi_A(x) := \begin{cases} 1 & x \in A \\ 0 & x \notin A \end{cases}$$

With this definition, we choose the set of functions $$\mathcal X$$ as
 * $$\mathcal X := \{\chi_{[0, 1)}, \chi_{[1, 2)}, \chi_{[2, 3)}, \chi_{[3, 4)}, \chi_{[4, 5)}\}$$

It is easy to see (see exercise 1) that for $$n \in \{1, 2, 3, 4, 5\}$$, the expression
 * $$\int_{\mathbb R} \chi_{[n-1, n)} (x) f_1 (x) dx$$

equals the value of $$f_1$$ on the interval $$[n - 1, n)$$, and the same is true for $$f_2$$. But as both functions are uniquely determined by their values on the intervals $$[n - 1, n), n \in \{1, 2, 3, 4, 5\}$$ (since they are zero everywhere else), we can implement the following equality test:
 * $$f_1 = f_2 \Leftrightarrow \forall \varphi \in \mathcal X : \int_{\mathbb R} \varphi(x) f_1(x) dx = \int_{\mathbb R} \varphi(x) f_2(x) dx$$

This requires five evaluations of each integral, since $$\# \mathcal X = 5$$.
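To make the procedure concrete, here is a small numerical sketch of the five-evaluation test (the step values of $$f_1$$ and $$f_2$$ below are hypothetical; only their disagreement on $$[4, 5)$$ matters):

```python
import numpy as np

# Hypothetical step values on [0,1), [1,2), [2,3), [3,4), [4,5)
f1_vals = [2.0, 1.0, 3.0, 0.5, 2.5]
f2_vals = [2.0, 1.0, 3.0, 0.5, 1.0]  # differs from f1 only on [4, 5)

def step(vals, x):
    """Evaluate a step function taking vals[n] on [n, n+1) and 0 elsewhere."""
    n = np.floor(x).astype(int)
    inside = (x >= 0) & (x < 5)
    return np.where(inside, np.take(vals, np.clip(n, 0, 4)), 0.0)

def integrate_against_chi(vals, n, num=100_000):
    """Riemann sum of chi_[n-1, n) * f over R (only [n-1, n) contributes)."""
    x = np.linspace(n - 1, n, num, endpoint=False)
    return np.sum(step(np.array(vals), x)) * (1 / num)

# Five evaluations per function suffice to distinguish f1 from f2
tests = [(n, integrate_against_chi(f1_vals, n), integrate_against_chi(f2_vals, n))
         for n in range(1, 6)]
equal = all(abs(a - b) < 1e-9 for _, a, b in tests)
print(equal)  # False: the integrals against chi_[4, 5) differ
```

Since each $$\chi_{[n-1, n)}$$ isolates one interval, each integral simply reads off the corresponding step value.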

Since we used the functions in $$\mathcal X$$ to test $$f_1$$ and $$f_2$$, we call them test functions. What we ask ourselves now is whether this notion generalises from functions like $$f_1$$ and $$f_2$$, which are piecewise constant on certain intervals and zero everywhere else, to continuous functions. The following sections show that it does.

Bump functions
In order to state the definition of a bump function more concisely, we need the following two definitions:

Now we are ready to define a bump function concisely:

These two properties make the function really look like a bump, as the following example shows:



Example 3.4: The standard mollifier $$\eta$$, given by
 * $$\eta: \mathbb R^d \to \mathbb R, \eta(x) = \frac{1}{c}\begin{cases} e^{-\frac{1}{1-\|x\|^2}} & \text{ if } \|x\| < 1 \\ 0 & \text{ if } \|x\| \geq 1 \end{cases}$$

where $$c := \int_{B_1(0)} e^{-\frac{1}{1-\|x\|^2}} dx$$, is a bump function (see exercise 2).
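The following sketch illustrates example 3.4 numerically in dimension $$d = 1$$ (a choice we make for simplicity); the constant $$c$$ is approximated by a Riemann sum:

```python
import numpy as np

def unnormalized_eta(x):
    """exp(-1 / (1 - x^2)) inside the open unit ball, 0 outside (d = 1)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

# Normalizing constant c = integral over B_1(0), via a Riemann sum
xs = np.linspace(-1.0, 1.0, 200_001)
dx = xs[1] - xs[0]
c = unnormalized_eta(xs).sum() * dx

def eta(x):
    return unnormalized_eta(x) / c

# eta vanishes outside the closed unit ball and integrates to 1
vals_outside = eta(np.array([1.5, -2.0]))
print(vals_outside)            # [0. 0.]
print(eta(xs).sum() * dx)      # approximately 1.0
```

Smoothness across the boundary $$\|x\| = 1$$ is what the exponential factor buys: all derivatives tend to $$0$$ there, which is exactly what exercise 2 asks you to prove.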

Schwartz functions
As with bump functions, in order to state the definition of Schwartz functions concisely, we first need two helpful definitions.

Now we are ready to define a Schwartz function.

By $$x^\alpha \partial_\beta \phi$$ we mean the function $$x \mapsto x^\alpha \partial_\beta \phi(x)$$.

Example 3.8: The function
 * $$f: \mathbb R^2 \to \mathbb R, f(x, y) = e^{-x^2-y^2}$$

is a Schwartz function.
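Rapid decay can be spot-checked numerically. The sketch below evaluates one seminorm of example 3.8, for our arbitrarily chosen multiindices $$\alpha = (3, 0)$$ and $$\beta = (1, 0)$$; this is an illustration, not a proof:

```python
import numpy as np

# For f(x, y) = exp(-x^2 - y^2) and alpha = (3, 0), beta = (1, 0):
# x^alpha * d_beta f = x^3 * (-2x) * exp(-x^2 - y^2) = -2 x^4 exp(-x^2 - y^2).
x = np.linspace(-50.0, 50.0, 2001)
y = np.linspace(-50.0, 50.0, 2001)
X, Y = np.meshgrid(x, y, sparse=True)
vals = np.abs(-2.0 * X ** 4 * np.exp(-X ** 2 - Y ** 2))
sup = vals.max()
print(sup)  # finite; the true supremum is 8 / e^2, about 1.0827
```

Even though $$x^3$$ grows without bound, the Gaussian decay wins, so the supremum over the whole (truncated) plane stays finite.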

In particular, this means that the standard mollifier is a Schwartz function.

Proof:

Let $$\varphi$$ be a bump function. Then, by the definition of a bump function, $$\varphi \in \mathcal C^\infty(\mathbb R^d)$$ and $$\text{supp } \varphi$$ is compact. Since in $$\mathbb R^d$$ a set is compact iff it is closed and bounded, we may choose $$R > 0$$ such that
 * $$\text{supp } \varphi \subseteq \overline{B_R(0)}$$

Further, for arbitrary $$\alpha, \beta \in \mathbb N_0^d$$,
 * $$\begin{align} \|x^\alpha \partial_\beta \varphi(x)\|_\infty & := \sup_{x \in \mathbb R^d} |x^\alpha \partial_\beta \varphi(x)| & \\ & = \sup_{x \in \overline{B_R(0)}} |x^\alpha \partial_\beta \varphi(x)| & \text{supp } \varphi \subseteq \overline{B_R(0)} \\ & = \sup_{x \in \overline{B_R(0)}} \left( |x^\alpha| |\partial_\beta \varphi(x)| \right) & \text{rules for absolute value} \\ & \le \sup_{x \in \overline{B_R(0)}} \left( R^{|\alpha|} |\partial_\beta \varphi(x)| \right) & \forall i \in \{1, \ldots, d\}, (x_1, \ldots, x_d) \in \overline{B_R(0)} : |x_i| \le R \\ & < \infty & \text{extreme value theorem} \end{align}$$

Hence all these suprema are finite, so $$\varphi$$ is a Schwartz function.
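The central estimate above, $$\sup |x^\alpha \partial_\beta \varphi| \le R^{|\alpha|} \sup |\partial_\beta \varphi|$$, can be spot-checked numerically. The sketch below uses $$d = 1$$, the unnormalized standard mollifier as $$\varphi$$, and a finite-difference derivative (all three are our choices):

```python
import numpy as np

def bump(x):
    """Unnormalized standard mollifier in d = 1: a bump supported in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

R = 1.0                               # supp bump is contained in the closed B_R(0)
xs = np.linspace(-1.5, 1.5, 60_001)
dphi = np.gradient(bump(xs), xs)      # finite-difference approximation of phi'

a = 3                                 # |alpha| = 3, beta = (1,)
lhs = np.max(np.abs(xs ** a * dphi))  # sup |x^a * phi'(x)|
rhs = R ** a * np.max(np.abs(dphi))   # R^a * sup |phi'(x)|
print(lhs <= rhs)  # True
```

Outside the support the derivative vanishes, so the supremum is attained inside the ball, where $$|x|^a \le R^a$$; that is the whole content of the inequality.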

Convergence of bump and Schwartz functions
Now we define what convergence of a sequence of bump (Schwartz) functions to a bump (Schwartz) function means.

Proof:

Let $$O \subseteq \mathbb R^d$$ be open, and let $$(\varphi_l)_{l \in \mathbb N}$$ be a sequence in $$\mathcal D(O)$$ such that $$\varphi_l \to \varphi \in \mathcal D(O)$$ with respect to the notion of convergence of $$\mathcal D(O)$$. Let thus $$K \subset \mathbb R^d$$ be a compact set containing all the supports $$\text{supp } \varphi_l$$. It follows that also $$\text{supp } \varphi \subseteq K$$: otherwise $$\|\varphi_l - \varphi\|_\infty \ge |c|$$ for all $$l$$, where $$c \in \mathbb R$$ is any nonzero value that $$\varphi$$ takes outside $$K$$, contradicting $$\varphi_l \to \varphi$$ with respect to our notion of convergence.

In $$\mathbb R^d$$, ‘compact’ is equivalent to ‘bounded and closed’. Therefore, $$K \subset B_R(0)$$ for some $$R > 0$$, and hence for all multiindices $$\alpha, \beta \in \mathbb N_0^d$$:
 * $$\begin{align} \|x^\alpha \partial_\beta \varphi_l - x^\alpha \partial_\beta \varphi\|_\infty &= \sup_{x \in \mathbb R^d} \left| x^\alpha \partial_\beta \varphi_l(x) - x^\alpha \partial_\beta \varphi(x) \right| & \text{definition of the supremum norm} \\ &= \sup_{x \in B_R(0)} \left| x^\alpha \partial_\beta \varphi_l(x) - x^\alpha \partial_\beta \varphi(x) \right| & \text{as } \text{supp } \varphi_l, \text{supp } \varphi \subseteq K \subset B_R(0) \\ &\le R^{|\alpha|} \sup_{x \in B_R(0)} \left| \partial_\beta \varphi_l(x) - \partial_\beta \varphi(x) \right| & \forall i \in \{1, \ldots, d\}, (x_1, \ldots, x_d) \in B_R(0) : |x_i| \le R \\ &= R^{|\alpha|} \sup_{x \in \mathbb R^d} \left| \partial_\beta \varphi_l(x) - \partial_\beta \varphi(x) \right| & \text{as } \text{supp } \varphi_l, \text{supp } \varphi \subseteq K \subset B_R(0) \\ &= R^{|\alpha|} \left\| \partial_\beta \varphi_l - \partial_\beta \varphi \right\|_\infty & \text{definition of the supremum norm} \\ & \to 0, l \to \infty & \text{since } \varphi_l \to \varphi \text{ in } \mathcal D(O) \end{align}$$

Therefore the sequence converges with respect to the notion of convergence for Schwartz functions.

The ‘testing’ property of test functions
In this section, we want to show that we can test equality of continuous functions $$f, g$$ by evaluating the integrals
 * $$\int_{\mathbb R^d} f(x) \varphi(x) dx$$ and $$\int_{\mathbb R^d} g(x) \varphi(x) dx$$

for all $$\varphi \in \mathcal D(O)$$ (thus, requiring the integrals to be equal for all $$\varphi \in \mathcal S(\mathbb R^d)$$ will also suffice, since $$\mathcal D(O) \subset \mathcal S(\mathbb R^d)$$ by theorem 3.9).

But before we can show this, we need a modified mollifier that depends on a parameter, and two lemmas about that modified mollifier.

Proof:

From the definition of $$\eta$$ follows
 * $$\text{supp } \eta = \overline{B_1(0)}$$.

Further, for $$R \in \mathbb R_{>0}$$,
 * $$\begin{align} \frac{x}{R} \in \overline{B_1(0)} & \Leftrightarrow \left\| \frac{x}{R} \right\| \le 1 \\ & \Leftrightarrow \|x\| \le R \\ & \Leftrightarrow x \in \overline{B_R(0)} \end{align}$$

Therefore, and since
 * $$x \in \text{supp } \eta_R \Leftrightarrow \frac{x}{R} \in \text{supp } \eta$$

holds, we have:
 * $$x \in \text{supp } \eta_R \Leftrightarrow x \in \overline{B_R(0)}$$
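Numerically, the scaling of the support is easy to observe. The sketch below works in $$d = 1$$ and omits the normalizing constant, which does not affect supports:

```python
import numpy as np

def bump(x):
    """Unnormalized standard mollifier in d = 1 (constant omitted; it does not change supports)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def eta_R(x, R):
    return bump(np.asarray(x) / R) / R  # eta_R(x) = eta(x / R) / R^d with d = 1

R = 2.5
xs = np.linspace(-5.0, 5.0, 10_001)
nonzero = xs[eta_R(xs, R) > 0]
print(nonzero.min(), nonzero.max())  # the set where eta_R is nonzero lies inside (-R, R)
```

The closure of that nonzero set is then the closed ball of radius $$R$$, as the lemma states.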

In order to prove the next lemma, we need the following theorem from integration theory:

We will omit the proof, as understanding it is not very important for understanding this wikibook.

Proof:


 * $$\begin{align} \int_{\mathbb R^d} \eta_R (x) dx & = \int_{\mathbb R^d} \eta\left( \frac{x}{R} \right) \big/ R^d dx & \text{Def. of } \eta_R \\ & = \int_{\mathbb R^d} \eta(x) dx & \text{integration by substitution using } x \mapsto R x \\ & = \int_{B_1(0)} \eta(x) dx & \text{Def. of } \eta \\ & = \frac{\int_{B_1(0)} e^{-\frac{1}{1 - \|x\|^2}} dx}{\int_{B_1(0)} e^{-\frac{1}{1 - \|x\|^2}} dx} & \text{Def. of } \eta \\ & = 1 \end{align}$$
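The computation can be verified numerically in $$d = 1$$ (our simplifying choice): the factor $$1/R^d$$ exactly compensates the stretching of the support, so the integral is $$1$$ for every $$R$$:

```python
import numpy as np

def bump(x):
    """Unnormalized standard mollifier in d = 1."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

# Normalizing constant c of eta, by a Riemann sum over [-1, 1]
xs = np.linspace(-1.0, 1.0, 400_001)
c = bump(xs).sum() * (xs[1] - xs[0])

def integral_of_eta_R(R, num=400_001):
    """Riemann sum of eta_R(x) = eta(x / R) / R over its support [-R, R]."""
    ys = np.linspace(-R, R, num)
    dy = ys[1] - ys[0]
    return (bump(ys / R) / (c * R)).sum() * dy

for R in (0.5, 1.0, 4.0):
    print(R, integral_of_eta_R(R))  # each approximately 1.0
```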

Now we are ready to prove the ‘testing’ property of test functions:

Proof:

Let $$x \in \mathbb R^d$$ be arbitrary, and let $$\epsilon \in \mathbb R_{>0}$$. Since $$f$$ is continuous, there exists a $$\delta \in \mathbb R_{>0}$$ such that
 * $$\forall y \in \overline{B_\delta(x)} : |f(x) - f(y)| < \epsilon$$

Then we have
 * $$\begin{align} \left| f(x) - \int_{\mathbb R^d} f(y) \eta_\delta(x - y) dy \right| & = \left| \int_{\mathbb R^d} (f(x) - f(y)) \eta_\delta(x - y) dy \right| & \text{lemma 3.16} \\ & \le \int_{\mathbb R^d} |f(x) - f(y)| \eta_\delta(x - y) dy & \text{triangle ineq. for the } \int \text{ and } \eta_\delta \ge 0 \\ & = \int_{\overline{B_\delta(x)}} |f(x) - f(y)| \eta_\delta(x - y) dy & \text{lemma 3.14} \\ & \le \int_{\overline{B_\delta(x)}} \epsilon \, \eta_\delta(x - y) dy & \text{monotonicity of the } \int \\ & \le \epsilon & \text{lemma 3.16 and } \eta_\delta \ge 0 \end{align}$$

Therefore, $$\int_{\mathbb R^d} f(y) \eta_\delta(x - y) dy \to f(x), \delta \to 0$$. An analogous reasoning also shows that $$\int_{\mathbb R^d} g(y) \eta_\delta(x - y) dy \to g(x), \delta \to 0$$. But due to the assumption, we have
 * $$\forall \delta \in \mathbb R_{>0} : \int_{\mathbb R^d} g(y) \eta_\delta(x - y) dy = \int_{\mathbb R^d} f(y) \eta_\delta(x - y) dy$$

As limits in the reals are unique, it follows that $$f(x) = g(x)$$, and since $$x \in \mathbb R^d$$ was arbitrary, we obtain $$f = g$$.
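The heart of the proof, $$\int_{\mathbb R^d} f(y) \eta_\delta(x - y) dy \to f(x)$$ as $$\delta \to 0$$, can be observed numerically. The sketch below works in $$d = 1$$ with the hypothetical choices $$f = \cos$$ and $$x = 0.3$$:

```python
import numpy as np

def bump(x):
    """Unnormalized standard mollifier in d = 1."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

xs = np.linspace(-1.0, 1.0, 200_001)
c = bump(xs).sum() * (xs[1] - xs[0])      # normalizing constant of eta

def mollified(f, x, delta, num=200_001):
    """Riemann sum of f(y) * eta_delta(x - y) over its support [x - delta, x + delta]."""
    ys = np.linspace(x - delta, x + delta, num)
    dy = ys[1] - ys[0]
    weights = bump((x - ys) / delta) / (c * delta)
    return (f(ys) * weights).sum() * dy

x0 = 0.3
for delta in (1.0, 0.1, 0.01):
    print(delta, mollified(np.cos, x0, delta))  # tends to cos(0.3) = 0.9553...
```

As $$\delta$$ shrinks, the weighted average only sees values of $$f$$ ever closer to $$x$$, which is exactly the continuity argument in the proof.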

Remark 3.18: Let $$f, g: \mathbb R^d \to \mathbb R$$ be continuous. If
 * $$\forall \varphi \in \mathcal S(\mathbb R^d) : \int_{\mathbb R^d} \varphi(x) f(x) dx = \int_{\mathbb R^d} \varphi(x) g(x) dx$$,

then $$f = g$$.

Proof:

Since every bump function is a Schwartz function, the assumption holds in particular for all bump functions, and thus the requirements of theorem 3.17 are met.

Exercises
 * 1) Let $$b \in \mathbb R$$ and $$f : \mathbb R \to \mathbb R$$ be constant on the interval $$[b-1, b)$$. Show that $$\forall y \in [b-1, b) : \int_{\mathbb R} \chi_{[b-1, b)} (x) f(x) dx = f(y)$$
 * 2) Prove that the standard mollifier as defined in example 3.4 is a bump function by proceeding as follows:
   * 1) Prove that the function $$x \mapsto 1 - \|x\|^2$$ is contained in $$\mathcal C^\infty(\mathbb R^d)$$.
   * 2) Conclude, using exercise 3, that $$\eta \in \mathcal C^\infty(\mathbb R^d)$$.
   * 3) Prove that $$\text{supp } \eta$$ is compact by calculating $$\text{supp } \eta$$ explicitly.
 * 3) Prove that the function $$x \mapsto \begin{cases} e^{-\frac{1}{x}} & x > 0 \\ 0 & x \le 0 \end{cases}$$ is contained in $$\mathcal C^\infty(\mathbb R)$$.
 * 4) Let $$O \subseteq \mathbb R^d$$ be open, let $$\varphi \in \mathcal D(O)$$ and let $$\phi \in \mathcal S(\mathbb R^d)$$. Prove that if $$\alpha, \beta \in \mathbb N_0^d$$, then $$\partial_\alpha \varphi \in \mathcal D(O)$$ and $$x^\alpha \partial_\beta \phi \in \mathcal S(\mathbb R^d)$$.
 * 5) Let $$O \subseteq \mathbb R^d$$ be open, let $$\varphi_1, \ldots, \varphi_n \in \mathcal D(O)$$ be bump functions and let $$c_1, \ldots, c_n \in \mathbb R$$. Prove that $$\sum_{j=1}^n c_j \varphi_j \in \mathcal D(O)$$.
 * 6) Let $$\phi_1, \ldots, \phi_n$$ be Schwartz functions and let $$c_1, \ldots, c_n \in \mathbb R$$. Prove that $$\sum_{j=1}^n c_j \phi_j$$ is a Schwartz function.
 * 7) Let $$\alpha \in \mathbb N_0^d$$, let $$p(x) := \sum_{\varsigma \le \alpha} c_\varsigma x^\varsigma$$ be a polynomial, and let $$\phi_l \to \phi$$ in the sense of Schwartz functions. Prove that $$p \phi_l \to p \phi$$ in the sense of Schwartz functions.