Distribution Theory/Bump functions

Preliminary definitions
Definition:

Let $$\varphi: U \to \mathbb R$$ be a function, where $$U$$ is an open subset of $$\mathbb R^d$$. We say
 * $$\varphi \in \mathcal C^k(U)$$ iff all partial derivatives of $$\varphi$$ up to order $$k$$ exist and are continuous
 * $$\varphi \in \mathcal C^\infty(U)$$ iff all partial derivatives of $$\varphi$$ of any order exist and are continuous.

Definition:

Let $$(X, \tau)$$ be a topological space and let $$f: X \to \mathbb R$$ be a function. Then the support of $$f$$ is defined to be the set
 * $$\operatorname{supp} f := \overline{\left\{x \in X | f(x) \neq 0 \right\}}$$;

the bar above the set on the right denotes the topological closure.

Definition:

A bump function is a function $$\varphi$$ from an open set $$U \subseteq \mathbb R^d$$ to $$\mathbb R$$ such that the following two conditions are satisfied:
 * 1) $$\operatorname{supp} \varphi$$ is compact
 * 2) $$\varphi \in \mathcal C^\infty(U)$$
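The prototypical example is $$\varphi(x) := e^{-1/(1 - x^2)}$$ for $$|x| < 1$$ and $$\varphi(x) := 0$$ otherwise, which is $$\mathcal C^\infty$$ on all of $$\mathbb R$$ with support $$[-1, 1]$$. A minimal numerical sketch in one dimension (the function name `bump` is ours):

```python
import math

def bump(x):
    """Standard one-dimensional bump function:
    exp(-1/(1 - x^2)) for |x| < 1, and 0 otherwise.
    It is C^infinity on all of R, and supp(bump) = [-1, 1] is compact."""
    if abs(x) >= 1:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

# Strictly positive inside (-1, 1), identically zero outside:
print(bump(0.0))   # e^{-1} ≈ 0.36788
print(bump(1.0))   # 0.0
```

The key point is that all derivatives of $$e^{-1/(1 - x^2)}$$ tend to $$0$$ as $$|x| \to 1$$, so the two pieces glue together smoothly.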

Multiindex notation
Multiindex notation is an efficient way of denoting several objects in multi-dimensional space. For instance, writing a partial derivative in the usual way takes fairly long; in the usual notation, a general partial derivative is denoted
 * $$\partial_1^{k_1} \cdots \partial_d^{k_d} f$$

for some $$k_1, \ldots, k_d \in \mathbb N$$. Now in multiindex notation, the $$k_1, \ldots, k_d$$ are assembled into a vector $$\alpha = (k_1, \ldots, k_d)$$, and the term
 * $$\partial_\alpha f$$

is then used instead of the partial derivative notation above. For a single partial derivative this may not be a huge advantage (unless one is talking about a general partial derivative), but when, for instance, one sums all partial derivatives of a polynomial $$p$$, one obtains expressions such as:
 * $$\sum_{\alpha \in \mathbb N_0^d} \partial_\alpha p$$ (Note that this is well-defined, since only finitely many summands are nonzero.)

Now compare this to the much longer
 * $$\sum_{k_1 = 0}^\infty \cdots \sum_{k_d = 0}^\infty \partial_1^{k_1} \cdots \partial_d^{k_d} p$$;

as you can see, we save a lot of writing, and that is what the notation is all about. Multiindex notation was invented by Laurent Schwartz.
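To make $$\partial_\alpha$$ concrete, here is a small sketch (the representation and the name `partial_alpha` are our own) that applies a multiindex derivative to a polynomial stored as a dictionary mapping exponent tuples to coefficients:

```python
from math import prod

def partial_alpha(poly, alpha):
    """Apply the multiindex derivative d_alpha to a polynomial.

    `poly` maps exponent tuples beta to coefficients, i.e. it represents
    sum_beta poly[beta] * x^beta in multiindex power notation."""
    result = {}
    for beta, c in poly.items():
        # d_alpha x^beta vanishes unless beta >= alpha componentwise
        if all(b >= a for b, a in zip(beta, alpha)):
            # d_alpha x^beta = (falling factorial of each beta_j over alpha_j) * x^(beta - alpha)
            factor = prod(prod(range(b - a + 1, b + 1)) for b, a in zip(beta, alpha))
            new = tuple(b - a for b, a in zip(beta, alpha))
            result[new] = result.get(new, 0) + c * factor
    return result

# Example: d/dx d/dy applied to x^2 y gives 2x, i.e. {(1, 0): 2}
print(partial_alpha({(2, 1): 1}, (1, 1)))
```

Since every monomial with an exponent below $$\alpha$$ is annihilated, only finitely many $$\partial_\alpha p$$ are nonzero for a polynomial $$p$$, matching the remark above.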

Other multiindex conventions are the following (we use a convention by Béla Bollobás and denote $$[d] := \{1, \ldots, d\}$$):
 * Multiindex partial order: $$(k_1, \ldots, k_d) \le (m_1, \ldots, m_d) :\Leftrightarrow \forall j \in [d]: k_j \le m_j$$
 * Multiindex factorial: $$(k_1, \ldots, k_d)! := k_1! \cdots k_d!$$
 * Multiindex binomial coefficient: Let $$\alpha = (k_1, \ldots, k_d)$$ and $$\beta = (m_1, \ldots, m_d)$$ be multiindices, then $$\binom{\alpha}{\beta} := \binom{k_1}{m_1} \cdots \binom{k_d}{m_d}$$
 * Multiindex power: Let additionally $$x = (x_1, \ldots, x_d) \in \mathbb R^d$$, then set $$x^\alpha := x_1^{k_1} \cdots x_d^{k_d}$$
 * Constant multiindex: If $$n \in \mathbb N$$, we denote the constant multiindex $$(n, \ldots, n)$$ by the boldface $$\mathbf n$$
 * Multiindex differentiability: We write $$f \in \mathcal C^\alpha(U)$$ iff the partial derivatives $$\partial_\beta f$$ exist and are continuous for all $$\beta \in \mathbb N_0^d$$ with $$\beta \le \alpha$$.

Further, the absolute value of a multiindex $$\alpha = (k_1, \ldots, k_d)$$ is defined as
 * $$|\alpha| := \sum_{j=1}^d k_j$$.
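These conventions translate directly into code. The following sketch (the `mi_*` names are ours) implements them with the Python standard library:

```python
from math import comb, factorial, prod

# All multiindices below are tuples of the same length d,
# matching the dimension of the ambient space.

def mi_le(alpha, beta):
    """Multiindex partial order: alpha <= beta componentwise."""
    return all(a <= b for a, b in zip(alpha, beta))

def mi_factorial(alpha):
    """alpha! = k_1! * ... * k_d!"""
    return prod(factorial(k) for k in alpha)

def mi_binom(alpha, beta):
    """binom(alpha, beta) = binom(k_1, m_1) * ... * binom(k_d, m_d)."""
    return prod(comb(k, m) for k, m in zip(alpha, beta))

def mi_power(x, alpha):
    """x^alpha = x_1^{k_1} * ... * x_d^{k_d}."""
    return prod(xj ** k for xj, k in zip(x, alpha))

def mi_abs(alpha):
    """|alpha| = k_1 + ... + k_d."""
    return sum(alpha)
```

Note that the partial order is not total: for instance $$(1, 1)$$ and $$(2, 0)$$ are incomparable.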

A few sample theorems on multiindices are these (we'll need them often):

Theorem (multiindex binomial formula):

Let $$\alpha \in \mathbb N_0^d$$ be a multiindex, $$x, y \in \mathbb R^d$$. Then
 * $$(x + y)^\alpha = \sum_{\mathbf 0 \le \beta \le \alpha} \binom{\alpha}{\beta} x^\beta y^{\alpha - \beta}$$.

Note that this formula looks exactly like its one-dimensional counterpart, with one-dimensional variables replaced by multiindex variables. This will be a recurrent phenomenon.

Proof:

We prove the theorem by induction on $$|\alpha|$$. For $$|\alpha|=0$$ the claim is clear. Now suppose the theorem has been proven for all multiindices of absolute value $$n$$, and let $$|\alpha| = n+1$$. Then $$\alpha$$ has at least one nonzero component; let's say the $$j$$-th component of $$\alpha$$ is nonzero. Then $$\alpha' := \alpha - e_j$$ ($$e_j$$ denoting the $$j$$-th unit vector, i.e. $$e_j = \left( 0, \ldots, 0, \overbrace{1}^{j\text{-th place}}, 0, \ldots, 0 \right)$$) is a multiindex of absolute value $$n$$. By induction,
 * $$(x + y)^{\alpha'} = \sum_{\mathbf 0 \le \beta \le \alpha'} \binom{\alpha'}{\beta} x^\beta y^{\alpha' - \beta}$$

and hence, multiplying both sides by $$(x + y)^{e_j} = x_j + y_j$$,
 * $$\begin{align} (x + y)^\alpha & = (x_j + y_j) \sum_{\mathbf 0 \le \beta \le \alpha'} \binom{\alpha'}{\beta} x^\beta y^{\alpha' - \beta} \\ & = \sum_{e_j \le \beta \le \alpha} \binom{\alpha'}{\beta - e_j} x^\beta y^{\alpha - \beta} + \sum_{\mathbf 0 \le \beta \le \alpha'} \binom{\alpha'}{\beta} x^\beta y^{\alpha - \beta} \\ & = \sum_{\mathbf 0 \le \beta \le \alpha} \left( \binom{\alpha'}{\beta - e_j} + \binom{\alpha'}{\beta} \right) x^\beta y^{\alpha - \beta} \\ & = \sum_{\mathbf 0 \le \beta \le \alpha} \binom{\alpha}{\beta} x^\beta y^{\alpha - \beta}, \end{align}$$

where in the first sum we shifted the summation variable by $$e_j$$ (note that $$y^{\alpha' - (\beta - e_j)} = y^{\alpha - \beta}$$), and in the third line we extended both sums over the full range $$\mathbf 0 \le \beta \le \alpha$$ using the convention that $$\binom{\alpha'}{\gamma} = 0$$ whenever $$\gamma \not\le \alpha'$$ or $$\gamma$$ has a negative entry. The last step follows because
 * $$\binom{\alpha'}{\beta - e_j} + \binom{\alpha'}{\beta} = \binom{\alpha' + e_j}{\beta} = \binom{\alpha}{\beta}$$

by the respective rule for the usual $$1$$-dim. binomial coefficient.
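The formula can be sanity-checked numerically. The sketch below (names are ours, building on the multiindex helpers above in spirit but self-contained) compares both sides for a concrete choice of $$\alpha$$, $$x$$, $$y$$:

```python
from itertools import product
from math import comb, prod

def mi_binom(alpha, beta):
    return prod(comb(k, m) for k, m in zip(alpha, beta))

def mi_power(x, alpha):
    return prod(xj ** k for xj, k in zip(x, alpha))

def binomial_rhs(x, y, alpha):
    """Right-hand side of the multiindex binomial formula:
    the sum over all multiindices beta with 0 <= beta <= alpha."""
    return sum(
        mi_binom(alpha, beta)
        * mi_power(x, beta)
        * mi_power(y, tuple(a - b for a, b in zip(alpha, beta)))
        for beta in product(*(range(a + 1) for a in alpha))
    )

alpha = (2, 3)
x, y = (1, 2), (3, -1)
lhs = mi_power(tuple(xj + yj for xj, yj in zip(x, y)), alpha)  # (x + y)^alpha
assert lhs == binomial_rhs(x, y, alpha) == 16  # = 4^2 * 1^3
```

Since the multiindex formula factors into one one-dimensional binomial formula per coordinate, checking it coordinate-wise like this is exact integer arithmetic.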

Theorem (multiindex product rule):

Let $$\alpha \in \mathbb N_0^d$$ be a multiindex, $$U \subseteq \mathbb R^d$$ be open and $$f, g \in \mathcal C^\alpha(U)$$. Then
 * $$\partial_\alpha(f \cdot g) = \sum_{\beta \le \alpha} \binom{\alpha}{\beta} \partial_\beta f \cdot \partial_{\alpha - \beta} g$$;

in particular, $$f \cdot g \in \mathcal C^\alpha(U)$$.

Proof:

Again, we proceed by induction on $$|\alpha|$$; the case $$|\alpha| = 0$$ is trivial. As before, pick $$j \in [d]$$ such that the $$j$$-th entry of $$\alpha$$ is nonzero, and define $$\alpha' := \alpha - e_j$$. Then by induction
 * $$\begin{align} \partial_\alpha(f \cdot g) & = \partial_{e_j} \sum_{\mathbf 0 \le \beta \le \alpha'} \binom{\alpha'}{\beta} \partial_\beta f \cdot \partial_{\alpha' - \beta} g \\ & = \sum_{\mathbf 0 \le \beta \le \alpha'} \binom{\alpha'}{\beta} \left( \partial_{\beta + e_j} f \cdot \partial_{\alpha' - \beta} g + \partial_\beta f \cdot \partial_{\alpha - \beta} g \right) \\ & = \sum_{e_j \le \beta \le \alpha} \binom{\alpha'}{\beta - e_j} \partial_\beta f \cdot \partial_{\alpha - \beta} g + \sum_{\mathbf 0 \le \beta \le \alpha'} \binom{\alpha'}{\beta} \partial_\beta f \cdot \partial_{\alpha - \beta} g \\ & = \sum_{\mathbf 0 \le \beta \le \alpha} \left( \binom{\alpha'}{\beta - e_j} + \binom{\alpha'}{\beta} \right) \partial_\beta f \cdot \partial_{\alpha - \beta} g = \sum_{\mathbf 0 \le \beta \le \alpha} \binom{\alpha}{\beta} \partial_\beta f \cdot \partial_{\alpha - \beta} g, \end{align}$$

where we again shifted the summation variable by $$e_j$$ in the first sum, extended both sums over $$\mathbf 0 \le \beta \le \alpha$$ with the convention that $$\binom{\alpha'}{\gamma} = 0$$ unless $$\mathbf 0 \le \gamma \le \alpha'$$, and applied the identity $$\binom{\alpha'}{\beta - e_j} + \binom{\alpha'}{\beta} = \binom{\alpha}{\beta}$$.

Note that the proof is essentially the same as that of the previous theorem: by the product rule, differentiating in the direction $$e_j$$ plays the same combinatorial role that multiplying by $$(x_j + y_j)$$ played there.
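The product rule can likewise be checked exactly on polynomials, where every derivative is computable in integer arithmetic. The self-contained sketch below (the polynomial representation and all names are our own) compares $$\partial_\alpha(f \cdot g)$$ with the Leibniz sum for $$f(x, y) = x^2 y$$, $$g(x, y) = x + y^2$$, $$\alpha = (1, 2)$$:

```python
from itertools import product
from math import comb, prod

def partial_alpha(poly, alpha):
    """d_alpha applied to a polynomial given as {exponent tuple: coefficient}."""
    out = {}
    for beta, c in poly.items():
        if all(b >= a for b, a in zip(beta, alpha)):
            # falling factorial in each coordinate
            factor = prod(prod(range(b - a + 1, b + 1)) for b, a in zip(beta, alpha))
            key = tuple(b - a for b, a in zip(beta, alpha))
            out[key] = out.get(key, 0) + c * factor
    return out

def poly_mul(p, q):
    """Multiply two polynomials in the dictionary representation."""
    out = {}
    for (a, c), (b, e) in product(p.items(), q.items()):
        key = tuple(i + j for i, j in zip(a, b))
        out[key] = out.get(key, 0) + c * e
    return out

def clean(p):
    """Drop zero coefficients so that equal polynomials compare equal."""
    return {k: v for k, v in p.items() if v != 0}

f = {(2, 1): 1}             # f(x, y) = x^2 y
g = {(1, 0): 1, (0, 2): 1}  # g(x, y) = x + y^2
alpha = (1, 2)

lhs = partial_alpha(poly_mul(f, g), alpha)

rhs = {}
for beta in product(*(range(a + 1) for a in alpha)):  # all beta with 0 <= beta <= alpha
    coeff = prod(comb(a, b) for a, b in zip(alpha, beta))
    term = poly_mul(partial_alpha(f, beta),
                    partial_alpha(g, tuple(a - b for a, b in zip(alpha, beta))))
    for k, v in term.items():
        rhs[k] = rhs.get(k, 0) + coeff * v

assert clean(lhs) == clean(rhs)  # both equal 12xy, i.e. {(1, 1): 12}
```

Here $$f \cdot g = x^3 y + x^2 y^3$$, and $$\partial_{(1,2)}(f \cdot g) = 12xy$$, which the Leibniz sum reproduces term by term.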

Note that the dimension of the respective multiindex must always match the dimension of the space we are considering.