Partial Differential Equations/The heat equation

This chapter is about the heat equation, which looks like this:
 * $$\forall (t, x) \in \mathbb R \times \mathbb R^d : \partial_t u (t, x) - \Delta_x u (t, x) = f(t, x)$$

for some $$f : \mathbb R \times \mathbb R^d \to \mathbb R$$. Using distribution theory, we will prove an explicit solution formula (provided that $$f$$ is sufficiently often differentiable), and we will even prove a solution formula for the initial value problem.

Green's kernel and solution
Proof:
 * $$\begin{align}

\left( \int_\R e^{-x^2} dx \right)^2 & = \left( \int_\R e^{-x^2} dx \right) \cdot \left( \int_\R e^{-y^2} dy \right) & \\ & = \int_\R \int_\R e^{-(x^2 + y^2)} dx dy & \\ & = \int_{\R^2} e^{-\|(x, y)\|^2} d(x, y) & \text{Fubini's theorem} \\ & = \int_0^\infty \int_0^{2\pi} r e^{-r^2} d \varphi dr & \text{ integration by substitution using polar coordinates} \\ & = 2 \pi \int_0^\infty r e^{-r^2} dr & \\ & = 2 \pi \int_0^\infty \frac{1}{2\sqrt{r}}\sqrt{r} e^{-r} dr & \text{integration by substitution using } r \mapsto \sqrt{r} \\ & = \pi \int_0^\infty e^{-r} dr = \pi & \end{align}$$

Taking the square root on both sides finishes the proof.
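The value of this integral can also be checked numerically. The following sketch (the grid bounds and step size are arbitrary choices, not part of the proof) approximates the integral by a midpoint Riemann sum:

```python
import numpy as np

# Midpoint Riemann sum for the Gaussian integral; the integrand decays so
# fast that truncating the domain to [-10, 10] loses a negligible amount.
h = 1e-4
x = np.arange(-10.0, 10.0, h) + h / 2  # midpoints of the grid cells
integral = np.sum(np.exp(-x**2)) * h

print(integral)  # close to sqrt(pi) = 1.7724538...
assert abs(integral - np.sqrt(np.pi)) < 1e-8
```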

Proof:
 * $$\begin{align}

\int_{\R^d} e^{-\frac{\|x\|^2}{2}} dx &= \overbrace{\int_{-\infty}^\infty \cdots \int_{-\infty}^\infty}^{d \text{ times}} e^{-\frac{x_1^2 + \cdots + x_d^2}{2}} d x_1 \cdots d x_d & \text{Fubini's theorem} \\ &= \int_{-\infty}^\infty e^{-\frac{x_d^2}{2}} \cdots \int_{-\infty}^\infty e^{-\frac{x_1^2}{2}} d x_1 \, \cdots d x_d & \text{factoring the exponential into a product} \end{align}$$ By lemma 6.1,
 * $$\int_\R e^{-x^2} dx = \sqrt{\pi}$$.

If we apply integration by substitution (theorem 5.5) to this integral with the diffeomorphism $$x \mapsto \frac{x}{\sqrt{2}}$$, we obtain
 * $$\sqrt{\pi} = \int_\R \frac{1}{\sqrt{2}} e^{-\frac{x^2}{2}} dx$$

and multiplying by $$\sqrt{2}$$, we obtain
 * $$\sqrt{2 \pi} = \int_\R e^{-\frac{x^2}{2}} dx$$

Therefore, calculating the innermost integrals first and then pulling out the resulting constants,
 * $$\overbrace{\int_{-\infty}^\infty e^{-\frac{x_d^2}{2}} \cdots \int_{-\infty}^\infty e^{-\frac{x_1^2}{2}}}^{d \text{ times}} d x_1 \cdots d x_d = {\sqrt{2\pi}}^d$$
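A numerical sanity check of the lemma for $$d = 2$$, where the claimed value is $$\sqrt{2\pi}^2 = 2\pi$$ (a sketch with an arbitrarily chosen grid, not part of the proof):

```python
import numpy as np

# Two-dimensional midpoint rule for the integral of exp(-||x||^2 / 2);
# by the lemma the result should be sqrt(2*pi)**2 = 2*pi for d = 2.
h = 0.01
grid = np.arange(-8.0, 8.0, h) + h / 2
X, Y = np.meshgrid(grid, grid)
integral = np.sum(np.exp(-(X**2 + Y**2) / 2)) * h**2

assert abs(integral - 2 * np.pi) < 1e-6
```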

Proof:

1.

We show that $$E$$ is locally integrable.

Let $$K \subset \mathbb R \times \mathbb R^d$$ be a compact set, and let $$T > 0$$ be such that $$K \subset (-T, T) \times \R^d$$. We first show that the integral
 * $$\int_{(-T, T) \times \R^d} E(s, y) d(s, y)$$

exists:
 * $$\begin{align}

\int_{(-T, T) \times \R^d} E(s, y) d(s, y) &= \int_{(0, T) \times \R^d} E(s, y) d(s, y) & \forall s \le 0 : E(s, y) = 0 \\ & = \int_0^T \int_{\R^d} \frac{1}{\sqrt{4\pi s}^d} e^{-\frac{\|y\|^2}{4s}} dy ds & \text{Fubini's theorem} \end{align}$$

By transformation of variables in the inner integral using the diffeomorphism $$y \mapsto \sqrt{2s} y$$, and lemma 6.2, we obtain:
 * $$=\int_0^T \int_{\R^d} \frac{\sqrt{2s}^d}{\sqrt{4\pi s}^d} e^{-\frac{\|y\|^2}{2}} dy ds = \int_0^T 1 ds = T$$

Therefore the integral
 * $$\int_{(-T, T) \times \R^d} E(s, y) d(s, y)$$

exists. But since
 * $$\forall (s, y) \in \mathbb R \times \mathbb R^d : |\chi_K (s, y) E(s, y)| \le |E(s, y)|$$

, where $$\chi_K$$ is the characteristic function of $$K$$, the integral
 * $$\int_{(-T, T) \times \R^d} \chi_K (s, y) E(s,y) d(s, y) = \int_K E(s, y) d(s, y)$$

exists. Since $$K$$ was an arbitrary compact set, we thus have local integrability.
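The key fact used above, namely that the inner spatial integral equals $$1$$ for every $$s > 0$$ (so that the $$(s, y)$$-integral reduces to $$\int_0^T 1 \, ds = T$$), can be sketched numerically for $$d = 1$$ (the grid choices are arbitrary, not part of the proof):

```python
import numpy as np

def E(s, y):
    # heat kernel for d = 1 and s > 0 (E vanishes for s <= 0)
    return (4 * np.pi * s) ** -0.5 * np.exp(-y**2 / (4 * s))

h = 1e-3
y = np.arange(-50.0, 50.0, h) + h / 2  # midpoint grid
for s in [0.01, 0.1, 1.0, 5.0]:
    mass = np.sum(E(s, y)) * h
    # the spatial mass of the heat kernel is 1 for every s > 0
    assert abs(mass - 1.0) < 1e-6
```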

2.

We calculate $$\partial_t E$$ and $$\Delta_x E$$ (see exercise 1).


 * $$\partial_t E(t, x) = \left(\frac{\|x\|^2}{4t^2} - \frac{d}{2t} \right) E(t,x)$$


 * $$\Delta_x E(t, x) = \left(\frac{\|x\|^2}{4t^2} - \frac{d}{2t} \right) E(t,x)$$
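For $$d = 1$$, both derivatives equal $$\left(\frac{x^2}{4t^2} - \frac{1}{2t}\right) E(t, x)$$; this can be sanity-checked with finite differences (a sketch; the evaluation point and step sizes are arbitrary choices):

```python
import numpy as np

def E(t, x):
    # one-dimensional heat kernel (d = 1), t > 0
    return (4 * np.pi * t) ** -0.5 * np.exp(-x**2 / (4 * t))

t, x = 0.7, 1.3
claimed = (x**2 / (4 * t**2) - 1 / (2 * t)) * E(t, x)  # d = 1, so d/(2t) = 1/(2t)

# central difference in t and second central difference in x
ht, hx = 1e-6, 1e-4
dE_dt = (E(t + ht, x) - E(t - ht, x)) / (2 * ht)
lap_E = (E(t, x + hx) - 2 * E(t, x) + E(t, x - hx)) / hx**2

assert abs(dE_dt - claimed) < 1e-8
assert abs(lap_E - claimed) < 1e-6
```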

3.

We show that
 * $$\forall \varphi \in \mathcal D(\mathbb R \times \mathbb R^d), (t, x) \in \mathbb R \times \mathbb R^d : (\partial_t - \Delta_x) T_{E(\cdot - (t, x))}(\varphi) = \delta_{(t, x)} (\varphi)$$

Let $$\varphi \in \mathcal D(\mathbb R \times \mathbb R^d), (t, x) \in \R \times \R^d$$ be arbitrary.

In this last step of the proof, we will only manipulate the term $$(\partial_t - \Delta_x) T_{E(\cdot - (t, x))}(\varphi)$$.
 * $$\begin{align}

(\partial_t - \Delta_x) T_{E(\cdot - (t, x))}(\varphi) &= T_{E(\cdot - (t, x))}((-\partial_t - \Delta_x)\varphi) & \text{by definition of the distributional derivative} \\ &= \int_{\R \times \R^d} (-\partial_t - \Delta_x)\varphi(s, y) E(s - t, y - x) d(s, y) & \\ &= \int_{(t, \infty) \times \R^d} (-\partial_t - \Delta_x)\varphi(s, y) E(s - t, y - x) d(s, y) & \forall s \le t : E(s - t, y - x) = 0 \\ \end{align}$$

If we choose $$R > 0$$ and $$T > 0$$ such that
 * $$\text{supp } \varphi \subset (-\infty, t + T) \times B_R(x)$$

, then we even have
 * $$(\partial_t - \Delta_x) T_{E(\cdot - (t, x))}(\varphi) = \int_{(t, t + T) \times B_R(x)} (-\partial_t - \Delta_x)\varphi(s, y) E(s - t, y - x) d(s, y)$$

Using the dominated convergence theorem (theorem 5.1), we can rewrite the term again:
 * $$\begin{align}

(\partial_t - \Delta_x) T_{E(\cdot - (t, x))}(\varphi) &= \int_{(t, t + T) \times B_R(x)} (-\partial_t - \Delta_x)\varphi(s, y) E(s - t, y - x) d(s, y) \\ &= \lim_{\epsilon \downarrow 0} \int_{(t, t + T) \times B_R(x)} (-\partial_t - \Delta_x)\varphi(s, y) E(s - t, y - x) (1 - \chi_{[t, t + \epsilon]}(s)) d(s, y) \\ &= \lim_{\epsilon \downarrow 0} \int_{(t + \epsilon, t + T) \times B_R(x)} (-\partial_t - \Delta_x)\varphi(s, y) E(s - t, y - x) d(s, y) \end{align}$$ , where $$\chi_{[t, t + \epsilon]}$$ is the characteristic function of $$[t, t + \epsilon]$$.

We split the integral into two summands, which we manipulate separately:
 * $$\begin{align}

\int_{(t + \epsilon, t + T) \times B_R(x)} (-\partial_t - \Delta_x)\varphi(s, y) E(s - t, y - x) d(s, y) \\ = -\int_{(t + \epsilon, t + T) \times B_R(x)} \Delta_x \varphi(s, y) E(s - t, y - x) d(s, y) \\ - \int_{(t + \epsilon, t + T) \times B_R(x)} \partial_t \varphi(s, y) E(s - t, y - x) d(s, y) \\ \end{align}$$

The last integrals are taken over $$(t + \epsilon, t + T) \times B_R(x)$$ for $$\epsilon > 0$$. On this domain and its closure, $$E(s - t, y - x)$$ is smooth. Therefore, we may integrate by parts.

$$\begin{align} \int_{(t + \epsilon, t + T) \times B_R(x)} \Delta_x \varphi(s, y) E(s - t, y - x) d(s, y) &= \int_{t + \epsilon}^{t + T} \int_{B_R(x)} \Delta_x \varphi(s, y) E(s - t, y - x) dy ds & \text{Fubini} \\ &= \int_{t + \epsilon}^{t + T} \int_{\partial B_R(x)} E(s - t, y - x) n(y) \cdot \underbrace{\nabla_x \varphi(s, y)}_{=0} dy ds - \int_{t + \epsilon}^{t + T} \int_{B_R(x)} \nabla_x \varphi(s, y) \cdot \nabla_x E(s - t, y - x) dy ds & \text{integration by parts in } y \\ &= \int_{t + \epsilon}^{t + T} \int_{B_R(x)} \varphi(s, y) \Delta_x E(s - t, y - x) dy ds - \int_{t + \epsilon}^{t + T} \int_{\partial B_R(x)} \underbrace{\varphi(s, y)}_{=0} n(y) \cdot \nabla_x E(s - t, y - x) dy ds & \text{integration by parts in } y \end{align}$$

In the last two manipulations, we used integration by parts, where $$\varphi$$ and $$E$$ exchanged the role of the function in theorem 5.4, and $$\nabla_x E$$ and $$\nabla_x \varphi$$ exchanged the role of the vector field. In the latter manipulation, we did not apply theorem 5.4 directly, but instead with the boundary term subtracted on both sides.

Let's also integrate the other integral by parts.
 * $$\begin{align}

\int_{(t + \epsilon, t + T) \times B_R(x)} \partial_t \varphi(s, y) E(s - t, y - x) d(s, y) &= \int_{B_R(x)} \int_{t + \epsilon}^{t + T} \partial_t \varphi(s, y) E(s - t, y - x) ds dy & \text{Fubini} \\ = \int_{B_R(x)} \underbrace{\varphi(s, y) E(s - t, y - x) \big|^{s=t + T}_{s=t + \epsilon}}_{=-\varphi(t + \epsilon, y) E(\epsilon, y - x)} dy &- \int_{B_R(x)} \int_{t + \epsilon}^{t + T} \varphi(s, y) \partial_t E(s - t, y - x) ds dy & \text{integration by parts in } s \end{align}$$

Now we add the two terms back together and see that
 * $$\begin{align}

(\partial_t - \Delta_x) T_{E(\cdot - (t, x))}(\varphi) = \lim_{\epsilon \downarrow 0} \Bigg( \int_{B_R(x)} \varphi(t + \epsilon, y) E(\epsilon, y - x) dy & + \int_{B_R(x)} \int_{t + \epsilon}^{t + T} \varphi(s, y) \partial_t E(s - t, y - x) ds dy \\ & - \int_{t + \epsilon}^{t + T} \int_{B_R(x)} \varphi(s, y) \Delta_x E(s - t, y - x) dy ds \Bigg) \end{align}$$

The derivative calculations from above show that $$\partial_t E = \Delta_x E$$, which is why the last two integrals cancel and therefore
 * $$(\partial_t - \Delta_x) T_{E(\cdot - (t, x))}(\varphi) = \lim_{\epsilon \downarrow 0} \int_{B_R(x)} \varphi(t + \epsilon, y) E(\epsilon, y - x) dy$$

Using that $$\text{supp } \varphi(t + \epsilon, \cdot ) \subset B_R(x)$$ and with multi-dimensional integration by substitution with the diffeomorphism $$y \mapsto x + \sqrt{2\epsilon} y$$ we obtain:
 * $$\int_{B_R(x)} \varphi(t + \epsilon, y) E(\epsilon, y - x) dy = \int_{\R^d} \varphi(t + \epsilon, y) E(\epsilon, y - x) dy$$ $$= \int_{\R^d} \varphi(t + \epsilon, y) \frac{1}{\sqrt{4\pi \epsilon}^d} e^{-\frac{\|y-x\|^2}{4\epsilon}} dy$$ $$= \int_{\R^d} \varphi(t + \epsilon, x + \sqrt{2\epsilon}y) \frac{\sqrt{2\epsilon}^d}{\sqrt{4\pi \epsilon}^d} e^{-\frac{\|y\|^2}{2}} dy = \frac{1}{\sqrt{2\pi}^d} \int_{\R^d} \varphi(t + \epsilon, x + \sqrt{2\epsilon}y) e^{-\frac{\|y\|^2}{2}} dy$$

Since $$\varphi$$ is continuous (even smooth), we have
 * $$\forall y \in \mathbb R^d : \lim_{\epsilon \downarrow 0} \varphi(t + \epsilon, x + \sqrt{2\epsilon}y) = \varphi(t, x)$$

Therefore
 * $$\begin{align}

(\partial_t - \Delta_x) T_{E(\cdot - (t, x))}(\varphi) &= \lim_{\epsilon \downarrow 0} \frac{1}{\sqrt{2\pi}^d} \int_{\R^d} \varphi(t + \epsilon, x + \sqrt{2\epsilon}y) e^{-\frac{\|y\|^2}{2}} dy & \\ &= \frac{1}{\sqrt{2\pi}^d} \int_{\R^d} \varphi(t, x) e^{-\frac{\|y\|^2}{2}} dy & \text{dominated convergence} \\ &= \varphi(t, x) & \text{lemma 6.2} \\ &= \delta_{(t, x)}(\varphi)& \end{align}$$
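The limit behind this last step can be illustrated numerically for $$d = 1$$: smoothing against the scaled Gaussian reproduces the function's value as $$\epsilon \downarrow 0$$. Here $$\varphi$$ is a hypothetical smooth, rapidly decaying stand-in for a test function, and the grid is an arbitrary choice (a sketch, not part of the proof):

```python
import numpy as np

def phi(t, x):
    # hypothetical smooth, rapidly decaying stand-in for a test function
    return np.exp(-t**2 - x**2)

t0, x0 = 0.5, -0.3
h = 1e-3
y = np.arange(-10.0, 10.0, h) + h / 2
weights = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)  # Gaussian weight of total mass 1

# the smoothed values approach phi(t0, x0) as epsilon shrinks
errors = []
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    smoothed = np.sum(phi(t0 + eps, x0 + np.sqrt(2 * eps) * y) * weights) * h
    errors.append(abs(smoothed - phi(t0, x0)))

assert errors[0] > errors[-1]
assert errors[-1] < 1e-3
```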

Proof:

1.

We show that $$(E * f)(t, x)$$ is sufficiently often differentiable such that the equations are satisfied.

2.

We invoke theorem 5.?, which states exactly that a convolution with a Green's kernel is a solution, provided that the convolution is sufficiently often differentiable (which we showed in part 1 of the proof).

Initial Value Problem
Note that if we did not require the solution to be continuous, we could take any solution and simply redefine it to equal $$g$$ at $$t = 0$$.

Proof:

1.

We show
 * $$\forall (t, x) \in (0, \infty) \times \mathbb R^d : \partial_t u(t, x) - \Delta_x u(t, x) = f(t, x) \qquad (*)$$

From theorem 7.4, we already know that $$\tilde f * E$$ solves
 * $$\forall (t, x) \in (0, \infty) \times \mathbb R^d : \partial_t (\tilde f * E)(t, x) - \Delta_x (\tilde f * E)(t, x) = \tilde f(t, x) \overset{t > 0}{=} f(t, x)$$

Therefore, we have for all $$(t, x) \in (0, \infty) \times \mathbb R^d$$,
 * $$\begin{align}

\partial_t u(t, x) - \Delta_x u(t, x) = & \partial_t (E *_x g)(t, x) + \partial_t (\tilde f * E)(t, x) \\ & - \Delta_x (E *_x g)(t, x) - \Delta_x (\tilde f * E)(t, x) \\ = & f(t, x) + \partial_t (E *_x g)(t, x) - \Delta_x (E *_x g)(t, x) \end{align}$$ which is why $$(*)$$ would follow if
 * $$\forall (t, x) \in (0, \infty) \times \mathbb R^d : \partial_t (E *_x g)(t, x) - \Delta_x (E *_x g)(t, x) = 0$$

This we shall now check.

By definition of the spatial convolution, we have
 * $$\partial_t (E *_x g)(t, x) = \partial_t \int_{\mathbb R^d} E(t, x - y) g(y) dy$$

and
 * $$\Delta_x (E *_x g)(t, x) = \Delta_x \int_{\mathbb R^d} E(t, x - y) g(y) dy$$

By applying Leibniz' integral rule (see exercise 2) we find that
 * $$\begin{align}

\partial_t (E *_x g)(t, x) - \Delta_x (E *_x g)(t, x) &=\partial_t \int_{\mathbb R^d} E(t, x - y) g(y) dy - \Delta_x \int_{\mathbb R^d} E(t, x - y) g(y) dy & \\ & = \int_{\mathbb R^d} \partial_t E(t, x - y) g(y) dy - \int_{\mathbb R^d} \Delta_x E(t, x - y) g(y) dy & \text{ Leibniz' integral rule} \\ & = \int_{\mathbb R^d} \left( \partial_t E(t, x - y) - \Delta_x E(t, x - y) \right) g(y) dy & \text{ linearity of the integral} \\ & = 0 & \text{ exercise 1} \end{align} $$ for all $$(t, x) \in (0, \infty) \times \mathbb R^d$$.

2.

We show that $$u$$ is continuous.

It is clear that $$u$$ is continuous on $$(0, \infty) \times \mathbb R^d$$, since all the first-order partial derivatives exist and are continuous (see exercise 2). It remains to be shown that $$u$$ is continuous on $$\{0\} \times \mathbb R^d$$.

To do so, we first note that for all $$(t, x) \in (0, \infty) \times \mathbb R^d$$
 * $$\begin{align}

\int_{\mathbb R^d} E(t, x - y) dy & = \int_{\mathbb R^d} E(t, y) dy & \text{ integration by substitution using } y \mapsto x - y \\ & = \int_{\mathbb R^d} \sqrt{4 \pi t}^{-d} e^{-\frac{\|y\|^2}{4t}} dy & \\ & = \int_{\mathbb R^d} \sqrt{2 \pi}^{-d} e^{-\frac{\|y\|^2}{2}} dy & \text{ integration by substitution using } y \mapsto \sqrt{2t}y \\ & = 1 & \text{ lemma 6.2} \end{align}$$ Furthermore, due to the continuity of $$g$$, we may choose for arbitrary $$\epsilon > 0$$ and any $$x \in \R^d$$ a $$\delta > 0$$ such that
 * $$\forall y \in B_\delta(x) : |g(y) - g(x)| < \epsilon$$.

From these last two observations, we may conclude:
 * $$\begin{align}

|g(x) - (E *_x g)(t, x)| & = \left| 1 \cdot g(x) - \int_{\R^d} E(t, x - y) g(y) dy \right| & \\ & = \left| \int_{\mathbb R^d} E(t, x - y) g(x) dy - \int_{\R^d} E(t, x - y) g(y) dy \right| & \\ & = \left| \int_{B_\delta(x)} E(t, x - y) (g(x) - g(y)) dy + \int_{\R^d \setminus B_\delta(x)} E(t, x - y) (g(x) - g(y)) dy \right| & \\ & \le \left| \int_{B_\delta(x)} E(t, x - y) (g(x) - g(y)) dy \right| + \left| \int_{\R^d \setminus B_\delta(x)} E(t, x - y) (g(x) - g(y)) dy \right| & \text{triangle ineq. in } \mathbb R \\ & \le \int_{B_\delta(x)} |E(t, x - y)| \underbrace{|g(x) - g(y)|}_{< \epsilon} dy + \int_{\R^d \setminus B_\delta(x)} |E(t, x - y) (g(x) - g(y))| dy & \text{ triangle ineq. for } \int \\ & < \int_{\mathbb R^d} |E(t, x - y)| \epsilon dy + \int_{\R^d \setminus B_\delta(x)} |E(t, x - y)| \underbrace{(|g(y)| + |g(x)|)}_{\le 2\|g\|_\infty} dy & \text{ monotonicity of the } \int \\ & = \epsilon + 2\|g\|_\infty \int_{\R^d \setminus B_\delta(x)} E(t, x - y) dy & \end{align}$$ But by integration by substitution using the diffeomorphisms $$y \mapsto x - y$$ and then $$y \mapsto \sqrt{2t} y$$, we obtain
 * $$\int_{\R^d \setminus B_\delta(x)} E(t, x - y) dy = \int_{\R^d \setminus B_\delta(0)} E(t, y) dy = \int_{\R^d \setminus B_{\frac{\delta}{\sqrt{2t}}}(0)} \frac{1}{\sqrt{2\pi}^d} e^{-\frac{\|y\|^2}{2}} dy \to 0, \quad t \downarrow 0$$

which is why
 * $$\limsup_{t \downarrow 0} |g(x) - (E *_x g)(t, x)| \le \epsilon$$

Since $$\epsilon > 0$$ was arbitrary, continuity is proven.
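The tail estimate at the heart of this continuity argument, $$\int_{\R^d \setminus B_\delta(x)} E(t, x - y) dy \to 0$$ as $$t \downarrow 0$$, can be sketched numerically for $$d = 1$$ (the value of $$\delta$$ and the grid are arbitrary choices, not part of the proof):

```python
import numpy as np

def E(t, y):
    # one-dimensional heat kernel
    return (4 * np.pi * t) ** -0.5 * np.exp(-y**2 / (4 * t))

delta = 0.5
h = 1e-4
y = np.arange(delta, 60.0, h) + h / 2  # one tail; the kernel is even in y

# mass of the kernel outside B_delta(0), for decreasing times t
tails = [2 * np.sum(E(t, y)) * h for t in [1.0, 0.1, 0.01, 0.001]]

assert all(a > b for a, b in zip(tails, tails[1:]))  # shrinks as t decreases
assert tails[-1] < 1e-10
```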