User:TakuyaMurata/Calculus

Module and linear space
An additive group $$G$$ is said to be a module over $$R$$, or an R-module for short, if scalar multiplication by the members of a ring $$R$$ satisfies the following properties: for $$x, y \in G$$ and $$\alpha, \beta \in R$$,
 * (i) Both $$\alpha x$$ and $$x + y$$ are in $$G$$
 * (ii) $$(\alpha \beta) x = \alpha (\beta x)$$ (associativity)
 * (iii) $$\alpha (x + y) = \alpha x + \alpha y$$ and $$(\alpha + \beta) x = \alpha x + \beta x$$ (distributive laws)
 * (iv) $$1_R x = x$$

By definition, every abelian group is itself a module over $$\mathbb{Z}$$, since $$x + x + ... + x = nx$$ and $$n$$ is a scalar. Finally, a linear space is a module over a field. Defining the notion of dimension is a bit tricky. However, we can safely say a $$\mathcal{K}$$-vector space is finite-dimensional if it has a finite basis; that is, we can find linearly independent vectors $$e_1, e_2, ..., e_n$$ so that $$\mathcal{V} = \{ a_1 e_1 + a_2 e_2 + ... + a_n e_n; a_j \in \mathcal{K} \}$$. Such a basis need not be unique.

3 Theorem ''Let $$\mathcal{V}$$ be a finite-dimensional $$\mathcal{K}$$-vector space. Then $$\mathcal{V}^*$$ has the same dimension as $$\mathcal{V}$$ does; that is, every basis for $$\mathcal{V}$$ has the same cardinality as every basis for $$\mathcal{V}^*$$ does.''

It can be shown that the map $$\mathcal{V} \to \mathcal{V}^*$$ cannot be defined constructively. (TODO: need to detail this matter)

1 Theorem If $$\mathcal{X}$$ is a topological vector space (TVS) and every finite subset of $$\mathcal{X}$$ is closed, then $$\mathcal{X}$$ is a Hausdorff space.

Proof: Let $$x, y \in \mathcal{X}$$ with $$x \ne y$$ be given. Moreover, let $$\Omega$$ be the complement of the singleton $$\{y\}$$, which is open by hypothesis. Since the function $$f(z) = x + z$$ is continuous at $$0$$ and $$f(0) = x$$ is in $$\Omega$$, we can find an open $$\omega$$ containing $$0$$ such that $$\{x\} + \omega \subset \Omega$$. Here we used, and will henceforth use, the notation $$A + B = $$ the union of $$\{ x + y \}$$ taken over all $$x \in A$$ and $$y \in B$$. Furthermore, since the function $$g(x) = -x$$ is continuous and so is its inverse, namely $$g$$ itself, we may assume that $$\omega = -\omega$$ by replacing $$\omega$$ with the intersection of $$\omega$$ and $$-\omega$$. By applying the same construction to the continuity of addition at $$0 + 0 = 0$$, we may shrink $$\omega$$ further so that $$\{x\} + \omega + \omega \subset \Omega$$. It then follows that $$\{x\} + \omega$$ and $$\{y\} + \omega$$ are disjoint. Indeed, if we write $$x + z = y + w$$ for some $$z, w \in \omega$$, then $$y = x + z - w \in \{x\} + \omega + \omega \subset \Omega$$, a contradiction. $$\square$$

Normed spaces
A vector space is said to be normed if it is a metric space and its metric $$d$$ has the form:
 * $$d(x, y) = \|x - y\|$$

Here, the function $$\|\cdot\|$$, called a norm, has the property (in addition to that it induces the metric) that $$\| \lambda x \| = |\lambda| \| x \|$$ for any scalar $$\lambda$$. We note that:
 * $$\|x+y\| = d(x, -y) \le d(x, 0) + d(0, -y) = \|x\| + \|y\|$$

and
 * $$d(x+z, y+z) = \|x - y\| = d(x,y)$$ for any $$x, y, z$$.
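Both identities are easy to check numerically. Below is a small Python sketch (the helper names `norm` and `d` are ours, introduced only for illustration) verifying the triangle inequality and the translation invariance of the induced metric for sample vectors in $$\mathbb{R}^3$$:

```python
import math
import random

def norm(x):
    # Euclidean norm on R^n; it induces the metric d(x, y) = ||x - y||
    return math.sqrt(sum(t * t for t in x))

def d(x, y):
    return norm([a - b for a, b in zip(x, y)])

random.seed(0)
x, y, z = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(3))

# ||x + y|| <= ||x|| + ||y||  (triangle inequality via d(x, -y) <= d(x, 0) + d(0, -y))
assert norm([a + b for a, b in zip(x, y)]) <= norm(x) + norm(y)

# d(x + z, y + z) = d(x, y)  (translation invariance)
xz = [a + c for a, c in zip(x, z)]
yz = [b + c for b, c in zip(y, z)]
assert abs(d(xz, yz) - d(x, y)) < 1e-12
```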

It may go without saying, but a vector space is infinite-dimensional if it is not finite-dimensional.

3 Theorem ''Let $$\mathcal{X}$$, $$\mathcal{Y}$$ be normed spaces. If $$\mathcal{X}$$ is infinite-dimensional and if $$\mathcal{Y}$$ is nonzero, there exists a linear operator $$f:\mathcal{X} \to \mathcal{Y}$$ that is not continuous.''

Baire's theorem
A normed space is said to be complete when every Cauchy sequence in it converges in it.

3 Theorem ''Let $$E$$ be a subspace of a Banach space $$G$$, carrying the same norm. Then the following are equivalent:''
 * (a) $$E$$ is complete.
 * (b) $$E$$ is closed in $$G$$.
 * (c) If $$x_k \in E$$ and $$\sum \|x_k\| < \infty$$, then $$\sum x_k$$ converges in $$E$$.

Proof: (i) Show (a) $$\iff$$ (b). If $$E$$ is complete, then every Cauchy sequence in $$E$$ has its limit in $$E$$; thus, $$E$$ is closed. Conversely, if $$E$$ is closed, then every Cauchy sequence in $$E$$ converges in $$G$$ (since $$G$$ is complete) with limit in $$E$$. Hence, $$E$$ is complete. (ii) Show (a) $$\iff$$ (c). Suppose $$E$$ is complete, $$x_k \in E$$ and $$\sum \|x_k\| < \infty$$. Then
 * $$\left \| \sum_0^n x_k - \sum_0^m x_k \right \| = \left \| \sum_{m+1}^n x_k \right \| \le \sum_{m+1}^n \| x_k \| \to 0$$ as $$n, m \to \infty$$.

Thus, the partial sums of $$\sum x_k$$ form a Cauchy sequence, which converges in $$E$$ by completeness. Conversely, let $$x_j$$ be a Cauchy sequence in $$E$$; we can find a subsequence, again denoted by $$x_k$$, such that $$\| x_{k+1} - x_k \| < 2^{-k}$$. Then
 * $$\sum \| x_{k+1} - x_k \| < \infty$$.

If the summation condition (c) holds, it then follows that $$\sum (x_{k + 1} - x_k)$$ converges in $$E$$; that is, the subsequence $$x_k$$ converges in $$E$$. Since a Cauchy sequence with a convergent subsequence converges to the same limit, $$x_j$$ converges in $$E$$ as well. $$\square$$

3 Corollary ''$$\mathbb{Q}$$ is incomplete but dense in $$\mathbb{R}$$.''

Proof: $$\mathbb{Q}$$ is not closed in $$\mathbb{R}$$; thus, it is not complete by the preceding theorem. Since $$\mathbb{R} \backslash \mathbb{Q}$$ has empty interior, $$\overline{\mathbb{Q}} = \mathbb{R}$$; that is, $$\mathbb{Q}$$ is dense. $$\square$$
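The incompleteness of $$\mathbb{Q}$$ can be made concrete: the Newton iteration for $$\sqrt{2}$$ stays inside $$\mathbb{Q}$$ and is Cauchy, yet its limit is irrational. A Python sketch using exact rational arithmetic (the setup is ours, for illustration only):

```python
from fractions import Fraction

# Newton's iteration x_{n+1} = (x_n + 2/x_n) / 2 stays in Q (exact arithmetic below)
x = Fraction(1)
seq = [x]
for _ in range(6):
    x = (x + 2 / x) / 2
    seq.append(x)

# consecutive terms become arbitrarily close: the sequence is Cauchy in Q ...
assert abs(seq[-1] - seq[-2]) < Fraction(1, 10 ** 10)
# ... and it squeezes onto sqrt(2), which is not rational: no limit exists in Q
assert abs(seq[-1] ** 2 - 2) < Fraction(1, 10 ** 10)
assert seq[-1] ** 2 != 2
```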

We say a set has dense complement if its closure has empty interior.

The next theorem is important not for what it says literally but for its consequences. Though the theorem can be proved more generally, e.g., for a pseudometric space or an F-space, this classical formulation suffices for the remainder of the book.

3 Theorem ''A nonempty complete normed space $$G$$ is never the union of a sequence of subsets of $$G$$ with dense complement.''

Proof: Let $$E_n \subset G$$ be a sequence of subsets of $$G$$ with dense complement. Since $$\overline{E_1}$$ has empty interior and $$G$$ has nonempty interior, there exists a nonempty open ball $$S_1 \subset (G \backslash \overline{E_1})$$ with radius $$\le 2^{-1}$$. Since $$\overline{E_2}$$ has empty interior and $$S_1$$ has nonempty interior, there again exists a nonempty open ball $$S_2 \subset (S_1 \backslash \overline{E_2})$$ with radius $$\le 2^{-2}$$. Iterating the construction ad infinitum we get a decreasing sequence of balls $$S_n$$. Now let $$x_n$$ be the sequence of the centers of $$S_n$$. Then $$x_n$$ is Cauchy since: for $$n, m \ge N$$,
 * $$\|x_n - x_m\| < 2^{-N} + 2^{-N} \to 0$$ as $$N \to \infty$$.

It then follows from the completeness of $$G$$ that $$x_n$$ converges to a point of $$G \backslash \bigcup^{\infty} E_n$$, which is therefore nonempty. $$\square$$

3 Corollary (open mapping theorem) ''If $$A$$ and $$B$$ are Banach spaces, then a continuous linear surjection $$f: A \to B$$ maps an open set in $$A$$ to an open set in $$B$$.''

Proof: Left as an exercise.

The following gives a nice example of the consequences of Baire's theorem.

3 Corollary (Lipschitz continuity) ''Let $$S_n$$ be the set of functions $$u \in \mathcal{C}^0 ([0, 1])$$ for which there exists some $$x \in [0, 1]$$ such that:
 * $$|u(x + h) - u(x)| \le n |h|$$ for all $$x + h \in [0, 1]$$.

Then (i) $$\mathcal{C}^0 ([0, 1])$$ is complete, (ii) $$S_n$$ is closed and has dense complement, and (iii) there exists a $$u \in \mathcal{C}^0 ([0, 1])$$ that is not in any $$S_n$$; i.e., one that is differentiable nowhere.

Proof: (i) $$[0, 1]$$ is compact; thus, $$\mathcal{C}^0([0, 1])$$ with the sup norm is a Banach space by an earlier theorem. (ii) Let $$u_j \in S_n$$ be a sequence, and suppose $$u_j \to u$$ uniformly. Then we have:
 * $$|u(x + h) - u(x)| \le |u(x+h) - u_j(x+h)| + |u_j(x + h) - u_j(x)| + |u_j(x) - u(x)| \le n|h| + 2\|u_j - u\| \to n|h|$$ as $$j \to \infty$$.

Thus, $$u \in S_n$$; i.e., $$S_n$$ is closed. The Stone-Weierstrass theorem says that every continuous function can be uniformly approximated by some infinitely differentiable function; thus, given $$\epsilon > 0$$, we can find a $$g \in \mathcal{C}^{\infty}([0, 1])$$ such that:
 * $$\| u - g \| < {\epsilon \over 2}$$.

If we let $$v = g + {\epsilon \over 2} \sin (Nx)$$, then $$\|u - v\| \le \|u - g\| + {\epsilon \over 2} \le \epsilon$$, while for $$N$$ sufficiently large (so that $${\epsilon N \over 2} > n + \sup |g'|$$)
 * $$v \in \mathcal{C}^0 ([0, 1]) \backslash S_n$$.

Hence, $$S_n$$ has dense complement. Finally, (iii) follows from Baire's theorem by (i) and (ii). $$\square$$

More concisely, the corollary says, via Baire's theorem, that not every continuous function is Lipschitz continuous at even a single point.
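A hands-on way to see a candidate function escaping every $$S_n$$ is to sample difference quotients of a Weierstrass-type series. The following Python sketch (the parameters $$a = 0.5$$, $$b = 13$$ and the truncation are our choices, and floating point only approximates the series) shows the best Lipschitz-like constant blowing up as $$h$$ shrinks:

```python
import math

def w(x, terms=18):
    # a Weierstrass-type function: sum of a^k cos(b^k * pi * x) with a = 0.5, b = 13
    return sum(0.5 ** k * math.cos(13 ** k * math.pi * x) for k in range(terms))

def max_quotient(h, samples=200):
    # largest difference quotient |w(x + h) - w(x)| / h over sample points in [0, 1]
    return max(abs(w(i / samples + h) - w(i / samples)) / h for i in range(samples))

# the best Lipschitz-like bound grows as h shrinks: w belongs to no S_n
assert max_quotient(13.0 ** -6) > max_quotient(13.0 ** -2)
```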

3 Lemma ''In a topological space $$X$$, the following are equivalent:''
 * (i) Every countable union of closed sets with empty interior has empty interior.
 * (ii) Every countable intersection of open dense sets is dense.

Proof: The lemma holds since an open set is dense if and only if its complement has empty interior. $$\square$$

When the above equivalent conditions are true, we say $$X$$ is a Baire space.

3 Theorem ''If a Banach space $$G$$ has a Schauder basis, i.e., a sequence $$x_k$$ such that every $$x \in G$$ admits a unique sequence of scalars $$\alpha_k$$ with
 * $$\| x - \sum_1^n \alpha_k x_k \| \to 0$$ as $$n \to \infty$$,

then $$G$$ is separable.''

Proof:

The validity of the converse was known as the Basis Problem for a long time. It was, however, proven to be false by Per Enflo in 1973.

Duality
The kernel of a linear operator $$f$$, denoted by $$\ker(f)$$, is the set of all vectors that $$f$$ maps to zero. The kernel of a linear operator is a linear space since $$f(x) = 0$$ implies that $$f(\alpha x) = 0$$, and $$f(x) = 0 = f(y)$$ implies $$0 = f(x + y)$$. Moreover, a linear operator has zero kernel if and only if it is injective.

3 Theorem ''Let $$f$$ be a linear functional. Then $$f$$ is continuous if and only if $$\ker(f)$$ is closed.''

Proof: If $$f$$ is continuous, then $$\ker(f) = f^{-1}(\{0\})$$ is closed since the singleton $$\{0\}$$ is closed. Conversely, suppose $$f$$ is not continuous (and so not identically zero). Then there exists a sequence $$x_j \to x$$ such that, after passing to a subsequence,
 * $$\lim_{j \to \infty} f(x_j) - f(x) = \lim_{j \to \infty} f(x_j - x) = c \ne 0$$.

Choose $$u$$ with $$f(u) = 1$$. Then $$x_j - x - f(x_j - x) u \in \ker(f)$$, while its limit $$-cu$$ is not in $$\ker(f)$$. In other words, $$\ker(f)$$ is not closed. $$\square$$

3 Theorem ''If $$f$$ is a continuous linear functional on $$l^p$$ ($$1 \le p < \infty$$), then there is a sequence $$y_k$$ such that
 * $$f(x_1, x_2, x_3, ...) = \sum_1^{\infty} x_k y_k$$''

Proof: Let $$y_k = f(\delta(1, k), \delta(2, k), \delta(3, k), ...)$$ where $$\delta(j, k) = 1$$ if $$j = k$$ else $$0$$. $$\square$$

The dual of a linear space $$G$$, denoted by $$G^*$$, is the set of all linear functionals from $$G$$ to $$\mathbb{F}$$ (i.e., either $$\mathbb{C}$$ or $$\mathbb{R}$$). The dual of a linear space is again a linear space over the same field as the original one, since linear functionals can be added and multiplied by scalars pointwise.

Theorem ''Let $$G$$ be a normed linear space. Then for $$x \in G$$ and $$f \in G^*$$,''
 * $$\|x\| = \sup_{\|f\| = 1} |f(x)|$$ and $$\|f\| = \sup_{\|x\| = 1} |f(x)|$$.

The duality between a Banach space and its dual gives rise to.

Example: For $$1 \le p < \infty$$, the dual of $$l^p$$ is $$l^q$$ where $$1/p + 1/q = 1$$.
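The pairing behind this duality is controlled by Hölder's inequality, $$|\sum x_k y_k| \le \|x\|_p \|y\|_q$$, which a quick Python check illustrates for the conjugate exponents $$p = 3$$, $$q = 3/2$$ (random finitely supported sequences; the setup is ours):

```python
import random

random.seed(1)
p, q = 3.0, 1.5                        # conjugate exponents: 1/p + 1/q = 1
x = [random.uniform(-1, 1) for _ in range(50)]
y = [random.uniform(-1, 1) for _ in range(50)]

pairing = abs(sum(a * b for a, b in zip(x, y)))
norm_p = sum(abs(a) ** p for a in x) ** (1 / p)
norm_q = sum(abs(b) ** q for b in y) ** (1 / q)

# Hölder's inequality: the functional x -> sum x_k y_k is bounded by ||y||_q
assert pairing <= norm_p * norm_q
```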

3 Theorem (Krein-Milman) The unit ball of the dual of a real normed linear space has an extreme point.

Proof: (TODO: to be written)

The theorem is known to be equivalent to the Axiom of Choice.

The Hahn-Banach theorem
3 Theorem (Hahn-Banach) ''Let $$\mathcal{X}, \mathcal{Y}$$ be normed vector spaces over real numbers. Then the following are equivalent.''
 * (i) Every collection of mutually intersecting closed balls of $$\mathcal{Y}$$ has nonempty intersection. (binary intersection property)
 * (ii) If $$\mathcal{M} \subset \mathcal{X}$$ is a subspace and $$f: \mathcal{M} \to \mathcal{Y}$$ is a continuous linear operator, then $$f$$ can be extended to a continuous linear operator $$F$$ on $$\mathcal{X}$$ such that $$\|f\| = \|F\|$$. (dominated version)
 * (iii) If the linear variety $$x + \mathcal{M}$$ does not meet a non-empty open convex subset $$G$$ of $$\mathcal{X}$$, then there exists a closed hyperplane $$H$$ containing $$x + \mathcal{M}$$ that does not meet $$G$$ either. (geometric form)

3 Corollary If the equivalent conditions hold in the theorem, $$\mathcal{Y}$$ is complete.

Proof: Consider the identity map extended to the completion of $$\mathcal{Y}$$. $$\square$$

3 Corollary ''Let $$f$$ be a linear operator from a Banach space $$\mathcal{X}$$ to a Banach space $$\mathcal{Y}$$. If there exist a set $$\Gamma$$ and operators $$f_1:\mathcal{X} \to l^\infty (\Gamma)$$ and $$f_2:l^\infty (\Gamma) \to \mathcal{Y}$$ such that $$f = f_2 \circ f_1$$ and $$\|f_2\| = \|f_1\|$$, then $$f$$ can be extended to a Banach space containing $$\mathcal{X}$$ without increase in norm.''

Hilbert spaces
A linear space $$\mathcal{X}$$ is called a pre-Hilbert space if for each ordered pair $$(x, y)$$ there is a unique complex number, called ''the inner product of $$x$$ and $$y$$'' and denoted by $$\langle x, y \rangle_\mathcal{X}$$, satisfying the following properties:
 * (i) $$\langle x, y \rangle_\mathcal{X}$$ is linear in $$x$$ when $$y$$ is fixed.
 * (ii) $$\langle x, y \rangle_\mathcal{X} = \overline {\langle y, x \rangle_\mathcal{X}}$$ (where the bar means the complex conjugation).
 * (iii) $$\langle x, x \rangle \ge 0$$ with equality only when $$x = 0$$.

When only one pre-Hilbert space is being considered we usually omit the subscript $$\mathcal{X}$$.

We define $$\| x \| = \langle x, x \rangle^{1/2}$$, and indeed this is a norm: it is clear that $$\| \alpha x \| = | \alpha | \| x \|$$, and (iii) ensures that $$\| x \| = 0$$ implies $$x = 0$$. Finally, the triangle inequality follows from the next lemma.

3 Lemma (Schwarz's inequality) $$|\langle x, y \rangle| \le \|x\|\|y\|$$ where the equality holds if and only if we can write $$x = \lambda y$$ for some scalar $$\lambda$$, or $$y = 0$$.

If we assume the lemma, then since $$\operatorname{Re}(\alpha) \le | \alpha |$$ for any complex number $$\alpha$$, it follows:
 * $$\| x + y \|^2 = \| x \|^2 + 2 \operatorname{Re} \langle x, y \rangle + \| y \|^2 \le \| x \|^2 + 2 | \langle x, y \rangle | + \| y \|^2 \le (\| x \| + \| y \|)^2$$.
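A small numerical check of the lemma and the resulting triangle inequality, using the standard inner product on $$\mathbb{C}^{10}$$ (the helper names are ours):

```python
import random

random.seed(2)

def inner(x, y):
    # <x, y> = sum x_k conj(y_k): linear in x, conjugate-symmetric in y
    return sum(a * b.conjugate() for a, b in zip(x, y))

def nrm(x):
    return inner(x, x).real ** 0.5

rand = lambda: [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10)]
x, y = rand(), rand()

# Schwarz: |<x, y>| <= ||x|| ||y||
assert abs(inner(x, y)) <= nrm(x) * nrm(y)
# and hence the triangle inequality (small tolerance for rounding)
assert nrm([a + b for a, b in zip(x, y)]) <= nrm(x) + nrm(y) + 1e-12
```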

Proof of Lemma: The lemma is just a special case of the next theorem:

3 Theorem ''Let $$\mathcal{H}$$ be a pre-Hilbert space and $$S \subset \mathcal{H}$$ be an orthonormal set (i.e., for $$u, v \in S$$, $$\langle u, v \rangle = 1$$ if $$u = v$$ and $$\langle u, v \rangle = 0$$ otherwise). Then:''
 * (i) $$\sum_{u \in S} |\langle x, u \rangle|^2 \le \|x\|^2$$ for any $$x \in \mathcal{H}$$.
 * (ii) The equality holds in (i) if and only if $$S$$ is maximal in the collection of all orthonormal subsets of $$\mathcal{H}$$ ordered by $$\subset$$.

Proof: (TODO)

3 Theorem ''Let $$u_j$$ be a sequence in a pre-Hilbert space with $$\|u_j\| = 1$$. If $$\Gamma = \sum_{j \ne k} | \langle u_j, u_k \rangle |^2 < \infty$$, then
 * $$(1 - \Gamma) \sum_{j=m}^n | \alpha_j |^2 \le \| \sum_{j=m}^n \alpha_j u_j \|^2 \le (1 + \Gamma) \sum_{j=m}^n | \alpha_j |^2$$ for any sequence $$\alpha_j$$ of scalars.

Proof: Let $$I$$ be the set of all pairs $$(j, k)$$ such that $$m \le j \le n$$, $$m \le k \le n$$ and $$j \ne k$$. By Hölder's inequality we get:
 * $$\sum_{(j, k) \in I} | \langle \alpha_j u_j, \alpha_k u_k \rangle | \le \sum_{j=m}^n | \alpha_j |^2 \Gamma$$.

Since
 * $$\| \sum_{j=m}^n \alpha_j u_j \|^2 \le \sum_{j=m}^n |\alpha_j|^2 + \sum_{(j,k) \in I} | \langle \alpha_j u_j, \alpha_k u_k \rangle |$$,

we get the second inequality. Moreover,
 * $$\left\| \sum_{j=m}^n \alpha_j u_j \right\|^2 \ge \sum_{j=m}^n \langle \alpha_j u_j, \alpha_j u_j \rangle - \sum_{(j, k) \in I} | \langle \alpha_j u_j, \alpha_k u_k \rangle | = \sum_{j=m}^n | \alpha_j |^2 - \sum_{(j, k) \in I} | \langle \alpha_j u_j, \alpha_k u_k \rangle |$$

and this gives the first inequality. $$\square$$

3 Theorem (Bessel's inequality) ''Let $$U$$ be an orthonormal subset of a pre-Hilbert space. Then for each $$x$$ in the space,
 * $$\sum_{u \in U} |\langle x, u \rangle|^2 \le \|x\|^2$$

where the sum can be obtained over some countable subset of $$U$$, and the equality holds if and only if $$U$$ is maximal; i.e., $$U$$ is contained in no strictly larger orthonormal set.''

Proof: First suppose $$U$$ is finite; i.e., $$U = \{ u_1, u_2, ..., u_n \}$$. Let $$\alpha_j = \langle x, u_j \rangle$$. Since for each $$k$$, $$\langle x - \sum_{j=1}^n \alpha_j u_j, u_k \rangle = \langle x, u_k \rangle - \alpha_k \langle u_k, u_k \rangle = 0$$, by the preceding theorem or by direct computation,
 * $$\|x\|^2 = \| x - \sum_{j=1}^n \alpha_j u_j \|^2 + \| \sum_{j=1}^n \alpha_j u_j \|^2 \ge \| \sum_{j=1}^n \alpha_j u_j \|^2 = \sum_{j=1}^n |\alpha_j|^2$$.

Now suppose that $$U$$ is maximal. Let $$y = \sum_{j=1}^n \langle x, u_j \rangle u_j$$. Then by the same reasoning as above, $$x - y$$ is orthogonal to every $$u_j$$. But by the assumed maximality, $$x = y$$. Hence,
 * $$\sum_{j=1}^n |\langle x, u_j \rangle|^2 = \| \sum_{j=1}^n \langle x, u_j \rangle u_j \|^2 = \|y\|^2 = \|x\|^2$$.

Conversely, suppose that $$U$$ is not maximal. Then there exists some nonzero $$x$$ such that $$\langle x, u \rangle = 0$$ for every $$u \in U$$. Thus,
 * $$\sum_{j=1}^n | \langle x, u_j \rangle |^2 = 0 < \|x\|^2$$.

The general case follows by taking the supremum over all finite subsets of $$U$$. $$\square$$
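For a concrete instance of both cases of the theorem, take $$U$$ to be two of the three standard basis vectors of $$\mathbb{R}^3$$ (not maximal, strict inequality) and then all three (maximal, equality). A Python sketch (the example vectors are ours):

```python
def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

U = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # orthonormal but not maximal in R^3
x = (0.6, -0.8, 0.5)

bessel = sum(dot(x, u) ** 2 for u in U)
assert bessel < dot(x, x)                # strict inequality: U is not maximal

U.append((0.0, 0.0, 1.0))                # now a maximal orthonormal set (a basis)
assert abs(sum(dot(x, u) ** 2 for u in U) - dot(x, x)) < 1e-12
```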

3 Corollary In view of Zorn's Lemma, it can be shown that a maximal orthonormal set, as in (ii), exists. (TODO: need elaboration)

3 Lemma The function $$f(x) = \langle x, y \rangle$$ is continuous for each fixed $$y$$.

Proof: If $$f(x) = \langle x, y \rangle$$, from Schwarz's inequality it follows:
 * $$| f(z) - f(x) | = | \langle z - x, y \rangle | \le \| z - x \| \| y \| \to 0$$ as $$z \to x$$. $$\square$$

Given a linear subspace $$\mathcal{M}$$ of $$\mathcal{H}$$, we define: $$\mathcal{M}^\bot = \{ y \in \mathcal{H}; \langle x, y \rangle = 0$$ for all $$x \in \mathcal{M} \}$$. In other words, $$\mathcal{M}^\bot$$ is the intersection, over $$x \in \mathcal{M}$$, of the kernels of the continuous functionals $$y \mapsto \langle y, x \rangle$$, which are closed; hence, $$\mathcal{M}^\bot$$ is closed. (TODO: we can also show that $$\mathcal{M}^\bot = \overline{\mathcal{M}}^\bot$$)

3 Lemma ''Let $$\mathcal{M}$$ be a linear subspace of a pre-Hilbert space. Then $$z \in \mathcal{M}^\bot$$ if and only if $$\| z \| = \inf \{ \| z + w \| ; w \in \mathcal{M}\}$$.''

Proof: The Schwarz inequality says the inequality
 * $$| \langle z, z + w \rangle | \le \|z\| \|z + w \|$$

is actually equality if and only if $$z$$ and $$z + w$$ are linear dependent. $$\square$$

3 Theorem (Riesz) ''Let $$\mathcal{X}$$ be a pre-Hilbert space and $$\mathcal{M}$$ be its subspace. Suppose:''
 * (i) $$\mathcal{X}$$ is complete.
''Then:''
 * (ii) $$\mathcal{M}$$ is dense if and only if $$\mathcal{M}^\bot = \{ 0 \}$$.
 * (iii) Every continuous linear functional $$f \in \mathcal{X}^*$$ has the form $$f(x) = \langle x, y \rangle$$ where $$y$$ is uniquely determined by $$f$$.

Proof: If $$\overline{\mathcal{M}} = \mathcal{X}$$ and $$z \in \mathcal{M}^\bot$$, then $$z \in \overline{\mathcal{M}} \cap \mathcal{M}^\bot = \{0\}$$. (Note: completeness was not needed.) Conversely, if $$\mathcal{M}$$ is not dense, then it can be shown (TODO: using completeness) that there are $$x \notin \overline{\mathcal{M}}$$ and $$y \in \overline{\mathcal{M}}$$ such that
 * $$\|x - y\| = \inf \{ \|x - w\|; w \in \mathcal{M} \}$$.

That is, $$0 \ne x - y \in \overline{\mathcal{M}}^\bot \subset \mathcal{M}^\bot$$. In sum, (i) implies (ii). To show (iii), we may suppose that $$f$$ is not identically zero; since $$\ker(f)$$ is closed and proper, in view of (ii) there exists a $$z \in \ker(f)^\bot$$ with $$\|z\| = 1$$. Since $$f(xf(z) - f(x)z) = 0$$,
 * $$0 = \langle xf(z) - f(x)z, z \rangle = f(z) \langle x, z \rangle - f(x) = \langle x, \overline{f(z)} z \rangle - f(x)$$.

The uniqueness holds since $$\langle x, y \rangle = \langle x, y_2 \rangle$$ for all $$x$$ implies that $$y = y_2$$. Finally, (iii) implies reflexivity, which implies (i). $$\square$$

A complete pre-Hilbert space is called a Hilbert space.

3 Corollary ''Let $$\mathcal{M}$$ be a closed linear subspace of a Hilbert space $$\mathcal{H}$$. Then:''
 * ''(i) For any $$x \in \mathcal{H}$$ we can write $$x = y + z$$ where $$y \in \mathcal{M}$$ and $$z \in \mathcal{M}^\bot$$, and $$y, z$$ are uniquely determined by $$x$$.''
 * ''(ii) $$\mathcal{M}^{\bot\bot} = \overline{\mathcal{M}}$$.''

Proof: (i) Let $$x \in \mathcal{H}$$ be given. Define $$f(w) = \langle w, x \rangle$$ for each $$w \in \mathcal{M}$$. Since $$f$$ is continuous and linear on $$\mathcal{M}$$, which is a Hilbert space, there is $$y \in \mathcal{M}$$ such that $$f(w) = \langle w, y \rangle$$. It follows that $$\langle w, x - y \rangle = 0$$ for any $$w \in \mathcal{M}$$; that is, $$x - y \in \mathcal{M}^\bot$$. The uniqueness holds since if $$y_2 \in \mathcal{M}$$ and $$x - y_2 \in \mathcal{M}^\bot$$, then $$\langle w, y_2 \rangle = \langle w, x \rangle = f(w)$$ for all $$w \in \mathcal{M}$$, and the representation in the Riesz theorem is unique. (ii) If $$x \in \mathcal{M}$$, then $$x$$ is orthogonal to every member of $$\mathcal{M}^\bot$$. Thus, $$\mathcal{M} \subset \mathcal{M}^{\bot\bot}$$, and taking closures on both sides we get: $$\overline{\mathcal{M}} \subset \overline{\mathcal{M}^{\bot\bot}} = \mathcal{M}^{\bot\bot}$$. Conversely, if $$x \in \overline{\mathcal{M}}^{\bot\bot}$$, then by (i) we write: $$x = y + z$$ where $$y \in \overline{\mathcal{M}}$$ and $$z \in \overline{\mathcal{M}}^\bot$$, and $$\|z\|^2 = \langle x - y, z \rangle = \langle x, z \rangle - \langle y, z \rangle = 0$$. Thus, $$x = y \in \overline{\mathcal{M}}$$. Since $$\mathcal{M} \subset \overline{\mathcal{M}}$$ implies that $$\overline{\mathcal{M}}^\bot \subset \mathcal{M}^\bot$$ and $$\mathcal{M}^{\bot\bot} \subset \overline{\mathcal{M}}^{\bot\bot}$$, the corollary follows. $$\square$$
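In the one-dimensional case $$\mathcal{M} = \operatorname{span}\{m\}$$ the decomposition is explicit: $$y = {\langle x, m \rangle \over \langle m, m \rangle} m$$ and $$z = x - y$$. A quick Python check (the example vectors are ours):

```python
def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

m = (1.0, 2.0, 2.0)                      # M = span{m}
x = (3.0, 0.0, 1.0)

coef = dot(x, m) / dot(m, m)             # Fourier coefficient <x, m> / <m, m>
y = tuple(coef * t for t in m)           # y in M
z = tuple(a - b for a, b in zip(x, y))   # z = x - y

assert abs(dot(z, m)) < 1e-12            # z is in M^perp
assert all(abs(a - (b + c)) < 1e-12 for a, b, c in zip(x, y, z))  # x = y + z
```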

Integration
3 Theorem (Fundamental Theorem of Calculus) ''The following are equivalent:''
 * (i) The derivative of $$\int_a^x f(t) dt$$ at $$x$$ is $$f(x)$$.
 * (ii) $$f$$ is absolutely continuous.

Proof: Suppose (ii). Since we have:
 * $$\inf_{x \le t \le y} f(t) \le (y-x)^{-1} \int_x^y f(t) dt \le \sup_{x \le t \le y} f(t)$$,

for any $$a$$,
 * $$\lim_{y \to x} (y-x)^{-1} \left( \int_{a}^y f(t)dt - \int_{a}^x f(t)dt \right) = f(x)$$. $$\square$$
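The conclusion can be tested numerically: approximate $$F(x) = \int_0^x \sin t \, dt$$ by Riemann sums and differentiate by a central difference; the result should recover $$\sin x$$. A Python sketch (the step sizes are our choices):

```python
import math

def integral(f, a, b, n=20000):
    # midpoint Riemann sum approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = math.sin
F = lambda t: integral(f, 0.0, t)

x, h = 1.0, 1e-4
derivative = (F(x + h) - F(x - h)) / (2 * h)   # central difference of F at x
assert abs(derivative - f(x)) < 1e-5           # recovers f(x) = sin(1)
```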

Differentiation
The derivative of $$f$$ at $$x$$ is the limit, as $$h \rightarrow 0$$, of the quotient:
 * $${f(x + h) - f(x) \over h}$$.

When the limit of the quotient indeed exists, we say $$f$$ is differentiable at $$x$$. The derivative of $$f$$, denoted by $$\dot f$$, is defined by $$\dot f(x) = $$ the limit of the quotient at $$x$$.

3.8. Theorem The power series:
 * $$u = \sum_0^{\infty} a_j z^j$$

is analytic inside the radius of convergence.

Proof: The normal convergence of $$u$$ implies the theorem.

To show that every analytic function can be represented by a power series, we will wait for Cauchy's integral formula, though this is not strictly necessary.

We define the norm in $$\mathbb{R}^n$$, thereby inducing topology;
 * $$\| x \| = \left( \sum_1^n x_j^2 \right)^{1/2}$$.

The topology obtained in this way is often called the natural topology of $$\mathbb{R}^n$$, since, so to speak, we do not artificially induce a topology by defining $$\sigma$$.

3. Theorem (Euler's formula) ''If $$z \in \mathbb{C}$$, then''
 * $$z = |z|e^{i \theta} = |z|(\cos \theta + i \sin \theta)$$.

Proof:

3. Theorem (Cauchy-Riemann equations) ''Suppose $$u \in \mathcal{C}^1(\Omega)$$. We have:
 * $${\partial u \over \partial \bar z} = 0$$ on $$\Omega$$ if and only if $${\partial u \over \partial x} - {1 \over i} {\partial u \over \partial y} = 0$$ on $$\Omega$$.''

Proof:

3. Corollary ''Let $$u, v$$ be analytic in $$\Omega$$, with $$\Omega$$ connected. If $$\mbox{Re }u = \mbox{Re }v$$, then $$u - v$$ is constant.''

Proof: Let $$g = u - v$$. Then $$0 = 2\,\mbox{Re }g = g + \bar g$$; thus $$\bar g = -g$$ is analytic as well as $$g$$, which forces $$g' = 0$$, and hence $$g$$ is constant since $$\Omega$$ is connected. $$\square$$

This furnishes examples of functions that are not analytic. For example, $$u(x + iy) = x + iy$$ is analytic everywhere, and that means $$v(x + iy) = x + icy$$ cannot be analytic unless $$c = 1$$.

An operator $$f$$ is bounded if there exists a constant $$C > 0$$ such that for every $$x$$:
 * $$\| f(x) \| \le C \| x \|$$.

3.1 Theorem Given a bounded operator $$f$$, if
 * $$\alpha = \inf \{ C : \| f(x) \| \le C \| x \| \mbox{ for all } x \}$$, $$\beta = \sup_{\| x \| \le 1} \| f(x) \|$$ and $$\gamma = \sup_{\| x \| = 1} \| f(x) \|$$,

then $$\alpha = \beta = \gamma$$.

Proof: Since $$\{ x : \|x\| = 1 \} \subset \{ x : \|x\| \le 1 \}$$, we have $$\gamma \le \beta$$. Next, for $$\|x\| \le 1$$,
 * $$\|f(x)\| \le \alpha \|x\| \le \alpha$$,

so $$\beta \le \alpha$$. Finally, for $$x \ne 0$$,
 * $$\|f(x)\| = \left\|f\left({x \over \|x\|}\right)\right\| \|x\| \le \gamma \|x\|$$,

so $$\gamma$$ is one of the constants over which the infimum $$\alpha$$ is taken; hence $$\alpha \le \gamma$$. In sum, $$\gamma \le \beta \le \alpha \le \gamma$$, and the three values coincide. $$\square$$
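For a concrete operator, one can approximate $$\gamma$$ by sampling the unit sphere and then observe that it serves as an admissible constant $$C$$. A Python sketch with a fixed $$2 \times 2$$ matrix (the matrix, sample counts, and tolerances are our choices):

```python
import math
import random

random.seed(0)
A = [[2.0, 1.0], [0.0, 1.0]]             # a sample linear operator on R^2
Av = lambda v: tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))
nrm = lambda v: math.sqrt(sum(t * t for t in v))

# gamma = sup of ||f(x)|| over the unit sphere, approximated by dense sampling
gamma = max(nrm(Av((math.cos(t), math.sin(t))))
            for t in (2 * math.pi * i / 10000 for i in range(10000)))

# gamma is (up to sampling error) an admissible constant: ||f(x)|| <= gamma ||x||
for _ in range(300):
    v = (random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0))
    assert nrm(Av(v)) <= (gamma + 1e-6) * nrm(v) + 1e-12
```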

We denote by $$\| f \}$$ any of the above values, and call it the norm of $$f$$.

3.2 Corollary An operator $$f$$ is bounded if and only if it is continuous.

Proof: If $$f$$ is bounded, then, by linearity, for every $$x$$ and $$h$$
 * $$\| f(x + h) - f(x) \| \le \| f \| \| h \|$$,

so $$f$$ is continuous everywhere. Conversely, a continuous operator maps some open ball centered at $$0$$ into a bounded set; by scaling, this yields a constant $$C$$ with $$\|f(x)\| \le C \|x\|$$, and the corollary follows from the preceding theorem. $$\square$$

3. Theorem If $$F$$ is a linear space of dimension $$n$$, then it has a chain of exactly $$n$$ subspaces including $$F$$ and excluding $$\{0\}$$.

Proof: $$F$$ has a basis of $$n$$ elements $$e_1, ..., e_n$$; take the subspaces spanned by $$e_1$$; by $$e_1, e_2$$; and so on, up to $$e_1, ..., e_n$$.

Theorem ''If $$E$$ is complete, then $$E^n = \{ \sum_1^n x_j e_j : x_j \in E \}$$ (i.e., a cartesian power of $$E$$) is complete.''

Proof: (for $$n = 2$$) Let $$z_j = x_j e_1 + y_j e_2$$ be a Cauchy sequence in $$E^2$$. By orthogonality, we have:
 * $$|z_n - z_m|^2 = |(x_n - x_m) e_1 + (y_n - y_m) e_2|^2 = |x_n - x_m|^2 + |y_n - y_m|^2$$,

so both $$x_j$$ and $$y_j$$ are Cauchy sequences. By completeness, the respective limits $$x$$ and $$y$$ are in $$E$$; thus, the limit $$z = x e_1 + y e_2$$ is in $$E^2$$. $$\square$$

The theorem shows in particular that $$\mathbb {R}, \mathbb {R^n}, \mathbb{C}, \mathbb{C^n}$$ are complete.

3. Theorem (Hamel basis) The Axiom of Choice implies that every linear space has a basis.

Proof: We may suppose the space is infinite-dimensional, otherwise the theorem holds trivially.

FIXME: Adapt. 3. Theorem (Fixed Point Theorem) ''Suppose a function $$f$$ maps a closed subset $$F$$ of a Banach space to itself, and further suppose that there exists some $$c < 1$$ such that $$\|f(x) - f(y)\| \le c \| x - y \|$$ for any $$x$$ and $$y$$. Then $$f$$ has a unique fixed point.''

Proof: Pick $$x \in F$$ and let $$s_n$$ be the sequence $$x, f(x), f(f(x)), f(f(f(x))), ...$$; that is, $$s_0 = x$$ and $$s_{n + 1} = f(s_n)$$. Then we have:
 * $$\| s_{n + 1} - s_n \| = \| f(s_n) - f(s_{n - 1}) \| \le c \| s_n - s_{n - 1} \|$$.

By induction it follows:
 * $$\| s_{n + 1} - s_n \| \le c^n \| s_1 - s_0 \|$$.

Thus, $$s_n$$ is a Cauchy sequence since:
 * $$\| s_{n+k} - s_n \| \le \sum_{j=0}^{k-1} \|s_{n+j+1} - s_{n+j}\| \le \| s_1 - s_0 \| c^n \sum_{j=0}^{k-1} c^j \le \| s_1 - s_0 \| {c^n \over 1 - c} \to 0$$ as $$n \to \infty$$.

That $$F$$ is closed puts the limit $$s$$ of $$s_n$$ in $$F$$, and $$s$$ is a fixed point since $$f$$ is continuous and $$f(s_n) = s_{n+1} \to s$$. Finally, the uniqueness follows since if $$f(x) = x$$ and $$f(y) = y$$, then
 * $$\|x - y\| = \|f(x) - f(y)\| \le c \| x - y \|$$,

which is impossible unless $$x = y$$, since $$c < 1$$. $$\square$$
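The iteration in the proof is directly usable as an algorithm. For example, $$f(x) = \cos x$$ maps $$[0, 1]$$ into itself and is a contraction there (since $$|f'(x)| = |\sin x| \le \sin 1 < 1$$), so iterating it converges to the unique fixed point. In Python:

```python
import math

# f(x) = cos(x) is a contraction on [0, 1]; iterate it as in the proof
x = 1.0
for _ in range(100):
    x = math.cos(x)

assert abs(math.cos(x) - x) < 1e-12   # x is (numerically) the unique fixed point
assert 0.73 < x < 0.74
```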

3. Corollary (mean value inequality) ''Let $$f : \mathbb{R}^n \to \mathbb{R}^m$$ be differentiable. Then there exists some $$z = (1 - t)x + ty$$, with $$t \in [0, 1]$$, such that
 * $$\|f(x) - f(y)\| \le \| f' (z) \| \| x - y \|$$''

where the equality holds if $$n = m = 1$$ (mean value theorem).

Proof:

Theorem ''Let $$f: E \to \mathbb{R}$$ where $$E \subset \mathbb{R}^n$$ and is open. If $$D_1 f, D_2 f, ... D_n f$$ are bounded in $$E$$, then $$f$$ is continuous.''

Proof: Let $$\epsilon > 0$$ and $$x \in E$$ be given. Using the assumption, we find a constant $$M$$ so that:
 * $$\sup_E | D_i f | < M$$ for $$i = 1, 2, ... n$$.

Let $$\delta = \epsilon (nM)^{-1}$$. Suppose $$|h| < \delta$$ and $$x + h \in E$$. Let
 * $$\phi_k(t) = f \left( x + \sum_1^k(h \cdot e_j)e_j + t(h \cdot e_{k+1})e_{k+1} \right)$$.

Then by the mean value theorem, we have: for some $$c \in (0, 1)$$,
 * $$| \phi_k(1) - \phi_k(0) | = |h \cdot e_{k+1}| \left| D_{k+1} f\left(x + \sum_1^k (h \cdot e_j)e_j + c(h \cdot e_{k+1})e_{k+1}\right) \right| \le |h| M$$.

It thus follows, since $$\phi_k(0) = \phi_{k-1}(1)$$:
 * $$| f(x+h) - f(x) | = |\phi_{n-1}(1) - \phi_0(0) | = \left| \sum_{k=0}^{n-1} (\phi_k(1) - \phi_k(0)) \right| \le |h| nM < \epsilon$$. $$\square$$

Theorem (differentiation rules) ''Given $$f, g: \mathbb{R} \to \mathbb{R}$$ differentiable,''
 * (a) (Chain Rule) $$D(g \circ f) = (D(g) \circ f)D(f)$$.
 * (b) (Product Rule) $$D(fg) = D(f)g + fD(g)$$.
 * (c) (Quotient Rule) $$D(f / g) = g^{-2} (D(f)g - fD(g))$$.

Proof: (b) and (c) follow after we apply (a) to them with $$\log$$ and $$h(x) = x^{-1}$$, together with the implicit function theorem. $$\square$$

Theorem (Cauchy-Riemann equations) ''Let $$\Omega \subset \mathbb{C}$$ and $$u:\Omega \to \mathbb{C}$$. Then $$u$$ is differentiable if and only if $${\partial \over \partial x}u$$ and $${\partial \over \partial y}u$$ are continuous on $$\Omega$$ and $${\partial \over \partial z} u = 0$$ on $$\Omega$$.''

Proof: Suppose $$u$$ is differentiable. Let $$z \in \Omega$$ and $$x = \mbox{Re}z$$ and $$y = \mbox{Im}z$$.

Since $$u$$ is differentiable, letting $$h \to 0$$ along the real and the imaginary axes respectively gives:
 * $$u'(z) = \lim_{h \in \mathbb{R}, h \to 0} {u(x+h, y) - u(x,y) \over h} = {\partial \over \partial x}u(z)$$
 * $$u'(z) = \lim_{h \in \mathbb{R}, h \to 0} {u(x, y+h) - u(x,y) \over ih} = {1 \over i} {\partial \over \partial y}u(z)$$.

Since $$x = {z + \overline{z} \over 2}$$ and $$y = {z - \bar z \over 2i}$$, the Chain Rule gives:
 * $${\partial \over \partial \bar z} u = \left( {\partial x \over \partial \bar z} {\partial \over \partial x} + {\partial y \over \partial \bar z} {\partial \over \partial y} \right) u = {1 \over 2} \left( {\partial \over \partial x} - {1 \over i} {\partial \over \partial y} \right) u = {1 \over 2} \left( u'(z) - u'(z) \right) = 0$$.

Conversely, let $$z \in \Omega$$. It suffices to show that $$u'(z) = {\partial \over \partial x} u(z)$$. Let $$\epsilon > 0$$ be given and $$x = \Re z$$ and $$y = \Im z$$. By the continuity of the partial derivatives and since $$\Omega$$ is open, we can find a $$\delta > 0$$ so that: $$B(\delta, z) \subset \Omega$$ and for $$s \in B(\delta, z)$$ it holds:
 * $$\left| {\partial \over \partial x}(u(s) - u(z)) \right| < \epsilon / 2$$ and $$\left| {\partial \over \partial y}(u(s) - u(z)) \right| < \epsilon / 2$$.

Let $$h \in B(\delta, 0)$$ be given and $$h_1 = \Re h$$ and $$h_2 = \Im h$$. Using the mean value theorem we have: for some $$s_1, s_2 \in B(\delta, z)$$,
 * $$u(x+h_1, y+h_2) - u(x, y) = u(x+h_1, y+h_2) - u(x, y+h_2) + u(x, y+h_2) - u(x, y) = h_1 {\partial \over \partial x} u(s_1) + h_2 {\partial \over \partial y} u(s_2)$$,

where $${\partial \over \partial y} u = i {\partial \over \partial x} u$$ by assumption. It finally follows:
 * $$\left| {u(z + h) - u(z) \over h} - {\partial \over \partial x} u(z) \right| \le \left| {h_1 \over h} \right| \left| {\partial \over \partial x} (u(s_1) - u(z)) \right| + \left| {h_2 \over h} \right| \left| {\partial \over \partial x} (u(s_2) - u(z)) \right| < \epsilon$$. $$\square$$
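The identity $${\partial \over \partial \bar z} u = 0$$ can be checked by finite differences for a concrete analytic function, say $$u(z) = z^3$$ (our example; central differences approximate the partial derivatives):

```python
u = lambda z: z ** 3                  # analytic, so the d/dz-bar derivative should vanish
z0 = 0.7 + 0.4j
h = 1e-6

du_dx = (u(z0 + h) - u(z0 - h)) / (2 * h)            # central difference in x
du_dy = (u(z0 + 1j * h) - u(z0 - 1j * h)) / (2 * h)  # central difference in y

dz_bar = 0.5 * (du_dx + 1j * du_dy)   # (1/2)(d/dx + i d/dy) u = d/dz-bar u
assert abs(dz_bar) < 1e-6
assert abs(du_dx - 3 * z0 ** 2) < 1e-4               # and u'(z0) = 3 z0^2
```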

3 Corollary ''Let $$u \in \mathcal{A}(\Omega)$$ and suppose $$\Omega$$ is connected. Then the following are equivalent:''
 * (a) $$u$$ is constant.
 * (b) $$\mbox{Re }u$$ is constant.
 * (c) $$|u|$$ is constant.

Proof: That (a) $$\Rightarrow$$ (b) is obvious. Suppose (b); that is, $$\Re u = M$$ for some constant $$M$$. Then for all $$z \in \Omega$$,
 * $$|e^u| = |e^{\Re u} e^{i \Im u}| = |e^{\Re u}| = e^M$$;

that is, $$e^u$$ satisfies (c) in place of $$u$$. Granting that (c) implies (a), $$e^u$$ is then constant; hence $$0 = (e^u)' = u' e^u$$, so $$u' = 0$$ and $$u$$ is constant. Thus, (b) $$\Rightarrow$$ (a), and (a) $$\Rightarrow$$ (c) being trivial, it remains to show (c) $$\Rightarrow$$ (a). Suppose (c). Then $$M^2 = |u|^2 = u \overline{u}$$. Differentiating both sides we get:
 * $$0 = {\partial \over \partial z} u\overline{u} = u {\partial \over \partial z} \overline{u} + \overline{u} {\partial \over \partial z} u$$.

Since $$u \in \mathcal{A}(\Omega)$$, it follows that $${\partial \over \partial z} \overline{u} = 0$$ and thus $$\overline{u} {\partial \over \partial z} u = 0$$. If $$M = 0$$, then $$u = 0$$ identically. If $$M \ne 0$$, then $$\overline{u}$$ never vanishes, so $${\partial \over \partial z}u = 0$$ and $$u$$ is constant since $$\Omega$$ is connected. Thus, (c) $$\Rightarrow$$ (a). $$\square$$

We say a function has the open mapping property if it maps open sets to open sets. The maximum principle states:
 * if $$|u|$$ has a local maximum, then the function $$u$$ is constant.

3 Theorem ''Let $$u: \Omega \to \mathbb{C}$$. The following are equivalent:''
 * (a) $$u$$ is harmonic.
 * (b) $$u$$ has the mean value property.

3 Theorem ''Let $$u: \Omega \to \mathbb{C}$$. If $$u$$ has the open mapping property, then the maximum principle holds.''

Proof: Suppose $$u \in \mathcal{A}(\Omega)$$ and $$\Omega$$ is open and connected. Let $$\omega = \{ z \in \Omega : |u(z)| = \sup_\Omega |u| \}$$. If $$|u|$$ has a local maximum, then $$\omega$$ is nonempty. Also, $$\omega$$ is closed in $$\Omega$$ since $$\omega = |u|^{-1}(\{\sup_\Omega |u|\})$$ and $$|u|$$ is continuous. Let $$a \in \omega$$. Since $$\Omega$$ is open, we can find an $$r > 0$$ so that: $$B = B(r, a) \subset \Omega$$. If $$u$$ were not constant on $$B$$, then $$u(B)$$ would be open by the open mapping property, and we could find an $$\epsilon > 0$$ so that $$B(\epsilon, u(a)) \subset u(B)$$. This is to say that $$|u(z)| > |u(a)|$$ for some $$z \in B(r, a)$$, which is absurd since $$a \in \omega$$ and $$|u(z)| \le |u(a)|$$ for all $$z \in \Omega$$. Thus, $$|u| = \sup_\Omega |u|$$ identically on $$B(r, a)$$; hence $$B(r, a) \subset \omega$$ and $$\omega$$ is open in $$\Omega$$. Since $$\Omega$$ is connected, $$\Omega = \omega$$. Therefore, $$|u| = \sup_\Omega |u|$$ on $$\Omega$$, and $$u$$ is constant. $$\square$$

Addendum
Exercise ''Let $$f \in \mathcal{A}(\mathbb{C})$$. Then $$f$$ is a polynomial of degree $$\le n$$ if and only if there are constants $$A$$ and $$B$$ such that $$|f(z)| \le A + B|z|^n$$ for all $$z \in \mathbb{C}$$.''

Exercise 2 ''Let $$f: A \to A$$ be linear. Further suppose $$A$$ has dimension $$n < \infty$$. Then the following are equivalent:''
 * 1) $$f^{-1}$$ exists
 * 2) $$\det(f) \ne 0$$ where $$\det(f) = \sum_{\sigma} \operatorname{sgn}(\sigma) \prod_{i=1}^n x_{\sigma(i)i}$$, the sum taken over all permutations $$\sigma$$ of $$\{1, ..., n\}$$ and $$(x_{ij})$$ being the matrix of $$f$$.
 * 3) The set
 * $$\left\{ f \begin{bmatrix} 1 \\ \vdots \\ 0 \end{bmatrix}, \dots, f \begin{bmatrix} 0 \\ \vdots \\ 1 \end{bmatrix} \right\}$$ of the images of the standard basis vectors spans a space of dimension $$n$$.