User:TakuyaMurata/Linear algebra

This chapter covers selected topics in linear algebra. The purpose of the chapter is twofold: (i) to give a finite-dimensional example of some results and notions that will appear in the next chapter, and (ii) to demonstrate, as an application, tools developed in the previous chapter.

Let $$V$$ be a finite-dimensional vector space over $$k$$. Let $$T:V \to V$$ be a linear operator. If $$e_1, \dots, e_n$$ is a basis of $$V$$, then for any $$x \in V$$ we can write:
 * $$x = \sum_j x_j e_j$$

with $$x_j \in k$$. Letting also $$T e_j = \sum_k (T e_j)_k e_k = \sum_k a_{kj} e_k$$ we get:
 * $$Tx = \sum_{k, j} a_{kj} x_j e_k$$

The matrix $$[a_{kj}]$$ is then called the matrix representation of $$T$$. Clearly, the entries $$a_{kj}$$ depend on the choice of a basis. A canonical choice of basis is given by the next theorem.
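The construction above can be sketched numerically. A minimal example, not from the text: the operator $$T$$, the basis, and the matrix below are all hypothetical choices; the $$j$$-th column of the matrix is the coordinate vector of $$T e_j$$.

```python
import numpy as np

def T(v):
    # an example operator on R^3: T(x, y, z) = (x + 2y, 3z, x - z)
    x, y, z = v
    return np.array([x + 2*y, 3*z, x - z], dtype=float)

basis = np.eye(3)  # the standard basis e_1, e_2, e_3
# a_kj is the k-th coordinate of T(e_j), so the columns are T(e_1), T(e_2), T(e_3)
A = np.column_stack([T(e) for e in basis])

# Check: applying T agrees with multiplying by the matrix representation.
v = np.array([1.0, -1.0, 2.0])
assert np.allclose(T(v), A @ v)
```

The same recipe works for any basis: express each $$T e_j$$ in coordinates and stack the results as columns.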

Theorem ''$$V$$ has a basis in which $$T$$ can be represented by a matrix that is upper-triangular and whose main diagonal consists of eigenvalues of $$T$$. The matrix is called the Jordan form of $$T$$.''

Proof: See Jordan canonical form.

The Jordan form of $$T$$ may be simpler still. $$T$$ is said to be semisimple if every $$T$$-invariant subspace has a complementary $$T$$-invariant subspace. (See: semisimple operator.)

Theorem ''The following are equivalent.
 * (i) $$T$$ is semisimple.
 * (ii) The roots of the minimal polynomial of $$T$$ are all distinct.
 * (iii) There exists a basis consisting of eigenvectors of $$T$$.
 * (iv) The Jordan form of $$T$$ is a diagonal matrix whose diagonal entries are eigenvalues of $$T$$. (That is, $$T$$ is diagonalizable.)''

Proof: (ii) $$\Rightarrow$$ (iii) $$\Rightarrow$$ (iv) are immediate.
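Conditions (iii) and (iv) can be illustrated numerically. A minimal sketch, with a hypothetical example matrix: a real symmetric matrix is semisimple, and its eigendecomposition exhibits a basis of eigenvectors in which the matrix becomes diagonal.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])      # symmetric, hence semisimple
eigvals, V = np.linalg.eigh(A)               # columns of V: orthonormal eigenvectors

# (iv): in the eigenvector basis, A is represented by a diagonal matrix
# whose entries are the eigenvalues.
D = V.T @ A @ V
assert np.allclose(D, np.diag(eigvals))
```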

Corollary Every projection is semisimple.

A linear operator $$N$$ is said to be nilpotent if $$N^k = 0$$ for some positive integer $$k$$.

Theorem ''$$T$$ is the sum of a semisimple operator and a nilpotent operator that commute with each other.''
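A numerical sketch of the decomposition, on a hypothetical Jordan block (for an upper-triangular Jordan form, the diagonal part is semisimple and the strictly upper-triangular remainder is nilpotent):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 2.0]])  # a Jordan block
S = np.diag(np.diag(A))                  # semisimple (diagonal) part
N = A - S                                # nilpotent part

assert np.allclose(S + N, A)             # A = S + N
assert np.allclose(N @ N, 0)             # N is nilpotent: N^2 = 0
assert np.allclose(S @ N, N @ S)         # S and N commute
```

In general the decomposition is taken in a basis putting $$T$$ in Jordan form, then transported back.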

Lemma ''Let $$T$$ be a normal operator on a non-zero finite-dimensional inner product space $$V$$. Then
 * $$r(T) = \|T\| = \sup \{ |\langle Tx, x \rangle|; x \in V, |x|=1 \}$$

where $$r(T) = \max \{ |\lambda|; \lambda \in \sigma(T) \}$$. Furthermore, if $$K$$ is a compact space and $$s \mapsto T_s$$ is a continuous family of such operators parametrized by $$K$$, then
 * $$s \mapsto \max \sigma(T_s): K \to \mathbf{R}$$

is continuous.''

Proof: Let $$\lambda \in \sigma(T)$$ with $$|\lambda| = r(T)$$. Then there exists $$y \in V$$ with $$|y|=1$$ and $$Ty = \lambda y$$. It follows:
 * $$r(T) = |\langle Ty, y \rangle| \le \sup \{ |\langle Tx, x \rangle|; |x|=1 \}$$

Conversely, if $$x \in V$$ and $$|x|=1$$, then by the spectral theorem we can write:
 * $$x = \sum_j a_j u_j$$

where the $$u_j$$ form an orthonormal basis consisting of eigenvectors, say $$Tu_j = \lambda_j u_j$$. It follows:
 * $$|\langle Tx, x \rangle| \le \|Tx\|$$ and $$\|Tx\|^2 = \sum_j |a_j \lambda_j|^2 \le r(T)^2 \sum_j |a_j|^2 = r(T)^2$$

Finally, since the unit sphere of $$V$$ is compact, the continuity assertion follows from:

Lemma ''Let $$K, L$$ be compact spaces, and $$f: K \times L \to \mathbf{R}$$ be a continuous function. Then
 * $$s \mapsto \max \{ f(s, x); x \in L \}$$

is continuous.''

(For the proof of the lemma, see (to be filled).) $$\square$$
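The equality $$r(T) = \|T\|$$ in the lemma can be checked numerically. A minimal sketch with a hypothetical symmetric (hence normal) matrix: the spectral radius matches the operator norm, and no unit vector makes $$|\langle Tx, x \rangle|$$ exceed it.

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[1.0, 2.0], [2.0, -1.0]])       # symmetric; eigenvalues are ±sqrt(5)

spectral_radius = max(abs(np.linalg.eigvalsh(T)))
operator_norm = np.linalg.norm(T, 2)           # largest singular value
assert np.isclose(spectral_radius, operator_norm)

# sup over the unit sphere, sampled: |<Tx, x>| never exceeds r(T)
for _ in range(100):
    x = rng.standard_normal(2)
    x /= np.linalg.norm(x)
    assert abs(x @ T @ x) <= spectral_radius + 1e-12
```

For a non-normal matrix (e.g. a nonzero nilpotent one) the equality fails, which is why normality is assumed.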

A positive definite matrix is a hermitian matrix that is positive as a linear operator. (So, the word "definite" is actually redundant but historical.)

Theorem ''Let $$T, S$$ be positive definite matrices on a finite-dimensional inner product space $$\mathcal{X}$$. Then $$TST$$ is also positive definite.''

Proof: Since $$\langle TSTx, x \rangle = \langle S(Tx), Tx \rangle$$, it suffices to show that $$T$$ is injective. But if $$Tx = 0$$, then $$\langle Tx, x \rangle = 0$$, which happens, by assumption, only when $$x = 0.$$ $$\square$$
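A numerical check of the theorem, using hypothetical random positive definite matrices (the construction `B @ B.T + n*I` below is one standard way to generate them):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pd(n):
    # B B^T is positive semidefinite; adding n*I makes it positive definite
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

T, S = random_pd(3), random_pd(3)
P = T @ S @ T

assert np.allclose(P, P.T)                 # hermitian (real symmetric here)
assert np.all(np.linalg.eigvalsh(P) > 0)   # all eigenvalues positive
```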

Given a matrix $$A$$, define
 * $$e^A = \sum_{n=0}^\infty {1 \over n!} A^n$$.

Note that the right-hand side makes sense (that is, converges) since:
 * $$\|e^A\| \le \sum_{n=0}^\infty {1 \over n!} \|A^n\| \le \sum_{n=0}^\infty {1 \over n!} \|A\|^n = e^{\|A\|}$$
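The convergence is easy to observe directly. A minimal sketch, with a hypothetical diagonal example where $$e^A$$ is known exactly (the entrywise exponential of the diagonal):

```python
import numpy as np

def expm_series(A, terms=30):
    # sum of the exponential power series: I + A + A^2/2! + ...
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n          # term = A^n / n!
        result = result + term
    return result

A = np.diag([1.0, 2.0])
assert np.allclose(expm_series(A), np.diag([np.e, np.e ** 2]))
```

In practice one uses a library routine rather than the raw series, but the series is what the definition prescribes.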

Theorem $$\det e^A = e^{\operatorname{tr}(A)}$$

Proof: We write $$A = P^{-1} J P$$ where $$J$$ is the Jordan form of $$A$$. Since the power series conjugates term by term, $$e^A = P^{-1} e^J P$$. Then
 * $$\det e^A = \det (P^{-1} e^J P) = \det e^J = e^{\operatorname{tr}(J)} = e^{\operatorname{tr}(A)}$$

where the third equality holds because $$e^J$$ is upper-triangular with diagonal entries $$e^{\lambda_i}$$, and the last because the trace is invariant under conjugation.

$$\square$$
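The identity can be verified numerically. A sketch with a hypothetical non-diagonal matrix, using a truncated power series for $$e^A$$:

```python
import numpy as np

def expm_series(A, terms=40):
    # truncated exponential power series: I + A + A^2/2! + ...
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-2.0, 3.0]])   # trace 3, eigenvalues 1 and 2

# det(e^A) = e^tr(A)
assert np.isclose(np.linalg.det(expm_series(A)), np.exp(np.trace(A)))
```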

Supplementary exercises
Exercise ''Let $$x$$ be a linear operator on a vector space $$V$$ of dimension $$n$$ over a field of characteristic zero. If $$\operatorname{tr}(x^k) = 0$$ for every $$k = 1, 2, \dots, n$$, then $$x$$ is nilpotent.''
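A hedged numerical aside on the exercise: a nonzero nilpotent matrix has $$\operatorname{tr}(x^k) = 0$$ for every $$k$$, which shows that the hypothesis cannot force $$x = 0$$; nilpotence is the sharp conclusion.

```python
import numpy as np

x = np.array([[0.0, 1.0], [0.0, 0.0]])   # nonzero, nilpotent: x^2 = 0
n = x.shape[0]

powers = [np.linalg.matrix_power(x, k) for k in range(1, n + 1)]
assert all(np.isclose(np.trace(p), 0.0) for p in powers)   # all traces vanish
assert np.any(x != 0)                                      # yet x is not zero
```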