Linear Algebra/Combining Subspaces

''This subsection is optional. It is required only for the last sections of Chapter Three and Chapter Five and for occasional exercises, and can be passed over without loss of continuity.''

This chapter opened with the definition of a vector space, and the middle consisted of a first analysis of the idea. This subsection closes the chapter by finishing the analysis, in the sense that "analysis" means "method of determining the ... essential features of something by separating it into parts".

A common way to understand things is to see how they can be built from component parts. For instance, we think of $$ \mathbb{R}^3 $$ as put together, in some way, from the $$ x $$-axis, the $$ y $$-axis, and the $$ z $$-axis. In this subsection we will make this precise; we will describe how to decompose a vector space into a combination of some of its subspaces. In developing this idea of subspace combination, we will keep the $$\mathbb{R}^3$$ example in mind as a benchmark model.

Subspaces are subsets, and sets combine via union. But taking the combination operation for subspaces to be the simple union operation isn't what we want. For one thing, the union of the $$ x $$-axis, the $$ y $$-axis, and the $$ z $$-axis is not all of $$\mathbb{R}^3$$, so the benchmark model would be left out. Besides, union is wrong for another reason: a union of subspaces need not be a subspace, because it need not be closed under addition. For instance, this $$\mathbb{R}^3$$ vector

$$ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} $$

is in none of the three axes and hence is not in the union. In addition to the members of the subspaces, we must at least also include all linear combinations of their members.
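(The definition that the remarks below refer to is elided from this excerpt. The standard definition of the sum of subspaces, restated here as the likely intended one: where $$W_1,\dots,W_k$$ are subspaces of a vector space, their sum is

$$ W_1+W_2+\cdots+W_k = \{\, \vec{w}_1+\vec{w}_2+\cdots+\vec{w}_k \,\mid\, \vec{w}_1\in W_1,\,\dots,\,\vec{w}_k\in W_k \,\} $$

Under this definition the sum of the three axes is all of $$\mathbb{R}^3$$, since every vector decomposes as $$(x,y,z) = (x,0,0)+(0,y,0)+(0,0,z)$$, so the benchmark model is no longer left out.)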

(The notation, writing the "$$ + $$" between sets in addition to using it between vectors, fits with the practice of using this symbol for any natural accumulation operation.)

The above definition gives one way in which a space can be thought of as a combination of some of its parts. However, the prior example shows that there is at least one interesting property of our benchmark model that is not captured by the definition of the sum of subspaces. In the familiar decomposition of $$\mathbb{R}^3$$, we often speak of a vector's "$$x$$-part", "$$y$$-part", or "$$z$$-part". That is, in this model each vector has a unique decomposition into parts that come from the parts making up the whole space. But in the decomposition used in Example 4.4, we cannot refer to the "$$xy$$-part" of a vector; these three sums

$$ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} =\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} +\begin{pmatrix} 0 \\ 0 \\ 3 \end{pmatrix} =\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} +\begin{pmatrix} 0 \\ 2 \\ 3 \end{pmatrix} =\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} +\begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix} $$

all describe the vector as the sum of something from the first plane and something from the second plane, but the "$$xy$$-part" is different in each.

That is, when we consider how $$\mathbb{R}^3$$ is put together from the three axes "in some way", we might mean "in such a way that every vector has at least one decomposition", and that leads to the definition above. But if we take it to mean "in such a way that every vector has one and only one decomposition" then we need another condition on combinations. To see what this condition is, recall that vectors are uniquely represented in terms of a basis. We can use this to break a space into a sum of subspaces such that any vector in the space breaks uniquely into a sum of members of those subspaces.
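(A hedged illustration of that idea, not the book's own example: the standard basis of $$\mathbb{R}^3$$ splits into three one-vector pieces, one spanning each axis, and the unique representation of a vector with respect to that basis

$$ \begin{pmatrix} x \\ y \\ z \end{pmatrix} = x\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + y\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + z\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} $$

forces a unique decomposition: since the coefficients $$x$$, $$y$$, $$z$$ are uniquely determined, so are the three summands, one from each axis.)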

These examples illustrate a natural way to decompose a space into a sum of subspaces in such a way that each vector decomposes uniquely into a sum of vectors from the parts. The next result says that this way is the only way.

The special case of two subspaces is worth mentioning separately.
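(The result itself is elided from this excerpt; the standard statement of the two-subspace case, restated here for reference: for subspaces $$W_1$$ and $$W_2$$ of a space $$V$$,

$$ V = W_1 \oplus W_2 \quad\Longleftrightarrow\quad V = W_1 + W_2 \ \text{ and }\ W_1 \cap W_2 = \{\vec{0}\} $$

For instance, the $$xy$$-plane and the $$z$$-axis intersect only in the zero vector, so $$\mathbb{R}^3$$ is their direct sum, whereas two distinct planes through the origin intersect in a line, so their sum is never direct.)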

In this subsection we have seen two ways to regard a space as built up from component parts. Both are useful; in particular, in this book the direct sum definition is needed to do the Jordan Form construction in the fifth chapter.

Exercises