Linear Algebra/Vector Spaces and Linear Systems

We will now reconsider linear systems and Gauss' method, aided by the tools and terms of this chapter. We will make three points.

For the first point, recall the first chapter's Linear Combination Lemma and its corollary: if two matrices are related by row operations $$A\longrightarrow\cdots\longrightarrow B$$ then each row of $$B$$ is a linear combination of the rows of $$A$$. That is, Gauss' method works by taking linear combinations of rows. Therefore, the right setting in which to study row operations in general, and Gauss' method in particular, is the following vector space: the row space of a matrix, the span of the set of its rows.

Thus, row operations leave the row space unchanged. But of course, Gauss' method performs the row operations systematically, with a specific goal in mind, echelon form.

Thus, in the language of this chapter, Gaussian reduction works by eliminating linear dependencies among rows, leaving the span unchanged, until no nontrivial linear relationships remain (among the nonzero rows). That is, Gauss' method produces a basis for the row space.
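Here is a minimal sketch of that point, assuming the SymPy library and a made-up matrix: the nonzero rows of the echelon form are a basis for the row space of the starting matrix.

```python
# Sketch: the nonzero rows of an echelon form matrix are a basis
# for the row space of the original matrix.
from sympy import Matrix

A = Matrix([[1, 3, 1],
            [1, 4, 1],
            [2, 0, 5]])

E = A.echelon_form()  # Gauss' method: take linear combinations of rows
# Row operations leave the row space unchanged, so the nonzero rows
# of E span it; being in echelon form, they are also independent.
basis = [E.row(i) for i in range(E.rows) if any(E.row(i))]
print(basis)
```

SymPy's `rowspace()` method packages the same computation.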

Using this technique, we can also find bases for spans not directly involving row vectors.

Our interest in column spaces stems from our study of linear systems. An example is that this system



$$\begin{array}{*{3}{rc}r} c_1 &+  &3c_2  &+  &7c_3  &=  &d_1  \\ 2c_1 &+  &3c_2  &+  &8c_3  &=  &d_2  \\ &  &c_2   &+  &2c_3  &=  &d_3  \\ 4c_1 &   &      &+  &4c_3  &=  &d_4 \end{array}$$

has a solution if and only if the vector of $$ d $$'s is a linear combination of the other column vectors,



$$c_1\begin{pmatrix} 1 \\ 2 \\ 0 \\ 4 \end{pmatrix} +c_2\begin{pmatrix} 3 \\ 3 \\ 1 \\ 0 \end{pmatrix} +c_3\begin{pmatrix} 7 \\ 8 \\ 2 \\ 4 \end{pmatrix} =\begin{pmatrix} d_1 \\ d_2 \\ d_3 \\ d_4 \end{pmatrix}$$

meaning that the vector of $$ d $$'s is in the column space of the matrix of coefficients.
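As a quick illustration of this criterion, here is a sketch using SymPy's `linsolve`, with made-up right-hand sides: the solver returns a nonempty set exactly when the vector of $$ d $$'s lies in the column space.

```python
# Sketch: the system is consistent exactly when the d's lie in the
# column space of the coefficient matrix.
from sympy import Matrix, linsolve, symbols

A = Matrix([[1, 3, 7],
            [2, 3, 8],
            [0, 1, 2],
            [4, 0, 4]])
c1, c2, c3 = symbols('c1 c2 c3')

d = Matrix([11, 13, 3, 8])    # (column 1) + (column 2) + (column 3)
print(linsolve((A, d), c1, c2, c3))  # nonempty: d is in the column space

e = Matrix([1, 0, 0, 0])      # a made-up right-hand side
print(linsolve((A, e), c1, c2, c3))  # EmptySet: e is not in the column space
```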

To find a basis for the column space of a matrix we can use this same technique on the transpose, since the rows of the transpose are the columns of the original matrix. So the instructions are "transpose, reduce, and transpose back", as sketched below.
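A sketch of those instructions, again assuming SymPy and reusing the coefficient matrix above:

```python
# Sketch of "transpose, reduce, and transpose back": the rows of the
# transpose are the columns of the original matrix, so reducing the
# transpose produces a basis for the column space.
from sympy import Matrix

A = Matrix([[1, 3, 7],
            [2, 3, 8],
            [0, 1, 2],
            [4, 0, 4]])

E = A.T.echelon_form()                 # transpose, then reduce
basis = [E.row(i).T for i in range(E.rows) if any(E.row(i))]
print(basis)                           # transpose back: two column vectors
```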

We can even, at the price of tolerating the as-yet-vague idea of vector spaces being "the same", use Gauss' method to find bases for spans in other types of vector spaces.
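For instance (an illustration, not the text's own example), consider the span of $$ \{x+x^2,\,2x+2x^2,\,x-x^2\} $$ inside the space $$ \mathcal{P}_2 $$ of polynomials of degree at most two. Identifying each polynomial $$ a+bx+cx^2 $$ with the row vector $$ \begin{pmatrix} a &b  &c \end{pmatrix} $$, Gauss' method gives

$$\begin{pmatrix} 0 &1  &1  \\ 0  &2  &2  \\ 0  &1  &-1 \end{pmatrix} \xrightarrow[-\rho_1+\rho_3]{-2\rho_1+\rho_2} \begin{pmatrix} 0 &1  &1  \\ 0  &0  &0  \\ 0  &0  &-2 \end{pmatrix} \xrightarrow[]{\rho_2\leftrightarrow\rho_3} \begin{pmatrix} 0 &1  &1  \\ 0  &0  &-2 \\ 0  &0  &0 \end{pmatrix}$$

and reading the nonzero rows back as polynomials yields the basis $$ \langle x+x^2,\,-2x^2 \rangle $$ for the span.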

Thus, our first point in this subsection is that the tools of this chapter give us a more conceptual understanding of Gaussian reduction.

For the second point of this subsection, consider the effect on the column space of this row reduction.


$$\begin{pmatrix} 1 &2  \\ 2  &4 \end{pmatrix} \xrightarrow[]{-2\rho_1+\rho_2} \begin{pmatrix} 1 &2  \\ 0  &0 \end{pmatrix}$$

The column space of the left-hand matrix contains vectors with a second component that is nonzero. But the column space of the right-hand matrix is different because it contains only vectors whose second component is zero. It is this knowledge that row operations can change the column space that makes the next result surprising: although row operations can change the column space, they do not change its dimension, the column rank.

Another way, besides the prior result, to state that Gauss' method has something to say about the column space as well as about  the row space is to consider again Gauss-Jordan reduction. Recall that it ends with the reduced echelon form of a matrix, as here.


$$\begin{pmatrix} 1 &3  &1  &6  \\ 2  &6  &3  &16 \\ 1  &3  &1  &6 \end{pmatrix} \xrightarrow[]{}\;\cdots\;\xrightarrow[]{} \begin{pmatrix} 1 &3  &0  &2  \\ 0  &0  &1  &4  \\ 0  &0  &0  &0 \end{pmatrix}$$

Consider the row space and the column space of this result. Our first point made above says that a basis for the row space is easy to get: simply collect together all of the rows with leading entries. However, because this is a reduced echelon form matrix, a basis for the column space is just as easy: take the columns containing the leading entries, that is, $$ \langle \vec{e}_1,\vec{e}_2 \rangle $$. (Linear independence is obvious. The other columns are in the span of this set, since they all have a third component of zero.) Thus, for a reduced echelon form matrix, bases for the row and column spaces can be found in essentially the same way: by taking the parts of the matrix, the rows or columns, containing the leading entries.
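A sketch of both observations, assuming SymPy, whose `rref` routine returns the reduced echelon form together with the indices of the columns holding leading entries:

```python
# Sketch: for a reduced echelon form matrix, the rows with leading
# entries are a row space basis and the columns with leading entries
# are a column space basis.
from sympy import Matrix

A = Matrix([[1, 3, 1,  6],
            [2, 6, 3, 16],
            [1, 3, 1,  6]])

R, pivots = A.rref()                    # reduced echelon form, pivot columns
row_basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
col_basis = [R.col(j) for j in pivots]  # here e_1 and e_2, as in the text
print(row_basis)
print(col_basis)   # both bases have the same size: the rank
```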

So our second point in this subsection is that the column space and row space of a matrix have the same dimension. Our third and final point is that the concepts that we've seen arising naturally in the study of vector spaces are exactly the ones that we have studied with linear systems.

So if the system has at least one particular solution, then the number of parameters needed to describe the set of solutions equals $$n-r$$, the number of variables minus the rank of the matrix of coefficients.
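A closing sketch (with the same made-up matrix as above), counting the free parameters directly:

```python
# Sketch: the number of free parameters in the solution set equals
# n - r, the number of variables minus the rank of the coefficients.
from sympy import Matrix

A = Matrix([[1, 3, 1,  6],
            [2, 6, 3, 16],
            [1, 3, 1,  6]])

n = A.cols                    # number of variables
r = A.rank()                  # rank of the matrix of coefficients
R, pivots = A.rref()
free = [j for j in range(n) if j not in pivots]   # non-pivot columns
print(n - r, len(free))       # here 4 - 2 = 2 free variables
```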

Exercises