Linear Algebra/Mechanics of Matrix Multiplication

In this subsection we consider matrix multiplication as a mechanical process, putting aside for the moment any implications about the underlying maps. As described earlier, the striking thing about matrix multiplication is the way rows and columns combine. The $$ i,j $$ entry of the matrix product is the dot product of row $$i$$ of the left matrix with column $$j$$ of the right one. For instance, here a second row and a third column combine to make a $$2,3$$ entry.



$$ \begin{pmatrix} 1 & 1 \\ {\color{red} 0} & {\color{red} 1} \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 4 & 6 & {\color{red}8 } & 2\\ 5 & 7 & {\color{red}9 } & 3 \end{pmatrix} = \begin{pmatrix} 9 &13   &17                      &5  \\ 5  &7    &{\color{red}9}          &3  \\ 4 &6    &8                       &2 \end{pmatrix} $$

We can view this as the left matrix acting by multiplying its rows, one at a time, into the columns of the right matrix. Of course, another perspective is that the right matrix uses its columns to act on the left matrix's rows. Below, we will examine actions from the left and from the right for some simple matrices.

The first case, the action of a zero matrix, is very easy.
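For example, multiplying by a zero matrix from either side produces a zero matrix, whatever the entries of the other matrix.

$$ \begin{pmatrix} 0 &0 \\ 0 &0 \end{pmatrix} \begin{pmatrix} 1 &3 &2 \\ -1 &1 &-1 \end{pmatrix} = \begin{pmatrix} 0 &0 &0 \\ 0 &0 &0 \end{pmatrix} $$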

After zero matrices, the matrices whose actions are easiest to understand are the ones with a single nonzero entry.
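For instance, a left-multiplier whose single nonzero entry sits in row $$i$$, column $$j$$ copies a multiple of the right matrix's row $$j$$ into row $$i$$ of the result, and leaves the other rows zero.

$$ \begin{pmatrix} 0 &2 \\ 0 &0 \end{pmatrix} \begin{pmatrix} a &b \\ c &d \end{pmatrix} = \begin{pmatrix} 2c &2d \\ 0 &0 \end{pmatrix} $$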

Next in complication are matrices with two nonzero entries. There are two cases. If a left-multiplier has entries in different rows then their actions don't interact.
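For instance, here each nonzero entry affects only its own row of the result.

$$ \begin{pmatrix} 0 &2 \\ 3 &0 \end{pmatrix} \begin{pmatrix} a &b \\ c &d \end{pmatrix} = \begin{pmatrix} 2c &2d \\ 3a &3b \end{pmatrix} $$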

But if the left-multiplier's nonzero entries are in the same row then that row of the result is a combination.
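For example, here the first row of the result is the combination $$2\cdot(\text{row } 1) + 3\cdot(\text{row } 2)$$ of the right matrix's rows.

$$ \begin{pmatrix} 2 &3 \\ 0 &0 \end{pmatrix} \begin{pmatrix} a &b \\ c &d \end{pmatrix} = \begin{pmatrix} 2a+3c &2b+3d \\ 0 &0 \end{pmatrix} $$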

Right-multiplication acts in the same way, with columns.
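For example, here the first column of the result is the combination $$2\cdot(\text{column } 1) + 3\cdot(\text{column } 2)$$ of the left matrix's columns.

$$ \begin{pmatrix} a &b \\ c &d \end{pmatrix} \begin{pmatrix} 2 &0 \\ 3 &0 \end{pmatrix} = \begin{pmatrix} 2a+3b &0 \\ 2c+3d &0 \end{pmatrix} $$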

These observations about matrices that are mostly zeroes extend to arbitrary matrices.
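In general, if $$G$$ is $$m \! \times \! r$$ and $$H$$ is $$r \! \times \! n$$ then each row of the product is a linear combination of the rows of $$H$$, with the coefficients taken from the corresponding row of $$G$$.

$$ \text{row } i \text{ of } GH = g_{i,1}\cdot(\text{row } 1 \text{ of } H) + g_{i,2}\cdot(\text{row } 2 \text{ of } H) + \dots + g_{i,r}\cdot(\text{row } r \text{ of } H) $$

The analogous statement holds for columns: each column of $$GH$$ is a linear combination of the columns of $$G$$.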

An application of those observations is that there is a matrix that just copies out the rows and columns.
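An identity matrix has ones down its diagonal and zeroes elsewhere; multiplying by it, from either side, leaves the other matrix unchanged. For example,

$$ \begin{pmatrix} 1 &0 \\ 0 &1 \end{pmatrix} \begin{pmatrix} a &b &c \\ d &e &f \end{pmatrix} = \begin{pmatrix} a &b &c \\ d &e &f \end{pmatrix} $$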

In short, an identity matrix is the identity element of the set of $$n \! \times \! n$$ matrices with respect to the operation of matrix multiplication.

We next see two ways to generalize the identity matrix.

The first is that if the ones are relaxed to arbitrary reals, the resulting matrix will rescale whole rows or columns.
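For example, a diagonal matrix rescales rows when it acts from the left and rescales columns when it acts from the right.

$$ \begin{pmatrix} 2 &0 \\ 0 &-1 \end{pmatrix} \begin{pmatrix} a &b \\ c &d \end{pmatrix} = \begin{pmatrix} 2a &2b \\ -c &-d \end{pmatrix} \qquad \begin{pmatrix} a &b \\ c &d \end{pmatrix} \begin{pmatrix} 2 &0 \\ 0 &-1 \end{pmatrix} = \begin{pmatrix} 2a &-b \\ 2c &-d \end{pmatrix} $$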

The second generalization of identity matrices is that we can put a single one in each row and column in ways other than putting them down the diagonal.
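Such a matrix is a permutation matrix. Acting from the left it rearranges rows; for example,

$$ \begin{pmatrix} 0 &1 \\ 1 &0 \end{pmatrix} \begin{pmatrix} a &b \\ c &d \end{pmatrix} = \begin{pmatrix} c &d \\ a &b \end{pmatrix} $$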

We finish this subsection by applying these observations to get matrices that perform Gauss' method and Gauss-Jordan reduction.

To see how to perform a pivot, we observe something about those two examples. The matrix that rescales the second row by a factor of three arises in this way from the identity.



$$ \begin{pmatrix} 1 &0  &0  \\ 0  &1  &0  \\ 0  &0  &1 \end{pmatrix} \xrightarrow[]{3\rho_2} \begin{pmatrix} 1 &0  &0  \\ 0  &3  &0  \\ 0  &0  &1 \end{pmatrix} $$

Similarly, the matrix that swaps first and third rows arises in this way.



$$ \begin{pmatrix} 1 &0  &0  \\ 0  &1  &0  \\ 0  &0  &1 \end{pmatrix} \xrightarrow[]{\rho_1\leftrightarrow\rho_3} \begin{pmatrix} 0 &0  &1  \\ 0  &1  &0  \\ 1  &0  &0 \end{pmatrix} $$
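Pivot operations arise in the same way. For example, applying $$-2\rho_1+\rho_2$$ to the identity gives a matrix that performs that same pivot when it multiplies from the left.

$$ \begin{pmatrix} 1 &0  &0  \\ 0  &1  &0  \\ 0  &0  &1 \end{pmatrix} \xrightarrow[]{-2\rho_1+\rho_2} \begin{pmatrix} 1 &0  &0  \\ -2  &1  &0  \\ 0  &0  &1 \end{pmatrix} \qquad \begin{pmatrix} 1 &0  &0  \\ -2  &1  &0  \\ 0  &0  &1 \end{pmatrix} \begin{pmatrix} 1 &2 \\ 3 &4 \\ 5 &6 \end{pmatrix} = \begin{pmatrix} 1 &2 \\ 1 &0 \\ 5 &6 \end{pmatrix} $$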

We have observed the following result, which we shall use in the next subsection: applying a single Gaussian row operation to an identity matrix produces a matrix that performs that same operation on any matrix it multiplies from the left.

Until now we have taken the point of view that our primary objects of study are vector spaces and the maps between them, and have adopted matrices only for computational convenience. This subsection shows that this point of view isn't the whole story. Matrix theory is a fascinating and fruitful area.

In the rest of this book we shall continue to focus on maps as the primary objects, but we will be pragmatic: if the matrix point of view gives a clearer idea then we shall use it.

Exercises