Linear Algebra/Changing Map Representations

The first subsection shows how to convert the representation of a vector with respect to one basis to the representation of that same vector with respect to another basis. Here we will see how to convert the representation of a map with respect to one pair of bases to the representation of that map with respect to a different pair. That is, we want the relationship between the matrices in this arrow diagram.


 * [[Image:Linalg_map_change_basis.png|x150px]]

To move from the lower-left of this diagram to the lower-right we can either go straight over, or else up to $$V_B$$ then over to $$W_D$$ and then down. Restated in terms of the matrices, we can calculate $$\hat{H}={\rm Rep}_{\hat{B},\hat{D}}(h)$$ either by simply using $$\hat{B}$$ and $$\hat{D}$$, or else by first changing bases with $${\rm Rep}_{\hat{B},B}(\mbox{id})$$ then multiplying by $$ H={\rm Rep}_{B,D}(h) $$ and then changing bases with $${\rm Rep}_{D,\hat{D}}(\mbox{id})$$. This equation summarizes.



$$\hat{H}= {\rm Rep}_{D,\hat{D}}(\mbox{id})\cdot H\cdot {\rm Rep}_{\hat{B},B}(\mbox{id}) \qquad\qquad(*)$$

(To compare this equation with the sentence before it, remember that the equation is read from right to left, because function composition is read right to left and matrix multiplication represents composition.)
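Equation ($$*$$) can be checked numerically. Here is a small sketch in plain Python with exact `Fraction` arithmetic; the map $$h$$, the basis $$\langle (1,1),(1,-1)\rangle$$, and the helpers `matmul` and `inv2` are invented for this illustration, not taken from the text.

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# H represents h with respect to the standard basis E2 on both sides.
H = [[Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(0)]]

# Change both bases to B-hat = D-hat = <(1,1), (1,-1)>.  The basis vectors
# are the columns of Q = Rep_{B-hat,E2}(id); P = Q^{-1} = Rep_{E2,D-hat}(id).
Q = [[Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(-1)]]
P = inv2(Q)

# Equation (*): H-hat = Rep_{D,D-hat}(id) . H . Rep_{B-hat,B}(id)
hat_H = matmul(P, matmul(H, Q))

# Check on one vector: (1,0) in B-hat coordinates is (1,1) in standard
# coordinates; h(1,1) = (2,1), and P turns that back into D-hat coordinates.
lhs = matmul(hat_H, [[Fraction(1)], [Fraction(0)]])
rhs = matmul(P, [[Fraction(2)], [Fraction(1)]])
print(lhs == rhs)   # prints True
```

The check confirms that applying $$\hat{H}$$ directly to a $$\hat{B}$$-representation gives the same result as translating to the standard basis, applying $$H$$, and translating back.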

Naturally, we usually prefer basis changes that make the representation easier to understand. When the representation with respect to equal starting and ending bases is a diagonal matrix we say the map or matrix has been diagonalized. In Chapter Five we shall see which maps and matrices are diagonalizable, and where one is not diagonalizable, we shall see how to get a representation that is nearly diagonal.

We finish this subsection by considering the easier case where representations are with respect to possibly different starting and ending bases. Recall that the prior subsection shows that a matrix changes bases if and only if it is nonsingular. That gives us another version of the above arrow diagram and equation ($$*$$).

Problem 10 checks that matrix equivalence is an equivalence relation. Thus it partitions the set of matrices into matrix equivalence classes.

We can get some insight into the classes by comparing matrix equivalence with row equivalence (recall that matrices are row equivalent when they can be reduced to each other by row operations). In $$\hat{H}=PHQ$$, the matrices $$P$$ and $$Q$$ are nonsingular and thus each can be written as a product of elementary reduction matrices (see Lemma 4.8 in the previous subsection). Left-multiplication by the reduction matrices making up $$P$$ has the effect of performing row operations. Right-multiplication by the reduction matrices making up $$Q$$ performs column operations. Therefore, matrix equivalence is a generalization of row equivalence&mdash; two matrices are row equivalent if one can be converted to the other by a sequence of row reduction steps, while two matrices are matrix equivalent if one can be converted to the other by a sequence of row reduction steps followed by a sequence of column reduction steps.

Thus, if matrices are row equivalent then they are also matrix equivalent (since we can take $$Q$$ to be the identity matrix and so perform no column operations). The converse, however, does not hold.
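A minimal counterexample for the converse can be checked in plain Python with exact arithmetic (the matrices $$A$$ and $$B$$ and the helper `rref` are chosen just for this illustration): the two matrices have different reduced echelon forms, so they are not row equivalent, yet a single column swap converts one into the other, so they are matrix equivalent.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rref(M):
    """Reduced row echelon form, computed with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

A = [[1, 0], [0, 0]]
B = [[0, 1], [0, 0]]

# Not row equivalent: their reduced echelon forms differ.
print(rref(A) != rref(B))        # prints True

# But matrix equivalent: B = P.A.Q with P = I and Q swapping the columns.
Q = [[0, 1], [1, 0]]
print(matmul(A, Q) == B)         # prints True
```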

We will close this section by finding a set of representatives for the matrix equivalence classes.

We can take the representative of each class to be a matrix of the following form; sometimes this is described as a block partial-identity form.



$$\left(\begin{array}{c|c} I &Z \\ \hline Z &Z \end{array}\right)$$

In this subsection we have seen how to change the representation of a map with respect to a first pair of bases to one with respect to a second pair. That led to a definition describing when matrices are equivalent in this way. Finally we noted that, with the proper choice of (possibly different) starting and ending bases, any map can be represented in block partial-identity form.
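The reduction to this representative can be carried out concretely by tracking the operations. The sketch below (plain Python with exact arithmetic; `partial_identity_form` is a helper written just for this illustration) row-reduces $$H$$ while mirroring each row operation on $$P$$, then column-reduces while mirroring each column operation on $$Q$$, so that $$PHQ$$ is in block partial-identity form.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]

def partial_identity_form(H):
    """Return nonsingular P, Q and the rank k such that P.H.Q is in
    block partial-identity form."""
    m, n = len(H), len(H[0])
    A = [[Fraction(x) for x in row] for row in H]
    P, Q = identity(m), identity(n)

    # Row stage: Gauss-Jordan reduction, mirroring every row op on P.
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        P[r], P[piv] = P[piv], P[r]
        s = A[r][c]
        A[r] = [x / s for x in A[r]]
        P[r] = [x / s for x in P[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                t = A[i][c]
                A[i] = [a - t * b for a, b in zip(A[i], A[r])]
                P[i] = [a - t * b for a, b in zip(P[i], P[r])]
        r += 1
    k = r                         # the rank of H

    # Column stage: move each pivot onto the diagonal and clear its row,
    # mirroring every column op on Q.
    for row in range(k):
        c = next(j for j in range(n) if A[row][j] != 0)
        for M in (A, Q):          # swap columns c and row
            for i in range(len(M)):
                M[i][c], M[i][row] = M[i][row], M[i][c]
        for j in range(n):
            if j != row and A[row][j] != 0:
                t = A[row][j]
                for i in range(m):
                    A[i][j] -= t * A[i][row]
                for i in range(n):
                    Q[i][j] -= t * Q[i][row]
    return P, Q, k

H = [[1, 1, 1],
     [1, 0, 1]]
P, Q, k = partial_identity_form(H)
print(k)                                            # prints 2
print(matmul(matmul(P, H), Q) == [[1, 0, 0],
                                  [0, 1, 0]])       # prints True
```

Because every step is a row or column operation, $$P$$ and $$Q$$ are products of elementary matrices and hence nonsingular, as the definition of matrix equivalence requires.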

One of the nice things about this representation is that, in some sense, we can completely understand the map when it is expressed in this way: if the bases are $$ B=\langle \vec{\beta}_1,\dots,\vec{\beta}_n \rangle $$ and $$ D=\langle \vec{\delta}_1,\dots,\vec{\delta}_m \rangle $$ then the map sends



$$c_1\vec{\beta}_1+\dots+c_k\vec{\beta}_k+c_{k+1}\vec{\beta}_{k+1}+\dots+c_n\vec{\beta}_n \;\longmapsto\; c_1\vec{\delta}_1+\dots+c_k\vec{\delta}_k+\vec{0}+\dots+\vec{0}$$

where $$ k $$ is the map's rank. Thus, we can understand any linear map as a kind of projection.



$$\begin{pmatrix} c_1 \\ \vdots \\ c_k \\ c_{k+1} \\ \vdots \\ c_n \end{pmatrix}_B \;\mapsto\; \begin{pmatrix} c_1 \\ \vdots \\ c_k \\ 0 \\ \vdots \\ 0 \end{pmatrix}_D$$

Of course, "understanding" a map expressed in this way requires that we understand the relationship between $$ B $$ and $$ D $$. However, despite that difficulty, this is a good classification of linear maps.

Exercises