Linear Algebra/Inverses

We now consider how to represent the inverse of a linear map.

We start by recalling some facts about function inverses. Some functions have no inverse, or have an inverse on the left side or the right side only. (An example of a function with no inverse on either side is the zero transformation on $$\mathbb{R}^2$$.) Some functions have a two-sided inverse, another function that is the inverse of the first both from the left and from the right. For instance, the map given by $$\vec{v}\mapsto 2\cdot \vec{v}$$ has the two-sided inverse $$\vec{v}\mapsto (1/2)\cdot\vec{v}$$. In this subsection we will focus on two-sided inverses.

The appendix shows that a function has a two-sided inverse if and only if it is both one-to-one and onto. It also shows that if a function $$f$$ has a two-sided inverse then that inverse is unique, so it is called "the" inverse and is denoted $$f^{-1}$$. Our purpose in this subsection is, where a linear map $$h$$ has an inverse, to find the relationship between $${\rm Rep}_{B,D}(h)$$ and $${\rm Rep}_{D,B}(h^{-1})$$. (Recall that we have shown, in Theorem II.2.21 of Section II of this chapter, that if a linear map has an inverse then that inverse is also a linear map.)

Because of the correspondence between linear maps and matrices, statements about map inverses translate into statements about matrix inverses.

Here is the arrow diagram giving the relationship between map inverses and matrix inverses. It is a special case of the diagram for function composition and matrix multiplication.
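In outline (a sketch reconstructed from the surrounding discussion, not the book's original figure), the diagram records that where the matrix $$H$$ represents $$h$$ with respect to the bases $$B,D$$, the matrix $$H^{-1}$$ represents $$h^{-1}$$ with respect to $$D,B$$:

$$
V_{B}\ \mathop{\rightleftarrows}\limits^{\;h,\;H\;}_{\;h^{-1},\;H^{-1}\;}\ W_{D}
\qquad\qquad
{\rm Rep}_{D,B}(h^{-1})=\bigl({\rm Rep}_{B,D}(h)\bigr)^{-1}
$$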



Beyond its place in our general program of seeing how to represent map operations, another reason for our interest in inverses comes from solving linear systems. A linear system is equivalent to a matrix equation, as here.



$$\begin{array}{*{2}{rc}r} x_1 &+  &x_2  &=  &3  \\ 2x_1 &-  &x_2  &=  &2 \end{array} \quad\Longleftrightarrow\quad \begin{pmatrix} 1 &1  \\ 2  &-1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix} \qquad\qquad (*)$$

By fixing spaces and bases (e.g., $$\mathbb{R}^2,\mathbb{R}^2$$ and $$\mathcal{E}_2,\mathcal{E}_2$$), we take the matrix $$H$$ to represent some map $$h$$. Then solving the system is the same as asking: what domain vector $$\vec{x}$$ is mapped by $$h$$ to the result $$\vec{d}\,$$? If we could invert $$h$$ then we could solve the system by multiplying $${\rm Rep}_{D,B}(h^{-1})\cdot{\rm Rep}_{D}(\vec{d})$$ to get $${\rm Rep}_{B}(\vec{x})$$.
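As a sketch of this idea in code (the helper `inverse_2x2` is ours, using the standard $$2\times 2$$ inverse formula; it is not from the text):

```python
# Solve the system (*) by inverting its matrix of coefficients.
# For A = [[a, b], [c, d]] with det = a*d - b*c != 0, the standard
# 2x2 formula gives A^{-1} = (1/det) * [[d, -b], [-c, a]].

def inverse_2x2(a, b, c, d):
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[d / det, -b / det], [-c / det, a / det]]

H = [[1, 1], [2, -1]]       # coefficient matrix from (*)
d_vec = [3, 2]              # right-hand side

inv = inverse_2x2(H[0][0], H[0][1], H[1][0], H[1][1])

# x = H^{-1} d, i.e. Rep_B(x) = Rep_{D,B}(h^{-1}) . Rep_D(d)
x = [inv[0][0] * d_vec[0] + inv[0][1] * d_vec[1],
     inv[1][0] * d_vec[0] + inv[1][1] * d_vec[1]]

print(x)  # x1 = 5/3, x2 = 4/3 (approximately [1.667, 1.333])
```

Substituting back confirms the solution: $$5/3 + 4/3 = 3$$ and $$2\cdot(5/3) - 4/3 = 2$$.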

We finish by describing the computational procedure usually used to find the inverse matrix.

This procedure will find the inverse of a general $$n \! \times \! n$$ matrix. The $$2 \! \times \! 2$$ case is handy.
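The usual procedure can be sketched as follows (a minimal implementation for illustration; the function name `gauss_jordan_inverse` is ours): adjoin the identity to form the augmented matrix $$[A \,|\, I]$$, use Gauss-Jordan row operations to reduce the left half to the identity, and read off $$A^{-1}$$ from the right half.

```python
def gauss_jordan_inverse(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is not invertible")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [entry / p for entry in M[col]]
        # Clear every other entry in this column.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [e - factor * pe for e, pe in zip(M[r], M[col])]
    # The left half is now I, so the right half is A^{-1}.
    return [row[n:] for row in M]

H = [[1.0, 1.0], [2.0, -1.0]]
print(gauss_jordan_inverse(H))  # approximately [[1/3, 1/3], [2/3, -1/3]]
```

The result agrees with the $$2\times 2$$ formula applied to the matrix from $$(*)$$.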

We have seen here, as in the Mechanics of Matrix Multiplication subsection, that we can exploit the correspondence between linear maps and matrices. So we can fruitfully study both maps and matrices, translating back and forth to whichever helps us the most.

Over the entire four subsections of this section we have developed an algebra system for matrices, and we can compare it with the familiar algebra system for the real numbers. Here we work not with numbers but with matrices. We have matrix addition and subtraction operations, and they work in much the same way as the real number operations, except that they only combine same-sized matrices. We also have a matrix multiplication operation and an operation inverse to multiplication. These are somewhat like the familiar real number operations (associativity, and distributivity over addition, for example), but there are differences (failure of commutativity, for example). And we have scalar multiplication, which is in some ways another extension of real number multiplication. This matrix system shows that algebra systems other than the elementary one can be interesting and useful.
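The failure of commutativity is easy to witness directly (a small illustration; the helper `matmul` is ours):

```python
# Matrix multiplication is associative and distributes over addition,
# but unlike real-number multiplication it is not commutative.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

print(matmul(A, B))  # [[2, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 2]]  -- so AB != BA
```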

Exercises
/Solutions/