Linear Algebra/Exploration

''This subsection is optional. It briefly describes how an investigator might come to a good general definition, which is given in the next subsection.''

The three cases above don't show an evident pattern to use for the general $$n \! \times \! n$$ formula. We may spot that the $$1 \! \times \! 1$$ term $$ a $$ has one letter, that the $$2 \! \times \! 2$$ terms $$ad$$ and $$bc$$ have two letters, and that the $$3 \! \times \! 3$$ terms $$aei$$, etc., have three letters. We may also observe that in those terms there is a letter from each row and column of the matrix, e.g., the letters in the $$cdh$$ term



$$ \begin{pmatrix} &   &c \\ d & & \\ &h & \end{pmatrix} $$

come one from each row and one from each column. But these observations perhaps seem more puzzling than enlightening. For instance, we might wonder why some of the terms are added while others are subtracted.
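The one-entry-per-row-and-column pattern can be generated mechanically. The following sketch (in Python; the names are ours, purely for illustration) lists every product of three entries of a generic $$3 \! \times \! 3$$ matrix that takes exactly one entry from each row and one from each column. The six results are exactly the six terms of the $$3 \! \times \! 3$$ formula, although the enumeration by itself does not settle which terms are added and which are subtracted.

```python
from itertools import permutations

# Entries of a generic 3x3 matrix, named as in the text:
#   a b c
#   d e f
#   g h i
matrix = [['a', 'b', 'c'],
          ['d', 'e', 'f'],
          ['g', 'h', 'i']]

# A permutation p picks column p[row] in each row, so each product
# uses exactly one entry from every row and every column.
terms = [''.join(matrix[row][p[row]] for row in range(3))
         for p in permutations(range(3))]
print(sorted(terms))  # -> ['aei', 'afh', 'bdi', 'bfg', 'cdh', 'ceg']
```

The terms $$hfa$$, $$idb$$, and $$gec$$ from the text appear here with their letters alphabetized, as $$afh$$, $$bdi$$, and $$ceg$$.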

A good problem solving strategy is to see what properties a solution must have and then search for something with those properties. So we shall start by asking what properties we require of the formulas.

At this point, our primary way to decide whether a matrix is singular is to do Gaussian reduction and then check whether the diagonal of resulting echelon form matrix has any zeroes (that is, to check whether the product down the diagonal is zero). So, we may expect that the proof that a formula determines singularity will involve applying Gauss' method to the matrix, to show that in the end the product down the diagonal is zero if and only if the determinant formula gives zero. This suggests our initial plan: we will look for a family of functions with the property of being unaffected by row operations and with the property that a determinant of an echelon form matrix is the product of its diagonal entries. Under this plan, a proof that the functions determine singularity would go, "Where $$T\rightarrow\cdots\rightarrow\hat{T}$$ is the Gaussian reduction, the determinant of $$T$$ equals the determinant of $$\hat{T}$$ (because the determinant is unchanged by row operations), which is the product down the diagonal, which is zero if and only if the matrix is singular". In the rest of this subsection we will test this plan on the $$2 \! \times \! 2$$ and $$3 \! \times \! 3$$ determinants that we know. We will end up modifying the "unaffected by row operations" part, but not by much.

The first step in checking the plan is to test whether the $$2 \! \times \! 2$$ and $$3 \! \times \! 3$$ formulas are unaffected by the row operation of pivoting: if



$$ T \xrightarrow[]{k\rho_i+\rho_j} \hat{T} $$

then is $$ \det(\hat{T})=\det(T) $$? This check of the $$2 \! \times \! 2$$ determinant after the $$k\rho_1+\rho_2$$ operation



$$ \det( \begin{pmatrix} a    &b       \\ ka+c  &kb+d    \end{pmatrix} ) = a(kb+d)-(ka+c)b = ad-bc $$

shows that it is indeed unchanged, and the other $$2 \! \times \! 2$$ pivot $$k\rho_2+\rho_1$$ gives the same result. The $$3 \! \times \! 3$$ pivot $$k\rho_3+\rho_2$$ leaves the determinant unchanged


$$\begin{array}{rl} \det( \begin{pmatrix} a   &b    &c    \\ kg+d &kh+e &ki+f \\ g    &h    &i \end{pmatrix} ) &=\begin{array}{l} a(kh+e)i+b(ki+f)g+c(kg+d)h \\ \ -h(ki+f)a-i(kg+d)b-g(kh+e)c \end{array}                                \\ &=aei + bfg + cdh - hfa - idb - gec \end{array}$$

as do the other $$3 \! \times \! 3$$ pivot operations.
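For readers who like to double-check by machine, here is a small numeric spot check of the pivot computations above (a Python sketch; the helpers `det2` and `det3` simply transcribe the $$2 \! \times \! 2$$ and $$3 \! \times \! 3$$ formulas from the text and are our own names, not anything standard):

```python
def det2(a, b, c, d):
    # The 2x2 formula from the text: ad - bc.
    return a*d - b*c

def det3(a, b, c, d, e, f, g, h, i):
    # The 3x3 formula from the text.
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

k = 7
a, b, c, d = 2, 3, 5, 11
# The k*rho_1 + rho_2 operation leaves the 2x2 determinant alone.
assert det2(a, b, k*a + c, k*b + d) == det2(a, b, c, d)

a, b, c, d, e, f, g, h, i = 2, 3, 5, 7, 11, 13, 17, 19, 23
# The k*rho_3 + rho_2 operation leaves the 3x3 determinant alone.
assert det3(a, b, c, k*g + d, k*h + e, k*i + f, g, h, i) \
       == det3(a, b, c, d, e, f, g, h, i)
print("pivot operations leave these determinants unchanged")
```

Of course, a check on one sample matrix is evidence, not a proof; the algebra above is what shows the cancellation in general.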

So there seems to be promise in the plan. Of course, perhaps the $$4 \! \times \! 4$$ determinant formula is affected by pivoting. We are exploring a possibility here and we do not yet have all the facts. Nonetheless, so far, so good.

The next step is to compare $$ \det(\hat{T}) $$ with $$ \det(T) $$ for the operation



$$ T \xrightarrow[]{ {\rho}_i \leftrightarrow {\rho}_j } \hat{T} $$

of swapping two rows. The $$2 \! \times \! 2$$ row swap $$\rho_1\leftrightarrow\rho_2$$



$$ \det( \begin{pmatrix} c &d \\ a  &b \end{pmatrix} ) = cb - ad $$

does not yield $$ ad-bc $$. This $$\rho_1\leftrightarrow\rho_3$$ swap inside of a $$3 \! \times \! 3$$ matrix



$$ \det( \begin{pmatrix} g &h  &i \\ d  &e  &f \\ a  &b  &c \end{pmatrix} ) = gec + hfa + idb - bfg - cdh - aei $$

also does not give the same determinant as before the swap &mdash; again there is a sign change. Trying a different $$3 \! \times \! 3$$ swap $$\rho_1\leftrightarrow\rho_2$$



$$ \det( \begin{pmatrix} d &e  &f \\ a  &b  &c \\ g  &h  &i \end{pmatrix} ) = dbi + ecg + fah - hcd - iae - gbf $$

also gives a change of sign.

Thus, row swaps appear to change the sign of a determinant. This modifies our plan, but does not wreck it. We intend to decide nonsingularity by considering only whether the determinant is zero, not by considering its sign. Therefore, instead of expecting determinants to be entirely unaffected by row operations, we will look for them to change sign on a swap.
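The sign change can likewise be spot-checked numerically. This sketch (Python; `det3` again just transcribes the text's $$3 \! \times \! 3$$ formula and is our own helper) verifies both swaps above on one sample matrix:

```python
def det3(a, b, c, d, e, f, g, h, i):
    # The 3x3 formula from the text.
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

m = (2, 3, 5, 7, 11, 13, 17, 19, 23)
a, b, c, d, e, f, g, h, i = m

# rho_1 <-> rho_3 negates the determinant ...
assert det3(g, h, i, d, e, f, a, b, c) == -det3(*m)
# ... as does rho_1 <-> rho_2.
assert det3(d, e, f, a, b, c, g, h, i) == -det3(*m)
print("each row swap changes the sign")
```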

To finish, we compare $$ \det(\hat{T}) $$ to $$ \det(T) $$ for the operation



$$ T \xrightarrow[]{ k{\rho}_i } \hat{T} $$

of multiplying a row by a scalar $$k\neq 0$$. One of the $$2 \! \times \! 2$$ cases is



$$ \det( \begin{pmatrix} a  &b   \\ kc  &kd \end{pmatrix} ) = a(kd) - (kc)b =k\cdot (ad-bc) $$

and the other case has the same result. Here is one $$3 \! \times \! 3$$ case


$$\begin{array}{rl} \det( \begin{pmatrix} a   &b    &c   \\ d    &e    &f   \\ kg   &kh   &ki \end{pmatrix} ) &= \begin{array}{l} ae(ki) + bf(kg) + cd(kh)               \\ \quad -(kh)fa - (ki)db - (kg)ec \end{array}                                     \\ &= k\cdot(aei + bfg + cdh - hfa - idb - gec) \end{array}$$

and the other two are similar. These lead us to suspect that multiplying a row by $$k$$ multiplies the determinant by $$k$$. This fits with our modified plan because we are asking only that the zeroness of the determinant be unchanged and we are not focusing on the determinant's sign or magnitude.
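The rescaling behavior can be spot-checked in the same way (a Python sketch; `det2` and `det3` are our transcriptions of the text's formulas):

```python
def det2(a, b, c, d):
    # The 2x2 formula from the text: ad - bc.
    return a*d - b*c

def det3(a, b, c, d, e, f, g, h, i):
    # The 3x3 formula from the text.
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

k = 5
a, b, c, d = 2, 3, 7, 11
# Rescaling rho_2 by k rescales the 2x2 determinant by k.
assert det2(a, b, k*c, k*d) == k * det2(a, b, c, d)

a, b, c, d, e, f, g, h, i = 2, 3, 5, 7, 11, 13, 17, 19, 23
# Rescaling rho_3 by k rescales the 3x3 determinant by k.
assert det3(a, b, c, d, e, f, k*g, k*h, k*i) \
       == k * det3(a, b, c, d, e, f, g, h, i)
print("rescaling a row rescales the determinant")
```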

In summary, to develop a scheme of formulas for computing determinants, we look for determinant functions that are unchanged by the pivot operation, that change sign on a row swap, and that rescale when a row is rescaled. In the next two subsections we will find that for each $$n$$ such a function exists and is unique.

For the next subsection, note that, as above, scalars come out of each row without affecting other rows. For instance, in this equality



$$ \det( \begin{pmatrix} 3 &3  &9  \\ 2  &1  &1  \\ 5  &10 &-5 \end{pmatrix} ) =3 \cdot \det( \begin{pmatrix} 1 &1  &3  \\ 2  &1  &1  \\ 5  &10 &-5 \end{pmatrix} ) $$

the $$3$$ isn't factored out of all three rows, only out of the top row. The determinant acts on each row independently of the other rows. When we want to use this property of determinants, we shall write the determinant as a function of the rows: "$$ \det (\vec{\rho}_1,\vec{\rho}_2,\dots,\vec{\rho}_n) $$", instead of as "$$ \det(T) $$" or "$$ \det(t_{1,1},\dots,t_{n,n}) $$". The definition of the determinant that starts the next subsection is written in this way.
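This particular equality is easy to confirm by machine (a Python sketch; `det3` is our transcription of the text's $$3 \! \times \! 3$$ formula):

```python
def det3(a, b, c, d, e, f, g, h, i):
    # The 3x3 formula from the text.
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

lhs = det3(3, 3, 9,  2, 1, 1,  5, 10, -5)
rhs = 3 * det3(1, 1, 3,  2, 1, 1,  5, 10, -5)
assert lhs == rhs  # the 3 comes out of the top row only
print(lhs)  # -> 135
```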

Exercises