Linear Algebra/Gauss-Jordan Reduction

Gaussian elimination coupled with back-substitution solves linear systems, but it's not the only method possible. Here is an extension of Gauss' method that has some advantages.

Note that the pivot operations in the first stage proceed from column one to column three while the pivot operations in the third stage proceed from column three to column one.

This extension of Gauss' method is Gauss-Jordan reduction. It goes past echelon form to a more refined, more specialized, matrix form.

The disadvantage of using Gauss-Jordan reduction to solve a system is that the additional row operations mean additional arithmetic. The advantage is that the solution set can just be read off.

In any echelon form, plain or reduced, we can read off when a system has an empty solution set because there is a contradictory equation; we can read off when a system has a one-element solution set because there is no contradiction and every variable is the leading variable in some row; and we can read off when a system has an infinite solution set because there is no contradiction and at least one variable is free.
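This three-way classification is mechanical enough to sketch in code. Below is an illustrative Python fragment (the function name `kind_of_solution_set` is ours, not from any library): given an echelon-form augmented matrix, it looks for a contradictory row and otherwise counts leading variables.

```python
def kind_of_solution_set(rows, nvars):
    """Classify the solution set of an echelon-form augmented matrix as
    'empty', 'unique', or 'infinite' (the last column holds the constants)."""
    pivots = 0
    for row in rows:
        if all(x == 0 for x in row[:nvars]):
            if row[-1] != 0:
                return 'empty'       # contradictory equation: 0 = nonzero
        else:
            pivots += 1              # this row has a leading variable
    # no contradiction: unique iff every variable leads some row
    return 'unique' if pivots == nvars else 'infinite'

print(kind_of_solution_set([[1, 2, 3], [0, 0, 1]], 2))   # contradictory row
print(kind_of_solution_set([[1, 0, 3], [0, 1, 1]], 2))   # every variable leads a row
print(kind_of_solution_set([[1, 2, 3], [0, 0, 0]], 2))   # one free variable
```

The three sample matrices come out 'empty', 'unique', and 'infinite' respectively, mirroring the three cases in the paragraph above.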

In reduced echelon form we can read off not just what kind of solution set the system has, but also its description. Whether or not the echelon form is reduced, we have no trouble describing the solution set when it is empty, of course. The two examples above show that when the system has a single solution then the solution can be read off from the right-hand column. In the case when the solution set is infinite, its parametrization can also be read off from the reduced echelon form. Consider, for example, this system that is shown brought to echelon form and then to reduced echelon form.



$$\left(\begin{array}{cccc|c} 2 &6  &1  &2  &5  \\ 0  &3  &1  &4  &1  \\ 0  &3  &1  &2  &5 \end{array}\right) \xrightarrow[]{-\rho_2+\rho_3} \left(\begin{array}{cccc|c} 2 &6  &1  &2  &5  \\ 0  &3  &1  &4  &1  \\ 0  &0  &0  &-2 &4 \end{array}\right)$$

$$\xrightarrow[\begin{array}{c}\scriptstyle (1/3)\rho_2 \\ \scriptstyle -(1/2)\rho_3\end{array}]{(1/2)\rho_1} \;\xrightarrow[-\rho_3+\rho_1]{-(4/3)\rho_3+\rho_2} \;\xrightarrow[]{-3\rho_2+\rho_1} \left(\begin{array}{cccc|c} 1 &0  &-1/2  &0  &-9/2  \\ 0  &1  &1/3   &0  &3  \\ 0  &0  &0     &1  &-2 \end{array}\right)$$

Starting with the middle matrix, the echelon form version, back substitution produces $$-2x_4=4$$ so that $$x_4=-2$$, then another back substitution gives $$3x_2+x_3+4(-2)=1$$ implying that $$x_2=3-(1/3)x_3$$, and then the final back substitution gives $$2x_1+6(3-(1/3)x_3)+x_3+2(-2)=5$$ implying that $$x_1=-(9/2)+(1/2)x_3$$. Thus the solution set is this.



$$S=\{\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} =\begin{pmatrix} -9/2 \\ 3 \\ 0 \\ -2 \end{pmatrix} +\begin{pmatrix} 1/2 \\ -1/3 \\ 1 \\ 0 \end{pmatrix}x_3 \,\big|\, x_3\in\mathbb{R}\} $$

Now, considering the final matrix, the reduced echelon form version, note that adjusting the parametrization by moving the $$x_3$$ terms to the other side does indeed give the description of this infinite solution set.
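The Gauss-Jordan reduction itself can be carried out exactly by machine. Here is a small Python sketch (the helper name `rref` is ours; `fractions.Fraction` keeps the arithmetic exact, so the entries $-1/2$, $1/3$, and $-9/2$ come out as true rationals rather than floating-point approximations) applied to the augmented matrix of this example.

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan reduce a matrix, given as a list of rows, using exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    lead = 0                                   # row where the next pivot will go
    for col in range(len(M[0])):
        # find a row at or below `lead` with a nonzero entry in this column
        piv = next((r for r in range(lead, len(M)) if M[r][col] != 0), None)
        if piv is None:
            continue                           # no pivot in this column
        M[lead], M[piv] = M[piv], M[lead]      # swap the pivot row into place
        M[lead] = [x / M[lead][col] for x in M[lead]]   # scale the leading entry to 1
        for r in range(len(M)):                # clear every other entry in the column
            if r != lead and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[lead])]
        lead += 1
    return M

# The augmented matrix of the example system.
R = rref([[2, 6, 1, 2, 5],
          [0, 3, 1, 4, 1],
          [0, 3, 1, 2, 5]])
for row in R:
    print(row)
```

The output rows are $(1,0,-1/2,0,-9/2)$, $(0,1,1/3,0,3)$, and $(0,0,0,1,-2)$, matching the reduced echelon form above; the particular solution and the $x_3$ coefficients in the parametrization of $S$ can be read straight off them.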

Part of the reason that this works is straightforward. While a set can have many parametrizations that describe it, e.g., both of these also describe the above set $$S$$ (take $$t$$ to be $$x_3/6$$ and $$s$$ to be $$x_3-1$$)



$$\{\begin{pmatrix} -9/2 \\ 3 \\ 0 \\ -2 \end{pmatrix} +\begin{pmatrix} 3 \\ -2 \\ 6 \\ 0 \end{pmatrix}t \,\big|\, t\in\mathbb{R}\} \qquad \{\begin{pmatrix} -4 \\ 8/3 \\ 1 \\ -2 \end{pmatrix} +\begin{pmatrix} 1/2 \\ -1/3 \\ 1 \\ 0 \end{pmatrix}s \,\big|\, s\in\mathbb{R}\} $$

nonetheless we have in this book stuck to a convention of parametrizing using the unmodified free variables (that is, $$x_3=x_3$$ instead of $$x_3=6t$$). We can easily see that a reduced echelon form version of a system is equivalent to a parametrization in terms of unmodified free variables. For instance,


$$\begin{array}{rl} x_1 &=4-2x_3 \\ x_2 &=3-x_3 \end{array} \quad\Longleftrightarrow\quad \left(\begin{array}{ccc|c} 1 &0  &2  &4  \\ 0  &1  &1  &3  \\ 0  &0  &0  &0 \end{array}\right) $$

(to move from left to right we also need to know how many equations are in the system). So, the convention of parametrizing with the free variables by solving each equation for its leading variable and then eliminating that leading variable from every other equation is exactly equivalent to the reduced echelon form conditions that each leading entry must be a one and must be the only nonzero entry in its column.
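This read-off can itself be written as a short procedure. The Python sketch below (the function name `parametrize` is ours) takes a reduced echelon form augmented matrix and produces the particular solution plus one direction vector per free variable, by setting each free variable to its own parameter and moving its terms to the other side of each equation.

```python
from fractions import Fraction

def parametrize(R, nvars):
    """Read the free-variable parametrization off a reduced echelon form
    augmented matrix R with nvars variables (the last column is the constants)."""
    pivot_of = {}                    # leading column -> the row it leads
    for row in R:
        lead = next((c for c, x in enumerate(row[:nvars]) if x != 0), None)
        if lead is not None:
            pivot_of[lead] = row
    free = [c for c in range(nvars) if c not in pivot_of]
    # particular solution: free variables 0, leading variables from the constants
    particular = [pivot_of[c][-1] if c in pivot_of else Fraction(0)
                  for c in range(nvars)]
    directions = []
    for f in free:
        d = [Fraction(0)] * nvars
        d[f] = Fraction(1)
        for c, row in pivot_of.items():
            d[c] = -row[f]           # move the free-variable term across the equals sign
        directions.append(d)
    return particular, directions

# The reduced echelon form matrix from the display above.
R = [[Fraction(1), Fraction(0), Fraction(2), Fraction(4)],
     [Fraction(0), Fraction(1), Fraction(1), Fraction(3)],
     [Fraction(0), Fraction(0), Fraction(0), Fraction(0)]]
p, ds = parametrize(R, 3)
print(p, ds)
```

For this matrix it returns the particular solution $(4,3,0)$ and the single direction $(-2,-1,1)$, i.e. exactly the parametrization $x_1=4-2x_3$, $x_2=3-x_3$, $x_3=x_3$.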

Not as straightforward is the other part of the reason that the reduced echelon form version allows us to read off the parametrization that we would have gotten had we stopped at echelon form and then done back substitution. The prior paragraph shows that reduced echelon form corresponds to some parametrization, but why the same parametrization? A solution set can be parametrized in many ways, and Gauss' method or the Gauss-Jordan method can be done in many ways, so a first guess might be that we could derive many different reduced echelon form versions of the same starting system and many different parametrizations. But we never do. Experience shows that starting with the same system and proceeding with row operations in many different ways always yields the same reduced echelon form and the same parametrization (using the unmodified free variables).

In the rest of this section we will show that the reduced echelon form version of a matrix is unique. It follows that the parametrization of a linear system in terms of its unmodified free variables is unique because two different ones would give two different reduced echelon forms.

We shall use this result, and the ones that lead up to it, in the rest of the book, but perhaps a restatement in a way that makes it seem more immediately useful will be encouraging. Imagine that we solve a linear system, parametrize, and check in the back of the book for the answer, but the parametrization there appears different. Have we made a mistake, or could these be different-looking descriptions of the same set, as with the three descriptions above of $$S$$? The prior paragraph notes that we will show here that different-looking parametrizations (using the unmodified free variables) describe genuinely different sets.

Here is an informal argument that the reduced echelon form version of a matrix is unique. Consider again the example that started this section of a matrix that reduces to three different echelon form matrices. The first matrix of the three is the natural echelon form version. The second matrix is the same as the first except that a row has been halved. The third matrix, too, is just a cosmetic variant of the first. The definition of reduced echelon form outlaws this kind of fooling around. In reduced echelon form, halving a row is not possible because that would change the row's leading entry away from one, and neither is combining rows possible, because then a leading entry would no longer be alone in its column.

This informal justification is not a proof; we have argued that no two different reduced echelon form matrices are related by a single row operation step, but we have not ruled out the possibility that a sequence of several steps might relate them. Before we give that proof, we finish this subsection by rephrasing our work in a terminology that will be enlightening.

Many different matrices yield the same reduced echelon form matrix. The three echelon form matrices from the start of this section, and the matrix they were derived from, all give this reduced echelon form matrix.



$$\begin{pmatrix} 1 &0  \\ 0  &1 \end{pmatrix} $$

We think of these matrices as related to each other. The next result speaks to this relationship.

This lemma suggests that "reduces to" is misleading: where $$ A\longrightarrow B $$, we shouldn't think of $$ B $$ as "after" $$ A $$ or as "simpler than" $$A$$. Instead we should think of the two matrices as interreducible. Below is a picture of the idea. The matrices from the start of this section and their reduced echelon form version are shown in a cluster; their interreducibility relationships are shown also. We say that matrices that reduce to each other are "equivalent with respect to the relationship of row reducibility". The next result verifies this statement using the definition of an equivalence.

The diagram below shows the collection of all matrices as a box. Inside that box, each matrix lies in some class. Matrices are in the same class if and only if they are interreducible. The classes are disjoint; no matrix is in two distinct classes. The collection of matrices has been partitioned into row equivalence classes.

One of the classes in this partition is the cluster of matrices shown above, expanded to include all of the nonsingular $$2 \! \times \! 2$$ matrices.

The next subsection proves that the reduced echelon form of a matrix is unique; that every matrix reduces to one and only one reduced echelon form matrix. Rephrased in terms of the row-equivalence relationship, we shall prove that every matrix is row equivalent to one and only one reduced echelon form matrix. In terms of the partition what we shall prove is: every equivalence class contains one and only one reduced echelon form matrix. So each reduced echelon form matrix serves as a representative of its class.

After that proof we shall, as mentioned in the introduction to this section, have a way to decide if one matrix can be derived from another by row reduction. We just apply the Gauss-Jordan procedure to both and see whether or not they come to the same reduced echelon form.
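That decision procedure is easy to sketch in code. Below is an illustrative Python version (the helper names `rref` and `row_equivalent` are ours, not from any library; exact arithmetic via `fractions.Fraction` avoids rounding trouble): reduce both matrices and compare the results.

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan reduce a matrix (list of rows) using exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(lead, len(M)) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[lead], M[piv] = M[piv], M[lead]       # swap the pivot row into place
        M[lead] = [x / M[lead][col] for x in M[lead]]
        for r in range(len(M)):
            if r != lead and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[lead])]
        lead += 1
    return M

def row_equivalent(A, B):
    """Two same-sized matrices are row equivalent exactly when they share
    a reduced echelon form."""
    return rref(A) == rref(B)

print(row_equivalent([[1, 2], [3, 4]], [[0, 1], [1, 0]]))   # both nonsingular
print(row_equivalent([[1, 2], [3, 4]], [[1, 2], [2, 4]]))   # the second is singular
```

The first pair are both nonsingular $2\!\times\!2$ matrices, so each reduces to the identity and the test reports True; the second comparison reports False because a singular matrix cannot share the identity as its reduced echelon form.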

Exercises