Linear Algebra/Introduction to Matrices and Determinants

The determinant is a function which associates to a square matrix an element of the field on which it is defined (commonly the real or complex numbers).

Matrices
Informally, an m×n matrix (plural: matrices) is a rectangular table of entries from a field (that is to say, each entry is an element of a field). Here m is the number of rows and n the number of columns in the table. Those unfamiliar with the concept of a field can, for now, assume that by a field of characteristic 0 (which we will denote by F) we are referring to a particular subset of the set of complex numbers.

An m×n matrix (read as "m by n matrix") is usually written as:


 * $$A=\left(\begin{matrix} a_{11}&a_{12}&\cdots&a_{1n}\\ a_{21}&a_{22}&\cdots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{m1}&a_{m2}&\cdots&a_{mn} \end{matrix}\right)$$

The $$i^{th}$$ row is an element of $$F^n$$, with the n components $$\begin{pmatrix}a_{i1} & a_{i2} & \cdots & a_{in} \end{pmatrix}$$. Similarly, the $$j^{th}$$ column is an element of $$F^m$$, with the m components $$\begin{pmatrix}a_{1j}\\a_{2j}\\\vdots\\a_{mj}\end{pmatrix}$$.

Here m and n are called the dimensions of the matrix. The dimensions of a matrix are always given with the number of rows first, then the number of columns. It is also said that an m by n matrix has an order of m×n.

Formally, an m×n matrix M is a function $$M:A\rightarrow F$$ where A = {1, 2, ..., m} × {1, 2, ..., n} and F is the field under consideration. It is almost always better to visualize a matrix as a rectangular table (or array) than as a function.

A matrix having only one row is called a row matrix (or a row vector) and a matrix having only one column is called a column matrix (or a column vector). Two matrices of the same order whose corresponding entries are equal are considered equal. The (i,j)-entry of the matrix (often written as $$A_{ij}$$ or $$A_{i,j}$$) is the element at the intersection of the $$i^{th}$$ row (from the top) and the $$j^{th}$$ column (from the left).

For example,


 * $$\begin{pmatrix} 3 & 4 & 8 \\ 2 & 7 & 11 \\ 1 & 1 & 1 \end{pmatrix}$$

is a 3×3 matrix (read "3 by 3"). The 2nd row is $$\begin{pmatrix}2 & 7 & 11 \end{pmatrix}$$ and the 3rd column is $$\begin{pmatrix} 8\\ 11 \\ 1\end{pmatrix}$$. The (2,3) entry is the entry at the intersection of the 2nd row and the 3rd column, that is, 11.
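As a quick illustration, the example matrix and its indexing can be sketched in Python; the nested-list representation used here is just one convenient choice, not part of the definition. Python indices are 0-based, so the mathematical (i, j) entry corresponds to `A[i-1][j-1]`.

```python
# The 3x3 example above as a nested list of rows.
A = [[3, 4, 8],
     [2, 7, 11],
     [1, 1, 1]]

row_2 = A[1]                     # the 2nd row: (2, 7, 11)
col_3 = [row[2] for row in A]    # the 3rd column: (8, 11, 1)
entry_2_3 = A[1][2]              # the (2,3) entry: 11
```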

Some special kinds of matrices are:


 * A square matrix is a matrix which has the same number of rows and columns. A diagonal matrix is a square matrix whose nonzero entries occur only on the main diagonal (i.e. at the $$A_{i,i}$$ positions).

 * The unit matrix or identity matrix $$I_n$$ is the matrix with the elements on the diagonal set to 1 and all other elements set to 0. Mathematically, its entries are given by Kronecker's delta $$\delta_{i,j}$$: $$ I_{i,j} = \delta_{i,j} = \begin{cases} 1, & i=j \\ 0, & i \neq j \end{cases}$$

For example, if n = 3:

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

 * The transpose of an m-by-n matrix A is the n-by-m matrix $$A^T$$ formed by turning rows into columns and columns into rows, i.e. $$A^T_{j,i} = A_{i,j} \;\forall i,j$$. An example is $$\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 3 & 5\\ 2 & 4 & 6 \end{bmatrix}$$
 * A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if $$A^{\mathrm{T}} = A$$. An example is $$\begin{bmatrix} 1 & 2 & 3\\ 2 & 4 & -5\\ 3 & -5 & 6\end{bmatrix}$$
 * A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, A is skew-symmetric if $$A^{\mathrm{T}} = -A$$. An example is $$\begin{bmatrix} 0 & -3 & 4\\ 3 & 0 & -5\\ -4 & 5 & 0\end{bmatrix}$$

Properties of these matrices are developed in the exercises.
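The transpose-based definitions above translate directly into code. A minimal sketch follows; the helper names (`transpose`, `is_symmetric`, `is_skew_symmetric`) are illustrative, not standard library functions.

```python
def transpose(A):
    """Turn rows into columns and columns into rows: (A^T)[j][i] = A[i][j]."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def is_symmetric(A):
    # A square matrix equal to its own transpose
    return transpose(A) == A

def is_skew_symmetric(A):
    # A square matrix whose transpose equals its negative
    return transpose(A) == [[-x for x in row] for row in A]

# The symmetric and skew-symmetric examples from the text:
S = [[1, 2, 3], [2, 4, -5], [3, -5, 6]]
K = [[0, -3, 4], [3, 0, -5], [-4, 5, 0]]
```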

Determinants
To define a determinant of order n, suppose there are n² elements of a field, $$s_{ij}$$, where i and j are less than or equal to n. Define the following function on permutations (this function is important in the definition):

$$S(a_1,a_2,a_3,\ldots,a_n)$$ = the number of reversals, that is, the number of pairs $$i<j$$ for which $$a_i>a_j$$.

Suppose $$(a_1,a_2,a_3,\ldots,a_n)$$ is a permutation of the numbers from 1 to n. Then define a term of the determinant to be $$(-1)^{S(a_1,a_2,a_3,\ldots,a_n)}\,s_{1a_1}s_{2a_2}s_{3a_3}\cdots s_{na_n}$$. The sum of all possible terms (i.e. over all possible permutations) is called the determinant.
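The permutation definition can be implemented directly. This is a brute-force sketch for illustration only: it sums n! terms, so it is far too slow for large matrices.

```python
from itertools import permutations
from math import prod

def reversals(perm):
    """Count pairs (i, j) with i < j but perm[i] > perm[j]."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])

def det(M):
    """Sum, over all permutations p, of (-1)^reversals(p) times the product
    M[0][p[0]] * M[1][p[1]] * ... * M[n-1][p[n-1]] (0-based indices)."""
    n = len(M)
    return sum((-1) ** reversals(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```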

Theorem
Recall that the transpose of a matrix A, $$A^T$$, is the matrix resulting when the columns and rows are interchanged, i.e. the matrix with entries $$s_{ji}$$ when A is the matrix with entries $$s_{ij}$$. A matrix and its transpose have the same determinant:
 * $$\det(A^\top) = \det(A). \,$$

Proof
All terms are the same, and the signs of the terms are also unchanged: transposing replaces each permutation by its inverse, which has the same number of reversals. Thus, the sum is the same.
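A numerical spot-check of the theorem; the brute-force permutation determinant is restated here so the snippet stands alone.

```python
from itertools import permutations
from math import prod

def det(M):
    # Brute-force permutation-sum determinant (for illustration only)
    n = len(M)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[3, 4, 8], [2, 7, 11], [1, 1, 1]]
AT = [list(col) for col in zip(*A)]   # the transpose of A
```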

Theorem
Interchanging two rows (or columns) changes the sign of the determinant:
 * $$\det \begin{bmatrix} \cdots \\ \mbox{row A} \\ \cdots \\ \mbox{row B} \\ \cdots \end{bmatrix} = - \det \begin{bmatrix}\cdots \\ \mbox{row B} \\ \cdots \\ \mbox{row A} \\ \cdots \end{bmatrix} $$.

Proof
To show this, first suppose two adjacent rows (or columns) are interchanged. The reversals in a term are unaffected, except for the pair of elements of that term lying in those two rows (or columns): the interchange adds or removes exactly one reversal for that pair. This changes the sign of every term, and thus the sign of the determinant. Now suppose the ath row and the (a+n)th row are interchanged. Interchange the ath row successively with the (a+1)th row, then the (a+2)th row, and so on until it reaches the (a+n)th position; this takes n adjacent interchanges. The row originally in the (a+n)th position now sits in the (a+n-1)th position; move it backwards to the ath position, which takes n-1 further adjacent interchanges. The net effect is to switch the two rows using 2n-1 adjacent interchanges, an odd number, so the determinant is multiplied by -1 an odd number of times; its total effect is therefore to multiply by -1.

Corollary
A determinant with two rows (or columns) that are the same has the value 0. Proof: Interchanging the two identical rows (or columns) leaves the matrix, and hence the determinant, unchanged; but by the preceding theorem the interchange also changes the sign of the determinant. The determinant is therefore its own additive inverse, and the only number equal to its own negative is 0.
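Both the theorem and its corollary can be spot-checked numerically; the brute-force determinant is restated so the snippet stands alone.

```python
from itertools import permutations
from math import prod

def det(M):
    # Brute-force permutation-sum determinant (for illustration only)
    n = len(M)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[3, 4, 8], [2, 7, 11], [1, 1, 1]]
A_swapped = [A[1], A[0], A[2]]          # rows 1 and 2 interchanged
B = [[1, 2, 3], [1, 2, 3], [4, 5, 6]]   # two identical rows
```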

Theorem
The determinant is linear in each row (and in each column) of the matrix.
 * $$\det \begin{bmatrix} \ddots & \vdots & \ldots \\ \lambda a_1 + \mu b_1 & \cdots & \lambda a_n + \mu b_n \\   \cdots & \vdots & \ddots \end{bmatrix} = \lambda \det \begin{bmatrix}  \ddots & \vdots & \cdots \\ a_1 & \cdots & a_n \\   \cdots & \vdots & \ddots \end{bmatrix} + \mu \det \begin{bmatrix}  \ddots & \vdots & \cdots \\  b_1 & \cdots & b_n \\   \cdots & \vdots & \ddots \end{bmatrix}$$

Proof
The terms are of the form $$a_1\cdots(\lambda a + \mu b)\cdots a_n$$. Using the distributive law of fields, this comes out to be $$a_1\cdots\lambda a\cdots a_n + a_1\cdots\mu b\cdots a_n$$, and thus the sum of such terms is the sum of the two determinants:

$$\lambda \det \begin{bmatrix} \ddots & \vdots & \cdots \\ a_1 & \cdots & a_n \\   \cdots & \vdots & \ddots \end{bmatrix} + \mu \det \begin{bmatrix}  \ddots & \vdots & \cdots \\  b_1 & \cdots & b_n \\   \cdots & \vdots & \ddots \end{bmatrix}$$

Corollary
Adding a row (or column) times a number to another row (or column) does not affect the value of a determinant.

Proof
Suppose A is a matrix whose kth column has had $$\mu$$ times another column added to it: $$ \begin{bmatrix} a_{11} & a_{12} & a_{13} & \ldots & a_{1k}+\mu a_{1b} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & a_{2k}+\mu a_{2b} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3k}+\mu a_{3b} & \ldots & a_{3n} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\  a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nk}+\mu a_{nb} & \ldots & a_{nn} \end{bmatrix}$$

where the $$a_{ib}$$ are the elements of the other column. By the linearity property, this is equal to

$$ \begin{bmatrix} a_{11} & a_{12} & a_{13} & \ldots & a_{1k} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & a_{2k} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3k} & \ldots & a_{3n} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\  a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nk} & \ldots & a_{nn} \end{bmatrix} +  \begin{bmatrix} a_{11} & a_{12} & a_{13} & \ldots & \mu a_{1b} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & \mu a_{2b} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & \mu a_{3b} & \ldots & a_{3n} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\  a_{n1} & a_{n2} & a_{n3} & \ldots & \mu a_{nb} & \ldots & a_{nn} \end{bmatrix}$$

The second determinant is equal to 0: by linearity, the factor $$\mu$$ can be taken out of the kth column, leaving a determinant with two columns that are the same. Thus, the expression is equal to $$ \begin{bmatrix} a_{11} & a_{12} & a_{13} & \ldots & a_{1k} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & a_{2k} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3k} & \ldots & a_{3n} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\  a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nk} & \ldots & a_{nn}\end{bmatrix}$$

which is the determinant of the original matrix A.
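This corollary can also be spot-checked numerically; the brute-force determinant is restated so the snippet stands alone.

```python
from itertools import permutations
from math import prod

def det(M):
    # Brute-force permutation-sum determinant (for illustration only)
    n = len(M)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

mu = 5
A = [[3, 4, 8], [2, 7, 11], [1, 1, 1]]
# Add mu times the 1st column to the 2nd column:
A2 = [[r[0], r[1] + mu * r[0], r[2]] for r in A]
```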


 * It is easy to see that $$\det(rI_n) = r^n \,$$ and thus
 * $$\det(rA) = \det(rI_n \cdot A) = r^n \det(A) \,$$ for all $$n$$-by-$$n$$ matrices $$A$$ and all scalars $$r$$.
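The scaling rule $$\det(rA) = r^n \det(A)$$ can be checked the same way; the brute-force determinant is restated so the snippet stands alone.

```python
from itertools import permutations
from math import prod

def det(M):
    # Brute-force permutation-sum determinant (for illustration only)
    n = len(M)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

r, n = 3, 3
A = [[3, 4, 8], [2, 7, 11], [1, 1, 1]]
rA = [[r * x for x in row] for row in A]   # every entry scaled by r
```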


 * A matrix over a commutative ring R is invertible if and only if its determinant is a unit in R. In particular, if A is a matrix over a field such as the real or complex numbers, then A is invertible if and only if det(A) is not zero. In this case we have


 * $$\det(A^{-1}) = \det(A)^{-1}. \,$$

Expressed differently: the vectors v1,...,vn in Rn form a basis if and only if det(v1,...,vn) is non-zero.

The determinants of a complex matrix and of its conjugate transpose are conjugate:
 * $$\det(A^*) = \det(A)^*. \,$$
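A small numerical check of the conjugate-transpose identity, using Python's built-in complex numbers; the brute-force determinant is restated so the snippet stands alone.

```python
from itertools import permutations
from math import prod

def det(M):
    # Brute-force permutation-sum determinant (for illustration only)
    n = len(M)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1 + 2j, 3 + 0j],
     [0 - 1j, 2 + 0j]]
# Conjugate transpose A*: transpose, then conjugate every entry
A_star = [[A[i][j].conjugate() for i in range(2)] for j in range(2)]
```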