Solutions To Mathematics Textbooks/Algebra (9780132413770)/Chapter 1

Exercise 1.7
We show that

$$\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}^n = \begin{pmatrix} 1 & n & n(n+1)/2 \\ 0 & 1 & n \\ 0 & 0 & 1 \end{pmatrix}$$.

When $$n = 1$$, the equation holds. Assume it holds for $$n = k$$ and consider $$n=k+1$$. Then we have

$$\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}^{k+1} = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \times \begin{pmatrix} 1 & k & k(k+1)/2 \\ 0 & 1 & k \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & k+1 & (k+1)(k+2)/2 \\ 0 & 1 & k+1 \\ 0 & 0 & 1 \end{pmatrix}$$.
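The closed form can also be spot-checked numerically; a quick sketch with numpy:

```python
import numpy as np

# Upper triangular matrix from Exercise 1.7
M = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])

def closed_form(n):
    # Claimed formula for M^n
    return np.array([[1, n, n * (n + 1) // 2],
                     [0, 1, n],
                     [0, 0, 1]])

# Verify the identity for the first few powers
for n in range(1, 10):
    assert np.array_equal(np.linalg.matrix_power(M, n), closed_form(n))
```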

Exercise 3.4
Using row operations we can bring any matrix to row-echelon form. If column operations are also allowed, we can go further: scale each pivot to 1 (row operations already clear the rest of the pivot's column), and then use column operations to clear every other entry in the pivot's row. Hence we can bring the matrix to a form in which the pivot entries are 1 and all other entries are 0. If the matrix is square and invertible, row operations alone already suffice.
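The reduction described above can be sketched in code; the helper `rank_normal_form` below is a hypothetical illustration over floats, not from the text:

```python
import numpy as np

def rank_normal_form(A):
    """Reduce A with row and column operations to a matrix whose only
    nonzero entries are r leading ones, where r = rank(A)."""
    A = A.astype(float).copy()
    m, n = A.shape
    r = 0
    for _ in range(min(m, n)):
        # pick the largest remaining entry as the pivot
        sub = np.abs(A[r:, r:])
        if sub.max() < 1e-12:
            break
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        A[[r, r + i]] = A[[r + i, r]]        # row swap
        A[:, [r, r + j]] = A[:, [r + j, r]]  # column swap
        A[r] /= A[r, r]                      # scale the pivot to 1
        for k in range(m):                   # clear the pivot column (row ops)
            if k != r:
                A[k] -= A[k, r] * A[r]
        for k in range(n):                   # clear the pivot row (column ops)
            if k != r:
                A[:, k] -= A[r, k] * A[:, r]
        r += 1
    return A, r

# Example: a rank-2 matrix reduces to diag(1, 1, 0)
A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1]])
N, r = rank_normal_form(A)
```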

Exercise 4.6
Let $$ A \in \mathbb{R}^{n \times n}, D \in \mathbb{R}^{k \times k}, B \in \mathbb{R}^{n \times k} $$ and $$M = \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} \in \mathbb{R}^{(n+k) \times (n+k)}$$. We want to show that $$\det M = \det A \cdot \det D$$.

To show this, we use the formula (1.6.4) for the determinant: $$\det M = \sum_{p \in S_{n+k}} \textrm{sign}(p)\, m_{1, p(1)}\cdots m_{n, p(n)}\, m_{n+1, p(n+1)}\cdots m_{n+k, p(n+k)} $$. Since $$m_{n+i, p(n+i)} = 0 $$ whenever $$1 \leq i \leq k $$ and $$p(n+i) \leq n $$, the products in the sum are nonzero only when $$p $$ is a permutation such that $$p(\{1,2,\ldots,n\}) = \{1,2, \ldots,n\} $$ and $$p(\{n+1,n+2,\ldots,n+k\}) = \{n+1,n+2, \ldots,n+k\} $$. Each such $$p $$ corresponds to a pair $$(\pi, \sigma) \in S_n \times S_k $$ with $$p(i) = \pi(i) $$ for $$i \leq n $$, $$p(n+j) = n + \sigma(j) $$ for $$1 \leq j \leq k $$, and $$\textrm{sign}(p) = \textrm{sign}(\pi)\textrm{sign}(\sigma) $$. Therefore we can write

$$\begin{alignat}{1} \det M  & = \sum_{\pi \in S_n, \sigma \in S_k} \textrm{sign}(\pi)\textrm{sign}(\sigma)\, m_{1, \pi(1)}\cdots m_{n, \pi(n)}\, m_{n+1, n+\sigma(1)}\cdots m_{n+k, n+\sigma(k)} \\ & = \sum_{\pi \in S_n} \textrm{sign}(\pi)\, m_{1, \pi(1)} \cdots m_{n, \pi(n)} \sum_{\sigma \in S_k} \textrm{sign}(\sigma)\, m_{n+1, n+\sigma(1)} \cdots m_{n+k, n+\sigma(k)} \\ & = \det A \cdot \det D \end{alignat} $$,

since $$m_{i, \pi(i)} = a_{i, \pi(i)} $$ for $$i \leq n $$ and $$m_{n+j, n+\sigma(j)} = d_{j, \sigma(j)} $$.
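A numerical spot-check of the identity with random blocks (a sketch; the dimensions are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, k))
D = rng.standard_normal((k, k))

# Block upper-triangular matrix M = [[A, B], [0, D]]
M = np.block([[A, B], [np.zeros((k, n)), D]])
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D))
```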

Exercise 5.1
$$(12)(13)(14)(15) = (15432) $$

$$(123)(234)(345) = (12)(45) $$

$$(1234)(2345) = (12453) $$

$$(12)(23)(34)(45)(51) = (2345) $$
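The products above can be verified mechanically; a small Python sketch, assuming (as in the computations above) that the rightmost cycle is applied first, working in $$S_5$$:

```python
def cycle_to_map(cycle, n=5):
    """Map {1..n} -> {1..n} for a single cycle, identity elsewhere."""
    p = {i: i for i in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a] = b
    return p

def compose(*cycles):
    """Compose cycles right-to-left (rightmost applied first)."""
    def apply(i):
        for c in reversed(cycles):
            i = cycle_to_map(c)[i]
        return i
    return {i: apply(i) for i in range(1, 6)}

# (12)(13)(14)(15) should equal (15432): 1->5, 5->4, 4->3, 3->2, 2->1
p = compose([1, 2], [1, 3], [1, 4], [1, 5])
```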

Exercise 5.2
Consider the permutation $$p = (1324) $$.


 * a) The permutation matrix associated to p is $$P = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\end{pmatrix} $$.
 * b) $$p = (1324) = (14)(12)(13) $$
 * c) $$\textrm{sign}(p) = -1 $$.
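Parts a) and c) can be checked in Python, assuming the convention $$P e_i = e_{p(i)}$$ (which reproduces the matrix in part a); the determinant of a permutation matrix equals the sign of the permutation:

```python
import numpy as np

# p = (1324): p(1)=3, p(3)=2, p(2)=4, p(4)=1 (1-indexed)
p = {1: 3, 2: 4, 3: 2, 4: 1}

# Convention P e_i = e_{p(i)}: entry 1 at row p(i), column i
P = np.zeros((4, 4), dtype=int)
for i, pi in p.items():
    P[pi - 1, i - 1] = 1

sign = round(np.linalg.det(P))  # det of a permutation matrix is its sign
```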

Exercise 6.2
Let $$A \in \mathbb{Z}^{n \times n} $$. Claim: $$A $$ is invertible and $$A^{-1} $$ has integer entries if and only if $$\det A = \pm 1 $$.

Assume $$A^{-1} $$ exists and has integer entries. First notice that from the determinant formula

$$\det A = \sum_{p \in S_{n}} \textrm{sign}(p) a_{1, p(1)}\cdots a_{n, p(n)} $$

we immediately see that if the entries $$a_{i,j} $$ are integers, then the determinant must be an integer, being a sum of products of integers. Conversely, if the determinant is not an integer, at least one of the entries has to be non-integral.

Next we observe that since $$\det(A) \neq 0 $$ and $$\det(A)\det(A^{-1}) = \det(AA^{-1}) = \det(I) = 1 $$, we have $$\det(A^{-1}) = \frac{1}{\det(A)} $$. So unless $$\det A = \pm 1 $$, $$\det (A^{-1}) $$ is not an integer, and hence $$A^{-1} \notin \mathbb{Z}^{n \times n} $$, contradicting the assumption.

Conversely, assume that $$\det A = \pm 1 $$. Then $$A^{-1} $$ exists and the cofactor matrix formula (Theorem 1.6.9) tells us that $$A^{-1} = \frac{1}{\det A} \textrm{cof}(A) $$. The entries of the cofactor matrix are given by $$\textrm{cof}(A)_{i,j} = (-1)^{i+j} \det A_{ji} $$, where $$A_{ji} $$ is the matrix $$A $$ with row $$j $$ and column $$i $$ removed. Hence if $$A \in \mathbb{Z}^{n \times n} $$, the cofactor matrix has integer entries, and since $$\det A = \pm 1 $$, the inverse also has integer entries.
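A concrete sanity check, using a hypothetical 2×2 integer matrix with determinant $$-1$$:

```python
import numpy as np

# An integer matrix with determinant -1 (illustrative example)
A = np.array([[2, 1],
              [3, 1]])
det = round(np.linalg.det(A))   # 2*1 - 1*3 = -1
Ainv = np.linalg.inv(A)
# By the cofactor formula, Ainv = (1/det) * [[1, -1], [-3, 2]],
# so all entries of the inverse are integers.
```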

Exercise M.8
a) The only problem here is assuming that $$L $$ is also the right inverse of $$A $$. Necessarily, if $$A \in \mathbb {R}^{m \times n} $$, we have to have $$L \in \mathbb {R}^{n \times m} $$, in which case the product $$AL $$ is a well-defined $$m \times m $$ matrix, but nothing guarantees that it equals the identity.

b) The sequence of steps is correct, and shows that the equality $$AX = ALB $$ holds. However, for $$L $$ to also be the right inverse of $$A $$, so that $$ALB = B $$, we would necessarily need $$m = n $$.
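A minimal example of a non-square matrix with a left inverse that is not a right inverse (a sketch; the matrices are illustrative, not from the exercise):

```python
import numpy as np

# m = 3 equations, n = 2 unknowns
A = np.array([[1, 0],
              [0, 1],
              [0, 0]])          # 3x2
L = np.array([[1, 0, 0],
              [0, 1, 0]])       # 2x3, a left inverse: L A = I_2

assert np.array_equal(L @ A, np.eye(2, dtype=int))
# A L is a well-defined 3x3 matrix, but it is not the identity
AL = A @ L
```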

Exercise M.11
a) We have the variables $$x_{0,1},x_{-1,0},x_{0,0},x_{1,0},x_{0,-1}$$ and the boundary conditions $$\beta_{2,0}=\beta_{1,-1}=\beta_{-2,0}=\beta_{-1,-1}=\beta_{0,-2}=0$$ and $$\beta_{1,1}=\beta_{0,2}=\beta_{-1,1}=1$$. Using the discrete Laplace equation at each interior point, these conditions translate to the equations

$$\begin{array}{llcr} (0,1): & \beta_{0,2} + x_{0,0} + \beta_{1,1} + \beta_{-1,1} -4 x_{0,1} & = & 0 \\ (-1,0): & \beta_{-1,1} + x_{0,0} + \beta_{-2,0} + \beta_{-1,-1} -4 x_{-1,0} & = & 0 \\ (0,0): & x_{1,0} + x_{-1,0} + x_{0,1} + x_{0,-1} -4 x_{0,0} & = & 0 \\ (1,0): & \beta_{2,0} + x_{0,0} + \beta_{1,1} + \beta_{1,-1} -4 x_{1,0} & = & 0 \\ (0,-1): & \beta_{0,-2} + x_{0,0} + \beta_{-1,-1} + \beta_{1,-1} -4 x_{0,-1} & = & 0 \end{array}$$,

which simplifies to the linear system

$$\begin{pmatrix} -4 & 0 & 1 & 0 & 0 \\ 0 & -4 & 1 & 0 & 0 \\ 1 & 1 & -4 & 1 & 1 \\ 0 & 0 & 1 & -4 & 0 \\ 0 & 0 & 1 & 0 & -4 \\ \end{pmatrix} \begin{pmatrix} x_{0,1} \\ x_{-1,0} \\ x_{0,0} \\ x_{1,0} \\ x_{0,-1} \\ \end{pmatrix} = \begin{pmatrix} -3 \\ -1 \\ 0 \\ -1 \\ 0 \\ \end{pmatrix}$$.

The solution to this system is obtained by multiplying the equation from the left with the inverse of the coefficient matrix (or by Gaussian elimination), which gives $$x_{0,1} = 41/48 $$, $$x_{-1,0} = x_{1,0} = 17/48 $$, $$x_{0,0} = 5/12 $$ and $$x_{0,-1} = 5/48 $$.
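Numerically, the system can be solved as follows (a sketch using numpy; the matrix and right-hand side are those of part a):

```python
import numpy as np

# Coefficient matrix and right-hand side from part a)
A = np.array([[-4,  0,  1,  0,  0],
              [ 0, -4,  1,  0,  0],
              [ 1,  1, -4,  1,  1],
              [ 0,  0,  1, -4,  0],
              [ 0,  0,  1,  0, -4]], dtype=float)
b = np.array([-3, -1, 0, -1, 0], dtype=float)

# x = (x_{0,1}, x_{-1,0}, x_{0,0}, x_{1,0}, x_{0,-1})
x = np.linalg.solve(A, b)
```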

b) Assume the maximum value is achieved at some point $$x_{u,v}$$ in the interior of the region $$R$$. Since $$x_{u,v}$$ is the average of its four neighbors and none of them can exceed the maximum, all four neighbors must also attain the maximum; if any one of them were smaller, another would have to be larger than $$x_{u,v}$$ for the average to equal $$x_{u,v}$$, a contradiction. Repeating this argument, the maximal value propagates all the way to a boundary point, so the maximum is always attained on the boundary of $$R$$.

c) Let $$A $$ be the coefficient matrix obtained from writing the linear system of the discrete Dirichlet problem, with entries $$a_{ij}$$ satisfying $$a_{ii} = -4$$ and $$a_{ij} \in \{0, 1\}$$ for $$i \neq j$$. The system has a unique solution if and only if $$A$$ is invertible, i.e. if the homogeneous system $$Ax = 0$$ has only the trivial solution. But $$Ax = 0$$ is exactly the discrete Dirichlet problem with all boundary values equal to 0, and by part b) (applied to both the maximum and, by the same argument, the minimum) every entry of a solution $$x$$ lies between 0 and 0. Hence $$x = 0$$, the matrix $$A$$ is invertible, and the linear system has a unique solution.