Solutions To Mathematics Textbooks/Algebra (9780132413770)/Chapter 3

Exercise 1.2
By Fermat's little theorem, $$5^{p-1} \equiv_p 1$$ for every prime $$p$$ that does not divide 5, so $$5^{-1} \equiv_p 5^{p-2}$$. With this, it is easy to see that $$5^{-1} \equiv_7 3$$, $$5^{-1} \equiv_{11} 9$$, $$5^{-1} \equiv_{13} 8$$ and $$5^{-1} \equiv_{17} 7$$.
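As a quick sanity check, the inverses can be verified with a short computation; Fermat's little theorem says $$5^{p-2}$$ reduces to $$5^{-1}$$ modulo $$p$$:

```python
# Verify the inverses of 5 modulo 7, 11, 13 and 17.
# By Fermat's little theorem, 5^(p-2) is the inverse of 5 modulo a prime p not dividing 5.
for p, inv in [(7, 3), (11, 9), (13, 8), (17, 7)]:
    assert pow(5, p - 2, p) == inv   # Fermat gives the claimed inverse
    assert (5 * inv) % p == 1        # and it really is an inverse
```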

Exercise 1.3
$$(x^3 + 3x^2+3x+1)(x^4 + 4x^3 + 6x^2 + 4x + 1) = x^7 + 7 x^6 + 21 x^5 + 35 x^4 + 35 x^3 + 21 x^2 + 7 x + 1 \equiv_7 x^7 + 1$$, as all the coefficients divisible by 7 reduce to 0.
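A sketch verifying the reduction, using the fact that the coefficients of $$(x+1)^7$$ are the binomial coefficients $$\binom{7}{k}$$:

```python
from math import comb

# Coefficients of (x+1)^7 = (x+1)^3 (x+1)^4, reduced modulo 7.
coeffs_mod_7 = [comb(7, k) % 7 for k in range(8)]

# Only the constant and leading coefficients survive, i.e. (x+1)^7 = x^7 + 1 mod 7.
assert coeffs_mod_7 == [1, 0, 0, 0, 0, 0, 0, 1]
```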

Exercise 1.10
Let us denote the matrices (appearing in the same order as in the book) by $$0, 1, A, B$$. We need to check the following:


 * $$0, 1, A, B$$ is a group under matrix addition with $$0$$ as the identity. Writing out the addition table, we see that the elements form an abelian group under addition with $$0$$ as the identity.
 * $$1, A, B$$ is a group under matrix multiplication with $$1$$ as the identity. Writing out the multiplication table, we see that the elements form an abelian group under multiplication with $$1$$ as the identity.
 * The distributive law follows from the distributive law for matrices in general.
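Since the four matrices are not reproduced here, the following sketch assumes the standard representation of the four-element field by $$2 \times 2$$ matrices over $$\mathbb{F}_2$$, with $$A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$$ and $$B = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$$ (an assumption, not taken from the book); under that assumption it checks the group axioms mechanically:

```python
# Assumed matrices: 0 = zero matrix, 1 = identity, A = [[0,1],[1,1]], B = [[1,1],[1,0]],
# all with entries taken modulo 2.
O = ((0, 0), (0, 0))
I = ((1, 0), (0, 1))
A = ((0, 1), (1, 1))
B = ((1, 1), (1, 0))
elems = [O, I, A, B]

def add(x, y):
    return tuple(tuple((x[i][j] + y[i][j]) % 2 for j in range(2)) for i in range(2))

def mul(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) % 2 for j in range(2))
                 for i in range(2))

# Closure and commutativity of both operations.
for x in elems:
    for y in elems:
        assert add(x, y) in elems and add(x, y) == add(y, x)
        assert mul(x, y) in elems and mul(x, y) == mul(y, x)

# 0 is the additive identity; in characteristic 2 every element is its own additive inverse.
assert all(add(O, x) == x and add(x, x) == O for x in elems)
# 1 is the multiplicative identity; A and B are mutually inverse.
assert all(mul(I, x) == x for x in elems)
assert mul(A, B) == I
```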

Exercise 1.11
Writing out the sum and the product of two elements of the given set, one sees that their coefficients are sums and products of elements of $$\mathbb{F}_3$$ and thus again lie in $$\mathbb{F}_3$$, so the set is closed under both operations. Every element has an additive inverse, obtained by negating its coefficients. To see that every non-zero element has a multiplicative inverse, write the condition $$zz^{-1} = 1$$, where $$z$$ is a given element of the set and $$z^{-1}$$ a candidate inverse with unknown coefficients, as a linear system; by Corollary 3.2.8 this system has a solution. The distributive law is immediate.

Exercise 2.2
a) The set of symmetric matrices is a vector space, since the sum of two symmetric matrices is symmetric, and any scalar multiple of a symmetric matrix is symmetric.

b) The set of invertible matrices is not a vector space, since it does not contain the zero matrix.

c) The space of upper triangular matrices is also a vector space by similar reasoning as used in part a).

Exercise 3.1
One possible basis for the space of symmetric matrices is given by the matrices $$A_{ij}$$ for $$i = 1, \ldots, n,~ j \leq i$$ that have zeros everywhere except in the $$i,j$$ and $$j, i$$ entries. There are $$\binom{n+1}{2} = \frac{1}{2}n(n+1)$$ such matrices, and they are linearly independent, since no two of them have a one in the same entry. Furthermore, the matrices $$A_{ij}$$ are symmetric, and clearly any symmetric matrix can be written as a linear combination of the $$A_{ij}$$.
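The dimension count can be sanity-checked numerically; a sketch for $$n = 4$$, flattening each $$A_{ij}$$ into a row of a matrix and checking its rank:

```python
import numpy as np

n = 4
basis = []
for i in range(n):
    for j in range(i + 1):          # j <= i, zero-indexed
        M = np.zeros((n, n))
        M[i, j] = M[j, i] = 1       # ones only in the (i,j) and (j,i) entries
        basis.append(M.ravel())

# There are n(n+1)/2 such matrices and they are linearly independent.
assert len(basis) == n * (n + 1) // 2
assert np.linalg.matrix_rank(np.array(basis)) == len(basis)
```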

Exercise 3.7
Let $$c_{ij} \in \mathbb{R}$$ be coefficients such that

$$\sum_{i, j} c_{ij} X_iY_j^t = 0$$. (1)

The matrix $$\sum_{i, j} c_{ij} X_iY_j^t$$ has as its $$k$$th column the vector $$\sum_{i, j} c_{ij}Y_{jk} X_i = \sum_i \left(\sum_j c_{ij}Y_{jk} \right) X_i$$, where $$Y_{jk}$$ is the $$k$$th entry of the vector $$Y_j$$. Denote $$\alpha_i = \sum_j c_{ij}Y_{jk}$$. Then (1), together with the fact that the vectors $$X_i$$ form a basis, implies that $$\alpha_i = 0$$ for all $$i$$. So we must have $$\sum_j c_{ij}Y_{jk} = 0$$ for all $$k, i$$. This implies that $$\sum_j c_{ij}Y_j = 0$$ for all $$i$$, but since the vectors $$Y_j$$ form a basis, we must have $$c_{ij} = 0$$ for all $$i, j$$.
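Numerically, the conclusion says that the $$n^2$$ outer products $$X_iY_j^t$$ are linearly independent whenever the $$X_i$$ and $$Y_j$$ form bases; a sketch with random matrices, whose columns form bases with probability 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
X = rng.standard_normal((n, n))   # columns X_i form a basis (almost surely)
Y = rng.standard_normal((n, n))   # columns Y_j form a basis (almost surely)

# Flatten each outer product X_i Y_j^t into a row vector.
outer = [np.outer(X[:, i], Y[:, j]).ravel() for i in range(n) for j in range(n)]

# The n^2 outer products are linearly independent, hence a basis of the n x n matrices.
assert np.linalg.matrix_rank(np.array(outer)) == n * n
```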

Exercise 3.8
Let $$A$$ be the matrix with the vectors $$v_1, \ldots, v_n$$ as column vectors, and let $$X, B \in F^n$$. Then $$AX = B$$ has a solution $$X$$ if and only if $$B$$ is a linear combination of the vectors $$v_1, \ldots, v_n$$. By Theorem 1.2.21, $$AX = B$$ has a unique solution $$X$$ for every $$B$$ if and only if $$A$$ is invertible.

In particular, $$AX = 0$$ then has the unique solution $$X = (0, \ldots, 0)^t$$. Together these show that 1) $$v_1, \ldots, v_n$$ span the space $$F^n$$, since every $$B$$ is a linear combination of them, and 2) $$v_1, \ldots, v_n$$ are linearly independent, since only the trivial combination gives $$0$$.

Exercise 4.2
a) $$\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$.

b) $$\begin{pmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & 1 & 0 \\ \vdots & \vdots & \vdots & \vdots  \\ 1 & 0 &  \cdots & 0 \end{pmatrix}$$.

c) $$\begin{pmatrix} 1 & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} \end{pmatrix}$$ or $$\begin{pmatrix} 1 & -\frac{1}{2} \\ 0 & -\frac{\sqrt{3}}{2} \end{pmatrix}$$.

Exercise 4.3
The given operations correspond to row operations on matrices. By Theorem 1.2.16, any matrix that is invertible can be reduced to the identity using such operations. In Exercise 3.8 we proved that the columns of a matrix form a basis if and only if the matrix is invertible.

Exercise 4.4
a) Any basis of $$V$$ corresponds to a matrix that is invertible, i.e., an element of $$GL_2(\mathbb{F}_p)$$. On the other hand, the column vectors of any element of $$GL_2(\mathbb{F}_p)$$ form a basis of $$V$$.

b) For $$GL_2(\mathbb{F}_p)$$ we have that there are in total $$p^4$$ matrices in $$\mathbb{F}_p^{2 \times 2}$$ of which we have to count the ones that are not invertible. Considering the columns of a matrix in $$\mathbb{F}_p^{2 \times 2}$$, we have


 * If the first column is one of the $$p^2-1$$ nonzero column vectors and the second column is one of the $$p-1$$ nonzero scalar multiples of the first, the matrix is not invertible; this gives $$(p^2-1)(p-1)$$ matrices.
 * If the first column is $$(0, 0)^t$$, the second column can be chosen in $$p^2-1$$ ways such that it is not also the $$(0, 0)^t$$ vector.
 * If the second column is $$(0, 0)^t$$, the first column can also be chosen in $$p^2-1$$ ways such that it is not the $$(0, 0)^t$$ vector.
 * There is exactly one matrix with both columns $$(0, 0)^t$$.

Combining these facts, we get that there are $$p^4 - (p^2-1)(p-1) - (p^2-1) - (p^2-1) - 1 = p(p+1)(p-1)^2$$ invertible matrices in $$\mathbb{F}_p^{2 \times 2}$$.

For $$SL_2(\mathbb{F}_p)$$ we want to count the matrices in $$\mathbb{F}_p^{2 \times 2}$$ with determinant equal to 1. The determinant is a surjective homomorphism from $$GL_2(\mathbb{F}_p)$$ onto $$\mathbb{F}_p^\times$$, so in $$GL_2(\mathbb{F}_p)$$ there are equally many elements with determinant 1, 2, 3, etc. Therefore, the number of elements in $$GL_2(\mathbb{F}_p)$$ is the number of elements with determinant 1 times $$p-1$$. From the previous calculation we thus get that the number of elements in $$SL_2(\mathbb{F}_p)$$ is $$p(p+1)(p-1)$$.
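Both counts can be checked by brute force for small primes:

```python
from itertools import product

def count_invertible(p):
    # Matrices (a b; c d) over F_p with nonzero determinant, i.e. |GL_2(F_p)|.
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p != 0)

def count_det_one(p):
    # Matrices over F_p with determinant exactly 1, i.e. |SL_2(F_p)|.
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p == 1)

for p in (2, 3, 5):
    assert count_invertible(p) == p * (p + 1) * (p - 1) ** 2
    assert count_det_one(p) == p * (p + 1) * (p - 1)
```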

Exercise 4.5
a) The key to finding the number of subspaces is to count the one-dimensional subspaces of $$\mathbb{F}_p^3$$, i.e., the pairwise non-proportional nonzero vectors.

Listing the subspaces of $$\mathbb{F}_p^3$$ by dimension:
 * Subspaces of dimension 0: 1.
 * Subspaces of dimension 1: Each such subspace is spanned by a nonzero vector of the form $$(a, b, c)^t$$ with $$a,b,c \in \mathbb{F}_p$$. There are $$p^3-1$$ such vectors, and each spans the same subspace as its $$p-1$$ nonzero scalar multiples. Hence, the number of one-dimensional subspaces is $$(p^3-1)/(p-1) = p^2 + p + 1$$.
 * Subspaces of dimension 2: Let $$W$$ be a set containing exactly one spanning vector from each one-dimensional subspace of $$\mathbb{F}_p^3$$, so that $$|W| = p^2 + p + 1$$. Any two distinct vectors from $$W$$ span a two-dimensional subspace of $$\mathbb{F}_p^3$$, and we can choose two vectors from $$W$$ in $$\binom{p^2 + p + 1}{2}$$ ways, but this overcounts the two-dimensional subspaces. Indeed, if $$v_1, v_2 \in W$$ with $$v_1 \neq v_2$$, then $$V = \textrm{Span}(v_1, v_2)$$ contains $$p^2$$ points and hence $$(p^2-1)/(p-1) = p + 1$$ one-dimensional subspaces, so $$V$$ arises from $$\binom{p + 1}{2}$$ of the pairs. Hence the number of two-dimensional subspaces of $$\mathbb{F}_p^3$$ is $$\binom{p^2 + p + 1}{2} / \binom{p+1}{2} = p^2 + p + 1 $$. Another way of arriving at the same conclusion is as follows: every two-dimensional subspace of $$\mathbb{F}_p^3$$ is the kernel of a nonzero linear functional, which is determined by the subspace up to a nonzero scalar. The two-dimensional subspaces therefore correspond to the one-dimensional subspaces of the dual space, of which there are $$p^2 + p + 1$$.
 * Subspaces of dimension 3: 1.

b) The case of $$\mathbb{F}_p^4$$ can be generalised from the previous case:
 * Subspaces of dimension 0: 1.
 * Subspaces of dimension 1: The number of one-dimensional subspaces can be calculated similarly as in a), and we get $$(p^4-1)/(p-1) = p^3 + p^2 + p + 1$$.
 * Subspaces of dimension 2: Similarly as in the case of $$\mathbb{F}_p^3$$, any two distinct one-dimensional subspaces span a two-dimensional subspace, and each two-dimensional subspace contains $$p+1$$ one-dimensional subspaces. Hence the number of two-dimensional subspaces is $$\binom{p^3 + p^2 + p + 1}{2} / \binom{p+1}{2} = (p^2+1)(p^2 + p + 1)$$.
 * Subspaces of dimension 3: Each three-dimensional subspace is the kernel of a nonzero linear functional, determined up to a nonzero scalar, so the three-dimensional subspaces correspond to the one-dimensional subspaces of the dual space. Therefore the number of three-dimensional subspaces is $$p^3 + p^2 + p + 1$$.
 * Subspaces of dimension 4: 1.
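A brute-force check of the one- and two-dimensional counts for small $$p$$, representing each subspace as the frozenset of its points:

```python
from itertools import product

def one_dim_subspaces(p, n):
    # Each line is the set of scalar multiples of some nonzero vector.
    nonzero = [v for v in product(range(p), repeat=n) if any(v)]
    return {frozenset(tuple(c * x % p for x in v) for c in range(p)) for v in nonzero}

def two_dim_subspaces(p, n):
    # Span of each pair of vectors; a two-dimensional subspace has exactly p^2 points.
    nonzero = [v for v in product(range(p), repeat=n) if any(v)]
    spans = set()
    for u in nonzero:
        for w in nonzero:
            span = frozenset(tuple((a * u[i] + b * w[i]) % p for i in range(n))
                             for a in range(p) for b in range(p))
            if len(span) == p * p:          # discard linearly dependent pairs
                spans.add(span)
    return spans

for p in (2, 3):
    assert len(one_dim_subspaces(p, 3)) == p**2 + p + 1
    assert len(two_dim_subspaces(p, 3)) == p**2 + p + 1
    assert len(one_dim_subspaces(p, 4)) == p**3 + p**2 + p + 1
    assert len(two_dim_subspaces(p, 4)) == (p**2 + 1) * (p**2 + p + 1)
```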

Exercise 5.1
Let $$V$$ be the space of symmetric and $$W$$ the space of skew-symmetric matrices. It is clear that $$\dim V = \frac{1}{2}n(n+1)$$ and $$\dim W = \frac{1}{2}n(n-1)$$ and that $$V \cap W$$ contains only the zero matrix, so the spaces are independent. By Proposition 3.6.4 b), we have $$\dim(V + W) = \dim V + \dim W = n^2 = \dim \mathbb{R}^{n \times n}$$ and so by Proposition 3.4.23, $$V + W = \mathbb{R}^{n \times n}$$.
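The decomposition is explicit: $$M = \frac{1}{2}(M + M^t) + \frac{1}{2}(M - M^t)$$, with the first summand in $$V$$ and the second in $$W$$. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))

S = (M + M.T) / 2   # symmetric part, lies in V
K = (M - M.T) / 2   # skew-symmetric part, lies in W

assert np.allclose(S, S.T)     # S is symmetric
assert np.allclose(K, -K.T)    # K is skew-symmetric
assert np.allclose(S + K, M)   # together they recover M
```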

Exercise 5.2
The condition $$\textrm{Tr}(M) = 0$$ introduces a linear dependency between the elements of the matrix. Therefore, we have $$\dim(W_1) = n^2-1$$, and thus any one-dimensional subspace of $$\mathbb{R}^{n \times n}$$ that is independent of $$W_1$$ suffices. For example we can take as $$W_2$$ the span of the matrix for which the top-left corner element is 1 and the rest are 0. Then $$W_1 + W_2 = \mathbb{R}^{n \times n}$$.
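Concretely, writing $$E$$ for the matrix with a 1 in the top-left corner and zeros elsewhere, every matrix decomposes as $$M = (M - \textrm{Tr}(M)E) + \textrm{Tr}(M)E$$ with the first summand in $$W_1$$ and the second in $$W_2$$; a sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
M = rng.standard_normal((n, n))

E = np.zeros((n, n))
E[0, 0] = 1                       # spans W_2

traceless = M - np.trace(M) * E   # lies in W_1, i.e. has trace zero
assert np.isclose(np.trace(traceless), 0)
assert np.allclose(traceless + np.trace(M) * E, M)
```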

Exercise 6.1
The given vectors span the set of sequences that are constant apart from a finite set of indices.

Exercise M.3
a) Let $$x(t) = a_2t^2 + a_1t + a_0$$ and $$y(t) = b_2t^2 + b_1t + b_0$$, and $$f(x,y) = c_{2,0}x^2 + c_{0,2}y^2 + c_{1,1}xy + c_{1,0}x + c_{0,1} y + c_{0,0}$$. Then we also have $$f(x(t),y(t)) = d_4t^4 + d_3t^3 + d_2t^2 + d_1t + d_0$$. The coefficients $$d_i$$ are linear in the coefficients $$c_{j,k}$$; explicitly,

$$\begin{array}{lcl} d_4 & = & c_{2,0}a_2^2 + c_{0,2}b_2^2 + c_{1,1}a_2b_2 \\ d_3 & = & 2c_{2,0}a_2a_1 + 2c_{0,2}b_2b_1 + c_{1,1}(a_2b_1 + a_1b_2) \\ d_2 & = & c_{2,0}(a_1^2 + 2a_2a_0) + c_{0,2}(b_1^2 + 2b_2b_0) + c_{1,1}(a_1b_1+a_2b_0+a_0b_2) + c_{1,0}a_2+c_{0,1}b_2 \\ d_1 & = & 2c_{2,0}a_1a_0 + 2c_{0,2}b_1b_0 + c_{1,1}(a_1b_0+a_0b_1) + c_{1,0}a_1 + c_{0,1}b_1 \\ d_0 & = & c_{2,0}a_0^2 + c_{0,2}b_0^2 + c_{1,1}a_0b_0 + c_{1,0}a_0 + c_{0,1}b_0 + c_{0,0} \end{array}$$

Setting each $$d_i$$ to zero yields a system of equations

$$\begin{pmatrix} a_2^2 & b_2^2 & a_2b_2 & 0 & 0 & 0 \\ 2a_2a_1 & 2b_2b_1 & a_2b_1+a_1b_2 & 0 & 0 & 0 \\ a_1^2 + 2a_2a_0 & b_1^2 + 2b_2b_0 & a_1b_1+a_2b_0+a_0b_2 & a_2 & b_2 & 0 \\ 2a_1a_0 & 2b_1b_0 & a_1b_0+a_0b_1 & a_1 &b_1 & 0\\ a_0^2 & b_0^2 & a_0b_0 & a_0 & b_0 & 1 \end{pmatrix} \begin{pmatrix} c_{2,0} \\ c_{0,2} \\ c_{1,1} \\ c_{1,0} \\ c_{0,1} \\ c_{0,0} \end{pmatrix} = 0$$.

By Corollary 1.2.14 this system has a solution where at least one of the coefficients $$c_{j,k}$$ is non-zero, so there is a polynomial $$f(x,y)$$ that is not identically zero, but $$f(x(t),y(t)) = 0$$ for every $$t$$.
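The system can also be solved numerically; a sketch for sample quadratics $$x(t), y(t)$$ (the coefficient values below are arbitrary illustrations, not taken from the book), extracting a null vector of the $$5 \times 6$$ coefficient matrix via SVD and checking that the resulting $$f$$ vanishes along the curve:

```python
import numpy as np

# Example quadratics (coefficients chosen arbitrarily, not from the book).
a2, a1, a0 = 1.0, 2.0, -1.0   # x(t) = t^2 + 2t - 1
b2, b1, b0 = 2.0, 0.0, 3.0    # y(t) = 2t^2 + 3

# Rows are the expressions for d_4, d_3, d_2, d_1, d_0.
M = np.array([
    [a2**2,           b2**2,           a2*b2,                  0,  0,  0],
    [2*a2*a1,         2*b2*b1,         a2*b1 + a1*b2,          0,  0,  0],
    [a1**2 + 2*a2*a0, b1**2 + 2*b2*b0, a1*b1 + a2*b0 + a0*b2,  a2, b2, 0],
    [2*a1*a0,         2*b1*b0,         a1*b0 + a0*b1,          a1, b1, 0],
    [a0**2,           b0**2,           a0*b0,                  a0, b0, 1],
])

# A null vector: the right-singular vector for the smallest singular value.
c20, c02, c11, c10, c01, c00 = np.linalg.svd(M)[2][-1]

# f(x(t), y(t)) vanishes for every t.
for t in np.linspace(-2, 2, 9):
    x = a2*t**2 + a1*t + a0
    y = b2*t**2 + b1*t + b0
    f = c20*x**2 + c02*y**2 + c11*x*y + c10*x + c01*y + c00
    assert abs(f) < 1e-8
```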

b) Using a similar approach to part a), we find for example $$f(x,y) = x^3 + x^2 - y^2$$.

c) Let $$x(t)$$ be a polynomial of degree $$d_x$$ and $$y(t)$$ a polynomial of degree $$d_y$$. Let $$f(x,y) = \sum_{i = 0}^{K}\sum_{j=0}^{K} c_{i,j}x^iy^j$$ with $$K = d_x + d_y$$ be a polynomial with unknown coefficients $$c_{i,j} \in \mathbb{R}$$. Each monomial $$x(t)^iy(t)^j$$ has degree at most $$K(d_x+d_y)$$ in $$t$$, so requiring $$f(x(t),y(t)) = 0$$ amounts to setting the coefficient of $$t^i$$ to 0 for each $$i \leq K(d_x+d_y)$$ in the polynomial $$f(x(t),y(t))$$. These equations are linear in the $$c_{i,j}$$, and there are at most $$K(d_x+d_y) + 1 = (d_x+d_y)^2 + 1$$ of them. On the other hand, there are $$(K+1)^2 = (d_x+d_y+1)^2 > (d_x+d_y)^2 + 1$$ variables $$c_{i,j}$$ (whenever $$d_x + d_y \geq 1$$), so by Corollary 1.2.14 the linear system has a non-zero solution. Note that in part a) we restricted the degree of the polynomial $$f(x,y)$$ to 2, and thus did not end up with as many equations as in this proof.