Solutions To Mathematics Textbooks/Algebra (9780132413770)/Chapter 2

Exercise 1.1
$$a(bc)=ab=a$$ and $$(ab)c=ac=a$$, so the law is associative. If $$1,a \in S$$ and $$a \neq 1$$, the composition law gives $$1 \cdot a = 1$$, while $$1$$ being an identity gives $$1 \cdot a = a$$, a contradiction. So the only set $$S$$ with this composition law and an identity is $$S = \{1\}$$.

Exercise 2.3
a) $$y = x^{-1}w^{-1}z$$

b) $$yzx = x^{-1}xyzx = x^{-1}x=1 $$, but $$yxz \neq 1 $$ in general. Consider for example the permutations $$x = (12), y = (13), z = (123) $$, so $$xyz = (12)(13)(123) = 1 $$, but $$yxz = (13)(12)(123) = (132) $$.
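The permutation computations can be verified with a short sketch, representing permutations on $$\{1,2,3\}$$ as dicts and composing right-to-left (apply the right factor first), matching the convention used above; `compose` is an ad-hoc helper:

```python
# Permutations as dicts i -> image of i; compose(p, q) applies q first, then p.
def compose(p, q):
    return {i: p[q[i]] for i in q}

x = {1: 2, 2: 1, 3: 3}        # (12)
y = {1: 3, 2: 2, 3: 1}        # (13)
z = {1: 2, 2: 3, 3: 1}        # (123)

identity = {1: 1, 2: 2, 3: 3}
xyz = compose(x, compose(y, z))
yxz = compose(y, compose(x, z))

assert xyz == identity                 # xyz = 1
assert yxz == {1: 3, 2: 1, 3: 2}       # yxz = (132), not the identity
```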

Exercise 2.4
a) $$GL_n(\mathbb{R}) $$ is a subgroup of $$GL_n(\mathbb{C}) $$

b) $$\{-1,1\} $$ is a subgroup of $$\mathbb{R}^\times $$.

c) The positive integers do not form a subgroup of $$\mathbb{Z}^+ $$, as they contain neither inverses nor even the identity element 0.

d) The positive reals form a subgroup of $$\mathbb{R}^\times $$.

e) The set $$H $$ is not a subgroup of $$GL_2(\mathbb{R}) $$, as it does not contain the identity matrix.

Exercise 2.5
Let $$H $$ be a subgroup of $$G $$. If $$H $$ has an identity element $$1_H $$ (a priori possibly different from the identity of $$G $$), then for $$h \in H $$ we have $$1_H h = h = 1_G h $$, and multiplying both sides on the right by $$h^{-1} $$ (the inverse in $$G $$) shows that the identity elements are the same. Similarly, if $$h $$ has an inverse $$h^{-1}_H $$ in $$H $$, we necessarily have $$hh^{-1}_H = 1 = hh^{-1}_G $$, and multiplying on the left by $$h^{-1}_G $$ shows that the inverses are the same.

Exercise 3.1
Using the standard Euclidean algorithm we get $$\gcd(321, 123) = 3 $$ and $$-18 \cdot 321 + 47 \cdot 123 = 3 $$.
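The computation can be reproduced with the extended Euclidean algorithm, sketched below in Python:

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g = s*a + t*b."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = extended_gcd(b, a % b)
    # back-substitute: g = s*b + t*(a mod b) = t*a + (s - (a // b)*t)*b
    return (g, t, s - (a // b) * t)

g, s, t = extended_gcd(321, 123)
assert g == 3
assert s * 321 + t * 123 == 3
assert (s, t) == (-18, 47)
```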

Exercise 3.2
Let $$a, b \in \{1,2,3,...\} $$ be positive integers such that $$a+b $$ is a prime number. If $$\gcd(a,b) = d $$, we can write $$a = a_0 d, b = b_0d $$, so $$a + b = d(a_0 + b_0) $$. Since $$a_0, b_0 \geq 1 $$, we have $$a_0 + b_0 \geq 2 $$, so $$d $$ cannot equal the prime $$a+b $$ itself. Because $$a + b $$ is prime, the only remaining possibility is $$d = 1 $$.

Exercise 4.3
Let $$a,b \in G$$ and $$k$$ such that $$(ab)^k = 1$$. Then $$(ab)^{k-1} = (ab)^{k-1} abb^{-1}a^{-1} = (ab)^k (ab)^{-1} = (ab)^{-1}$$ so $$(ba)^k = baba\ldots baba = b (ab)^{k-1} a = b (ab)^{-1} a = b b^{-1}a^{-1}a = 1$$.

Exercise 4.4
Assume that a group $$G $$ has no proper subgroup other than the trivial one. Then


 * $$G $$ must be finite. Suppose $$G $$ is infinite. If some $$g \in G $$ does not generate $$G $$, then $$H = \{g^k \mid k \in \mathbb{Z}\} $$ is a proper subgroup of $$G $$, nontrivial whenever $$g \neq 1 $$. If instead every $$g \neq 1 $$ generates $$G $$, then $$G = \langle g \rangle $$ is infinite cyclic and $$\{g^{2k} \mid k \in \mathbb{Z}\} $$ is a proper nontrivial subgroup. Either way we reach a contradiction.
 * $$G $$ has to be cyclic. Indeed, assume $$G $$ is not cyclic, so no single element generates $$G $$. Then, for any $$g \neq 1 $$, $$H = \{g^k \mid k \in \mathbb{Z}\} $$ is a proper nontrivial subgroup of $$G $$, a contradiction.
 * $$G $$ has to have prime order. For contradiction, assume the order of $$G $$ is $$pq $$ with $$p $$ prime and $$q > 1 $$. Then, since $$G $$ is a finite cyclic group generated by an element $$g $$, the set $$H = \{g^{pk} \mid k \in \mathbb{Z}\} $$ is a subgroup of order $$q $$, which is proper and nontrivial.
 * Finally, if $$G $$ has prime order, no element of $$G $$ generates a proper nontrivial subgroup of $$G $$. This is because any element is of the form $$g^k $$ for some generator $$g $$ and integer $$k $$. From Proposition 2.4.3 we know that the order of $$g^k $$ is $$n/d $$, where $$n $$ is the order of $$g $$ (here equal to the order of the group) and $$d = \gcd(n, k) $$. Since $$n $$ is prime, the order of $$g^k $$ is either $$n $$ or 1 (the latter exactly when $$n $$ divides $$k $$).

Exercise 4.5
Let $$G = \langle g \rangle $$ be a cyclic group and take any subgroup $$H \subseteq G$$. We assume $$G $$ is finite (the finite case is the main focus in the book, and in fact the definition on page 46 implicitly assumes it), so that $$H $$ is finite as well. Let $$H = \{g^{k_1}, \ldots, g^{k_m}\} $$. Since $$H $$ is a subgroup, for all $$a_1, \ldots, a_m \in \mathbb{Z} $$ we have $$g^{a_1k_1} \cdots g^{a_mk_m} = g^{a_1k_1 + \ldots + a_mk_m} \in H $$. Hence the exponents $$q \in \mathbb{Z}$$ for which $$g^q \in H$$ satisfy $$q \in \mathbb{Z}k_1 + \ldots + \mathbb{Z}k_m$$. Theorem 2.3.3 tells us that we can write $$\mathbb{Z}k_1 + \ldots + \mathbb{Z}k_m = \mathbb{Z}d$$ for some integer $$d$$. This shows that every element of $$H $$ is of the form $$g^{kd} $$, so $$H = \langle g^d \rangle $$.

Exercise 4.8 b
Elementary matrices of the first type are matrices $$E$$ with entries $$e_{ii} = 1$$ for all $$i$$, $$e_{ij} = a$$ for exactly one pair of indices $$i \neq j$$, and 0 elsewhere. It is easy to see from the determinant formula that the determinant of such a matrix is always 1. From the product rule for determinants it follows that all products of such matrices have determinant 1.

For the other direction, we consider the 2 by 2 case first and let $$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$ with $$ad-bc = 1$$. We may assume $$c \neq 0$$: if $$c = 0$$, first add the first row to the second, which makes the lower-left entry equal to $$a \neq 0$$. Adding a scaled row to the other row and using the relation between the entries, we can manipulate the matrix as

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \rightarrow \begin{pmatrix} 1 & b+\frac{d(1-a)}{c} \\ c & d \end{pmatrix} \rightarrow \begin{pmatrix} 1 & \frac{d-1}{c} \\ c & d \end{pmatrix} \rightarrow \begin{pmatrix} 1 & \frac{d-1}{c} \\ 0 & 1 \end{pmatrix}\rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$.

Since we can arrive at the identity matrix using only operations corresponding to multiplication by matrices of the first type together with the condition $$ad-bc = 1$$, the original matrix $$A$$ can be produced from the identity by reversing the operations. Hence, we can generate $$A$$ using only elementary matrices of the first type. The general case follows by induction: for any matrix $$M \in \mathbb{R}^{n \times n}$$ with determinant 1, assume we can manipulate $$M$$ into the form

$$M' =\begin{pmatrix} I_{n-1} & b \\ c^T & d \end{pmatrix} $$,

where $$b,c \in \mathbb{R}^{n-1} $$ and $$d \in \mathbb{R} $$. It is easy to see that we can further manipulate $$M' $$ as follows:

$$\begin{pmatrix} I_{n-1} & b \\ c^T & d \end{pmatrix} \rightarrow \begin{pmatrix} I_{n-1} & b \\ 0 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} I_{n-1} & 0 \\ 0 & 1 \end{pmatrix}$$.

Note that in the first step the entry $$d$$ becomes 1 as we eliminate the entries of the row vector $$c^T$$, since otherwise the determinant of the manipulated matrix would not equal 1.
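The 2 by 2 reduction above can be traced on a concrete determinant-1 matrix with exact rational arithmetic (the matrix entries are an illustrative choice, and `add_row` is an ad-hoc helper applying a type-1 row operation):

```python
from fractions import Fraction as F

def add_row(M, i, j, t):
    # row_i += t * row_j, i.e. a type-1 elementary operation
    M[i] = [M[i][k] + t * M[j][k] for k in range(2)]

A = [[F(2), F(3)], [F(3), F(5)]]        # det = 2*5 - 3*3 = 1, and c = 3 != 0
a, c = A[0][0], A[1][0]

add_row(A, 0, 1, (1 - a) / c)           # make the top-left entry 1
add_row(A, 1, 0, -A[1][0])              # clear the (2,1) entry
add_row(A, 0, 1, -A[0][1])              # clear the (1,2) entry

assert A == [[F(1), F(0)], [F(0), F(1)]]
```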

Exercise 4.9
Elements of order 2 in $$S_4 $$ swap one pair of elements, or two disjoint pairs. Such permutations are $$(12), (13), (14), (23), (24), (34), (12)(34), (13)(24), (14)(23) $$, totaling 9 permutations of order 2.
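This count can be confirmed by brute force over all of $$S_4$$ (the `order` helper below is ad-hoc):

```python
from itertools import permutations

def order(p):
    # p is a tuple giving the images of (1, ..., n) under the permutation
    ident = tuple(range(1, len(p) + 1))
    q, k = p, 1
    while q != ident:
        q = tuple(p[i - 1] for i in q)   # compose with p once more
        k += 1
    return k

elems = list(permutations(range(1, 5)))
assert sum(1 for p in elems if order(p) == 2) == 9
```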

Exercise 4.11
a) Any transposition that swaps elements $$a, b $$ has a permutation matrix $$T $$ that is an identity matrix where rows $$a, b $$ have been swapped. These are elementary matrices of the second type. Any permutation matrix $$P $$ corresponding to a permutation in $$S_n $$ is an identity matrix whose rows have been permuted according to the permutation. From the permutation interpretation of $$P $$ we know that $$P $$ is invertible, so by Theorem 1.2.6 it is a product of elementary matrices. It is easy to see that only elementary matrices of type 2 are essential in such a product, as scaling or adding rows together are not needed. Hence, $$P $$ can be decomposed into a product of elementary matrices of the second type, implying that the permutation is a product of transpositions.

b) First we show the following claim:

Claim: Any product of two transpositions can be written as a product of three-cycles.

Proof: Let $$(ab)(cd) $$ be a product of two transpositions. If the transpositions are equal, the product is the identity, an empty product of three-cycles. If they share exactly one letter, say $$(ab)(bc) $$ with $$a,b,c $$ distinct, then $$(ab)(bc) = (abc) $$ is itself a three-cycle. Finally, if $$a,b,c,d $$ are all distinct, we can write $$(ab)(cd) = (acb)(acd) $$.

Since any permutation in $$S_n $$ is generated by transpositions, and the determinant of a transposition matrix is $$-1 $$, any permutation in $$A_n $$ is a product of an even number of transpositions. Let $$\pi \in A_n $$ be a permutation, with a decomposition into transpositions $$\pi = \tau_1\cdots\tau_{2k} $$. Then we can group the product as $$\pi = (\tau_1\tau_2)(\tau_3\tau_4)\cdots(\tau_{2k-1}\tau_{2k}) $$, where each product of two transpositions is expressible as a product of three-cycles by the Claim.
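The identities used in the Claim can be verified exhaustively in $$S_4$$ (the `cycle` and `compose` helpers are ad-hoc, composing right-to-left):

```python
from itertools import permutations

def cycle(n, *cyc):
    # the permutation of {1..n} given by one cycle, as a dict
    p = {i: i for i in range(1, n + 1)}
    for i, j in zip(cyc, cyc[1:] + (cyc[0],)):
        p[i] = j
    return p

def compose(p, q):
    # right-to-left composition: apply q first, then p
    return {i: p[q[i]] for i in q}

n = 4
# distinct letters: (ab)(cd) = (acb)(acd)
for a, b, c, d in permutations(range(1, n + 1), 4):
    assert compose(cycle(n, a, b), cycle(n, c, d)) == \
           compose(cycle(n, a, c, b), cycle(n, a, c, d))
# one shared letter: (ab)(bc) = (abc)
for a, b, c in permutations(range(1, n + 1), 3):
    assert compose(cycle(n, a, b), cycle(n, b, c)) == cycle(n, a, b, c)
```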

Exercise 5.3
Let $$A, B \in U$$ so that $$AB = \begin{pmatrix} a_{11} & a_{12} \\ 0 & a_{22} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ 0 & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} & a_{11}b_{12} + a_{12}b_{22} \\ 0 & a_{22}b_{22} \end{pmatrix}$$. Then $$\varphi(AB) = a_{11}^2b_{11}^2 = \varphi(A)\varphi(B)$$, so $$\varphi $$ is a group homomorphism. We have $$\ker \varphi = \{A \in U : a_{11} = \pm 1\}$$ and $$\textrm{im}~\varphi = \mathbb{R}_{>0}$$, since $$a_{11} \neq 0$$ for an invertible matrix and every positive real is the square of a nonzero real.

Exercise 6.2
Any homomorphism $$\varphi: \mathbb{Z}^+ \rightarrow \mathbb{Z}^+$$ satisfies $$\varphi(a + b) = \varphi(a) + \varphi(b)$$, so $$\varphi(k) = k\varphi(1)$$ for all $$k \in \mathbb{Z}$$. Writing $$m = \varphi(1) \in \mathbb{Z}$$, we have $$\varphi(k) = m \cdot k$$. The map $$\varphi$$ is injective if and only if $$m \neq 0$$. For $$\varphi$$ to be surjective, we need $$m \in \{-1, 1\}$$, since clearly $$\textrm{im}~\varphi = m\mathbb{Z}$$. The surjective homomorphisms $$\varphi$$ are also bijective, so they are the isomorphisms.

Exercise 6.6
Two elements $$a,b \in G$$ are conjugate if $$a = gbg^{-1}$$, i.e. $$ag = gb$$, for some $$g \in G$$. One can check that for the given matrices, the matrices $$g$$ satisfying this are exactly those of the form $$\begin{pmatrix} 0 & s \\ s & t \end{pmatrix}$$, which are invertible precisely when the determinant $$-s^2$$ is nonzero, i.e. $$s \neq 0$$. Since this determinant is always negative, it never equals 1, so no conjugating matrix lies in $$SL_2(\mathbb{R})$$ and the matrices are not conjugate there.

Exercise 7.1
Let $$a,b \in G$$, and let $$\sim$$ be the relation given by $$a \sim b$$ if and only if there exists $$g \in G$$ such that $$a = gbg^{-1}$$.

Reflexivity: $$a = 1 a 1^{-1}$$, so $$a \sim a$$.

Symmetry: if $$a \sim b$$, then $$a = gbg^{-1}$$ for some $$g$$, so $$b = g^{-1}ag = g^{-1}a(g^{-1})^{-1}$$. Hence $$b \sim a$$.

Transitivity: if $$a \sim b$$ and $$b \sim c$$, then $$a = gbg^{-1}$$ and $$b = hch^{-1}$$, so $$a = ghch^{-1}g^{-1} = ghc(gh)^{-1}$$. Therefore $$a \sim c$$.

Exercise 7.5

 * The set $$\{(s, s): s \in \mathbb{R}\}$$ defines an equivalence relation on $$\mathbb{R}$$ that is the same as the usual "=" relation on the reals.
 * The relation defined by the empty set satisfies symmetry and transitivity, but fails reflexivity.
 * The locus $$\{(x, y): xy + 1 = 0, x,y \in \mathbb{R}\}$$ satisfies symmetry and transitivity, but fails reflexivity (for example $$0 \sim 0$$ is not true).
 * The locus $$\{(x, y): x^2y - xy^2-x+y = 0, x,y \in \mathbb{R}\}$$ is reflexive ($$x \sim x$$ because $$x^3 - x^3 -x + x = 0$$), symmetric (since $$x^2y - xy^2-x+y = -(y^2x - yx^2-y+x)$$) and transitive (If $$(x, y)$$ and $$(y, z)$$ are solutions to the equation, then by symmetry also $$(y, x)$$ and $$(z, y)$$ are. Therefore also $$(z, x)$$ and thus $$(x, z)$$ is a solution.)

Exercise 7.6
The number of equivalence relations on a set is equal to the number of partitions of the set. If the set has 5 elements, we have the following partitions:


 * Partitions into 1 set: 1
 * Partitions into 2 sets of sizes 1, 4: 5
 * Partitions into 2 sets of sizes 2, 3: $$\binom{5}{2}\binom{3}{3} = 10$$
 * Partitions into 3 sets of sizes 1, 1, 3: $$\frac{1}{2}\binom{5}{1}\binom{4}{1}\binom{3}{3} = 10$$
 * Partitions into 3 sets of sizes 1, 2, 2: $$\frac{1}{2}\binom{5}{1}\binom{4}{2}\binom{2}{2} = 15$$
 * Partitions into 4 sets of sizes 1, 1, 1, 2: $$\frac{1}{6}\binom{5}{1}\binom{4}{1}\binom{3}{1}\binom{2}{2} = 10$$
 * Partitions into 5 sets of size 1 each: 1

The total number of partitions is thus 52.
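The total can be cross-checked against the Bell numbers, which count set partitions and satisfy the recurrence $$B_{n+1} = \sum_k \binom{n}{k} B_k$$:

```python
from math import comb

def bell(n):
    # Bell numbers via the recurrence B(m+1) = sum_k C(m, k) * B(k)
    B = [1]
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

assert bell(5) == 52
```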

Exercise 8.3
Let $$G$$ be a group such that $$|G| = p^k$$ for some prime $$p$$ and $$k \geq 1$$. For any $$g \in G$$ with $$g \neq 1$$, $$\langle g \rangle$$ is a subgroup of $$G$$ and by Lemma 2.8.7 has order $$p^m$$ with $$1 \leq m \leq k$$. If $$m = 1$$, we are done, as we have found an element of order $$p$$. Otherwise, consider $$h = g^{p^{m-1}}$$ and notice that $$h \neq 1$$ (as $$p^{m-1}$$ is smaller than the order of $$g$$) while $$h^p = \left(g^{p^{m-1}}\right)^p = g^{p^m} = 1$$, so $$h$$ has order $$p$$.

Exercise 8.4
For a group $$G$$ of 35 elements we have a few cases.


 * If the group is cyclic, i.e., $$G = \langle g \rangle$$, then $$g^5$$ has order 7 and $$g^7$$ has order 5.
 * If the group is not cyclic, then for every non-identity $$h \in G$$ the order of $$\langle h \rangle$$ divides 35 and is smaller than 35, so the order of $$h$$ is either 5 or 7. Two distinct subgroups of order 5 intersect only in the identity (their intersection is a subgroup of both, of order dividing the prime 5), so the elements of order 5 come in batches of 4, one batch per subgroup of order 5; likewise the elements of order 7 come in batches of 6. If no element had order 7, the 34 non-identity elements would all have order 5, forcing $$34 = 4s$$ for some integer $$s$$, which is impossible. If no element had order 5, we would similarly need $$34 = 6t$$, which is also impossible. Hence $$G$$ contains an element of order 5 and an element of order 7.

Exercise 8.5
If a group $$G$$ contains an element $$g$$ of order 6 and an element $$h$$ of order 10, then $$\langle g \rangle$$ and $$\langle h \rangle$$ are subgroups of $$G$$, so 6 and 10 divide the order of $$G$$. Then $$\operatorname{lcm}(6, 10) = 30$$ divides $$|G|$$, and we can say that $$|G|\geq 30$$.

Exercise 8.10
Let $$H\subset G$$ be a subgroup such that $$\left[G:H \right] = 2 $$. Then, since left (and right) cosets partition the group, we have $$G = H \cup gH = H \cup Hg$$ for some $$g \notin H$$. Because $$|H| = |gH| = |Hg|$$, we must have $$gH = Hg$$ and so by Proposition 2.8.17 $$H$$ is normal.

To see that the analogous statement fails for $$H\subset G$$ with $$\left[G:H \right] = 3 $$, consider $$G = S_3$$ and $$H = \{1, (12)\}$$. Since $$|H| = 2$$, we indeed have $$\left[G:H \right] = 3 $$. Then $$(13)H = \{(13), (123)\}$$ and $$H(13) = \{(13), (132)\}$$, so $$H$$ is not normal.

Exercise 8.12
Claim 1: For all $$h, g \in S$$ we have $$hg \in S$$.

Proof: Assume that $$hg \notin S$$. Then $$hS \neq S$$, since $$hg \in hS$$ but $$hg \notin S$$. On the other hand, $$1 \in S$$ gives $$h \in hS$$, while also $$h \in S$$. So $$hS$$ and $$S$$ are distinct cosets sharing the element $$h$$, contradicting the assumption that the cosets of $$S$$ partition $$G$$.

Claim 2: For all $$h \in S$$ we have $$h^{-1} \in S$$.

Proof: Assume $$h^{-1} \in aS$$ for some $$a \in G$$. Then $$h^{-1} = ag$$ for some $$g \in S$$, which implies $$1 = agh$$. By Claim 1 we have $$gh \in S$$, so $$1 = a(gh) \in aS$$. But $$1 \in S$$ as well, so the partition property forces $$aS = S$$, and hence $$h^{-1} \in S$$.

Claim 1 and Claim 2 together prove that $$S$$ is a subgroup of $$G$$.

Exercise 9.4
The numbers involved are so small that solving by brute force becomes viable. To solve $$2x \equiv_9 5$$ we note that $$2\cdot 5 = 10 \equiv_9 1$$ so $$2^{-1} \equiv_9 5$$ so $$x \equiv_9 5\cdot 5 \equiv_9 25 \equiv_9 7$$.

To investigate the congruence $$2x \equiv_6 5$$ we note by computing every possibility that there is no element $$2^{-1}$$ modulo 6. Therefore there is no solution to the congruence.
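Both congruences can be checked by brute force:

```python
def solve_congruence(a, b, n):
    # all residues x in {0, ..., n-1} with a*x ≡ b (mod n)
    return [x for x in range(n) if (a * x - b) % n == 0]

assert solve_congruence(2, 5, 9) == [7]     # unique solution x ≡ 7 (mod 9)
assert solve_congruence(2, 5, 6) == []      # no solution: 2x is even, 5 is odd
```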

Exercise 9.5
Given equations

$$\begin{cases} 2x-y \equiv_n 1 \\ 4x+3y \equiv_n 2 \end{cases}$$

we can solve $$y \equiv_n 2x-1$$ and substitute into the other congruence to obtain $$10x \equiv_n 5$$. If there exists $$10^{-1}$$ modulo $$n$$, i.e. an integer $$m$$ with $$10m \equiv_n 1$$, this has the unique solution $$x \equiv_n 5m$$; such an $$m$$ exists exactly when the equation $$10m + an = 1$$ has an integer solution $$a$$, which happens precisely when $$\gcd(10, n) = 1$$. More generally, a linear congruence $$ax \equiv_n b$$ is solvable exactly when $$\gcd(a, n)$$ divides $$b$$, so the system has a solution precisely when $$\gcd(10, n)$$ divides 5, i.e. when $$\gcd(10, n) \in \{1, 5\}$$.
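As a sanity check, the sketch below compares brute-force solvability of $$10x \equiv_n 5$$ against the standard criterion that $$ax \equiv b \pmod n$$ is solvable iff $$\gcd(a,n)$$ divides $$b$$ (invertibility of 10, i.e. $$\gcd(10,n)=1$$, is the special case of a unique solution):

```python
from math import gcd

def solvable(a, b, n):
    # brute-force: does a*x ≡ b (mod n) have a solution?
    return any((a * x - b) % n == 0 for x in range(n))

for n in range(2, 200):
    assert solvable(10, 5, n) == (5 % gcd(10, n) == 0)
```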

Exercise 10.1
It is not clear from the problem statement what the answer should look like. Therefore, the following candidate for answer might not be what is intended.

We show that each cycle of the form $$(a_1a_2\ldots a_n)$$ has sign $$(-1)^{n-1}$$. This is easy to see, as we can decompose $$(a_1a_2\ldots a_n) = (a_1a_n)(a_1a_{n-1})\cdots(a_1a_2)$$. Each transposition corresponds to an elementary matrix of type two, which has determinant $$-1$$. The product form of the cycle has $$n-1$$ transpositions, so the sign is as claimed. Therefore, if a permutation has a decomposition into $$m$$ cycles with $$n_1, \ldots, n_m$$ terms respectively, the sign of the permutation is $$(-1)^{\sum_{i=1}^m (n_i-1)} $$.
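The cycle-based sign formula can be cross-checked against the usual inversion-count definition of the sign (both helpers below are ad-hoc):

```python
from itertools import permutations

def sign_by_cycles(p):
    # p: tuple of images of 1..n; sign = (-1)^(sum over cycles of (length - 1))
    n, seen, s = len(p), set(), 1
    for i in range(1, n + 1):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j - 1]
                length += 1
            s *= (-1) ** (length - 1)
    return s

def sign_by_inversions(p):
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inv

for p in permutations(range(1, 5)):
    assert sign_by_cycles(p) == sign_by_inversions(p)
```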

Exercise 10.3
Let $$G = \langle x \rangle$$ be cyclic of order 12, $$G' = \langle y \rangle$$ cyclic of order 6, and $$\varphi(x^i) = y^i$$. We have $$\ker \varphi = \{1, x^6\}$$, and $$G$$ has the subgroups $$\langle 1 \rangle, \langle x^6 \rangle, \langle x^4 \rangle, \langle x^3 \rangle, \langle x^2 \rangle, G$$ and $$G'$$ the subgroups $$\langle 1 \rangle, \langle y^3 \rangle, \langle y^2 \rangle, G'$$. The correspondence given by $$\varphi $$ is thus $$\langle x^6 \rangle \leftrightarrow \langle 1 \rangle$$, $$\langle x^3 \rangle \leftrightarrow \langle y^3 \rangle$$, $$\langle x^2 \rangle \leftrightarrow \langle y^2 \rangle$$, $$G \leftrightarrow G'$$. Notice that the subgroups $$\langle 1 \rangle$$ and $$\langle x^4 \rangle$$ do not contain $$\ker \varphi$$, so they do not take part in the correspondence.

Exercise 11.3
Let $$G = \langle x \rangle, H = \langle y \rangle$$ be infinite cyclic groups and assume $$K = G \times H$$ is generated by an element $$(x^n, y^m)$$. Then for every element $$k \in K$$ we would have $$k = (x^n, y^m)^i$$ for some $$i$$. But there are no $$i_1, i_2$$ such that $$(x^n, y^m)^{i_1} = (x, 1)$$ and $$(x^n, y^m)^{i_2} = (1, y)$$: the first condition would require $$n \neq 0, m = 0$$ and the second $$n = 0, m \neq 0$$. (Here the assumption that $$G, H$$ are infinite means that $$x^n = 1 \Leftrightarrow n = 0$$ and $$y^m = 1 \Leftrightarrow m = 0$$.)

Exercise 11.4
a) $$G = \mathbb{R}^\times$$ is isomorphic to $$H \times K = \{-1, 1\} \times \{x \in \mathbb{R} : x > 0\}$$ via the function $$\varphi: G \rightarrow H \times K$$ given by $$\varphi(g) = \left( \frac{g}{|g|}, |g| \right)$$, which has the inverse $$\varphi^{-1}((h, k)) = h \cdot k$$. That $$\varphi$$ is a homomorphism is a direct consequence of the multiplication rules of real numbers and the properties of the absolute value.

b) Let $$G $$ be the group of invertible upper triangular matrices, $$H $$ the group of invertible diagonal matrices, and $$K$$ the group of upper triangular matrices with ones on the diagonal. In order to have a homomorphism $$\varphi: G \rightarrow H \times K$$, we must have

$$\varphi\left( \begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \right) = \left( \begin{pmatrix} h_1 & 0 \\ 0 & h_2 \end{pmatrix}, \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \right)$$,

for appropriate reals $$g_i, h_i$$ and $$k$$. Using this notation, we have

$$\varphi\left( \begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \cdot \begin{pmatrix} g'_1 & g'_2 \\ 0 & g'_3 \end{pmatrix} \right) = \varphi\left( \begin{pmatrix} g_1g_1' & g_1g_2' + g_2g_3' \\ 0 & g_3g_3' \end{pmatrix} \right) = \left( \begin{pmatrix} h_1h_1' & 0 \\ 0 & h_2h_2' \end{pmatrix}, \begin{pmatrix} 1 & h_1k' + h_2k \\ 0 & 1 \end{pmatrix} \right) $$. (1)

On the other hand,

$$\varphi\left( \begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \right) \cdot \varphi \left( \begin{pmatrix} g'_1 & g'_2 \\ 0 & g'_3 \end{pmatrix} \right) = \left( \begin{pmatrix} h_1 & 0 \\ 0 & h_2 \end{pmatrix}, \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \right) \cdot \left( \begin{pmatrix} h'_1 & 0 \\ 0 & h'_2 \end{pmatrix}, \begin{pmatrix} 1 & k' \\ 0 & 1 \end{pmatrix} \right) = \left( \begin{pmatrix} h_1h_1' & 0 \\ 0 & h_2h_2' \end{pmatrix}, \begin{pmatrix} 1 & k + k' \\ 0 & 1 \end{pmatrix} \right)$$. (2)

For $$\varphi$$ to be a homomorphism, we require these two to be equal. However, the second component in (1) depends on $$h_1$$ and $$h_2$$, whereas the one in (2) does not. Hence, (1) and (2) are not equal in general.

c) Let $$G = \mathbb{C}^\times $$, $$H = (\{\theta \in \mathbb{R} : \theta \in [0, 2\pi) \}, +) $$ (the angles of a circle, with addition modulo $$2\pi$$ as the group operation) and $$K = \mathbb{R}_{> 0}^\times $$. Then $$G$$ is isomorphic to $$H \times K$$ via the homomorphism $$\varphi: G \rightarrow H \times K$$ given by $$\varphi(g) = \left( \arg(g), |g| \right)$$. That $$\varphi$$ is a bijective homomorphism follows easily from the polar form representation of complex numbers.

Exercise 12.1
Let $$H = \{1, (12)\}$$ be a subgroup of $$S_3$$. Clearly $$H $$ is not normal, as $$(13)H = \{(13), (123)\} \neq \{(13), (132)\} = H(13)$$. Consider the cosets $$(13)H = \{(13), (123)\}$$ and $$(23)H = \{(23), (132)\}$$. It is a straightforward computation to check that $$(13)H(23)H = \{1, (12), (23), (132)\}$$. This set has 4 elements, but all cosets must have size 2, so it is not a coset.
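The coset computations can be double-checked with a short script, writing $$S_3$$ elements as image tuples composed right-to-left (names like `t12` for the transposition (12) are ad-hoc):

```python
def comp(p, q):
    # apply q first, then p
    return tuple(p[q[i] - 1] for i in range(3))

e    = (1, 2, 3)
t12  = (2, 1, 3)
t13  = (3, 2, 1)
t23  = (1, 3, 2)
c123 = (2, 3, 1)
c132 = (3, 1, 2)

H = {e, t12}
coset_13 = {comp(t13, h) for h in H}
coset_23 = {comp(t23, h) for h in H}
product = {comp(a, b) for a in coset_13 for b in coset_23}

assert coset_13 == {t13, c123}              # (13)H = {(13), (123)}
assert coset_23 == {t23, c132}              # (23)H = {(23), (132)}
assert product == {e, t12, t23, c132}       # 4 elements, so not a coset
```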

Exercise 12.5
Let $$G $$ be the group of upper triangular matrices $$\begin{pmatrix} a & b \\ 0 & d \end{pmatrix} $$ with $$a, d \neq 0 $$.

a) Let $$S $$ be the subset of $$G $$ defined by $$b = 0 $$, i.e., the set of invertible diagonal matrices. It is easy to see that $$S $$ is a subgroup: the inverse of such a matrix is diagonal, the product of two diagonal matrices is diagonal, and the identity is diagonal. $$S $$ is however not a normal subgroup, since for any $$\begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \in G $$ with inverse $$\begin{pmatrix} g_1^{-1} & -g_2/(g_1g_3) \\ 0 & g_3^{-1} \end{pmatrix} \in G $$ we can compute

$$\begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix} \begin{pmatrix} g_1^{-1} & -g_2/(g_1g_3) \\ 0 & g_3^{-1} \end{pmatrix} = \begin{pmatrix} ag_1 & dg_2 \\ 0 & dg_3 \end{pmatrix} \begin{pmatrix} g_1^{-1} & -g_2/(g_1g_3) \\ 0 & g_3^{-1} \end{pmatrix} = \begin{pmatrix} a & -ag_2/g_3+dg_2/g_3 \\ 0 & d \end{pmatrix} $$,

which is not in $$S $$ when $$a \neq d $$ and $$g_2 \neq 0 $$.

b) Let $$S $$ be the subset of $$G $$ defined by $$d = 1 $$. Again, it is easy to see that $$S $$ is a subgroup of $$G $$. As above, we can compute

$$\begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix} \begin{pmatrix} g_1^{-1} & -g_2/(g_1g_3) \\ 0 & g_3^{-1} \end{pmatrix} = \begin{pmatrix} ag_1 & bg_1+g_2 \\ 0 & g_3 \end{pmatrix} \begin{pmatrix} g_1^{-1} & -g_2/(g_1g_3) \\ 0 & g_3^{-1} \end{pmatrix} = \begin{pmatrix} a & (bg_1 + g_2 - ag_2)/g_3 \\ 0 & 1 \end{pmatrix} \in S $$,

so $$S $$ is a normal subgroup. The cosets are the sets containing the matrices

$$\begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} ag_1 & bg_1+g_2 \\ 0 & g_3 \end{pmatrix}$$,

which are distinguished by the parameter $$g_3 \neq 0 $$, as the entries $$a,b $$ can be chosen arbitrarily (as long as $$a \neq 0 $$). Hence, the quotient group is isomorphic to $$\mathbb{R}^\times $$. The homomorphism with kernel $$S $$ is given by $$\varphi \left( \begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \right) = g_3 $$.

c) Let $$S $$ be the subset of $$G $$ defined by $$a = d \neq 0 $$. Then again $$S $$ is clearly a subgroup of $$G $$. We have

$$\begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \begin{pmatrix} a & b \\ 0 & a \end{pmatrix} \begin{pmatrix} g_1^{-1} & -g_2/(g_1g_3) \\ 0 & g_3^{-1} \end{pmatrix} = \begin{pmatrix} ag_1 & ag_2 + bg_1 \\ 0 & ag_3 \end{pmatrix} \begin{pmatrix} g_1^{-1} & -g_2/(g_1g_3) \\ 0 & g_3^{-1} \end{pmatrix} = \begin{pmatrix} a & bg_1/g_3 \\ 0 & a \end{pmatrix} \in S $$,

so $$S $$ is a normal subgroup. To investigate the cosets of this group, we compute

$$\begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} S = \left\{ \begin{pmatrix} ag_1 & ag_2 + bg_1 \\ 0 & ag_3 \end{pmatrix} : a \neq 0,\, b \in \mathbb{R} \right\}$$.

Here $$b $$ can be chosen freely, so the top-right entry imposes no conditions on the cosets. The elements of $$G $$ with $$g_1 \neq 0, g_3 = 1 $$ thus generate distinct cosets of $$S $$, and every coset arises this way: for any choice of $$g_1, g_3 \neq 0 $$ we can take $$a = 1/g_3 $$, showing the element lies in the coset generated by $$g_1' = g_1/g_3, g'_3 = 1 $$.

The quotient group is again isomorphic to $$\mathbb{R}^\times $$, with the homomorphism with kernel $$S $$ given by $$\varphi \left( \begin{pmatrix} g_1 & g_2 \\ 0 & g_3 \end{pmatrix} \right) = g_1/g_3 $$.

Exercise M.6 a, b
a) We observe the following:


 * Reflexivity: A point $$a \in \mathbb{R}^k$$ is connected to itself by the path $$X(t) = a$$. If $$a \in S$$, then the path is contained in $$S$$.
 * Symmetry: Let $$a,b \in \mathbb{R}^k$$ such that there exists a path $$X$$ from $$a$$ to $$b$$ contained in $$S$$. Then $$Y(t) = X(1-t)$$ is a path from $$b$$ to $$a$$ contained in $$S$$.
 * Transitivity: If $$a,b,c \in \mathbb{R}^k$$ are such that $$X$$ is a path from $$a$$ to $$b$$ contained in $$S$$ and $$Y$$ is a path from $$b$$ to $$c$$ contained in $$S$$, then $$Z(t) = \begin{cases} X(2t), & \text{if } t < 1/2 \\ Y(2t-1), & \text{if } t \geq 1/2 \end{cases}$$ is a path from $$a$$ to $$c$$ contained in $$S$$.

b) Path connectedness is an equivalence relation, as shown in part a). Hence, the path-connected subsets partition the set $$S$$. In particular, transitivity ensures that if two points are connected by a path, then any point connected to one of them is connected to both.

Exercise M.7
a) Let $$A,B,C,D \in G \subset GL_n(\mathbb{R})$$ such that $$A \sim B$$ given by $$X(t)$$ and $$C \sim D$$ given by $$Y(t)$$. Then $$Z(t) = X(t)Y(t)$$ connects $$AC$$ to $$BD$$. This path is also in $$G$$, since both paths $$X(t)$$ and $$Y(t)$$ are in $$G$$ and $$G$$ is a group.

b) We show that for any matrix $$A \sim I$$, also $$BAB^{-1} \sim I$$ for any $$B \in GL_n(\mathbb{R})$$. If $$X(t)$$ is a path joining $$A$$ and $$I$$ (with $$X(0) = A$$, $$X(1) = I$$), consider the path $$Y(t) = BX(t)B^{-1}$$: it is continuous, $$Y(0) = BAB^{-1}$$, and $$Y(1) = BIB^{-1} = I$$. Together with part a), which gives closure under products (and hence inverses), this shows that the matrices path-connected to the identity form a normal subgroup.

Exercise M.8
a) Elementary matrices of the first type are matrices $$E$$ with entries $$e_{ii} = 1$$, $$e_{ij} = a$$ for exactly one pair of indices $$i \neq j$$, and 0 elsewhere. For each such $$E$$, there is a path to $$I$$: set $$X(t)$$ to be the same matrix as $$E$$, except that the off-diagonal entry is $$e_{ij} = (1-t)a$$. Clearly this is a continuous path from $$E$$ to $$I$$, and since each $$X(t)$$ is an elementary matrix of the first type, it has determinant 1, so the path stays in $$SL_n(\mathbb{R})$$. Now, if $$E_1, E_2 $$ are elementary matrices of the first type, then by M.7 a) the product $$E_1E_2$$ is connected to $$I$$, as $$E_1, E_2 $$ are connected to $$I$$. We have thus shown that any product of elementary matrices of type 1 is path connected to $$I$$, which implies that $$SL_n(\mathbb{R})$$ is path connected, as the elementary matrices of type 1 generate $$SL_n(\mathbb{R})$$.

b) Let $$A \in GL_n(\mathbb{R})$$ such that $$\det(A) > 0$$. We can path connect $$A$$ to a matrix with determinant 1 by $$X(t) = \sqrt[n]{1 - t + \frac{t}{\det(A)}}A $$, so that $$\det(X(t)) = \left(1 - t + \frac{t}{\det(A)}\right) \det(A) $$. This path is a composition of continuous functions (well defined since $$\det(A) > 0$$), and $$X(t) $$ always has positive determinant, so it is a continuous path in $$GL_n(\mathbb{R})$$. Combined with the path-connectedness of $$SL_n(\mathbb{R})$$ from part a), this shows that the matrices with positive determinant form a path-connected subset of $$GL_n(\mathbb{R})$$. By a similar argument, scaling instead to determinant $$-1$$ and connecting within the coset $$E \cdot SL_n(\mathbb{R})$$ of a fixed matrix $$E$$ with $$\det(E) = -1$$, the matrices with negative determinant form a path-connected subset as well.

What remains is to show that these two subsets are not connected to each other. Let $$A$$ be a matrix with positive determinant and $$X(t) $$ a path connecting $$A$$ to a matrix $$B$$ with negative determinant. The entries $$x_{ij}(t) $$ of $$X(t) $$ are continuous functions, and from the determinant formula (1.6.4) we see that the determinant is a continuous function of the entries, as it is a sum of products of continuous functions. Since $$\det(X(0)) > 0 $$ and $$\det(X(1)) < 0 $$, the intermediate value theorem gives $$\det(X(t_0)) = 0 $$ for some $$t_0 \in (0, 1) $$. Therefore $$X(t) $$ is not contained in $$GL_n(\mathbb{R})$$, and hence $$GL_n(\mathbb{R})$$ is a disjoint union of two connected components.
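The determinant scaling used in part b) can be sanity-checked on a sample 2 by 2 matrix (the matrix and the `det2`/`scale` helpers are illustrative):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def scale(M, s):
    return [[s * x for x in row] for row in M]

A = [[3.0, 1.0], [1.0, 2.0]]                # det = 5 > 0
dA = det2(A)

def X(t):
    # path scaling A so that det(X(1)) = 1; here n = 2, so the nth root is sqrt
    s = (1 - t + t / dA) ** 0.5
    return scale(A, s)

assert abs(det2(X(0)) - dA) < 1e-12
assert abs(det2(X(1)) - 1.0) < 1e-12
assert all(det2(X(k / 10)) > 0 for k in range(11))   # never leaves GL_2
```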

Exercise M.9
Double cosets partition a group. Indeed, for every $$g \in G$$, the double coset $$HgK$$ contains $$g$$, since $$1 \in H \cap K$$. On the other hand, if $$g \in Hg_1K \cap Hg_2K$$, then there are elements $$h_1, h_2 \in H, k_1, k_2 \in K$$ such that $$h_1g_1k_1 = h_2g_2k_2$$. This implies that $$g_1 = h_1^{-1}h_2g_2k_2k_1^{-1}$$, and since $$H,K$$ are subgroups, $$h_1^{-1}h_2 \in H, k_2k_1^{-1} \in K$$, so $$g_1 \in Hg_2K$$. This implies in turn that $$Hg_1K = Hg_2K$$.
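The partition property is easy to verify computationally for a small example; the sketch below takes $$H = \{1, (12)\}$$ and $$K = \{1, (13)\}$$ in $$S_3$$ (example subgroups chosen for illustration):

```python
from itertools import permutations

def comp(p, q):
    # S3 elements as tuples of images of (1, 2, 3), composed right-to-left
    return tuple(p[q[i] - 1] for i in range(3))

G = set(permutations((1, 2, 3)))
H = {(1, 2, 3), (2, 1, 3)}       # {1, (12)}
K = {(1, 2, 3), (3, 2, 1)}       # {1, (13)}

double_cosets = {frozenset(comp(comp(h, g), k) for h in H for k in K)
                 for g in G}

# the double cosets cover G and are pairwise disjoint
assert set().union(*double_cosets) == G
assert sum(len(c) for c in double_cosets) == len(G)
```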

Exercise M.14
Multiplying from the left with the matrices $$E,E'$$ and their inverses corresponds to adding and subtracting rows. Multiplying from the right corresponds to adding and subtracting columns. Now, consider any $$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL_2(\mathbb{Z})$$.

Claim 1: We can bring the matrix to the form $$\begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}$$ where $$a' = 1$$ or $$c' = 1$$ only by adding and subtracting rows.

Proof. If $$c = 0$$, then $$ad = 1$$, so $$a \neq 0$$, and adding the first row to the second produces a matrix with determinant 1 whose lower-left entry $$a$$ is non-zero. So we may assume from now on that $$c \neq 0$$. Using the row operations, we can perform what is essentially the Euclidean algorithm on the first column, repeatedly subtracting the entry smaller in absolute value from the larger. Since $$\det(A) = 1$$, we have $$ad - bc = 1$$, which implies $$\gcd(a,c) = 1$$, so the algorithm terminates once it produces $$\pm 1$$ in one of the rows of the first column; if that entry is $$-1$$, repeatedly adding its row to the other row produces $$+1$$ there.

Now, using the matrix produced in Claim 1, we can simply bring the matrix to the identity form only by applying row and column additions/subtractions. Hence, we have $$E_1\cdots E_n A F_1\cdots F_m = I$$, where $$E_i, F_j$$ are some of the matrices $$E,E'$$ and their inverses.This can be solved to show that $$A = E_1^{-1}\cdots E_n^{-1} F_1^{-1}\cdots F_m^{-1} $$.