Commutative Algebra/Algebras and integral elements

Algebras

Within an algebra, we thus have both an addition and a multiplication, and many of the usual rules of algebra remain valid; hence the name algebra.

Of course, there are algebras whose multiplication is neither commutative nor associative. If the underlying ring is commutative, however, its commutativity induces a certain commutation rule for the module operation, in the sense that
 * $$r(sa) = (rs)a = (sr)a = s(ra)$$.

Note that this means that $$Z$$, together with the operations inherited from $$A$$, is itself an $$R$$-algebra; the necessary rules just carry over from $$A$$.

Example 21.3: Let $$R$$ be a ring, let $$S$$ be another ring, and let $$\varphi: R \to S$$ be a ring homomorphism. Then $$S$$ is an $$R$$-algebra, where the module operation is given by
 * $$rs := \varphi(r) s$$,

and multiplication and addition for this algebra are given by the multiplication and addition of the ring $$S$$.

Proof:

The required rules for the module operation are verified as follows:
 * 1) $$1_R s = \varphi(1_R) s = 1_S s = s$$
 * 2) $$r (s + t) = \varphi(r)(s + t) = \varphi(r)s + \varphi(r) t = rs + rt$$
 * 3) $$(r + r') s = \varphi(r + r') s = (\varphi(r) + \varphi(r'))s = rs + r's$$
 * 4) $$r(r' s) = \varphi(r) (\varphi(r') s) = (\varphi(r) \varphi(r')) s = \varphi(r r') s = (r r') s$$

Since in $$S$$ we have all the rules for a ring, the only thing we need to check for the $$R$$-bilinearity of the multiplication is compatibility with the module operation.

Indeed,
 * $$(rs) t = \varphi(r) s t = r (st)$$

and analogously for the other argument.
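This verification can also be run numerically. A minimal sketch, assuming the hypothetical example $$\varphi: \mathbb Z \to \mathbb Z/6\mathbb Z$$, $$r \mapsto r \bmod 6$$ (the modulus $$6$$ is an arbitrary choice):

```python
# A numeric sketch of Example 21.3: the ring homomorphism phi: Z -> Z/6Z
# turns Z/6Z into a Z-algebra via the module operation r . s := phi(r) * s.

N = 6                      # modulus of the quotient ring Z/6Z (arbitrary choice)

def phi(r):                # the ring homomorphism Z -> Z/6Z
    return r % N

def scalar(r, s):          # module operation r . s := phi(r) * s in Z/6Z
    return (phi(r) * s) % N

# spot-check the module axioms and compatibility of multiplication, (rs)t = r(st)
for r in range(-5, 6):
    for rp in range(-5, 6):
        for s in range(N):
            for t in range(N):
                assert scalar(1, s) == s
                assert scalar(r, (s + t) % N) == (scalar(r, s) + scalar(r, t)) % N
                assert scalar(r + rp, s) == (scalar(r, s) + scalar(rp, s)) % N
                assert scalar(r, scalar(rp, s)) == scalar(r * rp, s)
                assert (scalar(r, s) * t) % N == scalar(r, (s * t) % N)
print("all Z-algebra axioms hold in Z/6Z")
```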

We shall note that if we are given an $$R$$-algebra $$A$$, then we can take a polynomial $$p \in R[x_1, \ldots, x_n]$$ and some elements $$a_1, \ldots, a_n$$ of $$A$$ and evaluate $$p(a_1, \ldots, a_n) \in A$$ as follows:

 * 1) Using the algebra multiplication, we form the monomials $$a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n}$$.
 * 2) Using the module operation, we multiply each monomial by its respective coefficient: $$r_{k_1, \ldots, k_n} a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n}$$.
 * 3) Using the algebra addition (= module addition), we add all these $$r_{k_1, \ldots, k_n} a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n}$$ together.

The commutativity of multiplication (step 1) and of addition (step 3) ensures that this procedure does not depend on the order in which the multiplications and additions are carried out.
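The three steps can be sketched in code. As a hypothetical example we take the $$\mathbb Z$$-algebra of $$2 \times 2$$ integer matrices and evaluate $$p(a) = a^2 + 2a + 3$$ (both the polynomial and the matrix are arbitrary choices):

```python
# The three-step evaluation procedure, for the Z-algebra of 2x2 integer
# matrices: evaluate p(a) = a^2 + 2a + 3 at a = [[0, 1], [1, 0]].

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_scale(r, a):       # the module operation: r . a
    return [[r * a[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]       # the empty monomial a^0
a = [[0, 1], [1, 0]]

# step 1: form the monomials a^0, a^1, a^2
monomials = [I, a, mat_mul(a, a)]
# step 2: multiply each monomial by its coefficient (p = 3 + 2x + x^2)
terms = [mat_scale(c, m) for c, m in zip([3, 2, 1], monomials)]
# step 3: add everything together
result = terms[0]
for t in terms[1:]:
    result = mat_add(result, t)
print(result)   # a^2 = I, so p(a) = I + 2a + 3I = [[4, 2], [2, 4]]
```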

Proof:

The first claim follows from the very definition of a subalgebra of $$A$$: closure under the three operations. Indeed, given any elements of $$R[a_1, \ldots, a_n]$$, applying any of the operations to them is just one further step of manipulation with the elements $$a_1, \ldots, a_n$$.

We go on to prove the equation
 * $$R[a_1, \ldots, a_n] = \bigcap_{\{a_1, \ldots, a_n\} \subseteq Z \subseteq A \atop Z \text{ subalgebra}} Z$$.

For "$$\subseteq$$" we note that since $$a_1, \ldots, a_n$$ are contained within every $$Z$$ occuring on the right hand side. Thus, by the closedness of these $$Z$$, we can infer that all finite manipulations by the three algebra operations (addition, multiplication, module operation) are included in each $$Z$$. From this follows "$$\subseteq$$".

For "$$\supseteq$$" we note that $$R[a_1, \ldots, a_n]$$ is also a subalgebra of $$A$$ containing $$\{a_1, \ldots, a_n\}$$, and intersection with more things will only make the set at most smaller.

Now if any other subalgebra of $$A$$ containing $$a_1, \ldots, a_n$$ is given, the intersection on the right-hand side of our equation is contained within it, since that subalgebra is one of the $$Z$$.

Exercises

 * Exercise 21.1.1:

Symmetric polynomials
That is, we may permute the variables arbitrarily and still obtain the same polynomial.

This section is devoted to proving a fundamental fact about these polynomials: there are certain so-called elementary symmetric polynomials, and every symmetric polynomial can be written as a polynomial in them.

Without further ado, we shall proceed to the theorem that we promised:

Hence, every symmetric polynomial is a polynomial in the elementary symmetric polynomials.

Proof 1:

We start out by ordering all monomials (remember, those are polynomials of the form $$x_1^{k_1} x_2^{k_2} \cdots x_{n-1}^{k_{n-1}} x_n^{k_n}$$), using the following order:
 * $$x_1^{k_1} x_2^{k_2} \cdots x_{n-1}^{k_{n-1}} x_n^{k_n} < x_1^{m_1} x_2^{m_2} \cdots x_{n-1}^{m_{n-1}} x_n^{m_n} :\Leftrightarrow \begin{cases} k_1 + \cdots + k_n < m_1 + \cdots + m_n & \\ \text{or} & \\ \big(k_1 + \cdots + k_n = m_1 + \cdots + m_n \big) \wedge \big( k_j < m_j, \text{ where } j := \min\{\, i \mid k_i \neq m_i \,\} \big) & \end{cases}$$

With this order, the largest monomial of $$s_{n,m}$$ is given by $$x_1 \cdots x_m$$; this is because for all monomials of $$s_{n,m}$$, the sum of the exponents equals $$m$$, and the second condition of the order is optimized by those monomials whose first zero exponent occurs as late as possible.
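This order can be implemented by comparing the pair (total degree, exponent tuple), since Python compares tuples by their first differing entry. A small sketch checking the claim about $$s_{n,m}$$ for the arbitrary choice $$n = 4$$, $$m = 2$$:

```python
# Key a monomial x_1^{k_1}...x_n^{k_n} by (k_1 + ... + k_n, (k_1, ..., k_n));
# tuple comparison then realizes the order defined above. We check that the
# largest monomial of s_{4,2} is x_1 x_2, i.e. the exponent vector (1, 1, 0, 0).

from itertools import combinations

def order_key(exponents):
    return (sum(exponents), tuple(exponents))

def elementary_monomials(n, m):
    """Exponent vectors of the monomials of s_{n,m}: products of m distinct variables."""
    return [tuple(1 if i in chosen else 0 for i in range(n))
            for chosen in combinations(range(n), m)]

largest = max(elementary_monomials(4, 2), key=order_key)
print(largest)   # (1, 1, 0, 0), i.e. x_1 * x_2
```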

Furthermore, for any given $$r_1, \ldots, r_n \in \mathbb N_0$$, the largest monomial of
 * $$s_{n, 1}^{r_1} \cdots s_{n, n}^{r_n}$$

is given by $$x_1^{r_1 + \cdots + r_n} x_2^{r_2 + \cdots + r_n} \cdots x_{n-1}^{r_{n-1} + r_n} x_n^{r_n}$$. Indeed, for every monomial of the product, the sum of the exponents equals $$r_1 + 2 r_2 + \cdots + (n-1) r_{n-1} + n r_n$$. Moreover, the above monomial does occur: multiply the maximal monomials of all the elementary symmetric factors together. Finally, if in a given monomial of $$s_{n, 1}^{r_1} \cdots s_{n, n}^{r_n}$$ one of the factors coming from an elementary symmetric polynomial is not the largest monomial of that elementary symmetric polynomial, we may replace it by a larger one and obtain a strictly larger monomial of the product, since part of the exponent sum $$r_1 + 2 r_2 + \cdots + (n-1) r_{n-1} + n r_n$$ is moved toward the variables with smaller index.
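A numeric check of this claim, for the arbitrary small choice $$n = 3$$ and $$(r_1, r_2, r_3) = (2, 1, 1)$$, where the predicted largest monomial is $$x_1^{r_1+r_2+r_3} x_2^{r_2+r_3} x_3^{r_3} = x_1^4 x_2^2 x_3$$:

```python
# Enumerate all monomials of s_{3,1}^2 * s_{3,2} * s_{3,3} by choosing one
# monomial from each factor, and confirm the maximum is (4, 2, 1).

from itertools import combinations, product

def order_key(e):
    return (sum(e), tuple(e))

def elem_monomials(n, m):
    return [tuple(1 if i in chosen else 0 for i in range(n))
            for chosen in combinations(range(n), m)]

n, r = 3, (2, 1, 1)
# each factor of the product contributes one monomial from some s_{n,m}
factors = []
for m, power in zip(range(1, n + 1), r):
    factors.extend([elem_monomials(n, m)] * power)

# a monomial of the product is a component-wise sum of one choice per factor
product_monomials = [tuple(map(sum, zip(*choice))) for choice in product(*factors)]
largest = max(product_monomials, key=order_key)
print(largest)   # (4, 2, 1)
```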

Now, let a symmetric polynomial $$f \in R[x_1, \ldots, x_n]$$ be given. We claim that if $$x_1^{k_1} x_2^{k_2} \cdots x_{n-1}^{k_{n-1}} x_n^{k_n}$$ is the largest monomial of $$f$$, then we have $$k_1 \ge k_2 \ge \cdots \ge k_{n-1} \ge k_n$$.

For assume otherwise, say $$k_j < k_{j+1}$$. Then since $$f$$ is symmetric, we may exchange the exponents of the $$j$$-th and $$j+1$$-th variable respectively and still obtain a monomial of $$f$$, and the resulting monomial will be strictly larger.

Thus, if we define for $$j = 1, \ldots, n-1$$
 * $$d_j := k_j - k_{j+1}$$

and furthermore $$d_n := k_n$$, we obtain numbers that are non-negative. Hence, we may form the product
 * $$h(x) := s_{n, 1}^{d_1} \cdots s_{n, n}^{d_n}$$,

and if $$c$$ is the coefficient of the largest monomial of $$f$$, then the largest monomial of
 * $$f(x) - c h(x)$$

is strictly smaller than that of $$f$$; this is because the largest monomial of $$h$$ is, by our above computation and calculating some telescopic sums, equal to the largest monomial of $$f$$, and the two thus cancel out.

Since the elementary symmetric polynomials are symmetric, and sums, scalar multiples and products of symmetric polynomials are again symmetric, we may repeat this procedure; it terminates, since there are only finitely many monomials of degree at most $$\deg f$$ and the largest monomial strictly decreases in each step. Collecting everything we subtracted from $$f$$ then yields the polynomial in elementary symmetric polynomials we have been looking for.
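The whole procedure of Proof 1 can be sketched as a short program. Polynomials are represented as dictionaries mapping exponent tuples to coefficients, and the input $$f = x_1^2 + x_2^2$$ is an arbitrary example:

```python
# Gauss's procedure from Proof 1: repeatedly cancel the largest monomial of f
# with a scalar multiple of a product of elementary symmetric polynomials.

from itertools import combinations

def order_key(e):
    return (sum(e), tuple(e))

def poly_mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def elem_sym(n, m):
    """The elementary symmetric polynomial s_{n,m} as a dict."""
    return {tuple(1 if i in chosen else 0 for i in range(n)): 1
            for chosen in combinations(range(n), m)}

def decompose(f, n):
    """Write the symmetric polynomial f as a polynomial in s_{n,1}, ..., s_{n,n}."""
    result = {}   # exponent tuple (d_1, ..., d_n) in the s_{n,j} -> coefficient
    while f:
        k = max(f, key=order_key)                      # largest monomial of f
        c = f[k]
        d = tuple(k[j] - k[j + 1] for j in range(n - 1)) + (k[n - 1],)
        h = {tuple([0] * n): 1}                        # h = s_{n,1}^{d_1} ... s_{n,n}^{d_n}
        for j, dj in enumerate(d):
            for _ in range(dj):
                h = poly_mul(h, elem_sym(n, j + 1))
        result[d] = c
        ch = {e: c * v for e, v in h.items()}          # subtract c * h from f
        f = {e: f.get(e, 0) - ch.get(e, 0) for e in set(f) | set(ch)}
        f = {e: v for e, v in f.items() if v != 0}
    return result

# x_1^2 + x_2^2 = s_{2,1}^2 - 2 s_{2,2}
print(decompose({(2, 0): 1, (0, 2): 1}, 2))   # {(2, 0): 1, (0, 1): -2}
```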

Proof 2:

Let $$f \in R[x_1, \ldots, x_n]$$ be an arbitrary symmetric polynomial, and let $$d$$ be the degree of $$f$$ and $$n$$ be the number of variables of $$f$$.

In order to prove the theorem, we use induction on the sum $$n + d$$ of the degree and number of variables of $$f$$.

If $$n + d = 1$$, we must have $$n = 1$$ (since $$d = 1$$ would imply the absurd $$n = 0$$). But any polynomial of one variable is already a polynomial of the symmetric polynomial $$s_{1, 1}(x) = x$$.

Now let $$n + d = k > 1$$, and assume the claim holds whenever the sum of degree and number of variables is smaller. We write
 * $$f(x_1, \ldots, x_n) = g(x_1, \ldots, x_n) + x_1 \cdots x_n h(x_1, \ldots, x_n)$$,

where every monomial occurring within $$g$$ lacks at least one variable, that is, is not divisible by $$x_1 \cdots x_n$$.
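The decomposition $$f = g + x_1 \cdots x_n h$$ can be sketched as follows, where polynomials are dictionaries from exponent tuples to coefficients and the example polynomial is an arbitrary choice:

```python
# Split f into g (monomials missing at least one variable) and h (the rest,
# divided by x_1 ... x_n).

def split(f, n):
    g, h = {}, {}
    for e, c in f.items():
        if all(ei >= 1 for ei in e):               # divisible by x_1 ... x_n
            h[tuple(ei - 1 for ei in e)] = c       # divide by x_1 ... x_n
        else:
            g[e] = c                               # lacks at least one variable
    return g, h

# f = x_1^2 x_2 + x_1 x_2^2 + x_1^2 + x_2^2 (symmetric, n = 2)
f = {(2, 1): 1, (1, 2): 1, (2, 0): 1, (0, 2): 1}
g, h = split(f, 2)
print(g)   # {(2, 0): 1, (0, 2): 1}
print(h)   # {(1, 0): 1, (0, 1): 1}, i.e. h = x_1 + x_2, again symmetric
```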

The polynomial $$g$$ is still symmetric: permuting a monomial that lacks at least one variable yields another monomial lacking at least one variable, which hence occurs in $$g$$ with the same coefficient, since no part of it could have been sorted into the "$$x_1 \cdots x_n h(x_1, \ldots, x_n)$$" part.

The polynomial $$h$$ has the same number of variables, but the degree of $$h$$ is smaller than the degree of $$f$$. Furthermore, $$h$$ is symmetric because of
 * $$h(x_1, \ldots, x_n) = \frac{f(x_1, \ldots, x_n) - g(x_1, \ldots, x_n)}{x_1 \cdots x_n}$$.

Hence, by the induction hypothesis, $$h$$ can be written as a polynomial in the elementary symmetric polynomials:
 * $$h(x_1, \ldots, x_n) = p_1(s_{n, 1}(x_1, \ldots, x_n), \ldots, s_{n, n}(x_1, \ldots, x_n))$$

for a suitable $$p_1 \in R[x_1, \ldots, x_n]$$.

If $$n = 1$$, then $$f$$ is a polynomial of the elementary symmetric polynomial $$s_{1, 1}(x)$$ anyway. Hence, it is sufficient to only consider the case $$n \ge 2$$. In that case, we may define the polynomial
 * $$q(x_1, \ldots, x_{n-1}) := g(x_1, \ldots, x_{n-1}, 0)$$.

Now $$q$$ has one variable fewer than $$f$$ and at most the same degree, which is why, by the induction hypothesis, we find a representation
 * $$q(x_1, \ldots, x_{n-1}) = p_2(s_{n-1,1}(x_1, \ldots, x_{n-1}), \ldots, s_{n-1,n-1}(x_1, \ldots, x_{n-1}))$$

for a suitable $$p_2 \in R[x_1, \ldots, x_{n-1}]$$.

We observe that for all $$j \in \{1, \ldots, n-1\}$$, we have $$s_{n-1, j}(x_1, \ldots, x_{n-1}) = s_{n, j}(x_1, \ldots, x_{n-1}, 0)$$; this is because the monomials containing $$x_n$$ simply vanish. Hence,
 * $$g(x_1, \ldots, x_{n-1}, 0) = p_2(s_{n,1}(x_1, \ldots, x_{n-1}, 0), \ldots, s_{n,n-1}(x_1, \ldots, x_{n-1}, 0))$$.
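The observation $$s_{n-1, j}(x_1, \ldots, x_{n-1}) = s_{n, j}(x_1, \ldots, x_{n-1}, 0)$$ is easy to check by machine; a small sketch for the arbitrary choice $$n = 4$$:

```python
# Substituting x_n = 0 into s_{n,j} gives s_{n-1,j}; verify for n = 4, j = 1, 2, 3.

from itertools import combinations

def elem_sym(n, m):
    return {tuple(1 if i in chosen else 0 for i in range(n)): 1
            for chosen in combinations(range(n), m)}

def substitute_last_zero(p):
    """Set x_n = 0: drop monomials containing x_n, then forget the last exponent."""
    return {e[:-1]: c for e, c in p.items() if e[-1] == 0}

n = 4
for j in range(1, n):
    assert substitute_last_zero(elem_sym(n, j)) == elem_sym(n - 1, j)
print("s_{n-1,j}(x) = s_{n,j}(x, 0) holds for n = 4")
```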

We claim that even
 * $$g(x_1, \ldots, x_{n-1}, x_n) = p_2(s_{n,1}(x_1, \ldots, x_{n-1}, x_n), \ldots, s_{n,n-1}(x_1, \ldots, x_{n-1}, x_n)) ~(*)$$.

Indeed, by the symmetry of $$g$$ and of $$s_{n, 1}, \ldots, s_{n, n-1}$$, and after renaming variables, the above equation holds whenever any one of the variables is set to zero. But each monomial of $$g$$ lacks at least one variable. Hence, by successively comparing coefficients in $$(*)$$ with one of the variables set to zero, we obtain that the coefficients on the left and right of $$(*)$$ are equal, and thus the polynomials are equal.

Integral dependence
A polynomial of the form
 * $$x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$$ (leading coefficient equals $$1$$)

is called a monic polynomial. Thus, $$r$$ being integral over $$S$$ means that $$r$$ is the root of a monic polynomial with coefficients in $$S$$.

Whenever we have a subring $$S \subseteq R$$ of a ring $$R$$, we consider the module structure of $$R$$ as an $$S$$-module, where the module operation and summation are given by the ring operations of $$R$$.

Proof:

1. $$\Rightarrow$$ 2.: Let $$r$$ be integral over $$S$$, that is, $$r^n = - a_{n-1} r^{n-1} - \cdots - a_1 r - a_0$$ for suitable $$a_0, \ldots, a_{n-1} \in S$$. Let $$b_k r^k + b_{k-1} r^{k-1} + \cdots + b_1 r + b_0$$ be an arbitrary element of $$S[r]$$. Whenever an exponent $$j$$ is greater than or equal to $$n$$, we can express $$r^j$$ in terms of lower powers of $$r$$ using the integral equation. Repeating this process shows that $$1, r, r^2, \ldots, r^{n-1}$$ generate $$S[r]$$ over $$S$$.
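This reduction of higher powers can be sketched in code. As a hypothetical example we take the monic relation $$r^2 - 2 = 0$$ (so $$r = \sqrt 2$$, integral over $$\mathbb Z$$) and reduce $$r^3$$:

```python
# Polynomials in r are coefficient lists [c_0, c_1, ...] (lowest degree first);
# the monic relation r^2 - 2 = 0 is encoded as [-2, 0, 1].

def reduce_mod_monic(poly, monic):
    """Replace the leading power via the monic relation until deg < deg(monic)."""
    poly = list(poly)
    n = len(monic) - 1                       # degree of the monic polynomial
    while len(poly) > n:
        lead, shift = poly[-1], len(poly) - 1 - n
        # subtract lead * r^shift * monic; the leading terms cancel
        for i, c in enumerate(monic):
            poly[shift + i] -= lead * c
        while poly and poly[-1] == 0:
            poly.pop()
    return poly

monic = [-2, 0, 1]                           # r^2 - 2 = 0
r_cubed = [0, 0, 0, 1]                       # the element r^3
print(reduce_mod_monic(r_cubed, monic))      # [0, 2], i.e. r^3 = 2r
```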

2. $$\Rightarrow$$ 3.: Take $$T = S[r]$$.

3. $$\Rightarrow$$ 4.: Set $$M = T$$; $$T$$ is faithful because if $$u \in S[r]$$ annihilates $$T$$, then in particular $$u = u \cdot 1 = 0$$.

4. $$\Rightarrow$$ 1.: Let $$M$$ be such a module. We define the morphism of modules
 * $$\phi: M \to M, m \mapsto rm$$.

We may restrict the module operation of $$M$$ to $$S$$ to obtain an $$S$$-module, and $$\phi$$ is also a morphism of $$S$$-modules. Further, set $$I = S$$. Then $$\phi(M) \subseteq M = IM$$ (since $$1 \in S$$). The Cayley–Hamilton theorem gives an equation
 * $$r^n + a_{n-1} r^{n-1} + \cdots + a_1 r + a_0 = 0$$, $$a_{n-1}, \ldots, a_0 \in S$$,

where $$r$$ is to be read as the multiplication operator by $$r$$ and $$0$$ as the zero operator, and by the faithfulness of $$M$$, $$r^n + a_{n-1} r^{n-1} + \cdots + a_1 r + a_0 = 0$$ in the usual sense.

Proof:

Let $$s \in S$$, $$s \neq 0$$. Since $$\mathbb F$$ is a field, we find an inverse $$s^{-1} \in \mathbb F$$; we do not yet know whether $$s^{-1}$$ is contained in $$S$$. Since $$\mathbb F$$ is integral over $$S$$, $$s^{-1}$$ satisfies an equation of the form
 * $$(s^{-1})^n + a_{n-1} (s^{-1})^{n-1} + \cdots + a_1 s^{-1} + a_0 = 0$$

for suitable $$a_{n-1}, \ldots, a_1, a_0 \in S$$. Multiplying this equation by $$s^{n-1}$$ yields
 * $$s^{-1} = - (a_{n-1} + a_{n-2} s + \cdots + a_1 s^{n-2} + a_0 s^{n-1}) \in S$$.

Proof 1 (from the Atiyah–Macdonald book):

If $$x, y \in R$$ are integral over $$S$$, $$y$$ is integral over $$S[x]$$. By theorem 21.10, $$S[x]$$ is finitely generated as $$S$$-module and $$S[x][y] = S[x, y]$$ is finitely generated as $$S[x]$$-module. Hence, $$S[x, y]$$ is finitely generated as $$S$$-module. Further, $$S[x + y] \subseteq S[x, y]$$ and $$S[x \cdot y] \subseteq S[x, y]$$. Hence, by theorem 21.10, $$x + y$$ and $$x \cdot y$$ are integral over $$S$$.

Proof 2 (Dedekind):

If $$x, y$$ are integral over $$S$$, $$S[x]$$ and $$S[y]$$ are finitely generated as $$S$$-modules. Hence, so is
 * $$S[x] \cdot S[y] := \left\{ \sum_{j=1}^n a_j b_j \big| n \in \mathbb N, a_j \in S[x], b_j \in S[y] \right\}$$.

Furthermore, $$S[xy] \subseteq S[x] \cdot S[y]$$ and $$S[x + y] \subseteq S[x] \cdot S[y]$$. Hence, by theorem 21.10, $$x \cdot y, x + y$$ are integral over $$S$$.
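To illustrate these closure results with a concrete case (not taken from the text): $$x = \sqrt 2$$ and $$y = \sqrt 3$$ are integral over $$\mathbb Z$$, and a monic equation for $$x + y$$ can be extracted, in the spirit of the implication 4. $$\Rightarrow$$ 1., from the matrix of multiplication by $$x + y$$ on the module basis $$(1, \sqrt 2, \sqrt 3, \sqrt 6)$$. The characteristic polynomial is computed here with the Faddeev–LeVerrier recursion over the rationals:

```python
# Multiplication-by-(sqrt(2)+sqrt(3)) matrix: column j holds the coordinates
# of (sqrt(2)+sqrt(3)) * basis[j] in the basis (1, sqrt(2), sqrt(3), sqrt(6)).

from fractions import Fraction

M = [[0, 2, 3, 0],
     [1, 0, 0, 3],
     [1, 0, 0, 2],
     [0, 1, 1, 0]]

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(a):
    """Coefficients [1, c_1, ..., c_n] of det(tI - A), via Faddeev-LeVerrier."""
    n = len(a)
    coeffs = [Fraction(1)]
    m = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # M_1 = I
    for k in range(1, n + 1):
        am = mat_mul(a, m)
        c = Fraction(-sum(am[i][i] for i in range(n)), k)   # c_k = -tr(A M_k)/k
        coeffs.append(c)
        m = [[am[i][j] + (c if i == j else 0) for j in range(n)]       # M_{k+1}
             for i in range(n)]
    return coeffs

print([int(c) for c in char_poly(M)])   # [1, 0, -10, 0, 1]: t^4 - 10 t^2 + 1
```

So $$\sqrt 2 + \sqrt 3$$ is a root of the monic polynomial $$t^4 - 10 t^2 + 1$$, and hence integral over $$\mathbb Z$$, as the theorem predicts.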