Analytic Number Theory/Tools from complex analysis

Infinite products
Lemma 5.1 (Convergence of real products):

Let $$(a_n)_{n \in \mathbb N}$$ be a real sequence such that
 * $$\sum_{n=1}^\infty |a_n|$$

converges. Then for every real sequence $$(b_n)_{n \in \mathbb N}$$ with $$|b_n| \le |a_n|$$ for all $$n \in \mathbb N$$, the product
 * $$\prod_{n=1}^\infty (1 + b_n)$$

converges.

Proof: Since $$\sum_{n=1}^\infty |a_n|$$ converges, we have $$b_n \to 0$$; since omitting finitely many factors does not affect convergence, we may assume without loss of generality that $$|b_n| < \frac{1}{2}$$ for all $$n \in \mathbb N$$.

Denote
 * $$p_n := \prod_{j=1}^n (1 + b_j)$$.

Then we have
 * $$q_n := \log(p_n) = \sum_{j=1}^n \log(1 + b_j)$$.

We now apply the Taylor formula of first degree with Lagrange remainder to $$\log(1+x)$$ at $$0$$ to obtain for $$|x| < \frac{1}{2}$$
 * $$\log(1+x) = x - \frac{1}{2(1+\xi)^2} x^2$$, where $$\xi$$ lies between $$0$$ and $$x$$, so that $$1 + \xi \in \left( \frac{1}{2}, \frac{3}{2} \right)$$.

Hence, we have for $$|x| < \frac{1}{2}$$
 * $$|\log(1+x)| \le |x| + \frac{x^2}{2(1+\xi)^2} \le |x| + 2 x^2 \le 2 |x|$$.

Hence, $$|\log(1 + b_j)| \le 2 |b_j|$$ and thus we obtain the (even absolute) convergence of the $$q_n$$; thus, by the continuity of the exponential, the $$p_n = e^{q_n}$$ converge as well.
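The lemma can be illustrated numerically; the following minimal sketch uses the illustrative choice $$b_n = (-1)^n/(n+1)^2$$, which is dominated in modulus by $$a_n = 1/(n+1)^2$$ with $$\sum a_n$$ convergent:

```python
# Illustration of lemma 5.1: b_n = (-1)^n / (n+1)^2 is dominated in modulus
# by a_n = 1/(n+1)^2, whose sum converges, so prod (1 + b_n) converges.

def partial_product(N):
    """Compute p_N = prod_{n=1}^N (1 + (-1)^n / (n+1)^2)."""
    p = 1.0
    for n in range(1, N + 1):
        p *= 1.0 + (-1) ** n / (n + 1) ** 2
    return p

# successive partial products stabilise, as the lemma predicts
print(partial_product(1000), partial_product(2000))
```

The partial products agree to several decimal places once the dominating tail $$\sum_{n > N} a_n$$ is small.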

Proof:

We define
 * $$p_n := \prod_{j=1}^n (1 + s_j)$$, $$q_n := \prod_{j=1}^n (1 + a_j)$$. We note that
 * $$|p_n| \le \prod_{j=1}^n (1 + |s_j|) \le q_n$$.

Without loss of generality we may assume that all the factors $$1 + s_j$$ are nonzero; otherwise $$p_n = 0$$ from some index onwards, and the product converges immediately (to zero).

We now prove that $$(p_n)_{n \in \mathbb N}$$ is a Cauchy sequence. Indeed, we have
 * $$|p_{n+k} - p_n| = |p_n| \left| \frac{p_{n+k}}{p_n} - 1 \right|$$

and furthermore
 * $$\begin{align} \left| \frac{p_{n+k}}{p_n} - 1 \right| & = \left| s_{n+1} + \cdots + s_{n+k} + s_{n+1} s_{n+2} + \cdots + s_{n+1} \cdots s_{n+k} \right| \\ & \le |s_{n+1}| + \cdots + |s_{n+k}| + |s_{n+1} s_{n+2}| + \cdots + |s_{n+1} \cdots s_{n+k}| \\ & \le a_{n+1} + \cdots + a_{n+k} + a_{n+1} a_{n+2} + \cdots + a_{n+1} \cdots a_{n+k} \\ & = \frac{q_{n+k}}{q_n} - 1 = \left| \frac{q_{n+k}}{q_n} - 1 \right| \end{align}$$

and therefore
 * $$|p_{n+k} - p_n| = |p_n| \left| \frac{p_{n+k}}{p_n} - 1 \right| \le |q_n| \left| \frac{q_{n+k}}{q_n} - 1 \right| = |q_{n+k} - q_n|$$.

Since $$q_n \to q$$, it is a Cauchy sequence, and thus, by the above inequality, so is $$(p_n)_{n \in \mathbb N}$$. The last claim of the theorem follows by taking $$k \to \infty$$ in the above inequality.
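The key comparison $$|p_{n+k} - p_n| \le |q_{n+k} - q_n|$$ can be spot-checked numerically; the concrete sequences below (complex $$s_j$$ of modulus $$a_j = 1/j^2$$ with arbitrary phases) are illustrative assumptions, not part of the theorem:

```python
import cmath

# s_j has |s_j| = a_j := 1/j^2; p_n, q_n as in the proof above
ps, qs = [], []
p, q = 1 + 0j, 1.0
for j in range(1, 201):
    a = 1.0 / j ** 2
    s = a * cmath.exp(1j * j)  # arbitrary phase, modulus a_j
    p *= 1 + s
    q *= 1 + a
    ps.append(p)
    qs.append(q)

# check |p_{n+k} - p_n| <= q_{n+k} - q_n on a grid of index pairs
# (note q_n is increasing, so |q_{n+k} - q_n| = q_{n+k} - q_n)
violations = sum(
    1
    for n in range(0, 190, 10)
    for k in range(1, 10)
    if abs(ps[n + k] - ps[n]) > qs[n + k] - qs[n] + 1e-12
)
print(violations)
```

No index pair violates the inequality, matching the Cauchy-sequence argument of the proof.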

Proof 1:

We prove the theorem using lemma 5.1 and the comparison test.

Indeed, by lemma 5.1 the product
 * $$\prod_{j=1}^\infty(1 + |a_j|)$$

converges. Hence by theorem 5.2, we obtain convergence and the desired inequality.

Proof 2 (without the inequality):

We prove the theorem except the inequality at the end from lemma 5.1 and by using the Taylor formula on $$\arcsin$$.

We define $$p_n := \prod_{j=1}^n (1 + s_j)$$. Then since every complex number satisfies $$z = |z| e^{i \arg(z)}$$, we need to prove the convergence of the sequences $$(|p_n|)_{n \in \mathbb N}$$ and $$(\arg(p_n))_{n \in \mathbb N}$$.

For the first sequence, we note that the convergence of $$(|p_n|)_{n \in \mathbb N}$$ is equivalent to the convergence of $$(|p_n|^2)_{n \in \mathbb N}$$. Now for each $$k \in \mathbb N$$
 * $$|1 + s_k|^2 = 1 + 2 \Re(s_k) + |s_k|^2$$,

so that $$|p_n|^2 = \prod_{k=1}^n \left( 1 + 2 \Re(s_k) + |s_k|^2 \right)$$, a product of real factors to which lemma 5.1 applies.

Proof:

First, we note that $$g(z)$$ is well-defined for each $$z$$ due to theorem 5.2. In order to prove that the product is holomorphic, we use the fact from complex analysis that if a sequence of holomorphic functions converges locally uniformly, then the limit function is holomorphic as well. Indeed, the inequality in theorem 5.3 gives us locally uniform convergence. Hence, the theorem follows.

The Weierstraß factorisation
The following lemma is of great importance, since we can deduce three important theorems from it:
 * 1) The existence of holomorphic functions with prescribed zeroes
 * 2) The Weierstraß factorisation theorem (a way to write any entire function as a product of linear factors and exponential factors)
 * 3) The Mittag-Leffler theorem (named after Gösta Mittag-Leffler, who, despite the double name, was a single person)

Lemma 5.5:

Let $$(a_n)_{n \in \mathbb N}$$ be a sequence of complex numbers such that
 * $$0 < |a_1| \le |a_2| \le \cdots$$

and
 * $$\lim_{n \to \infty} |a_n| = \infty$$.

Then the function
 * $$\prod_{n=1}^\infty \left( 1 - \frac{s}{a_n} \right) e^{\sum_{k=1}^{n-1} \frac{s^k}{k a_n^k}}$$

has exactly the zeroes $$\{a_n | n \in \mathbb N\}$$, with the correct multiplicities.

Proof:

Define for each $$n \in \mathbb N$$
 * $$u_n(s) := \left( 1 - \frac{s}{a_n} \right) e^{\sum_{k=1}^{n-1} \frac{s^k}{k a_n^k}}$$.

Our plan is to prove that $$\prod_{n=1}^\infty u_n(s)$$ converges uniformly on every disc of radius strictly smaller than $$|a_N|$$, for every $$N \in \mathbb N$$. Since the function $$z \mapsto \log(1 - z)$$ is holomorphic in the open unit disc around zero, it is equal to its Taylor series there, i.e.
 * $$\log(1 - z) = -\sum_{k=1}^\infty \frac{z^k}{k}$$.

Hence, for $$|s| < |a_n|$$
 * $$\log\left(u_n(s)\right) = -\sum_{k=n}^\infty \frac{s^k}{k a_n^k}$$.

Let now $$N \in \mathbb N$$ be given and $$n \ge N$$ be arbitrary. Then we have for $$|s| < (1 - \epsilon)|a_N|$$, $$\epsilon > 0$$ arbitrary
 * $$\begin{align} \left|\log\left(u_n(s)\right)\right| & = \left|\sum_{k=n}^\infty \frac{s^k}{k a_n^k}\right| \\ & \le \sum_{k=n}^\infty \left|\frac{s^k}{k a_n^k}\right| \\ & \le \sum_{k=n}^\infty (1 - \epsilon)^k = (1 - \epsilon)^n \frac{1}{\epsilon} \end{align}$$.

Now summing over $$n \ge N$$, we obtain
 * $$\sum_{n=N}^\infty \left|\log\left(u_n(s)\right)\right| \le \sum_{n=N}^\infty (1 - \epsilon)^n \frac{1}{\epsilon} < \infty$$

for all $$|s| < (1 - \epsilon)|a_N|$$. Hence, we have uniform convergence on that disc; thus the sum of the logarithms is holomorphic, and so is the original product if we plug everything into the exponential function (note that we do have $$\exp(\log(z)) = z$$ for every nonzero complex number $$z$$).

Note that our method of proof was similar to that of lemma 5.1. In spite of this, it is not possible to deduce the above lemma directly from theorem 5.4, since the corresponding series need not converge if the $$a_n$$ grow too slowly.
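As a numerical illustration of the lemma (with the hypothetical zero sequence $$a_n = n$$, which satisfies its hypotheses), one can watch the truncated canonical products stabilise away from the zeroes and vanish at a prescribed zero:

```python
import math

def u(n, s):
    # n-th factor of the canonical product, zeros assumed at a_n = n
    tail = sum(s ** k / (k * n ** k) for k in range(1, n))
    return (1 - s / n) * math.exp(tail)

def P(N, s):
    """Truncated canonical product with zeros at 1, 2, ..., N."""
    p = 1.0
    for n in range(1, N + 1):
        p *= u(n, s)
    return p

# truncations stabilise away from the zeros ...
print(abs(P(60, 0.5) - P(30, 0.5)))
# ... and the product vanishes at the prescribed zero s = 2
print(P(60, 2.0))
```

The rapid stabilisation reflects the bound $$|\log u_n(s)| \le (1-\epsilon)^n/\epsilon$$ from the proof.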

Proof:

We order $$(s_n)_{n \in \mathbb N}$$ so that the moduli $$|s_n|$$ are non-decreasing. We go on to observe that then $$|s_n| \to \infty$$, since if the sequence were to remain bounded, it would have an accumulation point by the Bolzano–Weierstraß theorem. Similarly, only finitely many terms of the sequence are zero (otherwise zero would be an accumulation point). After eliminating the zeroes from the sequence $$(s_n)_{n \in \mathbb N}$$, we call the remaining sequence $$(a_n)_{n \in \mathbb N}$$, and we let $$m \in \mathbb N_0$$ be the number of zeroes in $$(s_n)_{n \in \mathbb N}$$. Then due to lemma 5.5, the function
 * $$s^m \prod_{n=1}^\infty \left( 1 - \frac{s}{a_n} \right) e^{\sum_{k=1}^{n-1} \frac{s^k}{k a_n^k}}$$

has the required properties.

Proof:

First, we note that $$(s_n)_{n \in \mathbb N}$$ does not have an accumulation point, since otherwise $$f$$ would be the constant zero function by the identity theorem from complex analysis. From theorem 5.6, we obtain that the function $$g(s) := s^m \prod_{n=1}^\infty \left( 1 - \frac{s}{a_n} \right) e^{\sum_{k=1}^{n-1} \frac{s^k}{k a_n^k}}$$ has exactly the zeroes $$\{s_n|n \in\mathbb N\}$$ with the right multiplicities, where the sequence $$(a_n)_{n \in \mathbb N}$$ consists of the nonzero elements of the sequence $$(s_n)_{n \in \mathbb N}$$, ordered ascendingly with respect to their absolute value, and $$m \in \mathbb N_0$$ is the number of zeroes within the sequence $$(s_n)_{n \in \mathbb N}$$.

The quotient $$f/g$$ has no zeroes, and near each zero $$z_0$$ of $$g$$ it is bounded: writing $$f = (z - z_0)^l h$$ with $$h$$ holomorphic and nonzero at $$z_0$$, where $$l$$ is the order of the zero of $$f$$ at $$z_0$$ (which equals the order of the zero of $$g$$ there, since $$g$$ has exactly the zeroes of $$f$$ with the right multiplicities), we see that the singularity of $$f/g$$ at $$z_0$$ cancels. Hence, by Riemann's theorem on removable singularities, $$f/g$$ extends to a holomorphic function on $$\mathbb C$$ without zeroes.

Hence, $$f/g$$ has a holomorphic logarithm on $$\mathbb C$$, which we shall denote by $$H$$. This satisfies
 * $$z^m e^{H(z)} \prod_{n=1}^\infty \left(1 - \frac{z}{a_n}\right) e^{\sum_{k=1}^{n-1} \frac{z^k}{k a_n^k}} = f(z)$$.

Proof:

From theorem 5.7 we obtain a function $$g$$ with zeroes $$\{s_n|n \in \mathbb N\}$$ in the right multiplicity. Set $$f = 1/g$$.

The Hadamard factorisation
In this subsection, we strive to factor certain holomorphic functions in a way that makes them even easier to deal with than the Weierstraß factorisation. This is the Hadamard factorisation. It only works for functions satisfying a certain growth estimate, but in fact many important functions occurring in analytic number theory do satisfy this estimate, and thus the factorisation will give us ways to prove certain theorems about those functions.

In order to prove that we may carry out a Hadamard factorisation, we need some estimates for holomorphic functions as well as some preparatory lemmata.

Estimates for holomorphic functions
Proof:

Set $$m := N(r)$$ and define the function $$g: \mathbb C \to \mathbb C$$ by
 * $$g(s) := \begin{cases} f(s) \prod_{j=1}^m \frac{R^2 - s \overline{s_j}}{R (s - s_j)} & s \notin \{s_1, \ldots, s_m\} \\ \lim_{t \to s} f(t) \prod_{j=1}^m \frac{R^2 - t \overline{s_j}}{R (t - s_j)} & \text{otherwise} \end{cases}$$,

where the latter limit exists by developing $$f$$ into a power series at $$s$$ and observing that the constant coefficient vanishes. By Riemann's theorem on removable singularities, $$g$$ is holomorphic. We now have
 * $$|g(0)| = |f(0)| \prod_{j=1}^m \frac{R}{|s_j|}$$,

and if further $$|s| = R$$, then $$\frac{|s|}{R} = 1$$, so that multiplying by this factor changes nothing; we thus obtain for $$j \in \{1, \ldots, m\}$$
 * $$\begin{align} \left| \frac{R^2 - s \overline{s_j}}{R (s - s_j)} \right| = 1 & \Leftrightarrow \frac{|s|}{R} \left| \frac{R^2 - s \overline{s_j}}{R (s - s_j)} \right| = 1 \\ & \Leftrightarrow \left| \frac{R^2 s - s^2 \overline{s_j}}{R^2 s - R^2 s_j} \right| = 1 \\ & \Leftrightarrow \left|\frac{R^2 - s \overline{s_j}}{R^2 - R^2 \frac{s_j}{s}} \right| = 1 \end{align}$$.

Now writing $$s = \sigma + i t$$ and $$s_j = \sigma_j + i t_j$$, we obtain on the one hand
 * $$s \overline{s_j} = \sigma \sigma_j + t t_j + i (t \sigma_j - \sigma t_j)$$

and on the other hand
 * $$R^2 \frac{s_j}{s} = R^2 \frac{\sigma_j \sigma + t_j t + i (t_j \sigma - \sigma_j t)}{\sigma^2 + t^2}$$.

Hence,
 * $$\overline{s \overline{s_j}} = R^2 \frac{s_j}{s}$$ (using $$|s| = R$$),

which is why both $$s \overline{s_j}$$ and $$R^2 \frac{s_j}{s}$$ have the same distance from $$R^2$$, since $$R^2$$ lies on the real axis.

Hence, due to the maximum principle, we have
 * $$|f(0)| \prod_{j=1}^m \frac{R}{|s_j|} = |g(0)| \le \max_{|s| = R} |g(s)| = \max_{|s| = R} |f(s)|$$.
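The resulting inequality $$|f(0)| \prod_{j=1}^m \frac{R}{|s_j|} \le \max_{|s| = R} |f(s)|$$ can be spot-checked numerically; the test function $$f(s) = (s - s_1)(s - s_2)$$ below is a hypothetical example with two zeroes inside the circle:

```python
import cmath
import math

# hypothetical polynomial with two zeros inside |s| < R
s1, s2 = 0.5 + 0.2j, -0.3 + 0.6j
R = 2.0

def f(s):
    return (s - s1) * (s - s2)

# left-hand side: |f(0)| * R/|s1| * R/|s2|; here |f(0)| = |s1||s2|,
# so the left-hand side equals R^2 exactly
lhs = abs(f(0)) * (R / abs(s1)) * (R / abs(s2))
# approximate max_{|s| = R} |f(s)| by sampling the circle
rhs = max(abs(f(R * cmath.exp(2j * math.pi * k / 1000))) for k in range(1000))
print(lhs, rhs)
```

The sampled maximum dominates the left-hand side, as the inequality predicts.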

Proof:

First, we consider the case $$s_0 = 0$$ and $$f(0) = 0$$. We may write $$f$$ in its power series form
 * $$f(s) = \sum_{j=1}^\infty a_j s^j, s \in B_R(0)$$,

where $$a_j = \frac{f^{(j)}(0)}{j!}$$. If we write $$\partial B_R(0) \ni s = R e^{i \varphi}$$ and $$a_j = |a_j| e^{i \varphi_j}$$, we obtain by Euler's formula
 * $$\Re \left(a_j s^j\right) = |a_j| R^j \cos(j \varphi + \varphi_j)$$

and thus
 * $$\Re f(s) = \sum_{j=1}^\infty |a_j| R^j \cos(j \varphi + \varphi_j)$$.

Since the latter sum is majorised by the sum
 * $$\sum_{j=1}^\infty |a_j| R^j$$,

it converges absolutely and uniformly in $$\varphi$$. Hence, by exchanging the order of integration and summation, we obtain
 * $$\int_0^{2 \pi} \Re f(R e^{i \varphi}) d\varphi = 0$$

due to
 * $$\int_0^{2 \pi} \cos(j \varphi + \varphi_j) d \varphi = \left[ \frac{1}{j} \sin \left(j \varphi + \varphi_j \right) \right]^{\varphi=2\pi}_{\varphi = 0} = 0$$

and further for all $$n \in \mathbb N$$
 * $$\int_0^{2 \pi} \Re f(R e^{i \varphi}) \cos(n \varphi + \varphi_n) d\varphi = \pi |a_n| R^n$$

due to
 * $$\int_0^{2\pi} \cos(j \varphi + \varphi_j) \cos(n \varphi + \varphi_n) d\varphi = \pi \delta_{j,n}$$,

as can be seen using integration by parts twice together with $$1 = \sin^2 + \cos^2$$. Since $$1 + \cos(n \varphi + \varphi_n) \ge 0$$ and $$\Re f \le M$$ on the circle, the monotonicity of the integral now gives
 * $$\pi |a_n| R^n = \int_0^{2 \pi} \Re f(R e^{i \varphi}) (1 + \cos(n \varphi + \varphi_n)) d\varphi \le 2 \pi M$$.

This proves the theorem in the case $$s_0 = 0 = f(0)$$. For the general case, we define
 * $$g(s) := f(s + s_0) - f(s_0)$$.

Then $$g(0) = 0$$, hence by the case we already proved
 * $$\left| \frac{f^{(n)}(s_0)}{n!} \right| = \left| \frac{g^{(n)}(0)}{n!} \right| \le \frac{2}{R^n} \max_{|s| = R} \Re g(s) = \frac{2}{R^n} (M - \Re f(s_0))$$.
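A quick numerical check of the final estimate $$\left|f^{(n)}(s_0)/n!\right| \le \frac{2}{R^n}(M - \Re f(s_0))$$, with the hypothetical choices $$f = \exp$$, $$s_0 = 0$$, $$R = 1$$, $$n = 3$$:

```python
import cmath
import math

R, n = 1.0, 3
# M = max of Re f on the circle |s| = R, approximated by sampling
M = max(cmath.exp(R * cmath.exp(2j * math.pi * k / 1000)).real
        for k in range(1000))
coeff = 1.0 / math.factorial(n)      # |f^(n)(0)/n!| = 1/n! for f = exp
bound = (2.0 / R ** n) * (M - 1.0)   # Re f(0) = 1 for f = exp
print(coeff, bound)
```

The Taylor coefficient is comfortably below the bound, consistent with the theorem.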

Further preparations
Lemma 5.14: