Analytic Combinatorics/Tauberian Theorem

Introduction
'Tauberian' refers to a broad class of theorems that recover asymptotic information about a sequence from the behaviour of an associated transform, subject to a side condition on the sequence. The full scope of Tauberian theorems is too large to cover here.

We prove here only a particular Tauberian theorem due to Hardy, Littlewood and Karamata, which is useful in analytic combinatorics.

Theorem
Theorem due to Hardy.

If $$f(y) = \sum_{n \geq 0} a_n e^{-ny}$$, where the coefficients $$a_n$$ are nonnegative, satisfies $$f(y) \sim y^{-\sigma} L(y^{-1})$$ as $$y \to 0$$, where $$\sigma > 0$$ and $$L$$ is a slowly varying function, then $$a_n \sim \frac{n^{\sigma-1}}{\Gamma(\sigma)} L(n)$$ as $$n \to \infty$$.

Bear in mind that if $$f(x) = \sum a_n x^n = \frac{1}{(1 - x)^\sigma} L(\frac{1}{1 - x})$$, we can convert it to a general Dirichlet series (without changing the coefficients we are interested in) by the substitution $$x = e^{-y}$$, so that
 * $$f(y) = \sum a_n (e^{-y})^n = \frac{1}{(1 - e^{-y})^\sigma} L(\frac{1}{1 - e^{-y}}) \sim y^{-\sigma} L(y^{-1})$$

as $$y \to 0$$.

This gives us
 * $$[x^n]\frac{1}{(1 - x)^\sigma} L(\frac{1}{1 - x}) \sim \frac{n^{\sigma-1}}{\Gamma(\sigma)} L(n)$$

which is the form in which it is given in Flajolet and Sedgewick 2009, p. 435.
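As a quick numerical sanity check (not part of the proof; the choice $$L \equiv 1$$ is an assumption made here purely for illustration), take $$f(x) = (1 - x)^{-\sigma}$$, whose coefficients are $$\binom{n + \sigma - 1}{n}$$; their ratio to $$\frac{n^{\sigma-1}}{\Gamma(\sigma)}$$ should tend to 1:

```python
from math import gamma

def coeff(sigma, n):
    # [x^n] (1 - x)^(-sigma), computed by the recurrence
    # c_0 = 1, c_k = c_{k-1} * (k - 1 + sigma) / k (generalized binomial coefficient)
    c = 1.0
    for k in range(1, n + 1):
        c *= (k - 1 + sigma) / k
    return c

def tauberian_estimate(sigma, n):
    # the theorem's prediction n^(sigma-1) / Gamma(sigma), with L identically 1
    return n ** (sigma - 1) / gamma(sigma)

sigma, n = 1.5, 100000
print(coeff(sigma, n) / tauberian_estimate(sigma, n))  # close to 1
```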

Proof
Proof due to Hardy.

If $$f(x) = \frac{1}{(1 - x)^\sigma} L(\frac{1}{1 - x}) = \sum_0^\infty a_n x^n$$, then the method below actually finds an asymptotic estimate of the partial sums $$s_n = \sum_{i=0}^n a_i$$; from this we can recover $$a_n$$ either as $$s_n - s_{n-1}$$ or by applying the partial-sum estimate to $$(1 - x) f(x)$$, whose partial sums are exactly $$a_n$$.

Let $$a(t)$$ be the step function that is $$0$$ for $$t < 0$$ and jumps by $$a_n$$ at each integer $$n \geq 0$$, so that $$a(n) - a(n-1) = a_n$$ and $$a(n) = \sum_{i=0}^n a_i$$; we take $$a(0) = 0$$ in the sense that the Stieltjes integrals below start at $$0^-$$, so the jump $$a_0$$ at $$t = 0$$ is included.


 * $$a(y^{-1}) = \int_0^{y^{-1}} da(t)$$

Using integration by parts:
 * $$\int_0^{y^{-1}} da(t) = a(y^{-1}) - a(0) - \int_0^{y^{-1}} a(t) \, d1 = a(y^{-1})$$

because $$a(0) = 0$$ and, $$1$$ being constant, $$\int_0^{y^{-1}} a(t) \, d1 = 0$$.

If $$g(x) = x^{-1} \quad (e^{-1} \leq x \leq 1), \quad g(x) = 0 \quad (0 \leq x < e^{-1})$$ then


 * $$\int_0^{y^{-1}} da(t) = \int_0^\infty e^{-yt} g(e^{-yt}) da(t)$$

Because when $$0 \leq t \leq y^{-1}$$ we have $$e^{-1} \leq e^{-yt} \leq 1$$, so $$g(e^{-yt}) = e^{yt}$$ and $$e^{-yt} g(e^{-yt}) = e^{-yt} e^{yt} = 1$$, while when $$t > y^{-1}$$ we have $$e^{-yt} < e^{-1}$$, so $$e^{-yt} g(e^{-yt}) = 0$$.
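A minimal numerical sketch of this fact: with this $$g$$, the function $$e^{-yt} g(e^{-yt})$$ is exactly the indicator of $$0 \leq t \leq y^{-1}$$ (up to floating-point rounding near the endpoint):

```python
from math import exp

def g(x):
    # g(x) = 1/x for e^{-1} <= x <= 1, and 0 for 0 <= x < e^{-1}
    return 1.0 / x if x >= exp(-1) else 0.0

def kernel(y, t):
    # the integrand e^{-yt} g(e^{-yt})
    return exp(-y * t) * g(exp(-y * t))

y = 0.25  # so y^{-1} = 4
for t in (0.0, 2.0, 3.9, 4.5, 10.0):
    print(t, kernel(y, t))  # 1.0 (up to rounding) for t <= 4, then 0.0
```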

Lemma 1

 * $$\int_0^\infty e^{-yt} g(e^{-yt}) da(t) \sim \frac{1}{\Gamma(\sigma)} y^{-\sigma} L(y^{-1}) \int_0^\infty e^{-t} t^{\sigma-1} g(e^{-t}) dt$$

as $$y \to 0$$.

To prove this we need two further lemmas.

Lemma 2

 * $$\int_0^\infty e^{-yt} Q(e^{-yt}) da(t) \sim \frac{1}{\Gamma(\sigma)} y^{-\sigma} L(y^{-1}) \int_0^\infty e^{-t} t^{\sigma-1} Q(e^{-t}) dt$$

as $$y \to 0$$, where $$Q$$ is any polynomial.


 * $$\int_0^\infty e^{-yt} Q(e^{-yt}) da(t) = \int_0^\infty e^{-yt} (\cdots + e^{-ytn} + \cdots) da(t) = \int_0^\infty \cdots + \int_0^\infty e^{-yt} e^{-ytn} da(t) + \int_0^\infty \cdots$$

(by linearity it suffices to treat each monomial of $$Q$$ separately; the constant coefficients of the monomials are suppressed for brevity).


 * $$\int_0^\infty e^{-yt} e^{-ytn} da(t) = \int_0^\infty e^{-(n + 1)yt} da(t) \sim (n + 1)^{-\sigma} y^{-\sigma} L(\frac{1}{(n + 1)y})$$

as $$y \to 0$$ by assumption in the theorem.
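To make this step concrete (the choice $$a_n \equiv 1$$, giving $$\sigma = 1$$ and $$L \equiv 1$$, is an assumption made here for illustration only), the Stieltjes integral becomes the sum $$\sum_{k \geq 0} e^{-(n+1)yk}$$, which should behave like $$((n+1)y)^{-1}$$ as $$y \to 0$$:

```python
from math import exp

def stieltjes_monomial(n, y, kmax=100000):
    # ∫_0^∞ e^{-(n+1)yt} da(t) with a(t) jumping by a_k = 1 at each integer k:
    # a truncated geometric series (the tail beyond kmax is negligible here)
    s = (n + 1) * y
    return sum(exp(-s * k) for k in range(kmax + 1))

n, y = 3, 1e-3
print(stieltjes_monomial(n, y) * ((n + 1) * y))  # close to 1
```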


 * $$(n + 1)^{-\sigma} y^{-\sigma} L(\frac{1}{(n + 1)y}) \sim (n + 1)^{-\sigma} y^{-\sigma} L(\frac{1}{y})$$

by the definition of a slowly varying function: $$L(\frac{c}{y}) \sim L(\frac{1}{y})$$ as $$y \to 0$$, for any constant $$c > 0$$.


 * $$(n + 1)^{-\sigma} y^{-\sigma} L(\frac{1}{y}) = y^{-\sigma} L(\frac{1}{y}) \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} e^{-tn} t^{\sigma-1} dt$$

since $$\int_0^\infty e^{-(n + 1)t} t^{\sigma-1} dt = \frac{\Gamma(\sigma)}{(n + 1)^\sigma}$$. Summing the contributions of all the monomials of $$Q$$:


 * $$\cdots + y^{-\sigma} L(\frac{1}{y}) \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} e^{-tm} t^{\sigma-1} dt + y^{-\sigma} L(\frac{1}{y}) \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} e^{-tn} t^{\sigma-1} dt + \cdots = y^{-\sigma} L(\frac{1}{y}) \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} Q(e^{-t}) t^{\sigma-1} dt$$

Lemma 3
If $$g(x)$$ is real-valued and Riemann integrable on the interval $$(0, 1)$$ and $$\sigma > 0$$, then for every $$\epsilon > 0$$ there exist polynomials $$p(x)$$ and $$P(x)$$ such that $$p(x) < g(x) < P(x)$$ and


 * $$\int_0^1 (P(x) - p(x)) (\log\frac{1}{x})^{\sigma-1} dx = \int_0^\infty e^{-t} t^{\sigma-1} (P(e^{-t}) - p(e^{-t})) dt < \epsilon \Gamma(\sigma)$$

(the two integrals are equal under the substitution $$x = e^{-t}$$).

Since $$g$$ is Riemann integrable it is bounded and continuous almost everywhere, and the weight $$(\log\frac{1}{x})^{\sigma-1}$$ is integrable on $$(0, 1)$$, with $$\int_0^1 (\log\frac{1}{x})^{\sigma-1} dx = \Gamma(\sigma)$$. We can therefore construct continuous functions $$h$$ and $$H$$ such that

 * $$h \leq g \leq H, \quad \int_0^1 (H - g) (\log\frac{1}{x})^{\sigma-1} dx < \epsilon \Gamma(\sigma), \quad \int_0^1 (g - h) (\log\frac{1}{x})^{\sigma-1} dx < \epsilon \Gamma(\sigma)$$
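The normalising constant here comes from the identity $$\int_0^1 (\log\frac{1}{x})^{\sigma-1} dx = \Gamma(\sigma)$$, which is Euler's integral $$\int_0^\infty e^{-t} t^{\sigma-1} dt$$ after the substitution $$x = e^{-t}$$. A quick midpoint-rule check (the quadrature step count is an arbitrary choice):

```python
from math import log, gamma

def weight_integral(sigma, n=200000):
    # midpoint rule for ∫_0^1 (log(1/x))^(sigma-1) dx
    h = 1.0 / n
    return h * sum(log(1.0 / ((k + 0.5) * h)) ** (sigma - 1) for k in range(n))

for sigma in (1.5, 2.0, 3.0):
    print(sigma, weight_integral(sigma), gamma(sigma))  # the two values agree
```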

By the Weierstrass approximation theorem there are polynomials $$q$$ and $$Q$$ such that $$|H - Q| < \epsilon$$ and $$|h - q| < \epsilon$$ on $$[0, 1]$$. If $$p(x) = q(x) - \epsilon$$ and $$P(x) = Q(x) + \epsilon$$ then $$p(x) < h(x) \leq g(x) \leq H(x) < P(x)$$ as required by the lemma and

 * $$\int_0^1 (P(x) - g(x)) (\log\frac{1}{x})^{\sigma-1} dx \leq \int_0^1 (P(x) - Q(x)) (\log\frac{1}{x})^{\sigma-1} dx + \int_0^1 |Q(x) - H(x)| (\log\frac{1}{x})^{\sigma-1} dx + \int_0^1 (H(x) - g(x)) (\log\frac{1}{x})^{\sigma-1} dx < 3 \epsilon \Gamma(\sigma)$$

and similarly $$\int_0^1 (g(x) - p(x)) (\log\frac{1}{x})^{\sigma-1} dx < 3 \epsilon \Gamma(\sigma)$$.

By virtue of $$g(x)$$ being Riemann integrable (and the weight $$(\log\frac{1}{x})^{\sigma-1}$$ being integrable), we can find finite step functions $$g_1$$ and $$g_2$$ such that $$g_1(x) \leq g(x) \leq g_2(x)$$ and

 * $$\int_0^1 (g_2(x) - g_1(x)) (\log\frac{1}{x})^{\sigma-1} dx < 3 \epsilon \Gamma(\sigma)$$

Then, as proven above (applied to $$g_1$$ and $$g_2$$ with $$\epsilon/3$$ in place of $$\epsilon$$), there are polynomials $$p(x)$$ and $$P(x)$$ such that $$p(x) < g_1(x) \leq g(x) \leq g_2(x) < P(x)$$ and

 * $$\int_0^1 (P(x) - g_2(x)) (\log\frac{1}{x})^{\sigma-1} dx < \epsilon \Gamma(\sigma), \quad \int_0^1 (g_1(x) - p(x)) (\log\frac{1}{x})^{\sigma-1} dx < \epsilon \Gamma(\sigma)$$

Combining these we can complete the proof of Lemma 3:

 * $$\int_0^1 (P(x) - p(x)) (\log\frac{1}{x})^{\sigma-1} dx < 5 \epsilon \Gamma(\sigma)$$

Since $$\epsilon > 0$$ was arbitrary, this establishes the lemma.

Going back to the proof of Lemma 1: since $$g \leq P$$ and the measure $$da$$ is nonnegative (the coefficients $$a_n$$ are nonnegative),

 * $$\limsup_{y \to 0} \frac{\int_0^\infty e^{-yt} g(e^{-yt}) \, da(t)}{y^{-\sigma} L(y^{-1})} \leq \lim_{y \to 0} \frac{\int_0^\infty e^{-yt} P(e^{-yt}) \, da(t)}{y^{-\sigma} L(y^{-1})} = \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} t^{\sigma-1} P(e^{-t}) \, dt$$

by lemma 2.

Lemma 3 (applied with a rescaled $$\epsilon$$, together with $$P - g \leq P - p$$) implies that

 * $$\int_0^\infty e^{-t} t^{\sigma-1} (P(e^{-t}) - g(e^{-t})) \, dt < \epsilon \Gamma(\sigma)$$

so that


 * $$\frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} t^{\sigma-1} P(e^{-t}) \, dt - \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} t^{\sigma-1} g(e^{-t}) \, dt < \epsilon$$

and


 * $$\frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} t^{\sigma-1} P(e^{-t}) \, dt < \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} t^{\sigma-1} g(e^{-t}) \, dt + \epsilon$$

and finally


 * $$\limsup_{y \to 0} \frac{\int_0^\infty e^{-yt} g(e^{-yt}) \, da(t)}{y^{-\sigma} L(y^{-1})} < \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} t^{\sigma-1} g(e^{-t}) \, dt + \epsilon$$

By a similar argument, using $$p$$ in place of $$P$$, one can prove


 * $$\liminf_{y \to 0} \frac{\int_0^\infty e^{-yt} g(e^{-yt}) \, da(t)}{y^{-\sigma} L(y^{-1})} > \frac{1}{\Gamma(\sigma)} \int_0^\infty e^{-t} t^{\sigma-1} g(e^{-t}) \, dt - \epsilon$$

Since $$\epsilon$$ is arbitrary, combining both these results we conclude the proof of Lemma 1:


 * $$\int_0^\infty e^{-yt} g(e^{-yt}) da(t) \sim \frac{1}{\Gamma(\sigma)} y^{-\sigma} L(y^{-1}) \int_0^\infty e^{-t} t^{\sigma-1} g(e^{-t}) dt$$

Putting it all together:
 * $$a(y^{-1}) = \int_0^{y^{-1}} da(t) = \int_0^\infty e^{-yt} g(e^{-yt}) da(t) \sim \frac{1}{\Gamma(\sigma)} y^{-\sigma} L(y^{-1}) \int_0^\infty e^{-t} t^{\sigma-1} g(e^{-t}) dt = \frac{1}{\Gamma(\sigma)} y^{-\sigma} L(y^{-1}) \int_0^1 t^{\sigma-1} dt = \frac{1}{\Gamma(\sigma + 1)} y^{-\sigma} L(y^{-1})$$

using $$g(e^{-t}) = e^t$$ for $$0 \leq t \leq 1$$ and $$g(e^{-t}) = 0$$ for $$t > 1$$, so the last integral is $$\int_0^1 t^{\sigma-1} dt = \frac{1}{\sigma}$$.

or


 * $$a(y) \sim \frac{y^\sigma}{\Gamma(\sigma + 1)} L(y)$$

as $$y \to \infty$$. Then, if $$f(x) = \frac{1}{(1 - x)^\sigma} L(\frac{1}{1 - x})$$


 * $$a_n = \sum_{i=0}^n [x^i] (1 - x) f(x) = \sum_{i=0}^n [x^i] \frac{1}{(1 - x)^{\sigma-1}} L(\frac{1}{1 - x}) \sim \frac{n^{\sigma-1}}{\Gamma(\sigma)} L(n)$$

because the coefficients of $$(1 - x) f(x)$$ are $$a_i - a_{i-1}$$, whose partial sums telescope to $$a_n$$, and the partial-sum estimate just proven, applied with $$\sigma - 1$$ in place of $$\sigma$$, gives $$\frac{n^{\sigma-1}}{\Gamma(\sigma)} L(n)$$ (this step requires $$\sigma > 1$$ and that the coefficients $$a_i - a_{i-1}$$ are nonnegative).
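For a closing illustration of the partial-sum estimate (the choice $$a_n = n$$ is an assumption made here, giving $$f(x) = \frac{x}{(1-x)^2}$$, i.e. $$\sigma = 2$$ and $$L \equiv 1$$), the partial sums $$a(n)$$ should grow like $$\frac{n^2}{\Gamma(3)} = \frac{n^2}{2}$$:

```python
from math import gamma

def partial_sum(n):
    # a(n) = sum of the coefficients a_k = k for k = 0..n
    return sum(range(n + 1))

sigma, n = 2, 10**6
print(partial_sum(n) / (n ** sigma / gamma(sigma + 1)))  # close to 1
```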