Analytic Combinatorics/Darboux Method

Introduction
Darboux's method is one way of estimating the coefficients of generating functions involving roots (fractional powers).

It is easier to apply than Singularity Analysis, but it applies to a smaller class of functions.

Big O
We will make use of the "Big O" notation.


 * $$f(z) = O(g(z))$$ as $$z \to \zeta$$

which means there exist positive real numbers $$M, \delta$$ such that if $$0 < |z - \zeta| < \delta$$:


 * $$|f(z)| \leq M |g(z)|$$

Alternatively, we can say that


 * $$f(n) = O(g(n))$$

for positive integers $$n$$, meaning there exist a positive real number $$M$$ and a positive integer $$N$$ such that:


 * $$|f(n)| \leq M g(n)$$ for $$n \geq N$$
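As a concrete instance of the second definition (the functions and constants here are chosen purely for illustration): $$f(n) = n + 10\sqrt{n}$$ satisfies $$f(n) = O(n)$$, witnessed by $$M = 2$$ and $$N = 100$$.

```python
import math

# f(n) = n + 10*sqrt(n) is O(n), witnessed by M = 2 and N = 100,
# since 10*sqrt(n) <= n exactly when n >= 100.
def f(n):
    return n + 10 * math.sqrt(n)

M, N = 2, 100
ok = all(f(n) <= M * n for n in range(N, 10_000))
```

Note that the threshold $$N$$ matters: $$f(99) > 2 \cdot 99$$, so the inequality only holds from $$N = 100$$ onwards.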

Theorem
Theorem due to Wilf.

If we have a function $$(1 - z)^\beta f(z)$$, where $$\beta \notin \{0, 1, 2, \ldots\}$$ and where $$f(z)$$ has a radius of convergence greater than $$1$$ and an expansion near $$1$$ of $$\sum_{j\geq0} f_j (1 - z)^j$$, then for any $$m \geq 0$$:
 * $$[z^n](1 - z)^\beta f(z) = \sum_{j=0}^m f_j \binom{n-\beta-j-1}{n} + O(n^{-m-\beta-2})$$

By Lemma 1 below, $$\binom{n-\beta-j-1}{n} \sim \frac{n^{-\beta-j-1}}{\Gamma(-\beta-j)}$$, so the $$j$$-th term is of order $$n^{-\beta-j-1}$$. Note that for $$m \geq 1$$ the binomial coefficients cannot simply be replaced by these leading-order values: doing so discards corrections of order $$n^{-\beta-2}$$, which are larger than the error term.

Example
The theorem is a bit abstract, so I will show an example of how you might use it before going into the proof.

Taking an example from Wilf:
 * $$\frac{e^{-z/2-z^2/4}}{\sqrt{1 - z}} = (1 - z)^{-\frac{1}{2}} e^{-z/2-z^2/4}$$

$$e^{-z/2-z^2/4}$$ is an entire function, so its radius of convergence is infinite, and in particular greater than $$1$$.

Near 1 it can be expanded using the Taylor series:
 * $$e^{-z/2-z^2/4} = e^{-3/4} + e^{-3/4}(1 - z) + \cdots$$

Therefore, for $$m = 0$$ (using Lemma 1 to replace the exact coefficient by its leading-order value, at the cost of a further error that is also $$O(n^{-\frac{3}{2}})$$):
 * $$[z^n]\frac{e^{-z/2-z^2/4}}{\sqrt{1 - z}} = e^{-3/4} \frac{n^{-\frac{1}{2}}}{\Gamma(\frac{1}{2})} + O(n^{-\frac{3}{2}})$$

Or, if we want more precision, we can set $$m = 1$$:
 * $$[z^n]\frac{e^{-z/2-z^2/4}}{\sqrt{1 - z}} = e^{-3/4} \binom{n-\frac{1}{2}}{n} + e^{-3/4} \binom{n-\frac{3}{2}}{n} + O(n^{-\frac{5}{2}})$$

and so on.
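These estimates can be checked numerically. A minimal sketch in Python (the recurrences below are standard power-series manipulations, my own scaffolding rather than anything from Wilf): it computes the exact coefficient $$[z^n]$$ by convolution, the one-term estimate $$e^{-3/4} n^{-1/2}/\Gamma(\frac{1}{2})$$, and the two-term estimate with the binomial coefficients kept exactly.

```python
import math

# Check the Darboux estimates for [z^n] e^{-z/2 - z^2/4} / sqrt(1 - z).
N = 1000

# Coefficients a_k of the entire factor F(z) = e^{-z/2 - z^2/4},
# from F'(z) = (-1/2 - z/2) F(z), i.e. (k+1) a_{k+1} = -(a_k + a_{k-1}) / 2.
a = [1.0, -0.5]
for k in range(1, N):
    a.append(-(a[k] + a[k - 1]) / (2 * (k + 1)))

# Coefficients of (1 - z)^{-1/2}: binom(k - 1/2, k), via the ratio (k - 1/2) / k.
b = [1.0]
for k in range(1, N + 1):
    b.append(b[k - 1] * (k - 0.5) / k)

# Coefficients of (1 - z)^{1/2}: binom(k - 3/2, k), via the ratio (k - 3/2) / k.
d = [1.0]
for k in range(1, N + 1):
    d.append(d[k - 1] * (k - 1.5) / k)

# Exact coefficient [z^N] of the product, by convolution.
exact = sum(a[k] * b[N - k] for k in range(N + 1))

# One-term estimate: e^{-3/4} n^{-1/2} / Gamma(1/2).
approx0 = math.exp(-0.75) * N ** -0.5 / math.gamma(0.5)

# Two-term estimate with the binomial coefficients kept exactly.
approx1 = math.exp(-0.75) * (b[N] + d[N])
```

At $$n = 1000$$ the one-term estimate should agree to a relative error of order $$\frac{1}{n}$$, and the two-term estimate to order $$\frac{1}{n^2}$$.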

Proof
Proof due to Wilf.

We have:
 * $$(1 - z)^\beta f(z) = \sum_{j\geq0} f_j (1 - z)^{\beta+j}$$

and:
 * $$\sum_{j\geq0} f_j (1 - z)^{\beta+j} = \sum_{j=0}^m f_j (1 - z)^{\beta+j} + \sum_{j>m} f_j (1 - z)^{\beta+j}$$

By factoring out $$(1 - z)^{\beta+m+1}$$ from the last sum:
 * $$\sum_{j>m} f_j (1 - z)^{\beta+j} = (1 - z)^{\beta+m+1} g(z)$$

Therefore:
 * $$(1 - z)^\beta f(z) = \sum_{j=0}^m f_j (1 - z)^{\beta+j} + (1 - z)^{\beta+m+1} g(z)$$

We have to prove that:
 * 1) $$[z^n]\sum_{j=0}^m f_j (1 - z)^{\beta+j} = \sum_{j=0}^m f_j \binom{n-\beta-j-1}{n}$$
 * 2) $$[z^n](1 - z)^{\beta+m+1} g(z) = O(n^{-m-\beta-2})$$

Proof of 1
By linearity and the binomial theorem:
 * $$[z^n]\sum_{j=0}^m f_j (1 - z)^{\beta+j} = \sum_{j=0}^m f_j [z^n](1 - z)^{\beta+j} = \sum_{j=0}^m f_j \binom{\beta+j}{n}(-1)^n = \sum_{j=0}^m f_j \binom{n-\beta-j-1}{n}$$

Proof of 2
Write $$(1 - z)^{\beta+m+1} = \sum_{k\geq0} h_k z^k$$ and $$g(z) = \sum_{k\geq0} g_k z^k$$, so that $$[z^n](1 - z)^{\beta+m+1} g(z) = \sum_{k=0}^n h_k g_{n-k}$$, and split this sum at $$\frac{n}{2}$$. We know that:

 * $$h_n = [z^n](1 - z)^{\beta+m+1} = O(n^{-m-\beta-2})$$ (by Lemma 1)
 * $$g_n = [z^n]g(z) = O(\theta^n) \quad (0 < \theta < 1)$$ (because, by assumption in the theorem, $$g(z)$$ has radius of convergence $$R$$ greater than $$1$$, and Cauchy's inequality tells us that $$[z^n]g(z) = O\left(\frac{1}{R^n}\right)$$, so we can take $$\theta = \frac{1}{R} < 1$$)


 * $$\left|\sum_{0\leq k\leq\frac{n}{2}} h_k g_{n-k}\right| \leq \left\{\max_{0\leq k\leq\frac{n}{2}} |h_k|\right\}\left\{\sum_{0\leq k\leq\frac{n}{2}} A\theta^{n-k}\right\} \leq B \cdot \frac{A\theta^{\frac{n}{2}}}{1-\theta} = O(\theta^{\frac{n}{2}})$$ (for constants $$A, B$$: assuming $$-m-\beta-2 < 0$$, the $$h_k$$ are bounded, say $$|h_k| \leq B$$, and $$\sum_{0\leq k\leq\frac{n}{2}} \theta^{n-k} \leq \theta^{\frac{n}{2}} \sum_{i\geq0} \theta^i = \frac{\theta^{\frac{n}{2}}}{1-\theta}$$).


 * $$\left|\sum_{\frac{n}{2}\leq k\leq n} h_k g_{n-k}\right| \leq \left\{\max_{\frac{n}{2}\leq k\leq n} |h_k|\right\}\left\{\sum_{\frac{n}{2}\leq k\leq n} A\theta^{n-k}\right\} \leq Bn^{-m-\beta-2} = O(n^{-m-\beta-2})$$

because $$\max_{\frac{n}{2}\leq k\leq n} |h_k| = O\left(\left(\frac{n}{2}\right)^{-m-\beta-2}\right) = O(n^{-m-\beta-2})$$ and $$\sum_{\frac{n}{2}\leq k\leq n} A\theta^{n-k} \leq A\sum_{i\geq0} \theta^{i} = \frac{A}{1-\theta}$$, a constant.

Putting it all together:
 * $$[z^n](1 - z)^{\beta+m+1} g(z) = O(\theta^{\frac{n}{2}}) + O(n^{-m-\beta-2}) = O(n^{-m-\beta-2})$$

because $$\theta^{\frac{n}{2}} = o(n^{-m-\beta-2})$$: since $$0 < \theta < 1$$, the exponential decay in $$n$$ beats any fixed power of $$n$$.

Lemma 1

 * $$[z^n](1 - z)^{\beta} \sim \frac{n^{-\beta-1}}{\Gamma(-\beta)}$$

Proof:
 * $$[z^n](1 - z)^{\beta} = \binom{\beta}{n}(-1)^n = \binom{n-\beta-1}{n} = \frac{\Gamma(n-\beta)}{\Gamma(-\beta)\Gamma(n+1)} = \frac{1}{\Gamma(-\beta)(n-\beta)^{(\beta+1)}} \sim \frac{n^{-\beta-1}}{\Gamma(-\beta)}$$

where $$x^{(a)} = \frac{\Gamma(x+a)}{\Gamma(x)}$$ is the (generalized) rising factorial; the final step uses $$(n-\beta)^{(\beta+1)} = \frac{\Gamma(n+1)}{\Gamma(n-\beta)} \sim n^{\beta+1}$$.
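Lemma 1 is easy to check numerically. A minimal sketch for the sample value $$\beta = -\frac{1}{3}$$ (any $$\beta \notin \{0, 1, 2, \ldots\}$$ would do; the recurrence is my own scaffolding):

```python
import math

# Check [z^n](1 - z)^beta ~ n^{-beta-1} / Gamma(-beta) for beta = -1/3.
beta = -1.0 / 3.0
n = 100_000

# Exact coefficient [z^n](1 - z)^beta = binom(beta, n) (-1)^n,
# accumulated via the ratio d_k / d_{k-1} = (k - 1 - beta) / k.
d = 1.0
for k in range(1, n + 1):
    d *= (k - 1 - beta) / k

# Asymptotic value from Lemma 1.
approx = n ** (-beta - 1) / math.gamma(-beta)
```

The relative error should be of order $$\frac{1}{n}$$, reflecting the next term in the expansion of $$\frac{\Gamma(n-\beta)}{\Gamma(n+1)}$$.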

Szegő's theorem
We can apply a similar theorem to functions with multiple singularities. From Wilf and Szegő.

If $$f(z)$$ is analytic in $$|z| < 1$$, has a finite number of singularities $$e^{i\phi_1}, e^{i\phi_2}, \cdots, e^{i\phi_r}$$ on the unit circle $$|z| = 1$$ and in the neighbourhood of each singularity has the expansion


 * $$f(z) = \sum_{v \geq 0} c_v^{(k)}(1 - ze^{-i\phi_k})^{\alpha_k + v\beta_k} \quad (k = 1, 2, \cdots, r \text{ and } \beta_k > 0)$$

then we have the asymptotic series


 * $$[z^n]f(z) \sim \sum_{v \geq 0} \sum_{k = 1}^{r} c_v^{(k)} \binom{\alpha_k + v\beta_k}{n} (-e^{-i\phi_k})^n$$
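As a sketch of how the multiple-singularity terms combine, consider $$f(z) = \sqrt{1 - z^2}$$ (chosen here for illustration). It has singularities at $$z = 1$$ ($$\phi_1 = 0$$) and $$z = -1$$ ($$\phi_2 = \pi$$), and near each one $$f(z) = \sqrt{2}\,(1 - ze^{-i\phi_k})^{1/2} + \cdots$$, so $$c_0^{(k)} = \sqrt{2}$$ and $$\alpha_k = \frac{1}{2}$$. The leading ($$v = 0$$) terms give $$[z^n]f(z) \approx \sqrt{2}\binom{1/2}{n}\left[(-1)^n + 1\right]$$, which vanishes for odd $$n$$ (as it must, since $$f$$ is even) and is $$2\sqrt{2}\binom{1/2}{n}$$ for even $$n$$:

```python
import math

# Szego's theorem on f(z) = sqrt(1 - z^2), singularities at z = 1 and z = -1.
# Leading terms: [z^n] f ≈ sqrt(2) * binom(1/2, n) * [(-1)^n + (-e^{-i*pi})^n],
# i.e. 0 for odd n and 2*sqrt(2)*binom(1/2, n) for even n.

def binom(a, n):
    """Generalized binomial coefficient binom(a, n)."""
    r = 1.0
    for i in range(n):
        r *= (a - i) / (i + 1)
    return r

m = 5000
n = 2 * m

# Exact coefficient: [z^{2m}](1 - z^2)^{1/2} = binom(1/2, m) (-1)^m.
exact = binom(0.5, m) * (-1) ** m

# Leading Szego approximation for even n.
approx = 2 * math.sqrt(2) * binom(0.5, n)
```

The two singularities contribute equally for even $$n$$ and cancel for odd $$n$$, reproducing the vanishing odd coefficients.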