User:Dom walden/Multivariate Analytic Combinatorics/Cauchy-Hadamard Theorem and Exponential Bounds

Theorem
Let $$\alpha = (\alpha_1, \cdots, \alpha_n) \in \N^n$$ be an n-dimensional vector of natural numbers with $$||\alpha|| = \alpha_1 + \cdots + \alpha_n$$, and let $$\rho = (\rho_1, \cdots, \rho_n) \in \R^n$$ have positive entries, writing $$\rho^\alpha = \rho_1^{\alpha_1} \cdots \rho_n^{\alpha_n}$$. Then $$f(z)$$ has radius of convergence $$\rho$$ if and only if


 * $$\limsup_{||\alpha||\to\infty} \sqrt[||\alpha||]{|c_\alpha|\rho^\alpha}=1$$

where


 * $$f(z) = \sum_{\alpha\geq0}c_\alpha(z-a)^\alpha := \sum_{\alpha_1\geq0,\ldots,\alpha_n\geq0}c_{\alpha_1,\ldots,\alpha_n}(z_1-a_1)^{\alpha_1}\cdots(z_n-a_n)^{\alpha_n}$$

Proof
Set $$z = a + t\rho$$ $$(z_i = a_i + t\rho_i)$$ with $$t > 0$$, and consider the series of absolute values, grouping terms by total degree:


 * $$\sum_{\alpha \geq 0} |c_\alpha (z - a)^\alpha| = \sum_{\alpha \geq 0} |c_\alpha| \rho^\alpha t^{||\alpha||} = \sum_{\mu \geq 0} \left( \sum_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha \right) t^\mu$$

This is a power series in one variable $$t$$ which converges for $$|t| < 1$$ and diverges for $$|t| > 1$$. Therefore, by the Cauchy-Hadamard theorem for one variable


 * $$\limsup_{\mu \to \infty} \sqrt[\mu]{\sum_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha} = 1$$

For each $$\mu$$, choose $$m = m(\mu)$$ with $$|c_m| \rho^m = \max_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha$$. Since there are at most $$(\mu + 1)^n$$ multi-indices with $$||\alpha|| = \mu$$, this gives the estimate


 * $$|c_m| \rho^m \leq \sum_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha \leq (\mu + 1)^n |c_m| \rho^m$$

Because $$\sqrt[\mu]{(\mu + 1)^n} \to 1$$ as $$\mu \to \infty$$


 * $$\sqrt[\mu]{|c_m| \rho^m} \leq \sqrt[\mu]{\sum_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha} \leq \sqrt[\mu]{(\mu + 1)^n} \, \sqrt[\mu]{|c_m| \rho^m} \implies \limsup_{\mu \to \infty} \sqrt[\mu]{\sum_{||\alpha|| = \mu} |c_\alpha| \rho^\alpha} = \limsup_{\mu \to \infty} \sqrt[\mu]{|c_m| \rho^m}$$

Therefore


 * $$\limsup_{||\alpha||\to\infty} \sqrt[||\alpha||]{|c_\alpha|\rho^\alpha} = \limsup_{\mu \to \infty} \sqrt[\mu]{|c_m| \rho^m} = 1$$

Example
For the central diagonal of our example, $$\Delta \frac{1}{1 - x - y} = \sum_{n \geq 0} f_{n, n} x^n y^n$$:


 * $$\limsup_{n \to \infty} \sqrt[n]{|f_{n,n}| x^n y^n} = 1 \implies \limsup_{n \to \infty} \sqrt[n]{|f_{n,n}|} = \frac{1}{x y}$$

Over the domain of convergence ($$x + y \leq 1$$ with $$x, y > 0$$), the product $$x y$$ is largest when $$x = y = \frac{1}{2}$$, so that $$|f_{n,n}|$$ grows like $$4^n$$.

Stirling's approximation confirms this is a good estimate: $$f_{n,n} = \binom{2n}{n} \sim \frac{4^n}{\sqrt{\pi n}}$$.
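A quick numerical check of this growth rate, sketched in Python:

```python
import math

# For F = 1/(1 - x - y), the diagonal coefficients are the central
# binomial coefficients f_{n,n} = C(2n, n).
for n in (10, 100, 500):
    f = math.comb(2 * n, n)
    rate = math.exp(math.log(f) / n)   # n-th root of f_{n,n}
    print(n, rate)                     # approaches 4 as n grows

# Stirling: C(2n, n) ~ 4^n / sqrt(pi * n).  Compare on a log scale.
n = 500
lhs = math.log(math.comb(2 * n, n))
rhs = n * math.log(4) - 0.5 * math.log(math.pi * n)
print(lhs - rhs)  # close to 0
```

The comparison is done with logarithms because $$4^{500}$$ is near the limit of double-precision floats.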

But what about a diagonal along an arbitrary ray, like the above example $$\Delta^{(2, 1)} \frac{1}{1 - x - y}$$?


 * $$\limsup_{n \to \infty} \sqrt[3n]{|f_{2n,n}| x^{2n} y^n} = 1 \implies \limsup_{n \to \infty} \sqrt[n]{|f_{2n,n}|} = \frac{1}{x^2 y}$$

(here $$||n\textbf{r}|| = 3n$$).

If we keep $$x = y = \frac{1}{2}$$ we only get the bound $$\limsup_{n \to \infty} \sqrt[n]{|f_{2n,n}|} \leq \frac{1}{x^2 y} = 8$$.

This is not a tight estimate.

Better to use $$x = \frac{2}{3}, y = \frac{1}{3}$$, which gives $$\limsup_{n \to \infty} \sqrt[n]{|f_{2n,n}|} = \frac{1}{x^2 y} = \frac{27}{4} = 6.75$$, the true exponential growth rate of $$f_{2n,n} = \binom{3n}{n}$$.
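A similar numerical check along the ray $$\textbf{r} = (2, 1)$$, sketched in Python (the coefficient of $$x^{2n} y^n$$ in $$\frac{1}{1-x-y}$$ is $$\binom{3n}{n}$$):

```python
import math

# For F = 1/(1 - x - y) the coefficient of x^{2n} y^n is C(3n, n).
for n in (10, 100, 500):
    f = math.comb(3 * n, n)
    rate = math.exp(math.log(f) / n)   # n-th root of f_{2n,n}
    print(n, rate)                     # approaches 27/4 = 6.75

# Both points give valid upper bounds on the growth rate,
# but only x = 2/3, y = 1/3 is tight:
x, y = 0.5, 0.5
print(1 / (x * x * y))   # 8.0 (valid but loose)
x, y = 2 / 3, 1 / 3
print(1 / (x * x * y))   # approximately 6.75 (tight)
```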

Convex optimisation
In what follows, the function we are interested in is the quotient $$F(\textbf{z}) = \frac{G(\textbf{z})}{H(\textbf{z})}$$.

We therefore want to find the $$\textbf{w}$$ in the closure of the domain of convergence of $$F(\textbf{z})$$ that minimises $$\textbf{w}^{-\textbf{r}}$$, since $$\textbf{w}^{-\textbf{r}}$$ controls the exponential growth of the coefficients along the ray $$\textbf{r}$$.

The subject of convex optimisation already has the tools for this, but in order to use them we need to transform the domain of convergence into a convex set and $$\textbf{w}^{-\textbf{r}}$$ into a convex function.


Fortunately, the logarithmic image of the domain of convergence of a power series of a complex function is convex.

Therefore, we define


 * $$Relog(\textbf{z}) = (\log |z_1|, \cdots, \log |z_d|)$$


 * $$amoeba(H) = \{Relog(\textbf{z}) : H(\textbf{z}) = 0, \textbf{z} \in (\C^*)^d\}$$

The (logarithmic image of the) domain of convergence of our function lies in the complement of this amoeba


 * $$amoeba(H)^c = \R^d \setminus amoeba(H)$$

The complement may consist of several connected components, one for each Laurent series expansion. Denoting the component we are interested in by $$B$$, the corresponding domain of convergence is


 * $$\mathcal{D} = Relog^{-1}(B)$$
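For the running example $$H = 1 - x - y$$ and the power series at the origin, the domain of convergence is $$|x| + |y| < 1$$, so $$B = \{(u, v) : e^u + e^v < 1\}$$. A short Python sketch spot-checks that this component is convex, via random midpoints:

```python
import math
import random

# B = Relog image of the domain of convergence of 1/(1 - x - y)
# expanded at the origin: { (u, v) : e^u + e^v < 1 }.
def in_B(u, v):
    return math.exp(u) + math.exp(v) < 1

# Sample points of B, then check that midpoints stay in B
# (a necessary condition for convexity).
random.seed(0)
pts = []
while len(pts) < 200:
    u, v = random.uniform(-10, 0), random.uniform(-10, 0)
    if in_B(u, v):
        pts.append((u, v))

for (u1, v1) in pts:
    for (u2, v2) in pts:
        assert in_B((u1 + u2) / 2, (v1 + v2) / 2)
print("midpoint convexity holds on the sample")
```

This is only a numerical spot-check; the convexity itself follows from the convexity of $$e^u + e^v$$.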



Taking logarithms, minimising $$\textbf{w}^{-\textbf{r}}$$ is equivalent to minimising $$h(\textbf{w}) = -\textbf{r} \cdot Relog(\textbf{w})$$, which is a linear, and hence convex, function of the coordinates $$Relog(\textbf{w})$$.

So we now have a problem of minimising a convex function over a convex set.
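As a sketch of this optimisation for the running example $$H = 1 - x - y$$ with $$\textbf{r} = (2, 1)$$: we minimise $$\textbf{w}^{-\textbf{r}} = x^{-2} y^{-1}$$ over the positive points with $$x + y \leq 1$$. The minimum lies on the boundary $$x + y = 1$$, so a simple one-dimensional grid search (in place of a proper convex optimiser) recovers the minimiser found earlier:

```python
# Minimise x^{-2} y^{-1} for H = 1 - x - y with direction r = (2, 1).
# The minimum lies on the boundary x + y = 1, so substitute
# y = 1 - x and search over x in (0, 1).
def objective(x):
    y = 1.0 - x
    return x ** -2 * y ** -1

# A fine grid is enough for this sketch.
xs = [i / 100000 for i in range(1, 100000)]
best_x = min(xs, key=objective)
print(best_x, 1.0 - best_x)   # close to (2/3, 1/3)
print(objective(best_x))      # close to 27/4 = 6.75
```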

We want to find the point of $$\bar B$$ admitting a supporting hyperplane with outward-facing normal $$-\nabla h = \textbf{r}$$ (in the logarithmic coordinates).



Critical point equations
This happens when the supporting hyperplane defined above coincides with the tangent plane of the variety $$\{H = 0\}$$ at $$\textbf{w}$$, whose normal is $$\nabla H(\textbf{w})$$.

This means the two normal vectors are linearly dependent, and therefore the matrix


 * $$\begin{pmatrix} \frac{\partial H}{\partial z_1}(\textbf{w}) & \cdots & \frac{\partial H}{\partial z_d}(\textbf{w}) \\ r_1/w_1 & \cdots & r_d/w_d \end{pmatrix}$$

is rank deficient, i.e. all of its $$2 \times 2$$ minors vanish. This is equivalent to a system of equations referred to as the critical point equations


 * $$H(\textbf{w}) = 0 \quad r_j w_1 \frac{\partial H}{\partial z_1}(\textbf{w}) - r_1 w_j \frac{\partial H}{\partial z_j}(\textbf{w}) = 0 \quad (2 \leq j \leq d).$$
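For the running example $$H = 1 - x - y$$ with $$\textbf{r} = (2, 1)$$, these equations are linear and can be solved by hand; the Python sketch below verifies that $$\textbf{w} = (\frac{2}{3}, \frac{1}{3})$$ satisfies them, using exact rational arithmetic:

```python
from fractions import Fraction

# Critical point equations for H = 1 - x - y with r = (2, 1):
#   H(w) = 0                        =>  1 - x - y = 0
#   r_2 x H_x - r_1 y H_y = 0,
#   with H_x = H_y = -1             =>  -x + 2y = 0
# Solving the linear system gives w = (2/3, 1/3).
x, y = Fraction(2, 3), Fraction(1, 3)
assert 1 - x - y == 0                    # H(w) = 0
assert 1 * x * (-1) - 2 * y * (-1) == 0  # second critical point equation
print(x, y, 1 / (x ** 2 * y))            # exponential growth rate 27/4
```

This recovers the same point and growth rate $$\frac{27}{4}$$ as the example above.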

Caution
The variety $$\{H = 0\}$$ needs to be smooth at $$\textbf{w}$$, i.e. $$\nabla H(\textbf{w}) \neq 0$$, for the tangent plane above to be defined.

Also, the resulting bound on the exponential growth rate is not necessarily tight.