Fractals/Mathematics/Newton method


Newton's method is described in many sources, among others:
 * calculus books
 * numerical methods books
 * other sources

types
Newton's method can be used for finding successively better approximations to one root (zero) x of a function f:


 * $$x : f(x) = 0 \,.$$

If one wants to find another root, one must apply the method again with a different initial point.

How to find all roots?

Newton's method can be applied to:
 * a real-valued function
 * a complex-valued function
 * a system of equations

systems of nonlinear equations
One may also use Newton's method to solve systems of
 * 2 (non-linear) equations
 * with 2 complex variables : $$x_1, x_2$$

$$ \begin{cases} F_1(x_1, x_2) = 0 \\ F_2(x_1, x_2) = 0 \end{cases} $$

It can be expressed more compactly using vector notation:

$$ \mathbf F(\mathbf x) = 0$$

where $$ \mathbf x$$ is a vector of 2 complex variables :

$$ \mathbf x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

and $$ \mathbf F$$ is a vector of functions ( or is a vector-valued function of the vector variable x ) :

$$ \mathbf F = \begin{bmatrix} F_1 \\ F_2 \end{bmatrix}$$

The solution can be found by iteration:

$$\mathbf x^{(k+1)} = \mathbf x^{(k)} + \mathbf s ,\ k=0,1,2,\ldots. $$

where $$\mathbf s$$ is an increment ( which is now a vector of 2 components ):

$$ \mathbf s = \mathbf x^{(k+1)}- \mathbf x^{(k)} $$

The increment can be computed by:

$$\mathbf s = -J^{-1}(\mathbf x^{(k)}) \mathbf F(\mathbf x^{(k)}) $$

Here $$J$$ is the Jacobian matrix of $$\mathbf F$$ at $$\mathbf x^{(k)}$$:

$$J_{\mathbf F} (\mathbf x^{(k)}) = (F'(\mathbf x))_{ij} = \dfrac{\partial F_i}{\partial x_j}(\mathbf x)     = \begin{bmatrix}   \dfrac{\partial F_1}{\partial x_1} & \dfrac{\partial F_1}{\partial x_2}\\[1em] \dfrac{\partial F_2}{\partial x_1} & \dfrac{\partial F_2}{\partial x_2} \end{bmatrix}$$

and $$J^{-1}$$ is the inverse of the Jacobian matrix $$J$$.

In case of
 * a small number of equations: use the inverse matrix.
 * a large number of equations: rather than actually computing the inverse of this matrix, one can save time by solving the system of linear equations $$J_F(x_n) (x_{n+1} - x_n) = -F(x_n)$$ for the unknown $$x_{n+1} - x_n$$.

Algorithm in pseudocode:
 * start with an initial approximation ( guess ) $$\mathbf x^{(0)}$$, set $$k = 0$$
 * repeat until convergence ( iterate ):
 * solve: $$\mathbf s = -J^{-1}(\mathbf x^{(k)}) \mathbf F(\mathbf x^{(k)}) $$
 * compute: $$\mathbf x^{(k+1)} = \mathbf x^{(k)} + \mathbf s $$
 * $$ k = k + 1 $$

Stopping criteria:
 * absolute error: $$ |\mathbf s| < 10^{-12} $$
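A minimal C sketch of this algorithm (not from the original text): the example system $$F_1, F_2$$, the initial guess and the tolerances are arbitrary choices for illustration, and the 2x2 Jacobian is inverted explicitly.

/* C : Newton's method for a system of 2 equations in 2 complex variables.
   The system F1 = x1^2 + x2 - 2, F2 = x1 + x2^2 - 2 is made up for illustration.
   Compile with: gcc newton2.c -lm */
#include <stdio.h>
#include <complex.h>

void F(complex double x1, complex double x2, complex double *F1, complex double *F2)
{
  *F1 = x1 * x1 + x2 - 2.0; /* F1(x1,x2) */
  *F2 = x1 + x2 * x2 - 2.0; /* F2(x1,x2) */
}

void Jacobian(complex double x1, complex double x2, complex double J[2][2])
{
  J[0][0] = 2.0 * x1; /* dF1/dx1 */
  J[0][1] = 1.0;      /* dF1/dx2 */
  J[1][0] = 1.0;      /* dF2/dx1 */
  J[1][1] = 2.0 * x2; /* dF2/dx2 */
}

int main(void)
{
  complex double x1 = 2.0, x2 = 0.5; /* initial guess x^(0) */
  for (int k = 0; k < 100; ++k) {
    complex double F1, F2, J[2][2];
    F(x1, x2, &F1, &F2);
    Jacobian(x1, x2, J);
    complex double det = J[0][0] * J[1][1] - J[0][1] * J[1][0];
    if (cabs(det) < 1e-30) break;            /* singular Jacobian : give up */
    /* s = -J^(-1) F , using the explicit inverse of a 2x2 matrix */
    complex double s1 = -( J[1][1] * F1 - J[0][1] * F2) / det;
    complex double s2 = -(-J[1][0] * F1 + J[0][0] * F2) / det;
    x1 += s1;                                /* x^(k+1) = x^(k) + s */
    x2 += s2;
    if (cabs(s1) + cabs(s2) < 1e-12) break;  /* stopping criterion */
  }
  printf("x1 = %f %+f i\nx2 = %f %+f i\n", creal(x1), cimag(x1), creal(x2), cimag(x2));
  return 0;
}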

What is needed?

 * function f ( a differentiable function )
 * its derivative f'
 * starting point x0 ( which should be in the basin of attraction of the root )
 * stopping rule ( criterion to stop the iteration )

stopping rule
$$|x_{n+1}-x_n|<\epsilon\;$$

Pitfalls or failure
Why does the method fail to find a root?
 * the method has a cycle and never converges
 * inflection point
 * local minimum
 * the method is diverging, not converging
 * multiple root ( multiplicity of the root > 1, for example : f(x) = f'(x) = 0 ). Slow approximation, the derivative tends to zero, trouble in the division step ( do not divide by zero ). Use the modified Newton's method
 * slow convergence
 * bad value of the initial approximation ( an initial guess far from the root )
 * a function whose derivative vanishes near a root, or whose second derivative is unbounded near a root

"For most problems, the application of Newton's method in double precision has 2-3 steps of global orientation and then 4-6 steps of quadratic convergence before the precision of the double floating point format is exceeded. Thus, one can be even bolder, if after 10 steps the iteration does not converge, the initial point was bad, be it that it leads to a periodic orbit or divergence to infinity. Most likely, close-by initial points will behave similarly, so make a non-trivial change in the initial point for the start of the next run of the iteration."LutzL

Pseudocode
The following is an example of using Newton's method to find a root of a differentiable function.

The initial guess will be $$x_0 = 1$$ and the function will be $$f(x) = x^2 - 2$$ so that $$f'(x) = 2x$$.

Each new iterate of Newton's method will be denoted by $$x_{n+1}$$. We will check during the computation whether the denominator becomes too small (smaller than some small $$\epsilon$$), which would be the case if $$f'(x_n) \approx 0$$, since otherwise a large amount of error could be introduced.
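A minimal C sketch of this example (not from the original text; the tolerance, the epsilon used for the denominator test and the iteration limit are arbitrary values):

/* C : Newton's method for f(x) = x^2 - 2 with x0 = 1 */
#include <stdio.h>
#include <math.h>

double f(double x)      { return x * x - 2.0; }
double fprime(double x) { return 2.0 * x; }

int main(void)
{
  double x = 1.0;                  /* initial guess x0 */
  const double tolerance = 1e-12;  /* stop when |x_{n+1} - x_n| < tolerance */
  const double epsilon = 1e-14;    /* do not divide by a denominator smaller than this */
  const int max_iterations = 20;

  for (int n = 0; n < max_iterations; ++n) {
    double d = fprime(x);
    if (fabs(d) < epsilon) {       /* f'(x_n) is too close to zero */
      printf("denominator too small, giving up\n");
      return 1;
    }
    double x_next = x - f(x) / d;  /* Newton step */
    if (fabs(x_next - x) < tolerance) {
      printf("root = %.15f found after %d steps\n", x_next, n + 1);
      return 0;
    }
    x = x_next;
  }
  printf("did not converge after %d steps\n", max_iterations);
  return 1;
}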

code

 * c
 * matlab : newtonm function
 * Maxima CAS
 * scilab
 * Mathematica
 * Animation engine for explanatory math videos in python

error analysis

 * math.stackexchange question: what-is-the-equation-for-the-error-of-the-newton-raphson-method
 * wolframalpha: Newton's method sin z = 4
 * wolframalpha : using Newton's method find (53)^(1/3)
 * solve x^5-2 using newton method with x0=2 to 50 digits

=Applications of Newton's method in the case of iterated complex quadratic polynomials=

recurrence relations
A recurrence relation is an equation ( map ) that recursively defines a sequence of points ( for example an orbit ). To compute the sequence one needs:
 * starting point
 * map (equation = function = recurrence relation )

definitions
Define the iterated function $$f^n$$ by:

$$\begin{align} f^0 (z, c) &= z \\ f^1 (z, c) &= z^2 + c \\ f^{m+1} (z, c) &= f (f^m (z, c), c) \end{align}$$

Note also the auxiliary function:

$$ F(z) = f(z) - z $$

which is used for :
 * finding periodic points

Pick arbitrary names for the iterated function and its derivatives:
 * $$A_m = f^m (z, c)$$
 * $$B_m = \dfrac{\partial}{\partial z} f^m (z, c)$$
 * $$C_m = \dfrac{\partial^2}{\partial z^2} f^m (z, c)$$
 * $$D_m = \dfrac{\partial}{\partial c} f^m (z, c)$$
 * $$E_m = \dfrac{\partial}{\partial c} \dfrac{\partial}{\partial z} f^m (z, c)$$
These derivatives can be computed by recurrence relations, derived below.

Derivation of recurrence relations
First, basic rules:
 * the iterated function $$f^{m+1} (z, c) = f (f^m (z, c), c)$$ is a function composition
 * apply the chain rule for composite functions: if $$f(x) = h(g(x))$$ then $$f'(x) = h'(g(x)) \cdot g'(x). $$

$$ A_0 \quad \xrightarrow{definition\ of\ A}\ f^0 (z, c) \quad \xrightarrow{definition\ of\ f}\ \quad z $$

$$B_0 \quad \xrightarrow{definition\ of\ B}\ \dfrac{\partial}{\partial z} f^0 (z, c) \xrightarrow{definition\ of\ f}\ \dfrac{\partial}{\partial z} z \xrightarrow{derivative}\ 1 $$

$$C_0 \quad \xrightarrow{definition\ of\ C}\ \dfrac{\partial}{\partial z} \dfrac{\partial}{\partial z} f^0 (z, c) \xrightarrow{definition\ of\ f}\  \dfrac{\partial}{\partial z}\dfrac{\partial}{\partial z} z \xrightarrow{derivative}  \dfrac{\partial}{\partial z} 1 \xrightarrow{derivative} 0 $$

$$D_0 \quad \xrightarrow{definition\ of\ D } \dfrac{\partial}{\partial c} f^0 (z, c) \xrightarrow{ definition\ of\  f } \dfrac{\partial}{\partial c} z \xrightarrow{ derivative} 0$$

$$E_0 \quad \xrightarrow{ definition\ of\ E} \dfrac{\partial}{\partial c} \dfrac{\partial}{\partial z} f^0 (z, c)   \quad \xrightarrow{ definition\ of\ f} \dfrac{\partial}{\partial c} \dfrac{\partial}{\partial z} z  \quad \xrightarrow{ derivative } \dfrac{\partial}{\partial c} 1  \quad \xrightarrow{ derivative } 0 $$

$$A_{m+1} \quad \xrightarrow{definition\ of\ A} f^{m+1} (z, c)  \quad \xrightarrow{definition\ of\ f} f (f^m (z, c), c)  \quad \xrightarrow{definition\ of\ A} f (A_m, c)  \quad \xrightarrow{definition\ of\ f} A_m ^ 2 + c $$

$$B_{m+1} \quad \xrightarrow{definition\ of\ B} \dfrac{\partial}{\partial z} A_{m+1}  \quad \xrightarrow{definition\ of\ A} \dfrac{\partial}{\partial z} (A_m ^ 2 + c)  \quad \xrightarrow{distributivity} \dfrac{\partial}{\partial z} A_m ^ 2 + \dfrac{\partial}{\partial z} c  \quad \xrightarrow{constant\ derivative} \dfrac{\partial}{\partial z} A_m ^ 2 + 0  \quad \xrightarrow{zero } \dfrac{\partial}{\partial z} A_m ^ 2   \quad \xrightarrow{chain\ rule} 2 A_m (\dfrac{\partial}{\partial z} A_m)  \quad \xrightarrow{definition\ of\ B} 2 A_m B_m $$

$$C_{m+1} \quad \xrightarrow{definition\ of\ C} \dfrac{\partial}{\partial z} B_{m+1}  \quad \xrightarrow{definition\ of\ B} \dfrac{\partial}{\partial z} (2 A_m B_m)  \quad \xrightarrow{linearity} 2 \dfrac{\partial}{\partial z} (A_m B_m)  \quad \xrightarrow{product\ rule} 2 ( (\dfrac{\partial}{\partial z} A_m) B_m + A_m (\dfrac{\partial}{\partial z} B_m) )  \quad \xrightarrow{definition\ of\ B} 2 ( B_m B_m + A_m (\dfrac{\partial}{\partial z} B_m) )  \quad \xrightarrow{algebra} 2 ( B_m ^ 2 + A_m (\dfrac{\partial}{\partial z} B_m) )  \quad \xrightarrow{definition\ of\ C} 2 ( B_m ^ 2 + A_m C_m ) $$

$$D_{m+1} \quad \xrightarrow{definition\ of\ D} \dfrac{\partial}{\partial c} A_{m+1}  \quad \xrightarrow{definition\ of\ A} \dfrac{\partial}{\partial c} (A_m ^ 2 + c)  \quad \xrightarrow{distributivity} \dfrac{\partial}{\partial c} A_m ^ 2 + \dfrac{\partial}{\partial c} c  \quad \xrightarrow{derivative} \dfrac{\partial}{\partial c} A_m ^ 2 + 1  \quad \xrightarrow{chain\ rule} 2 A_m (\dfrac{\partial}{\partial c} A_m) + 1  \quad \xrightarrow{definition\ of\ D} 2 A_m D_m + 1 $$

$$E_{m+1} \quad \xrightarrow{definition\ of\ E} \dfrac{\partial}{\partial c} B_{m+1}  \quad \xrightarrow{definition\ of\ B} \dfrac{\partial}{\partial c} (2 A_m B_m)  \quad \xrightarrow{linearity} 2 \dfrac{\partial}{\partial c} (A_m B_m)  \quad \xrightarrow{product\ rule} 2 ( A_m (\dfrac{\partial}{\partial c} B_m) + (\dfrac{\partial}{\partial c} A_m) B_m )  \quad \xrightarrow{definition\ of\ E} 2 ( A_m E_m + (\dfrac{\partial}{\partial c} A_m) B_m )  \quad \xrightarrow{definition\ of\ D} 2 ( A_m E_m + D_m B_m ) $$

Computing
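Using complex arithmetic, the recurrences can be evaluated together in one loop. A minimal C sketch (not from the original text; the parameter c, the starting point z and the period are arbitrary values chosen for illustration). Note the order of the updates: each right-hand side must use the values from the previous step, so the derivatives are updated before A:

/* C : A, B, C, D, E for f(z,c) = z^2 + c by the recurrence relations above */
#include <stdio.h>
#include <complex.h>

int main(void)
{
  complex double c = -0.75 + 0.1 * I; /* arbitrary parameter, for illustration */
  complex double z = 0.0;             /* starting point z */
  int period = 3;                     /* number of iterations p */

  complex double A = z;   /* A_0 = f^0(z,c) = z       */
  complex double B = 1.0; /* B_0 = d/dz f^0      = 1  */
  complex double C = 0.0; /* C_0 = d2/dz2 f^0    = 0  */
  complex double D = 0.0; /* D_0 = d/dc f^0      = 0  */
  complex double E = 0.0; /* E_0 = d/dc d/dz f^0 = 0  */

  for (int m = 0; m < period; ++m) {
    /* derivatives first : they use the old values of A, B, D, E */
    C = 2.0 * (B * B + A * C); /* C_{m+1} = 2 ( B_m^2 + A_m C_m )   */
    E = 2.0 * (A * E + D * B); /* E_{m+1} = 2 ( A_m E_m + D_m B_m ) */
    B = 2.0 * A * B;           /* B_{m+1} = 2 A_m B_m               */
    D = 2.0 * A * D + 1.0;     /* D_{m+1} = 2 A_m D_m + 1           */
    A = A * A + c;             /* A_{m+1} = A_m^2 + c               */
  }
  printf("A = %g %+g i\n", creal(A), cimag(A));
  printf("B = %g %+g i   D = %g %+g i\n", creal(B), cimag(B), creal(D), cimag(D));
  return 0;
}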
Or, without complex numbers, the real and imaginary parts of the recurrences can be derived with Maxima CAS:

/* Maxima CAS */
(%i1) a:ax+ay*%i;
(%o1) %i ay + ax
(%i2) b:bx+by*%i;
(%o2) %i by + bx
(%i3) bn:2*a*b;
(%o3) 2 (%i ay + ax) (%i by + bx)
(%i4) realpart(bn);
(%o4) 2 (ax bx - ay by)
(%i5) imagpart(bn);
(%o5) 2 (ax by + ay bx)
(%i6) c:cx+cy*%i;
(%o6) %i cy + cx
(%i7) an:a*a+c;
(%o7) %i cy + cx + (%i ay + ax)^2
(%i8) realpart(an);
(%o8) cx - ay^2 + ax^2
(%i9) imagpart(an);
(%o9) cy + 2 ax ay

so the new values are ( note that the right-hand sides use the previous values of ax, ay, bx, by, so temporary variables are needed when updating in place ):
 * bx = 2 (ax bx - ay by)
 * by = 2 (ax by + ay bx)
 * ax = cx - ay^2 + ax^2
 * ay = cy + 2 ax ay

where:
 * a = ax + ay*i
 * b = bx + by*i
 * c = cx + cy*i

center
To use it in a GUI program:
 * click on the menu item: nucleus
 * using the mouse, mark a rectangular region around the center of the hyperbolic component

A center ( or nucleus ) $$c = n$$ of period $$p$$ satisfies :


 * $$f^p (0, n) = 0$$

Applying Newton's method in one variable:

$$ n_{m+1} = N(n_m) = n_m - \frac{f^p(0, n_m)}{\dfrac{\partial }{\partial c}f^p(0, n_m)} = n_m - \frac{A_p (0, n_m)}{D_p (0, n_m)} $$

where the initial estimate $$n_0$$ is arbitrarily chosen ( with $$|n_0| < 2.0$$ )

stopping rule: $$A_p=0 \Leftrightarrow n_{m+1} = n_m$$
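A minimal C sketch of this iteration (not from the original text; the initial estimate, the period and the stopping tolerance are arbitrary):

/* C : Newton's method for the nucleus : n_{m+1} = n_m - A_p(0,n_m) / D_p(0,n_m) */
#include <stdio.h>
#include <complex.h>

complex double nucleus_step(complex double n, int period)
{
  complex double A = 0.0; /* A_0 = z = 0 ( the critical point ) */
  complex double D = 0.0; /* D_0 = d/dc f^0 = 0 */
  for (int m = 0; m < period; ++m) {
    D = 2.0 * A * D + 1.0; /* D_{m+1} = 2 A_m D_m + 1 */
    A = A * A + n;         /* A_{m+1} = A_m^2 + n     */
  }
  return n - A / D;        /* one Newton step */
}

int main(void)
{
  complex double n = -0.1 + 0.7 * I; /* arbitrary initial estimate with |n| < 2 */
  int period = 3;
  for (int m = 0; m < 64; ++m) {
    complex double n_next = nucleus_step(n, period);
    if (cabs(n_next - n) < 1e-15) { n = n_next; break; } /* stopping rule */
    n = n_next;
  }
  printf("nucleus of period %d near the initial estimate : %.14f %+.14f i\n",
         period, creal(n), cimag(n));
  return 0;
}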

boundary
The boundary of the component with center $$n$$ of period $$p$$ can be parameterized by internal angles.

The boundary point $$c = b$$ at angle $$t$$ (measured in turns) satisfies the system of 2 equations:

$$ \begin{cases} f^p(w, b) - w &= 0 \\ \dfrac{\partial }{\partial z}f^p(w, b) - e^{2 \pi i t} &= 0 \end{cases} $$

Defining a function of two complex variables:

$$\begin{align} g \begin{pmatrix} z \\ c \end{pmatrix} &= \begin{pmatrix} f^p(z, c) - z \\ \dfrac{\partial }{\partial z}f^p(z, c) - e^{2 \pi i t} \end{pmatrix} \end{align}$$

and applying Newton's method in two variables:

$$\begin{align} \boldsymbol{v}_0 = \begin{pmatrix} n \\ n \end{pmatrix} \\ J_g(\boldsymbol{v}_m) (\boldsymbol{v}_{m+1} - \boldsymbol{v}_m) = -g(\boldsymbol{v}_m) \end{align}$$

where

$$\begin{align} J_g = \begin{pmatrix} \dfrac{\partial}{\partial z} (f^p(z, c) - z) & \dfrac{\partial}{\partial c} (f^p(z, c) - z) \\ \dfrac{\partial}{\partial z} (\dfrac{\partial }{\partial z}f^p(z, c) - e^{2 \pi i t}) & \dfrac{\partial}{\partial c} (\dfrac{\partial }{\partial z}f^p(z, c) - e^{2 \pi i t}) \end{pmatrix} \end{align}$$

This can be expressed using the recurrence relations as:

$$\begin{align} \begin{pmatrix} w_0 \\ b_0 \end{pmatrix} &= \begin{pmatrix} n \\ n \end{pmatrix} \\ \begin{pmatrix} B_p - 1 & D_p \\ C_p & E_p \end{pmatrix} \begin{pmatrix} w_{m+1} - w_m \\ b_{m+1} - b_m \end{pmatrix} &= - \begin{pmatrix} A_p - w_m \\ B_p - e^{2 \pi i t} \end{pmatrix} \end{align}$$

where $$\{A,B,C,D,E\}_p$$ are evaluated at $$(w_m, b_m)$$.
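A minimal C sketch of one Newton step for the boundary point (not from the original text). It evaluates the recurrences above at $$(w_m, b_m)$$ and solves the 2x2 linear system with the explicit inverse; the period, angle and starting values in main() come from the example below:

/* C : one Newton step for the boundary point of a period p component
       at internal angle t ( measured in turns ) */
#include <stdio.h>
#include <math.h>
#include <complex.h>

void boundary_step(int period, double t, complex double *w, complex double *b)
{
  complex double A = *w, B = 1.0, C = 0.0, D = 0.0, E = 0.0;
  for (int m = 0; m < period; ++m) {   /* recurrences evaluated at (w_m, b_m) */
    C = 2.0 * (B * B + A * C);
    E = 2.0 * (A * E + D * B);
    B = 2.0 * A * B;
    D = 2.0 * A * D + 1.0;
    A = A * A + *b;
  }
  /* linear system :  ( B_p - 1   D_p ) ( dw )     ( A_p - w            )
                      ( C_p       E_p ) ( db ) = - ( B_p - e^(2 pi i t) ) */
  complex double g1  = A - *w;
  complex double g2  = B - cexp(2.0 * M_PI * I * t);
  complex double det = (B - 1.0) * E - D * C;
  complex double dw  = -( E * g1 - D * g2) / det;
  complex double db  = -(-C * g1 + (B - 1.0) * g2) / det;
  *w += dw;
  *b += db;
}

int main(void)
{
  /* period 3, internal angle t = 0, starting from the nucleus of the example below */
  complex double n = -0.12256116687665 + 0.74486176661974 * I;
  complex double w = n, b = n;          /* v_0 = (n, n) */
  for (int m = 0; m < 64; ++m)
    boundary_step(3, 0.0, &w, &b);
  printf("boundary point b = %.14f %+.14f i\n", creal(b), cimag(b));
  return 0;
}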

Example
"The process is for numerically finding one boundary point on one component at a time - you can't solve for the whole boundary of all the components at once. So pick and fix one particular nucleus n and one particular internal angle t " (Claude Heiland-Allen )

Let's try to find a boundary point ( bond point )

$$c = b$$

of hyperbolic component of Mandelbrot set for :
 * period p=3
 * angle t=0
 * center ( nucleus ) c = n = -0.12256116687665+0.74486176661974i

There is only one such point.

The initial estimate (first guess) will be the center ( nucleus ) $$c = n$$ of this component. This center ( a complex number ) will be stored in a vector of complex numbers containing two copies of that nucleus:

$$\begin{align} \boldsymbol{v}_0 = \begin{pmatrix} n\\ n \end{pmatrix} = \begin{pmatrix} -0.12256116687665+0.74486176661974i\\ -0.12256116687665+0.74486176661974i \end{pmatrix} \end{align}$$

The boundary point at angle $$t$$ (measured in turns) satisfies the system of 2 equations:

$$ \begin{cases} f^3(z, c) - z &= 0 \\ \dfrac{\partial }{\partial z}f^3(z, c) - e^{2 \pi i 0} &= 0 \end{cases} $$

so :

$$ \begin{cases} z^8 +4cz^6 +6c^2z^4+2cz^4 +4c^3z^2 +4c^2z^2 -z+c^4 +2c^3 +c^2+c = 0 \\ 8z^7 +24cz^5 +24c^2z^3 +8cz^3 +8c^3z +8c^2z-1 = 0 \end{cases} $$

Then the function g is:

$$\begin{align} g(\boldsymbol{v}) = \begin{cases} z^8 +4cz^6 +6c^2z^4+2cz^4 +4c^3z^2 +4c^2z^2 -z+c^4 +2c^3 +c^2+c \\ 8z^7 +24cz^5 +24c^2z^3 +8cz^3 +8c^3z +8c^2z-1 \end{cases} \end{align}$$

The Jacobian matrix $$J$$ is:

$$ J_g(\boldsymbol{v}) = \begin{pmatrix} 8z^7+24cz^5+24c^2z^3+8cz^3+8c^3z+8c^2z-1 & 4z^6+12cz^4+2z^4+12c^2z^2+8cz^2+4c^3+6c^2+2c+1\\ 56z^6+120cz^4+72c^2z^2+24cz^2+8c^3+8c^2 & 24z^5+48cz^3+8z^3+24c^2z+16cz \end{pmatrix} $$

$$ J^{-1}_g(\boldsymbol{v}) = \begin{pmatrix} \frac{24z^5+48cz^3+8z^3+24c^2z+16cz}{d} & \frac{-4z^6-12cz^4-2z^4-12c^2z^2-8cz^2-4c^3-6c^2-2c-1}{d}\\ \frac{-56z^6-120cz^4-72c^2z^2-24cz^2-8c^3-8c^2}{d} & \frac{8z^7+24cz^5+24c^2z^3+8cz^3+8c^3z+8c^2z-1}{d} \end{pmatrix} $$

where the denominator d is:

$$d = (24z^5+48cz^3+8z^3+24c^2z+16cz)(8z^7+24cz^5+24c^2z^3+8cz^3+8c^3z+8c^2z-1)+(-56z^6-120cz^4-72c^2z^2-24cz^2-8c^3-8c^2)(4z^6+12cz^4+2z^4+12c^2z^2+8cz^2+4c^3+6c^2+2c+1)$$

Then find a better approximation of the point c = b using the iteration:

$$ \boldsymbol{v}_{m+1} = \boldsymbol{v}_m - J^{-1}_g(\boldsymbol{v}_m) \, g(\boldsymbol{v}_m) $$

using $$ \boldsymbol{v}_0$$ as the starting point:

$$ \boldsymbol{v}_0$$

$$ \boldsymbol{v}_1 = \boldsymbol{v}_0 - J^{-1}_g(\boldsymbol{v}_0) \, g(\boldsymbol{v}_0)$$

$$ \boldsymbol{v}_2 = \boldsymbol{v}_1 - J^{-1}_g(\boldsymbol{v}_1) \, g(\boldsymbol{v}_1)$$

$$ ...$$

$$ \boldsymbol{v}_{m+1} = \boldsymbol{v}_m - J^{-1}_g(\boldsymbol{v}_m) \, g(\boldsymbol{v}_m)$$

size
Newton's method ( provided the initial estimate is sufficiently good and the function is well-behaved ) gives successively more accurate approximations. How can one tell when the approximation is good enough? One way might be to compare the center with points on its boundary, and continue to increase precision until this distance is accurate to enough bits.

Algorithm:


 * 1) given a center location estimate $$n_m$$ accurate to $$P$$ bits of precision
 * 2) compute a boundary location estimate $$b_m$$ accurate to the same precision, using center $$n_m$$ as starting value
 * 3) compute $$|n_m - b_m|$$ to find an estimate of the size of the component
 * 4) if it isn't zero, and if it is accurate enough (to a few 10s of bits, perhaps) then finish
 * 5) otherwise increase precision, refine center estimate to new precision, try again from the top

Measuring effective accuracy of the distance between two very close points might be done by comparing floating point exponents with floating point mantissa precision.
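One possible way to measure this (an assumption on my part, not a method prescribed above) is to compare binary exponents obtained with frexp(): the cancellation in $$n_m - b_m$$ costs roughly the difference between the exponents of $$|n_m|$$ and $$|n_m - b_m|$$ bits. A minimal C sketch with illustrative values:

/* C : estimate how many accurate bits the distance |n - b| has */
#include <stdio.h>
#include <math.h>
#include <complex.h>

int distance_accuracy_bits(complex double n, complex double b, int precision)
{
  int exp_n, exp_d;
  frexp(cabs(n), &exp_n);              /* binary exponent of |n|     */
  frexp(cabs(n - b), &exp_d);          /* binary exponent of |n - b| */
  return precision - (exp_n - exp_d);  /* cancellation costs (exp_n - exp_d) bits */
}

int main(void)
{
  /* illustrative values : approximately the period 3 nucleus and a nearby boundary point */
  complex double n = -0.12256116687665 + 0.74486176661974 * I;
  complex double b = -0.125            + 0.64951905283833 * I;
  printf("size estimate |n - b| = %g , accurate to about %d bits\n",
         cabs(n - b), distance_accuracy_bits(n, b, 53)); /* 53 = double mantissa bits */
  return 0;
}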

internal coordinate


Internal coordinate of hyperbolic component of Mandelbrot set
 * https://mathr.co.uk/blog/2013-04-01_interior_coordinates_in_the_mandelbrot_set.html

It is possible to map a hyperbolic component $$\Eta_p$$ to the closed unit disk centered at the origin, $$\bar D_1$$:

$$ \lambda_p : \Eta_p \to \bar D_1 $$

$$ \lambda_p (c) = b $$

$$\bar D_1 = \{ b \in \mathbb{C}: \left | b \right | \le 1 \}$$.

This relation is described by a system of 2 equations:

$$ \begin{cases} F^p(z,c) = z \\ \frac{\partial}{\partial z} F^p(z,c) = b \end{cases} $$

where
 * p is the period of the target hyperbolic component on the parameter plane
 * $$\theta$$ is the desired internal angle of points c and b
 * $$r \le 1$$ is the internal radius
 * $$b = r e^{2 \pi i \theta} $$ is a point of the closed unit disk ( the internal coordinate )
 * c is a point of the parameter plane
 * z is a point of the dynamic plane
 * $$F^0(z,c) = z$$ and $$F^{q+1}(z,c) = F^q(z,c)^2 + c$$

The algorithm by Claude Heiland-Allen for finding the internal coordinate b from c:
 * choose c
 * check c ( bailout test on the dynamical plane ). When c is outside the Mandelbrot set, give up now ( or compute the external coordinate )
 * start with period one: $$p = 1$$
 * while $$p < p_{max}$$
 * find the periodic point $$z_0$$ such that $$F^p(z_0,c)=z_0$$ using Newton's method in one complex variable
 * find b by evaluating $$\frac{\partial}{\partial z} F^p(z,c)$$ at $$z_0$$
 * if $$ \left | b \right | \le 1$$ then return b
 * otherwise continue with the next p: $$p = p+1$$

To solve such a system of equations one can use Newton's method; a sketch of the whole procedure is given below.
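A minimal C sketch of this algorithm (not from the original text; the bailout test for c outside the Mandelbrot set is omitted, and the example parameter c, the Newton iteration counts and $$p_{max}$$ are arbitrary):

/* C : internal coordinate b of a parameter c, following the algorithm above */
#include <stdio.h>
#include <math.h>
#include <complex.h>

/* one Newton step for G(z) = F^p(z,c) - z :  z -> z - G(z) / G'(z) */
complex double periodic_point_step(complex double z, complex double c, int p)
{
  complex double A = z, B = 1.0; /* A = F^m(z,c), B = d/dz F^m(z,c) */
  for (int m = 0; m < p; ++m) {
    B = 2.0 * A * B;
    A = A * A + c;
  }
  return z - (A - z) / (B - 1.0);
}

/* returns 1 and sets *b_out on success, 0 when no period up to p_max works */
int internal_coordinate(complex double c, int p_max, complex double *b_out)
{
  for (int p = 1; p <= p_max; ++p) {
    complex double z = 0.0;
    for (int m = 0; m < 100; ++m)       /* approach the attracting cycle first */
      z = z * z + c;
    for (int m = 0; m < 64; ++m)        /* then polish with Newton's method    */
      z = periodic_point_step(z, c, p);
    complex double A = z, B = 1.0;      /* b = d/dz F^p(z0, c) */
    for (int m = 0; m < p; ++m) {
      B = 2.0 * A * B;
      A = A * A + c;
    }
    if (cabs(B) <= 1.0) { *b_out = B; return 1; }
  }
  return 0;
}

int main(void)
{
  complex double c = -0.15 + 0.75 * I, b; /* arbitrary example parameter */
  if (internal_coordinate(c, 64, &b))
    printf("b = %f %+f i , |b| = %f\n", creal(b), cimag(b), cabs(b));
  else
    printf("no period found up to p_max\n");
  return 0;
}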

Parameter ray
External parameter rays

equipotential line
Boundaries of potential or escape time level sets ( equipotential curves ) are polynomial lemniscates. Computing them from symbolic equations gets exponentially more complicated as the boundary gets closer to the Mandelbrot or Julia set.

Is there a more efficient way of computing for the Mandelbrot lemniscates?

Complex iteration of $$z \to z^2 + c$$ gives a degree $$2^{n-1}$$ polynomial in $$c$$ in $$O(n)$$ work.

Suppose


 * $$c = x + i y$$


 * $$f_c(z) = z^2 + c$$


 * $$f_c^{\circ (n+1)}(z) = f_c^{\circ n}(f_c(z))$$

Then


 * $$\frac{d}{dc} f_c^{\circ(n+1)}(z) = 2 f_c^{\circ n}(z) \frac{d}{dc} f_c^{\circ n}(z) + 1$$

These can be calculated together in one inner loop (being careful not to overwrite old values that are still needed).

Now you can use Newton's root finding method to solve the implicit form $$f_c^{\circ n}(0) = r e^{2 \pi i \theta} = t$$ by


 * $$c_{m+1} = c_{m} - \frac{f_{c_m}^{\circ n}(0) - t}{\frac{d}{dc}f_{c_m}^{\circ n}(0)}$$

Use the previous $$c$$ as the initial guess for the next $$\theta$$. The increment in $$\theta$$ needs to be smaller than about $$\frac{1}{4}$$ for the algorithm to be stable. Note that $$\theta$$ (measured in turns) wraps around the unit circle $$2^{n-1}$$ times before $$c$$ gets back to its starting point.
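A minimal C sketch of this lemniscate tracing (not from the original text; the level n, the radius r, the number of theta samples, the Newton iteration count and the bootstrap for the starting point are arbitrary choices):

/* C : trace the lemniscate |f_c^n(0)| = r by Newton's method in c */
#include <stdio.h>
#include <math.h>
#include <complex.h>

/* a few Newton steps in c for the equation f_c^n(0) = t */
complex double refine(complex double c, complex double t, int n)
{
  for (int m = 0; m < 20; ++m) {
    complex double z = 0.0, dz = 0.0; /* f_c^j(0) and d/dc f_c^j(0) */
    for (int j = 0; j < n; ++j) {
      dz = 2.0 * z * dz + 1.0;        /* derivative recurrence */
      z  = z * z + c;
    }
    c = c - (z - t) / dz;             /* Newton step */
  }
  return c;
}

int main(void)
{
  const int    n     = 5;             /* lemniscate level                      */
  const double r     = 2.0;           /* target radius : |f_c^n(0)| = r        */
  const int    turns = 1 << (n - 1);  /* theta wraps 2^(n-1) times             */
  const int    steps = 256 * turns;   /* keep the theta increment far below 1/4 */

  /* starting point at theta = 0 : refine through levels 1..n,
     reusing each result as the next initial guess */
  complex double c = r;               /* exact for level 1 : f_c(0) = c = r */
  for (int level = 2; level <= n; ++level)
    c = refine(c, r, level);

  for (int k = 0; k <= steps; ++k) {
    double theta = (double)turns * k / steps;       /* in turns */
    complex double t = r * cexp(2.0 * M_PI * I * theta);
    c = refine(c, t, n);              /* previous c is the initial guess */
    printf("%f %f\n", creal(c), cimag(c));          /* one point of the curve */
  }
  return 0;
}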

This approach is inspired by "An algorithm to draw external rays of the Mandelbrot set" by Tomoki Kawahira.

Periodic points

 * of complex quadratic polynomial
 * general numerical method for complex quadratic polynomial

=See also=
 * The Newton-Raphson fractal
 * Calculus : Newton Method
 * Newton-Hines fractals
 * Fractalzoomer - root finding methods

=References=
 * Newton's method in practice: Finding all roots of polynomials of degree one million efficiently, by Dierk Schleicher and Robin Stoll
 * Newton's method in practice II: The iterated refinement Newton method and near-optimal complexity for finding all roots of some polynomials of very large degrees, by Dierk Schleicher and Robin Stoll
 * Finding polynomial roots by dynamical systems -- a case study, by Sergey Shemyakov, Roman Chernov, Dzmitry Rumiantsau, Dierk Schleicher, Simon Schmitt, Anton Shemyakov