Introduction to Mathematical Physics/Some mathematical problems and their solution/Boundary problems, variational methods

Variational formulation
Let us consider a boundary problem: schematically, for a linear operator $$L$$ acting on an unknown function $$u$$ with data $$f$$, $$Lu = f \qquad \text{(equation eqLufva)},$$ together with boundary conditions. Let us suppose that there is a unique solution $$u$$ of this problem. For a sufficiently large functional space $$V$$, the previous problem may be equivalent to the following:
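$$\forall v \in V, \quad \langle v|Lu\rangle = \langle v|f\rangle \qquad \text{(equation eqvari)}.$$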

This is the variational form of the boundary problem. To obtain the equality eqvari, we have simply multiplied equation eqLufva by a "test function" $$\langle v|$$. A "weak form" of this problem can be found by using Green-type formulas (integration by parts): the solution space is taken larger and, as a counterpart, the test function space is taken smaller. Let us illustrate these ideas on a simple example:
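Consider, for instance (a classical model case), the problem $$-u'' = f$$ on $$]0,1[$$ with $$u(0) = u(1) = 0$$. Multiplying by a test function $$v$$ that vanishes at the boundary and integrating by parts shifts one derivative from $$u$$ onto $$v$$: $$\int_0^1 u'(x)\, v'(x)\, dx = \int_0^1 f(x)\, v(x)\, dx \qquad \forall v.$$ The equation now makes sense for a solution $$u$$ having only one derivative (the solution space is enlarged), while the test functions must themselves have one derivative and vanish at the boundary (the test function space is reduced).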

It may be that, as in the previous example, the solution function space and the test function space are the same. This is the case for most linear operators $$L$$ encountered in physics. In this case, one can associate with $$L$$ a bilinear form $$a$$. The (weak) variational problem is thus:
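Find $$u \in V$$ such that $$\forall v \in V, \quad a(u,v) = \langle v|f\rangle \qquad \text{(problem provari2)}.$$ Schematically, $$a(u,v)$$ collects the terms of $$\langle v|Lu\rangle$$ after integration by parts; in the example above, $$a(u,v) = \int_0^1 u'v'\,dx$$.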

There exist theorems (the Lax-Milgram theorem, for instance) that prove the existence and uniqueness of the solution of the previous problem provari2 under certain conditions on the bilinear form $$a(u,v)$$.
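Typically, for the Lax-Milgram theorem: if $$V$$ is a Hilbert space, if $$a$$ is continuous ($$|a(u,v)| \leq M \|u\| \|v\|$$) and coercive ($$a(v,v) \geq \alpha \|v\|^2$$ with $$\alpha > 0$$), and if $$v \mapsto \langle v|f\rangle$$ is a continuous linear form, then the problem admits a unique solution $$u \in V$$.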

Finite element approximation
Let us consider the problem provari2:
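Find $$u \in V$$ such that $$\forall v \in V, \quad a(u,v) = \langle v|f\rangle.$$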

The method of approximation consists of choosing a finite dimensional subspace $$V_h$$ of $$V$$ and the problem to solve becomes:
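Find $$u_h \in V_h$$ such that $$\forall v_h \in V_h, \quad a(u_h, v_h) = \langle v_h|f\rangle.$$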

A basis $$\{v_h^i\}$$ of $$V_h$$ is chosen so as to satisfy the boundary conditions. The problem is reduced to finding the components $$u_i$$ of $$u_h$$:
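$$u_h = \sum_i u_i\, v_h^i.$$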

If $$a$$ is a bilinear form, substituting this decomposition into the discrete problem shows that finding the coefficients $$u_i$$ reduces to solving a linear system (often close to a diagonal system), as sketched below. Such a system can be solved by classical algorithms, which can be direct methods (Gauss, Cholesky) or iterative ones (Jacobi, Gauss-Seidel, relaxation). Note that if the vectors of the basis of $$V_h$$ are eigenvectors of $$L$$, then the solving of the system is immediate (diagonal system); this is the basis of the spectral methods for solving evolution problems. If $$a$$ is not linear, we have to solve a nonlinear system. Let us give an example of a basis $$\{v_h^i\}$$.
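Indeed, taking $$v_h = v_h^i$$ as test functions yields (a standard Galerkin computation): $$\sum_j a(v_h^j, v_h^i)\, u_j = \langle v_h^i|f\rangle, \quad i = 1, \dots, \dim V_h,$$ that is, a linear system $$AU = F$$ with $$A_{ij} = a(v_h^j, v_h^i)$$. A classical example of basis, on an interval meshed by points $$x_0 < x_1 < \dots < x_{N+1}$$, is the "hat function" basis: $$v_h^i$$ is the continuous, piecewise affine function such that $$v_h^i(x_j) = \delta_{ij}$$. Its support is $$[x_{i-1}, x_{i+1}]$$, so that the matrix $$A$$ is tridiagonal.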

Finite difference approximation
The finite difference method is one of the most basic methods for tackling PDE problems. It is not, strictly speaking, a variational approximation; it is rather a sort of variational method where the weight functions $$w_k$$ are Dirac functions $$\delta_k$$. Indeed, when considering the boundary problem
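$$Lu = f \qquad \text{(equation eqfini)},$$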

instead of looking for an approximate solution $$u_h$$ which can be decomposed on a basis of weight functions $$w_k$$:
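$$u_h = \sum_k u_k\, w_k,$$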

the action of $$L$$ on $$u$$ is directly expressed in terms of Dirac functions, as is the right-hand term of equation eqfini:
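$$\sum_k (Lu)(x_k)\, \delta_{x_k} = \sum_k f(x_k)\, \delta_{x_k} \qquad \text{(equation eqfini2)},$$ where the $$x_k$$ denote the discretization points.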

\begin{rem} If $$L$$ contains derivatives, standard finite difference formulas are used to approximate them (written here for a regular grid of step $$h$$, with $$u_i = u(x_i)$$). Right formula, order 1:
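$$u'(x_i) = \frac{u_{i+1} - u_i}{h} + O(h).$$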

Right formula, order 2:
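$$u'(x_i) = \frac{-3u_i + 4u_{i+1} - u_{i+2}}{2h} + O(h^2).$$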

Left formulas can be written in a similar way. Centered formulas, second order are:
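$$u'(x_i) = \frac{u_{i+1} - u_{i-1}}{2h} + O(h^2), \qquad u''(x_i) = \frac{u_{i+1} - 2u_i + u_{i-1}}{h^2} + O(h^2).$$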

Centered formulas, fourth order are:
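$$u'(x_i) = \frac{-u_{i+2} + 8u_{i+1} - 8u_{i-1} + u_{i-2}}{12h} + O(h^4), \qquad u''(x_i) = \frac{-u_{i+2} + 16u_{i+1} - 30u_i + 16u_{i-1} - u_{i-2}}{12h^2} + O(h^4).$$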

\end{rem} One can show that equation eqfini2 is equivalent to the system of equations:
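$$\forall k, \quad (Lu)(x_k) = f(x_k) \qquad \text{(equation eqfini3)}.$$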

One can see immediately that equation eqfini2 implies equation eqfini3 by choosing "test" functions $$v_i$$ of support $$[x_{i-1/2}, x_{i+1/2}]$$ and such that $$v_i(x_i) = 1$$.
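Here is a minimal numerical sketch of the method (assuming, as above, the model problem $$-u'' = f$$ on $$]0,1[$$ with $$u(0) = u(1) = 0$$ and the centered second-order formula; the grid size and right-hand side are arbitrary choices):

```python
import numpy as np

# Finite difference sketch for the model problem
# -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
N = 50                           # number of interior grid points (arbitrary)
h = 1.0 / (N + 1)                # grid step
x = np.linspace(h, 1.0 - h, N)   # interior nodes x_i = i*h

f = np.sin(np.pi * x)            # example right-hand side (arbitrary)

# Centered stencil -(u_{i+1} - 2 u_i + u_{i-1}) / h^2 = f(x_i):
# a tridiagonal system A u = f; the boundary values u_0 = u_{N+1} = 0
# are accounted for by dropping the corresponding columns.
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

u = np.linalg.solve(A, f)        # direct solve (Gauss-type method)

# For f = sin(pi x) the exact solution is sin(pi x) / pi^2:
print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))
```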

Minimization problems
A minimization problem can be written as follows:
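Find $$u \in V$$ such that $$J(u) = \inf_{v \in V} J(v),$$ where $$J$$ is a functional defined on a space $$V$$.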

The solving of a minimization problem depends first on the nature of the functional $$J$$ and of the space $$V$$. As usual, the functional $$J(u)$$ is often approximated by a function of several variables $$J(u_1, \dots, u_N)$$, where the $$u_i$$'s are the coordinates of $$u$$ in some basis $$(E_i)$$ that approximates $$V$$. One can distinguish problems without constraints (see Fig. figcontraintesans) and problems with constraints (see Fig. figcontrainteavec).

Minimization problems without constraints can be tackled theoretically by studying the zeros of the differential $$dJ(u)$$, when it exists. Numerically, it can be less expensive to use dedicated methods: there are methods that do not use the derivatives of $$J$$ (downhill simplex method, direction-set methods) and methods that do use them (conjugate gradient method, quasi-Newton methods).

Problems with constraints reduce the functional space $$U$$ to a set $$V$$ of functions that satisfy some additional conditions. Note that such a set $$V$$ is not a vector space: a linear combination of elements of $$V$$ is not always in $$V$$. Let us give some examples of constraints. Let $$U$$ be a functional space; consider the space
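$$V = \{ v \in U \;;\; \phi_i(v) = 0, \ i = 1, \dots, n \},$$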

where the $$\phi_{i}(v)$$ are $$n$$ functionals. This is a first example of constraints; it can be treated theoretically by using Lagrange multipliers (see below). A second example of constraints is given by
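$$V = \{ v \in U \;;\; \phi_i(v) \leq 0, \ i = 1, \dots, n \},$$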

where the $$\phi_{i}(v)$$ are $$n$$ functionals. The linear programming problem (see example exmplinepro) is an example of a minimization problem with such constraints (in fact, a mix of equality and inequality constraints).

Physical principles sometimes have a natural variational formulation (as natural as a PDE formulation). We will come back to variational formulations in the section on the least action principle (see section secprinmoindreact) and in section secpuisvirtu on the principle of virtual powers.

Lagrange multipliers
The Lagrange multiplier method is an interesting approach for solving the problem of minimizing a function of $$N$$ variables subject to constraints, i.e. for solving the following problem:
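Find $$u = (u_1, \dots, u_N)$$ minimizing $$J(u_1, \dots, u_N)$$ under the $$n$$ constraints $$\phi_k(u_1, \dots, u_N) = 0$$, $$k = 1, \dots, n$$ (the notation is as in the previous section).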

The Lagrange multiplier method is used in statistical physics (see section chapphysstat). In a problem without any constraints, a solution $$u$$ satisfies:
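$$dJ(u) = 0, \qquad \text{i.e.} \qquad \frac{\partial J}{\partial u_i} = 0, \quad i = 1, \dots, N.$$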

In the case with constraints, the $$N$$ coordinates $$u_i$$ of $$u$$ are not independent. Indeed, they have to satisfy the relations:
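$$\phi_k(u_1, \dots, u_N) = 0, \quad k = 1, \dots, n.$$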

The Lagrange multiplier method consists of looking for $$n$$ numbers $$\lambda_k$$ called "Lagrange multipliers" such that:
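$$dJ(u) + \sum_{k=1}^{n} \lambda_k\, d\phi_k(u) = 0.$$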

One obtains the following equation system:
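$$\frac{\partial J}{\partial u_i} + \sum_{k=1}^{n} \lambda_k \frac{\partial \phi_k}{\partial u_i} = 0, \quad i = 1, \dots, N,$$ which, together with the $$n$$ constraints $$\phi_k(u) = 0$$, provides $$N + n$$ equations for the $$N + n$$ unknowns $$u_1, \dots, u_N, \lambda_1, \dots, \lambda_n$$.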