Introduction to Mathematical Physics/Some mathematical problems and their solution/Boundary, spectral and evolution problems

In order to help the reading of the next chapters, a quick classification of various mathematical problems encountered in the modelling of physical phenomena is proposed in the present chapter. More precisely, the problems considered in this chapter are those that can be reduced to finding the solution of a partial differential equation (PDE). Indeed, for many physicists, to provide a model of a phenomenon means to provide a PDE describing this phenomenon. They can be boundary problems, spectral problems, or evolution problems. General ideas about exact and approximate methods for solving those PDEs are also proposed. This chapter contains numerous references to the "physical" part of this book, which justifies the interest given to those mathematical problems.

In classical books about PDEs, equations are usually classified into three categories: hyperbolic, parabolic and elliptic equations. This classification is connected to the proof of existence and uniqueness of the solutions rather than to the actual way of obtaining them. We present here another classification connected to the way one obtains the solutions: we distinguish mainly boundary problems and evolution problems.

Let us introduce boundary problems: find $$u$$ defined in a domain $$\Omega$$ such that:

$$Lu=f \mbox{ in } \Omega$$

where $$u$$ satisfies given conditions on the boundary $$\partial \Omega$$ of $$\Omega$$.
This class of problems can be solved by integral methods (see section chapmethint) and by variational methods (see section chapmetvar). Let us introduce a second class of problems: evolution problems. Initial conditions are usually associated with the time variable: one knows the function $$u(x,y,t_0)$$ at time $$t_0$$ and tries to get the values of $$u(x,y,t)$$ for $$t$$ greater than $$t_0$$. This is our second class of problems: knowing $$u(x,y,t_0)=u_0(x,y)$$, find $$u(x,y,t)$$ for $$t>t_0$$ such that:

$$\frac{\partial u}{\partial t}=Lu$$
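As an illustration (a numerical sketch that is not part of the original development; the operator, domain and step sizes are chosen for the example), one can advance such an evolution problem in time with $$L=\frac{\partial^{2}}{\partial x^{2}}$$ and zero boundary values, using an explicit finite-difference scheme:

```python
import numpy as np

def heat_step(u, dt, dx):
    """One explicit (forward Euler) step for du/dt = d2u/dx2,
    with u held at zero at both ends."""
    unew = u.copy()
    unew[1:-1] = u[1:-1] + dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return unew

# initial condition u(x, t0) = sin(pi x) on [0, 1]
n = 50
x = np.linspace(0.0, 1.0, n + 1)
dx = x[1] - x[0]
dt = 0.4 * dx**2            # explicit scheme is stable for dt <= dx^2 / 2
u = np.sin(np.pi * x)

for _ in range(200):
    u = heat_step(u, dt, dx)

# the exact solution of this model problem is exp(-pi^2 t) sin(pi x)
t = 200 * dt
err = np.max(np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * x)))
print(err)
```

The scheme marches forward from the initial condition alone, which is what distinguishes an evolution problem from a boundary problem.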
Of course in some problems time can act "like" a space variable. This is the case in shooting problems, where $$u$$ should satisfy a condition at $$t=t_0$$ and another condition at $$t=t_1$$.
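A shooting problem can be sketched numerically as follows (an illustrative example with made-up data, not taken from the text): the unknown initial slope is treated as a parameter and adjusted by bisection until the condition at $$t=t_1$$ is met, here for $$u''=-u$$ with $$u(0)=0$$ and $$u(1)=1$$:

```python
import numpy as np

def integrate(slope, n=1000):
    """Integrate u'' = -u from t=0 to t=1 with u(0)=0, u'(0)=slope,
    using a classical RK4 scheme; returns u(1)."""
    h = 1.0 / n
    u, v = 0.0, slope          # v = du/dt
    f = lambda u, v: (v, -u)   # rewrite as a first-order system
    for _ in range(n):
        k1 = f(u, v)
        k2 = f(u + h/2*k1[0], v + h/2*k1[1])
        k3 = f(u + h/2*k2[0], v + h/2*k2[1])
        k4 = f(u + h*k3[0], v + h*k3[1])
        u += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return u

# shoot: find the slope s such that u(1) = 1, by bisection on s
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)
print(slope)   # exact value is 1/sin(1), about 1.1884
```

The two-point condition is thus reduced to a sequence of initial-value problems, each solved by marching in $$t$$.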

The third class of problems (spectral problems) often arises from the solving (see section chapmethspec) of linear evolution problems (evolution problems where the operator $$L$$ is linear): find the couples ($$\lambda$$, $$u$$) with $$u\neq 0$$ such that:

$$Lu=\lambda u$$
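To make this concrete (a numerical sketch under illustrative choices of discretization), one can discretize $$L=\frac{d^{2}}{dx^{2}}$$ on $$[0,1]$$ with zero boundary values and compute the eigenvalues of the resulting matrix, which approximate the exact values $$-(n\pi)^{2}$$:

```python
import numpy as np

# Finite-difference matrix for L = d2/dx2 on [0, 1] with u(0) = u(1) = 0.
n = 200                      # number of interior points
dx = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / dx**2

# Solve the discrete spectral problem  L u = lambda u.
eigvals = np.linalg.eigvalsh(A)          # sorted in ascending order
largest = eigvals[-1]                    # least negative eigenvalue

# Exact eigenvalues of d2/dx2 with these conditions are -(n pi)^2.
print(largest, -(np.pi)**2)
```
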
The difference between boundary problems and evolution problems lies in the different roles played by the independent variables on which the unknown function depends. Space variables usually imply boundary conditions. For instance the elevation $$u(x,y)$$ of a membrane, defined for each position $$(x,y)$$ in a domain $$\Omega$$ delimited by a boundary $$\partial \Omega$$, should be zero on the boundary. If the equation satisfied by $$u$$ at equilibrium and at position $$(x,y)$$ in $$\Omega$$ is:

$$Lu=f$$
then $$L$$ should be a differential operator acting on the space variables, of at least second order.

Let us now develop some ideas about boundary conditions. In the case of ordinary differential equations (ODE), the uniqueness of the solution is connected to the initial conditions via the Cauchy theorem. For instance, to determine fully a solution of a second-order equation:

$$\frac{d^{2}u}{dt^{2}}=f(u,\frac{du}{dt},t)$$
one needs to know both $$u(0)$$ and $$\frac{du}{dt}(0)$$. Boundary conditions are more subtle, since the space can have a dimension greater than one. Let us consider a 1-D boundary problem. The equation is then an ODE that can be written:

$$Lu=f$$
Boundary conditions are imposed for (at least) two different values of the space variable $$x$$. Thus the operator should be at least of second order. Let us take $$L=\frac{d^2}{dx^2}$$. Equation eqgenerwrit is then called Laplace equation\index{Laplace equation}. The elevation of a string obeys such an equation, where $$f$$ is the distribution of weight on the string. A clamped string (see Fig. figcordef) corresponds to the case where the boundary conditions are:

$$u(0)=0 \mbox{ and } u(L)=0$$
Those conditions are called Dirichlet conditions. A sliding string (see Fig. figcordeg) corresponds to the case where the boundary conditions are:

$$\frac{du}{dx}(0)=0 \mbox{ and } \frac{du}{dx}(L)=0$$
Those conditions are called Neumann conditions. Let us recall here the definition of the adjoint operator\index{adjoint operator}: the adjoint $$L^{*}$$ of an operator $$L$$ is the operator such that, for all $$u$$ and $$v$$:

$$\mathrel{<} Lu,v \mathrel{>}=\mathrel{<} u,L^{*}v \mathrel{>}$$
One can show that $$\mathcal H$$, the space of functions zero at $$0$$ and at $$L$$, is a Hilbert space for the scalar product $$\mathrel{<} u,v \mathrel{>}=\int_0^L u(x)v(x) dx$$. (One speaks of Sobolev space, see section secvafor). Moreover, one can show that the adjoint of $$L$$ is $$L^*=\frac{d^{2}}{dx^{2}}$$: integrating by parts twice,

$$\int_0^L u''(x)v(x) dx=[u'(x)v(x)]_0^L-[u(x)v'(x)]_0^L+\int_0^L u(x)v''(x) dx=\int_0^L u(x)v''(x) dx$$

since the boundary terms vanish for functions of $$\mathcal H$$.
As $$L^*=L$$, $$L$$ is called self-adjoint.
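This self-adjointness survives discretization, which can be checked numerically (an illustrative sketch; the matrix below is the standard second-difference approximation of $$L$$ acting on interior points, so no boundary terms appear):

```python
import numpy as np

# Second-difference matrix for L = d2/dx2 with Dirichlet conditions.
n = 100
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1))

# A is symmetric: the discrete analogue of <Lu, v> = <u, Lv>.
rng = np.random.default_rng(0)
u = rng.standard_normal(n)
v = rng.standard_normal(n)
print(np.allclose(A, A.T), np.allclose(u @ (A @ v), (A @ u) @ v))
```
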

One can also show that the space $$\mathcal H_2$$ of functions whose derivative is zero at $$x=0$$ and at $$x=L$$ is a Hilbert space for the scalar product $$\mathrel{<} u,v \mathrel{>}=\int_0^L u(x)v(x) dx$$. Using equation eqadjoimq, one shows that $$L$$ is also self-adjoint on $$\mathcal H_2$$.

The form \index{form (definite)}

$$\mathrel{<} Lu,u \mathrel{>}=\int_0^L u''(x)u(x) dx=-\int_0^L (u'(x))^{2} dx$$

is negative definite in the case of the clamped string and negative (but not definite) in the case of the sliding string. Indeed, in this last case, the function $$u=\mbox{ constant }$$ makes $$\mathrel{<} Lu,u \mathrel{>}$$ zero.
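These sign properties can be observed on the discretized operators (a numerical sketch with illustrative sizes): the Dirichlet matrix has strictly negative eigenvalues, while the Neumann matrix is only negative semi-definite, the constant vector lying in its kernel:

```python
import numpy as np

n = 100
# Dirichlet (clamped string) second-difference matrix
D = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1))
# Neumann (sliding string) variant: zero-slope ends change the corner entries
N = D.copy()
N[0, 0] = -1.0
N[-1, -1] = -1.0

d_max = np.linalg.eigvalsh(D)[-1]       # largest Dirichlet eigenvalue
n_max = np.linalg.eigvalsh(N)[-1]       # largest Neumann eigenvalue

# Dirichlet: all eigenvalues strictly negative (definite form);
# Neumann: the constant vector is annihilated, so 0 is an eigenvalue.
c = np.ones(n)
print(d_max < 0, abs(n_max) < 1e-8, np.allclose(N @ c, 0.0))
```
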

Let us conclude those remarks about boundary conditions with the compatibility between the solution $$u$$ of a boundary problem

$$\frac{d^{2}u}{dx^{2}}=f, \ \ \frac{du}{dx}(0)=0, \ \ \frac{du}{dx}(L)=0$$

and the right hand member $$f$$ ([#References|references]): integrating the equation from $$0$$ to $$L$$ gives $$\frac{du}{dx}(L)-\frac{du}{dx}(0)=\int_0^L f(x)dx$$, so a solution can exist only if $$\int_0^L f(x)dx=0$$.
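The compatibility condition has a discrete counterpart that can be checked numerically (an illustrative sketch): the Neumann second-difference matrix is singular, and the linear system is solvable only when the right hand member sums to zero, the discrete analogue of $$\int_0^L f(x)dx=0$$:

```python
import numpy as np

n = 100
# Neumann second-difference matrix (sliding string); its rows sum to zero,
# so the constant vector spans its kernel and the matrix is singular.
N = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1))
N[0, 0] = N[-1, -1] = -1.0

rng = np.random.default_rng(1)
f = rng.standard_normal(n)
f_ok = f - f.mean()                  # zero mean: satisfies the compatibility condition
f_bad = f_ok + 1.0                   # nonzero mean: violates it

def residual(rhs):
    """Least-squares residual of N u = rhs; close to zero iff solvable."""
    u, *_ = np.linalg.lstsq(N, rhs, rcond=None)
    return np.linalg.norm(N @ u - rhs)

print(residual(f_ok), residual(f_bad))
```
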