Ordinary Differential Equations/Structure of Differential Equations

Differential equations are all made up of certain components, without which they would not be differential equations. In working with a differential equation, our objective is usually to solve it. A solution in this context is a function, free of derivatives, that satisfies the equation. If no such function can be found in closed form, we settle for a numerical solution.

Differential Equations
The first and most basic example of a differential equation is the one we are already familiar with from calculus. That is
 * $$y'(x)=f(x)$$

In this case we know how to solve for y (eliminate the derivative) by integrating f. So we know that
 * $$y(x)=\int_{a}^x f(x)\,dx+c.$$

Recall from the fundamental theorem of calculus that $$\int_{a}^x f(x)\,dx$$ is an anti-derivative of f(x) for any choice of a. Notice that there is an arbitrary constant c, so we get a family of solutions, one for each choice of c. Throughout this book we will often encounter initial value problems. These are problems where we are asked to find a solution of an ordinary differential equation that passes through some initial point (x0, y0), where x0 is the value of the independent variable and y0 that of the dependent variable. To find which solution passes through this point, one simply substitutes x0 for x and y0 for y(x0). This allows us to make a specific choice for the constant c, which would otherwise be arbitrary.
 * $$\begin{align}y_0&=\int_a^{x_0} f(x)\,dx+c\\ c&=y_0-\int_a^{x_0}f(x)\,dx\end{align}$$

If we substitute this choice for c into the expression for y we find that:
 * $$\begin{align}y(x)&=\int_a^x f(x)\,dx+y_0-\int_a^{x_0}f(x)\,dx\\&=y_0+\int_{x_0}^x f(x)\,dx\end{align}$$
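The formula above can be checked numerically. The following sketch (a minimal stdlib-only illustration; the choices f(x)=cos(x), x0=0, y0=2 are arbitrary) approximates $$y(x)=y_0+\int_{x_0}^x f(x)\,dx$$ with the trapezoidal rule and compares it against the exact solution y(x)=2+sin(x):

```python
import math

def solve_ivp_quadrature(f, x0, y0, x, n=1000):
    """Approximate y(x) = y0 + integral of f from x0 to x
    using the composite trapezoidal rule with n subintervals."""
    h = (x - x0) / n
    total = 0.5 * (f(x0) + f(x))
    for k in range(1, n):
        total += f(x0 + k * h)
    return y0 + h * total

# Example: y' = cos(x), y(0) = 2; exact solution y(x) = 2 + sin(x).
approx = solve_ivp_quadrature(math.cos, 0.0, 2.0, 1.0)
exact = 2.0 + math.sin(1.0)
print(abs(approx - exact))  # small error, shrinking like 1/n^2
```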

Notice that this is really a statement of the fundamental theorem of calculus.

In general, an ordinary differential equation of order n is an equation of the form $$F(y^{(n)},y^{(n-1)},\ldots,y',y,x)=0$$, where F is a function of n+2 variables. Note that n is the order of the highest derivative and that $$y^{(n)}$$ is the first variable, which in turn is itself a function of x. This definition can be a lot to swallow, so it helps to take an example. Suppose F(t1, t2, t3)=t1-cos(t3)t2. Then F(y',y,x)=0 becomes
 * $$y'-\cos(x)y=0\quad\text{or}\quad y'=\cos(x)y$$.

Thus, by our definition above, $$y'=\cos(x)y$$ is a first order ordinary differential equation.
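The equation $$y'=\cos(x)y$$ happens to have the exact solutions $$y=Ce^{\sin x}$$, which makes it a convenient test case for numerical methods. The sketch below (stdlib-only; the step count and initial value y(0)=1 are arbitrary choices) applies Euler's method and checks it against the exact solution:

```python
import math

def euler(f, x0, y0, x_end, n):
    """Euler's method for y' = f(x, y): take n steps of size h
    from x0 to x_end, updating y by h * f(x, y) each step."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# y' = cos(x) * y with y(0) = 1; exact solution y = exp(sin(x)).
approx = euler(lambda x, y: math.cos(x) * y, 0.0, 1.0, 1.0, 10000)
exact = math.exp(math.sin(1.0))
print(abs(approx - exact))  # first-order error, roughly proportional to h
```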

In general we will run into problems if some restrictions are not placed on the function F. For example, if we didn't require F to depend on its first variable, then we could have taken a function like F(t1, t2, t3)=1-cos(t3)t2, which is independent of its first variable. In that case F(y', y, x) = 0 simply becomes $$1-\cos(x)y=0$$, which involves no derivatives at all! It would be very odd indeed to call this a first order differential equation.

Specific examples of ordinary differential equations we are familiar with from calculus would be:


 * $$\frac{dy}{dx}=x \, $$


 * $$\frac{dy}{dx}=2 \sin x^2. \, $$

However, they can also involve the higher order derivatives of y with respect to x. For example:


 * $$xy\frac{d^2y}{dx^2}+y\frac{dy}{dx}+e^{3x}=0$$

is also an ordinary differential equation.

Characteristics of Differential Equations
The order of a differential equation is the order of the highest derivative involved in the equation. Thus:


 * $$\frac{d^2y}{dx^2}-4\frac{dy}{dx}-3y=27x^2$$

is a second-order differential equation, as the highest derivative is the second: d²y/dx².

The degree of a polynomial differential equation is the power to which the highest-order derivative is raised.

Linear and Non-Linear Differential Equations
DEs fall into two major types: linear and non-linear.

Linear DEs are the simpler kind. An ordinary or partial differential equation is called linear if the unknown function and each of its derivatives appear only to the first degree, and are never multiplied by one another. Thus,


 * $$4\frac{dy}{dx}-3y=27x^2$$

is a linear DE.
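A candidate solution of a linear DE can be checked by direct substitution. As an illustration (stdlib-only Python; the particular solution $$y_p=-9x^2-24x-32$$ was obtained by substituting a general quadratic into the equation above and matching coefficients, and is not taken from the text), we verify that it satisfies $$4y'-3y=27x^2$$:

```python
def yp(x):
    """Candidate particular solution of 4y' - 3y = 27x^2."""
    return -9 * x**2 - 24 * x - 32

def yp_prime(x):
    """Its derivative, computed by hand."""
    return -18 * x - 24

# The residual 4y' - 3y - 27x^2 should vanish identically.
for x in [-2.0, -0.5, 0.0, 1.0, 3.7]:
    residual = 4 * yp_prime(x) - 3 * yp(x) - 27 * x**2
    print(x, residual)  # vanishes (up to rounding) at every sample point
```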

Non-Linear DEs are much more complex, as they are any DEs that are not linear. For example, equations involving terms such as

 * $$y^2, \quad \sqrt{y}, \quad \cos y $$

or equations like


 * $$\left( \frac{d^2y}{dx^2} \right)^2=-7y$$


 * $$ \sqrt{ \frac{d^2y}{dx^2}}+y^2=x$$

are non-linear DEs.

Only a tiny proportion of non-linear DEs can be solved exactly; most must be approximated numerically.
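Such approximation is sketched below for a non-linear first-order equation built from one of the terms above, $$y'=\cos y$$ (the equation, initial value, and step counts are arbitrary choices for illustration). Lacking a convenient closed form, we gauge accuracy by checking that doubling the number of steps barely changes the answer:

```python
import math

def midpoint(f, x0, y0, x_end, n):
    """Midpoint (second-order Runge-Kutta) method for y' = f(x, y)."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)                       # slope at the start of the step
        k2 = f(x + h / 2, y + h / 2 * k1)  # slope at the midpoint
        y += h * k2
        x += h
    return y

# Non-linear equation y' = cos(y) with y(0) = 0, approximated on [0, 2].
g = lambda x, y: math.cos(y)
coarse = midpoint(g, 0.0, 0.0, 2.0, 100)
fine = midpoint(g, 0.0, 0.0, 2.0, 200)
print(coarse, fine)  # the two estimates agree to several digits
```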

Homogeneous Differential Equations
A homogeneous DE is one in which every term involves y (including the derivatives of y); no term involving only the independent variable may appear. Therefore:


 * $$\frac{d^2y}{dx^2}-y=0$$

is homogeneous. If something is left over, then the DE is non-homogeneous, like this one:


 * $$\frac{d^2y}{dx^2}-y=2x$$

A non-zero constant on the right-hand side also makes a DE non-homogeneous; after all, a constant is still a function.

Generally, if a DE can be written as:


 * $$a_n(x)\frac{d^ny}{dx^n}+\ldots+a_1(x)\frac{dy}{dx}+a_0(x)y=0,$$

where $$a_n(x),\ldots,a_0(x)$$ are functions of x, it is homogeneous. However, if it can only be written as


 * $$a_n(x)\frac{d^ny}{dx^n}+\ldots+a_1(x)\frac{dy}{dx}+a_0(x)y=b(x),$$

where b(x) is a function of x, it is non-homogeneous.
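A useful consequence of homogeneity for linear equations is superposition: if y1 and y2 solve the homogeneous equation, so does any combination c1·y1 + c2·y2. The sketch below (stdlib-only; the sample point and coefficients are arbitrary) checks this numerically for $$y''-y=0$$ from above, using a central difference to approximate the second derivative:

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# y1 = e^x and y2 = e^(-x) both solve y'' - y = 0 ...
y1, y2 = math.exp, lambda x: math.exp(-x)
# ... and so does any linear combination of them.
combo = lambda x: 3 * y1(x) - 5 * y2(x)

for f in (y1, y2, combo):
    residual = second_derivative(f, 0.7) - f(0.7)
    print(residual)  # approximately zero for all three functions
```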

Solutions of a Differential Equation
A solution of a differential equation is any function y=f(x) which, when substituted into the equation, satisfies it.

An equation of the form


 * $$f_1(x,y,C_1,C_2,C_3,...,C_n)=0$$

with $$C_1,C_2,C_3,...,C_n$$ as arbitrary constants is called an integral solution of the differential equation if every function y=f(x) obtained from it by substituting particular values (possibly subject to restrictions) for $$C_1,C_2,C_3,...,C_n$$ is a solution of the differential equation. James Bernoulli first used the term integral in 1689, and Euler used the term particular integral in 1768. The word solution seems to have first appeared around 1774 with Lagrange, and the term became established through Poincaré.

A third type of solution is called the parametric solution in the form

$$x=x(t,C_1,C_2,C_3,...,C_n)$$

and

$$y=y(t,C_1,C_2,C_3,...,C_n)$$

with arbitrary constants $$C_1,C_2,C_3,...,C_n$$, provided that all functions y=f(x) that make these equations identities are also solutions of the differential equation.

People have tried to define general solutions (formerly known as complete integrals or complete integral equations, terms due to Euler that now mean something different) to be integral solutions with arbitrary constants, and singular solutions to be integral solutions not contained in the general solution. However, these definitions have turned out to be contradictory: given one general solution that excludes a singular solution, another general solution may be found that includes it. Thus the idea of singular solutions is problematic, and there is no good way to work with these terms.

Instead, we define a general solution to be an integral solution that includes all solutions of the DE, and a particular solution to be any single solution or integral solution of the DE.

When solving a DE in the crude sense, we aim to find methods that solve equations of particular forms directly, or that reduce them to a more amenable form. Later, we will aim to solve DEs in a more general sense.

An initial value problem is a differential equation together with initial conditions requiring that the solution $$y=f(x)$$ also satisfy the equations


 * $$y_0=f(x_0)$$
 * $$y_1=f'(x_0)$$
 * $$y_2=f''(x_0)$$

 * $$\vdots$$
 * $$y_n=f^{(n)}(x_0)$$

at a specific point $$x_0$$. If the conditions are instead imposed at different points, the problem is called a boundary value problem and the conditions are called boundary conditions.
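As a concrete illustration (the equation and conditions here are arbitrary choices, not taken from the text), the initial value problem $$y''=-y$$, $$y(0)=0$$, $$y'(0)=1$$ has the unique solution $$y=\sin x$$. Rewriting the second-order equation as a first-order system and stepping it numerically recovers that solution:

```python
import math

def solve_ivp_system(x_end, n=100000):
    """Integrate y'' = -y with y(0) = 0, y'(0) = 1 by rewriting it
    as the first-order system y' = v, v' = -y and applying Euler's method."""
    h = x_end / n
    y, v = 0.0, 1.0
    for _ in range(n):
        y, v = y + h * v, v - h * y  # both updates use the old (y, v)
    return y

print(solve_ivp_system(math.pi / 2))  # close to sin(pi/2) = 1
```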

We first consider the simple case of the equation $$y'=f(x)$$. As shown above, this is easily solved by integration: if f is continuous, then $$y(x)=y_0+\int_{x_0}^x f(x)\,dx$$ solves the equation with $$y(x_0)=y_0$$, which is essentially the fundamental theorem of calculus that you have probably already proved.

Relationship to other types of equation
The following types of equation are not normally encountered in a first course in differential equations but are included here to illustrate the range of problems where differential equations play a role.

It is possible to formulate equations in which the function being sought appears inside an integrand. Such equations are known as integral equations. A theorem in the subject states that virtually any differential equation can be reformulated as an integral equation. Integral equations are normally studied after differential equations have been mastered. In practice, the corresponding integral equation is sometimes easier to solve than the original differential equation.

It is also possible to encounter equations which include both derivatives and integrals. These equations may or may not be convertible to either purely differential or integral equations.

Another related area is that of difference equations. In these equations, derivatives are replaced by difference quotients whose denominator is not an infinitesimal quantity but one of finite size. Their methods of solution parallel those of differential equations. One major difference is that the role played by the exponential function in differential equations is often taken over by powers of a constant, which may be complex.
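The parallel can be made concrete with the Fibonacci-type difference equation $$a_{n+2}=a_{n+1}+a_n$$ (a standard example, not taken from the text): just as exponentials $$e^{\lambda x}$$ solve linear differential equations, powers $$r^n$$ of the characteristic roots $$r=(1\pm\sqrt{5})/2$$ solve this one, and a combination of them matching the starting values reproduces the whole sequence:

```python
import math

# Characteristic roots of r^2 = r + 1.
r1 = (1 + math.sqrt(5)) / 2
r2 = (1 - math.sqrt(5)) / 2

def closed_form(n, a0=0.0, a1=1.0):
    """Solve a_{n+2} = a_{n+1} + a_n as c1*r1^n + c2*r2^n,
    with c1, c2 chosen to match the starting values a0 and a1."""
    c1 = (a1 - a0 * r2) / (r1 - r2)
    c2 = (a0 * r1 - a1) / (r1 - r2)
    return c1 * r1**n + c2 * r2**n

# Compare against direct iteration of the recurrence.
a, b = 0.0, 1.0
for n in range(15):
    assert abs(closed_form(n) - a) < 1e-9
    a, b = b, a + b
```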

Equations containing both difference and differential terms are not commonly encountered in practice. These may be difficult to solve in closed form.

Differential equations may be formulated for matrices as well as for real and complex numbers. Because matrix multiplication is not in general commutative, careful attention must be paid to the order of the factors when solving these equations.

Additionally, fractional differential equations, which may be either ordinary or partial differential equations, also present some peculiarities, and for this reason are likewise studied after a firm grounding in the more usual forms has been established.

Fractional differential equations are rarely mentioned in most textbooks, so a brief note is included here. Typical ordinary differential equations involve derivatives of integer order, while fractional differential equations allow derivatives of any order. This class of equation has been studied almost as long as the other types, but apart from the semiderivative equations, those involving derivatives of order ±1/2, methods for solving them in closed form are not known. Many examples of the diffusion equation, a commonly occurring partial differential equation in physics and chemistry, can be reformulated in terms of a semiderivative equation and solved immediately.

One reason for the difficulties encountered with this type of differential equation is that the range of potential solutions is much larger than elsewhere. Integer-order derivatives require a function to be differentiable: only functions of this type can be solutions of a typical differential equation. Fractional derivatives, however, may be applied to completely discontinuous functions and to some generalized functions. Methods for systematically identifying these less well studied functions as solutions of fractional differential equations have yet to be developed.

Existence and Uniqueness theorems
As well as attempting to solve a new differential equation, it is frequently worthwhile to determine whether a solution to the equation actually exists and, if it does, whether the solution is unique. These questions are addressed in the section on the existence and uniqueness theorems, which will be proved later.

Since most differential equations cannot be solved in closed form, numerical solutions are of great importance. While the existence theorems may seem to be rather esoteric to the beginner they are of considerable importance when attempting a numerical solution: in practice it is very helpful to know that a solution really does exist before trying to compute it.

Understanding when solutions exist and are unique often provides qualitative information about them. For example, the basic uniqueness theorems state that for each initial condition there is a unique solution. This immediately implies that two solutions can never intersect: if they did, you could take the intersection point as your initial data, and the uniqueness theorem would imply that the two solutions are the same function. We will discuss qualitative behavior further in the second part of the text.
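This non-intersection property can be observed numerically. The sketch below (stdlib-only; the equation $$y'=\cos(x)y$$ and the two initial values are arbitrary choices) steps two solutions with different initial data and checks that the curve starting above stays above at every step:

```python
import math

def euler_path(f, x0, y0, x_end, n):
    """Return the full list of Euler-method values for y' = f(x, y)."""
    h = (x_end - x0) / n
    x, y, path = x0, y0, [y0]
    for _ in range(n):
        y += h * f(x, y)
        x += h
        path.append(y)
    return path

f = lambda x, y: math.cos(x) * y
lower = euler_path(f, 0.0, 1.0, 6.0, 5000)  # solution through (0, 1)
upper = euler_path(f, 0.0, 2.0, 6.0, 5000)  # solution through (0, 2)
print(all(a < b for a, b in zip(lower, upper)))  # the curves never cross
```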