Parallel Spectral Numerical Methods/Separation of Variables

Separation of variables is a technique that can be used to solve both ODEs and PDEs. The basic idea for an equation in two variables is to rewrite the equation so that each of the two variables appears on a different side of the equality sign; since the two sides then depend on different variables, each side must be equal to a constant. We introduce this idea with the simple first-order linear ODE

$$\frac{\mathrm{d}y}{\mathrm{d}t} = y.$$
As long as $$y(t) \ne 0$$ for all values of $$t$$, we can formally separate variables and rewrite the equation as

$$\frac{\mathrm{d}y}{y} = \mathrm{d}t.$$
Now we can solve for $$y(t)$$ by integrating both sides,

$$\int \frac{\mathrm{d}y}{y} = \int \mathrm{d}t,$$

so that

$$\ln|y| + a = t + b$$

and hence

$$y(t) = c\exp(t),$$

where $$a$$ and $$b$$ are arbitrary constants of integration and $$c = \exp(b - a)$$.
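As a sanity check, the separated solution can be compared with direct numerical time stepping. The following is a minimal sketch, assuming the model equation $$\mathrm{d}y/\mathrm{d}t = y$$ (so that $$y(t) = c\exp(t)$$) and a hand-rolled classical Runge-Kutta integrator:

```python
import math

# Model ODE dy/dt = y (the separable example assumed here), with y(0) = c.
# Separation of variables gives y(t) = c * exp(t); compare with RK4 stepping.

def rk4(f, y0, t_end, steps):
    """Classical fourth-order Runge-Kutta integration of dy/dt = f(y)."""
    y, h = y0, t_end / steps
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

c, t_end = 0.5, 2.0
numeric = rk4(lambda y: y, c, t_end, 1000)
exact = c * math.exp(t_end)
print(abs(numeric - exact))  # small discretization error
```

The difference between the two answers is dominated by the fourth-order truncation error of the time stepper.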
We now perform a similar calculation for a linear partial differential equation. The heat equation is

$$u_t = u_{xx}.$$
We suppose that $$u(x,t) = X(x)T(t)$$, so that we obtain

$$X(x)T'(t) = X''(x)T(t).$$
We can rewrite this as

$$\frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)} = -C,$$
where $$C$$ is a constant independent of $$x$$ and $$t$$. The two sides can be solved separately to get $$T(t) = \exp(-Ct)$$ and either $$X(x) = \sin(\sqrt{C}x)$$ or $$X(x) = \cos(\sqrt{C}x)$$. Since the heat equation is linear, one can add different solutions of the heat equation and still obtain a solution of the heat equation. Hence solutions of the heat equation can be found of the form

$$u(x,t) = \sum_{n} \exp(-C_n t)\left[\alpha_n \sin\left(\sqrt{C_n}x\right) + \beta_n \cos\left(\sqrt{C_n}x\right)\right],$$
where the constants $$\alpha_n$$, $$\beta_n$$, and $$C_n$$ are appropriately chosen. Convergence of such series to an actual solution is studied in mathematics courses on analysis (see, for example, Evans or Renardy and Rogers); however, the main ideas needed to choose the constants $$\alpha_n$$, $$\beta_n$$, and $$C_n$$, and hence construct such solutions, are typically encountered towards the end of a calculus course or at the beginning of a differential equations course (see, for example, Courant and John or Boyce and DiPrima). Here we consider the case where $$x \in [0, 2\pi]$$ with periodic boundary conditions. In this case each $$\sqrt{C_n}$$ must be an integer, which we choose to be non-negative to avoid redundancies. At time $$t = 0$$, we shall suppose that the initial condition is given by

$$u(x, t = 0) = f(x).$$
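Each separated solution of the heat equation can also be checked numerically. The sketch below, with an arbitrarily chosen mode $$C = 9$$ and evaluation point, verifies that $$u(x,t) = \exp(-Ct)\sin(\sqrt{C}x)$$ satisfies $$u_t = u_{xx}$$ up to finite-difference error:

```python
import math

# Check that the separated solution u(x, t) = exp(-C t) * sin(sqrt(C) * x)
# satisfies the heat equation u_t = u_xx, using centered finite differences.
C = 9.0  # sqrt(C) = 3, an integer, so u is 2*pi-periodic in x
u = lambda x, t: math.exp(-C * t) * math.sin(math.sqrt(C) * x)

x0, t0, h = 0.7, 0.1, 1e-4  # arbitrary test point and step size
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / (h * h)
print(abs(u_t - u_xx))  # close to zero
```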
Now, for non-negative integers $$m$$ and $$n$$,

$$\int_0^{2\pi} \sin(mx)\sin(nx)\,\mathrm{d}x = \begin{cases} \pi & m = n \ne 0 \\ 0 & \text{otherwise} \end{cases} \qquad \int_0^{2\pi} \cos(mx)\cos(nx)\,\mathrm{d}x = \begin{cases} 2\pi & m = n = 0 \\ \pi & m = n \ne 0 \\ 0 & m \ne n \end{cases}$$

and

$$\int_0^{2\pi} \sin(mx)\cos(nx)\,\mathrm{d}x = 0.$$
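These orthogonality relations are easy to confirm numerically. The sketch below uses the composite trapezoidal rule, which is very accurate for smooth periodic integrands, with arbitrarily chosen wavenumbers:

```python
import math

# Numerically check the orthogonality relations on [0, 2*pi] using the
# trapezoidal rule; for periodic integrands the two endpoints coincide.

def integrate(g, n=4096):
    """Trapezoidal rule for a 2*pi-periodic function g on [0, 2*pi]."""
    h = 2.0 * math.pi / n
    return h * sum(g(i * h) for i in range(n))

sin_sin = integrate(lambda x: math.sin(3 * x) * math.sin(3 * x))  # approx. pi
cos_cos = integrate(lambda x: math.cos(2 * x) * math.cos(5 * x))  # approx. 0
sin_cos = integrate(lambda x: math.sin(4 * x) * math.cos(4 * x))  # approx. 0
print(sin_sin, cos_cos, sin_cos)
```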
Thus we can consider these trigonometric functions as orthogonal vectors. It can be shown that a sum of such trigonometric functions can be used to approximate a wide class of periodic functions on the interval $$[0, 2\pi]$$; for well-behaved functions, only the first few terms in such a sum are required to obtain highly accurate approximations. Thus, we can expand the initial condition in a sum of trigonometric functions,

$$f(x) = \sum_{n} \alpha_n \sin\left(\sqrt{C_n}x\right) + \beta_n \cos\left(\sqrt{C_n}x\right).$$
Multiplying the above equation by either $$\sin(\sqrt{C_n}x)$$ or $$\cos(\sqrt{C_n}x)$$, integrating over $$[0, 2\pi]$$, and using the orthogonality of these functions, we deduce that

$$\alpha_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\sin\left(\sqrt{C_n}x\right)\mathrm{d}x$$

and

$$\beta_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\cos\left(\sqrt{C_n}x\right)\mathrm{d}x$$

for $$\sqrt{C_n} \ne 0$$, while the constant mode is given by $$\beta_0 = \frac{1}{2\pi}\int_0^{2\pi} f(x)\,\mathrm{d}x.$$
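These integrals can be evaluated numerically when they are hard to do by hand. Below is a minimal sketch using the trapezoidal rule and a hypothetical test function $$f(x) = 2\sin(3x) + \tfrac{1}{2}\cos(7x)$$, chosen because its coefficients are known in advance:

```python
import math

# Hypothetical sample initial condition with known expansion coefficients,
# used to check the quadrature formulas for alpha_n and beta_n.
f = lambda x: 2.0 * math.sin(3 * x) + 0.5 * math.cos(7 * x)

def coeffs(f, n, samples=4096):
    """Approximate alpha_n and beta_n by the trapezoidal rule on [0, 2*pi]."""
    h = 2.0 * math.pi / samples
    alpha = (h / math.pi) * sum(f(i * h) * math.sin(n * i * h) for i in range(samples))
    beta = (h / math.pi) * sum(f(i * h) * math.cos(n * i * h) for i in range(samples))
    return alpha, beta

print(coeffs(f, 3))  # approximately (2.0, 0.0)
print(coeffs(f, 7))  # approximately (0.0, 0.5)
```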
Most ODEs and PDEs of practical interest are not separable. However, the ideas behind separation of variables can be used to find series solutions to a wide class of PDEs. These series solutions can also be found numerically, and they are what we will use to find approximate solutions to PDEs, so the ideas behind these simple examples are quite useful.

Exercises
1) Solve the ordinary differential equation

$$u_{t} = u(u - 1)$$

$$u(t = 0) = 0.8$$

using separation of variables.

2)
 * a) Use separation of variables to solve the partial differential equation

$$u_{tt} = u_{xx}$$

with

$$u(x = 0, t) = u(x = 2\pi, t),$$

$$u(x, t = 0) = \sin(6x) + \cos(4x)$$

and

$$u_{t}(x, t = 0) = 0.$$


 * b) Create plots of your solution at several different times and/or create an animation of the solution you have found.


 * c) The procedure required to find the coefficients in the Fourier series expansion of the initial condition can become quite tedious or even intractable. Consider the initial condition $$u(x, t = 0) = \exp(\sin(x))$$. Explain why it would be difficult to compute the Fourier coefficients for this function by hand, and also explain why it would be useful to have an algorithm or computer program that does this for you.
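As a hint for part (c): such coefficients are exactly what a discrete Fourier transform computes from equally spaced samples of the function. The sketch below uses a direct (slow) DFT rather than a fast transform; the sample count is an arbitrary choice:

```python
import cmath
import math

# Approximate the Fourier coefficients of f(x) = exp(sin(x)) on [0, 2*pi]
# from equally spaced samples via a direct discrete Fourier transform.
# (In practice one would call an FFT library; a plain DFT suffices here.)

def dft_coeffs(f, n=64):
    """Return complex coefficients c_k with f(x) ~ sum_k c_k exp(i k x)."""
    xs = [2.0 * math.pi * j / n for j in range(n)]
    return [sum(f(x) * cmath.exp(-1j * k * x) for x in xs) / n
            for k in range(n // 2)]

c = dft_coeffs(lambda x: math.exp(math.sin(x)))
# For a real function, beta_k = 2*Re(c_k) and alpha_k = -2*Im(c_k) for k >= 1.
for k in range(4):
    print(k, c[k])
```

Because $$\exp(\sin(x))$$ is smooth and periodic, the sampled DFT converges to the true Fourier coefficients extremely quickly as the number of samples grows.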