Introduction to Biological Systems and Soft Condensed Matter

Lecture notes for the course given by Prof. David Andelman, Tel-Aviv University 2009

Original author: Guy Cohen, Tel-Aviv University

[[Media:biosoft.pdf|PDF version.]] [[Media:biosoft diagrams.pdf|All diagrams.]]

Introduction and Preliminaries
We will make several assumptions throughout the course:


 * 1) The physics in question is generally in the classical regime, $$\hbar\rightarrow0$$.
 * 2) Materials are "soft": quantitatively, this implies that all relevant energy scales are of the order of $$k_{B}T$$.
 * 3) Condensed matter physics deals with systems composed of $$O\left(10^{23}\right)$$ particles, and statistical mechanics applies. We are always interested in a reduced description, in terms of continuum mechanics and elasticity, hydrodynamics, macroscopic electrodynamics and so on.

We begin with an example from Chaikin & Lubensky, the story of an H2O molecule. This molecule is bound together by a chemical bond which is around $$20k_{B}T$$ at room temperature and not easily broken under normal circumstances. What happens when we put $$\sim10^{23}$$ water molecules in a container? First of all, with such large numbers we can safely discuss phases of matter: namely $$\underset{\mbox{(amorphous/crystal ice)}}{\mbox{solid}}\leftrightarrow\underset{\mbox{(water)}}{\mbox{fluid}}\leftrightarrow\underset{\mbox{(steam)}}{\mbox{gas}}.$$ Gas is typical of low density, high temperature and low pressure. It is generally prone to changes in shape and volume, homogeneous, isotropic, weakly interacting and insulating. This is the least ordered form of matter relevant to our scenario, and relatively easy to treat since order parameters are small. The liquid phase is typical of intermediate temperatures. It flows but is not very compressible. It is homogeneous, isotropic, dense and strongly interacting. Its response to external forces depends on the rate of its deformation. Liquids are hard to treat theoretically, as their intermediate properties make simple approximations less effective. The solid is a dense ordered phase with low entropy and strong interactions. It is anisotropic and does not flow; it strongly resists compression, and its response to forces depends on the amount of deformation they cause (elastic). Transitions between these phases occur at specific values of thermodynamic parameters (see diagram (1)). First-order transitions (where the volume/density jumps at the transition while pressure/temperature do not) occur on the coexistence lines; at the critical liquid/gas point, a second-order phase transition occurs; at the triple point, all three phases (solid/liquid/gas) coexist.
The systems we are interested in are characterized by several kinds of interactions between their constituent molecules: for example, Coulombic interactions of the form $$\frac{q^{2}}{r}$$ when charged particles are present, fixed dipole interactions of the form $$\frac{\mathbf{p}_{1}\cdot\mathbf{p}_{2}}{r^{3}}$$ when permanent dipoles exist, and almost always induced dipole/van der Waals interactions of the form $$\frac{\Delta\mathbf{p}_{1}\cdot\Delta\mathbf{p}_{2}}{r^{6}}$$. At close range we also have the "hard core" or steric repulsion, sometimes modeled by a $$\sim\frac{1}{r^{12}}$$ potential. Simulations often use the so-called $$12-6$$ Lennard-Jones potential $$U=4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]$$ (as pictured in (2)), which with appropriate parameters correctly describes both condensation and crystallization in some cases. Starting from a classical Hamiltonian such as $$H=\sum_{i}\left(\frac{\mathbf{p}_{i}^{2}}{2m}+V_{VdW}\right)$$, we can predict all three phases of matter and the transitions between them. In biological systems, this simple picture does not suffice: the relevant physics spans many scales, from the nanometric through the mesoscopic and up to the macroscopic. Biological systems are mesoscopic in nature, and their properties cannot be described correctly when a coarse-graining is performed without accurately accounting for mesoscopic properties.
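As a quick numeric illustration (not part of the original notes), a few lines of Python evaluate the 12-6 Lennard-Jones potential and check its standard features: $$U\left(\sigma\right)=0$$ and a minimum of depth $$-\varepsilon$$ at $$r=2^{1/6}\sigma$$. The parameter values are arbitrary reduced units:

```python
import math

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones potential U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The minimum sits at r = 2^(1/6)*sigma, where U = -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
print(lennard_jones(1.0))    # U(sigma) = 0 by construction
print(lennard_jones(r_min))  # approximately -1.0, i.e. -epsilon
```

The $$r^{-12}$$ term dominates at short range (steric repulsion), while the $$r^{-6}$$ van der Waals attraction dominates at long range.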

A few examples follow:

Liquid crystals
The most basic assumption we need in order to model liquid crystals is that isotropy at the molecular level is broken: molecules are represented by rods rather than spheres. Such a description was suggested by Onsager and others, and leads to three phases as shown in (3).

Polymers
When molecules are interconnected at mesoscopic ranges, new phases and properties are encountered.

Soap/beer foam
This kind of substance is approximately 95% gas, with the remainder water – yet it behaves like a weak solid as long as its deformations are small. This is because a tight formation of ordered cells separated by thin liquid films is formed, and in order for the material to change shape the cells must be rearranged. This need for restructuring is the cause of such systems' solid-like resistance to change.

Structured fluids
Polymers or macromolecules in liquid state, liquid crystals, emulsions and colloidal solutions and gels display complex visco-elastic behavior as a result of mesoscopic super-structures within them.

Soft 2D membranes
Interfaces between fluids have interesting properties: they act as a 2D liquid within the interface, yet respond elastically to any bending of the surface. Surfactant molecules will spontaneously form membranes within the same fluid, which also have these properties at appropriate temperatures. Surfactants in solution also form lamellar phases: multilayered structures in which the basic units are the membranes rather than single molecules.

Polymers
Books: Doi, de Gennes, Rubinstein, Doi & Edwards.

Brief history
Natural polymers like rubber have been known since the dawn of history, but were not understood. The first artificial polymer was made $$\sim1905$$. Staudinger was the first to understand that polymers are formed by molecular chains, and is considered to be the father of synthetic polymers. Most polymers were made by the petrochemical industry. Nylon was born in 1940. Various uses and unique properties (light, strong, thermally insulating; available in many different forms from strings and sheets to bulk; cheap, easy to process, shape and mass-produce...) have made them very attractive commercially. Later on, some leading scientists were Kuhn and Flory in chemistry (30's to 70's) and Stockmayer in physical chemistry (50's and 60's). The famous modern theory of polymers was first formulated by P.G. de Gennes and Sam Edwards.

What is a polymer?
A polymer is a material composed of chains with a repeating basic unit, the monomer. Connections between monomers are made by chemical (covalent) bonds, and are strong at room temperature. $$\left[A\right]_{N}\equiv\overset{N\mbox{ times}}{\overbrace{A-A-A-...-A}}.$$ $$N$$ is the polymerization index.

Polymerization is also the name of the process by which polymers are synthesized, which involves a chain reaction where a reactive site exists at the end of the chain. Some chemical reactions increase the chain length by one unit, while simultaneously moving the reactive site to the new end: $$\left[A\right]_{N}+\left[A\right]_{1}\rightarrow\left[A\right]_{N+1}.$$ There also exist condensation processes, by which chains unite: $$\left[A\right]_{N}+\left[A\right]_{M}\rightarrow\left[A\right]_{N+M},$$ where $$N,\, M\ge1$$. A briefer notation, dropping the name of the monomer, is $$\left(N\right)+\left(M\right)\rightarrow\left(N+M\right).$$

Consider the example of hydrocarbon polymers, where the monomer is ethylene, $$\mathrm{C_{2}H_{4}}$$. As a larger number of such units is joined together to become polyethylene molecules, the material composed of these molecules changes drastically in nature:

Types of polymer structures
Polymers can exist in different topologies, which affect the macroscopic properties of the material they form (see (4)):


 * Linear chains (this is the simplest case, which we will be discussing).
 * Rings (chains connected at the ends).
 * Stars (several chain arms connected at a central point).
 * Tree (connected stars).
 * Comb (one main chain with side chains branching out).
 * Dendrimer (ordered branching structure).

Polymer phases of matter
Depending on the environment and larger-scale structure, polymers can exist in many states:


 * Gas of isolated chains (not very relevant).
 * In solution (water or organic solvents). In dilute solutions, polymer chains float freely like gas molecules, but their length alters their behavior.
 * In a liquid state of chains (called a melt).
 * In solid state (plastic) – crystals, poly-crystals, amorphous/glassy materials.
 * Liquid crystals formed by polymer chains (polymeric liquid crystal, or PLC).
 * Gels and rubber: networks of chains tied together.

Some basic models of polymer chains
The simplest model of an ideal polymer chain is the freely jointed chain (FJC), where each monomer performs a completely independent random rotation. Here, at equilibrium the end-to-end length of the chain is $$R_{0}\simeq N^{\frac{1}{2}}\ell=L^{\frac{1}{2}}\ell^{\frac{1}{2}}$$, where $$L=N\ell$$ is the contour length.

A slightly more realistic model is the freely rotating chain (FRC), where monomers are locked at some chemically meaningful bond angle $$\vartheta$$ and rotate freely around it via the torsional angle $$\varphi$$. Here, $$ R_{0}^{2}\simeq L\ell_{eff}\sim N,\qquad\ell_{eff}=\ell\frac{1+\cos\vartheta}{1-\cos\vartheta}.$$ Note that for $$\left\langle \cos\vartheta\right\rangle =0$$ we find that $$\ell_{eff}=\ell$$ and this is identical to the FJC. For very small $$\vartheta\sim\varepsilon$$, we can expand the cosine and obtain $$ \ell_{eff}\simeq\frac{4\ell}{\varepsilon^{2}}\gg\ell.$$ This is the rigid rod limit (to be discussed later in detail).

A second possible improvement is the hindered rotation (HR) model. Here the angles $$\varphi_{i}$$ have a minimum-energy value, and are taken from an uncorrelated Boltzmann distribution with some potential $$V\left(\varphi_{i}\right)$$. This gives $$ R_{0}^{2}\simeq L\ell_{eff}\sim N,\qquad\ell_{eff}=\ell\left(\frac{1+\cos\vartheta}{1-\cos\vartheta}\right)\left(\frac{1+\left\langle \cos\varphi\right\rangle }{1-\left\langle \cos\varphi\right\rangle }\right).$$

Another option is called the rotational isomeric state model. Here, a finite number of angles are possible for each monomer junction and the state of the full chain is given in terms of these. Correlations are also taken into account and the solution is numeric, but aside from a complicated $$\ell_{eff}$$ this is still an ideal chain with $$R_{0}^{2}\simeq L\ell_{eff}\sim N$$.

Calculating the end-to-end radius
For the polymer chain of (5), obviously we will always have $$\left\langle \mathbf{R}_{N}\right\rangle =0$$. The variance, however, is generally not zero: using $$\mathbf{R}_{N}=\sum_{i=1}^{N}\mathbf{r}_{i}$$, $$ \left\langle \mathbf{R}_{N}^{2}\right\rangle =\sum_{i,j=1}^{N}\left\langle \mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle =\sum_{i,j=1}^{N}\ell^{2}\left\langle \cos\vartheta_{ij}\right\rangle .$$

FJC

In the freely jointed chain (FJC) model, there are neither correlations between different sites nor restrictions on the rotational angles. We therefore have $$\left\langle \cos\vartheta_{ij}\right\rangle =\frac{1}{\ell^{2}}\left\langle \mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle =\delta_{ij}$$, and $$ \left\langle \mathbf{R}_{N}^{2}\right\rangle =\sum_{ij}\ell^{2}\delta_{ij}=N\ell^{2}=L\ell.$$

Therefore, $$R_{0}\equiv\sqrt{\left\langle \mathbf{R}_{N}^{2}\right\rangle }=N^{\frac{1}{2}}\ell$$.
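This ideal-chain scaling is easy to check with a small Monte Carlo sketch (an illustration with arbitrary parameters, not part of the notes): sample freely jointed chains of uniformly oriented bonds and average $$R^{2}$$ over many conformations, which should reproduce $$\left\langle \mathbf{R}_{N}^{2}\right\rangle =N\ell^{2}$$:

```python
import math, random

def fjc_end_to_end_sq(N, ell=1.0, rng=random):
    """End-to-end R^2 of one freely jointed chain: N independent bond vectors,
    each uniformly oriented on the sphere (cos(theta) uniform in [-1, 1])."""
    x = y = z = 0.0
    for _ in range(N):
        cos_t = rng.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += ell * sin_t * math.cos(phi)
        y += ell * sin_t * math.sin(phi)
        z += ell * cos_t
    return x * x + y * y + z * z

random.seed(0)
N, samples = 100, 2000
mean_R2 = sum(fjc_end_to_end_sq(N) for _ in range(samples)) / samples
print(mean_R2 / N)  # should be close to ell^2 = 1
```

With 2000 samples the ratio fluctuates around 1 at the few-percent level, consistent with $$R_{0}=N^{1/2}\ell$$.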

FRC

In the freely rotating chain model, the bond angles are held constant at angles $$\vartheta_{i}$$ while the torsion angles $$\varphi_{i}$$ are taken from a uniform distribution between $$-\pi$$ and $$\pi$$. This introduces some correlation between the angles: since (for one definition of the $$\varphi_{i}$$) $$\mathbf{r}_{i+1}=\cos\vartheta_{i}\mathbf{r}_{i}+\sin\vartheta_{i}(\sin\varphi_{i}\mathbf{\hat{y}}\times\mathbf{r}_{i}+\cos\varphi_{i}\mathbf{\hat{x}}\times\mathbf{r}_{i})$$, and since the $$\varphi_{i}$$ are independent and any average over a sine or cosine of one or more of them vanishes, only the $$\varphi_{i}$$-independent terms survive, and by recursion this correlation has the simple form $$ \left\langle \mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle =\ell^{2}\left(\cos\vartheta\right)^{\left|i-j\right|}.$$ The end-to-end radius is $$\begin{array}{lcl} R_{0}^{2} & = & \sum_{i,j=1}^{N}\left\langle \mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle \\ & = & \sum_{i=1}^{N}\left\langle \mathbf{r}_{i}^{2}\right\rangle +\ell^{2}\sum_{i=1}^{N}\sum_{j=1}^{i-1}\left(\cos\vartheta\right)^{i-j}+\ell^{2}\sum_{i=1}^{N}\sum_{j=i+1}^{N}\left(\cos\vartheta\right)^{j-i}\\ & = & N\ell^{2}+\ell^{2}\sum_{i=1}^{N}\left[\sum_{k=1}^{i-1}\left(\cos\vartheta\right)^{k}+\sum_{k=1}^{N-i}\left(\cos\vartheta\right)^{k}\right].\end{array}$$ At large $$N$$ we can approximate the two sums in $$k$$ by the infinite series $$\sum_{k=1}^{\infty}\cos^{k}\vartheta=\frac{\cos\vartheta}{1-\cos\vartheta}$$, giving $$ R_{0}^{2}\simeq N\ell^{2}+2\ell^{2}\sum_{i=1}^{N}\frac{\cos\vartheta}{1-\cos\vartheta}=N\ell^{2}\frac{1+\cos\vartheta}{1-\cos\vartheta}.$$

To extract the Kuhn length $$\ell_{eff}$$ from this expression, we rewrite it in the following way: $$ R_{0}^{2}=N\left(\ell\sqrt{\frac{1+\cos\vartheta}{1-\cos\vartheta}}\right)^{2}\equiv N\ell\ell_{eff}=L\ell_{eff},$$ $$ \ell_{eff}=\ell\frac{1+\cos\vartheta}{1-\cos\vartheta}.$$ To go back from this to the FJC limit, we would consider a chain with a random distribution of $$\vartheta$$ angles such that $$\left\langle \cos\vartheta\right\rangle =0$$.
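As a sanity check of the large-$$N$$ approximation (an illustration, not from the notes), we can compare the exact double sum $$\ell^{2}\sum_{ij}\left(\cos\vartheta\right)^{\left|i-j\right|}$$ with the closed form $$N\ell^{2}\frac{1+\cos\vartheta}{1-\cos\vartheta}$$; the bond angle below is arbitrary:

```python
import math

def frc_R0_sq_exact(N, ell, theta):
    """Exact double sum R0^2 = ell^2 * sum_ij (cos theta)^|i-j| for the FRC."""
    c = math.cos(theta)
    return ell**2 * sum(c ** abs(i - j) for i in range(N) for j in range(N))

def frc_R0_sq_approx(N, ell, theta):
    """Large-N closed form R0^2 = N * ell^2 * (1 + cos)/(1 - cos)."""
    c = math.cos(theta)
    return N * ell**2 * (1 + c) / (1 - c)

N, ell, theta = 1000, 1.0, math.radians(68)  # arbitrary bond angle
exact = frc_R0_sq_exact(N, ell, theta)
approx = frc_R0_sq_approx(N, ell, theta)
print(exact, approx)  # agree to much better than 1% at N = 1000
```

The neglected boundary terms are $$O\left(1\right)$$ while both results grow as $$N$$, so the agreement improves with chain length.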

Gyration radius
Consider once again the polymer chain of (5). Define: $$ R_{g}^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{R}_{i}^{\prime}-\mathbf{R}_{CM}^{\prime}\right)^{2}\equiv\frac{1}{N}\sum_{i=1}^{N}\mathbf{R}_{i}^{2}.$$ The unprimed coordinate system is refocused on the center of mass, such that $$\sum_{i}\mathbf{R}_{i}=0$$. Now, it is easier to work with the following expression: $$\begin{array}{lcl} \frac{1}{2N^{2}}\sum_{ij}\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)^{2} & = & \frac{1}{2N^{2}}\sum_{ij}\left(-2\mathbf{R}_{i}\cdot\mathbf{R}_{j}+2\mathbf{R}_{i}\cdot\mathbf{R}_{i}\right)\\ & = & \frac{2N}{2N^{2}}\sum_{i}\mathbf{R}_{i}^{2}-\frac{1}{N^{2}}\overbrace{\left(\sum_{i}\mathbf{R}_{i}\right)}^{=0}\cdot\overbrace{\left(\sum_{j}\mathbf{R}_{j}\right)}^{=0}\\ & = & R_{g}^{2}.\end{array}$$ We will calculate $$R_{g}$$ for a long FJC. For $$N\gg1$$ we can replace the sums with integrals, obtaining $$\begin{array}{lcl} \left\langle R_{g}^{2}\right\rangle & = & \frac{1}{2N^{2}}\sum_{ij}\overbrace{\left\langle \left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)^{2}\right\rangle }^{\left|i-j\right|\ell^{2}}\\ & = & \frac{1}{2N^{2}}\int_{0}^{N}\mathrm{d}u\int_{0}^{N}\mathrm{d}v\,\ell^{2}\left|u-v\right|\\ & = & \frac{2}{2N^{2}}\int_{0}^{N}\mathrm{d}u\int_{0}^{u}\mathrm{d}v\,\ell^{2}\left(u-v\right)\\ & = & \frac{\ell^{2}}{N^{2}}\int_{0}^{N}\mathrm{d}u\left[u^{2}-\frac{1}{2}u^{2}\right]\\ & = & \frac{1}{6}N\ell^{2}.\end{array}$$ This gives the gyration radius for an FJC: $$ R_{g}^{2}=\frac{1}{6}N\ell^{2}.$$
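The $$\frac{1}{6}N\ell^{2}$$ result can also be verified directly from the discrete double sum, before taking the continuum limit (a small illustrative check, not from the notes):

```python
def rg_sq_fjc(N, ell=1.0):
    """Gyration radius squared from the exact discrete double sum
    <(R_i - R_j)^2> = |i - j| * ell^2 for the FJC."""
    s = sum(abs(i - j) for i in range(N) for j in range(N))
    return ell**2 * s / (2 * N**2)

N = 500
print(rg_sq_fjc(N), N / 6.0)  # the discrete sum approaches N * ell^2 / 6
```

The exact discrete value is $$\frac{\ell^{2}\left(N^{2}-1\right)}{6N}$$, which differs from $$\frac{1}{6}N\ell^{2}$$ only by $$O\left(1/N\right)$$.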

Polymers and Gaussian distributions
An ideal chain is a Gaussian chain, in the sense that the end-to-end radius is taken from a Gaussian distribution. We will see two proofs of this.

Random walk proof

One way to show this (see Rubinstein, de Gennes) is to begin with a random walk. In one dimension, if we begin at $$x=0$$ and at each time step $$i$$ move left or right with steps $$x_{i}=\pm\ell$$, the final displacement is $$ x=\sum_{i}x_{i}=\ell\left(N_{+}-N_{-}\right),$$ where $$N=N_{+}+N_{-}$$ is the total number of steps. We define $$Z_{N}\left(x\right)$$ as the number of configurations of $$N$$ steps with a final displacement of $$x$$, and $$P_{N}\left(x\right)$$ as the associated normalized probability. Then $$ Z_{N}\left(x\right)=\frac{N!}{\left(N_{+}\right)!\left(N_{-}\right)!}\underset{N\gg1}{\longrightarrow}\frac{C}{\sqrt{N}}e^{-\frac{x^{2}}{2\left\langle x^{2}\right\rangle }},\qquad\left\langle x^{2}\right\rangle =N\ell^{2}.$$ In fact, for $$N\rightarrow\infty$$ the central limit theorem tells us that $$x=\sum_{i}x_{i}$$ will have a Gaussian distribution for any distribution of the $$x_{i}$$. This can be extended to $$d$$ dimensions with a displacement $$\mathbf{R}=\sum_{i}\mathbf{x}_{i}$$: $$ Z_{N}^{d}=\left(\frac{C}{\sqrt{N}}\right)^{d}\exp\left\{ -\frac{dR^{2}}{2\left\langle R^{2}\right\rangle }\right\} ,\qquad\left\langle R^{2}\right\rangle =d\left\langle \mathbf{x}_{i}^{2}\right\rangle =N\ell^{2}.$$ To find the normalization constant $$C$$ we must integrate over all dimensions: $$ 1=\int\mathrm{d}\mathbf{R}\,Z_{N}\left(\mathbf{R}\right)=\left(\int\mathrm{d}x\,Z_{N}\left(x\right)\right)^{d}=\left(\frac{C}{\sqrt{N}}\sqrt{2\pi\left\langle x^{2}\right\rangle }\right)^{d},$$ which yields $$ P_{N}^{d}\left(\mathbf{R}\right)=\left(\frac{d}{2\pi N\ell^{2}}\right)^{\frac{d}{2}}\exp\left\{ -\frac{dR^{2}}{2N\ell^{2}}\right\} .$$

Some notes:
 * An ideal chain can now be redefined as one such that $$P_{N}^{d}\left(R\right)$$ is Gaussian in any dimension $$d\ge1$$.
 * This is also true for a long chain with local interactions only, such that $$R_{0}^{2}=N\ell\ell_{eff}=L\ell_{eff}\sim N$$.
 * The probability of being in a spherical shell with radius $$\mathbf{R}$$ is $$4\pi R^{2}P_{N}\left(R\right)$$.
 * The chance of returning to the origin $$P_{N}\left(R=0\right)$$ is $$\left(\frac{d}{2\pi\ell^{2}N}\right)^{\frac{d}{2}}\sim\left(\frac{1}{N}\right)^{\frac{d}{2}}\equiv N^{-\gamma}$$. $$\gamma=\frac{d}{2}$$ is typical of an ideal chain.
 * For any dimension $$d\ge1$$, $$R_{0}=\sqrt{\left\langle R^{2}\right\rangle }\sim N^{\frac{1}{2}}$$.
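The return-to-origin scaling can be checked exactly in one dimension (an illustrative check, assuming $$\pm\ell$$ steps on a lattice): for even $$N$$ the exact return probability is binomial, and it should converge to the Gaussian estimate, i.e. the density $$P_{N}\left(0\right)$$ times the lattice spacing $$2\ell$$:

```python
import math

def exact_return_probability(N):
    """P(x = 0) for an N-step +/-1 random walk (N even): C(N, N/2) / 2^N."""
    return math.comb(N, N // 2) / 2 ** N

def gaussian_return_probability(N, ell=1.0):
    """Gaussian estimate: density (2*pi*N*ell^2)^(-1/2) times spacing 2*ell."""
    return 2 * ell / math.sqrt(2 * math.pi * N * ell**2)

for N in (10, 100, 1000):
    print(N, exact_return_probability(N), gaussian_return_probability(N))
# the two columns converge as N grows, and both scale as N^(-1/2)
```

Both expressions fall off as $$N^{-1/2}$$, i.e. $$\gamma=\frac{d}{2}$$ with $$d=1$$, as stated above.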

Formal proof

Another way to show this follows, which is also extensible to other distributions of the $$\left\{ \mathbf{r}_{i}\right\} $$.

In general, we can write $$ P_{N}\left(\mathbf{R}\right)=\int\mathrm{d}\mathbf{r}_{1}\int\mathrm{d}\mathbf{r}_{2}...\int\mathrm{d}\mathbf{r}_{N}\Psi\left(\mathbf{r}_{1},...,\mathbf{r}_{N}\right)\delta\left(\mathbf{R}-\sum_{i=1}^{N}\mathbf{r}_{i}\right).$$ In the absence of correlations, we can factorize $$\Psi$$: $$ \Psi\left(\mathbf{r}_{1},...,\mathbf{r}_{N}\right)=\psi\left(\mathbf{r}_{1}\right)...\psi\left(\mathbf{r}_{N}\right).$$ For example, for a freely jointed chain $$\psi\left(\mathbf{r}_{i}\right)=\alpha\delta\left(\left|\mathbf{r}_{i}\right|-\ell\right)$$. The normalization constant is found from $$\int\psi\left(r_{i}\right)4\pi r_{i}^{2}\mathrm{d}r_{i}=4\pi\alpha\ell^{2}=1$$, giving $$ \psi\left(\mathbf{r}_{i}\right)=\frac{1}{4\pi\ell^{2}}\delta\left(\left|\mathbf{r}_{i}\right|-\ell\right).$$ We can replace the delta functions with $$\delta\left(\mathbf{r}\right)=\frac{1}{\left(2\pi\right)^{3}}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{r}}$$, leaving us with $$\begin{array}{lcl} P_{N}\left(\mathbf{R}\right) & = & \frac{1}{\left(2\pi\right)^{3}}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{R}}\int\mathrm{d}\mathbf{r}_{1}...\int\mathrm{d}\mathbf{r}_{N}\prod_{i}\left[e^{-i\mathbf{k}\cdot\mathbf{r}_{i}}\psi\left(\mathbf{r}_{i}\right)\right]\\ & = & \frac{1}{\left(2\pi\right)^{3}}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{R}}\left[\int\mathrm{d}\mathbf{r}\,e^{-i\mathbf{k}\cdot\mathbf{r}}\psi\left(\mathbf{r}\right)\right]^{N}.\end{array}$$ In spherical coordinates, $$\begin{array}{lcl} \int\mathrm{d}\mathbf{r}\,e^{-i\mathbf{k}\cdot\mathbf{r}}\psi\left(\mathbf{r}\right) & = & \int r^{2}\mathrm{d}r\,\mathrm{d}\vartheta\,\mathrm{d}\varphi\,\sin\vartheta\,e^{-ikr\cos\vartheta}\frac{1}{4\pi\ell^{2}}\delta\left(r-\ell\right)\\ & = & \frac{1}{2}\int_{-1}^{1}\mathrm{d}\alpha\,e^{-ik\ell\alpha}\qquad\left(\alpha\equiv\cos\vartheta\right)\\ & = & \frac{\sin k\ell}{k\ell},\end{array}$$ which gives $$ P_{N}\left(\mathbf{R}\right)=\left(\frac{1}{2\pi}\right)^{3}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{R}}\left(\frac{\sin k\ell}{k\ell}\right)^{N}.$$ We are left with the task of evaluating the integral. This can be done analytically with the Laplace method for large $$N$$, since the largest contribution is around $$k\ell=0$$: we can approximate $$\left(\frac{\sin k\ell}{k\ell}\right)^{N}$$ by $$\left(1-\frac{\left(k\ell\right)^{2}}{6}+...\right)^{N}\simeq e^{-\frac{\left(k\ell\right)^{2}N}{6}}$$. The integral is then $$\begin{array}{lcl} P_{N}\left(\mathbf{R}\right) & = & \left(\frac{1}{2\pi}\right)^{3}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{R}}e^{-\frac{k^{2}\ell^{2}N}{6}}\\ & = & \left(\frac{1}{2\pi}\right)^{3}\int\mathrm{d}k_{1}\mathrm{d}k_{2}\mathrm{d}k_{3}\exp\left[\sum_{\alpha}\left(ik_{\alpha}R_{\alpha}-\frac{Nk_{\alpha}^{2}\ell^{2}}{6}\right)\right]\\ & = & \left(\frac{1}{2\pi}\right)^{3}\prod_{\alpha}\int\mathrm{d}k_{\alpha}\exp\left(ik_{\alpha}R_{\alpha}-\frac{Nk_{\alpha}^{2}\ell^{2}}{6}\right)\\ & = & \left(\frac{3}{2\pi N\ell^{2}}\right)^{\frac{3}{2}}\exp\left\{ -\frac{3R^{2}}{2N\ell^{2}}\right\} .\end{array}$$ This is, of course, the same Gaussian form we obtained from the random walk (we have done the special case of $$d=3$$, but once again this process can be repeated for a general dimension $$d\ge1$$).

03/26/2009

Worm-like chain
In considering the $$\vartheta\rightarrow0$$ limit of the freely rotating chain, we have seen that $$\ell_{eff}\sim\frac{\ell}{\vartheta^{2}}\rightarrow\infty$$. This is of course unphysical, and this limit is actually important for many interesting cases of stiff chains (for instance, DNA). If we take the $$N\rightarrow\infty$$ limit along with $$\vartheta\rightarrow0$$ and start over, we can make the following change of variables: $$\begin{array}{lcl} \left\langle \mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle & = & \ell^{2}\left\langle \cos\vartheta_{ij}\right\rangle \\ & = & \ell^{2}\left(\cos\vartheta\right)^{\left|i-j\right|}\\ & = & \ell^{2}\exp\left[-\frac{\left|i-j\right|\ell}{\ell_{p}}\right],\end{array}$$ which defines the persistence length $$\ell_{p}$$. For the FRC model, $$ \ell_{p}=-\frac{\ell}{\ln\cos\vartheta}.$$ This is a useful concept in general, however: it defines the typical length scale over which correlations between chain angles die out, and is therefore an expression of the chain's rigidity.

At small $$\vartheta$$ we can expand the logarithm to get $$ \ln\cos\vartheta\simeq\ln\left(1-\frac{\vartheta^{2}}{2}\right)\approx-\frac{\vartheta^{2}}{2},$$ so that $$ \ell_{p}\simeq\frac{2\ell}{\vartheta^{2}}.$$ Taking the continuum limit carefully then requires us to consider $$N\rightarrow\infty$$ and $$\ell\rightarrow0$$ such that $$R_{max}=N\ell\cos\frac{\vartheta}{2}\simeq N\ell$$ is constant. Now, we can calculate the end-to-end length $$R_{0}^{2}=\left\langle R_{N}^{2}\right\rangle $$ in the continuum limit using the new form of the correlations: $$\begin{array}{lcl} R_{0}^{2} & = & \ell^{2}\sum_{ij}\left(\cos\vartheta\right)^{\left|i-j\right|}=\ell^{2}\sum_{ij}\exp\left\{ -\frac{\ell\left|i-j\right|}{\ell_{p}}\right\} \\ & \rightarrow & \ell^{2}\int_{0}^{R_{m}}\frac{\mathrm{d}u}{\ell}\int_{0}^{R_{m}}\frac{\mathrm{d}v}{\ell}\exp\left\{ -\frac{\left|u-v\right|}{\ell_{p}}\right\} .\end{array}$$ To simplify the calculation, we can define the dimensionless variables $$u^{\prime}=u/\ell_{p}$$, $$v^{\prime}=v/\ell_{p}$$ and $$R_{m}^{\prime}=R_{m}/\ell_{p}$$. With these replacements, $$\begin{array}{lcl} \frac{R_{0}^{2}}{\ell_{p}^{2}} & = & \int_{0}^{R_{m}^{\prime}}\mathrm{d}u^{\prime}\int_{0}^{R_{m}^{\prime}}\mathrm{d}v^{\prime}e^{-\left|u^{\prime}-v^{\prime}\right|}\\ & = & \int_{0}^{R_{m}^{\prime}}\mathrm{d}u^{\prime}\int_{0}^{u^{\prime}}\mathrm{d}v^{\prime}e^{-u^{\prime}}e^{v^{\prime}}+\int_{0}^{R_{m}^{\prime}}\mathrm{d}u^{\prime}\int_{u^{\prime}}^{R_{m}^{\prime}}\mathrm{d}v^{\prime}e^{u^{\prime}}e^{-v^{\prime}}\\ & = & \int_{0}^{R_{m}^{\prime}}\mathrm{d}u^{\prime}e^{-u^{\prime}}\left(e^{u^{\prime}}-1\right)+\int_{0}^{R_{m}^{\prime}}\mathrm{d}u^{\prime}e^{u^{\prime}}\left(e^{-u^{\prime}}-e^{-R_{m}^{\prime}}\right)\\ & = & R_{m}^{\prime}+\left(e^{-R_{m}^{\prime}}-1\right)+R_{m}^{\prime}-e^{-R_{m}^{\prime}}\left(e^{R_{m}^{\prime}}-1\right)\\ & = & 2R_{m}^{\prime}-2\left(1-e^{-R_{m}^{\prime}}\right).\end{array}$$ The final result (known as the Kratky-Porod worm-like chain, or WLC) is $$ R_{0}^{2}=\left\langle R_{N}^{2}\right\rangle =2\ell_{p}R_{max}-2\ell_{p}^{2}\left(1-e^{-\frac{R_{max}}{\ell_{p}}}\right).$$ Importantly, it does not depend on $$\vartheta$$ or $$N$$ but only on the physically transparent persistence length and contour length.

We will consider the two limits where one parameter is much larger than the other. First, for $$\ell_{p}\gg R_{max}$$ we encounter the rigid rod limit: we can expand the previous expression into $$\begin{array}{lcl} R_{0}^{2} & = & 2\ell_{p}R_{max}-2\ell_{p}^{2}\left(1-1+\frac{R_{max}}{\ell_{p}}-\frac{1}{2}\left(\frac{R_{max}}{\ell_{p}}\right)^{2}+...\right)\\ & = & R_{max}^{2}+O\left(\frac{R_{max}^{3}}{\ell_{p}}\right),\end{array}$$ such that $$ R_{0}\sim N.$$ The fact that $$R_{0}\sim N$$ rather than $$R_{0}\sim N^{\frac{1}{2}}$$ is a result of the long-range correlations we have introduced, and is an indication that in this regime the material is in an essentially different phase. Somewhere between the ideal chain and the rigid rod, a crossover regime must exist.

For $$\ell_{p}\ll R_{max}$$ we can neglect the exponential, obtaining $$ R_{0}^{2}\simeq2\ell_{p}R_{max},\qquad\ell_{p}\simeq\frac{2\ell}{\vartheta^{2}},\qquad R_{max}\simeq N\ell.$$ This therefore returns us to the ideal chain limit, with a Kuhn length $$\ell_{eff}=2\ell_{p}$$. The crossover phenomenon we discussed occurs on the chain itself here, as we observe correlations between its pieces at differing length scales: at small scales ($$\sim\ell_{p}$$) it behaves like a rigid rod, while at long scales we have an uncorrelated random walk. An interesting example is a DNA chain, which can be described by a worm-like chain with $$\ell_{p}\approx500\,\AA$$ and $$R_{max}\simeq10\,\mu m\gg\ell_{p}$$: it will therefore typically cover a radius of $$R_{0}=\sqrt{2\ell_{p}R_{max}}\sim10^{4}\,\AA=1\,\mu m$$.

Free Energy of the Ideal Chain and Entropic Springs
We have calculated distributions of $$\mathbf{R}$$ for Gaussian chains with $$N$$ components, $$Z_{N}\left(\mathbf{R}\right)$$. Let's consider the entropy of such chains: $$\begin{array}{lcl} S_{N}\left(\mathbf{R}\right) & = & k_{B}\ln Z_{N}\left(\mathbf{R}\right),\\ P_{N}\left(\mathbf{R}\right) & = & \frac{Z_{N}\left(\mathbf{R}\right)}{\int\mathrm{d}\mathbf{R}\,Z_{N}\left(\mathbf{R}\right)}=\left(\frac{3}{2\pi N\ell^{2}}\right)^{\frac{3}{2}}\exp\left(-\frac{3R^{2}}{2N\ell^{2}}\right).\end{array}$$ The logarithm of $$Z_{N}\left(\mathbf{R}\right)$$ is the same as that of $$P_{N}\left(\mathbf{R}\right)$$, aside from a term which does not depend on $$\mathbf{R}$$. Therefore, $$\begin{array}{lcl} S_{N}\left(\mathbf{R}\right) & = & \overbrace{k_{B}\ln\left(\int Z_{N}\left(\mathbf{R}\right)\mathrm{d}\mathbf{R}\right)+\frac{3}{2}k_{B}\ln\left(\frac{3}{2\pi N\ell^{2}}\right)}^{S_{N}\left(0\right)}-\frac{3}{2}k_{B}\frac{R^{2}}{N\ell^{2}}\\ & = & S_{N}\left(0\right)-\frac{3}{2}k_{B}\frac{R^{2}}{N\ell^{2}}.\end{array}$$ The free energy is $$\begin{array}{lcl} F_{N}\left(\mathbf{R}\right) & = & U_{N}\left(\mathbf{R}\right)-TS_{N}\left(\mathbf{R}\right)\\ & = & \frac{3}{2}k_{B}T\frac{R^{2}}{N\ell^{2}}+F_{N}\left(0\right),\end{array}$$ since $$U_{N}\left(\mathbf{R}\right)=U_{N}\left(0\right)$$ for an ideal chain.

What does $$F_{N}\left(\mathbf{R}\right)$$ mean? It represents the energy needed to stretch the polymer, and this energy is $$\sim R^{2}$$ like a harmonic spring ($$U\sim\frac{1}{2}kx^{2}$$) with $$k=\frac{3k_{B}T}{N\ell^{2}}\sim\frac{T}{N}$$. Note that the polymer becomes less elastic (more rigid) as the temperature increases, unlike most solids. This is a physical result and can be verified experimentally: for instance, the spring constant of rubber (which is made of networks of polymer chains) increases linearly with temperature. Consider an experiment where instead of holding the chain at constant length, we apply a perturbatively weak force $$\pm\mathbf{f}$$ to its ends and measure its average length. We can perform a Legendre transform between distance and force: from equality of forces along the direction in which they are applied, $$\begin{array}{lcl} f_{x} & = & \frac{\partial F_{N}}{\partial R_{x}}=\frac{\partial}{\partial R_{x}}\left(\frac{3k_{B}T}{2N\ell^{2}}R^{2}\right)=\overset{\overbrace{\frac{3k_{B}T}{N\ell^{2}}}}R_{x},\\

& \Downarrow\\

\mathbf{f} & = & k\mathbf{R}.\end{array}$$ To be in this linear response ($$\mathbf{f}\sim\mathbf{r}$$) region, we must demand that $$\mathbf{R}\sim\left|\mathbf{R}_{0}\right|\ll R_{max}=N\ell$$, and to stress this we can write $$ \mathbf{f}=\left(\frac{3k_{B}T}{\ell}\right)\frac{\mathbf{R}}{R_{max}}.$$ Numerically, with a nanometric $$\ell$$ and at room temperature the forces should be in the picoNewton range to meet this requirement. A more rigorous treatment which works at arbitrary forces can be carried out by considering an FJC with oppositely charged ($$\pm q$$) ends in an electric field $$\mathbf{E}\parallel\mathbf{\hat{z}}$$. The chain's sites are at $$\mathbf{r}_{i}$$ with $$\mathbf{R}\equiv\mathbf{R}_{N}-\mathbf{R}_{0}$$. The potential is $$\begin{array}{lcl} U_{elec} & = & +q\mathbf{E}\cdot\mathbf{R}_{0}-q\mathbf{E}\cdot\mathbf{R}_{N}=f\mathbf{R}\\

& \Downarrow\\

\mathbf{f} & = & q\mathbf{E}.\end{array}$$ Since $$\mathbf{R}=\sum_{i}\mathbf{r}_{i}$$, we can write the potential as $$ U_{elec}=-q\mathbf{E}\cdot\mathbf{R}=-q\mathbf{E}\cdot\left(\sum_{i}\mathbf{r}_{i}\right)=-f\ell\sum_{i}\cos\vartheta_{i},$$ with $$\cos\vartheta_{i}=\mathbf{\hat{z}\cdot}\mathbf{r}_{i}$$. The partition function is $$\begin{matrix} Z_{N}\left(\mathbf{f}\right) & = & \int\mathrm{d}\mathbf{r}_{1}...\int\mathrm{d}\mathbf{r}_{N}\Psi\left(\left\{ \mathbf{r}_{i}\right\} \right)e^{-\beta U_{elec}\left(\left\{ \mathbf{r}_{i}\right\} \right)}

& = & \mbox{Tr}_{\Psi_{i}}e^{-\beta U\left(\left\{ \psi_{i}\right\} \right)}.\end{matrix}$$ The function $$\Psi$$ is separable into product of functions $$\psi\left(\mathbf{r}_{i}\right)=\frac{1}{4\pi\ell^{2}}\delta\left(\left|\mathbf{r}_{i}\right|-\ell\right)$$. Now, $$ \exp\left\{ -\frac{U_{elec}}{k_{B}T}\right\} =\exp\left\{ \frac{f\ell}{k_{B}T}\sum_{i}\cos\vartheta_{i}\right\} .$$ In spherical coordinates $$\mathbf{r}_{i}=\left(r_{i},\vartheta_{i},\varphi_{i}\right)$$ we can solve the integral: $$\begin{array}{lcl} Z_{N}\left(\mathbf{f}\right) & = & \left[\int_{0}^{\infty}\mathrm{d}r\frac{r^{2}}{4\pi\ell^{2}}\delta\left(r-\ell\right)\right]^{N}\times\left[\int_{0}^{2\pi}\mathrm{d}\varphi\right]^{N}\times\prod_{i}\int_{0}^{\pi}\mathrm{d}\vartheta_{i}\sin\vartheta_{i}e^{\frac{f\ell}{k_{B}T}\cos\vartheta_{i}}\\

& \underset{=} & \left(\frac{1}{4\pi}\right)^{N}\left(2\pi\right)^{N}\left[\int_{-1}^{1}\mathrm{d}xe^{\frac{f\ell}{k_{B}T}x}\right]^{N}\\

& = & \frac{1}{2^{N}}\left[\frac{2k_{B}T}{f\ell}\sinh\left(\frac{f\ell}{k_{B}T}\right)\right]^{N}\\

& = & \left[\frac{k_{B}T}{f\ell}\sinh\left(\frac{f\ell}{k_{B}T}\right)\right]^{N}.\end{array}$$ The Gibbs free energy (Gibbs because the external force is fixed) is then $$ G_{N}\left(\mathbf{f}\right)=-k_{B}T\ln Z_{N}\left(\mathbf{f}\right)=-k_{B}TN\ln\left[\sinh\left(\frac{f\ell}{k_{B}T}\right)\right]+k_{B}TN\ln\left(\frac{f\ell}{k_{B}T}\right),$$ and the average extension $$\begin{matrix} \left\langle R\right\rangle _{f} & = & -\frac{\partial G_{N}\left(f\right)}{\partial f}

& = & k_{B}TN\coth\left(\frac{f\ell}{k_{B}T}\right)\frac{\ell}{k_{B}T}-k_{B}TN\frac{1}{f}

& = & N\ell\left[\coth\alpha-\frac{1}{\alpha}\right]\equiv N\ell\mathcal{L}\left(\alpha\right),\qquad\alpha\equiv\frac{f\ell}{k_{B}T}.\end{matrix}$$ The Langevin function $$\mathcal{L}\left(\alpha\right)=\coth\alpha-\frac{1}{\alpha}$$ is also typical of spin magnetization in external magnetic fields and of dipoles in electric fields at finite temperatures. 04/02/2009
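As a quick numerical sketch (not part of the original notes), the Langevin function and its two limits can be checked directly; `langevin` is an illustrative helper name:

```python
import math

def langevin(alpha):
    """Langevin function L(alpha) = coth(alpha) - 1/alpha."""
    return 1.0 / math.tanh(alpha) - 1.0 / alpha

# <R>/(N ell) = L(alpha) with alpha = f*ell/(k_B T):
#   weak force:   L(alpha) ~ alpha/3   (linear, Hookean response),
#   strong force: L(alpha) -> 1        (full extension).
```

The weak-force slope 1/3 is the same one that appears for dipoles in an external field at finite temperature.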

Introduction to fractals
Book: B. Mandelbrot.

A fractal is an object with a fractal dimensionality $$D_{f}$$, also called the Hausdorff dimension. This implies a new definition of dimensionality, which we will discuss. Consider a sphere of radius $$R$$. It is considered three-dimensional because it has $$V=\frac{4\pi}{3}R^{3}$$ and $$M=\rho V\sim R^{D}$$ for $$D=3$$. A plane has by the same reasoning $$M\sim R^{D}$$ for $$D=2$$, and is therefore a $$2D$$ object. Fractals are mathematical objects such that by the same sort of calculation they will have $$M\sim R^{D_{f}}$$, for a $$D_{f}$$ which is not necessarily an integer (this definition is due to Hausdorff). One example is the Koch curve (see (7)): in each of its iterations, we decrease the length of a segment by a factor of 3 and decrease its mass by a factor of 4. We will therefore have $$\begin{array}{lcl} M_{2} & = & \frac{1}{4}M_{1}=A\left(r_{2}\right)^{D_{f}}=A\left(\frac{1}{3}r_{1}\right)^{D_{f}},\\

M_{1} & = & A\left(r_{1}\right)^{D_{f}}.\\

& \Downarrow\\

\frac{1}{4} & = & \left(\frac{1}{3}\right)^{D_{f}}\Rightarrow D_{f}=\frac{\ln4}{\ln3}\simeq1.26\mbox{ and }1<D_{f}<2.\end{array}$$ Note that a fractal's "real" length is infinite, and its approximations will depend on the resolution. The structure exhibits self-similarity: namely, on different length scales it will look the same. This can be seen in the Koch snowflake: at any magnification, a part of the curve looks similar to the whole curve. There's a very nice animation of this in Wikipedia. The total length of the curve depends on the ruler used to measure it: the actual length at iteration $$n$$ is $$L_{0}\left(\frac{4}{3}\right)^{n}$$. Another definition for the fractal dimension is $$ D_{f}=\frac{\ln\frac{L_{0}}{\ell_{0}}}{\ln\frac{\ell_{1}}{\ell_{0}}}=\frac{\ln4}{\ln3}.$$
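The Koch-curve numbers above are easy to verify; a minimal sketch (the names are illustrative):

```python
import math

# Hausdorff dimension of the Koch curve: each iteration replaces a segment
# by 4 segments of 1/3 the length, so D_f = ln 4 / ln 3.
D_f = math.log(4) / math.log(3)

def koch_length(L0, n):
    """Total length after n iterations: L0 * (4/3)**n (diverges as n grows)."""
    return L0 * (4.0 / 3.0) ** n
```

The divergence of `koch_length` with `n` is the statement that the curve's "real" length is infinite and any measured length depends on the ruler.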

Linking fractals to polymers
Consider the ideal Gaussian chain again. It has $$R_{0}^{2}=N\ell^{2}\sim N$$. Since $$N$$ is proportional to the mass, we have an object with a fractal dimension of 2, no matter what the dimensionality of the actual space is. We can say that a polymer in $$d$$-space fills only $$D_{f}\le d$$ dimensions of the space it occupies, where $$D_{f}$$ is 2 for an ideal Gaussian polymer and $$2\le D_{f}\le d$$ in general. Flory showed that in some cases a non-ideal polymer can also have $$D_{f}<2$$, in particular when a self-avoiding walk (SAW) is accounted for. The SAW, as opposed to the Gaussian walk (GW), is the defining property of a physical rather than ideal polymer, and gives a fractal dimension of $$D_{f}\approx1.66$$. A collapsed polymer has $$D_{f}=3$$ and fills space completely. Note that two polymers with fractal dimensions $$D_{f}$$ and $$D_{f}^{*}$$ do not "feel" each other statistically if $$D_{f}+D_{f}^{*}<d$$.
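The $$D_{f}=2$$ claim for the ideal chain can be seen in a minimal Monte-Carlo sketch (illustrative, not from the notes): a freely jointed chain of $$N$$ rigid rods of length $$\ell$$ has $$\left\langle R^{2}\right\rangle =N\ell^{2}$$, i.e. mass $$\sim R^{2}$$:

```python
import math, random

def gaussian_walk_R2(N, ell=1.0, samples=2000, seed=1):
    """Mean squared end-to-end distance of a freely jointed chain in 3D."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = y = z = 0.0
        for _ in range(N):
            # step of fixed length ell in a direction uniform on the sphere
            ct = rng.uniform(-1.0, 1.0)
            st = math.sqrt(1.0 - ct * ct)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            x += ell * st * math.cos(phi)
            y += ell * st * math.sin(phi)
            z += ell * ct
        total += x * x + y * y + z * z
    return total / samples

# <R^2> should come out close to N * ell^2, i.e. D_f = 2.
```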

Polymers, Path Integrals and Green's Functions
Books: Doi & Edwards, F. Wiegel, or Feynman & Hibbs.

Local Gaussian chain model and the continuum limit
This model is also known as LGC. We start from an FJC in 3D where $$\Psi=\prod_{i}\psi\left(\mathbf{r}_{i}\right)$$ and $$\psi\left(\mathbf{r}_{i}\right)=\frac{1}{4\pi\ell^{2}}\delta\left(\left|\mathbf{r}_{i}\right|-\ell\right)$$. By the central limit theorem $$\mathbf{R}=\sum_{i}\mathbf{r}_{i}$$ will always be taken from a Gaussian distribution when the number of monomers is large (whatever the form of $$\psi$$, as long as it is symmetrical around zero such that $$\left\langle \mathbf{r}_{i}\right\rangle =0$$): $$ P_{N}\left(\mathbf{R}\right)=\left(\frac{3}{2\pi N\ell^{2}}\right)^{\frac{3}{2}}\exp\left(-\frac{3R^{2}}{2N\ell^{2}}\right).$$ In the LGC approximation we exchange the rigid rods for Gaussian springs with $$\left\langle \mathbf{r}_{i}\right\rangle =0$$ and $$\left\langle \mathbf{r}_{i}^{2}\right\rangle =\ell^{2}$$, by setting $$ \psi\left(\mathbf{r}_{i}\right)=\left(\frac{3}{2\pi\ell^{2}}\right)^{\frac{3}{2}}\exp\left(-\frac{3r_{i}^{2}}{2\ell^{2}}\right).$$ We can then obtain for the full probability distribution $$\begin{matrix} \Psi\left(\left\{ \mathbf{r}_{i}\right\} \right) & = & \prod_{i}\psi\left(\mathbf{r}_{i}\right)

& = & \left(\frac{3}{2\pi\ell^{2}}\right)^{\frac{3N}{2}}\exp\left(-\sum_{i=1}^{N}\frac{3\left(\mathbf{R}_{i}-\mathbf{R}_{i-1}\right)^{2}}{2\ell^{2}}\right),\end{matrix}$$ where $$\mathbf{r}_{i}=\mathbf{R}_{i}-\mathbf{R}_{i-1}$$. $$\Psi$$ describes $$N$$ harmonic springs with $$k=\frac{3k_{B}T}{\ell^{2}}$$ connected in series: $$\begin{matrix} U_{0}\left(\left\{ \mathbf{R}_{i}\right\} \right) & = & \frac{3}{2\ell^{2}}k_{B}T\sum_{i=1}^{N}\left(\mathbf{R}_{i}-\mathbf{R}_{i-1}\right)^{2},

\Psi & \sim & e^{-\frac{U_{0}}{k_{B}T}}.\end{matrix}$$ An exact property of the Gaussian distributions we have been using is that a sub-chain of $$\left|n-m\right|$$ monomers (say, the sub-chain starting at index $$m$$ and ending at index $$n$$) will also have a Gaussian distribution of its end-to-end length: $$\begin{matrix} P\left(\mathbf{R}_{m}-\mathbf{R}_{n};\left|n-m\right|\right) & = & \left(\frac{3}{2\pi\left|n-m\right|\ell^{2}}\right)^{\frac{3}{2}}\exp\left(-\frac{3\left(\mathbf{R}_{m}-\mathbf{R}_{n}\right)^{2}}{2\left|n-m\right|\ell^{2}}\right),

\left\langle \left(\mathbf{R}_{m}-\mathbf{R}_{n}\right)^{2}\right\rangle & = & \ell^{2}\left|n-m\right|.\end{matrix}$$ At the continuum limit, we will get Wiener distributions: the correct way to calculate the limit is to take $$N\rightarrow\infty$$ and $$\ell\rightarrow0$$ with $$N\ell=L$$ remaining constant. The length along the chain up to site $$n$$ is then described by $$n\ell\rightarrow s$$, $$0\le s\le L$$. At this limit we can also substitute derivatives $$\frac{\partial\mathbf{R}}{\partial s}=\frac{1}{\ell}\frac{\partial\mathbf{R}}{\partial n}$$ for the finite differences $$\frac{\mathbf{R}_{i}-\mathbf{R}_{i-1}}{\ell}$$, such that $$\begin{matrix} \sum_{i=1}^{N}\frac{1}{\ell^{2}}\left(\mathbf{R}_{i}-\mathbf{R}_{i-1}\right)^{2} & \rightarrow & \int_{0}^{L}\frac{\mathrm{d}s}{\ell}\left(\frac{\partial\mathbf{R}}{\partial s}\right)^{2}=\int_{0}^{N}\mathrm{d}n\frac{1}{\ell^{2}}\left(\frac{\partial\mathbf{R}\left(n\right)}{\partial n}\right)^{2}

\Psi\left(\left\{ \mathbf{R}_{i}\right\} \right) & \rightarrow & \mbox{const.}\times\exp\left\{ -\frac{3}{2\ell^{2}}\int_{0}^{N}\left(\frac{\partial\mathbf{R}\left(n\right)}{\partial n}\right)^{2}\mathrm{d}n\right\} .\end{matrix}$$
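The sub-chain property $$\left\langle \left(\mathbf{R}_{m}-\mathbf{R}_{n}\right)^{2}\right\rangle =\ell^{2}\left|n-m\right|$$ is easy to verify by sampling; a minimal sketch of the local Gaussian chain (the helper names are illustrative):

```python
import random

def gaussian_chain(N, ell, rng):
    """Positions R_0 .. R_N of a local Gaussian chain (LGC): each bond vector
    is Gaussian with <r_i> = 0 and <r_i^2> = ell^2, so each Cartesian
    component has variance ell^2 / 3."""
    sigma = ell / 3 ** 0.5
    R = [(0.0, 0.0, 0.0)]
    for _ in range(N):
        x, y, z = R[-1]
        R.append((x + rng.gauss(0.0, sigma),
                  y + rng.gauss(0.0, sigma),
                  z + rng.gauss(0.0, sigma)))
    return R

def mean_sq_dist(m, n, N, ell=1.0, chains=3000, seed=0):
    """Estimate <(R_m - R_n)^2>; for a Gaussian chain it is ell^2 |n - m|."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(chains):
        R = gaussian_chain(N, ell, rng)
        total += sum((a - b) ** 2 for a, b in zip(R[m], R[n]))
    return total / chains
```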

If we add an external spatial potential $$U\left(\mathbf{R}_{i}\right)$$ (which is single-body), its contribution amounts to a factor of $$ \exp\left\{ -\frac{1}{k_{B}T}\sum_{i=1}^{N}U\left(\mathbf{R}_{i}\right)\right\} \rightarrow\exp\left\{ -\frac{1}{k_{B}T}\int_{0}^{N}U\left(\mathbf{R}\left(n\right)\right)\mathrm{d}n\right\} $$ multiplying the Boltzmann factor. 04/23/2009

Functional path integrals and the continuum distribution function
Books: F. Wiegel, Doi & Edwards.

Consider what happens when we hold the ends of a chain defined by $$\left\{ \mathbf{R}_{i}\right\} $$ in place, such that $$\mathbf{R}_{0}=\mathbf{R}^{\prime}$$ and $$\mathbf{R}_{N}=\mathbf{R}$$. We can calculate the probability of this configuration from $$ P_{N}\left(\mathbf{R}_{0},\mathbf{R}_{N}\right)=\prod_{i=1}^{N-1}\int\mathrm{d}\mathbf{R}_{i}\Psi\left(\left\{ \mathbf{R}_{i}\right\} \right).$$ At the continuum limit the definition of the chain configurations translates into a function $$\mathbf{R}\left(n\right)$$ and the product of integrals can be taken as a path integral according to $$\prod_{i=1}^{N-1}\int\mathrm{d}\mathbf{R}_{i}\rightarrow\int\mathcal{D}\mathbf{R}\left(n\right)$$. The probability for each configuration with our constraint is a functional of $$\mathbf{R}\left(n\right)$$. The partition function is: $$ Z_{N}\left(\mathbf{R},\mathbf{R}^{\prime}\right)=\int_{\mathbf{R}_{0}=\mathbf{R}^{\prime}}^{\mathbf{R}_{N}=\mathbf{R}}\mathcal{D}\mathbf{R}\left(n\right)\exp\left\{ -\frac{3}{2\ell^{2}}\int_{0}^{N}\left(\frac{\partial\mathbf{R}\left(n\right)}{\partial n}\right)^{2}\mathrm{d}n-\frac{1}{k_{B}T}\int_{0}^{N}U\left(\mathbf{R}\left(n\right)\right)\mathrm{d}n\right\} ,$$ and we can normalize it to obtain a probability distribution function, given in terms of this path integral: $$ P_{N}\left(\mathbf{R},\mathbf{R}^{\prime}\right)=\frac{Z_{N}\left(\mathbf{R},\mathbf{R}^{\prime}\right)}{\int Z_{N}\left(\mathbf{R},\mathbf{R}^{\prime}\right)\mathrm{d}\mathbf{R}\mathrm{d}\mathbf{R}^{\prime}}.$$ We now introduce the Green's function $$G\left(\mathbf{R},\mathbf{R}^{\prime};N\right),$$which as we will soon see describes the evolution from $$\mathbf{R}^{\prime}$$ to $$\mathbf{R}$$ in $$N$$ steps. 
We define it as: $$ G\left(\mathbf{R}_{N}=\mathbf{R},\mathbf{R}_{0}=\mathbf{R}^{\prime};N\right)\equiv\frac{\int_{\mathbf{R}_{0}=\mathbf{R}^{\prime}}^{\mathbf{R}_{N}=\mathbf{R}}\mathcal{D}\mathbf{R}\left(n\right)\exp\left\{ -\frac{3}{2\ell^{2}}\int_{0}^{N}\left(\frac{\partial\mathbf{R}\left(n\right)}{\partial n}\right)^{2}\mathrm{d}n-\frac{1}{k_{B}T}\int_{0}^{N}U\left(\mathbf{R}\left(n\right)\right)\mathrm{d}n\right\} }{\int\mathrm{d}\mathbf{R}\mathrm{d}\mathbf{R^{\prime}}\int_{\mathbf{R}_{0}=\mathbf{R}^{\prime}}^{\mathbf{R}_{N}=\mathbf{R}}\mathcal{D}\mathbf{R}\left(n\right)\exp\left\{ -\frac{3}{2\ell^{2}}\int_{0}^{N}\left(\frac{\partial\mathbf{R}\left(n\right)}{\partial n}\right)^{2}\mathrm{d}n\right\} }.$$ Note that while the numerator is proportional to the probability $$P_{N}$$, the denominator does not include the external potential.

$$G$$ has several important properties:

 * 1) It is equal to the exact probability $$P_{N}$$ for Gaussian chains in the absence of an external potential.
 * 2) It obeys a composition rule: if we consider the chain to be divided into one sub-chain between step $$0$$ and $$i$$ and a second sub-chain from step $$i$$ to step $$N$$, then $$ G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\int\mathrm{d}\mathbf{R}^{\prime\prime}G\left(\mathbf{R},\mathbf{R}^{\prime\prime};N-i\right)G\left(\mathbf{R}^{\prime\prime},\mathbf{R}^{\prime};i\right).$$ We can use this property to compute expectation values of observables. If we have some function of a specific monomer $$A\left(\mathbf{R}_{i}\right)$$, for instance: $$ \left\langle A\left(\mathbf{R}_{i}\right)\right\rangle =\frac{\int\mathrm{d}\mathbf{R}_{N}\mathrm{d}\mathbf{R}_{0}\mathrm{d}\mathbf{R}_{i}G\left(\mathbf{R}_{N},\mathbf{R}_{i};N-i\right)A\left(\mathbf{R}_{i}\right)G\left(\mathbf{R}_{i},\mathbf{R}_{0};i\right)}{\int\mathrm{d}\mathbf{R}_{N}\mathrm{d}\mathbf{R}_{0}G\left(\mathbf{R}_{N},\mathbf{R}_{0};N\right)}.$$
 * 3) The Green's function is the solution of the differential equation (see proof in Doi & Edwards and in homework): $$ \frac{\partial G}{\partial N}-\frac{\ell^{2}}{6}\frac{\partial^{2}G}{\partial\mathbf{R}^{2}}+\frac{U\left(\mathbf{R}\right)}{k_{B}T}G=\delta\left(\mathbf{R}-\mathbf{R}^{\prime}\right)\delta\left(N\right).$$
 * 4) The Green's function is defined as 0 for $$N<0$$ and is equal to $$\delta\left(\mathbf{R}-\mathbf{R}^{\prime}\right)$$ when $$N\rightarrow0$$ in order to satisfy the boundary conditions.

Relationship to quantum mechanics
This equation for $$N>0$$, $$\mathbf{R}\neq\mathbf{R}^{\prime}$$ is very similar in form to the Schrödinger equation. To see this, we can rewrite it as: $$ \left[\frac{\partial}{\partial N}-\underbrace{\left(\frac{\ell^{2}}{6}\frac{\partial^{2}}{\partial\mathbf{R}^{2}}-\frac{U\left(\mathbf{R}\right)}{k_{B}T}\right)}_{\equiv\mathcal{L}}\right]G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\left[\frac{\partial}{\partial N}-\mathcal{L}\right]G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=0.$$ If we make the replacement $$N\rightarrow\frac{it}{\hbar}$$, $$\mathcal{L}\rightarrow-\mathcal{H}$$ and $$\frac{\ell^{2}}{6}\rightarrow\frac{\hbar^{2}}{2m}$$ this is identical to $$i\hbar\frac{\partial}{\partial t}G=\left[-\frac{\hbar^{2}\nabla^{2}}{2m}+V\left(\mathbf{R}\right)\right]G=\mathcal{H}G$$. Like the quantum Hamiltonian, the Hermitian operator $$\mathcal{L}$$ has eigenfunctions such that $$\mathcal{L}\varphi_{k}=-E_{k}\varphi_{k}$$, which according to Sturm-Liouville theory span the solution space ($$\sum_{k}\varphi_{k}^{*}\left(\mathbf{r}^{\prime}\right)\varphi_{k}\left(\mathbf{r}\right)=\delta\left(\mathbf{r}-\mathbf{r}^{\prime}\right)$$) and can be orthonormalized ($$\int\varphi_{k}^{*}\varphi_{m}\mathrm{d}\mathbf{r}=\delta_{km}$$).

The solution of the non-homogeneous problem is therefore $$ G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\sum_{k}\varphi_{k}^{*}\left(\mathbf{R}\right)\varphi_{k}\left(\mathbf{R}^{\prime}\right)e^{-NE_{k}},$$ where the $$\varphi_{k}$$ are solutions of the homogeneous equation $$\left(\mathcal{L}+E_{n}\right)\varphi_{n}=0$$.

Example A polymer chain in a box of dimensions $$L_{x}\times L_{y}\times L_{z}$$: The potential $$U$$ is $$0$$ within the box and $$\infty$$ on the edges. The boundary conditions are $$G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=0$$ if $$\mathbf{R}$$ or $$\mathbf{R}^{\prime}$$ are on the boundary. The function is also separable in Cartesian coordinates: $$ G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\prod_{i=1}^{3}g_{i}\left(R_{i},R_{i}^{\prime};N\right).$$ Let's solve for $$g_{1}\equiv g_{x}$$ (the other $$g$$ functions are similar): $$ \left(\frac{\partial}{\partial N}-\frac{\ell^{2}}{6}\frac{\partial^{2}}{\partial R_{1}^{2}}\right)u\left(R_{1},N\right)=0.$$ If we separate variables again with the ansatz $$u\left(R_{1},N\right)=\varphi\left(R_{1}\right)e^{-EN}$$ we obtain $$\begin{array}{lcl} -E\varphi-\frac{\ell^{2}}{6}\varphi^{\prime\prime} & = & 0,\\

& \Downarrow\\

\varphi\left(R_{1}\right) & = & A\sin k_{1}R_{1}+B\cos k_{1}R_{1}.\end{array}$$ With the boundary condition $$ \begin{cases} \varphi\left(0\right)=0 & \Rightarrow B=0,\\ \varphi\left(L_{x}\right)=0 & \Rightarrow k_{1}=\frac{n\pi}{L_{x}}. \end{cases} $$ This gives an expression for the energy and eigenfunctions: $$\begin{matrix} E_{n} & = & \left(\frac{\ell^{2}\pi^{2}}{6L_{x}^{2}}\right)n^{2}=E_{1}n^{2},

\varphi_{n}\left(R_{1}\right) & = & \sqrt{\frac{2}{L_{x}}}\sin\left(\frac{n\pi}{L_{x}}R_{1}\right),

u_{n}\left(R_{1}\right) & = & \sqrt{\frac{2}{L_{x}}}\sin\left(\frac{n\pi}{L_{x}}R_{1}\right)e^{-E_{n}N}.\end{matrix}$$ The Green's function can finally be written as $$ g_{1}\left(R_{1},R_{1}^{\prime};N\right)=\sum_{n=1}^{\infty}\frac{2}{L_{x}}\sin\left(\frac{n\pi}{L_{x}}R_{1}\right)\sin\left(\frac{n\pi}{L_{x}}R_{1}^{\prime}\right)e^{-NE_{n}}.$$ Since with the Cartesian symmetry of the box the partition function $$Z=\prod_{i=1}^{3}Z_{i}$$ is also separable and using $$ \int_{0}^{L_{x}}\sin\frac{n\pi}{L_{x}}x\mathrm{d}x=\begin{cases} \frac{2L_{x}}{n\pi} & n=1,\,3,\,5,\,...\,\mbox{(odd),} \\

0 & n=2,\,4,\,6,\,...\,\mbox{(even)}\end{cases}$$ we can calculate $$\begin{matrix} Z_{x} & = & \int\mathrm{d}x\int\mathrm{d}x^{\prime}g_{1}\left(x,x^{\prime};N\right)

& = & \frac{2}{L_{x}}\sum_{n=1,\,3,\,5...}\left(\frac{2L_{x}}{n\pi}\right)^{2}\exp\left(-N\frac{\ell^{2}\pi^{2}n^{2}}{6L_{x}^{2}}\right)

& = & \frac{8L_{x}}{\pi^{2}}\sum_{n=1,\,3,\,5...}\frac{1}{n^{2}}e^{-n^{2}E_{1}N}.\end{matrix}$$ We can now go on to calculate $$F=-k_{B}T\ln Z_{x}Z_{y}Z_{z}$$, and we can for instance calculate the pressure on the box edges in the $$x$$ direction: $$ P_{x}=-\frac{1}{L_{y}L_{z}}\frac{\partial F}{\partial L_{x}}.$$ Two limiting cases can be done analytically: first, if the box is much larger than the polymer, $$L_{i}\gg\sqrt{N}\ell$$ and $$\begin{array}{lcr} \frac{1}{n^{2}}\exp\left\{ -\frac{\pi^{2}\ell^{2}N}{L_{x}^{2}}n^{2}\right\} & \approx & \frac{1}{n^{2}},\\

\sum_{n=1,\,3,\,5...}\frac{1}{n^{2}} & = & \frac{\pi^{2}}{8},\\

& \Downarrow &\\

P_{x} & = & -\frac{1}{L_{y}L_{z}}\frac{\partial}{\partial L_{x}}\left(-k_{B}T\ln Z_{x}Z_{y}Z_{z}\right)\\

& = & \frac{k_{B}T}{L_{y}L_{z}}\frac{1}{Z_{x}}\frac{\partial Z_{x}}{\partial L_{x}}\\

& = & \frac{k_{B}T}{L_{x}L_{y}L_{z}}=\frac{k_{B}T}{V},\end{array}$$ where we used $$Z_{x}\approx L_{x}$$ at this limit. This is equivalent to a dilute gas of polymers (done here for a single chain). At the opposite limit, $$L_{i}\ll\sqrt{N}\ell$$, the polymer should be "squeezed". The Gaussian approximation will be no good if we squeeze too hard, but at least for some intermediate regime we can neglect all but the first term in the series: $$\begin{array}{lcr} Z_{x} & = & \frac{8L_{x}}{\pi^{2}}\sum_{n=1,\,3,\,5...}\frac{1}{n^{2}}e^{-\frac{\pi^{2}\ell^{2}n^{2}}{6L_{x}^{2}}N}\\

& \approx & \frac{8L_{x}}{\pi^{2}}e^{-\frac{\pi^{2}\ell^{2}}{6L_{x}^{2}}N},\\

P_{x} & = & \frac{k_{B}T}{L_{y}L_{z}}\frac{\partial\ln Z_{x}}{\partial L_{x}}\\

& \approx & \frac{k_{B}T}{L_{y}L_{z}}\left\{ \frac{1}{L_{x}}+\frac{\pi^{2}\ell^{2}N}{3L_{x}^{3}}\right\}\\

& = & \frac{k_{B}T}{V}\left\{ 1+\frac{\pi^{2}\ell^{2}N}{3L_{x}^{2}}\right\} .\end{array}$$ There is a large extra pressure caused by the "squeezing" of the chain and the corresponding loss of its entropy.
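Both limits can be checked numerically by summing the series for $$Z_{x}$$ and differentiating $$P_{x}=\frac{k_{B}T}{L_{y}L_{z}}\frac{\partial\ln Z_{x}}{\partial L_{x}}$$; a minimal sketch (the helper names `Zx` and `Px_over_kT` are illustrative):

```python
import math

def Zx(Lx, N, ell):
    """Z_x = (8 Lx / pi^2) * sum over odd n of n^-2 exp(-n^2 E_1 N),
    with E_1 = pi^2 ell^2 / (6 Lx^2)."""
    E1 = math.pi ** 2 * ell ** 2 / (6.0 * Lx ** 2)
    s = sum(math.exp(-n * n * E1 * N) / n ** 2 for n in range(1, 400, 2))
    return 8.0 * Lx / math.pi ** 2 * s

def Px_over_kT(Lx, Ly, Lz, N, ell, h=1e-6):
    """P_x / (k_B T) = (1/(Ly Lz)) d(ln Z_x)/d(Lx), by central difference."""
    dlnZ = (math.log(Zx(Lx + h, N, ell)) - math.log(Zx(Lx - h, N, ell))) / (2 * h)
    return dlnZ / (Ly * Lz)
```

For a large box the result approaches the ideal-gas value $$k_{B}T/V$$; for a strongly squeezed chain it approaches $$\frac{k_{B}T}{V}\left(1+\frac{\pi^{2}\ell^{2}N}{3L_{x}^{2}}\right)$$.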

04/30/2009

The same formalism can be used to treat polymers near a wall or in a well near a wall, for instance (see the homework for details). In the well case, like in the similar quantum problem, we will have bound states for $$T<T_{c}$$ (where the critical temperature is defined by a critical value of $$\beta_{c}V_{0}=\frac{V_{0}}{k_{B}T_{c}}$$, and describes the condition for the potential well to be "deep" enough to contain a bound state).

Dominant ground state
Note that since $$ G=\sum_{n=0}^{\infty}\varphi_{n}\left(x\right)\varphi_{n}^{*}\left(x^{\prime}\right)e^{-NE_{n}},$$ where $$N$$ is positive and the $$E_{n}$$ are real and ordered (assuming no degeneracy, $$E_{0}<E_{1}<E_{2}<...$$), at large $$N$$ we can neglect all but the leading terms (smallest energies) and $$ G\approx\varphi_{0}\left(x\right)\varphi_{0}^{*}\left(x^{\prime}\right)e^{-NE_{0}}+\varphi_{1}\left(x\right)\varphi_{1}^{*}\left(x^{\prime}\right)e^{-NE_{1}}+....$$ This is possible because the exponent is decreasing rather than oscillating, as it is in the quantum mechanics case. Taking only the first term in this series is called the dominant ground state approximation.

Virial expansion
So far, in treating Gaussian chains, we have neglected any long-ranged interactions. However, polymers in solution cannot self-intersect, and this introduces interactions $$V\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)$$ into the picture which are local in real-space, but are long ranged in terms of the contour spacing – that is, they are not limited to $$i\approx j$$. The importance of this effect depends on dimensionality: it is easy to imagine that intersections in 2D are more effective in restricting a polymer's shape than intersections in 3D.

The interaction potential $$V\left(\mathbf{r}\right)$$ can in general have both attractive and repulsive parts, and depends on the detailed properties of the solvent. If we consider it to be due to a long-ranged attractive Van der Waals interaction and a short-ranged repulsive hard-core interaction, it might be modeled by a $$6-12$$ Lennard-Jones potential. To treat interactions perturbatively within statistical mechanics, we can use a virial expansion (this is a statistical-mechanical expansion in powers of the density, useful for systematic perturbative corrections to non-interacting calculations when one wants to include many-body interactions). The second virial coefficient is $$ v_{2}=\int\mathrm{d}^{3}r\left[1-e^{-\frac{V\left(r\right)}{k_{B}T}}\right].$$ To make the calculation easy, consider a potential even simpler than the 6-12 Lennard-Jones: $$ V\left(\mathbf{r}\right)=\begin{cases} \infty & r<\sigma,\\

-\varepsilon & \sigma<r<2\sigma,\\ 0 & r>2\sigma.\end{cases}$$ This gives $$\begin{matrix} v_{2} & = & \int_{r<\sigma}\mathrm{d}^{3}r\left[1-e^{-\frac{V\left(r\right)}{k_{B}T}}\right]+\int_{\sigma<r<2\sigma}\mathrm{d}^{3}r\left[1-e^{\frac{\varepsilon}{k_{B}T}}\right]\\ & = & \frac{4\pi}{3}\sigma^{3}+\frac{4\pi}{3}\left(7\sigma^{3}\right)\left(1-e^{\frac{\varepsilon}{k_{B}T}}\right)=\frac{4\pi}{3}\sigma^{3}\left(8-7e^{\frac{\varepsilon}{k_{B}T}}\right).\end{matrix}$$ The second virial coefficient is therefore positive (signifying repulsion) for $$k_{B}T>\frac{\varepsilon}{\ln\frac{8}{7}}$$ or negative (signifying attraction) for $$k_{B}T<\frac{\varepsilon}{\ln\frac{8}{7}}$$. While the details of this calculation depend on our choice and parametrization of the potential, in general we will have some special temperature known as the $$\vartheta$$ temperature (in our case $$k_{B}\vartheta=\frac{\varepsilon}{\ln\frac{8}{7}}$$) where $$ v_{2}\left(\vartheta\right)=0.$$ This allows us to define a good solvent: such a solvent must have $$T>\vartheta$$ at our working temperature. This assures us (within the second virial approximation, at least) that the interactions are repulsive and (as can be shown separately) the chain is swollen. A bad solvent for which $$T<\vartheta$$ will have attractive interactions, resulting in collapse. A solvent for which $$T=\vartheta$$ is called a $$\vartheta$$ solvent, and returns us to a Gaussian chain unless the next virial coefficient is taken.
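A short numerical check of the $$\vartheta$$ temperature for this square-well model (a sketch; the closed form below follows from integrating $$1-e^{-V/k_{B}T}$$ over the hard core and the well, and the names are illustrative):

```python
import math

def v2(kT, sigma=1.0, eps=1.0):
    """Second virial coefficient of the square-well model:
    V = +inf (r < sigma), -eps (sigma < r < 2 sigma), 0 beyond.
    Hard core contributes its volume; the well shell has 7x that volume."""
    vol = 4.0 * math.pi / 3.0 * sigma ** 3
    return vol * (1.0 + 7.0 * (1.0 - math.exp(eps / kT)))

# theta temperature for this model: k_B * theta = eps / ln(8/7)
kTheta = 1.0 / math.log(8.0 / 7.0)   # in units of eps
```

`v2` changes sign exactly at `kTheta`, separating good-solvent (repulsive) from bad-solvent (attractive) behavior.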

Lattice model
A common numerical treatment for this kind of system is to draw the polymer on a grid and make Monte-Carlo runs, where steps must be self-avoiding and their probability is taken from a thermal distribution while maintaining detailed balance. This gives in 3D $$R_{N}\simeq\ell N^{\nu}$$ where $$\nu\approx0.588$$.
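A minimal exact-enumeration sketch (simpler than the Monte-Carlo scheme described above, and with illustrative names) already shows the swelling effect of self-avoidance on the square lattice:

```python
import itertools

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def mean_R2(N, self_avoiding):
    """Mean squared end-to-end distance over all N-step square-lattice
    walks, optionally keeping only the self-avoiding ones."""
    total, count = 0.0, 0
    for moves in itertools.product(STEPS, repeat=N):
        x = y = 0
        visited = {(0, 0)}
        ok = True
        for dx, dy in moves:
            x, y = x + dx, y + dy
            if self_avoiding:
                if (x, y) in visited:
                    ok = False
                    break
                visited.add((x, y))
        if ok:
            total += x * x + y * y
            count += 1
    return total / count
```

Unrestricted walks give exactly $$\left\langle R^{2}\right\rangle =N$$ (in lattice units), while the self-avoiding subset is measurably swollen even at small $$N$$.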

Renormalization group
A connection between SAWs and critical phenomena was made by de Gennes in the 1970s. Some of the similarities are summarized in the table below. Using renormalization group methods, de Gennes showed by analogy to a certain spin model that $$\begin{matrix} \nu\left(\varepsilon\right) & = & \frac{1}{2}+\frac{1}{16}\varepsilon+\frac{15}{512}\varepsilon^{2}+O\left(\varepsilon^{3}\right),

\varepsilon & \equiv & 4-d.\end{matrix}$$ This gives in 3D ($$\varepsilon=1$$) a result very close to the SAW value: $$\nu_{RG}=\frac{1}{2}+\frac{1}{16}+\frac{15}{512}+O\left(\varepsilon^{3}\right)\approx0.592$$.
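Evaluating the $$\varepsilon$$-expansion truncated at second order is a one-liner (a sketch; `nu_RG` is an illustrative name):

```python
def nu_RG(d):
    """de Gennes' epsilon-expansion for the SAW exponent, eps = 4 - d,
    truncated at second order."""
    eps = 4.0 - d
    return 0.5 + eps / 16.0 + 15.0 * eps ** 2 / 512.0

# In d = 3 this gives about 0.592, close to the numerical SAW value 0.588;
# in d = 4 it reduces to the ideal-chain value 1/2.
```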

Flory model
This is a very crude model which gives surprisingly good results. We write the free energy as $$F_{tot}\left(R\right)=F_{int}+F_{ent}$$. For the entropic part we take the expression for an ideal chain: $$S_{N}\left(R\right)=-\frac{d}{2}k_{B}\frac{R^{2}}{N\ell^{2}}+S_{N}\left(0\right)$$, $$F_{ent}=-TS_{N}$$. For the interaction, we use the second virial coefficient: $$\begin{matrix} \frac{F_{int}\left(R\right)}{k_{B}T} & = & \frac{1}{2}v_{2}\int\left[c\left(r\right)\right]^{2}\mathrm{d}^{3}r.\end{matrix}$$ Here $$c\left(r\right)$$ is a local density such that its average value is $$\left\langle c\right\rangle =\frac{N}{V}\sim\frac{N}{R^{d}}$$. If we neglect local fluctuations in $$c$$, then $$\begin{matrix} \int\left[c\left(r\right)\right]^{2}\mathrm{d}^{3}r & = & V\left\langle c^{2}\left(r\right)\right\rangle \approx V\left\langle c\left(r\right)\right\rangle ^{2}=R^{d}\left(\frac{N}{R^{d}}\right)^{2},

\frac{F_{int}}{k_{B}T} & \approx & \frac{1}{2}v_{2}N^{2}R^{-d}.\end{matrix}$$ The total free energy is then $$ \frac{F_{tot}}{k_{B}T}\approx\frac{d}{2}\frac{R^{2}}{N\ell^{2}}+\frac{1}{2}v_{2}N^{2}R^{-d}.$$ The free parameter here is $$R$$, but we do not know how it relates to $$N$$. For constant $$N$$ the minimum is at $$ R_{F}=\left(\frac{v_{2}}{2}\ell^{2}\right)^{\frac{1}{d+2}}N^{\frac{3}{d+2}},$$ which gives the Flory exponent $$ \nu_{F}=\frac{3}{d+2}.$$ This exponent is exact for 1, 2 and 4 dimensions, and gives a very good approximation (0.6) for 3 dimensions, but it misses completely for more than 4 dimensions. For a numerical example consider a polymer of $$\sim10^{5}$$ monomers, each of which is about $$5\AA$$ in length. From the expressions above, $$ R=\begin{cases} 1600\AA & \mbox{GW,}\\

5000\AA & \mbox{Flory,}\\

4400\AA & \mbox{SAW.}\end{cases}$$ This difference is large enough to be experimentally detectable by the scattering techniques to be explained next.
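The three estimates in the numerical example follow directly from $$R\simeq\ell N^{\nu}$$ with the respective exponents; a quick check:

```python
ell = 5.0   # monomer size in angstrom
N = 1.0e5   # number of monomers

R_GW    = ell * N ** 0.5         # ideal (Gaussian) chain, nu = 1/2
R_Flory = ell * N ** (3.0 / 5.0)  # Flory, nu_F = 3/(d+2) = 3/5 in d = 3
R_SAW   = ell * N ** 0.588        # numerical SAW exponent
# roughly 1600, 5000, and 4400 angstrom respectively
```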

The reason the Flory method provides such good results turns out to be a lucky cancellation between two mistakes, both of which err by orders of magnitude: the entropy is overestimated and the correlations are underestimated. This is discussed in detail in all the books.

Field Theory of SAW
Books: Doi & Edwards, Wiegel

The seminal article of S.F. Edwards in 1965 was the first application of field-theoretic methods to the physics of polymers. To insert interactions into the Wiener distribution, we take the sum over two-body interactions $$\frac{1}{2}\sum_{ij}V\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)$$ to its continuum limit $$\frac{1}{2}\int_{0}^{N}\mathrm{d}n\int_{0}^{N}\mathrm{d}m\, V\left(\mathbf{R}\left(m\right)-\mathbf{R}\left(n\right)\right)$$.

This formalism is rather complicated and not much can be done by hand. One possible simplification is to consider an excluded-volume (or self-exclusion) interaction of Dirac delta function form, which prevents two monomers from occupying the same point in space: $$ V\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)=k_{B}Tv_{2}\delta\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right).$$ The advantage of this is that a simple form is obtained in which only the second virial coefficient $$v_{2}$$ is taken into account. The expression for the distribution is then $$ \Psi\left(\left\{ \mathbf{R}_{n}\right\} \right)\sim\exp\left\{ -\frac{3}{2\ell^{2}}\int_{0}^{N}\mathrm{d}n\left(\frac{\partial\mathbf{R}\left(n\right)}{\partial n}\right)^{2}-\frac{v_{2}}{2}\int_{0}^{N}\mathrm{d}n\int_{0}^{N}\mathrm{d}m\,\delta\left(\mathbf{R}\left(m\right)-\mathbf{R}\left(n\right)\right)\right\} .$$ With expressions of this sort, one can apply standard field-theory/many-body methods to evaluate the Green's function and calculate observables. This is more advanced and we will not be going into it. 05/07/2009

The form factor
Materials can be probed by scattering experiments, and for dilute polymer solutions this is one way to learn about the polymers within them. Laser scattering requires relatively little equipment and can be done in any lab, while x-ray scattering (SAXS) requires a synchrotron and neutron scattering (SANS) requires a nuclear reactor. We will discuss structural properties on the scale of chains rather than individual monomers, which means relatively small wavenumbers. It will also soon be clear that small angles are of interest.

If we assume that the individual monomers act as point scatterers (see (8)) and consider a process which scatters the incoming wave at $$\mathbf{k}_{i}$$ to $$\mathbf{k}_{f}$$, we can define a scattering angle $$\vartheta$$ and a scattering wave vector $$\mathbf{k}=\mathbf{k}_{f}-\mathbf{k}_{i}$$ (which becomes smaller in magnitude as the angle $$\vartheta$$ becomes smaller). We then measure scattered waves at some outgoing angle for some incoming angle as illustrated in (9); in practice many chains are involved in the scattering, so we should take an ensemble average over the chain configurations (an incoherent average, since the chains are far apart compared with the typical coherence length scale). All this is discussed in more detail below.

Within a chain, scattering is mostly coherent, such that the scattered wavefunction is $$\Psi=\sum_{i=1}^{N}a_{i}e^{i\mathbf{k}\cdot\mathbf{R}_{i}}$$. The intensity or power should be proportional to $$I=\left|\Psi\right|^{2}=\sum_{i,j=1}^{N}a_{i}a_{j}^{*}e^{i\mathbf{k}\cdot\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)}$$. If we specialize to homogeneous chains where $$a_{i}=a$$, then $$ I=\left|a\right|^{2}\sum_{i,j=1}^{N}e^{i\mathbf{k}\cdot\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)}.$$ This expression is suitable for a single static chain in a specific configuration $$\left\{ \mathbf{R}_{i}\right\} $$. For an ensemble of chains in solution, we average over all chain configurations incoherently, defining the structure factor $$S\left(k\right)$$: $$\begin{matrix} \left\langle I\right\rangle  & = & \left\langle \Psi^{2}\right\rangle ,

S\left(k\right) & \equiv & \frac{\left\langle \left|\Psi\left(k\right)\right|^{2}\right\rangle }{\left\langle \left|\Psi\left(0\right)\right|^{2}\right\rangle }.\end{matrix}$$ The normalization is with respect to the unscattered wave at $$k=0$$, $$\left|\Psi\left(0\right)\right|^{2}=a^{2}N^{2}$$. Note that in an isotropic system like the system of chain molecules in a solvent, the structure factor must depend only on the magnitude of $$k$$.

Inserting the expression for $$\Psi^{2}$$ into the above equation gives $$ S\left(k\right)=\frac{1}{N^{2}}\left\langle \sum_{i,j=1}^{N}e^{i\mathbf{k}\cdot\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)}\right\rangle .$$ We now switch to spherical coordinates with $$\mathbf{z}$$ parallel to $$\mathbf{k}$$ with the added notation $$\mathbf{R}_{ij}=\mathbf{R}_{i}-\mathbf{R}_{j}$$. Since in these coordinates $$\mathbf{k}\cdot\mathbf{R}_{ij}=kR_{ij}\cos\vartheta$$, we can write $$\begin{matrix} \left\langle e^{i\mathbf{k}\cdot\mathbf{R}_{ij}}\right\rangle & = & \frac{1}{4\pi}\int_{0}^{2\pi}\mathrm{d}\varphi\int_{0}^{\pi}\mathrm{d}\vartheta\sin\vartheta e^{ikR_{ij}\cos\vartheta}

& = & \frac{1}{2}\int_{-1}^{1}\mathrm{d}xe^{ikR_{ij}x}

& = & \frac{\sin\left(kR_{ij}\right)}{kR_{ij}},\end{matrix}$$

$$ S\left(k\right)=\frac{1}{N^{2}}\sum_{ij}\left\langle \frac{\sin\left(kR_{ij}\right)}{kR_{ij}}\right\rangle _{\mathrm{configurations}}.$$

The gyration radius and small angle scattering
For small $$k$$ (which at least in the elastic case implies small $$\vartheta$$), we can expand the above expression for $$S\left(k\right)$$ in powers of $$kR_{ij}$$ to obtain $$\begin{matrix} S\left(k\right) & \simeq & \frac{1}{N^{2}}\sum_{ij}\left\langle 1-\frac{\left(kR_{ij}\right)^{2}}{3!}\right\rangle

& = & \frac{1}{N^{2}}N^{2}-\frac{1}{6}\frac{1}{N^{2}}\left|\mathbf{k}\right|^{2}\sum_{ij}\left\langle \mathbf{R}_{ij}^{2}\right\rangle

& = & 1-\frac{1}{3}k^{2}R_{g}^{2}.\end{matrix}$$ The last equality is due to the fact that $$R_{g}^{2}=\frac{1}{2N^{2}}\sum_{ij}\left\langle \mathbf{R}_{ij}^{2}\right\rangle $$. If the scattering is elastic, $$\left|\mathbf{k}_{i}\right|=\left|\mathbf{k}_{f}\right|=\frac{2\pi}{\lambda}$$ and $$ k=\left|\mathbf{k}_{i}-\mathbf{k}_{f}\right|=\sqrt{\mathbf{k}_{i}^{2}+\mathbf{k}_{f}^{2}-2\mathbf{k}_{i}\cdot\mathbf{k}_{f}}=\left|\mathbf{k}_{i}\right|\sqrt{1+1-2\cos\vartheta}=\frac{2\pi}{\lambda}\cdot2\sin\frac{\vartheta}{2}.$$ With this expression for $$k$$ in terms of the angle $$\vartheta$$, the structure factor is then $$\begin{matrix} S\left(k\right) & \simeq & 1-\frac{1}{3}k^{2}R_{g}^{2}

& = & 1-\frac{16\pi^{2}}{3}\frac{\sin^{2}\frac{\vartheta}{2}}{\lambda^{2}}R_{g}^{2}.\end{matrix}$$ From an experimental point of view, we can plot $$S$$ as a function of $$k^{2}\sim\sin^{2}\frac{\vartheta}{2}$$ and determine the polymer's gyration radius $$R_{g}$$ from the slope.

The approximation we have made is good when $$kR_{g}\sim\frac{\sin\frac{\vartheta}{2}}{\lambda}R_{g}\ll1$$, and this determines the range of angles that should be taken into account: we must have $$\sin\frac{\vartheta}{2}\sim\frac{\vartheta}{2}\lesssim\frac{\lambda}{R_{g}}$$. For laser scattering usually $$\lambda\sim500\mathrm{nm}$$ (about enough to measure $$R_{g}$$) while for neutron scattering $$\lambda\sim0.3\mathrm{nm}$$ (meaning we must take only very small angles into account to measure $$R_{g}$$, but also allowing for more detailed information about correlations within the chain to be collected).

Debye scattering function
Around 1947, Debye gave an exact result (the Debye function ) for Gaussian chains: $$\begin{matrix} \left\langle e^{i\mathbf{k}\cdot\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)}\right\rangle & = & e^{-\frac{\ell^{2}k^{2}}{6}\left|i-j\right|},

S_{D}\left(k\right) & = & \frac{2}{\left(k^{2}R_{g}^{2}\right)^{2}}\left(k^{2}R_{g}^{2}-1+e^{-k^{2}R_{g}^{2}}\right),\end{matrix}$$ or, defining $$x\equiv k^{2}R_{g}^{2}$$, $$ S_{D}\left(x\right)=\frac{2}{x^{2}}\left(x-1+e^{-x}\right).$$ At the limit where $$x\ll1$$ we can expand $$S\left(x\right)$$ around $$x=0$$, yielding the $$k\rightarrow0$$ limit we have encountered earlier. For $$x\gg1$$, $$S\left(x\right)=\frac{2}{x^{2}}\left(x-1+e^{-x}\right)\simeq\frac{2}{x}$$.
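Both limits of the Debye function are easy to confirm numerically (a sketch; `S_debye` is an illustrative name):

```python
import math

def S_debye(x):
    """Debye scattering function S_D(x) = (2/x^2)(x - 1 + e^{-x}),
    with x = (k R_g)^2, valid for a Gaussian chain."""
    return 2.0 / x ** 2 * (x - 1.0 + math.exp(-x))

# small x: S_D(x) ~ 1 - x/3, recovering the Guinier (small-angle) expansion;
# large x: S_D(x) ~ 2/x, the large-k tail of an ideal chain (D_f = 2).
```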

This also works very well for non-Gaussian chains in non-dilute solutions, where a small percentage of the chains is replaced by isotopic variants. This gives an effectively dilute solution of isotopic chains, which can be distinguished from the rest, and these chains are effectively Gaussian for reasons which we will mention later. An example from Rubinstein is neutron scattering from PMMA as done by R. Kirste et al. (1975), which fits very nicely to the Debye function for $$R_{g}\approx130\AA$$. In general, however, a SAW in a dilute solution modifies the tail of the Debye function, since $$\rho\left(k\right)\sim k^{-D_{f}}$$ and $$D_{f}=\frac{5}{3}$$ for a SAW.

The structure factor and monomer correlations
Consider the full distribution function of the distances $$\mathbf{R}_{ij}=\mathbf{R}_{i}-\mathbf{R}_{j}$$. This is related to the correlation function for monomer $$i$$: $$ g_{i}\left(\mathbf{r}\right)=\frac{1}{N}\sum_{j=1}^{N}\left\langle \delta\left(\mathbf{r}-\mathbf{R}_{ij}\right)\right\rangle .$$ This function is evaluated by fixing a certain monomer $$i$$ and counting which other monomers are at a distance $$\mathbf{r}$$ from it, averaging over all chain configurations. If we now average over all monomers $$1\le i\le N$$, we obtain $$ g\left(\mathbf{r}\right)=\left\langle g_{i}\left(\mathbf{r}\right)\right\rangle =\frac{1}{N^{2}}\sum_{ij}\left\langle \delta\left(\mathbf{r}-\mathbf{R}_{ij}\right)\right\rangle .$$ Fourier transforming it, $$ \int\mathrm{d}\mathbf{r}\, g\left(\mathbf{r}\right)e^{i\mathbf{k}\cdot\mathbf{r}}=\frac{1}{N^{2}}\sum_{ij}\left\langle e^{i\mathbf{k}\cdot\mathbf{R}_{ij}}\right\rangle =S\left(k\right).$$ The fact that the structure function is the Fourier transform of the scatterer density correlation function is, of course, not unique to the case of polymers. At large $$k$$, it can be shown (homework) that if $$S\left(k\right)\sim k^{-D_{f}}$$ then $$g\left(r\right)\sim\frac{1}{r^{d-D_{f}}}$$. We can therefore determine the fractal dimension of the chain from the large $$k$$ tail of the structure factor (see table).

Dilute and semi-dilute solutions
Up to this point, we have considered only independent chains in dilute solutions. We have also discussed the quality of solvents and the $$\vartheta$$ temperature. Now, we consider multiple chains in a good solvent (good because we do not want them in a collapsed state). The concentration of monomers $$c$$ is defined as the number of monomers (of all chains) per unit volume. A solution is dilute if the typical distance between chains is more than $$R_{g}$$, and semi-dilute if it is less than $$R_{g}$$, so that the chains overlap. Between these limits, the concentration passes through a crossover value $$c^{*}$$ where the inter-chain distance is equal to the typical chain size $$R_{g}$$.

We can calculate $$c^{*}$$ by calculating the concentration of monomers within a single chain and equating it to the average monomer concentration: $$ c^{*}\sim\frac{N}{R_{g}^{d}}=\frac{N^{1-d\nu}}{\ell^{d}}.$$ For instance, in a 3D SAW $$d=3$$ and $$\nu=\frac{3}{5}$$, such that $$c^{*}=N^{-0.8}\ell^{-3}$$. We can also work in terms of the volume fraction $$\phi=\ell^{3}c$$, for which the crossover value is $$\phi^{*}=N^{-0.8}$$. This turns out to be very small (for $$N=10^{6}$$ it is about $$10^{-5}$$, and for $$N=10^{3}$$ it is about 0.4%). 05/14/2009
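A minimal sketch of this order-of-magnitude estimate (the function name is illustrative):

```python
def overlap_fraction(N, nu=0.6, d=3):
    # phi* = ell^d c* ~ N^(1 - d*nu); an order-of-magnitude estimate only
    return N ** (1 - d * nu)

phi_short = overlap_fraction(10**3)   # ~ 4e-3, i.e. about 0.4%
phi_long = overlap_fraction(10**6)    # ~ 2e-5, already tiny
```

The point of the exercise is that long chains overlap at astonishingly low volume fractions.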

Free energy of mixing
If we have a mixture of two components - $$N_{A}$$ units of $$A$$ and $$N_{B}$$ units of $$B$$ on a lattice model with cell length $$\ell$$, such that $$N_{A}+N_{B}=N$$ is the total number of cells - we can define the volume fractions $$\phi\equiv\phi_{A}=\frac{N_{A}}{N_{A}+N_{B}}$$ and $$\phi_{B}=1-\phi$$. The free energy of mixing (in the simple isotropic case) is then $$ F_{\mathrm{mix}}=U_{\mathrm{mix}}-TS_{\mathrm{mix}}.$$

From a combinatorial argument and with the help of the Stirling series, $$ S_{\mathrm{mix}}=k_{B}\ln\frac{\left(N_{A}+N_{B}\right)!}{N_{A}!N_{B}!}\simeq-k_{B}\left[N_{A}\ln\frac{N_{A}}{N_{A}+N_{B}}+N_{B}\ln\frac{N_{B}}{N_{A}+N_{B}}\right].$$ The average entropy of mixing per cell is therefore $$ -\frac{S_{\mathrm{mix}}}{k_{B}\left(N_{A}+N_{B}\right)}=-\frac{\sigma_{\mathrm{mix}}}{k_{B}}=\phi\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right).$$
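The Stirling step can be checked numerically against the exact factorials, for instance via log-gamma functions (a sketch with illustrative names):

```python
import math

def s_exact(NA, NB):
    # S_mix / k_B = ln[(NA + NB)! / (NA! NB!)], computed via log-gamma
    return math.lgamma(NA + NB + 1) - math.lgamma(NA + 1) - math.lgamma(NB + 1)

def s_stirling(NA, NB):
    # per-cell form: -(NA + NB) [phi ln phi + (1 - phi) ln(1 - phi)]
    phi = NA / (NA + NB)
    return -(NA + NB) * (phi * math.log(phi) + (1 - phi) * math.log(1 - phi))

# for macroscopic particle numbers the two agree to high relative accuracy
ratio = s_exact(10**6, 3 * 10**6) / s_stirling(10**6, 3 * 10**6)
```

The correction terms of the Stirling series are only logarithmic in the particle numbers, which is why they are negligible for $$O\left(10^{23}\right)$$ cells.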

We now consider interactions $$U_{AA}$$, $$U_{BB}$$ and $$U_{AB}$$ between nearest neighbors of the two species. The specific form of the interaction depends on the coordination number $$z$$, or the number of nearest neighbors per grid point: for instance, on a square 2D grid $$z=4$$. The mixing interaction energy can be written as $$ U_{\mathrm{mix}}=N_{AB}U_{AB}+N_{AA}U_{AA}+N_{BB}U_{BB},$$ where the $$N_{ij}$$ count the number of boundaries of the different types within the system. In the mean field approximation, we can evaluate them by neglecting local variations in density: $$ N_{AB}=N_{A}z\frac{N_{B}}{N_{A}+N_{B}}=z\left(N_{A}+N_{B}\right)\phi\left(1-\phi\right),$$ $$ N_{BB}=\frac{z}{2}\left(N_{A}+N_{B}\right)\left(1-\phi\right)^{2},$$ $$ N_{AA}=\frac{z}{2}\left(N_{A}+N_{B}\right)\phi^{2}.$$ The interaction energy per cell due to mixing is then $$ u_{\mathrm{mix}}=\frac{U_{\mathrm{mix}}}{N_{A}+N_{B}}=z\phi\left(1-\phi\right)U_{AB}+\frac{z}{2}\left(1-\phi\right)^{2}U_{BB}+\frac{z}{2}\phi^{2}U_{AA},$$ and we will subtract from it the enthalpy of the "pure" system, where the components are unmixed: $$ \frac{U_{0}}{N_{A}+N_{B}}=\left(1-\phi\right)\frac{z}{2}U_{BB}+\phi\frac{z}{2}U_{AA}.$$ The difference between these two quantities is the change in enthalpy per unit cell due to mixing: $$ \Delta u_{\mathrm{mix}}=\frac{U_{\mathrm{mix}}-U_{0}}{N_{A}+N_{B}}=z\left(U_{AB}-\frac{1}{2}U_{AA}-\frac{1}{2}U_{BB}\right)\phi\left(1-\phi\right)=k_{B}T\chi\phi\left(1-\phi\right),$$ where we have defined the Flory parameter $$ \chi=\frac{z}{k_{B}T}\left[U_{AB}-\frac{U_{AA}+U_{BB}}{2}\right].$$

The sign of the Flory parameter $$\chi$$ determines whether the minimum of the energy will be at the center or edges of the parabola in $$\phi$$. $$ \Delta f_{\mathrm{mix}}=\frac{\Delta F_{\mathrm{mix}}}{N_{A}+N_{B}}=\Delta u_{\mathrm{mix}}-T\sigma_{\mathrm{mix}},$$ $$ \Delta f_{\mathrm{mix}}=k_{B}T\chi\phi\left(1-\phi\right)+k_{B}T\phi\ln\phi+k_{B}T\left(1-\phi\right)\ln\left(1-\phi\right).$$ This is the MFT approximation for the free energy of mixing.
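A short sketch (illustrative names; a crude grid search, not a serious minimizer) showing how the number of minima of $$\Delta f_{\mathrm{mix}}/k_{B}T$$ changes with $$\chi$$ in this symmetric case:

```python
import math

def f_mix(phi, chi):
    # Delta f_mix / (k_B T) for the symmetric mixture
    return chi * phi * (1 - phi) + phi * math.log(phi) + (1 - phi) * math.log(1 - phi)

def local_minima(chi, n=4001):
    # interior local minima of f_mix on a uniform grid in (0, 1)
    phis = [(i + 1) / (n + 1) for i in range(n)]
    vals = [f_mix(p, chi) for p in phis]
    return [phis[i] for i in range(1, n - 1)
            if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]]

wells_mixed = local_minima(1.0)   # one minimum at phi = 1/2: mixing wins
wells_demix = local_minima(3.0)   # two symmetric minima: phase separation
```

For $$\chi$$ below the critical value 2 the free energy has a single well at $$\phi=\frac{1}{2}$$; above it, two symmetric wells appear.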

The Flory-Huggins model for polymer solutions
This is based on work mostly done by Huggins around 1942. The basic idea is to consider a lattice like the one shown in (11), with chains (inhabiting $$N=5$$ blocks in the example) in a solvent (which can also be a set of chains, but in the example the number of blocks per solvent unit is $$S=1$$). The enthalpy of mixing $$U_{\mathrm{mix}}$$ is approximately independent of the change from the molecule-solvent system to this polymer-solvent system, at least within the MFT approximation. We can therefore set $$\phi=\frac{N_{P}}{N_{P}+N_{S}}$$ ($$N_{P}$$ is the number of monomers and $$N_{S}$$ the number of solvent units; $$\frac{N_{P}}{N}$$ is the number of chains) and use the previous expressions for $$\Delta u_{\mathrm{mix}}$$ and $$\chi$$. The fact that we have chains rather than individual monomers is of crucial importance when we calculate the entropy, though: chains have more constraints and therefore a lower entropy than isolated monomers. We will make an approximation (correct to first order in $$N$$ for $$N\gg1$$) based on the assumption that the chains are solid objects and can only be translated, rather than also rotated and conformed around their center of mass.

This gives, making the Stirling approximation as before, $$ S_{\mathrm{mix}}\simeq k_{B}\ln\frac{\left(N_{P}+N_{S}\right)!}{N_{S}!\left(\frac{N_{P}}{N}\right)!},$$ $$ -\frac{\sigma_{\mathrm{mix}}}{k_{B}}\simeq\frac{\phi}{N}\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right)+\alpha\phi,$$ $$ \alpha=\frac{1}{N}\ln\left(\frac{N_{S}+N_{P}}{N}\right)-\ln\left(N_{S}+N_{P}\right)+1-\frac{1}{N}.$$ If we neglect the term linear in $$\phi$$, which we will later show is of no importance, these two expressions lead to the Flory-Huggins free energy of mixing: $$ f\left(\phi,T\right)=\frac{\Delta f_{\mathrm{mix}}}{k_{B}T}=\chi\phi\left(1-\phi\right)+\frac{\phi}{N}\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right).$$ Compared to our previous expression, we see that the only difference is the division of the $$\phi\ln\phi$$ term by $$N$$.

Polymer/solvent phase transitions
A system composed of a polymer immersed in a solvent can be in a uniform phase (corresponding to a good solvent) or separated into two distinct phases (a bad solvent). Qualitatively, this depends on $$\chi$$: the entropic contribution to the free energy from $$\sigma_{\mathrm{mix}}$$ always prefers mixing, but the preference of $$\Delta u_{\mathrm{mix}}$$ depends on the sign of $$\chi$$. Phase transitions can only exist if $$\chi>0$$, because otherwise the total change in energy due to mixing is always negative. When discussing the Helmholtz free energy, $$\phi$$ is the degree of freedom; however, in the physical case of interest the overall composition is constant, and we must perform a Legendre transformation, or in other words introduce a Lagrange multiplier to impose the constraint that the average of $$\phi$$ equals $$\Phi$$. We therefore define $$ g=f-\mu\phi,$$ and after $$g$$ is minimized $$\mu$$ will be determined so as to maintain our constraint (it turns out that $$\mu$$ is the difference between the chemical potentials of the polymer and solvent). When $$g$$ has multiple minima ($$\frac{dg}{d\phi}=\frac{df}{d\phi}-\mu=0$$ for more than one $$\phi$$), a phase transition can exist.

If $$g$$ has only one minimum at $$\phi$$, then we must have $$f^{\prime}\left(\phi\right)=\mu$$. If $$g$$ has two minima, a first order phase transition will exist when the free energy $$g$$ at these two minima is the same. This amounts to a common tangent construction condition for $$f$$ (see 12): $$ \begin{cases} f\left(\phi_{1}\right)-\mu\phi_{1}=f\left(\phi_{2}\right)-\mu\phi_{2},\\ f^{\prime}\left(\phi_{1}\right)=f^{\prime}\left(\phi_{2}\right)=\mu.\end{cases}$$ This requires $$f\left(\phi_{1}\right)-f\left(\phi_{2}\right)=\mu\left(\phi_{1}-\phi_{2}\right)$$. The two formulations (in terms of $$g$$ and $$f$$) are of course identical. The common tangent actually describes the free energy $$f$$ of a mixed phase system (having a volume $$v_{1}$$ at concentration $$\phi_{1}$$ and a volume $$v_{2}$$ at concentration $$\phi_{2}$$, such that the lever rule $$\Phi=\frac{v_{1}\phi_{1}+v_{2}\phi_{2}}{v_{1}+v_{2}}$$ holds). When $$\phi_{1}<\Phi<\phi_{2}$$ this line is always lower than the free energy curve of the uniform system with concentration $$\Phi$$, and therefore the mixed-phase system must be the stable state.

Note that any additional term in $$f$$ which is linear in $$\phi$$ will only produce a shift in $$\mu$$, and not qualitatively change the phase diagram. This is because $$ f\rightarrow f+\alpha\phi\quad\Rightarrow\quad g\rightarrow\left(f+\alpha\phi\right)-\mu\phi=f-\left(\mu-\alpha\right)\phi.$$

Returning to the Flory-Huggins mixing energy, for $$\chi>0$$ we can see that $$f$$ can have two minima, and the system can therefore separate into two phases. For $$\chi<0$$ only one minimum exists, and therefore only one phase. Generalizing beyond the Flory-Huggins model, at any temperature $$T$$ there exists some $$\chi\left(T\right)$$, and often a dependence $$\chi\left(T\right)=A-\frac{B}{T}$$ works well experimentally (we have found a dependence $$\sim\frac{1}{T}$$ assuming that the interactions are independent of temperature). For every $$T$$ or $$\chi$$ where two phases exist, we can find $$\phi_{1}$$ and $$\phi_{2}$$ from the procedure above. This produces a phase diagram similar to (13), where the $$\phi\left(T\right)$$ curve is known as the binodal or demixing curve. The phase diagram (13) includes a few more details: one is the critical point $$\left(T_{c},\phi_{c}\right)$$ or $$\left(\chi_{c},\phi_{c}\right)$$, beyond which two coexisting phases can no longer exist. Another is the spinodal curve, which lies within the demixing curve and is defined by $$f^{\prime\prime}=0$$; it marks the transition between metastability and instability (within the spinodal curve, phase separation occurs spontaneously, while between the spinodal and binodal curves it requires some initial nucleation). The spinodal curve is usually quite close to the binodal curve, and since it can be found analytically it provides a useful estimate: $$ f\left(\phi\right)=\chi\phi\left(1-\phi\right)+\frac{\phi}{N}\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right)$$ gives $$ f^{\prime\prime}\left(\phi\right)=-2\chi+\frac{1}{N}\frac{1}{\phi}+\frac{1}{1-\phi},$$ and setting $$f^{\prime\prime}\left(\phi\right)=0$$ yields the spinodal $$ 2\chi_{s}=\frac{1}{1-\phi}+\frac{1}{N}\frac{1}{\phi}.$$ The endpoint of the spinodal curve is also the endpoint of the binodal curve; moreover, this endpoint is the same for the $$\chi\left(\phi\right)$$ and $$T\left(\phi\right)$$ curves. We can find it from $$ 0=\left.\frac{\partial\chi_{s}}{\partial\phi}\right|_{c}=\frac{1}{2}\left[-\frac{1}{N}\frac{1}{\phi_{c}^{2}}+\frac{1}{\left(1-\phi_{c}\right)^{2}}\right]\quad\Rightarrow\quad\phi_{c}=\frac{1}{1+\sqrt{N}}.$$ Inserting this into the equation for $$\chi_{s}$$ gives $$ \chi_{c}=\frac{1}{2}\left(1+\frac{1}{\sqrt{N}}\right)^{2}.$$
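These closed forms are easy to cross-check numerically; the sketch below (illustrative names) verifies that the spinodal $$\chi_{s}\left(\phi\right)$$ has its minimum at $$\phi_{c}$$ and that its value there is $$\chi_{c}$$:

```python
import math

def chi_spinodal(phi, N):
    # 2 chi_s = 1/(1 - phi) + 1/(N phi)
    return 0.5 * (1.0 / (1.0 - phi) + 1.0 / (N * phi))

def critical_point(N):
    # phi_c = 1/(1 + sqrt(N)), chi_c = (1 + 1/sqrt(N))^2 / 2
    phi_c = 1.0 / (1.0 + math.sqrt(N))
    chi_c = 0.5 * (1.0 + 1.0 / math.sqrt(N)) ** 2
    return phi_c, chi_c

phi_c, chi_c = critical_point(100)
at_min = chi_spinodal(phi_c, 100)          # should equal chi_c
nearby = chi_spinodal(phi_c + 1e-3, 100)   # should lie above it
```

Note how $$\phi_{c}\rightarrow0$$ and $$\chi_{c}\rightarrow\frac{1}{2}$$ as $$N\rightarrow\infty$$: long chains demix at very low polymer fractions.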

There is a great deal to expand on here. Chapter 4 in Rubinstein is a good place to start.

Introduction and Motivation
We will differentiate between several types of surfaces:

 * An outer surface (or boundary) between a liquid phase and a solid boundary or surface. This surface need not be in thermal equilibrium and exists under external constraints.
 * An interface between two phases in equilibrium with each other, like the A/B liquid mixture that was studied earlier.
 * Membranes, which have molecular thickness and are in equilibrium with the surrounding water.

We will first discuss flat interfaces, and then extend the discussion to curved and fluctuating interfaces.

Ginzburg-Landau formalism
The simplest kind of non-homogeneous system one can imagine may be described by the variation in some order parameter or concentration as a function of a single spatial direction, $$\phi\left(z\right)$$. For instance, if we have a gas at $$z\rightarrow+\infty$$ and a liquid at $$z\rightarrow-\infty$$, there will be some crossover regime between them. This kind of physics can be treated with a Ginzburg-Landau formalism, which can be derived from the continuum limit of a lattice gas/Ising model.

If every cell $$i$$ (with size $$\ell$$) is parametrized by a discrete spin variable $$S_{i}$$ such that $$S_{i}=1$$ when cell $$i$$ contains molecule $$A$$ and $$S_{i}=0$$ when it contains molecule $$B$$, we may write the Hamiltonian as $$ \mathcal{H}=\frac{1}{2}\sum_{ij}J_{ij}S_{i}\left(1-S_{j}\right).$$ The $$\left\{ J_{ij}\right\} $$ are the interaction constants between cells. Note that $$ J_{ij}S_{i}\left(1-S_{j}\right)+J_{ji}S_{j}\left(1-S_{i}\right)=\begin{cases} 0 & S_{i}=S_{j},\\ J_{ij} & S_{i}\neq S_{j}.\end{cases}$$ The partition function is $$ Z=\mbox{Tr}_{\left\{ S_{i}\right\} }e^{-\beta\mathcal{H}},$$ with $$F=-k_{B}T\ln Z$$. We can now formulate a mean-field theory for this model (by neglecting correlations: $$\left\langle S_{i}S_{j}\right\rangle \approx\left\langle S_{i}\right\rangle \left\langle S_{j}\right\rangle $$) in cases of spatial inhomogeneities (presence of walls and interfaces). The full development is left as an exercise: assuming local thermal equilibrium and writing $$\phi_{i}=\left\langle S_{i}\right\rangle $$, the result is $$ F_{0}=\frac{1}{2}\sum_{ij}J_{ij}\left\langle S_{i}\right\rangle \left(1-\left\langle S_{j}\right\rangle \right)+k_{B}T\sum_{i}\left[\phi_{i}\ln\phi_{i}+\left(1-\phi_{i}\right)\ln\left(1-\phi_{i}\right)\right].$$ Separating this $$F_{0}$$ into internal energy and entropy, $$ U=\frac{1}{2}\sum_{ij}J_{ij}\phi_{i}\left(1-\phi_{j}\right),$$ $$ -TS=k_{B}T\sum_{i}\left[\phi_{i}\ln\phi_{i}+\left(1-\phi_{i}\right)\ln\left(1-\phi_{i}\right)\right].$$ In the continuum limit $$\phi_{i}\rightarrow\phi\left(\mathbf{r}\right)$$ and $$\sum_{i}\rightarrow\int\frac{\mathrm{d}\mathbf{r}}{\ell^{3}}$$; neglecting long-range interactions, we can perform a Taylor expansion: $$ \phi_{i}-\phi_{j}=\phi\left(\mathbf{r}_{i}\right)-\phi\left(\mathbf{r}_{i}+\boldsymbol{\ell}\right)\simeq-\boldsymbol{\ell}\cdot\left.\triangledown\phi\right|_{\mathbf{r}_{i}}.$$ Using the identity $$ J_{ij}\phi_{i}\left(1-\phi_{j}\right)=\frac{1}{2}J_{ij}\left[\left(\phi_{i}-\phi_{j}\right)^{2}-\phi_{i}^{2}-\phi_{j}^{2}+2\phi_{i}\right],$$ the internal energy becomes $$ U\rightarrow\frac{1}{4}\int\frac{\mathrm{d}\mathbf{r}}{\ell^{3}}zJ\left(-2\phi^{2}+2\phi\right)+\frac{1}{4}\int\frac{\mathrm{d}\mathbf{r}}{\ell^{3}}\sum_{j}J_{ij}\left(\boldsymbol{\ell}_{j}\cdot\triangledown\phi\right)^{2},$$ where $$z$$ is the coordination number, so that $$ U=\frac{1}{2}\int\frac{\mathrm{d}\mathbf{r}}{\ell^{3}}\left[zJ\phi\left(1-\phi\right)\right]+\frac{1}{4}\int\frac{\mathrm{d}\mathbf{r}}{\ell^{3}}J\ell^{2}\left(\triangledown\phi\right)^{2}.$$ Adding the continuum limit entropy, $$ F=\int\mathrm{d}\mathbf{r}\left[f_{0}\left(\phi\right)+\frac{1}{2}B\left(\triangledown\phi\right)^{2}\right],$$ $$ f_{0}=\frac{k_{B}T}{\ell^{3}}\left[\chi\phi\left(1-\phi\right)+\phi\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right)\right],\qquad B\equiv\frac{J}{2\ell},\qquad\frac{1}{2}zJ\equiv k_{B}T\chi.$$ We can find the profile $$\phi\left(\mathbf{r}\right)$$ at equilibrium by minimizing the free energy functional $$F\left[\phi\left(\mathbf{r}\right)\right]$$ with respect to $$\phi\left(\mathbf{r}\right)$$ and taking external constraints into account. Normally, $$B>0$$ and the minimum of $$F$$ is homogeneous away from surfaces and interfaces. If $$\phi\left(-\infty\right)=\phi\left(+\infty\right)=\phi_{A}$$, the minimal solution $$\phi\left(\mathbf{r}\right)$$ is a constant and we will have a single homogeneous phase. On the other hand, if $$ \begin{cases} \phi\left(-\infty\right)=\phi_{A},\\ \phi\left(+\infty\right)=\phi_{B},\end{cases}$$ and we are in the two-phase region in $$\left(\chi,\phi\right)$$, then a 1D profile must exist that solves the Euler-Lagrange equation, and becomes approximately homogeneous far from the center of the interface.

1D profile at an interface
Quite independently of the previous treatment and the microscopic model, the free energy can be written as a functional of an order parameter and its gradients: $$ F=\int\mathrm{d}\mathbf{r}\left\{ f_{0}\left(\phi\left(\mathbf{r}\right)\right)+\frac{B}{2}\left[\triangledown\phi\left(\mathbf{r}\right)\right]^{2}\right\} .$$ Since $$\left(\triangledown\phi\right)^{2}>0$$, for $$B>0$$ the system avoids strong local fluctuations and smooth states have smaller energies. A uniform state is therefore preferred, and if the system is not allowed to become fully uniform then regions of different phases form in equilibrium with each other. This is shown in (16), and can also be described by a tangent construction of the type illustrated in (12). In the two phase example above, due to the symmetry of $$f$$ in $$\phi$$ $$(\phi\longleftrightarrow1-\phi)$$, the critical point is clearly at $$\phi_{c}=\frac{1}{2}$$. We will make a Taylor expansion of $$\phi$$ around $$\phi_{c}$$: $$ \phi=\phi_{c}+\psi=\frac{1}{2}+\psi.$$ Due to the same symmetry in $$\phi$$, an expansion of $$f$$ in $$\psi$$ should contain only even powers. Performing this expansion gives the result $$ f_{0}=\frac{k_{B}T}{\ell^{3}}\left[2\psi^{2}+\frac{4\psi^{4}}{3}-\ln2+\chi\left(\frac{1}{4}-\psi^{2}\right)\right]=\frac{k_{B}T}{\ell^{3}}\left[\left(2-\chi\right)\psi^{2}+\frac{4}{3}\psi^{4}\right]+\mbox{const.}$$ In general the $$\frac{4}{3}$$ will be replaced by some positive numerical factor $$\frac{\gamma}{4}$$. To obtain the correct critical behavior (note that $$\chi_{c}=\chi\left(T_{c}\right)=2$$) we assume a linear dependence of the form $$-\frac{\alpha}{2}\left(T_{c}-T\right)=2-\chi$$, and minimize $$ F=\frac{k_{B}T}{\ell^{3}}\int\left[-\frac{\alpha}{2}\left(T_{c}-T\right)\psi^{2}+\frac{\gamma}{4}\psi^{4}\right]\mathrm{d}\mathbf{r}+\frac{B}{2}\int\left(\triangledown\psi\right)^{2}\mathrm{d}\mathbf{r}.$$ The above expansion of the inhomogeneous free energy is called the Ginzburg-Landau (GL) model or expansion. By applying a variational principle on this free energy, we obtain the Euler-Lagrange (EL) equation: $$ \frac{\delta F}{\delta\psi\left(\mathbf{r}\right)}=0\quad\Rightarrow\quad\frac{\partial f}{\partial\psi}-\triangledown\cdot\frac{\partial f}{\partial\left(\triangledown\psi\right)}=0,$$ where $$f$$ is the integrand of $$F$$. 06/09/2009 Here $$ \triangledown\cdot\frac{\partial f}{\partial\left(\triangledown\psi\right)}=\sum_{i}\frac{\partial}{\partial r_{i}}\frac{\partial f}{\partial\left(\triangledown_{i}\psi\right)},\qquad\triangledown_{i}\psi=\frac{\partial\psi}{\partial r_{i}}.$$ In particular, $$\frac{B}{2}\triangledown\cdot\frac{\partial}{\partial\left(\triangledown\psi\right)}\left(\triangledown\psi\right)^{2}=B\triangledown\cdot\triangledown\psi=B\triangledown^{2}\psi$$ and $$\frac{\delta f_{0}}{\delta\psi}=-\alpha\left(T_{c}-T\right)\psi+\gamma\psi^{3}$$. The EL equation is therefore $$ -\alpha\left(T_{c}-T\right)\psi+\gamma\psi^{3}-B\triangledown^{2}\psi=0.$$ This is the well-known Ginzburg-Landau (GL) equation. For $$T>T_{c}$$ the only homogeneous (bulk) solution (arrived at by neglecting the Laplacian term) is $$ -\alpha\left(T_{c}-T\right)\psi+\gamma\psi^{3}=0\Rightarrow\psi=0.$$ In the other case, when $$T<T_{c}$$, the system has two homogeneous solutions $$ \psi=\pm\sqrt{\frac{\alpha\left(T_{c}-T\right)}{\gamma}}\equiv\pm\psi_{b}.$$ If we do not neglect the derivative but assume a 1D profile with $$\psi\left(\pm\infty\right)=\pm\psi_{b}$$ and $$\psi^{\prime}\left(\pm\infty\right)=0$$, we must solve the equation $$ B\psi^{\prime\prime}\left(z\right)+\alpha\left(T_{c}-T\right)\psi\left(z\right)-\gamma\psi^{3}\left(z\right)=0.$$ The exact solution of the GL model is $$ \psi\left(z\right)=\psi_{b}\tanh\left(\frac{z}{\xi}\right),\qquad\xi^{-2}=\frac{\alpha\left(T_{c}-T\right)}{2B}.$$ We have introduced the correlation length $$\xi$$, which is typical of the width of the meniscus (the layer in which the phases are mixed). As a matter of fact, $$\xi$$ is also the correlation length by the definition $$\left\langle \psi\left(a\right)\psi\left(r+a\right)\right\rangle \sim\exp(-r/\xi)$$. The dependence $$\xi\sim\left(T_{c}-T\right)^{-\frac{1}{2}}$$ is the mean field result with an exponent $$\nu=\frac{1}{2}$$. In general, $$\xi\sim\left(T_{c}-T\right)^{-\nu}$$. We also have for the order parameter the dependence $$\psi_{b}\sim\left(T_{c}-T\right)^{\beta}$$, where we have obtained in MFT $$\beta=\frac{1}{2}$$.
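The tanh kink can be checked directly against the GL equation. Below is a numerical sketch with arbitrary illustrative constants; the kink solves the equation provided the width obeys $$\xi^{2}=2B/\left[\alpha\left(T_{c}-T\right)\right]$$:

```python
import math

# arbitrary illustrative constants: B, a = alpha*(Tc - T), g = gamma
B, a, g = 0.7, 1.3, 2.1
psi_b = math.sqrt(a / g)          # bulk order parameter
xi = math.sqrt(2.0 * B / a)       # kink width, xi^-2 = a / (2B)

def psi(z):
    return psi_b * math.tanh(z / xi)

def gl_residual(z, h=1e-4):
    # residual of B psi'' + a psi - g psi^3, psi'' via central differences
    d2 = (psi(z + h) - 2.0 * psi(z) + psi(z - h)) / h**2
    return B * d2 + a * psi(z) - g * psi(z) ** 3

residuals = [abs(gl_residual(z)) for z in (-2.0, -0.5, 0.3, 1.7)]
```

The residual vanishes (up to finite-difference error) at every $$z$$, and $$\psi$$ saturates at $$\pm\psi_{b}$$ within a few widths $$\xi$$ of the interface.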

Surface energy and surface tension
Surface energy is the excess energy of the system with respect to the bulk. Surface tension $$\sigma$$ is defined as the surface energy per unit area. Therefore, in our case of two phases separated by a meniscus, $$\sigma$$ can be calculated using $$ \sigma\cdot\mbox{Area}=\Delta F=F\left[\psi\left(\mathbf{r}\right)\right]-\left[\frac{1}{2}Vf_{0}\left(\psi_{b}\right)+\frac{1}{2}Vf_{0}\left(-\psi_{b}\right)\right].$$ Here, we have subtracted the bulk energy of the two separate phases from the energy of the full system including the interface. Note that in equilibrium, by definition, $$f_{0}\left(\psi_{b}\right)=f_{0}\left(-\psi_{b}\right)$$. With the 1D dependence we are treating, then, $$\psi\left(\mathbf{r}\right)=\psi\left(z\right)$$ and $$ \sigma=\frac{1}{\mbox{Area}}\int\mathrm{d}x\int\mathrm{d}y\int_{-\infty}^{\infty}\mathrm{d}z\left[\frac{B}{2}\left[\psi^{\prime}\left(z\right)\right]^{2}+f_{0}\left(\psi\left(z\right)\right)-f_{0}\left(\psi_{b}\right)\right]=\int_{-\infty}^{\infty}\mathrm{d}z\left[\frac{B}{2}\left[\psi^{\prime}\left(z\right)\right]^{2}+f_{0}\left(\psi\right)-f_{0}\left(\psi_{b}\right)\right].$$ Unlike $$F\left[\psi\right]$$, which is an extensive quantity that scales with the size of the system, $$\sigma$$ is a geometry-independent parameter with units of energy per unit area.

The first term above is reminiscent of kinetic energy and the second of potential energy. An analogy to the classical mechanics of a point particle exists, as detailed in the following table. \begin{table}[H] \centering{}\begin{tabular}{|c|c|} \hline $$z$$ & $$t$$ (time)\tabularnewline \hline \hline $$\psi\left(z\right)$$ & $$x\left(t\right)$$ (distance)\tabularnewline \hline $$\frac{B}{2}\left(\psi^{\prime}\left(z\right)\right)^{2}$$ & $$\frac{m}{2}\dot{x}^{2}$$ (kinetic energy)\tabularnewline \hline $$f_{0}\left(\psi\right)$$ & $$-V\left(x\right)$$ (potential energy)\tabularnewline \hline $$f_{0}\left(\psi_{b}\right)$$ & $$-E$$ (total energy)\tabularnewline \hline \end{tabular} \end{table} With this analogy in mind, we can derive an expression similar to energy conservation in mechanics. From the variational principle applied to this integrand we obtain $$ \frac{\partial f_{0}}{\partial\psi}=\frac{\partial}{\partial z}\frac{\partial}{\partial\psi^{\prime}}\left[\frac{B}{2}\left(\psi^{\prime}\right)^{2}\right]=B\psi^{\prime\prime}.$$ Multiplying this by $$\psi^{\prime}$$ gives $$ \psi^{\prime}\frac{\partial f_{0}}{\partial\psi}=B\psi^{\prime}\psi^{\prime\prime}=\frac{B}{2}\frac{d}{dz}\left[\left(\psi^{\prime}\right)^{2}\right].$$ Integrating from $$-\infty$$ to $$z$$, $$ \int_{-\infty}^{z}\frac{df_{0}}{d\psi}\frac{d\psi}{dz}\mathrm{d}z=f_{0}\left(\psi\right)-f_{0}\left(\psi_{b}\right)=\frac{B}{2}\int_{-\infty}^{z}\frac{d}{dz}\left(\psi^{\prime}\right)^{2}\mathrm{d}z=\frac{B}{2}\left\{ \left[\psi^{\prime}\left(z\right)\right]^{2}-\left[\psi^{\prime}\left(-\infty\right)\right]^{2}\right\} ,$$ and therefore $$ f_{0}\left(\psi\right)-f_{0}\left(\psi_{b}\right)=\frac{1}{2}B\left(\psi^{\prime}\right)^{2}.$$ The term $$\psi^{\prime}\left(-\infty\right)$$ disappears due to the boundary condition at $$z\rightarrow-\infty$$, where $$\psi\rightarrow-\psi_{b}=\mathrm{const.}$$ and therefore $$\psi^{\prime}=0$$. The analogy between this equation and the law of conservation of mechanical energy can be stressed by writing it as $$ \underset{\mbox{kinetic energy}}{\underbrace{\frac{1}{2}B\left(\psi^{\prime}\right)^{2}}}\underset{\mbox{potential energy}}{\underbrace{-f_{0}\left(\psi\right)}}=\underset{\mbox{total energy}}{\underbrace{-f_{0}\left(\psi_{b}\right)}}.$$ Returning to the surface tension, we can use this conservation law to rewrite it in the simpler form $$ \sigma=\int_{-\infty}^{\infty}\left[\frac{B}{2}\left(\psi^{\prime}\left(z\right)\right)^{2}+f_{0}\left(\psi\right)-f_{0}\left(\psi_{b}\right)\right]\mathrm{d}z=B\int_{-\infty}^{\infty}\left(\psi^{\prime}\left(z\right)\right)^{2}\mathrm{d}z.$$ An estimate may be obtained from $$ \sigma\simeq B\left(\frac{\psi_{b}}{\xi}\right)^{2}\xi=B\psi_{b}^{2}\xi^{-1}.$$ The exact expression for $$\sigma$$ may be obtained from the exact GL form that we have derived for $$\psi$$. In any case, the temperature dependence of $$\sigma$$ is of the form $$ \sigma\approx B\underset{\xi^{-1}}{\underbrace{\sqrt{\frac{\alpha\left(T_{c}-T\right)}{2B}}}}\,\underset{\psi_{b}^{2}}{\underbrace{\frac{\alpha\left(T_{c}-T\right)}{\gamma}}}\sim\left(T_{c}-T\right)^{\frac{3}{2}}.$$ If we insert the general power-law dependences of $$\xi$$ and $$\psi_{b}$$ into this equation, we see that the exponent for the surface energy as a function of $$\left(T_{c}-T\right)$$ is $$2\beta+\nu$$. This discussion can be extended to systems which do not have symmetry between $$\phi$$ and $$1-\phi$$, such as a liquid/gas system with two densities $$n_{\ell}$$ and $$n_{g}$$.
Without proof, we will state that within the GL formalism it can be shown that $$ f_{0}\left(n\right)-\frac{1}{2}\left[f_{0}\left(n_{g}\right)+f_{0}\left(n_{\ell}\right)\right]=c\left(n-n_{g}\right)^{2}\left(n-n_{\ell}\right)^{2}.$$ The surface energy will be $$ \Delta F=\int\mathrm{d}\mathbf{r}\left[\frac{1}{2}B\left(\triangledown n\right)^{2}+f_{0}\left(n\right)-\frac{1}{2}f_{0}\left(n_{g}\right)-\frac{1}{2}f_{0}\left(n_{\ell}\right)\right].$$ For a profile in the $$z$$ direction, $$ \sigma=\frac{\Delta F}{\mbox{Area}}=\int_{-\infty}^{\infty}\mathrm{d}z\left[\frac{B}{2}\left(n^{\prime}\right)^{2}+c\left[n\left(z\right)-n_{g}\right]^{2}\left[n\left(z\right)-n_{\ell}\right]^{2}\right].$$ After variation, one obtains for the two coexisting phases with $$n_{\ell}>n>n_{g}$$ $$ \ln\frac{n_{\ell}-n}{n-n_{g}}=-\frac{z}{\xi},\qquad\xi^{-1}=\left(n_{\ell}-n_{g}\right)\sqrt{\frac{2c}{B}},$$ with $$n_{\ell}-n_{g}\sim\left(T_{c}-T\right)^{\frac{1}{2}}$$ and $$\xi\sim\left(T_{c}-T\right)^{-\frac{1}{2}}$$. The density profile interpolates smoothly between the two phases: $$ n\left(z\right)=\frac{n_{\ell}}{1+e^{-\frac{z}{\xi}}}+\frac{n_{g}}{1+e^{+\frac{z}{\xi}}}.$$
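The quoted profile can be checked against the first integral of the variational equation, $$\frac{B}{2}\left(n^{\prime}\right)^{2}=c\left(n-n_{g}\right)^{2}\left(n_{\ell}-n\right)^{2}$$, analogous to the conservation law above. A sketch with arbitrary illustrative constants, taking the width as $$\xi^{-1}=\left(n_{\ell}-n_{g}\right)\sqrt{2c/B}$$:

```python
import math

# illustrative constants: gradient coefficient B, quartic coefficient c,
# coexisting densities n_g and n_l
B, c, ng, nl = 0.8, 1.7, 0.2, 1.1
xi = 1.0 / ((nl - ng) * math.sqrt(2.0 * c / B))

def n(z):
    # sigmoid profile interpolating from n_g at z -> -inf to n_l at z -> +inf
    return nl / (1.0 + math.exp(-z / xi)) + ng / (1.0 + math.exp(z / xi))

def first_integral_gap(z, h=1e-5):
    # (B/2) n'^2 - c (n - n_g)^2 (n_l - n)^2 should vanish on the profile
    d = (n(z + h) - n(z - h)) / (2.0 * h)
    return 0.5 * B * d * d - c * (n(z) - ng) ** 2 * (nl - n(z)) ** 2

gaps = [abs(first_integral_gap(z)) for z in (-2.0, -0.4, 0.0, 0.9, 2.5)]
```

The gap vanishes (to finite-difference accuracy) at every $$z$$, and the profile saturates at the two bulk densities far from the interface.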

A few generalizations:

 * Surfactants or surface active materials: this includes soap, detergent, biological membranes composed of biological amphiphiles called phospholipids, and more. What they have in common is that they are formed of molecules with charged or polarized "heads" connected to long hydrocarbon "tails". These molecules are called amphiphilic, since the tails are hydrophobic and the heads hydrophilic. This causes them to accumulate on interfaces between water and air, and reduce the surface tension (by a factor of $$\sim2-3$$): $$ \sigma\left(\mbox{with soap}\right)=\sigma_{\mathrm{air/water}}-\Delta\sigma\left(\phi_{s}\right),$$ where $$\phi_{s}$$ is the surface concentration of the soap molecules.
 * Emulsions: drops of oil in water (or water in oil), stabilized by some sort of emulsifier (which is also a surfactant). Some common examples are milk and mayonnaise.

06/11/2009
 * Detergency of soap: while soap reduces the surface tension between oil and water, it does not create a phase where oil and water are mixed on a molecular level. Rather, micrometric oil droplets are formed in the aqueous solution. Cleaning is the process by which oily dirt is solubilized in the aqueous solution and washed away from the object we clean.

Review of differential geometry
Books: The book by Safran has a short introduction, which will be followed here. The one by Visconti is more thorough and oriented towards other physics problems such as relativity. There also exists a multi-authored book on the subject edited by David Nelson, and a mathematical book on the theory of manifolds by Spivak.

In order to discuss surfaces and curves which exhibit local curvature, we will need to introduce a few mathematical concepts. A brief introduction follows.

Curves
A parametric curve $$\mathbf{R}\left(u\right)$$ is a set of vectors along some contour in space, expressed as a function of the parameter $$u$$, which may vary, for example, from $$0$$ to the length $$L$$ of the curve. The differential length element $$\mathrm{d}s$$ along the curve can be expressed as $$ \mathrm{d}s=\left|\mathbf{R}\left(u+\mathrm{d}u\right)-\mathbf{R}\left(u\right)\right|\simeq\left|\frac{\partial\mathbf{R}}{\partial u}\right|\mathrm{d}u,\qquad\frac{\mathrm{d}s}{\mathrm{d}u}=\left|\frac{\partial\mathbf{R}}{\partial u}\right|.$$ A tangent vector $$\hat{\mathbf{t}}$$ can be found from $$ \hat{\mathbf{t}}=\frac{\mathrm{d}\mathbf{R}}{\mathrm{d}s}=\frac{\frac{\mathrm{d}\mathbf{R}}{\mathrm{d}u}}{\frac{\mathrm{d}s}{\mathrm{d}u}}.$$ Note that from the magnitude of this expression, $$\hat{\mathbf{t}}$$ is always a unit vector. It is tangent to the curve because it is proportional to $$\Delta\mathbf{R}=\mathbf{R}\left(u+\mathrm{d}u\right)-\mathbf{R}\left(u\right)$$.

With these definitions, we can define curvature via one extra derivative: $$ \frac{\mathrm{d}\hat{\mathbf{t}}}{\mathrm{d}s}=\frac{\mathrm{d}^{2}\mathbf{R}}{\mathrm{d}s^{2}}\equiv\kappa\hat{\mathbf{n}}_{c}.$$ The unit vector $$\hat{\mathbf{n}}_{c}$$ is a unique vector perpendicular to $$\hat{\mathbf{t}}$$ (this is easy to show by taking $$\frac{\mathrm{d}}{\mathrm{d}s}\hat{\mathbf{t}}^{2}=2\hat{\mathbf{t}}\cdot\frac{\mathrm{d}\hat{\mathbf{t}}}{\mathrm{d}s}=\frac{\mathrm{d}}{\mathrm{d}s}\left(1\right)=0$$), and we can also write $$ \kappa=\left|\frac{\mathrm{d}\hat{\mathbf{t}}}{\mathrm{d}s}\right|.$$ It is also useful to define the local radius of curvature $$R=\kappa^{-1}$$. Some intuition can be gained from an analogy with the kinematics of a point particle moving without friction along a curve in space, if $$u$$ is replaced by the time $$t$$. The tangent and curvature vectors can then be related to the velocity and acceleration, respectively.
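A sketch checking this definition on the simplest example, a circle of radius $$R$$ parametrized by arclength, where the curvature should equal $$1/R$$ (names illustrative):

```python
import math

R = 2.5  # circle radius; kappa should come out as 1/R

def point(s):
    # circle parametrized by arclength, so |dR/ds| = 1
    return (R * math.cos(s / R), R * math.sin(s / R))

def curvature(s, h=1e-3):
    # kappa = |d^2 R / ds^2|, second derivative via central differences
    (xm, ym), (x0, y0), (xp, yp) = point(s - h), point(s), point(s + h)
    d2x = (xm - 2.0 * x0 + xp) / h**2
    d2y = (ym - 2.0 * y0 + yp) / h**2
    return math.hypot(d2x, d2y)

kappa = curvature(0.7)
```

The same finite-difference recipe works for any curve, provided it is first reparametrized by arclength (otherwise $$\mathrm{d}^{2}\mathbf{R}/\mathrm{d}u^{2}$$ mixes curvature with the stretching of the parameter).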

\paragraph{Surfaces}

Similarly to curves, a parametric surface $$\mathbf{r}=\mathbf{R}\left(u,v\right)$$ in space can be defined as a function of two parameters. There are three scalar functions contained in this explicit definition: $$ \begin{cases} x=f\left(u,v\right),

y=g\left(u,v\right),

z=h\left(u,v\right).\end{cases}$$ Note that it is also possible to represent surfaces implicitly as $$ F\left(x,y,z\right)=0,$$ where other than its zeros $$F$$ is arbitrary.

Any explicit definition requires some particular choice of $$u$$ and $v$. For instance, one choice (called the Monge representation) is $$ \begin{cases} x=f\left(u,v\right)=u,

y=g\left(u,v\right)=v,

z=h\left(u,v\right)=h\left(x,y\right).\end{cases}$$ In vector notation, $$ \mathbf{r}=\left(x,y,h\left(x,y\right)\right).$$ This works only if there is a single $$z$$ value for each choice of $$x$$ and $$y$$, and is very convenient for surfaces which are almost flat. Another common choice useful for nearly spherical surfaces is the spherical representation, where $$u=\vartheta$$ and $$v=\varphi$$. In spherical coordinates, this is $$ \mathbf{r}=\left(r,\vartheta,\varphi\right)=\left(R\left(\vartheta,\varphi\right),\vartheta,\varphi\right).$$

We can define two tangent vectors $$\mathbf{r}_{u}$$ and $$\mathbf{r}_{v}$$ at every point on the surface (in general they are not orthogonal to each other). The unit vector normal to the surface is $$\hat{\mathbf{n}}=\frac{\mathbf{r}_{u}\times\mathbf{r}_{v}}{\left|\mathbf{r}_{u}\times\mathbf{r}_{v}\right|}$$. It is easy to find the unit vector from the implicit representation, and one can usually find an implicit representation: for instance, starting from Monge, $$F=z-h\left(x,y\right)=0$$. On the surface, $$F=0$$ implies $$\begin{matrix} \mathrm{d}F & = & F\left(\mathbf{r}+\mathrm{d}\mathbf{r}\right)-F\left(\mathbf{r}\right)

 & = & \mathrm{d}\mathbf{r}\cdot\triangledown F=0.\end{matrix}$$ The vector $$\mathrm{d}\mathbf{r}$$ can be any vector tangent to the surface, and therefore $$\triangledown F$$ must be proportional to the normal vector: $$ \hat{\mathbf{n}}=\frac{\triangledown F}{\left|\triangledown F\right|}.$$
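The relation $$\hat{\mathbf{n}}=\triangledown F/\left|\triangledown F\right|$$ is easy to exercise numerically. A minimal sketch (not from the notes; the sphere and evaluation point are arbitrary choices):

```python
import math

def unit_normal(F_grad, p):
    """n_hat = grad F / |grad F| at point p of the implicit surface F = 0.
    F_grad returns the gradient of F as a list/tuple of three components."""
    g = F_grad(p)
    norm = math.sqrt(sum(c * c for c in g))
    return [c / norm for c in g]

# Sphere F = x^2 + y^2 + z^2 - R^2 has grad F = (2x, 2y, 2z), so the
# normal is the radial unit vector, as expected geometrically.
grad_sphere = lambda p: [2 * c for c in p]
print(unit_normal(grad_sphere, (0.0, 0.0, 3.0)))  # → [0.0, 0.0, 1.0]
```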

\paragraph{Metric of a curved surface}

A surface has been defined as an ensemble of points $$\left\{ \mathbf{r}\left(u,v\right)\right\} $$ embedded in 3-dimensional space. In order to measure length along such a surface, we must integrate along a differential length element within it: $$\begin{matrix} \mathrm{d}\mathbf{r} & = & \frac{\partial\mathbf{r}}{\partial u}\mathrm{d}u+\frac{\partial\mathbf{r}}{\partial v}\mathrm{d}v\equiv\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v,

\mathrm{d}s & = & \left|\mathrm{d}\mathbf{r}\right|,

\mathrm{d}s^{2} & = & \mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}=\left(\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v\right)\left(\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v\right)

& = & \overset{E}{\overbrace{\mathbf{r}_{u}^{2}}}\left(\mathrm{d}u\right)^{2}+2\overset{F}{\overbrace{\mathbf{r}_{u}\cdot\mathbf{r}_{v}}}\mathrm{d}u\mathrm{d}v+\overset{G}{\overbrace{\mathbf{r}_{v}^{2}}}\left(\mathrm{d}v\right)^{2}.\end{matrix}$$ The metric is defined as $$ g\equiv EG-F^{2}.$$ It is positive definite since $$ g=EG-F^{2}=\left(\mathbf{r}_{u}\cdot\mathbf{r}_{u}\right)\left(\mathbf{r}_{v}\cdot\mathbf{r}_{v}\right)-\left(\mathbf{r}_{u}\cdot\mathbf{r}_{v}\right)^{2}=\left(\mathbf{r}_{u}\times\mathbf{r}_{v}\right)^{2}>0.$$ The surface element can be expressed in terms of the metric: $$ \mathrm{d}A=\left|\mathrm{d}\mathbf{r}_{u}\times\mathrm{d}\mathbf{r}_{v}\right|=\left|\mathbf{r}_{u}\times\mathbf{r}_{v}\right|\mathrm{d}u\mathrm{d}v=\sqrt{g}\mathrm{d}u\mathrm{d}v.$$ We illustrate this in the Monge representation as an example. Here, $$\begin{matrix} \mathbf{r}_{u} & = & \mathbf{r}_{x}=\frac{\partial}{\partial x}\left(x,y,h\left(x,y\right)\right)

& = & \left(1,0,h_{x}\right),

\mathbf{r}_{v} & = & \mathbf{r}_{y}=\left(0,1,h_{y}\right).\end{matrix}$$ The surface element is $$ \mathrm{d}A=\sqrt{g}\mathrm{d}x\mathrm{d}y,$$ with the metric $$\begin{matrix} \mathbf{r}_{u}\times\mathbf{r}_{v} & = & \left|\begin{array}{ccc} \mathbf{\hat{x}} & \mathbf{\hat{y}} & \mathbf{\hat{z}}\\ 1 & 0 & h_{x}\\ 0 & 1 & h_{y}\end{array}\right|=\left(-h_{x},-h_{y},1\right),

\sqrt{g} & = & \left|\mathbf{r}_{u}\times\mathbf{r}_{v}\right|=\sqrt{1+h_{x}^{2}+h_{y}^{2}}.\end{matrix}$$ The length element is $$\begin{matrix} \mathrm{d}s^{2} & = & r_{x}^{2}\mathrm{d}x^{2}+2\mathbf{r}_{x}\cdot\mathbf{r}_{y}\mathrm{d}x\mathrm{d}y+r_{y}^{2}\mathrm{d}y^{2}

 & = & \left(1+h_{x}^{2}\right)\mathrm{d}x^{2}+2h_{x}h_{y}\mathrm{d}x\mathrm{d}y+\left(1+h_{y}^{2}\right)\mathrm{d}y^{2},\end{matrix}$$ and therefore we have in the Monge representation $$ \mathrm{d}s=\sqrt{\left(1+h_{x}^{2}\right)\mathrm{d}x^{2}+2h_{x}h_{y}\mathrm{d}x\mathrm{d}y+\left(1+h_{y}^{2}\right)\mathrm{d}y^{2}},$$ $$ \mathrm{d}A=\sqrt{1+h_{x}^{2}+h_{y}^{2}}\mathrm{d}x\mathrm{d}y.$$ In the implicit representation, one can begin the same process by writing the surface element in terms of the volume element: $$ \mathrm{d}A=\delta^{\left(3\right)}\left(\mathbf{r}-\mathbf{r}_{s}\left(x,y,z\right)\right)\mathrm{d}^{3}r,$$ using the 3D Dirac delta function $$\delta^{\left(3\right)}\left(\mathbf{r}\right)$$. A general property of the Dirac delta is that $$ \delta\left(f\left(x\right)-a\right)=\frac{\delta\left(x-f^{-1}\left(a\right)\right)}{\left|\frac{\partial f}{\partial x}\right|_{x=f^{-1}\left(a\right)}},$$ where $$f^{-1}$$ is the inverse function such that $$f\left(x\right)=a\Rightarrow f^{-1}\left(a\right)=x$$. In terms of the function $$F$$ such that the surface is defined by $F=0$, we can use this property to write $$ \delta^{\left(3\right)}\left(\mathbf{r}-\mathbf{r}_{s}\left(F\right)\right)=\delta\left(F\right)\left|\triangledown F\right|,$$ or $$ \mathrm{d}A=\delta\left(F\right)\left|\triangledown F\right|\mathrm{d}^{3}r.$$ Returning to the implicit version of the Monge representation, $$\begin{matrix} \triangledown F & = & \left(-h_{x},-h_{y},1\right),

\left|\triangledown F\right| & = & \sqrt{1+h_{x}^{2}+h_{y}^{2}},\end{matrix}$$ $$ \mathrm{d}A=\delta\left(z-h\left(x,y\right)\right)\mathrm{d}z\sqrt{1+h_{x}^{2}+h_{y}^{2}}\mathrm{d}x\mathrm{d}y.$$
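As a quick numerical illustration (a sketch, not part of the notes; the grid resolution and test surface are arbitrary choices), the Monge area element $$\mathrm{d}A=\sqrt{1+h_{x}^{2}+h_{y}^{2}}\,\mathrm{d}x\mathrm{d}y$$ can simply be summed over a grid. For the tilted plane $$h=x$$, $$\sqrt{g}=\sqrt{2}$$ everywhere:

```python
import math

def monge_area(h, x0, x1, y0, y1, n=200, d=1e-6):
    """Area of the surface z = h(x, y) over a rectangle, summing
    dA = sqrt(1 + h_x^2 + h_y^2) dx dy with finite-difference slopes."""
    dx, dy = (x1 - x0) / n, (y1 - y0) / n
    area = 0.0
    for i in range(n):
        for j in range(n):
            x = x0 + (i + 0.5) * dx      # midpoint of the grid cell
            y = y0 + (j + 0.5) * dy
            hx = (h(x + d, y) - h(x - d, y)) / (2 * d)
            hy = (h(x, y + d) - h(x, y - d)) / (2 * d)
            area += math.sqrt(1 + hx * hx + hy * hy) * dx * dy
    return area

# A plane tilted at 45 degrees over the unit square:
print(monge_area(lambda x, y: x, 0, 1, 0, 1))  # ≈ sqrt(2) ≈ 1.4142
```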

\paragraph{Curvature of surfaces}

So far we have discussed first order differential expressions and the area element. This has to do with properties like surface energy $$\mathrm{d}f=\sigma\mathrm{d}A$$. Curvature is a second order property, useful in discussing deformations and fluctuations. Consider a curve $$\mathbf{r}\left(s\right)$$ with $$0<s<L$$ on a surface parametrized by $$u$$ and $$v$$. On the curve, $$u=u\left(s\right)$$ and $$v=v\left(s\right)$$. If $$\hat{\mathbf{n}}$$ is a vector normal to the surface, the local curvature (of the curve) is $$ \kappa=\frac{\mathrm{d}^{2}\mathbf{r}\left(s\right)}{\mathrm{d}s^{2}}\cdot\hat{n}=\mathbf{r}^{\prime\prime}\left(s\right)\cdot\hat{\mathbf{n}}.$$ The first derivative is $$ \frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}=\frac{\partial\mathbf{r}}{\partial u}\frac{\mathrm{d}u}{\mathrm{d}s}+\frac{\partial\mathbf{r}}{\partial v}\frac{\mathrm{d}v}{\mathrm{d}s}=\mathbf{r}_{u}u^{\prime}+\mathbf{r}_{v}v^{\prime},$$ and the second derivative $$\begin{matrix} \frac{\mathrm{d}^{2}\mathbf{r}}{\mathrm{d}s^{2}} & = & \frac{\mathrm{d}}{\mathrm{d}s}\left(\mathbf{r}_{u}u^{\prime}+\mathbf{r}_{v}v^{\prime}\right)

& = & \mathbf{r}_{uu}\left(u^{\prime}\right)^{2}+\mathbf{r}_{vv}\left(v^{\prime}\right)^{2}+2\mathbf{r}_{uv}u^{\prime}v^{\prime}+\mathbf{r}_{u}u^{\prime\prime}+\mathbf{r}_{v}v^{\prime\prime}.\end{matrix}$$ Since $$\hat{\mathbf{n}}$$ is perpendicular to $$\mathbf{r}_{u}$$ and $\mathbf{r}_{v}$, we are left with $$ \kappa=L\left(u^{\prime}\right)^{2}+N\left(v^{\prime}\right)^{2}+2Mu^{\prime}v^{\prime},$$ $$\begin{matrix} L & = & \hat{\mathbf{n}}\cdot\mathbf{r}_{uu},

N & = & \hat{\mathbf{n}}\cdot\mathbf{r}_{vv},

M & = & \hat{\mathbf{n}}\cdot\mathbf{r}_{uv}.\end{matrix}$$

Since $$\hat{\mathbf{n}}\cdot\mathbf{r}_{u}=\hat{\mathbf{n}}\cdot\mathbf{r}_{v}=0$$ everywhere on the surface, differentiating these identities with respect to $$u$$ and $$v$$ (e.g. $$\hat{\mathbf{n}}_{u}\cdot\mathbf{r}_{u}+\hat{\mathbf{n}}\cdot\mathbf{r}_{uu}=0$$) gives $$\begin{matrix} L & = & -\hat{\mathbf{n}}_{u}\cdot\mathbf{r}_{u},

N & = & -\hat{\mathbf{n}}_{v}\cdot\mathbf{r}_{v},

M & = & -\hat{\mathbf{n}}_{v}\cdot\mathbf{r}_{u}=-\hat{\mathbf{n}}_{u}\cdot\mathbf{r}_{v}.\end{matrix}$$ We finally obtain $$ \kappa=-\frac{\left(\mathbf{\hat{n}}_{u}\cdot\mathbf{r}_{u}\right)\mathrm{d}u^{2}+\left(\mathbf{\hat{n}}_{v}\cdot\mathbf{r}_{u}+\mathbf{\hat{n}}_{u}\cdot\mathbf{r}_{v}\right)\mathrm{d}u\mathrm{d}v+\left(\hat{\mathbf{n}}_{v}\cdot\mathbf{r}_{v}\right)\mathrm{d}v^{2}}{\left(\mathrm{d}s\right)^{2}},$$ and with $$\mathrm{d}\mathbf{r}=\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v$$ and $$\mathrm{d}\hat{\mathbf{n}}=\hat{\mathbf{n}}_{u}\mathrm{d}u+\hat{\mathbf{n}}_{v}\mathrm{d}v$$, $$ \kappa=-\frac{\mathrm{d}\hat{\mathbf{n}}\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}}=-\frac{\mathrm{d}\hat{\mathbf{n}}\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}s^{2}}.$$ (missing diagram...)

\paragraph{Curvature tensor}

Since $\mathrm{d}\mathbf{r}\cdot\mathbf{\hat{n}}=0$, $$ \hat{\mathbf{n}}+\mathrm{d}\hat{\mathbf{n}}=\hat{\mathbf{n}}\left(\mathbf{r}+\mathrm{d}\mathbf{r}\right)=\mathbf{\hat{n}\left(\mathbf{r}\right)}+\mathrm{d}\mathbf{r}\cdot\triangledown\mathbf{\hat{n}},$$ or $$ \mathrm{d}\mathbf{\hat{n}}=\mathrm{d}\mathbf{r}\cdot\triangledown\hat{\mathbf{n}}\equiv\mathrm{d}\mathbf{r}\cdot\overleftrightarrow{Q}.$$ The quantity $$\begin{matrix} \overleftrightarrow{Q} & = & \triangledown\hat{\mathbf{n}},

Q_{ij} & = & \left(\triangledown\hat{\mathbf{n}}\right)_{ij}=\frac{\partial}{\partial r_{i}}\left(\mathbf{\hat{n}}\right)_{j},\end{matrix}$$ is a second rank tensor or dyadic.

Now, we can write $$\kappa$$ in terms of $$\overleftrightarrow{Q}$$: $$ \kappa=-\frac{\mathrm{d}\hat{\mathbf{n}}\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}}=-\frac{\mathrm{d}\mathbf{r}\cdot\overleftrightarrow{Q}\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}}=-\frac{\mathrm{d}\mathbf{r}\cdot\overleftrightarrow{Q}\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}s^{2}},$$ or $$ \kappa=-\mathbf{r}^{\prime}\left(s\right)\cdot\overleftrightarrow{Q}\cdot\mathbf{r}^{\prime}\left(s\right)=-\sum_{ij}r_{i}^{\prime}Q_{ij}r_{j}^{\prime},$$ where $$\mathbf{r}'(s)={\normalcolor \frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}}$$.

This can be used for the case of an implicitly defined surface where $\hat{\mathbf{n}}=\frac{\triangledown F}{\left|\triangledown F\right|}$: $$\begin{matrix} Q_{ij} & = & \left[\triangledown\left(\frac{\triangledown F}{\left|\triangledown F\right|}\right)\right]_{ij}

& = & \partial_{i}\left(\frac{\partial_{j}F}{\left|\triangledown F\right|}\right)=\frac{\partial_{i}\partial_{j}F}{\left|\triangledown F\right|}-\left(\partial_{j}F\right)\frac{\partial_{i}\left|\triangledown F\right|}{\left|\triangledown F\right|^{2}}.\end{matrix}$$ Using $\partial_{i}\left|\triangledown F\right|=\partial_{i}\sqrt{\left(\partial_{x}F\right)^{2}+\left(\partial_{y}F\right)^{2}+\left(\partial_{z}F\right)^{2}}$, $$ Q_{ij}=-\frac{1}{\left|\triangledown F\right|}\left[\frac{\partial_{i}\left|\triangledown F\right|\partial_{j}F}{\left|\triangledown F\right|}-\partial_{i}\partial_{j}F\right].$$

06/16/2009

\paragraph{The curvature tensor and its invariants}

The dyadic matrix $$Q=\overleftrightarrow{Q}$$ has eigenvalues $$\lambda_{i}$$, a trace $$\mathrm{Tr}Q=\sum_{i}\lambda_{i}$$ and a determinant $$\mathrm{Det}Q=\left|Q\right|=\prod_{i}\lambda_{i}$$ which are invariant under similarity transformations $$\tilde{Q}=UQU^{-1}$$. The sum of the principal minors $$\sum_{i}M_{ii}$$ is also invariant: to see this, consider the characteristic polynomial $$\begin{matrix} P\left(\lambda\right) & = & \mathrm{Det}\left(Q-\lambda I\right)

& = & \mathrm{Det}\left(U^{-1}U\right)\mathrm{Det}\left(Q-\lambda I\right)

& = & \mathrm{Det}\left(U^{-1}\right)\mathrm{Det}\left(Q-\lambda I\right)\mathrm{Det}\left(U\right)

& = & \mathrm{Det}\left(U^{-1}QU-\lambda U^{-1}IU\right)

& = & \mathrm{Det}\left(\tilde{Q}-\lambda I\right).\end{matrix}$$ Here $$I$$ is the unit matrix. Expanding $$P\left(\lambda\right)$$ in powers of $\lambda$, $$ P\left(\lambda\right)=\left|Q\right|-\lambda\sum_{i}M_{ii}+\lambda^{2}\sum_{i}Q_{ii}-\lambda^{3}.$$ We can clearly identify the coefficients of the polynomial $$P(\lambda)$$ as the three invariants. One eigenvalue is always equal to zero (as an exercise, show this in the implicit representation). If we choose $$\lambda_{3}=0$$, we are reduced to two nontrivial invariants: $$\mathrm{Tr}Q=\lambda_{1}+\lambda_{2}$$ and $$\sum_{i}M_{ii}=\lambda_{1}\lambda_{2}$$ (as $${\normalcolor \mathrm{Det}}(Q)=0$$). These invariants are called the mean curvature $$H$$ and the Gaussian curvature $K$: $$ H\equiv\frac{1}{2}\left(\lambda_{1}+\lambda_{2}\right),$$ $$ K\equiv\lambda_{1}\lambda_{2}.$$
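The invariance of trace and determinant under a similarity transform is easy to see numerically. A minimal sketch (not from the notes; a $$2\times2$$ example standing in for the curvature block with $$\lambda_{3}=0$$, and the matrices are arbitrary choices):

```python
# Eigenvalues kappa_a = 2 and kappa_b = 3, so 2H = 5 and K = 6.

def mat_mul(A, B):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Q = [[2.0, 0.0], [0.0, 3.0]]
U = [[1.0, 1.0], [0.0, 1.0]]        # any invertible matrix works
U_inv = [[1.0, -1.0], [0.0, 1.0]]   # its inverse

Qt = mat_mul(mat_mul(U, Q), U_inv)  # similarity transform U Q U^{-1}

trace = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

print(trace(Q), trace(Qt))  # 5.0 5.0  (invariant: kappa_a + kappa_b)
print(det(Q), det(Qt))      # 6.0 6.0  (invariant: kappa_a * kappa_b)
```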

For example, in the implicit representation we can write $$\begin{matrix} \mathbf{\hat{n}} & = & \frac{\triangledown F}{\left|\triangledown F\right|},

N & \equiv & \left|\triangledown F\right|,

\mathrm{Tr}Q & = & -\frac{1}{N}\sum_{i}\left[\frac{N_{i}F_{i}}{N}-F_{ii}\right],\end{matrix}$$ where $$ \begin{cases} N_{i}=\frac{\partial}{\partial r_{i}}N,

F_{ij}=\frac{\partial}{\partial r_{i}}\frac{\partial}{\partial r_{j}}F.\end{cases}$$ Note that since, for instance, $$ N_{i}=\partial_{i}\sqrt{F_{x}^{2}+F_{y}^{2}+F_{z}^{2}}=\frac{\partial_{i}\left(F_{j}F_{j}\right)}{2\sqrt{F_{x}^{2}+F_{y}^{2}+F_{z}^{2}}}=\frac{1}{N}F_{j}F_{ij},$$ with a few more steps we can show (another exercise) that $$ H=\frac{1}{2N^{3}}\left[2F_{x}F_{y}F_{xy}-F_{xx}\left(F_{y}^{2}+F_{z}^{2}\right)+\mathrm{2\, cyclic\, permutations}\right],$$ $$ K=\frac{1}{N^{4}}\left[F_{xx}F_{yy}F_{z}^{2}-F_{xy}^{2}F_{z}^{2}+2F_{xz}F_{x}\left(F_{y}F_{yz}-F_{z}F_{yy}\right)+\mathrm{2\, cyclic\, permutations}\right],$$

where by cyclic permutations we mean permuting the axes: $$x\rightarrow y\rightarrow z\rightarrow x$$. In the case of the Monge representation where $$F=z-h\left(x,y\right)$$, $$H$$ and $$K$$ have a simpler form: $$\begin{matrix} N & = & \sqrt{1+h_{x}^{2}+h_{y}^{2}},

h_{i} & = & \partial_{i}h,

F_{x} & = & -h_{x},

F_{y} & = & -h_{y},

F_{z} & = & 1.\end{matrix}$$ One can then show that $$ H=\frac{1}{2\left(\sqrt{1+h_{x}^{2}+h_{y}^{2}}\right)^{3}}\left[\left(1+h_{y}^{2}\right)h_{xx}+\left(1+h_{x}^{2}\right)h_{yy}-2h_{x}h_{y}h_{xy}\right],$$ $$ K=\frac{h_{xx}h_{yy}-h_{xy}^{2}}{\left(1+h_{x}^{2}+h_{y}^{2}\right)^{2}}.$$
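The Monge formulas for $$H$$ and $$K$$ are straightforward to check numerically. A sketch (not from the notes; the hemisphere test case, radius and step size are arbitrary choices) evaluates them with finite differences:

```python
import math

def mean_gauss_curvature(h, x, y, d=1e-4):
    """H and K of the Monge surface z = h(x, y), from the formulas
    H = [(1+h_y^2) h_xx + (1+h_x^2) h_yy - 2 h_x h_y h_xy] / (2 N^3)
    and K = (h_xx h_yy - h_xy^2) / N^4, with N^2 = 1 + h_x^2 + h_y^2."""
    hx = (h(x + d, y) - h(x - d, y)) / (2 * d)
    hy = (h(x, y + d) - h(x, y - d)) / (2 * d)
    hxx = (h(x + d, y) - 2 * h(x, y) + h(x - d, y)) / d ** 2
    hyy = (h(x, y + d) - 2 * h(x, y) + h(x, y - d)) / d ** 2
    hxy = (h(x + d, y + d) - h(x + d, y - d)
           - h(x - d, y + d) + h(x - d, y - d)) / (4 * d ** 2)
    N = math.sqrt(1 + hx ** 2 + hy ** 2)
    H = ((1 + hy ** 2) * hxx + (1 + hx ** 2) * hyy
         - 2 * hx * hy * hxy) / (2 * N ** 3)
    K = (hxx * hyy - hxy ** 2) / N ** 4
    return H, K

# Upper hemisphere of radius R = 2 as a graph; at the apex the surface
# curves away from +z, so H = -1/R while K = +1/R^2 regardless of sign.
R = 2.0
cap = lambda x, y: math.sqrt(R ** 2 - x ** 2 - y ** 2)
print(mean_gauss_curvature(cap, 0.0, 0.0))  # ≈ (-0.5, 0.25)
```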

Small disturbances of planar surfaces
To treat nearly flat surfaces, one can use the Monge representation to expand a Taylor series around a completely flat surface in derivatives of $h\left(x,y\right)$: $$\begin{matrix} N & = & \sqrt{1+h_{x}^{2}+h_{y}^{2}}\simeq1+\frac{1}{2}\left(h_{x}^{2}+h_{y}^{2}\right),\end{matrix}$$ or equivalently $$ N=\left|\triangledown F\right|\simeq1+\frac{1}{2}\left(\triangledown_{||}h\right)^{2}.$$ From similar arguments, one can show that $$\begin{matrix} H & \simeq & \frac{1}{2}\left(h_{xx}+h_{yy}\right)=\frac{1}{2}\triangledown_{\parallel}^{2}h,

K & \simeq & h_{xx}h_{yy}-\left(h_{xy}\right)^{2}.\end{matrix}$$ In the general parametric representation, $$ \kappa=\frac{L\left(\mathrm{d}u\right)^{2}+2M\mathrm{d}u\mathrm{d}v+N\left(\mathrm{d}v\right)^{2}}{E\left(\mathrm{d}u\right)^{2}+2F\mathrm{d}u\mathrm{d}v+G\left(\mathrm{d}v\right)^{2}},$$ with $$L=\hat{\mathbf{n}}\cdot\mathbf{r}_{uu}$$, $$M=\hat{\mathbf{n}}\cdot\mathbf{r}_{uv}$$ and $$N=\hat{\mathbf{n}}\cdot\mathbf{r}_{vv}$$. Picking a unit vector $$\hat{\mathbf{a}}=l\mathbf{r}_{u}+m\mathbf{r}_{v}$$ in the tangent plane, the curvature in the direction of $$\hat{\mathbf{a}}$$ is given by $$ \kappa=Ll^{2}+2Mlm+Nm^{2}.$$ The parameters $l$ and $m$ must obey $$ \left|\hat{\mathbf{a}}\right|=1\Rightarrow\hat{\mathbf{a}}\cdot\hat{\mathbf{a}}=El^{2}+2Flm+Gm^{2}=1.$$ In investigating $$\kappa\left(l,m\right)$$ as a function of the direction of $$\hat{\mathbf{a}}$$, we can find its extrema under this constraint by adding a Lagrange multiplier: $$\begin{matrix} \tilde{\kappa} & = & \kappa\left(l,m\right)-\lambda\left(El^{2}+2Flm+Gm^{2}\right),

& \Downarrow

\frac{\partial\tilde{\kappa}}{\partial l}=\frac{\partial\tilde{\kappa}}{\partial m} & = & 0.\end{matrix}$$ The solution takes the form of a quadratic equation $$ \kappa^{2}\left(EG-F^{2}\right)-\kappa\left(EN+GL-2FM\right)+\left(LN-M^{2}\right)=0,$$ which has two roots: $$\kappa_{a}$$ and $$\kappa_{b}.$$ This extremum-finding process defines the principal directions, which (we will state without proof) are always perpendicular to each other. The two invariants are then $$\begin{matrix} H & = & \frac{1}{2}\left(\kappa_{a}+\kappa_{b}\right)=\frac{EN+GL-2FM}{2\left(EG-F^{2}\right)},

K & = & \kappa_{a}\kappa_{b}=\frac{LN-M^{2}}{EG-F^{2}}.\end{matrix}$$
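The quadratic above can be solved directly for the principal curvatures. A sketch (not from the notes; the paraboloid test case is an arbitrary choice):

```python
import math

def principal_curvatures(E, F, G, L, M, N):
    """Roots kappa_a, kappa_b of
    kappa^2 (EG - F^2) - kappa (EN + GL - 2FM) + (LN - M^2) = 0,
    together with the invariants H and K."""
    a = E * G - F * F
    b = -(E * N + G * L - 2 * F * M)
    c = L * N - M * M
    disc = math.sqrt(b * b - 4 * a * c)
    ka, kb = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
    return ka, kb, 0.5 * (ka + kb), ka * kb   # kappa_a, kappa_b, H, K

# Paraboloid z = (2 x^2 + 3 y^2)/2 at the origin: E = G = 1, F = 0,
# L = 2, N = 3, M = 0, so kappa_a = 3, kappa_b = 2, H = 2.5, K = 6.
print(principal_curvatures(1, 0, 1, 2, 0, 3))  # → (3.0, 2.0, 2.5, 6.0)
```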

Consider a few cases in terms of the radii of curvature $$r_{a}=\frac{1}{\kappa_{a}}$$ and $$r_{b}=\frac{1}{\kappa_{b}}$$:
 * 1) If at some point both radii are positive, then $$\kappa_{a}$$, $$\kappa_{b}$$, $$H$$ and $$K$$ are all positive. The surface is convex around the point, as in (17a).
 * 2) If both are negative, then $$H<0$$ and $$K>0$$. The surface is concave around the point, as in (17b).
 * 3) If the two have opposing signs, $$K$$ is negative and one is at a saddle point of the surface, as in (17c).
 * 4) A surface having $$H=(\kappa_{a}+\kappa_{b})/2=0$$ at every point is called a minimal surface (or Schwarz surface, after the 19th-century mathematician who studied them in detail). These surfaces have a saddle at every point, as one curvature is always positive and the other negative: $$\kappa_{a}=-\kappa_{b}$$. Hence, their Gaussian curvature is never positive: $$K=-\kappa_{a}^{2}\le0.$$

We will use the principal directions to describe a local paraboloid expansion of a nearly flat surface. In general, $$ z=z_{0}+\frac{1}{2}L\left(u-u_{0}\right)^{2}+\frac{1}{2}N\left(v-v_{0}\right)^{2}.$$ In the Monge representation, $$ z=z_{0}+\frac{1}{2}\kappa_{a}\left(x-x_{0}\right)^{2}+\frac{1}{2}\kappa_{b}\left(y-y_{0}\right)^{2}.$$

Free energy of soft surfaces
\begin{description}

[{Book:}] Landau & Lifshitz's book on Elasticity has a chapter on the elasticity of hard (solid) shells. There is also a book by Boal on the elasticity and mechanics of fluid membranes. Safran's book shows how the parameters we describe can be derived from a microscopic model where the lipid (surfactant) molecules are modeled as beads connected by various springs. \end{description} Consider a liquid surface or fluid membrane: as such a surface curves, its free energy varies. Phenomenologically, $$ \Delta F=\sigma\int\mathrm{d}A+2k\int\left(H-C_{0}\right)^{2}\mathrm{d}A+\bar{k}\int K\mathrm{d}A.$$ All the integrals are taken over the surface. That this expression models a fluid membrane is reflected in the absence of lateral shear terms: molecules composing the fluid membrane are free to flow inside the membrane, but they resist elastic deformations such as bending. The first term describes the contribution of surface tension, which is proportional to the total surface area. The geometric quantities $$H$$ and $$K$$ are the mean and Gaussian curvatures we have already encountered. The coefficients $$k$$ and $$\bar{k}$$ (with units of energy) depend, like $$\sigma$$, on the material properties in question. The spontaneous curvature $$C_{0}$$ is also a material property: it defines a certain preferred angle (perhaps due to the shape of the surfactant molecules), and its sign depends on the preferred direction of curvature. See (18) for an illustration. Unless there is an active process that causes an asymmetry in the lipid composition of the two leaflets, the bilayer will have the same lipid composition on the inside and outside, and therefore has in total $$C_{0}=0$$. For fluid membranes, $$k$$ and $$\bar{k}$$ usually range from $$\sim1k_{B}T$$ to $$\sim40k_{B}T$$.

One example is a sphere of radius $R$, where: $$\begin{matrix} H & = & \frac{1}{2}\left(R_{1}^{-1}+R_{2}^{-1}\right)=R^{-1},

K & = & R^{-2}.\end{matrix}$$ This gives $$\begin{matrix} \left.\Delta F\right|_{\mathrm{sphere}} & = & \sigma\int\mathrm{d}A+2k\int\left(H-C_{0}\right)^{2}\mathrm{d}A+\bar{k}\int K\mathrm{d}A

& = & 4\pi R^{2}\sigma+2k\left(\frac{1}{R}-C_{0}\right)^{2}\cdot4\pi R^{2}+4\pi\bar{k}

& = & 4\pi R^{2}\sigma+8\pi k\left(1-C_{0}R\right)^{2}+4\pi\bar{k}.\end{matrix}$$ The interesting fact that the surface integral over the Gaussian curvature $$K$$ gives a constant value of $$4\pi$$ – independent of the radius $$R$$ of the sphere – has a deep meaning. It is related to the famous Gauss-Bonnet theorem which will be stated here without further details: according to this theorem, the integral over the Gaussian curvature is a topological invariant of the surface whose value is equal to $$4\pi\left(1-g\right)$$, where $$g$$ is the genus of the surface. A sphere or any closed object with no {}"holes" has $$g=0$$ and an integrated Gaussian curvature of $$4\pi$$, while a torus (or {}"donut") with one hole has $$g=1$$ and hence a zero integrated Gaussian curvature.
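The radius-independence of the Gaussian term is easy to see by evaluating the sphere free energy directly. A sketch (not from the notes; parameter values are arbitrary choices):

```python
import math

def sphere_free_energy(R, sigma, k, kbar, C0):
    """Delta F of a sphere:
    4 pi R^2 sigma + 8 pi k (1 - C0 R)^2 + 4 pi kbar."""
    return (4 * math.pi * R ** 2 * sigma
            + 8 * math.pi * k * (1 - C0 * R) ** 2
            + 4 * math.pi * kbar)

# With sigma = k = 0 only the Gaussian term survives; by Gauss-Bonnet
# it equals 4 pi kbar for every radius (genus g = 0 for a sphere).
for R in (0.5, 1.0, 10.0):
    print(sphere_free_energy(R, sigma=0.0, k=0.0, kbar=1.0, C0=0.0))
```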

A second example is an infinite cylinder with radius $$R$$. Here, $$\kappa_{a}=\frac{1}{R}$$ and $$\kappa_{b}=0$$, so that $$H=\frac{1}{2R}$$ and $$K=0$$. The free energy per unit length is $$\begin{matrix} \frac{\Delta F}{L} & = & 2\pi R\sigma+2k\left(\frac{1}{2R}-C_{0}\right)^{2}\cdot2\pi R+\bar{k}\overbrace{\int K\mathrm{d}A}^{=0}

 & = & 2\pi R\sigma+\frac{4\pi k}{R}\left(\frac{1}{2}-C_{0}R\right)^{2}.\end{matrix}$$ An even simpler example is the infinite plane, where $$\kappa_{a}=\kappa_{b}=0\Rightarrow K=H=0$$. This yields $$ \frac{\Delta F}{\mathrm{Area}}=\sigma+2kC_{0}^{2}.$$

06/18/2009

Thermal fluctuations of a plane
\begin{description}

[{Book:}] Safran's book. \end{description} To second order in derivatives of $$h$$ in the Monge representation, for $$\bar{k}=0$$, $$ \Delta F=\frac{\sigma}{2}\int\left(\triangledown h\right)^{2}\mathrm{d}x\mathrm{d}y+\frac{k}{2}\int\left(\triangledown^{2}h\right)^{2}\mathrm{d}x\mathrm{d}y.$$ The minimum of energy is obtained for a flat surface. Going to a Fourier transformed form, we have $$\begin{matrix} h_{\mathbf{q}} & = & \left(\frac{1}{\sqrt{2\pi}}\right)^{2}\int\mathrm{d}^{2}r\, h\left(\mathbf{r}\right)e^{-i\mathbf{q}\cdot\mathbf{r}},

h\left(\mathbf{r}\right) & = & \left(\frac{1}{\sqrt{2\pi}}\right)^{2}\int\mathrm{d}^{2}q\, h_{\mathbf{q}}e^{i\mathbf{q}\cdot\mathbf{r}},

\left(\triangledown^{2}h\right)^{2} & \rightarrow & q^{4}h_{\mathbf{q}}h_{-\mathbf{q}},

\left(\triangledown h\right)^{2} & \rightarrow & q^{2}h_{\mathbf{q}}h_{-\mathbf{q}}.\end{matrix}$$ This gives for the free energy in terms of the normal surface modes $\left\{ h_{q}\right\} $: $$ \Delta F=\frac{\sigma}{2}\int\left(q_{x}^{2}+q_{y}^{2}\right)h_{\mathbf{q}}h_{-\mathbf{q}}\mathrm{d}^{2}q+\frac{k}{2}\int\left(q_{x}^{2}+q_{y}^{2}\right)^{2}h_{\mathbf{q}}h_{-\mathbf{q}}\mathrm{d}^{2}q.$$ With $$h\left(\mathbf{r}\right)$$ real, we know that $$h_{\mathbf{q}}=h_{-\mathbf{q}}^{*}$$, or $$h_{\mathbf{q}}h_{-\mathbf{q}}=\left|h_{\mathbf{q}}\right|^{2}$$. From the classical equipartition theorem we can estimate the equilibrium energy for the average of this quantity: $$\begin{matrix} \frac{k_{B}T}{2} & = & \frac{1}{2}\sigma q^{2}\left\langle h_{\mathbf{q}}h_{-\mathbf{q}}\right\rangle +\frac{1}{2}kq^{4}\left\langle h_{\mathbf{q}}h_{-\mathbf{q}}\right\rangle

& \Downarrow

\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle & = & \frac{k_{B}T}{\sigma q^{2}+kq^{4}}.\end{matrix}$$ It is now useful to define the new length scale $$\xi\equiv\sqrt{\frac{k}{\sigma}}$$, and examine the limits of $$q\xi\gg1$$ and $$q\xi\ll1$$. In the $$q\xi\ll1$$ limit, one obtains a surface dominated by surface tension. Consider the real space thermal correlation function $$\begin{matrix} \left\langle h^{2}\left(\mathbf{r}\right)\right\rangle & = & \frac{1}{\left(2\pi\right)^{2}}\int\int e^{i\mathbf{q}\cdot\mathbf{r}}e^{i\mathbf{q}^{\prime}\cdot\mathbf{r}}\overbrace{\left\langle h_{\mathbf{q}}h_{\mathbf{q}^{\prime}}\right\rangle }^{\delta^{2}\left(\mathbf{q}+\mathbf{q}^{\prime}\right)\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle }\mathrm{d}^{2}q\mathrm{d}^{2}q^{\prime}

 & = & \frac{1}{\left(2\pi\right)^{2}}\int\mathrm{d}^{2}q\,\frac{k_{B}T}{\sigma q^{2}}.\end{matrix}$$ Since this integral diverges at both large and small $$q$$, to obtain a physically meaningful result we must introduce cutoffs to the range of $$\mathit{q}$$: $$q_{\mathrm{min}}=\frac{2\pi}{L}$$ where $$L$$ is the linear dimension of the system, and $$q_{\mathrm{max}}=\frac{2\pi}{a}$$ where $$a\sim3\AA$$ is the typical molecular size. This gives an example of a famous result from the 1930s, known as the Landau-Peierls instability for 2-dimensional systems and the lack of an ordered phase at $$T>0$$: $$ \left\langle h^{2}\left(\mathbf{r}\right)\right\rangle =\frac{k_{B}T}{2\pi\sigma}\ln\frac{L}{a}.$$ Since the logarithmic divergence is very weak, it turns out that the thermal fluctuations are two or three Angstroms in size for a water surface of macroscopic (a few millimeters or centimeters) dimension. These thermal fluctuations are not easy to measure because the signal should come only from the water molecules at the water surface. In the 1980s they were measured for the first time for water surfaces at room temperature using a powerful synchrotron X-ray source. The technique employs scattering at very low angles (called grazing incidence) from the water surface and obtains the intensity of the scattered X-ray as a function of $$q.$$ This quantity is proportional to $$\langle|h_{q}|^{2}\rangle$$. In the opposite limit, where $$q\xi\gg1$$, the membrane is dominated by its elastic energy, and the surface tension term ($$\sigma q^{2}\ll kq^{4}$$) can be neglected: $$ \left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle =\frac{k_{B}T}{\sigma q^{2}+kq^{4}}\simeq\frac{k_{B}T}{kq^{4}}\sim q^{-4}.$$ The divergence at small $$q$$ is much larger here than in the first case, and $$\begin{matrix} \left\langle h^{2}\right\rangle & = & \frac{k_{B}T}{2\pi k}\int_{q_{\mathrm{min}}}^{q_{\mathrm{max}}}\frac{1}{q^{3}}\mathrm{d}q

& = & \frac{k_{B}T}{4\pi k}\left[-\overset{\approx0}{\overbrace{\left(\frac{1}{q_{\mathrm{max}}}\right)^{2}}}+\left(\frac{1}{q_{\mathrm{min}}}\right)^{2}\right],\end{matrix}$$ and, using $$q_{\mathrm{min}}=2\pi/L$$, $$\begin{matrix} \left\langle h^{2}\right\rangle & = & \frac{k_{B}T}{16\pi^{3}}\frac{L^{2}}{k}

& \Downarrow

\sqrt{\left\langle h^{2}\right\rangle } & \sim & L.\end{matrix}$$ In such membranes, which are dominated by elasticity, the fluctuations increase linearly with membrane size. For a membrane around $$1\mathrm{cm}$$ in length, a typical amplitude is in the $$\mu\mathrm{m}$$ range. Another interesting observation is that $$\left\langle h^{2}\right\rangle \sim T/k$$. For small $$k$$ (flexible membranes), as well as for higher temperatures, the fluctuations become larger. This is valid as long as the condition of the elastic-dominated case, $$\sigma\ll kq^{2}$$, remains satisfied. Also, recall that the source of the large membrane fluctuations comes from the small $$q$$ or large wavelengths, and not from small wiggles associated with the motion of individual molecules.
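The Landau-Peierls estimate for a water surface can be checked with a two-line computation. A sketch (not from the notes; the surface tension of water and the cutoff values are assumed, textbook-style numbers):

```python
import math

# <h^2> = (k_B T / 2 pi sigma) ln(L/a) for a tension-dominated surface.
kB, T = 1.38e-23, 300.0   # Boltzmann constant (J/K) and temperature (K)
sigma = 0.072             # surface tension of water, N/m (assumed value)
L, a = 1e-2, 3e-10        # system size and molecular size, meters

h_rms = math.sqrt(kB * T / (2 * math.pi * sigma) * math.log(L / a))
print(h_rms)  # a few Angstrom, despite the centimeter-scale system size
```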

Rayleigh instability
Due to surface tension, a cylinder of liquid created in air (or surrounded by another immiscible liquid) is unstable and will break into spherical droplets. Let's consider the following model: a cylinder of length $$L$$ and much smaller radius $$R_{0}\ll L$$, which contains inside it an incompressible liquid with a total volume of $$V=\pi LR_{0}^{2}$$. For simplicity, we will consider perturbations which preserve the body-of-revolution symmetry around the main axis and the cylinder length $$L$$, such that only the local radius $$\rho\left(z\right)$$ along the cylinder's axis may vary. Expanding $$\rho$$ in normal modes then gives $$ \rho\left(z\right)=R+\frac{1}{\sqrt{L}}\sum_{q\neq0}\rho_{q}e^{iqz}.$$ The mode amplitudes are $$\rho_{q}=\frac{1}{\sqrt{L}}\int\rho\left(z\right)e^{-iqz}\mathrm{d}z$$. Note that $$ \left\langle \rho\left(z\right)\right\rangle =\frac{1}{L}\int_{0}^{L}\rho\left(z\right)\mathrm{d}z=R,$$ with $$R$$ depending on $$R_{0}$$. This dependence can be found from the constant volume constraint $$\begin{array}{lcl} V & = & \pi R_{0}^{2}L=\int_{0}^{L}\pi\rho^{2}\left(z\right)\mathrm{d}z \\

& = & \pi R^{2}L+\frac{\pi}{L}\sum_{q\ne0}\sum_{q^{\prime}\ne0}\overbrace{\left(\int_{0}^{L}e^{i\left(q+q^{\prime}\right)z}\mathrm{d}z\right)}^{L\delta_{q,-q^{\prime}}}\rho_{q}\rho_{q^{\prime}} \\

& \Downarrow & \\

R^{2}L & = & R_{0}^{2}L-\sum_{q\ne0}\rho_{q}\rho_{-q}.\end{array}$$ This is exact, but for small perturbations we can expand the root and obtain $$ R\simeq R_{0}\left(1-\frac{1}{2LR_{0}^{2}}\sum_{q\ne0}\left|\rho_{q}\right|^{2}\right).$$ The surface energy of the distorted cylinder will be $$ F_{s}=\sigma\cdot\mathrm{Area}=\sigma\cdot\int_{0}^{L}2\pi\rho\left(z\right)\sqrt{1+\left(\frac{\partial\rho}{\partial z}\right)^{2}}\mathrm{d}z.$$ (We have used the expression for the surface area of a body-of-revolution with axial symmetry.) Expanding all quantities up to second order in $$\rho_{q}$$ gives $$ \begin{array}{lcl} F_{s} & \simeq & \overbrace{2\pi\sigma\int_{0}^{L}\left(\rho\left(z\right)-R\right)\mathrm{d}z}^{=0}+2\pi\sigma\int_{0}^{L}\mathrm{d}z\, R\left(1+\frac{1}{2}\left(\frac{\partial\rho}{\partial z}\right)^{2}\right) \\ & = & 2\pi\sigma L\left(R_{0}-\frac{R_{0}}{2L}\sum_{q}\frac{\left|\rho_{q}\right|^{2}}{R_{0}^{2}}\right)+\frac{\pi\sigma R_{0}}{L}\int_{0}^{L}\sum_{q}\sum_{q^{\prime}}\left(-q\cdot q^{\prime}\right)\rho_{q}\rho_{q^{\prime}}e^{i\left(q+q^{\prime}\right)z}\mathrm{d}z \\

 & = & 2\pi\sigma LR_{0}-2\pi\sigma R_{0}\sum_{q}\frac{\left|\rho_{q}\right|^{2}}{2R_{0}^{2}}+\pi\sigma R_{0}\sum_{q}q^{2}\left|\rho_{q}\right|^{2}. \end{array}$$ Finally, $$ \Delta F=F_{s}-F_{s}^{0}=2\pi\sigma R_{0}\sum_{q}\frac{\left|\rho_{q}\right|^{2}}{2R_{0}^{2}}\left(q^{2}R_{0}^{2}-1\right).$$ The conclusion is that modes having $$q^{2}R_{0}^{2}<1$$ will reduce the original cylinder free energy $$F_{s}^{0}.$$ Hence, this is the onset of an instability called the Rayleigh instability of a liquid cylinder. A liquid cylinder will spontaneously start to develop undulations of wavelength $$\lambda\sim q^{-1}\ge R_{0}$$. These undulations will grow and eventually break up the cylinder into spherical droplets of size $$\sim R_{0}$$. Note that if we go back to the planar surface by taking the limit $$R_{0}\rightarrow\infty$$, no such instability will occur, since the planar geometry has the lowest surface area with respect to any fluctuating perturbation.
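The stability criterion is contained entirely in the sign of the per-mode free energy. A sketch (not from the notes; unit amplitudes and surface tension are arbitrary choices, since only the sign matters):

```python
import math

def mode_energy(q, R0, rho_q2=1.0, sigma=1.0):
    """Contribution of a single undulation mode to Delta F:
    2 pi sigma R0 * |rho_q|^2 / (2 R0^2) * (q^2 R0^2 - 1)."""
    return (2 * math.pi * sigma * R0 * rho_q2 / (2 * R0 ** 2)
            * (q ** 2 * R0 ** 2 - 1))

R0 = 1.0
print(mode_energy(0.5, R0) < 0)  # True: long wavelength (q R0 < 1) grows
print(mode_energy(2.0, R0) > 0)  # True: short wavelength is stable
```

The marginal mode sits exactly at $$qR_{0}=1$$, where the per-mode energy vanishes.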

Polymer Dynamics
\noindent \begin{center} {\huge Physical Models in Biological Systems and Soft Matter} \par\end{center}

\noindent \begin{center} {\huge Final Course Project} \par\end{center}

\begin{center} \includegraphics[scale=0.6]{Photo-of-Combi-Formulations-Example-4} \par\end{center}

\noindent \begin{center} {\huge A Guided Tour to the Essence of Polymer Dynamics} \par\end{center}

\noindent \begin{center} {\large Submitted by: Shlomi Reuveni} \par\end{center}

\newpage{} \tableofcontents{} \newpage{}

What is this document all about?
This paper is submitted as a final project in the course {}"Physical Models in Biological Systems and Soft Matter". In writing this document I aimed at two goals. The first was to get to know a little better a subject that I found interesting and that was not covered during the course. As an interesting by-product I have also profoundly improved my knowledge of diffusion, a subject with which I was already superficially acquainted. The second goal was to provide an accessible exposition of polymer dynamics, aimed mainly at advanced undergraduate students who are curious about the subject and would like an easy start. This is why the document is titled {}"A Guided Tour to the Essence of Polymer Dynamics" and why it is written in the form of questions and answers.

The saying goes: {}"There are two ways by which one can really master a subject: research and teaching". I felt that the effort I put into making this document readable for advanced undergraduate students taught me more than I would have learned by passive reading. I have tried hard to make this document as self-contained and self-explanatory as possible, and I therefore hope that it will be of some help to you, the curious student. So, if you wonder {}"What do you mean by polymer dynamics?" and {}"How can this subject be of any interest to me?" please read on. \newpage{} \section{O.K., sum it up in a few lines so I can decide if I want to go on reading!}

\subsection{What's a polymer?}
A polymer is a large molecule (macro-molecule) composed of repeating structural units (monomers) typically connected by covalent chemical bonds. Due to the extraordinary range of properties accessible in polymeric materials, they have come to play an essential and ubiquitous role in everyday life – from plastics and elastomers on the one hand to natural biopolymers such as DNA and proteins that are essential for life on the other. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{Single_Polymer_Chains_AFM} \par\end{centering} \caption{Appearance of real linear polymer chains as recorded with an atomic force microscope on a surface under a liquid medium. The chain contour length for this polymer is $$\sim204nm$$; the thickness is $$\sim0.4nm$$. Taken from: Y. Roiter and S. Minko, AFM Single Molecule Experiments at the Solid-Liquid Interface: In Situ Conformation of Adsorbed Flexible Polyelectrolyte Chains, Journal of the American Chemical Society, vol. 127, iss. 45, pp. 15688-15689 (2005).}

\end{figure}

\subsection{What's polymer dynamics?}
Like every other molecule, a polymer is affected by the thermal motion of the surrounding molecules. It is this thermal agitation that causes a flexible polymer to move about in solution while constantly changing its shape. This motion is referred to as polymer dynamics.

\begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{Motion} \par\end{centering}

\caption{Photographs of DNA polymers in aqueous solution taken by fluorescence microscopy. There is a 1-second interval between successive frames. The motion is clearly visible. Taken from: Introduction to Polymer Physics, M. Doi Translated by H. See, Clarendon Press, 30 November 1995.}

\end{figure}

\subsection{What can I find in the rest of this document?}
If you ever wondered how one can understand the motion of a polymer, and what physical properties emanate from the dynamics of these materials, you should read on. We will start with the building block: the dynamics of a single particle in solution. We will then gradually build up, presenting two models for polymer dynamics. Experimental observations will also be discussed as we confront our models with reality.

\newpage{}

\section{I knew there must be some preliminaries, can you keep it short and to the point? }

\subsection{Why do you bore me with this? Why can't I skip directly to section 4?}

If you are familiar with concepts such as diffusion, the Einstein relation and Brownian motion, you will find this section easier to read. If you are also familiar with the mathematics behind these concepts, the Smoluchowski and Langevin equations, you can skip directly to section 4. In order to understand polymer dynamics we have to start from something more basic. A polymer can be thought of as a long chain of particles (the monomers); the particles are connected to one another and hence interact. It would be wise to first try to understand the dynamics of a single particle, and only then take these interactions into account. The dynamics of a single particle lies at the heart of this section.

\subsection{Can't say I know much about any of the stuff you mentioned above, but first things first: what is diffusion?}

Molecular diffusion, often called simply diffusion, is a net transport of molecules from a region of higher concentration to one of lower concentration by random molecular motion. The result of diffusion is a gradual mixing of material. In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing, i.e. a state of equilibrium. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.55]{Diffusion_(1)}

~

\includegraphics[scale=0.14]{cell_diffusion_ink_India} \par\end{centering}

\centering{}\caption{Top: Schematic representation of mixing of two substances by diffusion. Bottom: Ink diffusing in water.}

\end{figure}

\subsection{How can we treat diffusion mathematically?}
As mentioned above, diffusion is basically the movement of molecules from an area of high concentration to an area of lower concentration. For simplicity we will consider one-dimensional diffusion. Let $$c(x,t)$$ be the concentration at position $$x$$ and time $$t$$. A phenomenological description of diffusion is given by Fick's law: $$ j(x,t)=-D\frac{\partial c(x,t)}{\partial x}$$ In words: if the concentration is not uniform, there will be a flux of matter which is proportional to the gradient in concentration. The proportionality constant is called the diffusion constant and is denoted by $$D$$; its units are $$\frac{length^{2}}{time}$$. The minus sign takes care of the fact that the flow is from the higher concentration region to the lower concentration region.
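As a concrete illustration, Fick's law combined with conservation of matter (which the text derives below) can be integrated numerically. The following is a minimal finite-difference sketch; the grid, time step and initial profile are all illustrative choices, not part of the original derivation:

```python
import numpy as np

# Fick's law j = -D dc/dx plus conservation of matter gives
# dc/dt = D d2c/dx2; a concentration bump spreads while total mass stays fixed.
D, dx, dt = 1.0, 0.1, 0.004        # dt < dx^2/(2D) keeps the explicit scheme stable
x = np.arange(-5.0, 5.0, dx)
c = np.exp(-x**2 / 0.5)            # initial concentration profile (a bump)
c /= c.sum() * dx                  # normalize total mass to 1

peak0 = c.max()
for _ in range(200):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    c = c + D * dt * lap           # forward-Euler step of the diffusion equation

mass = c.sum() * dx
print(mass)        # stays ~1.0: diffusion only redistributes matter
print(c.max())     # lower than peak0: the bump has spread out
```

The flux-divergence form guarantees that each update only moves matter between neighboring cells, so the total mass is conserved by construction.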

\subsection{Where is this flux coming from?}
Its microscopic origin is the random thermal motion of the particles. The average velocity of each particle is zero, and there is an equal probability for each particle to have a velocity directed to the right or to the left. However, if the concentration is not uniform, the number of particles which happen to flow from the higher concentration region to the lower concentration region is larger than the number of particles flowing in the other direction, simply because there are more particles there. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{\string"23-07-2009 22-19-57\string".eps} \par\end{centering} \centering{}\caption{Microscopic explanation for Fick's law. Suppose that the particle concentration $$c(x)$$ is not uniform. If the particles move randomly as shown by the arrows, there is a net flux of particles flowing from the higher concentration region to the lower concentration region. Here the diffusion constant of the particle, which determines the average length of the arrows, is assumed to be constant. }

\end{figure}

\subsection{How do we go on?}
We now write an equation for the conservation of matter, the change in the number of particles located at the interval $$[x,x+\triangle x]$$ from time $$t$$ to time $$t+\triangle t$$ is given by the number of particles coming/going from the left minus the number of particles coming/going from the right: {\tiny $$ N(t+\triangle t)-N(t)\simeq\left[c(x+\frac{\triangle x}{2},t+\triangle t)-c(x+\frac{\triangle x}{2},t)\right]\triangle x\simeq\left[j(x,t+\frac{\triangle t}{2})-j(x+\triangle x,t+\frac{\triangle t}{2})\right]\triangle t$$ }or: $$ \frac{\left[c(x+\frac{\triangle x}{2},t+\triangle t)-c(x+\frac{\triangle x}{2},t)\right]}{\triangle t}=\frac{\left[j(x,t+\frac{\triangle t}{2})-j(x+\triangle x,t+\frac{\triangle t}{2})\right]}{\triangle x}$$ taking the limits $$\triangle t,\triangle x\longrightarrow0$$ and assuming continuity and differentiability of the concentration and the flux we obtain: $$ \frac{\partial c(x,t)}{\partial t}=-\frac{\partial j(x,t)}{\partial x}$$ Substituting the expression for the flux gives the well known diffusion equation: $$ \frac{\partial c(x,t)}{\partial t}=D\frac{\partial^{2}c(x,t)}{\partial x^{2}}$$ \subsubsection{What happens if the particles are under the influence of some kind of a potential $$U(x)$$?}

If this happens, Fick's law must be modified, since the potential $$U(x)$$ exerts a force: $$ F=-\frac{\partial U}{\partial x}$$ on the particle and gives a non-zero average velocity $$v$$. If the force is weak, there is a linear relation between force and velocity given by: $$ v=\frac{F}{\zeta}=-\frac{1}{\zeta}\frac{\partial U}{\partial x}$$ The constant $$\zeta$$ is called the friction constant and its inverse $$1/\zeta$$ is called the mobility.

\subsubsection{How come the velocity doesn't grow indefinitely? There is a constant force!}

Correct, but it is not the only force acting on the particle. There are also friction and random forces exerted by other particles, and hence, like a feather falling under its own weight, the particle reaches a finite average velocity.

\subsubsection{O.K, and what do we do now?}
We will obtain the Smoluchowski equation that takes the potential into account, but first we will obtain an important relation between the diffusion constant, the temperature and the friction constant. The average velocity of the particle gives an additional flux $$cv$$, so that the total flux is: $$ j(x,t)=-D\frac{\partial c(x,t)}{\partial x}-\frac{c}{\zeta}\frac{\partial U}{\partial x}$$ An important relation is obtained from this equation. As you may recall from statistical mechanics, in equilibrium the concentration is given by the Boltzmann distribution: $$ c_{eq}(x,t)\propto exp(-U(x)/k_{B}T)$$ for which the flux must vanish, and hence: $$ -D\frac{\partial c_{eq}(x,t)}{\partial x}-\frac{c_{eq}}{\zeta}\frac{\partial U}{\partial x}=0$$ Substituting for $c_{eq}(x,t)$ we get: $$ \frac{Dc_{eq}}{k_{B}T}\frac{\partial U}{\partial x}-\frac{c_{eq}}{\zeta}\frac{\partial U}{\partial x}=c_{eq}\frac{\partial U}{\partial x}\left[\frac{D}{k_{B}T}-\frac{1}{\zeta}\right]=0$$ Since this is true for every $x$, it follows that: $$ D=\frac{k_{B}T}{\zeta}$$ This relation is called the Einstein relation. It states that the diffusion constant, which characterizes the thermal motion, is related to the friction constant, which specifies the response to an external force. The Einstein relation is a special case of a general theorem called the fluctuation dissipation theorem, which states that spontaneous thermal fluctuations are related to the characteristics of the system's response to an external field.
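To get a feel for the numbers, here is a back-of-the-envelope use of the Einstein relation. For the friction constant we borrow Stokes' drag $\zeta=6\pi\eta a$, which is an outside input (not derived in the text); all parameter values are illustrative:

```python
import math

# Einstein relation D = kB*T / zeta, with Stokes' drag zeta = 6*pi*eta*a
# (Stokes' law is an assumption imported from hydrodynamics).
kB = 1.38e-23      # J/K, Boltzmann constant
T = 300.0          # K, room temperature
eta = 1.0e-3       # Pa*s, viscosity of water
a = 0.5e-6         # m, radius of a small colloidal sphere

zeta = 6 * math.pi * eta * a       # friction constant
D = kB * T / zeta                  # Einstein relation
print(D)                           # ~4.4e-13 m^2/s
print(math.sqrt(2 * D * 1.0))      # rms displacement in 1 s: about a micron
```

So a micron-sized particle in water wanders roughly its own size every second purely by thermal agitation, which is why Brownian motion is visible under a microscope.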

\subsubsection{And the Smoluchowski equation is obtained by plugging the {}"new" flux into the continuity equation, right?}

Exactly right! Using the Einstein relation we rewrite the flux as: $$ j(x,t)=-\frac{1}{\zeta}\left[k_{B}T\frac{\partial c(x,t)}{\partial x}+c\frac{\partial U}{\partial x}\right]$$ Substituting this into the continuity equation we get the Smoluchowski equation: $$ \frac{\partial c(x,t)}{\partial t}=-\frac{\partial j(x,t)}{\partial x}=\frac{\partial}{\partial x}\frac{1}{\zeta}\left[k_{B}T\frac{\partial c(x,t)}{\partial x}+c\frac{\partial U}{\partial x}\right]$$ which serves as a phenomenological description of diffusion under the influence of an external potential. Although we have derived the above equation for the concentration $$c(x,t)$$, the same equation also holds for the probability distribution function $$\Psi(x,t)$$ that a particular particle is found at position $$x$$ at time $$t$$. This is true since the distinction between $$c(x,t)$$ and $$\Psi(x,t)$$ is, for non-interacting particles, only the fact that $$\Psi(x,t)$$ is normalized. The evolution equation for the probability $$\Psi(x,t)$$ is hence written as: $$ \frac{\partial\Psi(x,t)}{\partial t}=\frac{\partial}{\partial x}\frac{1}{\zeta}\left[k_{B}T\frac{\partial\Psi(x,t)}{\partial x}+\Psi(x,t)\frac{\partial U}{\partial x}\right]$$ which will also be termed the Smoluchowski equation.

\subsection{What's Brownian motion?}
Brownian motion (named after the Scottish botanist Robert Brown) is the seemingly random movement of particles suspended in a fluid (i.e. a liquid or gas), or the mathematical model used to describe such random movements. Brownian motion is traditionally regarded as having been discovered by Brown in 1827. It is believed that Brown was studying pollen particles floating in water under the microscope, when he observed small particles within the vacuoles of the pollen grains executing a jittery motion. By repeating the experiment with particles of dust, he was able to rule out that the motion was due to the pollen particles being 'alive', although the origin of the motion was yet to be explained. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{PerrinPlot2} \par\end{centering} \caption{Three tracings of the motion of colloidal particles of radius 0.53\textmu{}m, as seen under the microscope, are displayed. Successive positions every 30 seconds are joined by straight line segments (the mesh size is 3.2\textmu{}m). Reproduced from the book of Jean Baptiste Perrin, Les Atomes, 1914, p. 115.} \end{figure}

\subsection{And the mathematical treatment?}
\subsubsection{Before we start I have to say that it seems awfully similar to diffusion, what's new?} You are right! These are opposite sides of the same coin. However, the approach we take here is microscopic rather than macroscopic. Instead of starting from a macroscopic quantity, the concentration, we will start from the equation of motion for a single particle in solution, Newton's second law: $$ m\frac{d^{2}x}{dt^{2}}=-\zeta\frac{dx}{dt}-\frac{\partial U}{\partial x}+g'(t)$$ Here the first term on the right hand side is the friction force, which is assumed to take the standard form of being opposite in direction and proportional to the velocity. The second term is the force exerted as a consequence of the external potential, and the third term is a random force that represents the sum of the forces due to collisions with surrounding particles. Let us now rewrite this equation in the form: $$ \frac{m}{\zeta}\frac{d^{2}x}{dt^{2}}+\frac{dx}{dt}=-\frac{1}{\zeta}\frac{\partial U}{\partial x}+\frac{g'(t)}{\zeta}\equiv-\frac{1}{\zeta}\frac{\partial U}{\partial x}+g(t)$$ where we have defined $$g(t)\equiv\frac{g'(t)}{\zeta}$$. Our next step is an approximation: treating very small and lightweight particles, we will drop the inertial term $$\frac{m}{\zeta}\frac{d^{2}x}{dt^{2}}$$, assuming it is negligible, and obtain: $$ \frac{dx}{dt}=-\frac{1}{\zeta}\frac{\partial U}{\partial x}+g(t)$$ We will refer to this equation as the Langevin equation. This equation describes the motion of a single Brownian particle; by solving it one can (in principle) obtain a trajectory of such a particle.
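The claim that the inertial term is negligible can be checked with a quick estimate of the velocity relaxation time $m/\zeta$ for a colloidal sphere in water. Stokes' drag is assumed for $\zeta$ and all numbers are illustrative:

```python
import math

# Velocity relaxation time tau = m/zeta for a colloidal sphere in water
# (Stokes' drag zeta = 6*pi*eta*a assumed; illustrative parameters).
rho = 1.0e3        # kg/m^3, particle density comparable to water
eta = 1.0e-3       # Pa*s, viscosity of water
a = 0.5e-6         # m, particle radius

m = rho * (4.0 / 3.0) * math.pi * a**3   # particle mass
zeta = 6 * math.pi * eta * a             # friction constant
tau = m / zeta
print(tau)         # ~5.6e-8 s: inertia is forgotten within ~100 ns
```

On the millisecond-and-longer time scales probed in solution experiments, the velocity has relaxed completely, which is exactly the regime in which the inertial term may be dropped.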

\subsubsection{I don't understand why you throw away the inertial term, please explain!}
This can be further explained by the following example. Consider a particle immersed in some solvent, moving under the influence of a constant external force $$F=-\frac{\partial U}{\partial x}$$. Let us denote the velocity: $$ v=\frac{dx}{dt'}$$ The equation of motion for $v$ is given by: $$ \frac{m}{\zeta}\frac{dv}{dt'}+v=\frac{F}{\zeta}+g(t')$$ For simplicity let us factor out the random force by taking an ensemble average (to avoid the subtleties of taking the time average) of both sides of the equation, obtaining an equation for the average velocity: $$ \frac{d<v>}{dt'}+\frac{\zeta}{m}<v>=\frac{F}{m}$$ Multiplying both sides by $$e^{\frac{t'\zeta}{m}}$$ and integrating from zero to $t$ we are able to solve for $$<v>$$: $$ <v>=e^{\frac{-t\zeta}{m}}\overset{t}{\underset{0}{\int}}\frac{F}{m}e^{\frac{t'\zeta}{m}}dt'=\frac{F}{\zeta}\left[1-e^{\frac{-t\zeta}{m}}\right]$$ where we have assumed that the particle was at rest at time zero. We see that the velocity approaches an asymptotic value of $$\frac{F}{\zeta}$$ exponentially fast, and that the characteristic relaxation time is $$\tau=\frac{m}{\zeta}$$. Had we dropped the inertial term in the first place we would have simply gotten: $$ <v>=\frac{F}{\zeta}$$ i.e. an immediate response to the force. It is now clear that if the relaxation time $$\tau=\frac{m}{\zeta}$$ is small, dropping the inertial term is a good approximation! In the case of small particles (atoms, molecules, colloidal particles, etc.) immersed in a liquid, the relaxation time $$\tau$$ is indeed very small, supporting the validity of our approximation. \subsubsection{If these are opposite sides of the same coin, how does the Langevin equation relate to the Smoluchowski equation?} As mentioned earlier, since we don't know the exact time dependence of $$g(t)$$ we will treat it as a random force. The freedom in choosing the distribution of $$g(t)$$ is very large; here, however, we will limit ourselves to a model which will be equivalent to the Smoluchowski equation.
\subsubsection{The Langevin equation gives us trajectories, the Smoluchowski equation gives us a probability distribution for the position, how can they be equivalent?} Excellent question! Examining many trajectories one can generate the probability distribution for the position. For example, starting a particle from a given origin and following its trajectory up to some time $$t$$, one can record the position $$x(t)$$. Repeating the process many times will yield many different $$x(t)$$. Creating a histogram, one can generate an empirical probability distribution for the position at time $$t$$. One can show that if the probability distribution of $$g(t)$$ is assumed to be Gaussian and is characterized by: $$ \begin{cases} <g(t)>=0\\ <g(t)g(t')>=\frac{2k_{B}T}{\zeta}\delta(t-t')\end{cases}$$ then the distribution of $$x(t)$$ determined by the Langevin equation satisfies the Smoluchowski equation. In other words, if $$g(t)$$ is a Gaussian random variable with zero mean and correlation $$\frac{2k_{B}T}{\zeta}\delta(t-t')$$, and if $$g(t)$$ and $$g(t')$$ are independent for $$t\neq t'$$, then the above statement holds. \subsubsection{I still don't understand, can you demonstrate on a simple special case?} Yes! Consider the Brownian motion of a free particle (no external potential), for which the Langevin equation reads: $$ \frac{dx}{dt}=g(t)$$ If the particle is at $$x_{0}$$ at time $$t=0$$, its position at time $t$ is given by: $$ x(t)=x_{0}+\overset{t}{\underset{0}{\int}}g(t')dt'$$ From the above we deduce that $$x(t)-x_{0}$$ is a linear combination of independent Gaussian random variables. We now recall that a sum of independent Gaussian random variables is a Gaussian random variable itself, and hence the probability distribution of $$x(t)$$ may be written as: $$ \Psi(x,t)=\frac{1}{\sqrt{2\pi B}}exp\left[-\frac{\left(x-A\right)^{2}}{2B}\right]$$ where: $$ \begin{cases} A=<x(t)>\\ B=<(x(t)-A)^{2}>\end{cases}$$ The mean is calculated from: $$ A=<x(t)>=x_{0}+\overset{t}{\underset{0}{\int}}<g(t')>dt'=x_{0}+\overset{t}{\underset{0}{\int}}0\cdot dt'=x_{0}$$ For the variance we have: $$ B=<\left(\overset{t}{\underset{0}{\int}}g(t')dt'\right)\left(\overset{t}{\underset{0}{\int}}g(t")dt"\right)>=\overset{t}{\underset{0}{\int}}\overset{t}{\underset{0}{\int}}<g(t')g(t")>dt'dt"$$ hence: $$ B=\overset{t}{\underset{0}{\int}}\overset{t}{\underset{0}{\int}}\frac{2k_{B}T}{\zeta}\delta(t'-t")dt'dt"=\frac{2k_{B}T}{\zeta}t=2Dt$$ and thus: $$ \Psi(x,t)=\frac{1}{\sqrt{4\pi Dt}}exp\left[-\frac{\left(x-x_{0}\right)^{2}}{4Dt}\right]$$ which is exactly (check by direct differentiation) the solution of the Smoluchowski equation: $$ \frac{\partial\Psi(x,t)}{\partial t}=D\frac{\partial^{2}\Psi(x,t)}{\partial x^{2}}$$ In other words, both equations result in the same probability distribution for $$x(t)$$. An important conclusion is that the mean square displacement of a Brownian particle from the origin is given by $$2Dt$$ and is hence linear in time.
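The histogram-of-trajectories argument above can be carried out numerically. The following sketch discretizes the free-particle Langevin equation with an Euler scheme (all parameters are illustrative) and checks the two moments just derived:

```python
import numpy as np

# Euler discretization of dx/dt = g(t) for many independent walkers:
# x -> x + sqrt(2*D*dt) * N(0,1), since the integral of g over a step dt
# is Gaussian with zero mean and variance 2*D*dt.
rng = np.random.default_rng(0)
D, dt, steps, walkers = 1.0, 1e-3, 1000, 20000
x = np.zeros(walkers)              # all walkers start at x0 = 0
for _ in range(steps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(walkers)

t = steps * dt
print(x.mean())                    # ~0: the mean position stays at x0
print((x**2).mean())               # ~2*D*t = 2.0: MSD linear in time
```

Binning the final `x` values would reproduce the Gaussian $\Psi(x,t)$ with variance $2Dt$, exactly as the Smoluchowski equation predicts.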

\subsection{O.K, I think we covered everything! Anything else?}
We are almost done, but in order to complete our analysis we need to analyze one more problem: the Brownian motion of a harmonic oscillator.

\subsubsection{Why do we have to do this? How come we always have to talk about the harmonic oscillator?}

The harmonic oscillator is a simple system that serves as a prototype for problems we will solve later on. Treating it here will ease things for us later.

\subsubsection{I understand, please go on.}
Consider a Brownian particle moving under the following potential: $$ U(x)=\frac{1}{2}kx^{2}$$ The equation of motion for this particle is given by: $$ \frac{dx(t')}{dt'}=-\frac{k}{\zeta}x(t')+g(t')$$ In order to get a formal solution for $$x(t)$$ we multiply both sides by $e^{\frac{k}{\zeta}t'}$ and do some algebra: $$ \frac{dx(t')}{dt'}e^{\frac{k}{\zeta}t'}+\frac{k}{\zeta}x(t')e^{\frac{k}{\zeta}t'}=\frac{d}{dt'}\left[x(t')e^{\frac{k}{\zeta}t'}\right]=g(t')e^{\frac{k}{\zeta}t'}$$ We now integrate from $-\infty$ to $t$ and get: $$ \left[x(t')e^{\frac{k}{\zeta}t'}\right]_{t'=-\infty}^{t'=t}=\overset{t}{\underset{-\infty}{\int}}g(t')e^{\frac{k}{\zeta}t'}dt'$$ Assuming the following boundary condition: $$ x(t'=-\infty)\cdot e^{-\frac{k}{\zeta}\cdot\infty}=0$$ we conclude that: $$ x(t)=\overset{t}{\underset{-\infty}{\int}}g(t')e^{\frac{k}{\zeta}\left(t'-t\right)}dt'$$ It is also possible to solve under the initial condition $$x(t=0)=x_{0}$$; in that case: $$ \left[x(t')e^{\frac{k}{\zeta}t'}\right]_{t'=0}^{t'=t}=\overset{t}{\underset{0}{\int}}g(t')e^{\frac{k}{\zeta}t'}dt'$$ and we have $$ x(t)=x_{0}e^{-\frac{k}{\zeta}t}+\overset{t}{\underset{0}{\int}}g(t')e^{\frac{k}{\zeta}\left(t'-t\right)}dt'$$

\subsubsection{O.K, but $$g(t)$$ is a random variable and hence $$x(t)$$ is one too; that doesn't tell me much... Can we calculate some moments? Start with the case of the particle that has been with us since $$t=-\infty$$.}

First we note that for the mean position we have: $$ A(t)=<x(t)>=\overset{t}{\underset{-\infty}{\int}}<g(t')>e^{\frac{k}{\zeta}\left(t'-t\right)}dt'=0$$ and the mean position is hence zero. We now aim at finding an expression for the mean square displacement from the origin $$<(x(t)-x(0))^{2}>$$; the variance of $$x(t)$$ will be calculated as a by-product. We start with the time correlation function of $x(t)$: $$ <x(t)x(0)>=\overset{t}{\underset{-\infty}{\int}}dt_{1}\overset{0}{\underset{-\infty}{\int}}dt_{2}e^{\frac{k}{\zeta}\left(t_{1}+t_{2}-t\right)}<g(t_{1})g(t_{2})>$$ Recalling that: $$ <g(t)g(t')>=2D\delta(t-t')=\frac{2k_{B}T}{\zeta}\delta(t-t')$$ we get: $$ <x(t)x(0)>=\overset{0}{\underset{-\infty}{\int}}dt_{1}e^{\frac{k}{\zeta}\left(2t_{1}-t\right)}\frac{2k_{B}T}{\zeta}=\left[e^{\frac{k}{\zeta}\left(2t_{1}-t\right)}\frac{k_{B}T}{k}\right]_{t_{1}=-\infty}^{t_{1}=0}=\frac{k_{B}T}{k}e^{-\frac{k}{\zeta}t}$$ Here we assumed that $$t>0$$ and used the fact that $$<g(t_{1}>0)g(t_{2})>\equiv0$$ since $$t_{2}<0$$. Similarly, if $$t<0$$ we get: $$ <x(t)x(0)>=\overset{t}{\underset{-\infty}{\int}}dt_{1}e^{\frac{k}{\zeta}\left(2t_{1}-t\right)}\frac{2k_{B}T}{\zeta}=\left[e^{\frac{k}{\zeta}\left(2t_{1}-t\right)}\frac{k_{B}T}{k}\right]_{t_{1}=-\infty}^{t_{1}=t}=\frac{k_{B}T}{k}e^{\frac{k}{\zeta}t}$$ We may hence conclude that: $$ <x(t)x(0)>=\frac{k_{B}T}{k}e^{-\frac{k}{\zeta}\left|t\right|}$$ Letting $$t=0$$ we get $$ <x(0)x(0)>=<x^{2}>=\frac{k_{B}T}{k}$$ which coincides with the known result obtained from statistical mechanics with the use of the Boltzmann distribution $$\psi_{eq}\propto exp(-kx^{2}/2k_{B}T)$$.
We will now show that this is also the variance: $$ B=<(x(t)-A(t))^{2}>=<x(t)x(t)>=\overset{t}{\underset{-\infty}{\int}}\overset{t}{\underset{-\infty}{\int}}<g(t')g(t")>e^{\frac{k}{\zeta}\left(t'+t"-2t\right)}dt'dt"$$ and hence: $$ B=\overset{t}{\underset{-\infty}{\int}}\frac{2k_{B}T}{\zeta}e^{\frac{k}{\zeta}\left(2t'-2t\right)}dt'=\frac{k_{B}T}{k}$$ The mean square displacement $$<(x(t)-x(0))^{2}>$$ can now be easily calculated:{\scriptsize $$ <(x(t)-x(0))^{2}>=<x(t)^{2}>+<x(0)^{2}>-2<x(t)x(0)>=2\left[<x(t)^{2}>-<x(t)x(0)>\right]$$ }and hence: $$ <(x(t)-x(0))^{2}>=\frac{2k_{B}T}{k}\left[1-e^{-\frac{k}{\zeta}\left|t\right|}\right]$$ Here, unlike the case of free diffusion, for long times the mean square displacement is bounded by $$\frac{2k_{B}T}{k}$$. The bound is approached exponentially fast with a characteristic relaxation time $$\tau=\frac{\zeta}{k}$$. Considering the opposite limit $$\left|t\right|\rightarrow0$$ (very short times) we have (to first order): $$ <(x(t)-x(0))^{2}>=\frac{2k_{B}T}{k}\left[1-1+\frac{k}{\zeta}\left|t\right|\right]=2D\left|t\right|$$ Indeed, in this limit the particle has yet to {}"feel" the harmonic potential and we expect regular diffusion.

\subsubsection{I think that since $$x(t)$$ is a linear sum of Gaussian random variables, and hence Gaussian itself, we can also write an expression for its probability distribution. Am I right?} Yes you are! We already found the mean and variance, and hence the probability distribution for $x(t)$ is: $$ \Psi(x,t)=\frac{1}{\sqrt{2\pi B}}exp\left[-\frac{\left(x-A\right)^{2}}{2B}\right]=\frac{1}{\sqrt{\frac{2\pi k_{B}T}{k}}}exp\left[-\frac{kx^{2}}{2k_{B}T}\right]$$ which is exactly the Boltzmann distribution. We could have guessed that this would be so, since we have given the particle an infinite amount of time to equilibrate in the potential well.
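The equilibrated harmonic oscillator can also be checked numerically. The sketch below integrates the Langevin equation with an Euler scheme, starting from the Boltzmann distribution, and verifies the stationary variance $k_BT/k$ and the exponential decay of the time correlation (units with $k_BT=1$; all parameters illustrative):

```python
import numpy as np

# Euler scheme for dx/dt = -(k/zeta)*x + g(t), started in equilibrium.
# Checks <x^2> = kB*T/k and <x(t)x(0)> = (kB*T/k)*exp(-k*t/zeta).
rng = np.random.default_rng(1)
k, zeta, kBT = 2.0, 1.0, 1.0
dt, steps, walkers = 1e-3, 500, 20000      # t = 0.5 = one relaxation time zeta/k
x = rng.normal(0.0, np.sqrt(kBT / k), walkers)   # sample the Boltzmann distribution
x0 = x.copy()
for _ in range(steps):
    x += -(k / zeta) * x * dt + np.sqrt(2 * kBT / zeta * dt) * rng.standard_normal(walkers)

print((x * x).mean())    # ~ kB*T/k = 0.5: variance is preserved in equilibrium
print((x * x0).mean())   # ~ 0.5*exp(-1) ~ 0.18: correlation decayed over one tau
```

Starting from equilibrium makes both predictions of the text visible at once: the variance does not drift, while the memory of the initial position decays with the relaxation time $\zeta/k$.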

\subsubsection{Let's proceed to the case of the particle that started at $$x_{0}$$!}

First we note that for the mean position we have: $$ A(t)=<x(t)>=x_{0}e^{-\frac{k}{\zeta}t}+\overset{t}{\underset{0}{\int}}<g(t')>e^{\frac{k}{\zeta}\left(t'-t\right)}dt'=x_{0}e^{-\frac{k}{\zeta}t}$$ The mean position depends on time and decays exponentially towards zero. For the variance we have: $$ B=<(x(t)-A(t))^{2}>=\overset{t}{\underset{0}{\int}}\overset{t}{\underset{0}{\int}}<g(t')g(t")>e^{\frac{k}{\zeta}\left(t'+t"-2t\right)}dt'dt"$$ and hence: $$ B=\overset{t}{\underset{0}{\int}}\frac{2k_{B}T}{\zeta}e^{\frac{k}{\zeta}\left(2t'-2t\right)}dt'=\frac{k_{B}T}{k}\left[1-e^{-\frac{2k}{\zeta}t}\right]$$ Here, again, the variance approaches the equilibrium variance exponentially fast. The probability distribution is Gaussian again and we have: {\footnotesize $$ \Psi(x,t)=\frac{1}{\sqrt{2\pi B}}exp\left[-\frac{\left(x-A\right)^{2}}{2B}\right]=\frac{1}{\sqrt{\frac{2\pi k_{B}T}{k}\left[1-e^{-\frac{2k}{\zeta}t}\right]}}exp\left[-\frac{k(x-x_{0}e^{-\frac{k}{\zeta}t})^{2}}{2k_{B}T\left[1-e^{-\frac{2k}{\zeta}t}\right]}\right]$$ }which for short times $$t\ll\tau=\frac{\zeta}{k}$$ is the same as free diffusion: $$ \Psi(x,t)=\frac{1}{\sqrt{4\pi Dt}}exp\left[-\frac{(x-x_{0})^{2}}{4Dt}\right]$$ and for long times gives the Boltzmann distribution: $$ \Psi(x,t)=\frac{1}{\sqrt{\frac{2\pi k_{B}T}{k}}}exp\left[-\frac{kx^{2}}{2k_{B}T}\right]$$ \newpage{}

\section{Give me the simplest model for polymer dynamics!}
A polymer is a chain of monomers linked to one another by covalent bonds. It is therefore natural to represent a polymer by a set of beads connected to one another by springs. The dynamics of the polymer is modeled by the Brownian motion of these beads. Such a model was first proposed by Rouse in the 1950s, and it has been the basis for the description of the dynamics of polymers in dilute solutions. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{\string"26-07-2009 16-17-49\string".eps} \par\end{centering} \caption{A pictorial description of the Rouse model.} \end{figure}

\subsection{But now the beads are connected! How do we take that into account?}
Let $$\{\vec{R}_{1},..,\vec{R}_{N}\}$$ be the positions of the beads. If we assume the beads experience a drag force proportional to their velocity as they move through the solvent, then for each bead we can write the following Langevin equation: $$ \frac{d\vec{R}_{n}(t)}{dt}=-\frac{1}{\zeta_{n}}\frac{\partial U}{\partial\vec{R}_{n}}+\vec{g}_{n}(t)$$ Here $$\zeta_{n}$$ is the friction coefficient of the $$n$$-th bead, and from now on we will assume that the beads are all alike and hence $$\zeta=\zeta_{n}$$ for every $$n$$. The random force $$\vec{g}_{n}(t)$$ is Gaussian with the following characteristics: $$ \begin{cases} <g_{\alpha n}(t)>=0 & \forall n,\forall\alpha=x,y,z\\ <g_{\alpha n}(t)g_{\beta m}(t')>=\frac{2k_{B}T}{\zeta}\delta_{nm}\delta_{\alpha\beta}\delta(t-t') & \forall n,m\,,\forall\alpha,\beta=x,y,z\end{cases}$$ i.e. the random forces acting on different beads and/or in perpendicular directions and/or at different times are independent.

\subsection{And the potential $$U$$? Harmonic as always?}
Indeed. Having harmonic springs connecting the beads, we will take it as: $$ U=\frac{k}{2}\overset{N}{\underset{n=2}{\sum}}\left(\vec{R}_{n}(t)-\vec{R}_{n-1}(t)\right)^{2}$$ In this model the Langevin equation becomes a linear equation for $\vec{R}_{n}(t)$. For the internal beads we have: $$ \begin{cases} \frac{d\vec{R}_{n}(t)}{dt}=\frac{k}{\zeta}\left(\vec{R}_{n+1}(t)+\vec{R}_{n-1}(t)-2\vec{R}_{n}(t)\right)+\vec{g}_{n}(t) & \,\,\, n=2,...,N-1\end{cases}$$ and for the beads at each end we have: $$ \begin{cases} \frac{d\vec{R}_{n}(t)}{dt}=\frac{k}{\zeta}\left(\vec{R}_{2}(t)-\vec{R}_{1}(t)\right)+\vec{g}_{1}(t) & \,\,\, n=1\\ \frac{d\vec{R}_{n}(t)}{dt}=\frac{k}{\zeta}\left(\vec{R}_{N-1}(t)-\vec{R}_{N}(t)\right)+\vec{g}_{N}(t) & \,\,\, n=N\end{cases}$$ In order to unify the treatment we define two additional hypothetical beads $\vec{R}_{0}$ and $\vec{R}_{N+1}$ as: $$ \begin{cases} \vec{R}_{0}\equiv\vec{R}_{1}\\ \vec{R}_{N+1}\equiv\vec{R}_{N}\end{cases}$$ Under this definition the Langevin equation for beads $$n=1,2,...,N$$ is given by: $$ \begin{cases} \frac{d\vec{R}_{n}(t)}{dt}=\frac{k}{\zeta}\left(\vec{R}_{n+1}(t)+\vec{R}_{n-1}(t)-2\vec{R}_{n}(t)\right)+\vec{g}_{n}(t) & \,\,\, n=1,...,N\end{cases}$$
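The discrete equations above are easy to integrate directly. The sketch below uses an Euler scheme for one spatial component of many independent chains (illustrative parameters) and checks a standard consequence of the Rouse model, quoted here rather than derived in the text: the center of mass feels only the random forces, so it diffuses with the single-bead diffusion constant reduced by a factor $N$:

```python
import numpy as np

# Euler integration of the discrete Rouse equations with free ends.
# End conventions R_0 = R_1, R_{N+1} = R_N are implemented by edge padding.
rng = np.random.default_rng(2)
N, k, zeta, kBT = 10, 3.0, 1.0, 1.0
dt, steps, chains = 1e-3, 2000, 5000
R = np.zeros((chains, N))                  # one spatial component per bead
for _ in range(steps):
    Rpad = np.pad(R, ((0, 0), (1, 1)), mode='edge')
    force = k * (Rpad[:, 2:] + Rpad[:, :-2] - 2 * R)   # k*(R_{n+1}+R_{n-1}-2R_n)
    R += force / zeta * dt + np.sqrt(2 * kBT / zeta * dt) * rng.standard_normal(R.shape)

t = steps * dt
Xcm = R.mean(axis=1)                       # center of mass of each chain
print((Xcm**2).mean() / (2 * t))           # ~ kB*T/(N*zeta) = 0.1
```

Averaging the $N$ bead equations makes the spring forces cancel pairwise, which is why the center of mass performs free Brownian motion with an $N$-fold larger friction constant.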

\subsection{How do we proceed?}
In order to proceed it is convenient to assume that the beads are continuously distributed along the polymer chain. We first recall that in the continuum limit: $$ \vec{R}_{n+1}(t)+\vec{R}_{n-1}(t)-2\vec{R}_{n}(t)\,\,\longrightarrow\,\,\frac{\partial^{2}\vec{R}(n,t)}{\partial n^{2}}$$ Letting $$n$$ be a continuous variable, and writing $$\vec{R}_{n}(t)$$ as $\vec{R}(n,t)$, the Langevin equation takes the form: $$ \frac{\partial\vec{R}(n,t)}{\partial t}=\frac{k}{\zeta}\frac{\partial^{2}\vec{R}(n,t)}{\partial n^{2}}+\vec{g}(n,t)$$ The definitions we made regarding the additional hypothetical beads $$\vec{R}_{0}$$ and $$\vec{R}_{N+1}$$ now turn into the following boundary conditions: $$ \begin{cases} \frac{\partial\vec{R}(n,t)}{\partial n}|_{n=0}=0\\ \frac{\partial\vec{R}(n,t)}{\partial n}|_{n=N}=0\end{cases}$$

\subsubsection{I don't know how to solve this one, can we bring it to the form of something we have solved before?}

Yes we can. As a first step we define normal coordinates by the following transformation: $$ \begin{cases} \vec{X}_{p}(t)=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{R}(n,t)dn & \,\,\, p=0,1,2,...\end{cases}$$ whose inverse is given by: $$ \vec{R}(n,t)=\vec{X}_{0}(t)+2\overset{\infty}{\underset{p=1}{\sum}}cos\left(\frac{p\pi n}{N}\right)\vec{X}_{p}(t)$$

\subsubsection{Defining new coordinates (call them as you will) is one thing but the inverse must be defined such that it takes you back to the original coordinates! Is this truly the correct inverse? }

We verify this by direct substitution: $$ \vec{X}_{p}(t)=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\left[\vec{X}_{0}(t)+2\overset{\infty}{\underset{p'=1}{\sum}}cos\left(\frac{p'\pi n}{N}\right)\vec{X}_{p'}(t)\right]dn$$ The first term gives: $$ \vec{X}_{0}(t)\delta_{p,0}=\begin{cases} \frac{\vec{X}_{0}(t)}{p\pi}sin\left(\frac{p\pi n}{N}\right)|_{n=0}^{n=N}=0 & \,\,\, p=1,2,3,..\\ \vec{X}_{0}(t) & \,\,\, p=0\end{cases}$$ Using the trigonometric identity: $$ cos(A)cos(B)=\frac{1}{2}\left[cos(A-B)+cos(A+B)\right]$$ the second term is written as: $$ \begin{cases} \frac{1}{N}\overset{\infty}{\underset{p'=1}{\sum}}\vec{X}_{p'}(t)\overset{N}{\underset{0}{\int}}\left[cos\left(\frac{(p+p')\pi n}{N}\right)+cos\left(\frac{(p-p')\pi n}{N}\right)\right]dn & \,\,\, p=1,2,3,..\\ \overset{\infty}{\underset{p'=1}{\sum}}\frac{2\vec{X}_{p'}(t)}{p'\pi}sin\left(\frac{p'\pi n}{N}\right)|_{n=0}^{n=N}=0 & \,\,\, p=0\end{cases}$$ which gives: $$ \vec{X}_{p}(t)(1-\delta_{p,0})=\begin{cases} \overset{\infty}{\underset{p'=1}{\sum}}\vec{X}_{p'}(t)\delta_{p,p'}=\vec{X}_{p} & \,\,\, p=1,2,3,..\\ 0 & \,\,\, p=0\end{cases}$$ We conclude that: $$ \vec{X}_{p}(t)=\vec{X}_{0}(t)\delta_{p,0}+\vec{X}_{p}(t)(1-\delta_{p,0})=\vec{X}_{p}(t)$$ which proves that the inverse transformation is defined correctly.
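The same consistency can be spot-checked numerically: discretize the integral defining $\vec{X}_p$, rebuild $\vec{R}(n)$ from the inverse series, and compare. The test configuration below is a hypothetical smooth function chosen only for illustration:

```python
import numpy as np

# Forward transform X_p = (1/N) int_0^N cos(p*pi*n/N) R(n) dn (trapezoid rule),
# then the inverse series R(n) = X_0 + 2 sum_p cos(p*pi*n/N) X_p, truncated at P.
N = 50
n = np.linspace(0.0, N, 20001)             # fine grid in the bead index n
R = np.exp(np.cos(np.pi * n / N))          # smooth test "configuration" R(n)

P = 30                                     # truncation of the infinite sum over p
p = np.arange(P + 1)
modes = np.cos(np.pi * np.outer(p, n) / N)

def integrate(y):                          # trapezoid rule along the n axis
    return ((y[..., 1:] + y[..., :-1]) / 2 * np.diff(n)).sum(axis=-1)

X = integrate(modes * R) / N               # all X_p at once
R_back = X[0] + 2 * modes[1:].T @ X[1:]    # inverse series
print(np.abs(R - R_back).max())            # tiny: the inverse really inverts
```

For smooth configurations the coefficients $\vec{X}_p$ decay rapidly with $p$, which is what makes truncating the infinite sum harmless here.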

\subsection{How does this new set of coordinates help us?}
We will now show that the equations of motion for the normal coordinates $$\vec{X}_{p}(t)$$ are the equations of motion for an infinite set of uncoupled Brownian harmonic oscillators. Since we have already treated the problem of a Brownian harmonic oscillator, this will ease our lives considerably. We start by applying $$\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)dn$$ to both sides of the Langevin equation for $\vec{R}(n,t)$: {\footnotesize $$ \frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}(n,t)}{\partial t}dn=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\frac{k}{\zeta}\frac{\partial^{2}\vec{R}(n,t)}{\partial n^{2}}dn+\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{g}(n,t)dn$$ }The left hand side term is identified as: $$ \frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}(n,t)}{\partial t}dn=\frac{\partial}{\partial t}\left[\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{R}(n,t)dn\right]=\frac{d\vec{X}_{p}(t)}{dt}$$ The first term on the right hand side gives: $$ \frac{cos\left(\frac{p\pi n}{N}\right)}{N}\frac{k}{\zeta}\frac{\partial\vec{R}(n,t)}{\partial n}|_{n=0}^{n=N}+\overset{N}{\underset{0}{\int}}\frac{p\pi}{N^{2}}sin\left(\frac{p\pi n}{N}\right)\frac{k}{\zeta}\frac{\partial\vec{R}(n,t)}{\partial n}dn$$ by integration by parts.
Invoking the boundary condition for $$\frac{\partial\vec{R}(n,t)}{\partial n}$$, the first term drops; another round of integration by parts gives: $$ \frac{p\pi}{N^{2}}sin\left(\frac{p\pi n}{N}\right)\frac{k}{\zeta}\vec{R}(n,t)|_{n=0}^{n=N}-\overset{N}{\underset{0}{\int}}\frac{\left(p\pi\right)^{2}}{N^{3}}cos\left(\frac{p\pi n}{N}\right)\frac{k}{\zeta}\vec{R}(n,t)dn$$ Here the sine kills the first term and the second term is identified as: $$ -\frac{p^{2}\pi^{2}k\vec{X}_{p}(t)}{N^{2}\zeta}=-\frac{k_{p}}{\zeta_{p}}\vec{X}_{p}(t)$$ where we have defined: $$ \begin{cases} k_{p}=\frac{2p^{2}\pi^{2}k}{N} & \,\,\, p=0,1,2,3...

\zeta_{0}=N\zeta & \,\,\, p=0

\zeta_{p}=2N\zeta & \,\,\, p=1,2,3...\end{cases}$$ We are left with the second term on the right hand side of the original equation, which we deal with by defining the random forces: $$ \begin{cases} \vec{g}_{p}(t)=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{g}(n,t)dn & \,\,\, p=0,1,2,3...\end{cases}$$ Which are characterized by zero mean: $$ \begin{cases} <g_{\alpha p}(t)>=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)<g_{\alpha}(n,t)>dn=0 & \forall p\,,\forall\alpha=x,y,z\end{cases}$$ And by: $$ \begin{cases} <g_{\alpha p}(t)g_{\beta p'}(t')>=\frac{2k_{B}T\delta_{\alpha\beta}\delta_{pp'}\delta(t-t')}{\zeta_{p}} & \forall p,p'\,,\forall\alpha,\beta=x,y,z\end{cases}$$ since: {\footnotesize $$ <g_{\alpha p}(t)g_{\beta p'}(t')>=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{p'\pi n'}{N}\right)<g_{\alpha}(n,t)g_{\beta}(n',t')>dndn'$$ }and use of the trigonometric identity: $$ cos(A)cos(B)=\frac{1}{2}\left[cos(A-B)+cos(A+B)\right]$$ gives: {\footnotesize $$ <g_{\alpha p}(t)g_{\beta p'}(t')>=\frac{k_{B}T\delta_{\alpha\beta}\delta(t-t')}{\zeta N^{2}}\overset{N}{\underset{0}{\int}}\left[cos\left(\frac{(p+p')\pi n}{N}\right)+cos\left(\frac{(p-p')\pi n}{N}\right)\right]dn$$ }which yields the result after performing the integration. This means that the random forces with different values of $$p$$ and/or acting in perpendicular directions and/or acting at different times are independent. The equations of motion for the normal coordinates $$\vec{X}_{p}(t)$$ are given by: $$ \frac{d\vec{X}_{p}(t)}{dt}=-\frac{k_{p}}{\zeta_{p}}\vec{X}_{p}(t)+\vec{g}_{p}(t)$$ and since the random forces are independent of each other, the motions of the $$\vec{X}_{p}$$'s are also independent of each other. 
These are the equations of motion for an infinite set of uncoupled Brownian harmonic oscillators, each with a force constant $$k_{p}$$ and friction constant $$\zeta_{p}$$ of its own. We have gone from one partial differential equation (which we don't know how to solve directly) for $$\vec{R}(n,t)$$ to an infinite set of uncoupled ordinary differential equations (of a type we are already familiar with) for the normal coordinates $$\vec{X}_{p}(t)$$.
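As a quick illustration, each mode can be integrated numerically. The sketch below (the parameter values are arbitrary, in units where $$k_{B}T=\zeta=k=1$$, and the Euler-Maruyama discretization is our own choice, not part of the notes) simulates one mode and compares its stationary variance with the equipartition value $$k_{B}T/k_{p}$$ per component:

```python
import numpy as np

# Illustrative parameters (arbitrary units, kBT = zeta = k = 1).
kBT, k, zeta, N, p = 1.0, 1.0, 1.0, 100, 1
k_p = 2.0 * p**2 * np.pi**2 * k / N      # mode spring constant
zeta_p = 2.0 * N * zeta                  # mode friction constant
dt = 0.05 * zeta_p / k_p                 # time step, small vs. relaxation time
n_steps = 200_000

rng = np.random.default_rng(0)
x = 0.0
samples = []
for step in range(n_steps):
    # Euler-Maruyama step for dX = -(k_p/zeta_p) X dt + sqrt(2 kBT dt/zeta_p) xi
    x += -(k_p / zeta_p) * x * dt \
        + np.sqrt(2.0 * kBT * dt / zeta_p) * rng.standard_normal()
    if step > n_steps // 2:              # discard the initial transient
        samples.append(x)

var_est = float(np.var(samples))
var_eq = kBT / k_p                       # equipartition: <X_p^2> = kBT/k_p
print(var_est, var_eq)
```

The estimated variance agrees with $$k_{B}T/k_{p}$$ up to the discretization bias and statistical noise of the simulation.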

What can we say about the motion of the center of mass?
Using the results of section 3 we will now calculate two time-correlation functions that will help us in the near future. We first note that since $$k_{0}=0$$, $$\vec{X}_{0}$$ is actually performing free diffusion and hence:{\tiny $$ <(X_{0}(t)-X_{0}(0))_{\alpha}(X_{0}(t)-X_{0}(0))_{\beta}>=<(X_{0}(t)-<X_{0}(t)>)_{\alpha}(X_{0}(t)-<X_{0}(t)>)_{\beta}>=\delta_{\alpha\beta}\frac{2k_{B}T}{N\zeta}t$$ }On the other hand, the time correlation function for $$\vec{X}_{p}(t)$$ ($p>0$) is the one for a Brownian harmonic oscillator and hence: $$ <X_{p\alpha}(t)X_{p'\beta}(0)>=\delta_{\alpha\beta}\delta_{pp'}\frac{k_{B}T}{k_{p}}e^{-t/\tau_{p}}$$ where the relaxation time $\tau_{p}$ is given by: $$ \tau_{p}=\frac{\zeta_{p}}{k_{p}}=\frac{N^{2}\zeta}{p^{2}\pi^{2}k}=\frac{\tau_{1}}{p^{2}}$$ A conclusion from the previous result is that: $$ <\left(X_{p\alpha}(t)-X_{p\alpha}(0)\right)^{2}>=\frac{2k_{B}T}{k_{p}}\left[1-e^{-t/\tau_{p}}\right]$$

We are now ready to calculate some real features of the Brownian motion of a polymer. We start with the motion of the center of mass. The position of the center of mass, $$ \vec{R}_{cm}(t)=\frac{1}{N}\overset{N}{\underset{0}{\int}}\vec{R}(n,t)dn\equiv\vec{X}_{0}(t)$$ is the same as the normal coordinate $$\vec{X}_{0}(t)$$. The mean square displacement of the center of mass is hence given by: $$ <(\vec{R}_{cm}(t)-\vec{R}_{cm}(0))^{2}>=3<(X_{0}(t)-X_{0}(0))_{\alpha}^{2}>=\frac{6k_{B}T}{N\zeta}t\equiv6D_{G}t$$ where the diffusion constant $D_{G}$ is given by: $$ D_{G}=\frac{k_{B}T}{N\zeta}$$ and we note that it is inversely proportional to the number of monomers.
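This prediction can be tested against a direct bead-spring simulation. The sketch below (illustrative parameters of our own choosing; $$k=3k_{B}T/b^{2}$$ for a Gaussian chain is assumed) evolves an ensemble of discrete Rouse chains and compares the mean square displacement of the center of mass with $$6D_{G}t$$:

```python
import numpy as np

kBT, zeta, k, b = 1.0, 1.0, 3.0, 1.0     # k = 3 kBT / b^2 for a Gaussian chain
N, n_chains, dt, n_steps = 32, 400, 0.01, 2000

rng = np.random.default_rng(1)
# Initial Gaussian-chain configurations: bond vectors, variance b^2/3 per component
bonds = rng.normal(0.0, b / np.sqrt(3.0), size=(n_chains, N - 1, 3))
R = np.concatenate([np.zeros((n_chains, 1, 3)), np.cumsum(bonds, axis=1)], axis=1)

cm0 = R.mean(axis=1)
for _ in range(n_steps):
    # Spring forces k(R_{n+1} + R_{n-1} - 2 R_n), with free ends
    F = np.zeros_like(R)
    F[:, 1:-1] = k * (R[:, 2:] + R[:, :-2] - 2.0 * R[:, 1:-1])
    F[:, 0] = k * (R[:, 1] - R[:, 0])
    F[:, -1] = k * (R[:, -2] - R[:, -1])
    noise = rng.normal(0.0, np.sqrt(2.0 * kBT * dt / zeta), size=R.shape)
    R += F / zeta * dt + noise

t = n_steps * dt
msd_cm = float(np.mean(np.sum((R.mean(axis=1) - cm0) ** 2, axis=-1)))
D_G = kBT / (N * zeta)                    # Rouse prediction
print(msd_cm, 6.0 * D_G * t)
```

The internal spring forces sum to zero along each chain, so the center of mass performs exact free diffusion and the measured MSD matches $$6D_{G}t$$ up to ensemble noise.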

What can we say about rotational motion?
To characterize rotational motion of the polymer molecule as a whole, let us consider the time correlation function $$<\vec{P}(t)\cdot\vec{P}(0)>$$ of the end-to-end vector $$\vec{P}$$. Using normal coordinates, $$\vec{P}(t)$$ can be written as: $$ \vec{P}(t)\equiv\vec{R}(N,t)-\vec{R}(0,t)=2\overset{\infty}{\underset{p=1}{\sum}}\left[cos\left(p\pi\right)-1\right]\vec{X}_{p}(t)$$ which results in: $$ \vec{P}(t)=-4\overset{}{\underset{p:odd\,\, integers}{\sum}}\vec{X}_{p}(t)$$ We therefore conclude that: $$ <\vec{P}(t)\cdot\vec{P}(0)>=16\overset{}{\underset{p:odd\,\, integers}{\sum}}<\vec{X}_{p}(t)\cdot\vec{X}_{p}(0)>=16\overset{}{\underset{p:odd\,\, integers}{\sum}}\frac{3k_{B}T}{k_{p}}e^{-t/\tau_{p}}$$ This time correlation function is a summation over many terms with different relaxation times. We will now see that for large enough times this infinite sum is well approximated by the first term. We rewrite the correlation function as: $$ <\vec{P}(t)\cdot\vec{P}(0)>=\frac{24Nk_{B}T}{\pi^{2}k}e^{-t/\tau_{1}}\left[1+\overset{}{\underset{p:odd\,\, integers>1}{\sum}}\frac{1}{p^{2}}e^{-t\left[\frac{1}{\tau_{p}}-\frac{1}{\tau_{1}}\right]}\right]$$ but since: $$ \left[\frac{1}{\tau_{p}}-\frac{1}{\tau_{1}}\right]\geq\left[\frac{1}{\tau_{3}}-\frac{1}{\tau_{1}}\right]>0$$ we have: $$ e^{-t\left[\frac{1}{\tau_{p}}-\frac{1}{\tau_{1}}\right]}\leq e^{-t\left[\frac{1}{\tau_{3}}-\frac{1}{\tau_{1}}\right]}$$ We also know that: $$ \overset{}{\underset{p:odd\,\, integers>1}{\sum}}\frac{1}{p^{2}}=\frac{\pi^{2}}{8}-1$$ and hence the second term in the parentheses is bounded by an exponentially decaying function and moreover it is never larger than $1/4$: $$ \overset{}{\underset{p:odd\,\, integers>1}{\sum}}\frac{1}{p^{2}}e^{-t\left[\frac{1}{\tau_{p}}-\frac{1}{\tau_{1}}\right]}\leq\left[\frac{\pi^{2}}{8}-1\right]e^{-t\left[\frac{1}{\tau_{3}}-\frac{1}{\tau_{1}}\right]}<\frac{1}{4}$$ We conclude that the second term may be neglected for large times and the correlation function is 
approximated to be: $$ <\vec{P}(t)\cdot\vec{P}(0)>\cong\frac{24Nk_{B}T}{\pi^{2}k}e^{-t/\tau_{1}}$$ which decays exponentially with a single relaxation time $$\tau_{1}$$. The relaxation time $$\tau_{1}$$ is called the rotational relaxation time; it is also denoted $\tau_{r}$ and is given by: $$ \tau_{r}=\frac{N^{2}\zeta}{\pi^{2}k}$$
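The quality of the single-exponential approximation can be quantified numerically. The following sketch (arbitrary illustrative parameters; the Gaussian-chain relation $$k=3k_{B}T/b^{2}$$ is assumed) evaluates the exact mode sum and compares it with the approximation:

```python
import numpy as np

kBT, b, zeta, N = 1.0, 1.0, 1.0, 100
k = 3.0 * kBT / b**2                      # Gaussian-chain spring constant
tau1 = N**2 * zeta / (np.pi**2 * k)       # = tau_r, the rotational relaxation time

def corr(t, p_max=100_001):
    """<P(t).P(0)> = 16 sum over odd p of (3 kBT / k_p) exp(-t/tau_p)."""
    p = np.arange(1, p_max, 2)
    k_p = 2.0 * p**2 * np.pi**2 * k / N
    tau_p = tau1 / p**2
    return float(np.sum(16.0 * 3.0 * kBT / k_p * np.exp(-t / tau_p)))

def approx(t):
    """Single-exponential approximation, first odd mode only."""
    return 24.0 * N * kBT / (np.pi**2 * k) * np.exp(-t / tau1)

# At t = 0 the exact sum reduces to <P^2> = N b^2 for a Gaussian chain,
# and the approximation is off by the factor pi^2/8 ~ 1.23;
# by t = tau_r the two already agree very closely.
print(corr(0.0), N * b**2)
print(corr(0.0) / approx(0.0), np.pi**2 / 8.0)
print(corr(tau1) / approx(tau1))
```

This makes the bound above concrete: the worst-case relative error (at $$t=0$$) is $$\pi^{2}/8-1\approx23\%$$ and it decays exponentially.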

What can we say about the motion of one specific bead?
We now turn to study the internal motion of a polymer chain, focusing on the mean square displacement of the $n$-th monomer: $$ \phi(n,t)\equiv<\left[\vec{R}(n,t)-\vec{R}(n,0)\right]^{2}>$$ Direct substitution for $$\vec{R}(n,t)$$ and $$\vec{R}(n,0)$$ gives: $$ \phi(n,t)=<\left[\vec{X}_{0}(t)-\vec{X}_{0}(0)+2\overset{\infty}{\underset{p=1}{\sum}}cos\left(\frac{p\pi n}{N}\right)\left(\vec{X}_{p}(t)-\vec{X}_{p}(0)\right)\right]^{2}>$$ Utilizing the correlation functions we have obtained above, all the cross terms vanish and we are left with: $$ \phi(n,t)=6D_{G}t\,+\overset{\infty}{\underset{p=1}{\sum}}\frac{24k_{B}T}{k_{p}}cos\left(\frac{p\pi n}{N}\right)^{2}\left[1-e^{-tp^{2}/\tau_{r}}\right]$$ Let us examine this expression in two limits. For $$t\gg\tau_{r}$$: $$ \phi(n,t)\simeq6D_{G}t\,+\overset{\infty}{\underset{p=1}{\sum}}\frac{12k_{B}T}{p^{2}\pi^{2}k}cos\left(\frac{p\pi n}{N}\right)^{2}$$ The second term is a constant that doesn't depend on time (it is easily seen that the infinite sum converges) and hence $$\phi(n,t)$$ is linear in $$t$$ in this limit. For large enough times the displacement of the $$n$$-th monomer is determined by the diffusion constant of the center of mass as the monomer drifts away with the polymer as a whole. On the other hand, for $$t\ll\tau_{r}$$, the motion of the segments reflects the internal motion due to the many modes of vibration. 
In this limit we may approximate by replacing summation with integration and $$cos\left(\frac{p\pi n}{N}\right)^{2}$$ by its average value $$1/2$$: $$ \phi(n,t)=\frac{6k_{B}T}{N\zeta}t\,+\overset{\infty}{\underset{p=0}{\int}}\frac{6Nk_{B}T}{p^{2}\pi^{2}k}\left[1-e^{-tp^{2}/\tau_{r}}\right]dp=\frac{6k_{B}T}{N\zeta}t+I$$ Doing the integral by parts we get: {\tiny $$ I=\overset{\infty}{\underset{p=0}{\int}}\frac{6Nk_{B}T}{\pi^{2}k}\left[e^{-tp^{2}/\tau_{r}}-1\right]d\frac{1}{p}=\frac{6Nk_{B}T}{p\pi^{2}k}\left[e^{-tp^{2}/\tau_{r}}-1\right]|_{0}^{\infty}+\overset{\infty}{\underset{p=0}{\int}}\frac{12tNk_{B}T}{\tau_{r}\pi^{2}k}e^{-tp^{2}/\tau_{r}}dp$$ }The first term vanishes (basic calculus) and the second term is transformed into a Gaussian integral which gives: $$ I=\frac{6tNk_{B}T}{\tau_{r}\pi^{2}k}\sqrt{\frac{\pi\tau_{r}}{t}}=\frac{6Nk_{B}T}{\sqrt{\tau_{r}}\pi^{3/2}k}\sqrt{t}$$ We can now write $\phi(n,t)$ as: $$ \phi(n,t)=\frac{6Nk_{B}T}{\pi^{2}k}\left[\frac{t}{\tau_{r}}+\sqrt{\frac{\pi t}{\tau_{r}}}\right]\underset{t\ll\tau_{r}}{\simeq}\frac{6Nk_{B}T}{\pi^{3/2}k}\sqrt{\frac{t}{\tau_{r}}}$$ and observe that in this limit the mean square displacement of the $$n$$-th monomer increases like $$\sqrt{t}$$, i.e. in a sub-diffusive manner.
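The crossover to sub-diffusion can be verified directly from the mode sum. The sketch below (arbitrary parameters, monomer $$n=N/2$$, and the Gaussian-chain relation $$k=3k_{B}T/b^{2}$$ assumed) evaluates $$\phi(n,t)$$ numerically deep in the $$t\ll\tau_{r}$$ regime and compares it with the $$\sqrt{t}$$ law:

```python
import numpy as np

kBT, b, zeta, N = 1.0, 1.0, 1.0, 200
k = 3.0 * kBT / b**2
tau_r = N**2 * zeta / (np.pi**2 * k)
D_G = kBT / (N * zeta)

def phi(n, t, p_max=200_000):
    """phi(n,t) evaluated from the exact mode sum."""
    p = np.arange(1, p_max)
    k_p = 2.0 * p**2 * np.pi**2 * k / N
    modes = (24.0 * kBT / k_p) * np.cos(p * np.pi * n / N) ** 2 \
        * (1.0 - np.exp(-t * p**2 / tau_r))
    return 6.0 * D_G * t + float(np.sum(modes))

t = 1e-3 * tau_r                          # deep in the t << tau_r regime
law = 6.0 * N * kBT / (np.pi**1.5 * k) * np.sqrt(t / tau_r)
print(phi(N // 2, t), law)
```

At $$t=10^{-3}\tau_{r}$$ the exact sum and the $$\sqrt{t}$$ law agree to within a few percent, as expected from the continuum approximation.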

How does the Rouse model stand in comparison to experimental results?
Unfortunately, not as well as one might have hoped. The Rouse model may seem to be a very natural way to describe the Brownian motion of a polymer chain, but the conclusions drawn from it do not agree with the experimental results. As we saw above, for the Rouse model: $$ \begin{cases} \tau_{r}=\frac{N^{2}\zeta}{\pi^{2}k}\propto N^{2}\propto M^{2}

D_{G}=\frac{k_{B}T}{N\zeta}\propto\frac{1}{N}\propto\frac{1}{M}\end{cases}$$ where $$M$$ is the molecular weight of the polymer. Experimentally, however, the following dependencies were measured: $$ \begin{cases} \tau_{r}\propto M^{3\nu}

D_{G}\propto M^{-\nu}\end{cases}$$ Here, the exponent $$\nu$$ is that which is used to express the dependence of the radius of gyration $$R_{G}$$ on molecular weight: $$ R_{G}\propto M^{\nu}$$ The value of $$\nu$$ is determined by the nature of the interaction between the solvent and the polymer: in a good solvent $$\nu\simeq\frac{3}{5}$$ and in the $$\Theta$$ state $$\nu=\frac{1}{2}$$.

The reason for the discrepancy between experiments and the Rouse model is that in the latter we have assumed the average velocity of a particular bead is determined only by the external force acting on it, and is independent of the motion of the other beads. However, in reality the motion of one bead is influenced by the motion of the surrounding beads through the medium of the solvent. For example, if one bead moves, the solvent surrounding it will also move, and as a result other beads will be dragged along. This type of interaction transmitted by the motion of the solvent is called hydrodynamic interaction. We will discuss a model taking this interaction into account in the next section. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{\string"29-07-2009 14-52-26\string".eps} \par\end{centering} \caption{The hydrodynamic interaction. If bead $$n$$ moves under the action of the force $$\vec{F}_{n}$$, a flow is created in the surrounding fluid, which causes the other beads to move.} \end{figure} \newpage{}

The Zimm Model for Polymer Dynamics
\subsection{So we need a model that will take into account hydrodynamic interactions, but how do we do that?} In the Rouse model we have assumed the average velocity of a particular bead is determined only by the external force acting on it, and is independent of the motion of the other beads. This assumption led to the following Langevin equation: $$ \vec{v}_{n}(t)=\frac{d\vec{R}_{n}(t)}{dt}=-\frac{1}{\zeta_{n}}\frac{\partial U}{\partial\vec{R}_{n}}+\vec{g}_{n}(t)=\frac{\vec{F}_{n}}{\zeta_{n}}+\vec{g}_{n}(t)$$ In order to take into account hydrodynamic interaction we can generalize this assumption. Denoting the forces acting on the beads by $$\vec{F}_{n}\,(n=1,2,...)$$, we assume that there is a linear relationship between these forces and the average velocity $$<\vec{v}_{n}(t)>$$ and so the following holds: $$ <\vec{v}_{n}(t)>=\overset{}{\underset{m}{\sum}}H_{nm}\cdot\vec{F}_{m}$$ Here $$H_{nm}$$ is a $$3\times3$$ matrix, the $$nm$$ component of $$H$$. It is now our task to calculate $$H_{nm}$$ and write the appropriate Langevin equation. This can be done using hydrodynamics and some approximations; the result of the calculation gives: $$ H_{nm}=\begin{cases} I/\zeta & \,\,\, n=m

\frac{[\hat{r}_{nm}\hat{r}_{nm}+I]}{8\pi\eta\left\Vert \vec{r}_{nm}\right\Vert } & \,\,\, n\neq m\end{cases}$$ where $$\eta$$ is the viscosity of the liquid, $$I$$ is the $$3\times3$$ identity matrix, $$\vec{r}_{nm}\equiv\vec{R}_{n}-\vec{R}_{m}$$ and $$\hat{r}_{nm}$$ is a unit vector in the direction of $$\vec{r}_{nm}$$. The appropriate Langevin equation is given by (taking the same potential $U$ as in the Rouse model): $$ \frac{d\vec{R}_{n}(t)}{dt}=k\overset{}{\underset{m}{\sum}}H_{nm}\cdot(\vec{R}_{m+1}(t)+\vec{R}_{m-1}(t)-2\vec{R}_{m}(t))+\vec{g}_{n}(t)$$ and the random force $$\vec{g}_{n}(t)$$ is Gaussian with the following characteristics: $$ \begin{cases} <g_{\alpha n}(t)>=0 & \forall n,\forall\alpha=x,y,z

<g_{\alpha n}(t)g_{\beta m}(t')>=2k_{B}T\left(H_{nm}\right)_{\alpha\beta}\delta(t-t') & \forall n,m\,,\forall\alpha,\beta=x,y,z\end{cases}$$ \subsubsection{The Langevin equation we got seems complicated; it is not even linear in $$\vec{R}_{n}(t)$$! I guess there is an approximation coming my way, am I right? }

Since $$H_{nm}$$ depends on $$\vec{R}_{n}(t)$$ the Langevin equation we got is not linear in $$\vec{R}_{n}(t)$$ and hence tremendously hard to solve. Zimm's idea was to replace $$H_{nm}$$ (the factor that is causing the non-linearity) by its equilibrium average value $$<H_{nm}>_{eq}$$; this is called the preaveraging approximation. In general the equilibrium value of $$H_{nm}$$ depends on the interactions between the solvent and the polymer and hence will have a different value in a good/medium/bad solvent. Here we will concentrate on a special state of a polymer in solution; this state was also mentioned earlier and is called the $$\Theta$$ state. For a polymer in $$\Theta$$ conditions, the vector $$\vec{r}_{nm}$$ is characterized by a Gaussian distribution with zero mean and a variance of $$|n-m|b^{2}$$. Here $$b$$ is the distance between two adjacent monomers and it follows that the probability density function for $\vec{r}_{nm}$ is: $$ P(\vec{r}_{nm})=\left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}exp\left(-\frac{3\left\Vert \vec{r}_{nm}\right\Vert ^{2}}{2|n-m|b^{2}}\right)$$ Since $$H_{nm}$$ is a function only of $$\vec{r}_{nm}$$ we can calculate $<H_{nm}>_{eq}$ (for $n\neq m$) as follows:{\scriptsize $$ <H_{nm}>_{eq}=\int d^{3}\vec{r}P(\vec{r})H_{nm}=\int d\vec{r}\left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}exp\left(-\frac{3r^{2}}{2|n-m|b^{2}}\right)\frac{[\hat{r}\hat{r}+I]}{8\pi\eta r}$$ }Noting that in spherical coordinates: $$ \hat{r}\hat{r}=\left[\begin{array}{ccc} sin^{2}\theta cos^{2}\phi & sin^{2}\theta cos\phi sin\phi & cos\theta sin\theta cos\phi

sin^{2}\theta cos\phi sin\phi & sin^{2}\theta sin^{2}\phi & cos\theta sin\theta sin\phi

cos\theta sin\theta cos\phi & cos\theta sin\theta sin\phi & cos^{2}\theta\end{array}\right]$$ We have: $$ \overset{\pi}{\underset{0}{\int}}sin\theta d\theta\overset{2\pi}{\underset{0}{\int}}d\phi\hat{r}\hat{r}=\left[\begin{array}{ccc} \frac{4\pi}{3} & 0 & 0

0 & \frac{4\pi}{3} & 0

0 & 0 & \frac{4\pi}{3}\end{array}\right]=\frac{4\pi}{3}I$$ and hence:{\scriptsize $$ <H_{nm}>_{eq}=\int d^{3}\vec{r}P(\vec{r})H_{nm}=\overset{\infty}{\underset{0}{\int}}4\pi r^{2}dr\left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}exp\left(-\frac{3r^{2}}{2|n-m|b^{2}}\right)\frac{\frac{4}{3}I}{8\pi\eta r}$$ }The integral is calculated in a straightforward way; defining $$t=r^{2}$$ we have:{\scriptsize $$ \overset{\infty}{\underset{0}{\int}}\frac{I}{3\eta}\left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}exp\left(-\frac{3t}{2|n-m|b^{2}}\right)dt=\frac{I}{3\eta}\left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}\frac{2|n-m|b^{2}}{3}$$ }and hence: $$ <H_{nm}>_{eq}=h(n-m)I$$ where we have defined: $$ h(n-m)\equiv\frac{1}{\eta b}\left(\frac{1}{6\pi^{3}|n-m|}\right)^{1/2}$$ Substituting this result into our Langevin equation and re-writing it in the continuum limit we get: $$ \frac{\partial\vec{R}(n,t)}{\partial t}=k\overset{N}{\underset{0}{\int}}dmh(n-m)\frac{\partial^{2}\vec{R}(m,t)}{\partial^{2}m}+\vec{g}(n,t)$$ where the random force $$\vec{g}(n,t)$$ is Gaussian with the following characteristics:{\small $$ \begin{cases} <g_{\alpha}(n,t)>=0 & \forall n,\forall\alpha=x,y,z

<g_{\alpha}(n,t)g_{\beta}(m,t')>=2k_{B}Th(n-m)\delta(t-t') & \forall n,m\,,\forall\alpha,\beta=x,y,z\end{cases}$$ }Note that $$h(n-m)$$ depends only on $$|n-m|$$ and we have indeed linearized our equation as promised.
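The preaveraging result is easy to verify by Monte Carlo. The sketch below (illustrative units $$\eta=b=1$$; sample size and seed are arbitrary choices) samples $$\vec{r}_{nm}$$ from the $$\Theta$$-state Gaussian distribution, averages the Oseen tensor over the samples, and compares the result with $$h(n-m)I$$:

```python
import numpy as np

eta, b, s = 1.0, 1.0, 10                  # s = |n - m|
rng = np.random.default_rng(2)

# Theta-state Gaussian for r_nm: variance s b^2 / 3 per Cartesian component
r = rng.normal(0.0, b * np.sqrt(s / 3.0), size=(200_000, 3))
norm = np.linalg.norm(r, axis=1)
rhat = r / norm[:, None]

# Oseen tensor for n != m, averaged over the samples
H = (np.eye(3)[None, :, :] + rhat[:, :, None] * rhat[:, None, :]) \
    / (8.0 * np.pi * eta * norm[:, None, None])
H_avg = H.mean(axis=0)

h = (1.0 / (eta * b)) * np.sqrt(1.0 / (6.0 * np.pi**3 * s))   # preaveraged value
print(np.diag(H_avg), h)
```

The sample average is diagonal (by isotropy) with diagonal entries matching $$h(n-m)$$ to within the Monte Carlo noise.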

Seems familiar, shall we try normal coordinates again?
Yes, we will once again use the normal coordinates defined for the Rouse model. We start by applying $$\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)dn$$ to both sides of the Langevin equation for $\vec{R}(n,t)$:{\tiny $$ \frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}(n,t)}{\partial t}dn=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\left[k\overset{N}{\underset{0}{\int}}dmh(n-m)\frac{\partial^{2}\vec{R}(m,t)}{\partial^{2}m}\right]dn+\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{g}(n,t)dn$$ }The left hand side term is identified as: $$ \frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}(n,t)}{\partial t}dn=\frac{\partial}{\partial t}\left[\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{R}(n,t)dn\right]=\frac{d\vec{X}_{p}(t)}{dt}$$ The first term on the right hand side gives: $$ \frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\left[k\overset{N}{\underset{0}{\int}}dmh(n-m)\frac{\partial^{2}\left[\vec{X}_{0}(t)+2\overset{\infty}{\underset{q=1}{\sum}}cos\left(\frac{q\pi m}{N}\right)\vec{X}_{q}(t)\right]}{\partial^{2}m}\right]dn$$ which yields: $$ \frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\left[-2k\overset{N}{\underset{0}{\int}}dmh(n-m)\overset{\infty}{\underset{q=1}{\sum}}\left(\frac{q\pi}{N}\right)^{2}cos\left(\frac{q\pi m}{N}\right)\vec{X}_{q}(t)\right]dn$$ and with some additional algebra we get: $$ -\overset{\infty}{\underset{q=1}{\sum}}\frac{2kq^{2}\pi^{2}}{N}\vec{X}_{q}(t)\left[\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{q\pi m}{N}\right)h(n-m)dmdn\right]$$ Defining: $$ \begin{cases} k_{q}=\frac{2q^{2}\pi^{2}k}{N} & \,\,\, q=0,1,2,3...

h_{pq}=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{q\pi m}{N}\right)h(n-m)dmdn & \,\,\, p,q=0,1,2,3...\end{cases}$$ this term can be written as: $$ -\overset{\infty}{\underset{q=1}{\sum}}h_{pq}k_{q}\vec{X}_{q}(t)$$

But this doesn't decouple the equations! Another approximation?
Indeed, we will approximate by neglecting all the off-diagonal terms. The reasoning goes as follows: we first note that setting $$m-n=l$$ and noting that $h(n-m)=h(m-n)$ we can write $h_{pq}$ as: $$ h_{pq}=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}dn\overset{N-n}{\underset{-n}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{q\pi(n+l)}{N}\right)h(l)dl$$ We now use a trigonometric identity: $$ cos(A+B)=cos(A)cos(B)-sin(A)sin(B)$$ to get:{\tiny $$ h_{pq}=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}dn\left[cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{q\pi n}{N}\right)\overset{N-n}{\underset{-n}{\int}}cos\left(\frac{q\pi l}{N}\right)h(l)dl\,-cos\left(\frac{p\pi n}{N}\right)sin\left(\frac{q\pi n}{N}\right)\overset{N-n}{\underset{-n}{\int}}sin\left(\frac{q\pi l}{N}\right)h(l)dl\right]$$ }For large $$q$$, the two inner integrals rapidly approach the following integrals: $$ \begin{cases} \overset{\infty}{\underset{-\infty}{\int}}cos\left(\frac{q\pi l}{N}\right)h(l)dl=\frac{\sqrt{N}}{\sqrt{3\pi^{3}q}\eta b}

\overset{\infty}{\underset{-\infty}{\int}}sin\left(\frac{q\pi l}{N}\right)h(l)dl=0\end{cases}$$ With this substitution $h_{pq}$ becomes: $$ h_{pq}=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{q\pi n}{N}\right)\frac{\sqrt{N}}{\sqrt{3\pi^{3}q}\eta b}dn$$ and after using the trigonometric identity: $$ cos(A)cos(B)=\frac{1}{2}\left[cos(A-B)+cos(A+B)\right]$$ {\small $$ \begin{cases} h_{pq}=\frac{1}{2N^{3/2}\sqrt{3\pi^{3}q}\eta b}\overset{N}{\underset{0}{\int}}cos\left(\frac{\pi n(p-q)}{N}\right)+cos\left(\frac{\pi n(p+q)}{N}\right)dn=\frac{\delta_{pq}}{\sqrt{12N\pi^{3}p}\eta b} & p>0\end{cases}$$ }If $$q$$ is small our approximation is still fair, but for the case $$q=0$$ it is invalid and this case deserves special attention. The careful reader may have noticed that the sum: $$ -\overset{\infty}{\underset{q=1}{\sum}}h_{pq}k_{q}\vec{X}_{q}(t)$$ starts from $$q=1$$ and it may seem that a discussion regarding $$q=0$$ is pointless. We will nevertheless require this case ($$q=0$$) later on and so we calculate directly: $$ h_{p0}=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)dn\overset{N}{\underset{0}{\int}}\frac{1}{\eta b}\left(\frac{1}{6\pi^{3}|n-m|}\right)^{1/2}dm$$ The inner integral gives: $$ \overset{n}{\underset{0}{\int}}\frac{1}{\eta b}\left(\frac{1}{6\pi^{3}(n-m)}\right)^{1/2}dm+\overset{N}{\underset{n}{\int}}\frac{1}{\eta b}\left(\frac{1}{6\pi^{3}(m-n)}\right)^{1/2}dm$$ which results in: $$ \frac{-2}{\eta b}\left(\frac{n-m}{6\pi^{3}}\right)^{1/2}|_{0}^{n}+\frac{2}{\eta b}\left(\frac{m-n}{6\pi^{3}}\right)^{1/2}|_{n}^{N}=\frac{2}{\eta b}\left(\frac{n}{6\pi^{3}}\right)^{1/2}+\frac{2}{\eta b}\left(\frac{N-n}{6\pi^{3}}\right)^{1/2}$$ Substituting this into the expression for $h_{p0}$ gives: $$ h_{p0}=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\frac{2}{\eta b}\left(\frac{n}{6\pi^{3}}\right)^{1/2}dn+\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}cos\left(p\pi-\frac{p\pi t}{N}\right)\frac{2}{\eta b}\left(\frac{t}{6\pi^{3}}\right)^{1/2}dt$$ where we have changed variables to $$t=N-n$$. It is now easy to see that for odd $p$: $h_{p0}=0$, while for even $p$ we get: $$ h_{p0}=\frac{4}{\eta bN^{2}}\left(\frac{1}{6\pi^{3}}\right)^{1/2}\overset{N}{\underset{0}{\int}}\sqrt{n}cos\left(\frac{p\pi n}{N}\right)dn\,\,\,\, for\,\, even\,\, p$$ For $p=0$ this gives: $$ h_{00}=\frac{8}{3\eta b}\left(\frac{1}{6N\pi^{3}}\right)^{1/2}$$ while for even $$p>0$$, the integral may be re-expressed in terms of the Fresnel integral $$S(x)=\overset{x}{\underset{0}{\int}}sin\left(\frac{\pi t^{2}}{2}\right)dt$$ to give: $$ \overset{N}{\underset{0}{\int}}\sqrt{n}cos\left(\frac{p\pi n}{N}\right)dn=\frac{-\sqrt{2}N^{3/2}}{2\pi p^{3/2}}S(\sqrt{2p})$$ and we see that: $$ \left|\frac{-\sqrt{2}N^{3/2}}{2\pi p^{3/2}}S(\sqrt{2p})\right|\leq\frac{N^{3/2}}{\pi p}$$ concluding that for $p>0$: $$ \begin{cases} \left|h_{p0}\right|\leq\frac{4}{\eta b\pi p}\left(\frac{1}{6N\pi^{3}}\right)^{1/2} & even\,\, p>0

h_{p0}=0 & odd\,\, p>0\end{cases}$$ We see that for large $$N$$, $$h_{p0}$$ is small and also decays with $$p$$. We will hence neglect $$h_{p0}$$ for $$p>0$$ and keep only the diagonal term $$h_{00}$$.
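These properties of $$h_{p0}$$ can be confirmed by direct numerical quadrature of the one-dimensional integral form derived above (illustrative units $$\eta=b=1$$; the grid size is an arbitrary choice):

```python
import numpy as np

eta, b, N = 1.0, 1.0, 100
n = np.linspace(0.0, N, 200_001)
dn = n[1] - n[0]

def h_p0(p):
    """h_p0 from its 1D integral form, via the trapezoid rule."""
    inner = (2.0 / (eta * b)) * (np.sqrt(n / (6.0 * np.pi**3))
                                 + np.sqrt((N - n) / (6.0 * np.pi**3)))
    f = np.cos(p * np.pi * n / N) * inner
    return float(np.sum((f[1:] + f[:-1]) / 2.0) * dn / N**2)

h00_exact = (8.0 / (3.0 * eta * b)) * np.sqrt(1.0 / (6.0 * N * np.pi**3))
print(h_p0(0), h00_exact)                 # p = 0: matches the closed form
print(h_p0(3), h_p0(5))                   # odd p: vanishes
for p in (2, 4, 6):                       # even p > 0: obeys the stated bound
    bound = (4.0 / (eta * b * np.pi * p)) * np.sqrt(1.0 / (6.0 * N * np.pi**3))
    print(p, abs(h_p0(p)), bound)
```

The quadrature reproduces $$h_{00}$$, shows the exact cancellation for odd $$p$$, and confirms that the even-$$p$$ values sit comfortably inside the bound.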

O.K., what about the random forces?
We are left with the second term on the right hand side of the original equation, which we deal with by defining the random forces: $$ \begin{cases} \vec{g}_{p}(t)=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{g}(n,t)dn & \,\,\, p=0,1,2,3...\end{cases}$$ Which are characterized by zero mean: $$ \begin{cases} <g_{\alpha p}(t)>=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)<g_{\alpha}(n,t)>dn=0 & \forall p\,,\forall\alpha=x,y,z\end{cases}$$ And by: $$ \begin{cases} <g_{\alpha p}(t)g_{\beta p'}(t')>=2k_{B}T\delta_{\alpha\beta}\delta(t-t')h_{pp'} & \forall p,p'\,,\forall\alpha,\beta=x,y,z\end{cases}$$ since: {\footnotesize $$ <g_{\alpha p}(t)g_{\beta p'}(t')>=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{p'\pi n'}{N}\right)<g_{\alpha}(n,t)g_{\beta}(n',t')>dndn'$$ }gives: {\footnotesize $$ <g_{\alpha p}(t)g_{\beta p'}(t')>=\frac{2k_{B}T\delta_{\alpha\beta}\delta(t-t')}{N^{2}}\overset{N}{\underset{0}{\int}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{p'\pi n'}{N}\right)h(n-n')dndn'$$ }which yields the result by definition of $$h_{pp'}$$. This means that the random forces with different values of $$p$$ (remember $$h_{pp'}\propto\delta_{pp'}$$) and/or acting in perpendicular directions and/or acting at different times are independent.

That was a bit long, could you please sum up the main result?
The main result is that we have found the equations of motion for the normal coordinates $$\vec{X}_{p}(t)$$ and that they are given by: $$ \frac{d\vec{X}_{p}(t)}{dt}=-\frac{k_{p}}{\zeta_{p}}\vec{X}_{p}(t)+\vec{g}_{p}(t)$$ with $$ \begin{cases} k_{p}=\frac{2p^{2}\pi^{2}k}{N} & \,\,\, p=0,1,2,3...

\zeta_{0}=1/h_{00}=\frac{3\eta b}{8}\left(6N\pi^{3}\right)^{1/2} & \,\,\, p=0

\zeta_{p}=1/h_{pp}=\sqrt{12N\pi^{3}p}\eta b & \,\,\, p=1,2,3...\end{cases}$$ and since the random forces are independent of each other, the motions of the $$\vec{X}_{p}$$'s are also independent of each other. These are the equations of motion for an infinite set of uncoupled Brownian harmonic oscillators, each with a force constant $$k_{p}$$ and friction coefficient $$\zeta_{p}$$ of its own. We have once again gone from one partial differential equation (which we don't know how to solve directly) for $$\vec{R}(n,t)$$ to an infinite set of uncoupled ordinary differential equations (of a type we are already familiar with) for the normal coordinates $$\vec{X}_{p}(t)$$.

\subsubsection{Great! This is very similar to what we got for the Rouse model; are we going to repeat the same type of analysis? }

Since the equation for the normal modes is the same as that for the Rouse model, we can immediately write the expressions for the diffusion constant of the center of mass and the rotational relaxation time using the results of the previous section: $$ \begin{cases} D_{G}=\frac{k_{B}T}{\zeta_{0}}=\frac{8k_{B}T}{3\eta b\left(6N\pi^{3}\right)^{1/2}}

\tau_{r}=\frac{\zeta_{1}}{k_{1}}=\sqrt{\frac{3}{\pi}}\frac{\eta b}{k}N^{3/2}\end{cases}$$

How does the Zimm model stand in comparison to experimental results?
As can be seen, $$D_{G}$$ and $$\tau_{r}$$ depend on the molecular weight $M$ as follows (recall that $M\propto N$): $$ \begin{cases} D_{G}\propto\frac{1}{\sqrt{M}}

\tau_{r}\propto M^{3/2}\end{cases}$$ The dependence of these quantities on the molecular weight agrees with experiments performed on solutions in the $$\Theta$$ state. Furthermore, the relaxation times of the normal modes are: $$ \tau_{p}=\frac{\zeta_{p}}{k_{p}}=\frac{\tau_{r}}{p^{3/2}}$$ and hence for short times ($$t\ll\tau_{r}$$) the average mean square displacement of the $n$-th monomer is given by: $$ \phi(n,t)\approx\frac{2Nb^{2}}{\pi^{2}}\overset{\infty}{\underset{0}{\int}}\frac{1-exp(-tp^{3/2}/\tau_{r})}{p^{2}}dp$$ integration by parts gives: $$ \phi(n,t)\approx\frac{2Nb^{2}}{\pi^{2}}\left[\frac{-1+exp(-tp^{3/2}/\tau_{r})}{p}\right]|_{0}^{\infty}+\frac{3Ntb^{2}}{\tau_{r}\pi^{2}}\overset{\infty}{\underset{0}{\int}}\frac{exp(-tp^{3/2}/\tau_{r})}{\sqrt{p}}dp$$ The first term drops (elementary calculus); the second term is treated by a change of variable $x=tp^{3/2}/\tau_{r}$: $$ \phi(n,t)\approx\frac{3Ntb^{2}}{\tau_{r}\pi^{2}}\overset{\infty}{\underset{0}{\int}}\frac{2\tau_{r}exp(-x)}{3t\left[\frac{\tau_{r}x}{t}\right]^{2/3}}dx=\frac{2Nb^{2}\Gamma(1/3)}{\pi^{2}}\left[\frac{t}{\tau_{r}}\right]^{2/3}$$ where we have identified the gamma function $$\Gamma(x)=\overset{\infty}{\underset{0}{\int}}t^{x-1}e^{-t}dt$$. The relation $$\phi(n,t)\propto t^{2/3}$$ has been confirmed by analysis of the Brownian motion of DNA molecules. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{\string"31-07-2009 01-21-59\string".eps} \par\end{centering} \caption{The average mean square displacement of the terminal segment of a DNA molecule (solid line), observed by fluorescence microscopy. The dashed line is calculated from the theory of Zimm. The graph is plotted on a log-log scale; on this type of plot the slope of the lines corresponds to the exponent $$\alpha$$ in the relation $$\phi(n,t)\propto t^{\alpha}$$. The fact that the lines are parallel supports the prediction $$\alpha=\frac{2}{3}$$. Taken from: J. Polym. Sci., 30, 779, Fig. 5. } \end{figure} \newpage{}
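The closed form obtained above amounts to the identity $$\overset{\infty}{\underset{0}{\int}}\frac{1-exp(-cp^{3/2})}{p^{2}}dp=\Gamma(1/3)c^{2/3}$$ with $$c=t/\tau_{r}$$, which the following sketch checks numerically (the value of $$c$$ is arbitrary):

```python
import math
import numpy as np

# Substituting p = u^2 removes the p^{-1/2} behaviour of the integrand near
# the origin: integral = int_0^inf 2 (1 - exp(-c u^3)) / u^3 du.
c = 0.37
u = np.linspace(1e-9, 40.0, 2_000_001)
f = -2.0 * np.expm1(-c * u**3) / u**3     # = 2 (1 - exp(-c u^3)) / u^3
numeric = float(np.sum((f[1:] + f[:-1]) / 2.0) * (u[1] - u[0]))
exact = math.gamma(1.0 / 3.0) * c ** (2.0 / 3.0)
print(numeric, exact)
```

The trapezoid estimate agrees with $$\Gamma(1/3)c^{2/3}$$ to better than a percent; the small residual comes from truncating the upper limit.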

I Have More Questions, Where Can I Get Answers?
\begin{thebibliography}{3} \bibitem{key-4}M. Doi, Introduction to Polymer Physics, Chapters 4\&5, translated by H. See, Clarendon Press, 1995. \bibitem{key-5}M. Doi and S. F. Edwards, The Theory of Polymer Dynamics, Chapters 3–5, Clarendon Press, 1988. \bibitem{key-6}M. Rubinstein and R. H. Colby, Polymer Physics, Chapter 8, Oxford University Press, 2003. \end{thebibliography}