Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/Jan08 667

Solution 4a
The system of equations may be expressed in matrix notation.

$$ f(x,y)= \begin{bmatrix} 1 + h\frac{e^{-x^2}}{1+y^2} \\ \\ .5+h \arctan(x^2+y^2) \end{bmatrix} \!\,$$

The Jacobian of $$f(x,y) \!\,$$, $$J(f)\!\,$$, is computed using partial derivatives

$$ J(f)= \begin{bmatrix} \frac{h}{1+y^2} e^{-x^2}(-2x) & \frac{-h2ye^{-x^2}}{(1+y^2)^2} \\ h\cdot \frac{2x}{1+(x^2+y^2)^2} & h \cdot \frac{2y}{1+(x^2+y^2)^2} \end{bmatrix} \!\,$$

If $$h \!\,$$ is sufficiently small and $$x,y\!\,$$ are restricted to a bounded region $$B\!\,$$,

$$\underbrace{\| J(f) \|}_C < 1\!\,$$

From the mean value theorem, for $$\vec{x_1}, \vec{x_2} \in B\!\,$$, there exists $$\vec{\xi}\!\,$$ such that

$$f(\vec{x_1})-f(\vec{x_2})=J(f(\vec{\xi}))(\vec{x_1}-\vec{x_2})\!\,$$

Since $$\| J(f)\|\!\,$$ is bounded by $$C < 1\!\,$$ in the region $$B\!\,$$ given sufficiently small $$h\!\,$$,

$$\|f(\vec{x_1})-f(\vec{x_2}) \| \leq C \|\vec{x_1}-\vec{x_2} \|\!\,$$

Therefore, $$f\!\,$$ is a contraction and, by the contraction mapping theorem, there exists a unique fixed point (solution) in the rectangular region.
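As a sanity check, the fixed-point iteration $$(x_{k+1},y_{k+1})=f(x_k,y_k)\!\,$$ can be run directly; below is a minimal Python sketch, where the value h = 0.01, the starting point, and the tolerance are assumptions for illustration only.

```python
import math

def fixed_point(h=0.01, x=1.0, y=0.5, tol=1e-12, max_iter=100):
    """Iterate (x, y) <- f(x, y) for the system above.
    For small h the map f is a contraction, so the iteration
    converges to the unique fixed point in the region."""
    for _ in range(max_iter):
        x_new = 1.0 + h * math.exp(-x**2) / (1.0 + y**2)
        y_new = 0.5 + h * math.atan(x**2 + y**2)
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y
```

Because the contraction constant is of order h, only a handful of iterations are needed for small h.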

Use Newton's Method
To solve this problem, we can use Newton's method. In fact, we want to find the zeros of

$$ g(x,y)= \begin{bmatrix} x \\ \\ y \end{bmatrix} - \begin{bmatrix} 1 + h\frac{e^{-x^2}}{1+y^2} \\ \\ .5+h \arctan(x^2+y^2) \end{bmatrix} \!\,$$

The Jacobian of $$g(x,y) \!\,$$, $$J(g)\!\,$$, is computed using partial derivatives

$$ J(g)= I - J(f) = \begin{bmatrix} 1-\frac{h}{1+y^2} e^{-x^2}(-2x) & \frac{h2ye^{-x^2}}{(1+y^2)^2} \\ -h\cdot \frac{2x}{1+(x^2+y^2)^2} & 1-h \cdot \frac{2y}{1+(x^2+y^2)^2} \end{bmatrix} \!\,$$

Then, Newton's method for solving this nonlinear system of equations is given by

$$ \begin{bmatrix} x_{k+1} \\ \\ y_{k+1} \end{bmatrix} = \begin{bmatrix} x_{k} \\ \\ y_{k} \end{bmatrix} -\underbrace{ \frac{1}{\det(J(g))} \begin{bmatrix} 1-h \cdot \frac{2y}{1+(x^2+y^2)^2} & -\frac{h2ye^{-x^2}}{(1+y^2)^2} \\ h\cdot \frac{2x}{1+(x^2+y^2)^2} & 1-\frac{h}{1+y^2} e^{-x^2}(-2x) \end{bmatrix}}_{J(g)^{-1}} g(x_k,y_k) \!\,$$
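The update above can be sketched in Python. The entries of J(g) = I − J(f) are hard-coded for this 2x2 system and the inverse is applied via the adjugate formula; the values of h, the starting guess, and the tolerance are illustrative assumptions.

```python
import math

def newton(h=0.01, x=1.0, y=0.5, tol=1e-12, max_iter=50):
    """Newton's method for g(x, y) = 0, with the 2x2 Jacobian
    J(g) = I - J(f) inverted via the explicit adjugate formula."""
    for _ in range(max_iter):
        ex = math.exp(-x**2)
        s = x**2 + y**2
        g1 = x - 1.0 - h * ex / (1.0 + y**2)
        g2 = y - 0.5 - h * math.atan(s)
        a = 1.0 + 2.0 * x * h * ex / (1.0 + y**2)   # dg1/dx
        b = 2.0 * y * h * ex / (1.0 + y**2) ** 2    # dg1/dy
        c = -2.0 * x * h / (1.0 + s**2)             # dg2/dx
        d = 1.0 - 2.0 * y * h / (1.0 + s**2)        # dg2/dy
        det = a * d - b * c
        # Solve J(g) [dx, dy]^t = [g1, g2]^t using the adjugate
        dx = (d * g1 - b * g2) / det
        dy = (-c * g1 + a * g2) / det
        x, y = x - dx, y - dy
        if max(abs(dx), abs(dy)) < tol:
            return x, y
    return x, y
```

For small h the Jacobian is close to the identity, so the iteration converges in very few steps.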

Jacobian of g is Lipschitz
We want to show that $$ J(g) \!\,$$ is a Lipschitz function. Since $$ J(g) = I - J(f) \!\,$$,

$$ \|J(g)(\vec{x_1})-J(g)(\vec{x_2}) \| = \|(I-J(f)(\vec{x_1}))-(I-J(f)(\vec{x_2}))\| = \| J(f)(\vec{x_2}) - J(f)(\vec{x_1}) \| \!\,$$

and now, using that $$ J(f) \!\,$$ is Lipschitz with constant $$ L \!\,$$, we have

$$\|J(g)(\vec{x_1})-J(g)(\vec{x_2}) \| \leq L\|\vec{x_1}-\vec{x_2} \| \!\,$$

Jacobian of g is invertible
Since $$ \|J(f)\| < 1 \!\,$$ for sufficiently small $$h\!\,$$, the spectral radius of the Jacobian of $$f\!\,$$ is less than 1, i.e.

$$ \rho(J(f))<1 \!\,$$.

On the other hand, we know that the eigenvalues of $$ J(g)\!\,$$ are $$ \lambda(J(g))=1-\lambda(J(f))\!\,$$.

Then every eigenvalue of $$ J(g) \!\,$$ is nonzero, and it follows that $$ \det(J(g)) \neq 0 \!\,$$, or equivalently, $$ J(g) \!\,$$ is invertible.

Inverse of Jacobian of g is bounded
$$ \underbrace{ \frac{1}{\det(J(g))} \begin{bmatrix} 1-h \cdot \frac{2y}{1+(x^2+y^2)^2} & -\frac{h2ye^{-x^2}}{(1+y^2)^2} \\ h\cdot \frac{2x}{1+(x^2+y^2)^2} & 1-\frac{h}{1+y^2} e^{-x^2}(-2x) \end{bmatrix}}_{J(g)^{-1}} \!\,$$

Since $$J(g)^{-1}\!\,$$ exists, $$\mbox{det}(J(g)) \neq 0\!\,$$. Given a bounded region (bounded $$x,y\!\,$$), each entry of the above matrix is bounded. Therefore the norm is bounded.

Norm of (Jacobian of g)^-1 (x_0) * g(x_0) is bounded
$$\| J(g)^{-1}(x_0) \, g(x_0) \| < \infty \!\,$$ since $$ J(g)^{-1} \!\,$$ and $$g(x) \!\,$$ are bounded.

Then, given a good enough initial approximation $$ (x_0,y_0) \!\,$$, Newton's method is at least quadratically convergent, i.e.,

$$\|(x_{k+1},y_{k+1}) - (x^*,y^*)\| \leq K \|(x_{k},y_{k}) - (x^*,y^*)\|^2. \!\,$$

Solution 5a
We want to solve the following initial value problem: $$y' = f(x,y) \mbox{, } y(x_0) = y_0 \!\,$$.

First, we integrate this expression over $$[t_n,t_{n+1}]\!\,$$, to obtain

$$ y(t_{n+1})=y(t_n) + \int_{t_n}^{t_{n+1}}f(t,y(t))dt. \!\,$$

To approximate the integral on the right hand side, we approximate its integrand $$f(t,y(t)) \!\,$$ using an appropriate interpolation polynomial of degree $$ p \!\,$$ at $$ t_n, t_{n-1},..., t_{n-p} \!\,$$.

This idea generates the Adams-Bashforth methods.

$$ y_{n+1}=y_n + \int_{t_n}^{t_{n+1}} \sum_{k=0}^{p}g(t_{n-k})l_k(t)dt \!\,$$,

where $$ y_n \!\,$$ denotes the approximate solution, $$ g(t)\equiv f(t,y)\!\,$$ and $$ l_k(t) \!\,$$ denotes the associated Lagrange polynomial.

Solution 5b
From (a) we have


 * $$ y_{i+1} = y_i + \int_{x_i}^{x_{i+1}} f dx \!\,$$ where $$ \int_{x_i}^{x_{i+1}} f dx \approx \int_{x_i}^{x_{i+1}} \left [ \frac{x-x_i}{x_{i-1}-x_i} f_{i-1} + \frac{x-x_{i-1}}{x_i-x_{i-1}} f_i \right ] dx \!\,$$

Then, if we let $$ x = x_i + sh \!\,$$, where $$h\!\,$$ is the fixed step size, we get

$$ \begin{align} dx &= h ds   \\ x-x_i &= sh     \\ x-x_{i-1} &= (1+s)h \\ x_i-x_{i-1} &= h \end{align} $$


 * $$ h \int_0^1 \left [ \frac{sh}{-h} f_{i-1} + \frac{(1+s)h}{h} f_i \right ] ds = h \left [ -\frac12 f_{i-1} + \frac32 f_i \right ] \!\,$$

So we have the Adams-Bashforth method as desired


 * $$y_{i+1} = y_i + h \left [ -\frac12 f(x_{i-1},y_{i-1}) + \frac32 f(x_{i},y_{i}) \right ] \!\,$$
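The two-step Adams-Bashforth formula just derived can be sketched in Python. Since the method needs two starting values, the sketch bootstraps y_1 with one forward Euler step; this starter choice is an assumption for illustration (any sufficiently accurate starter works).

```python
def ab2(f, x0, y0, h, n):
    """Two-step Adams-Bashforth:
    y_{i+1} = y_i + h*(3/2 f(x_i, y_i) - 1/2 f(x_{i-1}, y_{i-1})).
    Takes n steps of size h from the initial condition y(x0) = y0."""
    xs = [x0, x0 + h]
    ys = [y0, y0 + h * f(x0, y0)]          # forward Euler starter for y_1
    for i in range(1, n):
        ys.append(ys[i] + h * (1.5 * f(xs[i], ys[i])
                               - 0.5 * f(xs[i - 1], ys[i - 1])))
        xs.append(xs[i] + h)
    return xs, ys
```

For example, applying it to y' = -y, y(0) = 1 reproduces e^{-x} to second-order accuracy in h.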

Find Local Truncation Error Using Taylor Expansion
Note that $$y_i \approx y(t_i) \!\,$$. Also, denote the uniform step size $$\Delta t \!\,$$ as h. Hence,

$$t_{i-1}=t_i - h \!\,$$

$$t_{i+1}=t_i+ h \!\,$$

Therefore, the given equation may be written as

$$y(t_i+h)-y(t_i) \approx h\left[-\frac{1}{2}y'(t_i-h)+\frac32y'(t_i)\right] \!\,$$

Expand Left Hand Side
Expanding about $$ t_i \!\, $$, we get

$$ \begin{array}{|c|c|c|c|} \mbox{Order} & y(t_i+h) & y(t_i) & \Sigma \\ \hline & & & \\ 0& y(t_i) & y(t_i) & 0 \\ & & & \\ 1& y'(t_i)h & 0 & y'(t_i)h \\ & & & \\ 2& \frac{1}{2!}y''(t_i)h^2 & 0 & \frac{1}{2}y''(t_i)h^2 \\ & & & \\ 3& \frac{1}{3!}y'''(t_i)h^3 & 0 & \frac{1}{6}y'''(t_i)h^3 \\ & & & \\ 4& \mathcal{O}(h^4) & 0 & \mathcal{O}(h^4) \\ & & & \\ \hline \end{array} $$

Expand Right Hand side
Also expanding about $$ t_i \!\,$$ gives

$$ \begin{array}{|c|c|c|c|} \mbox{Order} & -\frac12 hy'(t_i-h) & \frac32 hy'(t_i) & \Sigma \\ \hline &&&\\ 1& -\frac12 hy'(t_i) & \frac32 hy'(t_i) & hy'(t_i) \\ &&&\\ 2& \frac12 h^2y''(t_i) & 0 & \frac12 h^2 y''(t_i) \\ &&&\\ 3& -\frac14 h^3y'''(t_i) & 0 & -\frac14 h^3 y'''(t_i)\\ &&&\\ 4& \mathcal{O}(h^4)& 0 & \mathcal{O}(h^4)\\ \hline \end{array} $$

Subtracting the two expansions gives the local truncation error

$$ y(t_i+h)-y(t_i) - h\left[-\frac12 y'(t_i-h)+\frac32 y'(t_i)\right] = \left(\frac16+\frac14\right)h^3y'''(t_i)+\mathcal{O}(h^4) = \frac{5}{12}h^3y'''(t_i)+\mathcal{O}(h^4). \!\,$$
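This truncation error admits a quick numerical check: for the test function y(t) = e^t (an assumption for illustration; every derivative equals e^t), the residual of the formula at t = 0, divided by h^3, should approach 5/12 as h shrinks.

```python
import math

def ab2_lte(h, t=0.0):
    """Residual of the two-step Adams-Bashforth formula for the
    exact solution y = e^t of y' = y:
    tau(h) = [y(t+h) - y(t)] - h*(3/2*y'(t) - 1/2*y'(t-h))."""
    y = math.exp  # y and all its derivatives equal exp here
    return (y(t + h) - y(t)) - h * (1.5 * y(t) - 0.5 * y(t - h))
```

For h = 0.01 the ratio ab2_lte(h) / h**3 is already within about 0.1% of 5/12.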

Show Convergence by Showing Stability and Consistency
A method is convergent if and only if it is both stable and consistent.

Stability
The first characteristic polynomial of the method is $$\rho(z)=z^2-z\!\,$$, with roots $$z=0\!\,$$ and $$z=1\!\,$$. The only root on the unit circle is simple, so the root condition holds and the method is zero stable.

Consistency
The local truncation error is $$\mathcal{O}(h^3)\!\,$$, so the method is consistent of order 2; in particular, the truncation error tends to zero as $$h\!\,$$ tends to zero.

Order of Convergence
By the Dahlquist equivalence theorem, consistency + zero stability = convergence. The order of convergence is 2.

Solution 6a
Multiplying (2) by a test function and using integration by parts we obtain:

$$-\int_0^1 u''v dx + \int_0^1 uv dx = \int_0^1 fv dx \!\,$$

$$\underbrace{\left. -u'v \right|_0^1}_{0} + \int_0^1 u'v' dx + \int_0^1 uv dx = \int_0^1 fv dx \!\,$$

Thus, the weak form or variational form associated with the problem (2) reads as follows: Find $$ u \in H \!\,$$ such that

$$ B(u,v) = \int_0^1 u'v' dx + \int_0^1 uv dx = \int_0^1 fv dx = F(v) \!\,$$ for all $$v \in H, \!\,$$

where $$H:=H_0^1(0,1) = \{ v: v \in H^1, v(0) = v(1) = 0 \} \!\,$$.

Define piecewise linear basis
For our basis of $$V_h \!\,$$, we use the set of hat functions $$\{ \phi_j \}_{j=1}^{n-1} \!\,$$, i.e., for $$j=1,2,\ldots,n-1 \!\,$$

$$\phi_j(x)= \begin{cases} \frac{x-x_{j-1}}{x_j-x_{j-1}} & \mbox{for } x \in [x_{j-1},x_j] \\ \frac{x_{j+1}-x}{x_{j+1}-x_j} & \mbox{for } x \in [x_j, x_{j+1}] \\ 0           & \mbox{otherwise} \end{cases} \!\,$$

Define u_h
Since $$\{ \phi_j \}_{j=1}^{n-1} \!\,$$ is a basis for $$V_h \!\,$$, and $$ u_h \in V_h \!\,$$ we have


 * $$ u_h=\sum_{j=1}^{n-1} u_{hj}\phi_j \!\,$$.

Discrete Problem
Now, we can write the discrete problem: Find $$ u_h \in V_h \!\, $$ such that

$$ B(u_h,v_h)=F(v_h) \!\,$$ for all $$v_h \in V_h. \!\,$$

Using that $$\{ \phi_j \}_{j=1}^{n-1} \!\,$$ is a basis of $$ V_h\!\,$$, together with the linearity of the bilinear form $$ B \!\,$$ and the functional $$ F\!\,$$, we obtain the equivalent problem:

Find $$ u_h = (u_{h,1}, \dots, u_{h,{n-1}}) \in \mathbb{R}^{n-1}\!\, $$ such that

$$ \sum_{j=1}^{n-1}u_{h,j} \left \{ \int_{0}^1 \phi_j' \phi_i' dx + \int_{0}^1 \phi_j \phi_i dx \right\} = \int_{0}^1 f \phi_i dx, \quad i=1,\dots, n-1.\!\, $$

This last problem can be formulated as a matrix problem as follows:

Find $$ u_h = (u_{h,1}, \dots, u_{h,{n-1}}) \in \mathbb{R}^{n-1}\!\, $$ such that

$$ A u_h = F, \!\, $$

where $$ A_{ij}= \int_{0}^1 \phi_j' \phi_i' dx + \int_{0}^1 \phi_j \phi_i dx\!\, $$ and $$ F_i= \int_{0}^1 f \phi_i dx\!\, $$.
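The assembly and solve can be sketched in Python for a uniform mesh. On elements of size h the exact entries are A_ii = 2/h + 2h/3 and A_{i,i±1} = −1/h + h/6 (stiffness plus mass); the load vector is approximated here by F_i ≈ h f(x_i), a quadrature assumption rather than the exact integral. The tridiagonal system is solved with the Thomas algorithm.

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(diag)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = sup[0] / diag[0] if n > 1 else 0.0
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / denom if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def fem_1d(f, n):
    """Piecewise-linear FEM for -u'' + u = f on (0,1), u(0)=u(1)=0,
    on a uniform mesh with n elements (n-1 interior hat functions).
    Returns the nodal values u_h(x_1), ..., u_h(x_{n-1})."""
    h = 1.0 / n
    m = n - 1                                   # interior nodes
    diag = [2.0 / h + 2.0 * h / 3.0] * m        # A_ii: stiffness + mass
    off = [-1.0 / h + h / 6.0] * (m - 1)        # A_{i,i+1} = A_{i+1,i}
    load = [h * f((i + 1) * h) for i in range(m)]   # F_i ~ h f(x_i)
    return solve_tridiag(off, diag, off, load)
```

For instance, with u(x) = sin(πx) one has f = (π² + 1) sin(πx), and the computed nodal values match u at the mesh points to the expected accuracy.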

Bound error
In general terms, we can use Cea's Lemma to obtain

$$ \| u - u_h\|_1 \leq C\inf_{v_h \in V_h} \| u - v_h \|_1   $$

In particular, we can consider $$ v_h \!\,$$ as the Lagrange interpolant, which we denote by $$ v_I \!\,$$. Then,

$$ \| u - u_h\|_1 \leq C \| u - v_I \|_1 \leq Ch \| u\|_{H^2(0,1)} $$.

It is easy to prove that the finite element solution is nodally exact. Then it coincides with the Lagrange interpolant, and we have the following pointwise estimate:

$$ \|u - u_h\|_{L^{\infty}([x_{i-1},x_i])} \leq \max_{x \in [x_{i-1},x_i]} \frac{|u''(x)|}{2!} \left(\frac{h}{2}\right)^2 $$

Continuity of Bilinear Form
$$ \begin{align} B(u,v) &= \int u v + \int u'v' \\ &\leq \|u\|_0 \|v\|_0 + \|u'\|_0 \|v'\|_0 \mbox{ (Cauchy-Schwarz in L2) } \\ &\leq (\|u\|^2_0 + \|u'\|_0^2)^{\frac12}(\|v\|_0^2+\|v'\|_0^2)^{\frac12} \mbox{ ( Cauchy-Schwarz in R2 ) } \\ &= \|u\|_1 \cdot \|v\|_1 \end{align} \!\,$$

Orthogonality of the Error
$$ B(u-u_h,v_h)=0, \quad \forall v_h \in V_h\!\,$$.

Bound for L2 norm of w
$$ \begin{align} \|w\|_0^2 &\leq \|w\|_1^2 \\ &= B(w,w) \\ &= \int (u-u_h)w \\ &\leq \|u-u_h\|_0 \|w\|_0 \mbox{ (Cauchy-Schwarz in L2 ) } \end{align} \!\,$$

Hence,

$$(*) \qquad \|w\|_0 \leq \|u-u_h\|_0 \!\,$$

Bound for L2 norm of w''
From $$(\#) \!\,$$, we have

$$-w''+w=u-u_h \!\,$$

Then,

$$ \begin{align} \|w''\|_0 &\leq \|w\|_0 +\|u-u_h\|_0 \mbox{ (triangle inequality) } \\ &\leq \|u-u_h\|_0 + \|u-u_h\|_0 \mbox{ (by (*) ) } \\ &= 2 \|u-u_h\|_0 \end{align} \!\,$$

Bound L2 Error
$$ \begin{align} \| u-u_h\|_{L^2}^2 &= B(w,u-u_h)  \mbox{ ( from equation (4) ) } \\ &= B(w,u-u_h)-\underbrace{B(w_h,u-u_h)}_0 \\ &= B(w-w_h,u-u_h)     \\ & \leq \| w-w_h\|_{H^1(0,1)}  \| u-u_h  \|_{H^1(0,1)}  \mbox{ (Continuity of Bilinear Form) }  \\ & \leq Ch \| w\|_{H^2(0,1)} \| u-u_h \|_{H^1(0,1)} \mbox{ (Cea Lemma and Polynomial Interpolation Error) } \end{align} $$

Finally, from (#), we have that $$ \| w \|_{H^2(0,1)} \leq C\| u-u_h \|_{L_2(0,1)} \!\,$$. Then,

$$ \| u-u_h\|_{L^2(0,1)}^2 \leq Ch \| u-u_h \|_{L_2(0,1)} \| u-u_h  \|_{H^1(0,1)}   \!\,$$,

or equivalently,

$$ \| u-u_h\|_{L^2(0,1)} \leq Ch \| u-u_h  \|_{H^1(0,1)} \leq Ch^2 \| u\|_{H^2(0,1)}  \!\,$$.

Solution 6d
We know that

$$ \begin{align}

\| u - u_h\|_1^2 &= \langle u-u_h, u- u_h \rangle_{H^1(0,1)} \\ &= \langle u-u_h, u \rangle_{H^1(0,1)} - \underbrace{\langle u-u_h, u_h \rangle_{H^1(0,1)} }_0 \\ &= \|u\|_{H^1(0,1)}^2 - \langle u_h, u \rangle_{H^1(0,1)} \\ &= \|u\|_{H^1(0,1)}^2 - \| u_h \|_{H^1(0,1)}^2

\end{align} $$

where the substitutions in the last two lines come from the orthogonality of the error.

Now, $$ \| u_h\|_{H^1(0,1)}^2=  \sum_{j=1}^{n-1} \sum_{i=1}^{n-1} u_{h,j} u_{h,i} \left( \int_{0}^1 \phi_i' \phi_j'dx+\int_{0}^1 \phi_i \phi_j dx \right) = \sum_{j=1}^{n-1} \sum_{i=1}^{n-1} u_{h,j} A_{ij} u_{h,i}=  C^t A C,  \!\,$$ where $$ C=(u_{h,1},\dots,u_{h,n-1})^t \!\,$$ is the coefficient vector.

Then, we have obtained

$$ \| u - u_h\|_1^2 =\|u\|_{H^1(0,1)}^2 - C^t AC. \!\,$$