Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2005

Solution 1
Since both $$f(x) \!\,$$ and $$p_n^*(x) \!\,$$ are continuous, so is the function

$$e(x)=f(x)-p_n^*(x) \!\,$$

and therefore $$e(x) \!\,$$ attains both a maximum and a minimum on $$[a,b] \!\,$$.

Let $$M=\max_{x \in [a,b]}e(x) \!\,$$ and $$m=\min_{x \in [a,b]}e(x) \!\,$$.

If $$M \neq -m \!\,$$, then the polynomial $$q(x)=p_n^*(x)+\frac{M+m}{2} \!\,$$ (a vertical shift of the supposed best approximation $$p_n^*(x) \!\,$$) is a better approximation to $$f(x) \!\,$$, as the following estimates show.

$$ \begin{align} f(x)-q(x) &= f(x)-p_n^*(x)-\frac{M+m}{2} \leq M-\frac{M+m}{2} = \frac{M-m}{2} \\ f(x)-q(x) &= f(x)-p_n^*(x)-\frac{M+m}{2} \geq m-\frac{M+m}{2} = \frac{m-M}{2} \end{align} \!\,$$

Therefore,

$$\|f-q\| \leq \frac{M-m}{2} \leq \max(M,-m) = \|f-p_n^*\| \!\,$$

Since $$\frac{M-m}{2} \!\,$$ is the average of $$M \!\,$$ and $$-m \!\,$$, the second inequality is strict whenever $$M \neq -m \!\,$$, making $$q(x) \!\,$$ a strictly better approximation and contradicting the optimality of $$p_n^*(x) \!\,$$. Hence $$m=-M \!\,$$.
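The shift argument is easy to see numerically. A minimal sketch (the function $$f(x)=e^x \!\,$$ and the deliberately non-optimal approximation $$p(x)=1+x \!\,$$ are illustrative choices, not part of the problem):

```python
import numpy as np

# Illustrative: f(x) = e^x on [0, 1], approximated by the non-optimal p(x) = 1 + x.
x = np.linspace(0.0, 1.0, 10001)
e = np.exp(x) - (1.0 + x)          # error e(x) = f(x) - p(x)

M, m = e.max(), e.min()            # max and min of the error on [a, b]
shift = (M + m) / 2.0              # vertical shift applied to p

print(np.abs(e).max())             # ||f - p|| = max(M, -m) ~ 0.718
print(np.abs(e - shift).max())     # ||f - q|| = (M - m)/2  ~ 0.359, strictly smaller
```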

Solution 2a
Let $$f(x)=1 \!\,$$. Then

$$\int_0^1 \frac{1}{x^{\frac12}}\,dx=w_1 \cdot 1 \!\,$$

Since $$\int_0^1 x^{-\frac12}\,dx = 2 \!\,$$, this implies

$$w_1=2 \!\,$$

Let $$f(x)=x \!\,$$. Then

$$\int_0^1\frac{x}{x^{\frac12}}\,dx=w_1\cdot x_1 = 2 x_1 \!\,$$

Since $$\int_0^1 x^{\frac12}\,dx = \frac23 \!\,$$, this implies

$$x_1=\frac13 \!\,$$
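These values can be checked symbolically. A minimal sketch using sympy (the rule $$I \approx w_1 f(x_1) \!\,$$ with the values just derived):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
w1, x1 = sp.Integer(2), sp.Rational(1, 3)

# The one-point rule w1*f(x1) reproduces the weighted integral exactly
# for f(x) = 1 and f(x) = x, as required.
for f in (sp.Integer(1), x):
    exact = sp.integrate(f / sp.sqrt(x), (x, 0, 1))
    print(f, exact, w1 * f.subs(x, x1), exact == w1 * f.subs(x, x1))
```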

Solution 2b
Suppose there is a one-point rule for approximating $$ I\!\,$$ that is exact for all quadratics. Such a rule is in particular exact for polynomials of degree $$\leq 1 \!\,$$, so by part (a) its weight and node must be $$w_1=2 \!\,$$ and $$x_1=\frac13 \!\,$$.

Let $$f(x)=x^2 \!\,$$.

Then,

$$\int_0^1 \frac{x^2}{x^{\frac12}}\,dx= \frac25 \!\,$$

but

$$w_1x_1^2=2(\frac13)^2=\frac29 \!\,$$,

a contradiction.

Solution 2c
Let $$f(x)=x^2 \!\,$$. Then $$f''(x)=2 \!\,$$.

Using the values computed in part (b), we have

$$\frac25-\frac29=\frac{8}{45}=c \cdot 2 \!\,$$

which implies

$$c=\frac{4}{45} \!\,$$
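Both the contradiction in part (b) and the value of $$c \!\,$$ can be confirmed symbolically. A minimal sketch (the error model $$I-w_1f(x_1)=c\,f''(\xi) \!\,$$ is the one used above; for $$f(x)=x^2 \!\,$$ the second derivative is constant, so $$c \!\,$$ comes out exactly):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
w1, x1 = sp.Integer(2), sp.Rational(1, 3)

f = x**2
exact = sp.integrate(f / sp.sqrt(x), (x, 0, 1))   # 2/5
approx = w1 * f.subs(x, x1)                       # 2/9: the rule is not exact (part b)

# Error = c * f''(xi); here f'' = 2 everywhere, so c = (exact - approx)/2.
print(exact, approx, (exact - approx) / 2)        # 2/5  2/9  4/45
```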

Solution 2d
There exist weights $$w_1,w_2,w_3 \!\,$$ (those of Simpson's rule: $$w_1=w_2=\frac{b-a}{6} \!\,$$, $$w_3=\frac{2(b-a)}{3} \!\,$$) such that if $$ f(x)\!\,$$ is a polynomial of degree $$\leq 3 \!\,$$,

$$\int_a^b f(x)dx= w_1f(a)+w_2f(b)+w_3f(\frac{a+b}{2}) \!\,$$

Hence, if $$g(x) \!\,$$ is a polynomial of degree $$\leq 3 \!\,$$ that agrees with $$f(x) \!\,$$ at the nodes $$a \!\,$$, $$b \!\,$$, and $$\frac{a+b}{2} \!\,$$, then

$$ \begin{align} \int_a^b f(x)dx &= w_1f(a)+w_2f(b)+w_3f(\frac{a+b}{2}) \\ &= w_1g(a)+w_2g(b)+w_3g(\frac{a+b}{2}) \\ &= \int_a^b g(x) dx \end{align} \!\,$$
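The exactness of Simpson's rule for cubics is itself a short symbolic computation. A minimal sketch (the coefficients $$c_0,\dots,c_3 \!\,$$ are symbolic placeholders for an arbitrary cubic):

```python
import sympy as sp

a, b, x, c0, c1, c2, c3 = sp.symbols('a b x c0 c1 c2 c3')

f = c0 + c1*x + c2*x**2 + c3*x**3                 # arbitrary cubic
mid = (a + b) / 2

# Simpson's rule with weights (b-a)/6, (b-a)/6, and 2(b-a)/3 at a, b, (a+b)/2.
simpson = (b - a)/6 * (f.subs(x, a) + f.subs(x, b) + 4*f.subs(x, mid))
exact = sp.integrate(f, (x, a, b))

print(sp.simplify(simpson - exact))               # 0: exact for every cubic
```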

Solution 3a: Convergence for any starting vector
Using the iteration $$ \,\! x_{k+1}=x_k + \alpha(b-Ax_k)$$, we have that

$$ \begin{align} e_{k+1}	&= x_{k+1}-x^* \\ &= x_k - x^* + \alpha(b-Ax_k)\\ &= x_k - x^* + \alpha(Ax^*-Ax_k)\\ &= e_k - \alpha A e_k \\ &= (I-\alpha A)e_k \end{align} $$

Then, taking norms, we obtain

$$ \,\! \| e_{k+1}\| \leq \|I-\alpha A \| \| e_k\|  $$.

In particular, considering $$ \,\! \| \cdot \|_2  $$, we have that

$$ \,\! \| e_{k+1}\|_2 \leq \|I-\alpha A \|_2 \| e_k\|_2  $$.

Now, in order to study the convergence of the method, we need to analyze $$ \,\! \|I-\alpha A \|_2  $$. We use the following characterization:

$$ \begin{align} \|I-\alpha A \|_2^2 &= \rho\left(  (I-\alpha A) (I-\alpha A)^* \right) \\ &= \rho\left(  I-\alpha A^*-\alpha A+ \alpha^2 AA^*\right) \end{align} $$

Let $$ \,\! \lambda $$ denote an eigenvalue of the matrix $$ \,\! A $$. Since $$ \,\! \|I-\alpha A\|_2 \geq \rho(I-\alpha A)=\max_\lambda |1-\alpha\lambda| $$, the condition $$ \rho\left(  (I-\alpha A) (I-\alpha A)^* \right) < 1\,\! $$ implies $$ \,\! |1-\alpha\lambda|^2<1 $$ for every eigenvalue, i.e.,

$$ 1-2\alpha Re(\lambda)+ \alpha^2 |\lambda|^2 < 1 \!\,$$

i.e.

$$ -2\alpha Re(\lambda)+ \alpha^2 |\lambda|^2 < 0 \!\,$$

The left-hand side is a quadratic in $$ \alpha\!\,$$ opening upward, so the inequality holds strictly between its two roots,

$$\alpha_1=0 \quad \alpha_2= 2\frac{Re(\lambda)}{|\lambda|^2} \!\,$$

Then, if $$ \,\! Re(\lambda)> 0 $$ for all the eigenvalues of the matrix $$ \,\! A $$, any $$ \,\! \alpha \in \left(0,\, \min_\lambda 2\frac{Re(\lambda)}{|\lambda|^2}\right) $$ gives $$ \,\! |1-\alpha\lambda|<1 $$ for every eigenvalue simultaneously, i.e., $$ \,\! \rho(I-\alpha A)<1 $$, and the method converges for any starting vector.
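A quick numerical illustration of this convergence criterion. A minimal sketch (the matrix $$ \,\! A $$ and vector $$ \,\! b $$ are illustrative choices with eigenvalues in the right half-plane, not data from the exam):

```python
import numpy as np

# Illustrative nonsymmetric matrix; its eigenvalues (5 +/- i*sqrt(3))/2
# have positive real part.
A = np.array([[3.0, 1.0],
              [-1.0, 2.0]])
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)

# Choose alpha inside (0, min over eigenvalues of 2*Re(lam)/|lam|^2).
lams = np.linalg.eigvals(A)
alpha = 0.9 * min(2 * lam.real / abs(lam)**2 for lam in lams)

x = np.array([10.0, -10.0])            # arbitrary starting vector
for _ in range(200):
    x = x + alpha * (b - A @ x)        # x_{k+1} = x_k + alpha (b - A x_k)

print(np.linalg.norm(x - x_star))      # ~0: the iteration converged
```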

Choosing alpha when A is symmetric
On the other hand, if the matrix $$ \,\! A $$ is symmetric, we have that $$ \,\! \|I-\alpha A \|_2 = \rho\left(  I-\alpha A \right) $$. Then using the Schur decomposition, we can find a unitary matrix $$ \,\! U $$ and a real diagonal matrix $$ \,\! \Lambda $$ such that $$ \,\! A=U^*\Lambda U $$. Assume additionally that $$ \,\! A $$ is positive definite, with eigenvalues ordered $$ \,\! 0 < \lambda_1 \leq \dots \leq \lambda_n $$. Consequently,

$$ \begin{align} \|I-\alpha A \|_2 &= \|I-\alpha \Lambda \|_2 \\ &= \rho(I-\alpha \Lambda) \\ &= \max_{i=1,\dots,n}|1-\alpha \lambda_i| \end{align} $$

This last expression is minimized over $$ \,\! \alpha $$ when the two extreme terms balance, $$ \,\! 1-\alpha \lambda_1=-(1-\alpha \lambda_n) $$, i.e., when $$ \,\! \alpha_{opt}=2/(\lambda_1+\lambda_n) $$. With this optimal value of $$ \,\! \alpha $$, we obtain that

$$ \,\! (I-\alpha_{opt} \Lambda)_{i,i}= 1-\frac{2}{\lambda_1+\lambda_n}\lambda_i, $$

which implies that

$$ \,\! \| I-\alpha_{opt} \Lambda \|_2= \frac{\lambda_n-\lambda_1}{\lambda_1+\lambda_n} $$.

Finally, we've obtained that

$$ \,\! \|e_{k+1}\|_2 \leq \frac{\lambda_n-\lambda_1}{\lambda_1+\lambda_n} \|e_{k} \|_2$$.
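This optimal choice is easy to verify numerically. A minimal sketch (the symmetric positive definite matrix below is an illustrative choice):

```python
import numpy as np

# Illustrative symmetric positive definite matrix.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

lam = np.linalg.eigvalsh(A)              # ascending: lam[0]=lambda_1, lam[-1]=lambda_n
alpha_opt = 2.0 / (lam[0] + lam[-1])

contraction = np.linalg.norm(np.eye(2) - alpha_opt * A, 2)
predicted = (lam[-1] - lam[0]) / (lam[0] + lam[-1])
print(contraction, predicted)            # both ~0.3194: the bound is attained
```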

Solution 3b
If $$Re(\lambda_i) >0 \!\,$$ for $$i=1,2,\ldots,n \!\,$$, then convergence occurs if

$$\alpha \in \left(0,\; \min_i 2\frac{Re(\lambda_i)}{|\lambda_i|^2}\right) \!\,$$

Hence $$\alpha >0 \!\,$$.

Similarly, if $$Re(\lambda_i) <0 \!\,$$ for $$i=1,2,\ldots,n \!\,$$, then convergence occurs if

$$\alpha \in \left(\max_i 2\frac{Re(\lambda_i)}{|\lambda_i|^2},\; 0\right) \!\,$$

Hence $$\alpha <0 \!\,$$.

If some eigenvalues have positive real part and others negative real part, then no real $$\alpha \!\,$$ yields convergence, since the two cases above would require $$\alpha \!\,$$ to be positive and negative simultaneously.

Solution 3c
Using the contraction property $$\|x-x_{k+1}\| \leq p\|x-x_k\| \!\,$$ and the triangle inequality,

$$ \begin{align} \|x-x_{k+1}\| &\leq p\|x-x_k\| = p\|x-x_{k+1}+x_{k+1}-x_k\| \\ &\leq p\|x-x_{k+1}\| + p \|x_{k+1}-x_k\| \\ (1-p)\|x-x_{k+1}\| &\leq p \|x_{k+1}-x_k\| \\ \|x-x_{k+1}\|     &\leq \frac{p}{1-p} \|x_{k+1}-x_k\| \end{align} \!\,$$
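This a posteriori bound can be watched in action on a simple contraction. A minimal sketch (the map $$g(x)=\cos x \!\,$$, its contraction factor $$p=\sin 1 \!\,$$ on the interval containing the iterates, and the reference fixed point are all illustrative):

```python
import math

g = math.cos                        # contraction on [cos(1), 1] with p = sin(1) ~ 0.84
p = math.sin(1.0)
x_star = 0.7390851332151607         # fixed point of cos, for comparison

x = 1.0
for _ in range(10):
    x_new = g(x)
    bound = p / (1 - p) * abs(x_new - x)   # a posteriori bound from above
    print(abs(x_star - x_new) <= bound)    # True at every step
    x = x_new
```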

Solution 3d
By Gerschgorin's theorem, all the eigenvalues (real here) lie between 1 and 5, i.e.

$$1 < \lambda_i < 5 \!\,$$

Therefore,

$$\rho(I-\alpha A) = \max_i |1-\alpha \lambda_i| \leq \max\left(|1-\alpha|,\, |1-5\alpha|\right) \!\,$$

since for $$\lambda \in [1,5] \!\,$$ the quantity $$|1-\alpha \lambda| \!\,$$ is largest at an endpoint. We want this bound to be less than 1; requiring both $$|1-\alpha| < 1 \!\,$$ and $$|1-5\alpha| < 1 \!\,$$ gives

$$0 < \alpha < \frac25 \!\,$$
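As a sanity check, any matrix whose Gerschgorin discs lie in $$(1,5) \!\,$$ should give a convergent iteration for such $$\alpha \!\,$$. A minimal sketch (the matrix below is a hypothetical stand-in, not the one from the exam):

```python
import numpy as np

# Hypothetical symmetric matrix: Gerschgorin discs centered at 3 with radius < 2,
# so every eigenvalue lies in (1, 5).
A = np.array([[3.0, 1.0, 0.5],
              [1.0, 3.0, 0.5],
              [0.5, 0.5, 3.0]])

for alpha in (0.1, 0.25, 0.39):            # values inside (0, 2/5)
    rho = max(abs(np.linalg.eigvals(np.eye(3) - alpha * A)))
    print(alpha, rho, rho < 1)             # spectral radius below 1 in each case
```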