Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2007

Solution 1
We want to show that


 * $$\,\! VV^{-1} = I$$

or equivalently, that the $$\,\!(i,j)$$ entry of $$\,\!VV^{-1}$$ is $$\,\! 1$$ for $$\,\!i=j$$ and $$\,\! 0$$ for $$\,\!i \neq j $$, i.e.



$$ \{VV^{-1}\}_{ij}=\delta_{ij}=\left\{ \begin{array}{cc} 1 & \mbox{ if } i=j \\ 0 & \mbox{ if } i\neq j \end{array}\right. $$

First notice that the $$\,\!i$$th row of $$\,\!V$$ times the $$\,\!j$$th column of $$\,\!V^{-1}$$, whose entries we denote $$\alpha_k^{(j)}\!\,$$, gives



$$ \{VV^{-1}\}_{ij} = \begin{bmatrix} 1 & x_i & x_i^2 & \cdots & x_i^{n-1} \end{bmatrix} \begin{bmatrix} \alpha_1^{(j)} \\ \alpha_2^{(j)} \\ \vdots \\ \alpha_n^{(j)} \end{bmatrix} =\sum_{k=1}^n\alpha_k^{(j)}x_i^{k-1} $$

Also notice that the $$\,\!j$$th column of $$\,\!V^{-1}$$ holds the coefficients of the $$\,\!j$$th Lagrange basis polynomial $$\,\!p_j$$, which by construction satisfies



$$ p_j(x_i)=\sum_{k=1}^n\alpha_k^{(j)}x_i^{k-1}=\delta_{ij} $$

Hence,



$$ \{VV^{-1}\}_{ij}=p_j(x_i)=\delta_{ij} $$
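
As a numerical sanity check of this identity, here is a short NumPy sketch (the nodes and variable names are arbitrary choices, not part of the problem): the Lagrange coefficients are built by interpolation, without ever inverting $$V\!\,$$, and their product with $$V\!\,$$ comes out as the identity.

```python
import numpy as np

# Nodes x_1, ..., x_n (arbitrary but distinct) and the Vandermonde matrix
# V[i, k] = x_i^k for k = 0, ..., n-1, matching the row vector used above.
x = np.array([0.0, 1.0, 2.0, 4.0])
n = len(x)
V = np.vander(x, increasing=True)

# Column j of V^{-1} should hold the coefficients alpha_k^{(j)} of the j-th
# Lagrange basis polynomial p_j, defined by p_j(x_i) = delta_ij.  Build those
# coefficients directly by interpolation instead of inverting V.
cols = []
for j in range(n):
    delta = np.zeros(n)
    delta[j] = 1.0
    # polyfit returns highest-degree-first; reverse to ascending powers.
    cols.append(np.polyfit(x, delta, n - 1)[::-1])
A = np.column_stack(cols)

# {V A}_{ij} = p_j(x_i) = delta_ij, so V A should be the identity.
assert np.allclose(V @ A, np.eye(n))
```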

Shifted Inverse Power Method
Let



$$ \tilde{A} =(A-\sigma I)^{-1} $$

Then,



$$ \tilde{\lambda}_i =\frac{1}{\lambda_i-\sigma} $$

Since $$\,\!\sigma$$ is closer to $$\,\!\lambda_j$$ than to any other eigenvalue, this implies


 * $$| \tilde{\lambda}_j | > | \tilde{\lambda}_i | \mbox{ for all } i\neq j  $$

Since shifting the eigenvalues and inverting the matrix does not affect the eigenvectors,


 * $$\tilde{A}v_i =\tilde{\lambda}_i v_i \quad \quad i=1,2,\ldots,n $$

Assume $$\| v_i \| =1$$ for all $$\!\, i$$. Generate $$w_0,w_1,w_2, \ldots$$ to find $$\,\!v_j$$. Start with arbitrary $$\,\!w_0$$ such that $$\,\!\| w_0 \|=1$$.

For $$\,\!k=0,1,2,\ldots$$


 * $$\,\!\hat{w}_{k+1} = \tilde{A} w_k$$


 * $$\,\!w_{k+1}=\frac{\hat{w}_{k+1}}{\| \hat{w}_{k+1} \|}$$


 * $$ \tilde{\lambda}_j^{(k+1)}=\frac{(\hat{w}_{k+1},w_k)}{(w_k,w_k)}=\frac{(\tilde{A}w_k,w_k)}{(w_k,w_k)} $$       (Rayleigh quotient)

End

The Rayleigh quotient approximates $$\tilde{\lambda}_j\!\,$$, so the eigenvalue of $$A\!\,$$ is recovered as $$\lambda_j \approx \sigma + 1/\tilde{\lambda}_j^{(k+1)}\!\,$$.
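
A minimal sketch of this loop, assuming NumPy (the function name, tolerance, and starting vector are illustrative choices):

```python
import numpy as np

def shifted_inverse_power(A, sigma, tol=1e-12, max_iter=1000):
    """Sketch of the shifted inverse power iteration described above."""
    n = A.shape[0]
    I = np.eye(n)
    w = np.random.default_rng(0).standard_normal(n)
    w /= np.linalg.norm(w)                          # ||w_0|| = 1
    mu = 0.0
    for _ in range(max_iter):
        w_hat = np.linalg.solve(A - sigma * I, w)   # w_hat_{k+1} = (A - sigma I)^{-1} w_k
        mu_new = w_hat @ w                          # Rayleigh quotient for Atilde (||w_k|| = 1)
        w = w_hat / np.linalg.norm(w_hat)           # w_{k+1}
        if abs(mu_new - mu) < tol * abs(mu_new):    # stop once the estimate settles
            mu = mu_new
            break
        mu = mu_new
    return sigma + 1.0 / mu, w                      # lambda_j ~ sigma + 1 / lambda_tilde_j
```

For instance, with $$A = \mathrm{diag}(1,3,10)\!\,$$ and $$\sigma = 2.9\!\,$$, the returned eigenvalue is $$3\!\,$$, the one nearest the shift.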

Convergence of Power Method
If $$\,\! | \lambda_1 | > |\lambda_i|$$, for all $$\!\, i \neq 1$$, then $$\,\! A^k w_0$$ will be dominated by $$\,\! v_1$$.

Since $$ v_1, v_2, \ldots, v_n $$ are linearly independent, they form a basis of $$\mathbb{R}^n\,\!$$. Hence,


 * $$w_0 =\alpha_1v_1+\alpha_2v_2+\ldots+\alpha_nv_n \,\!$$ where we assume $$\alpha_1 \neq 0 \,\!$$

From the definition of eigenvectors,

$$ \begin{align} Aw_0 &=\alpha_1 \lambda_1 v_1 + \alpha_2 \lambda_2 v_2 + \ldots +\alpha_n\lambda_n v_n\\ A^2 w_0 &= \alpha_1 \lambda^2_1 v_1 + \alpha_2 \lambda_2^2 v_2 + \ldots + \alpha_n\lambda_n^2 v_n\\ &\vdots \\ A^k w_0 &= \alpha_1 \lambda^k_1 v_1 + \alpha_2 \lambda_2^k v_2 + \ldots + \alpha_n\lambda_n^k v_n \end{align} $$

To find a general form of $$w_k \,\!$$, the approximate eigenvector at the kth step, examine a few steps of the algorithm:



$$ \begin{align} \hat{w}_1 &=A w_0  \\ w_1       &=\frac{\hat{w}_1}{\| \hat{w}_1 \|}=\frac{Aw_0}{\|A w_0\|} \\ \hat{w}_2 &=Aw_1=\frac{A^2 w_0}{\| A w_0 \|} \\ w_2      &= \frac{\hat{w}_2}{\|\hat{w}_2\|}=\frac{\frac{A^2w_0}{\|Aw_0\|}}{\frac{\|A^2w_0\|}{\|Aw_0\|}}=\frac{A^2w_0}{\|A^2w_0\|} \\ \hat{w}_3 &=Aw_2=\frac{A^3w_0}{\|A^2 w_0\|} \\ w_3       &=\frac{\frac{A^3w_0}{\|A^2w_0\|}}{\frac{\|A^3w_0\|}{\|A^2w_0\|}}=\frac{A^3w_0}{\|A^3w_0\|} \end{align} $$

From induction,


 * $$ w_k = \frac{A^kw_0}{\|A^kw_0\|} $$

Hence,



$$ \begin{align} w_k &= \frac{A^kw_0}{\|A^kw_0\|} \\ &= \frac{\alpha_1 \lambda_1^kv_1+\alpha_2\lambda_2^kv_2+\ldots+\alpha_n\lambda_n^kv_n}{\|A^kw_0\|} \\ &= \frac{\lambda_1^k\left(\alpha_1 v_1+\alpha_2\left(\frac{\lambda_2}{\lambda_1}\right)^kv_2+\ldots+\alpha_n\left(\frac{\lambda_n}{\lambda_1}\right)^kv_n\right)}{\|A^kw_0\|} \end{align} $$

Comparing a weighted $$w_k\!\,$$ and $$v_1\!\,$$,



$$ \begin{align} \left\| \frac{\|A^kw_0\|}{\lambda_1^k}w_k-\alpha_1v_1 \right\| &=\left\| \alpha_2\left(\frac{\lambda_2}{\lambda_1}\right)^kv_2+\ldots+\alpha_n\left(\frac{\lambda_n}{\lambda_1}\right)^kv_n \right\| \\ &\leq |\alpha_2| \left|\frac{\lambda_2}{\lambda_1}\right|^k+\ldots+|\alpha_n|\left|\frac{\lambda_n}{\lambda_1}\right|^k \end{align} $$

since $$\| v_i \|=1$$ by assumption.

The above expression goes to $$\,\!0$$ as $$k \rightarrow \infty$$ since $$\,\! | \lambda_1 | > |\lambda_i|$$ for all $$\!\, i \neq 1$$. Hence as $$\!\,k$$ grows, $$\!\,w_k$$ becomes parallel to $$\!\,v_1$$, and because $$\| w_k \| =1$$, it must be that $$w_k \rightarrow \pm v_1$$.
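
The geometric rate $$\left|\frac{\lambda_2}{\lambda_1}\right|^k\!\,$$ is easy to observe numerically; below is a minimal sketch (the diagonal test matrix is an arbitrary choice with known $$\lambda_1=4\!\,$$, $$\lambda_2=1\!\,$$, $$v_1=e_1\!\,$$):

```python
import numpy as np

# Power iteration on a matrix with known spectrum: lambda_1 = 4, lambda_2 = 1,
# dominant eigenvector v_1 = e_1.  The error should shrink like (1/4)^k.
A = np.diag([4.0, 1.0, 0.5])
v1 = np.array([1.0, 0.0, 0.0])
w = np.ones(3) / np.sqrt(3.0)           # w_0 with ||w_0|| = 1 and alpha_1 != 0
for k in range(1, 8):
    w = A @ w
    w /= np.linalg.norm(w)              # w_k = A^k w_0 / ||A^k w_0||
    err = min(np.linalg.norm(w - v1), np.linalg.norm(w + v1))
    print(k, err)                       # decreases by roughly a factor of 4 per step
```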

Derivation of Iterations
Let $$ A=D+L+U \!\,$$ where $$D \!\,$$ is a diagonal matrix, $$L\!\,$$ is a strictly lower triangular matrix (zero diagonal), and $$U\!\,$$ is a strictly upper triangular matrix (zero diagonal).

The Jacobi iteration can be found by substituting into $$ Ax=b \!\,$$, grouping $$ L+U \!\,$$, and solving for $$ x \!\,$$ i.e.

$$ \begin{align} Ax          &=   b  \\ (D+L+U)x    &=   b  \\ Dx + (L+U)x &=   b  \\ Dx          &=   b - (L+U)x \\ x           &=  D^{-1} (b- (L+U)x ) \end{align} $$

Since $$L=0\!\,$$ by hypothesis, the iteration is


 * $$ x^{(i+1)} =  D^{-1} (b- Ux^{(i)}) \!\,$$

Similarly, the Gauss-Seidel iteration can be found by substituting into $$ Ax=b \!\,$$, grouping $$ D+L \!\,$$, and solving for $$ x \!\,$$ i.e.

$$ \begin{align} Ax          &=   b  \\ (D+L+U)x    &=   b  \\ (D+L)x+Ux   &=   b  \\ (D+L)x      &=   b - Ux \\ x           &=  (D+L)^{-1}(b-Ux) \end{align} $$

Since $$L=0\!\,$$ by hypothesis, the iteration takes the identical form as the Jacobi iteration:


 * $$ x^{(i+1)} =  D^{-1} (b- Ux^{(i)}) \!\,$$
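
A minimal sketch of this shared iteration, assuming $$A\!\,$$ is upper triangular as in the problem (the function name and argument names are illustrative):

```python
import numpy as np

def jacobi_gs_upper(A, b, x0, num_iter):
    """Sketch of x^(i+1) = D^{-1} (b - U x^(i)) for upper triangular A."""
    d = np.diag(A)                      # D is diagonal, so D^{-1} acts entrywise
    U = np.triu(A, k=1)                 # strictly upper triangular part of A
    x = x0.copy()
    for _ in range(num_iter):
        x = (b - U @ x) / d
    return x
```

For a general $$A\!\,$$, Gauss-Seidel would instead apply $$(D+L)^{-1}\!\,$$ by forward substitution; here $$L=0\!\,$$ collapses both methods to the same loop.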

Convergence in Finite Number of Steps
Jacobi and Gauss-Seidel are iterative methods that split the matrix $$ A \in \mathbb{R}^{n \times n } \!\,$$ into $$ D \!\,$$, $$ U \!\,$$, and $$ L \!\,$$: the diagonal, the strictly upper triangular part (everything above the diagonal), and the strictly lower triangular part (everything below the diagonal), respectively. Their iterations are


 * $$x^{(i+1)}=(D+L)^{-1}(b-Ux^{(i)})\!\,$$       (Gauss-Seidel)


 * $$x^{(i+1)}=D^{-1}(b-(U+L)x^{(i)})\!\,$$       (Jacobi)

In our case $$ A \!\,$$ is upper triangular, so $$ L \!\,$$ is the zero matrix. As a result, the Gauss-Seidel and Jacobi methods take on the following identical form


 * $$x^{(i+1)}=D^{-1}(b-Ux^{(i)})\!\,$$

Additionally, $$ x\!\, $$ can be written


 * $$x=D^{-1}(b-Ux)\!\,$$

Subtracting $$ x\!\, $$ from $$ x^{(i+1)}\!\, $$, we get the error recursion

$$ \begin{align} e^{(i+1)} &= x^{(i+1)}-x \\ &= D^{-1}(b-Ux^{(i)}) - D^{-1}(b-Ux) \\ &= D^{-1}U(x - x^{(i)}) \\ &= -D^{-1}Ue^{(i)} \\ &= -D^{-1}U(-D^{-1}Ue^{(i-1)}) = (D^{-1}U)^2e^{(i-1)} \\ &\vdots \\ &= (-1)^{i+1}(D^{-1}U)^{i+1}e^{(0)} \end{align} $$

In our problem, $$D^{-1}\!\,$$ is diagonal and $$U\!\,$$ is upper triangular with zeros along the diagonal. Notice that the product $$D^{-1}U\!\,$$ will also be upper triangular with zeros along the diagonal.

Let $$R = D^{-1}U\!\,$$, $$R\in\mathbb{R}^{n\times n}\!\,$$



$$ R = \begin{pmatrix} 0  &   *    &   *    & \cdots & * \\ 0  &   0    &   *    & \cdots & * \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0  & \cdots & \cdots &    0   & * \\ 0  & \cdots & \cdots &    0   & 0 \end{pmatrix} $$

Also, let $$\tilde{R}\in\mathbb{R}^{n\times n}\!\,$$ be the related matrix



$$ \begin{align} \tilde{R} &= \left( \begin{array}{ccc|cccc} 0 & \cdots & 0 & * & * & \cdots & * \\ 0 & \cdots & 0 & 0 & * & \cdots & * \\ \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & 0 & * \\ \hline 0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\ \end{array} \right) \\ &= \left( \begin{array}{c|c} \mathbf{a} & T \\ \hline \mathbf{0} & \mathbf{a}^T \\ \end{array} \right) \end{align} $$

Here $$\mathbf{a}\!\,$$ is a $$k\times (n-k)\!\,$$ zero block, $$T\!\,$$ is a $$k\times k\!\,$$ upper triangular block, and $$\mathbf{0}\!\,$$ is the $$(n-k)\times (n-k)\!\,$$ zero block.

Finally, the product $$ R\tilde{R} \!\,$$ (call it (1)) is



$$ \begin{align} R\tilde{R} &= \begin{pmatrix} 0  &   *    &   *    & \cdots & * \\ 0  &   0    &   *    & \cdots & * \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0  & \cdots & \cdots &    0   & * \\ 0  & \cdots & \cdots &    0   & 0 \end{pmatrix} \left( \begin{array}{ccc|cccc} 0 & \cdots & 0 & * & * & \cdots & * \\ 0 & \cdots & 0 & 0 & * & \cdots & * \\ \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & 0 & * \\ \hline 0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\ \end{array} \right) \\ &= \left( \begin{array}{ccc|ccccc} 0 & \cdots & 0 & 0 & * & \cdots & \cdots & * \\ 0 & \cdots & 0 & 0 & 0 & * & \cdots & * \\ \vdots & \vdots & \vdots & \vdots & \cdots & \cdots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & \cdots & 0 & * \\ \hline 0 & \cdots & 0 & 0 & \cdots & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \cdots & \cdots & \vdots & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & \cdots & 0 & 0 \\ \end{array} \right) \\ &= \left( \begin{array}{c|c} \mathbf{a} & \tilde{T} \\ \hline \mathbf{0} & \mathbf{a}^T \\ \end{array} \right) \end{align} $$

Here $$\tilde{T}\!\,$$ is almost identical in structure to $$T\!\,$$, except that its diagonal elements are zeros.

At this point the convergence in $$n\!\,$$ steps (the size of the starting matrix) should be apparent: $$R\!\,$$ itself has the form of $$\tilde{R}\!\,$$ with only the main diagonal zeroed out, and each multiplication by $$R\!\,$$ zeroes out one additional super-diagonal. After $$n-1\!\,$$ applications of $$R\!\,$$, every super-diagonal has been cleared and the result is the zero matrix of size $$n\!\,$$.

In brief, $$R^n\!\,$$ is the zero matrix of size $$n\!\,$$.
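
A quick numerical check of this nilpotency claim (the size $$n=5\!\,$$ and the random entries are arbitrary choices):

```python
import numpy as np

# A strictly upper triangular R in R^{5x5}: R^5 = 0, while R^4 can still
# have a nonzero (1, n) entry.
rng = np.random.default_rng(0)
n = 5
R = np.triu(rng.standard_normal((n, n)), k=1)
assert np.allclose(np.linalg.matrix_power(R, n), 0.0)
print(np.linalg.matrix_power(R, n - 1))   # only the top-right entry survives
```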

Therefore $$ e^{(n)}=(-1)^{n}(D^{-1}U)^{n}e^{(0)}\equiv 0 \!\, $$, i.e. the Jacobi and Gauss-Seidel methods used to solve $$A\mathbf{x}=b\!\,$$ converge in at most $$n\!\,$$ steps when $$ A\in\mathbb{R}^{n\times n}\!\,$$ is upper triangular.
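
Putting the pieces together, a short sketch that runs the shared iteration on an arbitrary upper triangular system and confirms the error vanishes (up to rounding) after $$n\!\,$$ steps:

```python
import numpy as np

# Arbitrary well-conditioned upper triangular system of size n = 4.
rng = np.random.default_rng(1)
n = 4
A = np.triu(rng.standard_normal((n, n))) + 5.0 * np.eye(n)
b = rng.standard_normal(n)
x_exact = np.linalg.solve(A, b)

# n sweeps of x^(i+1) = D^{-1} (b - U x^(i)) starting from x^(0) = 0.
x = np.zeros(n)
for _ in range(n):
    x = (b - np.triu(A, k=1) @ x) / np.diag(A)
print(np.linalg.norm(x - x_exact))   # ~1e-16: converged in exactly n steps
```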

Examples of (1)
$$ n=3 \!\,$$


 * $$ R = \begin{pmatrix} 0 &  *  &  *  \\   0  &  0  &  *  \\   0  &  0  &  0  \\ \end{pmatrix} $$


 * $$ R^2 = \begin{pmatrix} 0 &  *  &  *  \\   0  &  0  &  *  \\   0  &  0  &  0  \\ \end{pmatrix} \begin{pmatrix} 0 &  *  &  *  \\   0  &  0  &  *  \\   0  &  0  &  0  \\ \end{pmatrix} = \begin{pmatrix} 0 &  0  &  *  \\   0  &  0  &  0  \\   0  &  0  &  0  \\ \end{pmatrix} $$


 * $$ R^3 = \begin{pmatrix} 0 &  *  &  *  \\   0  &  0  &  *  \\   0  &  0  &  0  \\ \end{pmatrix} \begin{pmatrix} 0 &  0  &  *  \\   0  &  0  &  0  \\   0  &  0  &  0  \\ \end{pmatrix} = \begin{pmatrix} 0 &  0  &  0  \\   0  &  0  &  0  \\   0  &  0  &  0  \\ \end{pmatrix} $$

$$ n=4 \!\,$$


 * $$ R = \begin{pmatrix} 0 &  *  &  *  &  *  \\   0  &  0  &  *  &  *  \\   0  &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} $$


 * $$ R^2 = \begin{pmatrix} 0 &  *  &  *  &  *  \\   0  &  0  &  *  &  *  \\   0  &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} \begin{pmatrix} 0 &  *  &  *  &  *  \\   0  &  0  &  *  &  *  \\   0  &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} = \begin{pmatrix} 0 &  0  &  *  &  *  \\   0  &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} $$


 * $$ R^3 = \begin{pmatrix} 0 &  *  &  *  &  *  \\   0  &  0  &  *  &  *  \\   0  &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} \begin{pmatrix} 0 &  0  &  *  &  *  \\   0  &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} = \begin{pmatrix} 0 &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} $$


 * $$ R^4 = \begin{pmatrix} 0 &  *  &  *  &  *  \\   0  &  0  &  *  &  *  \\   0  &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} \begin{pmatrix} 0 &  0  &  0  &  *  \\   0  &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} = \begin{pmatrix} 0 &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\   0  &  0  &  0  &  0  \\ \end{pmatrix} $$