Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/August 2005

Find Orthogonal Polynomials w.r.t. Weighting Function
For this problem, we need the orthogonal polynomials $$P_1,P_2,\mbox{ and }P_3 \!\,$$ with respect to the weighted inner product


 * $$ \left \langle f,g \right \rangle = \int_{-1}^1 f(x)g(x)w(x) dx \!\,$$

where $$w(x) = x^4 \!\,$$ in our case. This can be done using Gram-Schmidt orthogonalization on the monomial basis $$\left \{ 1,x,x^2,x^3 \right \} \!\,$$.

Zero Order Polynomial
Let $$P_0(x) = 1 \!\,$$

First Order Polynomial
$$ \begin{align} P_1(x) &= x - \frac{\left \langle 1,x \right \rangle}{\left \langle 1,1 \right \rangle}\cdot 1 \\ &= x - \underbrace{\frac{\int_{-1}^1 x^5 dx}{\int_{-1}^1 x^4 dx}}_{\frac{0}{2/5}}      \\ &= x \end{align} $$

Second Order Polynomial
$$ \begin{align} P_2(x) &= x^2 - \frac{\left \langle 1,x^2 \right \rangle}{\left \langle 1,1 \right \rangle}\cdot 1 - \frac{\left \langle x,x^2 \right \rangle}{\left \langle x,x \right \rangle}\cdot x \\ &= x^2 - \frac{\int_{-1}^1 x^6 dx}{\int_{-1}^1 x^4 dx} - x \cdot \underbrace{\frac{\int_{-1}^1 x^7 dx}{\int_{-1}^1 x^6 dx}}_{\frac{0}{2/7}} \\ &= x^2 - \frac{5}{7} \end{align} $$

Third Order Polynomial
$$ \begin{align} P_3(x) &= x^3 - \frac{\left \langle 1,x^3 \right \rangle}{\left \langle 1,1 \right \rangle}\cdot 1 - \frac{\left \langle x,x^3 \right \rangle}{\left \langle x,x \right \rangle}\cdot x              - \frac{\left \langle x^2 - \frac57,x^3 \right \rangle}{\left \langle x^2 - \frac{5}{7}, x^2 - \frac{5}{7} \right \rangle}\cdot \left(x^2 - \frac{5}{7}\right) \\ &= x^3 - 0 - x \cdot \underbrace{\frac{\int_{-1}^1 x^8 dx}{\int_{-1}^1 x^6 dx}}_{\frac{7}{9}} - 0 \\ &= x^3 - \frac{7}{9} x \end{align} $$
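The Gram-Schmidt computation above can be repeated symbolically as a check; a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
w = x**4  # the weight function for this problem

def inner(f, g):
    """Weighted inner product <f, g> = int_{-1}^{1} f g w dx."""
    return sp.integrate(f * g * w, (x, -1, 1))

# Gram-Schmidt on the monomial basis {1, x, x^2, x^3}
P = []
for monomial in [sp.Integer(1), x, x**2, x**3]:
    p = monomial - sum(inner(q, monomial) / inner(q, q) * q for q in P)
    P.append(sp.expand(p))

# matches the hand computation: P_2 = x^2 - 5/7, P_3 = x^3 - (7/9)x
assert P[2] == x**2 - sp.Rational(5, 7)
assert P[3] == x**3 - sp.Rational(7, 9) * x
# and the resulting polynomials are pairwise orthogonal
assert all(inner(P[i], P[j]) == 0 for i in range(4) for j in range(i))
```

The same loop extends to higher degrees by appending monomials to the basis list.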

Find Roots of Polynomials
The roots of these polynomials will be the interpolation nodes used in Gauss Quadrature.

$$ \begin{array}{|c|c|} \hline \mbox{Polynomial} & \mbox{Roots} \\ \hline P_1(x) = x              & 0 \\ P_2(x) = x^2 -\frac57   & \pm \sqrt{\frac57} \\ P_3(x) = x^3 -\frac79 x & 0,\ \pm \sqrt{\frac79} \\ \hline \end{array} $$

Formula to Compute Coefficients
In Gaussian quadrature we have


 * $$w_i = \int_{-1}^1 l_i(x)x^4 dx \!\,$$, where


 * $$l_i(x) = \prod_{j=1,j \ne i}^n \frac{x-x_j}{x_i-x_j} \!\,$$

1 point formula
$$ l_1(x) = 1 \!\,$$

$$w_1 = \int_{-1}^1 x^4 dx = \frac25 \!\,$$

$$ \int_{-1}^1 f(x)x^4 dx \approx \frac25 f(x_1) \!\,$$

where $$x_1 \!\,$$ is the root of $$P_1(x)\!\,$$.

2 point formula
$$ l_1(x) = \frac{x-x_2}{x_1-x_2} \!\,$$

$$ l_2(x) = \frac{x-x_1}{x_2-x_1} \!\,$$

$$ w_1 = \int_{-1}^1 \frac{x-x_2}{x_1-x_2} x^4 dx = \frac{2x_2}{5(x_2-x_1)} \!\,$$

$$ w_2 = \int_{-1}^1 \frac{x-x_1}{x_2-x_1} x^4 dx = \frac{2x_1}{5(x_1-x_2)} \!\,$$

$$ \int_{-1}^1 f(x)x^4 dx \approx w_1 f(x_1) + w_2 f(x_2) \!\,$$

where $$x_1 \mbox{ and }x_2 \!\,$$ are the roots of $$P_2(x) \!\,$$.
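A quick spot check in plain Python, using only the nodes and weights derived above: a 2-point Gauss rule must integrate every polynomial of degree at most $$2n-1 = 3 \!\,$$ exactly against the weight $$x^4 \!\,$$.

```python
import math

x2 = math.sqrt(5 / 7)            # roots of P_2 are +/- sqrt(5/7)
x1 = -x2
w1 = 2 * x2 / (5 * (x2 - x1))    # = 1/5
w2 = 2 * x1 / (5 * (x1 - x2))    # = 1/5

def rule2(f):
    return w1 * f(x1) + w2 * f(x2)

# exact moments: int_{-1}^{1} x^k * x^4 dx = 2/(k+5) for even k, 0 for odd k
for k in range(4):               # degrees 0..3 = 2n-1
    exact = 0.0 if k % 2 else 2 / (k + 5)
    assert abs(rule2(lambda t: t**k) - exact) < 1e-12
```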

3 point formula
$$ l_1(x) = \frac{x-x_2}{x_1-x_2} \frac{x-x_3}{x_1-x_3} \!\,$$

$$ l_2(x) = \frac{x-x_1}{x_2-x_1} \frac{x-x_3}{x_2-x_3} \!\,$$

$$ l_3(x) = \frac{x-x_1}{x_3-x_1} \frac{x-x_2}{x_3-x_2} \!\,$$

$$ w_1 = \int_{-1}^1 \frac{x-x_2}{x_1-x_2} \frac{x-x_3}{x_1-x_3} x^4 dx = \frac{1}{(x_1-x_2)(x_1-x_3)} \left( \frac27 + \frac{2x_2x_3}{5} \right) \!\,$$

$$ w_2 = \int_{-1}^1 \frac{x-x_1}{x_2-x_1} \frac{x-x_3}{x_2-x_3} x^4 dx = \frac{1}{(x_2-x_1)(x_2-x_3)} \left( \frac27 + \frac{2x_1x_3}{5} \right) \!\,$$

$$ w_3 = \int_{-1}^1 \frac{x-x_1}{x_3-x_1} \frac{x-x_2}{x_3-x_2} x^4 dx = \frac{1}{(x_3-x_1)(x_3-x_2)} \left( \frac27 + \frac{2x_1x_2}{5} \right) \!\,$$

$$ \int_{-1}^1 f(x)x^4 dx \approx w_1 f(x_1) + w_2 f(x_2) + w_3 f(x_3) \!\,$$

where $$x_1 \mbox{, } x_2 \mbox{ and }x_3 \!\,$$ are the roots of $$P_3(x) \!\,$$.
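The same spot check for the 3-point rule, which must be exact through degree $$2n-1 = 5 \!\,$$:

```python
import math

x1, x2, x3 = -math.sqrt(7 / 9), 0.0, math.sqrt(7 / 9)   # roots of P_3
w1 = (2 / 7 + 2 * x2 * x3 / 5) / ((x1 - x2) * (x1 - x3))
w2 = (2 / 7 + 2 * x1 * x3 / 5) / ((x2 - x1) * (x2 - x3))
w3 = (2 / 7 + 2 * x1 * x2 / 5) / ((x3 - x1) * (x3 - x2))

def rule3(f):
    return w1 * f(x1) + w2 * f(x2) + w3 * f(x3)

# exact moments: int_{-1}^{1} x^k * x^4 dx = 2/(k+5) for even k, 0 for odd k
for k in range(6):               # degrees 0..5 = 2n-1
    exact = 0.0 if k % 2 else 2 / (k + 5)
    assert abs(rule3(lambda t: t**k) - exact) < 1e-12
```

Note that the weights sum to $$\int_{-1}^1 x^4 dx = \frac25 \!\,$$, as they must (take $$f \equiv 1 \!\,$$).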

Derive Error Bound
We know that this quadrature is exact for all polynomials of degree at most $$2n-1 \!\,$$.

We choose a polynomial $$ p\!\,$$ of degree at most $$2n-1 \!\,$$ that Hermite-interpolates $$f \!\,$$ at the nodes, i.e.

$$p(x_i)=f(x_i) \quad p'(x_i)=f'(x_i) \quad (0\leq i \leq n-1) \!\,$$

The error for this interpolation is

$$f(x)-p(x)=\frac{f^{(2n)}(\zeta(x))}{(2n)!} \prod_{i=0}^{n-1}(x-x_i)^2 \!\,$$

Compute the error of the quadrature as follows:

$$ \begin{align} E &= \int_{-1}^1 f(x)x^4dx - \sum_{i=0}^{n-1} w_i f(x_i) \\ &= \int_{-1}^1 f(x)x^4dx - \sum_{i=0}^{n-1} w_i p(x_i) \\ &= \int_{-1}^1 f(x)x^4dx - \int_{-1}^1 p(x) x^4 dx \\ &= \int_{-1}^1 (f(x)-p(x)) x^4dx \\ &= \int_{-1}^1 \frac{f^{(2n)}(\zeta(x))}{(2n)!} \prod_{i=0}^{n-1}(x-x_i)^2 \cdot x^4 dx \\ &= \frac{f^{(2n)}(\xi)} {(2n)!} \int_{-1}^1 x^4 \cdot \underbrace{\prod_{i=0}^{n-1}(x-x_i)^2}_{(p_n(x))^2} dx \end{align} \!\,$$

where the last line follows from the mean value theorem for integrals, which applies because the factor $$x^4 \prod_{i=0}^{n-1}(x-x_i)^2 \!\,$$ does not change sign on $$[-1,1] \!\,$$.

Notice that $$p_n(x)\!\,$$ is simply the $$n\!\,$$th polynomial orthogonal with respect to the weight function, since the $$ x_i\!\,$$ are its roots.

Hence, the error for 1 point Gaussian quadrature is

$$ \begin{align} E &= \frac{f^{(2)}(\xi)} {(2)!} \int_{-1}^1  x^4 \cdot x^2 dx \\ &= \frac{f^{(2)}(\xi)} {7} \end{align} \!\,$$
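This constant can be confirmed with $$f(x)=x^2 \!\,$$: since $$f'' \equiv 2 \!\,$$, the error formula predicts $$E = \frac27 \!\,$$ exactly, and indeed the one-point rule misses the true integral $$\int_{-1}^1 x^6 dx = \frac27 \!\,$$ by that amount. A plain-Python check:

```python
f = lambda t: t**2
exact = 2 / 7                 # int_{-1}^{1} x^2 * x^4 dx = int x^6 dx
approx = (2 / 5) * f(0.0)     # one-point rule evaluated at the node x_1 = 0
error = exact - approx
assert abs(error - 2 / 7) < 1e-15   # equals f''(xi)/7 = 2/7
```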

The error for 2 point quadrature:

$$ \begin{align} E &= \frac{f^{(4)}(\xi)} {4!} \int_{-1}^1  x^4 \cdot \left(x^2-\frac57\right)^2 dx = \frac{f^{(4)}(\xi)}{4!} \cdot \frac{8}{441} = \frac{f^{(4)}(\xi)}{1323} \end{align} \!\,$$

The error for 3 point quadrature:

$$ \begin{align} E &= \frac{f^{(6)}(\xi)} {6!} \int_{-1}^1  x^4 \cdot \left(x^3-\frac79 x\right)^2 dx = \frac{f^{(6)}(\xi)}{6!} \cdot \frac{8}{891} = \frac{f^{(6)}(\xi)}{80190} \end{align} \!\,$$

Jacobi Method
Decompose $$A \!\,$$ into its diagonal, lower, and upper parts i.e.

$$A=D+(L+U) \!\,$$

Derive Jacobi iteration by solving for x as follows:

$$ \begin{align} Ax & =b \\ (D+(L+U))x &= b\\ Dx+(L+U)x &= b\\ Dx        &= b-(L+U)x \\ x &= D^{-1}(b-(L+U)x) \end{align} \!\,$$

which gives the iteration $$x^{(k+1)} = D^{-1}\left(b-(L+U)x^{(k)}\right) \!\,$$.
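The last line is exactly the update computed on each sweep; a minimal sketch with NumPy, run on a small hypothetical diagonally dominant system (the matrix here is illustrative, not the one from the exam):

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Jacobi sweep: x <- D^{-1} (b - (L+U) x)."""
    D = np.diag(np.diag(A))
    LU = A - D                        # L + U: everything off the diagonal
    x = x0.astype(float)
    for _ in range(iters):
        x = np.linalg.solve(D, b - LU @ x)
    return x

# hypothetical test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b, np.zeros(2))
assert np.allclose(A @ x, b)          # converged to the true solution
```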

Gauss-Seidel Method
Decompose $$A \!\,$$ into its diagonal, lower, and upper parts i.e.

$$A=(D+L)+U \!\,$$

Derive the Gauss-Seidel iteration by solving for x as follows:

$$ \begin{align} Ax & =b \\ ((D+L)+U)x &= b\\ (D+L)x+Ux &= b\\ (D+L)x    &= b-Ux \\ x &= (D+L)^{-1}(b-Ux) \end{align} \!\,$$

which gives the iteration $$x^{(k+1)} = (D+L)^{-1}\left(b-Ux^{(k)}\right) \!\,$$.
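The analogous sketch for Gauss-Seidel, again with NumPy and a hypothetical test system; each sweep solves a lower-triangular system with $$D+L \!\,$$:

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel sweep: x <- (D+L)^{-1} (b - U x)."""
    DL = np.tril(A)                   # D + L: lower triangle with diagonal
    U = A - DL                        # strictly upper triangle
    x = x0.astype(float)
    for _ in range(iters):
        x = np.linalg.solve(DL, b - U @ x)
    return x

# same hypothetical test system as for Jacobi
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b, np.zeros(2))
assert np.allclose(A @ x, b)
```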

Jacobi Convergence
Convergence occurs for the Jacobi iteration if the spectral radius of $$ D^{-1}(L+U)\!\,$$ is less than 1 i.e.

$$\rho(D^{-1}(L+U)) < 1 \!\,$$

The eigenvalues of the matrix

$$ D^{-1}(L+U) = \begin{bmatrix} 0 & 1 & -1 \\ \frac12 & 0 &\frac12 \\ 1 & 1 &   0 \end{bmatrix} \!\,$$

are $$\lambda=0,0,0 \!\,$$ i.e. the eigenvalue $$ 0\!\,$$ has order 3.

Therefore, the spectral radius is $$\rho=0 <1 \!\,$$.

Gauss-Seidel Convergence
Convergence occurs for the Gauss-Seidel iteration if the spectral radius of $$ (D+L)^{-1}U\!\,$$ is less than 1 i.e.

$$\rho((D+L)^{-1}U) < 1 \!\,$$

The eigenvalues of the matrix

$$ (D+L)^{-1}U = \begin{bmatrix} 0 & 1       & -1 \\ 0 & -\frac12 &  1 \\ 0 & -\frac12 & 0 \end{bmatrix} \!\,$$

are $$\lambda=0,-\frac14+\frac{\sqrt{7}i}{4},-\frac14-\frac{\sqrt{7}i}{4} \!\,$$

Therefore, the spectral radius is $$\rho=\sqrt{\frac{1}{16}+\frac{7}{16}}=\frac{\sqrt{2}}{2} \approx .7 <1 \!\,$$.
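Both spectral radii can be cross-checked numerically; a NumPy sketch using the two iteration matrices above (the Jacobi matrix is nilpotent, so its spectral radius is exactly zero):

```python
import numpy as np

# iteration matrices quoted above
J = np.array([[0.0, 1.0, -1.0],
              [0.5, 0.0,  0.5],
              [1.0, 1.0,  0.0]])
G = np.array([[0.0,  1.0, -1.0],
              [0.0, -0.5,  1.0],
              [0.0, -0.5,  0.0]])

rho = lambda M: max(abs(np.linalg.eigvals(M)))
assert np.allclose(np.linalg.matrix_power(J, 3), 0)   # J nilpotent: rho = 0
assert abs(rho(G) - np.sqrt(2) / 2) < 1e-8            # rho(G) = sqrt(2)/2
```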

Comparison of Methods
In general, the Gauss-Seidel iteration converges faster than the Jacobi iteration since Gauss-Seidel uses the new components of $$ x \!\,$$ as soon as they become available. In this case, however, $$ \rho_{Jacobi} = 0 < \rho_{Gauss-Seidel} \!\, $$, so the Jacobi iteration is faster; indeed, since its iteration matrix is nilpotent, Jacobi converges in at most three steps.

Solution 3a
We prove the claim by induction.

Base Case
$$ p_0(x) = 1 \!\,$$ is monic with degree zero, and $$\operatorname{span}\left\{ p_0(x) \right\} = \operatorname{span}\left\{ 1 \right\} \!\,$$.

$$ p_1(x) = x-\mu_0 \!\,$$ is monic with degree one, and $$\operatorname{span}\left\{ p_0(x), p_1(x) \right\} = \operatorname{span}\left\{ 1,x \right\} \!\,$$.

$$ p_2(x) = (x-\mu_0)(x-\mu_1) - \nu_1^2 \!\,$$ is monic with degree 2, and $$\operatorname{span}\left\{ p_0(x), p_1(x),p_2(x) \right\} = \operatorname{span}\left\{ 1,x,x^2 \right\} \!\,$$.

Induction Step
Assume $$ p_{k-1}(x)\!\,$$ is monic with degree $$k-1 \!\,$$, and $$\operatorname{span}\left\{ p_0(x), p_1(x),\dots,p_{k-1}(x) \right\} = \operatorname{span}\left\{ 1,x,\dots,x^{k-1} \right\} \!\,$$.

Also, assume $$ p_k(x) \!\,$$ is monic with degree $$k \!\,$$, and $$\operatorname{span}\left\{ p_0(x), p_1(x),\dots,p_{k}(x) \right\} = \operatorname{span}\left\{ 1,x,\dots,x^{k} \right\} \!\,$$.

Then from the hypothesis, $$ p_{k+1}(x) = (x-\mu_k)p_k(x) - \nu_k^2p_{k-1}(x) \!\,$$ is monic with degree $$k+1 \!\,$$.

Also,

$$\operatorname{span}\left\{ p_0(x), p_1(x),\dots,p_{k+1}(x) \right\} = \operatorname{span}\left\{ 1,x,\dots,x^{k+1} \right\} \!\,$$

since $$p_{k+1}(x) \!\,$$ is just $$x^{k+1} \!\,$$ plus a linear combination of lower-order terms already in $$ \operatorname{span}\left\{ 1,x,\dots,x^{k} \right\} \!\,$$.

Solution 3b
We prove the claim by induction.

Base Case
Let's consider the case $$k=2 \!\,$$. We know that

$$ p_2(x)=(x-\mu_0)(x-\mu_1)-\nu_{1}^2. \!\,$$

The quadratic formula shows that $$p_2(x)\!\,$$ has two simple real roots, since its discriminant is $$(\mu_0-\mu_1)^2+4\nu_1^2 > 0 \!\,$$.

Let $$ r_{21} \!\,$$ and $$ r_{22}\!\,$$ be the zeros of $$p_2(x)\!\,$$. Then, we have (because of sign changes) that

$$ p_2(\mu_0)=-\nu_1^2 \!\,$$ and $$ \lim_{x \rightarrow \infty} p_2(x)>0 \!\,$$ $$ \Rightarrow \!\,$$ there exists $$ r_{22} \in (\mu_0,\infty) \!\,$$ such that $$p_2(r_{22})=0\!\,$$.

$$ p_2(\mu_0)=-\nu_1^2 \!\,$$ and $$ \lim_{x \rightarrow -\infty} p_2(x)>0 \!\,$$ $$ \Rightarrow \!\,$$ there exists $$ r_{21} \in (-\infty,\mu_0) \!\,$$ such that $$p_2(r_{21})=0\!\,$$.

In conclusion, $$ r_{21} < \underbrace{\mu_0}_{r_{11}} < r_{22} \!\,$$.

Induction Step
Let $$ r_{k-1,1}, \dots, r_{k-1,k-1}\!\,$$ and $$ r_{k,1}, \dots, r_{k,k}\!\,$$ be the simple real roots of $$ p_{k-1}(x)\!\,$$ and $$ p_k(x)\!\,$$, respectively, such that the roots of $$ p_{k}\!\,$$ are interlaced with the roots of $$ p_{k-1}\!\,$$, i.e.,

$$ r_{k,1} < r_{k-1,1} < r_{k,2} < r_{k-1,2} < \dots < r_{k-1,k-1} < r_{k,k}. \!\,$$

Then, we have that

$$ p_{k+1}(r_{k,1})= (r_{k,1} -\mu_k) \underbrace{p_k(r_{k,1})}_0  -\nu_k^2 p_{k-1} (r_{k,1}) = -\nu_k^2 p_{k-1} (r_{k,1}),\!\,$$

$$ p_{k+1}(r_{k,2})= (r_{k,2} -\mu_k) \underbrace{p_k(r_{k,2})}_0  -\nu_k^2 p_{k-1} (r_{k,2}) = -\nu_k^2 p_{k-1} (r_{k,2}).\!\,$$

From the induction hypothesis $$ p_{k-1}(r_{k,1})\!\,$$ and $$ p_{k-1}(r_{k,2})\!\,$$ have different signs since $$ r_{k,1}< r_{k-1,1} < r_{k,2}\!\,$$. Then, there exists $$ r_{k+1,1} \in (r_{k,1},r_{k,2}) \!\,$$ such that $$ p_{k+1}(r_{k+1,1})=0\!\,$$.

Proceeding in the same manner for all the intervals $$ (r_{k,j},r_{k,j+1}) \!\,$$, we obtain that there exists $$ r_{k+1,j} \in (r_{k,j},r_{k,j+1}) \!\,$$ such that $$ p_{k+1}(r_{k+1,j})=0\!\,$$ for $$ j=1,\dots,k-1.\!\,$$

We now consider the smallest and largest roots of $$p_{k+1}\!\,$$, i.e. $$r_{k+1,0} \mbox{ and } r_{k+1,k} \!\,$$.

Since for $$k=0,1,2,\ldots \!\,$$, $$p_k \!\,$$ is a monic polynomial,

$$\lim_{x \rightarrow \infty} p_k(x) = \infty \!\,$$

Hence for any $$x > r_{k,k} \!\,$$ (any $$ x\!\,$$ larger than the largest root of $$ p_k \!\,$$)

$$p_k(x) > 0 \!\,$$

Therefore

$$ \lim_{x \rightarrow \infty} p_{k+1}(x)>0 \!\,$$ and $$ p_{k+1}(r_{k,k})= -\nu_k^2 p_{k-1} (r_{k,k}) < 0,\!\,$$

implies there exists $$ r_{k+1,k} \in (r_{k,k},\infty) \!\,$$ such that $$p_{k+1}(r_{k+1,k})=0\!\,$$.

If $$k+1 \!\,$$ is even, then by similar reasoning

$$ \lim_{x \rightarrow -\infty} p_{k+1}(x)>0 \!\,$$ and $$ p_{k+1}(r_{k,1})= -\nu_k^2 p_{k-1} (r_{k,1}) < 0,\!\,$$ $$ \Rightarrow \!\,$$ there exists $$ r_{k+1,0} \in (-\infty,r_{k,1}) \!\,$$ such that $$p_{k+1}(r_{k+1,0})=0\!\,$$.

If $$k+1 \!\,$$ is odd,

$$ \lim_{x \rightarrow -\infty} p_{k+1}(x)<0 \!\,$$ and $$ p_{k+1}(r_{k,1})= -\nu_k^2 p_{k-1} (r_{k,1}) > 0,\!\,$$ $$ \Rightarrow \!\,$$ there exists $$ r_{k+1,0} \in (-\infty,r_{k,1}) \!\,$$ such that $$p_{k+1}(r_{k+1,0})=0\!\,$$.
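The interlacing pattern can be observed numerically for any concrete choice of coefficients; a NumPy sketch using the arbitrary sample values $$\mu_k = 0, \nu_k = \frac12 \!\,$$ (chosen purely for illustration):

```python
import numpy as np
from numpy.polynomial import polynomial as P

mu = [0.0] * 6   # hypothetical sample coefficients; any mu_k and
nu = [0.5] * 6   # nonzero nu_k would exhibit the same interlacing

p_prev = np.array([1.0])              # p_0 = 1   (coefficients low -> high)
p_curr = np.array([-mu[0], 1.0])      # p_1 = x - mu_0
for k in range(1, 5):
    # p_{k+1} = (x - mu_k) p_k - nu_k^2 p_{k-1}
    p_next = P.polysub(P.polymul([-mu[k], 1.0], p_curr), nu[k]**2 * p_prev)
    r_curr = np.sort(np.real(P.polyroots(p_curr)))
    r_next = np.sort(np.real(P.polyroots(p_next)))
    # strict interlacing: each root of p_k sits between two roots of p_{k+1}
    assert np.all(r_next[:-1] < r_curr) and np.all(r_curr < r_next[1:])
    p_prev, p_curr = p_curr, p_next
```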

Solution 3c
Let $$ T_{n+1} = \begin{pmatrix} \mu_0 & \nu_1 &  0   & \cdots &   0     \\ \nu_1 & \mu_1 & \nu_2 & \ddots & \vdots \\ 0 & \nu_2 & \mu_2 & \ddots &   0     \\ \vdots & \ddots & \ddots & \ddots & \nu_n\\ 0 & \cdots &   0  & \nu_n  & \mu_n \end{pmatrix} $$

Then $$\operatorname{det}(xI_{n+1} - T_{n+1}) \!\,$$ is a monic polynomial in $$x \!\,$$ of degree $$n+1 \!\,$$.

Expanding this determinant along the last row, we have


 * $$ \operatorname{det}(xI_{n+1} - T_{n+1}) = (x-\mu_{n})\operatorname{det}(xI_{n} - T_{n}) - \nu_n^2 \operatorname{det}(xI_{n-1} - T_{n-1}) \qquad (1) \!\,$$

where $$ \operatorname{det}(xI_{n} - T_{n}) \!\,$$ and $$ \operatorname{det}(xI_{n-1} - T_{n-1}) \!\,$$ are monic polynomials of degree $$n \!\,$$ and $$n-1 \!\,$$, respectively.

Notice that $$ \operatorname{det}(xI_{1} - T_{1}) = x-\mu_0 \equiv p_1(x) \!\,$$ and if we let $$p_0(x) \equiv 1\!\,$$, then (1) is equivalent to the three-term recurrence stated in the problem.

Thus $$p_{n+1}(x) \equiv \operatorname{det}(xI_{n+1}-T_{n+1}) \!\,$$, and so the roots of $$p_{n+1}(x) \!\,$$ are precisely the eigenvalues of $$T_{n+1} \!\,$$.
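This correspondence is easy to check numerically; a NumPy sketch with the same arbitrary sample coefficients $$\mu_k = 0, \nu_k = \frac12 \!\,$$ as before (this identity is what the Golub-Welsch approach exploits to compute Gauss nodes as eigenvalues):

```python
import numpy as np
from numpy.polynomial import polynomial as P

n = 4
mu = np.zeros(n + 1)          # sample diagonal entries mu_0..mu_n
nu = 0.5 * np.ones(n + 1)     # sample off-diagonal entries (nu_1..nu_n used)

# symmetric tridiagonal T_{n+1}
T = np.diag(mu) + np.diag(nu[1:], 1) + np.diag(nu[1:], -1)

# build p_{n+1} from the three-term recurrence (coefficients low -> high)
p_prev, p_curr = np.array([1.0]), np.array([-mu[0], 1.0])
for k in range(1, n + 1):
    p_prev, p_curr = p_curr, P.polysub(
        P.polymul([-mu[k], 1.0], p_curr), nu[k]**2 * p_prev)

eigs = np.sort(np.linalg.eigvalsh(T))
roots = np.sort(np.real(P.polyroots(p_curr)))
assert np.allclose(eigs, roots)   # eigenvalues of T_{n+1} = roots of p_{n+1}
```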