Calculus/Approximating Values of Functions

Although a value can often be described exactly by some function, it is frequently useful to approximate that value, especially in real-world contexts. For example, a construction worker might need a room to be $$\sqrt{2}$$ feet long. The exact value is not useful on its own because most rulers have no marking for $$\sqrt{2}$$. Instead, the worker needs an approximation of the length to be able to construct a room that is $$\sqrt{2}$$ feet long.

Some numbers are very hard to approximate by hand, but calculus makes the task easier. The subfield of numerical analysis studies approximation algorithms, including but not limited to the residual (how far the computed value is from the true value), the level of decimal precision, and the number of times a procedure must be repeated to reach a given level of precision.

While this section is not a replacement for numerical analysis (not even close), it will hopefully introduce efficient algorithms that approximate values to surprising levels of accuracy.

Before diving in, recall that Section already introduced a method, the Bisection Method, that approximates the solution of an equation and uses calculus to justify the algorithm. Using calculus to approximate values should therefore not be very surprising.

Linear Approximation


Recall one of the interpretations of the derivative: it is the slope of the tangent line at a point $$x = \alpha$$ of the function $$f$$. Thinking about the local behavior of $$f$$ around $$x = \alpha$$, the tangent line can be a good approximation of the value $$f(c)$$ (refer to Figure 1) if $$|c-\alpha|$$ is small and $$|f(c)-f(\alpha)|$$ is small.

Justification: Notice the tangent line at $$x = \alpha$$ for some differentiable function $$f(x)$$ is given by the following equation:

$$h(x) = f(\alpha) + f^\prime (\alpha) (x - \alpha)$$

where $$h(x)$$ is the equation of the tangent line.

If we are trying to obtain $$f(c)$$ (the true value) through the tangent line, and the distance $$|f(c) - f(\alpha)| = \epsilon_0$$ for some small $$\epsilon_0 > 0$$, then $$h(c) \approx f(c)$$. Therefore,
 * $$f(c) \approx h(c) = f(\alpha) + f^\prime (\alpha) (c - \alpha) \qquad \square$$

Notice that for this technique to be used, the following conditions need to hold. If any one of these conditions is false, then this technique will either not work or will not be very useful.
 * 1) $$f(x)$$ is differentiable at $$x = \alpha$$ and continuous in $$\left[\alpha, c\right]$$.
 * 2) $$|c-\alpha|$$ is small and $$|f(c)-f(\alpha)|$$ is small. Otherwise, some very strange approximations may appear.
 * 3) $$f(x)$$ is monotonic in $$\left[\alpha, c\right]$$. (You will learn this more comprehensively in Section .)

Let $$f(x) = \sqrt{x}$$. We wish to approximate the exact value $$\sqrt{1.01} = f(1.01)$$.
 * $$f^\prime (x) = \frac{1}{2\sqrt{x}}$$

The tangent line equation at $$x = 1$$ is given by
 * $$y - f(1) = f^\prime (1) (x - 1) \Leftrightarrow y = f^\prime (1) (x - 1) + f(1)$$
 * $$y = \frac{1}{2}x + \frac{1}{2}$$

Let $$g(x) = y$$. Suppose $$g(1.01) \approx f(1.01)$$.

Then, $$f(1.01) \approx g(1.01) = \frac{1}{2}(1.01) + \frac{1}{2} = 1.005$$. Therefore, $$\sqrt{1.01} \approx 1.005$$. $$\blacksquare$$
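This computation is easy to check numerically. The following is a minimal sketch (the helper name `linear_approx` is ours, not a standard library routine):

```python
from math import sqrt

def linear_approx(f_a, fprime_a, a, c):
    # Tangent-line (linear) approximation: h(c) = f(a) + f'(a) * (c - a)
    return f_a + fprime_a * (c - a)

# f(x) = sqrt(x) at a = 1: f(1) = 1 and f'(1) = 1/2
approx = linear_approx(1.0, 0.5, 1.0, 1.01)
print(approx)                    # 1.005
print(abs(approx - sqrt(1.01)))  # absolute error, roughly 1.24e-5
```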

The benefit of linear approximation is that instead of having to evaluate harder-to-understand functions, we estimate the value of a function using a linear function, arguably the easiest calculation we will ever have to do, assuming the value of the derivative is easy to find.

Determining over- or underestimation
Notice that for any approximation we obtain using a tangent line approximation (same as a linear approximation), there exists a remainder term that will make it equal to the true value we can obtain from the function. That is,

$$f(c) = f(\alpha) + f^\prime (\alpha) (c - \alpha) + R_2$$

where $$R_2$$ is the remainder term. This, unfortunately, does not give us a precise estimate of the residual, especially if we cannot find the exact value of the remainder. While there is a technique to determine an upper bound on the residual for this type of estimate, it will come much later, in Section.

The best solution we have for now is determining if the estimate we have given is below or above the true value, which can be done with the following technique.

Justification: Suppose $$f(x)$$ is a twice differentiable function in $$\left[\alpha, c\right]$$ and $$\alpha < c$$.
 * Case 1(A): Let $$f^\prime (\alpha) > 0$$ and $$f^{\prime\prime} (x_0) < 0$$ for $$x_0\in\left[\alpha, c\right]$$. Then the tangent line $$h(x)$$ at $$(\alpha, f(\alpha))$$ has positive slope and $$h(c)>f(c)$$ (see the bottom function of Figure 1).
 * Case 1(B): Let $$f^\prime (\alpha) > 0$$ and $$f^{\prime\prime} (x_0) > 0$$ for $$x_0\in\left[\alpha, c\right]$$. Then the tangent line $$h(x)$$ at $$(\alpha, f(\alpha))$$ has positive slope and $$h(c)<f(c)$$.
 * Case 2(A): Let $$f^\prime (\alpha) < 0$$ and $$f^{\prime\prime} (x_0) < 0$$ for $$x_0\in\left[\alpha, c\right]$$. Then the tangent line $$h(x)$$ at $$(\alpha, f(\alpha))$$ has negative slope and $$h(c)>f(c)$$.
 * Case 2(B): Let $$f^\prime (\alpha) < 0$$ and $$f^{\prime\prime} (x_0) > 0$$ for $$x_0\in\left[\alpha, c\right]$$. Then the tangent line $$h(x)$$ at $$(\alpha, f(\alpha))$$ has negative slope and $$h(c)<f(c)$$.

Since a concave down function has $$h(c) > f(c)$$ no matter the slope of the tangent, and a concave up function has $$h(c) < f(c)$$ no matter the slope of the tangent, we have justified what we wanted to show. $$\square$$

Let $$f(x) = \sqrt{x}$$.
 * $$f^\prime (x) = \frac{1}{2} x^{-\frac{1}{2}} = \frac{1}{2\sqrt{x}}$$
 * $$f^{\prime\prime} (x) = -\frac{1}{2} x^{-\frac{3}{2}}$$

The function is concave down for all $$x \ge 1$$ because
 * $$\begin{align} f^{\prime\prime} (x) &= -\frac{1}{2} x^{-\frac{3}{2}} \\ &= -\frac{1}{2x^{3/2}} \end{align}$$
 * $$\Rightarrow -\frac{1}{2} \le -\frac{1}{2x^{3/2}} < 0$$ for all $$x \ge 1$$.

The last implication is justified because $$\lim_{x\to\infty} f^{\prime\prime} (x) = 0$$ (as an exercise, show this yourself). Since $$f^{\prime\prime} (x)$$ is monotonically increasing for $$x \ge 1$$ and bounded above by $$0$$ (which you can show in multiple ways), we can be certain that the inequality shown is correct.

Because $$f(x)$$ is certainly concave down from $$x = 1$$ to $$x = 1.01$$, $$\sqrt{1.01} \approx 1.005$$ is an overestimate of the true value. $$\blacksquare$$
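The sign check on $$f^{\prime\prime}$$ can also be done numerically. Below is a rough sketch using a central-difference estimate of the second derivative (the helper names, tolerance, and sample count are our own arbitrary choices):

```python
def second_derivative(f, x, h=1e-5):
    # Central-difference estimate of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

def estimate_side(f, a, c, samples=100):
    """Return 'over' if the tangent-line estimate at a should overshoot
    f(c) (f concave down on [a, c]), 'under' if it should undershoot
    (f concave up), or None if the sign of f'' is not constant."""
    lo, hi = min(a, c), max(a, c)
    signs = {second_derivative(f, lo + (hi - lo) * i / samples) < 0
             for i in range(samples + 1)}
    if signs == {True}:
        return 'over'
    if signs == {False}:
        return 'under'
    return None

print(estimate_side(lambda x: x ** 0.5, 1.0, 1.01))  # over
print(estimate_side(lambda x: x ** 2, 1.0, 1.2))     # under
```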

Issues with Linear Approximation
While linear (or tangent line) approximation is a powerful, easy tool that can be used to approximate functions, it does have its issues. These issues were alluded to when introducing the technique. Each issue will highlight why this tool may not be very useful all the time.

Let $$f(x) = \sqrt[3]{x} = x^{1/3}$$. Since we are approximating $$f(0.01)$$, we may try the tangent line approximation at $$x=0$$. Unfortunately, $$f^\prime(0) = \lim_{x\to 0} \dfrac{\sqrt[3]{x}}{x} = \lim_{x\to 0} x^{-2/3}$$ does not converge to a finite value. This means that the tangent line at $$x = 0$$ is either vertical or does not exist. Since the derivative does not exist at $$x = 0$$, it is impossible to obtain an accurate approximation using the linear method. $$\blacksquare$$

Let $$f(x) = x^4 - \frac{3}{2} x^2$$. Since we are approximating $$f(-0.8)$$, and $$-0.8 - (-1) = 0.2$$ is a small difference in $$x$$, we may use the tangent line approximation at $$x=-1$$ to obtain an approximation at $$f(-0.8)$$.
 * $$f(-1) = (-1)^4 - 1.5 (-1)^2 = 1 - 1.5 = -0.5$$
 * $$f^\prime (x) = 4x^3 - 3x$$

The tangent line equation at $$x = -1$$ is given by
 * $$y - f(-1) = f^\prime (-1) (x + 1) \Leftrightarrow y = f^\prime (-1) (x + 1) + f(-1)$$
 * $$y = -x - 1.5$$

Let $$g(x) = y$$. Suppose $$g(-0.8) \approx f(-0.8)$$.

Then, $$f(-0.8) \approx g(-0.8) = -(-0.8) - 1.5 = -0.7$$. Therefore, $$(-0.8)^4 - 1.5 (-0.8)^2 \approx -0.7$$.

However, this approximation is far off from the actual value $$f(-0.8) = -0.5504$$: it has an error of $$R_2 = 0.1496$$. The reason for this large error is subtle.

While the derivative does exist at $$\alpha = -1$$, the function is not monotonic from $$\left[-1,-0.8\right]$$: the function will both decrease and increase. Recall from your reading that a function is monotonically decreasing if and only if for any $$x_1, x_2 \in \left[a,b\right]$$ and $$x_1 < x_2$$, $$h(x_1) > h(x_2)$$. However, there exists a counterexample for $$f(x)$$ above, which becomes apparent if you graph the function.

Hence, it would be irresponsible to use a tangent line to approximate the value of $$(-0.8)^4 - 1.5 (-0.8)^2$$. $$\blacksquare$$
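The failure is easy to reproduce numerically; a quick sketch:

```python
def f(x):
    return x ** 4 - 1.5 * x ** 2

def fprime(x):
    return 4 * x ** 3 - 3 * x

a, c = -1.0, -0.8
tangent = f(a) + fprime(a) * (c - a)  # the tangent-line estimate g(-0.8)
print(tangent)               # -0.7
print(f(c))                  # about -0.5504, the true value
print(abs(tangent - f(c)))   # error of about 0.1496
```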

Because you will learn this more comprehensively in Section, assume that all exercises involve a monotonic function. This example is simply meant to illustrate a common pitfall of this method.

Newton-Raphson Method
While tangent line approximations are very helpful, they tend to be useful only when the exact value of the function at a nearby point is known. If no such nearby value exists, then it is very difficult to obtain a precise estimate of the desired value.

The Newton-Raphson method, introduced in Section, is a useful method to determine the zeros of a function, whether polynomial, transcendental, irrational, exponential, etc. However, the Newton-Raphson method can also be used to approximate values of specific functions.

If you read Section, then the iteration
 * $$x_{n+1} = x_n - \dfrac{f(x_n)}{f^\prime (x_n)}$$
should be known and already justified to you.

Let $$x = \sqrt{72}$$. We need to manipulate this equation so that we may obtain $$f(\alpha) = 0$$.
 * $$\begin{align} x &= \sqrt{72} \\ x^2 &= 72 \\ x^2 - 72 &= 0 \end{align}$$

From this, let $$f(x) = x^2 - 72$$. Since $$x=\sqrt{72}$$ is a root of the equation, we can use the Newton-Raphson method to approximate $$\sqrt{72}$$. First, we choose an initial guess. Since $$f(8) = -8 < 0 < f(9) = 9$$ and $$\left| f(8) \right| < f(9)$$, $$x_0 = 8$$ is a good initial guess. Before we can begin, we need the derivative of the function, which is easily shown to be $$f^\prime (x) = 2x$$. Now we begin finding the root.
 * $$\begin{align} x_1 &= 8 - \frac{-8}{2\cdot 8} &= 8.5 \\ x_2 &= 8.5 - \frac{8.5^2 - 72}{2\cdot 8.5} &\approx 8.48529411765 \\ x_3 &= 8.48529411765 - \frac{8.48529411765^2 - 72}{2 \cdot 8.48529411765} &\approx 8.48528137425 \end{align}$$

Out of convenience, we will stop here. However, the value we have obtained already agrees with $$\sqrt{72} = 8.48528137424\ldots$$ to nine decimal places. $$\blacksquare$$
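The iteration above is only a few lines of code. A minimal sketch (the `newton` helper is our own name, not a library function):

```python
def newton(f, fprime, x0, steps):
    # Newton-Raphson iteration: x <- x - f(x)/f'(x)
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

root = newton(lambda x: x * x - 72, lambda x: 2 * x, x0=8.0, steps=3)
print(root)  # about 8.48528137, already extremely close to sqrt(72)
```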

Let $$x = \sqrt{2} + \sqrt{3}$$. We need to manipulate this equation so that we may obtain $$f(\alpha) = 0$$. Keep in mind to eliminate as many square roots as possible so that our jobs are easier.
 * $$\begin{align} x &= \sqrt{2} + \sqrt{3} \\ x^2 &= 2 + 2\sqrt{6} + 3 \\ x^2 - 5 &= 2\sqrt{6} \\ x^4 - 10x^2 + 25 &= 4 \cdot 6 \\ x^4 - 10x^2 + 1 &= 0 \end{align}$$

Notice this equation has four possible roots, but we only care about one of them. Looking back:
 * $$1 + 1 = 2 < \sqrt{2} + \sqrt{3} < 4 = 2 + 2 $$

Notice that $$f(3) = -8 < 0 < f(4) = 97$$, so we choose $$x_0 = 3$$.
 * Let $$f(x) = x^4 - 10x^2 + 1$$. Then, $$f^\prime (x) = 4x^3 - 20x$$.

Notice that $$4x(x^2 - 5) = 0 \Rightarrow x=0 \vee x = \pm\sqrt{5}$$ means that $$f^\prime (x) \ne 0$$ for $$x\in\left(\sqrt{5},4\right)$$.

Finally, notice that $$x_{n+1} = x_n - \frac{x_n^4 - 10x_n^2 + 1}{4x_n^3 - 20x_n}$$ does not simplify nicely, so we are going to have to work with these values. (Keep in mind that even without calculators, many people in the past used slide rules to work with such ugly values. We have the benefit of an electronic computer at our fingertips as opposed to an analog one.)

Now we begin by finding the root.
 * $$\begin{align} x_1 &= 3 - \frac{-8}{48} = \frac{19}{6} &\approx 3.16666666667 \\ x_2 &= \frac{19}{6} - \dfrac{\frac{1657}{1296}}{\frac{3439}{54}} = \frac{86569}{27512} &\approx 3.14659057866 \\ x_3 &= 3.14659057866 - \frac{0.0201173085466}{61.686167862} &\approx 3.14626445516 \end{align}$$

Out of convenience, we will stop here. However, the value we have obtained is already correct to six decimal places. $$\blacksquare$$
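The same few lines of code handle the quartic; a sketch:

```python
def newton(f, fprime, x0, steps):
    # Newton-Raphson iteration: x <- x - f(x)/f'(x)
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

x3 = newton(lambda x: x ** 4 - 10 * x ** 2 + 1,
            lambda x: 4 * x ** 3 - 20 * x, x0=3.0, steps=3)
print(x3)  # about 3.14626446, matching sqrt(2) + sqrt(3) to six decimals
```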

Failure Analysis
Of course, as we know from Section, the Newton-Raphson method is not perfect and will fail in some instances. One obvious instance is when the derivative at a particular point is zero. Because that is in the denominator, we cannot find the next possible root once that occurs. However, there are others.

Starting Point Enters a Cycle
For some functions, some starting points may enter an infinite cycle, preventing convergence. Let


 * $$f(x) = x^3 - 2x + 2$$

and take $$0$$ as the starting point. The first iteration produces $$1$$ and the second iteration returns to $$0$$, so the sequence will alternate between the two without converging to a root (see Figure 2). The real solution of this equation is $$-1.76929235\ldots$$. In such instances, one should select another starting point.
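The cycle is easy to verify directly; a sketch:

```python
def newton_step(x):
    # One Newton-Raphson step for f(x) = x^3 - 2x + 2
    return x - (x ** 3 - 2 * x + 2) / (3 * x ** 2 - 2)

x = 0.0
orbit = [x]
for _ in range(6):
    x = newton_step(x)
    orbit.append(x)
print(orbit)  # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```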

Derivative does not exist at root
A simple example of a function where Newton's method diverges is trying to find the cube root of zero. The cube root is continuous and infinitely differentiable, except for $$x = 0$$, where its derivative is undefined:


 * $$f(x) = \sqrt[3]{x}.$$

For any iteration point $$x_n$$, the next iteration point will be:


 * $$x_{n+1} = x_n - \frac{f(x_n)}{f^\prime (x_n)} = x_n - \frac{{x_n}^\frac13}{\frac13{x_n}^{-\frac23}} = x_n - 3x_n = -2x_n.$$

The algorithm overshoots the solution and lands on the other side of the $$y$$-axis, farther away than it initially was; applying Newton's method actually doubles the distance from the solution at each iteration.

In fact, the iterations diverge to infinity for every $$f(x) = |x|^\alpha$$, where $$0 < \alpha < \frac{1}{2}$$. In the limiting case of $$\alpha = \frac{1}{2}$$, the iterations will alternate indefinitely between points $$x_0$$ and $$-x_0$$, so they do not converge in this case either.
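The doubling behavior can be checked directly. The sketch below uses a signed cube root, since Python's `**` does not handle negative bases with fractional exponents:

```python
import math

def cbrt(x):
    # Real (signed) cube root
    return math.copysign(abs(x) ** (1 / 3), x)

def newton_step(x):
    # f(x) = x^(1/3), f'(x) = (1/3) * |x|^(-2/3); the step reduces to -2x
    return x - cbrt(x) / ((1 / 3) * abs(x) ** (-2 / 3))

x = 0.01
for _ in range(5):
    x = newton_step(x)
print(x)  # approximately 0.01 * (-2)^5 = -0.32: the iterates diverge
```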

Discontinuous Derivative
If the derivative is not continuous at the root, then convergence may fail to occur in any neighborhood of the root. Consider the function
 * $$f(x) = \begin{cases} 0 & \text{if } x = 0\\ x + x^2 \sin \left(\dfrac2x\right) & \text{if } x \ne 0 \end{cases}$$
 * $$f^\prime (x) = \begin{cases} 1 & \text{if } x = 0\\ 1 + 2x \sin \left(\dfrac2x\right) - 2\cos\left(\dfrac2x\right) & \text{if } x \ne 0 \end{cases}$$

Within any neighborhood of the root, this derivative keeps changing sign as $$x$$ approaches $$0$$ from the right (or from the left), while $$f(x) \ge x - x^2 > 0$$ for $$0 < x < 1$$.

Thus, $$\dfrac{f(x)}{f^\prime (x)}$$ is unbounded near the root, and Newton's method will diverge almost everywhere in any neighborhood of it, even though:
 * the function is differentiable (and thus continuous) everywhere;
 * the derivative at the root is nonzero;
 * $$f$$ is infinitely differentiable except at the root; and
 * the derivative is bounded in a neighborhood of the root (unlike $$f(x)/f^\prime (x)$$).
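The sign changes of $$f^\prime$$ arbitrarily close to $$0$$ can be exhibited explicitly: at $$x = 1/(n\pi)$$ we have $$\cos(2/x) = 1$$ and $$\sin(2/x) = 0$$, so $$f^\prime \approx -1$$, while at $$x = 2/((2n+1)\pi)$$ we have $$\cos(2/x) = -1$$, so $$f^\prime \approx 3$$. A quick numerical check:

```python
import math

def fprime(x):
    # Derivative of f(x) = x + x^2 sin(2/x) for x != 0
    return 1 + 2 * x * math.sin(2 / x) - 2 * math.cos(2 / x)

for n in (10, 100, 1000):
    a = 1 / (n * math.pi)            # here f'(a) is close to -1
    b = 2 / ((2 * n + 1) * math.pi)  # here f'(b) is close to 3
    print(n, fprime(a), fprime(b))   # the sign flips on ever-smaller scales
```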

Euler's Method
As the examples below will show, Euler's method does not home in on an approximation as quickly as Newton's method. There are a number of reasons for this, but Euler's method remains useful in spite of this issue, as the examples will showcase.

Let $$y = \sqrt{x}$$. We need to find a differential equation $$y^\prime = f(x,y)$$. Notice that we will do something in the last step which will be important for approximating $$y(6)$$.
 * $$\begin{align} y(x) &= \sqrt{x} \\ y^\prime (x) &= \frac{1}{2\sqrt{x}} \\ y^\prime &= \frac{1}{2y} \end{align}$$

The last step shown here is very important. We need to approximate $$\sqrt{x}$$, but that cannot be done if the procedure itself requires computing a square root. Instead, since $$y = \sqrt{x}$$ and Euler's method gives us new $$y$$-values to work with, we can substitute $$y$$ for $$\sqrt{x}$$ and get an approximation.

We next need to choose an initial point, $$(x_0,y_0)$$. This is the only point at which we use the actual value of the function $$y(x)$$. In this instance, since we want $$\sqrt{6}$$, we will choose $$y(4) = 2$$. From there, we can choose the step size, which is arbitrary. Here, we will choose $$\Delta x_{\rm step} = 1$$.
 * $$\begin{array}{l || c|c || c} n & x_{n} & y_{n} & f(x_n,y_n) = \frac{1}{2y_n} \\ \hline 0 & 4 & 2.00000 & \frac{1}{2 \cdot 2.00000} = 0.25000 \\ 1 & 5 & 2.25000 & \frac{1}{2 \cdot 2.25000} \approx 0.22222 \\ 2 & 6 & 2.47222 & \end{array}$$

where $$x_{n} = x_{n-1} + \Delta x_{\rm step}$$ and $$y_{n} = y_{n-1} + \Delta x_{\rm step} \cdot f(x_{n-1},y_{n-1})$$. As can be seen here, the approximation for $$\sqrt{6}$$ is $$2.47222$$. This has an absolute error of $$0.0227325$$, which is pretty good for most contexts. If this were used to design a box, for example, then this is an acceptable margin of error. $$\blacksquare$$
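The table's updates are exactly the loop below. A minimal sketch of Euler's method (the `euler` helper and the floating-point guard are our own choices):

```python
def euler(f, x0, y0, x_end, h):
    # Euler's method: repeatedly step y forward by h * f(x, y)
    x, y = x0, y0
    while x < x_end - 1e-9:  # guard against floating-point drift
        y += h * f(x, y)
        x += h
    return y

# y' = 1/(2y) with y(4) = 2, approximating y = sqrt(x) at x = 6
print(euler(lambda x, y: 1 / (2 * y), 4.0, 2.0, 6.0, h=1.0))   # 2.47222...
print(euler(lambda x, y: 1 / (2 * y), 4.0, 2.0, 6.0, h=0.25))  # closer to sqrt(6)
```

Shrinking the step size trades more arithmetic for a smaller error, as the second call illustrates.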

While the margin of error appears large compared to the local linear approximation of $$\sqrt{1.01}$$ (with an absolute error of $$0.0000124$$), keep in mind that our step size is considerably larger than the one shown in Example .1. If we had chosen a smaller step size, then the approximation would be more accurate (if only a little more cumbersome). For instance, with a step size of $$\Delta x_{\rm step} = 0.25$$, the absolute error would be approximately $$0.005$$.