Real Analysis/Applications of Integration

Integrals are intimately tied to derivatives. Even though this chapter is titled "Applications of Integration", the term "application" does not mean that this chapter will discuss how integration can be used in everyday life. Instead, the theorems we present are focused on illustrating methods to calculate integrals and on defining their properties.

Theorems on Computation
This heading will deal with deriving computation formulas for integration. Although the Fundamental Theorem of Calculus yields a method of calculating integrals, it requires knowing a primitive beforehand, and it is by no means the only way to compute integrals, especially when the integrand is anything more complex than a power function.

The Primitive
We will first take a small detour into the nature of what a primitive is. Recall from the Fundamental Theorem of Calculus that, essentially, a function $$f$$ reappears when the derivative of $\int_a^x{f}$ is taken.

If we use different variables from those in the theorem (it uses F and f), we can display this process through the following steps


 * $$u = \int_a^x{f} \Longleftrightarrow u' = \frac{\operatorname{d}}{\operatorname{d}\!x} \left[\int_a^x{f}\right] \implies u' = f$$
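The chain of steps above can be checked numerically. The sketch below is our own illustration, not part of the text: it approximates $u(x) = \int_a^x f$ with a midpoint Riemann sum (choosing $f = \cos$ and $a = 0$ purely as an example) and then differentiates $u$ with a central difference, recovering $f$ as the Fundamental Theorem of Calculus predicts.

```python
import math

def integral(f, a, x, n=10_000):
    """Approximate the definite integral of f from a to x
    with a midpoint Riemann sum over n subintervals."""
    h = (x - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def u(x, f=math.cos, a=0.0):
    """u(x) = integral of f from a to x (illustrative choice f = cos)."""
    return integral(f, a, x)

# Numerically differentiate u with a central difference; by the
# Fundamental Theorem of Calculus the result should equal f(x) = cos(x).
h = 1e-5
x = 1.2
u_prime = (u(x + h) - u(x - h)) / (2 * h)
print(abs(u_prime - math.cos(x)) < 1e-4)  # True
```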

We use the implication arrow for the last step to highlight a point. Although the final step is logically an equivalence, so an iff sign would be valid in that position, the reverse direction is not easy. In fact, only mathematics beyond Real Analysis can rigorously show why reverting that step may be more complex than moving forward. Thus, most first-year real analysis courses will only ask you to move forward.

Yet, there are some functions whose primitives are easily found. Almost by design of the educators of mathematics, every function studied in elementary mathematics, including those rigorously defined in the Functions section, the Trigonometry chapter, and the Exponential and Logarithmic headings, has an easily stated primitive, derived through the difficult mental exercise of working with the Table of Derivatives in reverse (with leniency offered for things like the power functions and trigonometric functions; the trigonometric functions are among the hardest in this list to derive). Despite that, we will provide a table below, so that you do not have to work through the mental exercise.

Hah! If you noticed, each primitive carries a constant C. Coupled with the rather indirect definition of a primitive, it appears as if the concept of the primitive itself warrants an explanation. Case in point: because of one major consequence of applying the Fundamental Theorem of Calculus together with a theorem on derivatives (specifically, the one that states that if $$f' = g'$$, then $$f = g + c$$), a constant C must be added in order for the conversion to be algebraically correct. As a consequence of this requirement, although differentiating an integral is commonly said, in layman's terms, to "cancel" both operations, it is not logically a full cancellation, especially when the primitive is not known.

However, there are two facts that, when accepted, make the constant C issue immensely more manageable when working with integrals.
 * 1) For the intents and purposes of integral computation (a trick enabled by the definition of the indefinite integral), one does not need to worry about constants until the final result is computed.
 * 2) Computing the constant C is easy if the function's properties provide convenient numerical values (such as its roots).
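Fact (2) can be illustrated with a small sketch of our own devising (the function $f(x) = 2x + 3$ and the condition below are illustrative choices, not taken from the text): once one extra value of the primitive is known, C is forced.

```python
def f(x):
    return 2 * x + 3          # the function to integrate (illustrative choice)

def primitive(x, C=0.0):
    return x**2 + 3 * x + C   # one antiderivative of f, up to the constant C

# Suppose we additionally know the primitive passes through the point (1, 10).
# Then the constant is forced: C = 10 - primitive(1, C=0).
C = 10 - primitive(1.0)
print(C)  # 6.0
```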

The Indefinite Integral
This will act as a small aside, but is essential to understand the rest of this chapter.

What if one does not wish to write out all those steps above? One wishes instead to simply acknowledge the multiple interpretations of what a primitive is, without being bogged down by explicitly declaring a function f as an antiderivative, while still writing the primitive in the form $$f + C$$. Simple: mathematicians agreed on the definition of the indefinite integral, which is defined as


 * $$\int{f} := \left\{F + C : C \in \mathbb{R}\right\}, \quad \text{where } F = \int_a^x{f}$$

which, simply put, means the set of all primitives of the function f. Curiously, this definition is implied whenever we talk about the primitive, and, more importantly, this set is the technical output when one inverts a derivative.
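As a concrete instance of the definition above (the choice $f(x) = 2x$ is illustrative):

 * $$\int 2x \,\operatorname{d}\!x = \left\{ x^2 + C : C \in \mathbb{R} \right\}, \qquad \text{since } \frac{\operatorname{d}}{\operatorname{d}\!x}\left[x^2 + C\right] = 2x \text{ for every } C$$

so the indefinite integral is a set of functions, all differing by a constant.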

Integration by Parts
Why are the variables u and v used instead of the more traditional f and g? Well, u and v are the traditional naming convention for integration by parts! Aside from tradition, the naming serves a pragmatic purpose as well: a given function f can be a product of two functions u and v', which can be placed into this equation to yield a calculated answer for the integral of f.

This is an important theorem that is also extremely easy to derive (it involves only algebra and known equations). Assuming one knows the Product Rule for differentiating the product of two functions, the proof is as follows.

Notation
One may have noticed, especially if one has read other mathematical literature, a notational format used when discussing integration. For example, the full notation for integration by parts is


 * $$\int u(x) v'(x) \, \operatorname{d}\!x = u(x) v(x) - \int v(x) \, u'(x) \operatorname{d}\!x $$

but is often written as


 * $$\int u \, \operatorname{d}\!v = u v - \int v \, \operatorname{d}\!u $$

The symbols $$\operatorname{d}\!v$$ and $$\operatorname{d}\!u$$ are not meant in the normal sense discussed in the section on integrals, namely that the variable after the d is the variable of integration, with every other variable treated as a constant. The symbol after the d here instead refers to a function. Thus, the notation is being used in a new manner, which can be succinctly described by defining the specific cases


 * $$\operatorname{d}\!v := v'(x) \operatorname{d}\!x $$
 * $$\operatorname{d}\!u := u'(x) \operatorname{d}\!x $$

Note that this new definition does not conflict with the original definition of $$\operatorname{d}\!x$$, since this one applies when the symbol on the right is a function instead of a variable.
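The integration-by-parts formula can be sanity-checked numerically. The sketch below is our own illustration (the choice $u(x) = x$, $\operatorname{d}\!v = e^x \operatorname{d}\!x$ on $[0, 1]$ is an example, not from the text): it compares a midpoint Riemann sum of $\int_0^1 x e^x \,\operatorname{d}\!x$ against the closed form produced by parts.

```python
import math

def integral(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of the definite integral."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Integration by parts with u(x) = x and dv = e^x dx, so du = dx, v = e^x:
#   int x e^x dx = x e^x - int e^x dx = x e^x - e^x + C
lhs = integral(lambda x: x * math.exp(x), 0.0, 1.0)
rhs = (1 * math.e - math.e) - (0 - 1)   # [x e^x - e^x] evaluated from 0 to 1
print(abs(lhs - rhs) < 1e-6)  # True
```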

Integration by Substitution
Unlike Integration by Parts, no reference to a variable u is made. This is because of the complex relationships between the variable u and the functions g and f in this theorem, which will be explained in another heading. For now, let us focus on making sure that this statement is even valid to begin with.
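For reference, the statement discussed here is presumably the standard change-of-variables formula, written without any auxiliary variable u:

 * $$\int_a^b (f \circ g) \cdot g' = \int_{g(a)}^{g(b)} f$$

where g is differentiable on $$[a, b]$$ and f is continuous on the range of g.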

This formula is, luckily, also easy to validate. Like Integration by Parts, it does not require many theorems, relying only on the Second Fundamental Theorem of Calculus and the Chain Rule.

As mentioned earlier, this equation says little about how it can be applied to compute integrals. Given that much of the content on how integration by substitution is performed is covered by other websites and other wikibooks (take Wikipedia's page on Integration by Substitution, which provides detailed examples of how to use this method, or the Calculus wikibook, whose page also discusses the method of Integration by Substitution), the remainder of this heading will discuss how the steps they teach relate to this theorem.

First and foremost, this theorem stands out from what is normally taught because it uses definite integrals, unlike Integration by Parts, which was stated with indefinite integrals. This is because the theorem, and its explanations, are best illustrated using definite integrals, as the bounds highlight where the functions go in order to equate both statements. However, the bounds making up the definite integral can easily be "canceled" through differentiation to recover the easier-to-work-with form taught in elementary mathematics.

One may be interested to note that when Leibniz's notation is used here, it almost appears as if the algebraic operation of division is applied to the operator $$\frac{\operatorname{d}u}{\operatorname{d}\!x}$$. This is a possible motivation for why this notation continues to exist today, as certain theorems can be easily expressed by adhering to algebra-like properties.

Usage of Integration by Substitution
Unlike Integration by Parts, whose explanation is easily bundled into the derivation of the Product Rule and the definition of the indefinite integral, this theorem requires plenty of explanation in order to understand how it is used.

Aside from the question of why it uses definite integrals, which was brushed over earlier and will be implied throughout this heading, we will first discuss what the functions f, g, and u represent. Note that in nearly all cases (unless one defines $$g(x) = x$$, which does not result in an easier calculation), the function being integrated, which masquerades as f on the left side of the theorem statement, will actually involve a composition of f and g, where g is the function one wishes to manipulate and f comprises the other parts of the overall function to be integrated. So, if the example is


 * $$\int{(2x + 5)(x^2 + 5x)^7}$$

then the integrand $$(2x + 5)(x^2 + 5x)^7$$ can be expressed as $$(f \circ g) \cdot g'$$ with


 * $$f = x^7 \land g = x^2 + 5x \land g' = 2x + 5$$

So for most cases, the method of Integration by Substitution is actually sussing out the function g from the overall integrand $$(f \circ g) \cdot g' = (2x + 5)(x^2 + 5x)^7$$, defining u = g, and reducing the overall function into one whose integral is simple to derive. So for this example,


 * $$(f \circ g) \cdot g' = g' \cdot g^7$$

which can easily be computed, since now the only focus is on $$g^7$$, i.e. apply Integration by Substitution.
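Carrying the example through by hand, $\int g' \cdot g^7 = \frac{g^8}{8} + C = \frac{(x^2+5x)^8}{8} + C$. The sketch below is our own check of that result: it differentiates the candidate primitive numerically and compares against the original integrand at a sample point.

```python
import math

def integrand(x):
    return (2 * x + 5) * (x**2 + 5 * x) ** 7

def F(x):
    # Candidate primitive obtained by the substitution u = x^2 + 5x:
    # int u^7 du = u^8 / 8, so F(x) = (x^2 + 5x)^8 / 8 (plus a constant C).
    return (x**2 + 5 * x) ** 8 / 8

# Check F' = integrand at a sample point using a central difference.
h = 1e-6
x = 0.3
approx = (F(x + h) - F(x - h)) / (2 * h)
print(abs(approx - integrand(x)) / abs(integrand(x)) < 1e-6)  # True
```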

So, the variable u used in most explanations will simply equal g under simple circumstances. For more complex situations, u will not equal g – which is a situation explained in the next heading.

Inverting Integration by Substitution
The usual method of applying Integration by Substitution, explained further in other wikibooks, is to find some function u = g(x), built from the input variable x in the overall statement f, whose derivative can also be found in the overall statement f; substitute it into the statement f, making sure that this new function can replace all instances of the input variable x along with other terms with u; and apply the theorem. However, it is possible to do the inverse. What do we mean? Instead of finding a function u = g(x) to replace the input variable x, one can find the inverse function $$x = g^{-1}(u)$$ that cancels out sections of the overall statement f as well as the input variable x, and then integrate with respect to u.

In other words, whenever one needs to "move" a function from the dx side over to the du side, which involves inverting the function defining u, this corollary is being used. We know that for the variables x and u, inverting is as easy as inverting the function g and using the variables x and u as respective inputs. However, do we know whether we can "move" functions from dx to du by replacing them with the inverse? After all, dx and du are not variables but differential symbols. The following corollary proves that we can.
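The inverse direction can be illustrated with a classic example of our own choosing (not from the text): to integrate $\sqrt{1 - x^2}$, one sets $x = \sin(u)$, i.e. $u = \arcsin(x)$, so that $\operatorname{d}\!x = \cos(u)\,\operatorname{d}\!u$ and the integrand collapses to $\cos^2(u)$. The sketch below compares a direct Riemann sum against the value obtained through this inverse substitution.

```python
import math

def integral(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of the definite integral."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Direct numerical value of the integral of sqrt(1 - x^2) over [0, 1/2].
lhs = integral(lambda x: math.sqrt(1 - x**2), 0.0, 0.5)

# Inverse substitution x = sin(u): the integral becomes int cos(u)^2 du,
# whose primitive is u/2 + sin(2u)/4, evaluated between arcsin(0) = 0
# and arcsin(1/2) = pi/6.
rhs = math.pi / 12 + math.sin(math.pi / 3) / 4
print(abs(lhs - rhs) < 1e-6)  # True
```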