Real Analysis/Limits

The challenge in understanding limits lies not in the definition but in its execution. Successfully completing a limit proof, using the epsilon-delta definition, involves learning many different concepts at once—most of which will be unfamiliar from earlier mathematics. This chapter will serve as a guide to navigating these proofs, and the skills here will serve you well in higher mathematics.

Definition
The definition of a limit, in ordinary real analysis, is notated as:


 * $$\lim_{x \rightarrow c}f(x) = L$$

One way to conceptualize the definition of a limit, and one which you may have been taught, is this: $$\lim_{x \rightarrow c}f(x) = L$$ means that we can make f(x) as close as we like to L by making x close to c. However, in real analysis, you will need to be rigorous with your definition—and we have a standard definition for a limit.

The notation of a limit is actually a shorthand for this expression:

 * $$\forall \epsilon > 0: \exists \delta > 0: 0 < |x - c| < \delta \implies |f(x) - L| < \epsilon$$

This definition gives many people trouble, but since it is so fundamental to higher mathematics, there are many ways to help solidify it. This chapter will be a guide to the behavior of this definition and provide the insight needed to work with it, while the Exercises will help unravel the puzzle, solidify the concept, and enable you to execute this definition properly.

Corollaries of Limits


It is very common, when working with limits, to invoke the concept of infinity. However, infinity has yet to be well defined. Intuitively, we know that infinity represents endlessness, and it is written $∞$. Yet infinity itself is not a number, and the current limit definition fails if we treat it like one. If you suppose some limit where $c = \infty$ and apply our original definition, it would mean that


 * $$\lim_{x \rightarrow \infty}f(x) = L$$ means that $$\forall \epsilon>0: \exists \delta > 0 : 0 < |x - \infty| < \delta \implies |f(x) - L| < \epsilon $$

Which is clearly nonsense!
 * 1) You cannot "subtract infinity" - infinity is neither a number nor a variable.
 * 2) Infinity cannot be bounded, yet placing it in an expression of the form $$|a - b| < x$$ implies that it is.

So the definition needs to be rewritten, which is done in the following chart. The chart gives the definitions for the limit as x approaches positive or negative infinity, and for the limit in which ƒ(x) diverges to positive or negative infinity:
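A sketch of the standard forms these definitions take, using the capital-letter conventions explained in the note that follows:

```latex
% x approaches +infinity (M bounds the input, playing the role of delta):
\lim_{x \rightarrow \infty}f(x) = L \iff \forall \epsilon > 0: \exists M > 0: x > M \implies |f(x) - L| < \epsilon
% x approaches -infinity:
\lim_{x \rightarrow -\infty}f(x) = L \iff \forall \epsilon > 0: \exists M > 0: x < -M \implies |f(x) - L| < \epsilon
% f(x) diverges to +infinity (N bounds the output, playing the role of epsilon):
\lim_{x \rightarrow c}f(x) = \infty \iff \forall N > 0: \exists \delta > 0: 0 < |x - c| < \delta \implies f(x) > N
% f(x) diverges to -infinity:
\lim_{x \rightarrow c}f(x) = -\infty \iff \forall N > 0: \exists \delta > 0: 0 < |x - c| < \delta \implies f(x) < -N
```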

Take note of the following variables. We use capital N and M because ε and δ connote small numbers; capital N and M carry the opposite connotation.
 * 1) N usually bounds the output f(x) in a limit involving infinity and is analogous to ε.
 * 2) M usually bounds the input x in a limit involving infinity and is analogous to δ.

Conceptualization



 * 1) For every ε, the value of δ must be derived from ε alone. This is a powerful statement: it says that δ is tied to ε. To excuse mathematically rigorous language for a moment, δ can be imagined as a function of ε - given an ε as input, it produces a δ. This matters because neither δ nor ε is allowed to depend on variables such as x in its formulation.
 * 2) ε and δ are supposed to represent bounds - hence the absolute value signs. The inequalities are mathematically equivalent to writing $$-\delta < x - c < \delta$$ and $$-\epsilon< f(x) - L < \epsilon$$, which exhibits their bound-like nature much more clearly.
 * 3) This limit definition is designed to ignore the value of f(c) and whether c is even in the domain of ƒ. The requirement $$|x-c|>0$$ is much of the appeal of studying calculus: it removes the technicality of having to analyze the behavior at the point itself (which is often undefined to begin with). It is the mathematical implementation of the idea that the behavior of a function near a point should not be affected by its behavior at the point. Thus, f need not be defined at c to have a limit there.

Properties
Given that limits are such a fundamental concept of calculus, it is reasonable to expect them to have properties intriguing enough both to warrant analysis and to make limits a staple topic in elementary, applied, and higher mathematics.

Uniqueness
A limit is unique: for a given function and a given point, there is at most one answer. This is commonly rephrased as "a function cannot approach two different limits at c". Uniqueness is very important, since without it, working with limits would grow so complicated that they would become unusable.

Algebraic Operations
If $$\lim_{x \rightarrow c}f(x) = L$$ and $$\lim_{x \rightarrow c}g(x) = M$$, then:

 * $$\lim_{x \rightarrow c}(f(x) + g(x)) = L + M$$
 * $$\lim_{x \rightarrow c}(f(x) - g(x)) = L - M$$
 * $$\lim_{x \rightarrow c}(f(x)g(x)) = LM$$
 * $$\lim_{x \rightarrow c}(af(x)) = aL$$
 * $$\lim_{x \rightarrow c}\frac{1}{g(x)} = \frac{1}{M}$$, provided $$M \not= 0$$
 * $$\lim_{x \rightarrow c}\frac{f(x)}{g(x)} = \frac{L}{M}$$, provided $$M \not= 0$$

By applying the corresponding theorems for sequential limits, we find that functional limits are unique, preserve algebraic operations and ordering, and satisfy a corresponding "Squeeze Theorem".


 * If $$\exists \delta > 0: f(x) \geq g(x)$$ for all x with $$0 < |x - c| < \delta$$, then $$L \geq M$$. (Note that even a strict inequality $$f(x) > g(x)$$ only guarantees $$L \geq M$$.)


 * If $$L = M$$ and $$f(x) \leq h(x) \leq g(x)$$ near c, then $$\lim_{x \rightarrow c}h(x) = L$$.


 * If L = 0 and h(x) is bounded, then $$\lim_{x \rightarrow c}f(x)h(x) = 0$$.

Proof
For the various operations, the following proofs may require more or less knowledge of algebraic inequality manipulation.

Addition
Of the operations, the proof for addition is the simplest, as it relies on the least amount of inequality algebra.
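A sketch of the usual triangle-inequality argument:

```latex
% Given epsilon > 0, apply the definitions of the two limits with epsilon/2:
\exists \delta_1 > 0: 0 < |x - c| < \delta_1 \implies |f(x) - L| < \tfrac{\epsilon}{2}
\exists \delta_2 > 0: 0 < |x - c| < \delta_2 \implies |g(x) - M| < \tfrac{\epsilon}{2}
% Let delta = min(delta_1, delta_2). For 0 < |x - c| < delta, the triangle inequality gives:
|(f(x) + g(x)) - (L + M)| \leq |f(x) - L| + |g(x) - M| < \tfrac{\epsilon}{2} + \tfrac{\epsilon}{2} = \epsilon
```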

Subtraction
Subtraction follows from the addition proof by considering a function h that is the negation of g; in other words, treat the g in the addition proof as standing for the negated function.
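In symbols, a sketch using the addition result together with the limit of a multiple:

```latex
% Set h(x) = -g(x); then lim h = -M, and the addition proof applies directly:
\lim_{x \rightarrow c}(f(x) - g(x)) = \lim_{x \rightarrow c}(f(x) + h(x)) = L + (-M) = L - M
```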

$$\blacksquare$$

Multiplication
Of the operations, the proof for multiplication is the most complex, as it relies on the greatest amount of inequality algebra. It also requires a seemingly contrived lemma to work. We will start by proving the lemma, which is simply an algebraic relationship between inequalities, much as the binomial theorem relates a sum of terms to a product.

As you can see, the lemma describes a simple-to-prove, valid, yet very contrived and unnatural-looking relationship between numbers. But this relationship lends itself to blind application in limit proofs, because any values of a, b, c, and d (even 0) work, and the condition x > 0 matches the ε variable.

As you will see below, we will apply this lemma for multiplication.
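One standard route (not necessarily the exact lemma above) rests on the following identity and bounds:

```latex
% Rewrite the product difference so each factor can be bounded separately:
f(x)g(x) - LM = (f(x) - L)\,g(x) + L\,(g(x) - M)
% For x close enough to c that |g(x) - M| < 1, we have |g(x)| \leq |M| + 1, hence:
|f(x)g(x) - LM| \leq |f(x) - L|\,(|M| + 1) + |L|\,|g(x) - M|
% Shrinking delta so that, in addition,
|f(x) - L| < \frac{\epsilon}{2(|M| + 1)} \quad \text{and} \quad |g(x) - M| < \frac{\epsilon}{2(|L| + 1)}
% makes each term less than epsilon/2, so |f(x)g(x) - LM| < \epsilon.
```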

Multiple
The proof for multiples of some function ƒ follows from the multiplication proof. It also relies on the proof that the limit of a constant is that constant. Because the two proofs this one relies on are robust (they account for cases like 0), this proof is just as robust, working even when a = 0.

Reciprocal
Of the operations, the proof for the reciprocal is similar to that of multiplication. It too requires a seemingly contrived relationship between mathematical statements in order to work, and it relies on carefully maintaining the bounds on ε and δ that define a valid limit. Let us begin with the "contrived relationship".

As you can see, the lemma describes a simple-to-prove, valid, yet very contrived and unnatural-looking relationship between numbers. But this relationship lends itself to blind application in limit proofs, because any values of a and b (excluding 0) work, and the condition x > 0 matches the ε variable.

As you will see below, we will apply this lemma for the reciprocal. Note that the proof is a simple assertion statement.
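The assertion can be sketched as follows (again, not necessarily the exact lemma above):

```latex
% If |g(x) - M| < |M|/2, then |g(x)| > |M|/2, so:
\left|\frac{1}{g(x)} - \frac{1}{M}\right| = \frac{|g(x) - M|}{|g(x)|\,|M|} < \frac{2}{M^2}\,|g(x) - M|
% Shrinking delta so that also |g(x) - M| < (M^2/2)\,\epsilon forces the left side below epsilon.
```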

Division
The proof for division of the function ƒ by g is a corollary of the proofs for limits of multiplication and limits of reciprocals.
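In symbols, the corollary is a one-line computation:

```latex
\lim_{x \rightarrow c}\frac{f(x)}{g(x)} = \lim_{x \rightarrow c}\left(f(x) \cdot \frac{1}{g(x)}\right) = L \cdot \frac{1}{M} = \frac{L}{M}, \qquad M \not= 0
```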

As always, this proof has the obvious restriction that M cannot be 0.

Limits of Functions
Here, we will prove the limits of many functions you will commonly see. As always, a chart is provided below for quick recall.

Note that for the linear function, we used z instead of the usual x because the variable name x is already defined and is being used by the limit notation.
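A sketch of entries the chart presumably contains, each annotated with the δ that makes the proof work:

```latex
% Constant function f(z) = k: any delta > 0 works, since |k - k| = 0 < epsilon always:
\lim_{x \rightarrow c} k = k
% Identity function f(z) = z: take delta = epsilon:
\lim_{x \rightarrow c} x = c
% Linear function f(z) = az + b with a \not= 0: take delta = epsilon / |a|:
\lim_{x \rightarrow c} (ax + b) = ac + b
```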

Limits of Sequences
Consider the step function f with $$f(x) = 0$$ for $$x \leq 0$$ and $$f(x) = 1$$ for $$x > 0$$, together with the sequences $$(x_n) = (\frac{1}{n})$$ and $$(y_n) = (\frac{-1}{n})$$. Each converges to zero, but $$(f(x_n)) = (1)$$ and $$(f(y_n)) = (0)$$, and these have different limits as $$n \rightarrow \infty$$. Thus $$\lim_{x \rightarrow 0} f(x)$$ does not exist.

Limits of Discontinuity
We'll be giving many more examples in the section on continuity. Although discontinuity is most naturally discussed alongside continuity (which is covered in the next chapter), discontinuity is actually defined in terms of limits.

Point Discontinuity
An example of point discontinuity would be the functions


 * Example 1 $$f(x) =

\begin{cases} 1 & \mbox{if }x = 0 \\ 0 & \mbox{if }x \not= 0 \\ \end{cases} $$
 * Example 2 $$g(x) = \frac{x(x-1)}{x-1}$$

For these functions,


 * Example 1 $$\lim_{x \rightarrow 0} f(x) = 0$$
 * Example 2 $$\lim_{x \rightarrow 1} g(x) = 1$$

The proof of which is the following:
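A sketch of the two arguments:

```latex
% Example 1: any delta > 0 works. If 0 < |x - 0| < delta, then x \not= 0, so f(x) = 0 and:
|f(x) - 0| = 0 < \epsilon
% (The value f(0) = 1 never enters, because the requirement |x - c| > 0 excludes x = 0.)
% Example 2: for x \not= 1, g(x) = x, so take delta = epsilon:
0 < |x - 1| < \delta \implies |g(x) - 1| = |x - 1| < \epsilon
```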

Jump Discontinuity
 * Let $$f(x) = \begin{cases} 0 & \mbox{if }x\leq 0 \\ 1 & \mbox{if }x > 0 \\ \end{cases}$$. Then $$\lim_{x \rightarrow 0} f(x)$$ does not exist.

Limits of Unusual Functions
Many of the examples here may seem a bit contrived and appear quite nasty, with even nastier proofs, but if done correctly, these examples (and the associated exercises) will solidify not only the methodology of a limit proof, but also how mathematics can, using verified theorems and behaviors, solve seemingly unsolvable problems.

Our first example, often given as a demonstration of just how nasty functions can get (and how far a definition can take you), is


 * $$f(x) = \begin{cases}

1/q & \mbox{if }x \text{ is rational, written in lowest terms as } x = p/q \text{ with } q > 0 \\ 0 & \mbox{else} \\ \end{cases}, \forall x \in (0, 1) $$

For the function ƒ, $$\lim_{x \rightarrow c} f(x) = 0$$ for all numbers in the domain. Yes, really.

The first step in understanding the proof of this statement is to stop treating limits and continuity as the same thing - that is, to resist making the first step imagining the graph of this function and "zooming in" until an answer can be read off graphically. Do not be discouraged if this is how you thought about the problem; this method is the simplified picture of limits commonly taught in elementary mathematics, and so it is likely ingrained.
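A sketch of the standard argument, assuming x = p/q is written in lowest terms as above:

```latex
% Fix c \in (0, 1) and \epsilon > 0. Only finitely many rationals p/q \in (0, 1) have
% q \leq 1/\epsilon, so f(x) \geq \epsilon at only finitely many points x_1, \dots, x_k.
% Choose delta smaller than the distance from c to the nearest such point other than c:
\delta = \min\{\,|c - x_i| : x_i \not= c\,\}
% Then 0 < |x - c| < \delta implies f(x) < \epsilon, i.e. |f(x) - 0| < \epsilon.
```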

This proof demonstrates a method of mathematical proof that manipulates theorems, rather than numbers or variables, to arrive at the epsilon-delta model, which in turn establishes the existence of the limit. It also shows how a limit proof is really an exercise in relating two easily malleable inequalities using valid theorems.

The next example, of a similar vein, is


 * $$g(x) = \begin{cases}

1 & \mbox{if }x \in \mathbb{Q} \\ 0 & \mbox{else} \\ \end{cases} $$

For the function g, $$\lim_{x \rightarrow c} g(x)$$ does not exist for any $$c \in \mathbb{R}$$.

Given $$x \in \mathbb{R}$$, let $$x_n$$ be any rational number in the interval $$(x - \frac{1}{n}, x + \frac{1}{n})$$ with $$x_n \not= x$$, and let $$y_n$$ be any irrational number in the same interval ($$x_n$$ and $$y_n$$ are guaranteed to exist by the density of the rationals and irrationals). Since $$|x_n - x| < \frac{1}{n}$$ and $$|y_n - x| < \frac{1}{n}$$, we have $$(x_n),(y_n) \rightarrow x$$. However, $$(g(x_n)) = (1)$$ and $$(g(y_n)) = (0)$$, so their limits are 1 and 0. Since these are not equal, $$\lim_{y\rightarrow x} g(y)$$ does not exist.

Appendix
Here, we will explore further topics regarding limits. First, we will review the nature of functions. Recall that a function from a set X to a set Y is a mapping $$f: X \rightarrow Y$$ such that f(x) is a unique element of Y for every $$x\in X$$. In analysis, we tend to talk about functions from subsets $$A \subseteq \mathbb{R}$$ to $$\mathbb{R}$$.

The definition for the limit of a function is much the same as the definition for a sequence. In fact, as we will see later, it is possible to define functional limits in terms of sequential limits. For the moment, however, let us restate the definition of a limit for a function ƒ defined on a general domain:

Given a subset $$A \subseteq \mathbb{R}$$ and a function $$f:A\rightarrow \mathbb{R}$$, we say $$\lim_{x \rightarrow c}f(x) = L$$ if $$\forall \epsilon > 0: \exists \delta > 0: \forall x \in A: 0<|x-c|<\delta \implies |f(x)-L|<\epsilon$$

Sequential Limits
One curious result of thinking of the real numbers as built upon the natural numbers and the like (as our section on numbers in this wikibook is structured) is that the definition of a limit, of which we have used the real-number version all this time, can be derived from sequential limits rather than taken axiomatically, as follows:

Given a subset $$A \subseteq \mathbb{R}$$ and a function $$f:A\rightarrow \mathbb{R}$$, we say $$\lim_{x \rightarrow c}f(x) = L$$ if for every sequence $$(x_n)_{n=1}^{\infty}$$ in $$A$$ with $$x_n \not= c$$ and $$\lim_{n \rightarrow \infty}x_n = c$$, we have $$\lim_{n \rightarrow \infty}f(x_n) = L$$

Note that the requirement $$x_n \not= c$$ corresponds with the requirement $$|x - c| > 0$$.

As an exercise to test your understanding, prove that these two definitions are equivalent. Note that taking the contrapositive gives a good criterion for showing that a limit does not exist:

If $$\exists (x_n), (y_n): (x_n)\rightarrow c, (y_n)\rightarrow c$$, and $$ \lim_{n \rightarrow \infty}(f(x_n)) \not= \lim_{n \rightarrow \infty}(f(y_n))$$, then $$\lim_{x \rightarrow c}f(x)$$ does not exist.

Definition on an Arbitrary Metric Space
Let $$(X,d_1)$$ and $$(Y,d_2)$$ be metric spaces, and let $$f:X \to Y$$.

The limit of $$f$$ as $$x \in X$$ approaches $$a \in X$$ is equal to $$L \in Y$$ if $$\forall \epsilon > 0 \text{ }\exists \delta > 0 \text{ such that } 0<d_1(x,a)< \delta \implies d_2(f(x),L) < \epsilon$$

This is denoted $$\lim_{x \to a} f = L$$