Floating Point/Epsilon

Granularity
Floating point numbers, because they consist of a fixed number of bits, have a granularity: they cannot represent infinitely many fractional values. This means that there is a largest value, &epsilon;, satisfying the following equation, where $$\oplus$$ denotes floating point addition:


 * $$1.0 \oplus \epsilon = 1.0$$

This value is called the machine epsilon of the floating point system.
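For IEEE 754 double precision with the default round-to-nearest-even rounding, that largest value is $$2^{-53}$$, which can be checked directly (a Python sketch; Python floats are IEEE 754 doubles on virtually all platforms):

```python
# Adding 2**-53 to 1.0 leaves it unchanged (the tie rounds down to
# the even significand of 1.0), while the next power of two does not.
print(1.0 + 2**-53 == 1.0)   # True
print(1.0 + 2**-52 == 1.0)   # False
```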

Epsilon (&epsilon;)
When a real number is rounded to the nearest floating point number, the machine epsilon forms an upper bound on the relative error. This fact makes the machine epsilon extremely useful in determining the convergence tolerances of many iterative numerical algorithms.
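A minimal sketch of this use (the function name and stopping rule here are illustrative, not a standard API): Newton's method for square roots can stop once the relative update falls within a few machine epsilons, since no further digits can be gained past that point. Note that Python's `sys.float_info.epsilon` is $$2^{-52}$$, the spacing between 1.0 and the next larger double, i.e. twice the rounding epsilon described here.

```python
import sys

def newton_sqrt(a, tol=4 * sys.float_info.epsilon):
    # Newton iteration for sqrt(a), a > 0: stop when the relative
    # change between successive iterates is below a small multiple
    # of machine epsilon.
    x = a if a > 1.0 else 1.0
    while True:
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) <= tol * abs(nxt):
            return nxt
        x = nxt
```

Choosing the tolerance as a small multiple of epsilon, rather than a fixed constant like `1e-10`, makes the stopping rule scale correctly with the magnitude of the result.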

Determining Epsilon
The machine epsilon can be computed according to the formula:


 * $$\epsilon = {b \over 2} \cdot b^{-p}$$

where $$b$$ is the base (radix) of the floating point system and $$p$$ is its precision, the number of digits in the significand.

So, for IEEE 754 single precision ($$b = 2$$, $$p = 24$$) we have

 * $$\epsilon = 2^{-24} = 5.96046447753906 \times 10^{-8}$$

and for IEEE 754 double precision ($$b = 2$$, $$p = 53$$) we have

 * $$\epsilon = 2^{-53} = 1.11022302462516 \times 10^{-16}$$
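The formula can be evaluated directly (a Python sketch; the function name is illustrative):

```python
def rounding_epsilon(b, p):
    # epsilon = (b / 2) * b**(-p), with base b and precision p digits.
    return (b / 2) * b**(-p)

print(rounding_epsilon(2, 24))  # 5.960464477539063e-08 (single)
print(rounding_epsilon(2, 53))  # 1.1102230246251565e-16 (double)
```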

When $$p$$ is not known, but $$b$$ is known to be 2, the machine epsilon can be found by starting with a trial value of, say, 0.5 and successively dividing the value by 2 until $$1.0 \oplus \epsilon = 1.0$$ is true.
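The search described above can be sketched as follows (assuming IEEE 754 doubles and the $$1.0 \oplus \epsilon = 1.0$$ convention used in this article):

```python
def find_epsilon():
    # Halve a trial value until adding it to 1.0 no longer changes 1.0.
    eps = 0.5
    while 1.0 + eps != 1.0:
        eps /= 2.0
    return eps

print(find_epsilon())  # 1.1102230246251565e-16 = 2**-53 for doubles
```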

Granularity Effects
An effect of this granularity is that some basic algebraic properties do not strictly hold. For instance, floating-point addition is not associative: for some values $$x$$, $$y$$, and $$z$$,


 * $$x \oplus (y \oplus z) \ne (x \oplus y) \oplus z$$

When floating-point numbers are used in iterative calculations, round-off and granularity errors can accumulate into large errors.
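A concrete double-precision instance (a Python sketch): near $$10^{16}$$ the spacing between adjacent doubles is 2.0, so an added 1.0 may be lost to rounding depending on how the sum is grouped.

```python
x, y, z = 1e16, 1.0, 1.0

left = (x + y) + z   # each 1.0 is rounded away individually
right = x + (y + z)  # 2.0 is representable at this magnitude

print(left == right)  # False: the grouping changes the result
print(right - left)   # 2.0
```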