One of the common uses of floating point arithmetic is in
modelling real-world phenomena. The techniques described in
the article, I believe, are inadequate to handle some of the
issues that crop up in trying to solve these models. For
example: we often have measurement error; know physical
constants to varying degrees of accuracy; discretize data
that is supposed to represent a continuum; use algorithms
that may not be numerically stable over the entire domain;
or deal with problems that are inherently stiff. The real
world is nonlinear.
There is an alternative to single-point schemes (whether
integers, scaled integers, rationals, or floating point
numbers as approximations to a continuum), which is
to compute with sets of numbers. For reasons of efficiency
and to take advantage of hardware acceleration, we
generally use intervals, defined as the set of all numbers
between a lower bound a and an upper bound b, written [a,b].
By using intervals we can represent measurement error,
floating point rounding error, and imprecise constants in a
unified and consistent way. For some of us, one of the
greatest strengths of this approach is that when you
compute something, you also obtain an indication of the
quality of the answer, i.e. the width of the interval.
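As a minimal sketch of that unified view (in Python, not any
particular library's API; the measurement and the constant
below are invented for illustration), an interval type can
carry measurement error and rounding error through a
computation by nudging its bounds outward after every
operation:

```python
import math

class Interval:
    """Closed interval [lo, hi]. After each arithmetic operation
    the bounds are nudged one ulp outward (math.nextafter,
    Python 3.9+), so the true real-valued result is guaranteed
    to be contained. A toy sketch, not a production library."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    @staticmethod
    def _outward(lo, hi):
        # Round the lower bound down and the upper bound up by one ulp.
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, other):
        return Interval._outward(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval._outward(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval._outward(min(p), max(p))

    def width(self):
        # The width of a result indicates the quality of the answer.
        return self.hi - self.lo

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

# Measurement error and an imprecise constant, represented
# uniformly (the numbers are made up for illustration):
mass   = Interval(9.7, 9.9)    # measured as 9.8 +/- 0.1 kg
g      = Interval(9.80, 9.81)  # local g known only to three digits
weight = mass * g              # encloses every consistent true value
print(weight, weight.width())
```

Rounding outward on every operation is slightly more
conservative than switching the hardware rounding mode, but
it preserves the containment guarantee, which is the point.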
Some problems that have traditionally been considered
intractable may no longer be so when using interval
arithmetic (IA).
Consider a (large) solution space. By eliminating boxes
(multidimensional intervals) in which it can be proved the
solution cannot lie, you can iterate towards more and more
accurate approximations of the solution, subject to the
precision of the arithmetic being used. As the boxes
shrink, switch to higher precision if required. A
classic example of this technique is an Interval Newton
method for finding all roots of a function.
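The box-elimination idea can be sketched as a toy
one-dimensional branch-and-prune; this is a simplification
of the Interval Newton method (which additionally uses the
derivative to contract boxes), applied to the invented
example f(x) = x*x - 2 on [-3, 3], and it ignores outward
rounding for brevity:

```python
def f_range(lo, hi):
    # A (loose) interval extension of f(x) = x*x - 2 over [lo, hi]:
    # the true range of f on the box is contained in the result.
    p = (lo * lo, lo * hi, hi * hi)
    return min(p) - 2.0, max(p) - 2.0

def prune(lo, hi, tol=1e-9):
    """Branch-and-prune: discard boxes where the interval image
    of f provably excludes 0; bisect the rest until narrow."""
    flo, fhi = f_range(lo, hi)
    if flo > 0.0 or fhi < 0.0:
        return []             # no root can lie in this box
    if hi - lo < tol:
        return [(lo, hi)]     # narrow candidate box: report it
    mid = (lo + hi) / 2.0
    return prune(lo, mid, tol) + prune(mid, hi, tol)

# The surviving boxes enclose both roots of x*x - 2,
# i.e. +/- sqrt(2):
boxes = prune(-3.0, 3.0)
print(boxes)
```

Because every discarded box is *proved* root-free, all roots
in the starting box are found, not just whichever one a
point iteration happens to converge to.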
For all of this to work, you do need an implementation of
interval arithmetic, one that guarantees containment of the
true solution for all operator-operand combinations. In my
own work, I use the implementation that is part of Sun's