It's all a matter of scale
As a longtime Fortran programmer (and compiler developer), I was delighted to see this article, which sheds light on a subject that has ensnared many over the years. You are right that some try to "fix" the problem by fudging, and right again that decimal arithmetic (most commonly used in COBOL applications) is a better way. But there is a method available in almost any language that has proven effective - scaling.
The trick here is not to do the computations in dollars or pounds or euros, which entail decimal fractions, but in cents (or whatever is appropriate for your currency). Scale the input values by 100 so that every quantity is a whole number of cents. When you are done and want to display the result, divide by 100 and display the result rounded to the nearest .01. You'll never be off.
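Here is a minimal sketch of the idea in Fortran (the program and variable names are mine, purely for illustration). All of the arithmetic is done in whole cents with integers; the split into dollars and cents happens only at the final print:

    program scaled_money
      implicit none
      integer :: price_cents, qty, total_cents

      price_cents = 1999               ! $19.99 held as 1999 cents
      qty = 3
      total_cents = price_cents * qty  ! exact integer arithmetic

      ! Convert back to dollars only at the point of display.
      print '(a, i0, a, i2.2)', 'Total: $', &
            total_cents / 100, '.', mod(total_cents, 100)
    end program scaled_money

Because every intermediate value is an exact whole number of cents, there is nothing to be "off" about until the final display, and that is a pure formatting step.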
It is also important to keep in mind that a single-precision float (float in C, real in Fortran, etc.) is typically good to only about 7 decimal digits, which means that as values get larger, you start to lose the low-order digits - the cents themselves. It is better to use the double precision datatype, which is good to about 15 digits.
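You can watch this happen. 16,777,217 (that is, 2^24 + 1) is the first whole number a 32-bit float cannot represent, so an amount of $167,772.17, held as cents in single precision, is already wrong by a cent - as this little demonstration shows:

    program precision_demo
      implicit none
      real :: s
      double precision :: d

      ! 16,777,217 cents ($167,772.17) is the first integer a
      ! 32-bit real cannot hold exactly; double precision is fine.
      s = 16777217.0
      d = 16777217.0d0
      print *, 'single:', s   ! stored as 16777216.0 - a cent is gone
      print *, 'double:', d
    end program precision_demo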
COBOL, Fortran, and PL/I have built-in features for handling this scaling; in languages that don't, you'll have to do it yourself, but it's easy once you get the idea.
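In Fortran, the feature is the P scale-factor edit descriptor. A -2P in a format multiplies values by 100 on input and divides them by 100 on output, so you can read dollars-and-cents text straight into a cents value and print it straight back out, with no explicit multiplication or division in your code. A small sketch (reading from a string so it runs standalone):

    program p_edit_demo
      implicit none
      real :: cents
      character(len=8) :: field = '   19.99'

      ! On input, -2P multiplies the external value by 100, so the
      ! text "19.99" is stored internally as 1999.0 (cents).
      read (field, '(-2p, f8.2)') cents
      print *, 'stored internally:', cents

      ! On output, the same -2P divides by 100, displaying dollars.
      print '(a, -2p, f0.2)', 'displayed as: $', cents
    end program p_edit_demo

Note that the P descriptor works on real variables; for exact arithmetic you would still copy the value into an integer, e.g. with nint(cents).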