A sign of the times?
At the risk of provoking the wrath of the legions of bright, young Java/Ruby/C#/[insert name of latest fad language] programmers in the audience, I'd like to point out that I learnt about the perils and pitfalls of the limited precision of floating-point numbers when I was a rookie FORTRAN programmer 25 years ago.
Back then, programming textbooks clearly spelled out the difference in precision between REAL (i.e. 32-bit single precision) and DOUBLE PRECISION (64-bit) floating-point numbers, warned about the kind of rounding errors described by Dan Clarke, and offered sound advice on when to use integers instead.
My favourite FORTRAN textbook (Munro's "FORTRAN 77") specifically warned against using any kind of floating-point numbers to represent currency amounts in financial applications.
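For anyone who hasn't been bitten by this first-hand, here is a minimal sketch of what those textbooks were warning about. It's in Java rather than FORTRAN, purely for illustration (the class name CurrencyPitfall is made up for the example): adding ten cents ten times as binary doubles doesn't land exactly on one dollar, whereas an integer count of cents, or a BigDecimal, does.

    import java.math.BigDecimal;

    public class CurrencyPitfall {
        public static void main(String[] args) {
            // 0.10 has no exact binary representation, so the
            // representation errors accumulate as we add.
            double total = 0.0;
            for (int i = 0; i < 10; i++) {
                total += 0.10;
            }
            System.out.println(total);        // 0.9999999999999999
            System.out.println(total == 1.0); // false

            // The textbook advice: keep currency in integer cents...
            long cents = 0;
            for (int i = 0; i < 10; i++) {
                cents += 10;
            }
            System.out.println(cents == 100); // true

            // ...or use a decimal type such as BigDecimal.
            BigDecimal sum = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) {
                sum = sum.add(new BigDecimal("0.10"));
            }
            System.out.println(sum);          // 1.00
        }
    }

The point isn't that doubles are "wrong", just that binary fractions can't represent most decimal fractions exactly, which is the last thing you want in a ledger.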
Programmers had a greater appreciation of the hardware in those days, largely because there was such a variety. The Intel monoculture was far in the future, and each manufacturer had its own internal floating-point representation, none of them compliant with IEEE 754. When you ported a program from one machine to another, you had to take into account, for example, that single-precision floating-point arithmetic on an IBM VM/370 system was only good for about six decimal digits of precision.
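On a modern IEEE 754 machine the details differ (single precision is good for roughly seven significant decimal digits rather than six), but the same effect is easy to see. A quick illustrative Java fragment, not a reproduction of the VM/370 behaviour:

    public class SinglePrecision {
        public static void main(String[] args) {
            // IEEE 754 single precision has a 24-bit significand,
            // i.e. roughly 7 significant decimal digits.
            float  f = 123456.789f;
            double d = 123456.789;
            System.out.println(f); // 123456.79  -- digits beyond ~7 are lost
            System.out.println(d); // 123456.789 -- double keeps ~15-16 digits
        }
    }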
Programmers seem to be less aware of such issues today, and more trusting that the CPU will always give them "the correct answer".
Then their Java enterprise application rounds a number in a way they hadn't expected, and their company's accounts come up a couple of hundred million dollars short.
Maybe we ancient FORTRAN programmers can still teach them a trick or two ;-)