Re: 32-bit measurement accuracy
Actually, double is best for intermediate calculations if performance isn't an issue. A mix of addition and subtraction operations that should be mathematically associative is not so when the operands are floats or doubles, because the available precision shifts as the accumulator's magnitude fluctuates. This can lead to maddening bugs where the final result has far less precision than required until the order of operations is tuned. Float hits this sooner than you'd expect, but it's very unlikely to happen with a double.
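A minimal Python sketch of both effects. The specific values (1e16 as the large term, summing 0.1 a million times) are my illustrative choices, not from the post above; the float32 accumulator is simulated by round-tripping through `struct` after every operation:

```python
import math
import struct

# Two groupings of the same three terms -- mathematically identical,
# but double-precision addition is not associative:
a = (1e16 + 1.0) - 1e16   # the 1.0 is absorbed: 1e16 + 1.0 rounds back to 1e16
b = (1e16 - 1e16) + 1.0   # the big terms cancel first, so the 1.0 survives
print(a, b)               # 0.0 1.0

# math.fsum tracks the lost low-order bits, so grouping no longer matters:
print(math.fsum([1e16, 1.0, -1e16]))  # 1.0


def f32(x):
    """Round a Python float (a double) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Float degrades much sooner than double under plain accumulation:
s32 = 0.0
s64 = 0.0
for _ in range(1_000_000):
    s32 = f32(s32 + f32(0.1))   # 32-bit accumulator: drifts far from 100000
    s64 = s64 + 0.1             # 64-bit accumulator: off by roughly 1e-6
print(s32, s64)
```

If re-tuning the order of operations by hand is impractical, `math.fsum` (or Kahan summation in languages without it) sidesteps the problem entirely.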
Forgetting to round off when displaying floats and doubles to humans is a whole different problem.