A couple dozen in a quintillion ...
... is unlikely to affect me & mine.
That said, shame on Intel. That's an 'orrible documentation error.
Sharp-eyed and mathematically savvy coder and blogger Bruce Dawson has spotted something interesting: the fsin instruction in x86/x64 chippery can produce a “worst-case error [of] 1.37 quintillion units in the last place”. That's not helpful, because Intel's documentation suggests far smaller errors are the norm. And because a …
22 divided by 7 is 3.142857.
Pi is 3.14159 for all practical purposes. To four significant figures the two round to 3.142 and 3.143 respectively.
7 times pi (3.14159265…) is 21.9911486, just shy of the 22 that 22/7 implies.
That's close enough for carpentry, masonry, sheet metal layout, horseshoes and hand grenades.
The error from using that useful fractional version of Pi would hardly be noticeable unless you were using a diameter of several miles.
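To put a rough number on that claim, here's a quick sketch of the circumference error from using 22/7 in place of pi (the diameters are hypothetical, picked just for illustration):

```python
import math

# 22/7 overshoots pi by about 0.00126
err_per_unit = 22 / 7 - math.pi

diameter_ft = 10.0
print(diameter_ft * err_per_unit)    # ~0.013 ft: invisible in carpentry

# at a diameter of 5 miles (26400 ft) the error grows to roughly 33 ft
print(26400 * err_per_unit)
```

So the commenter's intuition holds: the absolute error scales linearly with the diameter and only becomes noticeable at mileage scales.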
you wouldn't really expect an accurate result for sin(1e99), would you?
There's no reason why that input can't be range-reduced.
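That range reduction can be sketched in a few lines: reduce the huge argument modulo 2π using a high-precision pi (digits taken from standard tables), then take the sine of the small remainder. A good libm (e.g. glibc's) does an equivalent full-precision reduction internally, so the two results should agree:

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50
# pi to 50 decimal places, from standard tables
PI_HI = Decimal("3.14159265358979323846264338327950288419716939937511")

x = Decimal(10) ** 22            # a huge argument, exactly representable
r = x % (2 * PI_HI)              # range reduction with ~160 bits of pi
print(math.sin(float(r)))        # sine of the reduced argument
print(math.sin(1e22))            # a careful libm reduces just as accurately
```

The point: nothing stops an implementation from carrying enough bits of pi to reduce even 1e22 correctly; it just costs time and tables, which is the trade-off being argued about here.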
In any case, the problem is that Intel's fsin is inaccurate for values close to π. That's a bit more of an issue. Try reading Dawson's post.
The problem lies in guys like me having to make transistors give you an answer in a small amount of time and guys like him expecting results regardless of how long it would take. Intel have now sensibly fixed the documentation to explain what really happens.
Intel's π is 66 bits, and Bruce is asking for the sine of a value so close to π that the true answer looks like 0.0000001 — about 80 bits of leading zeroes. Not surprisingly the answer is correct to around 0.000001 (65 bits of zero) in absolute terms, but that's a huge error relative to a true value with 80 bits of zero.
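The effect is easy to see in ordinary double precision. Near π the true sine of the input equals the tiny gap between real π and its nearest double, so a reduction done with a truncated internal π loses exactly the bits being argued about. A sketch, using a 50-digit pi from standard tables as the "true" value:

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50
# pi to 50 decimal places, from standard tables
PI_HI = Decimal("3.14159265358979323846264338327950288419716939937511")

x = math.pi                  # the double closest to pi
gap = PI_HI - Decimal(x)     # sin(x) = sin(pi - gap), which is ~gap
print(float(gap))            # ~1.2246e-16
print(math.sin(x))           # a correctly reduced sine matches the gap
```

A sine routine using only 66 bits of π can't recover that gap to full precision: the absolute error stays tiny, but the relative error on a result this small is enormous — which is Dawson's complaint.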
You're being a bit unfair there. He's just complaining that Intel oversold the precision. He did not seem so upset about the precision itself, but about the fact that if you rely on the doc you'll assume that your result is more accurate than it actually is, and thus rely on it instead of doing things another way.
I bet he's perfectly happy with Intel's response of fixing the doc to fit the code.
"The problem lies in guys like me having to make transistors give you an answer in a small amount of time and guys like him expecting results regardless of how long it would take."
No, the problem here is the quality of Intel's documentation.
In the 80s at least one chip vendor managed to come up with formal specs for floating point instructions and then prove that the hardware meets them so no one gets any unpleasant surprises. By contrast Intel's published documentation has a habit of specifying complex behavior using ambiguous waffle, so it's hardly a surprise that the hardware fails to match expectations of software developers.
And if you read the post, fsin et al are not worth the transistors; every library uses its own version. For those of us doing sums with a large number of sines handed to us by mathematicians with no idea of the limits of practical hardware, these things matter.
...so as to identify when people had ripped off the FPU technology. This can be seen when comparing integer against float methods for drawing circles: the Intel trig functions produce a stepped rather than smooth edge to the circle at certain angles and zoom levels.
With a function like the sine, a conventional implementation which is very accurate for values between plus and minus pi/2 (plus and minus 90 degrees) can still go wrong if you try to take the sine of a very big number.
While the sine of a big number still has a mathematically exact value, usually errors in that case don't matter, because floating-point numbers have limited precision, and usually their accuracy is not greater than their precision. So if you try to take the sine of 3.52 times 10 to the 52nd power, the entire range of the sine function can be produced by just changing stuff beyond the least significant bit.
Using a copy of pi to an enormous precision allows such cases to be taken care of accurately, but that's a waste of time for the normal purposes to which trig functions are put.
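That "entire range beyond the least significant bit" point can be checked directly with Python's `math.ulp` (Python 3.9+). At 3.52e52 the spacing between adjacent doubles is around 5e36, vastly larger than 2π, so neighbouring representable inputs land at effectively arbitrary phases of the sine wave:

```python
import math  # Python 3.9+ for math.ulp and math.nextafter

x = 3.52e52
print(math.ulp(x))                   # gap to the next double: ~5e36
print(math.ulp(x) / (2 * math.pi))   # whole sine periods inside one ULP

# adjacent representable inputs give unrelated sines:
print(math.sin(x), math.sin(math.nextafter(x, math.inf)))
```

So even a perfect sine routine can only answer "the sine of the exact double you gave me", and at this magnitude that exact double is already astronomically far from whatever real-world quantity it came from.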
Biting the hand that feeds IT © 1998–2021