Quick, the Mind Bleach
And lots of it please. After reading the name of the South American country, coming across the name Liz Truss is a bad combination. Did she have one of these Brazilians when she did the calculations for the fiscal event?
For most of us, a calculator might have been superseded by Excel or an app on a phone, yet there remains a die-hard contingent with a passion for the push-button marvels. So the shocking discovery of an apparently rogue HP-12C has sent tremors through the calculator aficionado world. The HP-12C [PDF] is a remarkably long-lived …
If you wanted to completely destroy western capitalism would it be better to:
A: have all financial calculators explode, wiping out accountants, hedge-fund managers and dealers of derivative-backed mezzanine split-level convertible debt golden tickets
OR
B: allow all accountants, hedge-fund managers and dealers of ... to go on using the calculator and thinking they understood the answer?
Well, some of them will have dueling scars and riding breeches, some of them will be listening to Mozart with their eyes shut, and will say "You see, we are not all philistines".
Originally, a "scientific" calculator was one that used "scientific notation". That is, it could display very large or very small results in exponent form, i.e. "3.14159 E23".
In Aus (dunno about the rest of the world), the term "scientific calculator" now means "non-programmable, non-graphing, no program memory".
Students use a "calculator" in class and for homework, but for exams they are permitted only to use a "scientific calculator".
The rest of us just use our phone, but there is a substantial business in "exam ready" calculators, which implies that the firmware is known, standard and fixed. Small errors like this are not important in the capital markets, but they are certainly important to Auditors and to Educators.
UK (school) exam rules specify that sophisticated, programmable calculators must have (and need to be in) "exam mode", which puts a green frame round the window to show that it has been made safe. Exam mode persists for 36 hours or some such number and can't be cancelled. Scientific calculators, as far as I can work out, have all kinds of mathematical functions that aren't used in everyday life by most people, and are permitted/required in science exams. What they do is well above my head*
*I learnt to use a slide rule at school**.
**"Learnt" in the most general and vague sense - I was never very good at it, being on the falling-off-a-cliff side of clumsy***
***Never actually fell off a cliff****
****Never actually been very near the edge of a cliff - deliberately.
Lefty woke LGBTQ immigrant teachers are making your child use Arabic numerals
Pfft.. Real m.. people use RPN and RPL. And in my case, I still use my trusty HP48GX. Main disadvantage is sometimes grabbing it when I'm not fully caffeinated and forgetting it uses RPN.
The sad thing is, HP calculators have lost their quality, reduced in the number of models, and the latest scientific does NOT have RPN.
On a positive note, then there is an HP42S clone with the benefit of 4 lines not 2, and the soft buttons are an extra line also :
https://www.swissmicros.com/products
HP have truly lost their way.
The sad thing is, HP calculators have lost their quality, reduced in the number of models, and the latest scientific does NOT have RPN
I'm just impressed that mine has lasted. Must be getting on for 30 years old, and still going strong. They don't make 'em like they used to!
Can't get RPN anymore in Britain since we left the EU.
There's always trade with Japan. I also have a mostly functioning Casio FX7000G calculator, although the LCD in that one is fading. Still usable if I hold it at the right angle. And yes, I have replaced the batteries.. :p
"The rest of us just use our phone"
As maybe the only person here actually using an HP-12c at work, every day, I'll say absolutely not. My boss gave me an HP-12c because someone gave it to him and he didn't know how to use it (RPN). After I wore the keys out, assisted by its age (keys stopped responding; the domes wore out, I guess), I had him buy me a 'new' one, a mint-condition HP-12c Platinum off eBay. I even have an HP-12c emulator app on my phone.
Why should I grab my phone, unlock it, find a calculator app (in the pulldown, app drawer or home screens) and then activate it when I have a real device sitting 5 inches in front of my keyboard? Grab, push "On", and get to work.
Although I certainly do not use 97% of its abilities during everyday use, I love the 12c's 10 memories and programmable mantissa length, currently set at 4 digits for my work, with auto-rounding (2 digits is all I'm legally required to be accurate to; I use 4 in the division calculations of these small numbers).
I really enjoy my 12c, makes my job easier as I build a price from multiple factors, and the multiple memories allows for easy additions after multiplying and saving unit costs.
I find the report of the inaccuracies disturbing but it is part of a function I'll never use.
I've got a 12c, a 15c, a 16c and a 41c on my desk. I use them all regularly. Nothing beats physical buttons when chewing through large calculations. The 41c, I'll admit, is an affectation. It was the first machine I ever programmed back in the 70s. But the others are true workhorses, each designed for a different purpose.
One of my earlier gigs was a financial booking system. It had to work down to fractions of a penny and up to billions (for currencies where the unit is worth very little, like the yen) without any errors. We implemented a base-10, variable-length representation with the appropriate operator overloads (a poster child for the beauty of C++...).
Worked perfectly and performance wasn't an issue. In fact in some ways performance was better because we could print out human readable values without an expensive base 2 to base 10 conversion every time.
It didn't do exponentiation, and it's maybe not something you'd want to cram into a 1980s embedded device, but it sure beat battling with fixed-precision limitations. Quite a lot of our end users did have those "weird" HP calculators.
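Python's decimal module gives a feel for the same idea; a minimal sketch (not the original C++ class, which isn't shown here, and the precision setting is just an illustrative choice):

```python
from decimal import Decimal, getcontext

# Base-10 storage with a configurable number of significant digits,
# so printing never needs a lossy base-2 to base-10 conversion.
getcontext().prec = 28  # plenty for fractions of a penny up to billions

balance = Decimal("0.00")
for _ in range(3):
    balance += Decimal("0.10")  # three ten-pence postings

print(balance)       # exactly 0.30, no binary rounding residue
print(0.1 * 3)       # binary double: 0.30000000000000004
```

The same additions done in binary doubles carry a rounding residue that base-10 storage avoids by construction.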
Windows (tm) calculator app does something similar so that users don't complain that 3/10 = 0.3000000000001
I don't know how the calculator works, but you don't need arbitrary precision to avoid that error. You get that error "because the last digit implies a fractional bit", but by the same token you may choose any last digit that matches the last available bit -- you didn't need to choose "1" for that example.
A problem is that many people don't understand floating point, and believe that the choices of the standard c library represent the only mathematically correct way of doing base conversion.
No, ten times no. It's a job for either a person or a library that was taught to do numerical analysis properly. Like I was for A-level. You re-arrange the formula to be numerically stable.
It's one of the issues I have with the whole "science or finance needs double precision calculations" stuff. It really doesn't. If going from single to double precision makes a difference... then you are running your calculations in a numerically unstable way. Fix that. Otherwise, there is no reason at all to think that going from double to extended precision won't also show up just as large a change, in even more edge cases. Anyone who thinks they need extended precision does not know what they are doing, and shouldn't be trusted to do the calculations at all.
Most of the younger generation (anybody younger than 60) has simply never been taught proper numerical analysis. Remember, people like Isaac Newton and Edmund Halley did calculations that were good enough to predict the solar system, known to be chaotic and exquisitely sensitive to small perturbations. They got accurate answers, by hand. Do you think they were using 14 decimal places? We know for a fact that they didn’t, because we have the workings out.
Definitely! Whenever you need to represent values "down to fractions of a penny and up to billions", you should clearly choose IEEE-754 single precision (SP) representation (32 bits), which gives approximately 6 accurate decimal digits, do deep numerical analysis on that, and whammo! extract the needed 12 accurate decimal digits from the result! ... and you should repeat this "ten times no" times, I think ... (ahem).
But yes, numerical analysis has its uses, just not in getting 12 digits of accuracy out of a 6-digit-accurate encoding (for example). You can't get 12 accurate digits of π out of IEEE-754 SP, for example (as far as I know).
Money should be represented as an integer number of pennies, not as *either* single or double precision floating point.
If it's a banking application, you might indeed need to represent up to billions or even trillions, which cannot be represented in a signed 32-bit integer, but can as a signed 64-bit integer. This is the *output* of a numerical analysis, not some bogus woo-woo about single precision floating point versus a double precision float that just happens to be 64-bit (today).
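As a hedged illustration of the integer-pennies approach (the helper names here are made up for the example):

```python
# Hypothetical sketch: money as an integer number of pennies in a 64-bit int.
# A signed 64-bit integer holds about 9.2e18 pennies, i.e. tens of
# quadrillions of pounds, comfortably covering "billions or even trillions".

def to_pennies(pounds_str: str) -> int:
    """Parse a decimal pound amount into integer pennies, exactly."""
    sign = -1 if pounds_str.startswith("-") else 1
    whole, _, frac = pounds_str.lstrip("+-").partition(".")
    frac = (frac + "00")[:2]  # pad (or truncate) to two decimal places
    return sign * (int(whole or "0") * 100 + int(frac))

def add(a: int, b: int) -> int:
    return a + b  # exact: integer addition never rounds

print(to_pennies("1234567890.05"))                   # 123456789005
print(add(to_pennies("0.10"), to_pennies("0.20")))   # 30 pennies, exactly
```

Note the deliberate choice in `to_pennies`: anything past two decimal places is truncated here; a real system would have to decide its rounding rule (and what to do with fractional pennies) explicitly.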
If you really are doing calculations in the real world that require more than 6 decimal places of accuracy in the output, then yes, you will need double precision... but you will never work on one. The *only* theories that humanity has ever produced which are accurate to that level are Quantum Electrodynamics and orbital mechanics. Nothing else even approaches that. If somebody gives you 6-sig-fig data outside those fields, they are fooling themselves and you, because the underlying equations contain other approximations that are not valid at that precision.
If somebody asks you to write code implementing some equations, which only gives the "right" answer if the calculations are done to that higher precision... then it doesn't give the right answer at all. Both you and they just haven't understood the subject deeply enough.
This is the real point. People who implement high-precision calculations are *not smart or skilled or knowledgeable enough* to be doing the work at all. You are not smarter than Isaac Newton. You're just doing publish-or-perish paper-mill nonsense.
Unless, of course, you're dealing with large numbers. In many calculations, you don't deal with more than six significant figures. Until you have a hundred thousand units of it, in which case you're at six without any decimal places. Let me guess. Do the calculations on one, then multiply by your number of units? There are two problems with that:
1. By all rules of mathematics, there should be no difference in the result whether you do this or not. People who aren't programmers do not, and should not have to, understand why there is any difference. We do not teach our children in mathematics that multiplication and division are inverses of one another, except if you are doing calculations with floating point numbers, because you were supposed to know which ones the computer was going to get wrong and avoid them.
2. While it may not be necessary, we do frequently have situations where we actually have that much precision available. I'm trying to estimate the speed of an operation. I've done millions of operations. I have the time taken to do those down to the microsecond. Both of those have over six figures. In many cases, using the floating point and ignoring the error is sufficient for my uses, but nothing prevents me from having and using more precision than that.
In many cases, the lower precision is justifiable in order to get faster computation. Programmers who are writing calculations into their software should be considering this. They don't get to tell their users that their calculations are wrong when the users demand more precision or different handling, such as an integer number of pennies (plan for what happens if fractional pennies are used, because that happens in some cases). There are also situations where speed is not important. One really important one is in a calculator. You're not doing millions of operations per second on a calculator. The calculation they entered is the only one to work on right now. There is no reason not to use something that answers correctly.
No. You may use a simple, direct way of doing the calculations, which then results in needing higher precision than you otherwise would have needed doing it the smart way. But this is uncontrolled. I'm not telling you that single precision is enough. I'm telling you that, if you find empirically that single precision is insufficient doing it the way you are doing it... then double precision is also insufficient relative to quad precision, etc.
The concern isn't that you are "wasting machine cycles doing stuff to higher precision". The concern is that your testing is insufficiently powered to find the test cases where double precision is still giving you the wrong answer, because your methodology is intrinsically ill-conditioned. And I can't count the number of people who use Matlab simulations (double precision) as a gold standard, failing to understand that their Matlab, if running the calculations the same way, will give the same wrong answers.
If you want to do this empirically, here's what you would do. Set up the calculation the way you were going to do it; run single, double and quad precision side by side. Identify some test case where single and double diverge significantly and analyse it. Parametrise the item causing the difference (e.g. "difference between two large numbers"), and then set up a test case where this parameter is extremal. Re-run the test for the extremal parameter. I almost guarantee you will find that your double precision is wildly inaccurate compared to quad. And if you try octuple, it may well show that quad isn't sufficient either.
Conversely, rather than re-coding for octuple precision, simply spend the time, thought and expertise to re-factor the way you are doing the calculation so that the divergence between double and quad becomes negligible. In the vast majority of cases you will find that the divergence between single and double shrinks too, and single becomes acceptable.
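A toy illustration of that re-factoring point (my own example, not one from the thread): the small root of a quadratic, computed two algebraically identical ways. Even in double precision, the textbook formula loses most of its digits; the re-arranged form does not.

```python
import math

# Solve x^2 - b*x + c = 0 with b = 1e8, c = 1. The small root is ~1e-8.
b, c = 1e8, 1.0
disc = math.sqrt(b * b - 4.0 * c)

naive = (b - disc) / 2.0          # catastrophic cancellation in b - disc
stable = (2.0 * c) / (b + disc)   # algebraically identical, no cancellation

print(naive)   # off by tens of percent, even in double precision
print(stable)  # ~1.0e-08
```

No amount of extra precision is needed here; re-arranging the formula removes the subtraction of two nearly equal numbers, which is exactly the "numerically stable way up" being argued for.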
The easiest way to get correct answers for most financial calculations is to use a floating point number of sufficient bits.
Yes, you can use arbitrary precision, you can use residue correction, and you can avoid stiff formulations, but you don't have to if you use enough bits.
There are two reasons why 64 bits is enough for the entire world of international capital.
(1) We don't calculate interest daily over hundreds of years.
(2) Actual contract calculations are nailed down as either calculations that can be done by pencil, or calculations that can be done by 1960's fixed point calculators.
Even though 64-bit floating point will give you the arithmetically correct answer to all theoretical capital-market formulas, in practice the calculations have to be done on fixed-point decimal values, where the number of bits required for simple calculation is only one more bit than the decimal representation needs.
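The "enough bits" claim rests on a double's 53-bit significand representing every integer up to 2^53 exactly, which is easy to check:

```python
# A double's 53-bit significand holds every integer up to 2**53 exactly,
# which is why 64-bit floats can mimic fixed-point decimal for penny
# amounts below about 9e15 (roughly ninety trillion pounds in pennies).
exact_limit = 2 ** 53

print(float(exact_limit - 1) == exact_limit - 1)  # True: still exact
print(float(exact_limit + 1) == exact_limit + 1)  # False: first gap
```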
As you say it is a job for someone who has read and understood Donald Knuth's books.
Lesser mortals can do much worse than buying a copy of Numerical Recipes (3rd edition is just that, and it's C++; the 2nd and 1st editions came in multiple language options: C, Fortran, Pascal) and paying attention to the text instead of just copying a function and hoping for the best. It has many lengthy discussions on the difference between ideal computation (infinite machine precision, memory and speed) and the practical world of doing it for real. Very rarely is there one "best" method, as often it depends on the range of values you need to solve for, etc. That is why good commercial libraries such as the NAG routines or IMSL are usually very complicated inside: they look at what you asked for and choose one of a couple of algorithms accordingly.
TL;DR get a properly tested library for the job.
If doing it yourself then be prepared for a lot of reading, coding and testing effort to establish you are actually achieving the precision you need. In a commercial environment that won't be cheaper than NAG or IMSL, but for hobby or academic reasons then read Numerical Recipes and code away...
"If going from single to double precision makes a difference…..then you are running your calculations in a numerically unstable way. Fix that."
Sorry, I'm one of those young stupid people. Maybe you can help me fix this:
1234567890.0/10.0 = 123456792
Hmm. That's not the answer generated by my brain, and I've run it past an 80-year-old person, so my brain is probably right. Why did that happen? Maybe the compiler can help make it clear. If I don't put on the .0s, Clang helpfully tells me this:
warning: implicit conversion from 'int' to 'float' changes value from 1234567890 to 1234567936
Well, that's a lot closer to the number, although we still have the 92 instead of the 93.6 at the end. I'm sure the proper analysis would have fixed that. Sometimes, a number is more precise than a 32-bit float. That ten-digit number is not ridiculously high for calculations. The calculation I used to demonstrate this is trivial to do mentally, but a very similar one could be entirely doable with paper and pencil but people don't want to. A user of a calculator who expects it to be able to divide a ten-digit number should not be told that their calculation is the problem and they deserve the wrong answer.
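For anyone who wants to reproduce this without a C compiler, the same behaviour shows up with NumPy's float32:

```python
import numpy as np

# 1234567890 needs 31 significant bits, but float32 has only a
# 24-bit significand, so the value snaps to the nearest representable one.
x = np.float32(1234567890.0)
print(x)                      # 1234567936.0 - matching the Clang warning
print(x / np.float32(10.0))   # 123456792.0 - not 123456789.0
```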
In your example, changing the representation to double precision does *not fix your problem*. It just makes it slightly harder to find an "obvious" test case that gives incorrect results.
This makes your code *worse*, not better. Assuming that you do actually test your code.
In case you haven’t understood the deeper point, when you say “1234567890.0”, what exactly does this *represent*? Presumably, this input data came from somewhere.
Your underlying assumption that the input data is “correct”, is not meaningful. If you care about the output precision to the level you believe you do, then you have a problem because your input is guaranteed imprecise. There simply is no physical input data source with that accuracy. The person or agency who thinks it is, is wrong.
I think it represents that I have 1234567890 of something. Maybe that's a number of people, and I want to know how many would be in a certain group. Maybe it's a number of liters of a material and I'm trying to divide it among multiple containers. The answer to any division with that as the dividend does not vary based on the units involved. Whether I choose to do the calculation that way probably does depend on the units, which is when you would want a calculator to actually calculate using the number you entered.
Your answer ends up simplifying to "you shouldn't need more than 24 bits (7.224 digits) of precision". Sometimes, you do, as I demonstrated with a 10-digit number. Sometimes, just going to double precision isn't good enough. You can solve this by just increasing the precision over and over again or by using something other than floating point to do the calculations. People who use calculators expect and rightly so that you have done this for them. They don't care how you divide the number. They care whether the result is right. If you use 32-bit floats and excuse the incorrect results as the user was calculating the wrong things, you're making a bad product.
That particular one is marginal, but even then I mostly stick to my point. It's a "number of people", is it? It's a billion-ish, so roughly the population of China. What is the population of China, then? Accurate to the nearest unit? Well, whatever number you have, it's wrong now, because it has changed by several units since you answered.
Now, your point about the calculator is different, and correct. If you used single precision on a calculator, that is going to be a problem, since the implementer of the calculator does not control what is done with it.
Here is a simple python example to try:
import math
import numpy as np

array = np.array([1.0e-200, 1.0, -2.0e-200, -1.0])

total = 0.0
for x in array:
    total += x

print(total)             # naive left-to-right summation
print(np.sum(array))     # NumPy's pairwise summation
print(math.fsum(array))  # exact summation
A visual inspection shows the +1 & -1 cancel and so you should get -1.0e-200 as a result from 1.0e-200 + (-2.0e-200) to as many digits as appropriate.
But only one of the 3 approaches works on my machine to give you the correct answer!
People--who know how to program^{1}, and who understand arithmetic^{2}--seem to do very well using fixed-point numbers for high-precision arithmetic.
Compute much?
^{1,2} THE reason why most people absolutely refuse to learn Forth--as well as having to understand RPN. Absolutely an affront to their self-image, as well as their self-professed high status as 'software gurus'.
THE reason why most people absolutely refuse to learn Forth--as well as having to understand RPN. Absolutely an affront to their self-image, as well as their self-professed high status as 'software gurus'.
Maths isn't my strong point, but oddly enough getting my HP and being forced to learn RPN helped me understand maths better. I think I was possibly helped by having previously learned to program on things like ZX81s and Commodore 64s and delving into assembly language programming. So the concept of pushing & popping things from stacks wasn't quite so alien.
When I were doing assembly language on the CDC CYBER-74, the instructor taught us how to do arithmetic using one digit per word. The task at hand was to compute e or pi, I forget which, to a number of decimal places just less than the amount of RAM available to us. He allowed as how we could go further using data files.
And why it's cheaper to pay 3x as much for a flight returning on a Friday rather than paying for a night in a hotel and a cheap weekend flight.
Or why it's better to sit on a travel request for 3 months and have the approved travel office book a British Airways flight the day before, rather than an easyJet flight 3 months out.
I contracted for CFA for many, many years. This would have been a colossal FsckUP if CFA were still giving exams in halls holding thousands of candidates. I can't imagine having to turn on all of the 12c's to check for the correct firmware before they enter the hall, and then telling a candidate that they cannot sit the exam they may have paid thousands out of pocket for, because HP screwed up. It gets quite emotional; that's why Securitas was on hand...
The accompanying legend gives ip, defined as the interest rate (presumably annual), but i isn't defined.
I assume it's the interest rate over period between payments roughly ip/number-of-payments-per-year.
The example of 1p per second for a year at 10% pa could be calculated as a continuous compounding.
When we had a mortgage I was curious how the time to repayment was related to the interest and repayments. The TVM for a mortgage is a simple recurrence becoming a sums of geometric series.
Turns out (to my thinking) the time to full repayment is mostly dependent on the ratio between the repayment amount and the interest accrued between repayments, which is understandable. E.g. at ratio = 1, PMT = PV·i, i.e. interest only. Once the ratio is around 2.0, things start to move. ;)
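That recurrence has a closed form for the repayment time; a sketch assuming end-of-period payments (the function name and the example loan figures are made up for illustration):

```python
import math

def months_to_repay(pv: float, monthly_rate: float, pmt: float) -> float:
    """Closed form for N from the geometric-series TVM relation,
    assuming end-of-period payments: PV = PMT * (1 - (1+i)^-N) / i."""
    ratio = pmt / (pv * monthly_rate)  # repayment vs interest accrued
    if ratio <= 1.0:
        return math.inf  # interest-only or worse: never repaid
    return -math.log1p(-monthly_rate * pv / pmt) / math.log1p(monthly_rate)

# A 100k loan at 0.5% per month (6% nominal annual), repaying 1000/month:
print(months_to_repay(100_000, 0.005, 1_000))   # ~139 months
# Same loan paying only slightly more than the 500/month interest:
print(months_to_repay(100_000, 0.005, 550))     # several hundred months
```

This matches the observation above: at ratio = 1 the argument of the log hits its singularity (interest only, N infinite), and things only start to move once the ratio gets towards 2.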
'i' is the periodic interest rate, and 'p' is a coefficient of 0 or 1 depending on whether the periodic payment happens at the end or beginning of the period. I.e. with most mortgages you make your first payment around a month after borrowing the money, so p = 0.
Two of the battery of TVM tests relate to calculating 'N', or the time to full repayment, and they sit on the border of an interest-only repayment which makes the calculation quite awful.
You might find this [1] little puzzle amusing, which brings together calculating the time to payment, and also whether you pay at the beginning or end.
[1] https://forum.swissmicros.com/viewtopic.php?f=2&t=3990&sid=695dae231e0d5a50aa6387105b8a3edb
I found a bug in BBC Basic a long time ago. I was trying to attach a joystick to my BBC "A". It was a 12-bit output device and I had to divide the outputs by 16 so that I could plot it on a Beeb screen.
A large block of screen was unreachable because of an arithmetic error.
I don't know why I am telling you this but people must be told!
I thought the A didn't have the ADC chip fitted?
Did a similar thing with my B (issue 7, OS 1.2) - think it was a two-axis digitiser from a magazine. Can't remember having an arithmetic problem that wasn't of my own making. Wasn't it possible to put the ADC into 10 bit mode where it was a lot more stable too?
M.
I would appreciate it if someone could help clarify the following questions regarding the calculations referenced in HP Solve #25:
1. Where does the "true result of 331,667.00669" come from?
In HP Solve #25, Page 14, while addressing the "a penny per second" problem proposed by William Kahan, the data is entered as follows:
n : = 60 * 60 * 24 * 365 = 31,536,000 seconds per year.
i : = 10/n = 0.000 000 317 097 9198 % per second.
PV : = 0
PMT : = -0.01 = one cent per second to the bank.
FV : ?
The result is stated as follows:
For HP-27, HP-92, HP-37, HP-38, and HP-12C: $331,667.0067
For the new HP 10bII: $331,667.006691
Meanwhile, non-HP calculators may produce significantly different results, such as $293,539.16035, $334,858.18373, or $331,559.383549.
My question is:
Is the "true result of 331,667.00669" based on the HP 10bII as the reference?
If so, why is the HP 10bII considered the correct answer in this case?
2. If the answer is $331,667.00669, it seems unlikely that the Regular HP-12C would have an accuracy of "10.6". In fact, it should be higher based on this result.
3. What, then, is the actual "true result"?
Using formulas 1 & 2 in Microsoft Excel, I consistently get an answer of $331,667.01259915200.
Similarly, using ChatGPT for verification, the answer is also $331,667.01259915200.
If we assume that $331,667.01259915200 is the "true result," the accuracy for the following devices would be as follows:
Rogue HP-12C: Calculation result of $331,666.9849, with an accuracy of approximately 7.08.
Regular HP-12C: Calculation result of $331,667.00669, with an accuracy of approximately 7.75.
As you can see, the difference between the two is not that significant.
This leads to the conclusion that the so-called "Regular HP-12C" is very close to the Rogue version in terms of accuracy.
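One way to sanity-check the "true result" in double precision is to avoid forming 1+i explicitly, using expm1 and log1p; a hedged sketch of that cross-check:

```python
import math

# Re-running Kahan's penny-per-second problem in double precision.
# FV = PMT * ((1+i)^n - 1) / i, assuming end-of-period payments.
n = 60 * 60 * 24 * 365          # 31,536,000 seconds per year
i = 0.10 / n                    # 10% per year, compounded per second
pmt = 0.01                      # one cent per second

# Naive form: forming 1+i discards the low-order digits of i
# before the power is taken.
fv_naive = pmt * ((1.0 + i) ** n - 1.0) / i

# Stable form: expm1/log1p never form 1+i explicitly.
fv_stable = pmt * math.expm1(n * math.log1p(i)) / i

print(fv_naive)
print(fv_stable)   # ~331,667.0067, close to HP's stated true result
```

On this reckoning the stable double-precision evaluation lands near $331,667.0067, i.e. near HP's stated "true result" rather than the Excel figure, which suggests the Excel/ChatGPT value of $331,667.0126 reflects the naive-evaluation error rather than the exact answer; worth verifying independently.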