Well done (see icon)
Belgian, naturally.
A cryptographic puzzle proposed two decades ago that involves roughly 80 trillion squarings has been cracked much earlier than expected – in just three and a half years. We say cryptographic because it involves a verifiable delay function [PDF], a moderately hard cryptographic function. The conundrum was set by Ronald Rivest …
Days?
"He believed that a single computer would have to be running for 35 years continuously, where the hardware would have to be upgraded every year to the next fastest chip available."
The NSA have multiple clusters of Cray computers and the best mathematicians in the world. Assuming they didn't have backdoor access, it would take them minutes of computer time.
A younger relative is a keen chess player and heavy metal aficionado.
On a trip to Belgium, he visited a heavy metal club with a somewhat charged atmosphere including a lot of skinheads not obviously there to discuss flower arranging.
All round the walls were tables with a chessboard in the middle with games happening on most of them, apparently to quite a high standard of play.
Being able to think well did not automatically classify one as an oddball minority.
The probability of ever seeing such a scene in England is precisely zero.
"I've already got a 15GHz clock rate here in my home. You can too: just sum all your devices."
I assume you are one of the sort of people who, 15 years or so ago, would advertise "4.8GHz AMD processors" on the basis that the AMD 3800X4 ran at 1.2GHz and had 4 cores .... or perhaps you were the hapless MIPS salesdroid who visited the company where I worked 20-ish years ago and said that as their core had a Dhrystone rate of 1.4 then the new multithreaded core would have a rating of 2.8 .... don't think he'd tried this on with an audience of processor architects before, but at least before he left he'd learned how multithreading works in real life.
This post has been deleted by its author
Is that Usain's absolute maximum speed (ie rolling start, instantaneous speed measurement) or his average over a set distance (from a standing start or moving)?
His 100m from a standing start averages a smidge over 22mph, so his instantaneous maximum speed is probably >30mph... I have cycled faster than that, but it doesn't happen often.
Still not sure what the relevance is though.
Google is your friend...
"The record was 44.72km/h (27.8 mph), measured between meter 60 and meter 80 of the 100 meters sprint of the World Championships in Berlin on 16 August 2009 by Usain Bolt. (Bolt's average speed over the course of this race was 37.58 km/h or 23.35 mph.)"
Not over 30 mph but still outrageously fast, and most definitely faster than I could cycle.
In short, building custom hardware and software to optimise the problem-solving is in itself a remarkable achievement, but certainly not as remarkable as a guy doing it on his home PC.
"How is a team of professionals with FPGA development capability any kind of match for a lone amateur working with available equipment?"
Conversely, how is a guy who wrote a simple bit of code and left it running for a few years any kind of match for a team who put effort into designing a custom, efficient method of calculating this in a much shorter timeframe?
That said, while both approaches have merit and kudos to him for solving it, the key point to take away from this is that cryptography needs to keep up with technology. The fact that this was predicted to take decades but was solved in 3.5 years on standard hardware is an indication that we can't just rely on "This is secure enough, it'd take years to crack it!"
I remember when WEP came out and it took 2 hours to crack a 40-bit key. They then announced it would take 1,000 years to crack a 42-bit key with the then-available 1996 hardware. Fast forward only a few years and it was taking a mere 2 minutes.
was solved in 3.5 years on standard hardware is an indication that we can't just rely on "This is secure enough, it'd take years to crack it!"
Dr. Mouse, I can only wholeheartedly agree. I can only add, what happens when Quantum computers are commonly available. Nothing will be safe.
I can only add, what happens when Quantum computers are commonly available. Nothing will be safe.
My very vague understanding is that quantum computers, if we're ever able to build them, should be very good at "This safe has 356,245,896 possible combinations. You need to open it or bad things will happen ... to you." type problems. But this is more of a "Go to Shinjuku 3-chome, building 15 in Tokyo, Japan and buy a drink for the blind beggar who will give you your next clue" type problem. I don't think Shor's algorithm helps with those.
However, the second part of your observation seems almost certainly correct. As computational capabilities improve, a lot of once "safely encrypted" stuff is going to become readable. And some folks who have overlooked that aspect of reality are probably going to find it quite inconvenient.
It may only take two minutes to crack - in theory. The thing is to know you have cracked it, and just because you solved some maths doesn't mean you solved the right maths. Quantum computers can get the right answer, but at the same time they can get a near-infinite number of wrong answers, and you can only prove they were wrong (or right) if you have the original to compare with. We are at the stage where even a medium-end GPU can parse a Shakespeare play from white noise in an afternoon!
Good to know the knee-jerk commentators are still, well, jerks.
As a few seconds' research would have shown you folks [1], the Peffers team is not simply a Supranational effort. It's the Cryptophage consortium, which includes researchers from other organizations, including academia and the Ethereum Foundation.
They're taking an approach that is very different from Fabrot's, so it would be worthwhile even if they were "competing" with him. Which they aren't. As far as I know, the Cryptophage team didn't even know Fabrot was working on the problem before he announced his results.
And what Cryptophage is doing is important because there are a number of applications for VDFs as cryptographic primitives, so it's very useful to get real-world results for solving examples of this VDF. Plus they'll have successfully shown a fast ASIC implementation of Öztürk's algorithm.
All of this is good work, which does not, and is not intended to, diminish Fabrot's. Not everything is a fucking contest. [2]
[1] Though it would have been nice if Katyanna had provided a relevant link in the article.
[2] Being snarky and insulting in the Reg forums of course is a contest.
The two are equally deserving because they demonstrate that the initial assumption was flawed. I may have missed it, but it looks like both solutions are using brute force. Even without someone discovering a mathematical shortcut, always a possibility, this suggests that with a slightly bigger hardware budget (like that of the NSA) the problem could be solved even faster.
The two are equally deserving because they demonstrate that the initial assumption was flawed.
If you want to put it that way. It was an educated projection of the advance of processing performance over three-and-a-half decades, which was always going to be a tall order since the kind of innovations that drive such advances are by definition not yet conceived of, and might even follow a complete paradigm-shift (quantum computing being an arguable example).
Even without someone discovering a mathematical shortcut, always a possibility,
Rivest assumed not, and that assumption has not yet been shown to be flawed.
with a slightly bigger hardware budget (like that of the NSA) the problem could be solved even faster.
Nope. The problem cannot be parallelised, so the performance of a single task is the bottleneck. So the only way an infinite sack of cash can help is by paying people lots of money to invent more efficient hardware (which is no guarantee of results).
I never suggested parallelisation.
If someone can pay for FPGAs then the NSA can pay for more and better ones and even get silicon etched. Once you have a system that can run "fast enough", this can be used to verify other attempts and shortcuts. IIRC this is what happened as soon as a proof of concept for hacking GSM encryption was developed.
Hi,
I'm part of the Supranational team. In fact the timing is purely coincidental and we had no knowledge of Bernard's efforts.
We started working on the puzzle earlier this year because it relates to our work around Verifiable Delay Functions for blockchains. As Ron Rivest says in this Wired story, the timing is "an astonishing coincidence" after 20 years (https://www.wired.com/story/a-programmer-solved-a-20-year-old-forgotten-crypto-puzzle/)
If you're interested you can learn more about our work here: http://www.cryptophage.com/
Kudos to Bernard for his patience over the years to keep it running and get to a solution!
I'm pretty sure that when Ron Rivest created the puzzle he didn't perform "[a]pproximately 80 trillion modular squaring operations": how else could he "hide" the congratulatory phrase if not knowing the final result?
My question: is there an easy way to explain to a lay person how he created the puzzle?
Thanks,
Massimo
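A rough answer, assuming nothing beyond the standard number theory involved: Rivest generated n = p*q himself and kept the primes, so he could reduce the enormous exponent 2^t modulo phi(n) = (p-1)(q-1) and compute 2^(2^t) mod n with just two ordinary modular exponentiations, rather than the ~80 trillion sequential squarings everyone else needs. A toy sketch of that shortcut in C with GMP - the primes and t below are tiny made-up stand-ins, not the real 2048-bit puzzle values:

```c
/* Sketch of the trapdoor (not Rivest's actual code): knowing p and q lets
 * you reduce the exponent 2^t mod phi(n), so the answer takes two modular
 * exponentiations instead of t sequential squarings.                       */
#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t p, q, n, phi, t, e, two, w;
    mpz_inits(p, q, n, phi, t, e, two, w, NULL);

    mpz_set_ui(p, 1000003);      /* toy primes; the real n is 2048 bits     */
    mpz_set_ui(q, 999983);
    mpz_mul(n, p, q);            /* n = p * q                               */

    mpz_sub_ui(p, p, 1);
    mpz_sub_ui(q, q, 1);
    mpz_mul(phi, p, q);          /* phi(n) = (p - 1) * (q - 1)              */

    mpz_set_ui(two, 2);
    mpz_set_ui(t, 1000000);      /* stand-in for the puzzle's huge t        */
    mpz_powm(e, two, t, phi);    /* e = 2^t mod phi(n)  - cheap             */
    mpz_powm(w, two, e, n);      /* w = 2^e mod n = 2^(2^t) mod n           */

    gmp_printf("w = %Zd\n", w);  /* same value the long way would produce   */
    return 0;
}
```

(Builds with something like gcc trapdoor.c -lgmp. Without p and q, the exponent can't be reduced, so you're stuck doing every squaring in turn - which is the whole point of the puzzle.)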
"3.5 years (probably) isn't going to frighten anyone, sixty odd days is plausibly within the range where the decrypted information is still useful."
All intelligence is perishable, but things still get put away under the '100 Year Rule' and similar; 3.5 years might still be an embarrassingly short interval in respect of some secrets.
Did it get as far as deducing the existence of rice pudding and income tax before anyone managed to switch it off though?
My thoughts however took a different literal bent.
"It is the brain, the little gray cells on which one must rely. One must seek the truth within - not without." ~ Poirot
There are definitely some chicks out there who would be totally turned on by someone doing this.
Humanity is wonderfully diverse, there's a matching pair (or perhaps more) of people for any particular 'kink' (for want of a better word). The hard part is the probability of those two people finding themselves in the same place at the same time, so that they can actually meet...
Humanity is wonderfully diverse, there's a matching pair (or perhaps more) of people for any particular 'kink'
These statements are in opposition to each other. The greater the diversity, the lower the chance of every kink occurring twice.
/forever alone in my crab salad paragliding kidney-massage fetish :(
This post has been deleted by its author
Ah, DESQview. From the days before we took multitasking for granted. I can recall running DoubleDOS and somehow managing to get things working in only 240K of RAM (and no TSRs because they ate all the HIMEM space).
At least amplifiers with valves and vinyl are back in fashion - makes me feel less old :)
Windows 95/98 could run for at least 49.7 days without crashing if you didn't give it anything to do. Of course, attempting to get the PC to do anything useful was often a recipe for a blue screen of death long before the 49.7 days were up!
And the underlying OS for Windows 9x - MSDOS - really would run for years. We once ran across a PC that had been used for some sort of lab measurement. When the testing was completed, "they" apparently unplugged the sensor, turned off the monitor, and pushed the PC - still running - back into a corner. When someone finally looked at it, it had filled up the hard drive with spurious "data", but continued to try to write more. Based on the "data" collection, it had been running for at least 18 months, but it could have been there substantially longer.
Well yes, a Commodore 64 "OS" would run for years as well, in both cases that's because pretty much the only thing the "OS" does is update the real time clock on an interrupt. I personally learned to expect a bit more from an OS during the mid-80s.
We ran DOS software under dosemu/MSDOS 6.22 and it never seemed to crash. OK the host machine would be rebooted occasionally and the dosemu instance restarted occasionally, but we never saw an "OS crash" with uptimes of the order of 600 days.
Having said that, if you don't poll for time at least once per day by some program/system action then the DOS date gets stuck as the time-of-day counter simply sets a midnight flag, and is not actually incrementing the date counter...
Is this not the sort of task which can be massively parallelised on a GPU? I'll confess, the algorithm went over my head, so there is perhaps some reason there, or maybe the FP precision isn't up to it...
Edit:
Hah - I went back to re-check the article and found this:
The mathematical enigma is also designed in a way that prevents the use of parallel computing to brute force the solution, since it’s impossible to compute it quickly without knowing the factorization of n.
Carry on, everyone. I'm just being more ignorant than normal.
"I'll confess, the algorithm went over my head..."
me too, but I gather that each step in the cycle needs a value from the previous step and thus cannot be parallelized. And each step is itself a discrete computation so you can't parallelise the individual steps.
That's what I understood, at least
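For anyone who wants the flavour of it: the whole computation is just "square the previous value, reduce mod n", repeated roughly 80 trillion times, which makes it one long dependency chain with nothing for extra cores to chew on. A toy sketch in C with GMP - the modulus and iteration count are small made-up stand-ins, not the real 2048-bit puzzle values:

```c
/* Toy illustration of why it can't be parallelised: every squaring needs
 * the previous result, so the loop is a single dependency chain.           */
#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t x, n;
    mpz_init_set_ui(x, 2);                        /* start from 2           */
    mpz_init_set_str(n, "999999999989", 10);      /* stand-in modulus       */

    for (unsigned long i = 0; i < 1000000UL; i++) {  /* puzzle: ~80 trillion */
        mpz_mul(x, x, x);                         /* x = x^2                */
        mpz_mod(x, x, n);                         /* x = x mod n            */
    }

    gmp_printf("x = %Zd\n", x);
    return 0;
}
```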
My student take on it is that you need to factorise a very large number of a particular form, and the "efficient" solution is to build that from the bottom up, rather than top down.
As for Moore's law, isn't it still roughly true? Modern CPUs have multiple cores and a GPU in the same package, so while 10GHz might be off, 4x 2.5GHz plus a GPU is about right in terms of miniaturisation.
Anyone here get much benefit from higher clocks? I ran my i7 at 4.5GHz, and stepped it down to 3.9GHz and didn't notice the difference for anything in real-world terms. Disk and GPU are still the bottlenecks.
This specific computation is something that would benefit massively from pure clock speed. In normal computing your perceived speed is largely dominated by memory access, and since increasing the CPU clock doesn't improve memory access you don't see much difference. In this case the problem easily fits within typical level 1 cache and therefore becomes core clock speed limited instead.
"Anyone here get much benefit from higher clocks? I ran my i7 at 4.5Ghz, and stepped it down to 3.9 and didn't notice the difference for anything in real world terms."
Overclocking is rarely noticeable for most general purposes because the increment isn't really all that much, on top of the other bottlenecks you mention. When the CPU is already in the multi-GHz range a 5-10% overclock is barely noticeable. Back when CPU speeds were in the MHz range, a 5-10% overclock was often very noticeable to the user, even just in recalculating a large spreadsheet.
I think the most noticeable in my experience was so far back in the day that hard drives still used interleaving, because the PC couldn't process a sector read in time to catch the next one following, so sector 2 would be 3 or 4 sectors after sector 1. A faster CPU could process the sector read more quickly and be ready for the next one sooner. You could low-level format with different interleave settings and benchmark for the optimum value, or you could get something like SpinRite, which would automate the process. This could massively improve disk throughput by matching it to the CPU clock speed.
Back when CPU speeds were in the MHz range, a 5-10% overclock was often very noticeable to the user, even just in recalculating a large spreadsheet.
...and a 50% overclock was phenomenal. Back when you could buy the Intel Celeron that was meant to run at 300MHz on a 66MHz clock but could quite happily run at 450MHz on a 100MHz clock instead.
I can't remember the actual costs involved, but it was certainly a way to get performance that was close to the top-of-the-line Pentium IIs of the time for a fraction of the cost.
Back in the day my 1MHz 6502 was quite happy being overclocked to 2MHz. That was a very noticeable improvement!
Sadly, some of the RAM was only rated with a 650ns cycle time and couldn't quite keep up - I ended up putting it at the bottom of the bit-mapped screen address space so it was only needed in the highest resolution graphics modes, and moved all the 450ns RAM I had to where the CPU was going to be accessing it most. Back then the RAM was a pile of 2114s and more expensive than my schoolboy pockets could afford to upgrade it all to the correct speed.
"Back in the day my 1MHz 6502 was quite happy being overclocked to 2MHz. That was a very noticeable improvement!"
A few years after that I bought a new PC motherboard with a Pentium 120MHz processor on it (it was the first processor I bought that needed a cooling fan .... though it was tiny!) and reading through the motherboard manual I saw there was a jumper to select between a 30MHz and a 33MHz front side bus. Thinking it must be better to have a faster bus (especially as ISA was specced to be 33MHz), I picked 33MHz and turned the PC on .... and was amazed (at the time - I clearly know why now) that the BIOS announced I had a 133MHz Pentium!
That was the original Celeron. It was basically a PII on slot1 with the cache chips removed, and since the cache was the most speed-sensitive part of the whole package you could drop it into a 440BX board instead of an LX, and get your 100MHz FSB instead of 66MHz.
Mine was 266MHz boosted to 400.
I also went to overclockers.co.uk and bought one of their "specially binned" Coppermine PIIIs. 650MHz, I think, running at 850MHz. Needed a beast of a cooler on air, though. And I wonder why my ears are buggered now...
Just done some more reading. Mine would have been a Celeron 300A - it did include cache (128KB compared to the 512KB of the equivalent P II), and I had the PPGA version in a slot1 adaptor board.
Having the cache on-die made up a lot of the performance deficit against the PIIs of the time.
Oh? You were doing well then! The Celeron As didn't overclock anywhere near as consistently as the cacheless jobs (which had the on-die L1 cache but no L2).
Basically the originals were duff parts if you couldn't blast them to 400MHz. Ran that thing for years as my home server. It's still in the attic, unused. Last upgrade was to a 1400MHz Tualatin on a slot1 adapter. Couldn't overclock that one. :)
I don't bother for anything less than 20-25%+. Now, the machine next to me is over-clocked +33% on the CPU, RAM, and FSB but that's seriously water cooled all over the place. That's the host for my Tesla GPGPU.
[I even went into the Northbridge and Southbridge to tweak the settings there, with extensive testing to find the right values. I'll never do that again. 'Twas still fun, though.]
"in the USA but the current life expectancy is only 79.38years."
Not sure where that's from, but "life expectancy" quoted without any additional qualifier usually means "life expectancy at birth". That's (a) a national average - for particular groups it differs: women more than men, rich people more than poor, etc. - and (b) if you exclude the people who die at an early age, the average for the rest of the group rises, so the longer you have already lived, the higher your life expectancy*. Based on your numbers he's currently 72, meaning he's got quite good odds of making it to 2033:
https://www.ssa.gov/oact/STATS/table4c6.html
*Incidentally, does that mean that if you are the single oldest person in the world, your life expectancy is exactly the same as your age?
My father lived to be 96, with a pig vein in his chest for the last 30 years, and my mother lived to be 102 1/2. They were hardly rich - I doubt they ever made more than $20K a year between the two of them. So yes, 86 is highly obtainable. When we were researching assisted living accommodations for my mother when she had just turned 102, we visited one facility where she would have been the 4th oldest resident, and they had fewer than 100 residents.
But that's usually depressed by young men dying by their own hands, worksite deaths, car crashes, bike crashes, drug overdoses, alcohol poisoning, etc.
My grandad will be 84 in a few weeks, and he worked in construction when asbestos was the order of the day yet somehow avoided asbestosis. His neighbour across the road is 89 this year and still spry for a man his age, and he was a bricklayer for years. He would be more spry if the nurses in ICU had moved his position so he didn't end up with pressure sores that cost him the tissue on his heels (then again, they thought the bowel infection was going to finish him, but he pulled through).
.. did our crafty Belgian just rip a giant hole in the cryptographic protection of our data by means of creating a shortcut, or are we still moderately OK feeding websites our credit card numbers so THEY leak them and not us?
I'm not quite awake today (4:00am finish, 9:00 restart and not enough coffee), but that's what I ended up wondering about. Kudos for the achievement, but does this imply side effects?
Reading it I don't think so, but it does raise the possibility that standard encryption may be solvable quicker than currently believed if one can work out the underlying mathematics used and then break that down into equivalent chunks. I'm reminded of the tale of the teacher who gave his students a puzzle to keep them quiet during one maths class and one bright spark answered it in under a minute...
1+2+3+4+5+...+98+99+100
which can be summarised by (1+n)*(n/2) - for n = 100, that's 101*50 = 5050.
No you can't.
If you're performing a calculation that takes that long, you take the care to ensure that the calculation is checkpointed so that when the inevitable reboot happens (because of power, maintenance, etc.) you don't lose the current state. Heck, IRIX had a tool to do it all for you - cpr.
At least you take that care the second time :)
(And I'm not a fan of Windows either, but I do know people who run calculations that run for multiple weeks)
When one does this sort of stuff, one usually writes out intermediate results (in case of component failure or whatnot) to allow restart from checkpoint. Whether he did so or not, kudos to him for system stability and as well as programming chops.
"It requires some discipline to relaunch the computation"
Without seeing the source code, how is it storing the previously calculated values that would make this achievable? Equally, how is it feeding these back into the program to make it work in the event of relaunching it?
To say he did this in "a few lines" of code sounds a bit suspicious.
You would have to - at the very least - write the previously calculated values to a file (not memory in case of a crash, power failure, etc). When it relaunched it would need to work out where it stopped and then use the appropriate data to start off from that point. That in itself is pretty complex.
How do you do all of that, not to mention write the actual calculation part, in "a few lines" of C or indeed any programming language?
Doesn't sound that hard to me. At periodic intervals (say, after each millionth instance), open a temp file and write the instance number and the current result. If it's ever restarted, re-open the file and re-read the saved instance and result, allowing you to resume from that point. Two rather basic routines, actually. And if the computer crashes or blacks out, the maximum loss, provided the temp file is kept healthy, is shy of a million instances, which in a task requiring millions of times that many is at worst a mild headache.
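For what it's worth, that description really does fit in a handful of lines of C with GMP. A rough sketch of the idea - not Fabrot's actual code, and the file name, checkpoint interval, modulus and iteration count are all invented stand-ins:

```c
/* Checkpointed squaring loop: every INTERVAL squarings, write the count and
 * the current value to a file; on start-up, read them back and resume.     */
#include <gmp.h>
#include <stdio.h>

#define CHECKPOINT "squarings.chk"
#define INTERVAL   1000000UL

static void save(unsigned long done, const mpz_t x)
{
    FILE *f = fopen(CHECKPOINT ".tmp", "w");
    if (!f) return;
    fprintf(f, "%lu\n", done);
    mpz_out_str(f, 16, x);                      /* big integer in hex       */
    fclose(f);
    rename(CHECKPOINT ".tmp", CHECKPOINT);      /* swap in the finished file */
}

static unsigned long restore(mpz_t x)
{
    unsigned long done = 0;
    FILE *f = fopen(CHECKPOINT, "r");
    if (!f) return 0;                           /* no checkpoint: start fresh */
    if (fscanf(f, "%lu", &done) != 1) done = 0;
    mpz_inp_str(x, f, 16);
    fclose(f);
    return done;
}

int main(void)
{
    mpz_t x, n;
    mpz_init_set_ui(x, 2);
    mpz_init_set_str(n, "999999999989", 10);    /* stand-in for the 2048-bit n */

    const unsigned long t = 80000000000000UL;   /* ~80 trillion squarings   */
    for (unsigned long i = restore(x); i < t; i++) {
        mpz_mul(x, x, x);                       /* x = x^2 mod n            */
        mpz_mod(x, x, n);
        if ((i + 1) % INTERVAL == 0)
            save(i + 1, x);                     /* durable progress marker  */
    }
    save(t, x);
    gmp_printf("result = %Zd\n", x);
    return 0;
}
```

The only real trade-off is how many squarings you're prepared to lose on a crash versus how often you touch the disk.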
A few lines of calls to the GNU Multiple Precision Arithmetic Library will run an awful lot of ops for very little typing...
On a modern OS you can probably just issue async transactional writes to dump incremental results, with negligible effect on performance. Rotating the dump files might be the bulk of the code. At startup ask the OS for the last successful transaction and do no other checking than whether it exists, perhaps throw in a cheap checksum.
Can't even see why restarting the calc would be a chore, just let it autostart with the PC.
Given 50% of the people in here are now Millennials, I'm hugely encouraged by the quality and tone of the debate. Chess, Heavy Metal (not 'arf), Mathematics and Valves (see tubes). The world is clearly laughing its way to destruction and we're going to be there with a Belgian fruit ale to watch the fireworks and throw in some sarcastic comments of which Mr Adams would most surely approve.
Well done everyone, you're all doing very well.
Young Mr Grace.