Chinese make a claim; Americans say they're full of it.
Colour me shocked when the American government rolls out, in a hurried panic, the quantum-resistant encryption that was recently discussed...
Briefly this week, it appeared that quantum computers might finally be ready to break 2048-bit RSA encryption, but that moment has passed. The occasion was the publication of an academic paper by no fewer than two dozen authors affiliated with seven different research institutions in China. The paper, titled "Factoring …
Scott Aaronson is not a mouthpiece of the American government, and he's not the only one pouring cold water on this claim. Lots of people are working on post-quantum crypto, but nobody is particularly rushing to roll it out. Apart from anything else, you don't want to discover you've implemented SIDH and then somebody comes along and breaks it over a weekend.
NIST has identified a future need (quantum-resistant cryptography) and is following its process for creating a standard.
Just how is this "rushing it"?
What should they be doing differently that would mean they are not rushing it (but also not fumbling the ball by failing to keep ahead of the game)?
Nothing. It's a conspiracy trap. Once someone has decided that an organization is doing something covertly, or for covert reasons, there is quite literally nothing the organization can do to prove they aren't, because any response will be interpreted as a misdirection attempt.
In this case, it is impossible to prove that the NIST is not already aware of feasible RSA quantum cracks. If they appear to be rushing it, it's because they are aware of RSA quantum cracks and want a countermeasure as soon as possible. If they appear to not be rushing, it's because they are aware of RSA quantum cracks and want to keep countermeasures secret as long as possible. If further information is provided by third parties, it doesn't matter because those third parties are unreliable. There is no possible evidence that could prove they are just doing business as usual.
So far, the best approach I've found to handle this kind of reasoning is to just go away and do something else. I really wish I could find a way to actually get through.
One of the NIST candidates is already in trouble - see this Register article. I guess much more work on vetting them is required.
NIST's process is working as it should. No one who knows anything about the subject thought creating good PQC standards would be easy, and what we're seeing is within sensible expectations.
Yeah, it was a little sad when Rainbow died. It was a little sad when SIKE died. Their inventors and the people studying them put in a bunch of work. But now we know more about those families of algorithms.
More-conservative families such as Classic McEliece and NTRU seem to be holding up. There are some practical issues with them, particularly for small systems, but nothing we can't live with for the cases where we actually need PQC in the foreseeable future.
Some clever people make a claim with a caveat, another clever person points at the caveat and goes 'Hey Everyone! Make sure you don't miss this bit as it really is quite large...'.
I know it's hard to imagine in 2023, but not everything is geo-political one-upmanship.
Sometimes things are just maths.
Aye. I've read so many 'one big caveat' papers in maths and computer science over the years. Sometimes it's just one sleight of hand in the middle of the equations: "Assuming convergence of QAOA [123] lemma 2 simplifies to..". Other times you're lucky and it's buried in the discussion/further work section at the end too. The reputable authors will plonk it clearly in the abstract, and the reputable journals will insist on that. In this case it is at least in the abstract, as long as you read it with a sceptical eye. But for every author with a reputation to uphold there are ten desperately trying to get a bit of attention who can't help jazzing up their claims and dancing endlessly around the caveat.

(Also, sometimes the claims are even intentionally sensational as part of a barely-stated proof by contradiction, or evidence of absurdity - "QAOA can't possibly converge because we'd be factoring with ease on a laptop", "QM needs a better interpretation otherwise the cat would be both alive and dead", etc.)
Overall the paper looks similar to cranky American stuff I've read, just with a surprisingly large number of authors.
It’s simple - PoC||GTFO. Anyone who can actually factor a 2048-bit semi-prime can provide evidence by factoring RSA-2048 from the RSA Challenge (https://en.m.wikipedia.org/wiki/RSA_numbers#RSA-2048).
Anyone making that claim without doing so is talking bollocks. It’s not about nationality - soooo many fraudsters have made wild claims about breaking RSA, then been shown to be telling lies. That’s *why* the RSA Challenge was created over 20 years ago! It’s not a new thing, sadly.
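And the nice thing about that PoC is it's trivially checkable: exhibit p and q, and anyone can verify they multiply back to the published modulus. A toy sketch of the idea (the modulus here is a small stand-in I picked for illustration; trial division is hopeless against a real 617-digit RSA-2048 number):

```python
# A factoring PoC is self-verifying: produce p and q with p * q == N.
# Trial division works for toy semiprimes only -- it's exponentially far
# from touching RSA-2048, which is the whole point of the challenge.
from math import isqrt

def factor_semiprime(n):
    """Find the smallest prime factor by trial division (toy sizes only)."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no nontrivial factor found")

toy_modulus = 100160063           # 10007 * 10009, standing in for RSA-2048
p, q = factor_semiprime(toy_modulus)
assert p * q == toy_modulus       # the entire proof: factors multiply back
print(p, q)                       # -> 10007 10009
```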
Well, to be fair, a result like "here's an approach that might speed up factoring given hardware we don't currently have (or this team doesn't have), but might soon" could be worth publishing. It just has to be a decent result.
But, yeah, I wouldn't be staying up nights until someone had a PoC. Even then I probably wouldn't be, because I have very little exposure to attackers with hugely expensive equipment occasionally cracking a single RSA-2048 key pair. When someone can do it economically and in bulk, then I'll worry.
Chinese make a claim; Americans say they're full of it.
Colour me shocked when the American government rolls out, in a hurried panic, the quantum-resistant encryption that was recently discussed...
Have you heard of peer review (https://en.wikipedia.org/wiki/Peer_review)? Any idiot can make claims about something, but that doesn’t make those claims true. See David Icke and his ‘theories’ (https://en.wikipedia.org/wiki/David_Icke).
I can claim I worked on my tax return this morning before coming into work… but peer review would show that isn’t true.
ECC isn’t post-quantum secure :( As with RSA, algorithms are known that would break it on a suitably large quantum computer.
However, ECC does have benefits over RSA, and anyone implementing asymmetric encryption in a new system/protocol would do well to avoid RSA and use ECC instead. Unfortunately, they’ll both be broken if we ever build large enough quantum computers.
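For RSA specifically, the quantum threat is Shor's order-finding: a quantum computer finds the multiplicative order r of some base a mod N quickly, and the factors then fall out with ordinary classical arithmetic. A toy sketch of that classical step, with the quantum part replaced by brute force (the modulus and base are small illustrative values I chose; this only works at toy sizes):

```python
# Why order-finding breaks RSA: given the order r of a mod N (the part a
# quantum computer speeds up), gcd(a^(r/2) - 1, N) often yields a factor.
# The brute-force order search below is exponential -- it's the stand-in
# for the quantum step, not something that scales.
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1 (brute force, toy sizes only)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_step(n, a):
    """Recover a factor of n from the order of a, as Shor's algorithm does."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2:                     # need an even order; retry with another a
        return None
    y = pow(a, r // 2, n)
    if y == n - 1:                # trivial square root of 1; retry
        return None
    return gcd(y - 1, n)

print(shor_classical_step(3233, 3))   # 3233 = 61 * 53; prints 61
```

Elliptic-curve discrete logs fall to the same period-finding machinery, which is why neither survives a large enough quantum computer.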
Nobody with any great sense will make any sort of a claim that they have cracked/hacked secret security encryption with quantum. They will just enjoy the advantage and plan for basking in the glory of future anonymised inconvenient revelations.
Always love reading his blog. I can't even pretend to understand the underlying topics most of the time, but his dry wit and take-no-prisoners debunking articles are just brilliant. The way he recently ripped to shreds the rubbish about "quantum computer creates wormhole" - which even Quanta and Nature fell for - is just *chef's kiss*.
The article sounds a bit like "everybody worries about quantum computing that doesn't exist at this level", but it is prudent to stay *well* ahead of the curve in terms of breaking and forging. Take digital signatures as a feature of your govt ID, for example: the smartcard you spec now gets into your citizens' wallets in two or three years at the earliest and will probably be valid for ten years or so, and you don't exactly want to say "oh, it's 2023, I don't trust a digital signature from 2021 anymore". So yes, "at best breakable by nation-state-level actors only (not, say, large-scale organised crime) for the next 20 or 25 years" is a very, very valid requirement.
Just look at how many people are still using sha1 (or even md5). Change the recommendations today, and _maybe_ 15 years from now, when attacks actually become practical, no one will still be using vulnerable algorithms. There's a lot of inertia in protocols, for both good and bad reasons.
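Part of why the inertia is so frustrating is that the code change itself is trivial; it's the protocols, certificates, and file formats with SHA-1 baked in that take a decade to turn over. A minimal illustration:

```python
# Swapping hash algorithms is a one-line change in application code --
# the hard part is every wire format and signature scheme built on the
# old one. SHA-1 is broken for collision resistance; SHA-256 is the
# current baseline recommendation.
import hashlib

data = b"hello"
print(hashlib.sha1(data).hexdigest())    # deprecated: collisions are practical
print(hashlib.sha256(data).hexdigest())  # same API, current recommendation
```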
Also, it's not always possible to have perfect forward secrecy; stuff you're encrypting today might still be useful to an attacker by the time they're able to break it. The earlier you upgrade your crypto, the more stale (and therefore hopefully useless) the information will be by the time someone can easily break it.
"It is possible that Shor's algorithm could be implemented in the next 15 years,"
Well that's just fine then. Unless, of course, there's something being made now that's safety critical, easily accessed from teh internets, has a lifespan in excess of 15 years, which cannot easily have its computing hardware upgraded and where software updates for it stop as soon as manufacturing of that particular version ceases.
Oh look! The "connected car"....(!)
More proof, as if it were needed, that this is a Really, Really Fucking Stupid Idea.