There's a Mandy Rice-Davies quote in there somewhere.
Adi Shamir, the cryptographer whose surname is the "S" in "RSA", thinks folks need to stop worrying about quantum computing breaking encryption algorithms. Speaking on the annual cryptographers' panel at the RSA Conference in San Francisco this week, he opined that in the 1990s he saw three big issues appear on the security …
Having your data encrypted means someone who does not have the keys cannot read it.
If you do have the keys then you can usually copy the data (or photograph it).
I have data on an encrypted disk. The data is visible to me because I have the key, and I can easily copy it to a USB stick. Making that secure is always going to be a challenge.
These days pirates use spanners, not cutlasses, though.
I think Shamir is being complacent. I suspect that the NSA, its Chinese equivalent and maybe even GCHQ already have quantum computers that they can use to break 2048 bit RSA if they really need to. They certainly won't be advertising this capability in academic journals if they have it. If they don't have it, they will within a few years.
A clear lesson from the history of technology is that once we know something is possible someone will build it, and it frequently takes a lot less time than you might think.
It sounded to me like some very eminent people opining about the current state of research and the future trajectory of a field (quantum computation) in which they are not actively involved; i.e. not to be taken too seriously.
A couple of years ago (even last year?) you could have made the same claim about AI not having delivered. It's now clear that LLMs, whatever their shortcomings, have crossed the threshold from useless to usable.
I'm not sure whether security agencies yet have quantum decryption - if they do, it will be expensive and used sparingly. But commercial applications could come very swiftly whenever we cross that critical threshold. It could be tomorrow. It could be thirty years. It could be never. But we should be designing algorithms and keys on that basis rather than getting caught with our pants down.
In this case it's not. Quantum computers are still a very long way from being useful, thanks to the overhead of quantum error correction and the need to hold millions of qubits in coherent entanglement. It's no different from nuclear fusion: always 20 years away.
But does it matter if it’s always 20 years away, or always 1 year, or always tomorrow…?
Interesting diversion: look up ‘life tables’ for risk of dying and survival by age (ONS in the UK). Once you hit 90, your ‘life expectancy’ is always 1 year until you're over 100 (but the chance of getting another year goes down steadily).
Kudos to Shamir for getting things back to Earth where quantum computing is concerned.
Indeed, most of us have nothing to fear from having our lunch meetings decrypted - which is also why all the hoopla around child abuse is a very poor excuse for backdooring encryption.
That said, I don't think quantum computers decrypting messages will be useless, it's just that those who have one will be using it on messages coming from very specific sources.
Moscow will try to capture and decrypt everything it can from the US embassy, the NSA will do the same to Russia and China, and China will be throwing massive numbers of quantum computers to get a hold on as much embassy traffic as it can.
Quantum computing is still very much a threat to encrypted messages; it's just that the sphere where it will apply has now been publicly restricted to very high levels only.
I think we can pretty much guarantee that the major embassies have already deployed quantum-resistant systems.
The interesting thing about crypto is that commercial demand so far exceeds government demand that we can expect the commercial world to stay permanently ahead in research. But the ROI terms for development are different enough that we can expect what governments are actually using to be better in certain categories.
...Two out of three had delivered...
I make that one, unless somebody's come up with an actual AI in the last few minutes while I wasn't looking. I reckon we're no closer to genuine AI than we are to genuine Quantum Computing.
HINT: The LLM / ML products may give a passable impression of intelligence, but they're no more than idiots savant at best.
I could be snarky and say that LLMs give a passable impression of intelligence only to the unintelligent, but I won't.
It really doesn't take much to demonstrate how poor the LLMs are. Given the right prompt, they can generate text that reads plausibly to people without specific domain knowledge. People can be excused their lack of relevant knowledge, but LLMs can be used as a way of hoodwinking the terminally uninformed, which is a shame, and dangerous. Automating the exploitation of uninformed people is a bad thing.
However, if you use an LLM as a chatbot, it takes roughly three to four questions to show just how appalling they are. Ask a question, drill into the answer a couple of times, and expose just how lacking the apparent simulacrum of intelligence is. It also shows how a viva voce is good at discriminating between a student who is good at recalling and regurgitating text and one who actually knows what they are talking about.
My most recent experience was asking (in French) if the LLM could converse in French. It claimed to be able to do so, and showed evidence of interpreting the (French) input correctly, but stuck rigidly to answering only in English, even though I wrote repeatedly that I was monoglot in French.
The models appear to be almost stateless as well, not remembering previous answers, as the following shows (paraphrased slightly):
Please give me a list of three fruit.
- Apple, orange, banana.
Which fruit in the previous answer is yellow?
- Lemon
Have you ever made a mistake?
- Yes, I have made mistakes
Please describe a recent mistake you have made.
- I'm sorry, I do not have the ability to make mistakes.
Each prompt and reply, on its own, is 'plausible'. As a collection, though...
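That statelessness is real at the API level: each call sees only what the client sends it, so any apparent "memory" comes from the client resending the whole transcript. A minimal sketch of the difference (my own illustration; `fake_model` is a stand-in, not any real vendor API):

```python
# Toy illustration of why chat 'memory' lives in the client, not the model.
# `fake_model` can only answer about earlier turns if they were resent.

def fake_model(messages):
    """Stand-in model: sees nothing beyond the `messages` it is given."""
    last = messages[-1]["content"]
    if "previous answer" in last:
        for m in reversed(messages[:-1]):
            if m["role"] == "assistant" and "banana" in m["content"]:
                return "Banana"
        return "Lemon"  # context was dropped, so it confabulates
    return "Apple, orange, banana."

# Client that resends history (the 'plausible' case):
history = [{"role": "user", "content": "Please give me a list of three fruit."}]
history.append({"role": "assistant", "content": fake_model(history)})
history.append({"role": "user",
                "content": "Which fruit in the previous answer is yellow?"})
print(fake_model(history))  # Banana

# Client that drops history (the 'Lemon' case in the dialogue above):
print(fake_model([{"role": "user",
                   "content": "Which fruit in the previous answer is yellow?"}]))  # Lemon
```

Same question, opposite answers, purely depending on what the client chose to resend.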
No doubt LLMs will proliferate as chatbots used on 'customer service' websites. I can see I will get even more tetchy.
And so it goes.
I'm guessing that by the time quantum computing is good enough to go neck and neck with conventional crypto, it'll be possible for two parties to each hold one particle of 2048 entangled pairs and measure them every few seconds, at arbitrary distance, for a totally unhackable, securely encrypted connection (I'm assuming such particles change state over time but entangled pairs remain entangled). That was my armchair understanding of what entanglement gets you, theoretically, if successfully applied.
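For what it's worth (armchair level here too), entanglement on its own can't carry a message, because each end's measurement outcomes are random; what it underpins is quantum key distribution, as in the E91 and related BB84 protocols, where the two ends keep only the rounds measured in matching bases. A toy classical simulation of the BB84 sifting step, eavesdropper omitted:

```python
import random

def bb84_sift(n_photons, seed=42):
    """Toy BB84 sift: Alice sends random bits in random bases ('+' or 'x');
    Bob measures in random bases. Only same-basis rounds are kept."""
    rng = random.Random(seed)
    alice_key, bob_key = [], []
    for _ in range(n_photons):
        bit = rng.randint(0, 1)
        alice_basis = rng.choice("+x")
        bob_basis = rng.choice("+x")
        # Same basis: Bob reads the bit exactly; wrong basis: 50/50 noise.
        measured = bit if bob_basis == alice_basis else rng.randint(0, 1)
        if bob_basis == alice_basis:  # bases (not bits) are compared publicly
            alice_key.append(bit)
            bob_key.append(measured)
    return alice_key, bob_key

alice, bob = bb84_sift(1000)
print(alice == bob)  # True: with no eavesdropper, the sifted keys agree
print(len(alice))    # roughly half the rounds survive the sift
```

An eavesdropper measuring in flight would disturb the states and show up as disagreement in a sample of the sifted key, which is where the "unhackable" reputation comes from.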
What we need isn't *longer* keys. It's *better/stronger* keys. A 256-bit Ed25519 key is enormously stronger than a 2048-bit RSA key.
This StackExchange response is 7 years old, but the truth is nothing in it has substantively changed yet:
https://security.stackexchange.com/questions/90077/ssh-key-ed25519-vs-rsa
Yes, there is a lot of *chatter* about quantum computers. Decoherence continues to be an almost insurmountable problem. Recently there was a "major" advance in the field, which was this: Someone devised a scheme for quantum error correction that *introduced slightly fewer errors than it fixed.* Think about that for a minute.
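That break-even point is the whole game. The classical analogue (my illustration, not the quantum scheme in question) is a repetition code: triplicate each bit and take a majority vote, which only helps when the per-copy error rate is below a threshold:

```python
def logical_error_rate(p):
    """3-bit repetition code with majority vote: a logical error needs
    at least two of the three copies to flip."""
    return 3 * p**2 * (1 - p) + p**3

# Below break-even, correction is a big net win...
print(logical_error_rate(0.01))  # ~0.0003: 1% physical errors -> ~0.03% logical
# ...above it, 'correction' actively makes things worse.
print(logical_error_rate(0.6))   # ~0.648, worse than the raw 0.6
```

Quantum error correction has the same below-threshold requirement, but with far harsher overheads, since the correction circuitry is itself noisy qubits, hence the significance of a scheme that merely fixes slightly more errors than it introduces.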
Quantum computers, ON PAPER, have enormous capabilities. Quantum computers with limited numbers of qubits have been deployed in certain extremely narrow niche applications. But there is no real sign on the horizon yet of overcoming the difficulties of scaling them to be generally useful at real-world problems. The death of conventional cryptography is a long, long way away yet. It is definitely time to be *prepared and aware*; but it's not time to panic, and if you think that the solution to your (or anyone else's) problem is a 4096-bit RSA key, then you don't understand the problem.