
The hype surrounding quantum computing reminds one of this:
https://en.wikipedia.org/wiki/Extraordinary_Popular_Delusions_and_the_Madness_of_Crowds
Your mission, should you choose to accept it, is to help the National Institute of Standards and Technology (NIST) defend cryptography against the onslaught of quantum computers. It hasn't happened yet, but it's pretty widely agreed that quantum computers pose a significant risk to cryptography. All that's needed is either a …
What about, before encryption, switching languages every sentence? Inserting a random word every n words? Reversing the entire string? Splitting the string into equal parts, then reversing each one independently? Using Google Translate to traverse five languages, ensuring the reverse translation matches the original string?
I think you're looking at the crypto only; this is out-of-the-box thinking. Machines are only as clever as we make them, but in essence they are dumb, so if the person programming hasn't thought of the many ways you can harden the encryption, then they are doomed to fail.
Not as "out of the box" as you seem to think. What you're describing is a sort of crude, clumsy, weak pre-encipherment. Instead of the crude, clumsy, Heath Robinson crap you suggest, why not just use a different modern cipher for your pre-encryption obfuscation? Why not pick something widely believed to be secure in its own right and designed to produce ciphertext which is demonstrably statistically/cryptographically indistinguishable from random data? Something which has withstood decades of analysis itself. Then encipher its statistically/cryptographically pseudorandom output using a different cipher and unrelated key. Just as Truecrypt used to do... Before it was harried into oblivion. There's been some interesting work done in proving the worth of such approaches and how they compound, useful in the light of offputting thought experiments like "evil ciphers" ...and which might go some way to explain what happened to TC.
Better than entrusting your "secret" in plaintext to the interwebs and Google Translate, wouldn't you agree?
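For what it's worth, a minimal sketch of that kind of cascade: two independent keys, two unrelated modern ciphers, the output of one fed into the other. The use of Python's cryptography package with AES-GCM and ChaCha20-Poly1305 here is purely illustrative; it is not a claim about how TrueCrypt actually chained its ciphers.

```python
# Sketch: cascade ("pre-encipherment") with two unrelated ciphers and keys.
# Illustrative only -- not a statement of how TrueCrypt implemented cascades.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

inner_key = AESGCM.generate_key(bit_length=256)   # key 1
outer_key = ChaCha20Poly1305.generate_key()       # unrelated key 2

def cascade_encrypt(plaintext: bytes) -> bytes:
    n1, n2 = os.urandom(12), os.urandom(12)
    inner = AESGCM(inner_key).encrypt(n1, plaintext, None)              # stage 1: pseudorandom output
    outer = ChaCha20Poly1305(outer_key).encrypt(n2, n1 + inner, None)   # stage 2: encrypt that output again
    return n2 + outer

def cascade_decrypt(blob: bytes) -> bytes:
    n2, outer = blob[:12], blob[12:]
    middle = ChaCha20Poly1305(outer_key).decrypt(n2, outer, None)
    n1, inner = middle[:12], middle[12:]
    return AESGCM(inner_key).decrypt(n1, inner, None)

assert cascade_decrypt(cascade_encrypt(b"attack at dawn")) == b"attack at dawn"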
Maybe I didn't explain myself. What I am suggesting is that if you have an algorithm that can process millions of combinations simultaneously to crack the cipher, wouldn't that be rendered useless if the original information made no sense unless you knew what to do with it after you had removed the encryption? And how would the algorithm determine it had decrypted something if it didn't know what it was decrypting, since the output wouldn't match the expectation? I suppose I'm also asking why we rely solely on maths to encrypt, rather than using random ideas as well to create encrypted data; surely that would be more difficult to break. And yes, Google is a bad idea, but they do no evil...
The problem with tricks like that is that they are of no use for the frequent, reliable transfer of information. Although what you are suggesting would probably work very nicely for a single communication, or set of communications, with a known other party (who knows what tricks you are pulling), it would not work as a widely available communications system, where standards have to be created, agreed to and implemented, and therefore become common knowledge and easily reversed.
The reason we rely on maths is that it provides a known method for reducing structured information to essentially random data, together with a key that can be shown to be breakable only by the application of 'n' clock cycles, where 'n' can be adjusted to whatever level of security you require (bugs and implementation errors excepted).
@AC:
Just as Truecrypt used to do... Before it was harried into oblivion.
Oblivion: the act or process of dying out; complete annihilation or extinction.
Truecrypt is not dead or forgotten. The source code still exists and still compiles just fine, and it is just as effective as it ever was.
... that before an encryption apocalypse is upon us, there are already several avenues of investigation open, because mathematicians (I assume) 'wasted' their time developing branches of number theory that most people would look at and say, "What's the use of that?"
Anyone responsible for science funding should take lessons from these sorts of developments.
AC "Relativity is used by GPS satellites... ...militaries have loved GPS..."
There's a story that the earliest GPS satellites had a remote controlled 'Relativity Switch', because the military decision makers didn't quite trust the Einsteinian Physics.
If it's true, then it makes the opposite point to the one you made. Which is funny.
before an encryption apocalypse is upon us there are already several avenues of investigation open because mathematicians (I assume) 'wasted' their time
Much of the research directly relevant to post-quantum cryptography was done after quantum computing was conceived. After all, we've known Shor's and Grover's algorithms for two decades now.
Lattice-based PQC algorithms like NTRU are of similar vintage. As the article notes, McEliece (one of a family of PQC algorithms based on error-correcting codes) is even older (almost four decades); but it was invented as a cryptosystem, i.e. as a piece of applied mathematics.
It is nice to know that, broadly speaking, cryptography has found applications for number theory and some other once-abstract branches of mathematics - not because there's anything wrong with pure mathematics, but because it's an interesting and unexpected consequence. But it's more true of conventional cryptography than of QC.
Perhaps the one-time pad's time has come?
For many years, storage sizes have been growing exponentially while the costs drop. You could cheaply make individual business cards with 4GB of random numbers and exchange them with everyone you meet; for site-to-site links you can have a 1TB hard drive at each end.
You could use these random numbers as AES-256 keys, or for really sensitive stuff encrypt the message directly with the pad.
No quantum crypto is going to break that.
Obviously you would need some system to make sure the keys are only used once. To start with, the cards you hand out would have to be unique (while you keep a copy of each). Each would contain two separate blocks of key data, one for sending and one for receiving.
It still might not be super practical, but the basic idea is sound; 4GB is good for an awful lot of emails.
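As a rough sketch of the bookkeeping (the file name and offset handling are mine and purely hypothetical): keep a shared pad file plus an offset, XOR the message against the next unused slice, and advance the offset so no pad byte is ever reused.

```python
# Minimal one-time-pad sketch, assuming both parties already share pad.bin
# and track how much of it has been consumed. File layout is hypothetical --
# just enough to show the XOR and the "use once" accounting.
def otp_xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

def encrypt(message: bytes, pad_path: str = "pad.bin", offset: int = 0) -> tuple[bytes, int]:
    with open(pad_path, "rb") as f:
        f.seek(offset)
        key = f.read(len(message))
    if len(key) < len(message):
        raise RuntimeError("pad exhausted -- time to exchange new cards")
    return otp_xor(message, key), offset + len(message)   # new offset: these bytes are never reused

# Decryption is the same XOR against the same slice of the receiver's copy of the pad.
```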
AC "But if you keep re-using the same data off the card it's not a one-time pad?"
The solution is in the name. It's a "One Time Pad".
After using a block of randomness, securely erase it at each end.
Your objection is ridiculous. One shouldn't have to explicitly rebut such trivialities.
It's an obvious idea. I posted the same basic concept before (about six months ago, and likely previous to that...). I have no idea why anyone should downvote it. It's common sense that it could be made to work. Cheers.
Zener diodes, transistors, ring oscillators and such ... as sources of noise. It seems obvious that such noise should be packaged up into 1kB blocks and then written to pairs of multi-TB SSDs for physical distribution. The hardware would look like a HDD duplicating machine. 'Several TB of One-Time Pad should be enough for anyone.'
http://forums.theregister.co.uk/forum/2/2015/11/12/big_bang_left_us_with_a_perfect_random_number_generator/
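Something like the sketch below, though with os.urandom standing in for the Zener-diode noise; a real build would read from the hardware source and would need to whiten and test its output.

```python
# Sketch of the "duplicating machine": fill two destinations with identical
# blocks of randomness. os.urandom is only a stand-in for a hardware noise source.
import os

BLOCK = 1024  # 1kB blocks, as suggested above

def duplicate_pad(path_a: str, path_b: str, total_bytes: int) -> None:
    with open(path_a, "wb") as a, open(path_b, "wb") as b:
        written = 0
        while written < total_bytes:
            block = os.urandom(min(BLOCK, total_bytes - written))
            a.write(block)
            b.write(block)       # Alice's and Bob's copies must be identical
            written += len(block)
```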
Sigh.
Every crypto article gets this response.
OTPs are symmetric encryption. Symmetric encryption does not give a rat's ass about quantum cryptanalysis. QC only provides at best a quadratic improvement in the size of the inner loop when breaking a symmetric key. Doubling the key size removes that advantage.
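To put numbers on that quadratic improvement (this is just the textbook Grover bound, not specific to any one cipher):

```latex
% Grover's search over a k-bit keyspace needs on the order of sqrt(2^k) evaluations,
% so a 128-bit key falls to roughly 2^64 work, while a 256-bit key still needs ~2^128.
\[
  T_{\text{quantum}} \approx \sqrt{2^{k}} = 2^{k/2},
  \qquad 2^{128/2} = 2^{64}, \qquad 2^{256/2} = 2^{128}.
\]
```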
OTPs are largely useless in practice, because you need a secure channel to transmit as much information as you plan to encrypt. If you have that secure channel, then just use it to send your messages.
MW "If you have that secure channel, then just use it to send your messages."
There's a concept called 'The Axis of Time'.
Those that ignore it can fail to think clearly.
Alice and Bob can meet (at some point in time), probably in Vienna, have amazing sex, and organize their pair of multi-TB One Time Pads. Then they can bid a tearful farewell, and Bob heads off on his mission. Then (at all following points in time) they use the One Time Pads. Seeing as how the OTPs are huge, they'll last for years.
I shouldn't have to explain this.
But wouldn't the 'quantum resistant algorithms' be resistant and non-resistant at the same time?
Anyway, the point of one time pads is that you use each and every one of them only once, ever. So if you can make as many one time pads as you'll ever need, based on a truly random generator, and don't ever re-use them, well...
AC: "...getting the pad, securely, to [Bob]..."
With multi-TB OTPs, that's a 'One Time' problem.
This is in the context of spying, not online banking.
Bob can pick it up from Alice.
I've contemplated an algorithm to refill OTPs (below), but it's been downvoted so it must be infeasible. I guess that the Downvote button is too small to contain the details of the flaw.
It will take years to properly vet suitable PQC suites, and they are worried about matters decades from now.
Betting on QC is truly a mug's game. Personally, I think ECC has years of life (and world+dog is moving towards it) and that the NSA's recent about-face is more politics than anything else (e.g. http://cacr.uwaterloo.ca/~ajmeneze/publications/pqc.pdf ).
Personally, I think ECC has years of life (and world+dog is moving towards it)
Almost certainly.
and that the NSA's recent about-face is more politics than anything else (e.g. http://cacr.uwaterloo.ca/~ajmeneze/publications/pqc.pdf ).
There are many possibilities. FUD is definitely one. But let's assume the NSA is privy to a non-NOBUS attack (i.e., one they think someone who isn't the NSA could discover) against ECC. If it's a QC attack, then it will be a long time before it's economical to apply it to traffic that isn't very valuable to the attacker. Even if it's a conventional attack, the economics may not make it worth attacking generic HTTPS traffic and the like.
My instinct tells me that it should be possible to securely communicate replacement One Time Pad noise, so Alice can refill Bob's slowly depleting One Time Pad, within a message.
E.g. a 2 kb transmission, carrying 1 kb of message text (encrypted) AND another 1 kb of replacement noise (encrypted), both encrypted using the same 1 kb of One Time Pad. The new material is itself noise, so you can reuse the One Time Pad in this case.
But I'm not sure. It might be a house of cards. It might violate conservation of entropy. It might prove to consume as many bits as are transmitted.
But I suspect it's feasible.
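To make the idea concrete, here's a literal sketch of the bookkeeping being described: one pad block encrypts both the message and a candidate replacement block, and the receiver appends the recovered replacement to its pad. This shows only the mechanics; whether reusing the same pad block that way is actually safe is exactly the open question in this post.

```python
# Literal sketch of the refill idea above: the same 1 kb pad block is used to
# encrypt both the message and a fresh block of noise, and the receiver appends
# the recovered noise to its pad. Mechanics only -- no claim is made here that
# this reuse is secure; that is the question being posed.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def send(message_1kb: bytes, pad_block: bytes) -> tuple[bytes, bytes, bytes]:
    new_noise = os.urandom(len(pad_block))                 # replacement pad material
    return xor(message_1kb, pad_block), xor(new_noise, pad_block), new_noise

def receive(c_msg: bytes, c_noise: bytes, pad_block: bytes) -> tuple[bytes, bytes]:
    return xor(c_msg, pad_block), xor(c_noise, pad_block)  # message, new pad block to append
```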
One downvote (so far).
Any explicit rebuttal on why it wouldn't work?
Other examples to think about (same concept):
E.g. 1 kb of replacement bit-wise noise (encrypted, a new block of One Time Pad), but encrypted using the same old 1 kb of One Time Pad used to send all the new blocks of One Time Pad. The new noise is bit-wise noise, so you can endlessly reuse the One Time Pad when sending more good bit-wise noise.
E.g. One secret bit = 'X' (0 or 1). Used to XOR-encrypt an endless bit stream of noise. The noise is either inverted or it's not inverted. Can you ever guess 'X'? I don't think so. One bit OTP. Same thing.
Rebuttals very welcome.
The problem with McEliece is key size. 1 MB keys are a bit awkward.
Europe seems to be leaning toward a variant of NTRU that's not patent-encumbered. Key sizes aren't great for the NTRU lattice problems either, though.
Personally I think Ring-LWE looks like the best candidate at the moment. Proven to reduce to a believed-hard problem, so unless someone shows P=NP or something equally drastic (FTL travel, decryption by unicorn magic, etc), it looks secure. And key sizes and performance are within the practical range.
For symmetric crypto, PQC is straightforward. Longer key and digest sizes do the trick. No algorithm in BQP can achieve better than quadratic improvement in number of iterations for breaking a symmetric key, and that's assuming iterations on a QC are as fast as on a conventional machine, which is very unlikely.
AES specifically will probably need to be tweaked, because its faulty key schedule means you can't just make the key longer and get an equivalent improvement in strength. The article's wrong about that. Similarly, we're not so sure that SHA-2's digest can be made larger arbitrarily without compromising the hash. We're not certain about SHA-3 either, but it was designed to produce larger digests, so it's probably the better candidate there.