Everything in your world needs encryption
Everything in my world just needs to know when to shut up.
The Internet Architecture Board (IAB) has called for encryption to become the norm for all internet traffic. Last Friday, the IAB issued a statement saying that since there is no single place in the Internet protocol stack that offers the chance to protect “all kinds of communication”, encryption must be adopted throughout the …
It's not just about encryption. OpenSSL was done for by a poor implementation of a protocol. Reason? The developers had decided on a binary protocol and then ballsed up the code that implemented it.
Not surprising really; writing down the spec for a binary interface is hard enough, and then you're dependent on a coder reading it, understanding it correctly, and then writing the code bug-free. That's a lot of opportunity for getting it wrong.
Surely in this day and age we can do better than that? There's lots of nice things like Google Protocol Buffers or, even better, ASN.1 (it does proper constraints checking and type tagging; GPB has a way to go yet) that make defining and implementing a protocol a much easier task. Honestly, once you get into ASN.1 and things like it you wonder why on earth not everything is done that way.
Getting protocols implemented in a way that automatically makes them more robust by using a proper schema language is just as important as encrypting everything.
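To make the point concrete, here is a minimal sketch of what "schema-driven" buys you. This is a toy illustration in Python, not real ASN.1 or BER, and the two-field schema is invented for the example: the constraint checks live in one generic codec rather than being re-implemented, possibly wrongly, at every parse site.

```python
import struct

# Toy tag-length-value codec with schema-driven constraint checking.
# NOT real ASN.1/BER -- just an illustration of the principle that the
# schema, not ad-hoc parsing code, enforces validity.

SCHEMA = {
    # field name -> (tag byte, min value, max value)
    "id":   (0x01, 0, 65535),
    "temp": (0x02, -50, 150),
}

def encode(msg):
    out = b""
    for name, (tag, lo, hi) in SCHEMA.items():
        val = msg[name]
        if not lo <= val <= hi:                  # constraint check on encode
            raise ValueError(f"{name}={val} out of range [{lo}, {hi}]")
        body = struct.pack(">i", val)            # 4-byte big-endian value
        out += bytes([tag, len(body)]) + body    # tag, length, value
    return out

def decode(data):
    msg, i = {}, 0
    by_tag = {tag: (name, lo, hi) for name, (tag, lo, hi) in SCHEMA.items()}
    while i < len(data):
        tag, length = data[i], data[i + 1]
        name, lo, hi = by_tag[tag]               # unknown tag -> KeyError
        (val,) = struct.unpack(">i", data[i + 2:i + 2 + length])
        if not lo <= val <= hi:                  # constraint check on decode
            raise ValueError(f"{name}={val} out of range")
        msg[name] = val
        i += 2 + length
    return msg

wire = encode({"id": 42, "temp": 21})
assert decode(wire) == {"id": 42, "temp": 21}
```

Note that a malformed or out-of-range value is rejected by the codec itself; the application code behind it never sees it.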
I take it you've never written or debugged an ASN.1 stack.
That thing is a f&%king nightmare. DO NOT USE ASN.1 for new protocols please, unless you are having a competition to see how many CVEs you can get for your software ("look Ma, we beat LDAP... !" :-).
I do so every day, and it's straightforward enough. You just have to get the right tools and libraries. Decoding a BER (or worse, PER) datastream from scratch is a mug's game; that's what libraries and tools are for.
What has let ASN.1 down to date is that making it easy to develop with requires language features that are relatively recent. Bear in mind it's older than C++, and a lot of the compilers for it have long pedigrees [= legacy code] too.
But that's gradually changing; you can do ASN.1 in Java, C#, Python. The tool vendors are gradually discovering the C++ STL; all we need is for them to discover shared pointers too. The European Space Agency (I don't work there) uses ASN.1 in their TASTE framework; it's a pretty comprehensive piece of work.
Anyway, if you can name another multi-language, multi-platform, type-tagging, constraints-checking, binary-encoding, extensible, and flexible serialisation standard, go ahead. I've never found one. Serialisation without all of those things is, well, incomplete and leaves you having to fill the holes yourself (which is a waste of my time as far as I'm concerned).
Google kids hacking away
Google are seemingly busily re-inventing ASN.1 with their Protocol Buffers, but haven't got round to proper type tagging or constraints checking yet. It's a complete waste of everyone's time. If they'd Googled for "binary serialisation" they might just have come across this existing standard. Rumour is that the Google guy who announced GPB had never heard of ASN.1 when asked by someone in the audience.
Even if they had come across it and dismissed it because of the clunkiness of the available tools, they could have written their own. They're writing their own anyway for an unstable, ever-changing standard called GPB, so why not do it for a stable and far more sophisticated standard that's already in widespread use? I mean, if they sat down and thought about it properly they'd end up with something very close to ASN.1 anyway.
"Decoding a BER (or worse, PER) datastream from scratch is a mug's game; that's what libraries and tools are for."
Oh, so that's your answer. The details are hard - let someone else do it...
I'm one of the people who have to do it from scratch. ASN.1 utterly *sucks* I'm afraid. Far too complex for its own good. Type tagging is a bad idea. The software needs to understand the marshalling/unmarshalling format, so type tagging is irrelevant IMHO. You either completely understand the stream format, or you have no business trying to parse it (that way lies security holes for sure).
I'm old :-). ONC/RPC xdr format is nice, simple, and has already had its share of security holes so it's now pretty well understood. Give me an xdr stream any day...
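For reference, the XDR wire format really is that simple: RFC 4506 mandates big-endian values padded to 4-byte boundaries, which is much of why it is easy to audit. A hand-rolled sketch in Python (Python's own xdrlib module used to do this, but it was deprecated and removed in 3.13):

```python
import struct

# Minimal XDR (RFC 4506) encoding sketch: everything is big-endian
# and padded to a 4-byte boundary.

def xdr_int(n):
    # 32-bit signed integer, big-endian
    return struct.pack(">i", n)

def xdr_string(s):
    # 4-byte length, then the bytes, zero-padded to a 4-byte boundary
    data = s.encode()
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

def xdr_read_string(buf, offset=0):
    (length,) = struct.unpack_from(">I", buf, offset)
    start = offset + 4
    s = buf[start:start + length].decode()
    pad = (4 - length % 4) % 4
    return s, start + length + pad           # value, next offset

wire = xdr_string("hello")
assert wire == b"\x00\x00\x00\x05hello\x00\x00\x00"
assert xdr_read_string(wire) == ("hello", 12)
```

With no tags, the reader must already know a string comes next; that is the trade-off being argued about in this thread.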
"I'm one of the people who have to do it from scratch"
Well, that's unfortunate for you and you have my deepest sympathy. Personally speaking, I've always erred on the side of getting tools and libraries for it, simply for the sake of making progress. You can get commercial tools for anything POSIXish, Windows too. They're not free, though, which is a barrier, but sometimes it's worth spending a little to save a bunch of time. Also, the right license will get you source code too, which is normally easily re-compiled for other platforms (I've done this twice now).
If you're more open source minded you may also find this and particularly this link useful. This is an open source ASN.1 compiler that is used by the European Space Agency and builds code for C (and ADA); the resulting C code doesn't use any malloc/free calls, which is very handy sometimes, and you can throw Python and SQL into the mix too for good measure. In fact the whole TASTE framework looks like a very interesting way of developing any system, not just spacecraft.
"Type tagging is a bad idea."
No, type tagging is a good idea. You can tell what PDU is being sent without having to know in advance what the sender will do. Agreed, it's irrelevant if the system has only one PDU to send, but as soon as there's two you can tell them apart without having to risk reading the whole byte stream and guessing. That makes it difficult to misinterpret the byte stream, and the constraints checking does the rest. That's something that, for example, Google Protocol Buffers cannot do, and I've seen people get into a right mess with it as a result.
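The point about telling PDUs apart can be shown in a few lines. This is a toy sketch with invented tags and layouts, not any real protocol: the receiver dispatches on a leading tag byte and rejects anything it doesn't recognise, instead of guessing from the payload.

```python
import struct

# Why a leading type tag helps: the receiver can dispatch on the first
# byte instead of guessing which PDU a byte stream contains.
# Tags and PDU layouts here are invented for illustration.

TAG_LOGIN, TAG_HEARTBEAT = 0x10, 0x20

def make_login(user_id):
    return bytes([TAG_LOGIN]) + struct.pack(">I", user_id)

def make_heartbeat(seq):
    return bytes([TAG_HEARTBEAT]) + struct.pack(">I", seq)

def dispatch(pdu):
    tag, payload = pdu[0], pdu[1:]
    if tag == TAG_LOGIN:
        return ("login", struct.unpack(">I", payload)[0])
    if tag == TAG_HEARTBEAT:
        return ("heartbeat", struct.unpack(">I", payload)[0])
    raise ValueError(f"unknown tag 0x{tag:02x}")  # reject, don't guess

assert dispatch(make_login(7)) == ("login", 7)
assert dispatch(make_heartbeat(3)) == ("heartbeat", 3)
```

Without the tag, both PDUs are four bytes of big-endian integer and indistinguishable on the wire; the receiver would have to infer the type from context, which is exactly where misinterpretation creeps in.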
ASN.1 utterly *sucks* I'm afraid. Far too complex for its own good.
Well, if the world wants a binary encoded, type tagged, constraints checked, multi-language, multi-platform and extensible serialisation standard then I'm afraid it's going to struggle to get one simpler than ASN.1. It is indeed complex, but doing all those things is not going to result in something noddy. Noddy gets you bugs like Heartbleed. Google Protocol Buffers are nowhere near as sophisticated, and it kinda shows; they keep adding to it, breaking everyone's existing code.
DO NOT USE ASN.1 for new protocols please, unless you are having a competition to see how many CVEs you can get for your software ("look Ma, we beat LDAP... !" :-).
Hmmm, I think the LDAP guys screwed up by inventing their own encoding format (Generic String Encoding Rules) instead of using the existing ITU standard encoding rules. If there have been problems, well, perhaps that's not surprising.
"a long-held view within the Internet Engineering Task Force articulated in 1986 in RFC 1984"
First reaction: Wow, someone really had a clue back then!
Then checked the RFC - it is from 1996, not 1986, which makes it slightly less impressive (IIRC, even the infamous "clipper chip" was dead by then).
Yes, Clipper was just about dead but key escrow was still on the rampage (as in "let the Government have a copy of your crypto keys, just in case we ever have an irresistible urge to read your stuff"). Also the criminal investigation of Phil Zimmermann was still ongoing when RFC 1984 was drafted.
What's new today is that data mining of massive amounts of plain text has become practicable in a way that wasn't anticipated in 1996.
Says the person who's never had to do multi-vendor IPSEC interoperability?
The major vendors use slight differences in implementation to make sure you don't stray from the fold.
The IAB is also internet-focussed. What happens when you've got multiple 10G or 40G NICs in a box on your LAN? Do you still want to encrypt everything?
I'm all for encrypting internet traffic, but lots of people use internet protocols on the local LAN, where encryption can cause problems (e.g. for troubleshooting), especially IPSEC tunnels.
I've got a little network-based TV tuner. Do I want all the MPEG streams encrypted? No. It isn't required and it's way too much complexity for the job.
For a small part of the dispute, there is the noise about DNSSEC vs DNSCurve. DNSSEC is more widely deployed, but Daniel J. Bernstein (author of DNSCurve, designer of Curve25519, and responsible for other important work, but a rather difficult individual to work with) has denigrated DNSSEC as a "DDOS amplifier". However, by considering encryption to be "free", DNSCurve would eliminate DNS caching, and the load on authoritative DNS servers would increase... dramatically. So nobody uses DNSCurve.
Because nobody uses DNSCurve, your every DNS query is open to interception and manipulation. DNSSEC makes it harder to forge the responses, but that may be small comfort when you're jailed for looking up torproject.org.
What I'd like to see is IPsec with opportunistic encryption, but I don't expect that to be widely available... ever.
Well, it provides for the possibility of a somewhat better PKI than the broken one we use for HTTPS, where any of over 500 CAs can forge a certificate for any domain on the net. DNSSEC would not be a perfect PKI by any means, and there's no particular reason to trust ICANN as the holder of the keys to this kingdom either. But signers of bogus NS records for managed zones can at least be held accountable: a bogus signed NS record for any zone further down is observable and recordable, and once recorded and publicised it provides signed proof of bad intentions and actions, wherever in the DNSSEC hierarchy registrar reputation needs to be protected.
DNSSEC also provides a fairly obvious place for certificated public key storage. For example, if you want to develop a networked application called foo, storing the public key for example.com at _foo.example.com seems fairly obvious. And given domain registrants already have the hassle of renewing domains every year or two, now's a good time to move our business to DNSSEC-friendly registrars in preference to those which are not. This could also save the cost of those stupid, expensive and near-useless CA HTTPS certificates.
There's also the problem of guilt by association. The very nature of the Internet requires the routers and so on to know the endpoints, sort of like how the post needs addresses. These are essential for the protocols to work, yet they alone can be incriminating. So you're stuck with potentially incriminating evidence that can't be encrypted. And obfuscating this with extra hops and such, by definition, reduces the protocol's efficiency by adding garbage data (and the associated costs) to your overhead, leaving you with a hard choice to make.
Assume classified data is protected by an encryption key with 256 bits of entropy, while the program that manages the system is protected by a manager's password such as P@$$WoRd1234. The chances are that the system will be taken over by the criminals or spooks who broke the password, rather than by anyone attacking the 256-bit encryption key. It can't be emphasized enough that sufficiently strong passwords are the key to the safe deployment of cryptography.
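The arithmetic behind that claim is quick to check. As a back-of-envelope upper bound (assuming the attacker knows the character set, and ignoring the predictable l33t-speak pattern, which makes the real figure far worse):

```python
import math

# Upper bound on the entropy of a 12-character password drawn from the
# ~95 printable ASCII characters, versus a 256-bit key.
charset = 95
password_len = 12
password_bits = password_len * math.log2(charset)   # ~78.8 bits at best

key_bits = 256
assert password_bits < key_bits / 3
```

So even a "random" 12-character password offers well under a third of the key's strength, and a patterned one like P@$$WoRd1234 offers far less again; the password, not the key, is where the attack goes.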
While I agree that crypto should be applied across the board (assuming key management is dealt with), we're once again ignoring the other 2/3 of the issue ... protection at rest and during processing. People get a false sense of security when we over-stress protection in transit, assuming that their info is transparently protected. Any person who uses any form of information system for any variation of their personal data needs to have the prime directive branded on their brain: it's your info, and it's up to you to protect it to the degree you're happy with. If you count on anyone else to keep your secrets, you will be disappointed.
Their counterpoint is that, due to side channel attacks and at-rest attacks and all sorts of other ways to sniff out compromising data in transit, it's ALWAYS needed. Basically, they only need to be lucky ONCE; you can slip where you couldn't even conceive of slipping and it's game over. In such an environment, how much do you value the integrity of your data in transit?
Your mistake is in assuming that if a protocol has been designed for non-confidential data, it will never carry confidential data. Which is obviously wrong; see how often passwords are sent over plain SMTP.
Yes, you would be perfectly right to blame the user. But this won't fix the problem. What will fix it is to use security for any protocol which might be used to gain access to protected data. Which basically means all of them.