
How many SSL weaknesses are there...?
How many remain to be discovered?
We need a better system to keep others out.
Apple has updated its mobile operating system iOS to patch a bug that blows apart the integrity of encrypted connections. Versions 7.0.6 and 6.1.6, available now for download, fix a vulnerability that could allow "an attacker with a privileged network position" to "capture or modify data in sessions protected by SSL/TLS," …
The linked article states:
"A test case could have caught this, but it's difficult because it's so deep into the handshake. One needs to write a completely separate TLS stack, with lots of options for sending invalid handshakes."
So rather than the absence of proper unit testing, I'd say it was the absence of exceptional unit testing; but it plays into the usual narrative about Apple's attitude towards security versus that of the more commerce-oriented firms.
But these same issues must be concerns for Firefox et al. How many variations of ostensibly valid but actually invalid SSL certificates can there be, and has nobody set up test servers that automatically vend them? Writing a unit test that connects to each of those, with and without a data fuzzer, doesn't sound too hard.
>it was the absence of exceptional unit testing
This is the level and sort of testing that formed the majority of tests performed to assess IEEE 802 and OSI standards conformance, back in the late '80s. Building the relevant "reference implementations" and test scenarios took time and effort and a particular bent of mind... So it doesn't surprise me that OEM developers under delivery pressure will focus on the simpler functionality tests.
"A test case could have caught this, but it's difficult because it's so deep into the handshake. One needs to write a completely separate TLS stack, with lots of options for sending invalid handshakes."
A test case?! This particular bug could have been caught by paying attention to compiler warnings, or, if the implementation is particularly poor, by using a static linter. The duplicate goto renders a big chunk of the function unreachable, and any decent C implementation should warn about that.
It's lousy coding practice (copy and paste, gratuitous use of goto instead of refactoring, unbracketed single-line if-bodies) coupled with lousy integration practice (not checking warnings from the build).
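For reference, the pattern in question looks roughly like this - a minimal sketch with invented names (check1_err and check2_err stand in for the real hash-update and signature checks), not Apple's actual Security framework code:

```c
#include <assert.h>

/* Minimal sketch of the "goto fail" pattern. The duplicated goto is
 * unconditional: once the first check passes, control jumps straight
 * to the fail label and the second check never runs. */
static int verify_signature(int check1_err, int check2_err)
{
    int err = 0;

    if ((err = check1_err) != 0)
        goto fail;
        goto fail;   /* duplicated line: always taken, not guarded by the if */
    if ((err = check2_err) != 0)   /* unreachable: the "real" signature check */
        goto fail;

fail:
    return err;   /* 0 (success) here even when check2_err != 0 */
}
```

With the stray goto removed, the second check runs and a bad signature is rejected; as written, the function reports success whenever the first check passes - exactly the dead-code situation an unreachable-code warning would have flagged.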
This sort of thing is, unfortunately, all too common among programmers in general, and C programmers in particular. It's possible to write robust, clear, clean, maintainable C, but very few people even learn enough of the language to do it, much less make the effort.
Replying to myself: further down in the ImperialViolet post, the author talks about the lack of warnings for a similar mistake from GCC and Clang with typical defaults. Well, you use an implementation with lousy behavior, you get lousy code; and compiler options are part of the implementation.
Frankly, if you're writing security-sensitive code in C and you're not using both static and dynamic checkers with it, it's already highly suspect.
Does anyone bother to actually verify a damned SSL certificate these days?
Actually, I do, but one of the issues I found with iOS is that there appears to be no way to check which certs were flagged up but subsequently accepted by a user. In particular, hotel networks often try to make users accept invalid certs when checking email, and if you're typing when the prompt comes up you may end up accepting one inadvertently - and be unable to turn the clock back on that.
I would be interested to hear of people who know how this mechanism works, and if there is a way to zap user-accepted certs, if need be all at once.
I verify certificates, and that's the default for all the SSL/TLS-enabled software I write. But it is both a tremendous hassle and very confusing for non-experts.
In particular, hotel networks often try to make users accept invalid certs when checking email, and if you're typing when the prompt comes up you may end up accepting one inadvertently - and be unable to turn the clock back on that.
As with all things PKI, it's far too complex and difficult, even if you've studied all the relevant esoterica.
With X.509 certificates, the normal verification path goes something like this:
1. Check to see that the certificate itself is valid: that it's well-formed, within its validity dates, etc.
2. If the certificate is self-signed, see if it's one you've already decided to trust. If not, you don't trust it.
3. Otherwise, find the certificate that was used to sign it. That signing certificate might be part of the "chain" of certificates sent by the peer, or it might be one you already have.
4. Check that the signing certificate is valid, as in the first step above.
5. Use the public key of the signing certificate to verify the signature on the certificate you're trying to verify.
6. The certificate you just used to check the signature may be a "root" (self-signed) certificate, or it may be an "intermediary", which means it was signed by another certificate, in which case you repeat steps 3-6.
7. Eventually you end up at a root certificate. Either you already trust this, in which case you trust the chain (so far); or you don't trust this root, so you don't trust the chain. Normally trusted roots are issued by (supposedly) well-known CAs, and are pre-installed for you. Of course we know CAs are not themselves particularly trustworthy, which is one reason why the whole X.509 PKI is a house of cards.
8. If you trust the chain up to this point, now you go back and check that none of the certificates in it has been revoked, either by comparing against CRLs (Certificate Revocation Lists), which you're supposed to update periodically, or by using OCSP (Online Certificate Status Protocol). Neither CRLs nor OCSP work well in most cases; in fact they're nearly useless for most users.
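The steps above can be sketched as a toy model - stub structs and flags only, no real crypto, ASN.1 parsing, or store lookups, and all the names are invented for illustration:

```c
#include <assert.h>

/* Toy model of the X.509 chain walk described above. Each flag stands
 * in for a real check whose outcome is assumed to be already known. */
typedef struct {
    int well_formed;     /* steps 1/4: structurally valid, within validity dates */
    int signature_ok;    /* steps 5-6: signature verifies against the issuer's key */
    int is_trusted_root; /* step 7: present in the local trusted-root store */
    int revoked;         /* step 8: listed in a CRL or flagged by OCSP */
} ToyCert;

/* chain[0] is the peer certificate; chain[n-1] should be the root. */
static int chain_trusted(const ToyCert *chain, int n)
{
    for (int i = 0; i < n; i++) {
        if (!chain[i].well_formed) return 0;   /* steps 1 and 4 */
        if (!chain[i].signature_ok) return 0;  /* steps 5-6, walking up the chain */
    }
    if (!chain[n - 1].is_trusted_root) return 0;  /* step 7: stop at a trusted root */
    for (int i = 0; i < n; i++)
        if (chain[i].revoked) return 0;        /* step 8: revocation checked last */
    return 1;
}
```

Note that, mirroring step 8, revocation is only consulted once the chain itself is trusted - which is also why broken CRL/OCSP delivery quietly undermines everything before it.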
Now, if we get to the point where you've decided you don't trust the chain, what happens next is up to the SSL/TLS implementation, which usually defers the decision to the application, which usually in turn defers it to the user, who generally doesn't know enough to make an informed decision. This is where we get prompt-to-trust mechanisms, which - as you noted - generally have abysmal UI/UX and are prone to users accidentally agreeing to trust certificates. In the security biz, this is what we call "failing insecure" or "fucking stupid and lazy" (which, ironically, is also how we describe most software developers).
I would be interested to hear of people who know how this mechanism works, and if there is a way to zap user-accepted certs, if need be all at once.
Well, there's the rub. When the user tells the software to accept an unverified certificate, the software might do pretty much anything. It might just ignore the verification failure for the current session. Or it might add the peer certificate (the end of the chain) to a collection of "trust these peers" certificates. Or it might add the root or earliest intermediary (the beginning of the chain) to a collection of "trust these issuers" certificates - which may or may not be distinct from its collection of trusted CA certificates.
And where are those collections? Most OSes provide some sort of central certificate store. Windows has one for the machine ("My Computer"), one for each user, and domain collections as well. iOS has some set of central certificate stores, but I don't know the details. Some apps use those OS-hosted stores; others maintain their own. Firefox, for example, keeps its own certificate collections, and Java apps often maintain their own too.
Pretty much everything that keeps collections of trusted certificates lets you browse and remove them, but good luck figuring out which ones were shipped with the OS or product and which ones were later added by a user - deliberately or inadvertently.
The problem with cryptography isn't encryption, it's key distribution. And it remains unsolved. PKI systems fall into two categories: massively over-engineered in an attempt to handle every use case, and thereby well-suited to none; or tuned for a specific use case and not helpful for general use. PGP's web of trust is an example of the latter, and X.509 of the former. X.509 is fundamentally broken, and has seen a tiresome series of ever-more-complex attempts to patch over its worst problems.
Please explain!
He's referring to the zillion dev "howto" manuals and "guru" programmer recommendations for dealing with SSL certs. A lot of them end up telling the dev to "disable SSL validation" or do something similarly dumb. Even some of the howtos that really shouldn't give this advice (those covering security products such as Identity Management and Access Management suites) still explain how to disable SSL validation or how to trick the tool into accepting self-signed certs.
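What that bad advice amounts to can be shown with a toy sketch - the names are hypothetical, but real libraries expose the same hook as a verification mode or callback (OpenSSL's SSL_CTX_set_verify, for instance), and "disable SSL validation" means installing the accept-everything variant:

```c
#include <assert.h>

/* Toy illustration of the "disable SSL validation" anti-pattern.
 * handshake() stands in for a TLS library that lets the application
 * install a certificate-verification callback; cert_is_valid is a
 * stand-in flag for the result of a real chain check. */
typedef int (*verify_cb)(int cert_is_valid);

/* What verification is supposed to do: reject bad certs. */
static int proper_verify(int cert_is_valid) { return cert_is_valid; }

/* What the "howto" advice installs: accept everything. */
static int accept_anything(int cert_is_valid)
{
    (void)cert_is_valid;
    return 1;
}

/* Returns 1 if the connection proceeds, 0 if it is rejected. */
static int handshake(int cert_is_valid, verify_cb verify)
{
    return verify(cert_is_valid);
}
```

With accept_anything installed, a connection with an invalid cert proceeds exactly as if validation had never existed - which is the point the commenter is making.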
"Unlike the o/s used by Android devices, which apparently are usually updated by purchasing a new device."
Purchasing a new device, which you fanbois do anyway, like sheep, every year the new shiny comes out?
Also you obviously have no clue what you're talking about. Google can and does update things on older phones.
But keep the butthurt iPropaganda coming. You provide entertainment.
But keep the butthurt iPropaganda coming. You provide entertainment.
Disappointingly, having your former account closed has not been educational enough for you to moderate your style and method of expressing yourself on public forums.
A couple of tips for you:
- everyone is entitled to their own opinion.
- everyone is entitled to respect for their choices.
- everyone is entitled to the courtesy you would also give them when meeting in person.
- if someone makes a choice or has an opinion you don't agree with, rational, well reasoned arguments to support your own position are welcome because that's the basis of intelligent discussion.
- give and you shall receive: pay the same respect to someone else's arguments as you expect for your own.
- there is no reasonable argument for getting personal or insulting.
- the use of invective other than for humorous effect merely reflects negatively on the user.
- used in moderation, humour, irony and sarcasm are very effective in a discussion. Insults aren't.
- relax. As a human being, you are entitled to make mistakes. The ability to accept this and learn from it distinguishes good discussion partners from people who will eventually be ignored.
- there is no "winning" or "losing" - there are points of view(*)
- any attempt to get away with creative interpretation of facts deserves the contempt and derision it will invariably receive. I know this goes slightly against the gist of the above, but trying to lie is frankly a stupid idea on a forum where most readers have above average analytical abilities by dint of their profession.
- you don't know who you're talking to. There ARE people smarter than you. Sometimes they even bother to give you a hint you're on the wrong track.
Hope this helps.
(*) I would have said "shades of grey", but that would now invariably result in someone throwing the number 50 in there :).
This explanation of what the flaw entails makes absolutely no sense to me. SSL validation has no bearing on the IP address of the host. And if they've simply turned off CN validation (which is what everything points to at the moment) for all iOS-handled SSL connections, they should get sued for gross negligence.
We've been advocating using your own SSL channel (such as what Chrome uses on iOS) for years, especially if the data you're sending can be misused for financial gain. Having the OS handle cryptography is simply a bad idea.
Except there is a major feature missing from Chrome's SSL implementation: there is no way to import certificates, which is useful if you have man-in-the-middle inspection on a corporate firewall/proxy.
You should be able either to import new root certs or to trust roots pushed by Apple config/MDM solutions.
So you can only download 6.1.6 on devices that can't run iOS 7 (i.e. the 3GS).
Which is moronic for devs, as we can't download it from the dev portal nor via iTunes nor via device (you can only download iOS 7 via Settings).
We have to keep a range of devices with each OS installed so we can reproduce bugs reported by users, and you'd be amazed how many iPhone 4 and iPad 2 users are still on iOS 6 (or even 5). Telling people to upgrade is all well and good, but it's a total cop-out instead of just letting people use what they want (and I understand, because I loathe iOS 7).
"yep, loads of people are still on iOS 6 on there phones because they don't want iOS 7"
Yep. My wife really doesn't want ios7, so she's going to be really upset when I tell her. Sorry dear, you need to update to that pig-ugly ios7 because Apple didn't implement a security protocol correctly. Yes, they fixed it in ios6 also, but you can't have it. She, like many others, will have no idea how important it is.
This post has been deleted by its author
There is. Download the update on the device rather than via iTunes. The update itself is only 13.6MB.
Of course, that may be a bad idea in this case, if the update process uses iOS SSL connection security to verify that it's downloading from a trusted source...
I don't give a rat's arse about SSL snooping - that's a vuln that can be fixed. It's corporate data theft that really pisses me off - but no-one wants to do anything about it because it's apparently legal. Apps such as LinkedIn can steal your contact lists from the mobile app, and fuck knows what Facebook is up to in the background.
I had a notification that Dropbox are going to change their terms and conditions - no idea what they will be doing and I don't give a shit. It's going. My trust is gone in "free" apps. All these c**ts want is your personal information. Same as the care.data shite: harvest people's info and flog it.
The net isn't what it was 15 years ago - marketing bastards see it as a convenient way of stealing information by way of small print and profiting from it and mobile OS vendors seem to be quite happy to provide methods via the API to allow this to happen.
The whole ethos of the net is fucked and exploitation of vulnerabilities is a minor crack in the huge hole of distrust and contempt that I have for companies who appear to be whiter-than-white but have nefarious aims.
My trust is gone in "free" apps.
Sigh. Sorry you realised so late that "free" most definitely isn't. It's the biggest, most widespread lie of the 21st century, knocking "I did not have sex with that woman", "we confirmed WMD in Iraq" and "this won't hurt" into a cocked hat.
It'd be nice if Apple could release a 7.1beta6 to patch the vulnerability. 7.1beta5 suffers the problem (based on the various test sites that have popped up) but still no Beta 6 - and you can't, officially, go back to 7.0 once you're on the beta track (although, if you're willing to lose your backup - or hack the version strings in the backup to keep it, you can do it).
Not happy that to fix this bug I have to upgrade from iOS6.1.3 to iOS7 on my iPhone4S. I have deliberately stayed on iOS6 because I just don't like the new "flat" look of iOS7. The rubbish icons, the over minimalism. I think it's a step backwards in many ways. I wish 6.1.6 was available to iOS6 users even on iOS7 capable devices. What to do?
I'm in the same boat.
If it was any other company, a concerted twatterstorm or whatever might embarrass them into doing something, but as it is Apple, they'll just unleash the New Model Fanboi Army (TM) to crush the resistance of the non-believers with their "Get over it" banners.
But a fix for iOS 6 is available: 6.1.6. It's not an enhancement - it's a fix for a vulnerability of Apple's own making. Perhaps the problem (as Apple would see it) is that making the fix available without upgrading would create a precedent. Heaven forfend...