Putting the SSL information in the DNSSEC system looks like the best and most distributed way to go.
I am leery of google setting up a catalogue, do we really want them controlling more of the net?
Every year or so, a crisis or three exposes deep fractures in the system that's supposed to serve as the internet's foundation of trust. In 2008, it was the devastating weakness in SSL, or secure sockets layer, certificates issued by a subsidiary of VeriSign. The following year, it was the minting of a PayPal credential that …
In any case, most of the problems described in the article relate to the current CA system, not to SSL as such.
DANE can fix a lot of that (putting most of the server-oriented CA racket out of business in the process).
Additionally, there is another draft in the security working area which specifically addresses some of the CA system's shortcomings by allowing domain owners to publish in DNS who is authorised to issue certs on their behalf. This is a CA-oriented hack designed to prevent incidents like the recent Comodo one.
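For illustration, here is roughly what those two mechanisms look like as DNS records, as the proposals eventually standardised (DANE's TLSA record per RFC 6698, and the issuer-restriction idea as CAA records per RFC 6844). The domain, CA name and hash value are placeholders:

```
; DANE: pin the TLS server's own public key for port 443,
; bypassing the CA system entirely (hash value is a placeholder)
_443._tcp.www.example.com. IN TLSA 3 1 1 <sha256-of-servers-public-key>

; CAA: only the named CA may issue certificates for this domain
example.com. IN CAA 0 issue "ca.example.net"
```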
So it seems that most of the problems here are either as a result of -
1. Sloppy string validation in Common Names and URL bars
2. Massive proliferation of "trusted" entities who may or may not have good security practices or even be trustworthy at all
3. Broken revocation methods
4. Un-revoked certificates using obsolete hashing or encryption methods
I've been working with SSL/TLS for a bunch of years now and 2, 3 and 4 have been obvious for a LONG time. 1 is more interesting, because you would have thought the programmers would be extra, extra careful in a security application, if only because they are working on security systems!
Apart from CN validation though, the problems here are HTTPS problems, not SSL or TLS problems. SSL has many wider applications than securing the web. These frequently do not involve any public authority trust at all, have manual revocation methods, cipher suite restrictions and no plaintext-to-encrypted bridge.
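The manual revocation those non-web deployments use can be as simple as a locally maintained blocklist of certificate fingerprints. A minimal sketch, assuming SHA-256 fingerprints as the identifier (the in-memory set stands in for whatever store a real deployment would use):

```python
import hashlib

# A hand-maintained revocation list of the kind a closed, non-web SSL
# deployment might use: a set of SHA-256 fingerprints of certificates
# we no longer accept. The storage scheme here is purely illustrative.
revoked_fps = set()

def revoke(der_cert: bytes) -> None:
    """Mark a certificate (DER bytes) as no longer acceptable."""
    revoked_fps.add(hashlib.sha256(der_cert).hexdigest())

def accept(der_cert: bytes) -> bool:
    """Reject any certificate whose fingerprint has been revoked."""
    return hashlib.sha256(der_cert).hexdigest() not in revoked_fps
```

No CRL distribution points, no OCSP responders: the operator decides what to distrust and when, which is exactly the kind of control the CA-driven web model doesn't offer.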
The fundamental difficulty here is that the problem is almost impossible to completely solve. A little like DRM (which can be summed up as "how do I give the content and the key to someone, but prevent them using the two together in ways I dislike?"), the trust problem comes down to "How do we establish a relationship of trust between two parties that have never met?". The solution we have been using so far is to involve a third party that the user has never met either. When there were only a handful of these third parties it was perhaps not too much of a stretch; now I look at firefox and there are at least 50 "authorities", each with a couple or more root certificates. I know several of them have issued bad certificates in the past and others have been compromised. But if I get rid of them I lose the ability to 'secure' a lot of comms, though secure is the wrong word. Tricky.
tl;dr - The HTTPS infrastructure is in need of a lot of work. SSL/TLS itself less so.
SSL is fine in the sense that you can do without the CA for, say, an enterprise roll-out; it's mostly the built-in browser bias that favours the CA infrastructure, with its inflated prices. But that's not a unique https thing.
Take, for example, symbian code signing. Other code signing schemes might be similar, or not, as the case may be. The point is that to get a key to sign your code you need to pay a third party a substantial amount of money. Yes, there are alternatives that are deliberately crippled and otherwise disrespectful, but I'm only talking about the "main" way to sign code and have it be useful on other people's handsets. This is apparently laziness on nokia's part, forcing you to pay verisign for the privilege of being "vetted" so you get a key and can sign code.
Not only is this nokia pushing unnecessary costs onto the application writer, it also doesn't do what they appear to assume it does. Vetting someone's possibly government-issued ID tells you nothing about whether their code is worthy of a signature. And worse, there is no redress. There doesn't appear to be a working certificate revocation system there either. So if bad code manages to get signed, all nokia can do is boot it out of the ovi store. They can't revoke a rogue writer's certificate, and can't announce to the world to no longer trust the app and/or the writer.
They don't even control the root cert in the devices they ship. Now that might actually be deliberate, but the cost is a little excessive, and it doesn't appear to gain much of anything at all.
One of the most important things missing is that there's no choice, and even if there was, the choices would be meaningless: mostly between "will work" and "won't work". Given a bog-standard, thoroughly conditioned DAU, there'll be frustrated and blind clicking on buttons until the thing appears to "work". No matter the cost.
I say that the model is fundamental to these problems, not the technicalities that you put up front. A "trusted third party" is indeed no real solution, but programmer laziness widely accepted as a substitute for a real take on the problem. And a profitable one too!
So dig deeper, and notice that even government-issued ID, should it contain a certificate usable for this sort of thing, would be inappropriate for too many cases to enumerate. This then is a fundamental problem and so far everyone is busily ignoring it. As long as that remains the case, all models will be full of insidious brittleness like this one here.
It isn't just https. It's the CAs and everything the certificates they issue are used for. That brings it squarely back to the PKI.
There are surprisingly simple ways to fix the largest holes, though. Ditch the CAs and have certificates instead be issued by parties for whom it makes the most sense to issue them.
Cert signing is a racket. I have to buy a cert from a CA, a cert which expires in a few years and bestows zero trust in my site. The only reason I'm buying a cert at all is to stop the browser from complaining about the cert.
Why can't I as an individual generate my own cert and sign it with a PGP key? When a browser encounters a site with a PGP signed cert it could present the user with a notification to inspect the web of trust and then choose to trust the site or not based on that. It could sit between a self signed cert and a CA signed cert in the security model - enough for personal & small businesses to use but probably not something a bank or ecommerce store might use.
I'm aware that there are some free CAs, but obtaining a cert is almost as odious as it is for a commercial CA. In either case, the CA isn't providing my site with any extra trust, it's just shutting the browser up.
I'm surprised that some of the free / open source browsers and SSL implementations don't realise the chilling effect this is having on secure communications and do something to rectify it.
I think you'll find that most end users won't trust a site that signs its own cert. I certainly wouldn't. If you think the current system is easy to break, imagine how easy it would be to impersonate a site that signed its own certs.
So go ahead and sign your own cert, but don't expect many customers.
I said a cert that I generate and then sign with my PGP key. Not a self signed cert. The browser would need to be able to recognise this kind of cert, and present the user with the PGP web of trust for them to determine whether they want to trust the site or not.
The difference over a self signed cert or unrecognized CA is that the browser could explicitly recognize a PGP signed site and not scare the user away from using it like it does for self signed certs. The difference for site owners is they can roll a cert with a few commands and don't need to pay for the privilege.
Obviously such certs aren't going to be suitable in every case but they would allow individuals and small businesses to set up encrypted communications with site visitors without paying a tax to a CA for the sole purpose of making a scary browser dialog go away, not for any inherent trust in the signature because there isn't any.
I trust my immediate friends. Well, some of them anyway. I may extend trust to their friends. But a tenuous link to parties beyond that, by the vouching for and of people I don't know well if at all... I may well be missing a deep understanding of the workings of a web of trust, so if anyone wants to enlighten me then that would be great, but I don't trust it.
Don't get me wrong, the CA infrastructure and https are both pretty broken, but WoT does not inspire me.
I still trust a few (not all) CAs more than I trust your PGP signature I'm afraid. Besides which your scheme still needs some way of having a protected comms channel with a WoT PGP sig verification service, a way that isn't vulnerable to MITM or other attacks... so we're back at square 1!
When using a secure connection it's usually because confidential data is being transmitted.
You can eliminate that problem entirely simply by using an open connection: no scary message, but no encryption either.
What's needed is a method of using an encrypted connection without all the scary messages which put people off using the safer encrypted connection.
There is nothing wrong with a self signed certificate as long as this is expected and understood. The message that pops up should helpfully examine the certificate and be scary only if the cert seems to be impersonating a site that is supposed to have a CA cert.
As long as I can tell that the cert has come from the site I am accessing then there should be no drama, just an informative message about the certificate.
The scary message is there for a reason. MITM is actually pretty trivial in a lot of settings, especially on public networks (look up ARP poisoning amongst other things, moxie's sslsniff does this along with a bunch of other tricks), so a self-signed certificate doesn't offer much to me in the way of security as it's absent any authentication.
Is it better than nothing?
I'm not sure. Maybe after the first time, if the browser stores the certificate and checks it's getting the same one every time.
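That store-and-compare idea is trust-on-first-use, as SSH does with host keys. A minimal sketch of what a browser could do, with SHA-256 fingerprints and an in-memory store standing in for whatever persistence a real browser would use:

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    # SHA-256 over the DER-encoded certificate
    return hashlib.sha256(der_cert).hexdigest()

def tofu_check(host: str, der_cert: bytes, store: dict) -> bool:
    """Trust-on-first-use: remember the first cert seen for a host,
    and flag any later change as suspicious."""
    fp = fingerprint(der_cert)
    if host not in store:
        store[host] = fp          # first contact: record it
        return True
    return store[host] == fp      # later contacts: must match
```

The first connection is still a leap of faith, but an attacker then has to be in position at first contact and on every subsequent one, which is a meaningfully higher bar than attacking a one-off connection.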
Taken from https://www.owasp.org/index.php/Testing_for_SSL-TLS_(OWASP-CM-001)
The large number of available cipher suites and quick progress in cryptanalysis make judging an SSL server a non-trivial task. These criteria are widely recognised as a minimum checklist; a server should not allow:
- SSLv2, due to known weaknesses in protocol design
- Export (EXP) level cipher suites in SSLv3
- Cipher suites with symmetric encryption algorithms smaller than 128 bits
- X.509 certificates with RSA or DSA keys smaller than 1024 bits
- X.509 certificates signed using the MD5 hash, due to known collision attacks on this hash
- The TLS renegotiation vulnerability
While there are known collision attacks on MD5 and known cryptanalytic attacks on RC4, their specific usage in SSL and TLS doesn't allow these attacks to be practical, and SSLv3 or TLSv1 cipher suites using RC4 and MD5 with a key length of 128 bits are still considered sufficient.
The following standards can be used as reference while assessing SSL servers:
NIST SP 800-52 recommends that U.S. federal systems use at least TLS 1.0, with cipher suites based on RSA or DSA with ephemeral Diffie-Hellman key agreement, 3DES or AES for confidentiality and SHA-1 for integrity protection. NIST SP 800-52 specifically disallows non-FIPS-compliant algorithms like RC4 and MD5. An exception is U.S. federal systems making connections to outside servers, where these algorithms can be used in SSL client mode.
PCI-DSS v1.2, in point 4.1, requires compliant parties to use "strong cryptography" without precisely defining key lengths and algorithms. The common interpretation, partially based on previous versions of the standard, is that at least a 128-bit cipher key, no export-strength algorithms and no SSLv2 should be used.
The SSL Server Rating Guide has been proposed to standardise SSL server assessment and is currently in draft version.
The SSL Server Database can be used to assess the configuration of publicly available SSL servers, based on the SSL Server Rating Guide.
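As a sketch of how a client could enforce the checklist above, Python's `ssl` module lets you strip out the flagged suites before connecting. The exact OpenSSL cipher string here is illustrative, not a recommendation:

```python
import ssl

# Build a client context and drop the suites the checklist flags:
# export-grade, RC4, MD5, single DES, and anonymous/null ciphers.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!RC4:!MD5:!DES")

# Inspect what would actually be offered in the handshake.
offered = [c["name"] for c in ctx.get_ciphers()]
weak = [name for name in offered if "RC4" in name or "MD5" in name]
```

After filtering, `weak` should be empty while `offered` still contains plenty of strong suites, which is the practical point of the checklist: restricting what you accept costs nothing in interoperability with sanely configured servers.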
One key problem is that we are all too lazy to bother deciding whom we want to trust. This is too difficult! We are happy to rely on browser vendors' huge lists of root CAs from organisations which obviously do not have adequate security and are happy to sell that "secure feeling" of a $10 certificate.
This problem is unlikely to be solved by some technical solution such as improved protocols, IMHO.
Perhaps our browser needs some kind of "trust mode". When in "banking mode" it will not trust domain-validated certificates. When in "dissident mode" it will drop trust in Chinese certificates, or whatever local regime I do not trust. Etc.
Is to involve the user more in determining which master certificates are being used - instead of pretending that the browser maker knows it all.
One stab at this was mentioned above, but you can think of a variety of others too.
"Perhaps our browser needs some kind of "trust mode". When in "banking mode" it will not trust domain-validated certificates. When in "dissident mode" it will drop trust in Chinese certificates, or whatever local regime I do not trust. Etc."
"Why have a ridiculous multi-step process for those when unsigned/unencrypted connections don't deserve any warning at all."
The reason there is no warning for unencrypted connections is that they are not supposed to be secure. The point about https is that it's SUPPOSED to be secure. If, therefore, there is something questionable about the certificate the browser warns you. Plain old http is inherently not secure so no warning should be needed.
There is of course the one that says "if you send data over an unencrypted connection any dodgy bastard will be able to see it". But I'll bet you decided to select "don't display this message again" didn't you?
Then of course there's the fact that you can configure your browser quite easily so it will only communicate with sites that you decide to trust.
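Outside the browser the same restriction is a few lines. A sketch with Python's `ssl` module of a client that trusts only CAs you have chosen yourself, ignoring the system root store entirely (the bundle path is hypothetical):

```python
import ssl

def private_context(ca_bundle: str) -> ssl.SSLContext:
    """A client context that trusts ONLY the CAs in ca_bundle,
    instead of the browser's or OS's bundled root list."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED   # refuse unauthenticated peers
    ctx.load_verify_locations(cafile=ca_bundle)
    return ctx

# Usage (hypothetical bundle of the handful of CAs you actually trust):
#   ctx = private_context("my_trusted_cas.pem")
#   with socket.create_connection(("example.com", 443)) as sock:
#       tls = ctx.wrap_socket(sock, server_hostname="example.com")
```

Any server whose chain doesn't lead to one of your chosen roots is simply rejected during the handshake; there is no dialog for a user to click through.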
It's not rocket surgery.
The whole system is supposed to solve the problem of "How does your browser (Alice) know that she's talking to the right remote server (Bob)?"
So we ask a 3rd party (Trent) to issue a token that can be used to confirm that 'Bob' really is Bob and not somebody else (Mallory).
All of that works fine - the technology of SSL tokens is basically sound, and life is good.
The problem with this model arises when Trent is compromised. Without a trivial way of quickly replacing Trent's tokens, the model fails.
Right now we've got a small number of Trents, and even worse, no way of easily replacing a given 'trust supplier' with a different one. This means that these 'trust suppliers' can do whatever they want without consequences, because it's quite difficult for a Bob to switch trust supplier and effectively impossible for Alice to decide that she doesn't trust a given Trent anymore.
The problem with various gradations of certificates is that the system cannot cope gracefully with different levels of failure, and even if it could "the user" wouldn't know what to do. Heck, "the user" most of the time doesn't know that anyway, so certificates do not manage to inform that same user, and are therefore worthless.
Having browsers require "the user" to jump through numerous screens for self-signed certificates --like how early betas of vista required up to seven clicks to confirm a simple deletion "for your own safety", managing to aggravate "the user" but not achieving anything beyond that whatsoever, in fact creating a more unsafe environment-- in the face of entirely unaccountable CAs is just that more insult to injury.
It's not unreasonable of the comodo CEO to complain about DV certs, though he frames it entirely in commercial terms, whereas we already established that the commercial angle is itself a fertile source of attack vectors. He is also currently at the centre of a nasty breach-of-trust spat and can thus freely lash out at others. He's already lost face, and he probably rightly feels that the others aren't better.
Just as accidents will happen, some "trusted" CAs will turn out to not be trustable, as even in the best case where all CAs are trustable for some users, none can always be trustable for all. But none of the browsers support the notion, nor does the system allow for graceful or even meaningful degradation there.
Even the various proposals to mend this generally build on broken premises, like CRLs and OCSP. The alternatives that rely on DNS in various ways to announce who the site owner gave money to to "prove" his identity manage to open up yet another can of worms. Apart from the inherent problem with identity --a rather deep and as yet ill-understood problem that causes most of the "security vs privacy" tradeoff fallacy, but I digress-- it merely applies generous glue to try and bridge a fundamental misalignment of interests.
Nevermind that you'd then need DNSSEC to be halfway sure you have the right certificate; that in turn involves a lot of spendy crypto in an entirely separate system, en passant also forcing you to trust the various TLD registrars and nowadays the root too, with all the inherent trust problems there.
If you have to trust the chain of companies making up the path from root to the domain you're accessing, what business case is there left for "trusted third parties" running their part of the CA scam? Why keep them around at all? What have they done for us lately to earn or deserve our trust?
And so why do browser makers continue to support that scheme? What's in it for them? Why haven't they even bothered to come up with schemes allowing users to pick their own trusted CA sets, and perhaps been much more active about which root CAs to allow (say, none that don't at least use intermediaries that can actually be revoked in case of need), with a much more direct and volatile browser linkage?
A large part of it is, again, that this oh-so-clever scheme doesn't allow for revoking certificates as soon as they are sufficiently high up the chain. But what use is such a scheme to "the user" if everything conspires to take choice away from him? How does being forced to trust not make the term "trust" utterly meaningless?
"virtually all of which count on SSL to secure their internal networks": Companies use SSL to secure their internal communications which do not involve browsers, but they can easily manage their own keys, and reject authentication based on public roots to avoid the issues cited in the article. Perhaps they don't, but unlike the system used in browsers, there is no political reason to avoid change, only ignorance. This article doesn't help with the ignorance by confusing SSL with browser behaviour. The concession to accessibility should stop after the byline.
To Quote... "Abdulhayoglu is also critical of the entire certificate market for selling credentials for as little as $8 apiece. The low cost means CAs can only turn a profit by doing as little vetting as possible and relying on automated mechanisms that are more susceptible to attacks than those that require the intervention of humans"
This is a very important statement. The low-end market is indeed flooded with cheap SSL offers, but that market is dominated by 3 major CAs, namely RapidSSL, GoDaddy and, you guessed it, Comodo.
So by his own admission, in order for his company to turn a profit in this highly competitive market, Comodo (as part of that network offering low-cost SSL certificates) must do 'as little vetting as possible'.
This is made even more worrying by Comodo's recent promos, including free upgrades to EV SSL.
A question: if it's hard to turn a profit on Domain Validation, how the hell does this company make any money on Extended Validation by giving it away free? Do they do 'as little vetting as possible' here as well?
A CA is paid to validate to the very highest of standards, as set by the industry and the CA/Browser Forum guidelines. Sorry, but if you can't make a profit, don't cut back on the rules and misissue certificates. Bow out gracefully and let the other companies get on with doing their job properly.
If Comodo is the big enterprise player it claims to be, then surely it does not need to sustain itself from this low-end, unprofitable DV market. It can stand proudly on its soapbox and lead the market away from DV certificates. If they are so dangerous, and there is no profit to be made, what is stopping them?
Unfortunately it's a case of Comodo being the worst culprit in low-cost certs and having to seek alternative methods of cutting costs to increase margin. That is why they employed a number of RAs to do their job, and it backfired spectacularly. Now they try to mask that as an industry-wide problem!
Come on, wake up everyone.
Have you asked a normal person (someone who doesn't know of the theory of SSL) what "the padlock" means? Chances are they won't have a clue.
In my experience SSL is just a hindrance to the average user, something that stops them from buying crap off the Internet when IE6 fails to validate a cert because of an old set of CAs.
So why are we bothering with EV Certs when the greater risk is the average user who will press anything and confirm every dialog box just so they can order their latest Dan Brown novel tomorrow morning?
I can only see that state-sponsored advertising is the solution, like Clunk-Click of yesteryear which also demonstrated the general public's ignorance to safety and desire to have their eyes poked out by an Austin Allegro cigarette lighter.
3-4 years ago I would have agreed with this. Ask the average user today and I think you will be surprised. The majority of online consumers now know to look for the gold padlock before they transact online. But let's face it, it's taken years to get to this stage.
It's very rare to see an SSL certificate flagged by a browser anyway; that's because the system as a whole is reliable. PKI and the CA practice is a very robust system. As others have pointed out, the problem is that when it does go wrong, CAs are not held accountable.
It's a case of the few letting the many down, then everyone getting the blame; that's surely not fair.
OK, what we need is greater awareness of what SSL is and what to look for before handing over your credit card information. That really is the job of the big browser companies to enforce and drum into the users of their browsers.
Years ago I saw Thawte issue a certificate to "Internet Explorer". It was used to install malware inside IE. Users were warned: "Do you want to install xxxxxx from "Internet Explorer"?"
If the Internet wasn't run by marketers Thawte would no longer exist. But it's still there issuing certificates.
The current hierarchical approach has single points of failure that multiply consequences the further up the food chain you go.
I should have a number of certs for my PKI that are vouched for by the people that I know. This would mean you would see a number of mutual "friends" who all (hopefully) agree that I am who I am and hopefully highlights any anomalous outliers. You could also look at the certs accepted by your more technically inclined friends and compare them to the ones you are being asked to accept.
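The mutual-friends idea reduces to a quorum over endorsements: accept a cert only if enough of your own friends independently vouch for it. A toy sketch (all names and the quorum of 2 are illustrative):

```python
def vouchers_for(cert_fp: str, endorsements: dict, my_friends: set) -> set:
    """Which of MY friends have endorsed this certificate fingerprint?
    endorsements maps person -> set of fingerprints they vouch for."""
    return {f for f in my_friends if cert_fp in endorsements.get(f, set())}

def trusted(cert_fp: str, endorsements: dict, my_friends: set,
            quorum: int = 2) -> bool:
    # Require several independent friends to agree, so a single
    # compromised or careless voucher stands out as an outlier.
    return len(vouchers_for(cert_fp, endorsements, my_friends)) >= quorum
```

Comparing `vouchers_for` results against those of your more technically inclined friends is exactly the anomaly-spotting described above: a cert that only one outlier vouches for deserves suspicion.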
Of course this would involve throwing away a perfectly good monopoly, and fewer sheeple.
A related problem is that browsers like FF are now being configured to make it hard for users to accept self-signed certs. This led to us having to take SSL off clients' webmail connections because users kept complaining that 'the browser said the security was faulty.' Result: No security.
This situation is also, I suspect, responsible for the number of requests for certs relating to mailservers running on subdomains.
All of which shows that ill thought-out security can reduce security instead of improving it.
Never use self-signed certs on browser stuff; it is just a road to pain when users are involved. It's OK for non-browser stuff. For web stuff you will have to use a bought SSL cert for your webmail subdomain, I guess, or let users have unsecured email where packets can be read across the internet. Let's just hope they don't send passwords to each other in email!
Self-signed certificates allow you to have the benefit of SSL encryption without the need to purchase a certificate from a CA.
You lose the benefit of a trusted third party vouching for you, but you maintain the security of the encrypted link, so it's not all a waste.
I personally would want to use a trusted partner for my webmail, but I might myself be happy with a self-certified certificate for services I expose on the web for my own use.
Also, using a public CA on a closed intranet can be a serious issue; the alternatives are setting up a local CA or using self-signed certificates.
In either of the last two cases, having Firefox bitch about self-signed certificates is less than helpful.
"In either of the last two cases, having Firefox bitch about self-signed certificates is less than helpful."
Then you don't understand the technical side of it. Without an authority you have no idea who you're talking to. In a public setting MITM is really quite easy, so without the third party vouching for you, I have no idea who you are. What use is encryption if I'm only encrypted as far as your MITM-bot?
Setting up a local CA for an intranet is pretty trivial, not a serious issue at all, and firefox's 'bitching' is there for a damn good reason.
The public trust apparatus and certification authorities are broken. Lessening the importance of trust and authentication in secure comms is not a very good way to address this.
I don't care if you're an (inter)national certificate signing authority. Unless I can ring you up and have you read out your key fingerprint so I can check it against the one I have here, I don't trust you.
"Central Signing Authority" my muscular buttocks. Who the hell does he think he is?
I know the telephone number is good if the fingerprint they read out matches the one I've got. And because I wasn't stupid enough to find the telephone number in the same place as I found the key.
And because of my muscular buttocks.
All joking aside ... well, MOST joking aside, I know Web Of Trust isn't a perfect system. It's just that it's obvious to me that Central Authority doesn't even pass the first hurdle. Sure, let's all trust this guy we've never met, because ... erm.... he's got lots of money?
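The phone check described above boils down to formatting a fingerprint so two people can read it out and compare it. A sketch, using the conventional colon-separated hex form:

```python
import hashlib

def spoken_fingerprint(der_cert: bytes) -> str:
    """Format a cert's SHA-256 fingerprint in the colon-separated
    hex form that's easy to read out over the phone and compare."""
    digest = hashlib.sha256(der_cert).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

The security of the comparison rests entirely on the out-of-band channel: the phone number must come from somewhere other than where the key came from, as the post says.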
Comodo's CEO is correct only in that the browser makers need to assert more control over the SSL CA industry and the processes it follows, and that Certificate Authorities reduce costs by reducing verification efforts. It's ridiculous that he points blame at the Iranian government and blames VeriSign for dragging down the industry, as smokescreens for the real problem: that Comodo gave access to its certificate-signing credentials to its resellers rather than doing the validations itself.
There is already an easy fix for this scenario and it's called Extended Validation SSL. EV SSL is the ONLY type of SSL that requires a STANDARDIZED set of validation procedures from the Certificate Authority. The CA's procedures are audited and verified annually. The problem is that, aside from the green URL bar for EV certs, browsers must differentiate between the different SSL types - DV, OV, and EV. Otherwise most website visitors don't know the difference and will buy the cheapest possible product that displays a padlock.
In the end, Certificate Authorities are only trying to keep up with market demand and are not going to self-regulate if it means any dip in profit. Maybe Google leading the regulation would be a wise choice here?