This is soooo easy
Hamster and Ferret do this soooo well. Even better, this lets you sidejack eBay sessions and start the bidding! Avast ye scurvy dogs, prepare to be boarded!
In the morass of Web 2.0 insecurity, Gmail and other Google-hosted services stood out as a beacon of hope. That's because they were believed to be the only free destination that offered protection against a decade-old vulnerability that enabled hackers to steal sensitive authentication details as they pass over Wi-Fi hotspots …
Yes, Salesforce.com is very secure... it's so secure, in fact, that when a remote user's ISP renews their IP address, Salesforce won't allow the Outlook client to connect until said IP address has been added to the whitelist by an admin.... The Outlook client doesn't look at cookies, don't you know.... Grumble grumble...
If you're using SSL for the entire session (not just when presenting login credentials), then surely this reset-packet attack will be evident when your browser pops up the warning that you are now leaving a secure session? If I got that halfway through doing my Gmail I would certainly wonder wtf was happening.
Security is a compromise between denying access to those that should not have it, and allowing easy use for those who are authorised. It's interesting to hear about this attack vector, and we can hope that Google does something about it RSN.
However Gmail is an excellent service, and secure enough for many users. After all, locks only keep out honest people, and safes are rated by the amount of time it takes to crack them open.
Anyone know the answer to this?
I've been thinking of making our webserver HTTPS only but am worried about the overhead purely because no-one else seems to do it.
Currently session IDs time out and are invalidated if the user hits logoff or changes browser / machine. Personal data can't be changed unless the password is re-entered. A session hijacker would be more of an irritant than anything.
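For what it's worth, a minimal sketch of that kind of check - a session timeout plus a crude browser/machine binding - might look like this (the store, field names and thresholds are all made up for illustration):

```python
import time, hashlib

SESSION_TTL = 30 * 60          # 30-minute timeout (illustrative value)
sessions = {}                  # session_id -> {"user", "expires", "fingerprint"}

def fingerprint(environ):
    # Crude browser/machine binding: hash of User-Agent plus client IP.
    raw = environ.get("HTTP_USER_AGENT", "") + environ.get("REMOTE_ADDR", "")
    return hashlib.sha256(raw.encode()).hexdigest()

def validate_session(session_id, environ):
    record = sessions.get(session_id)
    if record is None:
        return None                                # unknown or logged off
    if time.time() > record["expires"]:
        del sessions[session_id]                   # timed out
        return None
    if record["fingerprint"] != fingerprint(environ):
        del sessions[session_id]                   # browser / machine changed
        return None
    record["expires"] = time.time() + SESSION_TTL  # sliding timeout
    return record["user"]
```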
As it happens, I came across a free utility that claims to set up a VPN when you use a wi-fi hotspot. It routes all your traffic through a server in the USA, which just moves the trust element.
With the CIA running a venture capital fund, this could easily be the new version of the BBS on a Carlisle telephone number that runs on a computer in Cheltenham. Did that ever happen? Who can tell? But who can we trust, the way the US government rides roughshod over its own laws on wiretaps?
Matt wrote:
"So why don't they enforce SSL? What's the overhead to them? I suppose it takes a little CPU to run the encryption algorithm but surly not that much."
HTTP is stateless, so once it's served a page it's done. Each web page is a separate transaction.
SSL, on the other hand, establishes a session, so the server has to keep that session state around until it's closed or times out. Too many concurrent users with a long timeout and you could run into problems.
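As a rough illustration of that session state (not the commenter's own example): a client can hand a cached SSL session back to the server to skip the full handshake, which is exactly why the server ends up keeping per-client session data around. A sketch using Python's ssl module with a hypothetical host name; note that under TLS 1.3 the session ticket can arrive after the handshake, so treat this as illustrative only:

```python
import socket, ssl

ctx = ssl.create_default_context()
host = "mail.example.com"   # hypothetical server

# First connection: full handshake; the server hands us a resumable session.
with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        cached = tls.session          # this is what the server also has to remember

# Second connection: offer the cached session for an abbreviated handshake.
with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host, session=cached) as tls:
        print("session reused:", tls.session_reused)
```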
You can't proxy images on SSL sites.
Matt wrote:
"I also thought SSL had a side effect of slightly compressing the data so you save on bandwidth."
Standard HTTP can be compressed on the fly if server and browser negotiate it.
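A quick sketch of that negotiation, in case it helps (hypothetical URL; the client advertises gzip and only decompresses if the server actually opted in):

```python
import gzip
import urllib.request

req = urllib.request.Request("http://www.example.com/",
                             headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    body = resp.read()
    # The server only compresses if it agreed to; check before decompressing.
    if resp.headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
print(len(body), "bytes after (optional) decompression")
```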
I've heard it so many times - "we can't afford the processing overhead to wrap our whole site in SSL" - but the fact of the matter is that this is currently the only way we know how to prevent session-ID cloning. Ironically, I see it all the time: on one hand they can't afford the processing overhead, yet at the same time they use cheapo semi-skilled C#/PHP hackers writing clumpware that hammers the DB and processor. Also, with a few tweaks to their infrastructure they could proxy the SSL layer upstream to free up the application servers. That leaves only the internet bandwidth, so if you offer customers a choice - as, until recently, we thought Google had done - then everyone can choose between performance and security...
The main problem with SSL is that you cannot use name-based virtual hosts with it on your web server.
Each host that uses SSL has to be assigned a unique IP address. This is because the SSL negotiation happens before the HTTP headers of the request are parsed, and SSL requires that the host name in the certificate matches the host name the browser asked for.
Because of this, many websites have one domain name for SSL, such as secure.example.com, and once the user has logged on there, future requests can be passed off to any of the other URLs the site maintains.
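To make the ordering concrete, here is a toy server-side sketch in Python (hypothetical certificate file, and it deliberately ignores SNI): the certificate is sent during the TLS handshake, i.e. before the server has read a single byte of the HTTP request, so the Host: header arrives too late to pick between name-based virtual hosts.

```python
import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("secure.example.com.pem")   # certificate chosen up front

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 443))
srv.listen(5)

conn, addr = srv.accept()
tls = ctx.wrap_socket(conn, server_side=True)   # handshake: cert already presented
request = tls.recv(8192)                        # only now do we see "Host: ..."
print(request.split(b"\r\n")[0])
```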
SSL encryption does put an additional workload on a server, especially at the initial handshake stage to establish authenticity of the two parties (i.e. before it goes down to bulk cipher, which has a lower CPU burden).
You need to be concerned about this in two scenarios:
1. Your server is already operating at close to its processing capacity, so the additional workload from SSL will start to max it out.
2. You are operating a very large server farm, where the incremental workload actually costs a fair bit in additional hardware to manage the burden.
If you have a box that isn't really struggling under the current load, <massive generalisation>your box should have the spare processing capacity to cope with the additional SSL workload</massive generalisation>. Your mileage will vary though.
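If you want a feel for why the handshake dominates, a rough micro-benchmark (using the third-party cryptography package; key size and counts are arbitrary) compares RSA private-key operations - roughly what the server does per full handshake - with bulk AES encryption of a whole page's worth of data:

```python
import os, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ct = key.public_key().encrypt(os.urandom(32), oaep)

t0 = time.perf_counter()
for _ in range(100):                      # 100 "handshakes"
    key.decrypt(ct, oaep)
rsa_time = time.perf_counter() - t0

page = os.urandom(100 * 1024)             # a 100 KB page
enc = Cipher(algorithms.AES(os.urandom(16)), modes.CTR(os.urandom(16))).encryptor()
t0 = time.perf_counter()
for _ in range(100):                      # 100 pages of bulk cipher
    enc.update(page)
aes_time = time.perf_counter() - t0

print(f"100 RSA private-key ops: {rsa_time:.3f}s, 100 x 100KB AES: {aes_time:.3f}s")
```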
"The main problem with SSL is that you can not use name based hosts with it on your web server.
Each host that uses SSL has to be assigned a unique IP number. This is because the SSL negotiation happens before the HTTP headers of the request are parsed an d SSL requires that the host name in the certificate matches the host name of the IP address.
Because of this many websites have one domain name for SSL such as secure.example.com and once the user has logged on there, future requests can be passed of to any of the other URLs the site maintains."
Not true on IIS6 any more - you can have multiple SSL sites running off the same IP. You need to edit the IIS metabase; you can't do it via the MMC.
In most instances you need to identify which parts of your server are "harmless" - nothing very serious is transmitted around - and which parts are "potentially dangerous" - user logins, names, addresses, credit card numbers and so on.
Amazon does this fairly well - they have an innocuous session which follows you around in the address bar which just maintains your shopping basket and a few other odds and sods, nothing incredibly harmful if someone hijacks it... hmmm unless you don't want someone to see that "butt plug" in your recent purchases list of course - I guess that could give a whole new meaning to "sensitive data".
Anyway, once you need to access any (genuinely) sensitive data you hit the SSL login and your session is migrated to a secure one - a secure cookie is used to maintain your session (one of the properties of cookies is that they can be set to be sent ONLY over SSL).
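For anyone who hasn't met that cookie flag, a tiny sketch using Python's standard http.cookies (the session value is obviously a placeholder - generate it with a proper CSPRNG):

```python
from http.cookies import SimpleCookie

# After the SSL login, issue a fresh session cookie that the browser
# will only ever send back over HTTPS, and that scripts cannot read.
cookie = SimpleCookie()
cookie["session"] = "fresh-random-session-id"   # placeholder value
cookie["session"]["secure"] = True              # only transmitted over SSL
cookie["session"]["httponly"] = True            # hidden from JavaScript
cookie["session"]["path"] = "/"

print(cookie.output())   # the Set-Cookie header to emit from the secure side
```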
The reason people don't generally wrap their entire site in SSL is not the overhead on the server, but the slowdown the users will experience. Everything being downloaded is encrypted on the server and decrypted on the client and vice versa for uploads (such as form data)... all this takes a little time and makes the site seem slower.
For something like email, really, if you can use SSL, it's probably better to do so - generally it's only fairly small amounts of data being encrypted/decrypted - all of which is "potentially dangerous" (forgotten your password? click here to have it emailed). Google, at least, seems to have a reasonable turnaround time from discovery to fix.
There are workarounds for all these excuses. You can sidestep the name-based resolution problem by proxying the SSL session; the main reason most servers can't do a name-based lookup is that the SSL session must be established before the server can even read the host name string sent by the browser. Of course this means that all subservers must use the same (wildcard) SSL certificate and be sub-domains of the certificate's domain.
But my basic argument is that it can be done; it's a trade-off between accepting some processing overhead and living with a flawed security model. And the second part of my argument is that more MIPS are lost to poorly crafted server-side scripts than to SSL overhead. Clean up some scripts and use SSL! Furthermore, the subdomain/IP issue is not really an issue except for very small players. I run a small outfit and still have 5 IPs...
John: This is why wildcard security certificates exist. You buy *.domain rather than host.domain. Costs a bit more per certificate, but you save overall by needing fewer certificates to go completely SSL.
Matt: You still need wildcard certificates as well. IIS6 cannot do SSL with host headers using regular certificates.
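A much-simplified sketch of what a *.domain certificate actually buys you - the wildcard covers exactly one label, so it matches secure.example.com but not a deeper sub-subdomain (function and rules are simplified for illustration; real browsers apply a few more checks):

```python
def wildcard_matches(cert_name: str, hostname: str) -> bool:
    """Does a certificate name like '*.example.com' cover this host?"""
    cert_labels = cert_name.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False          # the wildcard stands in for exactly one label
    return all(c == "*" or c == h for c, h in zip(cert_labels, host_labels))

print(wildcard_matches("*.example.com", "secure.example.com"))   # True
print(wildcard_matches("*.example.com", "www.example.com"))      # True
print(wildcard_matches("*.example.com", "a.b.example.com"))      # False
```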
Pie Man wrote:
"There are workarounds for all these excuses."
... apart from the fact that ISPs can't proxy content on SSL websites.
So either you host your images on a non-SSL server, which will pop up browser warnings about mixed secure/non-secure content, or you serve them from an SSL server within your domain.
For sites like Gmail, where most of the content is personal, this may not be a major issue, but for an e-shop like Amazon it would be a big factor. Given the number of "304 Not Modified" responses I see in my access logs, this would increase their bandwidth costs by orders of magnitude.
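For context on those 304s: a conditional GET lets a browser or an ISP proxy revalidate a cached image and get back an empty 304 instead of the full body; over SSL the ISP proxy can't see the traffic at all, so that saving disappears. A sketch with a hypothetical image URL:

```python
import urllib.error
import urllib.request

url = "http://www.example.com/logo.png"   # hypothetical cacheable image

# First fetch: remember the validators the server sends back.
with urllib.request.urlopen(url) as resp:
    etag = resp.headers.get("ETag")
    last_modified = resp.headers.get("Last-Modified")

# Revalidation: replay the validators; an unchanged resource earns a 304.
req = urllib.request.Request(url)
if etag:
    req.add_header("If-None-Match", etag)
if last_modified:
    req.add_header("If-Modified-Since", last_modified)
try:
    urllib.request.urlopen(req)
    print("200 - full body sent again")
except urllib.error.HTTPError as e:
    if e.code == 304:
        print("304 Not Modified - no body, bandwidth saved")
    else:
        raise
```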
If I actually used my freemail for anything important whilst surfing in McDonalds, I deserve to have it taken from me, edited to say rude things about my employer and then sent to my boss from my own email address.
The way I would measure my email provider's security is to ask them if they would allow me to connect wirelessly from a Starbucks. If they will... I won't use them.
There is hardware that takes the burden of processing SSL off the primary processors, including external crypto cards and indeed dedicated SSL farms, so there isn't any great excuse to claim processing burden if you are serious about your security. It's true that there are other limitations to SSL but processing isn't one of them.
If you think SSL will "overload" your servers... well, there are those "reverse proxies" you can look at, like IBM's Tivoli WebSEAL and such. Basically, you can offload all SSL operations *and authentication* to the thing, and it'll resend the request to the "back end", which can be either SSL or plain old HTTP. As an added plus, server-side cookies can be "hidden" and stored in the server-side session instead, and the thing's own session cookie stores a whole lot more than just a "random string". So the thing itself is partly immune even to the "session hijack" attack, even if SSL isn't enabled.
Really, the only thing I see as a problem with SSL is the whole "one certificate, one IP, one server" thing, and even then, using the "reverse proxy" method you can stick one single proxy in front of as many back-end servers as your proxy's hardware can cope with.
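As a rough sketch of that offload idea (a toy, not any real product's behaviour - the backend address, certificate file and thread-per-connection design are all just for illustration): clients speak HTTPS to the proxy, which forwards plain HTTP to a back-end box on the internal network.

```python
import socket, ssl, threading

BACKEND = ("10.0.0.10", 80)                       # hypothetical internal web server

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("wildcard.example.com.pem")   # hypothetical wildcard cert + key

def pump(src, dst):
    # Copy bytes one way until the source closes.
    while True:
        data = src.recv(8192)
        if not data:
            break
        dst.sendall(data)

def handle(client):
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(client, backend), daemon=True).start()
    pump(backend, client)
    client.close()
    backend.close()

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 443))
listener.listen(50)

# wrap_socket on the listener means each accept() performs the TLS handshake,
# so the back-end servers never burn a cycle on crypto.
with ctx.wrap_socket(listener, server_side=True) as tls_listener:
    while True:
        conn, _ = tls_listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```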
I am encouraged that people followed up the most egregious posts with corrections. But I decided to not trim these out of my long post, as a way of corroborating their points.
SSL puts a significant amount of processor load on both ends of the communication, except where one end has a competent crypto card to do most of that computation in hardware. (Note that said card needs to support the form of encryption being used, and may not be very upgradeable in the face of new algorithms; there are tricks to make them upgradeable, but I don't know how prevalent those are, as I haven't looked into this since getting some upgradeable cards 4-6 years ago.) Not only do crypto cards allow the server to offload most of its SSL overhead, they also tend to do it significantly faster than the server would. (Note: a crypto card is not magic. The software needs to be compiled specifically for the card. If you're using a proprietary web server, that means you can only use crypto cards which are specifically supported. If you're using an open source web server, it means you can only use crypto cards which provide libraries usable by your web server.) As a personal anecdote, when we switched to using crypto cards, we spent about as much as one new server would have cost - and we got as much improved performance as if we'd added three new servers.
Many web browsers complain about unencrypted images on encrypted pages, so you have to encrypt your images also.
You need to retain the SSL session for longer (a processor-for-memory trade-off: to reduce the number of full handshakes, we keep sessions alive between page fetches, but we need the memory to be able to do this).
Most CAs are very reluctant to give out multi-named certs, so there's usually just one name on the cert. This limits the number of systems that can use the same cert, and if the name on the cert doesn't match the name on the web page, the browser complains, as it can't distinguish legitimate variation from attacks. (That having been said, it is possible to get wildcard certs and multi-named certs; you just need either to be persistent - shopping around until you find someone who'll do it - or to set up your own CA to do it.)
Two tier auth works great, so long as the transition from insecure to secure requires an authentication step. If the site just converts the cookie behind the scenes, then the insecure cookie is effectively a secure one, it's just easier to get. (That is, actual two tier auth is great, but I've seen sites that claimed to do two tier auth without doing it, in a misguided attempt to simplify things for the lusers.)
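A minimal sketch of doing that transition properly (all names are hypothetical, and check_password is a stand-in for whatever real verifier the site uses): authenticate again at the boundary and mint a brand-new session ID, so the cookie that travelled over plain HTTP never becomes a key to the secure area.

```python
import secrets

sessions = {}   # session_id -> {"user": ..., "secure": bool}

def check_password(username, password):
    # Placeholder: stand-in for the site's real credential check.
    return password == "correct horse battery staple"

def login_over_ssl(old_session_id, username, password):
    if not check_password(username, password):
        raise PermissionError("bad credentials")
    sessions.pop(old_session_id, None)              # retire the sniffable cookie
    new_id = secrets.token_urlsafe(32)              # fresh, unguessable ID
    sessions[new_id] = {"user": username, "secure": True}
    return new_id     # send back in a Secure cookie, never over plain HTTP
```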
@Pie Man: Some sites have performance problems even after optimizing their scripts, because their management is too tight-fisted with the cash - unwilling to shell out $3,000 to get another server until the site performance numbers lag enough that their millions in profits show a dip due to low customer satisfaction. Of course, a few $100-$500 crypto cards would generally fix them up fairly quickly, since they're really only needed on the exterior proxy boxes anyway, and not having that security will cost them far more at some point down the road.
If you use wifi, then you get what you ask for really. I don't even use my home wifi without openvpn running over it, even though wpa2 with a complex password might be secure.
In any case it's just silly to use an unencrypted wifi connection for anything remotely sensitive. The obvious problem is that loads of people don't understand that - most of them don't even seem to realise that it's a good idea to encrypt their personal WLANs.
That said, I managed to use the non-SSL version of Gmail at Defcon 15 without getting owned, because I routed all of my traffic through an SSL'd VPN, which was tunnelled through an SSH session, which was tunnelled in yet another SSH session. (The SSH sessions were to deal with stupid complexities of my target network.) I then blocked every host on the local subnet except the gateway with ebtables. Sounds complex, but I had a nifty script. And I had no problems pushing 2 Mbit in both directions, even with all that encryption overhead.
Long story short, just pretend that you're at defcon whenever you switch on that wifi adaptor and you'll be safe (or safer, at least...)
Dunno why you lot are struggling to work this one out... SSL offload network appliances have been around for donkey's years. Funnily enough anyone running large load balanced server farms uses this and doesn't just kick the arse out of their servers.
Oh and you also probably need to use the SSL offload devices in order to load balance any kind of application that requires session persistence anyway...
Anyway, surely the Gmail problem is resolved by having the user select in their profile whether they want to use SSL or not. If they select this option then the system will never allow them to drop into unencrypted mode mid-session - which is actually the problem here... hmmm, that sounds quite simple...
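A sketch of that idea as framework-agnostic WSGI middleware (the preference lookup and helper names are invented for illustration): if the account is flagged "always use SSL", any plain-HTTP request gets redirected to the https:// equivalent rather than being served, so the session can never drop out of encrypted mode.

```python
def enforce_ssl(app, user_prefers_ssl):
    """Wrap a WSGI app; user_prefers_ssl(environ) checks the account setting."""
    def wrapped(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https" and user_prefers_ssl(environ):
            url = ("https://" + environ.get("HTTP_HOST", "")
                   + environ.get("PATH_INFO", "/"))
            start_response("302 Found", [("Location", url)])
            return [b""]
        return app(environ, start_response)
    return wrapped
```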