OK, store some holiday snaps on it, or some amateur code, but you'd have to be bonkers to use anything in the "cloud" for serious work.
This story is just the tip of the iceberg of reasons for not using a service like this.
Web-based code hosting service Bitbucket experienced more than 19 hours of downtime over the weekend after an apparent DDoS attack on the sky-high compute infrastructure it rents from Amazon.com. This in turn left many developers without access to code projects hosted on Bitbucket, a GitHub-like service based on the Mercurial …
That's unfortunate. It does not help that folks like Stallman are acting as the "Glenn Beck" of the software industry, with their anti-cloud agitprop.
Until Azure is up and running, there is really not a good alternative to Amazon's services, unless you can implement your work in Google's relatively limited cloud-like service.
That'll teach you to put all of your water in one bucket.
But perhaps security at amazon will increase after this, and so the cloud will have a silver lining.
Maybe Amazon had its head in the clouds regarding security, this should bring them back to Earth.
Sorry, I'll stop with the puns now. Anyone else?
"The lesson here is: 'Don’t bet the farm on a single cloud provider,'" says Craig Balding.
Well, that may address availability, but one's data then becomes available not just to oneself but to whoever runs whichever cloud one places it on.
Now how do we address confidentiality and integrity? Oh, that's right: we trust a corporation, its IT dept and its HR dept, to be perfect.
The real lesson to be learned is don't bet the farm on ANY provider.
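On the integrity half of that question, one partial answer is to keep your own checksums rather than trusting the provider's: a keyed tag computed with a key the provider never sees will catch silent tampering. A minimal stdlib sketch (the function names and key handling here are mine, not any provider's API):

```python
import hmac
import hashlib

def tag(data: bytes, key: bytes) -> str:
    """Compute a keyed integrity tag locally, before uploading.

    The cloud provider never sees the key, so it cannot forge the tag.
    """
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, expected: str) -> bool:
    """Check a downloaded copy against the tag kept at home."""
    return hmac.compare_digest(tag(data, key), expected)
```

This only detects tampering; confidentiality would still need client-side encryption on top.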
It's interesting how many people complain when a cloud service craps out, and then the obligatory phrase 'That'll teach you to put all your eggs in one basket...' appears...
In reality, how is this different to running your service in a single data centre, on your own servers, with your own comms gear, firewalls, anti-DDoS appliance, edge routers, software licences and support contracts?
No matter how much failover gear you've got, when someone launches a large-scale DDoS at your site, you're still going down until the upstream ISP can sinkhole the offending traffic.
Most sites are single points of failure, and how many can realistically afford real-time replication between two data centres and double the hardware/software/support/hosting/bandwidth?
Personally, I'm not a cloud user yet; I'm still hosting on my own hardware. But when these cloud services have been running for a while longer and they sort out real-time support issues, they should afford us business users the kind of scalability and resilience that we simply could not afford ourselves. I think the cloud will be the data centre for the next few decades; the days of buying your own gear are numbered.
I'll get my coat.
Have to go along with that. It is just stunning how people even entertain this as a serious business tool. You'd have to be utterly insane to send your business off to live in the ether somewhere.
Business always wants to cut costs (at all costs). But sometimes costs are unavoidable. In college we learned about the difference between gross sales and net profit. In their rush to reach 1:1 these people are getting just what they deserve.
Cloud computing is the logical extension of outsourcing: another scheme that is great for the vendor, not so grand for the customer. But there's one born every minute.
"He asks why the storage isn't on a separate channel - and why Amazon doesn't have methods in place to rapidly detect and combat such DDoS attacks."
BECAUSE IT'S CHEAPER. The Internet is always a bit cheaper than an MPLS VPN... save a few bucks, get pounded into the ground by DDoS. You get what you pay for.
"how is this different to running your service"
Well, one might think that an internal service inside a company would not be accessible from the Internet, or only through VPN. Like, um, 99% of company servers at this point in time.
Any serious company knows that work is done on internal servers, and you never, ever, put an internal server in direct contact with the Internet.
That is also why companies need sysadmins and network admins and firewalls and all that jazz. When the data is important enough, the cost is secondary.
The only possible advantage to cloud computing is a provider putting up a bullet-proof environment and using the same structure for multiple clients, thus saving on scale. But I'll bet the cost of that is way above whatever is offered now.
It will take more big failures like this one for companies to notice the real issues and act on them; this company has now tasted the true cloud experience and will react.
Of course, since they are still going for cloudy stuff, they haven't really had a hard enough lesson yet, but they will.
It's true that your own local, outward-facing servers are at least as vulnerable to DDOS attacks. However, part of the implicit promise of cloud computing was that it makes data and processing widely distributed, so that the failure of even an entire data center wouldn't cause a significant interruption of service. That a relatively simple attack can take down a whole company's resources - both public and internal - is a devastating blow to that promise. Amazon's cloud now seems to have much less advantage over other hosting providers. It would be interesting to test how well other clouds handle similar attacks.
In short: the FBI was investigating someone, went to their (cloud) host, and took the entire data centre down. Several other companies who just happened to be using the same host went out of business.
The moral: many baskets. Not just different data centres, use different companies (and have copies somewhere you can physically get to them). In fact if you can, host your systems on different planets.
I completely agree. Right now, cloud services are the equivalent of a brand-new version of an operating system from Microsoft: it looks shiny and promises much, but there's no way you'll install it until everyone else has discovered 85% of the problems.
As with all new economies of scale, there will be problems; there always are. But over time most will be mitigated, and naturally, as Pascal says, you're not going to store hyper-critical data there. But, and this is a big but, for most businesses this is, in time, where they will ultimately host mission-critical systems. Business and CFOs will demand it. The mindset of sysadmins will simply have to change from 'how do I protect my own personal farm?' to 'what kind of fabric-layer encryption am I using in the cloud to protect my data, and how do I replicate it?'
I agree with both: it's not ready yet, far from it, as demonstrated by both Google and Amazon. But you'll have to accept that it will get better over time, security issues will be addressed, and the economics of running in the cloud will ultimately lead to its adoption, like it or not.
Thing is, the DDoS would be against the URL/logical (IP) address, so even if the service could be moved dynamically, the end result would still be denial of service: the bad packets would still reach their destination. So cloud computing could work for disaster recovery (power failure, nuclear war, whatever), but it's not a replacement for good security from the cloud and hosted-service provider.
Also, the point about the storage being on the same interface sounds like nonsense. I am making a couple of assumptions here: that they access the service via the public Internet rather than via private means (i.e. VPN/MPLS etc.), and that they lost access to everything. In which case the DDoS would have been against the address used to provide access to the hosted service, not the storage, and hence would deny access to the entire service.
Although Amazon "uses standard DDoS-fighting techniques such as syn cookies and connection limits" and also "maintains internal bandwidth which exceeds its provider-supplied Internet bandwidth", their pipe has a finite size which can be filled with a large enough flood of unwanted packets - which can come from anywhere.
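The "connection limits" part of that boils down to per-source bookkeeping: accept connections from a source until it hits its cap, then drop. A toy Python sketch of just that bookkeeping (the cap value is invented; real anti-DDoS appliances do this in hardware at line rate, and it does nothing against raw bandwidth floods):

```python
from collections import defaultdict

class ConnectionLimiter:
    """Toy per-source-IP concurrent connection limit."""

    def __init__(self, max_per_ip: int = 100):
        self.max_per_ip = max_per_ip
        self.active = defaultdict(int)  # ip -> open connection count

    def on_connect(self, ip: str) -> bool:
        """Return True if the connection is accepted."""
        if self.active[ip] >= self.max_per_ip:
            return False  # source is over its cap: drop
        self.active[ip] += 1
        return True

    def on_close(self, ip: str) -> None:
        """Release a slot when a connection closes."""
        if self.active[ip] > 0:
            self.active[ip] -= 1
```

As the comment above notes, this caps connection-state abuse from any one source but cannot help once the pipe itself is full.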
Hopefully, initiatives to filter malicious network traffic at its source (eg. http://www.darkreading.com/blog/archives/2009/09/dutch_isps_sign.html) will gain traction. Straight bandwidth flooding attacks, such as the ones that struck Bitbucket, are precisely the type of attack that can be curtailed with a simple packet symmetry filtering mechanism, situated close to the source of the malicious traffic.
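The packet-symmetry idea rests on the observation that legitimate flows are roughly two-way (data out, ACKs back), while a flood source sends far more than it ever receives. A toy Python sketch of the counting involved, with made-up thresholds; a real deployment would sit in the source ISP's network, as the linked initiative proposes:

```python
from collections import defaultdict

class SymmetryFilter:
    """Toy packet-symmetry check, run near the traffic's source."""

    def __init__(self, max_ratio: float = 50.0, min_packets: int = 100):
        self.max_ratio = max_ratio      # allowed out/in imbalance
        self.min_packets = min_packets  # don't judge tiny samples
        self.sent = defaultdict(int)
        self.received = defaultdict(int)

    def record(self, host: str, direction: str) -> None:
        """Count one packet sent by ('out') or delivered to ('in') host."""
        counter = self.sent if direction == "out" else self.received
        counter[host] += 1

    def is_suspect(self, host: str) -> bool:
        """Flag hosts whose outbound traffic dwarfs their inbound."""
        out = self.sent[host]
        if out < self.min_packets:
            return False  # not enough traffic to judge yet
        return out / max(self.received[host], 1) > self.max_ratio
```

A one-way bandwidth flood trips the ratio almost immediately, while an ordinary request/response flow stays well under it.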
There are many assumptions and ridiculous comments above, with the exception of a couple of sensible ones. I'm curious how many of the people above have actually used cloud computing at any scale. Cloud/grid computing has its positive and negative points; you need to be smart enough to take this on board.
It effectively comes down to capacity planning and disaster recovery. 19 hours is obviously a significant outage, which makes you ask why Nøhr was not better prepared to deal with it by recovering off-site and/or splitting traffic between Amazon's EC2 data centers.
Throwing software at the cloud will not prevent (D)DoS attacks; there are many forms. In addition, Amazon's turnaround time on support is excellent. I have not spoken to any incompetent engineers nor had serious delays in getting any matter resolved, from routing issues to instance problems.
A little better planning could have alleviated the issue altogether, and Amazon, I'm sure, will be happy to help put the matter to rest.