
Death to the cloud
How many times does this have to happen before consumers/customers get it into their heads that cloud-based services are a disaster waiting to happen?
Google Drive has been titsup for the last few hours with many users being greeted by 502 error messages when they attempt to access the online storage service. For many, it remains stubbornly offline for now. Google confirmed that there was a problem with Drive about two hours ago after users complained about service …
Downvoted because that's not the point. If YOUR servers go down, then it's in your power to get things fixed. You can apply what resources (or not) you see fit.
If cloud servers go down, then you have absolutely no control over the fix, or how long it takes.
It's a little like owning a car, or choosing to cloudsource your transport needs to the bus/train. If your car breaks down you can arrange to get it fixed and possibly a replacement. Probably something you have made allowances for, so your downtime is predictable.
If you get to the bus stop to discover all the buses are off (strike/faulty fuel/whatever), then how long until you get where you're going?
But for businesses cloud storage is definitely an advantage.
For example, it's more than likely going to get fixed quicker by qualified engineers at Google than by Dave who restocks the vending machine. I have seen it countless times where the so-called admin responsible for managing the servers can do little more than plug in an Ethernet cable.
>>that's not the point. If YOUR servers go down, then it's in your power to get things fixed. You can apply what resources (or not) you see fit.
In a medium-large business with dedicated sysadmins working 24/7, sure. For a small business where people go home at night, not so much.
Let's face it, the longest outage we've seen has been something like a day, to fix a total global screw-up. That's pretty impressive... I'm pretty sure Google's thousands of people working 24/7/365 are going to fix things faster than a developer who got lumbered with the sysadmin work because he knows how to build a home network :)
I want to focus on developing stuff, not managing my network.
More to the point, if something of yours goes titsup.com, it's almost invariably one of your somethings and most of your business will carry on while you deal with it.
When it pisses down in the cloud, the whole lot drops on the floor and everyone gets to sit there and twiddle their thumbs until it comes back.
Also worth remembering that the first of these results in TPTB looking for someone to blame, while the second results in them looking for someone to sack.
Down-time is probably the last issue most businesses should worry about when considering a move to the cloud.
Uptime freaks should have enough cash to replicate between the public cloud and the private cloud.
This is also Google drive. Probably not running line-of-business apps for most users.
Except at our largish government agency, where Google sold us on not just their mail service but also Sites and Apps, because that makes collaboration easier. Even if the security wonks don't actually want us putting the data there.
And while in theory we aren't supposed to be putting sensitive or critical information there, with 5,000+ employees and contracts across the country (many of them in 2 to 10 person offices with no real tech department on site), do you really want to bet that policy is being followed any more than the one that says you can't write your password on a sticky attached to the bottom of your keyboard?
And while most of the time our product is something you want every day for convenience, occasionally (more frequently than I'd like of late) that product is life-saving information.
I'm a domestic user, not an admin.
Once upon a time my harddisc failed. Annoying as hell, but I was able to stick in a smaller drive and restore the important stuff from other locations; I knew what went wrong, what needed to be fixed, and where the bits were for restoring data. What it needed was some money and time, but if I needed file X right now, I could get to it. I imagine for a professional admin it is a similar story. Point is, when your stuff dies, your tears are proportional to how well you run your system.
When a cloud service dies, there is nothing you can do except bitch about it on forums... unless you have SD cards, a USB harddisc, etc., in which case you might think to wonder why you are using the cloud in the first place.
Dependency on something out of your control is effortless, but not necessarily smart.
Because your local servers NEVER go offline and take hours or days to fix?
In that case, I know where to go and whose rear end to kick to get it going again. And I can restore from my own backup. And I can find out what is going on, which also means I can decide if this is a coffee break, lunch break or go fishing early kind of interruption.
For Google mega corp somewhere far, far away, you're only interesting and relevant in volume. You won't be able to call them, and all the *cough* support is slope-shouldered off to user forums. My local guy has a cell, I can feed him beer and pizza, and I can get help from him. All you can do with Google is hope they switch it back on instead of deciding that, after all, this isn't a service they want to run either.
My local servers are built with an eye towards application-layer redundancy such that, even if a major failure occurs, we should still have userland access available. There are certain cataclysm-grade incidents which could take our systems down, but the ensuing floods, cloud of fallout, horde of zombies, etc., would probably be of greater import than restoring services to the users (if my employers are reading this: I kid. As a loyal employee, I would, of course, place business continuity above protecting my own family from radioactive mutants.)
That said, the cloud is a very reasonable place to keep your work, assuming your work is not important or is easily duplicated.
>>Dream on with that belief. The only thing the "cloud" is, is that you are handing over hardware responsibilities for someone else to handle.
Someone with hardware and backup redundancy about 100X better than anything you can hope to build. And security the likes of which you can only dream about. In these big stories, I don't recall them talking about data actually being lost. Scale wins.
"The cloud really is the safest place for your data..."
Dream on with that belief. The only thing the "cloud" is, is that you are handing over hardware responsibilities for someone else to handle.
Almost forgot: plus network connectivity.
Almost forgot: plus security.
Almost forgot: plus system software upkeep.
Almost forgot: plus basic access rights, like making sure that your account is active and actually grants you access to your own data.
Almost forgot: plus copyright and DRM search / lockdown procedures, with no guarantee that your own data will *remain* your own if some big company decides to auto-scan your data and delete what they see fit.
Almost forgot: cloud ownership and service continuity. Because if a company decides to shut down a service due to internal decisions, or even sell their service to a new owner, you have no guarantee of continued service.
Almost forgot - AND PROBABLY THE MOST IMPORTANT OF ALL - MTBF. This is EXACTLY what you are seeing today. When 10,000,000 users access a certain construct, the construct's MTBF is statistically altered. If a switch is rated at 180,000 hours MTBF, that calculation gets modified by the number of switches and ends up as λ = 1/θ = r/T (failure rate expressed as failures r over total operating time T), using for example analysis such as:
http://www.vicorpower.com/documents/quality/Rel_MTBF.pdf
This factor must then be calculated across ALL devices in the data stream - your network card, your router, your wires, your modem, the data link to your ISP, the ISP switches, the ISP servers, the ISP link, all the segments to the cloud service, the cloud service data link, the cloud servers, the cloud switches, etc. etc. etc. PLUS you must add in the statistical analysis of the MTBF of the multiple hard drives that make up the cluster your data physically sits on; the same MTBF adjustment applies to those drives, so the more drives in the cluster, the more likely it is that one of them will fail.
In other words, statistically a cloud MUST have more failures than an individual PC, simply because there are more component links to fail. Add in the human factors - software bugs, errors in administration of hardware, software *and* account configuration / rights / access, etc. - and the cloud will ALWAYS have a greater statistical probability of failure.

For example: ISP reliability. Wow, where I am currently located that is defined as "pipe dream", and I'm in the 8th largest city in the world (by population)! I have had to deal with 1 WEEK outages (infrastructure building wiring failure) at the job and 2 day outages at home. Just yesterday I was dealing with an ISP blackout at my in-laws' home 35 blocks away, on a different ISP than I have personally. Last week? Last week *I* had that 2 day system outage; a combination modem and ISP configuration failure, as even after replacement with 3 different modems they still could not stay consistently connected until a phone call (made by me after the tech left) to the ISP forced a full reconfiguration of the newly replaced (3rd) modem! :(
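To make the series-failure arithmetic above concrete, here is a minimal sketch. The component MTBF figures are made-up illustrations (only the 180,000-hour switch rating comes from the comment above), and the chain of devices is a simplification; it also deliberately ignores redundancy at each hop, which is where large providers claw reliability back.

```python
# Rough sketch of the series-failure arithmetic described above.
# Component MTBF figures (in hours) are illustrative guesses, not real data.

def combined_mtbf(mtbf_hours):
    """For components in series, failure rates add: lambda_total = sum(1/MTBF_i)."""
    total_failure_rate = sum(1.0 / m for m in mtbf_hours)
    return 1.0 / total_failure_rate

# A standalone PC: just the local disk (hypothetical rating).
local_chain = [600_000]

# PC -> home router -> ISP link -> ISP core switch -> cloud front end -> storage cluster.
cloud_chain = [600_000, 180_000, 50_000, 200_000, 150_000, 300_000]

print(f"Local-only MTBF : {combined_mtbf(local_chain):>10.0f} h")
print(f"End-to-end cloud: {combined_mtbf(cloud_chain):>10.0f} h")
```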
FWIW, I have *never* lost one scrap of data or lost personal access to said data on any of the systems I use. Never. As the data is local and the system is mine, I can control data redundancy and maintenance and, as there are fewer failure points, the statistics are also on my side. Offline backups, offsite backups - and, amongst other things, no network probabilities between me and my data - assure that my data is always there when and where I need it.
Depend upon the ISP, the internet AND the cloud service for my data needs? No, thank you.
Talking generically: The cloud is a tool to be considered for each possible application. It doesn't have to be the only place you put your data, and it (third-party) certainly isn't right for critical/sensitive data, but it has many uses that make it convenient.
The so-called cloud involves many things, not least the data communications infrastructure that's in place. Not long ago people were still running private wide area networks over astronomically priced leased lines (we're talking many tens of thousands of pounds per annum) at speeds of 64kbps. They did this because the nascent non-academic Internet with its encrypted IP tunnels was simply too unknown, unreliable and slow. It was slow even over relatively expensive leased Internet lines, but even worse over ISDN, modem or eventually DSL.
Move on just a few years, and only a handful of companies would now dream of leasing their own point-to-point connections across the Atlantic. We tunnel through the Internet. It's a fraction of the cost, and because of this it has revolutionised the world.
Your view on these really quite new third party storage services should not be based on their current reliability or security. It should be based on whether they are an early stepping stone to a revolution, as the early commercial comms infrastructure was.
I believe that cheap online storage that's acceptably reliable, acceptably secure and where the price is right is already revolutionising the way many people work and play, and will do so more and more as the technologies mature.
"What's funnier - that your comment on this page exists in "the cloud"? Or that you don't even know it?"
Really, The Register? I thought this was just a BBS. Anyway, where do I upload my MP3s? Where is the page for encoding more H.264 files? Where is the page telling me I can't access anything?
Maybe what is funnier than what we "don't even know" is what you haven't considered. Keep your head out of the "cloud"; you don't want to lose it.
The cloud is no more of a disaster than having it stored locally.
What most regular people fail to see is that even the cloud data needs to be backed up.
In the case of earlier today, I simply went to my cloud backup location (a different company on different infrastructure), restored just the files I needed from my daily backup, and kept working. No files or time were lost.
If the data were local, there is a chance the backups would be on the same infrastructure that's down, and the IT manager would be more concerned with getting the server back up and running than with running around restoring backups for everyone.
With my daily automatic cloud-to-cloud backup plan in place, I feel very confident my data is accessible at ALL times.
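For anyone wondering what a daily cloud-to-cloud sync like that might look like in practice, here is a minimal sketch. It assumes rclone is installed and that remotes named "gdrive:" and "backupcloud:" have already been configured; those names and paths are placeholders for illustration, not the poster's actual setup.

```python
#!/usr/bin/env python3
# Minimal daily cloud-to-cloud backup sketch, assuming rclone is installed
# and remotes "gdrive:" and "backupcloud:" are already configured.
# Run it from cron (or any other scheduler) once a day.

import datetime
import subprocess
import sys

SOURCE = "gdrive:work"                 # hypothetical source remote/path
DEST = "backupcloud:drive-backup"      # hypothetical destination remote/path

def run_backup() -> int:
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    print(f"[{stamp}] syncing {SOURCE} -> {DEST}")
    # "rclone sync" makes DEST identical to SOURCE; try --dry-run first if unsure.
    result = subprocess.run(["rclone", "sync", SOURCE, DEST, "--verbose"])
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_backup())
```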
Well, no-one else has mentioned it, so I'll just point out that Drive is a lot more useful than just some place to put all your important data while you wait for someone else to lose it for you.
It's also where Apps Script (GAS) is hosted. I did some work for a local sports club recently that would normally have involved writing code to parse a spreadsheet, finding a server somewhere, setting up Apache, talking Ajax, lots of JavaScript, a bit of HTML, and so on. Two or three weeks' work, for no money, so it would never have happened. Instead, I did the whole thing on GAS in a couple of evenings, and they've got a pretty usable website with a complex form entry system for free. I bet there's a lot of people doing this sort of thing on Drive, and I/they don't care if it drops out occasionally.
Of course, GAS is full of bugs, and is practically unsupported, and development seems to have stopped, and it's generally a bit of a pile of sh*te. Still, it's a great idea, and it's better than nothing.
This post has been deleted by its author
>"I honestly can't say I noticed. Makes you wonder if it's been blown out of proportion?"
Seemed like it was up and down for me for about a half hour this morning. Didn't impact any critical work at all. One nice thing about cloud storage is that it's already synced offline on all my computers.
I'm glad it was Google and not my stinkin' cable company - they are quite skilled at bringing the entirety of the intertubes to a halt for days at a time.
>"I'm glad it was Google and not my stinkin' cable company - they are quite skilled at bringing the entirety of the intertubes to a halt for days at a time."
If you have your data on the cloud as a backup and are sync'd locally, you are in good shape. Unfortunately, cloud services are being sold to people as the only place you need to have your data, which is a big mistake.
Your cable company sounds much like mine. They slow down, they go down, never answer their phone after 9-5 and don't care much about service. I dropped my TV subscription partly because the quality was so bad for years and never addressed, but mostly because there isn't much on basic cable worth watching.
I liked the rather verbose comment earlier about MTBF numbers dropping like a rock as more infrastructure is placed between you and your files. Even if cloud service providers maintain their gear, the statistics will eat them for lunch.
Of course it is blown out of proportion; or at least it seems that way when 1.7m headlines scream 'Google Down' or some such.
A lot of the abstract paranoia I read in these comments must come from older admins who remember the days when, if something/anything went wrong with your system, everything was inevitably lost or corrupted. While that is still true in a lot of in-house deployments, it simply isn't the case with large providers anymore. Sure, there are going to be hiccups, but 'the cloud' is where it is all going. In-house systems were a go-between setup that was never optimal and only existed because the communications infrastructure didn't exist and the technology was simply too expensive to scale.
Don't get me wrong. A properly engineered (and paid for) cloud infrastructure could be as reliable as a mainframe.
But despite the huge drop in what a MIPS of processing or a GB of storage costs, it's not there yet.
The question is how long it will take to get there and, if you commit your business now, what's the risk it'll get clobbered by an outage before your supplier reaches that level of reliability?
Rules for cloud storage.
1) If it's sensitive, encrypt it yourself. (7-zip works for me; see the sketch after this list.)
2) If it's important, back it up yourself. (OS X's Time Machine for all cloud folders.)
3) Only use services that store your data both locally and online.
4) If you use it to collaborate with others, have an alternative in place and tested for sharing files. (FTP server)
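As a rough illustration of rule 1, here is a minimal sketch that encrypts a folder locally before it ever touches a cloud-synced directory. It shells out to 7-Zip's command-line "7z" binary, which is assumed to be on the PATH; the folder names and password handling are placeholders, not a recommendation of any particular layout.

```python
#!/usr/bin/env python3
# Sketch of rule 1: encrypt locally with 7-Zip before the data reaches a
# cloud-synced folder. Assumes the "7z" binary is on the PATH; the paths and
# password handling are hypothetical, for illustration only.

import getpass
import subprocess
from pathlib import Path

SOURCE_DIR = Path.home() / "sensitive-docs"           # hypothetical folder
SYNC_DIR = Path.home() / "GoogleDrive" / "encrypted"  # hypothetical cloud-synced folder

def encrypt_to_cloud():
    SYNC_DIR.mkdir(parents=True, exist_ok=True)
    password = getpass.getpass("Archive password: ")
    archive = SYNC_DIR / "sensitive-docs.7z"
    # "a" adds to an archive, "-p" sets the password, "-mhe=on" also encrypts
    # the file names in the archive header.
    subprocess.run(
        ["7z", "a", f"-p{password}", "-mhe=on", str(archive), str(SOURCE_DIR)],
        check=True,
    )
    print(f"Encrypted archive written to {archive}")

if __name__ == "__main__":
    encrypt_to_cloud()
```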
According to my router I'm getting 8128kb/s down* but only 448kb/s up. Why would I bother to upload anything other than a text file?
I also wonder when we will get equal up/down speeds; probably never if BT are in charge.
*BBC's iPlayer just told me that I had insufficient bandwidth!
> I also wonder when we will get equal up/down speeds, probably never if BT are in charge
Whenever you want, just ask (and pay) BT for an SDSL link instead of an ADSL one.
Most domestic users want fast download and don't need fast upload, so a 10/1 asymmetric ratio is just fine. Business users want roughly equal, say 5/5, so will order symmetric DSL. It's not BT's fault if you order a product that isn't what you need.
...Google is far, far, far ahead of everybody else - Drive, by default, is SUPPOSED to run as a SYNC service in the enterprise, providing a cloud (auxiliary) copy, not the single copy of anything, remember. If you didn't use it like that, then stop eating Google's BS, e.g. the Chromebook and its cloud drive...
Don't forget - some countries (eg the US) have an Internet Killswitch at the ready.
We're already being seeded/conditioned by mainstream media that "China's" alleged hacking is a growing concern.
I can imagine a scenario in the not too distant future where an alleged electronic threat is in progress (and Iraq has WMD!) and the Internet Killswitch is put into effect.
(include in that the possibility of mass social unrest and the need to clobber Twitter and FB via the killswitch as has been done by some governments already)
What will happen to all your files and systems on the cloud then?
(some may assert that it wouldn't matter because the internets are down anyway - which I proactively disagree with)
It's not just about YOUR files and infrastructure that you carefully designed for redundancy ("I'm alright Jack") - because there are so many businesses and entities out there that are blindly relying solely upon the cloud...
Due to the simple reason that you rely on an external connection.
Our office is in central London and, besides the occasional cataclysmic event, such as a fire at a BT exchange, flooding at a BT exchange, general flooding of everything in the street below, people drilling into underground high-tension power cables, etc., for a good chunk of time the connection speeds on our lines were slower than a 44k modem (call the BT engineer, wait for the BT engineer, watch the BT engineer shrug his shoulders, witness magic, problem's fixed, problem reappears two weeks later, call the BT engineer, etc. etc.).
People often forget that with your own infrastructure you can plan downtime for out of business hours. Even a cloud provider has to deploy fixes and patches, and the infrastructure for doing so is very complex (to enable rolling updates and whatnot). Very complex infrastructure tends to be expensive. Cloud providers have to balance availability against cost very carefully, so inevitably some compromises have to be made, with inevitable consequences - the cloud is no magic, it is still the same old IT infrastructure, albeit with a dash of custom servers and deployment software.
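For readers unfamiliar with the rolling-update pattern mentioned above, here is a minimal sketch: take one node out of service at a time, patch it, health-check it, and only then move on. All the helper functions and node names are hypothetical stand-ins, not any provider's real tooling.

```python
# Minimal rolling-update sketch: drain, patch, check, restore, one node at a time.
# Every helper below is a hypothetical stand-in for real orchestration tooling.

import time

NODES = ["node-01", "node-02", "node-03", "node-04"]

def drain(node):        # stop routing new requests to the node (hypothetical)
    print(f"draining {node}")

def apply_patch(node):  # install the fix (hypothetical)
    print(f"patching {node}")

def healthy(node):      # post-patch health check (hypothetical)
    print(f"health-checking {node}")
    return True

def restore(node):      # put the node back into the load balancer (hypothetical)
    print(f"restoring {node}")

def rolling_update(nodes):
    for node in nodes:
        drain(node)
        apply_patch(node)
        if not healthy(node):
            # Halt the rollout rather than take the whole fleet down.
            raise RuntimeError(f"{node} failed its health check; halting rollout")
        restore(node)
        time.sleep(1)  # stand-in for a bake period between nodes

if __name__ == "__main__":
    rolling_update(NODES)
```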
As cloud providers are global, every outage is during business hours somewhere. This is a serious drawback of the cloud. Ironically, in some ways it can be quite rigid. Their infrastructure management is their SPoF.