Microsoft's Windows Azure cloud was hit by a worldwide partial compute outage today, calling into question how effectively Redmond has partitioned its service. The problems emerged at 2.35AM UTC, and were still ongoing as of 10.20PM UTC the same day, according to the company's service dashboard. "Manual actions to perform …
We were pretty early adopters of Azure. Sadly, the application we developed for it was brought back into our datacentre after Microsoft's European datacentres suffered several failures that left us looking like idiots.
It would appear not much has changed!
So far I have yet to experience or witness a cloud platform that is more reliable than our own datacentre.
"Cloud" in the sense that it is implemented currently is about scale, not reliability. Unfortunately, no one told the marketing people so they continue to espouse the reliability aspects which just gives people unrealistic expectations.
We run several services on Azure, which has an infrastructure and set of services far more powerful than anything we could ever afford to build. To us, this is worth the potentially reduced availability; your mileage may vary.
If you take a site from your own servers and expect it to have the same uptime on cloud with no extra effort to make it resilient to failure, you're going to be disappointed.
As of now, none of my services are affected, so it looks like I managed to dodge a bullet there.
Until we have a blue sky of death icon, the nuke will have to do.
"So far I have yet to experience or witness a cloud platform that is more reliable than our own datacentre"
I haven't managed to find one that is cheaper either.
So what the hell are the benefits of using them?
If anyone who is happily using these cloud services could outline the scenario that makes them a good option I'd be interested.
"We were pretty early adopters of Azure. Sadly, the application we developed for it was brought back into our datacentre after Microsoft's European datacentres suffered several failures that left us looking like idiots."
Frankly, if you were early adopters and bet your business on an offsite datacentre - sorry, I mean "cloud" - "solution" by MS before it had at least reached the equivalent of v2.0, then you ARE bloody idiots.
MS are crapping all over SMBs and VARs (anyone with five PCs trying to install Office 2013 on them all within a day knows this - you can't activate more than three a day from the same IP addy without a volume license, it seems) and concentrating on enterprise and cloud operations.
And they can't do cloud right on their own platinum product, meant to advertise the scalability and stability of it.
If MS can't get it to work properly, how the fuck is anyone else supposed to?
Ah well, at least they still have Volume Licensing to sell, eh?
If the latter does not follow the former, shouldn't MS be paying us to rewrite the procedures and documentation for setup and deployment? ;-)
I'm prepared to accept that sort of thing in the open source world, but paying several hundred quid for the privilege?
And yes, I am posting with a pinch of salt, kids - relax, it's nearly Friday...
"Not true. Also you can always phone them to activate - only takes a couple of minutes...."
Yeah, so if I have 30 copies of Office to register (nowhere near the figure where volume licensing makes sense), that's just an hour of my time wasted.
As such, get a fucking grip - that hour could be spent doing 101 other things more productive than calling them up because they have completely lost the plot in terms of dealing with SMBs.
-These are not isolated failures! We are sleep walking to the mother of all f*ck-ups. A constantly changing and evolving system like the Cloud is going to keep f*cking up! This and Adobe's recent re-admission beautifully demonstrate the mockery-that-is Cloud and Subscription services today! Weren't we reading only last month that another cloudy service had no redundant live backups, so every customer lost one day of data! WTF?
-But thanks to persuasive salesmen armed with minimal knowledge, SMEs, corporations, governments and individuals have all jumped on board. Software and hardware have always had their fair share of bugs, but these types of 'Cloudware' self-inflicted wounds add a new dimension to existing points of failure. Cloud providers keep changing things, whether for efficiency, security or cost-centre considerations, and this constant change leads to more breaks in the chain!
-Cloud providers need to replicate data around the world continuously so that the next time someone cuts a submarine cable, it doesn't disable an entire region and leave them in the cold! So they need to replicate data, but they also need complete independence at the regional data centre level, to ensure the entire system doesn't crash short of a planetary collision with an asteroid etc. But this is tough to do! Really tough! Consider natural disasters, acts of terrorism, internal sabotage and hacking too! These are big, big problems yet to be solved. Additionally, the privacy/spying issue means no one, and I mean no one, can have any idea where their data will ultimately end up, or whether it will be disabled or deleted accidentally or deliberately!
-Cloud-dependent companies must also have sufficient redundant telecoms infrastructure in place. Every mission-critical business unit needs redundant internet access, and that means separate cables and satellite dishes at a minimum, and this costs money, real money. But in the current environment of corporate cost saving, this isn't always possible!
-Maybe in a hundred years, or maybe fifty or even twenty, we'll have sufficiently rock-solid cheap interconnectivity. But for the next decade, expect FIAT (fix it again tomorrow), as cloud providers have too many complex things to get right, and I for one can't see them doing it. So IMHO a cluster f*ck of monumental proportions is coming our way in the near future...
-But methinks there's lucrative consulting work to be had here. i.e. Helping companies (especially SME's) migrate back to local tech after they've been badly mis-sold cloudy services...
Feel free to downvote or just correct me if I'm wrong (it is after 3am) but isn't this the sort of thing MS want other companies to build their games around? Cloud processing etc of part of the game I mean. If it is then this just shows all those that said it was a bad idea were right to worry. Down globally = no games for anyone if they rely on this.
Again, correct me if I'm wrong but be gentle.
It's probably just a glitch.
But since they have let me out of isolation, I am free to give the bird to all those half-beards who sneered at my reluctance to wholly embrace the cloud on first sight.
Cloud is OK as one of the backup regimes, in the same way a reversing mirror is good for a small bit of travel in one direction; you wouldn't want to use it as the main method, though, as you can't always see what is going on.
The best solution I've found is to co-back up to a couple of friendly companies in different locations. You keep a copy of their encrypted data and they keep a copy of yours. Costs the price of a couple of disks, the odd kWh a year, and the really expensive part is the contract - but that's lawyers for you.
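A minimal sketch of that reciprocal-backup idea: archive, encrypt locally (so the partner company only ever holds ciphertext), then rehearse the restore. All paths, filenames and the passphrase here are hypothetical placeholders; in practice the resulting `.enc` file is what you would ship to the partner site.

```shell
set -e
mkdir -p /tmp/coback/src /tmp/coback/restore
echo "payroll 2013-10" > /tmp/coback/src/ledger.txt
echo "long-random-passphrase" > /tmp/coback/pass

# Archive and encrypt in one pipeline; the partner stores only the .enc file.
tar -czf - -C /tmp/coback src |
  openssl enc -aes-256-cbc -pbkdf2 -salt -pass file:/tmp/coback/pass \
    -out /tmp/coback/backup.tar.gz.enc

# Always rehearse the restore path, or the backup is worthless.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:/tmp/coback/pass \
    -in /tmp/coback/backup.tar.gz.enc |
  tar -xzf - -C /tmp/coback/restore

# Verify the round trip byte for byte.
cmp /tmp/coback/src/ledger.txt /tmp/coback/restore/src/ledger.txt && echo "restore OK"
```

The point of encrypting before the data leaves the building is that the contract only has to cover availability, not confidentiality.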
Maybe the clouds are being upgraded with the latest NSA security patches and service upgrades and they have to reboot the lot?
No salaries/pensions paid in Denmark either, due to the regular "some IT-related system failure" which magically only happens on payday - four times this year. Gotta check if it coincides with options roll-over day; maybe the banks do not in fact have the money in their accounts yet.
The world is getting brittle!
Partition? That's easy. They partitioned it just like the laptops and PCs sold in shops: a single partition, everything on C:.
That way, when the inevitable Windows crash happens, you get to lose everything when you reinstall the OS if you didn't go through all the hassle of repartitioning your PC in a more sensible way.
----- "That way, when the inevitable Windows crash happens, you get to lose everything when you reinstall the OS if you didn't go through all the hassle of repartitioning your PC in a more sensible way"
You mean like Linux? Which shoves all packages and their data into the same place as the system stuff?
I know this post will be downvoted - not because I'm wrong, but because I dare point out a linux issue
Umm... what the hell are you talking about? My personal files are on a separate partition, the software comes with the OS, but I could mount /bin and /lib on a separate disk or partition if I felt like it. Even better than that, I can make /bin span several disks or partitions if I use LVM.
In practical terms, this mean when I install a new OS I can be sure my personal files don't get destroyed because they are on a separate disk. My home folder is now well over a decade old despite upgrading and even switching from SuSE to Ubuntu and then back to OpenSuSE.
----- "Umm... what the hell are you talking about? My personal files are on a separate partition, the software comes with the OS, but I could mount /bin and /lib on a separate disk or partition if I felt like it. Even better than that, I can make /bin span several disks or partitions if I use LVM."
As a unix system admin/developer/designer/programmer/kernel hacker for over 20 years, you haven't said anything I don't know already.
But I might throw your response back at you: "what the hell are you talking about?"
Which part of "Which shoves all packages and their data into the same place as the system stuff?" in my original response didn't you understand?
The original message talked about a reinstall of the OS if it crashed - it said nothing about a corrupt partition.
I was talking about file organization, and not partitioning.
Sure, your user files are separate from the OS, but then, so are most Windows files these days.
If, for any unlikely reason, I had to reinstall the OS while using Linux, restoring all the installed packages would be a far greater hassle than restoring user files.
You know, I probably detest Windows more than you do, and there is so much to criticize in it that you don't need to spread FUD that just ends up hurting the whole open-source movement.
And no surprises I've received at least 4 down-votes, which could only have come from trolls, the ignorant, or fanbois who don't like hearing any criticism - however valid.
Linux fanbois (and I don't mean all Linux users and enthusiasts) do as much harm to the cause as Apple, MS and Android fanbois do to their respective platforms. And just like with religions, each side thinks they are right, and only the other sides come off as blinkered ignoramuses.
"And no surprises I've received at least 4 down-votes,"
You probably received them because your post was rather unclear about what you meant - indeed, I'm still not sure either. What would you like the default to be?
BTW even I could restore the OS and the extra software I use in ~30 mins without touching my personal files.
It doesn't help posting as AC either, as there is a certain AC around who trashes Linux ALL the time.
I'll upvote you for once. It's stupid that by default Linux distros only create a single filesystem. But you do get asked whether you want to create other partitions during a normal install (and in a more guided way than Windows 7 does), and most experienced Linux admins do it as a matter of course (me - I come from a UNIX background and expect to have at least /, /usr, /var, /tmp and /home as separate filesystems, with other filesystems set up according to the use of the system).
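For illustration, the sort of split described above might look like this in /etc/fstab. Device names, filesystem types and options are all hypothetical placeholders; the point is simply that / and /home (and the rest) live on separate filesystems:

```
# <device>   <mount point>  <type>  <options>         <dump>  <pass>
/dev/sda1    /              ext4    defaults          1       1
/dev/sda2    /usr           ext4    defaults          1       2
/dev/sda3    /var           ext4    defaults,nosuid   1       2
/dev/sda5    /tmp           ext4    noexec,nosuid     1       2
/dev/sda6    /home          ext4    defaults,nosuid   1       2
/dev/sda7    swap           swap    sw                0       0
```

With a layout like this, reinstalling the OS means reformatting only / and /usr, leaving /home untouched - which is the whole point of the exercise.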
The problem here is that the MSDOS partition table format, which was the default up until Windows XP (SP1?), only allows 4 primary partitions, plus extended partitions inside one of the primary partitions, which many boot loaders will not allow you to boot from (I know GRUB does - I'm talking historically).
This meant that when you wrote a distro installer intended to co-exist with other OSs, unless you were prepared to probe the partition table type, you took the option of using only one of your primary partitions, to be as unintrusive as possible.
Unfortunately, although the world has moved on, bad habits die hard, and most installers take the same decisions as they have always done.
I must learn more about the more recent partition table formats to bring myself up-to-date. Although I've installed Windows 7 from scratch twice, I've never created a dual-boot system with Linux (I've done a dual boot XP and Win7 system). All my systems tend to not have any Windows on at all!
"It's stupid that by default Linux distros only create a single filesystem."
Some do, but certainly not all. I've never experienced a single-partition install in years. In fact, I installed OpenSUSE onto a 16GB pendrive the other day - not a live CD but a full install - and it defaulted to a 6GB partition for the root system / and an 8GB one for /home. The only change I made was to suggest a swap partition on the stick as well (this was an experiment to have a full mobile Linux 'machine' that I could carry from machine to machine). Apart from being a little slow to boot (the stick's access time is rather slow), it seems to have worked very well.
I do that! I use a Sandisk Cruzer Fit 32GB USB flash drive; leave it permanently connected and boot to it by default, only using Windows when I have to. No messing about with partitions...
Am planning to buy a Fusion USB 3.0 Flash Drive
Want to see if it works faster; also want to try Windows to Go on it too.
20 years ago, you booted a computer OS from removable media (floppy disks). Today, I'm *still* booting a computer OS from removable media (USB flash drives). The more things change, the more they stay the same :-)
I guess that may be partially right, but I think most defaults split /, /home and swap - at least I am fairly certain RHEL and Fedora do (even if it's on the same disk). It's also probably not fair to consider desktop environments in place of server environments anyway.
I suppose many environments use defaults, but it seems many Linux admins, even fresh ones, deal with partitioning the system at some level because of the culture; most fresh Windows admins tend to click Next unless otherwise directed. I guess I tend to be more methodical with my Linux systems than with my Windows boxes, but I don't have many critical systems running on Windows.
Jokes aside about single partitions, I think we can look forward to this being blamed on a "maintenance update which affects all systems was improperly coordinated", or some other BS.
They are actively making the sane and sensible thing impossible. At least you can still manually store things on another partition...
@linux people: partition your own disk already.
@windows people: learn to click on everything that says Advanced... and partition your own disk already.
"blue sky of death: no cloud for you": priceless.
Sadly, this is a totally invalid analogy, because unlike the IT world you cannot have a local backup of the aircraft in case it crashes. Or have copies of the passengers in two different cloud systems. Or take regular backups which can be restored to live versions if needed.
It would be nice if you could, though. "I'm sorry, Mrs. Jones, your husband was killed in that plane crash. He'll be back with you tomorrow but he won't know what happened on Wednesday and Thursday".
"Thank God for that" (thinks: so he won't remember catching me in bed with the milkman.)
Cloud computing, we are told, offers benefits over locally hosted servers on the grounds of scale, availability and initial costs. There are big boys and girls there who know what they are doing, and we can trust our enterprise-level applications and databases to them because of the massive redundancy and expertise.
I disagree with you that my analogy is totally invalid. Flying within clouds, like using Cloud computing, is not as problem-free as we were led to believe. It seems that the clever boys and girls can't foresee everything that comes their way, and they might as well be flying in fog. We hear frequently about bouts of «turbulence» whenever something goes awry.
Furthermore, the analogy doesn't involve data (or passenger) loss, merely discomfort and poor planning on the part of those in charge, as well as the power of marketing over reality.
On the topic of backups, I wonder what people will do when they have 30TB databases and cubes up in the Cloud. Making backups of these is an expensive business, and God help us if there is an outage while trying to download a 30TB backup to a local server. My guess is that those companies with large DBs will keep both them and their backups up in the Cloud, hoping that they will never have to download them.
It might be that the solution is a sort of cluster or mirroring arrangement between the Cloud and local servers, so that when the Cloud servers become unavailable, that an automatic failover kicks in. But then, what is the point of having your servers in the Cloud?
Cloud computing is the server-level equivalent of outsourcing, and we all know what a success story that turned out to be.
So why are companies moving to the Cloud? The smarter companies know there are problems. But in a world where short-termism and hitting quarterlies is the name of the game, selfish bungee-managers and executives are migrating in the interest of self-preservation. They can guarantee early bonuses based on upfront cost savings and head out the door soon after, leaving the poor sods who can't leave to pick up the pieces when the proverbial sh-t hits the fan down the road...
The sheer number of unknowns regarding the cloud reminds me of the genius that was the advance planning of the two wars in the Middle East. For 100 points, can anyone guess the 'genius' behind this quote:
"There are no 'knowns'. There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don't know. But there are also unknown unknowns. These are the things we do not know we don't know."
"Sir Humphrey and Bernard from Yes PM wasn't it? ;)"
Indeed, although that particular quote sounds suspiciously like Donald Rumsfeld (US secretary of 'defence' at the time). I wouldn't put it past Rummy to plagiarise a bit of limey satire and without understanding it. ;)
Another Microsoft cloud failure, following two years of many examples of Microsoft getting this wrong on Dynamics, Azure and 365. Often there is the belief that bigger is better, but in the cloud this does not hold true.
Getting it wrong in the cloud when you are big means you devastate more clients, and in today's social media world everyone will know, and quickly.
Biting the hand that feeds IT © 1998–2021