Title should be
Real cloud strikes fake cloud.
Microsoft is blaming bad weather for the massive outage that knocked a number of Azure cloud and Visual Studio Team Services offerings offline Tuesday. The Windows giant revealed its South Central US facility in Texas was crippled after severe storms and lightning strikes overloaded its cooling equipment, forcing its servers and other …
Sensing some resistance to our AC/DC puns. Get it? AC, DC. Works on many levels.
(Also, think we've done real-fake clouds.)
C.
Is putting a datacenter in areas with a good chance of being hit by severe weather each year also a good idea? I understand those areas are also cheap, full of cheap workers, and with politicians willing to subsidize anything that will bring in some jobs, but that's the result...
For certain uses, you want the datacenter near the users. There are plenty of users in south east USA, but the whole area is at risk from hurricanes. So putting a datacenter there is a perfectly reasonable decision, balancing the risks and benefits.
Of course, for an organisation with multiple datacenters, designing your worldwide directory service to depend on any single datacenter is very silly.
"Is also putting a datacenter in areas with a good chance to be hit by sever weather each year a good idea?"
????? Every place can have extreme weather in any given year, and when it isn't weather it's earthquakes, volcanoes, tsunamis, stampedes, bugs (the crawly type), war, crime, etc...
True, but not every place has a hurricane/tornado/typhoon/monsoon season... with a far bigger chance of being hit by extreme weather than others. Planning for earthquakes is another obvious thing to take care of - some areas have far higher chances than others. Tsunamis are usually avoided when you're not close to the sea.
Sure, you can still be hit by an asteroid anywhere, or Trump could declare war on anybody, but the chances are far lower. That's called risk assessment, but if beancounters pre-empt any reasonable choices and their eyes fill with tears when they see cheap workers and subsidies, your chances of a failure become far, far bigger.
Whilst I already knew some MS regions were represented by only one datacentre, it strikes me as odd this is the case in a highly developed part of the world that is known to have its fair share of freak weather.
As for one region being able to take down Azure Active Directory, Calrissian's conjecture would seem to apply: "This deal is getting worse all the time."
Trust us, you don't need to know where your compute power is hosted. It's virtual, safe, and infinitely distributed. We swear it's not all ending up in one place.
What was it... a week or so ago Microsoft acted unprofessionally and threatened the growing social network Gab.ai with kicking it off its Azure cloud service if it didn't censor two posts from two users of the social network? If Microsoft will pull the service out from under your business because it doesn't like one or two of your customers, that makes using Microsoft's Azure an unpredictable risk to anyone who might use it. Maybe God is finally warning Microsoft, or karma caught up with them...
OK, so, let's try to understand this... MS claims your data is stored in Europe, when here we have proof it is not... and nobody picks up on it?
Given MS' long and winding track record (recent AND past), who would trust MS to be able to implement a stable and disaster-safe infrastructure? I mean, come on, with the resources they have... every time there are Windows updates due, the infrastructure becomes unresponsive... now, lightning in the US causes Europe's service to go TITSUP?
Nope, this is fully documented. Azure AD has always been a global service where data is not guaranteed to be in region.
Also, there are disaster recovery procedures in place that would have been invoked if recovery had not already been underway, which it was. What I think you mean is business continuity, and I agree it's disappointing that AAD isn't designed for availability across regions.
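For the sake of argument, a minimal Python sketch of what client-side fallback across two independently served regional endpoints could look like. The endpoint URLs and response shape are hypothetical placeholders, not the real AAD API, and the fallback only helps if each region can actually issue tokens on its own - which is precisely the complaint above.

```python
import requests  # any HTTP client would do

# Hypothetical regional endpoints -- NOT real Azure AD URLs.
ENDPOINTS = [
    "https://login.region-a.example.com/token",
    "https://login.region-b.example.com/token",
]

def get_token(credentials: dict, timeout: float = 3.0) -> str:
    """Try each regional endpoint in turn; return the first token obtained.

    If both regions secretly depend on the same back-end datacenter,
    this loop buys you nothing -- hence the single-region dependency gripe.
    """
    last_error = None
    for url in ENDPOINTS:
        try:
            resp = requests.post(url, data=credentials, timeout=timeout)
            resp.raise_for_status()
            return resp.json()["access_token"]
        except requests.RequestException as exc:
            last_error = exc  # try the next region
    raise RuntimeError(f"all regional endpoints failed: {last_error}")
```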
"if you e.g. have your employee's names and contact numbers, that very much *is* PII..."
That's true, but if you have a business reason to store them, such as the ability to contact employees and ex-employees for HR purposes, then you're fine. Naturally you need to manage that info and have lifecycle processes, but GDPR was designed with these scenarios in mind. Sadly it's nearly 70 pages long, so most people don't read it and instead make wild assumptions based on sales seminars.
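To make "lifecycle processes" slightly less abstract, a toy sketch, assuming a purely hypothetical six-year retention period for ex-employee contact details (the real period is whatever your documented HR/legal basis justifies):

```python
from datetime import date, timedelta

# Hypothetical retention rule: keep ex-employee contact details for six years
# after departure. Use whatever period your documented purpose actually supports.
RETENTION = timedelta(days=6 * 365)

def records_to_purge(employees, today=None):
    """Return contact records whose retention period has lapsed."""
    today = today or date.today()
    return [
        e for e in employees
        if e.get("left_on") is not None and today - e["left_on"] > RETENTION
    ]

former = [
    {"name": "A. Example", "phone": "01234 567890", "left_on": date(2010, 1, 4)},
    {"name": "B. Example", "phone": "01234 567891", "left_on": None},  # still employed
]
print(records_to_purge(former))  # -> only the 2010 leaver is due for deletion
```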
How odd ...
... that when modem'ing into the university mainframe in 1981, a lightning strike in Texas didn't stop me working.
... that when developing a C++/ASM module to support an airline Clipper application in 1993, the weather in America didn't bring everything down.
... that while working on an 800Mpx 5-layer image yesterday on my four-year-old desktop PC, some rain 4,000 miles away didn't bring me to my knees.
The efficiency, robustness, reliability and security of 'cloud' is truly a wonderful thing. Until you find that you're paying for latency, sluggishness, mysterious interruptions, literally endless excuses and get-out clauses, and single points of failure arising from the inclusion of absurdly over-complicated and often unnecessary systems, all of which, when you come down to it, are primarily contrived to extract money from you, hold your data to ransom, entrap your business's livelihood, spy on you and steal your IP.
Go right ahead, make yourself dependent on this or that monopolistic internet giant. Tell yourself they have your best interests at heart. Wait till you've foolishly let yourself become dependent, wriggling on the punji sticks of their 'ecosystem'. And when they put the prices up to whatever doesn't quite bankrupt you, squeal as loud as you like.
Better still, make sure you grabbed that 'cost-saving' bonus last year and ran for the hills ...
Going back through all of industrial history, the trend has been centralising power in the name of efficiency - the enclosure of fields between 1500 and 1800, the great mills of the industrial revolution, and now the cloud.
You may as well complain that flooding in Bangladesh drives up the price of RAM when in the past you could cobble together some VRAM yourself with delay-line memory, or that flooding in central Europe causes a shortage of iceberg lettuce.
Centralisation happens, efficiency improves, local effects at the point of centralisation become more important.
But sure, rail against the cloud for the obvious WOMBAT that it is.
"That depends on the customer's design choice, not on Microsoft."
Yes? But no. What would be the point of going to an external provider if you have to implement all the replication yourself? The service the cloud is selling *is* the replication and high availability. Otherwise it's co-hosting, not a cloud.
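Purely to illustrate the point, a rough sketch (made-up in-memory stores, not any real Azure API) of what "implementing the replication yourself" ends up looking like: every write fanned out to a second region by your own code, plus all the retry and failover plumbing you thought you were paying the provider for.

```python
class RegionStore:
    """Stand-in for a regional storage endpoint; in reality a provider API."""
    def __init__(self, name):
        self.name = name
        self.data = {}

    def put(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

primary = RegionStore("south-central-us")
replica = RegionStore("north-europe")

def put_replicated(key, value):
    # Client-side replication: your code writes to both regions,
    # and owns retries, ordering and conflict handling too.
    primary.put(key, value)
    replica.put(key, value)

def get_with_failover(key):
    # Read from the primary; fall back to the replica if it errors out
    # (it never will with these in-memory stand-ins, but that's the shape of it).
    try:
        return primary.get(key)
    except Exception:
        return replica.get(key)

put_replicated("user:42", "some directory record")
print(get_with_failover("user:42"))
```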
Lots of odd presumptions here. I'm going to try to sort some of them.
1) Data plane != control plane. It is entirely possible (but really, really bad design) for the AD information to be held in European servers that won't work properly unless they are talking to some server in Texas.
2) Data centers go down. That's the point. It is really, really dubious to claim that your system is running a cloud of any sort if a single datacenter going out can take it down. In SRE, you plan for planned and unplanned outages to happen at the same time. Anything less is NOT resilient in any meaningful way.
Therefore, it is perfectly rational to put a datacenter in hurricane alley, or tornado alley, or right on top of the San Andreas Fault, or on the downwind side of a volcano. Datacenters are sited primarily for access to power and workers, and are engineered (in theory) for physical uptime of 90-95%, at least from what I've seen.
It's the wiring and coding between datacenters that is the magic of SRE that can give five nines, not what the individual datacenters can do (a rough back-of-envelope sketch of that arithmetic follows below).
3) Azure is hardly the most mature cloud offering out there. It's just silly to dump on the entire idea of cloud because a company with a history of poor reliability is having trouble when it starts a major undertaking.
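On point 2, a back-of-envelope check of the five-nines claim, taking the 90-95% single-datacenter uptime figure above and assuming (generously) that failures are independent and failover is instant and bug-free:

```python
import math

per_dc = 0.95      # physical uptime of one datacenter, top of the range quoted above
target = 0.99999   # "five nines"

# The chance that n independent datacenters are all down at once is (1 - per_dc) ** n,
# so the availability of the combination is 1 - (1 - per_dc) ** n.
n = math.ceil(math.log(1 - target) / math.log(1 - per_dc))
print(n)                         # -> 4 datacenters needed
print(1 - (1 - per_dc) ** n)     # -> 0.99999375, just past five nines
```

Correlated failures - such as a directory service that everything depends on living in one region - are exactly what breaks the independence assumption.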