"Scalable" - all the way from zero up to zero.
Microsoft's Azure DevOps is suffering what it describes as "availability degradation" in the UK and Europe and parts of Google's cloud platform are also broken. The wobbles – sorry, "availability degradation" – have shut down Boards, Repos, Pipelines and Test Plans. Core services are meant to be working normally. Azure DevOps …
Monday 11th November 2019 17:44 GMT Pascal Monett
In time, I'm sure Cloud will be great
Right now we're still learning the ropes. I am convinced that Cloud is complicated, and that DevOps, "go fast and break things", and all the new thingamabobs they keep adding to remain "competitive" are certainly not helping on the stability and availability side of the operation.
In twenty or so years, when the long-toothed DevOps guys have actually gained the wisdom of experience, I'm sure Cloud progress will be at a much more sedate pace, and availability will be up there with the famous Five Nines.
But first, we're going to have to live through the breakneck (and neck-breaking) pace of those young whippersnappers who have to invent everything Right Damn Now and get it into production yesterday.
I'll keep my data on my own network during that time, thank you very much.
Tuesday 12th November 2019 19:51 GMT Claptrap314
Re: In time, I'm sure Cloud will be great
I wish I could agree, but consider:
1) AWS was first, but it is only within the last two years that they even STARTED to treat multi-data center deployments as a thing. You do NOT have reliability without MDC (back-of-the-envelope sums below).
2) M$'s well-earned reputation for quality is fully duplicated in Azure. I'm not keeping careful count, but it seems like the number of fails is trending up, not down, over time. Moreover, their post-mortems make it abundantly clear that they simply do not get reliability engineering.
3) Despite the above, the market does not appear to be taking G seriously.
Which means we're dealing with the same "new day, same ****" garbage that has made professionals embarrassed by association to call ourselves "engineers" for decades.
My prediction: the cloud will eventually become stable, because everyone will have moved on to the latest greatest, and unused servers have perfect reliability.
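To put rough numbers on point 1 (illustrative figures, not measurements of any actual provider): if each data centre independently manages, say, 99.9% availability, a deployment that can fail over across several of them compounds the odds quickly. A quick back-of-the-envelope sketch in Python:

def combined_availability(p, n):
    # Probability that at least one of n independent replicas is up,
    # where each replica has availability p. Assumes independent
    # failures, which real clouds only approximate.
    return 1 - (1 - p) ** n

for n in (1, 2, 3):
    print(f"{n} DC(s) at 99.9% each -> {combined_availability(0.999, n):.9f}")
# 1 DC(s) at 99.9% each -> 0.999000000   (three nines)
# 2 DC(s) at 99.9% each -> 0.999999000   (six nines)
# 3 DC(s) at 99.9% each -> 0.999999999   (nine nines, on paper)

Which is exactly why MDC matters: three nines per site is mediocre, but the same sites combined, with failover that actually works (the hard part), are a different story.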
Tuesday 12th November 2019 08:30 GMT RichardB
It's just scale
When my infra cocks up it knocks out a project or 3 for a while.
When GCP cocks up it knocks out many, many projects for a while.
Question is, for -me-, are they more likely to cock it up, or am I? I still think that for anything outside my core skillset and that of those around me, it's a fair bet that theirs will be significantly better, with the added advantage that when they _do_ mess up, it's pub time, not panic stations.
Tuesday 12th November 2019 11:02 GMT Anonymous Coward
Re: It's just scale
"with the added advantage that when they _do_ mess up, it's pub time not panic stations."
If that really is the case, then you must be at a small company. If you have everything running in the 'cloud', with the big mistake of using a single provider, 'your' infrastructure going down should still trigger your BCP: evaluating what has failed and deciding whether to invoke the plan. That means a lot of work for you.
Tuesday 12th November 2019 12:38 GMT AVee
The German solution
It seems like Microsoft resolved the Azure issues using the tried and tested German solution: "Reboot macht immer gut" ("a reboot always does you good").
"We identified the issue with identity calls and our engineers rebooted the ATs to mitigate, which has brought the system back to a healthy state."
So that's fixed, it probably won't happen again...