discount for fire-damaged kit?
Having seen them post photos of themselves physically cleaning smoke damage off motherboards - erm, I'd rather not run my workloads on that kit, thank you very much...
French cloud provider OVH has outlined a three-point plan designed to avoid a repeat of the loss of data and services resulting from the fire which engulfed its Strasbourg operations on 10 March. Dubbed "Hyper Resilience", the plan employs a combination of a revamped approach to internal backups, external customer back-ups …
This post has been deleted by its author
Yay for Single Point of Failure. Nice to know that the ol' buddy is still alive and kicking.
I think that, in the past few months, we've had more than enough demonstrations that UPSs should be quarantined far from actual servers.
In any case, whether you feel for OVH's customers or not, I think OVH has done a fine job of openness and transparency on this issue. We are far from the usual "only a small number of customers have been impacted / we take customer data security very seriously / etc".
I'm hoping that OVH will publish a complete, official DR report with step-by-step instructions. As painful as this was for some, it is a priceless opportunity for all other datacenters to check against their own environment and start implementing mitigations now, before it's too late.
"UPSs should be quarantined far from actual servers."
I'm thinking we'll see a generation of datacenters with well-separated UPSes. Then, in a few years, we'll see a batch of notable outages due to severed power connections between UPS and equipment, followed by a call to design datacenters to place UPSes as close to protected equipment as possible.
IT tends to be cyclic that way...
Forget cyclic, go cynical. Separate UPSs cost more money to set up and maintain plus they need more land. In a few years, the beancounters will convince management (with the help of consultants and vendors) that integrated servers and UPSs will improve next quarter's bottom line and we'll be back to square one.
"Lastly, OVH said it would change internal rules for building data centres"
Ah yep, this is much needed! I need to send them "DC building for dummies", there are probably some good tips in there.
Meanwhile, still today, some people's web sites are still down and they have zero info on whether they'll ever come back!
And their support web site doesn't say anything either.
Surely that says as much about the customer? Like OVH, they had all their eggs in one basket and no real DR plan, having just ticked the box that says 'OVH does it'. In most cases a simple off-site backup, taken whenever anything changed, would have allowed it to be restored quite quickly.
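For what it's worth, the "backup whenever it changed" part doesn't need anything fancy - a sketch of the change-detection half, using only the Python standard library (paths and the manifest format are my own invention; the copy-to-offsite step would follow):

```python
import hashlib
from pathlib import Path

def changed_files(root, manifest):
    """Compare files under `root` against a {relative_path: sha256}
    manifest saved by the last backup run; return the paths that
    need shipping off-site, updating the manifest as we go."""
    to_copy = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        key = str(path.relative_to(root))
        if manifest.get(key) != digest:
            to_copy.append(key)
            manifest[key] = digest
    return to_copy
```

Run it from cron, persist the manifest between runs, and rsync/scp whatever it returns to a box in a different building. Hashing everything is brute force; mtime checks are cheaper if you trust them.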
We are OVH clients, with a not insignificant number of servers hosted there. We had servers in SBG2. We also had no data loss and minimal downtime (what downtime we did have was largely my own fault).
OVH has the technology and network available to avoid building systems with a single point of failure. They have advanced networking capabilities if you want to use them (in our case, we've now added API access & scripts to our network monitoring to repoint IP's if a network goes dark).
The point is that the client has to use them. If they don't, they have only themselves to blame.
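The repointing logic we bolted onto our monitoring is really just a table walk - something like this sketch (the site names, IP map, and monitoring feed are made up for illustration; the actual moves then go out as calls to the provider's API, which I won't guess at here):

```python
def plan_ip_moves(ip_homes, backups, dark_sites):
    """Given where each failover IP currently points (ip -> site),
    a designated backup site per IP, and the set of sites that
    monitoring reports as dark, return the (ip, new_site) moves
    to request from the provider's API."""
    moves = []
    for ip, site in ip_homes.items():
        backup = backups.get(ip)
        # Only move if the current home is dark and the backup isn't.
        if site in dark_sites and backup is not None and backup not in dark_sites:
            moves.append((ip, backup))
    return moves
```

The point being: once the plan is a plain data structure, wiring it to whatever alerting you already run is the easy part.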
BC is more than just backup.
"OVH is proposing that customers will be able to replicate and remove the backup data for their own purposes"
About time cloud providers made backups available for download and use elsewhere. Most providers provide no means of doing this so you have no way of keeping an external copy, nor restoring it to another cloud provider for disaster recovery purposes, let alone on your own kit.
I have, however, succeeded in doing this via tar to a VM on my own kit, but it's far from easy to then make it bootable - not impossible, though. This was mainly to create a local testing platform that we can refresh at intervals for testing releases without incurring the cost of running a duplicate cloud server. It also gives that warm fuzzy feeling that I can recover from a disaster like this within a reasonable time frame.
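For anyone wanting to try the same trick, the pull itself is a one-liner pipeline; here's a sketch that just builds the command (host name and output path are placeholders - adjust the excludes to taste):

```python
import shlex

def pull_vm_tar_cmd(host, image_path,
                    excludes=("/proc", "/sys", "/dev", "/run", "/tmp")):
    """Build the ssh+tar pipeline: stream the remote root filesystem
    down into a local tarball, skipping pseudo-filesystems that
    can't (and shouldn't) be archived."""
    ex = " ".join(f"--exclude={p}" for p in excludes)
    remote = f"tar -czpf - {ex} --one-file-system /"
    return f"ssh {shlex.quote(host)} {shlex.quote(remote)} > {shlex.quote(image_path)}"
```

Unpacking that into a fresh VM's disk gets you the data; making it boot is the fiddly part (bootloader, fstab UUIDs, drivers), which is exactly the "far from easy" bit above.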
That really depends on what services you're using. We use bare metal servers. We install our own hypervisor on them and we back them up to our own, non-ovh facilities using normal backup tools.
We have contingency plans that allow us to restore to either AWS or Azure if necessary.
It's not really up to the provider of the DC to manage your backups. Sure it's nice if they will but there's no substitute for doing it yourself.
I think you miss the point; my point was about cloud servers, not bare metal. Clearly if it's bare metal, everything is your responsibility.
Cloud providers, however, offer snapshots and backups as part of their product, so it's taken care of as long as you switch it on. The problem is you cannot 'off-site' the backup by downloading it, let alone take it elsewhere (at home/office/another provider).
Whilst it's a good way to make it painful to migrate to another provider, it also means all your eggs are in one basket should the provider have a critical outage or simply go out of business.
So let's get this right. Number 2's power set up was literally on top of Number 1's power set up. Which would ensure that if either power set up went 'foom' it would take the other's power set up with it. How was this ever considered to be in any way sensible or reasonable?
(OK, so some of my customers stored their 'back-up' tapes in the same room as, and in one case on top of, the server, but they were all in buildings where the power supply was sort of provided by professional electricians.)
And putting high-power electrical devices inside steel shipping containers as a permanent architectural design looks, almost literally, like 'fell off the back of a lorry' rather than even 'bargain basement' procurement.
I suppose had it all worked really well, Karbe would be hailed as a leader in cost saving design and innovation. As it is ... he may well be remembered for something rather different.