Who else wants popcorn?
*Starts handing out paper shopping bags stuffed full of freshly popped goodness to one & all*
Grab a drink, find a chair, & let's enjoy watching this FlusterCluck unfold!
The OVHcloud datacenter in Strasbourg, France, that was destroyed in a fire last year had no automatic fire extinguisher system nor an electrical cutoff mechanism, according to a report from the Bas-Rhin fire service. The incident report was obtained and published by Journal du Net (JDN), a French-language IT site. It …
I remember a friend commentating on the Monica Lewinsky saga and meaning to type 'Clinton in the White House', but typed 'Whitehouse' instead... auto-correct suggested he meant 'whorehouse'
(I'm not saying auto-correct had trained itself to his personality, but the same guy used to joke that he was so 'ashamed' of his son that he would claim he played piano in a whorehouse rather than admit he was a lawyer)
Too bad for the customers not smart enough to realize the system was designed so that failures would have to be handled by the customer rather than the provider. They just saw the low price and said hey, let's use that; all data centers are the same, right?
"Early on April 30, a fire broke out in one of the data center electrical rooms at Terremark's NAP of the Capital Region in Culpeper, Va. The fire department was on site for hours, and the event was covered by local media. But the facility remained online throughout the entire event, according to Terremark, with no downtime for customers."
"A 2009 fire at the Fisher Plaza data center hub in Seattle caused $6.8 million in damages [...] The July 2, 2009 incident knocked payment processor Authorize.net offline, disrupting e-commerce for thousands of web sites, while also causing lengthy downtime for Microsoft's Bing Travel service, domain registrar Dotster, colocation company Internap and web hosting provider AdHost."
I worked at a company that was hosted at the 2nd facility (they were hosted there before I started my job). I moved them to a different facility in mid-2007, if I recall right, after two full data center power outages (there were more before I started). One of the outages was caused by a curious customer wondering what the "Emergency Power Off" button did (after that incident all customers had to attend EPO training before gaining access). Though in THIS case, the fire was contained to the electrical room and as far as I know no customer equipment was damaged.
The building ran on generator trucks for several months while they replaced the electrical system. The fire caused roughly 42 hours of downtime for the facility, I think (including knocking the HQ of a local TV channel offline). I do recall being told stories that some customers were freaking out because the batteries in their storage systems (which maintain data in the cache) can of course only retain power for X number of hours, and there was uncertainty about when power would be restored.
Though some storage systems were designed to handle that better, in that they ran on internal battery just long enough to dump the contents of cache to an internal drive (one per controller, so there are two copies of the cache data) before shutting the controller down in the event of a power outage.
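That cache-destage behaviour can be sketched roughly like this (a simplified toy model, not any vendor's actual firmware; names like `Controller` and `on_power_loss` are hypothetical):

```python
# Toy model of a storage controller that, on mains loss, runs on its
# internal battery just long enough to dump the write cache to a local
# "vault" drive, then powers down cleanly instead of draining the
# battery for hours trying to keep volatile cache alive.

class Controller:
    def __init__(self, name):
        self.name = name
        self.write_cache = []   # acknowledged writes not yet on backend disks
        self.vault_drive = []   # internal drive used only for cache dumps
        self.powered_on = True

    def write(self, block):
        self.write_cache.append(block)

    def on_power_loss(self):
        # Battery only needs to last minutes: copy cache to the vault
        # drive, then shut down.
        self.vault_drive = list(self.write_cache)
        self.powered_on = False

    def on_power_restore(self):
        # Reload the vaulted cache so no acknowledged writes are lost.
        self.powered_on = True
        self.write_cache = list(self.vault_drive)

# Two controllers, so there are two independent copies of the cache data.
controllers = [Controller("A"), Controller("B")]
for c in controllers:
    c.write("block-1")
    c.write("block-2")
    c.on_power_loss()

assert all(not c.powered_on for c in controllers)
assert all(c.vault_drive == ["block-1", "block-2"] for c in controllers)
```

The point of the design is that the battery sizing problem disappears: it only has to bridge the seconds-to-minutes of the destage, not an open-ended outage.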
I am pretty sure that the large volume of hackers they seem to host (judging by even just the 404 logs) won't mind; they're used to being cut off and just move to others such as Azure or DigitalOcean.
As far as I'm concerned the fire improved things for a bit..
Well, yes. OVH are clear about what their services offer. I've used them for years for development and gaming server hosting. Would I put stuff from work there? No, that's on Azure, suitably redundant.
Gotta pick the service that fulfils your needs. Anyone who uses OVH for critical services, and doesn't put in place their own redundancy is asking for trouble. And that's from a happy OVH customer.
Completely agree. It seemed blindingly clear to me that I would need to provide a backup and failover/DRP solution myself and that's exactly what I did.
While I feel it's fair to blame OVH for deficiencies in the fire prevention area, it's hard to feel any sympathy for customers who didn't bother trying to understand what they had bought... it's not like OVH tried to hide it or make it anything other than obvious.
Have you seen the class action suit? (linked in the article) - they are charging customers almost 600 euros to join them.
"Loss of numerous data and significant economic damage: several thousand OVHcloud client companies were hard hit by this fire.
OVHcloud has therefore, without a doubt, failed in its contractual obligation to protect your data, which is assigned to it in its capacity as a service provider.
Despite everything, OVHcloud refuses to compensate its customers, in violation of its contractual obligations.
You are therefore entitled to engage OVHcloud's liability in order to obtain compensation for your damage suffered."
Back in a former lifetime I used to maintain Bottled Lightning (ancient valve powered HF radio transmitters well past the end of their economic lives)
I don't think I saw arcs that big even from the 50kV DC systems even when rats got across them (but the mercury arc rectifiers were impressive....)
Did you know rats can explode?
I save money because I don't need to hire an IT department!
That is, until you're captured and unable to move away, and then you get price hikes of 15% to 25%, as we've seen since the beginning of the year.
"They" said that OPEX was better than CAPEX. What happens when the running costs exceed the investment and keep growing? Dear bean-counters, what do you think about that?
Yes. Have you been to Telehouse, Equinix, Gyron/NTT etc.? These top-tier data centre operators house the hyperscalers as well as other businesses. You can book a tour and be shown round the public areas by one of their engineering team, where you can see the infrastructure and the building, and even talk to them about maintenance procedures. They don't sell me space on the good data floor and then give AWS the shitty floor :)
Obviously there are other data centres owned by AWS for example which are for their own exclusive use but they will be built to the same standards.
You seem to believe the hyperscalers build their data centers to top tier standards. They do not. They really never have. Their model of operating is if a data center goes down you are still online because you built your apps to handle that failure by leveraging multiple facilities. Obviously there is a huge cost difference from a top tier facility to a lower tier facility, which is why they do it the way they do.
The only exception might be in markets where hyperscalers are leveraging co-location capacity, but they won't tell you that they expect you to make your apps more redundant.
But as we've seen in many situations, most orgs don't do that (or at least do a poor job of it).
"You seem to believe the hyperscalers build their data centers to top tier standards. They do not. They really never have." wut?
Have you ever been on an Azure datacenter tour? Trust me, they are some of the best datacenters I've ever seen.
But even a quick browse of the public documentation shows that they have more redundancy than you seem to think: https://docs.microsoft.com/en-us/azure/security/fundamentals/physical-security
"Availability zones are physically separate locations within an Azure region. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. Availability zones allow you to run mission-critical applications with high availability and low-latency replication."
We do exactly those kinds of assessments for companies and I've had the 'pleasure' of visiting plenty of DCs just like OVH - one provider, with DCs mostly in Germany and some other northern European countries, doesn't even have *any* detection or suppression system. I asked the owner why not and his direct quote was: "What's in a DC to burn? It's all concrete and steel and if the fire alarm goes off, the fire brigade can be here within ten minutes. I know this because we set it off several times accidentally during construction."
I duly noted his answer and put it in my report to my client (who was *very* price sensitive, and the DC provider was cheap as chips) and it went from a 'sure thing' to a 'no thank you'.
"doesn't even have *any* detection or suppression system. I asked the owner why not and his direct quote was: "What's in a DC to burn? It's all concrete and steel and if the fire alarm goes off, the fire brigade can be here within ten minutes. "
If there's no detection system, the fire has likely developed beyond the point where the firefighters can do much to save the contents by the time anyone notices the blaze and calls in an alarm.
So many data centres got set up in any-old-building during the boom of the noughties, particularly in Europe where empty early 20th century industrial buildings were in cheap and ready supply. I've seen many sights - one where a data hall was separated by a plasterboard partition from an indoor go-karting track in the same building, one where some of the DC staff were using the currently empty floor above the data halls to rebuild a couple of old motorbikes - complete with petrol cans and welding equipment.
In some cases, not so industrial...
Case in point Nr 1: TH2, Telehouse 2 in Paris, Bd Voltaire. It's a repurposed 19th-century (or early 20th) building that might have had an industrial past at some point, but was an office building when it was converted to a DC (with fire alarm, extinguishers [inert gas] and all the perks you can expect from a fully fledged DC).
Case in point Nr 2: Once upon a time, when France deregulated the telecoms, everybody with money wanted to become a telco... some used refurbished textile shops in the Bourse area of Paris (ground floor, street level, under 4 or 5 floors of housing... what could go wrong? [You can guess; maybe someday there will be an On Call]).
Sure, it had the fire alarm and the extinguishers... but it was a cobbled-up solution, because everybody wanted to be in the *Silicon Sentier* (nowadays nobody cares... the Internet bubble got them to stop and think before wasting money).
>It sounds like they used an old school building as a data centre not a purpose designed building.
If you had followed matters a few months back when the fire was first reported, you would know the data centres were purpose built. However, it was clear that the design, whilst good for passive cooling, would also make a good brazier. Hence we should ask why sensible fire prevention wasn't included in the build.
>Of those customers going legal, how many of them had sent a competent person on a site visit to see where their data was being housed ?
I suspect in part because of their understanding of 'cloud'.
If they knew 'cloud' was really just the systems in their datacentre relocated to someone else's (singular) datacentre, minus a few features like backup, I suspect businesses wouldn't be so keen on it...
Actually they built a purpose built building... using wood... without any fire extinguisher system.
In France it was a well-known fact *before* the SBG2 fire; the OVH CEO even talked about it once when the Canadians forced him to install sprinklers in the OVH Canadian datacenters (and let's say he was not happy about having to waste money on sprinklers).
There are several other OVH datacenters built to the same building plan... if you are (still) an OVH customer, make sure your 'cloud' backup is offsite, and not on a server next door in the same DC.
The funny thing is, steel is actually not preferred in many fire-risk situations. The problem is that it melts after X length of exposure at Y temperature and then suddenly collapses entirely. It can also warp quite easily long before it melts.
That is the reason that in modern buildings steel beams, columns, joists, lintels etc. are often sprayed with a fire-retardant, cement-like coating to encapsulate them.
That is also the reason many fire doors are deliberately made of solid wood, they can hold the fire off for a surprisingly long period. A wooden door maintains its structural integrity in the unburnt half even when the other half has completely perished.
You are correct about steel and poor fire performance. One quibble: melting is not the issue. Steel loses its strength when it gets hot; steel beams will fail long before they melt. You can see this concept in action if you watch a blacksmith work hot metal.
Wood also has one more trick up its sleeve in a fire. When a wood beam or wooden door burns, the outer layer chars. The layer of char protects the unburned wood below, slowing the overall burn rate.
(shipping containers)> That was SBG1... which was totalled as collateral damage from the SBG2 fire.
Ah. Well, that was not clearly brought out in the press, even the geek press. Hunting around, I found DataCenterDynamics' coverage:
--and linked articles.
I see SBG1 is steel boxes. SBG2 may not be steel boxes but sure follows that theme. I don't see any "old school building" but much is murky.
Yes, in a good fire steel fails quickly and unpredictably, but does not add fuel until past incandescence. The World Trade Center put a few inches of concrete on the steel, which delayed collapse just about as long as code specified (code did not anticipate 9/10 of a jetliner's worth of added burning fuel on an upper floor). In mills where fire happened, "mill construction" meant heavy timber beams and floors that would char for many hours and sag before collapse. "Fire doors" are still heavy wood with a 1-hour or better fire rating. The ceilings in SBG had a 1-hour fire rating, BUT the lack of an electrical cut-off left energy flowing in for far more than an hour.
The "1 meter arcs" may be ionized air, or carbon-tracks on the doorframe (guitar amp builders see 1cm tracks arcing), or the excitement of the moment (deep inside a burning building).
I always have to count backwards through the wars until the Franco-Prussian war to remember which country the Alsace is in currently. :-)
Fun fact, because it is located at the border between two great European powers that were often at war with each other, Strasbourg has become a symbol for European peace. When the European Coal and Steel Community was formed in 1951, with the express intent to tie the French and German coal and steel industries together to make another war practically impossible, Strasbourg was chosen as the seat of its Assembly. A city that is culturally both German and French.
Many reforms later the ECSC has become EU and the Assembly has become the European Parliament. That's also the reason that formally the main seat of the European Parliament is in Strasbourg while, for obvious and practical reasons, it actually mainly meets in Brussels.
Strasbourg is also the seat of the European Court of Human Rights, an institution of the Council of Europe (you know, the one Russia was kicked out of two weeks ago), that interprets the European Convention on Human Rights (you know, the one a number of British Conservative MPs want to get rid of). The Council of Europe, the European Convention on Human Rights and the European Court of Human Rights are all direct results of WWII that had ended only a few years prior.
Strasbourg is therefore steeped in history and symbolism.
...the reason that the services of Google, Amazon, MS et al are stuffing the small, domestic providers has waaay more to do with who you can trust is doing it properly than it does with "anticompetitive practices".
Let's face it, you're far more likely to get purpose-built data centres and a working system of redundancy with the established big names.
> more to do with who you can trust is doing it properly than it does with "anticompetitive practices".
Why can you trust them to do it properly though?
Because they have more money to spend on "purpose-built data centres and a working system of redundancy".
Why do they have more money to spend on these things?
Because they abused their dominance in one market to gain outsized profits, which they then used to subsidise their entry into another market, undercutting any competition except from other companies also able to subsidise their operations similarly.
But the design of the building was so elegant, so chic...
hand crafted crystal chandeliers for lighting... 24k gold wallpaper for electrical screening... deep pile carpets to improve sound deadening... equipment racks crafted from exotic hardwoods
Unfortunately the money ran out before they could install the laser-illuminated dancing sprinkler system... and the on-site funfair kept tripping the power supply, so they had to bypass the circuit breakers
Something doesn't add up
An inverter fire suggests that they were running on battery. You'd hope the inverters wouldn't be getting hot unless you were running on DC. If they were running on DC, there was no need for the local power company to cut the power, as the inverter takes its power from the batteries when the UPS loses its mains feed. If the mains is on, the inverter isn't in use.
Another thing that doesn't add up is the arcs of over a metre. For an arc of over a metre in air you'd be looking at something in the megavolt range surely?
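As a rough sanity check on that figure: the commonly quoted breakdown strength of dry air in a uniform field is about 3 kV/mm, so *striking* a fresh one-metre arc in clean air really would take megavolts. A back-of-the-envelope calculation (not an arc-physics model; an already-established arc through hot, ionised, contaminated air sustains at far lower voltages):

```python
# Back-of-the-envelope: voltage needed to break down an air gap, using
# the textbook ~3 kV/mm figure for dry air in a uniform field. Real
# arcs over irregular, hot, ionised or sooty paths strike and sustain
# at much lower voltages than this.

BREAKDOWN_KV_PER_MM = 3.0  # approximate dielectric strength of dry air

def breakdown_voltage_volts(gap_metres):
    gap_mm = gap_metres * 1000.0
    return BREAKDOWN_KV_PER_MM * gap_mm * 1000.0  # kV -> V

print(breakdown_voltage_volts(1.0))  # ~3e6 V: megavolt range, as suspected
```

Which supports the suspicion that the "1 metre arcs" were either sustained through an already-ionised path, or grew with the excitement of the moment.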
I would guess active UPS systems: they take in shitty mains, rectify it, shove it through an inverter, and it comes out as a clean supply.
Nice, because it doesn't matter what goes on with your mains supply (under-voltage / over-voltage / out of frequency range / missing half a cycle); your equipment gets a steady supply at the right voltage & frequency.
Not so nice if something goes wrong with the control system, because even if you cut the mains feed they will just carry on on battery power.
And if you have those things anyway, you can run the output of the generators (not known for being perfect) also through the inverters.
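The topology described above can be sketched as a toy model (purely illustrative, not a real UPS control system), which shows exactly why cutting the mains feed doesn't stop the output:

```python
# Toy model of a double-conversion (online) UPS: mains is rectified onto
# a DC bus that also carries the battery, and the inverter always runs
# off that bus. Cutting the mains therefore does NOT stop the output --
# the inverter just keeps drawing from the battery, which is exactly
# the hazard described in the comments above.

class OnlineUPS:
    def __init__(self):
        self.mains_ok = True
        self.battery_charge = 1.0  # fraction remaining

    def dc_bus_source(self):
        # Rectifier feeds the bus when mains is present; battery otherwise.
        if self.mains_ok:
            return "rectifier"
        return "battery" if self.battery_charge > 0 else None

    def inverter_output(self):
        # Inverter produces clean AC whenever the DC bus is energised,
        # regardless of what the raw mains is doing.
        return "clean AC" if self.dc_bus_source() else "off"

ups = OnlineUPS()
assert ups.inverter_output() == "clean AC"

ups.mains_ok = False  # the power company cuts the feed...
assert ups.inverter_output() == "clean AC"  # ...output carries on regardless

ups.battery_charge = 0.0  # only a dead battery (or an EPO) stops it
assert ups.inverter_output() == "off"
```

This is why datacenters are supposed to have an EPO path that drops the DC bus itself, not just the incoming mains, something SBG2 apparently lacked.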
"The OVHcloud datacenter in Strasbourg, France, that was destroyed in a fire last year had no automatic fire extinguisher system nor an electrical cutoff mechanism, according to a report from the Bas-Rhin fire service."
I wish El Reg, whose hacks can occasionally throw in a French word or two, had run this article earlier.
Plenty of leaks in the French local press, coming straight from the fire brigade or police, have been all over the place for months, about the wooden floors and the lack of any freaking automatic fire suppression...
I even commented on it here, at the end of the page, 5 months ago:
The lack of a power cut-off is news to me, however; I've never seen anything about that before. This indeed explains a lot...
"...and did not understand the need to pay for backups..."
Sounds like every user in every company I've ever worked in. Users and management just don't understand the importance of reliable backups unless they are a technical company or a bank; and even the banks only do it because regulations force them to. And in the cloud? Well, you NEVER need to back up the cloud, right?
Fortunately I've worked at quite a few places that had good backups, and I've done a few restores over the years, so it's not a universal truth that management doesn't understand. However, I do remember one place that had the good fortune never to have to switch to its standby server, because its overnight backup from live to standby consistently failed for lack of time, and up until then nobody had noticed. I noticed because my gig was to replace both boxes for Y2K reasons.