20x7
... is that because of a 4-hour lunch break?
French cloud company OVH has pledged to create a lab to model the effects of data centre fires – less than two weeks after one of its data centres was destroyed by fire. News of the lab came in a video update from OVH founder and chair Octave Klaba, who advised that some services in the company’s ruined Strasbourg data centre …
The slur is that the French are lazy.
They are no worse than the USA or Germany.
Can we agree that if 35 person-hours in France produce about as much as 40 person-hours in the UK, the French are doing something right? They are working to similar rules.
So the question comes down to this: can the French really produce in 35 hours about as much as a Brit does in 40?
Fortunately this is actually something the UK government has looked at.
International comparisons of UK productivity (ICP), final estimates: 2016, page 3:
[UK output per hour worked was]
- lower than that of Italy by 10.5%, with the gap widening from 9.6% in 2015
- lower than that of the US by 22.6%, with the gap narrowing from 23.1% in 2015
- lower than that of France by 22.8%, with the gap widening from 22.2% in 2015
- lower than that of Germany by 26.2%, with the gap narrowing from 26.8% in 2015
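Running the arithmetic on the 35-vs-40 question (a back-of-the-envelope sketch; the only input is the quoted 22.8% gap, which I'm reading as UK output per hour = French output per hour × (1 − 0.228)):

```python
# Implied weekly output from the ONS 2016 per-hour figures quoted above.
# Assumption: "lower than that of France by 22.8%" means
#   UK output per hour = French output per hour * (1 - 0.228)

uk_per_hour = 1.0                        # normalise UK output per hour to 1
fr_per_hour = uk_per_hour / (1 - 0.228)  # ~1.295x the UK figure

uk_week = 40 * uk_per_hour  # a British 40-hour week -> 40.0
fr_week = 35 * fr_per_hour  # a French 35-hour week  -> ~45.3

print(f"UK, 40 hours:     {uk_week:.1f}")
print(f"France, 35 hours: {fr_week:.1f}")
```

On those numbers a French 35-hour week comes out roughly 13% ahead of a British 40-hour week, so the answer is yes, they can.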
If you're American: productivity per hour in France is about the same as at home, and a lot better than in Canada or the UK.
After 30 years living in France I can quite categorically say that anyone who thinks the French are actually more productive than the US or UK is smoking something very interesting.
The figures are in part skewed by the large amount of invisible work in France, people working "au noir" (off the books), not declaring their work to avoid paying tax, which boosts the apparent GDP. Traditionally Britain has tended to clamp down much more harshly on that. France sticks to its legal 35-hour week and ignores the people working 40+ hours and getting paid under the table for the extra.
A couple of years ago I was travelling in rural France.
I just missed the pharmacy closing for lunch - I thought, lazy bastards.
Then it dawned on me - who's got this quality of life thing right and who's got it wrong?
My conclusion: the Frogs have got it right - there's more to life than work.
Being just another alienated mouse on the 24/7 capitalist treadmill is not my idea of living.
YMMV
If you look at the site layout, it would be awkward for SBG1 to remain while SBG2 is demo'd and rebuilt. As SBG1 looks half toast anyway, moving any remaining kit elsewhere seems sensible, even if it's into more containers north of SBG4. Of course they may still call that "SBG1"...
https://lafibre.info/images/ovh/202103_ovh_strasbourg_plan.jpg
https://lafibre.info/images/ovh/202103_incendie_ovh_strasbourg_3.jpg
My company discovered that's not a good idea after a data centre was wiped by the IRA back in the bad old days. Hilariously the reserve hardware was located in Belfast!
Backing up SBG2 to SBG3 is almost as bad. Do people never learn? If it hadn't been a fire, it could have been a shared network or power setup that took out the whole Strasbourg complex. These days geographical separation of data should be more than 100 yards - oops, metres. Preferably in a different country.
The point is that, having been wiped out in SBG2, the easy bit is getting hardware running elsewhere: a VPS can be up and running in minutes, a few mouse clicks from a wide choice of providers. But if you have to wait WEEKS before you can get your hands on the backup and start loading it, well, what can one say? And if a customer finds the (presumably paid-for) backup has gone up in smoke too, then any sympathy one may have for OVH being the victim of a possible third-party UPS error follows it up the chimney.
The French IT lawyer industry must be licking their lips - provided they don't host their billing system with OVH.
Depends on the risk assessment. If you are in a place with mass-scale events like earthquakes, then 25 miles is too close. At a previous employer, our US West Coast systems were backed up by systems in the middle of the country.
IMHE this is relatively easy & cheap to do with fully cloud-native applications & infrastructure. In our case, we had a complete replica of our infrastructure on "cold standby" for less than $500/mo, with continuous data backups to a distributed infrastructure and daily full snapshots. And, yes, we tested it 2x/year - it took 15 min to spin up the replica.
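For the curious, the daily-snapshot half of that is only a few lines. Here's a minimal sketch, assuming AWS and boto3 (the region names are placeholders, and this is illustrative, not the actual setup described above): each snapshot in the primary region gets copied to a standby region a long way away.

```python
# Off-site snapshot replication, sketched with boto3. Region names are
# made up; adapt to taste.

import boto3

PRIMARY = "us-west-1"  # hypothetical primary region (West Coast)
STANDBY = "us-east-2"  # hypothetical standby, well clear of the same fault line

src = boto3.client("ec2", region_name=PRIMARY)
dst = boto3.client("ec2", region_name=STANDBY)

# List the EBS snapshots we own in the primary region...
snapshots = src.describe_snapshots(OwnerIds=["self"])["Snapshots"]

for snap in snapshots:
    # ...and copy each one into the standby region. Note that
    # copy_snapshot must be called in the *destination* region.
    copy = dst.copy_snapshot(
        SourceRegion=PRIMARY,
        SourceSnapshotId=snap["SnapshotId"],
        Description=f"DR copy of {snap['SnapshotId']} from {PRIMARY}",
    )
    print(f'{snap["SnapshotId"]} -> {copy["SnapshotId"]} in {STANDBY}')
```

The restore drill is the same idea in reverse: spin instances up from the standby region's copies and re-point the traffic.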
"Depends on the risk assessment. If you are in a place with mass-scale events like earthquakes, then 25 miles is too close. At a previous employer, our US West Coast systems were backed up by systems in the middle of the country."
I hope you are aware everything east of San Andreas will eventually drop in the Atlantic ;)
Most of my customers are quite happy for key systems to be parked in separate rooms of the same building or complex. It's cheap and thankfully events like this are incredibly rare.
Some are a bit more cautious and have a bit more money to spare so say they need to be at least a few miles apart to protect against local issues, but within c. 80km of each other 'cos it does take time for network packets to get through a stack of kit and down the cables and the microseconds soon add up.
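For a feel of how those microseconds add up (a rough Python sketch, fibre propagation only, ignoring every box in the path):

```python
# Rough fibre propagation delay: light in glass travels at about 2/3 of c,
# i.e. roughly 200,000 km/s, or ~5 microseconds per km one way.

C_FIBRE_KM_S = 200_000  # approximate speed of light in fibre, km/s

def rtt_ms(km: float) -> float:
    """Round trip over a fibre run of `km` kilometres, in milliseconds."""
    return 2 * km / C_FIBRE_KM_S * 1000

for km in (1, 80, 300):
    print(f"{km:>4} km: ~{rtt_ms(km):.2f} ms RTT, fibre alone")

# 80 km adds ~0.8 ms per round trip before any switches or firewalls;
# a synchronous write that needs several round trips soon feels it.
```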
A few expect DR systems to be at least 300 km away, although I'm not quite sure what they're trying to protect against, as I repeatedly point out that an event that can affect sites 299 km apart is likely to be of global significance, and do they really expect my ops teams to be staying at their posts to deal with it or running home to their families for a last goodbye? Admittedly one was in California, so having something on the other side of the Rockies probably makes more sense for them.
On the other hand, I've seen dual-redundant servers, comms and storage not only running in the same rack, but all running off a single (and worryingly warm) power strip plugged into one 13A wall socket. I did not like being in that room...
You have no idea what can be found in French datacenters...
I have the backup remote connection equipment (I used to do support for $TELCO) sitting on top of the main remote connection equipment (and under yet another piece of remote equipment), in a specific room, in a given rack.
And I have another one (again for a $TELCO) where the kit sits side by side: two racks, in the same row, in the same exchange building converted to a datacenter.
“I want to test how the fire is going in different rooms,” he said. “I want to find how to extinguish the fire in different rooms and share with the rest of the industry.”
Sure, mate, but no need for a lab, I can educate you:
- fire goes up, which is accelerated if you have an abundant oxygen intake and also levels that aren't fire-contained in your DC... erm, stack of containers. Especially true with vertical cable trays that are not fire-sealed
- fire normally eats everything above the source, and horizontally too if the previous condition is met. This is especially true when there is absolutely fuck-all in the way of an automated fire suppression system of any kind (Inergen is the norm and will extinguish a major fire in mere seconds)
- having two poor sods, alerted by the fire sensors, turn up on site (well, at the containers) with probably outdated portable fire extinguishers doesn't cut it for a major fire, at all. When your curtains catch fire at home, it does the job, though
- a stack of containers is NOT fit for purpose as a DC fed by multiple 20 kV lines. It's fit for purpose for carrying idle goods on ships
I trust the lab will turn out to be a funny way of proving the above.
PS: Klaba, there are other nice jobs than DC management. You should really try those and leave this to people who know better.
I'm sure he is much more competent than me, but could someone explain to me how a server fire is different from an electrical fire?
Is it that an electrical fire is just the power cables and their insulation, whereas a server fire is the electricity plus the toxic plastics and metals that are melting?
Does that really make such a difference?
Normally, IT kit can start a fire (a short circuit or something else, as in this case) but barely contributes to it after that.
What propagates the fire is mostly the plastic of all the cables, of which there are tons. Plus anything built of wood, as was apparently the case at OVH (you read that correctly: one building had a wooden floor).
Combine this with fancy flows of forced fresh air all over the place, including between the levels of the building, where... cables are running, and a multi-level fire is guaranteed.
EDIT: there is no difference between an electrical fire and any other fire. They start from different causes but then, after ignition, it's all the same.
"how a server fire is different from an electrical fire ?"
His terminology wasn't great, since a server fire is likely an electrical fire. If you have a server on fire, you have a small amount of combustible material and the added concern of something like a 230 V, 15 A(?) electrical circuit pumping in added energy. If the power equipment for a large datacenter catches fire, you likely have more combustibles, plus higher voltage and insanely higher fault currents available.
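To put rough numbers on that difference (the 230 V / 15 A figure is from the comment above and the 20 kV from elsewhere in the thread; the 1 kA fault current is my own illustrative guess):

```python
# Power available to feed a fault: one server circuit vs. an HV supply run.
# The 1 kA fault current is a made-up but plausible figure for HV gear.

server_circuit_w = 230 * 15  # ~3.45 kW into one burning server
hv_fault_w = 20_000 * 1_000  # 20 kV at a 1 kA fault: 20 MW

print(f"server circuit: {server_circuit_w / 1e3:.2f} kW")
print(f"20 kV fault:    {hv_fault_w / 1e6:.0f} MW")
print(f"ratio:          ~{hv_fault_w / server_circuit_w:,.0f}x")
```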
It all depends on the recover/restart philosophy.
If the intent/need/promise is to recover kit and recommission it, then yes, you want different fire control chemicals and procedures than a simple 'flood the place and hope for the best' approach.
If the approach is to just put the damned fire out and who cares about the aftermath, well, a couple of hundred thousand deca-liters of water will do the job.
An electrical fire is only electrical while there is electricity present in the burning materials. As soon as you turn off electrical supply, it becomes a “standard” fire.
The only rule is you don’t put water on an electrical fire, as you can create electrocution risks, and also cause further shorts elsewhere that start new fires. That’s why we have inert gases for DCs, CO2 extinguishers, etc.
I’ve now written two posts about this on Medium. These DCs had wood in the structure, had no suppression other than handheld extinguishers, and were built like a giant chimney, with no control over ventilation that would have allowed the air intakes to be closed during a fire... the list goes on.
Bottom line is, you get what you pay for. You have to think hard about why you can get servers for a few quid a month instead of paying big bucks for metal in a proper DC...
The fire lab is a useless idea, fire behavior has been simulated to death by suppression system vendors and fire departments/labs.
"The fire lab is a useless idea, fire behavior has been simulated to death by suppression system vendors and fire departments/labs."
Wrong: this will be the only way to convince the owner of how incredibly stupid he was. The expression "penny wise and pound foolish" comes to mind.