Re: Meh
Who knew that clouds were so flammable?
A fire that broke out this morning at Telstra's London Hosting Centre (LHC) bit barn in the English capital has disrupted customers' services, with a fire crew called to tackle the flames. An email to clients from SIP and hosted telephony provider Voiceflex, seen by The Register, confirmed its hosted servers were …
To be fair, it's not just cloudy stuff. I manage a few on-prem Asterisk-based PBXs, and the SIP trunks for connecting to the PSTN world have to go via a provider, who might have their kit in that data centre. As far as VoIP systems go, it's about as un-cloudy as you can get really, I think.
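For the curious, the trunk side of that is only a handful of lines of Asterisk config. A minimal pjsip.conf sketch along these lines (the provider host and account details are made-up placeholders, nothing to do with the outfit in the article):

    ; registration with the upstream trunk provider (placeholder details)
    [provider-reg]
    type = registration
    outbound_auth = provider-auth
    server_uri = sip:sip.example-provider.net
    client_uri = sip:myaccount@sip.example-provider.net

    [provider-auth]
    type = auth
    auth_type = userpass
    username = myaccount
    password = changeme

    [provider-aor]
    type = aor
    contact = sip:sip.example-provider.net

    ; endpoint used for calls to/from the provider
    [provider-endpoint]
    type = endpoint
    context = from-trunk
    disallow = all
    allow = ulaw,alaw
    outbound_auth = provider-auth
    aors = provider-aor

    ; match inbound traffic from the provider's host to that endpoint
    [provider-identify]
    type = identify
    endpoint = provider-endpoint
    match = sip.example-provider.net

All the PBX smarts stay on-prem, but if the far end of that registration is sitting in a rack that's on fire, your calls to and from the PSTN are gone all the same.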
From memory, there were some issues around the building ownership (financial engineering <cough>) and the division of spoils following the collapse of PSINet. So Telstra got some assets, Cogent got the US, and GTT/Interoute the European network. Some floors were then leased out to other carriers, including Reliance, as a result of FLAG also going titsup and leaving its transatlantic cable dangling there. But several large/strategic carriers still have it as a core PoP & colo site for their wholesale customers. I've done business there in the past for capacity, as there's been reasonable physical separation & fibre availability. When a big PoP goes <pop> though, it's always a good test of capacity planning & management.
The all-time best I ever heard was recounted in the Netwire conference on Cix back in the day.
This was shortly after "big bang" in the City and took place at one of the dealing houses now part of ${bank}. Their IT director put together a presentation showing that, while the mainframes were protected by UPS, the loss of the phones, screens, lighting (etc., ad nauseam) meant that, in the event of a power outage, they'd be effectively unable to trade. He included figures showing that this would cost them £1m a day. Back in the eighties...
The board approved the stripping out of a basement level and the installation of an online UPS to run the entire building.
When the UPS went titsup.com, everything their IT director predicted happened. The only thing he got wrong was that his figures were conservative. It cost them over £2.5m a day. They were down for a week. He was so fired it was unbelievable.
Happens often enough that I like to have dual-power-supply servers. One supply plugs into the UPS, the other into utility power.
Sure, I get "dirty" power from the utility instead of "clean" UPS power, but power supplies are pretty tolerant things.
Maybe I'm just bitter after APC and their special cables (the ones that look exactly like 9-pin serial cables but aren't quite compatible).
When I worked for a well-known telecommunications organisation, in an old building, we once had a genuine fire: a UPS deciding it was time to join the choir invisible, so to speak. We had three fire tenders turn up and most of the day off. Oh, and a police officer toured the building telling everyone to leave (it was not time for the official annual test evacuation of the site, you see; that only happened in the week of the August public holiday, to save on disruption and to ensure that all 15 people sad enough to be at work then could get out in the required 3 minutes).
You would almost think that these companies have furloughed most of their staff. Of course, fewer people around means less chance of someone walking past a failing UPS and thinking "that seems hotter than usual, we should check that it's OK."