You get what you pay for
Despite the promises, cheap colocation means relying on other people keeping their promises, etc etc, and sometimes they're gonna disappoint.
To be fair, LINX is in LD8, so for some there is no option but to peer there.
Equinix has doubled down on efforts to irritate customers following a lengthy bit of unplanned downtime at its LD8 London data centre. A faulty UPS caused the facility to fall over yesterday, leaving customers (including major ISPs) without connectivity. It took until 21:50 BST on 18 August for the company to state that …
Not sure I'd describe LD8 as cheap co-location; last time I looked at our bills it was the most expensive co-lo we have!
LINX being at LD8 does not force anyone to peer there. LINX has two London peering LANs spread across 16 locations, and you can reach any LINX member on the same LAN as you from any of those locations; that is one of the key benefits of LINX. A lot of members will be in at least two locations.
We also got Equinix stating they would bill us for remote hands to check our rack during this outage, and they claimed power was back a long time before it was actually restored to large parts of the datacentre.
More worryingly, we had a similar issue with the UPS at MA1 (Equinix Manchester) causing a total power outage back on 11 May this year. I can only assume the clowns doing the work incorrectly in Manchester have now moved to London. I suspect Equinix will have several customers asking questions about their expertise in power management after these two incidents.
No, the cloud is your code on other people's computers in their data center. If it fails, they move the code to other computers in other data centers = their problem.
This is the old second-worst case: your computers in their building. Now it's both your problem.
The only one worse is on-prem. Now the computers, power, comms, security, and the "oh shit, our building just flooded/burned down/got invaded by zombies" moments are all your problem.
Yep, but with on-prem, you can do WHATEVER is necessary to resolve it, and also know everything you want to know about what's happening, instantly.
To many people, that's worth far more.
Not even including the fact that I don't have to worry about what they do with the hard drives with my customers' data on them when we're done with them.
That's what annual bonfires are for, dontcherknow.
This was an electrical problem. Unless your onsite facilities staff are qualified and able to fix that kind of thing and have all the relevant spares to hand, you're probably going to be dependent on someone else whether you have physical access or not. I suspect Equinix might have rather more leverage on suppliers, given their size.
The visibility/comms point is very true though and that was clearly a key problem here.
If you're worried about disposal of disks you should probably look into encryption at rest, wherever they are. It's really easy in the cloud - robust key management infrastructure is there already and seamlessly integrated into lots of different types of block storage/object storage/database services. It's a nice advert for some of the advantages of using cloud infrastructure - you don't have to do all of this kind of thing yourself.
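For what it's worth, here's a minimal sketch of what that integration looks like in practice, assuming AWS with boto3; the key description, bucket name and volume parameters are made up for illustration, not anything specific to this story:

# Minimal sketch: encryption at rest with a provider-managed key (AWS assumed).
# Key description, bucket name and volume size below are illustrative only.
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# 1. Create a customer-managed KMS key; the provider handles the HSMs,
#    rotation and audit logging for you.
key_id = kms.create_key(Description="example data-at-rest key")["KeyMetadata"]["KeyId"]

# 2. Make every new object in a bucket encrypted with that key by default.
s3.put_bucket_encryption(
    Bucket="example-customer-data",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            }
        }]
    },
)

# 3. Block storage gets the same treatment: one flag, no disk-wipe ceremony later.
ec2.create_volume(
    AvailabilityZone="eu-west-2a",
    Size=100,  # GiB, illustrative
    Encrypted=True,
    KmsKeyId=key_id,
)

Once that key is disabled or scheduled for deletion, whatever later happens to the physical drives is largely moot, which is the point about disposal.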
Yes, when you control everything, then your #1 problem is the #1 problem for the people responsible for fixing it. But it also means that you have to fund and deliver the solution all by yourself.
And if you lack the space/electrical capacity/cooling capacity to hold the new servers you need to solve the problem?
I don't know what cloud contracts typically look like, but businesses MUST seriously examine the penalty clauses to ensure that they won't get hung out to dry when the counterparty fails to deliver. I continue to be astounded by the child-like faith that companies exhibit towards each other in that regard.
But believe me, when the estimates for server load for Pokémon Go turned out to be wrong by a factor of four, Niantic was almost certainly thrilled that it was Google scrambling to get the resources in place instead of themselves.
Actually, wherever it is, it's your problem.
At EVERY point, it'll be the cheapest staff they can get, the fewest staff they can get away with, etc, etc, etc,
so at each level you outsource, you're actually increasing risk in my opinion. If you have to co-locate somewhere, don't have anyone in the middle, as that's another layer of crap that you'll have to deal with.
If I had an MSP based in that DC and it went down, I don't CARE that an Equinix centre went down; I'm being invoiced by that MSP. If they didn't build in resilience then that's not Equinix's problem but the MSP's.
If YOU have YOUR servers in the Equinix centre and it goes down then YOU have control, YOU have responsibility and YOU own the risk. It'll probably be a fuck load cheaper as well. I haven't seen ONE MSP, Vendor, Outsourcer, consultancy etc, that doesn't take the money and just try to push the work BACK on to the customer.
It's amazing how many suited consultants I've pissed off down the years by saying "isn't that what we're paying YOU to do?"
Yes, I'm aware of the difference between Cloud (aka other people's computers) and co-lo (your computers in other people's buildings).
The point I was making was:
1. to deliberately overlap the two for comedic effect
2. to point out that in both scenarios you are shit out of luck when trusting other people to run your hardware and software for you and they fuck it up
You only have control if you host your own DCs, run your own power backups, host your own servers, storage and network, and employ people who 1. give a fuck and 2. know what they are doing.
In today's IT industry, management clowns are getting sucked off to believe other people running kit for them is better. They are wrong.
Do you also design your own CPUs and motherboards, run your own internet backbone and have the ability to manufacture your own diesel supplies?
If not, it’s just a question of where you choose to draw the line. There are multiple valid answers to that.
Your approach is theoretically valid, but having seen the state of many on-prem DCs, and some of the people who run them, I have a slightly jaundiced view of what it actually looks like in practice. I accept there is some well-run on-prem out there though, and it can be great. It’s just not that common, and getting less so as the top talent is getting hoovered up by the cloud giants.
A technical glitch can happen to anyone, we all know that.
But to invoice your customers because they asked you to check on their status is pushing things too far.
Such behavior can only mean that Equinix is a company scraping for every penny it can possibly get, which doesn't bode well for hardware maintenance or employee proficiency.
Rubbish. Anything automated you can turn off, or you can send an automated follow-up email stating "previous email was nonsense, please ignore". Instead, they are going to generate a lot of bad will. Golf claps all round, and one for you for saying that people can't think for themselves.
It seriously screwed up some leased lines I know of, including one of ours. This is what the ISP gave out as info: https://noc.enta.net/2020/08/tt-device-outage/ (one of the messages there mysteriously rewrote itself, but I still have the email copy ...)
I have no idea how they are all intertwined, but IPv6 vanished along with quite a lot of the internet in my office, and for another site (same ISP) the whole internet vanished. The outage lasted from just after 0500 to about 1600.
£170. ONE HUNDRED AND SEVENTY POUNDS to open a screen and reply to an email? Really?
That, right there, was my "What actual fuck" moment. As in, what the AF am I already paying THOUSANDS per month for, if not basic monitoring and maintenance, even if this wasn't the host's fault and problem?
Why the surprised look on your face?
Because even though I'd expect a certain amount of gouging, that is an absolutely staggering amount of money for a routine 10-second effort. A Pentagon-level charge-out rate, really.
In my money that is more than R3,500 (South African rand), and it's enough to buy half a fair-ish laptop for a high school pupil, or 10 hours of IT support work at my (what I think of as extortionate) corporate rate.
Nice work if you can get it. I guess the techie whose labour is being charged out at roughly £61,200 an hour (£170 for a ten-second job) is probably seeing about the same as I do, i.e. not far from £20/hr.
You can't highlight the £170 without EXPLICITLY stating it was NOT an invoice but an order request from the customer through the customer portal, with standard agreed charges involved. Even the OP acknowledged this in his tweet :(
Everyone is now misreading the sentiment and intent!
Poor show, El Reg!
You can’t highlight the £170 without EXPLICITLY stating it was NOT an invoice but an order request
I think everyone got that. The whole point of the article was to *highlight* that someone let the *intent* to charge slip through. Also that later it was withdrawn with scarcely even a shrug, let alone an apology.