Who have they pissed off? It just seems "odd" to see a VPS provider on a hit list.
Virtual server host Linode has been on and offline since Christmas Day as it weathers an ongoing denial-of-service attack. Four days in, its customers are getting grumpy. Linode Status page ... Linode still suffering days after attacks began "We are currently aware of a DoS attack that is affecting the Linode Manager/ …
It strikes me as odd that the only defence so far against DDoS is the likes of Cloudflare. Has anyone here been involved in handling a DDoS? What did you do, and what worked?
It's a risk that almost any Internet business is now exposed to, with apparently no way to identify, let alone deal with the perpetrators.
There are things you can do, but it takes money and pre-prep. In extreme cases, you route all your traffic through a company that 'cleans' your inbound connections (like Arbor Networks). Or you can use a DNS-based service like CloudFlare.
It seems strange that Linode hasn't done something effective yet, esp. since it looks like it's only targeting their management tools (as opposed to their entire network block...).
I would think there's been some forewarning here lately... ProtonMail and Janet come to mind. Both were multi-day DDoS attacks. If, and it's a big if, it's the same people, they'll probably be doing more of this. The big question is, what's the goal? Are they using these attacks to hide something else like a penetration? Or merely trying to drive things off the air, so to speak?
"The big question is, what's the goal? Are they using these attacks to hide something else like a penetration? Or merely trying to drive things off the air, so to speak?"
Just the question I was asking. I did wonder if this was part of a series of tests to check out a process that will then be used as part of a much larger offensive at some time in the near future.
The principle of an attack against critical telecoms infrastructure is one that has been raised before by a lot of people working in security, both as something to be defended against and as a possible attack vector.
Heard via reliable sources that the mail provider DDoSes were ransom DDoSes (there were more than two; can't remember the details), and odds are it's the same group again. While it's semi-easy (if costly) to DDoS-protect one website or service against almost any attack, it's another story to protect a whole cluster of DCs plus countless services and sites.
The attackers seem to have a lot of resources at their disposal, going on the previous attacks, so it's not just some small-time script kiddies; this has probably been planned for a long time.
Also makes me worry that this is going to follow the same pattern as the previous attacks i.e. they switch to another company in the same industry e.g. Digital Ocean, Vultr, etc. Hopefully not.
It would perhaps be cynical to view the attacks as a tool to get people to move to another (preferred?) provider? Maybe the attackers are getting kickbacks?
AFAIK, it isn't any harder to protect a whole bunch of IPs vs one IP, just a whole lot more expensive.
Fundamentally, the real solution is to move to IPv6, which has better tools for dealing with this. Also, the idiots with edge gateways that don't verify source addresses (and therefore route any random address sent to them) should be banned from the internet forever.
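The source-address check those gateways skip is essentially BCP 38 / strict reverse-path filtering: only accept a packet if its source address is routable back out the interface it arrived on. A toy sketch, with a hypothetical two-interface routing table:

```python
import ipaddress

# Hypothetical routing table for a small edge gateway: prefix -> the
# interface where that prefix legitimately lives.
ROUTES = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",   # customer LAN A
    ipaddress.ip_network("198.51.100.0/24"): "eth1",  # customer LAN B
}

def accept_packet(src_ip: str, in_iface: str) -> bool:
    """Strict uRPF sketch: the source address must belong to a prefix
    routed via the ingress interface, otherwise it's likely spoofed."""
    src = ipaddress.ip_address(src_ip)
    for prefix, iface in ROUTES.items():
        if src in prefix:
            return iface == in_iface  # source must match the route back to it
    return False  # unknown source at the edge: drop (strict mode)

# A packet claiming LAN A's address but arriving on LAN B's port is spoofed:
assert accept_packet("203.0.113.7", "eth0") is True
assert accept_packet("203.0.113.7", "eth1") is False
assert accept_packet("192.0.2.1", "eth0") is False
```

A gateway that drops these spoofed packets at its own edge can't be used to launder reflection traffic with faked victim addresses.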
The problem is that, depending on the nature of the attack, it can be very expensive to mitigate; and that's assuming you actually have enough incoming capacity in front of any DDoS mitigation kit to deal with the volume coming in. You could get your upstream to null-route the target IP, but that just takes the target down anyway.
The attackers, however (who don't care about breaking the law), are likely using compromised boxes, so they're probably paying little or nothing for their resources. That makes it expensive to defend against but fairly cheap to carry out.
While blackholing is an important mitigation tool (especially when done selectively on the pipes most attack traffic is coming through as opposed to globally; few transit providers offer this as standard though), if you have specific customers being targeted, it's still important to keep those customers up during/after an attack. Otherwise you are just inviting more DDoS...
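The selective-blackholing idea above can be sketched as a simple selection rule: announce the /32 blackhole only to the transit pipes carrying the bulk of the attack, so the target stays reachable through the clean ones. All names and volumes below are made up:

```python
# Hypothetical per-transit attack volumes (Gbps), as measured at the edge.
attack_gbps = {"transit_a": 40.0, "transit_b": 2.0, "transit_c": 0.5}

def peers_to_blackhole(per_peer: dict, threshold: float = 0.5) -> list:
    """Pick the transit providers to send the /32 blackhole announcement to:
    any peer carrying more than `threshold` of the total attack traffic.
    The remaining (clean) peers keep carrying legitimate traffic."""
    total = sum(per_peer.values())
    return sorted(p for p, gbps in per_peer.items() if gbps / total > threshold)

print(peers_to_blackhole(attack_gbps))  # -> ['transit_a']
```

In practice this is done with provider-specific BGP blackhole communities, and as the post notes, few transit providers offer that selectivity as standard.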
I have (many years ago though). Requires cooperative upstream providers, equipment that can do packet filtering at meaningful speed, reasonably fat pipes (preferably with a lot more outgoing than incoming traffic so the occasional DDoS doesn't cause your 95th percentile billing to go through the roof) and a bag of tricks.
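The 95th-percentile billing mentioned above works roughly like this: the month's 5-minute traffic samples are sorted, the top 5% are discarded, and you are billed at the highest remaining sample. A short DDoS burst therefore vanishes from the bill, while a sustained one does not. A quick sketch (numbers made up):

```python
import math

def billable_mbps(samples: list) -> float:
    """95th-percentile billing: sort the 5-minute traffic samples, discard
    the top 5%, and bill at the highest remaining sample."""
    ordered = sorted(samples)
    idx = math.ceil(0.95 * len(ordered)) - 1  # index of the 95th-percentile sample
    return ordered[idx]

# A 30-day month of flat 100 Mbps, with a 24-hour 10 Gbps DDoS overlaid.
# 5% of a 30-day month is ~36 hours, so the whole burst falls in the
# discarded top 5% and the bill doesn't move.
month = [100.0] * 8640                # 30 days of 5-minute samples
burst = month[:]
burst[:288] = [10000.0] * 288         # 288 samples = 24 hours at 10 Gbps

print(billable_mbps(month))  # -> 100.0
print(billable_mbps(burst))  # -> 100.0 (the 24h burst is discarded)
```

Run the attack for more than ~36 hours of the month, though, and the spike lands inside the billable 95%, which is exactly why sustained attacks hurt.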
I'd imagine it's harder nowadays with multi-million-host botnets and such, although if the attackers max out the pipes of the hosts, they should still die off pretty quickly.
I've heard of quite a few data centres/hosting companies being attacked like this recently: usually a threat with a ransom demand first, then a small attack, then something more sustained. Unfortunately, the attacks are also becoming harder to prevent. These days I don't think you have to piss anyone off; it's simply a matter of time.
As Janet found, publishing too much information while the attack continues simply hands the attackers the details they need to fine-tune their attack.
If you run any kind of internet-facing business, this is an unfortunate fact of life these days; however, how your VPS provider handles the customer side is critical. We spend a few hundred dollars per month on Linode for a VPN product, chosen specifically so that we would have better protection than a single node in a couple of data centres.
The fact this happened over Christmas has saved us a lot of trouble, as most of our customers are not using our VPN products, but the wholesale "taking down" of the entire Linode IP range for each data centre in various patterns has reduced our offering to ZERO at various times irrespective of any redundancy we may have built in.
A simple email, apologising and letting us know WTF is happening and to what extent, would have allowed us to take some form of alternative action, such as rebuilding at least one server on another VPS provider, and given us a bit of confidence that come January 4th we have a plan B if this continues. DNS propagation is often not instant, so plan Bs need some, well, planning. In this day and age we all rely on good communication from our suppliers when they are having difficulties, and I am afraid in this case Linode have failed catastrophically.
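The DNS-propagation point above comes down to TTLs: a failover plan B only switches quickly if the record's TTL was lowered well before the cutover. A minimal sketch of that arithmetic (values hypothetical, and assuming resolvers honour TTLs):

```python
def max_staleness_after_switch(old_ttl_s: int, new_ttl_s: int, lead_s: int) -> int:
    """Worst-case seconds a resolver keeps serving the old address after a
    DNS cutover, in a simplified model ignoring resolvers that clamp or
    ignore TTLs.

    old_ttl_s: TTL before the failover was planned
    new_ttl_s: lowered TTL served once the failover was planned
    lead_s:    how long before the cutover the TTL was lowered
    """
    # Worst case is either a cache that grabbed the record (old TTL) just
    # before we lowered it, or one that grabbed the lowered-TTL record
    # just before the cutover.
    return max(new_ttl_s, old_ttl_s - lead_s)

# With a day-long TTL, lowering to 5 minutes a full day ahead bounds the
# stale window at 5 minutes; lowering only an hour ahead leaves almost a
# day of stale answers in the wild.
assert max_staleness_after_switch(86400, 300, 86400) == 300
assert max_staleness_after_switch(86400, 300, 3600) == 82800
```

Hence the January 4th point: the TTL has to come down days before you need the plan B, not on the morning you flip it.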
It seems the real way forward for us is VPS redundancy on multiple providers, and with some extra coding to keep everything synchronised until things go tits up. Oh well, it made the festive period a bit more interesting than the chestnuts roasting etc...
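The multi-provider redundancy idea above boils down to a health check plus a failover decision. A minimal sketch, with made-up provider names and the probe injected so the logic stands alone (real code would probe over the network and then repoint DNS):

```python
# Hypothetical provider nodes, in order of preference.
PROVIDERS = ["linode-frankfurt", "digitalocean-ams3", "vultr-paris"]

def pick_active(providers: list, is_healthy) -> str:
    """Return the first healthy provider to point DNS at, or None if
    everything is down. `is_healthy` is the injected probe function."""
    for p in providers:
        if is_healthy(p):
            return p
    return None

# Simulated outage: the primary is down, so traffic moves to the second.
down = {"linode-frankfurt"}
print(pick_active(PROVIDERS, lambda p: p not in down))  # -> digitalocean-ams3
```

The hard part, as the post says, is the extra coding to keep state synchronised across providers so the standby is actually usable when you cut over.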
Ironically I only heard about this via El Reg after picking today [of all days!] to fire up a new Linode VPS [at their Frankfurt datacentre] and wondering why my SSH connection to it was behaving like a slug on mogadon as well as dropping the connection intermittently, whilst my existing VPS [also hosted in Frankfurt] was as responsive as ever.
As regards Linode's handling of it, I totally agree. Not even so much as an email to warn customers about the outages [as I said, I had to find out about it through El Reg] and, when [in my ignorance of events] I tried to set up a new VPS today, not only did they fail to warn me that it might not be a good idea at the current time, they had the cheek to ask me to pay upfront the $10 outstanding on my account [which goes out automatically in a day or two anyway] before allowing me to add another Linode to my account.
I know that neither Linode nor any other VPS host can be blamed for buckling under such an attack, but how they respond to it means a lot when it comes to customer goodwill –and mine is evaporating rapidly. I've been a customer of theirs since 2009, but I think it might be time to look around for another provider.
Any suggestions, anyone?
Now their Frankfurt and Singapore datacentres are supposedly "Operational" again –those were the ones that were getting hammered earlier today– but their main Linode Manager and API have now gone titsup, so no-one can access their Linodes anyway.
If I didn't have all my websites running on Linodes [thankfully only blogs and suchlike –nothing financially critical], I'd almost consider pulling up a comfy chair and a bag of popcorn for this. The words "fuck" and "cluster" are marching inexorably towards each other.
This is going to really damage Linode's reputation as a dependable VPS provider.
True, the brevity of Linode's communications has been most unsettling. When a support ticket is opened, their canned response is to refer the client to their status site, "http://status.linode.com".
They are following the very standard business practice of remaining quiet about the extent of the service denial and the mitigation efforts. Good on them for avoiding information leakage to the attacker(s). What is surprising, however, is their admission of volumetric attacks, a class of attack that should be somewhat controllable with current network appliances. This suggests Linode relies extensively upon upstream network service providers to deliver protection. During one support call, a technician advised me that Linode has neither a prescribed practice nor tools to protect VMs from DDoS. So weakness appears to reside on two fronts: minimal node and control-infrastructure protections, and excessive reliance upon third parties for network service protection.
The Atlanta data center has been in a "hard down" posture since approximately noon EST today (01JAN16). There is little doubt that seven days of rolling outages will have a negative impact on customer retention.
This experience clearly emphasizes the danger of developing business reliance upon "cloud" service providers.