
I'm pretty sure this already happens in the advertising sector, which seems to be riddled with fraud at every level of the supply chain.
If you want to stick it to a startup that relies on serverless infrastructure, you may be able to inflict $40,000 in financial damage every month with a modest 1,000-node botnet. That estimate comes from computer-science boffins Daniel Kelly, Frank Glavin, and Enda Barrett at National University of Ireland, Galway, in their …
Yeah, from the trenches, the process for most agencies is this:
(1) inflate all your numbers. Lie if you have to, but mostly just project and use the future tense. If you got 10,000 clicks this month and next month you intend to triple the amount of space you're giving to advertising, find a buyer for 30,000 clicks. Plus an extra 10,000 because the consumer market is growing. Plus 5,000, ummm, because.
(2) take the buyers out and use a small portion of the money you took from them last year to supply a lot of alcohol. And maybe an ITV2 celebrity. That'll prove that your agency is big time and therefore likely to get 45,000 clicks.
The buyers will lack sufficient experience to spot the con because they've been in post for less than a year. They replaced the last batch, who all got fired for failing to get a measurable return on the advertising budget.
Rinse and repeat.
I've seen single-CPU, 1GB instances at some of the big players for even less than that. You can get those resources, without any per-second billing, for a quarter of that price.
If this is what they charge for serverless setups, then it's a scam, because you can run the same code on the same resources and expand just as easily for much, much less.
I've got entire web farms sitting behind NGINX proxy clusters, on not much more than that, which handle tens of millions of requests per box a month and never cost me more than a tenner a month per box. In front of those I have various CDNs on low-tier packages.
It's surprisingly cheap to handle huge amounts of traffic these days.
The wheels fall off when the backend is .NET/MSSQL based... shit gets pricey then, as it doesn't scale as frugally as other solutions.
If you stick with the tried-and-tested NGINX, PHP, Mongo setup, keep your code simple and trim the fat, you'll be able to handle mountains of traffic.
"The number of attack nodes simply linearly affects the financial damage. A botnet of 100,000 nodes will do the same damage in one month that 1,000 nodes would do in a year."
If it really is linear, a botnet of 100,000 nodes will do the same damage in about three and a half days (365/100 ≈ 3.65) that 1,000 nodes would do in a year, not in a month.
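For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch, assuming damage really does scale linearly with node count:

```python
# If financial damage scales linearly with botnet size, 100x the nodes
# means the same damage in 1/100th of the time.
small_botnet = 1_000       # nodes
large_botnet = 100_000     # nodes
days_for_small = 365       # the small botnet takes a year

speedup = large_botnet / small_botnet          # 100x
days_for_large = days_for_small / speedup      # 3.65 days

print(f"{large_botnet:,} nodes: ~{days_for_large:.2f} days, not a month")
```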
This is shown as a quote, so I am posting a comment rather than a corrections email. If it is actually a Register transcription error, my apologies to the authors.
I agree that it is an entirely obvious attack.
I do NOT agree that it is easy to defend against--especially given the difficulty of monitoring spending on these platforms.
I have read an article on this subject. The authors of that paper were arguing for some sophisticated analytics, combined with aggressive IP-level rate limiting. Yeah, sure. I'm going to bet the company on that.
There are several layers to this.
First, use a no-asset LLC to contract the web services. If an outrageous bill arrives, the option of walking away is real. This protects against any number of boo-boos, not just malicious actions by outsiders.
The next thing to do is to monitor your costs closely. Even if the platform is denying you the actual numbers, you should be able to get almost minute-by-minute figures by cost-modelling from your own monitoring. A circuit breaker can then be applied.
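As a rough illustration (not any provider's actual billing API, and all prices below are made-up placeholders), a cost model plus circuit breaker needn't be more than this:

```python
# Sketch only: estimate spend from metrics you already collect, rather than
# waiting for the provider's delayed billing data, and trip a breaker at a cap.
# Both prices are assumed placeholders; use your provider's actual rate card.
COST_PER_MILLION_INVOCATIONS = 0.20   # $
COST_PER_GB_SECOND = 0.0000166        # $
MONTHLY_BUDGET = 500.00               # hard cap, $

def estimated_cost(invocations: int, gb_seconds: float) -> float:
    """Rough running total modelled from your own request/duration counters."""
    return (invocations / 1_000_000) * COST_PER_MILLION_INVOCATIONS \
           + gb_seconds * COST_PER_GB_SECOND

def breaker_tripped(invocations: int, gb_seconds: float) -> bool:
    """True once estimated spend hits the cap: time to fall back to static pages."""
    return estimated_cost(invocations, gb_seconds) >= MONTHLY_BUDGET
```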
Note that this attack is actually less disruptive than a Black Swan Event. If your company gets mentioned in a prominent way, you might see a tremendous amount of legitimate traffic. You really are hoisted by your own petard, because you can be ruined by reputation if the site is not available, and ruined by costs if it is. (Very, very little of the traffic will generate profits in the first month.)
If you can handle a BSE, an EDoS (or DoW) attack doesn't look nearly so bad. The solution is that you never go with pure serverless. I don't even know if the platforms support this, however. Unbounded cost is still a concern, but autoscaling was designed in part to handle a BSE. Again, with proper monitoring, the costs should not be a surprise.
So, if your mixed deployment uses "serverless" functionality until your system detects that it would be cheaper to switch over to autoscaling, then I suspect that EDoS loses much of its threat. Not all of it, but the hyper-expensive per-API-call costs of "serverless" deployments are at least contained.
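A crude way to picture that crossover (every figure is invented; the real sums depend on your provider's pricing and your function's memory and duration):

```python
# When does per-invocation pricing overtake a plain autoscaled instance?
# All prices below are placeholders for illustration only.
SERVERLESS_PER_MILLION_REQUESTS = 0.20   # $ per million invocations (assumed)
SERVERLESS_PER_GB_SECOND = 0.0000166     # $ per GB-second (assumed)
INSTANCE_PER_HOUR = 0.05                 # $ per hour for a small instance (assumed)

def serverless_hourly_cost(requests_per_hour: float, gb_seconds_per_request: float) -> float:
    return (requests_per_hour / 1_000_000) * SERVERLESS_PER_MILLION_REQUESTS \
           + requests_per_hour * gb_seconds_per_request * SERVERLESS_PER_GB_SECOND

def cheaper_to_switch(requests_per_hour: float, gb_seconds_per_request: float = 0.1) -> bool:
    """True once the serverless bill would exceed just running a fixed instance."""
    return serverless_hourly_cost(requests_per_hour, gb_seconds_per_request) > INSTANCE_PER_HOUR
```

With those placeholder figures the crossover lands somewhere under 30,000 requests an hour, which is exactly the kind of volume a DoW attacker is happy to supply.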
But defending against (or even detecting) an EDoS attack where the individual nodes are tagging you at less than one query per minute is going to be HARD. The statistical analysis that says "real transactions follow X distribution pattern" just means that the attackers will conform their attacks to the X distribution pattern. It's almost as easy to write attack code as it is to write the monitoring code. Sometimes, it will be easier--and since the analysis is statistical, there WILL be false positives.
If (and it is a HUGE if) the attacker is actually bearing the costs of the attack, one effective defense would be to force the attacker to spend more than you do. This can be done by requiring that requests effectively mine a block: the mining is expensive compared with verifying it. But doing that means your site is likely to trigger malware warnings on the botnet's host machines, and might get you classified as a malware site. Therefore, such a response must be limited to cases where you are fairly certain that you are facing an EDoS attack....
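A minimal sketch of that idea, assuming a bespoke challenge-response rather than any particular library: the client burns CPU finding a nonce, the server verifies with a single hash.

```python
import hashlib
import secrets

DIFFICULTY = 20  # leading zero bits required; raise it to make clients work harder

def make_challenge() -> bytes:
    """Server side: issue a fresh random challenge with each suspect request."""
    return secrets.token_bytes(16)

def solve(challenge: bytes) -> int:
    """Client side: brute-force a nonce whose SHA-256 has DIFFICULTY leading zero bits."""
    target = 1 << (256 - DIFFICULTY)
    nonce = 0
    while int.from_bytes(hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest(),
                         "big") >= target:
        nonce += 1
    return nonce

def verify(challenge: bytes, nonce: int) -> bool:
    """Server side: one hash to check, cheap compared with the client's search."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))
```

The asymmetry is the point: verification is a single hash, while solving takes on average around 2^DIFFICULTY of them.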
Defending against this is hard. Beware the inherent risks of "serverless".
expending small amounts of money/effort to cause your opposition to spend LARGE amounts
We've seen unintentional results of this kind of thing already, when spammers targeted relay servers on the expensive fringes of the net back in the 1990s. It's what caused the creation and rapid adoption of blacklists.
Cloud working is just the modern version of using timesharing mainframes from the early days of computing.
For almost all sustained workloads it is cheaper to use your own kit rather than rent services from a cloud provider.
Valid reasons for using a cloud
1) Short term peak (under 3 months)
2) Insufficient internet connectivity at own premises
3) Keeping development and testing well away from production
4) Temporary substitute for unavailable systems (eg after a fire)
Reasons for NOT using a cloud
1) Cost - in under 3 years (under 1 year in many cases) running the job on own hardware will be cheaper than the cloud price
2) Legal constraints - any company in the EU that allows personal data to be on a cloud controlled by US firms is in danger of massive fines due to the EU GDPR and the US CLOUD act.
3) Data security - if the access to the cloud application is not set correctly then massive data breaches are all too easy - this again raises the potential of nasty fines to companies that trade in the EU due to GDPR. Data breaches on own kit behind a firewall are usually due to an attack (rather than the stupidity that has left so many Amazon storage buckets with world access).
4) Lock-in to one cloud supplier. It is far too easy to embed implicit assumptions about the available facilities into applications, resulting (for example) in an application that works on AWS but needs extensive rework to run on Azure.
PLEASE, before committing a job to "the cloud", price the costs of own kit vs cloud kit over the expected timeframe. Include the costs of 2, 3 and 4 above in the analysis before committing to the cloud.
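For what it's worth, a rough sketch of that pricing exercise might look like this (every figure below is a made-up placeholder; substitute your own hardware quotes, staffing costs and cloud bills):

```python
# Own-kit vs cloud over a planning horizon. Placeholder numbers only.
def own_kit_cost(months: int, capex: float = 15_000.0, monthly_opex: float = 400.0) -> float:
    """One-off hardware purchase plus power, rack space and admin time."""
    return capex + monthly_opex * months

def cloud_cost(months: int, monthly_bill: float = 1_500.0) -> float:
    """Steady rental; remember to include egress charges and support tiers."""
    return monthly_bill * months

def break_even_month(horizon: int = 60):
    """First month at which owning the hardware becomes the cheaper option."""
    for m in range(1, horizon + 1):
        if own_kit_cost(m) < cloud_cost(m):
            return m
    return None

print(break_even_month())  # ~14 months with these placeholder figures
```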
Add to the costs of not having a cloud
5) The cost of having people procure the hardware.
6) Maintaining a data center, and patching the servers.
7) The one-off costs of upgrading old hardware can be much harder to get approved than a higher steady cost of renting capacity. I've seen plenty of examples of production systems running on obsolete unsupported hardware and OS because it was never cost effective to upgrade it.
Well, that seems like a really good idea for ... each of the following to use. They'll have plenty of spare boxes to run DoW attacks on their clients. Silently, insidiously... profitably.
" Google Cloud Functions cost the most, followed by AWS Lambda, IBM Cloud Functions and lastly Azure Functions."