And the corporate world ...
... STILL doesn't understand that increasing the size of the attack surface is always detrimental to security. The mind absolutely boggles that people use clouds for anything remotely important.
Misconfigured AWS S3 storage buckets exposing massive amounts of data to the internet are like an unexploded bomb just waiting to go off, say experts. The team at Truffle Security said its automated search tools were able to stumble across some 4,000 open Amazon-hosted S3 buckets that included data companies would not want …
Cloud marketing implies you can lay off all those well-qualified IT staff clogging up the overheads on the company's annual figures.
Contrary to this, companies still need their IT staff.
However, whether they will be inclined to work in an administrative rather than a technical capacity is another matter. Having to spend hours dealing with password-reset requests is not my idea of a fulfilling day's work.
The vulnerability of the attack surface is a function of how big it is and how secure it is. If your security is perfect, it doesn't matter how big the attack surface is, because no one can get through it. If you've only got one point of attack and it isn't secure, then you're at risk. Culture, risk management and funding priorities are the main drivers of good or bad security in most organizations, and far more important than "technical" aspects like system design/partition, security policies and staff capabilities in deciding whether your confidential data end up being spaffed all over the web.
Well, not really. An on-premises data centre is secured by firewalls and gateways, often provided by external professionals. It takes some work (or F5 :|) to leave exploitable holes and even then those holes have to be exploited.
Much cloud storage is individual: each bucket is a separate piece of infrastructure that needs to be individually secured. Failure to secure = no security. It literally takes no effort to see the contents of an unsecured S3 bucket (for example).
You effectively end up with hundreds or thousands of data centres to secure.
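To illustrate quite how little effort it takes: a minimal sketch in Python with boto3, assuming a bucket that has been left open to public listing. The bucket name "example-bucket" is a placeholder, and no credentials of any kind are used.

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # An anonymous (unsigned) client - no AWS account or credentials required.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # If the bucket allows public listing, this quietly enumerates everything in it.
    for obj in s3.list_objects_v2(Bucket="example-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])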
"An on-premises data centre is securted by firewalls and gateways, often provided by external professionals. It takes some work (or F5 :|) to leave exploitable holes and even then those holes have to be exploited."
"firewalls and gateways, often provided by external professionals"
Why have you outsourced your firewall & gateway provision & security to external professionals? They may as well be in the cloud.
Many things depend on what your incentive is. Internal IT are incentivised to ensure their operations perform correctly; third parties are generally incentivised to bill the customer for as much as possible.
"I was discussing increasing the size of the attack surface, not the abilities (or lack thereof) of the staff."
Actually, you framed it in terms of businesses understanding that. But it's not the business as some legal entity that understands things, it's the people who work there. It matters whether that ability remains in the company after the PHBs have done their thing.
This is the real issue. Devs who come up with some grand idea, get PHB approval, and run off and build something with no involvement from security or IT teams.
They then stick credentials and PII in unsecured S3 buckets because they had to open up all the perms to get their app to work.
Security/Compliance/IT teams have no opportunity to help, because they aren't involved.
As others have already said:
The AWS default setting is publicly inaccessible (at least in recent times), so a deliberate effort has to be made - with lots of warnings - to make a bucket public.
It's possible that an incorrect global policy might be in place of which a developer might not be aware, but I'd still say that you have to try quite hard to make an AWS bucket public, since AWS is fully aware of the issues arising from this kind of misconfiguration and so defaults to fail-safe.
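For what it's worth, the fail-safe described above can also be enforced explicitly per bucket through the S3 "block public access" settings. A rough sketch with boto3; "example-bucket" is again just a placeholder name.

    import boto3

    s3 = boto3.client("s3")

    # Switch on all four "block public access" controls for the bucket,
    # so that neither ACLs nor bucket policies can make it public.
    s3.put_public_access_block(
        Bucket="example-bucket",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )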
If you can pay an outside entity to do the job securely and rent equipment to do it all, then you could afford to pay an internal team MORE, or hire one of better quality.
Especially if all these people are doing is dumping your data in an unsecured bucket in the cloud.
There's a reason I resist cloud-movement of data, and why I insist on keeping the most critical stuff in-house where we can see it, and know where it goes.
It's all going to come back to bite you, and then Amazon / Azure / etc. are just going to say "Your problem".
Of course this is an IT site so moaning about management priorities is to be expected :-) But I actually know many companies where IT is highly valued, by the board and the shareholders, as an important strength, differentiator and enabler for reducing cost. And even in those who haven't particularly valued IT in the past, the recent spate of ransomware and other attacks is a big focus for the board of all plcs.
For this issue, and security in general, I think the biggest problem is not cost reducing so much as timescale pressure causing a move to allowing unfinished projects to go into production. That allows things like "for the prototype I have put the key in the source directory - we need to remember to set up a secure way to distribute the keys before we go into production" to be forgotten.
I think it is less about cost, or even cloud as such, and more about a management misunderstanding of things like Agile and DevOps causing an attitude of "we should let people start using the prototype to see whether we are on the right lines - do whatever operational hacks you need to get that running and we will fix it later". Of course, once the prototype is in use there are many, many, many bugs and change requests coming in and the team can never prioritise removing the temporary hacks.
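The "key in the source directory" hack mentioned above is the sort of thing that is cheap to avoid even in a prototype. A minimal sketch, with the variable name API_KEY and the fail-loudly behaviour purely illustrative choices:

    import os

    # Read the secret from the environment (or a secrets manager) rather than
    # the source tree, and refuse to start if it is missing instead of silently
    # falling back to a key baked into the repo.
    api_key = os.environ.get("API_KEY")
    if api_key is None:
        raise RuntimeError("API_KEY is not set - no fallback to a key checked into the source")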
Finance Directors sometimes are.
In many companies IT reports into Finance. And Finance Directors are almost always fired after serious control problems in their domain -- that is, after all, their main job.
Of course, it is very, very hard to know unless you are an executive or a board member of such a company, because listed companies very rarely say "we fired our FD", since they don't want anyone to know there has been any problem. But it might be interesting to see how many companies that have had an IT problem have had their FD decide (completely voluntarily, of course) to seek other challenges 6 months later.
Amazon employ people whose role is explicitly to make their infrastructure secure (and pay them handsomely for it). Run-of-the-mill companies don't do such things.
OK, they also give their customers the tools to defeat that security if they so choose, but that is not the default setting.
So, AWS comes secure by default. (Some) corporate IT drones would hook a Windows server up to the internet with all ports open.
A competent in-house tech, however, can still beat any cloud-based setup.
"This is a problem over and above the abilities (or lack thereof) of the staff."
If the manglement sees moving its data centre to the cloud or the like as a means of saving money on staff who have the ability to secure their infrastructure, wherever that may be, and can demand commensurate salaries, then the two are intertwined.
Recentlyish I had to set up something to forward proxy logs to Symantec Managed Security Services, where some Security Operations Centre lot could look at some sort of dashboard and tell us what nasty things were happening.
The KB article advised creating a dir in /tmp with 777 permissions in which to store your proxy logs before they were sent on via nxlog. I decided not to follow that advice and put something together that was a lot more palatable.
Not something you'd expect to find in the official blurb for a security-based product.
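As a guess at what "more palatable" could look like (this is only a sketch, not necessarily what the poster actually built): a dedicated spool directory outside /tmp, owned by the nxlog service account and locked down to it alone. The path and the account name are assumptions, and the snippet would need to run as root on a box where that account exists.

    import grp
    import os
    import pwd
    import stat

    # Hypothetical spool directory for proxy logs before nxlog ships them on.
    spool = "/var/spool/proxy-logs"
    os.makedirs(spool, exist_ok=True)

    # Hand the directory to the nxlog service account and set it to 0700,
    # rather than the world-writable 777 /tmp directory the KB article suggested.
    os.chown(spool, pwd.getpwnam("nxlog").pw_uid, grp.getgrnam("nxlog").gr_gid)
    os.chmod(spool, stat.S_IRWXU)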
I'm also not blaming Amazon, but the people in this article mention how they "contact Amazon so they can close the vulnerability".
This implies that Amazon can tell whether a bucket is intentionally open, or just set up badly, so why don't they preempt the need to be contacted?
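Bucket owners do at least get the raw material for that check themselves: the policy-status information is exposed through the S3 API, so a self-audit is straightforward. A rough sketch below; it only covers policy-based exposure (not ACLs) and assumes credentials allowed to list and inspect the account's buckets.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            # IsPublic is True if the bucket policy makes the bucket publicly accessible.
            if s3.get_bucket_policy_status(Bucket=name)["PolicyStatus"]["IsPublic"]:
                print(f"{name}: public via bucket policy")
        except ClientError as err:
            # Buckets with no policy attached simply have nothing to report here.
            if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
                raise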
When you outsource all your expertise to Amazon, what do you expect?
Not being an expert on AWS cloud, I'd think you'd get "best practices".
But maybe the inherent problem is a LACK of (or knowledge of, or proper documentation of, etc.) a list of properly defined "best practices" to begin with?
And now I think I have a better understanding as to (maybe) why the US DoD isn't using AWS clouds...
[I assume that it IS possible to configure things properly, so why aren't people doing it?]
"so why aren't people doing it?"
In my experience, because once the "cloud" is adopted, the internal folks who previously managed security are typically "let go".
One of my clients a while back actually TUPE'd its entire IT staff to the cloud provider they'd contracted with, expecting to get the same level of attention from them as when they worked exclusively for that client. Of course, my client was just another contract among many under the new arrangement.
I had the unenviable job of trying to sort the mess out, and the only way would have been to engage an on-premises tactical IT security management team to oversee the outsource. However, that would have defeated the purpose, which was to minimise the IT overhead on the balance sheet.
You get no more security than you deserve.
Maybe searching for, downloading and analysing exposed data is in this case being done with good intentions, but does it not break the law? ISTM that no matter how lax the security, it still amounts to unauthorised access to a computer system. I might be stupid if I leave my car doors unlocked and the keys in the ignition - but if you drive it away without asking, that still makes you a car thief.