Can I just say...
... I know where my data is exposed because I have the keys not the cloudy ones.
The ongoing rash of data leaks caused by misconfigured clouds is the result of companies having virtually no visibility into how their cloud instances are configured, and very little ability to audit and manage them. This less-than-sunny news comes courtesy of the team at McAfee, which said in its latest Infrastructure as a …
Yup. Easy means easily exposed data. You can't secure the house if you don't know how to stick the key in the locks. Shall we go on?
But, just to get a little more technical, and not to scare off those who actually believe it's easy to secure the cloud: imagine what you get when you have a massive number of instances all spawned from the same configuration, and you get just one little security setting wrong. Time to check, check, check yet again, and quadruple-check your damn security settings.
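To make the quadruple-checking less painful, a script beats clicking through the console. A minimal sketch, assuming boto3 is installed and AWS credentials are already configured; it just flags any S3 bucket whose public access block isn't fully switched on.

```python
# Minimal sketch: flag S3 buckets whose public access block is missing or incomplete.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"CHECK ME: {name} has a partially disabled public access block: {config}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"CHECK ME: {name} has no public access block at all")
        else:
            raise
```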
Part of the problem is not having the expertise or specialists available in your company to secure cloud projects, but I can't help thinking that if data is insecure almost by default, or very complicated to make secure, then the cloud services are to blame.
If access rights are that easy to screw up, or so difficult that employees just grant full access rights to make their app work, something is wrong at a fundamental level.
Adding more APIs, services or features doesn't have to mean lowering security, if done right.
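For the curious, the gap between "just make it work" and doing it right is small. A rough sketch, with a made-up bucket and policy name: the first policy is what a frustrated developer grants to unblock the app, the second is what the app actually needs.

```python
# Sketch of the difference between "just make it work" and a scoped grant.
# The bucket name and policy name are made up for illustration.
import json
import boto3

# What the frustrated developer reaches for: everything, everywhere.
too_broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# What the app actually needs: read/write to one bucket, nothing else.
scoped = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="example-app-scoped", PolicyDocument=json.dumps(scoped))
```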
"How does one do traditional penetration testing when apps use blabla.bucket.amazon.com using a pool of shared IP ranges?"
Yeah, I've had that discussion before. There needs to be something that guarantees there can't be any rogue devices, whether that's done by scanning or not. That is, if you have a bunch of different organizations sharing a pool, there just has to be a way to know if someone's pissed in the water.
Once rogue device detection is dealt with, you end up having to feed scanners with lists of exact destinations. And if you have some kind of ephemeral service, in which a server is spawned and vanishes in a short period of time, that passing of an exact destination is going to have to be automated (see the sketch below), except the destination isn't going to exist long enough for a full scan. So you either end up "arranging" to have an instance held up long enough for a full scan, or scanners will need a feature that lets them move on to the next vulnerability test against the next instance.
And all this would be fine for plain old vulnerability scanning. For pen testing, this is going to feel very artificial, and full pen testing is going to have to get very dynamic, and soon, lest the black hats get there first.
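On the automation point, here's a sketch of the sort of glue that would feed a scanner whatever exists at that moment. boto3 and credentials are assumed, the running-instance filter is just an example, and the flat target file stands in for however your particular scanner takes its input.

```python
# Minimal sketch: build a scanner target list from whatever instances exist right now.
# Assumes boto3 is configured; adjust the filters to taste.
import boto3

ec2 = boto3.client("ec2")
targets = []

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            # Public address if there is one, otherwise the private one.
            targets.append(
                instance.get("PublicIpAddress") or instance.get("PrivateIpAddress")
            )

# Hand the ephemeral destinations to whatever scanner you feed with a flat file.
with open("scan_targets.txt", "w") as fh:
    fh.write("\n".join(filter(None, targets)))
```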
I suspect most orgs have limited visibility (a major factor in risk management) on-premises as well; all we are seeing is an extension of poor risk management practices into the cloud.
Maybe we should focus on fixing broken business processes and actually implement a robust risk framework to cover all assets regardless of which piece of wire or "serverless" CPU they are attached to.
Remember: the cloud provider only provides security if you pay extra for it. Cloud is a business, and unlike the datacentre hosting providers of old, they have no real incentive to do stuff for free.
Your business = Your Risk.
While your argument is sound, the ability to quickly and easily provision environments at AWS/Azure/Google is often there precisely so developers and project leaders can get around those pesky business processes that slow down "innovation".
We have lots of business processes in place, but it didn't stop developers from going out on their own and setting up business critical systems at AWS without going through any of the business processes that exist. Then they throw it all over the fence to the ops team when there are operational problems they didn't think through and the business tells them to hand it over (and it's a steaming pile of poo, with no documentation). That's if most of the team even sticks around (the smart ones left, because they did all this resume-driven architecture to advance their careers).
I have not worked at an org that didn't have good visibility into its own on-prem environments. You know what subnets the network and security engineers have provisioned, and from there it's easy to scan the network and find any rogue systems. Any of these systems that go unclaimed can either be shut off or have their ports disabled. Not so easy to do in the cloud.
The cloud does offer access controls to prevent "rogue" developers. All production changes should go through change management established through a governance framework informed by risk.
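As one example of what those access controls can look like: a hedged sketch of a service control policy that refuses EC2 launches unless the request carries a change-ticket tag, which is one way to wire change management into the platform itself. The tag key, policy name and use of AWS Organizations here are illustrative assumptions, not anything the poster described.

```python
# Sketch of a guardrail, not a full governance framework: a service control policy
# that refuses EC2 launches unless the request carries a change-ticket tag.
# The tag key and policy name are made up for illustration.
import json
import boto3

deny_untracked_launches = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyLaunchWithoutChangeTicket",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"Null": {"aws:RequestTag/ChangeTicket": "true"}},
    }],
}

orgs = boto3.client("organizations")
orgs.create_policy(
    Name="require-change-ticket",
    Description="Production changes must reference a change record",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_untracked_launches),
)
```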
I see many on-prem / hosted datacentres where they think they have good visibility. This tends to be limited to the O/S and net, with minimal understanding of app versioning or ownership.
Really need to have full visibility.
I guess the visibility is proportional to the size and complexity, just as risk is.
I unfortunately recognise your comments around "Throw it over the fence"; I still have the mental scars :)
We have lots of business processes in place, but it didn't stop developers from going out on their own and setting up business critical systems at AWS without going through any of the business processes that exist [..]
And what about the ones you hired to replace the first lot after they were sacked on the spot?
Apropos broken business processes.
Maybe the rise of 'DevOps' has managed to smear the available operational talent out so thin that the security part of it has ceased to be? Maybe, even though 'development' and 'operations' are now in the same 'team', 'development' is the Earnings side and 'operations' is on the Expenses side, and therefore still something that should be minimised if one knows what's good for one's career. Maybe get some Java kid right out of school, or the team assistant, to do it? All it takes is a credit card, yes?
The emerging hype around 'Post-DevOps' seems to hint at the need for a 'platform team', which is kinda the olde BOFH setup again, only we can't say that because we are more Woke & Agile now.
The idiots being hired to do the config are taking long coffee breaks and not doing the jobs they were hired to do. While tickets are being passed around by screw-ups who have more arrogance than skill, it's not a mass of hackers getting in and screwing things up. It's System Admins, Site Admins and other screw-ups basically not doing their jobs. They call their server vendors' IT departments to ask for tips on how to configure everything. A whirlwind of screw-ups, many of whom are offshore.....
My experience with cloud security was that people tended to do nothing in the way of documenting the security configuration. At one site, when I noted, as a newcomer, that the network configuration seemed to be a real mess and asked "Do you have documentation on how the network is configured for these environments?", I was told "No. AWS only recently gave us the ability to put comments on those settings." Jeebus! Then why didn't you effin' do it yourself? Bringing that up to management was like talking to a wall.
Oooh - someone woke up a PHB I see.
Don't tell me - the 'screw ups' happen to work for <insert ridiculously cheap outsourcing company who promised you the Earth for peanuts>, and you wonder why no-one knows (or even cares) how your business infrastructure is managed?
Until the U.S. Department of Defense puts all their data on the "cloud." After that happens, I wonder how many years - months - days it will be before it all gets p0wned? And what language should I be studying so I can converse with our new masters: Russian, Chinese, Korean, or something else?
Check out what GovCloud is and get back to us.
The DoD has put significant effort into security - if you want to connect to it, fill in the appropriate forms and they will deliver it.
If you want to make something accessible via the web? It needs to be signed off by multiple teams, and security is tightly controlled.
This isn't standard cloud, where you can stand up a public-facing web server from your mom's basement.
... as any other IT system you had previously.
It's just that Kevin in Finance couldn't go down to the machine room, plug in a load of servers, install a new database application and change the internet firewall so he can reach it from home.... or could he? I'm sure in the "good old days" he probably could do all of that, but you stopped him, right?
Stopping him now is a bit harder: he can do this with his company credit card and doesn't need anyone else involved.
Shadow IT is a big old problem, and by its very nature companies struggle to spot it.
You fix the problem in the same way you stopped him before - firstly by having a policy that says DON'T DO IT KEVIN.... and then by building all of your processes around control of the virtual computer room, the same as your physical one.
Don't let Kevin the dweeb control every component, especially not the security ones. Make him have to ask for permission and not go all John McClane (something like the sketch below).
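One way to read "make him ask for permission" is an explicit deny over the security plumbing, attached to whatever group the Kevins sit in. A sketch only; the group name and the exact action list are illustrative assumptions.

```python
# Sketch only: an explicit deny that keeps the company-card crowd away from the
# security plumbing, whatever else their role allows. The group name is hypothetical.
import json
import boto3

hands_off_security = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "KevinDoesNotTouchSecurity",
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "guardduty:*",
            "iam:*",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.put_group_policy(
    GroupName="finance-self-service",  # hypothetical group for the Kevins
    PolicyName="hands-off-security",
    PolicyDocument=json.dumps(hands_off_security),
)
```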
No need, because Kevin knows Carol (they go to the same meetings), and Carol is the Executive Assistant to the DG, and if Carol thinks something has merit, the DG will listen.
The role of scruffy T-shirts in IT is to stand in the way of Business Efficiency. For how long will they hold the fortress against the accounting hordes?
And this is the thing here - some organisations believe that cloud hosting is a democratisation of IT. That you simply don't need all those wretched IT experts that say "are you insane?" and "NO!".
You know, the sort of "expert" people who quip things like "any idiot can make something function, but is it functioning in a reliable, sustainable, secure way?"
AWS (and friends) can become very much like when Kevin found the server room door was locked and so decided to install a pile of kit hidden under desks in the finance department. It starts with a little amateur tinkering, and the next thing you know his home-brew, dumpster-dived platform is actually what glues the entire business together.
The FD loves him, because he's solved that problem for nothing, but Kevin really doesn't have a clue and the whole thing is teetering on the edge of destruction: he doesn't understand patching or backups, his root password is "root", and when he leaves nobody has a clue.
This all happens for real...
The other one I have seen is when some half-a-brain decides to play around with AWS and uploads a pile of data from an on-prem production system so he can play around with manipulating it. Suddenly your customer database or finance records end up in an unsecured S3 bucket, because it's only a muck-about, right?
Wrong. Just because your data is held hostage on AWS (other cloud services are available) and your customers/users can get at it, there is this magic layer that the operator would really rather you didn't know anything about.
Back in the dark old days, when you owned your data and it was 'on-premises' or in a bit barn on some dedicated hardware, you had complete control. Now you have abdicated it to Amazon (other cloud magic operators are available), and they own you: lock, stock and customer data.
Clouds come and clouds go, just like your data...
GDPR has the answer: if AWS buckets, MongoDB instances and the like were secured and private out of the box, and you had to intentionally open them up to the wider internet, there would be fewer issues (something along the lines of the sketch below).
I do accept that not putting the security in place is a severe failure on the purchaser's part, but perhaps the vendors should share some of the blame.
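To be fair, the knob for "private unless you intentionally open it" does exist at the account level: S3 Block Public Access. A minimal sketch of switching it on, with a placeholder account ID.

```python
# Minimal sketch: turn on account-wide S3 Block Public Access.
# Assumes boto3 and credentials are configured; the account ID is a placeholder.
import boto3

s3control = boto3.client("s3control")
s3control.put_public_access_block(
    AccountId="123456789012",  # placeholder account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```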