Web devs forget to check access perms :o
Shouldn't the application block such IDORs by default? Or the web devs should at least google Path Traversal (2012).
Do you ever have déjà vu Jessica?
Personal, financial, and health information belonging to millions of folks has been stolen via a particular class of website vulnerability, say cybersecurity agencies in the US and Australia. They're urging developers to review their code and squish these bugs for good. The flaws are known as insecure direct object references …
developers implement secure-by-design principles at each stage of the software development process
Configure applications to deny access by default and ensure the application performs authentication and authorization checks for every request to modify data, delete data, and access sensitive data
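That deny-by-default advice can be sketched in a few lines; here's a minimal Python stand-in (the record store, field names, and function are all invented for illustration, not from the advisory):

```python
# Minimal sketch of deny-by-default object access (illustrative names).
# Every lookup takes the *server-side* authenticated identity, and access
# is granted only when an explicit ownership check passes -- never by default.

RECORDS = {
    101: {"owner": "alice", "body": "alice's invoice"},
    102: {"owner": "bob",   "body": "bob's invoice"},
}

def fetch_record(authenticated_user: str, record_id: int):
    record = RECORDS.get(record_id)
    if record is None:
        return None                      # unknown id: deny
    if record["owner"] != authenticated_user:
        return None                      # not the owner: deny by default
    return record["body"]
```

The point is that the check runs on every single request and is keyed on the session identity the server established, not on anything the client sent.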
This is critical for anything IoT related and I'll keep saying it. Until there is an inherently secure by design, vendor agnostic, industry standard protocol specifically designed for anything IoT then I will avoid all this crap like the plague as it is not yet fit for purpose.
I'm waiting for the screams of anguish and outrage when whole swathes of Hive and Nest devices are unceremoniously taken offline remotely or bricked when the command and control servers are turned off when the relevant vendors deem them obsolete or too much of a burden to support. Think that won't happen?
@simonlb: “vendor agnostic, industry standard protocol”
Maybe this is one thing the Gov can do, refuse a license to the companies unless the IoT devices have been certified safe for the Internet. There would be revenue to be made out of such certification. Your phone comes with two symbols, one that certifies it electrically safe and one that certifies it doesn't cause RF interference to near-by devices.
In "traditional" desktop applications, you don't have to worry (much) about access controls because the operating system won't let the user see resources they don't have permission to access. That works because one system verifies the identity of the user and enforces the access controls.
In the typical web environment you have multiple software components spread across a possibly indeterminate number of computer systems. The systems that serve the data are unlikely to be the ones that verify user identities, and may not even know the identity of the ultimate consumer. They are unlikely to know the details of the necessary access controls and, even if they did, have no obvious mapping between arbitrary web access tokens and their intrinsic security mechanisms. It's typically left to the application code to decide whether or not to return the data the user requested. That's rather like expecting a desktop application to validate every file access to check the user has read permission - and there's a very good reason why it's not done that way.
Web applications are missing a lot of the "systemness" we expect from traditional computing environments and putting the onus on application developers to fill the gaps is, frankly, asking for trouble. There has to be a better distributed authorisation model that can be audited independently from the application code itself. If there is a persistently recurring issue of this kind, blaming the developer is just a lazy admission of defeat.
Absolutely.
Sadly the commonest pattern is for the server-side app code to have a configuration file with a single set of credentials (usually plaintext username/password) that grants it blanket read/write/create-table etc. access to the whole database or often the whole database server. These credentials are used piecemeal by database accesses sprinkled all through the business logic and UI code of the application.
I don't often do this kind of development (I prefer real-time embedded) but my solution where there is not already a robust layered security system in place is always to write a separate daemon between the app-server code and the databases. The daemon does the credential validation and issues the session user-ID token to the app and then requires that token as well as the query which it can then check before allowing the query to run.
The daemon can concentrate on the rules for what user-ID has what level of access to what rows in the database and can have a fully-automated test suite. The app meanwhile can concentrate on the user interface and its functionality.
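A toy version of that daemon idea might look like this in Python - sqlite and the token scheme here are stand-ins for whatever the real shop uses, and the class/table names are invented:

```python
import sqlite3
import secrets

# Toy gateway daemon: validates a session token, then scopes every
# query to the rows that token's user is allowed to see.
class QueryGateway:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE txns (id INTEGER, owner TEXT, amount REAL)")
        self.db.executemany("INSERT INTO txns VALUES (?, ?, ?)",
                            [(1, "alice", 9.99), (2, "bob", 5.00)])
        self.sessions = {}               # token -> user id

    def login(self, user: str) -> str:
        token = secrets.token_hex(16)
        self.sessions[token] = user
        return token

    def get_txn(self, token: str, txn_id: int):
        user = self.sessions.get(token)
        if user is None:
            return None                  # invalid token: deny
        row = self.db.execute(
            "SELECT amount FROM txns WHERE id = ? AND owner = ?",
            (txn_id, user)).fetchone()
        return row[0] if row else None
```

Because the owner check lives in one place, it can be exercised by a fully automated test suite independent of the UI code, which is the whole attraction.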
Yes, web development requires just a little more diligence because we’re dealing with multiple, sometimes disparate systems. Basically anytime you’re not dealing with an entirely self contained desktop app you’re in this situation. But let’s not pretend web development or client-server model applications are suddenly only for crackshot, top tier ace engineers.
I disagree the technology or authN/Z mechanisms are insufficient. Tokens can come in many different forms - but the capabilities are there to support claims and validations. That no one does this and everyone just wants to hit the easy button and use bearer tokens because that’s what they found on SO is not a fault of the ecosystem.
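For the record, baking a claim into a token and verifying it server-side is not hard; here's a bare-bones HMAC-signed claims token in Python using only the standard library (the secret handling is deliberately simplified - a real deployment would use proper key management):

```python
import hmac
import hashlib

SECRET = b"server-side-secret"           # illustrative; keep real keys in a KMS

def issue_token(user_id: str) -> str:
    # Sign the user-id claim so the server can later verify it wasn't tampered with.
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def user_from_token(token: str):
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None
```

A client swapping in someone else's user id without the matching signature gets nothing - which is exactly the validation a plain bearer-token-from-SO approach skips.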
Developers need to just stop being fucking lazy - most of them don’t even belong in development. Copying and pasting from Stack Overflow does not a developer make. I know people make that joke all the time - but holy shit it’s sad how often it’s true, and assholes are in a job where they’re out of their depth. This reeks of poor development practices and an immature SDLC by subpar engineers working in an org that can’t be bothered to do it right. Maybe we should consider actually holding someone responsible - real-world consequences for the real-world harm. Hold developers and the companies responsible - like malpractice. Plenty of professions have a form of malpractice insurance, and perhaps it’s time software development did too.
We’re dealing with an ecosystem where people are pretending that the contents have consequential value. If the contents had value, there would be incentives to protect and defend that content. As it stands, there are no consequences and everyone is happy to keep the status quo. So that’s kinda where I am - orgs can chalk this up to the cost of business and I’m not held responsible for fraud with my data - so why does anyone care? There are no incentives or disincentives for anyone to do differently. It’s like some weird game of Whose Data Is It Anyway, where the information doesn’t really matter.
Web applications are missing a lot of the "systemness" we expect from traditional computing environments
It depends on what you consider a traditional computing environment. In my working world that was a multi-user database system. If there were access levels for different categories of user it may well have been down to the application designer to build in safeguards.
If Buildings Maintenance was to be prevented from seeing tenant financial data that might be an administrative matter to ensure that Buildings Maintenance didn't have access to the screens for financial data. If, on the other hand, Crawly Buildings Maintenance shouldn't see data on Coventry buildings and vice versa but Estates Management could see data for both then it would have to be handled by something a bit more complex in the application itself.
What web applications are missing is a lot of statefulness, and they shove responsibility for that onto the client.
> Web devs should at the least google on Path Traversal (2012)
Good, but hardly news.
In about 1973 {people I heard about} got a TeleType onto the ARPAnet and connected to academic and military servers across the land. Mostly looking for games (text StarTrek) but insightful use of a CD command got peeks at some interesting files. {People I knew} saw the wide-open net close down over the course of a semester as sysadmins fixed-up loose permissions acceptable on-base but not for everybody with a TeleType.
There should be a LOT more teaching of network social history. Should be a quiz in every job interview.
OTOH: the programmer was tasked to reveal the requested data. Did the specification say to NOT reveal unauthorized data? Was it just implied? Assumed? Neglected? I've seen data app specs which didn't touch the subject.
"More recently, Jumpsec security researchers showed how an IDOR vulnerability in Microsoft Teams could be exploited to bypass security controls and send files — specifically malware — to any organization that uses Redmond's chat app."
<cough/>Malware? But they already have Teams installed!
Most faults in this class should be regarded as gross incompetence of the "programmer".
But providing funds for real experience is not often approved. Therefore, the incompetence of the "programmer" is inherited from incompetent management. Time to make the C-suite personally liable. It is their responsibility, either way.
Exactly. Was part of a shop where we all left because they wouldn't pay the required wage. It was discovered they were exploiting job titles to fudge yearly benchmarking. The people left behind are also ready to leave for the same reasons. The company would rather throw away all that knowledge than pay people the going rate. The problem is they have moved to a new system that the people left behind are holding together. Once those people leave, the company is fucked. The low wages they want to offer mean they'll end up with shit - they'll end up with people who create these kinds of flaws.
The problem won't be with the engineers, the problem is with the directors and money people. They truly need to be replaced, and staff need to be paid the going wage, otherwise that shop is slowly going to sink.
Same thing here. They'll have cut corners on a C-level's orders. They'll throw an engineer or manager under the bus for that, and ignore that they wouldn't budget for security checks.
Liability is the key. At the moment, for large organizations with huge piles of data the cost of a data breach is negligible, and the consequences for those making the decisions zero.
Until actual board-level directors are uninsurably liable for failing to follow good practice, nothing will change. If you have planned and paid for quality systems, correctly maintained (i.e., paid for enough staff who are good enough and knowledgeable enough to know what needs doing, and have time to do it), you should have nothing to fear. TalkTalk levels of malfeasance should be actionable.
.......no mention anywhere in this article of the publicity needed to alert THE GENERAL PUBLIC that using most web sites might be bad for their health.
Sure.....El Reg correspondents focus on the technology......but THE GENERAL PUBLIC is not interested in the technology per se......THE GENERAL PUBLIC needs safe secure tools so that they can get on with their lives!!!!
"800 million personal financial files" compromised?????? Why is this not on the front page of The Sun??? ...or the New York Times???
Why are we not hearing about jail time for the people responsible???
It's much like what was mentioned on Politics Joe. They had a think-tank guy on recently explaining how re-nationalising the energy companies would actually benefit the general public - would actually mean people would save money. Politics Joe said information like that doesn't end up on the front page as it's not "sexy enough", which they pointed out is one of the problems. Same with info like this.
Their way of checking you were authorised to an application's page after logging in: did the referer domain match the page's domain? They only showed links you were authorised to access.
Checking permissions for back-end calls was done by checking whether the document had certain codes added during render - checked by JavaScript and passed to the call via a parameter.
Fortunately it was just an internal application, but they were aware it might be opened up to companies we worked with!!!
When I was a British Gas customer, I paid my bills online by credit card. After the transaction had gone through they would always give payment confirmation using someone else's name, payment reference number and their payment amount. They were obviously losing track of customer transactions at that point. Checking my online account showed the transactions were actually going through correctly though. Tried reporting the issue to them on several occasions but never got a response. The issue continued for a year or more before I switched to another supplier. Don't know if it's still an issue now.
Classic fraud scenario. Payments can't be verified by the customer, nor by the auditors. And for that reason, the company doesn't want to fix the problem.
I'm not saying the British Gas was run by crooks (was it?), but I'd certainly expect to see losses reported somewhere down the line.
That British Gas problem sounds like it could be an 'off by one' error somewhere, or some other issue with the wrong id number getting passed in the wrong place (which, of course, competent programmers and testers should have already tested for and detected)?
You should have reported it to the Information Commissioner, especially after BG failed to respond to you alerting them to the problem. That's the sort of thing the ICO should be concerned about (hmm, who am I kidding, they're far too useless (with occasional exceptions) and far too good at stalling on issues by making you trudge through the data controller's complaints department first (yeh, just the sort of thing you really want to do when a definitely-dodgy spamming company has slurped up your phone number or email address from somewhere and are spamming you - not!)).
Forcing interface designers to "check permissions" for each transaction number - requiring extra coding for an extra permission lookup, and therefore more chance of bugs - would not be necessary if the user id were part of the database key. Then it would only be necessary to confirm that the userId field matched the login userId for the secure connection.
For example #transactionKey = #userId_#transactionUUID
This isn’t as clever as you think, I’m afraid…
If multiple users need to be able to see the same data, what do you do? Duplicate data everywhere? Now you have issues with source of truth, increased database size, performance etc
Your critique was you would need extra code to check permissions, but now you need lots of code to do lots of different things AND you’ve got a more denormalized database. Bugs galore…
In general, if you think you’ve thought of a solution to a widespread problem - and your solution is simple - and you have not spent several years trying other solutions: you’re probably not on to a winner.
You’ll end up revising your data model to a permissions table M:M, and then want to optimize those out and end up with something like… an access token! Back to square one
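That M:M permissions table the thread keeps circling back to is nothing exotic; a sketch, with table and column names invented for the example:

```python
import sqlite3

# Sketch of a many-to-many grants table: a record is visible to a user
# only if an explicit (user, record) grant row exists -- so shared data
# needs no duplication, just extra grant rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("CREATE TABLE grants (user_id TEXT, record_id INTEGER)")
db.execute("INSERT INTO records VALUES (1, 'shared report')")
db.executemany("INSERT INTO grants VALUES (?, ?)",
               [("alice", 1), ("bob", 1)])   # both may read record 1

def read_record(user_id: str, record_id: int):
    row = db.execute(
        """SELECT r.body FROM records r
           JOIN grants g ON g.record_id = r.id
           WHERE r.id = ? AND g.user_id = ?""",
        (record_id, user_id)).fetchone()
    return row[0] if row else None
```

Two users read the same row without duplicating it; a user with no grant row gets nothing, whatever id they supply.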
Assuming an MVC framework, this type of exploit is trivial to avoid. That's what the controller is for.
Alternatively wire access control in to the model.
E.g.
Transaction_Model {
    method get(int id, User for_user) {
        if (!for_user->isAdmin) {
            return FetchWhere(id = $id and owner = $for_user->id)
        }
        return FetchWhere(id = $id)
    }
}
The most cursory code review or unit test also picks up this sort of bug.
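And that cursory unit test is cheap to write; something like this Python stand-in (the model function and field names are invented, mirroring the pseudocode above):

```python
# Sketch of the one unit test that catches the classic IDOR: a
# non-admin fetching someone else's record must get nothing back.
def get_transaction(record_store, requester, txn_id):
    txn = record_store.get(txn_id)
    if txn is None:
        return None
    if requester.get("is_admin") or txn["owner"] == requester["id"]:
        return txn
    return None

store = {7: {"owner": "alice", "amount": 42}}

def test_other_users_record_is_denied():
    assert get_transaction(store, {"id": "bob"}, 7) is None

def test_owner_can_read():
    assert get_transaction(store, {"id": "alice"}, 7)["amount"] == 42

test_other_users_record_is_denied()
test_owner_can_read()
```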
Prevention of insecure code has to be built in to the software development life cycle. The org has to prevent devs from pushing insecure configuration and design; devs will take the shortest path to get the job done, especially when they are measured on commit or release frequency. So what's to stop them building bad designs?
But building that secure development pipeline costs. The org needs pen testers, security teams, and tools. None of which adds directly to shareholder value.
Cue company directors. They say they are answerable to shareholders. If they aren't directly liable when the data breaches occur, what's to stop them spending less on security, and returning more profit to shareholders in its place?
Good governance should be at the core of good security. But instead company directors maximise profits and pay themselves tens or hundreds of times more than the developers who build their code. They can get their pals in one of the big four accounting firms to sign off on the audits in return for a guarantee to conduct next year's review. We are audited, job done.
I don't see this circus ending any time soon.
So much for the plethora of standards and certified professionals - it seems they are not doing the job, or the execs do not give security enough importance. Something big is missing.
Cyber security should be included in the education curriculum from primary level.
Plenty of cases in most devs' history where they do care, but are shot down by their 'uppers' because 'this is an edge case'.
Uppers do not believe your standard user is intelligent enough to think that they could find something else by changing the id on the query string.
The problem with this rationale is that your own user isn't the problem - it's the people out there.
Unfortunately, there isn't a big enough stick to beat the companies with. GDPR isn't it......
#1 rule of anything internet-facing - never trust the client. Maybe that's an actual user with an actual web browser on the other end, maybe not. Once they've successfully logged in, they're probably a legitimate user. But still don't trust any data that the "browser" sends as being correct or not tampered with.
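Concretely, "never trust the client" means re-deriving identity from the server-side session rather than from anything in the request; a minimal sketch, with the session store and account data invented for the example:

```python
# The account id in the URL/query string is attacker-controlled; the only
# identity you trust is the one bound to the session cookie on the server.
SESSIONS = {"cookie-abc": "alice"}       # server-side store: cookie -> user
ACCOUNTS = {"alice": {"balance": 100}, "bob": {"balance": 250}}

def handle_balance_request(session_cookie: str, requested_account: str):
    user = SESSIONS.get(session_cookie)
    if user is None:
        return "401 not logged in"
    # Ignore id-swapping games: serve only the session's own account.
    if requested_account != user:
        return "403 forbidden"
    return ACCOUNTS[user]["balance"]
```

A logged-in user asking for someone else's account id gets a 403, no matter how legitimate the rest of the request looks.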