Come on...
"Apple was not immediately available for comment." ... someone pointing this out has to be the top comment on any El Reg Apple report :-D
Six university researchers have revealed deadly zero-day flaws in Apple's iOS and OS X, claiming it is possible to crack Apple's password-storing keychain, break app sandboxes, and bypass its App Store security checks. Attackers can exploit these bugs to steal passwords from installed apps, including the native email client, …
Apple stuff is supposed to be idiot-proof and I suppose it is when used by idiots.
Unfortunately when used by anyone with even half a brain it appears to be wide open to abuse. And not to even respond to the reported threat is a real abdication of any duty of care to its users.
Not a nice prospect for those naive users who trust Apple implicitly.
Apple currently still has, on its app store, an app expressly stating that it is intended to be used to "bypass your school filter", etc. It's as simple as installing it, and you get full, free, VPN access to the outside world that's almost undetectable.
Not a huge issue, but there are no real ways to "block" a particular app install even with MDM APIs. You can turn app installs on or off and monitor them, but you can't blacklist an app. If you want to use, say, Cisco Meraki to push apps to your iPads in a school (a very popular choice) you need to have the "install apps" option on, or else you have to recall every iPad and update it manually every time.
The only real option you have is parental filtering, where you can block apps with certain age ratings.
The above app is STILL, after several reports, marked as being 4+. Apple have steadfastly refused to do anything about it, as they categorically state that it's nothing to do with them and it's up to the app-makers to decide the age-rating (not much point in having an age-rating, then, really?). This app allows bypass of any and all filters and access to the unfiltered Internet, for free, just by clicking "Get App".
However, Chrome was briefly pulled from the store and recategorised as 18+ because it "allows unrestricted access to the Internet".
Apple don't care about what they are doing, so long as they are making money. They are right and everyone else is wrong, and that's the end of it. And no amount of head-banging against their complaints department, tech support, etc. will do anything to change that at the moment.
The name of the VPN app: which one? There are plenty on the App Store; the one I use (VyprVPN) also has the 4+ rating. Of course, the apps are free; the subscription to a VPN service usually is not. But it's only that: a VPN profile. Browsing, mail, other apps: they use the VPN connection for access, nothing more. So any age limits on apps using that connection stay in place.
Apple currently still has, on its app store, an app expressly stating that it is intended to be used to "bypass your school filter", etc. It's as simple as installing it, and you get full, free, VPN access to the outside world that's almost undetectable.
If your school system can't stop VPNs, you're doing it wrong. Pretty much any corporate network I've had to plug into has blocked pretty much all VPN connection methods. Some proxies are even smart enough to detect "SSL" connections that have been transferring far more data than what a regular HTTPS request would require and cut off those connections.
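The volume-based detection described above can be sketched as a toy rule. This is not any real proxy product's logic; the function name and thresholds are invented for illustration:

```python
# Toy illustration of the traffic heuristic: flag "HTTPS" connections whose
# upload volume or lifetime far exceeds what ordinary page fetches need.
# Thresholds are invented for the example, not taken from any real proxy.

def looks_like_tunnel(bytes_up, bytes_down, duration_s,
                      max_up=512 * 1024, max_duration=300):
    """Return True if a supposed HTTPS session resembles a covert tunnel."""
    if bytes_up > max_up:            # browsers mostly download, not upload
        return True
    if duration_s > max_duration:    # a page fetch shouldn't live for minutes
        return True
    return False
```

A real implementation would also have to cope with legitimate long-lived sessions (video calls, large uploads), which is where the false positives discussed below come in.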
Haha, good luck with that SSL method when the whole of the web goes SSL which is slowly happening.
Actually, that won't matter, since the method in question allows the proxy to act as a man-in-the-middle to decrypt the connections, inspect the contents, and reencrypt the HTTPS connections. They get away with this through the use of an internal CA pushed by default to all internal systems, such that the proxies are always trusted, even when they impersonate external HTTPS sites.
Evil. Pure evil.
"Some proxies are even smart enough to detect "SSL" connections that have been transferring far more data than what a regular HTTPS request would require and cut off those connections."
So what happens when a false positive occurs, such as someone trying to download a perfectly legal Linux live ISO? Plus I would think it would have a hard time trying to handle smurfed sessions, or ones limited to small transfers that are plausible under normal web use.
So, the note about the Chromium team disabling the affected part escaped your notice?
You need bright light to find bugs. Delaying publication does not really improve security. Who's to say that other people (criminals, spies) haven't found the same vulnerabilities?
But neither does exposing updates too soon. You don't have to be smart enough to find a vulnerability if you can see what's changed in the latest patch and work back from there. And while some bugs are just simple errors that can be easily fixed others require core components to be redesigned and rewritten. Applying the same time scale to all vulnerabilities shows a lack of understanding of software development.
So it's a balancing act; allowing the vendor time to fix things in a timely manner is fine. Waiting six months seems a little too generous to me, though...
Enough. I'm absolutely sick of this bullshit. The way you fucking morons blather on is ridiculous. If you are a Google or Microsoft advocate, just shut the fuck up. Those two are demonstrably more complicit, despite Larry's protestations, than Apple. Open sourcer? The code is fucking open!!! Do you really think that penetration of your systems is beyond the bods at the NSA's capability? Hubris is going to get the better of you and I for one cannot wait until it does, you bunch of sanctimonious pricks.
And breathe...
LOL! It's astounding how MS is evil, Apple and Google always right... MS has to fix everything in 90 days, Apple can ask for 180 (and is this fixed?) and nobody complains... if the vuln was already known in October 2014 and Chromium fixed it only recently, it took more than 90 days too... it's always easier to apply hard schedules to others than ourselves, right?
Delaying publication until a fix is ready - and the vendor is working on it - is a good thing. Sure, somebody else may have found the same vulns, but maybe not. If you publish them, you ensure every criminal knows about them and can use them easily. The day a vuln hits you hard in the face, you'll change your mind...
LOL! It's astounding how MS is evil, Apple and Google always right.
What a load of crap! Time to burn your strawman!
Apple is known to have a terrible record on security updates. That's why many of those who use Macs don't rely on Apple for POSIX libraries. Interestingly, however, it looks like they have learned from the OpenSSL debacle and are moving to LibreSSL for the next version.
Google might well want everybody's data but does have a good track record when it comes to bug-fixing. This may come from having a pretty good open source culture within the company: they have long been good players in many projects. The proof will, of course, come when someone discovers a major flaw in something like Android that they want held back.
Not knocking the work, and certainly not the results, but when these guys say that this research will be invaluable for future reference, is that really the case?
We all know about buffer overflows, yet that door is still open in almost every new malware report. Sometimes they even concern products made by big companies who definitely know better.
This new report is bringing to light some new obscure chain of consequences that constitute a vulnerability. Great news, but who exactly is going to pore over this to understand what is going on and how to avoid it? Security researchers, not application coders.
When I search for "good programming practice", what I find is stuff that generally concerns code clarity and maintainability, rarely security.
In the best case, there will be a mention of using fgets instead of gets in C, because buffer overflow. But the rest is all about indenting, variable name formatting, function wrapping and commenting. Nothing to do with security.
We need an easy-to-read overview of good security practices that does not just say "check your inputs" but details what to check and how to make sure. Is that available somewhere?
True, but I'm not even sure if this is that much about programming. It sounds a lot more like design, especially Apple's much-flaunted app sandboxing, which seems to have been undermined.
Have you read the paper? It does discuss specific issues for app developers, even though the general problem probably can't be solved entirely at the app level.
In any case, saying this sort of research isn't useful for programmers is like saying research into the performance of building materials isn't useful for house builders. Yes, programmers are able to continue writing crap code. That doesn't mean it's impossible for them to learn to do better.
This post has been deleted by its author
The paper does criticise Apple for not making developers aware of the vulnerabilities or providing ways to spot them.
But, skimming the paper, I saw two classes of problems:
1. IPC is public. You'd be castigated for allowing access to private website data without forcing the user to log in. The same applies to an app's internal services: if the service allows an app to change something or read sensitive data, then verify the caller is who they say they are. This includes communication via any URL scheme; so, for example, if your app can be reached via anglegrinder:param1&param2&etc
then any tom, dick or malicious app could do so. Also websockets, etc...
2. Impersonation. It's possible for one app to impersonate another. (They can register your keychain id and then steal your data. Or register your url scheme and intercept data before it gets to you.) The fixes for this are dependent on Apple, and will probably break apps. But avoid keychain until Apple have sorted it.
There wasn't a buffer overflow in sight. These weren't programming errors; they were design errors. And for those of us who have been around the block, mitigation is plain common sense (AKA EXTREME PARANOIA).
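The "verify the caller" mitigation in point 1 can be sketched in a platform-neutral way: before a local IPC endpoint acts on a request, it checks an HMAC computed with a secret shared only with legitimate callers. The function names, scheme name and secret here are hypothetical; on iOS/OS X you would additionally want to check the caller's code signature where the platform allows it.

```python
# Sketch: authenticate IPC/url-scheme requests with an HMAC over the
# parameters, so an arbitrary app that merely knows the scheme name
# cannot drive the service. Names and the secret are illustrative only.
import hashlib
import hmac

SHARED_SECRET = b"per-install secret, not hard-coded in real code"

def sign_request(params: bytes) -> str:
    """Caller side: tag the request parameters with an HMAC-SHA256."""
    return hmac.new(SHARED_SECRET, params, hashlib.sha256).hexdigest()

def handle_ipc(params: bytes, tag: str) -> bool:
    """Service side: act only if the caller proves knowledge of the secret."""
    expected = sign_request(params)
    return hmac.compare_digest(expected, tag)  # constant-time comparison
```

This doesn't fix the impersonation problem in point 2 (which needs platform support), but it stops "any tom, dick or malicious app" from simply replaying the public entry point with its own parameters.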
We need an easy-to-read overview of good security practices that does not just say "check your inputs" but details what to check and how to make sure. Is that available somewhere?
All over the place.
If you're programming in traditional procedural languages, try Howard et al., 24 Deadly Sins of Software Security. Originally 19 Deadly Sins.... I think the first edition came out in 2005, so it's been available for the past decade.
Organizations like SANS and OWASP have been publishing "top ten" vulnerability lists for years. The main SANS list goes back at least to 2000. The OWASP list is specifically for web applications, though some of the concepts are applicable elsewhere. OWASP has a good wiki and other materials that describe specific remediation steps. There are many, many online articles that discuss these lists and remediation steps for the vulnerabilities they describe.
There are the Security Focus mailing lists. Bugtraq is the most famous, but they have a "Security Basics" list, and in the early 2000s there was a "Security Programming" (SecProg) list; the archives are still available at securityfocus.com/archives, along with those for VulnDev and others. Back in the day there was plenty of activity on Usenet groups like comp.unix.security.
And of course there are any number of more-general treatments that will actually teach developers how to think about security and develop with it in mind, rather than simply following a list of rules. There's the O'Reilly Computer Security Basics book (Russell & Gangemi), for example, or Anderson's Security Engineering - which is available free online.
Given the problem with this flaw, Apple shouldn't need six months; hell, they've known about it for nine months now and still haven't fixed it. The reason you say one month is to make Apple get off their butt and FIX IT. The problem is Apple has a nasty habit of NOT fixing stuff in a timely manner; EVEN with ALL THE MONEY they make, it takes them months on end to fix an issue.
The problem is Apple has a nasty habit of NOT fixing stuff in a timely manner; EVEN with ALL THE MONEY they make, it takes them months on end to fix an issue.
I don't see evidence of them NOT reacting to issues, but it is true that they sometimes take their time. The OpenSSL and bash bugs were nailed pretty quickly though, so I wonder why they took this long. Maybe the issue is too complex to patch quickly? It would be interesting to know.
I never said they haven't reacted to it; I said they have an issue getting things fixed in a timely manner. There was a flaw called Flashback, I think it was, many years ago. It was a Java exploit, a zero-day bug; the Windows fix was out within a day, and Apple had the updated code to fix it as well but took two months before releasing the fix. Apple has a habit of taking a LONG time to fix nasty security flaws. It's so bad that it would be easy to say Windows is 10x more secure than Apple's OSes just on the fact that MS fixes things in a reasonable amount of time, whereas with Apple you can't expect it to be fixed for at least two months, if not more.
This post has been deleted by its author
A car with a serious flaw in its operation would be recalled and be fixed by the manufacturer or supplier in a time scale commensurate with the scale of the danger. If the scale of the fault is as great as losing passwords then a fast fix has to happen - or it might be better to turn the product off and not use it again until it is fixed.
would be recalled and fixed... sometimes.
Sometimes, in the past the mfr has decided its cheaper to settle the surviving family members' lawsuits than fix the flaw. Or just ignore the issue like with diesels' fuel filters, or VW Touran ABS modules and flywheels falling apart.
The article is generally better than Mr Pauli's dashes but still contains some misleading and poorly expressed parts. For example,
They found "security-critical vulnerabilities" including cross-app resource-sharing mechanisms and communications channels such as keychain, WebSocket and Scheme.
In this context "security-critical vulnerabilities" should not be quoted because it is in the context of the report. If the author wants to emphasise that this is a claim made by the researchers that has yet to be confirmed then more explicit context can be added: "the researchers claim that there are security-critical vulnerabilities…"
Resource-sharing is essentially what an operating system does for applications and is always "cross-app". However, this sounds more like it is related to resources being shared between apps.
"Scheme" is a programming language, LISP like as far as I know but I'm probably wrong. Further down in the report this is clarified as referring to the URL-scheme used and not the programming language. BID is thrown in later without explanation of the acronym.
XProtect and app signing both rely on extended-attribute metadata; if you strip it with a single xattr command you've lost the protection.
Apple didn't backport the rootpipe fix to Mavericks or lower, and the fix for Yosemite didn't really address the issue.
https://reverse.put.as/2015/04/13/how-to-fix-rootpipe-in-mavericks-and-call-apples-bullshit-bluff-about-rootpipe-fixes/
Yosemite networking is a disaster thanks to discoveryd.
Security fix policy is what we know by what's been updated rather than any official statement.
Now Keychain was owned six months ago and they've been unable to fix it.
It has rather gone downhill since Snow Leopard.
... if you have 2 processes running under the same user id on a system, then 1 process can attach to the other and scan its memory anyway. Which admittedly requires a large amount of knowledge of unix systems programming but the potential is there. How is this different other than some badly written libraries make it slightly easier?
"No, modern (post 1990s multi-user system) operating systems should manage the memory space for applications to prevent this."
I didn't mean reading the memory directly; it needs OS support to bypass the standard memory protection. But it can be done, otherwise debuggers, trace and profiling programs wouldn't work.
They can also attach to running processes. At least on Unix, no idea about Windows.
Windows as well. The design of the Windows protection model for userland processes is different from the UNIX one, but the result is broadly a similar protection model.
Windows has a more thorough use of object ACLs so the access determination is more complex and nuanced than just "source uid == target uid == target euid", but to a first approximation it's the same sort of thing. Particularly when you compare it with the whole universe of commercial OSes, some of which are significantly different (e.g. System i) or very different (e.g. Orange Book A1 systems like SNS).
No, modern (post 1990s multi-user system) operating systems should manage the memory space for applications to prevent this.
This is simply wrong. Take Linux, for example. From the ptrace(2) man page:
EPERM The specified process cannot be traced. This could be because the parent has insufficient privileges (the required capability is CAP_SYS_PTRACE); non-root processes cannot trace processes that they cannot send signals to or those running set-user-ID/set-group-ID programs, for obvious reasons. Alternatively, the process may already be being traced, or be init (PID 1).
Consider in particular the bit about "non-root processes". Processes with normal privileges (non-superuser, without CAP_SYS_PTRACE) can trace processes running with the same uid and euid. That includes reading and writing process-private memory.
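To make the point concrete: on Linux, once the kernel's permission check passes, process memory is readable like any other resource through the /proc interface. The sketch below reads the process's own memory via /proc/self/mem; reading *another* same-uid process works the same way after a successful ptrace attach (subject to the Yama ptrace_scope setting on modern distributions). This is a Linux illustration of the principle, not an iOS/OS X technique.

```python
# Read a value out of process memory via /proc/self/mem (Linux-specific).
import ctypes

secret = ctypes.create_string_buffer(b"hunter2")  # lives in our heap
addr = ctypes.addressof(secret)                   # its virtual address

# buffering=0: read exactly the bytes we ask for, so we never touch
# potentially unmapped pages beyond the buffer.
with open("/proc/self/mem", "rb", buffering=0) as mem:
    mem.seek(addr)
    leaked = mem.read(len(b"hunter2"))

print(leaked)  # b'hunter2'
```

For a different process you would first PTRACE_ATTACH (which is exactly the check the ptrace(2) excerpt above describes), then read /proc/&lt;pid&gt;/mem the same way.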
On Windows, similarly, a normal-privilege process can open a handle to another process running with the same security token, and through that handle manipulate process memory and even do things like creating threads in the target.
Security models for multiuser operating systems typically impose access-control requirements at user and system granularity: that is, access controls must protect resources owned by a user from other users, and system resources from invalid access by user-mode code. That's essentially how the Orange Book (which came out in 1983, by the way - your "post 1990s" date is way off) defines the C2 level, for example.
... if you have 2 processes running under the same user id on a system, then 1 process can attach to the other and scan its memory anyway
That depends on the operating system. But I'll assume we're talking about UNIX-family OSes here.
That's why the resource isolation model in iOS doesn't simply run apps as conventional UNIX processes under the same ID. There's more information in the paper, or elsewhere.
Under Android, according to the paper, each app runs under a different UID. (I haven't bothered trying to confirm this from other sources.)
I have never used Keychain - I've never felt it was safe 'leaving' my passwords 'in the computer' even if they are encrypted and supposedly locked down somehow. Having said that, like most people my email password is stored within my mail client and I don't want to have to type it in every time I want to check for email.
OSes, eh? Is it going to be Linux next time for me, or giving up on this computer time-wasting?
"I've never felt it was safe 'leaving' my passwords 'in the computer' "
Err, where do you think passwords are normally stored on a home computer? Or do you think everything is stored on a remote server... sorry, I mean in Dah Cloud? Even the browser stores your online passwords in an encrypted cache.
"On your computers maybe. But on mine, and presumably the op's, my browsers do not store any passwords encrypted or not."
I'm assuming you have a password for the main login on your computer, which whether you like it or not will be stored on the local machine.
And if, for example, you're using a web proxy server, do you enter the proxy password every time your browser needs to fetch a page? And do you manually type in every password for every website you use?
It makes sense not to cache banking passwords, but crap like social media, who cares?
"Personally I'd hope that my computer just stores a salted hash of my password note the password itself :-)"
The OS, yes; not browsers. They store the actual encrypted password, otherwise they wouldn't be able to auto-complete password fields.
That was rather the OP's point. The OS can store a non-reversible password verifier (a hash, a ZKP verifier as with SRP or PAK-RY, etc). The browser needs to store a reversible encrypted password. So not using the browser's autocomplete feature removes a significant branch of the attack tree, and your post about the OS "storing the main password" is irrelevant.
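The distinction drawn above can be sketched in a few lines: an OS login database can keep a non-reversible verifier (a salted, stretched hash), while a browser's autocomplete store must keep something it can turn back into the actual password. The parameters below are illustrative, not a production recommendation:

```python
# A non-reversible password verifier: store (salt, digest), never the
# password. Checking recomputes the digest; nothing stored can be
# "decrypted" back into the password.
import hashlib
import hmac
import os

def make_verifier(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest          # the password itself is not stored

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

A browser that autofills forms cannot use this scheme: it has to submit the real password to the website, so whatever it stores must be reversible (encrypted at best), which is exactly the branch of the attack tree the poster above is talking about.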
On the other hand, not using the browser's autocomplete feature or other "password safe" technologies means the user types the password more frequently, increasing the attack surface for e.g. keyloggers and some forms of phishing attacks. It's a trade-off. Personally, I don't use a password safe and disable many other sorts of credential caching, as that's the less risky option under my threat model. It means I type my (38-character) Windows domain password half a dozen times each day, but I'm a fast typist.
Well, I feel a bit of a dunce now, but better informed (so thanks in a way). I had a look in Keychain (never opened it before) and saw that there are some passwords there. I did not realise that for some systems (typically Apple stuff) it stored them there whether you liked it or not. When the pop-up had appeared in the past - "Do you want Keychain to store this password?" - I had always said no.
I guess my browsers store some passwords - I do occasionally reset them, clearing certain logons - but I type in passwords for things like online stores and banking when I need them. Don't know how risky it is being logged into the Register often; I tend to have different passwords for everything.
Being a miserable git (or whatever term takes your fancy) I've not really got into the social media stuff
I feel that there are basically two main options for why this has not been fixed yet.
option 1.
We are apple, we don't care.
option 2.
The issue is a design flaw more than an implementation flaw, and thus they have no clue how to plug it without breaking everything or doing big rewrites of loads of core components, involving tons of management at all levels of the company and burying the "project" in glue.
At first sight this looks like an issue that is buried very deep and could require a considerable overhaul of the underlying system. Not a question of checking some bounds or adding an escape character.
As that could take fairly long to fix they've probably already separately looked at the attack vector. At least that reduces the risk until the developers have fixed the underlying problem. It seems that the malware would have to come in via the App Store, otherwise why would these researchers have gone down that route? If the App Store is the only attack vector and you know what a malicious app needs to do to gain access you can look for it in the app vetting process.
It's not perfect but it's better than nothing.
Reading the paper, the vulns are in the app sandboxing, IPC and WebSocket design - therefore deep in the OS - and any change may impact a lot of applications, which could stop working. It could really take no little time to work out how to fix it without big breaking changes.
That's why those who believe you can always fix a vuln in a few weeks are those who never worked on a complex piece of software, with a lot of other software beyond your control depending on it. And a lot of customers who would become really angry if you broke something badly.
That's why those who believe you can always fix a vuln in a few weeks are those who never worked on a complex piece of software, with a lot of other software beyond your control depending on it.
It's primarily a design issue that should have been picked up a long time ago. How do you think the liability should be handled if someone experiences harm as a result? Disclosure isn't really any different to finding defects in laptop batteries, or car accelerator pedals.
Having just read the PDF -
- The keychain on iOS is not affected.
- The only thing on iOS that is affected is URL schemes. This has been known forever; anyone can publish an app which claims any URL scheme, so you shouldn't send anything sensitive using them.
OSX has more holes....
I imagine a lot will feel the screw. Many of us CAN'T upgrade to the next version of OS X every time they release one. It breaks our existing programs too often. Add to that Apple's POLICY of refusing to patch ANY past versions once another is released... we are just screwed. This is the "Apple Screw™" and how it turns...
Once again, Apple is caught with its pants down despite being based on a BSD OS. As a customer (most certainly not a fanboi), I find this very concerning.
A vulnerability that allows peeking into a password manager? Can't get much worse than that, can we?
They are after all a $700B+ company. Surely they could lock down their systems entirely, instead of pulling a "geez, hard to fix it, man", like they have been doing with rootpipe. Or doing a mid-90s Microsoft and claiming that integration and ease of use trump security. For example, let me disable Keychain, at least for certain apps, until this is fixed.
Start with the premise that anything that involves authentication or authorization, and does not come from the BSD core, needs to be reviewed with the most extreme paranoia. Well, the BSD core too, but that's been out there longer.
That would be worth a $1B or 2, surely. Lots of tasty bounties for example.
Rather than splurging $3B on a maker of flashy headphones with questionable acoustics.
Awww, what do I know? Just a dumb user.
Seems to be a recurring trend with Apple: everything is a hard-to-fix problem. Guess that's how it ends up when the only thing in your face is money.
If you remember the celeb photo leak: yeah, Apple knew of the flaw that was used six months ahead of time, and nothing was done to fix it till after the leak.
Take a look here.
https://discussions.agilebits.com/discussion/comment/212590/#Comment_212590
Note that their attack does not gain full access to your 1Password data, but only to those passwords being sent from the browser to 1Password Mini. In this sense, it is getting the same sort of information that a malicious browser extension might get, whether or not you use 1Password.
I never felt very happy myself with password manager to browser integration and kept it turned off. So it seems I am OK for now.
Note also that if you think, like some individual posters (morons & fanbois) at 9to5mac.com, that this whole thingy is overblown, having the vendor of a password manager confirm that they've worked on it and that it is real should be a bit of a reality-distortion-field remover. If that is possible with deep-down fanbois.
Hi, I'm Megan and I work for AgileBits, the makers of 1Password. For our security expert's thoughts on this article, please see our blog: https://blog.agilebits.com/2015/06/17/1password-inter-process-communication-discussion/. If you have further questions, we'd love to hear your thoughts in our discussion forums: https://discussions.agilebits.com.
Despite the assertions from numerous pro-Apple commentators in these hallowed forums, it would appear that the much-vaunted screening of apps in the App Store isn't all it's claimed to be.
This isn't the first time that researchers have had malicious apps published and made public, yet so many Apple apologists insist that there is no malware in the App Store, Apple weed it all out, you only get malware if you jailbreak, etc.
Surely nobody with an ounce of common sense can really believe that, if security researchers can have malware published, there are no apps on the App Store created by the criminal fraternity containing malware exploiting as-yet-undisclosed vulnerabilities.
>>yet so many Apple apologists insist that there is no malware in the App Store, Apple weed it all out, you only get malware if you jailbreak, etc.
You've misunderstood. Nobody thinks Apple's vetting process is anything special.
The reason people say there's no malware in the App Store, and that you only get malware if you jailbreak, is because of Apple's sandboxing system, which has seemed pretty darned robust until just now.
Apple fans aren't necessarily as stupid as you seem to assume.
The reason people say there's no malware in the App Store, and that you only get malware if you jailbreak, is because of Apple's sandboxing system, which has seemed pretty darned robust until just now.
I dare say most of the people who say that have no idea the sandbox even exists, and are just repeating what they've heard.
Certainly most of the people that I've heard claim the iOS App Store is free of malware are not software security experts, or even vaguely familiar with the field.
If this truly is a fundamental flaw in the OS, it's going to take another version to fix it. We can expect to see new programmer training and new versions of most OS X-compatible apps with the next release.
It's bound to happen. Write once, run only in the old version, write it again to run it in the new and improved secure version.