So, are Apple going to sue him because bugs and vulnerabilities are Trade Secrets?
Would be about par for the course in the crazy world of the Orange Free States.
Six months after software developer Jeff Johnson told Apple about a privacy bypass vulnerability opening up protected files in macOS Mojave, macOS Catalina, and the upcoming macOS Big Sur, the bug remains unfixed – so he's going public. Johnson, who runs app developer Lapcat Software, said he submitted details about the …
this reminds me of a recent piece of wisdom from a president of a certain country who quite sensibly observed that the more you test, the more cases you get. So (Monty Python and the Holy Grail mode on), the less you test...
p.s. but then, there was a different president, in a different country, who in the 1990s offered similar advice: don't want to run a fever? Break the thermometer! Nihil novi, then.
I must say that is disappointing. Apple isn't doing too badly on the security front, thanks to decent fundamentals, and now that they have started talking explicitly about privacy protection as well (it featured big in the WWDC keynote), I would have hoped they would become a bit more intelligent about handling security issues too - including the ones they find challenging to address. The argument for that is simple: an informed customer can always find other ways to address known issues, whereas giving customers the mushroom treatment will alienate the very people who had recognised Apple was doing quite well.
I've seen the same with Microsoft: when a fairly massive problem in email authentication was pointed out to them, they didn't pay out, but they did silently change the process. Fat chance that whoever reported it is ever going to bother filing a report again.
Stupid, IMHO.
The funniest thing is that the end user is going to detect peculiar behavior at the browser level simply through performance hits when the exploit is active. But Apple's current 'security' philosophy has removed the ~/Library tree from view for the masses, so they can't even go in there to flush Safari's files if they are worried about a possible exploit. So dumb.
They hid ~/Library for a very good reason: it would confuse the general public, and if files there are deleted or modified, things break - just like Windows hides AppData by default. You can either unhide it permanently or access it on a one-time basis; the procedure is relatively easy. Just enter ~/Library in Finder's Go > Go to Folder dialog.
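For the terminal-inclined, both routes can be sketched as one-liners. This is a macOS-only sketch (hence the `uname` guard); `chflags` toggles the BSD "hidden" flag Finder honours:

```shell
# macOS only: reveal the hidden per-user Library folder
if [ "$(uname)" = "Darwin" ]; then
  open ~/Library               # one-off: open the hidden folder in Finder
  chflags nohidden ~/Library   # permanent: clear the hidden flag so Finder always shows it
  # chflags hidden ~/Library   # ...and this re-hides it later
fi
```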
You assume both that there will be performance hits when the exploit is active and that users will notice them. I'm not sure about either. If the proof-of-concept exploit is inefficient, that doesn't prevent someone else from reimplementing it to avoid the bottlenecks, or from scheduling the expensive work for times when users aren't going to notice. And if a user in a browser does notice a performance hit, I'm guessing they will assume what I would probably assume: that there's a misbehaving script in an open tab. That might prompt them to restart the browser, but the exploit can be restarted too.
Actually, the point I was making was that if it's sending data from those libraries, dumping that data on the regular leaves fewer tracks with which to be tracked - until the specific exploit is fixed.
Nothing in that Safari library needs to be retained except the bookmarks, because everything else will be regenerated when you reopen the browser.
As long as the person who owns the machine issues the appropriate sudo command, you can see it and flush it. This is also useful for other nasties that might creep into Safari, which in fact happens on a more or less continuous basis.
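A rough sketch of the flush, assuming Safari's usual per-user data locations (quit Safari first; on Mojave/Catalina, Terminal needs Full Disk Access under Security & Privacy > Privacy before it can even read ~/Library/Safari, sudo or not):

```shell
# Clear Safari's per-user caches and browsing data; paths are the
# conventional locations and may vary by macOS release.
rm -rf ~/Library/Caches/com.apple.Safari     # cached page data
rm -f  ~/Library/Safari/History.db*          # browsing history (plus -wal/-shm)
rm -rf ~/Library/Safari/LocalStorage         # per-site local storage
# Bookmarks live in ~/Library/Safari/Bookmarks.plist - leave that alone.
```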
This is merely another example of mainstream OSs thinking that obfuscation = security, when it obviously does not. M$ is famous for this. I was hoping Apple would be less so.
Perhaps because Unix user/group or ACLs are based on WHO, not WHAT?
The example exploit shows the browser's history being accessed by A.N.Other application - nothing about one user accessing another's files.
That said, there's probably ways of partitioning processes in *nix even when running as the same user. It sounds like a reasonable security requirement that has probably already been addressed elsewhere.
Android runs each app under its own unix user id.
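That per-app UID model is easy to observe from a connected device, if you have adb to hand (`com.example.app` below is a placeholder package name, not a real app):

```shell
# Each installed package gets its own Linux UID (u0_aNNN); its processes
# and its private /data/data/<pkg> directory are owned by that UID.
if command -v adb >/dev/null; then
  adb shell ps -A | grep u0_a                         # one UID per app's processes
  adb shell stat -c '%U' /data/data/com.example.app   # owner of the app's private dir
fi
```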
The Android system then allocates to each process whatever needs to be shared, but by default the usual unix permissions apply (and it would work well if they hadn't ballsed it up with their use of a non-user-aware FAT filesystem for "external" data... Yes, FAT was originally used so you could interchange SD cards with other, non-Android devices, but they still could have added an overlay to their driver - something they've half-heartedly done lately, but still badly).
Android has lots of problems, and it's not because of their SD card format. If they wanted to, they could sandbox the SD card easily without doing anything to the format. It's already set up to have directories where apps write by default. They just block access to those directories based on the app, allowing the user to override that. Problem solved. Except that's not the problem. Android's problems run a lot deeper than that, and the choice of format and decision not to sandbox the SD card too is somewhere between inconsequential and slightly positive.
I didn't say Android doesn't have lots of other problems. The internal/external SD card fiasco is still a big one, and of course they could limit access to stuff, but they don't. And that's my point.
It also takes the piss when you don't have enough space to install apps yet have over 100GB free on your SD card. I know the technical side - I've been dealing with it long enough - but it's still badly thought out bollocks, with useless half-hearted hacks added each version to try and mitigate the issue.
On most consumer computers, everything is run with the same user account. Try explaining to the general public that yes, we know you are just one person, but you should create multiple users to run different applications. It sounds ridiculous. That's because, in most cases, it is ridiculous. I have done it with a few applications I have reasons to limit, but most of the time, I have no reason to and I don't. With this, you can take one of a few approaches to solving this problem:
1. What problem? Everything in the user's directories can be read or written if the permissions say so. This is generally fine if malware doesn't get into that directory. Not so good if that happens.
2. Create various areas where applications can write which are sandboxed away from other applications. This actually makes a lot of sense, because user documents can be stored in general-purpose directories.
3. Throw up warning screens whenever a new application wants to read or write to a new area. This will probably generate user annoyance and high blind click-through rates.
4. Warn the user on each file an app loads. The users will soon throw the computers on the floor.
Apple went with a combination of options 2 and 3. Option 3 is the more annoying, whereas option 2 makes a lot of sense. Unfortunately, we now know that they failed to implement option 2 correctly, became aware that they failed, didn't fix it, and ignored the problem completely. So we're essentially back to options 1 and 3. If you're sufficiently confident that you will never have malware running on your user account, you're fine. If you think that's a possibility, you're less fine.
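Option 2 is visible on any recent macOS install: sandboxed apps each get a private container directory, keyed by bundle identifier. A quick look (macOS only, hence the guard; `com.apple.TextEdit` is just a commonly present example):

```shell
if [ "$(uname)" = "Darwin" ]; then
  # One container per sandboxed app, named after its bundle identifier:
  ls ~/Library/Containers
  # Inside, the app sees its own miniature home directory it may write to:
  ls ~/Library/Containers/com.apple.TextEdit/Data
fi
```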
TCC in Mojave and Catalina is a bad, poorly documented (or undocumented) joke that, at least in my case, needs manual editing of TCC.db to work right.
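For anyone wanting to look before touching anything: the per-user database lives at `~/Library/Application Support/com.apple.TCC/TCC.db`, and it can be inspected read-only with sqlite3. This assumes the Mojave/Catalina-era schema (an `access` table with `service`, `client`, `allowed` columns; later releases renamed things), and Terminal needs Full Disk Access or sqlite3 won't be able to open the file at all:

```shell
# macOS only: list which app (client) has been granted which TCC service.
if [ "$(uname)" = "Darwin" ]; then
  sqlite3 "$HOME/Library/Application Support/com.apple.TCC/TCC.db" \
    'SELECT service, client, allowed FROM access;'
fi
```

Editing it by hand is exactly the kind of thing SIP exists to discourage, so treat writes as at-your-own-risk.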
I have a suspicion that Apple chose to defer a painful fix in light of their ARM transition. On the new Macs they are implementing an iPadOS-style security model that presumably preempts this problem in the first place. So I'm assuming their reasoning is: "why fix something that will be fixed by default over the next 24 months?" (Not that I agree with it.)