Cupertino is ...
... just keeping up with Redmond.
Apple has distributed a fresh round of security updates to address remote-code execution holes in iOS, macOS, Safari, and the firmware for Apple Watch and AppleTV. Miscreants who exploit these flaws can take over the vulnerable device – all a victim has to do is open a JPEG or PDF file booby-trapped with malicious code, so get …
They're obviously trying, but they still have a long way to go before they catch up in total patches. I would think the fanboi smugness is gone now that Apple has become a viable target due to the number of devices. Used to be, no one bothered.
I guess Linux will be next once the number of installs on home PCs hits a certain point.
With Linux, if vendors follow Apple's route then definitely yes. But if the vendors support multiple distro families (Ubuntu, Debian, Arch, Slack, Red Hat/Fedora, etc) then it will be somewhat more difficult. Also, if the vendors follow the basic Unix practice of splitting user accounts from admin accounts that will limit the possible damage.
Linux should already be getting attention from hackers because of its server dominance.
>Also, if the vendors follow the basic Unix practice of splitting user accounts from admin accounts that will limit the possible damage.
I run Linux as my day-to-day OS of preference, but I think the above statement is probably not too accurate if the Linux desktop is targeted. Once crackers have a foothold, it's going to be game over on any OS, given time. Most of the time, they don't need root for what they want. Who cares about damage to the OS if they have access to your data?
What we need is heavy-duty sandboxing so that *when* the application is compromised, the miscreants don't have much in the way of resources to play with. Web browsers shouldn't have access to any user data; they should run in temporary, mostly RAM-based file systems with minimal "what bits of the OS does this program need to run?" contents and userland display systems. Yes, you'll probably have to sacrifice some speed and battery life for security. On the plus side, your AV can probably stop monitoring all file access and just focus on the high-risk, inter-security-zone data transfers from designated locations.
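Something along those lines can be lashed up today with bubblewrap. A rough sketch follows - it assumes bwrap is installed, that "firefox" is the browser, that read-only /usr, /etc and /lib* are enough for it to start on your distro, and it leaves out passing through the X11/Wayland display socket, which you'd still need in practice:

#!/usr/bin/env python3
# Rough sketch: throwaway browser sandbox using bubblewrap (bwrap).
# Assumptions: bwrap installed, browser is "firefox", /lib64 exists on
# this distro, display socket handled separately.
import subprocess

def run_throwaway_browser(browser="firefox"):
    cmd = [
        "bwrap",
        "--unshare-all", "--share-net",   # drop every namespace except networking
        "--die-with-parent",
        "--ro-bind", "/usr", "/usr",      # the OS bits the program needs, read-only
        "--ro-bind", "/etc", "/etc",
        "--ro-bind", "/lib", "/lib",
        "--ro-bind", "/lib64", "/lib64",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--tmpfs", "/home/sandbox",       # RAM-backed, empty home: no user data visible
        "--setenv", "HOME", "/home/sandbox",
        browser,
    ]
    subprocess.run(cmd, check=False)

if __name__ == "__main__":
    run_throwaway_browser()

Everything the browser writes lands on the tmpfs home and evaporates when it exits, which is roughly the "temporary, mostly RAM-based file system" above.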
Good if you only read The Register or something similar - but any useful "web application" will need access to your data. A containerized application would be even more useful than a sandbox, since it won't share any OS bits, but it will be even less user-friendly.
It's the "generic browser and application host" model which is utterly broken. It does get content from everywhere, often without much user control (see ads, and how they become vectors for malware).
"What we need is heavy-duty sandboxing so that *when* the application is compromised, the miscreants don't have much in the way of resources to play with."
We already have this - it's called AppArmor.
However, it's not usually configured because it "gets in the way", and you also have the problem that many developers don't give a flying fsck about maintaining a sane access profile. See also:
Done. (Posted from a machine running Qubes OS 3.2.)
Or you can run something like Porteus in always-fresh mode -- loads into RAM from a thumb drive, doesn't touch your HDD.
Both solutions (as well as TAILS, etc) come with caveats. Qubes has trouble with non-block USB devices. Copying between sandboxes and saving data sometimes takes extra steps, depending on how you tuck your TAILS.
But the point is, for casual browsing, there are Linux OSes which do set a much higher bar for anyone trying to hack in.
"Also, if the vendors follow the basic Unix practice of splitting user accounts from admin accounts that will limit the possible damage."
Remember, with the most popular Linux distributions, after installation the root account password is the same as the first/main user account password. Consider the following on-screen message from some malware:
"For your security and protection, enter your password to allow operations to continue."
What percentage of 'ordinary' people would enter their password at that point?
How many people who read El Reg have changed their first user password to make it different from the root password? (I haven't, and I'm sure you'd all do a double WTF if you saw that happen, because El Reg readers are 'special'.)
"How many people who read El Reg have changed their first user password to make it different from the root password?"
Uhhhh... I literally have forgotten my root password so many times that about 15 years ago I made it a habit to have a recovery disk on hand at all times...which is making me worry because I don't see it here anywhere!!!!
As everyone already knew by the early '90s, as soon as an account that regularly escalates to root privileges (su, sudo, Kerberos tickets, etc.) is compromised, you are toast.
Just stick binaries that log the password ahead of the real ones in PATH. Or use aliases in the shell. Or shared-library hooking. Or, for X, stuff commands into escalated terminal sessions. Or display an identical window covering the real one when a password prompt appears. Etc., etc., etc.
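To make the PATH point concrete, here's a rough, home-grown sanity check (a sketch, not anything standard) that flags PATH entries another user could drop a fake "sudo" into - a heuristic, nothing more:

#!/usr/bin/env python3
# Sketch: flag PATH entries where someone could plant a password-logging
# shim ahead of the real binaries. Unix-only (uses os.geteuid).
import os
import stat

def audit_path():
    for d in os.environ.get("PATH", "").split(os.pathsep):
        if not d:
            continue
        try:
            mode = os.stat(d).st_mode
        except FileNotFoundError:
            print(f"[?] {d}: listed in PATH but does not exist")
            continue
        if mode & (stat.S_IWGRP | stat.S_IWOTH):
            print(f"[!] {d}: writable by group/others")
        elif os.geteuid() != 0 and os.access(d, os.W_OK):
            print(f"[~] {d}: writable by this unprivileged user (fine if it's your own ~/bin)")

if __name__ == "__main__":
    audit_path()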
The user/admin split does, however, offer some protection against an instant-compromise-of-everything-and-then-get-out-of-Dodge scenario, provided that the attacker doesn't have any exploits to get instant root.
And if you run high-risk stuff like web browsers as separate users, at least the attacker can't do an instant data grab either (if permissions on the data are properly set).
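The "properly set" bit is checkable. A minimal sketch, assuming the browser runs under a hypothetical dedicated account that shares no groups with your main user, so the "other" bits on your home directory decide everything:

#!/usr/bin/env python3
# Sketch: is my home directory readable by a separate (groupless)
# browser account? If so, the "instant data grab" still works.
import os
import stat

def home_is_exposed(home=None):
    home = home or os.path.expanduser("~")
    mode = os.stat(home).st_mode
    # Readable or traversable by "other" means the separate browser
    # account can walk straight in and grab your data.
    return bool(mode & (stat.S_IROTH | stat.S_IXOTH))

if __name__ == "__main__":
    if home_is_exposed():
        print("Home dir is open to other users - 'chmod o-rx ~' would close that.")
    else:
        print("Home dir is not world-readable; the separate browser user is locked out.")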
As Linux gains a foothold in the desktop market, users will try to work around too many password requests. Even on servers, I've seen "sysadmins" start their sessions with "sudo bash" so they're not bothered any more by having to type sudo - and, God forbid, the password - again.
Linux already got attention, and Linux servers are routinely compromised. These may be more targeted attacks than the classic ad-malware/phishing attacks against desktop users, but it's not so difficult to find vulnerable Linux servers and use them as a beachhead to compromise a network - you don't always need zero-days for that...
And keeping up with Google. Did you see the huge list of critical exploits fixed in the latest Android release?
This isn't an Apple problem, or a Microsoft problem, or a Google problem. No one is immune, no one is writing quality secure code. The question is, are software developers getting worse than they used to be, or are we merely better at finding such problems than we used to be?
But Apple have WAY more exploitable code.
Worse than Flash, Apple is right at the top of the list of CVEs. Android is way down the list, outside the top 10 even.
Kssh, back in your kennel. This isn't about Microsoft. You can abuse statistics and be laughed at when we're talking about Microsoft.
That's not particularly reliable for several reasons:
1) it varies by vendor whether a separate CVE is used for each individual issue, or one is used for a whole class of exploits or all security issues in a given subsystem
2) Apple files CVEs even for internally discovered issues, which few others do
3) having more CVEs means more issues were FOUND and FIXED; that doesn't mean the code is worse. If you don't look too hard and find only two bugs in your code, versus someone else who looks really hard and finds 20, it doesn't mean his code is 10x worse. It might mean you have a lot more unfixed vulnerabilities than he does.
Judging from some of the stories I've read regarding the very first versions of MS-DOS, I'd say we're definitely getting better at finding them. MS-DOS, when it first came out, was so buggy you could crash it with four lines of code (code that worked perfectly well in any other BASIC environment).
... it used to be that on Linux or other unixoid operating systems, people tried to avoid those problems. They tried to make code as simple as possible so that more care goes into each and every line of code. (This is changing now with the FreeDesktop/systemd people.)
Also, on Linux you already had those problems and the libraries tend to have been fixed already. There's also more of a culture of fixing bugs, which may or may not turn out to be security problems, as a priority. (Again, apparently except for the systemd/FreeDesktop people.)
You must have missed this week's DNS issues. That was caused by infected Linux-based systems...
I just shopped one of those wannabes to his service provider*. He managed to hit one of my diagnostic boxes, and that logs that sort of crud, so we have stacks of evidence. Judging by the way that box dropped off the Net just now, I suspect the provider pulled the plug on him. A gazillion to go...
* Normally we don't bother, but the provider happened to be one we know, so we punted over the relevant logfiles.
"I guess Linux will be next once the numbers of installs on home PC's hit a certain point."
"You must have missed this week's DNS issues. That was caused by infected Linux-based systems..."
Yes. But read the line you quoted. They weren't PCs. And it was all about common creds across multiple internet-facing devices. Which is hardly an OS-specific issue.
On a per-product basis, Apple are way, way ahead. For instance, OS X is on well over 2,000 known security vulnerabilities, and iOS is on several hundred...
FFS, "Way ahead" of what?
It's you AGAIN? I have disproven the myths you seek to peddle over and over again (on occasion using your own feeble facts, brought up when challenged), but you really seem to think that the Trump approach to "truth" (repeating the same lie over and over again) is going to work in a forum where people work with hard facts on a daily basis.
Give it a rest, will you? It's infantile and irritating and, worse, it's not true. Show me facts and I'll show you where you have been misinterpreting them; otherwise tell your friends in Marketing that we're on to you.
"...if the vendors support multiple distro families (Ubuntu, Debian, Arch, Slack, Red Hat/Fedora, etc) "
How about they just get back to proper embedded C? The general concept of running Ubuntu on a watch is what is putting the IDIoT in IoT. The $$ signs of it all have just fooled people into step 1/2, which is getting an environment up that will run a Linux kernel. Then step 2/2 is working out the space to add in all that GUI code that nobody has really read, but hears is "cool" and "cheap". Security, step 3/2. Maintainability, step 4/2. Practicality, step 5/2.
Alas, Apple have tried very hard and succeeded. Sierra really is the proverbial iceberg with little of real use visible above the water. Siri for example.
I wasn't using that anyway, but God alone knows what they did to Safari. The Guardian dropped out of sight again and in general it's made me more productive because using the Web is just bloody hard work. Well, OK, I switched to Firefox :).
Talking about long-outstanding problems, Apple Mail is a classic example. Did you know it is quite simply not capable of attaching files? It can only enclose them; it cannot create a bona fide attachment, even if you throw it all the way into text mode. It's done a Microsoft insofar as non-Apple Mail users won't, for instance, be able to make much sense of any pictures you add, because they all come up in a 150x150 pixel frame - that's the sort of HTML it makes - which means they'll have to manually save the pictures somewhere and then look at them with whatever their system offers, and that's just ONE problem. People writing plugins such as GPG support (or, indeed, a plugin that would enable it to properly attach files as per the RFC) are constantly derailed by Apple changing the game on them, and frankly it's becoming too much work to support. We may even have to allow Microsoft in again as a consequence :(.
I swear, in that context it's become so much like Microsoft that I'm seriously thinking about switching to Linux desktops in the tech division. OS X and Linux used to be a good mix, but it's not heading the right way IMHO, and email is rather critical for a business. If Thunderbird didn't suck so much at non-text email I'd use that permanently instead.
Why is that, exactly? Does the BSD kernel have some magic support for preventing code that displays JPEGs from having exploits? Of course not; in fact no kernel has JPEG-related code - nor should it.
Which means that for holes involving JPEGs, PDFs, font files and the like, it doesn't make a damn bit of difference whether you are running BSD, Linux, Windows or DOS, for that matter. The security issues found in iOS, Android, and Windows are almost always at a higher layer. True kernel exploits are rare, so even if BSD were perfectly bug-free it wouldn't help with issues like these.
Spray paint malware source code on the sidewalk, and some phone will see it, screen capture it, OCR it, compile it, and (of course) immediately execute it.
Modern OSes are just like the computer hobbyists of the very early 1980s. Computer magazine arrives. Flip through, find code, immediately type it in and run it.
MUST. RUN. CODE. ... ANY CODE. SEE CODE, RUN CODE. CAN'T STOP RUNNING CODE.
SQUIRREL! AH, POSSIBLE CODE. RUN "SQUIRREL!".
I *told* you that the Harvard computer architecture was better. But *no*, you said you wanted the von Neumann architecture. You said it'd be better. Now look at you. Puh.
The hardening of standard libraries rarely seems to happen, so one can only assume there's no money in it. Which is likely because there are few costs in getting pwned (to those that wrote/maintain/use such libraries).
Solutions? The private sector is never going to fix this without being forced by litigation, which won't happen. Alternatives? Perhaps a government- (or international-body-) backed bounty on bugs in OS libraries? The NSA/GCHQ reporting the flaws they find rather than using them? Browsers implementing the recent Android-esque "no permissions on this run until you ask" model (nags are annoying, but who really needs the web to talk to their printer more than once in a blue moon?). Others...?