Or it was deliberately put in there from day one.
The lengths governments will go to for access are boundless.
Kaspersky's Global Research and Analysis Team (GReAT) has exposed a previously unknown "feature" in Apple iPhones that allowed malware to bypass hardware-based memory protection. Tracked as CVE-2023-38606 and patched in July 2023, the issue affected iPhones running iOS versions up to 16.6, according to the …
A testing feature which *may* have other uses
Is not a bug, is a feature!
...which allowed miscreants to gain access to targeted devices, deploy spyware, and snoop user data
So the usual problem of creating a feature that can be used by official miscreants, but which is then swiftly discovered and exploited by real miscreants. It could be entertaining if lawyers start a class action against Apple to discover how this feature crept into something Apple markets as a 'secure' device.
Pssh.
Apple employees have full, unrestricted access to your iCloud, etc. Apple claims they don't, but there's a backend database that can be used to extract/move data around if you call for support and your iCloud is borked server-side in some way. Employees basically have a file manager interface.
Did you think the Fappening/Jennifer Lawrence thing was an external hack? Oh, sweet summer child.
PUBLICLY the government occasionally drops Apple a prize, letting them complain "we can't access that iPhone or its data", but in reality, behind the scenes, they do it all the time to comply with warrants.
They just don't want the average Joe to know iOS is NOT a secure way to chat with your crime buddies.
"hardware-based protections can be rendered ineffective [...] when there are [undocumented] features allowing to bypass [them]"
GReAT job by the GReAT gang! CPUs, and related hardware, should really be much better documented than many of them currently are. There's no way to get many subsystems (USB, display, some timers, ...) configured on Allwinner A64, Amlogic S905, Broadcom BCM2712 (Raspberry Pi 5), and even NXP i.MX 8M (esp. DDR4 init), for example, from their datasheets and technical reference manuals -- one needs instead to wade waist-deep through the murky waters of U-Boot's device-specific driver source code, without a map, or paddle (not for the faint of heart!), or worse yet, gdb/objdump disassemble BL31 binaries, and then snorkel through the resulting compost heap ... with fortitude.
Hats off to Kaspersky (and the Asahi Linux team) for going through this sort of digital torture and self-immolation to provide us with analyses and solutions in relation to Apple's hardware obscurantism. Next step may be for each individual to verilog her own CPU onto some FPGA, over the weekend, and stick it to those corporatists who insist on sacrificing our computational security to the ghouls of concealment sorcery, on the altar of evil spellbondageness!
That it is a GPU cache debugging feature. Because of Apple's unified memory, and the lack of a separate IOMMU for the GPU, the memory range was accessible from the main CPU. It was trivial to block once they were alerted (just map out that memory range), but the design of the GPU seems a bit lacking security-wise; perhaps GPU designers aren't used to worrying about security, since that type of stuff is typically inaccessible to the CPU.
https://social.treehouse.systems/@marcan/111655847458820583
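For what it's worth, here is a minimal C sketch of what "mapping out that memory range" amounts to: a denylist of physical MMIO windows that the kernel refuses to map. The structure names, the function, and the address are all invented for illustration; this is not Apple's actual fix and not the real register range.

/* Illustrative sketch only -- not Apple's code and not the real CVE-2023-38606
 * register range. It shows the general idea of "mapping out" a range: keep a
 * denylist of physical MMIO windows and refuse any mapping that touches them. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct phys_range {
    uint64_t base;
    uint64_t size;
};

/* Hypothetical denylist entry; the real offending block was an undocumented
 * GPU coprocessor debug region. */
static const struct phys_range denied_mmio[] = {
    { 0x2F0000000ULL, 0x10000ULL },   /* placeholder base and size */
};

static bool ranges_overlap(uint64_t a, uint64_t a_len, uint64_t b, uint64_t b_len)
{
    return a < b + b_len && b < a + a_len;
}

/* Called before creating any physical mapping: reject it if it intersects a
 * denied window, so even kernel-level code can't reach those registers through
 * an ordinary mapping. */
bool mapping_allowed(uint64_t phys, uint64_t len)
{
    for (size_t i = 0; i < sizeof(denied_mmio) / sizeof(denied_mmio[0]); i++) {
        if (ranges_overlap(phys, len, denied_mmio[i].base, denied_mmio[i].size))
            return false;
    }
    return true;
}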
"The attempt to infiltrate Varadarajan’s phone and install Pegasus, which took place on Oct. 16, failed, Amnesty found. That’s because Blastpass had been revealed in September by Citizen Lab, Apple had fixed the two flaws it used and Varadarajan had kept his iPhone’s software updated."
from
India targets Apple over its phone hacking notifications
https://www.washingtonpost.com/world/2023/12/27/india-apple-iphone-hacking/
also here
https://wapo.st/3NEJYub
Didn't the UK Parliament just pass a law mandating "backdoors" on everything?
Like, "backdoors" that the government GoodGuys can use to ProtectTheChildren and catch terrywrists but that could *NEVER* be used by BadGuys to harm
or exploit the users?
And wasn't it Scientifically Proven, Clinically Tested and Assured By Nine Out Of Ten YouTube Video Producers that the security of those "backdoors" was totally, utterly, completely and eternally uncrackable, and that we were vastly safer with them than we had been without them?
Or was that something I dreamed?
I'm sure no government, anywhere, would ever insist on a backdoor that the evildoers could exploit.
"Any single exploit is a *total* exploit*." Harold Vulture, "Person Of Interest".
They must [have] been in on the original design phase.
This is a dangerous fallacy; it's precisely the one that supports security by obscurity. We have overwhelming evidence that determined, well-resourced attackers are able to find and exploit extremely subtle vulnerabilities. Assuming the attackers in a case like this must have had inside knowledge is to ignore the much greater and more dangerous possibility that they simply found the vulnerability by looking.
Also, "of" ≠ "have".
I wouldn't call it similar, except in the very general sense that complex systems are complex, and complexity is the enemy of security.
Interactions among subsystems with differing security designs and requirements are the source of a great many — perhaps the majority (and depending on how far you want to stretch your definitions, all) — of vulnerabilities.
This, too, is a dangerous fallacy: assuming malice where accident, ignorance, or incompetence are also plausible causes. It's dangerous because it incorrectly narrows the threat tree. To satisfy the security requirements of a particular threat model, you have to guard against all plausible sources of vulnerability, not just the sexy Hollywood ones.
Where in the world do you get the idea that the "primary vector" is Javascript? It was an exploit with seven different stages. Javascript was used for one of them because that's what they had handy for that step in the exploit chain, but they could have slotted in something else if Javascript was bulletproof.
A superb article for a Friday any week of the year. The only gripe comes from the author's last sentence:
'Security through obscurity' just doesn't cut it anymore.
I was given to understand that "security through obscurity" definitely does "not" provide security.
That a major provider of hardware thinks they can use this to pull the wool over their customers' eyes shows how fucking short-sighted capitalist thinking is.
I've never bought an Apple product, and after this "news" story I will continue to scoff at people who still believe that Mac users can't catch viruses or other malware.
Not that Microshite is any better; after more than 30 years in the field I finally got my first trojan and crypto miner on my Windose 10 box. Not surprising really, as my chosen antivirus, Sophos, will no longer work on Windose 10 without crippling the machine, since it is attempting to work alongside Microsoft Defender. Microsoft Defender had failed to notice anything, even while I was unable to log in to my bank. I had to start poking about, ahem... before Microshit went, ooh look, you just had x and y, never mind, we have dealt with it. No, don't bother checking the logs, we deleted those as well...
Is there any hardware/software combination which is safe? Seems like everyone in the business has chosen the "fuck you by obscurity" route...
ALF
Agreed, an excellent article!
Technically speaking, even strong encryption is security through obscurity, just with an extremely high degree of obscurity and a well-understood mathematical definition of just how much obscurity there is.
A lot of this reminded me of the days when people would poke around inside Z80 CPUs looking for extra instructions that the manufacturer was prone to slipping in.
There’s also plenty of scope for this to be a simple screw up. When designing a device you might start off with an assumption of how much memory mapped IO it’s going to need. So you slap down that many memory cells into the design. Then as the design matures you run out of ideas for what registers to have on the device and end up with loads left over. No one ever reviews that part of the design, the docs are written, bonus earned and some fab somewhere starts stamping out chips with more addresses than they know what to do with.
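To make that concrete, here is a hypothetical register map in C -- device, offsets, and names all invented -- where the documented registers never fill the block that was sized for them, yet the leftover addresses still decode on the bus:

#include <stdint.h>

/* Hypothetical peripheral: the block was sized at 256 bytes early in the
 * design, only three registers were ever documented, and the rest still sit
 * there in silicon, unreviewed and unmentioned in the reference manual. */
struct widget_regs {
    volatile uint32_t ctrl;          /* 0x00: documented control register        */
    volatile uint32_t status;        /* 0x04: documented status register         */
    volatile uint32_t irq_mask;      /* 0x08: documented interrupt mask          */
    volatile uint32_t _reserved[61]; /* 0x0C..0xFC: never assigned, still mapped */
};

#define WIDGET_BASE ((struct widget_regs *)0x40001000UL) /* made-up base address */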
@bazza
Quote: "....even strong encryption is security through obscurity..."
Yup... but probably using multiple levels of obscurity:
(1) Anonymous identities -- so even if items #2, #3, #4 and #5 are broken, no one even knows who is doing the messaging
(2) Public end points (internet cafes) -- and not the actual address of a real person
(3) Diffie-Hellman -- so only public tokens are exchanged, and the actual encryption keys are random, used once, and then destroyed (see the toy sketch below)
(4) Multiple encryption passes -- two? three? more? AES? Salsa20? ChaCha20?
(5) Messaging using Signal or WhatsApp -- so even more layers of obscurity
So... the prudent user wanting anonymity and privacy uses MUCH, MUCH more than just "strong encryption"...
...but... yes... every additional layer is "obscure"!
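And since item (3) came up, here is a toy Diffie-Hellman exchange in C, just to show the shape of it: only the public tokens cross the wire, both ends derive the same secret, and the private values can then be thrown away. The numbers are deliberately tiny and insecure; real implementations use ~2048-bit primes or elliptic curves from a vetted library.

#include <stdint.h>
#include <stdio.h>

/* Toy modular exponentiation: (base^exp) mod m, small numbers only. */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    const uint64_t p = 23, g = 5;  /* public parameters (toy values)  */
    uint64_t a = 6, b = 15;        /* each side's private value       */

    uint64_t A = powmod(g, a, p);  /* only A and B cross the wire     */
    uint64_t B = powmod(g, b, p);

    /* Both sides derive the same ephemeral secret; a and b can now be wiped. */
    printf("alice derives %llu, bob derives %llu\n",
           (unsigned long long)powmod(B, a, p),
           (unsigned long long)powmod(A, b, p));
    return 0;
}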
Technically speaking, even strong encryption is security through obscurity
No, it is not. That's not what "security through obscurity" means as a term of art (which is what "technically speaking" means). In particular, your claim discards Kerckhoffs's principle, which is precisely the technical criterion that applies here.
Your interpretation of that sentence is strained, and probably neither what the author intended nor how it was interpreted by the audience.
I am not myself a fan of Apple products (or of nearly anyone else's, for that matter), but this is not a case where Apple claimed that obscurity was sufficient protection for this mechanism. There's a strong argument that this was left in for GPU debugging and not adequately vetted for risk (see earlier post from DS999 linking to the comment by Hector Martin); the next-most-likely explanation was that it was left in by accident. What we do know is that when informed of the vulnerability, Apple fixed it. They didn't claim it was meant to be "secure by obscurity" or that obscurity was an acceptable position; they treated it as a flaw.
There are vendors which produce security-enhanced smartphones and other devices, such as NitroKey. I have not used their phones, but I've used other NitroKey products and not found any flaws with them; the list of security features for the phones is impressive, and I've seen some tentative suggestions from other security experts that the design looks good. (You can even order one with the microphone, camera, and sensors physically removed, so it's only useful for messaging, and incapable of gathering many forms of data even if compromised.)
Of course the NitroPhones are expensive: they're coming from a small vendor, they represent a significant amount of work, they're expected to last a while (5 years of updates) rather than fattening the vendor with planned obsolescence, and NitroKey doesn't have a bunch of other revenue streams. So a cheaper alternative is to use a feature-phone, or buy a cheap Android device and flash it with open-source firmware you've vetted yourself. Of course the latter is not cheaper in terms of labor, but if you're so motivated...
And don't forget that iPhones using Lockdown Mode were protected from this exploit, and a lot of future exploits, by virtue of limiting some things the phone can do (which is not that different from what the NitroPhone is doing by limiting things its Android install can do).
Some people (especially those who frequent El Reg) like to complain about security issues, but when offered suggestions like enabling Lockdown Mode or using a NitroPhone instead of a Samsung Galaxy, they'll complain those options are too limiting!
True, but surely the best type of flaw to create would be one that could easily be dismissed in such a way. That's the problem with this kind of thing - you don't really know how it came to be. After all, you would expect some kind of change management process for the software whereby a reviewer would rightly ask "what the actual f*ck is this in your change, buddy?", meaning there'd need to be at least two failures.