Just Beautiful
> "We want to thank the researchers for their collaboration as this research advances our understanding of these types of threats," an Apple spokesperson told The Register.
And icing on the cake.
A side-channel vulnerability has been found in the architecture of Apple Silicon processors that gives malicious apps the ability to extract cryptographic keys from memory that should be off limits. Dubbed GoFetch by the team that discovered it, the issue stems from how processors equipped with data memory-dependent …
I'm with you on that one. These things are either implemented in error or on purpose, and after the SMB debacle with the NSA and the other exploits they use, you have to wonder: do they find them, or do they mandate them? Then we come to my number one rule on conspiracy theories: how many people would need to keep it secret? If it's a huge number, the theory is obviously false. Here it's not that many, and a lot of them would be acting under fear of the NSA, so it could be true.
How many people would need to keep it secret?
The common belief that widespread malicious activities require a large group of insiders is misplaced. In reality, such schemes often involve only a few individuals in key positions. These compromised individuals subtly guide projects in ways that introduce vulnerabilities as if by accident, rather than overtly instructing teams to create security loopholes.
Workers involved in these projects might recognise the signs of a security backdoor: if it looks like a duck and behaves like a duck, it is probably a duck. However, whistleblowing presents significant personal and professional risks. Given the scarcity of employment opportunities in that sector, the years of training required to enter the field, and the lack of any tangible benefit from reporting suspicious activity, employees may choose to remain silent. The fear of being branded as unstable, or of facing repercussions for dissent, discourages action.
Furthermore, individuals who do identify these issues are often persuaded that compromising on security is the only feasible approach due to financial constraints, such as the high cost of proper implementation. This rationale, combined with a general desire among employees for a peaceful work life and financial stability, deters them from challenging questionable practices.
Organisations tend to weed out those with a strong moral compass or a desire to rectify perceived wrongs, often during the recruitment process or through those funny personality assessments. Such individuals, who might act on their convictions regardless of the cost, are viewed as incompatible with the company culture and are either dismissed or never hired in the first place.
“iOS zero-click attack targeting Kaspersky iPhones bypassed hardware-based security protections to take over devices.”
“Constant-time programming forbids mixing data and memory access patterns, but Apple's implementation does just that”
I haven’t been able to come up with a (very simple) example of a DMP that wouldn’t violate rule #1. Unless the goof was to not stall everything as if a memory access had occurred?
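To spell out what that rule #1 forbids, here's a minimal Swift sketch of the classic constant-time idea (my own illustration, nothing to do with Apple's or the GoFetch authors' actual code):

```swift
// NOT constant-time: the early return means execution time depends on where
// the first mismatching byte is, so anyone who can time this call learns
// something about `secret`.
func leakyCompare(_ secret: [UInt8], _ guess: [UInt8]) -> Bool {
    guard secret.count == guess.count else { return false }
    for i in 0..<secret.count {
        if secret[i] != guess[i] { return false }   // branch on secret data
    }
    return true
}

// Constant-time: always touches every byte and never branches on secret data,
// so the running time is independent of the secret's value.
func constantTimeCompare(_ secret: [UInt8], _ guess: [UInt8]) -> Bool {
    guard secret.count == guess.count else { return false }
    var diff: UInt8 = 0
    for i in 0..<secret.count {
        diff |= secret[i] ^ guess[i]                // accumulate, never branch
    }
    return diff == 0
}
```

The GoFetch finding is that even code written like the second function can leak, because the DMP inspects the values being loaded and speculatively fetches anything that looks like a pointer. The programmer never accesses memory based on secret data, but the hardware effectively does it for them.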
If you are worth spying on, you can be sure your government will be spying on you regardless of privacy laws, constitutions or the tech you use. The rest of us have nothing to worry about unless we are genuinely up to something and very lazy about hiding it. In that case it will eventually be impossible for the old bill to miss, and we'll get our collars felt.
I think people forget that life is not like the telly. Both the spy agencies and the police have limited staff, limited budgets, limited time and fixed priorities. You have to tick an awful lot of boxes for GCHQ or the NSA to take an interest in you.
"If you are worth spying on, you can be sure your government will be spying on you regardless of privacy laws, constitutions or the tech you use. The rest of us have nothing to worry about"
Well, the first bit is fairly true, but the second bit isn't. They'll be spying on everyone, just making more of a targeted effort with those who they regard as particularly worth spying on.
But no doubt it's just a coincidence that many of Europe's main internet cables take a detour to just west of Harrogate...
The attack takes many operations under the same key to recover the key, so it is not applicable to things like the ephemeral keys used in TLS. The efficiency cores in the M-series chips are also unaffected, as they don't have a data memory-dependent prefetcher. Therefore one mitigation for the M1 and M2, where the prefetcher can't be disabled on the performance cores (the M3 reportedly exposes a DIT bit that turns it off), would be to make sure that crypto operations are only run on the efficiency cores.
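macOS doesn't let you pin a thread to a specific core, but low-QoS work is generally scheduled onto the efficiency cores, which is roughly how you'd express that mitigation in practice. A rough Swift sketch of the idea (my own code and naming; CryptoKit signing stands in for whatever long-term-key operation you care about, and the QoS hint is not a hard guarantee of core placement):

```swift
import Foundation
import CryptoKit

// Low QoS is a scheduling hint that macOS generally honours by placing the
// work on the efficiency (Icestorm) cores, which lack the data
// memory-dependent prefetcher. It is a hint, not a guarantee.
let efficiencyOnlyQueue = DispatchQueue(label: "crypto.efficiency-hint",
                                        qos: .background)

// Hypothetical wrapper: exercise the long-term signing key only on the
// low-QoS queue, rather than on the performance cores where the DMP runs.
func signAwayFromDMP(_ message: Data,
                     with key: P256.Signing.PrivateKey,
                     completion: @escaping (P256.Signing.ECDSASignature?) -> Void) {
    efficiencyOnlyQueue.async {
        completion(try? key.signature(for: message))
    }
}
```

The obvious trade-off is performance: everything that touches the long-term key now runs on the slower cores.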
It is an interesting academic piece of work; however, the attack only approaches anything like a practical exploit where one is repeatedly encrypting or signing with a long-term key. The most likely real-world scenario would be a server TLS key on a VPS, where other VPS instances running on the same machine would allow the malicious code to mount the attack.
Yeah, came to the same conclusion. It's clever, but I think you'd need a very atypical workflow to exploit it: an unloaded machine, the same key used repeatedly, and a very high-accuracy timer (the kind that was disabled in JavaScript a few years back for just this reason). Mind, there are some clever folk out there, so I'm prepared to be proved wrong.
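On the timer point: native code gets nanosecond-scale timers for free, which is precisely what the browsers took away from JavaScript. A toy Swift sketch of the sort of measurement involved (illustrative only, and a long way from the full GoFetch machinery):

```swift
import Darwin

// A buffer to poke at, and a sink so the compiler can't drop the load.
var buffer = [UInt8](repeating: 0, count: 64 * 1024)
var sink: UInt8 = 0

// Time a single array load in raw mach ticks (convert via mach_timebase_info
// for nanoseconds). At this resolution, a cache hit and a cache miss are
// easily distinguishable, which is the raw material of cache side channels.
func timeLoad(at index: Int) -> UInt64 {
    let start = mach_absolute_time()
    sink &+= buffer[index]              // the load being timed
    return mach_absolute_time() - start
}
```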
If they can apply this to something like an AES session key, rather than the asymmetric ciphers they've tested against, then that would change things significantly.
Because these companies shouldn't be allowed to take shortcuts, claiming their chips are a zillion percent faster than the competition, whilst ignoring security issues like this one.
"Our Chips are quick, but only if you don't value security" doesn't have the same marketing ring to it eh.