Apple's GoFetch silicon security fail was down to an obsession with speed

Apple is good at security. It's good at processors. Thus GoFetch, a major security flaw in its processor architecture, is a double whammy. What makes it worse is that GoFetch belongs to a class of vulnerability known about for years before the launch of Apple Silicon processors. How did Apple's chip designers miss it? A similar problem …

  1. Korev Silver badge
    Terminator

    Apple is good at security. It's good at processors. Thus GoFetch, a major security flaw in its processor architecture, is a double whammy.

    Looking at the number of vulnerabilities from Apple recently, I'm not sure how much I agree with the first statement!

  2. Pascal Monett Silver badge
    Windows

    Am I glad I'm not that close to the metal

    It is insane to realize just how complex things are when you're at the machine-code level. There are really intelligent people out there and it is humbling to realize that, as good as I may deem myself in my specialty, I don't hold a candle to the minds that can not only handle this level of programming, but also draw the proper conclusions and blow the whistle when things aren't going right.

    My respects to those who can do this level of code analysis.

    1. abend0c4 Silver badge

      Re: Am I glad I'm not that close to the metal

      I was reading a post recently in which someone was praising the ingenuity of the VMS scheduler, particularly in dealing with the very limited amount of real memory and consequent constant stream of page faults. And then I thought about the complexity of writing a scheduler for a modern system with a mixture of core types of different (and perhaps variable) performance and having to work out which thread got to run where taking all that into account as well as which CPU core had access to still-valid cache entries for it. And that's before thinking about VM optimisations.

      I think it's an area which has the potential to become ever-more complex as (a) there's a limit to what can be done reliably in hardware with the technology available at any point in time and (b) we have ever more experience about the trade-offs and optimisations that mostly work in the real world.

      However, I think we're already overdue a serious re-evaluation of the balance between performance and security. And that doesn't just apply to hardware - software engineers are just as guilty of benchmark fixation.

    2. Anonymous Coward
      Anonymous Coward

      Re: Am I glad I'm not that close to the metal

      I don't hold a candle to the minds that can not only handle this level of programming

      I acknowledge there are a few genii around, but I still suspect you don't give yourself enough credit. Most of us specialise in a certain field and become experts in our niche. Very, very few people are polymaths, and I'm guessing chip designers are mostly experts in tiny bits of chip design, with few or even nobody who really understands all of it.

      1. Claptrap314 Silver badge

        Re: Am I glad I'm not that close to the metal

        Oh my word. You have no idea. These HW guys are absolute geniuses--with Verilog. But you really, really don't want to look at anything in assembly that they touch. I owned a small project for a few years. My predecessor (HW guy) needed 6 months to do an update for the next generation of the same architecture. After I had (painstakingly--the code was small, but it was working a fiendish problem) rebuilt the code, I had an associate port it to a completely different architecture--to 32 bits from 64 & a completely different paging system. Took him two weeks.

        OTOH, these guys would rightfully just giggle at my ideas regarding the hw itself.

    3. martinusher Silver badge

      Re: Am I glad I'm not that close to the metal

      It's not that difficult, honest. It's just hardware engineering. One of the problems with hardware engineering is time -- it's up there front and center with the actual logic; you can't have one without considering the other and, naturally, they usually work against each other. (If you're not used to this idea then just bear in mind a handful of things -- logic pulses are really not square, they take time to get from place to place, and the things they feed don't just take a certain amount of time to operate, they take a range of times, for example.)

      Anyway, I think I'll lay this one at the door of Marketing as usual. Engineers spend a lot of time trying to reconcile what's possible with what would be nice (the typical marketing pitch is "if you make something that does everything then we can sell lots" -- it's rare that they truly understand tradeoffs). Human nature being what it is, though, you'll always find someone who will say "yes" to something best left alone, especially if they don't actually have to make it happen. Any engineering caveats will just get lost in the wash.

  3. Bebu
    Windows

    Ye cannae change the laws of physics

    The dilithium crystals can only take so much. :)

    A Scots accent should be mandatory for all (real) engineers (and Timelords :)

    1. b0llchit Silver badge
      Coat

      Re: Ye cannae change the laws of physics

      Are you saying that Apple incorporated a Blue Box in their processors?

      1. BartyFartsLast Silver badge

        Re: Ye cannae change the laws of physics

        It's nice to see them staying true to their roots and incorporating a blue box; Woz will be pleased.

  4. Bebu
    Windows

    balance between performance and security

    My guess is that if you can prove your system is actually secure, in some reasonable sense that has been rigorously defined, then the performance will follow.

    If you can prove the contents of memory, cache, buffers, registers etc belonging to one task can never be accessed by a second task without the first explicitly invoking a sharing mechanism, I suspect the resulting designs would be simpler and likely faster.

    I also wonder whether much that is currently done in hardware should, for better security, be done in software and vice versa?

    1. heyrick Silver badge

      Re: balance between performance and security

      "My guess is if you can prove your system is actually secure"

      Basic tenet of good science: You can make as many tests to prove something as you like, but it only takes one to disprove it.

      Maybe rather than proving something is secure, we ought to be failing to prove it is insecure. And I rather suspect it won't take much of that to see the flaws in the design show up and start to dance.

    2. Anonymous Coward
      Anonymous Coward

      Re: balance between performance and security

      The second task does not have access to another process' cache content. It can infer it by measuring the timing of its own instructions.
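
      To make that concrete, here's a minimal sketch of the idea (my own illustration, not taken from the GoFetch paper; the threshold and function names are invented, and it assumes x86 with GCC/Clang intrinsics) - flush a shared cache line, let the victim run, then time your own reload:

        #include <stdint.h>
        #include <x86intrin.h>   /* _mm_clflush, __rdtscp (GCC/Clang, x86 only) */

        /* Illustrative threshold: loads faster than this are probably cache
         * hits. A real attack calibrates this per machine. */
        #define CACHE_HIT_THRESHOLD 80

        /* Time one load of *addr in cycles (rough, for demonstration only). */
        static uint64_t time_load(const volatile uint8_t *addr)
        {
            unsigned int aux;
            uint64_t start = __rdtscp(&aux);
            (void)*addr;                       /* the load being timed */
            uint64_t end = __rdtscp(&aux);
            return end - start;
        }

        /* Flush+reload probe of one shared line: returns 1 if someone else
         * appears to have touched it since we flushed it. */
        static int probe_line(const volatile uint8_t *addr)
        {
            _mm_clflush((const void *)addr);   /* evict the line */
            /* ... victim code runs here ... */
            return time_load(addr) < CACHE_HIT_THRESHOLD;
        }

      No direct read of the other process's memory ever happens; the only "output" is how long our own load took.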

  5. Anonymous Coward
    Anonymous Coward

    HW security must be easier to access for SW devs

    I’m regularly surprised how much crypto is still being done in SW when hardware co-processors for most standard algorithms are present in most CPUs/SoCs. I do understand that cross-platform compatibility is an issue and that for many SW developers it’s not worth the struggle as it is today, but I don’t understand why the big SW companies and HW vendors can’t (or don’t want to) agree on a standardized access layer that would enable SW devs to use those HW resources efficiently and reliably.
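
    Even step zero - finding out whether the hardware is there at all - is vendor- and ISA-specific today. A tiny sketch of just the detection part (my own example, not anyone's official access layer; GCC/Clang on x86 assumed):

      #include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */
      #include <stdio.h>

      /* On x86, CPUID leaf 1 reports AES-NI support in ECX bit 25. On Arm
       * you would instead look at HWCAP_AES or the ID registers, and neither
       * tells you anything about a separate SoC crypto block - which is
       * exactly the fragmentation complained about above. */
      int main(void)
      {
          unsigned int eax, ebx, ecx, edx;
          if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
              return 1;                      /* CPUID leaf unavailable */
          puts((ecx & (1u << 25)) ? "AES-NI present" : "AES-NI absent");
          return 0;
      }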

    For symmetric crypto algorithms, HW co-processors typically avoid constant time issues by design. For RSA and ECC they’re at least easier to verify. For ML-KEM and ML-DSA one has to work around the inherent probabilistic run time of both algorithms (the problem is a lot worse for ML-DSA) but again that is an issue that needs to be solved independently of whether it’s a SW implementation or a HW co-processor. (To be very precise: the overall timing doesn’t have to be constant. The timing only needs to be independent of all secret data. But it’s a lot easier to call it constant time even if ML-KEM uses pseudo-random rejection sampling to derive the A matrix from the public key.)
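
    To illustrate what "independent of all secret data" means in code, here is a minimal sketch (my own, not from any particular library) contrasting the leaky pattern with the constant-time one - this is the discipline a well-designed HW engine gives you by construction:

      #include <stddef.h>
      #include <stdint.h>

      /* Leaky: returns at the first mismatch, so the running time reveals
       * how many leading bytes of the secret were guessed correctly. */
      static int leaky_compare(const uint8_t *a, const uint8_t *b, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              if (a[i] != b[i])
                  return 0;
          return 1;
      }

      /* Constant time: always touches every byte and has no secret-dependent
       * branches, so timing does not depend on the data being compared. */
      static int ct_compare(const uint8_t *a, const uint8_t *b, size_t n)
      {
          uint8_t diff = 0;
          for (size_t i = 0; i < n; i++)
              diff |= (uint8_t)(a[i] ^ b[i]);
          return diff == 0;
      }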

    1. An_Old_Dog Silver badge

      Re: HW security must be easier to access for SW devs

      Yes, HW crypto usually is faster than SW crypto, but it brings the risk that the HW implementation is buggy. It also has the downside of possibly (correctly) implementing a flawed crypto algorithm. How do you fix that on your CPU? Microcode defines the higher-level machine instructions, but cannot, as I understand it, alter the values in the CPU's on-chip ROM tables which are (depending on the instruction) used to help implement an instruction, e.g. sin(). This is relevant because some crypto algorithms depend on such tables.

  6. DS999 Silver badge

    Patent trolls

    That's why chipmakers don't want to provide low level details of their implementations. What Apple, Intel, AMD and Qualcomm really fear is that some technique they are using in their latest and greatest was patented 10 years ago by a university when some grad student was researching then-esoteric future approaches to making chips faster.

    If they wrote up a long paper saying exactly how their newest chip works, they'd give a roadmap to that university's lawyers, and to the armies of lawyers employed by patent troll organizations that buy up patents by the bushel from defunct startups and the like, telling them exactly who to sue.

    It would be a dream come true for them if Apple was found to be using something they have a patent on since the A14/M1, as Apple will have shipped about a billion infringing chips since then. Imagine the payday! The court won't care that Apple had no idea someone had patented the technology, or that it was not being used or planned to be used in any products, because independent invention and disuse do not mitigate damages at all in a patent infringement lawsuit.

    So there is ZERO incentive for anyone to talk too much about how their chip works, because odds are 100% that everyone's latest chip violates multiple patents owned by NPEs (non-practicing entities), which either derive significant funding from patent suits (like at least a half dozen universities whose names people would recognize from previous large patent suits against big-name tech companies) or exist entirely to file lawsuits over the patents they own.

    1. heyrick Silver badge

      Re: Patent trolls

      "multiple patents owned by NPEs"

      Simplest solution is to invalidate those. Make it like trademarks - use it or lose it. And, no, screwing people for cash is not "using it".

  7. Anonymous Coward
    Anonymous Coward

    "If this is a clash of two fundamental aspects of computing, how did it happen and why did nobody pot it until now?"

    Probably wouldn't be able to smoke it?

  8. aerogems

    Another consideration here is just how long it takes to design, test, and mass-fab chips. Apple only just released their M3 chips, so I'd be surprised if they weren't already locking down the design of the M6. You can't just throw together a chip design in a weekend, or even a few months, and then have sufficient quantities ready to go to launch a new product within a calendar year. So, in fairness to everyone, when the Spectre and Meltdown flaws were found, a lot of the designs for chips that are just coming out now were probably already too far along for the company to go back and make changes to correct the issue.

    Which is part of why I wish companies like Intel would slow down on the number of new chips they put out, not that they will. We've hit a point where, at least for PCs, there's a glut of processing power, so just adding more cores or upping the clock speed will do very little, and there's really not a whole lot of improvement from one "generation" to the next. Instead, they could take a year or two to really think through a design, test the bejeebus out of it, and then give people a more substantial upgrade. Sort of like the difference between the 386 and 486 or 486 and Pentium. As opposed to minor incremental improvements on a core architecture shared by 3-4 generations. Apple and Qualcomm could be focused largely on improving energy efficiency, though Qualcomm also has a ways to go to catch up to Apple performance-wise.

  9. Someone Else Silver badge

    Einstein was brilliant

    From the article:

    Let's start with Einstein, who said one of the rules of reality is that the further away something is, the longer it will take to get to you.

    Brilliant! No, really! The innate simplicity of this rivals that of Fudd's First Law of Opposition, which clearly states:

    If you push something hard enough, it will fall over.

    1. Brewster's Angle Grinder Silver badge

      Re: Einstein was brilliant

      That really wound me up because it's pretty much been obvious to every human---ever---that the further away something is, the longer it will take you to get there. And I'm sure a good many animals understand that too.

      Even if we are talking about light: firstly, these aren't optical chips. Secondly, I don't think anybody ever thought light travelled instantaneously from one place to another - only that, until Maxwell (not Einstein), it could be any speed rather than one constant speed. And Einstein then suggested this speed was the upper bound for all matter. (And then got really upset when Quantum Mechanics did its book-keeping in violation of this rule...)

      1. that one in the corner Silver badge

        Re: Einstein was brilliant

        > Even if we are talking about light, firstly, these aren't optical chips.

        Electrical signals propagate down conductors at (near as damn it) the speed of light. There are retarding effects, but for day-to-day use, assuming c when laying out your PCB is OK.
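
        As a back-of-envelope check of why that matters at GHz clock rates (my round numbers, purely illustrative - real on-chip and on-board signals are slower than c):

          #include <stdio.h>

          /* Back-of-envelope only: light manages roughly 30 cm per ns, so even
           * at full c a signal covers only ~10 cm in one 3 GHz clock cycle. */
          int main(void)
          {
              const double c_cm_per_ns = 30.0;   /* speed of light, approx.  */
              const double clock_ghz   = 3.0;    /* illustrative clock rate  */
              double cycle_ns = 1.0 / clock_ghz; /* ~0.33 ns per cycle       */
              printf("One %.1f GHz cycle = %.2f ns; light covers ~%.0f cm in that time.\n",
                     clock_ghz, cycle_ns, c_cm_per_ns * cycle_ns);
              return 0;
          }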

        > I don't think anybody ever thought light travelled instantaneously from one place to another

        Yes, people did - it wasn't until some time around the 17th century, IIRC, that anyone started to prove that light has a speed, let alone one we could measure.

        Considering that we used to believe that our vision worked by the eyes emitting rays that bounced back, don't underestimate how much of what we accept as "the obvious way things are" differs from what was once accepted (and how much our descendants will be saying "I don't think anybody ever thought...").

  10. andrewj

    pointer to a pointer even

    Freudian slip? "requests for memory locations that contain pointers to to memory locations"

    1. Anonymous Coward
      Anonymous Coward

      Re: pointer to a pointer even

      A pointer in a tu-tu is not strange, it just wants to be cached this way.
