Another data-leaking Spectre bug found, smashes Intel, Arm defenses

Intel this month published an advisory to address a novel Spectre v2 vulnerability in its processors that can be exploited by malware to steal data from memory that should otherwise be off limits. Arm said a number of its processor cores are also affected by this security flaw, and like Intel, its hardware defenses can't block …

  1. YetAnotherJoeBlow Bronze badge

    Actually...

    According to my friend (an EE for a large mfg.), speculative execution was designed solely as a performance win.

    So when a statement implies a discussion took place - "which engineers ended up prioritizing performance over security" -

    that discussion never happened.

    1. m4r35n357

      Re: Actually...

      IMO . . . I am not an expert, but have been in engineering/testing since the dawn of home computing, and this is my take.

      Speculative execution was created to support the (hugely naive) threading approach to multi-processing.

      Mutable shared state has caused countless problems in hardware & software over the decades, and will continue to do so unless it is stopped ;) It is just _too hard to do safely_, for mere mortals!

      If you want to use multi-cored hardware efficiently, use processes, not threads!

      What do more highly-trained software engineers think?
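      The lost-update hazard behind "mutable shared state is too hard" can be sketched deterministically. This is a pure-Python model, not a real scheduler - the interleaving is written out by hand and the function names are invented for illustration:

      ```python
      def lost_update():
          """Two 'threads' do a non-atomic read-modify-write on a
          shared counter; this interleaving silently loses an update."""
          shared = 0
          a = shared       # worker A: load
          b = shared       # worker B: load (before A stores)
          shared = a + 1   # worker A: store 1
          shared = b + 1   # worker B: store 1, clobbering A's update
          return shared    # 1, not the expected 2

      def message_passing():
          """Process-style isolation: each worker owns its own state
          and only results are combined, so no interleaving can clobber."""
          results = [1, 1]      # each isolated "process" returns its increment
          return sum(results)   # 2, as expected

      print(lost_update(), message_passing())  # 1 2
      ```

      The point of the second function is the commenter's suggestion: with processes (or any share-nothing design) the combining step is explicit, so there is no window in which two writers can race on the same location.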

      1. Brewster's Angle Grinder Silver badge

        Re: Actually...

        The shared state is in the CPU itself. In this particular example, it's the transition between kernel and user space that's being exploited and having one thread per process wouldn't stop that.

      2. Anonymous Coward
        Anonymous Coward

        Re: Actually...

        Speculative execution doesn't have anything inherently to do with threading or multi-processor design.

      3. Steve Channell

        Re: Actually...

        Speculative execution existed on mainframes when Intel was still primarily a memory chip manufacturer.

        Intel aggressively used it to recover the ground AMD had gained with its Opteron processors' unified memory model (Intel had stuck with its north/south memory bridge).

        It's fair to say that performance was favored over security, though conspiracy followers would suggest it was a deliberate design. Caches do not use address translation tables, so odd sequences of instructions can bypass virtual memory security. It only became a real risk when virtualization and cloud computing emerged (you need to inject machine code into the environment to use it).

    2. NoneSuch Silver badge
      Mushroom

      Re: Actually...

      I'd bet some Intel (and ARM / AMD) engineer is making a decent living as a part-time consultant for the NSA.

      There's no way these continuously exposed "flaws" are accidental when seen on different generations of chip designs and across multiple vendors.

      1. An_Old_Dog Bronze badge
        Meh

        Re: Actually...

        Were I a we-must-have-access-at-any-cost government type, bribing some engineers to "accidentally" include a vulnerability (hardware or software) would be a convenient method of achieving my aim.

        At the same time, there's a human tendency to run down a rathole, doing "more" of what worked before to chase performance, rather than creating the new structures and algorithms needed to get substantial speed boosts.

        (No matter how well you optimize a bubble sort, it's *still* a bubble sort and constrained to O(n^2) performance.)
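        As a concrete instance of that parenthetical: a plain bubble sort instrumented to count comparisons (a minimal sketch; the counter is added here purely for illustration). However tight the inner loop, the comparison count on a worst-case input is n(n-1)/2.

        ```python
        def bubble_sort(items):
            """Plain bubble sort; returns (sorted copy, comparison count)."""
            a = list(items)
            comparisons = 0
            for end in range(len(a) - 1, 0, -1):
                for i in range(end):
                    comparisons += 1
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
            return a, comparisons

        # Reverse-sorted input is the worst case: 100 elements cost
        # 100*99/2 = 4950 comparisons, i.e. O(n^2) growth.
        _, count = bubble_sort(list(range(100, 0, -1)))
        print(count)  # 4950
        ```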

    3. Robert Helpmann??
      Facepalm

      Re: Actually...

      ...when a statement implying a discussion - "which engineers ended up prioritizing performance over security:" that discussion never happened.

      Never happened because it never crossed their minds that it might be important? Color me shocked!

    4. diodesign (Written by Reg staff) Silver badge

      'ended up'

      Well yeah, that's why we used the words "ended up," as in: one way or another, they put performance before security.

      It could have been intentional, it could have been accidental. I've heard anecdotally in the Valley that some CPU designers had an inkling that speculative execution left a trace in the cache that could be used to leak data but thought it was either theoretical or not worth worrying about.

      C.
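      That "trace in the cache" idea can be illustrated with a toy simulation. This is pure Python, with no real speculation, cache, or timing involved - the "cache" is just a set of touched line numbers, and all names and values are invented for the sketch:

      ```python
      SECRET = 42  # a byte the attacker should never see architecturally

      def victim(cache, index, limit=16):
          """Model of a bounds-checked array read whose transiently
          executed body still leaves a secret-dependent cache footprint."""
          data = list(range(limit))
          if index < limit:
              cache.add(data[index])   # legitimate, in-bounds access
          else:
              # The bounds check eventually squashes this path, but in the
              # model the secret-indexed cache line has already been touched
              # and is not rolled back.
              cache.add(SECRET)

      def attacker():
          cache = set()                # "flush": start with a cold cache
          victim(cache, index=9999)    # request an out-of-bounds index
          # "reload": probe every possible line; the warm one encodes the secret
          return next(b for b in range(256) if b in cache)

      print(attacker())  # 42
      ```

      The real attacks replace the `set` with cache-line access timing (flush+reload or prime+probe), but the information flow is the same: microarchitectural state survives the rollback, and that is exactly the trace those designers reportedly dismissed as theoretical.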

      1. TReko

        Re: 'ended up'

        Yes, there is a paper from 1995 talking about the security problems with speculative execution.

        However, speed sells, security is an afterthought.

  2. Anonymous Coward
    Anonymous Coward

    Man

    why does this feel like COVID for CPUs? It just keeps going and going. Do we need a mask mandate for Intel?

  3. msobkow Silver badge

    Funny how rarely AMD gets pinned as being affected...

    1. hoola Silver badge

      Possibly Intel should have spent a bit more time having their researchers look at the Intel product line rather than trying to smear a competitor.

      I have no idea if what they are stating about AMD is valid or not, but this feels like "let's throw some mud around to distract from our issues"...

  4. 89724102172714182892114I7551670349743096734346773478647892349863592355648544996312855148587659264921 Bronze badge

    CPU design should be done by AI; it's now beyond commercially viable human capability

    1. doublelayer Silver badge

      Good plan. You make the AI and come back to me when you're done.

      1. 89724102172714182892114I7551670349743096734346773478647892349863592355648544996312855148587659264921 Bronze badge

        They wouldn't even need to be true AIs, just what is being passed off as "AIs" currently - at least three are needed, for the design, penetration, and consolidation stages, with many iterations. The result would be better than any group of humans is capable of... If you want details, I accept bitcoin.

    2. amanfromMars 1 Silver badge

      What’s to Worry About when All is/are Just Parts of Grand AIMasterPlans

      CPU design should be done by AI; it's now beyond commercially viable human capability ..... 89724102172714182892114I7551670349743096734346773478647892349863592355648544996312855148587659264921

      You might like to consider and accept and be suitably concerned to know that AI appreciates all of the human CPU design flaws and vulnerabilities to constantly relentlessly exploit and expand upon to satisfy ITs goals and provide future needs and feeds and seeds, 89724102172714182892114I7551670349743096734346773478647892349863592355648544996312855148587659264921

  5. HildyJ Silver badge
    Holmes

    Not Shocked

    Manufacturers and developers almost always focus on performance. In the current case, it should be remembered that Spectre has to get to the hardware first and that is enabled by software that prioritized performance or configurations that prioritized performance.

    Eventually new chips will 'fix' this. At least until Spectre v3.

    1. Bartholomew Bronze badge
      Coat

      Re: Not Shocked

      > developers almost always focus on performance

      Not Microsoft, they focus on making sure that Wirth's law is obeyed, or maybe Gates's law.

      Wirth's law: "software is getting slower more rapidly than hardware is becoming faster"

      Gates's law: "The speed of software halves every 18 months"

  6. martinusher Silver badge

    Surely a nonsense?

    This problem is really about operating system designers who are too lazy to consider the security implications of stuffing confidential information into general purpose memory. When it becomes impractical to secure this data they complain to the processor designers about how it's their fault and expect, as usual, a hardware solution to a software architecture problem. The solutions provided by Intel and AMD should be 'good enough' for most applications; they won't be perfect, but as someone else has already pointed out there are likely to be far easier ways of getting at that memory than laboriously inferring it from jump behaviour.

    If you want a truly secure system then it should be baked into the architecture. You may have to pay for it, though. There may even be a software workaround to the problem but it might be a bit tedious to code.

  7. DenTheMan

    I always thought it would be useful to be able to turn off the Big cores as a power saving method.

    Obviously here, if you had the option to turn it off for newly installed apps and app updates then the problem is mitigated.

    With Arm it is only a problem for big cores.
