Spectre flaws continue to haunt Intel and AMD as researchers find fresh attack method

Six years after the Spectre transient execution processor design flaws were disclosed, efforts to patch the problem continue to fall short. Johannes Wikner and Kaveh Razavi of Swiss university ETH Zurich on Friday published details about a cross-process Spectre attack that derandomizes Address Space Layout Randomization and …

  1. Spazturtle Silver badge

    I think it is time to give up on security for Performance cores and add some dedicated in-order, non-speculative Secure cores to CPUs for all tasks that need to be done securely.

    General-purpose cores had a good run, but I think their time is over, what with them already being split into Performance and Efficiency cores, in addition to having an increasing number of dedicated accelerators for different tasks. Apple even put dedicated JavaScript accelerators on their CPUs.

    We already have Performance cores, Efficiency cores, and a whole bunch of application-specific cores, so why not off-load secure computing too?

    1. abend0c4 Silver badge

      It's a reasonable point, but (at least so far) we're talking about the exfiltration of relatively small amounts of high-value data. There are already other ways to deal with these (TPM, HSM, etc). If it gets to the point where it's feasible to exfiltrate large amounts of data, you wouldn't want any of it in a less-secure environment. We've kind of got used to the idea of all-or-nothing security (e.g. "root"), but we're going to need multiple layers beyond those we already have, though some of the initial attempts at secure enclaves and the like have not been a promising start.

      1. martinusher Silver badge

        The data that's being looked for using these bugs isn't general data but things like encryption keys. The exfiltration mechanism is too slow to get bulk data.

        Ultimately the problem stems from the way we write code.

        1. ecofeco Silver badge

          Exactly.

          https://medium.com/@antweiss/learned-helplessness-in-software-engineering-648527b32e27

        2. Claptrap314 Silver badge

          Not this time. Again, to refresh: I spent a decade doing microprocessor validation, 1996-2006, the start of the speculative load era. There are actually quite a few classes of microprocessor issues that cannot be fixed in software. Spectre-class issues are a great example. All of the software workarounds were/are partial solutions. At the time, I spent some cycles thinking of ways to address the issue. I've not followed the hardware developments since, but getting this one right is **** hard.

          1. martinusher Silver badge

            >the start of the speculative load era

            Speculative loading and other performance-enhancing techniques like register renaming were used before this timeframe, just not in (Intel-type) microprocessors.

            It looks like the fundamental problem is that the processor should throw an out-of-bounds exception when a memory location is accessed, but it actually only does this if the location is on the active path. I suppose this design decision was made to make debugging easier (possible?), but it might be justified to throw the exception for any attempt, because it means the code is likely suspect.
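
            For concreteness, the classic Spectre v1 (bounds-check bypass) gadget looks roughly like the C sketch below. This is a minimal textbook illustration rather than code from the paper discussed in the article; the array names, the victim function, and the cache-line stride are the usual illustrative choices.

              #include <stddef.h>
              #include <stdint.h>

              uint8_t array1[16];
              size_t  array1_size = 16;
              uint8_t array2[256 * 4096];   /* probe array: one cache line per byte value */

              void victim_function(size_t x)
              {
                  /* The branch is first trained with in-bounds values of x. A later
                   * out-of-bounds x is then speculatively allowed past the check, and
                   * the byte it reads leaves a footprint in the cache via array2
                   * before the misprediction is rolled back. */
                  if (x < array1_size) {
                      uint8_t secret = array1[x];
                      volatile uint8_t sink = array2[secret * 4096];
                      (void)sink;
                  }
              }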

          2. Anonymous Coward

            When a backdoor has a backdoor, across multiple CPU architectures, it's really hard to get away from the 'NSA paid off chip engineers' rumors.

    2. An_Old_Dog Silver badge

      Security Cores & Spectre, et al.

      For the "security core" idea to work, you'd need a completely isolated memory subsystem, one inaccessible from the non-security cores. As experience with side-channel attacks has shown, there is a difference between "theoretically isolated" and "truly isolated".

    3. O'Reg Inalsin Silver badge

      Isolation is hard

      The problem is in datacenters where, on a shared machine, one client's process can look into another client's secrets because of what's stored in the cache or memory after a process swap. Even if the victim client is not using speculative instructions, that client's process will still be interrupted, and their secret data will be in the cache or memory, such that a bad actor's process can take advantage of speculation to look at cache/memory beyond what it should be able to see.
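
      The leftover cache state is typically read back with a flush+reload style timing probe, roughly as in the C sketch below (an illustrative fragment using x86 intrinsics; the helper names are hypothetical, not from any particular exploit):

        #include <stdint.h>
        #include <x86intrin.h>

        /* Evict one cache line so that a later fast reload proves the victim touched it. */
        void flush_line(volatile uint8_t *addr)
        {
            _mm_clflush((const void *)addr);
        }

        /* Time a single load; a low cycle count means the line was already cached. */
        uint64_t probe_latency(volatile uint8_t *addr)
        {
            unsigned aux;
            uint64_t start = __rdtscp(&aux);
            (void)*addr;                 /* the timed load */
            uint64_t end = __rdtscp(&aux);
            return end - start;
        }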

      One would need atomic processing without interruption, and a cleanup at the end - a kind of realtime processing. Integrating that into a non-realtime system would be hard - and what if the semaphore holder doesn't relinquish control? Deadlock!

      The way it's done now is for a single company to rent its own dedicated server, shared with nobody else, which is how the problem avoids being supercritical. Only smaller players and cheapskates, who cannot afford their own dedicated servers, are left exposed.

      1. Anonymous Coward

        Re: Isolation is hard

        Um. No. That's not how it works at all.

        Do you think the guest gets unrestricted access to the host's memory?

        1. bazza Silver badge

          Re: Isolation is hard

          Er, the whole point of Spectre and Meltdown is that, yes, guests can contrive to get access beyond their bounds. This whole article is about just such a process gaining access to arbitrary kernel memory, which is pretty terrifying.

        2. O'Reg Inalsin Silver badge

          Re: Isolation is hard

          Wikipedia, summarizing the Spectre paper:

          "the entire address space of the victim process (i.e. the contents of a running program) is shown to be readable by simply exploiting speculative execution of conditional branches in code generated by a stock compiler or the JavaScript machinery present in an existing browser."

    4. bazza Silver badge

      The proposition that there should be security cores, somehow carved off from the rest of the system, doesn't really work. As this article says, the technique used was happily able to access arbitrary kernel memory. Kernel memory is definitely something to protect (especially as some of it is what defines a process's run-time privileges). But you need tight coupling between applications and the kernel, because the kernel does so much for applications: all the I/O, all the memory allocation, all the services provided by an operating system. If the kernel is kept stuck on some sort of remote-ish or slow-ish CPU, application and system performance as a whole is going to be terrible.

      The separation of cores is the way to go; it's just that we need to wean ourselves off SMP and move to architectures more like, well, Transputers and Communicating Sequential Processes. Languages like Go implement this anyway (on top of SMP). That clear, physical separation of different cores and their memory from other cores and their memory is far easier to make "secure". Data is exchanged only by consent of the software running (and not simply by the CPU / caches / memory system because another process somewhere else has decided to try accessing that data). It's not perfect - what does one do about multiple processes on the same CPU? Transputers had an interesting hardware scheduler (not an OS scheduler), which is maybe the approach to take (because it'd be deterministic, real-time, and not influenced by software). It's likely an awful lot better than today's SMP. Unfortunately, it's a complete rewrite of all software and OSes, and starting again on CPU architectures.
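
      As a rough flavour of that "data crosses only by explicit consent" idea, here is a toy POSIX sketch in C (an analogy, not Transputer code): parent and child get separate address spaces after fork(), and the only bytes that move between them are the ones one side deliberately writes into the channel.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            int chan[2];                        /* the explicit communication channel */
            if (pipe(chan) != 0)
                return 1;

            if (fork() == 0) {                  /* child: its own address space */
                close(chan[0]);
                const char *msg = "result:42";  /* only this value is ever shared */
                write(chan[1], msg, strlen(msg) + 1);
                return 0;
            }

            close(chan[1]);                     /* parent: sees only what the child sent */
            char buf[32] = {0};
            read(chan[0], buf, sizeof buf - 1);
            printf("received: %s\n", buf);
            return 0;
        }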

  2. williamyf Bronze badge

    And this is the reason why Win11 drops older processors

    Most likely Intel and/or AMD will not develop fixes like this for processors older than 8th gen or Zen 2 (respectively).

    Ditto for drivers for other parts of their SoCs (nee processors).

    Basically, when Microsoft met with Intel and AMD and asked "For which processors can you guarantee driver and microcode updates for the 10-year life of a Win11 OS?", the answer was what we know now.

    All the 7th-gen and Zen+ exceptions were one-offs negotiated between the hardware makers and Intel/AMD.

    And I am guessing something similar will happen with Win12: the processor generations supported will be decided by the processor makers themselves, by denying support (microcode and driver updates) to older-generation processors.

    1. Anonymous Coward

      Re: And this is the reason why Win11 drops older processors

      Or, the PC manufacturers wanted to sell more hardware by forced obsolescence.

      1. M.V. Lipvig Silver badge

        Re: And this is the reason why Win11 drops older processors

        The simplest explanation is usually the correct one. Imagine, Microsoft being concerned about user security!

        1. An_Old_Dog Silver badge

          Occam's Razor is Largely Useless

          ... because you have no way of telling whether the specific issue you're looking at is one of the exceptions to the rule.

          For years, it was commonly accepted that human diseases were caused by "evil spirits", which is a simple answer. That there is a huge ecosystem composed of living things too tiny to be seen by the naked eye, and that *some* of those organisms (germs) cause disease, is a complicated answer, which is correct (yet not complete). Getting into how DNA and RNA work, and how viruses hijack RNA, is a magnitude more complex still, but it is correct.

          1. Mike 125

            Re: Occam's Razor is Largely Useless

            Occam is less about 'correctness', and more about choosing a 'working hypothesis', given many, with which to proceed.

            > For years, it was commonly accepted that human diseases were caused by "evil spirits", which is a simple answer.

            It may be simple, but it raises more questions than it answers, because now we have two things to explain: disease and evil spirits!

            For Occam, that's surely a very poor hypothesis. But yes, I accept that it was probably the only one available at the time.

            ...although "God did it" also works...

  3. amanfromMars 1 Silver badge

    Many Thanks for All of the Coarse Phishing/Raw Intel

    And whenever it is not an exploitable vulnerability for patching but a programmable feature for expanding? What then can one do?

    Would there be anything one couldn’t do?

  4. An_Old_Dog Silver badge
    Stop

    Stop Digging!

    Rule: When you find you're in a hole, stop digging.

    The x86 designers have gone down the hole of speculative execution. To date, it has yielded substantial performance gains. But, it has been discovered to be security-flawed. After multiple rounds of microcode patches and OS patches, researchers keep discovering new side-channel attacks based on residual artifacts of speculative execution.

    Intel, AMD, et al.: stop digging!

    Develop some other method of CPU acceleration.

  5. Anonymous Coward

    Once Upon A Time In Fort Meade.....

    ...and at NIST.....

    ...snoops persuaded (forced?) Cisco to adopt dodgy encryption and related backdoors...........

    Maybe the same procedure was standard practice with Intel? ...Juniper? ....IBM?

    I think we should be told!!

  6. M.V. Lipvig Silver badge
    Pirate

    Spectre

    Special Executive for Counter-intelligence, Terrorism, Revenge and Extortion. Not who you want developing your processor security.

  7. Bartholomew
    Meh

    dumb it down?

    Another article here is "Intel, AMD team with tech titans for x86 ISA overhaul", where they have solicited the help of Broadcom, Dell, Google, HPE, HP, Lenovo, Meta, Microsoft, Oracle, Red Hat, as well as individuals including Linux kernel dev Linus Torvalds and Epic's Tim Sweeney. This makes me wonder if maybe speculation is a dead end and needs to just go the way of the dodo. What happens if you remove all the out-of-order and speculative machinery and allocate the saved resources to increasing the core count by an order of magnitude, using really simple, secure-by-default dumb cores?

    I'm thinking back to Windows NT 3.51, which ran on x86, MIPS and Alpha, where the drivers, including graphics, ran in user space for security (and stability: you might see a message that the graphics driver had crashed and had been restarted, just like any other user process, instead of a blue screen of death - or a red screen of death; there are options to change the colour). The downside was lower performance, but the increase in performance for other operating systems was paid for with much-weakened security. My rule is: "If a program can crash an operating system, it can probably be used to own the box".

    We need to end the race for performance and go back to the fundamentals. In the battle between security and performance, security should win every single time; there should be zero compromise. And if the saved resources can add 10x the number of basic cores, the overall performance hit should be minimal. If we get to the stage where every application has one or more dedicated cores, it will probably make a lot of things much simpler. And simple is the friend of security; complexity has always been its arch-nemesis.

    1. An_Old_Dog Silver badge

      Many Cores [was: Re: dumb it down ?]

      There is an optimum balance between the number of cores, the speed of inter-core communication channels, the amount of private memory each core has, and the parallelizability of the program.

      We aren't yet good at figuring that out.

  8. gnasher729 Silver badge

    Different from Spectre

    With Spectre, someone figured out that speculative execution could open a side channel. On the other hand, those calling to abolish it seem unaware of what kind of performance gains it gives. It's not a few percent, it's a factor of two or three. Want your processor to crawl? Turn off speculative execution.

    The article states quite clearly that this is different: out of the many changes to defang Spectre, one wasn't implemented correctly. There is just a bug. I guess it's an area that is very hard to test, but still a bug. They had a correct design that would have fixed the problem, and it wasn't implemented correctly.

  9. O'Reg Inalsin Silver badge

    Certainly "N times faster" is a simple truth. But whether the bug fix was simply implemented incorrectly, or whether the designers are being squeezed between the enormously incongruous constraints of safety and performance, I would want more evidence.

    1. gnasher729 Silver badge

      That’s what the article says. The obvious way to go is to use speculative execution wherever it doesn’t reveal internal information to the outside (99% of the time) and don’t use it when it does. So with a lot of care you can remove the problems with very little performance impact.

      The main problem is that this is hard to test. You need a test case that is vulnerable to Spectre and not vulnerable to Spectre after a fix. And that apparently wasn't tested correctly.

      1. OhForF' Silver badge

        >You need a test case that is vulnerable to Spectre and not vulnerable to Spectre after a fix.<

        I'm pretty sure they had at least one test case that was vulnerable and could not be exploited after the fix. That doesn't show there are no scenarios, not covered by the test cases, that can still be exploited by Spectre.

        Creating test cases to cover all possible scenarios is hard, and proving that they cover everything potentially vulnerable is even more complicated.
