Spectre rises from the dead to bite Intel in the return stack buffer

Spectre, a class of vulnerabilities in the speculative execution mechanism employed in modern processor chips, is living up to its name by proving to be unkillable. Amid a series of mitigations proposed by Intel, Google and others, recent claims by Dartmouth computer scientists to have solved Spectre variant 1, and a proposed …

  1. artem

    I applaud the researchers, but there's one thing I cannot understand at all: Intel was informed of some of these vulnerabilities (Meltdown and Spectre v1/v2) at least a year ago, and to date they don't have a single CPU where these vulnerabilities are mitigated at the hardware level. And according to rumours and leaked documentation, they'll soon release yet another Skylake iteration.

    1. John Brown (no body) Silver badge

      Probably because it's more than a year from (re-)design to production sales.

    2. asdf

      Rome wasn't built in a day

      Product design cycles are a lot longer than a year, especially when you have to balance the fix against the tremendous performance increases the vulnerability gave your product. Not excusing Intel or chip makers in general, especially since they spent billions acquiring McAfee, which was supposed to improve their security (so they at least paid it lip service), but I'm not surprised new products don't have the fixes yet.

    3. imanidiot Silver badge

      It takes more than a year to make the new design, and then more than another year to test and verify the litho process to produce it. I wouldn't expect hardware mitigation for about 3 years, imho.

    4. Zippy's Sausage Factory

      Aside from the fuss about production cycles, my other question is why people are accepting delivery of broken parts in the first place. Maybe there should be legislation about putting warning stickers on things: "warning, this device has X known vulnerabilities". I suspect Intel would start taking it a bit more seriously then...

      1. Jon 37

        Re: Warning stickers

        Sadly, it would go the way of the California cancer warnings.

        Pretty much everything is "known to cause cancer" as far as the state of California is concerned, so pretty much everything and every building has to have a stupid warning sign. So the signs don't actually provide any useful information, and most people ignore them. The only people who like the signs are the lawyers, who make money suing anyone who doesn't have the signs up.

    5. wownwow

      Stupid masses don't care

      "I cannot understand at all: ..."

      Since the stupid masses keep buying products with the buggy chips inside, why would Intel urgently need to fix them?

  2. Mark 85 Silver badge

    Other than "state actors" probably developing exploits, are there any in the wild? Just wondering whether all this research by the boffins will lead to new attacks, or whether, given the conditions needed to exploit it, we really need to be concerned.

    1. a_yank_lurker Silver badge

      @Mark 85 - I have not heard of any in the wild. While the effects would be damaging, I suspect it is much harder to do in the wild than in the lab, as there are probably more processes running in the background in the real world.

      1. Roo

        I agree with your assessment a_yank_lurker, but the thing that bothers me about Spectre-class exploits is: how do you detect them reliably in the wild? Quite frankly, I fully expect at least a few script kiddies to be trying it out in AWS right now.

        From a risk management point of view nothing has changed: if you care about keeping stuff secret, you don't share your box with anyone else. :)

        1. Anonymous Coward
          Anonymous Coward

          Out in the wild.

          I read that, while no code was known, the six-month-or-so grace period had some providers turning off features and/or servers just in case. So some people took the risk very, very seriously.

    2. Anonymous Coward
      Anonymous Coward

      The exploit has been published

      Now that Intel is known to be as leaky as a sieve, everyone with a security interest is poking holes. Since Spectre gives access to all the keys of Intel's kingdom, those already exploiting it will be keeping their mouths shut.

  3. elvisimprsntr

    When Spectre and Meltdown first became public in January, I decided to wait 5-7 years before purchasing a new device (laptop, NAS, phone, tablet, router, etc.)

    I think I'll use the money saved to buy a new vehicle instead.

    1. onefang

      "I think I'll use the money saved to buy a new vehicle instead."

      There'll be lots of CPUs inside your new vehicle, some of them controlling security stuff. You ain't getting off that easily.

  4. amanfromMars 1 Silver badge

    What you are not being told is that which rules and reigns and reins you. And 'twas ever the case.

    Spectre, a class of vulnerabilities in the speculative execution mechanism revive doubts about whether current and past chip designs can ever be truly fixed and is revised to render the possibility and therefore assured probability that even all future chip designs will be unfixable.

    Good questions to ask here are .... What is the fix for/what is the concern trying to prevent? Virtual Machines and SMARTR Autonomous Systems acting and doing the Internetworking Things their way?

    You might like to consider that is both a battle and a war you can never win and all is already lost.

    Does that register with you, a_yank_lurker and Mark 85, and answer your questions/concerns?

    1. Claptrap314 Silver badge

      Re: What you are not being told is that which rules and reigns and reins you.

      I just love how Spectre makes amanfromMars1 parsable & even sensible.

      1. Roo

        Re: What you are not being told is that which rules and reigns and reins you.

        Fair play to amanfrommars - their posts have been uniquely quirky for decades now. The USENET posts were utterly impenetrable. :)

      2. onefang

        Re: What you are not being told is that which rules and reigns and reins you.

        "I just love how Spectre makes amanfromMars1 parsable & even sensible."

        I'd call that yet another variant of the Spectre bug. Does that mean amanfromMars1 is Intel powered?

  5. Schultz


    The hardware fix might require a rethinking of processor architectures. Take the performance hit of non-speculative execution in one or several cores for safety-relevant processes, and separate those from the performance-optimized number-crunching cores. It's a kind of stratification, offering dedicated hardware for the different jobs encountered in the wild.

    That should solve the problem for Intel: they can blame the programmers if the software doesn't use the hardware properly ;). Allow the programmers to set the flag for 'optimized' (i.e., speculative) execution at their own risk if they want the performance boost. Give it a catchy name to clarify that your hardware offers xy% performance boost for optificated software and watch the programmers scramble to release their new versions.

  6. Peter Gathercole Silver badge


    I've only had a short think about this, but it strikes me that the main problem here is that the contents of the Return Stack Buffer persist across context switches.

    If whatever OS kernel is being used invalidated the RSB when context switching between different processes/threads, this might affect performance, but it should prevent this type of leak between processes. Any performance impact would only be felt when a process is re-scheduled.

    Switching to kernel mode (a system call) would be a bit more problematic, as system calls happen frequently. You would not really want to invalidate the RSB on every syscall, but I would have thought there should be something the syscall interface could do to sanitize the RSB it inherits from the process. But the separation of kernel and process address spaces in the Meltdown fixes should really limit the damage.

    As I say, I've not read the full papers yet, so there may be something I haven't considered.
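    The idea above can be sketched with a toy software model (everything here is invented for illustration - real silicon is far more subtle than a Python deque):

```python
from collections import deque

class ToyRSB:
    """Toy model of a Return Stack Buffer: a small LIFO of predicted
    return addresses. CALL pushes a prediction, RET pops one.
    Illustration only, not how real hardware works."""

    def __init__(self, size=16):
        self.entries = deque(maxlen=size)

    def call(self, return_addr):
        self.entries.append(return_addr)   # CALL: push predicted return

    def ret_predict(self):
        # RET: speculate to the top entry; fall back to a safe
        # default (here None) when the buffer is empty.
        return self.entries.pop() if self.entries else None

    def invalidate(self):
        self.entries.clear()               # what the kernel would do

# Without invalidation: the next process's RET speculates to an
# address seeded by the previous process before the context switch.
rsb = ToyRSB()
rsb.call(0x4141)                 # process A seeds the buffer
assert rsb.ret_predict() == 0x4141

# With invalidation on context switch: only the safe default remains.
rsb.call(0x4141)
rsb.invalidate()
assert rsb.ret_predict() is None
```

    The point of the model is only that clearing the buffer trades a few mispredicted (but harmless) returns for the removal of attacker-seeded entries.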

    1. MacroRodent Silver badge

      Re: RSB

      > Switching to kernel mode (a system call) would be a bit more problematic, as system calls happen frequently.

      I don't think invalidating at every syscall would be such a big deal. System calls are already very slow compared to normal calls, and the kernel will internally make a lot of other function calls before returning, so I would estimate the performance hit to be very small, or non-existent.

  7. defiler


    Okay - so which muppet is going to suggest that Intel, AMD et al stop manufacturing CPUs until *this* hole is patched too? As I keep saying, this is going to take a long time to fix, and the world can't just stop. We'll just have to muddle along as best we can, uncertain as to who can break our security.

    Still, we all use mechanical locks, and they've been proven to be vulnerable time and time again...

    1. Peter2 Silver badge

      Re: STOP THE WORLD!!

      Still, we all use mechanical locks, and they've been proven to be vulnerable time and time again...

      In comparison to what?

      1. defiler

        Re: STOP THE WORLD!!

        In comparison to what?

        In comparison to flailing around wailing "Oh noes! The lock is imperfect - I can't buy a lock until all of the security flaws are fixed! The lock manufacturers are robbing everyone by continuing to make locks with these known vulnerabilities! They need to stop making any locks until they perfect them!" whilst somebody nicks the lawnmower from your unlocked shed...

        That's my bugbear with the gnashing of teeth going around here - the idealism in the face of the real world, and the notion that I (as someone who just needs to get a job done) am somehow a lackey to the corrupt semiconductor industry.

        Phew - glad to get that off my chest! Also, I didn't downvote you - it was actually a fair question.

    2. ForthIsNotDead

      Re: STOP THE WORLD!!

      Meanwhile, in other news, reports are coming in that the Motorola 68000 is unaffected.

      I think I've got some in a drawer here somewhere...!

      1. defiler

        Re: STOP THE WORLD!!

        the Motorola 68000 is unaffected

        So, embedded systems are fine. Shame the IoT boys and girls seem to invent new ways of running everything in the software instead! :)

  8. picturethis

    Is it safe?

    Is it safe (to connect to the internet)?

    Apparently, not yet...

    How some might feel when reading about all of these problems:

  9. Giovani Tapini

    Asking (possibly) dumb question

    Why is normal software even able to access the buffers in question, let alone write to them? I would have thought these would effectively be internal to the CPU, or accessible only to OS kernel-level software.

    As a "not an expert in this area" IT person, are there any legitimate use cases for this sort of capability?

    1. MJB7

      Re: Asking (possibly) dumb question

      There's no such thing as a stupid question (but failing to ask can be very stupid).

      In general, you can't gain access to these buffers directly - but you can do things (like call a function) which will modify the buffers in a predictable way. Furthermore, by carefully timing things(*) you can estimate the contents of the buffer (if it has one value, an operation like a return will be fast; if it has another, it will be slow).

      *: You might think that just adding a bit of timing jitter would be enough to fool this. Sadly, it turns out to be easy enough to repeat the exercise and average out the jitter. It also turns out that you can do accurate-enough timing from within JavaScript - you don't need access to the hardware cycle counter.
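      The averaging point is easy to demonstrate with a made-up simulation (all numbers are invented; this is statistics, not a real side channel): one noisy measurement tells you nothing, but the mean of thousands recovers a small secret-dependent latency difference.

```python
import random

def measure(secret_bit, jitter=50.0):
    """Simulated latency of one probe: a fast path (~100 units) when
    the secret bit is 0, slow (~110) when it is 1, plus Gaussian
    jitter large enough to swamp any single measurement."""
    base = 100.0 + 10.0 * secret_bit
    return base + random.gauss(0.0, jitter)

def guess_bit(secret_bit, samples=10000):
    # One sample is useless (the jitter is 5x the signal), but the
    # standard error of the mean shrinks with sqrt(samples).
    avg = sum(measure(secret_bit) for _ in range(samples)) / samples
    return 1 if avg > 105.0 else 0

random.seed(1)
assert guess_bit(0) == 0   # averaging recovers the secret bit
assert guess_bit(1) == 1
```

      With 10,000 samples the standard error of the mean is about 0.5 units against a 10-unit signal, which is why jitter alone is a weak defence.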

      1. Peter Gathercole Silver badge

        Re: Asking (possibly) dumb question

        In this case, I don't think that it is a timing issue (the timing-issue leaks were about deciding whether a data cache contained a value, or whether the system had to go and fetch it from main memory).

        In this case, what is being done is that a buffer that caches the return address from functions is being controlled, so that when returning from a function or sub-routine, it is possible to change the return address from the next instruction after the call (the normal result) to an arbitrary address controlled by the malicious code.

        Normally, the processor would go to the stack frame stored in main memory to fetch the return address (and changing this is the primary technique in a 'stack smashing' attack, as the stack is in the address space of the user process), but it looks like Intel and ARM have found a way of keeping this return address in a faster cache, so that the return can save some clock cycles. If you can arrange to change entries in this buffer cache to point to some malicious code, and get the processor to return to this code while still in kernel mode, in theory you can get access to memory that would normally be protected.

        The write-up and description of the Return Stack Buffer and this vulnerability is quite involved, and I'm not sure I fully understand it, because in ARM at least there appear to be two buffers, one of which is a conditional buffer that tracks predictive returns, which can be manipulated using 'branch not taken' types of speculative attack.

        As I said earlier, invalidating the RSB (assuming the processor has a suitable instruction) on a syscall or context switch should limit this type of attack to the current process, which is still not that good, but should prevent the leaking of information from other processes or the kernel.
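        To make the shape of the attack concrete, here is a deliberately over-simplified model (addresses, sizes and the buffer-survives-into-the-kernel premise are all invented for illustration) of how stale user-mode entries could supply the prediction for a kernel RET that has no matching kernel CALL:

```python
from collections import deque

RSB_SIZE = 16
GADGET  = 0xDEAD   # attacker-chosen speculation target (invented)
KERN_FN = 0x1000   # legitimate kernel return address (invented)

# Toy premise: the prediction buffer survives the switch into the kernel.
rsb = deque(maxlen=RSB_SIZE)

# User mode: the attacker makes calls whose return addresses it
# controls, filling every slot with the gadget address.
for _ in range(RSB_SIZE):
    rsb.append(GADGET)

# Kernel mode: a matched CALL/RET pair behaves normally...
rsb.append(KERN_FN)
assert rsb.pop() == KERN_FN

# ...but one more RET than the kernel's own CALLs pops a stale user
# entry, so the CPU briefly speculates at the gadget in kernel mode.
assert rsb.pop() == GADGET
```

        The model leaves out everything that makes the real attack hard (triggering the extra return, keeping the speculation window open, exfiltrating via the cache), but it shows why a buffer shared across privilege transitions is the core of the problem.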

    2. Anonymous Coward
      Anonymous Coward

      Re: Asking (possibly) dumb question

      No. Intel have designed themselves into a corner and now they need to ermm cut corners to remain competitive against AMD, that’s why they are affected so much more than AMD are. The corners they chose to cut this time were around checks before accessing data. There may be others.

  10. MJB7


    Bruce Schneier commented at the time that Spectre was first found that there are going to be a whole class of issues like this, and academics are going to be busy finding them for years.

    1. Anonymous Coward
      Anonymous Coward

      Re: Inevitable

      There have been ways of fuzzy data collection over the air for decades. Radio wave timing and communication timing "hacks" have been known for ages.

      The ones that get a lot of flak here are the "read a HDD via temps/power draw" or "guess your password from the heat of the CPU" stories.

      While those may use different methods - power use or heat rather than timings - they are all statistical analyses for when you don't have direct access.

      However, IIRC one of the Intel holes was direct access to one of the prediction branches? Hence the even larger flak sent their way.

  11. Cynic_999 Silver badge

    How serious is it ... really?

    Yes, I've read the descriptions and the theoretical attack vectors of these CPU vulnerabilities, and am left wondering whether anyone in practice would be able to write an exploit that actually achieved anything useful for the exploiter, except on a minuscule percentage of occasions.

  12. Claptrap314 Silver badge

    Estimated time to fix: 3 years

    Yes, my knowledge here is a bit out of date. However, standard processor release times during the decade surrounding 2000 were a bit more than 2 years. Less if the design did not represent any great architectural changes, more if it did. (STI Cell did, for instance; IBM G5 did not.) This represents a complete change in the focus of the architecture. I would not be surprised if the design work proper has not even started, as the big boys are probably petrified about the idea of missing something that their competition does not, and so are red-teaming design concepts like mad. (The hardware-only solution would be to systematically make execution times data-independent, but in my experience they lack the imagination & patience to proceed in that fashion.)

    I think it is also going to be interesting to see what responsibilities fall to the OS. One could, for example, add a "flush everything" command, and require that the OS invoke it on any context switch to untrusted code. This extreme would of course kill performance, but it would allow a single chip design for multi- and single-tenant systems. I mention it to demonstrate that security does not have to be entirely in hardware. For performance reasons, it cannot be. We already have things like separated data and instruction caches, and we require the OS to keep them valid.

    1. Roo

      Re: Estimated time to fix: 3 years

      I worry that the big boys are overthinking it a bit. All they have to do is put in a "run like a dog" mode that flushes the entire state every context switch and job's a good 'un. The lawyers can sleep at night on their overstuffed mattresses and the punters can carry on sacrificing security & reliability for speed - remember "Parity is for farmers"...

Biting the hand that feeds IT © 1998–2022