Don't be BlindSided: Watch speculative memory probing bypass kernel defenses, give malware root control

Boffins in America, the Netherlands, and Switzerland have devised a Spectre-style attack on modern processors that can defeat defenses that are supposed to stop malicious software from hijacking a computer's operating system. The end result is exploit code able to bypass a crucial protection mechanism and take over a device to …

  1. sev.monster

    Based and redpilled. Speculatively.

    CPU vendors need to stop taking shortcuts for the sake of speed, and for that we need true innovation again, to be able to push us up to the current expected performance margin without them. Hopefully the research here (and the inevitable crooks and SLAs that make use of it later) will finally push the envelope.

    1. Warm Braw Silver badge

      That's depressingly close to the Brexit argument: we don't like the current, imperfect situation, we're unable to create a perfect world for ourselves, but we're sure that if we just insist, someone else will do it for us.

      Sigh...

  2. Mike 137 Silver badge

    Stack cookies, DEP, ASLR etc., etc. are all merely elastoplast first aid for the symptoms of a fundamental flaw - a common stack for return addresses and parameters.

    Given the flawed von Neumann architecture we seem to be stuck with - which interprets words as code or data sequentially, dependent on the immediately preceding word (i.e. a chain of interpretation that breaks down irrecoverably at the first misinterpretation) - the only practical protection would be dual stacks: one for return addresses and the other for parameters. It's not a perfect fix, as we can't completely segregate code and data, but at least we could isolate them at the key decision point where the targets of jumps into code are determined.
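
    To make the idea concrete, here's a toy software rendering of the dual-stack scheme - purely illustrative, all names invented; a real implementation would of course live in silicon:

        /* Toy dual-stack scheme: return addresses on one stack, parameters
         * on another. Both are plain C arrays here, so this only shows the
         * principle - in the real scheme they'd be physically separate
         * memories that no data write could cross between. */
        #include <stdio.h>
        #include <string.h>

        enum { FN_MAIN, FN_GREET, FN_DONE };          /* "code addresses" */

        static int  ret_stack[64];   static int ret_top;   /* returns only */
        static char param_stack[64]; static int param_top; /* data only    */

        int main(void) {
            int pc = FN_MAIN;
            for (;;) {
                switch (pc) {
                case FN_MAIN:
                    ret_stack[ret_top++] = FN_DONE;  /* "call": push return */
                    /* a sloppy copy here can only trash the data stack,
                     * never a stored return address */
                    strcpy(&param_stack[param_top], "hello");
                    pc = FN_GREET;
                    break;
                case FN_GREET:
                    printf("%s\n", &param_stack[param_top]);
                    pc = ret_stack[--ret_top];  /* return via the safe stack */
                    break;
                case FN_DONE:
                    return 0;
                }
            }
        }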

    Ultimately, if we really want to solve this problem we should migrate to a Harvard architecture.

    1. Blazde

      Not that they're not good ideas, but both of the things you describe could also be called elastoplast first aid. The only truly fundamental problem is that software sometimes has bugs in it.

      People used to say 'All these defences like strncpy() aren't fundamental enough. If only the stack wasn't executable, all our problems would be solved'
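
      strncpy() itself is a good illustration of a defence with its own sharp edge - a minimal sketch:

          /* strncpy()'s well-known trap: when the source exactly fills the
           * buffer, no NUL terminator is written, so a naive caller is left
           * holding an unterminated string. */
          #include <stdio.h>
          #include <string.h>

          int main(void) {
              char buf[8];
              strncpy(buf, "exactly8", sizeof buf); /* 8 bytes, no '\0'     */
              buf[sizeof buf - 1] = '\0';           /* the step callers forget */
              printf("%s\n", buf);                  /* prints "exactly"     */
              return 0;
          }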

      1. vtcodger Silver badge

        Betcha a beer

        "Not that they're not good ideas but ..."

        Indeed. Would you bet a beer that a decade after a dual stack architecture was implemented, we wouldn't be treated to weekly announcements of newly discovered "stack-inversion" vulnerabilities where the CPU is somehow tricked into using the parameter stack for addresses and/or the address stack for data?

        (And yes the ideas do seem worth considering).

        1. Mike 137 Silver badge

          Re: Betcha a beer

          '"stack-inversion" vulnerabilities where the CPU is somehow tricked into using the parameter stack for addresses and/or the address stack for data'

          That of course depends on the quality of the stack management code. The best option would be stack segregation at silicon level.

          However, the problem can't happen in a traditional (true) Harvard architecture (even with a common stack), as code and data live in separate physical memories and the "wiring" of the buses doesn't allow it. For example, there's no "write" capability into a true Harvard code space, and instructions are fetched only from code memory over its own bus, so a data word can never be interpreted as an instruction word.

          Sadly, there are now several "modified Harvard" architectures that break these rules, devised to accommodate the recent appetite for self-modifying code - a concept that's intrinsically antithetical to security. These have invaded even the supposedly high-reliability microcontroller space, much to its detriment.

          1. Blazde

            Re: Betcha a beer

            A strict Harvard architecture is theoretical. In practice, data has to become code somewhere, or else you can't compile anything, load programs from disk, etc. (depending on what level of the memory hierarchy you extend the separation to).

            The article is about compromising the kernel, but let's assume the OS handles all the code/data access control and the kernel is somehow secure. Even then, user space programs need to be able to ask the kernel to convert data to code. Would you have a UAC-style pop-up every time that happens? As you mentioned, you'd also lose self-modifying/self-creating code, so there'd be heavy usability penalties for things like JIT-compiling, sandboxing and live debugging.
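
            On Linux, for instance, that data-to-code conversion is just a couple of syscalls away. A minimal sketch of the dance every JIT does (x86-64 only, error handling omitted):

                /* Turn data into code: write bytes into a writable page,
                 * then ask the kernel to flip it to executable. */
                #include <string.h>
                #include <sys/mman.h>

                int main(void) {
                    /* x86-64 machine code for: mov eax, 42; ret */
                    unsigned char payload[] = { 0xb8, 0x2a, 0, 0, 0, 0xc3 };

                    unsigned char *page = mmap(NULL, 4096,
                                               PROT_READ | PROT_WRITE,
                                               MAP_PRIVATE | MAP_ANONYMOUS,
                                               -1, 0);
                    memcpy(page, payload, sizeof payload);       /* data... */
                    mprotect(page, 4096, PROT_READ | PROT_EXEC); /* ...code */

                    int (*fn)(void) = (int (*)(void))page;
                    return fn();                                 /* 42 */
                }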

            Once you accept all those compromises, security is improved, sure, but it's not solved, because you can often do a lot of damage just by influencing a program's own control flow through corrupting its data. If the program contains code fragments intended to ask the OS to convert some data to code, you may even still be able to get it to run arbitrary code.
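
            The canonical toy example of that kind of data-only damage - no injected code, no touched return address, just an overflow steering the program's own logic (deliberately buggy, and modern hardening may well catch this exact toy case):

                /* An unchecked copy spills one byte past the name buffer
                 * into the adjacent flag, flipping the program's own
                 * control flow - pure data corruption. */
                #include <stdio.h>
                #include <string.h>

                struct session {
                    char name[8];
                    int  is_admin;   /* sits right after the name buffer */
                };

                int main(void) {
                    struct session s = { "", 0 };
                    /* "attacker-supplied" input: 9 bytes into an 8-byte field */
                    strcpy(s.name, "AAAAAAAA\x01");
                    if (s.is_admin)
                        puts("admin path taken - via data alone");
                    return 0;
                }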

    2. Brewster's Angle Grinder Silver badge

      There's nothing new...

      Forth had a computation stack (for parameters) and a return stack (for subroutine addresses). And the 6809 had two stack pointers (User and System).

  3. cb7 Bronze badge

    I've said it before, I'll say it again: the future is lower-latency RAM, stacked behind the CPU to minimise distance-related latency.

    Then there's no need for caches, prefetching, speculative execution, branch prediction etc. Just simple high speed in/out processing.

    Assuming we're not all using quantum computers by then.

    1. DS999
      Facepalm

      Are you saying you know how to build this lower latency RAM, or are you just assuming engineers will find a way to do something they've been trying and failing to do for almost 50 years now?

      1. sev.monster
        Pint

        I got this. Just some bubblegum and paperclips and -->

  4. Anonymous Coward
    Anonymous Coward

    Interesting academic research but...

    in the real world, where users and admins can be tricked into disclosing privileged login credentials with a simple phishing or social engineering attack, how much of this is relevant?

    If an attacker can already access a system with enough rights to install and run unauthorized software, there are probably enough other poor security configurations/vulnerabilities that are easier to exploit than memory/processor flaws.

    It's a bit like being concerned that the combination of your safe is easy to guess, when the front door is unlocked and all the valuables are left round the house.

    1. Cuddles Silver badge

      Re: Interesting academic research but...

      It's important because layers of security matter. If you accept that you can never remove the risk of your users leaving the door unlocked, then having a safe to keep your valuables in has an obvious benefit, and so does worrying about how good the safe's lock actually is. Sure, you'd be better off if the front door were never compromised, and maybe you could focus more of your resources on reducing the chances of someone getting through it. But you still need other layers of security that assume someone can get inside, and given that, it would be pretty silly not to wonder how secure those additional layers actually are and how they could be improved.

    2. Brewster's Angle Grinder Silver badge

      Re: Interesting academic research but...

      Do we want anyone who installs a bit of software in the cloud, or on a hosted web site, gaining root privileges?

  5. JCitizen Bronze badge
    Coat

    A comment ignorant of all things said so far...

    I'm ignorant of both low-level kernel science and the silicon infrastructure that runs it, but even I can maybe offer some crazy ideas on how to mitigate this further. Maybe they need a section of the CPU - or perhaps something placed at a tactical bus, monitoring all digital traffic - that runs a read-only AI program checking all logic results in the CPU, and maybe even the I/O ports, looking for activity that could change the state of root privileges.

    Or maybe something similar to the steady-state approach invented to prevent compromise of disk memory, only here it would be a steady-state architecture that monitored the CPU to keep it at one state of permissions and only that one state. The changes would be the painful part, because it would naturally have to be difficult to manually change administrative permissions at that level. Maybe the AI chip could keep a read-only snapshot of the true state and, when it changes, reset the CPU to the former state so that operations could continue normally.

    Bear in mind, I'm ignorant, but I like to brainstorm nonetheless. It would seem that such a scheme, when under attack, would show evidence not only to the AI chip but to anyone using the machine or its services. The effects would hopefully be nothing more than blips in operation, but noticeable enough that IT personnel could react to the attack. Perhaps a laser programming device plugged into the machine would be the only way to rewrite the kernel-level permissions in the AI, and from then on it would only be necessary to detect a change in that state. Maybe using the term "Advanced Intelligence" is overkill - it might not have to be that advanced at all.

    I remember when protecting the state of recorded memory on spinning magnetic discs was done with steady-state boards plugged into the motherboard, controlling snapshots of the former state of memory on the disc - if users noticed an attack or compromise at any time during operation, they could simply reboot and recover back to the former state, and no malware or subsequent changes to memory existed any more. Microsoft invented SteadyState for 2000 and XP using only code operations - I assume at the master boot record level, or perhaps via a partition created for such duty - with no need of hardware. But it wasn't perfect and could be compromised, so they abandoned it when Vista came out. There are still coders out there who claim they can do it right, but I've not tested any of their claims except one, made by Faronics (years ago), and it met the claims at that time. I'm not even sure these tools work with the new UEFI scheme and/or Windows 10 now. Libraries still used something like it, last I checked - Faronics' "Deep Freeze" did the same thing successfully for years.

    My coat is the one with the pocket protector in the breast pocket.
