Hate to ruin your day, but... Boffins cook up fresh Meltdown, Spectre CPU design flaw exploits

When details of the Meltdown and Spectre CPU security vulnerabilities emerged last month, the researchers involved hinted that further exploits may be developed beyond the early proof-of-concept examples. It didn't take long. In a research paper – "MeltdownPrime and SpectrePrime: Automatically-Synthesized Attacks Exploiting …

            1. MrBoring

              Re: Don't panic, "No exploit code has been released."

I agree with this guy. One shouldn't worry (give a shit) about things that are totally out of one's control to fix, and for 99.99% of IT professionals this issue is something none of us can fix.

That said, I give a shit because it means more patching, more bugs, more crashes and worse performance - more time wasted on infrastructure when we could be doing work that actually adds value.

            2. Anonymous Coward
              Anonymous Coward

              Re: Don't panic, "No exploit code has been released."

              @werdsmith and "The reality is that I can't do anything about this or any of the multitude of other security vulnerabilities that exist on my computer"

              Actually you can:

              Never buy intel again

              Take your hardware back to the vendor and demand a refund as it was faulty when delivered

              Use a secure operating system (good indicator is if they want to spy on your usage)

              Complain to your Government representative and ask them what they are going to do about it

Tell everyone you know about the problem and suggest the above advice to them

Basically, do not take this lying down. I could go on, but just accepting that you have been violated and trying to forget it will not stop it happening again.

Sorry I can't help with the vulnerabilities in your life without more detail, which would then be picked up by Google etc and get you excluded from medical treatment or at least make your insurance premiums go up.

          1. Steve Davies 3 Silver badge
            Facepalm

            Re: "And some of us couldn't give a shit."

            Still using the Abacus then I see?

            1. Anonymous Coward
              Anonymous Coward

              Re: "And some of us couldn't give a shit."

              But even an abacus is vulnerable to side-channel attack if someone with very acute hearing is sitting close to you.

        1. David Roberts
          Pint

          Re: Don't panic, "No exploit code has been released."

          Definite lack of shit donation over here as well.

          First there should be a realistic (!) proposal of how to fix it.

First stage of that is to produce a new/upgraded/different architecture which has security against these flaws built in. Followed by implementation, testing, running up the fabs, producing the support chips and motherboards, and starting commercial roll-out. Not gonna happen this year.

          Next stage is to recognise the enormous real estate of vulnerable hardware out there and that there is no economy in the world which can afford to ditch all that and start again even if some mad manufacturer was prepared to ramp up production to meet all new demand plus full replacement.

          In the mean time all demand for new/replacement computing capacity will have to be met from existing architectures, constantly increasing the real estate of vulnerable hardware.

          Not fair, cry the commentards, that means you are forced to buy dodgy hardware from the people who designed it to be dodgy.

So come up with an alternative which keeps feeding society's insatiable demand for cheap computing - the demand that, a long time ago, resulted in the dominance of Intel as a single supplier. You get what you pay for. Or don't. If there were, say, four different competing architectures all at similar volume, you could afford to drop one and ramp up the other three.

          Nobody has yet made a reasonable commercial case for curing Meltdown by ditching Intel in all new machines and letting ARM and AMD take up the slack. Because there just isn't the capacity. That is using existing factories with fully functional production lines.

So enjoy your ranting and the beating of your manly (or womanly) breast in outrage. [Um... nearly wandered into mind bleach territory there.] However, come up with a viable alternative or accept that we now have an ongoing cycle of software mitigation in the same way we have with all other software products. Coupled with a performance degradation in heavy use scenarios.

          Life sucks. Deal with it.

          Since I can't see any way that I can solve the problem or even influence the outcome, there isn't much point in wasting time worrying. It will either be fixed or it won't. Meanwhile I think my time would be more productively spent sampling a few brews.

      1. James 51
        Boffin

        Re: Don't panic, "No exploit code has been released."

        @Bazza Y2K could have been a big problem except for the years of effort that went into rewriting and testing a whole bunch of code all over the world.

        1. Doctor Syntax Silver badge

          Re: Don't panic, "No exploit code has been released."

          "Y2K could have been a big problem except for the years of effort that went into rewriting and testing a whole bunch of code all over the world."

          Perhaps one outcome of this would be a few man-years of effort in trimming bloat to mitigate the performance loss in mitigating meltdown.

          1. Anonymous Coward
            Anonymous Coward

            Re: Don't panic, "No exploit code has been released."

            "Perhaps one outcome of this would be a few man-years of effort in trimming bloat to mitigate the performance loss in mitigating meltdown."

Momentarily I read that as "trimming boats", being reminded of all the contractors who became boat owners as a result of Y2K, including the guy I knew who took 6 months off in the Caribbean.

  1. onefang

    So what we need is Optimus Prime to step up and sort out all these bad CPUs once and for all. nVidia's Optimus chips might be good for something after all.

  2. bazza Silver badge
    Mushroom

    Time for NUMA, Embrace your Inner CSP

    This particular round of hardware flaws has come about because the chip manufacturers have continued to support SMP whilst building architectures that are, effectively, NUMA. The SMP is synthesised on top of the underlying NUMA architecture. That's what all these cache coherency and memory access protocols are for.

This is basically a decades-long bodge to save us all having to rewrite OSes and a whole shed-load of software. It is the biggest hint that the entire computing community has been recklessly lazy by failing to change. If we want speed and security, it seems we will have to rewrite a lot of stuff so that it works on pure NUMA architectures.

    <smugmode>The vast majority of code I've ever written is either Actor Model or Communicating Sequential Processes, so I'm already there</smugmode>

    Seriously though, languages like Rust do CSP as part of their native language. An OS written in Rust using its CSPness wouldn't need SMP. Though the current compiler would need changing because of course it too currently assumes an underlying SMP hardware architecture... If the SMP bit of our lives can be ditched we'll have faster CPUs and no cache coherency based design flaws, instead of slowed down software running on top of bodged and rebodged CPU microcode.

    Besides, CSP is great once you get your head around it. It's far easier to write correct and very reliable multi-threaded software using CSP than using shared memory and mutexes. You can do a mathematical proof of correctness with a CSP system, whereas you cannot even exhaustively test a multithreaded, shared memory + mutexes system.

    Oh, and Inmos / Tony Hoare got it right, and everyone else has been lazy and wrong.

    1. missingegg

      Re: Time for NUMA, Embrace your Inner CSP

      I love Rust, but it doesn't do that much to enable performant software in NUMA systems. Rust protects against various kinds of memory misuse. Good NUMA software requires clever planning to get the data needed for an operation on the same node the code is running on. Naively written CSP code will flood whatever memory fabric the hardware uses, and prevent code from executing for lack of the data it needs.

      1. bazza Silver badge

        Re: Time for NUMA, Embrace your Inner CSP

You're missing the point. Naively written anything will overwhelm the underlying hardware. There's nothing magical about shared memory in an SMP-on-top-of-NUMA system that means that poorly written code won't run into the limits of the QPI / HyperTransport links between CPUs.

        Synthesising SMP on top of NUMA requires a lot of traffic to flow over these links to achieve cache coherency. Ditch the SMP, and you've also ditched the cache coherency traffic on these links, meaning that there's more link time available for other traffic (such as data transfers for a CSP system). And you've got rid of a whole class of hardware flaws revealed in the article, and you have a faster system. What's not to like?

        From what I hear from my local Rust enthusiast, Rust's control of memory ownership boils down to being the same as CSP. Certainly, Rust has the same concept of synchronous channels that CSP has.

        One of the good things about CSP is that it makes it abundantly clear that one has written rubbish code; there's a lot of channel reads and writes littering one's code.

        1. Anonymous Coward
          Anonymous Coward

          Re: Time for NUMA, Embrace your Inner CSP

Thanks for the CSP references, best of luck with them too, though the marketdroids and 'security researchers' won't thank you for them. Maybe there's a DevOps angle on them somewhere?

          A full version of the CSP book appears to have been legitimately freely downloadable for the last few years, see e.g. http://www.usingcsp.com/

          One thing I've not seen quite so explicitly mentioned (though your NUMA references come close) is the role of the memory consistency model, and to a lesser extent, what a process (nb process not processor) can and cannot be permitted to see, directly or indirectly.

          As far as I know, modern RISC processors have tended to be built around a memory model which does not require external memory to appear consistent across all processes at all times. So if some code wants to know that its view of memory is consistent with what every other process/processor sees, it has to take explicit action to make it happen. Especially where the processor is using complex multilevel cache to provide interesting performance. Hence things like conditional load/store sequences found on ARMs and Alphas and... As it happens, they're the kind of thing that NUMA people have been thinking about for years, and CSP people before them. It's a solvable problem, and non-Intel people had solved it.

          As far as I can see, x86 (even modern ones) dates back to an era where there was only one processor and memory consistency was something that software designers or even system designers could happily ignore, because all memory was always consistent. Except it wasn't really.

          ARM and AMD64 do not seem to assume this legacy behaviour and as such they miss out on some of the recent fun.

          I could be wrong though. Where can readers find out more about this particular historical topic and its current relevance?

          1. Claptrap314 Silver badge

            Re: Time for NUMA, Embrace your Inner CSP

Even 10-15 years ago, this was simply not true. Cache consistency was "eventual" in the modern parlance unless you explicitly called a sync - and there were various levels of sync available. New cache states (like "T") seemed to pop up every few years.

            Yes, in NUMA, every application is required to figure out how to do this, as opposed to having hardware do it.

            But NUMA systems are still going to be vulnerable to this sort of thing, absent proactive steps taken by the design team. You have to have some way to manage synchronization. Timing will always matter.

            1. bazza Silver badge

              Re: Time for NUMA, Embrace your Inner CSP

              Yes, in NUMA, every application is required to figure out how to do this, as opposed to having hardware do it.

              But NUMA systems are still going to be vulnerable to this sort of thing, absent proactive steps taken by the design team. You have to have some way to manage synchronization. Timing will always matter.

The nice thing about a NUMA system is that if the software gets it wrong, it can be fixed in software. Plus, faults in software are going to be fairly specific to that software. The problem with having hardware second-guess what software might do is that it does it the same way no matter what, and if it gets it wrong (as has been reported in this article) it's a machine fault that transcends software and cannot be easily fixed. Ooops!

              1. Claptrap314 Silver badge

                Re: Time for NUMA, Embrace your Inner CSP

Except that we have a very, very long & sad history of the same class of bug popping up over & over. Ever hear of buffer overflows? How is that even still a thing? And yet, we continue to see them.

                You are right in what you are saying. It's what you are not saying that bugs me.

                1. bazza Silver badge

                  Re: Time for NUMA, Embrace your Inner CSP

Except that we have a very, very long & sad history of the same class of bug popping up over & over. Ever hear of buffer overflows? How is that even still a thing? And yet, we continue to see them.

                  You are right in what you are saying. It's what you are not saying that bugs me.

                  Ah, I think I see what you mean (apologies if not). Yes, timing is an issue.

                  CSP is quite interesting because a read / write across a channel is synchronous, an execution rendezvous. The sending thread blocks until the receiving thread has received, so when the transfer completes each knows whereabouts in execution the other has got to. That's quite different to Actor Model; stuff gets buffered up in comms link buffers, and that opens up a whole range of possible timing bugs.

CSP, by being synchronous, largely gets rid of the scope for timing bugs, leaving one with the certainty that one has either written a pile of pooh (everything ends up deadlocked waiting for everything else), or the certainty that one hasn't got it wrong if it runs at all. There's no grey in between. I've had both experiences...

However, nothing electronic is instantaneous; even in a CSP hardware environment it takes a finite amount of time for signals to propagate. No two processes in CSP are perfectly synchronised, so there are some tiny holes in the armour. The software constructs may think they're synchronised ("the transfer has completed"), but actually they're not quite. Still, it is good enough for the needs of most real-time applications.

                  One advantage of this approach is that it doesn't let one trade latency for capacity. With actor model systems data can be off-loaded into the transport (where it gets buffered). Therefore a sender can carry on, relying on the transport to hold the data until the receiver takes it. That's great right up until someone notices the latency varying, and until the transport runs out of buffer space. With CSP, because everything is synchronously transferred, an insufficient amount of compute resource late on in one's processing chain shows up immediately at the very front; there is no hiding that lack of compute resource by temporarily stashing data in the transport. This is excellent in real time systems, because throughput and latency testing is conclusive, not simply "promising".

          2. bazza Silver badge

            Re: Time for NUMA, Embrace your Inner CSP

Thanks for the CSP references, best of luck with them too, though the marketdroids and 'security researchers' won't thank you for them. Maybe there's a DevOps angle on them somewhere?

No worries. I've no idea what security researchers would think, etc. Adopting CSP wholesale is pretty much a throw-everything-away-and-start-again thing, so if there is a DevOps angle it's a long way in the future!

            Personally speaking I think the software world missed a huge opportunity to "get this right" at the beginning of the 1990s when Inmos Transputers (and other things like them) looked like the only option for faster computers. Then Intel cracked the clock frequency problem (40MHz, 66MHz, 100MHz, topping out at 4GHz) and suddenly the world didn't need multi-processing. Single thread performance was enough.

            It's only in more recent times that multiple core CPUs have become necessary to "improve performance", but by then all our software (OSes, applications) had been written around SMP. Too late.

            As far as I know, modern RISC processors have tended to be built around a memory model which does not require external memory to appear consistent across all processes at all times. So if some code wants to know that its view of memory is consistent with what every other process/processor sees, it has to take explicit action to make it happen.

Indeed, that is what memory fences are: op codes that explicitly allow software to tell the hardware to "sort its coherency out before doing anything else". Rarely does one call these oneself; they're normally included in things like sem_post() and sem_wait(), which call them for you. The problem seems to be that the CPUs will have a go at doing the work anyway, so that when a fence is reached in the program flow it takes less time to complete. And this is what has been exploited.

            Where can readers find out more about this particular historical topic and its current relevance?

A lot of it is pre-internet, so there weren't vast repositories online to be preserved to the current day! The Meiko Computing Surface was a supercomputer based on Transputers - f**k-loads of them in a single machine. We used to have one of these at work - it had some very cool super-fast ray tracing demos (pretty good for 1990). I heard someone once used one of these to brute-force the analogue scrambling / encryption used by Sky TV back then, in real time.

            The biggest barrier to adoption faced by the Transputer was development tooling; the compiler was OK, but machine config was awkward and debugging was diabolically bad. Like, really bad. Ok, it was a very difficult problem for Inmos to solve back then, but even so it was pretty horrid.

I think this tainted the whole idea of multi-processing as a way forward. Debugging in Borland C was a complete breeze by comparison. If you wanted to get something to market fast, you didn't write it multi-threaded back in those days.

However, debugging a multi-threaded system is actually very easy with the right tooling, but there's simply not a lot of that around. A lot of modern debuggers are still rubbish at this. The best I've ever seen was the Solaris version of the VxWorks development tools from Wind River. These let you have a debugger session open per thread (which is really, truly nice), instead of one debugger handling all threads (which is always just plain awkward). Wind River tossed this away when they moved their tool chain over to Windows :-(

There was a French OS called (really scraping the memory barrel here) Coral; this was a distributed OS where different bits of it ran on different Motorola 68000 CPUs. I also recall seeing demos of QNX a loooong time ago where different bits of it were running on different computers on a network (IPC was used to join up parts of the OS, and those could just as easily be network connections).

The current relevance is that languages like Scala, Go and Rust all have CSP implementations in them. CSP can be done in modern languages on modern platforms using language fundamentals instead of an add-on library. In principle, one attraction of CSP is system scalability; your software architecture doesn't change if you take your threads and scatter them across a computer network instead of hosting them all on one computer. Links are just links. That's a very modern concept.

Unfortunately, AFAIK Scala's, Go's and Rust's CSP channels are all stuck in-process; they aren't abstract things that can be implemented as either a tcp socket, ipc pipe, or in-process exchange (corrections welcome from Go, Scala and Rust aficionados). I think Erlang's channels do cross networks. Erlang even includes an ASN.1 facility, which is also very ancient but super-useful for robust interfaces.

The closest we get to true scalability is ZeroMQ and NanoMsg; these allow you to very readily switch between joining threads up with ipc, tcp, in-process exchanges, or combinations of all of those. Redeployment across a network is pretty trivial, and they're blindingly fast (which is why I've not otherwise mentioned RabbitMQ; its broker is a bottleneck, so it doesn't scale quite as well).

I say closest - ZeroMQ and NanoMsg are Actor Model systems (asynchronous). This is fine, but it has some pitfalls that have to be carefully avoided, because they can be lurking, hidden, waiting to pounce years down the line. In contrast CSP (which has the same pitfalls) thrusts the consequences of one's miserable architectural mistakes right in one's face the very first time you run your system during development. Perfect - bug found, can be fixed.

There's even a process calculus (a specialised algebra) that one can use to analyse the theoretical behaviour of a CSP system. This occasionally gets rolled out by those wishing to have a good proof of their system design before they write it.

            Not bad for a 1970s computing science idea!

OpenMPI is also pretty good for super-computer applications, but it is more focused on maths problems than on just being a byte transport.

            1. Anonymous Coward
              Anonymous Coward

              Re: Time for NUMA, Embrace your Inner CSP

              Sir, sir, Mr Register sir, please can we have an artickle from Bazza please?

              Meantime, while I think about those words (including words like Ada that might seem to fit in the context but didn't appear):

              The distributed French OS might have been Chorus:

              https://en.wikipedia.org/wiki/ChorusOS

              Coral was a programming language whose origins were in the UK MoD in the 1960s:

              https://en.wikipedia.org/wiki/Coral_66

              QNX is still around, though you probably can't build a functioning browser, GUI, and IP stack to fit on a 1.44MB (megabyte? what?) floppy like you could in days gone by. Owned by Blackberry nowadays?

              http://toastytech.com/guis/qnxdemo.html

              https://www.youtube.com/watch?v=K_VlI6IBEJ0

              Is VxWorks/Wind River still owned by Intel?

              Is Simics (the system-level simulator) still owned by Wind River, and thus owned by Intel?

              https://en.wikipedia.org/wiki/Simics

              Do Intel "eat their own dog food"? Should they?

              It's been a while...

    2. Christian Berger

      Re: Time for NUMA, Embrace your Inner CSP

      It's fascinating how normal UNIX commands would be a good fit for CSP architectures.

      1. bazza Silver badge

        Re: Time for NUMA, Embrace your Inner CSP

        It's fascinating how normal UNIX commands would be a good fit for CSP architectures.

Very nearly! However, piping UNIX commands together is closer to Actor Model than CSP; the pipes really are asynchronous IPC pipes, not the synchronous channels that CSP has. Also there are some limits on how commands can be plumbed together; I don't think you can do anything circular.

The irony of IPC pipes is that what they provide is an asynchronous byte transport, but they're implemented in the kernel using memory shared between cores and semaphores. The ironic part is that that shared memory is faked; it's an SMP construct that is synthesised on top of a NUMA architecture. That in turn is knitted together by high-speed serial links (QPI, HyperTransport), and those links are asynchronous byte transports! Grrrrr!

        The one hope is that microkernel OSes come to predominate, with bits of the OS joined up using IPC pipes instead of shared memory. That opens up the opportunity for the hardware designers to think more seriously about dropping SMP. It may happen; even Linux, courtesy of Google's Project Treble, is beginning to head that way.

  3. Glad Im Done with IT

    Just kill ALL code in a browser.

    The way things are going in this arena recently the only sane thing you can do now is to disable anything that is capable of running code in a browser.

    I said goodbye to Java and Flash years ago, now time to say goodbye permanently to Javascript and never let web assembly anywhere near my browser when that tries to become flavour of the day.

    1. bazza Silver badge

      Re: Just kill ALL code in a browser.

      Though perfectly valid, that's a very "me" point of view. I wholeheartedly agree that running arbitrary code downloaded into some sort of browser based execution engine is asking for trouble.

      Other people have the problem that, by intent and design, they're letting other users choose what to run on their hardware. Services like AWS are exactly that. If one lets employees install software on their company laptop, it's the same problem. A computer that is locked down so that only code that the IT administrator knows about is running is very often a useless tool for the actual user.

      So really, the flaws need fixing (as well as ditching Javascript) otherwise computers become pretty useless tools for the rest of us.

      1. Lee D Silver badge

        Re: Just kill ALL code in a browser.

        No, I think the lesson is "don't try to get clever for the sake of performance".

Meltdown was caused by a lack of security checks on speculatively executed instructions. If you're going to speculatively execute, why would you handle the instruction any differently from when you normally execute it? That's a disaster waiting to happen, and people knew it.

        Spectre is the same except instructions are executed that give away information to the process about what happened. Again... this shouldn't be possible. To any process running, why is it ever made aware of the results of a speculative execution? By definition, that execution shouldn't be detectable or it's not "speculative", it's literally execution and rollback.

        The latter is more subtle, but both are the product of not executing speculatively at all... but actually just executing. And in the former case, executing without the same security boundaries.

They were also known about for quite a long time; people have been saying for years that this was ripe for attack along exactly these lines (I think people actually expected Spectre more than Meltdown, to be honest - a side-channel attack on such a process is much more easily predicted than an abject failure to apply memory protection).

        If you can't execute arbitrary code as an ordinary user without compromise, your system is flawed as a general purpose operating system running on a general purpose computer. That's not to say that you let your users do what they like - appropriate security controls should ensure they can only interfere and trash their own stuff, not anything else, however. But we still live in an age where thousands of users sharing a machine aren't contained, isolated, bottled, virtualised and removed from the hardware such that it doesn't matter what they do. This is something we learned in the early mainframe days.

Sure, it costs on performance to do things properly. But in the days of 2GHz processors being "the norm" despite much faster processors existing, performance isn't actually our top concern any more. But billions of machines in the hands of idiots who'll click anything is. Rather than say "Ah, well, they shouldn't have clicked that", it's time to make a processor, architecture and OS where it DOESN'T MATTER that they clicked something... it can't break out of its process, memory space, virtualised filesystem (with no user files by default until the user puts them in that program), etc.

        We're designing systems on the basis that every user is a computer expert who religiously verifies every code source they ever see, while putting a smartphone in everyone's pocket for £20.

        1. OldCrow
          Holmes

          Re: Just kill ALL code in a browser.

          An OS where the user can't install random crap from a phishing email approaches Windows 10S or iOS in lockdown. Usability suffers as a consequence.

          This is also wasteful. For protection from legal liability, it is sufficient that the machine can not be compromised without user error (i.e. user's assistance).

A likely path forward for Intel (et al.) is to add a dedicated core with an "untrusted software" mode. This mode would disable speculative execution. Further, the operating system would have to be aware of these "untrusted" processes / threads, so it could apply the threat mitigations to them alone (mitigations that are now performed for all threads, sapping performance).

          Of course, software such as browsers would have to support "untrusted execution" by declaring their javascript engine threads as such.

          Anyone willing to make bets?

        2. Anonymous Coward
          Anonymous Coward

          Re: Just kill ALL code in a browser.

          "No, I think the lesson is "don't try to get clever for the sake of performance"."

          The rest of your comment makes very clear that what you meant to say might have been slightly better as "... for the sake of performance on a multiuser multitasking system which aims to have any pretence of security."

          Seems like it might be time for a return to single user single tasking non networked systems. Either that, or take properly architected processors and properly architected OSes seriously, and admit that the apparent performance of x86 frequently comes with a functional penalty in real-world work.

          Plenty of people understood this already, but it wasn't a popular message.

        3. bombastic bob Silver badge
          Thumb Down

          Re: Just kill ALL code in a browser.

          No, I think the lesson is "don't try to get clever for the sake of performance".

          there is NO virtue in mediocrity. BOOOOoooo...!

          <sarcasm>

          yes, the clever ones - chain them up, drug them into complacency and mediocrity with Ritalin, and start when they're really small, because kids that are smarter than their teachers will turn into brilliant spark engineers, and we can't have THAT, now can we? No, we must have GROUP think and MEDIOCRITY, where NOBODY is better than anyone else, and "the masses" are carefully managed by "the elite" for their own good...

          </sarcasm>

          1. Anonymous Coward
            Anonymous Coward

            Re: Just kill ALL code in a browser.

            @bombastic bob

            While I appreciate your sentiment, aren't browsers the epitome of mediocrity?

    2. sabroni Silver badge
      Facepalm

      Re: Just kill ALL code in a browser.

      Yeah, that'll stop anyone exploiting cpu flaws.

      Get the torches!!! They're running JavaScript!!!! It looks like C but the scoping's different!!!!!!!!!!!!

      1. Lysenko

        Re: Just kill ALL code in a browser.

        Yeah, that'll stop anyone exploiting cpu flaws.

        Get the torches!!! They're running JavaScript!!!! It looks like C but the scoping's different!!!!!!!!!!!!

        JavaScript isn't the issue. Automatically downloading and executing code that arrives over the internet (*.vbs email attachments?) is the issue.

        The positive side is there are only a handful of JS engines in common use with V8 (Google open source) being the market leader. It should be possible to stamp out these exploits inside TurboFan (the V8 compiler) and the equivalents in other JS engines, which would automatically sanitise all the JS in circulation. Statically compiled code (C/C++ etc) is a much bigger problem in this regard.
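        For context on what "stamping out" looked like in practice: the first mitigation the browser vendors shipped was blunting the timers these attacks depend on; performance.now() was coarsened (and SharedArrayBuffer disabled), because telling a cache hit from a miss needs roughly nanosecond resolution. A toy Python illustration of the quantisation idea, not the engines' actual code:

        ```python
        def coarsen(t_ns: int, granularity_ns: int = 100_000) -> int:
            """Round a nanosecond timestamp down to a coarse grid, so the
            ~100 ns gap between a cache hit and a cache miss disappears
            inside a 100 microsecond bucket."""
            return (t_ns // granularity_ns) * granularity_ns

        t0 = 5_000_000
        hit, miss = t0 + 60, t0 + 300            # plausible hit/miss latencies
        print(coarsen(hit) == coarsen(miss))     # True: the side channel is gone
        ```

        The trade-off is that legitimate code (games, profilers) loses timer precision too, which is why the engines later reintroduced sharper timers only behind site isolation.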

        1. Anonymous Coward
          Anonymous Coward

          Re: Just kill ALL code in a browser.

          "Automatically downloading and executing code that arrives over the internet (*.vbs email attachments?) is the issue."

          Don't forget unauthenticated (shell)code can also arrive via an email, web page, whatever, courtesy of a "specially crafted JPEG" or whatever the trendiest buffer overflow CVE-of-the-week is today.

          It's the second decade of the 21st century, and Windows (and other) apps still have DOS-era coding errors. In Windows in particular, some of them provide trivially easy routes to running in kernel mode. History is there to be learned from, but at the moment the end users pay the price while the computer industry gets the profits, so there's no effective motivation for corporates to learn.

          1. Anonymous Coward
            Anonymous Coward

            Re: Just kill ALL code in a browser.

            And TrueType fonts, which execute on a Turing-complete VM with branches, loops, and variables.

            And WOFF webfonts, which can contain TrueType, OpenType, or PostScript fonts - the latter being a complete language.

            And PDF, which is basically PostScript with embedded TrueType fonts, JS scripts, JPEG and TIFF images - all fertile ground for exploits.

            We are screwed.

      2. Ken Hagan Gold badge

        Re: Just kill ALL code in a browser.

        "Yeah, that'll stop anyone exploiting cpu flaws."

        Umm, yeah, actually it might. You see, none of these flaws are remotely accessible. They all require the attacker to actually run code on the target computer. Traditionally, the way around this annoying limitation is to persuade everyone that it is safe to run arbitrary third-party (untrusted) code in a browser because the browser's sandbox will protect the machine. We now find that this ain't necessarily so. Solution: stop running untrusted code in your browser (or anywhere else).

  4. Joerg

    Now CPU manufacturers must find GPU security bugs as well...

    ... because there surely are plenty of design flaws in GPUs by Nvidia and AMD; security issues are surely not limited to CPUs alone.

    It is pretty obvious that all these researchers, suddenly focused on finding security design flaws that nobody gave a damn about for decades, are being paid to do so on purpose ... and it is nothing good for the whole industry. These design flaws should have remained unknown outside of IT manufacturers' design labs!

    1. Frank Gerlach #2

      Re: Now CPU manufacturers must find GPU security bugs as well...

      Yeah, slay the messenger !

    2. Anonymous Coward
      Anonymous Coward

      Re: Now CPU manufacturers must find GPU security bugs as well...

      @"These design flaws should have remained unknown outside of IT manufacturers design labs !"

      Only if your name is Intel. For everyone else who discovered they got ripped off when they bought "Intel inside", and now finds their expensive investment is a liability no matter what OS they put on it, knowing is very, very important.

      I can only understand your "if only they had not let the truth out" mentality if you own shares in or work for Intel. Otherwise, wanting to pretend the problem does not exist, or suggesting that because other electronic devices might also have problems Intel is somehow not to blame for screwing its customers, is just bizarre.

      People paid top money for a product that was faulty; they deserve compensation or replacement of the affected product. If the latter requires replacement of parts that are not compatible with the replacement, then those need to be replaced too. This is not an unreasonable expectation outside of the US, and yet Intel is just ignoring its responsibility and trying to confuse the issue in the hope of distracting its disgruntled customers. What is most annoying is the silence of our own government agencies, supposedly created to deal with exactly this sort of problem.

      So don't tell me they should have kept schtum. Intel took the piss and finally got caught, and now is the time to cough up the cash to its affected customers, not just in the US but everywhere.

      If the cost shuts Intel down, then it will be a lesson for every other manufacturer and will go some way towards restoring customer confidence.

  5. hjns62
    Stop

    Opportunity for anti-malware?

    The Meltdown and Spectre flaws seem to be the result of speed-vs-security compromises and business ambitions to win benchmarks.

    What this paper may show (I'm speculating...) is that perhaps there is no complete fix for speculative processing in CPU microcode. To be fast, CPUs guess what should happen next; building full security on top of that compromise may always be prone to flaws (...continuing the speculation...). Like a theorem... (bold speculation...)

    Then wouldn't this be the place for an anti-malware paradise at the OS level? Let's run the CPUs at full speed and let the anti-malware tools detect programs that poke around with side-channel attacks.

    Wouldn't the result be better for performance? What about an anti-malware function that disables the CPU patches and does the job for you? Covering web browsing, JS, whatever... and all OSes...

    1. Frank Gerlach #2

      Re: Opportunity for anti-malware?

      Up to now only Windows needs Anti-Malware programs ("virus scanners"). It would be nice to keep it this way.

    2. JCitizen Bronze badge
      Coffee/keyboard

      Re: Opportunity for anti-malware?

      I can remember Microsoft grudgingly allowing Symantec into the kernel space of one of their new operating systems under a new NT filing architecture. Nobody was happy about that, especially since nobody trusts Symantec to be any more secure with their code than Microsoft was; and perhaps even worse.

  6. Archtech Silver badge

    Consistent

    "The Meltdown and Spectre design flaws are a result of chip makers prioritizing speed over security".

    Which is just another typical instance of modern business prioritizing marketing over quality.

    "Never mind the quality; feel the width!"

    1. Arthur the cat Silver badge

      Re: Consistent

      "The Meltdown and Spectre design flaws are a result of chip makers prioritizing speed over security".

      Which is just another typical instance of modern business prioritizing marketing over quality.

      More a case of business listening to their customers. Everybody wants faster CPUs, almost nobody(*) screams "make my CPU slower and more secure".

      (*) Maybe a few security types did, but they're such a small minority they rarely get heard until it's too late.

      1. Claptrap314 Silver badge

        Re: Consistent

        The feds are not really a small case. There are special designs made for them. I was never cool enough to get close to those designs, however...

  7. jeffdyer

    I really can't see the fuss. Unless you know exactly what other process has just written to memory, and exactly what that data is, I don't see what use it could possibly be to anyone.

    Anyone tried debugging an application at the CPU level? There is so much going on, it would be practically impossible to know what that string of hex means, assuming you can read it in the first place.

    1. Frank Gerlach #2

      False

      For all practical purposes, data has redundancy. From that redundancy you can figure out which data record you are looking at. The attacking program would search for file headers, magic strings and the like to find the target data structure it is looking for.

      For an attack against cipher keys it would also be highly useful to simply have a full dump of the target process image. Then simply use every 16/32 octet sequence in the image as a key candidate. This reduces an "impossible" (key space search) problem to a "20 minute problem".
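      A sketch of that redundancy search in Python: scan a (fake) process dump for 16-byte windows whose byte statistics look like key material rather than text. The dump contents, key bytes, and heuristic thresholds are all made up for illustration; a real attack would also try each candidate against known ciphertext, as described above.

      ```python
      def key_candidates(dump: bytes, key_len: int = 16):
          """Yield (offset, window) pairs whose bytes look statistically like
          raw key material: mostly distinct values, with a fair share of
          bytes >= 0x80, unlike ASCII text or zero padding."""
          for off in range(len(dump) - key_len + 1):
              w = dump[off:off + key_len]
              if (len(set(w)) > key_len * 3 // 4
                      and sum(b >= 0x80 for b in w) >= key_len // 4):
                  yield off, w

      # A fake process dump: ASCII protocol text with a 16-byte key at offset 47.
      text = b"From: alice@example.com\r\nSubject: hello world\r\n"
      key = bytes([0x8F, 0x13, 0xA7, 0x55, 0xE2, 0x09, 0xC4, 0x71,
                   0x3B, 0xD8, 0x66, 0xF0, 0x2A, 0x9D, 0x48, 0xBE])
      dump = text + key + b"\r\nContent-Length: 0\r\n"

      hits = [off for off, _ in key_candidates(dump)]
      print(hits)  # candidate offsets cluster around 47, where the key sits
      ```

      The ASCII regions never fire (no high bytes), so only a handful of windows around the real key survive, which is what turns the "impossible" key-space search into a short brute-force run.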

    2. Claptrap314 Silver badge

      So you've not bothered to fix Heartbleed on your systems?

      There are techniques for figuring out where interesting data lies -- that's what all the address randomization stuff is supposed to help with.

      Yes, this is hard, or nigh-impossible without the right tools. Get those tools, however, and it becomes a crapshoot. And generally, you don't need snake eyes to win.

  8. jason 53

    This means any new PC on the shelf now, or over the next 6-18 months, at Best Buy say, should be recalled.

    Or at least sold with a warning that it is flawed.

    1. Frank Gerlach #2

      Well

      Your CPU works quite nicely as long as you do not run untrusted code from www.shadyAdFlinger.com and the like. It is as fast as a supercomputer was in the early 1990s, but you pay only $1000 for it.

      1. Claptrap314 Silver badge

        Re: Well

        Install uMatrix, and then explain to me how easy it is to do this. We are in serious trouble.

      2. Anonymous Coward
        Anonymous Coward

        Re: Well

        @ "Your CPU works quite nicely as long as you do not run untrusted code from www.shadyAdFlinger.com"

        Really? So all the websites with Bitcoin-mining JS are doing it deliberately?

        Since there is no real way to avoid all malware (it only gets detected once it is recognised) and the OS assumes the CPU is secure, then unless your machine is completely isolated you are just as vulnerable.

        That it may have capabilities similar to an obsolete supercomputer and costs "only" $1000 is about as meaningful as saying that before cars existed people had to walk, while ignoring that the change created new perils the old pedestrians never experienced. Things have changed, and comparing speed/cost alone ignores all the good and bad consequences of that change, consequences that make the speed/cost ratio irrelevant.

  9. Frank Gerlach #2

    Fix A: Transputers

    Give each program its own transputer to do its respective work. We have more than enough silicon to do that. Connect transputers via fast transmission-line-type message links (not just TTL lines as in the original transputers).

    Then $EvilJavaScript cannot snoop on your Excel sheets.
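    A software analogue of the idea, sketched with Python's multiprocessing (the real transputer was of course separate silicon): each "program" gets its own address space and exchanges data only over an explicit message link, so there is simply no shared memory for $EvilJavaScript to read.

    ```python
    from multiprocessing import Pipe, Process

    def spreadsheet(link):
        """An isolated 'program': its data lives only in this process's
        address space; the outside world sees only explicit messages."""
        cells = [1, 2, 3]                      # the 'Excel sheet'
        for msg in iter(link.recv, "quit"):
            if msg == "sum":
                link.send(sum(cells))          # export only what we choose
        link.close()

    if __name__ == "__main__":
        ours, theirs = Pipe()
        p = Process(target=spreadsheet, args=(theirs,))
        p.start()
        ours.send("sum")
        print(ours.recv())   # 6
        ours.send("quit")
        p.join()
    ```

    The isolation here is only OS-level, not physical, so it is merely a sketch of the programming model; on shared silicon the side channels discussed above still apply.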
