Hate to ruin your day, but... Boffins cook up fresh Meltdown, Spectre CPU design flaw exploits

When details of the Meltdown and Spectre CPU security vulnerabilities emerged last month, the researchers involved hinted that further exploits may be developed beyond the early proof-of-concept examples. It didn't take long. In a research paper – "MeltdownPrime and SpectrePrime: Automatically-Synthesized Attacks Exploiting …

  1. Androgynous Cupboard Silver badge

    Oh that's just great

    The vast majority of coders struggle to get application software working in multiple threads at all, the rest of us make do with abandoning any sort of codepath or timing predictability, and yet these clowns manage to make it so predictable they can use it for a timing attack? Enough. I cannot take any more clever. I have reached my peak, I am tapped out. No more, I say.

    1. Anonymous Coward
      Anonymous Coward

      Re: Oh that's just great

      Just goes to show what a focused, smart mind can accomplish when tasked with a singular goal.

      But honestly, I can't help but feel a bit sad that the same effort wouldn't be better expended on something of more value to society. And to be clear, I am NOT knocking the researchers. I just lament the number of hours and brain cells that are going to be spent for YEARS on these two issues, with the result being little more than what we already thought we had.

      1. Claptrap314 Silver badge

        Re: Oh that's just great

        Less. It's going to take a very long time to recover the performance loss.

        1. TheOldFellow

          Re: Oh that's just great

          Oh, it's easy. Just buy 3 Intel Computers where you used to use 1. Everyone happy, especially the Intel stockholders.

        2. Anonymous Coward
          Anonymous Coward

          Re: Oh that's just great

          Speaking of performance loss, hasn't this just broken Moore's Law?

      2. Anonymous Coward
        Anonymous Coward

        Re: Oh that's just great

        And so it begins .... the next 'Cat & Mouse' Game !!!

        Intel et al. will eventually develop new CPU designs that are supposed to eliminate these sorts of 'Side Channel' problems.

        Our 'usual suspects' will once again prove that anything that is built can be 'mis-used' in 'interesting' ways and 'new' vulnerabilities will be found !!!

        Meanwhile, what do all the millions of people do with the 'old' CPUs that are flawed and, from the look of it, will remain flawed?

        I cannot afford to replace all the PCs / laptops / tablets / phones etc., and at the moment, even if I could, what do I replace them with ???!!!

        Software-based fixes can be undone; I would not be surprised if someone is trying to engineer 'fixes' to the fixes that have been released !!! [Other than Intel :) :) ]

        (Worth the effort as there are so many machines out there to 'run amok' on !!! )

        (I know that the fixes are microcode updates BUT I have often wondered why someone has not written code that performs microcode changes from Windows itself. It is the ultimate 'Hack' and if done it would be invisible to most users.)

        Careful saying it cannot be done :) :)

        There are lots of 'Tools' (for want of another name) available that seem to perform things that were once called 'impossible from Windows' !!!

        e.g. tools that can change the BIOS from Windows.

        Even if it requires a complete reboot: Windows crashes/reboots so often, and Win10 requests reboots for updates, that such changes could simply be performed then.

        1. TonyJ

          Re: Oh that's just great

          "...Our 'usual suspects' will once again prove that anything that is built can be 'mis-used' in 'interesting' ways and 'new' vulnerabilities will be found !!!.."

          As E.E. 'Doc' Smith put it in the Lensman books:

          "Anything that science can devise, science can analyse and synthesise"

      3. Anonymous Coward
        Anonymous Coward

        Re: Oh that's just great

        @AC and "But honestly, I can't help but feel a bit sad that the same effort wouldn't be better expended on something of more value to society"

        There can be no greater benefit to society than banishing ignorance.

        Pooh-poohing their accomplishments in revealing the truth is the same as standing in the corner with your fingers in your ears squealing "I am not listening" over and again like a child unwilling to face reality.

        The researchers might have brought you knowledge you didn't want, but then again they didn't make the problem, they just told you it existed.

        1. Anonymous Coward
          Anonymous Coward

          Re: Oh that's just great

          Having read your post again, it's clear I responded to just the first sentence. Sorry.

          1. defiler

            Re: Oh that's just great

            Sorry.

            This is the internet. "Sorry" has no place here...

            I swear I saw YouTube comments the other day that managed to retreat from the usual name calling to an acceptance of each other's positions and an apology. It's the End of Days, I tell you.

        2. Charles 9

          Re: Oh that's just great

          "There can be no greater benefit to society than banishing ignorance."

          Ever heard the phrase, "There are some things Man was not meant to know"?

          1. onefang

            Re: Oh that's just great

            'Ever heard the phrase, "There are some things Man was not meant to know"?'

            What it's like to be pregnant is a thing Man was not meant to know, but I'm sure science will sort that out soon enough.

      4. Anonymous Coward
        Anonymous Coward

        "effort wouldn't be better expended on something of more value to society."

        You mean that investigating what is dangerous to society itself is not valuable? If such issues went unnoticed, they could one day cause very great damage - computers are no longer big machines running in isolated complexes, or funny things nerds play with in their bedrooms.

        Almost anything important is today run by using computers - and it will only increase. Ensuring computers and their software are safe enough is no different from ensuring cars, planes, appliances, houses, drugs, food, etc. are safe.

        1. Anonymous Coward
          Anonymous Coward

          Re: "effort wouldn't be better expended on something of more value to society."

          The syndrome here is very similar to that with GM food (and other substances).

          Some of the engineers object that the full consequences are unknowable, and the badness of those consequences seems to have no limit.

          The bosses reply that they have to think about next quarter's profits and stock price, so do it anyway or be fired.

          In the conflict between a potential serious risk to the whole human race and someone's personal wealth in the short term, always back the latter.

          1. Muscleguy

            Re: "effort wouldn't be better expended on something of more value to society."

            Hmm, except with GM foods, when people object on the basis that transgenes might act like viruses, they have lost contact with verifiable reality. I saw, and still see, an awful lot of stuff along those lines.

            I have made in my time a small mountain of transgenic mice, and the world has failed to dissolve into grey goo. The transgenes didn't jump across the mouse room to other mice, wild-type littermates were entirely possible, and if you put transgene-containing embryos into a wild-type recipient mother mouse, she does not become transgenic. It is easy to tell.

            The level of knowledge and understanding of transgenesis and molecular genetics is inversely proportional to the lurid and virulent objections to them.

            Observing the GM debates made me fear for the future of humanity.

            That is not to absolve Monsanto from blame. The first GM products were designed to sell more weedkiller, hardly the best advert for the technology. This queered the pitch for transgenics from then on. Future generations will look back and wonder at the luddite stupidity.

            The US has been eating GM food for several decades now and the bodies continue to abjectly fail to pile up and the goo is neither present nor grey. Ebola comes out of the forests with bushmeat and is entirely natural, the vaccine against it may well have relied on recombinant and transgenic techniques.

            Any ideas of natural = Good and technological = Bad are easily knocked down with such examples.

            BTW, lateral gene transfer - which is what we call it when Nature swaps genes around without so much as a by-your-leave - is so common you can literally fall over it. I did, in the lab one day: I found a gene from chickens which was only otherwise present in humans and malaria mosquitoes. Not mice, not chimps, not fruit flies, not quail.

            The poster children for it, though, are the sea squirts, the tunicates. The leathery tunic which they wrap themselves in is made of cellulose, plant fibre. A chordate animal is making cellulose. Genome sequencing revealed they pinched the entire multi-gene cellulose synthesis pathway from a seaweed. They are genetically modified in spades and have been infesting the seas unchecked for millions of years without Nature falling into grey goo.

            Calm down.

            1. nagyeger

              Re: "effort wouldn't be better expended on something of more value to society."

              Well argued and informative. Have an up-vote.

              Now, we need similarly sane and coherent* arguments against HTML in email. Any takers?

              * Not to be confused with the light-sources on top of sharks.

            2. Mark Eaton-Park

              Re: "effort wouldn't be better expended on something of more value to society."

              @Muscleguy and GM is totally a good thing

              Genetic modification of other organisms for the benefit of mankind is not in itself a bad thing; however, being allowed to patent the modification, and to drop the "designs" into everyone's environment while saying there are no problems when the inherent complexity makes any safety assessment impossible, is.

              Like the scientist who made a press statement that British beef was "safe" (without qualification), scientists are not free from corruption, and everyone knows that corporate bodies will do anything for even a tiny increase in profits.

              Add the two and you get a series of GM nightmares that the corporates are trying to pretend could never happen; they do not know or care, they just want the money, and to hell with the rest of the world.

              Also, given that gene mutation is how we get variation, how can anyone know that a sequence has not occurred randomly before, and hence how can they ascribe novelty?

              "The US has been eating GM food for several decades now and the bodies continue to abjectly fail to pile up and the goo is neither present nor grey" The US is also not known for being the most healthy country with the most nutritious food and so your evidence that GM is "Safe" is unconvincing especially after such a short period and against the media and legal bias bought by the GM companies.

              GM is potentially too dangerous to be allowed to be in the hands of bodies who put profit above all else, until we have the proven science to predict and avoid in advance the possible disasters this tech allows.

              If you were a real scientist then you would know that absolutely nothing is safe without qualification, so I am presuming that you are just messing with things you do not understand and telling people that, since you haven't wiped out millions yet, there are no problems.

            3. Anonymous Coward
              Anonymous Coward

              Re: "effort wouldn't be better expended on something of more value to society."

              "The US has been eating GM food for several decades now and the bodies continue to abjectly fail to pile up"

              It's not as if the US cancer incidence rates suddenly shot up since the early 1990s, coinciding with this nasty Corporatist greed-shit being introduced into the human food supply chain, is it?

              In other words, not all science is progressive, you should try being a bit more objective.

              Here's some GMO food for thought for you - if the eastern religions are right, you might come back as a GMO lab mouse :-)

      5. Muscleguy

        Re: Oh that's just great

        A Biologist writes: this is the Red Queen scenario. If you recall, the Red Queen has to run fast just to stay still. This name has been given to the hypothesis explaining why sexual reproduction is so very widespread even though some species (rotifers, daphnia, aphids) get by apparently fine without it.

        Living animals have to fend off so many parasites, from the retroviruses (which are just slightly encapsulated RNA strings) up to multicellular parasites, and sexual gene shuffling allows at least some offspring to simply survive in the face of the onslaught.

        There are so many bad actors out there in the computer ecology from script kiddies to state actors that an immune system is needed. These researchers and those who search in pursuit of bounties are sadly necessary.

    2. Anonymous Coward
      Anonymous Coward

      Re: Oh that's just great

      It's a different problem. Timing attacks work by analysing the time an operation takes depending on the input. Since most people have either an x86 or ARM CPU, a lot of information on expected time is already available, you just have to collect information from sample inputs to find an input that takes a different amount of time. So while developers have to write code that works correctly as often as possible, an attacker only needs to get it right once and will have a lot of opportunities.
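
      To make that concrete, here's a minimal Rust sketch of the classic flavour - timing a comparison that bails out at the first mismatch. The leaky_compare function and the loop counts are invented for illustration; Meltdown and Spectre time cache accesses rather than comparisons, and a real attack needs statistics over many noisy runs:

      use std::time::Instant;

      fn leaky_compare(secret: &[u8], guess: &[u8]) -> bool {
          if secret.len() != guess.len() {
              return false;
          }
          for (s, g) in secret.iter().zip(guess.iter()) {
              if s != g {
                  return false; // early exit leaks how far the match got
              }
          }
          true
      }

      fn main() {
          let secret = b"hunter2!";
          let mut guess = [0u8; 8];
          let mut best = (0u8, 0u128);

          // Try every value for the first byte; the slowest candidate
          // matched the furthest. In practice you average thousands of
          // noisy samples rather than trusting one measurement.
          for candidate in 0u8..=255 {
              guess[0] = candidate;
              let start = Instant::now();
              for _ in 0..100_000 {
                  std::hint::black_box(leaky_compare(secret, &guess));
              }
              let elapsed = start.elapsed().as_nanos();
              if elapsed > best.1 {
                  best = (candidate, elapsed);
              }
          }
          println!("slowest (likely correct) first byte: {:?}", best.0 as char);
      }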

      1. Loud Speaker

        Re: Oh that's just great

        Except that "out of order" cpus do not inherently have a predictable instruction execution time, even in a single thread environment, and Intel's threads are "virtual" ie not dedicated - which is where these bugs originate - which means if the CPU is hard at work on multiple threads, unless you have control over what all of them are doing, timings are being actively randomised,<p>

        I am not saying "don't panic", I am saying "you only need to panic a small amount, and quite slowly" - there is time for a cup of tea first.

        OTOH, since Intel did this deliberately, you might want to go to another supplier next time.

        1. Claptrap314 Silver badge

          Re: Oh that's just great

          Maybe not to ordinary users. Isolating the source of many bugs requires cycle-accurate simulations. Lots of fun has been had convincing successive design teams of this fact.

          Moreover, there are instruction sequences that will put the entire microprocessor and caches into known states. (Up to cycle-accurate predictions.) It's not trivial to develop such a sequence, but I have done this. Again, it's a lot easier to do this if you have the cycle-accurate sim to verify your work, but in the case that I'm talking about, it would be over a year before the sim was finally made cycle accurate.

          Vulnerability & bug hunting at the processor level is just flat different than other types of programming. A non-trivial number of programmers fail to grasp this, and end up going elsewhere. Don't assume that what you have been told, or what you have learned, applies directly here.

        2. Claptrap314 Silver badge

          Re: Oh that's just great

          "Except that "out of order" cpus do not inherently have a predictable instruction execution time, even in a single thread environment, and Intel's threads are "virtual" ie not dedicated - which is where these bugs originate - which means if the CPU is hard at work on multiple threads, unless you have control over what all of them are doing, timings are being actively randomised,<p>"

          Wow. Hard to keep up with what all is wrong here. Microprocessors are not magic. If you get them into a known state, and feed them a given set of inputs at given points in time, they will give you the same results. Every time.

          I did microprocessor validation for a decade just as OOE became a thing at AMD & IBM. We had cycle-accurate simulations of all of these processors (eventually). This includes, for instance, the STI Cell microprocessor which had two clock domains (one for the ppc core and one for the spus).

          Yes, if threads are sharing execution units, you have to know what is being executed on both threads to predict timing. But again, from a given initial state and a fixed set of inputs, the final state is deterministic.

          1. Anonymous Coward
            Anonymous Coward

            Re: the same results. Every time

            "Microprocessors are not magic. If you get them into a known state, and feed them a given set of inputs at given points in time, they will give you the same results. Every time."

            ?

            Microprocessor systems are not always perfectly designed or implemented, and even if they were, they may not be 100% predictable especially once you move outside the core itself and into chip level and system level components and behaviours, e.g. caches, DMA capability, etc.

            E.g. where do things like soft errors in caches fit into the picture of perfectly predictable timing? They don't, not for people (such as safety-critical systems people) who take their behavioural and timing analysis seriously. Obviously that makes life inconvenient for Der Manglement in these cases 'cos it means that they're not able to justify using widely used chips and technologies which rely on cache, OOE, etc. Not without having to handwave quite a lot, anyway...

            A soft error on something that was in cache (resulting in a forced cache miss) is routine expected behaviour; it's inevitable that they will happen, they just can't be predicted in terms of when they will happen. When one does happen, the visible timing of the system may be different than it would be without the soft error. That timing difference may then propagate in an unmodellable way, rendering any system-level timing predictions largely irrelevant.

            A bit like the butterfly/chaos effect, except not as pretty.

            DMA transactions may have similar effects on timing predictability.

            Here's one prepared earlier for the FAA, from their "Handbook for the Selection and Evaluation of Microprocessors for Airborne Systems " at

            https://www.faa.gov/aircraft/air_cert/design_approvals/air_software/media/AR_11_2.pdf

            "Nondeterminism arises because the availability of a shared resource becomes largely dependent on the run-time behavior of other processes sharing the same resource. In many cases, the run-time behavior of programs is data-dependent and cannot be predicted offline."

            [snip]

            "Out-of-order instruction execution or dynamic scheduling of instructions may cause timing anomalies. For instance, when there is a cache hit, an instruction takes longer to execute than when there is a cache miss, contrary to popular knowledge that cache hits take less time. For example, in a processor that employs out-of-order execution, a cache miss will allow subsequent instructions to begin execution. This out-of-order behavior may lead to a reduced execution time for a set of instructions. This makes the worst case execution time of tasks hard to predict."

            Mostly this doesn't matter. Sometimes it does. Handwaving doesn't make it go away, proper design and analysis might make it less dangerous.

    3. Claptrap314 Silver badge

      Re: Oh that's just great

      Don't feel so bad. Work at that level is quite different, and we use a trick--we generally write in assembly language and have access to cycle-accurate simulations. Very, VERY different from writing concurrency code in something higher level where you don't have simulators and no idea at all about what code is actually executing. We also get to experiment--a lot--to see what happens.

  2. Anonymous Coward
    Anonymous Coward

    And so it begins.

    Hardware fixes will start to be delivered after this pustulent mess is thoroughly picked over.

    And the picking has only just started.

    Now where's my pen and paper?

  3. Anonymous Coward
    Anonymous Coward

    Not so great for anyone using Intel CPUs or those who violate security command structure

    Intel is by far the mainstream CPU to suffer the most serious CPU security command violations. Intel has tried to mislead consumers, enterprise and the Feds about their defective CPUs while the WinTel Cabal attempts to mitigate some of the security holes by punishing all who use Windoze OSs. Microsucks should be prosecuted for their defective code on all levels, including the crap code from Intel to deal with the defective CPUs. Allowing these criminals to just keep spewing defective goods on mankind is incomprehensible.

    1. Michael Duke

      Re: Not so great for anyone using Intel CPUs or those who violate security command structure

      WOW Really.

      So the exploit was proved on a Mac with macOS 10 on Intel. AMD is vulnerable, ARM is vulnerable, and so are most versions of Linux.

      But well done with the Intel and Microsoft hate.

      1. Anonymous Coward
        Anonymous Coward

        Re: Not so great for anyone using Intel CPUs or those who violate security command structure

        Intel are the most vulnerable, and this "not just Intel" attempt to spread the blame doesn't convince anyone with a clue.

        Whilst AMD and ARM may have individual products that have some issue with Meltdown and Spectre, pretty much everything x86 from intel has been dodgy for years.

        I personally would guess that the other guys saw how intel were able to sell dodgy crap and thought "it sells, so let's make our own version", but never forget intel did it first and IMHO intentionally, to gain an edge on their competitors.

        1. defiler

          Re: Not so great for anyone using Intel CPUs or those who violate security command structure

          "never forget intel did it first and IMHO intentionally to gain an edge on their competitors."

          You don't remember 1996, do you?

          Of course they fucking did! Everything was about going faster. We'd finally hit the Holy Grail of one instruction per cycle, and people still wanted more speed. So let's try sneaking extra instructions onto idle silicon. It's genius! And if AMD, ARM, Motorola, MIPS, Zilog or any of the others had thought of it first, they'd have done it too. That's business - getting an edge on your competitors. And for what it's worth, out-of-order execution is an astonishingly clever way to do that.

          In 1996 you could log into most FTP servers as "anonymous", and it didn't even check if the password you gave was an email address. In 1996 almost all comms across the internet were unencrypted. In 1996 every internet-connected device had a public IP address. In 1996 you could bounce whatever emails you wanted off whatever SMTP server you wanted. In 1996 the Internet was so innocent it was like the Garden of Eden. Nobody thought like this. Then it got filled with dick-pill adverts and went to crap.

          Everybody trusted everybody else in computing (as a rule). It was like Shetland 30 years ago - everyone left their doors unlocked and their keys in their car. If somebody took it, they'd bring it back with a good reason why. Of course Intel made it faster, and of course they didn't think about a bafflingly complicated way to sneak a peek at unauthorised memory. We were using Windows 3.11 and Windows 95 in companies! Security? That just wasn't thought of back then...

          1. Anonymous Coward
            Anonymous Coward

            Re: 1996 - a lesson from history

            https://en.wikipedia.org/wiki/Advanced_RISC_Computing

            Pretty sure that quite a few of those products were before 1996, and quite a few did more than one instruction per cycle. Some of them even ran Windows NT, while it was permitted.

            Intel's answer to RISC was going to be IA64, because a 64-bit x86 couldn't be done.

            Well, it looks quite likely that AMD64 will be around rather longer than IA64.

            1. Anonymous Coward
              Anonymous Coward

              Re: 1996 - a lesson from history - and back to 1986 too

              And, IIRC, IA64 was (is) a sort of VLIW arch.

              VLIW pushes all the optimisations that have caused these issues out of the hardware and into the compiler.

              Which is part of what did for IA64 - that problem turned out to be harder than anticipated. Shame, because if it had succeeded, we wouldn't be here.

              And, as someone else said, Inmos and the Transputer got it right (via a different route) too. I remember guys at Intel at the time of the 386 launch being very worried about it. But marketing beat technical excellence in the end, as usual.

          2. Anonymous Coward
            Anonymous Coward

            Re: Not so great for anyone using Intel CPUs or those who violate security command structure

            It's rare that anyone is so dead-on the money. Beautiful summarization of history here.

      2. Missing Semicolon Silver badge

        Re: Not so great for anyone using Intel CPUs or those who violate security command structure

        We're all doing the hate on Intel because their devices are the ones vulnerable to Meltdown in particular. AMD devices suffer from Spectre only. Effectively, Intel cheated on the benchmarks by skipping some of the security.

      3. qudofzakvafu@dropmail.me

        Re: Not so great for anyone using Intel CPUs or those who violate security command structure

        https://www.amd.com/en/corporate/speculative-execution

        So basically AMD is saying 'near zero' chance.

        It's still not proved on their CPUs, so atm it looks like 'the industry' is trying to make it look like Intel is not alone on this one. Let's face it: the real problem here is Meltdown, and that is Intel-only, a major security flaw by design. They chose to fuck security for performance.

        The Spectre threat was linked with Meltdown just to muddy the waters, that is.

        1. bombastic bob Silver badge
          Unhappy

          Re: Not so great for anyone using Intel CPUs or those who violate security command structure

          "The spectre thread was linked with the meltdown just to muddy the waters, that is."

          Like cyclamates and saccharin (in the USA anyway)... as in, how the sugar lobby made quality artificial sweeteners illegal, and only "let" us have the mediocre ones. [more on the cyclamates Wikipedia page]

          basically, use bad press, "you too", and FUD to keep your competitor from being able to leverage the situation.

    2. Anonymous Coward
      Anonymous Coward

      Re: Not so great for anyone using Intel CPUs or those who violate security command structure

      Yes, I don't know why Torvalds and Tanenbaum didn't use Amiga, Motorola or PowerPC to develop Linux and Minix... probably they too are part of the great WinTel conspiracy - while of course it was IBM that selected Intel for its PC, and once there were millions of them around, who would have forced users towards a different, incompatible one??

      Without billions of WinTel machines ready to run a different OS, there would have been no Linux either. Who would have tried and worked on it if it could run only on little-used CPUs?

      That of course doesn't excuse Intel for its big blunders - being the main computing platform also means you have big responsibilities for security.

    3. bombastic bob Silver badge
      Devil

      Re: Not so great for anyone using Intel CPUs or those who violate security command structure

      "by punishing all who use Windoze OSs"

      nice paranoia-rant. and the punishment for 'Windoze' OSs is more self-inflicted these days.

      I think it's simpler: Intel engineers didn't consider the possibility of side-channel attacks in their design. Oops.

  4. Anonymous Coward
    Anonymous Coward

    Can somebody wake me up when/if a working patched microcode from intel arrives?

    1. Adam 1

      I heard that some of the patches were so effective that after applying them there would be no way to run this sort of exploit code.

    2. Flywheel

      "Mr Van Winkle .. it's time to wake up..."

  5. JeffyPoooh
    Pint

    Don't panic, "No exploit code has been released."

    "...panic: don't. No exploit code has been released."

    3...

    2...

    1...

    Ding!

    Okay, now you can panic.

    The axis of time is your friend. But it's not that much of a friend.

    Panic-results integrate over time. So panic early, and panic often. A proactive approach to panic can avoid the dreaded Panic Clipping™.

    1. bazza Silver badge

      Re: Don't panic, "No exploit code has been released."

      Oh don't worry, some of us have been "deeply concerned" (actually, quivering wrecks but masking it well, chin up) for quite some time now.

      This whole thing is going to pan out to be far worse than Y2K, for there will be real and far reaching consequences.

      1. werdsmith Silver badge

        Re: Don't panic, "No exploit code has been released."

        "This whole thing is going to pan out to be far worse than Y2K, for there will be real and far reaching consequences."

        And some of us couldn't give a shit.

        1. Anonymous Coward
          Anonymous Coward

          Re: Don't panic, "No exploit code has been released."

          "And some of us couldn't give a shit."

          Perhaps not, but fixing all this will cost a lot of money, and that's passed on to the customer in one form or other.

          One way or another, you will be helping pay for that, even if you don't use, own or care about computers.

          Then again, if your bank goes down the shitter because someone has launched a really juicy attack based on these quite significant hardware flaws, I suspect you will start giving a shit then. At the very least making alternative banking arrangements will give you a belly ache of a day.

          1. amanfromMars 1 Silver badge

            Re: "No exploit code has been released." A Blatant Lie Hiding in Clear Sight of NEUKlearer Space

            Perhaps not, but fixing all this will cost a lot of money, and that's passed on to the customer in one form or other. .... Anonymous Coward

            And exploiting it, the systemic processor vulnerabilities and finger in the dyke fixes, will generate even more money and beautifully frantic energy .... and shift the balance of effective global power to, well, ...Autonomous Heroes rather than Anonymous Cowards, Anonymous Coward.

            And such be only the Start of SCADA Systems' Worst Nightmare ..... a Runaway China Syndrome Meltdown with Processes Fed Super Enriched Fuel/Novel Intellectual Property beyond the Command and Control of Existing Levers of Distribution.

            And don't believe a word of Don't panic, "No exploit code has been released." for you now know it is released and running wild and rampant and rogue renegade too. But you might like to realise that is not necessarily bad whenever exploits are intelligently designed to permit better actions in deeper processes with both secure and secretive programs.

            For some, who may be more than just a Chosen Few, is that AIMajic to Exploit.

          2. amanfromMars 1 Silver badge

            Re: Don't panic, "No exploit code has been released."

            Then again, if your bank goes down the shitter because someone has launched a really juicy attack based on these quite significant hardware flaws, I suspect you will start giving a shit then. .... Anonymous Coward

            Methinks really juicy attacks against banks based upon quite significant hardware flaws are in the public interest, given the fact that then might bankers give a shit about anything/everything other than themselves and profitable debt and deficit [money for and from nothing], and in so doing crush and crash those systems which are based/predicated on being too big to fail and therefore ripe for executive rape, abuse and misuse, both personal and corporate ....... which is where/what they are currently at, is it not?

            And although not a significant hardware flaw, the likes of a Bitcoin virtual currency mine is something which successfully challenges fiat currencies earlier monopoly position in the field of transferable value reflecting a systems friendly supportive worth?

        2. Anonymous Coward
          Anonymous Coward

          Re: Don't panic, "No exploit code has been released."

          @ werdsmith and "And some of us couldn't give a shit."

          Happy is the fool who doesn't give a damn.

          Given this is a forum for computing professionals, you might want to return to your comics and let the grown-ups deal with reality.

          1. werdsmith Silver badge

            Re: Don't panic, "No exploit code has been released."

            "Given this is a forum for computing professionals, you might want to return to your comics and let the grown-ups deal with reality."

            I'll let the grown-ups like you get on with their sanctimony and patronising.

            The reality is that I can't do anything about this or any of the multitude of other security vulnerabilities that exist on my computer equipment and in my life in general.

            So I'm ****ed if I'm going to waste time and energy worrying about whether I'm going to die in a car accident on my way home, or this ridiculous hand-wringing over whether my computer may run a bit slower when I do certain things. So it's going to cost? We know. What are you going to do about it then? Bitch on a forum? That'll make it cheaper.

            There's the reality.

            Signed.

            36 years professional in computing, done quite well thanks.

            1. MrBoring

              Re: Don't panic, "No exploit code has been released."

              I agree with this guy. One shouldn't worry (give a shit) about things that are totally out of one's control to fix, and for 99.99% of IT professionals this issue is something we can't fix.

              Saying that, I give a shit because it means more patching, more bugs, more crashes, worse performance - more wasting time on infrastructure when we could be doing stuff that actually adds value.

            2. Anonymous Coward
              Anonymous Coward

              Re: Don't panic, "No exploit code has been released."

              @werdsmith and "The reality is that I can't do anything about this or any of the multitude of other security vulnerabilities that exist on my computer"

              Actually you can:

              Never buy intel again

              Take your hardware back to the vendor and demand a refund as it was faulty when delivered

              Use a secure operating system (good indicator is if they want to spy on your usage)

              Complain to your Government representative and ask them what they are going to do about it

              Tell everyone you know about the problem and suggest the above advice to them

              Basically, do not take this lying down. I could go on, but just accepting that you have been violated and trying to forget it will not stop it happening again.

              Sorry I can't help with the vulnerabilities in your life without more detail, which would then be picked up by Google etc. and get you excluded from medical treatment or at least make your insurance premiums go up.

          2. Steve Davies 3 Silver badge
            Facepalm

            Re: "And some of us couldn't give a shit."

            Still using the abacus then, I see?

            1. Anonymous Coward
              Anonymous Coward

              Re: "And some of us couldn't give a shit."

              But even an abacus is vulnerable to side-channel attack if someone with very acute hearing is sitting close to you.

        3. David Roberts
          Pint

          Re: Don't panic, "No exploit code has been released."

          Definite lack of shit donation over here as well.

          First there should be a realistic (!) proposal of how to fix it.

          First stage of that is to produce a new/upgraded/different architecture which has security against these flaws built in. Followed by implementation, testing, running up the fabs, producing the support chips and motherboards, and starting commercial roll-out. Not gonna happen this year.

          Next stage is to recognise the enormous real estate of vulnerable hardware out there, and that there is no economy in the world which can afford to ditch all that and start again, even if some mad manufacturer were prepared to ramp up production to meet all new demand plus full replacement.

          In the meantime, all demand for new/replacement computing capacity will have to be met from existing architectures, constantly increasing the real estate of vulnerable hardware.

          Not fair, cry the commentards, that means you are forced to buy dodgy hardware from the people who designed it to be dodgy.

          So come up with an alternative which keeps feeding society's insatiable demand for cheap computing - the demand which resulted, a long time ago, in the dominance of Intel as a single supplier. You get what you pay for. Or don't. If there were, say, four different competing architectures all at similar volume, you could afford to drop one and ramp up the other three.

          Nobody has yet made a reasonable commercial case for curing Meltdown by ditching Intel in all new machines and letting ARM and AMD take up the slack. Because there just isn't the capacity. That is using existing factories with fully functional production lines.

          So enjoy your ranting and beating of your manly (or womanly) breast in outrage. [Um.... nearly wandered into mind-bleach territory there.] However, come up with a viable alternative, or accept that we now have an ongoing cycle of software mitigation in the same way we have with all other software products. Coupled with a performance degradation in heavy-use scenarios.

          Life sucks. Deal with it.

          Since I can't see any way that I can solve the problem or even influence the outcome, there isn't much point in wasting time worrying. It will either be fixed or it won't. Meanwhile I think my time would be more productively spent sampling a few brews.

      2. James 51
        Boffin

        Re: Don't panic, "No exploit code has been released."

        @Bazza Y2K could have been a big problem except for the years of effort that went into rewriting and testing a whole bunch of code all over the world.

        1. Doctor Syntax Silver badge

          Re: Don't panic, "No exploit code has been released."

          "Y2K could have been a big problem except for the years of effort that went into rewriting and testing a whole bunch of code all over the world."

          Perhaps one outcome of this would be a few man-years of effort in trimming bloat to mitigate the performance loss in mitigating Meltdown.

          1. Anonymous Coward
            Anonymous Coward

            Re: Don't panic, "No exploit code has been released."

            "Perhaps one outcome of this would be a few man-years of effort in trimming bloat to mitigate the performance loss in mitigating meltdown."

            For a moment I read that as "trimming boats", being reminded of all the contractors who became boat owners as a result of Y2K, including the guy I knew who took 6 months off in the Caribbean.

  6. onefang

    So what we need is Optimus Prime to step up and sort out all these bad CPUs once and for all. nVidia's Optimus chips might be good for something after all.

  7. bazza Silver badge
    Mushroom

    Time for NUMA, Embrace your Inner CSP

    This particular round of hardware flaws has come about because the chip manufacturers have continued to support SMP whilst building architectures that are, effectively, NUMA. The SMP is synthesised on top of the underlying NUMA architecture. That's what all these cache coherency and memory access protocols are for.

    This is basically a decades-long bodge to save us all having to rewrite OSes and a whole shed load of software. It is the biggest hint that the entire computing community has been recklessly lazy by failing to change. If we want speed and security, it seems that we will have to rewrite a lot of stuff so that it works on pure NUMA architectures.

    <smugmode>The vast majority of code I've ever written is either Actor Model or Communicating Sequential Processes, so I'm already there</smugmode>

    Seriously though, languages like Rust do CSP as part of their native language. An OS written in Rust using its CSPness wouldn't need SMP. Though the current compiler would need changing because of course it too currently assumes an underlying SMP hardware architecture... If the SMP bit of our lives can be ditched we'll have faster CPUs and no cache coherency based design flaws, instead of slowed down software running on top of bodged and rebodged CPU microcode.

    Besides, CSP is great once you get your head around it. It's far easier to write correct and very reliable multi-threaded software using CSP than using shared memory and mutexes. You can do a mathematical proof of correctness with a CSP system, whereas you cannot even exhaustively test a multithreaded, shared memory + mutexes system.
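
    For the doubters, a toy sketch in Rust, with std::sync::mpsc standing in for a proper CSP channel (the worker/partial-sum split is invented purely for illustration):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();

        let workers: Vec<_> = (0..4u64)
            .map(|id| {
                let tx = tx.clone();
                thread::spawn(move || {
                    // No shared mutable state and no mutex: each worker
                    // owns its partial result until it sends it away.
                    let partial: u64 = (id * 1_000..(id + 1) * 1_000).sum();
                    tx.send(partial).unwrap();
                })
            })
            .collect();
        drop(tx); // drop the original so rx.iter() ends when the workers do

        // The receive end is the only place the results ever meet.
        let total: u64 = rx.iter().sum();
        for w in workers {
            w.join().unwrap();
        }
        println!("total = {total}");
    }

    No locks to forget, and the compiler stops the workers touching each other's data.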

    Oh, and Inmos / Tony Hoare got it right, and everyone else has been lazy and wrong.

    1. missingegg

      Re: Time for NUMA, Embrace your Inner CSP

      I love Rust, but it doesn't do that much to enable performant software in NUMA systems. Rust protects against various kinds of memory misuse. Good NUMA software requires clever planning to get the data needed for an operation on the same node the code is running on. Naively written CSP code will flood whatever memory fabric the hardware uses, and prevent code from executing for lack of the data it needs.

      1. bazza Silver badge

        Re: Time for NUMA, Embrace your Inner CSP

        You're missing the point. Naively written anything will overwhelm underlying hardware. There's nothing magical about shared memory in a SMP-on-top-of-NUMA system that means that poorly written code won't run into the limits of the QPI / Hypertransport links between CPUs.

        Synthesising SMP on top of NUMA requires a lot of traffic to flow over these links to achieve cache coherency. Ditch the SMP, and you've also ditched the cache coherency traffic on these links, meaning that there's more link time available for other traffic (such as data transfers for a CSP system). And you've got rid of a whole class of hardware flaws revealed in the article, and you have a faster system. What's not to like?

        From what I hear from my local Rust enthusiast, Rust's control of memory ownership boils down to being the same as CSP. Certainly, Rust has the same concept of synchronous channels that CSP has.
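
        A minimal sketch of what that boils down to in practice (hypothetical names; the commented-out line is the point):

        use std::sync::mpsc;
        use std::thread;

        fn main() {
            let (tx, rx) = mpsc::channel();
            let buffer = vec![1, 2, 3];

            let sender = thread::spawn(move || {
                tx.send(buffer).unwrap();
                // Ownership of `buffer` moved with the send; uncommenting
                // the next line is a compile error, not a data race found
                // years later in testing.
                // println!("{:?}", buffer);
            });

            println!("{:?}", rx.recv().unwrap()); // receiver is now the sole owner
            sender.join().unwrap();
        }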

        One of the good things about CSP is that it makes it abundantly clear that one has written rubbish code; there's a lot of channel reads and writes littering one's code.

        1. Anonymous Coward
          Anonymous Coward

          Re: Time for NUMA, Embrace your Inner CSP

          Thanks for the CSP references, best of luck with them too, though the marketdroids and 'security researchers' won't thank you for them. Maybe there's a DevOps angle on them somewhere?

          A full version of the CSP book appears to have been legitimately freely downloadable for the last few years, see e.g. http://www.usingcsp.com/

          One thing I've not seen quite so explicitly mentioned (though your NUMA references come close) is the role of the memory consistency model, and to a lesser extent, what a process (nb process not processor) can and cannot be permitted to see, directly or indirectly.

          As far as I know, modern RISC processors have tended to be built around a memory model which does not require external memory to appear consistent across all processes at all times. So if some code wants to know that its view of memory is consistent with what every other process/processor sees, it has to take explicit action to make it happen. Especially where the processor is using complex multilevel cache to provide interesting performance. Hence things like conditional load/store sequences found on ARMs and Alphas and... As it happens, they're the kind of thing that NUMA people have been thinking about for years, and CSP people before them. It's a solvable problem, and non-Intel people had solved it.
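
          For instance, a lock-free increment has to spell that explicit action out. A sketch in Rust, assuming compare_exchange_weak as the portable wrapper - on ARM it compiles down to an LL/SC (ldxr/stxr) retry loop, on x86 to lock cmpxchg:

          use std::sync::atomic::{AtomicU64, Ordering};

          // Increment without a lock: retry until our read-modify-write
          // lands without another core having touched the value meanwhile.
          fn increment(counter: &AtomicU64) -> u64 {
              let mut current = counter.load(Ordering::Relaxed);
              loop {
                  match counter.compare_exchange_weak(
                      current,
                      current + 1,
                      Ordering::AcqRel,  // make the update visible to other cores
                      Ordering::Relaxed, // a failed attempt needs no ordering
                  ) {
                      Ok(prev) => return prev + 1,
                      Err(actual) => current = actual, // lost the race; try again
                  }
              }
          }

          fn main() {
              let counter = AtomicU64::new(41);
              println!("{}", increment(&counter)); // prints 42
          }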

          As far as I can see, x86 (even modern ones) dates back to an era where there was only one processor and memory consistency was something that software designers or even system designers could happily ignore, because all memory was always consistent. Except it wasn't really.

          ARM and AMD64 do not seem to assume this legacy behaviour and as such they miss out on some of the recent fun.

          I could be wrong though. Where can readers find out more about this particular historical topic and its current relevance?

          1. Claptrap314 Silver badge

            Re: Time for NUMA, Embrace your Inner CSP

            Even 10-15 years ago, this was simply not true. Cache consistency was "eventual" in the modern parlance unless you explicitly called a sync - and there were various levels of syncs available. New cache states (like "T") seemed to pop up every few years.

            Yes, in NUMA, every application is required to figure out how to do this, as opposed to having hardware do it.

            But NUMA systems are still going to be vulnerable to this sort of thing, absent proactive steps taken by the design team. You have to have some way to manage synchronization. Timing will always matter.

            1. bazza Silver badge

              Re: Time for NUMA, Embrace your Inner CSP

              "Yes, in NUMA, every application is required to figure out how to do this, as opposed to having hardware do it.

              But NUMA systems are still going to be vulnerable to this sort of thing, absent proactive steps taken by the design team. You have to have some way to manage synchronization. Timing will always matter."

              The nice thing about a NUMA system is that if the software gets it wrong, it can be fixed in software. Plus, faults in software are going to be fairly specific to that software. The problem with having hardware second-guess what software might do is that it does it the same way no matter what, and if it gets it wrong (as has been reported in this article) it's a machine fault that transcends software and cannot be easily fixed. Ooops!

              1. Claptrap314 Silver badge

                Re: Time for NUMA, Embrace your Inner CSP

                Except that we have a very, very long & sad history of the same class of bug popping up over & over. Ever hear about buffer overflows? How are they even still a thing? And yet, we continue to see them.

                You are right in what you are saying. It's what you are not saying that bugs me.

                1. bazza Silver badge

                  Re: Time for NUMA, Embrace your Inner CSP

                  "Except that we have a very, very long & sad history of the same class of bug popping up over & over. Ever hear about buffer overflows? How are they even still a thing? And yet, we continue to see them.

                  You are right in what you are saying. It's what you are not saying that bugs me."

                  Ah, I think I see what you mean (apologies if not). Yes, timing is an issue.

                  CSP is quite interesting because a read / write across a channel is synchronous, an execution rendezvous. The sending thread blocks until the receiving thread has received, so when the transfer completes each knows whereabouts in execution the other has got to. That's quite different to Actor Model; stuff gets buffered up in comms link buffers, and that opens up a whole range of possible timing bugs.
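
                  A sketch of that rendezvous in Rust, using std::sync::mpsc::sync_channel with a bound of zero (the one-second sleep is only there to make the blocking visible):

                  use std::sync::mpsc;
                  use std::thread;
                  use std::time::{Duration, Instant};

                  fn main() {
                      // Capacity 0 makes send() block until a receiver is
                      // actually taking the value: an execution rendezvous,
                      // as in CSP.
                      let (tx, rx) = mpsc::sync_channel::<&str>(0);

                      let sender = thread::spawn(move || {
                          let start = Instant::now();
                          tx.send("hello").unwrap(); // blocks until the recv below
                          println!("send completed after {:?}", start.elapsed());
                      });

                      thread::sleep(Duration::from_secs(1)); // the receiver dawdles...
                      println!("received: {}", rx.recv().unwrap());
                      sender.join().unwrap();
                  }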

                  CSP by being synchronous largely gets rid of the scope for timing bugs, leaving one with the certainty that one has either written a pile of pooh (everything ends up deadlocked waiting for everything else), or the certainty that you haven't got it wrong if it runs at all. There's no grey in between. I've had both experiences...

                  However, nothing electronic is instantaneous; even in a CSP hardware environment it takes a finite amount of time for signals to propagate; no two processes in CSP are perfectly synchronised, so there are some tiny holes in the armour. The software constructs may think they're synchronised ("the transfer has completed"), but actually they're not quite. But it is good enough for the needs of most real-time applications.

                  One advantage of this approach is that it doesn't let one trade latency for capacity. With actor model systems data can be off-loaded into the transport (where it gets buffered). Therefore a sender can carry on, relying on the transport to hold the data until the receiver takes it. That's great right up until someone notices the latency varying, and until the transport runs out of buffer space. With CSP, because everything is synchronously transferred, an insufficient amount of compute resource late on in one's processing chain shows up immediately at the very front; there is no hiding that lack of compute resource by temporarily stashing data in the transport. This is excellent in real time systems, because throughput and latency testing is conclusive, not simply "promising".

          2. bazza Silver badge

            Re: Time for NUMA, Embrace your Inner CSP

            "Thanks for the CSP references, best of luck with them too, though the marketdroids and 'security researchers' won't thank you for them. Maybe there's a DevOps angle on them somewhere?"

            No worries. I've no idea what security researchers would think, etc. Adopting CSP wholesale is pretty much a throw-everything-away-and-start-again thing, so if there is a DevOps angle it's a long way in the future!

            Personally speaking I think the software world missed a huge opportunity to "get this right" at the beginning of the 1990s when Inmos Transputers (and other things like them) looked like the only option for faster computers. Then Intel cracked the clock frequency problem (40MHz, 66MHz, 100MHz, topping out at 4GHz) and suddenly the world didn't need multi-processing. Single thread performance was enough.

            It's only in more recent times that multiple core CPUs have become necessary to "improve performance", but by then all our software (OSes, applications) had been written around SMP. Too late.

            "As far as I know, modern RISC processors have tended to be built around a memory model which does not require external memory to appear consistent across all processes at all times. So if some code wants to know that its view of memory is consistent with what every other process/processor sees, it has to take explicit action to make it happen."

            Indeed, that is what memory fences are: op codes that explicitly allow software to tell the hardware to "sort its coherency out before doing anything else". Rarely does one call these oneself; they're normally included in other things like sem_post() and sem_wait(), which call them for you. The problem seems to be that the CPUs will have a go at doing it anyway, so that when a fence is reached in the program flow it takes less time to complete. And this is what has been exploited.
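
            A sketch of the software end of that in Rust - std::sync::atomic::fence is what ends up emitting the fence op codes; the DATA/READY flag pair is invented for illustration:

            use std::sync::atomic::{fence, AtomicBool, AtomicU32, Ordering};
            use std::thread;

            static DATA: AtomicU32 = AtomicU32::new(0);
            static READY: AtomicBool = AtomicBool::new(false);

            fn main() {
                let producer = thread::spawn(|| {
                    DATA.store(42, Ordering::Relaxed);
                    // Release fence: the DATA store must become visible
                    // to other cores before the READY store below it.
                    fence(Ordering::Release);
                    READY.store(true, Ordering::Relaxed);
                });

                while !READY.load(Ordering::Relaxed) {
                    std::hint::spin_loop();
                }
                // Acquire fence pairs with the release fence above, so
                // the DATA load cannot be satisfied by a stale value.
                fence(Ordering::Acquire);
                assert_eq!(DATA.load(Ordering::Relaxed), 42);
                producer.join().unwrap();
            }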

            "Where can readers find out more about this particular historical topic and its current relevance?"

            A lot of it is pre-internet, so there weren't vast repositories online to be preserved to the current day! The Meiko Computing Surface was a supercomputer based on Transputers - f**k-loads of them in a single machine. We used to have one at work - it had some very cool super-fast ray-tracing demos (pretty good for 1990). I heard someone once used one of these to brute-force the analogue scrambling / encryption used by Sky TV back then, in real time.

            The biggest barrier to adoption faced by the Transputer was development tooling; the compiler was OK, but machine config was awkward and debugging was diabolically bad. Like, really bad. Ok, it was a very difficult problem for Inmos to solve back then, but even so it was pretty horrid.

            I think that this tainted the whole idea of multi-processing as a way forward. Debugging in Borland C was a complete breeze by comparison. If you wanted to get something to market fast, you didn't write it multi-thread back in those days.

            However, debugging a multi-threaded system is actually very easy with the right tooling, but there's simply not a lot of that around. A lot of modern debuggers are still rubbish at this. The best I've ever seen was the Solaris version of the VxWorks development tools from WindRiver. These let you have a debugger session open per thread (which is really, truly nice), instead of one debugger handling all threads (which is always just plain awkward). WindRiver tossed this away when they moved their tool chain over to Windows :-(

            There was a French OS called (really scraping the memory barrel here) Coral; this was a distributed OS where different bits of it ran on different Motorola 68000 CPUs. I also recall seeing demos of QNX a loooong time ago where different bits of it were running on different computers on a network (IPC was used to join up parts of the OS, and these could just as easily be network connections).

            The current relevance is that languages like Scala, Go and Rust all have CSP implementations in them. CSP can be done in modern languages on modern platforms using language fundamentals instead of an add-on library. In principle, one attraction of CSP is system scalability; your software architecture doesn't change if you take your threads and scatter them across a computer network instead of hosting them all on one computer. Links are just links. That's a very modern concept.

            Unfortunately, AFAIK, Scala's, Go's and Rust's CSP channels are all stuck in-process; they aren't abstract things that can be implemented as either a TCP socket, an IPC pipe, or an in-process exchange (corrections welcome from Go, Scala and Rust aficionados). I think Erlang's channels do cross networks. Erlang even includes an ASN.1 facility, which is also very ancient but super-useful for robust interfaces.

            The closest we get to true scalability is ZeroMQ and NanoMsg; these allow you to very readily switch between joining threads up with IPC, TCP or in-process exchanges, or combinations of all of those. Redeployment across a network is pretty trivial, and they're blindingly fast (which is why I've not otherwise mentioned RabbitMQ; its broker is a bottleneck, so it doesn't scale quite as well).

            I say closest - ZeroMQ and NanoMsg are Actor Model systems (asynchronous). This is fine, but it has some pitfalls that have to be carefully avoided, because they can be lurking, hidden, waiting to pounce years down the line. In contrast, CSP (which has the same pitfalls) thrusts the consequences of one's miserable architectural mistakes right in one's face the very first time you run your system during development. Perfect - bug found, can be fixed.
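
            To show the transport-swapping mentioned above, here's a sketch using the github.com/pebbe/zmq4 Go binding (the exact API names are my assumption; check the package docs). The point is that only the endpoint string changes when a peer moves between in-process, IPC and TCP transports; the messaging code is untouched. NanoMsg exposes the same idea through its own URL-style addresses.

              package main

              import (
                  "fmt"

                  zmq "github.com/pebbe/zmq4"
              )

              func main() {
                  push, _ := zmq.NewSocket(zmq.PUSH)
                  defer push.Close()
                  pull, _ := zmq.NewSocket(zmq.PULL)
                  defer pull.Close()

                  // Swap the endpoint to redeploy:
                  //   "inproc://pipeline"    - threads in one process
                  //   "ipc:///tmp/pipeline"  - processes on one host
                  //   "tcp://127.0.0.1:5555" - hosts on a network
                  endpoint := "inproc://pipeline"
                  push.Bind(endpoint)    // inproc requires bind before connect
                  pull.Connect(endpoint)

                  push.Send("hello", 0)
                  msg, _ := pull.Recv(0)
                  fmt.Println(msg)
              }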

            There's even a process calculus (a specialised algebra) that one can use to analyse the theoretical behaviour of a CSP system. This occasionally gets rolled out by those wishing to have a good proof of their system design before they write it.

            Not bad for a 1970s computing science idea!

            OpenMPI is also pretty good for super-computer applications, but is more focused on maths problems instead of just being a byte transport.

            1. Anonymous Coward
              Anonymous Coward

              Re: Time for NUMA, Embrace your Inner CSP

              Sir, sir, Mr Register sir, please can we have an artickle from Bazza please?

              Meantime, while I think about those words (including words like Ada that might seem to fit in the context but didn't appear):

              The distributed French OS might have been Chorus:

              https://en.wikipedia.org/wiki/ChorusOS

              Coral was a programming language whose origins were in the UK MoD in the 1960s:

              https://en.wikipedia.org/wiki/Coral_66

              QNX is still around, though you probably can't build a functioning browser, GUI, and IP stack to fit on a 1.44MB (megabyte? what?) floppy like you could in days gone by. Owned by Blackberry nowadays?

              http://toastytech.com/guis/qnxdemo.html

              https://www.youtube.com/watch?v=K_VlI6IBEJ0

              Is VxWorks/Wind River still owned by Intel?

              Is Simics (the system-level simulator) still owned by Wind River, and thus owned by Intel?

              https://en.wikipedia.org/wiki/Simics

              Do Intel "eat their own dog food"? Should they?

              It's been a while...

    2. Christian Berger

      Re: Time for NUMA, Embrace your Inner CSP

      It's fascinating how normal UNIX commands would be a good fit for CSP architectures.

      1. bazza Silver badge

        Re: Time for NUMA, Embrace your Inner CSP

        It's fascinating how normal UNIX commands would be a good fit for CSP architectures.

        Very nearly! However, piping UNIX commands together is closer to the Actor Model than CSP; the pipes really are asynchronous IPC pipes, not the synchronous channels that CSP has. There are also limits on how commands can be plumbed together; I don't think you can do anything circular (see the sketch below).
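
        For example, Go channels (a CSP descendant) will happily express a feedback loop that a shell pipeline can't; a small sketch:

          package main

          import "fmt"

          func main() {
              a2b := make(chan int) // A's output feeds B...
              b2a := make(chan int) // ...and B's output feeds back into A: a cycle

              go func() { // process A: doubles whatever comes back around
                  for n := range b2a {
                      a2b <- n * 2
                  }
              }()

              b2a <- 1 // prime the loop
              for i := 0; i < 5; i++ {
                  n := <-a2b
                  fmt.Println(n) // 2, 6, 14, 30, 62
                  if i < 4 {
                      b2a <- n + 1 // feed the result back in
                  }
              }
          }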

        The irony of IPC pipes is that what they provide is an asynchronous byte transport, yet they're implemented in the kernel using memory shared between cores plus semaphores. The ironic part is that the shared memory is faked; it's an SMP construct synthesised on top of a NUMA architecture. That in turn is knitted together by high-speed serial links (QPI, HyperTransport), and those links are asynchronous byte transports! Grrrrr!

        The one hope is that microkernel OSes come to predominate, with bits of the OS joined up using IPC pipes instead of shared memory. That opens up the opportunity for the hardware designers to think more seriously about dropping SMP. It may happen; even Linux, courtesy of Google's Project Treble, is beginning to head that way.

  8. Glad Im Done with IT

    Just kill ALL code in a browser.

    The way things are going in this arena recently the only sane thing you can do now is to disable anything that is capable of running code in a browser.

    I said goodbye to Java and Flash years ago, now time to say goodbye permanently to Javascript and never let web assembly anywhere near my browser when that tries to become flavour of the day.

    1. bazza Silver badge

      Re: Just kill ALL code in a browser.

      Though perfectly valid, that's a very "me" point of view. I wholeheartedly agree that running arbitrary code downloaded into some sort of browser based execution engine is asking for trouble.

      Other people have the problem that, by intent and design, they're letting other users choose what to run on their hardware. Services like AWS are exactly that. If one lets employees install software on their company laptop, it's the same problem. A computer that is locked down so that only code that the IT administrator knows about is running is very often a useless tool for the actual user.

      So really, the flaws need fixing (as well as ditching Javascript), otherwise computers become pretty useless tools for the rest of us.

      1. Lee D Silver badge

        Re: Just kill ALL code in a browser.

        No, I think the lesson is "don't try to get clever for the sake of performance".

        Meltdown was caused by a lack of security checks on speculatively executed instructions. If you're going to execute speculatively, why would you handle the instruction any differently to when you normally execute it? That's a disaster waiting to happen, and people knew it.

        Spectre is the same, except the executed instructions give away information to the process about what happened. Again... this shouldn't be possible. Why should any running process ever be made aware of the results of a speculative execution? By definition, that execution shouldn't be detectable, or it's not "speculative"; it's literally execution and rollback.

        The latter is more subtle, but both are the product of not executing speculatively at all... but actually just executing. And in the former case, executing without the same security boundaries.

        They were also known about for quite a long time; people have been saying this is ripe for attack for years, along exactly these kinds of lines (I think people actually expected Spectre more than Meltdown, to be honest - a side-channel attack on such a process is much more easily predicted than an abject failure to apply memory protection).

        If you can't execute arbitrary code as an ordinary user without compromise, your system is flawed as a general-purpose operating system running on a general-purpose computer. That's not to say that you let your users do what they like - appropriate security controls should ensure they can only interfere with and trash their own stuff, nothing else. But we still live in an age where thousands of users sharing a machine aren't contained, isolated, bottled, virtualised and removed from the hardware such that it doesn't matter what they do. This is something we learned in the early mainframe days.

        Sure, it costs performance to do things properly. But in the days of 2GHz processors being "the norm" despite much faster processors existing, performance isn't actually our top concern any more; billions of machines in the hands of idiots who'll click anything is. Rather than say "Ah, well, they shouldn't have clicked that", it's time to make a processor, architecture and OS where it DOESN'T MATTER that they clicked something... it can't break out of its process, memory space, virtualised filesystem (with no user files by default until the user puts them in that program), etc.

        We're designing systems on the basis that every user is a computer expert who religiously verifies every code source they ever see, while putting a smartphone in everyone's pocket for £20.

        1. OldCrow
          Holmes

          Re: Just kill ALL code in a browser.

          An OS where the user can't install random crap from a phishing email approaches Windows 10 S or iOS in lockdown. Usability suffers as a consequence.

          This is also wasteful. For protection from legal liability, it is sufficient that the machine cannot be compromised without user error (i.e. the user's assistance).

          A likely path forward for Intel (et al.) is to add a dedicated core with an "untrusted software" mode. This mode would disable speculative execution. Further, the operating system would have to be aware of these "untrusted" processes/threads, so it could confine the threat mitigations to them (mitigations that are currently performed for all threads, sapping performance).

          Of course, software such as browsers would have to support "untrusted execution" by declaring their javascript engine threads as such.

          Anyone willing to make bets?

        2. Anonymous Coward
          Anonymous Coward

          Re: Just kill ALL code in a browser.

          "No, I think the lesson is "don't try to get clever for the sake of performance"."

          The rest of your comment makes very clear that what you meant to say might have been slightly better as "... for the sake of performance on a multiuser multitasking system which aims to have any pretence of security."

          Seems like it might be time for a return to single-user, single-tasking, non-networked systems. Either that, or take properly architected processors and properly architected OSes seriously, and admit that the apparent performance of x86 frequently comes with a functional penalty in real-world work.

          Plenty of people understood this already, but it wasn't a popular message.

        3. bombastic bob Silver badge
          Thumb Down

          Re: Just kill ALL code in a browser.

          No, I think the lesson is "don't try to get clever for the sake of performance".

          there is NO virtue in mediocrity. BOOOOoooo...!

          <sarcasm>

          yes, the clever ones - chain them up, drug them into complacency and mediocrity with Ritalin, and start when they're really small, because kids that are smarter than their teachers will turn into brilliant spark engineers, and we can't have THAT, now can we? No, we must have GROUP think and MEDIOCRITY, where NOBODY is better than anyone else, and "the masses" are carefully managed by "the elite" for their own good...

          </sarcasm>

          1. Anonymous Coward
            Anonymous Coward

            Re: Just kill ALL code in a browser.

            @bombastic bob

            While I appreciate your sentiment, aren't browsers the epitome of mediocrity?

    2. sabroni Silver badge
      Facepalm

      Re: Just kill ALL code in a browser.

      Yeah, that'll stop anyone exploiting cpu flaws.

      Get the torches!!! They're running JavaScript!!!! It looks like C but the scoping's different!!!!!!!!!!!!

      1. Lysenko

        Re: Just kill ALL code in a browser.

        Yeah, that'll stop anyone exploiting cpu flaws.

        Get the torches!!! They're running JavaScript!!!! It looks like C but the scoping's different!!!!!!!!!!!!

        JavaScript isn't the issue. Automatically downloading and executing code that arrives over the internet (*.vbs email attachments?) is the issue.

        The positive side is that there are only a handful of JS engines in common use, with V8 (Google's open-source engine) being the market leader. It should be possible to stamp out these exploits inside TurboFan (the V8 compiler) and the equivalents in other JS engines, which would automatically sanitise all the JS in circulation. Statically compiled code (C/C++ etc.) is a much bigger problem in this regard.

        1. Anonymous Coward
          Anonymous Coward

          Re: Just kill ALL code in a browser.

          "Automatically downloading and executing code that arrives over the internet (*.vbs email attachments?) is the issue."

          Don't forget unauthenticated (shell)code can also arrive courtesy of an email, web page, whatever, courtesy of a "specially crafted JPEG" or whatever the trendiest buffer overflow CVE-of-the-week is today.

          It's the second decade of the 21st century, and Windows (and other) apps still have DOS-era coding errors. In Windows in particular, some of them provide trivially easy routes to running in kernel mode. History is there to be learned from, but at the moment the end users pay the price while the computer industry gets the profits, so there's no effective motivation for corporates to learn.

          1. Anonymous Coward
            Anonymous Coward

            Re: Just kill ALL code in a browser.

            And TrueType fonts, which execute on a Turing-complete VM with branches, loops, and variables.

            And WOFF webfonts, which can contain TrueType, OpenType, or PostScript fonts - the latter being a complete language.

            And PDF, which is basically PostScript with embedded TrueType fonts, JS scripts, JPEG and TIFF images - all fertile ground for exploits.

            We are screwed.

      2. Ken Hagan Gold badge

        Re: Just kill ALL code in a browser.

        "Yeah, that'll stop anyone exploiting cpu flaws."

        Umm, yeah, actually it might. You see, none of these flaws are remotely accessible. They all require the attacker to actually run code on the target computer. Traditionally, the way around this annoying limitation is to persuade everyone that it is safe to run arbitrary third-party (untrusted) code in a browser because the browser's sandbox will protect the machine. We now find that this ain't necessarily so. Solution: stop running untrusted code in your browser (or anywhere else).

  9. Joerg

    Now CPU manufacturers must find GPU security bugs as well...

    ... because there surely are plenty of design flaws in GPUs from Nvidia and AMD; security issues are surely not limited to CPUs alone.

    It is pretty obvious that all these researchers, suddenly focused on finding security design flaws that no one gave a damn about for decades, are being paid to do so on purpose... and it is nothing good for the whole industry. These design flaws should have remained unknown outside of IT manufacturers' design labs!

    1. Anonymous Coward
      Anonymous Coward

      Re: Now CPU manufacturers must find GPU security bugs as well...

      Yeah, slay the messenger!

    2. Anonymous Coward
      Anonymous Coward

      Re: Now CPU manufacturers must find GPU security bugs as well...

      @"These design flaws should have remained unknown outside of IT manufacturers design labs !"

      Only if your name is Intel. For everyone else, who discovered that they got ripped off when they bought "Intel inside" and now find their expensive investment is a liability no matter what OS they put on it, knowing is very, very important.

      I can only understand your "if only they had not let the truth out" mentality if you own shares in or work for Intel; otherwise, wanting to pretend the problem does not exist, or suggesting that because other electronic devices might also have problems Intel is somehow not to blame for screwing their customers, is just bizarre.

      People paid top money for a product that was faulty; they deserve compensation or replacement of the affected product. If the latter requires replacement of parts that are not compatible with the replacement, then those need to be replaced too. This is not an unreasonable expectation outside of the US, and yet Intel are just ignoring their responsibility and trying to confuse the issue in the hope that they can distract their disgruntled customers. What is most annoying is the silence of our own government agencies, supposedly created to deal with exactly this sort of problem.

      So don't tell me that they should have kept schtum. Intel took the piss and finally got caught, and now is the time to cough up the cash to their affected customers: not just in the US, everywhere.

      If the cost shuts Intel down, then it will be a lesson to every other manufacturer and will go some way towards restoring customer confidence.

  10. hjns62
    Stop

    Opportunity for anti-malware?

    The Meltdown and Spectre flaws seem to be the result of speed-versus-security compromises and business ambitions to top benchmarks.

    What this paper may show (I'm speculating...) is that perhaps there is no complete fix for speculative execution at the CPU microcode level. To be fast, CPUs guess what should happen next, and building full security on top of that compromise for speed may always be prone to flaws (...continuing the speculation...). Like a theorem... (bold speculation...)

    Then wouldn't this be paradise for anti-malware at the OS level? Let the CPUs run at full speed and let the anti-malware tools detect programs that attempt side-channel attacks?

    Wouldn't the result be better for performance? What about an anti-malware function that disables the CPU patches and does the job for you? Sure to be covering web browsing, JS, whatever... and all OSes...

    1. Anonymous Coward
      Anonymous Coward

      Re: Opportunity for anti-malware?

      Up to now only Windows needs Anti-Malware programs ("virus scanners"). It would be nice to keep it this way.

    2. JCitizen
      Coffee/keyboard

      Re: Opportunity for anti-malware?

      I can remember Microsoft grudgingly allowing Symantec into the kernel space of one of their new operating systems under a new NT filing architecture. Nobody was happy about that, especially since nobody trusts Symantec to be any more secure with their code than Microsoft was; and perhaps even worse.

  11. Anonymous Coward
    Anonymous Coward

    Consistent

    "The Meltdown and Spectre design flaws are a result of chip makers prioritizing speed over security".

    Which is just another typical instance of modern business prioritizing marketing over quality.

    "Never mind the quality; feel the width!"

    1. Arthur the cat Silver badge

      Re: Consistent

      "The Meltdown and Spectre design flaws are a result of chip makers prioritizing speed over security".

      Which is just another typical instance of modern business prioritizing marketing over quality.

      More a case of business listening to their customers. Everybody wants faster CPUs; almost nobody(*) screams "make my CPU slower and more secure".

      (*) Maybe a few security types did, but they're such a small minority they rarely get heard until it's too late.

      1. Claptrap314 Silver badge

        Re: Consistent

        The feds are not a really small case. There are special designs that are made for them. I was never cool enough to get close to those designs, however...

  12. jeffdyer

    I really can't see the fuss. Unless you know exactly what other process has just written to memory, and exactly what that data is, I don't see what use it could possibly be to anyone.

    Has anyone tried debugging an application at the CPU level? There is so much going on that it would be practically impossible to know what a given string of hex means, assuming you can read it in the first place.

    1. Anonymous Coward
      Anonymous Coward

      False

      For all practical purposes, data has redundancy. From redundancy, you can figure out which data record you are looking at. An attacking program would search for file headers, magic strings and the like to find the target data structure it is after.

      For an attack against cipher keys, it would also be highly useful simply to have a full dump of the target process image. Then try every 16/32-octet sequence in the image as a key candidate. This reduces an "impossible" problem (a key-space search) to a "20-minute" problem.
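
      As a hedged sketch of that key-candidate scan in Go (the dump, key and known plaintext below are hypothetical stand-ins, not data from any real attack): slide a 16-byte window across the image and test each window as an AES-128 key against one known plaintext/ciphertext pair.

        package main

        import (
            "bytes"
            "crypto/aes"
            "fmt"
        )

        // findKey tries every 16-byte window of a leaked memory image as an
        // AES-128 key, checking each against a known plaintext/ciphertext pair.
        func findKey(dump, ciphertext, knownPlain []byte) []byte {
            buf := make([]byte, aes.BlockSize)
            for i := 0; i+16 <= len(dump); i++ {
                block, err := aes.NewCipher(dump[i : i+16])
                if err != nil {
                    continue
                }
                block.Decrypt(buf, ciphertext)
                if bytes.Equal(buf, knownPlain) { // the "redundancy" check
                    return dump[i : i+16]
                }
            }
            return nil
        }

        func main() {
            // Hypothetical stand-ins for a real process image and traffic capture.
            key := []byte("0123456789abcdef")
            plain := []byte("known plaintext!") // exactly one AES block
            c, _ := aes.NewCipher(key)
            ct := make([]byte, aes.BlockSize)
            c.Encrypt(ct, plain)

            dump := append([]byte("...rest of the process image..."), key...)
            fmt.Printf("recovered key: %q\n", findKey(dump, ct, plain))
        }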

    2. Claptrap314 Silver badge

      So you've not bothered to fix Heartbleed on your systems?

      There are techniques for figuring out where interesting data lies--that's what all the address randomization stuff is supposed to help with.

      Yes, this is hard, or nigh-impossible, without the right tools. Get those tools, however, and it becomes a crapshoot. And generally, you don't need snake eyes to win.

  13. jason 53

    This means any new PC on the shelf (at, say, Best Buy) now or over the next 6 to 18 months should be recalled.

    Or at least sold with a warning that it is flawed.

    1. Anonymous Coward
      Anonymous Coward

      Well

      Your CPU works quite nicely as long as you do not run untrusted code from www.shadyAdFlinger.com and the like. It is as fast as a supercomputer was in the early 1990s, but you pay only $1000 for it.

      1. Claptrap314 Silver badge

        Re: Well

        Install uMatrix, and then explain to me how easy it is to do this. We are in serious trouble.

      2. Anonymous Coward
        Anonymous Coward

        Re: Well

        @ "Your CPU works quite nicely as long as you do not run untrusted code from www.shadyAdFlinger.com"

        Really? So all the websites with bitcoin-mining JS are doing it deliberately?

        Since there is no real way to avoid all malware (it only gets detected once it is recognised) and the OS assumes that the CPU is secure, then unless your machine is completely isolated you are just as vulnerable.

        That it may have capabilities similar to an obsolete supercomputer and costs "only" $1000 is as meaningful as saying that before cars existed people had to walk, while ignoring that the change created new perils the old pedestrians never experienced. Things have changed, and comparing speed/cost alone ignores all the good and bad consequences of that change, consequences that make the speed/cost ratio irrelevant.

  14. Anonymous Coward
    Anonymous Coward

    Fix A: Transputers

    Give each program its own transputer to do its respective work. We have more than enough silicon to do that. Connect transputers via fast transmission-line-type message links (not just TTL lines as in the original transputers).

    Then $EvilJavaScript cannot snoop on your Excel sheets.

    1. Anonymous Coward
      Anonymous Coward

      Re: Fix A: Transputers

      Your average computer runs HUNDREDS of processes at any one time. And MANY of them need the full resources of the system, so try cramming hundreds of Core i-class CPU cores on a single die.

  15. Anonymous Coward
    Anonymous Coward

    Fix B: Don't share Cloud CPUs

    Sharing "cloud" CPUs is obvouisly a risky thing. Rent one CPU to one customer at a time.

  16. Anonymous Coward
    Anonymous Coward

    Fix C: Disable JavaScript

    Disable JS for random sites and enable it only when required for work purposes, banking, mail, etc.

    1. Charles 9

      Re: Fix C: Disable JavaScript

      But isn't that exactly how they get you in drive-by attacks, poisoning "trusted" sites?

  17. Lion

    Trust

    'Intel, the chipmaker most affected by these flaws, incidentally just announced an extension of its bug bounty program – just through the end of 2018 – covering side-channel vulnerabilities, with awards of up to $250,000.'...

    That is an indication that they are aware that the current firmware fixes are deficient. It also tells me that their subsequent products will continue to be vulnerable if they do not redesign the chips. Intel have only released firmware fixes for Skylake and Kaby Lake systems, and their remaining product line (all of it) is still being evaluated. A stopgap, perhaps.

    I certainly get the impression that everyone is being manipulated by Intel. Did they release their updates merely to console their strategic partners, who have been left holding the bag? Also, Intel boldly announced that their product line will be free of Meltdown and Spectre vulnerabilities by the end of 2018. That appears to be more hubris than fact.

    If Intel is not being ethical in their response, they should be punished for it. Some big Cloud Providers have already suffered performance hits from Intel fixes and that could get even worse if more are required. Damage, large or small from potential exploits, will be the litmus test. The enterprise leases and the consumer buys their computing products, so there is a lot at stake. Trust is paramount.

    BTW, Intel made 'The 2018 World’s Most Ethical Companies' list, released by Ethisphere. To determine if a company is worth including on the list, Ethisphere calculates what is called an Ethics Quotient, an objective score that assesses each firm’s performance in five categories, as follows: ethics and compliance program (35 percent), corporate citizenship and responsibility (20 percent), culture of ethics (20 percent), governance (15 percent), and leadership, innovation and reputation (10 percent). Intel is in good company, as Microsoft made the list as well.

    1. Anonymous Coward
      Anonymous Coward

      Well

      Intel already has Itanium. So they might already have a "fix" in production. Tried and tested...

      (yes, I know, HP/Multiflow did the heavy lifting and then sold/gave it to Intel for breadcrumbs)

  18. Anonymous Coward
    Anonymous Coward

    Options

    Some cloud hosts already offer "bare metal" - just like plain old dedicated servers, with the quick setup and hourly rates of virtual servers. Expensive, though.

    Atom or ARM hosting should be cheaper. However, renting N cores on a 16- to 512-core machine might not be sufficient isolation.

    Earlier-generation Atom CPUs may be immune to Spectre/Meltdown; Intel used a 486-ish architecture to reduce battery consumption. Performance is "only" about 50% lower than a comparable speculative-execution CPU. I'd bet performance is actually higher relative to hardware and electricity costs - for massively parallel workloads.

    Unfortunately the current Atoms are vulnerable. One would imagine Intel is looking to revive the old architecture, if they can be reasonably certain other vulnerabilities won't be found in it. Could they make a 4GHz 64-core Atom for the price of an i5?

  19. Tom 7

    Right that does it

    Anyone know how to put a TItan on a raspberrypi?

    1. Anonymous Coward
      Anonymous Coward

      Re: Right that does it

      @"Anyone know how to put a TItan on a raspberrypi?" yes you just need to reverse the polarity

  20. Michael Wojcik Silver badge

    Yes, that's what a Spectre attack is

    variants of Meltdown and Spectre exploit code that can be used to conduct side-channel timing attacks

    All variants of Spectre are side-channel attacks. That's what Spectre is: a class of side-channel attacks using speculative execution.

    And Meltdown is a subclass of Spectre.

    While this new research is a solid contribution to the field, everyone already knew that coherence protocols were a target. They're mentioned in the original papers, along with a bunch of the other well-known timing side channels.

    1. Triumphantape

      Re: Yes, that's what a Spectre attack is

      Interesting, so this has been a known vulnerability for some time. I suspect that's all anyone needs to start a class action lawsuit, and following that when the stock drops, invest in Intel for the subsequent rise in value once they address the hardware issues.

      1. Anonymous Coward
        Anonymous Coward

        Re: when the stock drops

        "when the stock drops, invest in Intel for the subsequent rise in value once they address the hardware issues."

        Intel's CEO certainly seems to have followed the first part of that process:

        "Brian Krzanich, chief executive officer of Intel, sold millions of dollars' worth of Intel stock—all he could part with under corporate bylaws—after Intel learned of Meltdown and Spectre, two related families of security flaws in Intel processors." from e.g.

        https://arstechnica.com/information-technology/2018/01/intel-ceos-sale-of-stock-just-before-security-bug-reveal-raises-questions/

        The shares were sold (and the reason for sale was speculated on) in 2017 e.g.

        https://www.fool.com/investing/2017/12/19/intels-ceo-just-sold-a-lot-of-stock.aspx

        Nice work if you can get it.

      2. Claptrap314 Silver badge

        Re: Yes, that's what a Spectre attack is

        It has been a suspected vulnerability for a long time. There is a huge difference between the two. And by "a long time", I mean "nearly 20 years". And by "suspected", I mean "taught in every serious CS major".

        Now, follow me here. Suppose you are an ambitious graduate student. You know that the world's #1 supplier of CPUs has as its flagship product a processor with characteristics that the theory categorically states are vulnerable to this sort of attack. What do you do?

        The fact that this vulnerability was not identified (that we know of) until last year when 90% of the graduate students and professors of CS for the last twenty years had every reason to believe that it was out there and more than a little motivation to go after it should tell you something about just how hard it is to track down this class of bug without a roadmap.

        And go ahead and throw in select teams at IBM, AMD and, yes, Intel, who would be looking for these if for no other reason than to avoid being caught flat-footed if someone from one of the OTHER companies made an announcement. A much smaller group, but they would have much better tools at their disposal.

  21. Triumphantape

    So I can assume that anything under High Sierra is still vulnerable? How do these exploits affect Virtual Machines?

    1. amanfromMars 1 Silver badge

      Assume Nothing, BetaTest Everything

      So I can assume that anything under High Sierra is still vulnerable? How do these exploits affect Virtual Machines? ... Triumphantape

      They provide them with outstanding tools and almighty weapons, Triumphantape. Nothing more, nothing less.

  22. Anonymous Coward
    Anonymous Coward

    Fix D: EPIC / Itanium

    As far as I understand it, Itanium does not use speculative execution. Maybe the huge investment in this type of CPU was more useful than we thought up to now?

    Any expert opinions on this technological option ?

    1. Anonymous Coward
      Anonymous Coward

      Re: Fix D: EPIC / Itanium

      Yep, I said that a long way up there. IA64, VLIW, do the optimisation at compile-time, not run-time. Problem solved.

      Not so fast, Mr. Bond. Getting the compiler working and producing efficient code proved hard. And you still need to drain the pipeline on occasion.

      See Multiflow for the first attempt at commercialising the arch.

  23. Anonymous Coward
    Anonymous Coward

    Re. Fix D: EPIC Fail / Itanic

    NooooooooOOOOOOO!!!!!

    On the other hand, maybe all those "useless" AMD dual-core dinosaurs I have can be tested and sold, with boards, at a premium as "Spectre/Meltdown/Multiplicity proof", with Rowhammer/ASLR proofing designed into the BIOS/UEFI.

    Interesting aside, back in the day my ancient SN-25 had this "problem" with some games failing badly until the multicore issue was patched on *every* *single* *feckin* *game* using some clever code.

    Interestingly it turned out that the code introduced other problems like BF1942 crashing later on in the game due to a possible feedback loop causing a race condition (DDR2 rowhammer?!) which also got fixed.

  24. Anonymous Coward
    Anonymous Coward

    CPU Memory

    "Because accessing CPU memory is comparatively slow"

    What's that?

    1. Claptrap314 Silver badge

      Re: CPU Memory

      L1, L2, L3? Yep, even the L1 is slow by some important measures. That's why you see things like ERATs out there. You really, really don't want to wait on the L1 cache to serve up your page translations if you don't have to.
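
      A rough way to see "slow" for yourself (a sketch in Go; the exact numbers vary wildly by machine): walk the same array once in order and once as a random permutation cycle, so that every hop of the second walk is a likely cache miss.

        package main

        import (
            "fmt"
            "math/rand"
            "time"
        )

        func walk(next []int32) time.Duration {
            start := time.Now()
            p := int32(0)
            for i := 0; i < len(next); i++ {
                p = next[p] // each step depends on the last: latency-bound
            }
            _ = p
            return time.Since(start)
        }

        func main() {
            const n = 1 << 24 // ~64MB of int32s, far bigger than any L3
            next := make([]int32, n)

            for i := range next { // sequential ring: cache- and prefetcher-friendly
                next[i] = int32((i + 1) % n)
            }
            fmt.Println("sequential walk:", walk(next))

            for i := range next { // random single-cycle permutation (Sattolo's algorithm)
                next[i] = int32(i)
            }
            for i := n - 1; i > 0; i-- {
                j := rand.Intn(i)
                next[i], next[j] = next[j], next[i]
            }
            fmt.Println("random walk:    ", walk(next))
        }

      On typical desktop hardware the random walk comes out an order of magnitude slower, purely because each load has to wait on memory further down the hierarchy.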

  25. Anonymous Coward
    WTF?

    So where do we go from here - Just wake up firstly

    I don't need a 64-bit processor.

    I remember the story that when Intel told Microsoft it wanted to build a spiffy new non-x86 processor, MS said they would not support it, and Intel crawled back into their corner. A shame; Intel should have forged onward then, as the competitive landscape was different.

    It is not going sideways or backward to use sixteen 16-bit processors (16 x 16) and a master processor. We don't have to reinvent the cube and pretend we are aliens with another wonderful design.

    Async or in sync, we could then run 16-, 32- or 64-bit code when needed.

    I keep thinking about the days of CDs, when a retinue of manufacturers produced 8-bit DACs (digital-to-analogue converters) and Sony and Technics produced a 1-bit DAC that flew along, processing 8 bits more quickly than the 8-bit DACs did.

    Then there was a benchmarking program called Winbench that ran many benchmarks. On one, a 200MHz Dell Inspiron with dual (Pentium) processors screamed along at 2-bit reads, writes and processes, vastly superior to all others, but as the benchmark moved to larger sizes (4, 8, 16, 32... 1024 and 2048) the results plummeted and approached the lowly results of the other models (including my own tiny 90MHz AMI).

    Then there's Windows, which as it went from 16-bit to 32-bit had to resort to all kinds of silly tricks to mitigate the slowness of handling all the extra zeros (0), and Apple has recently announced warnings about code NOT being 64-bit but old, deprecated 32-bit.

    Why are we fooling ourselves? Every time we invent something large, the world goes small (see widescreen monitors and TVs, then we turn to watch screens on our arms), and mobile phones ran code with small processors.

    Now we find Intel has resorted to prediction (psychics) to increase processor speed by 30%.

    Well, Intel: many PC manufacturers and resellers clock down your chips as they pair them with slow memory. WTF are you trying to achieve against the tide?

    The internet is streamed in single bits, and we are not moving to parallel-bit transmission, like, forever, so:

    WAKE UP !!!

    1. Anonymous Coward
      Anonymous Coward

      Re: So where do we go from here - Just wake up firstly

      I started off replying rationally to this post, then I thought: everybody knows it's nonsense but the author, and I suspect he's fact-proof.

      Please, go away and read a book about computer design.

      1. Anonymous Coward
        Anonymous Coward

        Re: So where do we go from here - Just wake up firstly

        Thanks, I'll read the book of practical computer application, and the list of great tech that has been scrapped as corporations killed the quality to suck the $$$ from it.

    2. Anonymous Coward
      Anonymous Coward

      Re: So where do we go from here - Just wake up firstly

      Using large storage with small bit devices and operating systems:

      Just an addendum to say that

      In the same way as printer and scanner manufacturers got fed up with Microsoft bullying them into letting the kernel run the printer and scanner, and produced even more capable printers and scanners able to operate without a PC or a separate operating system, hard drive manufacturers could build independent storage systems, 64-bit or so, that would handle large addresses internally, with large buffers and caches, while still being fed over SATA or USB 3 by operating systems of any bit size. We would just tell the drive to store the data (not how to store it) and it would do so. We would use a simple config to divide it up.

      We are now using SSDs and NAS; it's not that far away.

      This would allow Intel and AMD to create the SWIFT processor, 16 x 16-bit, and make our devices fly.

  26. wownwow

    Not even talking about whateverPrime yet: for Intel chips other than just SEVERAL (not all) Skylake-based platforms, where is the mitigation for Spectre Variant 2?

    180127 -- Critical Windows update (KB4078130) to DISABLE the mitigation against Spectre Variant 2.

    180207 -- Intel released production microcode updates for just SEVERAL (not all) Skylake-based platforms.

    ?????? -- Windows update for Spectre Variant 2?

  27. secboffin

    Don't worry everyone, Artificial Intelligence will somehow solve this on its own... Right? Right?
