You can't ignore Spectre. Look, it's pressing its nose against your screen

The Spectre processor design vulnerability is here to stay. Even if you choose to ignore it, the problem still exists. This is potentially a very bad thing for public cloud vendors. It may end up being great for chip manufacturers. It's fantastic for VMware. Existing patches can fix Meltdown, but only seem to be able to …

  1. Anonymous Coward

    Arm A53

    The Arm A53 is still current in many mid-range phones and AFAIK doesn't have out-of-order execution. The same goes for the A55. AFAIK this should mean that Raspberry Pis and many mid-range Androids are not affected by Spectre.

    1. MacroRodent Silver badge

      Re: Arm A53

      I'm afraid that does not help much, since what we need is an in-order CPU that is also fast!

      1. Dan 55 Silver badge

        Re: Arm A53

        It's all about the parallelism these days. You can make a Beowulf cluster of Pis.

        1. Anonymous Coward

          Re: Arm A53

          Raspberry Pi is indeed immune to Spectre:

          https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/

          So, it seems, is Intel Itanium. See comments at:

          https://forums.theregister.co.uk/forum/1/2018/01/25/intel_spectre_disclosed_flaws_november/

          Neither Pi nor Itanium is particularly fast, but I posit that most computers in use today are not CPU-bound so it doesn't matter much. Where CPU is crucial, there are often opportunities for parallelism as already mentioned.

          In the 1990s, one definition of a supercomputer was: a computer that turns a CPU-bound task into an I/O-bound task. If we expand "I/O" to include reads/writes over the Internet, almost anything I do at home is limited more by slow I/O than by CPU speed. I went through a phase of using a Raspberry Pi as my home computer, and it was not too bad. I gave up in the end, mainly because of low RAM on current versions of the Pi and the absence of MS Office to read attachments.

          The main problem with using either widely may be that neither ARM (as on the Pi) nor Itanium is binary-compatible with x86-64. This isn't an insuperable problem, but it implies effort re-compiling and/or developing emulators/translators.

          1. Gezza

            Re: Arm A53

            The transputer's time has come. Step forward Tony Fuge and the Inmos posse. (It's always the Brits who turn out to have been right in the end.)

          2. Jonathan Schwatrz
            Boffin

            Re: DanielBarker Re: Arm A53

            ".....So, it seems, is Intel Itanium....." There is good information in this explanation by Theresa Degroote at Secure64 of why Itanium's EPIC architecture is immune to Spectre and Meltdown. But it's unlikely that Intel will be shoe-horning Itanium's EPIC architecture into a Xeon package, or that anyone will be rushing out to replace all their Xeon servers with existing Itanium ones. The problem is - and always has been for Itanium - that its architecture is more expensive to fabricate than x86-64. It would be pretty trivial for Microsoft to get Windows Server 2016 booting on Itanium; the question is whether Microsoft would be bothered to. Getting the OS to boot is just one problem; after that you have to get all your applications rewritten for the Itanium version of Windows, or accept the probable performance hit of x86-64 emulation on Itanium. After all, the OS and app vendors can simply wait for Intel to temporarily gin up the current Xeon designs with a die-shrink performance boost to alleviate any Spectre fix hit, a temporary cover until the next generation of Spectre-proofed Xeons is designed. Unfortunately for AMD, they seem to be right on the bleeding edge of die shrinkage, so recovering from Spectre fix performance hits is likely to be harder for them.

        2. kryptylomese

          Re: Arm A53

          And they are way less powerful than an equivalent costing Intel machine!

          1. Dan 55 Silver badge
            Trollface

            Re: Arm A53

            And they are way less powerful than an equivalent costing Intel machine!

            Was that before or after Spectre/Meltdown patches?

          2. Daniel 18

            Re: Arm A53

            You miss the point.

            All machines wait at the same speed.

            As long as a machine is fast enough for what you are doing, additional speed is of no real benefit.

        3. stephanh

          Re: Arm A53

          The Raspberry Pi is more expensive than a Core i7 - when measured in performance per £.

          Also the Beowulf cluster of Raspberry Pis was a nice hack, but impractical for almost anything since the networking on a RPi is so slow (it goes over the USB interface).

          Of course, you could put a bunch of ARM A53 on a die in an advanced technology node, with fast interconnect, and get something which would be like a modern-day transputer. Then you merely need to rewrite your software to be efficiently multi-threadable.
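          A minimal sketch (ours, not from the thread) of the kind of rework "efficiently multi-threadable" implies: splitting an embarrassingly parallel job across cores with nothing but the Python standard library. The `crunch` workload is a hypothetical stand-in.

          ```python
          from multiprocessing import Pool

          def crunch(n):
              """Stand-in for a per-chunk workload (hypothetical)."""
              return sum(i * i for i in range(n))

          if __name__ == "__main__":
              # Eight independent chunks of work, spread over four worker processes.
              with Pool(processes=4) as pool:
                  results = pool.map(crunch, [10_000] * 8)
              print(len(results))
          ```

          The hard part, of course, is not the `Pool.map` call but restructuring real software so the chunks genuinely are independent.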

          1. Doctor Syntax Silver badge

            Re: Arm A53

            "Then you merely need to rewrite your software to be efficiently multi-threadable."

            You might need to do that anyway if the existing architectures need to be re-done with less out of order processing.

            1. Charles 9 Silver badge

              Re: Arm A53

              But some stuff is too interdependent for efficient multiprocessing, like video encoding.

      2. Anonymous Coward

        Re: Arm A53

        "I'm afraid that does not help much, since what we need is an in-order CPU that is also fast!"

        For some value of "need".

        We've gone in the direction of virtualisation essentially because we can. That doesn't mean it is the best long term solution. The Transputer, IIRC, represented a different approach based on throwing lots of CPUs at a problem - parallelism with fast interconnect. Sure, not everything can be parallelised. But if you have one cluster of N CPUs running Y instances where Y >> N, surely you could have a managed cluster of Y CPUs each running one instance? Of course it would mess with licensing and the like, but these are not computer science constructs, they are just ones designed to please Wall Street.

        To use my favourite car analogy, for years manufacturers got more performance with more cylinders and bigger capacity. Then along came the small, efficient turbocharger and advanced simulation and engine management, and suddenly engines were getting smaller, with fewer cylinders, and more powerful. The technology changed to meet new conditions.

        If this all causes a major rethink of computer architecture, it may be a big blessing in disguise.

    2. WatAWorld

      Re: Arm A53

      "Raspberry Pis and mid-range Androids aren't affected by Spectre."

      You aren't implying they're a secure solution are you?

      Yeah single threaded non-speculative processors aren't susceptible to Spectre, but they're susceptible to many many other publicly known and still classified vulnerabilities.

      1. This post has been deleted by its author

  2. Anonymous Coward

    State-sponsored actors

    Can I be the first to nominate North Korea, Russia or China?

    To be honest, I would be surprised if there aren't already tools to abuse this flaw because, let's be honest, if you are exploiting it you're going to do your best to keep it on the down-low.

    1. el kabong

      Re: State-sponsored actors

      NSA, anyone?

    2. Zippy's Sausage Factory

      Re: State-sponsored actors

      I'd also add the USA and Israel to that list. They're almost certainly up to no good, I'm sure...

      1. RegGuy1 Silver badge
        Facepalm

        Re: State-sponsored actors

        And the UK.

        Oops! Silly me. They are too tied up with Brexit to notice. Getting out of Europe is far more important.

        1. Tigra 07 Silver badge
          Coat

          Re: State-sponsored actors

          Pfft. Like UK spooks could manage something like this. Ours are busy doing dodgy shit in hotel rooms and turning up dead in suitcases...The ones that aren't dodgy are Johnny English...

          Mine's the bag with the dead 007 in it.

          1. Bronek Kozicki Silver badge
            Black Helicopters

            Re: State-sponsored actors

            That's what GCHQ wants you to think ...

            1. Yet Another Anonymous coward Silver badge

              Re: State-sponsored actors

              It's time for Britain to step up and state sponsor its own actors.

              I nominate Judi Dench and Daniel Craig

              1. Will Godfrey Silver badge
                Unhappy

                Re: State-sponsored actors

                @WatAWorld

                What makes you think they are on 'our' side?

    3. WatAWorld

      Re: State-sponsored actors

      Amazing how many of us overlook the fact that our own governments are doing stuff like this to their allies, even their own citizens.

      Yes, the NSA, GCHQ, Mossad, more than any other intelligence agencies they're likely to have known about this for years, decades, or maybe even before the hardware first shipped.

      That they're on our side doesn't mean we should leave them off the list.

  3. Duncan Macdonald Silver badge

    No shared CPUs

    One way to mitigate the Spectre problem (at a cost) for public cloud providers - do not share CPUs between customers. If only one customer's code runs on any CPU at a given time, then the problem of Spectre allowing reading of data from other VMs is greatly reduced.

    For big cloud jobs reserving a number of physical CPUs would not impose very much inefficiency but for small jobs that only need one or two cores reserving a whole CPU (with possibly over 10 cores and hyperthreading) would greatly affect the economics.

    It would not surprise me to find Amazon and Microsoft adding the option (at a price) of having dedicated CPUs for customers that are concerned about data security. (Though that begs the question - WHY use a public cloud if you care about data security?)

    1. Gordon 10 Silver badge
      FAIL

      Re: No shared CPUs

      Errr - that's exactly what the article said. Did you read it?

    2. Dan 55 Silver badge

      Re: No shared CPUs

      Or why use a public cloud if it's just a server dedicated for your own use? You might as well just use... your own server.

      1. Loud Speaker

        Re: No shared CPUs

        You might as well just use... your own server.

        but ...

        DevOps

    3. HereIAmJH

      Re: No shared CPUs

      Except that a lot of people use cloud for high availability. If you're going to throw all your hosted VMs on a single host you have just created a single point of failure. For reliability you want your VMs spread across hosts and data centers.

      1. kryptylomese

        Re: No shared CPUs

        You can do that with a hyper-converged solution, e.g. Proxmox, for very little outlay!

      2. Loyal Commenter Silver badge

        Re: No shared CPUs

        Except that a lot of people use cloud for high availability.

        Not to mention on-demand scalability, something that is important if you have a website that has occasional surges in demand, but you don't want the outlay for tens or hundreds of times the computing power needed for your typical load, which would otherwise sit there idle.

        You know, the actual reason we have cloud computing...

        1. Doctor Syntax Silver badge

          Re: No shared CPUs

          "You know, the actual reason we have cloud computing."

          So what about all the other situations where it's being used?

      3. Ken Hagan Gold badge

        Re: No shared CPUs

        "For reliability you want your VMs spread across hosts and data centers."

        There's no reason why you can't have your dedicated iron spread across several locations. Still, I'm not sure that the article's optimism is well placed. Whilst you may not be sharing iron with other customers, you are sharing it with your VM provider. That provider is still "at risk" from whoever they rent the iron to. Furthermore, as I understand it there is no way to *detect* that you are under attack from Spectre.

        Against that, it is probably true that an outfit like VMware can afford to replace all their kit as soon as safe hardware is available and, as valued customers, will be at the front of the delivery queue.

      4. Loud Speaker

        Re: No shared CPUs

        For reliability you want your VMs spread across hosts and data centers.

        For security, you might not!

        If your organisation is big enough to have more than one building, you can have a server closet in each. Hell, if you are a CEO, you probably have several closets big enough to hold a rack full of servers, and desperately need a reason why your entire mansion should be a tax-deductible expense: put an Enterprise-scale server in one and network it to your galactic HQ. It justifies the cost of food for the enormous, man-eating dog you need for security. Saves on the heating bill too! With some creative accounting, it probably even covers a pink pony for your daughter as well.

        (But remember 77dB is QUITE LOUD!)

    4. Jonathan Schwatrz

      Re: Duncan Macdonald Re: No shared CPUs

      "....do not share CPUs between customers...." You can already buy cloud services where you get dedicated hardware, even down to dedicated networking and storage, but it is more expensive.

  4. alain williams Silver badge

    State actors = malware developers

    State-sponsored actors absolutely have the resources to produce malware to exploit Spectre.

    I would be surprised if some did not already have the tools/malware. But as we well know they cannot keep their toys in their playpen, so we have to expect that other ne'er-do-wells will acquire them.

    1. WatAWorld

      Re: State actors = malware developers

      They have widely used tools, tools they know will be discovered by the other side because they're used so much and so many people internally have access to them.

      And they have tools kept in reserve and only used sparingly.

      Given that each of our intelligence agencies has many times more people dedicated to finding such vulnerabilities and exploiting them than Google does, and that they've all been at this game far longer, I fully suspect that Spectre and Meltdown were discovered and have been used by some of those tools that have been kept in reserve.

      1. Bronek Kozicki Silver badge

        Re: State actors = malware developers

        vCPU pinning is well known, but it makes load balancing difficult. Regular load balancing is based on the assumption that you can always pin more than one vCPU to a single core, and that you pin vCPUs from multiple VMs to cores on one physical CPU. These assumptions need to go out of the window now.

        I expect Amazon, AWS, Azure etc. will start offering a new tier of services where they indeed guarantee that only your VMs run on any single physical CPU, but this is going to be expensive (you pay for more vCPUs than strictly needed), or slow (poor load balancing), or both.
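        The same pinning idea can be sketched from inside an ordinary process on Linux - a minimal, Linux-only illustration of restricting execution to chosen host cores, not anything the cloud providers actually ship; the function name is ours.

        ```python
        import os

        def pin_to_cores(cores):
            """Restrict the calling process to the given host cores (Linux-only)
            and return the affinity mask actually in effect."""
            os.sched_setaffinity(0, cores)   # pid 0 means "the calling process"
            return os.sched_getaffinity(0)

        if __name__ == "__main__":
            # Pin to one core we already know we are allowed to run on.
            first_core = min(os.sched_getaffinity(0))
            print(pin_to_cores({first_core}))
        ```

        Hypervisors do the equivalent for vCPU threads; the trade-off the comment describes is that once a core is reserved for one tenant, the scheduler can no longer pack other tenants' work onto it.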

  5. Milton

    Reaping what you sow

    The article is correct, and the fallout from this will continue to be enormous. I have little confidence that some highly capable actors haven't already started very quietly raiding juicy targets. There are bad things happening now that we'll find out about in six or 36 months' time, when we'll say "Duh, of course".

    Technical issues aside, I do think there's a moral in here somewhere too. "Cloud" has been relentlessly overhyped as a solution for everything, and its operators have worked tirelessly to sucker customers in, playing up performance, playing down security worries, all the while trying to squeeze every last drop of cash from punters while cutting their own costs. The promise to ghastly beancounters slavering over their next bonus has been irresistible and companies have, often with dangerous haste and poor preparation, tried to offload costs, worries and skills to "Anything Cheaper".

    Now it's not entirely fair to say that "Cloud" is distinct from "servers-in-a-datacentre" mostly because the former opens up yet another dangerous security compromise—but it's not completely wrong either. Beancounters: you believed the 'Good+Cheap+Quick' marketurds' spiel, didn't think hard enough about downsides (security and privacy risks that many folks much more knowledgeable than I have been going on about for years now) and so today ... well, to coin a phrase, the skeletons are coming home to roost.

    Just as you can stipulate that, say, no one with serious security needs would consider SMS-based 2FA, I suggest you could also state that no one with data of real importance or value would keep it on a shared platform in the "cloud".

    1. SquidEmperor

      Re: Reaping what you sow

      I think your last sentence is really the whole point. Regardless of where your server sits unless you have an air-gap between it and ??? you are vulnerable.

    2. John Brown (no body) Silver badge

      Re: Reaping what you sow

      "Beancounters: you believed the 'Good+Cheap+Quick' marketurds' spiel, didn't think hard enough about downsides (security and privacy risks that many folks much more knowledgeable than I have been going on about for years now) and so today ... well, to coin a phrase, the skeletons are coming home to roost."

      On the other hand, it's all one big house of cards. It only takes one bean counter to realise that cheap works for the majority and that's where it all goes. If your competitors don't follow you down that road, they'll go bust. This applies across most of industry, goods and services. There's usually some small niche at the top for quality, lots of cheap tat at the bottom, and not much in between.

    3. Loud Speaker

      Re: Reaping what you sow

      the skeletons are coming home to roost.

      featuring Wallace and Gromit?

  6. Anonymous Coward

    The good news is it's not being exploited in the wild yet; the bad news is when.

    1. Warm Braw Silver badge

      The good news is it's not being exploited in the wild yet

      It would be difficult to prove that assertion...

      1. ma1010

        So true!

        The good news is it's not being exploited in the wild yet

        It would be difficult to prove that assertion...

        Eventually we might find out that NSA/GCHQ/FSB, etc have known about and been exploiting this for a while.

    2. SquidEmperor

      The bad news is if it was being exploited in the wild you wouldn't know.

  7. Crypto Monad

    Dedicated instances

    In many ways, VMware on AWS may just be the ultimate solution here. After all, it is dedicated hardware to just you.

    You don't need VMware for that. AWS already offer dedicated instances which are guaranteed not to share hardware with any other customer, but otherwise are managed exactly like regular EC2. Sadly, the small and tiny instance types are not available.

    You pay a premium of $2 per hour per region, so about $17,500 per year, for the privilege. Still, in the wake of Spectre, I expect business to be brisk.

    VMware on AWS costs $51,987 per year per host (if you pay 1 year in advance). Ouch. That gets you a single 2 CPU, 36 core, 512GB RAM box; clearly you'll want at least two for some sort of real-time redundancy.

    Traditional data centre hosting starts to look attractive again.

    1. Crypto Monad

      Re: Dedicated instances

      Oops: for VMware on AWS, "Minimum required configuration is 4 hosts per cluster". So you are looking at minimum spend of $207,948 per year, plus data transfer and IP address charges.

      There is a "hybrid loyalty discount" of "up to" 25% if you already have the full ESX stack licenced and in use on-prem (vSphere, vSAN, NSX).
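      A quick sanity check of the figures quoted above (rates as stated in these comments; AWS pricing changes, so treat them as a snapshot, not current prices):

      ```python
      HOURS_PER_YEAR = 24 * 365                 # 8,760 hours

      # $2/hour per-region dedicated-instance fee -> roughly the quoted $17,500/yr
      dedicated_fee = 2 * HOURS_PER_YEAR

      # VMware on AWS: $51,987 per host per year, 4-host minimum cluster
      vmware_cluster = 4 * 51_987

      print(dedicated_fee, vmware_cluster)
      ```

      So the "about $17,500" and "$207,948" figures are internally consistent with the stated hourly and per-host rates.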

    2. big_D Silver badge

      Re: Dedicated instances

      Makes me glad that my employer won't even consider cloud computing.

      My current employer won't consider it on security grounds.

      My previous employer won't consider it, because it is "their" data and therefore needs to be on "their" hardware on "their" site, regardless of any arguments to the contrary.

      1. Yet Another Anonymous coward Silver badge

        Re: Dedicated instances

        My current employer won't consider it on security grounds.

        And they have you to secure it - and you are better at cybersecurity than all the people at Google and Amazon. Or is the entire data center air-gapped and in an underground bunker somewhere?

Biting the hand that feeds IT © 1998–2020