The Arm Cortex-A53 is still current in many mid-range phones and, AFAIK, doesn't have out-of-order execution. The same goes for the A55. That should mean that Raspberry Pis and many mid-range Androids are not affected by Spectre.
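For anyone wondering why out-of-order execution is the crux: the classic Spectre variant 1 attack depends on the CPU speculatively running the body of a bounds check before the comparison resolves. A minimal sketch of the vulnerable pattern in C (array names are illustrative, in the style of the published proof-of-concept code; this is not a working exploit):

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 512];   /* probe array: one cache line per byte value */
size_t  array1_size = 16;

/* The classic Spectre v1 gadget shape: on an out-of-order CPU, the load
 * from array1[x] and the dependent load from array2[...] can execute
 * speculatively even when x >= array1_size, leaving a cache footprint
 * indexed by out-of-bounds memory. An in-order core only issues the
 * loads once the branch is resolved, so nothing leaks. */
uint8_t victim_function(size_t x) {
    if (x < array1_size) {
        return array2[array1[x] * 512];
    }
    return 0;
}
```

Architecturally the function is perfectly correct; the leak on speculative cores happens entirely in the cache side channel, which is why it took twenty years to notice.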
The Spectre processor design vulnerability is here to stay. Even if you choose to ignore it, the problem still exists. This is potentially a very bad thing for public cloud vendors. It may end up being great for chip manufacturers. It's fantastic for VMware. Existing patches can fix Meltdown, but only seem to be able to …
Raspberry Pi is indeed immune to Spectre:
So, it seems, is Intel Itanium. See comments at:
Neither Pi nor Itanium is particularly fast, but I posit that most computers in use today are not CPU-bound so it doesn't matter much. Where CPU is crucial, there are often opportunities for parallelism as already mentioned.
In the 1990s, one definition of a supercomputer was: a computer that turns a CPU-bound task into an I/O-bound task. If we expand "I/O" to include reads/writes over the Internet, almost anything I do at home is limited more by slow I/O than by CPU speed. I went through a phase of using a Raspberry Pi as my home computer, and it was not too bad. I gave up in the end, mainly because of the low RAM on current versions of the Pi and the absence of MS Office to read attachments.
The main problem with using either widely may be that neither ARM (as on the Pi) nor Itanium is binary-compatible with x86-64. This isn't an insuperable problem, but it implies effort spent re-compiling and/or developing emulators/translators.
".....So, it seems, is Intel Itanium....." There is good information in
this explanation by Theresa Degroote at Secure64 of why Itanium's EPIC architecture is immune to Spectre and Meltdown. But it's unlikely that Intel will be shoe-horning Itanium's EPIC architecture into a Xeon package, or that anyone will be rushing out to replace all their Xeon servers with existing Itanium ones. The problem is - and always has been for Itanium - that its architecture is more expensive to fabricate than x86-64. It would be pretty trivial for Microsoft to get Windows Server 2016 booting on Itanium; the question is whether Microsoft could be bothered. Getting the OS to boot is just one problem; after that you have to get all your applications rewritten for the Itanium version of Windows, or accept the probable performance hit of x86-64 emulation on Itanium. After all, the OS and app vendors can simply wait for Intel to temporarily gin up the current Xeon designs with a die-shrink performance boost to alleviate any Spectre-fix hit, a stopgap until the next generation of Spectre-proofed Xeons is designed. Unfortunately for AMD, they already seem to be right on the bleeding edge of die shrinkage, so recovering from Spectre-fix performance hits that way is likely to be harder for them.
The Raspberry Pi is more expensive than a core i7 - when measured in performance per £.
Also the Beowulf cluster of Raspberry Pis was a nice hack, but impractical for almost anything since the networking on a RPi is so slow (it goes over the USB interface).
Of course, you could put a bunch of ARM A53 on a die in an advanced technology node, with fast interconnect, and get something which would be like a modern-day transputer. Then you merely need to rewrite your software to be efficiently multi-threadable.
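To give a feel for the software side of that: on a die full of small in-order cores, even a trivial loop has to be carved into explicitly threaded chunks. A hedged sketch using POSIX threads (the worker count and array size are arbitrary choices for illustration):

```c
#include <pthread.h>
#include <stddef.h>

#define NWORKERS 4
#define LEN      1000000

static long data[LEN];

struct slice { long *start; size_t len; long sum; };

/* Each worker sums its own slice independently - the "embarrassingly
 * parallel" restructuring a transputer-style machine demands. */
static void *partial_sum(void *arg) {
    struct slice *s = arg;
    long total = 0;
    for (size_t i = 0; i < s->len; i++)
        total += s->start[i];
    s->sum = total;
    return NULL;
}

long parallel_sum(void) {
    pthread_t tid[NWORKERS];
    struct slice sl[NWORKERS];
    size_t chunk = LEN / NWORKERS;

    for (int w = 0; w < NWORKERS; w++) {
        sl[w].start = data + w * chunk;
        /* last worker picks up any remainder */
        sl[w].len = (w == NWORKERS - 1) ? LEN - w * chunk : chunk;
        pthread_create(&tid[w], NULL, partial_sum, &sl[w]);
    }

    long total = 0;
    for (int w = 0; w < NWORKERS; w++) {
        pthread_join(tid[w], NULL);
        total += sl[w].sum;   /* combine partial results */
    }
    return total;
}
```

A reduction like this parallelises cleanly; the hard part, as the Transputer crowd found, is the large class of software that doesn't.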
"I'm afraid that does not help much, since what we need is an in-order CPU that is also fast!"
For some value of "need".
We've gone in the direction of virtualisation essentially because we can. That doesn't mean it is the best long term solution. The Transputer, IIRC, represented a different approach based on throwing lots of CPUs at a problem - parallelism with fast interconnect. Sure, not everything can be parallelised. But if you have one cluster of N CPUs running Y instances where Y >> N, surely you could have a managed cluster of Y CPUs each running one instance? Of course it would mess with licensing and the like, but these are not computer science constructs, they are just ones designed to please Wall Street.
To use my favourite car analogy, for years manufacturers got more performance with more cylinders and bigger capacity. Then along came the small, efficient turbocharger and advanced simulation and engine management, and suddenly engines were getting smaller, with fewer cylinders, and more powerful. The technology changed to meet new conditions.
If this all causes a major rethink of computer architecture, it may be a big blessing in disguise.
"Raspberry Pis and mid-range Androids aren't affected by Spectre."
You aren't implying they're a secure solution are you?
Yeah, single-threaded non-speculative processors aren't susceptible to Spectre, but they're susceptible to many, many other vulnerabilities, both publicly known and still classified.
Amazing how many of us overlook the fact that our own governments are doing stuff like this to their allies, even their own citizens.
Yes - the NSA, GCHQ, Mossad: more than any other intelligence agencies, they're likely to have known about this for years or decades, maybe even before the hardware first shipped.
That they're on our side doesn't mean we should leave them off the list.
One way to mitigate the Spectre problem (at a cost) for public cloud providers: do not share CPUs between customers. If only one customer's code runs on any CPU at a given time, then the problem of Spectre allowing reading of data from other VMs is greatly reduced.
For big cloud jobs reserving a number of physical CPUs would not impose very much inefficiency but for small jobs that only need one or two cores reserving a whole CPU (with possibly over 10 cores and hyperthreading) would greatly affect the economics.
It would not surprise me to find Amazon and Microsoft adding the option (at a price) of having dedicated CPUs for customers that are concerned about data security. (Though that raises the question - WHY use a public cloud if you care about data security?)
Except that a lot of people use cloud for high availability.
Not to mention on-demand scalability, something that is important if you have a website that has occasional surges in demand, but you don't want the outlay for tens or hundreds of times the computing power needed for your typical load, which would otherwise sit there idle.
You know, the actual reason we have cloud computing...
"For reliability you want your VMs spread across hosts and data centers."
There's no reason why you can't have your dedicated iron spread across several locations. Still, I'm not sure that the article's optimism is well placed. Whilst you may not be sharing iron with other customers, you are sharing it with your VM provider. That provider is still "at risk" from whoever they rent the iron to. Furthermore, as I understand it there is no way to *detect* that you are under attack from Spectre.
Against that, it is probably true that an outfit like VMware can afford to replace all their kit as soon as safe hardware is available and, as valued customers, will be at the front of the delivery queue.
For reliability you want your VMs spread across hosts and data centers.
For security, you might not!
If your organisation is big enough to have more than one building, you can have a server closet in each. Hell, if you are a CEO, you probably have several closets big enough to hold a rack full of servers, and desperately need a reason why your entire mansion should be a tax-deductible expense: put an Enterprise-scale server in one and network it to your galactic HQ. It justifies the cost of food for the enormous, man-eating dog you need for security. Saves on the heating bill too! With some creative accounting, it probably even covers a pink pony for your daughter as well.
(But remember 77dB is QUITE LOUD!)
State-sponsored actors absolutely have the resources to produce malware to exploit Spectre.
I would be surprised if some did not already have the tools/malware. But as we well know they cannot keep their toys in their playpen, so we have to expect that other ne'er do wells will acquire them.
They have widely used tools, tools they know will be discovered by the other side because they're used so much and so many people internally have access to them.
And they have tools kept in reserve and only used sparingly.
Given that each of our intelligence agencies has many times more people dedicated to finding such vulnerabilities and exploiting them than Google does, and that they've all been at this game far longer, I fully suspect that Spectre and Meltdown were discovered long ago and have been used in some of those tools kept in reserve.
vCPU pinning is well known, but it makes load balancing difficult. Regular load balancing is based on the assumptions that you can pin more than one vCPU to a single core, and that you can pin vCPUs from multiple VMs to cores on one physical CPU. Those assumptions need to go out of the window now.
I expect AWS, Azure, etc. will start offering a new tier of services where they indeed guarantee that only your VMs run on any single physical CPU, but this is going to be expensive (you pay for more vCPUs than strictly needed), or slow (poor load balancing), or both.
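For what it's worth, the per-customer pinning described above already exists at the hypervisor level; on a KVM/libvirt host it's a few lines of domain XML (the core numbers below are purely illustrative):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- Pin each vCPU to its own physical core; the operator must also
       ensure no other guest is ever pinned to cores 2-5 on this host. -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
</cputune>
```

The mechanism is trivial; the economics of leaving whole sockets exclusive to one tenant are the hard part, as noted.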
The article is correct, and the fallout from this will continue to be enormous. I have little confidence that some highly capable actors haven't already started very quietly raiding juicy targets. There are bad things happening now that we'll find out about in six or 36 months' time, when we'll say "Duh, of course".
Technical issues aside, I do think there's a moral in here somewhere too. "Cloud" has been relentlessly overhyped as a solution for everything, and its operators have worked tirelessly to sucker customers in, playing up performance, playing down security worries, all the while trying to squeeze every last drop of cash from punters while cutting their own costs. The promise to ghastly beancounters slavering over their next bonus has been irresistible and companies have, often with dangerous haste and poor preparation, tried to offload costs, worries and skills to "Anything Cheaper".
Now, it's not entirely fair to say that "Cloud" is just "servers-in-a-datacentre" - if nothing else, the former opens up yet another dangerous security compromise - but it's not completely wrong either. Beancounters: you believed the 'Good+Cheap+Quick' marketurds' spiel, didn't think hard enough about downsides (security and privacy risks that many folks much more knowledgeable than I have been going on about for years now) and so today ... well, to coin a phrase, the skeletons are coming home to roost.
Just as you can stipulate that, say, no one with serious security needs would consider SMS-based 2FA, I suggest you could also state that no one with data of real importance or value would keep it on a shared platform in the "cloud".
"Beancounters: you believed the 'Good+Cheap+Quick' marketurds' spiel, didn't think hard enough about downsides (security and privacy risks that many folks much more knowledgeable than I have been going on about for years now) and so today ... well, to coin a phrase, the skeletons are coming home to roost."
On the other hand, it's all one big house of cards. It only takes one bean counter to realise that cheap works for the majority and that's where it all goes. If your competitors don't follow you down that road, they'll go bust. This applies across most of industry, goods and services. There's usually some small niche at the top for quality, lots of cheap tat at the bottom, and not much in between.
In many ways, VMware on AWS may just be the ultimate solution here. After all, it is hardware dedicated to just you.
You don't need VMware for that. AWS already offer dedicated instances which are guaranteed not to share hardware with any other customer, but otherwise are managed exactly like regular EC2. Sadly, the small and tiny instance types are not available.
You pay a premium of $2 per hour per region, so about $17,500 per year, for the privilege. Still, in the wake of Spectre, I expect business to be brisk.
VMware on AWS costs $51,987 per year per host (if you pay 1 year in advance). Ouch. That gets you a single 2 CPU, 36 core, 512GB RAM box; clearly you'll want at least two for some sort of real-time redundancy.
Traditional data centre hosting starts to look attractive again.
Oops: for VMware on AWS, "Minimum required configuration is 4 hosts per cluster". So you are looking at minimum spend of $207,948 per year, plus data transfer and IP address charges.
There is a "hybrid loyalty discount" of "up to" 25% if you already have the full ESX stack licenced and in use on-prem (vSphere, vSAN, NSX).
Makes me glad that my employer won't even consider cloud computing.
My current employer won't consider it on security grounds.
My previous employer wouldn't consider it, because it is "their" data and therefore needs to be on "their" hardware on "their" site, regardless of any arguments to the contrary.
Biting the hand that feeds IT © 1998–2020