Some poor soul is going to get Special High Intensity Training for this. They might as well charge it through the ARSE contract while they're at it.
This is Oracle we're talking about. They're probably just waiting for widespread adoption before suing everyone running Linux for having their dubiously licensed kit. I'm sure they'll be merciful and allow their victims to buy Oracle Linux licenses as a remedy (one per installed kernel, just to be safe).
Updated version of their logo ->
I was wondering about that too. Given that threads already have access to all the data in a process, restricting SMT to threads with the same process ID presumably wouldn't allow any new channels for information leakage.
I don't know exactly how Intel's hyperthreads are managed, but I assume the OS decides which software threads get paired on a core, as on other SMT CPUs. While restricting that pairing to a single process would be a big performance penalty for virtualised workloads and things like web browsers, it would probably be a cheap or even free mitigation for compute-intensive, single-process workloads like rendering, simulation, etc.
We might once again live in a world where people stop bolting multi-process onto their programs when multi-threading would do.
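For the curious, a program could opt in to something like this from userland on Linux today: park its threads on the two logical CPUs of one physical core, so the only SMT sibling each thread shares state with belongs to the same process. Just a sketch; the claim that logical CPUs 0 and 4 are siblings is my assumption, so check /sys/devices/system/cpu/cpu0/topology/thread_siblings_list on your box first.

```c
/* Sketch: keep two threads of one process on the two logical CPUs of a
 * single physical core, so each thread's SMT sibling belongs to the same
 * process. CPU numbers 0 and 4 are an assumed sibling pair -- verify via
 * /sys/devices/system/cpu/cpu0/topology/thread_siblings_list.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *worker(void *arg)
{
    pin_to_cpu((int)(long)arg);
    /* ... compute-heavy work goes here ... */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)0L); /* assumed sibling 0 */
    pthread_create(&b, NULL, worker, (void *)4L); /* assumed sibling 4 */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

(Build with -pthread.)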
I don't think Spectre is really the fault of the MMU. It's more that the caches leak program state via the side effects of speculatively executed instructions, and other processes can infer that state with careful timing analysis.
The article hints at the hardware solution to this problem, which could be something like tagging cache entries with a process ID so they can't be probed from another process. It might also be possible to hold pending cache entries from speculated loads separately and not commit them until the instruction graduates. Either way, it's not a simple software fix. The processors in question all violate their own programmer's model, namely that speculation has no visible side effects.
You're right that Spectre does look a lot like the timing and information-leak attacks used against embedded devices, and while Spectre targets most CPUs that speculate, I hope anyone who uses caches in general is now carefully considering what could go wrong from a security perspective.
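For anyone wondering what "careful timing analysis" means in practice, the flush-and-reload family of probes boils down to the sketch below: evict a cache line, let the victim run, then time a reload. A fast reload means somebody touched the line in between. This is x86/GCC-specific, and the 100-cycle threshold is machine-dependent guesswork on my part.

```c
/* Sketch of a flush+reload cache probe (x86, GCC intrinsics).
 * The ~100-cycle threshold separating "cached" from "uncached"
 * is an assumption and varies from machine to machine.
 */
#include <stdint.h>
#include <x86intrin.h>

static void flush_line(const void *addr)
{
    _mm_clflush(addr);             /* evict the line from the caches */
}

static int line_was_touched(const volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                   /* the timed reload */
    uint64_t end = __rdtscp(&aux);
    return (end - start) < 100;    /* fast reload => line was in cache */
}
```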
We're already there. A lot of websites are following in Google's footsteps with their single-page, all-JavaScript, runs-in-the-browser nonsense. Other than purely cosmetic features and some specialty stuff like WebRTC, JavaScript and other browser-delivered executable code has done nothing except introduce a lot of security vulnerabilities.
I wonder how much electricity has been wasted worldwide over the last decade because of browser scripts. I bet it's at least in the terajoule range.
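Back of the envelope, terajoules is almost certainly a lowball. With deliberately round, invented inputs (a billion users, an extra 5 W while browsing, an hour a day, for a decade):

```c
/* Back-of-envelope energy estimate. Every input is an assumption,
 * picked to be round rather than right.
 */
#include <stdio.h>

int main(void)
{
    double users = 1e9;        /* browser users worldwide */
    double watts = 5.0;        /* extra draw from running scripts */
    double hours = 1.0;        /* browsing hours per day */
    double days  = 365.0 * 10; /* a decade */

    double joules = users * watts * hours * 3600.0 * days;
    printf("%.2e J (~%.0f PJ)\n", joules, joules / 1e15);
    return 0;
}
```

That comes out around 66 petajoules, so even far stingier assumptions clear a terajoule with room to spare.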
At orbital altitudes, it's generally atomic oxygen that poses the big corrosion threat. Everything from plastics to ceramics to metals suffers impact erosion and chemical corrosion along its leading edges.
Thanks to the Space Shuttle and the LDEF (which got stuck in orbit for over 5 years), it's a reasonably well-studied subject. NASA has an accessible paper with pretty pictures here: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20040191331.pdf
I think the sharing of FPU resources is completely irrelevant. As everyone has noted, FPUs are sometimes shared among heterogeneous multicore CPU designs (or omitted entirely).
The bigger problem is the shared instruction fetch and decode. What AMD has here is much, much closer to a conventional superscalar CPU, with independent execution resources able to run through a single instruction stream out of order, than to the usual definition of multi-core.
Calling the single superscalar module two cores is definitely misleading, at least from a CPU design perspective. By AMD's logic, many Intel, IBM, ancient MIPS, etc. chips would all have inflated core counts.
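On Linux you can at least see what the kernel thinks shares a core, which makes the marketing-versus-topology gap easy to spot. A sketch (standard sysfs paths; how any given kernel classifies Bulldozer modules is exactly the contested point):

```c
/* Sketch: print which logical CPUs the kernel reports as sharing a core.
 * Whether a Bulldozer module shows up as one core or two depends on the
 * kernel's interpretation -- which is rather the point of the argument.
 */
#include <stdio.h>

int main(void)
{
    char path[128], buf[64];

    for (int cpu = 0; ; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;  /* ran past the last CPU */
        if (fgets(buf, sizeof(buf), f))
            printf("cpu%d siblings: %s", cpu, buf);
        fclose(f);
    }
    return 0;
}
```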