Well well well well...
...well well well well well then.
A fourth variant of the data-leaking Meltdown-Spectre security flaws in modern processors has been found by Microsoft and Google researchers. These speculative-execution design blunders can be potentially exploited by malicious software running on a vulnerable device or computer, or a miscreant logged into the system, to …
This post has been deleted by its author
I'm aware of a group selling a Spectre vuln. They won't disclose the source as that would be giving it away for free. One has to buy on faith. The government would buy it that way - who's going to con the NSA? The asking price is in the stratosphere; the price the government will pay is too low.
13 by year's end? Easily, but we will never know how many there were, will we? Best wait for new dies.
They aren't copying each other, it's just that there's only so many ways to make something execute more instructions faster. And yes, speed is freaking important.
There are a lot of timing attacks and other side channels that yield information. One of the important points of all of this is that too many applications don't encrypt sensitive data, even with minimal encryption.
In this context encryption outside of the CPU doesn't really matter; the compromised processor is the thing that must touch decrypted data to, well, process it.
Not necessarily true - that is, if the data is being loaded prior to decryption (for example, if decryption is being done by the core being probed), then encryption in memory would prune the Spectre attack tree somewhat. It's not a perfect defense by any means, but it narrows the scope for usefully probing that particular data.
This is simply a specific case of the more general observation that a Spectre probe sequence will reveal much low-value data, possibly in addition to some high-value data. Encrypted data (which the attacker cannot economically decrypt) is low-value.
Of course, the attacker may be able to find the key by probing elsewhere. It's a very partial measure.
Half right, half wrong:
"They aren't copying each other, it's just that there's only so many ways to make something execute more instructions faster. And yes, speed is freaking important."
Right. Someone comes up with an idea that solves a perceived problem, and it gradually becomes standard practice, unless someone else comes up with a better solution. That's just the way a successful society makes progress.
"One of the important points of all of this is that too many applications don't encrypt sensitive data, even with minimal encryption."
Wrong. In general, things have to be in comprehensible form for processing. There are a few ways of doing certain limited operations on encrypted data, but this is orders of magnitude slower than operating on the unencrypted data. Better to just scrap speculative execution, as it is a much lower performance hit.
Well, think of it like this:
Every modern CPU that suffers from these vulnerabilities has literally billions of transistors. Your higher end CPUs (and GPUs) have more transistors per chip than there will be people on the Earth tomorrow or twenty years from now. It's amazing that we don't have more of these flaws to deal with and that they are not worse. Perhaps there will be more that come to light soon, or in the next decade. What matters is that we find the flaws and learn how to fix them. It's a case of not being able to make progress until we fail and learn from our mistakes.
Computing is still in its infancy.
We have only (relatively) recently started accepting that malicious stuff will run on our computers (invited in by making the web able to run stuff locally). We used to think it was the exception that malicious code ran, whereas now it's the norm.
Most security still stems from only running stuff from trusted sources. The main security holes are "run everything" platforms such as browsers and Flash.
"We have only (relatively) recently started accepting that malicious stuff will run on our computers (invited in by making the web able to run stuff locally)."
Are you serious? And, by the by, who is this "we"?
Long before the era of ubiquitous web access, it was quite popular for someone to send someone else this week's weekly spreadsheet (or whatever) with a macro in it that does the equivalent of "format c:". I'm thinking that goes back to the 1990s, when MS and IT departments had discovered email but the unprotected web hadn't become ubiquitous.
And because the commodity software people and commodity sysadmin people typically had no systematic concept of protecting their important resources (files and filesystems, for example) against inappropriate access (stuff visible that shouldn't be, stuff writable that shouldn't be), such matters being beneath them, the outdated concerns of a forgotten era where security and access controls *had* to be considered as part of a bigger picture, the rest of us end up two decades later with an industry literally subject to Meltdown.
Maybe the IT crowd should switch it off and switch it on again and see if it works better afterwards. It seems to be the industry standard approach.
"Maybe the IT crowd should switch it off and switch it on again and see if it works better afterwards. It seems to be the industry standard approach."
It is a standard approach because resetting a system in an unknown state to a consistent starting configuration is a logical and efficient way to start.
As the complexity and interconnectedness, both obvious and invisible, of computing environments increases, and the costs and impact of extended downtime to our lives soars, fast solutions or at least fast diagnosis becomes ever more logical.
"As the complexity and interconnectedness, both obvious and invisible, of computing environments increases, and the costs and impact of extended downtime to our lives soars, fast solutions or at least fast diagnosis becomes ever more logical."
These "costs of extended downtime" you mention.
Who's picking up the costs? The system (hardware, software, etc) suppliers, the end users, the magic money tree?
E.g. Do readers think the TSB IT people, as part of the diagnosis of the recent and ongoing issues, or the IBM people who were ordered in, by the CEO or whoever, might have tried "switching it off and on again"? Does the process seem to have helped resolve the issues?
Have readers (some of whom must be TSB customers or TSB staff) been asked whether they care about complexity and interconnectedness, or whether they just might perhaps prefer to get at their money again so they can move it somewhere safer (e.g. a shoe box under the bed)?
Complexity is not a valid excuse.
"on-off-on provides fast recovery to a known state" might be admissible as a plea for leniency in certain very restricted circumstances.
"It is a standard approach because resetting a system in an unknown state to a consistent starting configuration is a logical and efficient way to start."
It's the standard conditioned approach we use since we have been forced to use fragile systems where various components are allowed to affect each other in unpredictable ways.
It's a sad state of affairs, that I mainly blame "ctrl-alt-del MicroSoft" for.
"Are you serious? And, by the by, who is this "we"?"
It was a generalisation, of course.
Obviously people have been tricked into running bad stuff on their machines for a very long time. Thanks, MS, for helping to facilitate this... Why not just run emailed stuff when people click on it? Brilliant!
With "we" I meant your average home PC user, who, by the way, wouldn't even have had email facilities back in the pre-WWW era. (Yes, I know that _some_ would have had that.)
It's a fact that the WWW has opened up the possibilities for trojans and viruses massively.
"Almost as if they are all copying each other......."
CPU design has been openly discussed in fora since day one.
Most performance enhancement methods are very well known, and subject of research at universities etc.
Developers get poached between companies.
All currently used mainstream CPUs follow the same basic design pattern.
Performance improvements are in the details of implementation, more than overall architecture.
Pressure to make the fastest processors would lead to designers doing similar things, perhaps ignoring some obscure and unlikely to be exploited side effects (if they even considered them in the first place).
Might be because of something an engineer said to me the other day: "Maybe I don't see these things as I'm not criminally minded". This was while I was talking about potential exploits in some software we were using.
Maybe the engineers that design the CPUs think the same. They just want to design the fastest chip possible and not have to think about the security of it.
In my mind, as an engineer these days you do, unfortunately, need to think criminally in your work - but only in order to protect yourself from what you think criminals might exploit.
"A better word might be that we have to be paranoid"
No, paranoia is believing things that are not true. Paranoids don't generally worry about real risks.
Criminal minded is the way to go. Criminals look on everything as a potential opportunity for theft. They seek advantage, not fear.
"the fastest chip possible and not have to think about the security of it."
That's probably a reasonable starting point for designing a system to run the DOS version of Crysis - no need to consider data security or data integrity, no need for access controls like real computers used to have, just make that frame rate the fastest you can.
For anything more realistic, there may be other fundamental considerations, along the lines of "should this instruction in this process with these access rights be permitted this kind of access to this kind of object".
I'm struggling with some of the published descriptions of "rolling back" the consequences of mispredicted speculative execution.
As far as I understood it, one of the fundamentals of getting speculative execution to work right in the real world (it's not easy, but it's not impossible either given sufficiently clear thinking) is that the results cannot become visible 'elsewhere' (e.g. to other applications), directly or indirectly, until the speculation up to that point is fully confirmed as correct. Hence multiple 'shadow' register sets and reservation stations and other such well documented and (I thought) well understood stuff.
Shadow register sets provide multiple virtual (God, I hate that word) copies of the real internal processor registers for speculative instructions to play with. Once it's determined which instruction stream gets to execute to completion, all the now-irrelevant copies aren't "rolled back"; they're marked as outdated, and only the successful values are allowed to be used for further work. In any case, any speculative values *must not* be used for anything that will become visible in the outside world, e.g. a speculative load that fills a real cache line - such an operation cannot be "thrown away", and in the right circumstances it potentially becomes a route for data leakage.
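The point can be shown in a toy model (Python, purely illustrative - the class and its structure are invented for this sketch, and nothing here models real hardware): architectural registers are restored after a squashed speculative path, but a cache fill performed on that path survives the "rollback".

```python
# Toy model: mis-speculated register writes are discarded, but a cache
# fill performed on the wrong path survives the squash.
# Purely illustrative -- names and structure are invented for this sketch.

class ToyCPU:
    def __init__(self):
        self.regs = {"r0": 0}
        self.cache = set()          # addresses currently cached

    def speculate(self, path):
        saved = dict(self.regs)     # shadow copy of architectural state
        for op, arg in path:
            if op == "set_r0":
                self.regs["r0"] = arg
            elif op == "load":      # load warms the cache as a side effect
                self.cache.add(arg)
        self.regs = saved           # squash: restore architectural state
        # note: self.cache is NOT restored -- that's the leak

cpu = ToyCPU()
cpu.speculate([("set_r0", 42), ("load", 0x1234)])
print(cpu.regs["r0"])       # 0    -> architectural state rolled back
print(0x1234 in cpu.cache)  # True -> microarchitectural footprint remains
```

The last line is the whole problem in miniature: the register file looks as if the speculation never happened, while the cache says otherwise.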
Part of this is about processor architecture, part of it is about OS security. All of it requires clear thinking, not just a focus on 'how do we make this code sequence run faster' while forgetting the bigger picture - e.g. should this code sequence be permitted to execute at all.
There used to be people who understood these things.
"There used to be people who understood these things."
There still are. This is not a problem of understanding. It's a problem of economics.
Things will change if and when a group of people representing a sufficient concentration of market power come to value particular security measures more highly than other attributes of whatever they're buying.
And that's how things have always worked. A Honeywell running Multics was a hell of a lot more secure, under many reasonable threat models, than an Apple II. That didn't stop people from buying an Apple II to do their financial analysis with - because security was not an overwhelming economic advantage.
"Maybe I don't see these things as I'm not criminally minded".
THIS. This is the mentality that made my time in microprocessor validation so...fruitful. This is the same mentality I tried to beat out of my calculus students. It's not lack of criminality, it's lack of rigor.
I don't know how engineers are trained, but the important part of a mathematician's training is to find the edge cases that you missed the first time around. And the second.
"Maybe the engineers that design the CPUs think the same. They just want to design the fastest chip possible and not have to think about the security of it."
In part, it's a matter of metrics. Engineers are not particularly rewarded for producing theoretically secure chips, they are rewarded for producing faster chips on time for the sales types to hype them as faster than the competition.
In part, it's because a few engineers have months or years to design incredibly complicated chips, many many attackers, some lavishly supported by nation states, some by criminal organizations, some in a quiet basement somewhere have decades to find the small flaws that can be exploited.
Federico Faggin designed the Z80 in 1974. It was, I'd bet, the last non-RISC CPU that one person could get their head round. Since then people have designed bits of CPUs, but how the whole thing works, along with the not-too-simple problem of the operating system running on it, is beyond one person's ability to fully understand. If you look at the way these things are being hacked, you have to give some kudos to the people doing the hacking - just before you seriously deform their nasal passages.
I would imagine that, now these mechanisms have been uncovered, they will be added to a long list of things to check for in future designs.
Having said that, I can easily see a bright engineer at Intel having spotted this already, but the bean counters decided performance figures were more important than a hopefully-sufficiently-obscure security flaw.
"Federico Faggin designed the Z80 in 1974. It was, I'd bet, the last non-RISC CPU that one person could get their head round. Since then people have designed bits of CPUs, but how the whole thing works, along with the not-too-simple problem of the operating system running on it, is beyond one person's ability to fully understand."
Offhand I don't know the exact date or chip generation, but it's been decades since CPUs were designed directly by humans, rather than by human-guided design tools. That has to translate to a lessened understanding of what is going on 'under the hood' in detail... not that humans could do all the circuit analysis the tools do, even in a lifetime, for a chip with billions of transistors, data paths, etc.
And they don't pop out only in Intel chips... but in everything else out there too.
This is not at all surprising if you understand the basic concepts of information thermodynamics.
A system that dissipates energy, where that dissipation is not a completely unbiased random function, is leaking information. In other words, it has side channels.
If 1) any of those channels are detectable within the system, and 2) the system contains components with different security domains, then you have a potential violation of security boundaries.
1 & 2 are true of essentially all general-purpose computing, and much embedded (dedicated-purpose) computing, today. The Spectre class has focused specifically on the side channels created by speculative execution, but that's simply because there are a number of ways in which those channels are detectable from within the system.
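The "biased dissipation leaks information" claim can be made quantitative: if the distribution of an observable (say, response time) depends on the secret, the mutual information between them is positive. A small sketch (Python, with invented probabilities for a toy two-outcome channel):

```python
# Mutual information of a toy side channel: the secret bit biases the
# probability of observing a "fast" response. Probabilities are invented.
from math import log2

p_fast = {0: 0.9, 1: 0.4}        # P(observe "fast" | secret bit)
p_secret = {0: 0.5, 1: 0.5}      # attacker's prior over the secret bit

def mutual_information(p_fast, p_secret):
    # I(S;O) = sum over s,o of p(s,o) * log2( p(s,o) / (p(s) p(o)) )
    p_obs = {o: sum(p_secret[s] * (p_fast[s] if o == "fast" else 1 - p_fast[s])
                    for s in p_secret)
             for o in ("fast", "slow")}
    mi = 0.0
    for s in p_secret:
        for o in ("fast", "slow"):
            p_o_given_s = p_fast[s] if o == "fast" else 1 - p_fast[s]
            joint = p_secret[s] * p_o_given_s
            if joint > 0:
                mi += joint * log2(joint / (p_secret[s] * p_obs[o]))
    return mi

print(round(mutual_information(p_fast, p_secret), 3))       # 0.214 bits/observation

# A completely unbiased channel (same distribution either way) leaks nothing:
print(round(mutual_information({0: 0.5, 1: 0.5}, p_secret), 3))  # 0.0
```

About a fifth of a bit per observation sounds small, but attackers repeat observations; only the zero-bias channel is safe, which is exactly the point above.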
Also, again, and contra Chris: These are not "blunders". They are deliberate design trade-offs. Arguably "oversights" is valid; those trade-offs were made based on incomplete risk analysis. But they were deliberate, and made to achieve the explicit goals of the project.
It would have been nice if the Red Hat video had extended its quite nice analogy of how speculative execution works to how this vulnerability exploits it. It kind of felt like it leapt from a helpful, high-level analogy - useful for explaining an obscure subject - to "and bad people could exploit this...". It would have been helpful to extend the analogy to explain how the speculatively produced bill could lead to another customer receiving your order (or something).
I can't immediately think of a good way though - anyone else want to have a crack at stretching the analogy to its limits?
"Also, to exploit these flaws, malware has to be running on a device"
Unfortunately, just visiting a website starts all sorts of cra*p running. Draining the battery, flashing useless ads, and other oh-sooo important stuff going on.
But, yes, these information leak bugs aren't exactly low hanging fruit. Much easier to just fool a gullible user to do something stupid.
BTW, was it just me who found the Red Hat explainer video not very useful? (I can't quite map waiters running around onto how a CPU works...)
There's a whole criminal industry around persuading suckers to download and run malware, but the crooks aren't that clever. They can be traced. But nobody seems to bother, we never hear of anyone even getting to court.
Is it an investigation failure or a reporting failure?
I still don't see how this valuable secret data that is now in the cache can be accessed by a third party. Even if it's based on a timing attack, if someone attempts to access data they aren't allowed to read, I'd have thought the cache wouldn't affect the speed of response, because the cached data would be unavailable anyway.
Also, generally, they say virtual servers are likely to be badly affected, but I'd assume most of the hosts of these servers are not going to be idling, so the CPU shouldn't ever do 'idle-time' speculation - and just to be sure, wouldn't running something like SETI or crack in idle time solve that?
Which leads me to another thought... CPU idle speculation must have an impact on kernel process scheduling, imagine:
Case 1: A heavy job runs - not much else running on system. CPU speculates during brief idle times.
Case 2: A heavy job runs. SETI etc. set to run at idleprio only - so shouldn't ever impact on the heavy job. However, in this case, the heavy job now loses the potential CPU speculative advantage, as the CPU is no longer idling as much.
Argggh, too much to handle, and I've not had my coffee yet...
I like kittens.
Furry, purry, cuddly kittens...
Ahhh. Much better!
"I'd assume most of the hosts of these servers are not going to be idling, so the CPU shouldn't ever do 'idle-time' speculation"
It's not that sort of idle time (on a very macro scale). It's running some instructions while waiting for data for some other instructions: Out-of-order execution. Making your system more busy on a process level scale won't make any difference.
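That distinction can be sketched in a few lines (Python, with invented latencies and a deliberately simplified issue model): with in-order issue a cache miss stalls everything behind it, while out-of-order issue lets independent instructions execute under the shadow of the miss.

```python
# Compare total cycles for in-order vs out-of-order issue on a toy
# instruction stream. Latencies and the stream are invented for illustration.

LAT = {"load_miss": 10, "add": 1}

stream = [("load_miss", "r1"),   # long-latency load into r1
          ("add", "r2"),         # independent of r1
          ("add", "r3"),         # independent of r1
          ("add", "r1")]         # depends on the load

def in_order(stream):
    t = 0
    for op, _ in stream:
        t += LAT[op]             # each instruction waits for the previous one
    return t

def out_of_order(stream):
    # Independent adds execute under the shadow of the load miss.
    load_done = LAT["load_miss"]
    t = 0
    for op, reg in stream[1:]:
        if reg == "r1":          # dependent: must wait for the load to finish
            t = max(t, load_done) + LAT[op]
        else:
            t += LAT[op]         # independent: issues immediately
    return max(t, load_done)

print(in_order(stream), out_of_order(stream))  # 13 11
```

Real cache misses cost hundreds of cycles, not ten, which is why the technique is worth the enormous hardware complexity - and why "keeping the machine busy" at the process level doesn't remove it.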
"I still don't see how this valuable secret data that is now in the cache can be accessed by a third party."
It can't. Or, at any rate, that's not what Spectre-class attacks are about.
Spectre-class attacks use speculative execution to alter the observable state of the system, then observe those state changes to infer what "secret" (not directly accessible) data was subject to their probes.
In variant 1, for example, the attacker mistrains the branch predictor so that it will reliably take a path that tries to load from an out-of-bounds address (having found a suitable gadget in memory). That causes a speculative load into cache. The results of that branch are thrown away, but the cache remains warm, and the attacker can then time some loads to see whether a given address was cached or not. That, in turn, tells the attacker about the address computed by the code on the mispredicted branch; and that leaks some information about whatever went into computing that address.
So the attacker gets the gadget code to read the "secret" memory (which it has access to) and use it in creating those addresses, gradually leaking information.
That's only one variant (and rather simplified). The original Spectre paper explains variants 1 and 2, and other side channels that might be exploitable, in some detail.
But the point is that the attack code never sees the secret data directly. It sees what effects the secret data has on rump post-spec-ex execution system state, when that secret data was misused to alter that state.
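The sequence described above can be simulated end to end in software (Python; this is a pure model - no real speculation happens, and the predictor, cache, and victim are all invented for the sketch): mistrain a toy branch predictor, trigger one out-of-bounds "speculative" access that warms a cache line, then use a flush+reload-style probe to recover the secret byte.

```python
# Software-only simulation of the Spectre v1 flow described above.
# Everything here is a model, not real hardware behaviour.

SECRET = b"K"
array1 = b"ABCDEFGH"            # victim's in-bounds data
memory = array1 + SECRET        # the secret sits just past the bounds

class Model:
    def __init__(self):
        self.cache = set()      # which of 256 probe lines are warm
        self.history = []       # toy branch predictor: recent outcomes

    def predict_taken(self):
        return sum(self.history[-4:]) >= 2   # majority of recent outcomes

    def victim(self, i):
        in_bounds = i < len(array1)
        taken = self.predict_taken()         # predict before resolving
        self.history.append(in_bounds)       # train the predictor
        if in_bounds:
            self.cache.add(memory[i])        # correct path: normal access
        elif taken:
            # mispredicted path: out-of-bounds load leaves a cache footprint
            self.cache.add(memory[i])

    def probe(self, line):      # flush+reload: warm lines "load fast"
        return line in self.cache

m = Model()
for i in range(8):              # mistrain: many in-bounds calls
    m.victim(i % len(array1))
m.cache.clear()                 # "flush" the probe lines
m.victim(len(array1))           # out-of-bounds index -> secret leaks
leaked = [b for b in range(256) if m.probe(b)]
print(bytes(leaked))            # b'K'
```

Note that the attacker's code never reads `SECRET` directly: it only asks, line by line, "is this probe line warm?", exactly the indirect observation described above.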
We're going to see more of these. To get the performance that users have come to expect, modern CPUs are so fiendishly complicated that nobody (even the people who design them) can possibly know how they will behave in all possible situations. I have every sympathy with the chip designers. Getting your head around CPU design these days must be extremely challenging.
And, to some extent, I blame software developers like myself. We have got lazy. "CPUs are fast" we say, "we don't need to bother about the efficiency of our code". I installed Windows 10 recently on a machine that, a few years ago, was state of the art and always had excellent performance. It ran like a dog, even doing something mundane like popping up a menu. Draw your own conclusions.
For many years the Microsoft path to software "efficiency" has been to throw more hardware resources at it. I don't recall any real instances where they've genuinely made something faster and more efficient.
If you've ever had cause to step through code at the CPU level, you realise that not only is the shitty x86 instruction set wasting huge amounts of time juggling and swapping registers around, but much of the Microsoft code (i.e. libraries, variant hell, .NET string handling, etc.) spends huge amounts of CPU instructions not doing anything particularly constructive for the code it's meant to be running. While we don't really have to have efficiency everywhere, the level of inefficiency is staggering, and where this is in lower-level libraries it rapidly escalates to affecting the entire system.
"And, to some extent, I blame software developers like myself. We have got lazy. 'CPUs are fast' we say, 'we don't need to bother about the efficiency of our code'."
Speak for yourself!
Of course, this is all true, especially so when Microsoft was dominant - they wanted to bloat the system so that their hardware partners would get more sales, and they'd sell more licenses.
In the mid to late 90's we used to HATE this. The philosophy was there with newbie programmers, even managers.
It became the culture - the 'norm'.
"Running slow? Nothing to do with inefficent software - you need a faster machine."
"Low on memory? You need more, obviously! It's perfectly reasonable for "hello world" to need 8Mb!"
And of course, this all led to comments I'm sure we've all heard: "Well, all computers crash, or need to be rebooted every few days."
The same "couldn't care less" attitude gave us "this site best viewed in IE6 - update your browser, loser - microsoft is da shizz"
But we were just old shites who were pleased to count code size in bytes and speed in op cycles. What would we know?
A bit of a wakeup with Y2K, and a bigger wakeup when mobile phones became more capable but were still relatively underpowered... Of course, as mobile CPU/RAM got better, that hope was soon lost.
So now we have a similar "we know better" related to internet and 'cloud".
Many of us facepalm at the lack of security in IoT shite, house door-locks that require internet access, shoving all data onto someone else's servers, etc., but still, those of us who have been using the internet since the 80s and are intimate with its design principles... what would we know... Door lock not working? Get more memory! Toaster slow? It needs a faster CPU! Your whole system has stopped working? Not our fault, someone deleted the GitHub account we were live-linking to!
</get off my lawn you kids>
"And, to some extent, I blame software developers like myself. We have got lazy."
This may be true, but it has absolutely nothing to do with the existence of Spectre-class vulnerabilities. The economic forces driving faster CPU designs would still be present if software were, say, three orders of magnitude less resource-hungry on average. People would just be running three orders of magnitude more work.
Work will expand to fill available resources. Faced with a glut of cheap compute resources, companies would do more optimization, more speculative modeling, more whatever.
"I installed Windows 10 recently on a machine that, a few years ago, was state of the art and always had excellent performance. It ran like a dog, even doing something mundane like popping up a menu. Draw your own conclusions."
Did it, thanks. I was forced to buy a new computer with Windows 10 on it. The day I spent installing and customizing Linux was far more productive than the weeks I would have struggled with Windows trying to secure it and tune it.
As a bonus, the one program that I didn't have a Linux native replacement for turned out to run 'out of the box' in WINE, which was installed by default. It was a game, nice to have, but not crucial or time sensitive to get it running.
Potentially exploitable by scripts #thanksnoscript
Of course this leaves open any number of holes to snoop data if you are a native-code program, but that is something I already assume it is able to do, by virtue of healthy paranoia.
A few years ago I would have been indifferent to this - subject to the usual security provisions of course.
But now, seeing the stuff that runs on android, most of it - even from "reputable" companies is basically spyware, and so blatant they don't give a shit about it.
With my programmers hat on, I have no sympathy for those complaining about GDPR etc. - they brought it on themselves.
Well, in the first place, we don't. If anyone does, that's a bug. And it has nothing to do with any Spectre variant. These are side channel vulnerabilities. They're not about "snooping"; they're about detecting state using the inevitable effects of a complex system.
(The sheer amount of misunderstanding about Spectre after these past four months is depressing. Not surprising, but depressing.)
I have to admit I have a hard time distinguishing all the different attacks by this time, and I have not read up on most of the newer ones enough to tell exactly how they work. But if you manage, in some way, shape or form, to learn things that have been speculatively loaded into cache, how is that not snooping on the cache, regardless of what method you use to do it?
I believe my main point still stands: if you disallow everyone and his mother from running what is in reality arbitrary code on the CPU, they will not be able to exploit these side-channel attacks, because they have no ability to run the code needed to do so.
Well, yes, but Google, MS etc don't care about you. They care about what information they can monetise. Running stuff, and letting advertisers run stuff, on your computer, using your electricity and CPU time, is what gives them more money.
I've no idea.
Interesting question though... If you're wasting so many cycles on disabling/strengthening/kludging these fixes, there must be a point where simply removing all trace of speculative execution would be more efficient, and leave more "CPU space" for other improvements.
Mind you, maybe a redesign is needed, but I don't see why it's so hard for speculative execution to be done securely. Timing attacks are nothing new.
The ARM A53 (and I think the A55) don't have speculative execution; the A57 onwards do.
Of course there are other differences, but quite a lot of mobile phones in the lower tiers use all-A53 designs, usually with 4 slower-clocked cores and 4 faster ones.
So it might be possible to get a rough idea.
One thing I think is clear: most people, for phones and tablets, do not actually need speculative execution. That's most people, not users of Photoshop.
There is a huge difference between turning off a core architectural feature on an existing product and comparing product A, designed with the feature and product B, designed without.
Turning off speculative execution entirely on a modern processor will be REALLY expensive. I would speculate > 4x slowdown. > 10x would not surprise me. Given the implementation parallels between supporting out of order execution and speculative, you might end up turning off OOO as well. If so, you could see slowdowns > 50x.
> 10x would not surprise me
I think that's optimistic, at least for x86/x64, and other CISCy architectures such as z. You might get away with only one order of magnitude hit on Power. ARM might do even better (i.e. less than an order of magnitude).
But x86? Those pipelines are deep. Kill spec-ex (for a general-purpose workload) and you'll be in a world of pain. And it's worth noting that even JITted managed languages tend to do even more branching than traditional procedural, compiled 3GLs did.
The Meltdown issue was that information about the contents of memory not accessible to a process becomes available to it. That is a serious electronics design flaw, and I think applicable just to Intel processors.
The various Spectre variants allow information about the contents of memory which is accessible within a process to become known within that process by software which does not directly access the memory. The issue is that software architects and designers have assumed that this information was inaccessible unless directly accessed. This is just very poor software design and not a hardware bug at all.

I have always assumed that anything within a process is accessible to anything else within that process, and neither the hardware nor operating systems literature/specifications have ever, to my knowledge, said anything other than this. Ignoring Spectre, an application software error can expose this information. The fact that software has been written which makes a whole set of assumptions about what can and can't be accessed, going way beyond the specifications and statements about processors and operating systems, is a problem with the software, not a bug in the hardware and not a bug in the OS.

If you want to control access to something, stick it in a separate process. That has always been the rule, and if you do that then, apart from Meltdown, which WAS a stupid piece of electronics design, you are OK.
I get that The Reg is irreverent, and the red top of the IT world, but the constant use of 'design blunder' to describe a subtle interaction between disparate parts of a CPU that went unnoticed for well over 20 years seems a tad disingenuous.
I know we now live in a world where all commentators are perfect and mistakes are to be vilified but still...
...but it might set you down the path to finding an answer.
It doesn't - at least not in any of the published Spectre attacks.
The original Spectre paper explains this, and there are other explanations online (and I've posted explanations in comments to Reg stories, as have some others, though you have to filter for accuracy).
That's the whole point of a side-channel attack. You don't have direct access, so you find a proxy that leaks information about what you want to see.
Despite my previous comment, I am not inclined to be overly harsh on the designers for these issues.
The thing to understand is the difference between architectural state (a-state) and microarchitectural state (m-state). The m-state of a processor is everything needed to determine, for any input, what the m-state of the processor will be in the next cycle. This is not a tautology or circularity: we see, for instance, that we need to know the state of the L1 cache to know what will be in the register file. Therefore the L1 cache is part of the m-state, and we need to include everything that affects the L1 cache as part of the m-state as well.

The a-state is everything needed to know the result of executing the next instruction. The difference between the two is mostly caches, but there is another matter. Given the m-state and a set of inputs, you can know the final m-state. But the a-state is not closed. In particular, performance registers and clocks are part of the a-state, but they are not predicted by the a-state. You can load from one of these registers, but the final state of that register is not known. Therefore, the a-state is NOT enough to predict the result of a series of instructions. Did they teach you that in school? Probably not in a way that stood out.
For consumer-grade processors, the contract is strictly about the a-state. The m-state might be presented, but it is subject to change at any time. In particular, if a bug is found in a processor, a patch might be issued to the microcode in the processor to fix the bug. This fix is extremely likely to impact the performance of the processor under at least some circumstances. That is, the m-state behavior is thrown out to fix the a-state. Of course, manufacturers are strongly motivated to keep the m-state changes minimal.
Design teams have been told to deliver a-state promises at maximum speed.
Spectre is not a violation of the a-state promises. It is therefore not a "bug" in the sense that the processor is failing to behave as advertised. It is a failure to isolate state, and therefore a security failure in the presence of untrusted code.
Note that at the front of every manual I saw in the 1996-2006 timeframe, there was a big notice just inside the cover that the processor was not cleared for use with information classified "confidential" or higher. Perhaps they could have been a bit more explicit, but processor designers were disclaiming side-channel-free products.
So, what to expect? 1) Variants of these bugs are going to continue to dribble out. The only way to avoid them on existing product is to entirely turn off speculative execution, which might not even be possible. If it is possible, expect huge drops in performance. 50x would not surprise me. 2) Designs to get around this issue are going to require huge reworking of the caches. Expect cache memory sizes to halve. This will be a major performance hit. 3) Given the size of the performance hit, I expect compute utilization to bifurcate. In trusted computing environments, the performance benefits are going to support a continuing market for speculative execution. In general computing, not. I anticipate this split appearing in the cloud.
In another discussion, someone mentioned targeting contention for execution units as a variant. Execution unit contention might even happen with designs that are merely pipelined. Defending against that would involve adding execution units sufficient to ensure that it cannot happen. Given my experience, don't hold your breath.
Technically you are incorrect - the flash card test detects racism by showing people of different ethnicities. There is only one human race, and racism is believing this to be incorrect.
But yes, there are a lot of side channel attacks on people. One used to be to catch suspected deserters representing themselves as civilians by having an NCO shout a word of command in their ear unexpectedly.
Funny thing happened when they analyzed those tests based on the political views of the takers. Turns out, conservatives & libertarians often show almost no bias.
Presumably, it is because we don't see people primarily as members of groups.
But yeah, if you want to fix it in yourself, stop being a liberal. :D
I have been a software developer for over 30 years and have great insight into the issue this brings up. Most of these processors were designed to be used with a single-user operating system (OS), where you don't have concerns about snooping by other users or applications, because it's a single-user system! Once code is running on that CPU there are many, many ways to snoop, which can be considered hacking, by looking at memory locations or data written to disk or deleted file space. I believe the issues they are concerned about are really about multi-user operating systems run on CPUs that were designed for single users. This is where the CPU does not provide protection against snooping between users.
Heck, look at the languages these days. Developers are lazy: they don't need to release (delete) memory that was allocated, and instead let a garbage collector do the cleanup in Java and C#. I really consider this garbage-collection scheme to be a security leak in itself. In my book it is far worse than the CPU issues they are reporting.
The really truly sad part of all of this is that there was much discussion over these types of "features" in a CPU 40 years ago. It was determined that they were a "Bad Idea". The favoured idea at the time was that multiprocessing could solve the problem, if we could find a few people smart enough to write the code for the hardware we designed (z80/8080/68000 era). We are still waiting. CPU design went down a path that put money in pockets now. Screw the future. It's not like this stuff will still be running when anyone notices the issues.....
Biting the hand that feeds IT © 1998–2020