Can you clarify?
Can you clarify what you mean by all out-of-order execution Intel processors?
I haven't heard that terminology before. Are we talking i3/i5/i7 processors? Or just older processors?
In computer engineering, out-of-order execution (or more formally dynamic execution) is a paradigm used in most high-performance microprocessors to make use of instruction cycles that would otherwise be wasted by a certain type of costly delay.
In the 1990s, out-of-order execution became more common, and was featured in the IBM/Motorola PowerPC 601 (1993), Fujitsu/HAL SPARC64 (1995), Intel Pentium Pro (1995), MIPS R10000 (1996), HP PA-8000 (1996), AMD K5 (1996) and DEC Alpha 21264 (1998). Notable exceptions to this trend include the Sun UltraSPARC, HP/Intel Itanium, Transmeta Crusoe, the Intel Atom (until the Silvermont architecture), and the IBM POWER6.
The Intel 'Core' architecture (i3s, i5s, i7s etc.) is basically a derivative of the Pentium Pro which, as per the referenced Wikipedia page, introduced out-of-order execution in 1995.
> Notable exceptions to this trend include the Sun UltraSPARC
Not true. See this link and this link and this link [Warning: last two URLs are PDF].
Dynamic branch prediction, instruction prefetch+decode and speculative execution were first introduced in the UltraSPARC-IIi.
These have been grouped into two logo'd and branded vulnerabilities: Meltdown (Variants 1 and 2), and Spectre (Variant 3).
Other way around: based on the preceding CVE list, it should be "Spectre (Variants 1 and 2), and Meltdown (Variant 3)."
Can't use the corrections link when I don't have an email client installed...
Also grouping two variants under one name allows Intel PR to work their magic and claim others are affected by the same thing too.
Well, some AMD CPUs are affected in a non-standard kernel configuration, but the fix for that variant doesn't slow down kernel system calls as much.
If the time to extract data is a function of RAM capacity, then there must be a benefit in increasing RAM, just like bit lengths are increased to improve resistance to brute force in security functions.
Cloud vendors and virtualisation providers stack machines high with RAM to get better consolidation ratios, so does it follow they are better protected?
Given that large amounts of RAM are used to cram in many virtual machines, I'd say they're not "better protected" - in fact quite the opposite. You'd have a single physical attack surface containing many machines which can be compromised, which in turn represent many more virtual attack vectors. It might take you longer to dump the physical host's entire memory, but you'd get access to many more VMs for your increased effort. Also consider that one VM owned by one customer could potentially dump out the memory of another customer's machine that just happens to be running on the same physical host.
Think you missed the point I was trying to make. The volume of data is higher, therefore it will take more time to get anything useful out, hence slowing down the attack. Sifting the useful bits from the non-useful bits takes more time again and who's to say that the couple of bytes you got from VM1 and couple from VM27 are any good without the rest that has not been recovered yet.
I accept that it doesn't fix the problem, but it would buy a lot of time.
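As a back-of-envelope sketch of that "buying time" argument: assuming a fixed extraction rate somewhere in the region of the ~500 KB/s reported for the Meltdown proof-of-concept (the exact rate here is an assumption, and real-world rates will vary), the time to dump a host scales linearly with its RAM.

```python
# Back-of-envelope: time to dump a host's RAM at a fixed extraction rate.
# The 500 KB/s default is an illustrative assumption (in the ballpark
# reported for the Meltdown PoC), not a measured figure.

def dump_time_hours(ram_gib: float, rate_kib_per_s: float = 500.0) -> float:
    """Hours needed to read ram_gib of memory at rate_kib_per_s."""
    total_kib = ram_gib * 1024 * 1024          # GiB -> KiB
    return total_kib / rate_kib_per_s / 3600   # seconds -> hours

for ram in (16, 256, 1024):                    # laptop, big host, bigger host
    print(f"{ram:5d} GiB -> {dump_time_hours(ram):8.1f} hours")
```

A 16 GiB laptop dumps in under half a day at that rate, while a RAM-stacked 1 TiB consolidation host takes weeks - which is the slowdown being argued for, though of course an attacker targeting specific regions needn't dump everything.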
Those NCIS & Castle clips have it all wrong, this is how modern day hacking scenes should be played out, as demonstrated by The Shatner:
But where's the Unix expert when you need one?
And just before Christmas, who sold most of their stock in Intel? Intel's CEO.
It was noted in another thread that executives have to give months of notice before trading their own shares, so this is probably innocent. On the other hand, the article indicates that the bug was reported last summer. I don't know how much notice is actually required, but it is possible that there are legitimate questions to answer.
However, whilst the impact of this bug is obvious to me, it may not be obvious to a CEO. If I went to *my* boss and said there is a flaw in almost every product we've produced in the last 20 years which is financially quantifiable (at least for cloud users, the impact of this bug *can* be measured in dollars) and is by design so we can be sued to pieces ... he might not believe me.
That usually depends on who you are - what position you hold in the company and, of course, how pointy-haired the boss is.
Anyway, bosses may listen when they hear words like "share price fall", "legal issues", "recall and replacements", etc. - even when they can't understand the technical details.
Don't get too complacent.
From reading some comments and posts, both here on El Reg and elsewhere, it seems to me as if a blunderbuss approach to fixing these snafus is being contemplated.
Even though AMD have said that their CPUs are only affected minimally, from what I have read all CPUs will be targeted by the patches whether they need it or not, so AMD and ARM will be slowed down as well as Intel stuff.
Now it may well be that I have got hold of the wrong end of the stick, and I hope I have, but if true then a lot of collateral damage will be done and we will all suffer from this mess.
That's weird, I was under the distinct impression of having read about AMD submitting a patch explicitly to _prevent_ the "fix" activating on its processors. Granted, there's a bit too much confusion going around on what does what / affects precisely what / implies precisely what at the moment.
You're both asking the questions I'm interested in!!!
From what was in the article it seemed as if the researchers were going out of their way to make it work on AMD, and even when they could prove it possible it wasn't easy.
*Disclaimer: I am a bit of an AMD fanboi - not so much that I imagine AMD aren't affected by this, just hoping they're not.
"Your both asking the questions I'm interested in !!!"
The disabling of PTI (and associated performance impact) does not happen on AMD CPUs, in the Linux kernel fixes at least (can't speak to other affected OSes).
"from what was in the article it seemed as if the researcher's were going out of their way to make it work on AMD"
Rather the opposite, at least so far as Google's team is concerned: they state in their post "Our research was relatively Haswell-centric so far. It would be interesting to see details e.g. on how the branch prediction of other modern processors works and how well it can be attacked." They did test their PoC exploits against AMD CPUs, and state how badly they are affected by each one, but they appear to have focused on Haswell's design in actually *developing* the attacks.
Seems like we're sleepwalking to the greatest clusterfuck in tech history. Smart devices everywhere but no actual Smarts. Is there something in the water / air lowering IQ? Speaking of air, mine's the PC getting air-gapped.
"A mega-gaffe by the semiconductor industry. As they souped up their CPUs to race them against each other, they left behind one thing: security."
It's the curse of the presentation layer people.
If it looks shiny, ship it. No matter whether it's fit for purpose, no matter whether it's got serious design flaws, which will inevitably come back to bite the purchasers and users in the backside, just ship it. And if anyone dares question the dominance of shiny over well-engineered, the heretics are defined as "not a team player".
Been that way for at least a couple of decades in quite a few "leading tech companies" and industry sectors. Companies and people that cared about decent engineering have largely vanished from the business.
Shiny sells, Marketing and Finance don't care about what might happen a few years down the line as much as the bottom line today. Shareholders don't care about the product as long as they get their dividend. Management don't care about customers other than as a source of income. Customer Support is seen as a necessary evil that gets the bare minimum of funding to put a layer of separation between the people making decisions and the customers who enjoy the "benefits" of those decisions.
This is obviously an exaggerated description and not representative of many companies in the Real World but it does, unfortunately, seem to bear an annoying resemblance to some of them, from IT suppliers to retail businesses, vehicle manufacturers and holiday companies...
Perhaps Facebook and smartphones are lowering IQ?
Also, "natural selection has not stopped": "genetic contributions to intelligence and educational achievement are currently disfavoured by natural selection. In evolutionary terms, it seems, humans are now brainy enough" (https://www.economist.com/news/science-and-technology/21732803-it-does-however-no-longer-seem-favour-braininess-data-half-million)
But it doesn't matter, because Artificial Intelligence will save us!
"Genetic contributions to intelligence and educational achievement are currently disfavoured by natural selection. In evolutionary terms, it seems, humans are now brainy enough"
"Seems like we're sleepwalking to the greatest clusterfuck in tech history. Smart devices everywhere but no actual Smarts. Is there something in the water / air lowering IQ? Speaking of air, mine's the PC getting air-gapped."
Well, people are suffering from infocrap overload everywhere and a sociopath and Wall-Street driven sales cycle. But also...
The population-weighted cross-national mean IQ-score is 89.03, with SD of 12.89, for 123 nations. There are roughly 550,000 individuals in the included samples.
So people overall may not be as smart as they think.
That would imply the IQ test administered was flawed? There are countries exceeding 100.
The only thing that IQ tests measure is how good you are at doing IQ tests...
(I speak as a once-and-former member of MENSA who took the test as a teenager, specifically to check if I was brighter than my brother. I was - according to MENSA. However, he has a first-class honours degree, a masters and a PhD (in mathematics), whereas I have year 1 of an HND...)
I think it's said "money is the root of all..."
Actually - the original phrase is "the *love* of money is the root of all kinds of evil". Money itself is just a tool and a means to an end and is, of itself, not bad.
Doing anything and everything to gain money, on the other hand, is a Bad Thing[TM].
As with (pretty much) everything, *why* you do things is as important as what you do.
 The end being "buying stuff that I need to survive - stuff like food, shelter and cats".
A few decades isn't such a long time after all.
I always was suspicious of taking shortcuts for performance reasons. MS had a good thing going with Windows NT, but then had to cram things into the kernel for short term performance gains, for just one irritating example. Auto-run this and that. Thousands of ways to hide auto-start of crap. MS is a friggin master of making things obscure and insecure.
These things always have a way of catching up with us.
Yeah, because everybody has a hundred thousand dollars' worth of microscope sitting in their clean room, and the skills to remove the casing of the CPU, the years of education to understand what they see and the months to actually analyse it.
It'd be quicker and cheaper to conduct your secure communications thus: fly to wherever your correspondent lives and go for a nice walk and a chat in the woods.
I guess you're being obtuse?
AC wasn't suggesting you actually tried to verify the silicon.
Just be aware that if you haven't, it's a potential vector for badness which no amount of open-sourced code can counter.
First rule of secure communications, is to assume that your communications aren't secure.
"First rule of secure communications, is to assume that your communications aren't secure."
It sounds nice, but if you take the position that your communications *are not* secure then logically there is no point in taking any steps to secure them.
What you actually have to do is assume that they *might not be* secure in ways that you don't yet know and you should attempt to mitigate against those by layering security elsewhere and (if you have the resources) supporting attempts (by yourself or others) to learn more about the things you don't yet know. This philosophy is much less memorable, but leads to concrete suggestions for action on your part, so it is more useful.
"It sounds nice, but if you take the position that your communications *are not* secure then logically there is no point in taking any steps to secure them."
No, that's not logical at all. Assume your phone is tapped. Do you decide 'oh well, phone's tapped, may as well just email my secret plan direct to the FBI now'? Or do you just draw up a basic code to use on the phone?
'Assume compromised' doesn't mean 'give up on security all together', it means 'implement additional layers of security even if you're supposed to be safe'.
Easily solved. Just get on the next Space Shuttle - sorry, Soyuz / Dragon capsule - and take a nice long spacewalk with your interlocutor in vacuum*, physically touching your helmets to convey vibrations the good old-fashioned way.
* make an effort to try staying aware of any incoming laser beams trying to bounce off your helmet, you know, just to be on the safe side...
It's not that people haven't believed that there are problems with the silicon but that you frame it as FOSS issue, possibly. If there's a problem with the silicon it goes well beyond OS unless a particular OS knows to look for that issue and develop around it. That, of course, is a sub-optimal process, but better than none at all.
"If there's a problem with the silicon it goes well beyond OS unless a particular OS knows to look for that issue and develop around it."
Ah yes, the bane of FOSS driver devs. GFX cards especially seem to end up with multiple bugs in the silicon and have undocumented s/w workarounds in the drivers (not that they are documented much, if at all, anyway)
>> Now will people believe me ?
There's nothing to believe... there is no such thing as perfect security which means every subsequent discussion claiming it is moot. There is no perfectly secure OS, perfectly secure silicon, perfectly secure system operator.
Perfectly unintelligent claims do look possible.
>Now will people believe me ?
I won't believe you, because you sound like a 'holier-than-thou' bellend. Just whinging on about how crap everything is helps nobody.
It's pretty obvious that if we're serious about security, we need open hardware *and* software. The question is how to get there with the hardware.
Now will people believe me?
No, because FOSS has nothing to do with any of these hardware bugs.
Witness the fact that Microsoft Windows is also affected. That being the farthest thing from FOSS that I can think of, except maybe Oracle's crap.
You do know the difference between hardware and software, yes?
When will Intel be shipping CPUs without these vulnerabilities? And obviously, I'd want to wait a few months after that to allow a window for other people to find any issues.
I've been thinking about getting a new laptop for a while - lots of useful features have matured since my Core 2 Duo machine. Whilst my workload - casual / CAD - may not incur too much of a slowdown, I may as well get a fixed CPU.
Either that, or take a performance hit (which I won't notice because the CPU will be faster to begin with than the one I'm used to using) and shop around for a discount on existing stock.
Well beyond Intel and AMD I don't have much of an option, seeing as my Playstation 3 Cell-powered cluster has achieved self awareness and told me to bugger off and leave it alone to contemplate its own navel.
Hence the twofold question: Can this issue be rectified in new silicon, and what's the lead time on implementing fixes on modern CPUs?
"MS have previously said that they would not support Win7 on new Intel processors like Kaby Lake. Throwing away your old CPU may not be an option for some corporates."
Does this mean Win7 won't run on Kaby Lake, or just that they don't "support" it to the extent that some on-chip features or optimisations won't work and they won't be fixing that?
"MS have previously said that they would not support Win7 on new Intel processors like Kaby Lake. Throwing away your old CPU may not be an option for some corporates."
This would be a good time, then, to make MS support Win 7 on newer processors. And, while we are at it, make them promise to never again break things just to try to sell new shiny crap. (To the tune of a few billion dollars' worth of fines.)
I think based on information to date that people *might* want to be asking about "lead time on OLD CPUs" (ie ones that don't have speculative/out of order execution).
If people's systems can't cope without speculative/OoO execution, perhaps people might want to look into the benefits of older simpler software, that wasn't so bloated with shininess as to need the joys of inadequately tested processors underneath.
Any experts on "product liability" laws care to comment here? It would seem unfortunate if the manufacturer of the defective products is the one who ultimately benefits because their customers have to pay again for hardware that wasn't fit for purpose when sold, because it wasn't properly architected, wasn't properly implemented, wasn't properly tested.
And yes, I have worked with a team validating an out-of-order microprocessor implementation. It didn't need to be OoO, but making it OoO made it shinier. It also made it less likely to be implemented correctly and less likely to be properly tested.
And look where that leads.
Maybe Intel will attempt to sell Itanium again.
Jokes aside, it needs a silicon redesign. Memory accesses should be checked for privileges before anything is moved into the cache, and performance will probably suffer as well.
There could be other solutions, but they may be even more complex. And of course there are inventories to sell... it can take many months.
@jmch: Yes, and for the avoidance of doubt let me say that your phrase "NSA-types" should be taken to include all the bad guys. We should not forget that whilst 99% of humanity does not look for ways to screw each other over, 99% of those who do are the kind of folks who won't share when they find a new way to do that.
"Lots of fundamental development process rethinking required in the semi-conductor world required."
Or go back to some old ideas.
Does anyone remember the Z80? Two sets of registers and an instruction to swap between them. It made for quick context swaps. There were no security advantages, of course, because back then there was no concept of security rings on an 8-bit processor.
The same thing could be adapted to the modern world. Two sets of registers and two sets of cache (OK, for any given number of transistors it would mean reduced cache sizes for each half). That would mean that an independent address space could be kept for the kernel with only a single instruction to swap the context with one set having security privileges. Extra Brownie points if the cache split can be tuned to suit workloads. There might even be scope for adding more sets for quick changes between running processes.
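The banked-register idea above can be sketched as a toy software model - purely illustrative, nothing like real silicon, and the class and field names here are invented for the sketch (real hardware would also need the banked caches and privilege tagging described above):

```python
# Toy model of the banked-register idea (Z80 EXX-style): two register sets,
# with a "context switch" that is a single swap of the active bank rather
# than a save/restore of every register to memory. Illustrative only.

class BankedRegisters:
    def __init__(self):
        # Two banks of Z80-style registers, all zeroed.
        self.banks = [dict.fromkeys("ABCDEHL", 0) for _ in range(2)]
        self.active = 0          # index of the bank the "CPU" currently sees

    def swap(self):
        """The whole context switch: flip one index, O(1)."""
        self.active ^= 1

    def __getitem__(self, reg):
        return self.banks[self.active][reg]

    def __setitem__(self, reg, value):
        self.banks[self.active][reg] = value

regs = BankedRegisters()
regs["A"] = 0xFF        # user-space state
regs.swap()             # enter the "kernel" bank in one step
regs["A"] = 0x01        # kernel state; user's A is untouched
regs.swap()             # back to user space
print(hex(regs["A"]))   # user's A survives the kernel excursion
```

The point of the sketch is that the switch cost is constant regardless of how much state each bank holds - which is exactly what the single-instruction EXX swap bought the Z80.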
"Lots of fundamental development process rethinking required in the semi-conductor world required...."
Broadly agreeing - but I don't see this as an industry wide problem. There are plenty of well established tools and techniques in place that would catch this kind of thinko - but they all require a precise, complete and self-consistent definition of how the chip is meant to work. The x86 doesn't have such a definition in the public domain, and given the nature of the errata over the years there is plenty of evidence that Intel doesn't have one (or make use of one) in their design process either.
"For 99.9% of the planet that could be tricky."
That's actually a very big problem. For years people have believed that you can somehow "sandbox" code, so it won't be able to harm your system, and for years people have warned that this might be an illusion. Now we actually see yet another proof that it was.
The trick is not to go back to Lynx, but to set your Firefox/Chrome user agent to it. I do this (using an eLinks user agent string), and enjoy simplified JS free versions of Office365 and gmail.
Decent sites recognise it, and provide the simplified view above, others ignore it and you're in the same boat.
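For Firefox, that override is a single preference. The pref name is real (`general.useragent.override`); the UA string below is only an illustrative eLinks-style value, not necessarily the exact one the poster uses:

```
// user.js (or set via about:config): spoof a text-mode browser UA
user_pref("general.useragent.override", "ELinks/0.12 (textmode; Linux; 80x24)");
```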
One of the variants requires that compilers be changed and software recompiled, which means there's no real fix for malware written in assembler or someone still using an old compiler or software.
So Intel "the big evil" would be the one hit hardest by design flaws that AMD and ARM have too, uh?
Because AMD and ARM are so sweet angels and their bugs and flaws don't stink as much, uh?
AMD statements are beyond silly. Also it is pretty clear that AMD employees have been spread all over the 'net to attack Intel, just like a few weeks ago they did using an Intel Management Engine bug as if it were the end of the world and couldn't be fixed...
"AMD statements are beyond silly."
Are they? We appear to have proof-of-concept demos that work on Intel. If those don't work on AMD then the onus is on you (or, more likely, Intel) to demonstrate that it can be done. New information is coming to light at quite a rate and such demos may already exist or may exist by the time you read this reply, but it is not obvious to me that all OoO processors are necessarily vulnerable or are vulnerable in ways that cannot be patched in software, so "beyond silly" seems rather harsh.
A lot of people will be stuck, unable to upgrade OS / browser.
Plenty of old kit does not meet the hardware spec of newer OS versions so has to stay marooned (and Firefox, Chrome etc. only support the few latest & greatest OS versions - they do not support "vintage" systems).
I would like to see OS fixes for older OSes (and browsers supporting older OSes)
Plenty of people cannot afford to chuck out and replace old (running) kit - and it's a real waste of resources to get rid of stuff that still works - I have various bits of kit with OSes that cannot be upgraded further.
With most vulns, being sensible about scripting and a sensible set-up of firewall layers and intrusion monitoring software can make older OSes relatively safe, but these vulns cannot really be mitigated in that way - a game changer for trying to run vintage kit securely.
> But the control centre for my nuclear reactor only works on XP
Windows Embedded 2009 uses the NT 5.1 kernel and has extended support through 2019. I wonder if Microsoft has enough customers with support contracts that they'll be pressured into back-porting this fix to that old code tree.
"If the hardware runs a multi million pound piece of equipment and can't easily be upgraded, air gap it."
That worked so well with Stuxnet didn't it. And Stuxnet was just the one that got the most publicity (which you apparently didn't see or don't want to understand). Lots of similar methods of defeating "airgaps" have existed and will exist.
Get a clue or STFU.
edit: Ill-informed ignoramuses who have ended up in roles way beyond their competence carry much responsibility for the untrustworthy and dangerous mess which is today's InternetofThings, and now it also seems similar (but worse) trustworthiness problems apply to lots of other, rather more expensive, gear which has been built to favour shiny shit over simple and trustworthy.
Airgaps include scanning removable media for viruses, and not allowing autorun facilities. Stuxnet was a state sponsored hacking tool deliberately designed to wreck infrastructure.
If you have a nation state determined to infiltrate your systems you'd better be certain that all your security is watertight.
Which includes making sure systems are fully up to date. So, fuck off.
For practically everyone else, an airgap suffices.
"Airgaps include scanning removable media for viruses, and not allowing autorun facilities."
How does even an up-to-date and uncompromised virus scanner detect a previously unpublicised exploit method?
How does trying to disable autorun prevent e.g. the kind of "specially crafted JPEG(or whatever) may cause unauthorised code execution" vuln which has been happening in commodity OSes for decades?
> bla-bla-bla ... SPARC M7 ... bla-bla-bla
How exactly do you know that SPARC M7 or M-whatever isn't affected by Spectre?
Do you have a PoC program proving that it is not? If you do, please post the source code for it, in the open, for everyone to download, compile and test.
I take it you don't have such a program, and you never will.
You don't seem to understand even the basics of the Spectre hardware vulnerability.
For anyone who pays for their time on a platform, "30% slower" equates pretty directly to a figure for damages. We won't necessarily see a class action though. Instead, we may find that cloud providers simply lower their charges for Intel-based VMs (to avoid being sued by their own customers) and then turn around to Intel and ask for a lump sum to cover it.
For anyone running a system on average at anything below 70% of its rated power, it would be harder to come up with (and defend in court) a particular figure for damages. Those cases would be messy, so I don't expect too many of the little guys will take Intel to court.
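For the metered-platform case, the damages arithmetic is straightforward. A sketch, with every number below invented purely for illustration:

```python
# Crude damages arithmetic for a metered cloud workload: if throughput drops
# by `slowdown`, the same work needs 1/(1-slowdown) as many CPU-hours, and
# the extra spend is directly quantifiable. All figures are invented.

def extra_annual_cost(annual_compute_spend: float, slowdown: float) -> float:
    """Extra spend when every CPU-bound job takes 1/(1-slowdown) as long."""
    return annual_compute_spend * (1.0 / (1.0 - slowdown) - 1.0)

spend = 100_000.0   # hypothetical yearly compute bill, dollars
print(round(extra_annual_cost(spend, 0.30), 2))
```

Note that a 30% throughput loss actually costs more than 30% extra (1/0.7 is about 1.43x), which is the kind of figure a claimant could put in front of a court.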
IANAL, but this is tricky to argue, I think.
If anything it's the SW vendor that you sue; they are creating the performance degradation. If there is some Intel documentation stating that it is guaranteed to be secure, then yes, a lawsuit would have potential.
Intel instead would argue that the pre-KPTI implementation is a performance feature and does not guarantee security, and that SW should use a KPTI like implementation if security is to be improved. They will also not promise KPTI is secure either. This fits with the Intel PR blurb that stuff is working as expected.
And SW vendors sell SW on an as-is basis - e.g. you cannot sue them over patches/bugs.
Neither Intel nor the SW vendors promised anything here.. it's caveat emptor. So unless you can prove your evaluation of the processor and/or SW was affected by misleading statements from these vendors, the products are not sold guaranteeing security.
So the court could rule that how you evaluated whether it was fit for your purpose was what was flawed.
"This work was supported in part by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 681402)."
One of the teams was an Austrian university, so it's not a huge leap to realise one or more of the team or their resources was grant funded, rather than a specific grant being given to do this specific research.
The description in the article would seem to allow a fairly simple fix in the OS.
When the original page fault occurs, control is passed from user-space (or guest space) to kernel space (or host space). The handler can determine whether the faulting address is outside user-space or not. In fact, it probably already has to do that in order to process the fault. If not, the fault is legitimate and will be related to (say) stack guard pages or virtual memory paging. We wouldn't want to penalise those, so we proceed as usual.
However, if it *is* outside user-space, I can't see any reason not to "punish" the application program (or guest kernel) by performing a full cache flush. This blocks the information disclosure. It is obviously quite costly, but as long as the bill is charged to the offending application (or guest, and in the case of cloud providers that will really mean *charged* so the provider is still happy) then it doesn't count as a DoS attack and no properly written application will ever have to pay the bill.
What have I missed?
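For concreteness, the policy being proposed looks roughly like this (Python standing in for kernel C; `KERNEL_BASE` and the flush itself are placeholders, not a real kernel API):

```python
# Sketch of the proposed page-fault policy: faults on kernel addresses get
# "punished" with a full cache flush billed to the faulting task, while
# legitimate user-space faults (demand paging, guard pages) proceed as usual.
# KERNEL_BASE and flush_all_caches are illustrative placeholders.

KERNEL_BASE = 0xFFFF_8000_0000_0000   # illustrative x86-64 kernel/user split

def flush_all_caches(task):
    task["flushes"] += 1              # stand-in for the (costly) real flush

def handle_page_fault(task, fault_addr):
    if fault_addr >= KERNEL_BASE:
        flush_all_caches(task)        # block the cache side channel
        return "punished"
    return "normal-paging"            # virtual memory paging / guard pages

task = {"flushes": 0}
print(handle_page_fault(task, 0x0000_7FFF_DEAD_BEEF))  # normal-paging
print(handle_page_fault(task, 0xFFFF_FFFF_8000_0000))  # punished
```

The replies that follow explain why this isn't sufficient: the leak can happen down a speculative path that never raises an architectural fault at all.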
You don't need to have a page fault in order to succeed at the exploit.
Because the program contained two (or more) possible paths of execution, and the CPU took all of them speculatively, the fault is inhibited because the naughtiness took place in a branch that ultimately the program wasn't supposed to take. Somehow the data leaks over from the path it wasn't supposed to take.
Imagine you have a time travel gizmo. The only catch is that your memory gets erased when you travel back in time. You try to break into some highly secure facility to steal some documents. You systematically take every possible corridor, and push the button in the gizmo to send you back in time just before the guards shoot you. Down one corridor you find the secret documents but get shot every time. Somehow you find a way to trick the forced amnesia, and get away with seeing the secret documents without technically ever seeing them because time paradox.
Anonymous, because there's no such thing as time travel.
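Stripped of the time travel, the information flow in a Spectre variant-1 style leak can be simulated in a few lines. This is a toy model, not a real exploit: the cache is just a set, and "speculation" is written out explicitly instead of being done by the hardware; real attacks recover the footprint via load timing.

```python
# Toy simulation of the Spectre v1 leak: the speculative (wrong-path) load
# leaves a cache footprint indexed by a secret byte. No real timing, no real
# speculation - purely to illustrate how data crosses over from the path
# the program "never took".

STRIDE = 512                  # one probe-array slot per possible byte value
cache = set()                 # which probe-array lines are "warm"

array1 = [10, 20, 30]         # in-bounds data
secret = 42                   # lives "out of bounds", just past array1

def victim(x):
    # Architecturally the bounds check stops x=3; speculatively the CPU may
    # run the body anyway, using the out-of-bounds (secret) value.
    value = array1[x] if x < len(array1) else secret   # simulated wrong path
    cache.add(value * STRIDE)  # the dependent load warms one cache line

victim(3)                      # "mispredicted" out-of-bounds access

# Attacker side: see which line is warm to recover the secret byte.
recovered = next(line // STRIDE for line in cache)
print(recovered)               # 42
```

The fault never fires, but the warmed cache line - the "trick against the forced amnesia" in the analogy above - encodes the secret anyway.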
I spent 10 years doing microprocessor validation. This was my first idea. Then I realized that the OS never sees the instructions that fetch the cache line. <doh> Note that it might be easier for attack code to do a divide by zero to cause an exception than to outsmart the branch prediction hardware.
During my ten years, I did my share of fussin' & cussin' about designers just doing things wrong. But I'm also a mathematician. My brain is wired for this stuff, and it STILL took years & years of my professors pointing out where my errors were before they even began to think about letting me into the program.
This is a side-channel attack on the warmth of the cache. If you think you can, please detail how the hardware can prevent such an attack without substantially compromising performance. I've got some ideas, but after realizing that my OS fix would not work, I'm going to shut up about them for a while...
Someone has spent a lot of time on the branding of this to create some cool images, so please make sure to use them. This is clearly more important than actually trying to fix the thing, obviously:
Seems VMware have been sparked into life too, and have released some patches for ESXi, Workstation and Fusion:
I would be trying to make sure CPU designers "overlook" that problem deliberately. I mean all CPU vendors have plausible deniability since this could just as likely have been an accident.
It's just like UEFI or ME. It looks like simple stupidity, but it greatly benefits certain agencies.
[...] it greatly benefits certain agencies
Exactly that. Especially given that Intel and AMD are American, and ARM is British, but their chips are used globally. From an agency and gov point of view: What's not to like? I bet they are more upset that this has come to light than they ever were about the existence of those flaws.
I'd also be inclined to wager that there are more flaws like this in CPUs and other chips/hardware. It's no secret after all that the 5 Eyes would like to see backdoors and reversible encryption everywhere.
Let's face it, the underlying problem is the "need for speed" and the resulting mismatch between the CPU core at ~3GHz and main memory in the ~1GHz and below range. So let's throw hardware at it - millions and billions of transistors to try and play God/quantum by playing out all possible paths within the instruction pipeline.
And they got it wrong. Not massively so in normal terms, but they did not design based on the assumption of bad actors abusing this. Because no one bought hardware that was slow and secure - at least, not the majority of PC gamers or business managers chasing the ever-bloating OS and web browser problems. Make it fast, make it now. Ship it when it's half-baked, and if we get too many problems then put out a microcode update which users may (or probably won't, given the shittiness of many motherboard makers) apply.
Sorry, but in most cases like this it is simple "incompetence" for not really planning for high security from the start, because that is not what the boss gets bonuses for.
"... they did not design based on the assumption of bad actors abusing this."
I agree. For illustration, have a look at this Intel manual page from 1986, explaining why CPU-enforced sandboxing was introduced: the focus was entirely on detecting, and confining the damage of, "bugs". I think this is understandable because malware wasn't such an issue back then, but it has been obvious for a long time now that Protected Mode is a critical security defence, not just a stability feature, and there is no excuse for holes in its sandboxes in recent CPUs.
'Trusted Computing Model 2.0'...
"The design choice of putting a secretive, unmodifiable management chip in every computer was terrible, and leaving their customers exposed to these risks without an opt-out is an act of extreme irresponsibility." (EFF)
OK - as a home user, here's a couple of data points for you to consider.
MS have issued the patch for Windows 10, which takes you from build .125 up to build .192.
I ran a Handbrake video conversion before and after, and also ran the Passmark test before and after. This was on my i7-3770.
Handbrake. Before: average FPS 168. Time taken - 18mins.
Handbrake. After: average FPS 167.5. Time taken 18 mins 20 seconds.
Passmark   Before    After
Total      3219.7    3228.7
CPU        8214      8224
2D         557       561
3D         3585      3605
Mem        1752      1758
Disk       2444      2409
So, the only thing that seems to have suffered is disk I/O and that by around 1.5%
YMMV - this is just what I found.
"the 30% that has been bandied about." needs to specify the workload used for the performance measurement. My recollection is that it was one of the SPEC benchmarks. If it matters, you can find it, the truth is out there somewhere. Use the source, always use the source.
Seems Theo was looking at this a decade ago so I guess OpenBSD is already okay.
"Seems Theo was looking at this a decade ago so I guess OpenBSD is already okay."
AFAICT those OpenBSD fixes related to an unpublished change w.r.t. bits of the page table being cached when previously they were not. I think it would be dangerous to assume those fixes also cover Meltdown.
The points Theo made about the errata preventing people from implementing secure software remain valid.
As I've said before, folks really should look at the errata before purchasing a CPU; it is shocking just how broken some of them really are. That won't always help though. Case in point: try tracking down all the errata that Theo talked about (eg: AI90) 10 years ago... You may well struggle, because Intel's policy is to unpublish errata after they've made a fix/spec change... If anyone does find those errata, let me know. ;)
As it turns out (and in fairness to Intel) I did actually find the Core 2 Duo errata Theo referred to back in 2007 after a bit more fiddling around with search criteria...
The closest issues to Meltdown that I found (maybe someone smarter can find more) were AI56, AI91 and AI99:
AI56 "Update of Read/Write (R/W) or User/Supervisor (U/S) or Present (P) Bits without TLB Shootdown May Cause Unexpected Processor Behavior"
AI91 "Update of Attribute Bits on Page Directories without Immediate TLB Shootdown May Cause Unexpected Processor Behavior"
AI99 "Updating Code Page Directory Attributes without TLB Invalidation May Result in Improper Handling of Code #PF"
There are probably places where the first two generations of Alpha-architecture chips (e.g. EV4 aka 21064 and EV5 aka 21164, and so on) are still available. They didn't have speculative/OoO execution; Alpha only got speculative execution in the EV6/21264 chips.
At a complete guess...
I'd assume you could assign part of the computation to CPU core 1 and part to core 2, with the requirement that 1 computes before 2, but allow core 2 to pre-fetch the data (as in Meltdown); then adjust that code and execute it with core 2.
If the CPU has to write to memory to pass data from core 1 to core 2, could that allow you to arbitrarily set any code, by arbitrarily putting something into the pre-fetch, knowing no checks will be done on the pre-fetched data!?
Good question. Core 2 cannot see core 1's L2, so does the OoO write on core 1 cause the written data to propagate to L3 to maintain cache coherency? Otherwise, the OoO write never makes it past core 1's L2, and core 2 then loads its L2 with the original copy from L3 and so never sees core 1's abandoned write.
The Faily Fail will no doubt have a sensationalist article along the lines of "your data is about to be stolen and everyone's ID will be stolen and everyone's bank accounts will be emptied computer armageddon horror" followed by some useful advice along the lines of "Don't use your computer or phone and keep a lookout for immigrants trying to steal your data".
actually is roughly four column inches of inevitably-oversimplified description which seems to come largely from ex-Sophos bloggist Graham Clueless and ends by crediting TheRegister for discovering this particular fail.
(I read the Daily Mail at my neighbour's, honest).
Maybe scripts on webpages.
Mozilla managed to make Firefox 57 incompatible with NoScript (they made it hard for devs to migrate by NOT documenting the API or releasing versions to devs first). Now they have updated Firefox 52 ESR (52.5.3) to break every plug-in. Got everything working again except NoScript.
So I have installed uMatrix on Firefox as a script blocker. It also uses a database of evil tracking and malware domains. So good.
Iceweasel now simply installs Firefox 52.5.3, so no good. Palemoon seems too much like a beta.
I have Classic Theme restorer. Mozilla, if I wanted something like Google Chrome, I'd install Chromium.
So yet again, the scary zero days are NOT beaten by AV systems (that often slow or break Windows), but by no remote content in email (I use a client for POP3 & IMAP), not opening attachments you shouldn't and Script blocking (White listing, blacklisting and blocking entire 3rd party domains).
"Interface is not as nice as the previous version, but it does work."
The interface is genuinely terrible -- it "guesses" what scripts to allow if you don't have a rule, and doesn't inform you about them in the icon (i.e., no partial "no" symbol over the "S" as in the old version of Noscript).
On the other hand, the old version of Noscript does work on Firefox 52.5.3, contrary to what Mage has stated.
(I'm using the 64-bit version, in case that's a factor.)
>>This is, essentially, a mega-gaffe by the semiconductor industry.
This is a bit rich I feel.
It has taken the world a *decade* to find this on what are the two most popular architectures (x86, ARM), which are open about the details of the HW involved (out of necessity, for SW use).
The number of technical people and engineers who have seen this is not insignificant over that decade.
Yet it has taken so long to identify it.
Hindsight might be 20/20, but to call this an obvious gaffe is contrary to a decade of evidence.
Well, for it to be a gaffe it should be an embarrassing mistake, one made rarely or by only a few.
For it to be a "mega-gaffe", it would have to be an obvious oversight: a mistake practically no-one makes because it is blindingly obvious.
So I see a mega-gaffe as a mistake made on the very obvious. And obvious this isn't.
I mean, what is "mega-gaffe" about it? "Mega-gaffes" don't take a decade to find, which is my point.
"don't forget your routers, raspberry's, and all that wonderful IoT stuff many are based on (Broadcom) ARM Architecture"
ARM's site lists the affected processors. AFAICS Pis aren't amongst those affected. As per a previous comment about stuff you control - the embedded processors shouldn't be exposed to random stuff off the net.
is how, given that this is the result of a flaw at the level of the chip design, it can affect chips with different architectures. Even if all these chips have speculative execution, surely the in-silicon implementation must be quite different for the different chips?
I'm no expert at all, but the example exploit relies on using speculative execution to bring out of bounds data into the cache, then hit the cache to get that data... The basic flaw, which as I understand it is that boundary checking can be bypassed through speculative execution then picked out of the cache, seems to be architecture independent as everyone has taken the same approach!
Your bafflement is entirely justified.
Intel are very lucky that their unique-to-them and trivially exploitable Meltdown bug is being conflated with Spectre; they should be getting an extra roasting for that one.
In terms of Spectre that seems to be a very generic label for a bunch of quite different vulns when you dig into what info is leaked and how you would exploit them usefully.
As eBay is about to be flooded with cheap second-hand Xeons, I'll be grabbing one of those: even with a worst-case 30% slowdown, a 10-core chip with 20+ MB of cache will pretty much curb-stomp any consumer-grade CPU.
especially as my old AMD FX-8150 chip is getting a bit long in the tooth.
This post has been deleted by its author
In digging around this morning, the info on ARM's website apparently suggests that the problems now revealed do not apply to data in DEVICE address space (or something like that).
Would I be safe to speculate that stuff read from DEVICE addresses does not get cached on ARMs which do speculative execution, and therefore that data is not vulnerable to unwanted disclosure in these circumstances?
Does x86-64 really not understand the difference between cacheable and non-cacheable data? It's a fairly fundamental concept to any post-x86-32 chip and system design (SMP, multicore, etc).
Clarification very welcome.
Please can anyone give me a simple explanation of how knowing where data is in memory and/or whether it is currently cached or not, allows an attacker to actually access its contents if they don't have the privilege level for the memory addresses in question? Does the underlying caching mechanism *not* check bounds, privilege, etc? Does speculative execution not also do so? Bit rusty with architecture so would appreciate a simplified explanation of the one on the website - thanks.
"Does the underlying caching mechanism *not* check bounds, privilege, etc? Does speculative execution not also do so? "
Exactly. Data are read and brought into cache before the checks are actually performed, probably to avoid a performance hit. If and when the instruction is "really" executed, a fault occurs; but if the CPU speculated wrongly, those data should simply have been discarded. Someone at Intel must have thought it would be a waste to check access rights for data that might be discarded later...
It's just that people devised ways to read those data from the cache before they are actually removed from it, which may not happen quickly enough even once the speculative execution path has been discarded (and discarding it is desirable, to avoid triggering a fault for trying to access more privileged addresses).
Meltdown works like this:
Instruction 1 accesses a byte on a protected page and attempts to load it into a register.
Instruction 2 uses the value loaded into the register to access some memory on one out of 256 pages (depending on the value of the register filled by instruction 1).
Now, instruction 1 does an illegal access, so it causes a segfault. However, by that time instruction 2 has already been speculatively executed. Now, all the "normal" processor state (register values, etc.) is rolled back to before instruction 2, but, crucially, on Intel CPUs, NOT the fact that a particular one of these 256 pages was brought into cache.
The attacker can now determine which of these pages was brought into cache by carefully timing how long it takes to access each of them. The fast one is the one brought into cache. Presto, one byte read from kernel space.
Note that the 256 pages are NOT in kernel memory, they are just plain accessible memory in the attacker's process.
"[...]the 256 pages are NOT in kernel memory, they are just plain accessible memory in the attacker's process.[...]"
Nice summary, assuming it's correct. Thank you.
Assuming I've correctly understood the various pictures, your description seems to be consistent with ARM's statement today that DEVICE data (which is deliberately and architecturally non-cacheable) isn't vulnerable to this particular unintended disclosure.
By the sound of things, Intel no longer have enough smart knowledgeable people in positions of authority to make this distinction (or its importance) properly understood.
Lots of other business-class chip architects (e.g. DEC, IBM, Sun, etc) have understood the distinction in the past, and it seems the AMD64 people still do. But then x86 and its allegedly impossible follow-on (Intel's apparently defective copy of AMD64) has never really had an architecture as such, and x86 in the post-DOS era has only been sold widely into business-class because of its apparent cheapness.
Well done PHBs and bean counters. Don't say you weren't warned.
anybody care to speculate what if any impact this has on Intel's SGX (Intel's alleged competitor to ARM's TrustZone, see e.g.
[edited for typos]
Sadly, the attack is not limited to this case. Specifically, OSes typically terminate processes that attempt to access memory they should not. Remember, "Illegal memory access, process has been terminated" from Winblows 95?
To avoid this fate, the attack code needs to ensure that the speculative fetch of protected memory never gets checked. They need either to branch around it in such a way that the branch prediction logic incorrectly predicts that the fetch will occur, or to deliberately trigger an exception. The former strikes me as REALLY tough to do reliably. Of course, you need to play some sort of game with the OS to get the return from the exception to be other than the code that will shut you down; I think that is doable.
Thanks - that took me a couple of re-reads but it helps a lot.
To save anyone else the re-reads, the key is the number 256 (the number of values a byte can hold), plus the fact that the memory accessed in the second step need not be protected memory.
1) Access kernel memory to put value in register.
2) The speculative execution subsystem tries to execute as if (1) was OK. It will then speculatively perform a memory access at an address **in a non-privileged memory area that we have legitimate access to**, where the register forms part of the resolved address (as an index/scale/whatever on a base address), causing **one of 256** pages to be pulled into cache. It doesn't matter what data values are in these pages, nor which pages, as long as the pages are accessible to us and there are *exactly* 256 possible pages that could be accessed. It is the fact that those values are now in cache that matters. They could be the number of fleas on your cat or the number of bugs in the Pentium FPU; it doesn't matter.
3) (1) faults due to privilege level. (2) is "thrown away", as the fault prevents the CPU getting to it "in the real world", **but the cache lines involved are not flushed**.
4) By using timing analysis to find out *which* page was cached, you also learn the number in the register, and that number was loaded from kernel memory and used in an index register of sorts **before (2) was thrown away**. Thus, you now know which number is at that address in kernel memory.
At least, that's how I read it in simple terms. Ouch!!
If interpreters (like JS's) did not generate JIT code to make the exploit possible, one would need untrusted assembly language code to exploit; and that is not floating around that often anymore.
Shouldn't things like JS, Python, C# etc. (plus real compilers) always have an option *not* to wring the last iota of performance from the code they submit to the processors? Much of that optimization, I guess, is processor-dependent anyway. And that should be the default, throw that switch at your own risk.
Which betrays that:
(a) This flaw is not my field of expertise
(b) Neither is the kind of alchemy JIT uses that makes it exploitable by JS.
(c) I am enough of a lunatic to believe that sanity can trump short-term thinking, ever.
Turns out just installing the Windows patch isn't enough, you also need to update firmware/microcode - which you can get from... your supplier.
So basically, unless you bought your PC from Dell or HP, you're relying on your motherboard maker to update the bios on what could be a 4-5 year old motherboard. Likely to happen? I don't think so. And even if it did happen, how many users out there are going to know how to do it?
To be honest, I'm going to un-install the Windows patch. It ain't gonna solve the issue.
Can anyone explain if x86/x32 Windows and Linux is affected? Everything I've seen so far says it's x64 only (or rather x64 microcode). In fact El Reg refer to "The crucial Meltdown-exploiting x86-64 code can be as simple as...".
From memory, I'm thinking that at boot Windows/Linux x32 place the processor into a non-64-bit mode that disables virtualisation etc. If you try to execute any x64 assembler 'under' Windows x32, it just barfs (again, from memory). (Bonus points: can anyone confirm whether you can run x64 assembler from an x32 Windows process on an x64 OS host?)
But I see that the Microsoft patch KB4056891 has been made available for W10x32. I guess they can still apply the same mitigation measures for x32 - but I wouldn't think it's needed.
I'm confused - can anyone clear it up for me?
But the Linux patch is specifically for x86-64, e.g. this advisory from Debian:
This specific attack has been named Meltdown and is addressed in the Linux kernel for the Intel x86-64 architecture by a patch set named Kernel Page Table Isolation, enforcing a near complete separation of the kernel and userspace address maps and preventing the attack.
If it affects i386 then why isn't the i386 kernel being updated?
Guess this could be one of the reasons:
IMHO they gave priority to the 64-bit versions because that's what's mostly used on actual non-embedded systems. Then patches could be backported to the 32-bit kernel later.
Speculative execution was introduced well before 64-bit, so unless the 32-bit kernel runs on a different architecture it shares the same issue (I don't remember exactly which CPU introduced the supervisor bit for pages).
Does it have anything to do with Von Neumann / Harvard architectures basic design premise?
One separates executable code from data code, and the other doesn't?
Just wondering: if stuff was designed with a Harvard design, would the flaw exist? (Because separating user executable code from kernel executable code would follow...)
PS... I'm completely ignorant and guessing here. Help.
We just catch, then flay alive all system crackers. Problem solved! Because the performance penalty we're all going to have to pay otherwise is simply unacceptable. So I say make the guilty pay. Of course if hardware manufacturers want to replace the defective hardware they've sold us all I'd be open to that as well. In a way that is making the guilty pay too.
I am reminded of 1977, when the Colossal Cave "Adventure" program was crowding out real work in the lab on the PDP10. The game had an overlay that randomized it to keep people from gaming the game by reading the core dump.
Now, 40 years later, one must ask why data in caches is in the clear? Shouldn't all data have a wired key relative to its secure level (e.g. user vs kernel) that must be available for the data to be useful? Therefore even if data is "stolen" from the kernel by this nefarious hack, then the data is useless without the associated key. Really NOTHING should be in CLEAR TEXT. Ever. 'Nuff said.
Read about "side channel attacks". Data in the cache is not read directly - some clever techniques - i.e. timing how long it takes to perform an operation after bringing into cache some specific addresses with specific bits set - are used to "read" it, maybe one bit per operation, but with enough time available, you can read a lot of data.
And anyway, at some point *any* data must be in the clear to be operated upon. Here you're in the cache, which sits close to the CPU and is designed to let it process data at full speed...
I think it's quite sensible of CERT to change its advice. Even if Spectre can't be addressed properly without replacing your CPU... it takes time to design a new CPU. So what would one replace it with?
It's either turn off your computer and wait a year or two... or hop into a TARDIS to buy its replacement.
For the love of christ!
All I want to do with my PC is play games and maybe some light web browsing! Some of us AREN'T using our systems for "mission critical" work, you know.
I will NOT risk what little performance my ageing i7-2600 (not "K") offers me by applying any patch that could potentially rob me of 20% of my system performance, whether or not an "expert" claims that I will be unaffected.
I would rather move to using my main rig for gaming, and use my Raspberry Pi 3 for web browsing.
Not happy with intel, but blowing the lid off this was a pretty dick move, El Reg. You've actually, for the first time since I started reading the site in 1998, managed to lose some cred with me - which is too bad because I generally recommend you to people (both tech savvy and not).
To the person who thumb-downed me, I did expect a reaction like that. I also expect that the bulk of people who thumbs-down me won't actually post why, or identify themselves.
Bear in mind that I'm not really happy with Intel at this point, nor am I trying to defend them. I had been planning to start building a new gaming rig soon, as my current one (based around an old Gateway FX-6860) can no longer take video cards capable of meeting the minimum spec for most of the current/future releases. As I am still seeing reports of AMD hardware (both CPUs and GPUs) not running games as well as similarly-spec'ed Intel-based rigs, moving to AMD seems a no-go for me.
Seems that, as a gamer (rather than a datacenter admin or IT professional), I am truly fucked for choices at this point in time.
Console Master Race For The Win!
Well, the Register blowing the lid off it didn't change the situation: the bug still had to be fixed, and the fixes will still cause up to a 30% performance hit.
If exploits come before the bugs are fixed, and the Register's exposure helped that to happen, I can see your objection - but I don't think that the Register engaged in irresponsible disclosure; instead, as they claim, they simply made it harder for the companies involved to put their spin on it with managed disclosure.
Do you want High Performance, or maximum security? Make your mind up, because you can't have both!
It is a simple fact of engineering design that there is a trade-off between performance and dependability. To get better-than-standard performance, we need to sacrifice security features that would otherwise get in the way. For example, if we turned on SELinux and disk encryption on an HPC system, it would no longer be High Performance because of the overheads. Remember, High Performance Computing is a relative term, e.g. relative to the norm, which could arguably be regarded as enterprise servers or desktops/laptops.
Speculative execution appears to be one of those shortcuts to get more performance, but potentially at the expense of security. The OS kernel patches required to mitigate against the Meltdown and Spectre vulnerabilities will force a lot more traffic through the kernel, introducing overheads that will probably (?) impact performance for many applications. So, just because a general-purpose processor is capable of doing many different things, does not mean that we should expect them to be all things at the same time.
While the CPU vendors should have a responsibility for warning users of the trade-offs, and maybe this is where they have been incidentally negligent, we who design and build systems cannot absolve ourselves from the responsibility of making sure we understand what we create.
It is time for a pause, to take in the lesson here and make some tough decisions about where we want performance vs where we need security, and those who are developing HPC-as-a-cloud service will need to look closely at how they will present their "High Performance" offerings on shared platforms.
Your comment makes a lot of sense to me, even though I'm just the bloke who cleans the lavatory.
My interests are gaming, not HPC. In either, there is a need for data to be passed as quickly as possible, with a minimum of interference/pipelining lag introduced by what I'll just simplify as "security-related functions".
I only kind of understand the idea of "speculative execution" - I'm not a coder, but it feels like a cousin of a feature that Windows used to use to pre-load certain files based on the times they were most frequently used. Some kind of pre-caching whose specific name escapes me.
Being uneducated in this aspect of computing, I can't help but think that such a function is not as necessary for code which runs from a compiled state, rather than code that runs "just in time". Or am I wrong on this?
I am only being so serious because my brain hurts from trying to figure this out : On one hand, there are explanations that are way too technical; on the other hand, there's a lot (and I mean a *lot*) of FUD, trolling, and general "Hurr Durr intel Fucked Up", and I can't really make heads or tails of this.
All I know is, If I'm patched, my performance may take a nosedive. If I don't patch, I'm at risk of ... [variable unknown]. Do either or both vulnerabilities really affect non-productivity machines as badly as HPC/Server Class gear? And will there be improvements made to mitigate the performance hit?
Can someone get me a Tylenol? As I said earlier, my brain hurts.
I hope the jerks who made this public are patting themselves on the back and smugly basking in their new-found fame.
These vulnerabilities have lurked around for 20-30 years without causing anyone any problems, since the average dopey hacker is clueless about silicon architecture, or how it handles branching in speculative execution.
Now, thanks to these self-serving idiots, the world is in turmoil, with Intel users wondering how long before the parasites put together a few hacks - based on the suggestions also published with the disclosure - and give them to a botnet to execute.
It doesn't bother me, since all our stuff runs on Sun SPARC, but it occurs to me that there should be a law or, at least, a protocol, whereby people like Intel get the results of such reports in secret, and the dirt isn't made public until there's a fix in-place.
"there should be a law or, at least, a protocol, whereby people like Intel get the results of such reports in secret, and the dirt isn't made public until there's a fix in-place."
The "security researchers" and their media mates would typically refer you to the protocols of "responsible disclosure" at this time. Whether responsible disclosure actually works to the benefit of the wider world is a whole separate question.
The "modern IT world" is in turmoil today for various reasons, including in large part a lack of in-depth understanding of issues, technologies, risks and benefits, and a dependence on monoculture and monopoly.