Apple's bug description
"Can we vague that up a little more?"
In its ongoing exploration of Intel's Management Engine (ME), security biz Positive Technologies has reaffirmed the shortsightedness of security through obscurity and underscored the value of open source silicon. The Intel ME, included on most Intel chipsets since 2008, is controversial because it expands the attack surface of …
Transparency CAN be a good thing, but for a flaw of this severity it would be like waving a flag the size of the internet to alert all malware writers to their next target. The longer before it goes fully public, the more computers will be protected. At the rate that most Macs get their system updates, the exposed system numbers will have diminished greatly by now, hopefully to the point that they are not worth attacking.
Sounds like a big part of the problem here is once again Intel's lack of documentation. If Apple had known of the issue from Intel they would surely have fixed this long before.
"Sounds like a big part of the problem here is once again Intel's lack of documentation."
There's a lack of publicly available documentation about this, but there is no lack of documentation provided to manufacturers -- which includes Apple.
"If Apple had known of the issue from Intel they would surely have fixed this long before."
If Apple didn't know about this, it's because they didn't read the docs. However you slice this, Apple screwed up.
" If Apple had known of the issue from Intel they would surely have fixed this long before."
Yeah, that'd be it, it's Intel's fault.
Apple are known for jumping on and quashing exploits and bugs as soon as they are informed of them.
There's not a single bug, security flaw or exploit Apple know about that hasn't been fixed by Apple as soon as they find out about it.
https://www.google.com/search?q=apple+ignore+security+flaw&ie=utf-8&oe=utf-8&client=firefox-b
Apple may not manufacture their own laptops, but they do write their own software - including parts of the EFI (not, unfortunately, the crap ME stuff Intel is responsible for) - so it isn't clear how much blame goes to Apple and how much goes to Foxconn or Pegatron or whoever makes Macbooks and might have been responsible for turning off manufacturing mode after post-assembly testing. The fact that Apple was able to fix it in a patch shows that they could have, and should have, anticipated this as a possibility and had the software disable manufacturing mode by default (if it was already off, no harm done).
This is just another in the growing list of reasons that Apple might want to kick Intel to the curb and use their ARM cores in Macbooks. Not that Apple is immune to software bugs, but ME seems to be particularly crap software that has obviously been wide open for years relying on "security through obscurity" for the fact that attacks started becoming known only recently. It is reasonable to expect there are many more ME related attacks yet to be discovered as security researchers continue to poke and prod that software.
"Manufacturing Mode can only be accessed using a utility included in Intel ME System Tools software, which isn't available to the public."
But it will be available to the public. Eventually. If it isn't already. These things always leak, without exception. And then they are taken apart, re-coded & enhanced by "the bad guys" (whoever they are). Security by obscurity never works for long. You'd think that the so-called movers & shakers in the tech world would have noticed this by now ...
"Manufacturing Mode can only be accessed using a utility included in Intel ME System Tools software, which isn't available to the public."
"But it will be available to the public. Eventually. If it isn't already."
Yeah - I was able to find links to downloads of the Intel ME System Tools suite (and other Intel software) in under a minute. There are many versions available, including the latest 2018 versions.
Given that it is intended for post-assembly testing, it will be used daily by many rather low-paid Chinese employees assembling PCs for pretty much every manufacturer on the planet. It only takes one to take a copy out the door and put it on the web.
Hardly surprising that it is easy to find, especially since Intel probably didn't consider it much of a security risk previously (not that takedown requests would have prevented its spread in hacker circles, but it might have made it harder to find via Google)
So there were these three passwords, then another special mode with a password which does what those three passwords did, and maybe in a few years we'll find a super special mode (Snowden II).
And when that day comes Intel will dust off this press release and update it a bit.
Back doors into every CPU. "Nice job." Not.
I don't care how many passwords they have. They're all "knowable".
What do we REALLY need? How about a hardware 'off switch' for anything similar to 'Management Engine'?
If I have to unscrew a panel to change the CMOS battery and/or swap hard drives on a laptop, how about the same panel for a jumper to ENABLE management engine? 'Off by default'.
Desktop motherboards should be a no-brainer. A jumper if you WANT the ME, off by default.
They use fuses in the CPUs to disable capabilities - i.e. if they have a part they sell as not having HT, they'll blow a fuse during manufacturing/testing and then it can't access those features.
Sure would be nice if there was a way in the EFI to disable the ME and cause it to blow a fuse so it would be PERMANENTLY disabled. Being able to turn it off is great, but it leaves open the possibility that it can be turned on again. ME is such a security disaster I wouldn't feel truly safe unless there was a way to turn it off that is as secure as when Intel disables capabilities like HT and VT.
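The write-once behaviour being asked for can be sketched in software. This is a toy Python model of a fuse bit, purely illustrative and not Intel's actual mechanism; `FuseBit` and its methods are invented names:

```python
# Toy model of a one-time-programmable "fuse" bit: once blown, the
# feature stays disabled no matter what any later software requests.

class FuseBit:
    """Write-once disable bit, like those used to lock out CPU features."""

    def __init__(self):
        self._blown = False

    def blow(self):
        # Irreversible by design: the class deliberately provides no
        # method that clears the bit once set.
        self._blown = True

    @property
    def feature_enabled(self):
        return not self._blown


me_fuse = FuseBit()
assert me_fuse.feature_enabled      # shipped with the feature available

me_fuse.blow()                      # owner opts out, permanently
assert not me_fuse.feature_enabled  # no API exists to re-enable it
```

The point of the model is that the disable path is one-way: unlike a firmware flag, there is simply no operation that restores the enabled state.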
Jumpers aren't practical for laptops, and corporations aren't going to want to open up every PC they buy to flip a jumper so there's no way Intel is going to make the default state "off". That would be admitting they can't make it secure, which they will never admit. Hell, Adobe didn't ever admit Flash was a security disaster, despite El Reg writing an article every couple months for years about the 85 new security issues fixed in it :)
No on the fuse idea. I paid for the capability, there is a small chance I might want to use it for something, someday. A true hardware switch (jumper) is a good compromise.
DougS, corporations have been opening cases and setting jumpers since the year dot. Adding one or eight more to the mix won't add appreciable cost to the corporate bottom line. And with more switches comes more granularity ... a hardware customizable/settable version of something like ME, where you could pick the options you want[0] according to jumper selection, might be something I could get behind.
[0] or more importantly in this case, options you do not want ...
No on the fuse idea. I paid for the capability, there is a small chance I might want to use it for something, someday. A true hardware switch (jumper) is a good compromise.
You absolutely did pay for the capability... we all have, whether we wanted to or not. I'd guess that far fewer people have actually paid for the functionality (in the form of enterprise management software) to actually utilize that capability. I have not, nor do I believe that such low-level access to workstations is desirable to many enterprises in this age of commodity hardware and rapid imaging.
Intel have purposely engineered what seems to be a flimsy backdoor into their hardware to make it easier for select customers to manage their massive inventories. The vast majority of their customers did not ask for this or utilize this, and it is astonishing to me that Intel would not provide people with a way to permanently disable it. I'm all for fuses. I can develop a remote hardware management strategy that doesn't involve the ME and stick to it.
With near unanimity we balk at the idea of software developers writing backdoors into their software (hardcoded creds, requests from nosy governments, etc). I fail to grok why Intel, AMD, or any other hardware developer would get a pass.
"The vast majority of their customers did not ask for this or utilize this, and it is astonishing to me that Intel would not provide people with a way to permanently disable it."
I'm actually very familiar with the ME and its history, so I can comment on this a bit.
Intel's largest customers (enterprises) did ask for this functionality. They needed to be able to do low-level maintenance on large fleets of PCs in a more efficient manner, and the ME allows you to do everything remotely that you can do when physically present (including replacing the operating system, etc.) This is actually a legitimate and non-nefarious use for this sort of technology.
Originally, if you wanted the ME you had to buy special versions of the CPU, and you paid a premium for it. It was not included in CPUs that were aimed at the consumer market. At some point, though, they just started putting it in all of their CPUs.
In my opinion, that was the first huge mistake Intel made. The second was ignoring all the security experts who spent years telling Intel that the ME had serious security problems. The third was (and is) refusing to actually engage in effective measures to fix the problem.
Corporations may have been opening cases since the year dot, but they don't want to do it any more. PC installation should not need to be a skilled job in 2018. What corporations want is a low-cost, standard-architecture PC which can be configured by booting it up connected to a configuration server, then deployed by the same guy who delivers the office furniture. The hardware will not be upgraded during its life, and these days its lifecycle is probably aligned with the next major Windows release. To be honest, using the current set of Microsoft management tools this is pretty much attainable now. The last few PC roll-outs I managed, we used very junior staff for the deployment, employing just a couple of software packagers and configuration engineers to design the builds and package the relevant applications (a black art with legacy apps), plus a software licensing specialist to ensure that we remained compliant across the 600-plus apps we have in use and got the best value for money whilst standardising on product versions. Before the Linux and FOSS crew start berating me: there is not a full set of reliable, tested and robust applications out there for many niche applications, and interoperability with MS products is a requirement for most large organisations.
The security fsck-up of Intel ME is OS-agnostic, and even penguin-botherers can see why the ME functionality could be useful. No, the real issues are:
1) Piss-poor attitude to security in Intel.
2) Lack of tools to see if ME is on and to verifiably disable it for those not wanting it.
3) Suppliers not getting 1 & 2, so leaving it enabled and in manufacturing mode.
"What Corporations want is a low cost standard architecture box which can be configured by booting it up connected to a configuration server then being deployed by the same guy who delivers the office furniture."
Excellent. Anyone wanna buy a boatload of thin client boxes, or whatever they were called last time they were rejected by the usual Certified Microsoft Dependent IT Directors?
Happy daze.
They did implement a fuse, from the Intel response:
This includes setting "End of Manufacturing."
That's a software fuse. Obviously you should not be able to access manufacturing mode after this state change has been performed, software tool or no software tool.
"Jumpers aren't practical for laptops"
Why not?
"corporations aren't going to want to open up every PC they buy to flip a jumper so there's no way Intel is going to make the default state "off"."
So what? I don't care what corporations are or are not going to do. I want a way to disable it on my own machines. The default state being "on" is acceptable as long as I can switch it to "off" myself.
"Desktop motherboards should be a no-brainer. A jumper if you WANT the ME, off by default."
Probably because the vast majority of desktops are sold into corps. and government (local and national) and they want ME on in many/most cases.
https://twitter.com/raptorcompsys
New motherboard revealed at the OpenPOWER summit today (probably about half an hour ago). POWER9-based, no management engine. Probably sub-ATX; looks like it has SATA and a couple of expansion slots.
(OS selection will be a bit limited (Linux and... Linux (FreeBSD, others in progress)), code is not always optimised for POWER, compatibility list is short.)
Sorely tempted by this, but can't justify the full Talos II motherboard, and the Lite is a bit limited. If this can support 4-8 cores with a reasonable amount of memory, SATA (or SAS), and at least two expansion slots I might go for it. It'd be good to have a modern non-Intel system.
"isn't all security based on some dependence on obscurity?"
No, but you ask a fair question. The obscurity being referred to is the mechanism, not the key. That's not obvious in the phrase, and the phrase is nearly always just parroted without explanation, and has been for as many years as I've been in the business. I suspect that unless you are of retirement age, you'd need to have taken an interest (*) in security matters ever to have heard the full explanation.
(* Obviously, if asking questions in El Reg comment pages counts as "taking an interest" then I'm setting my bar fairly low here.)
Isn't all security based on some dependence on obscurity? Whether it's an 8-char easy-to-guess password or a 1024-char key, they're both dependent on how hard it is to guess the information, no?
No. For one thing, many security mechanisms have vulnerabilities with a work factor smaller than brute-forcing the secret.
More importantly, "security through obscurity" refers to violations of Kerckhoffs's principle. The information about a security system which is not known to the attacker is in effect part of the key. You'd like that information to be uniform: equally difficult to derive from side channels, equally easy to change, etc. That makes it amenable to analysis, among other things.
If part of your security comes from keeping a mechanism secret, then part of your key has undesirable properties. Mechanisms can't be changed as easily as pure-data keys. They're vulnerable to discovery, because they're repeated in every instance of the system. For cryptosystems, it's hard to analyze their strength, because they contain redundancy; useful machines are not evenly distributed in the universe of all possible machines.
So what you want - and this is Kerckhoffs's point - is to consider only the actual key as secret. Assume the attacker has everything else. That makes analysis tractable, and avoids overestimating security based on a fragile secret.
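The contrast can be made concrete with a deliberately toy Python sketch (the cipher names and shift value are invented for illustration; neither scheme is fit for real use):

```python
import secrets

# "Security through obscurity": the secret IS the mechanism. Every
# deployment uses the same baked-in shift, so reverse engineering one
# copy of the software breaks all of them, forever.
SECRET_SHIFT = 7  # hidden in the binary, identical everywhere

def obscure_encrypt(plaintext: bytes) -> bytes:
    return bytes((b + SECRET_SHIFT) % 256 for b in plaintext)

# Kerckhoffs-style design: the mechanism (XOR with a key) is public;
# only the per-user key is secret, and it can be replaced if it leaks.
def keyed_encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(plaintext, key))

msg = b"attack at dawn"

# An attacker who has recovered the mechanism decrypts everything:
ct = obscure_encrypt(msg)
recovered = bytes((b - SECRET_SHIFT) % 256 for b in ct)
assert recovered == msg

# Knowing the XOR mechanism alone recovers nothing without the key:
key = secrets.token_bytes(len(msg))
ct2 = keyed_encrypt(msg, key)
assert keyed_encrypt(ct2, key) == msg  # the key holder decrypts
```

In the first scheme the "key" (the shift) is welded into the mechanism: it can't be rotated, and one successful reverse-engineering effort compromises every user. In the second, publishing the mechanism costs nothing.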
There's a difference between obscurity provided by the manufacturer which is common across all users, and obscurity provided by the user which is unique to that user.
If the obscurity is provided by the manufacturer, i can buy the same system myself and investigate it. The system can be reverse engineered and the obscurity uncovered and exploited.
If the obscurity is provided by the user, i can't buy the same system off the shelf and discover the passwords or keys of some arbitrary user since they won't be present.
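That distinction can be sketched as follows; the fleet, password names, and values here are hypothetical, chosen only to illustrate the shared-secret problem:

```python
import secrets

# Manufacturer-provided obscurity: one service password baked into the
# firmware of every unit. Buying a single unit and dumping its firmware
# yields the secret for the whole fleet.
SHARED_SERVICE_PASSWORD = "factory-default"  # hypothetical, same in every unit

fleet_shared = [{"id": i, "password": SHARED_SERVICE_PASSWORD}
                for i in range(3)]

extracted = fleet_shared[0]["password"]  # reverse engineer my own unit...
# ...and it now opens every other unit ever shipped:
assert all(d["password"] == extracted for d in fleet_shared)

# User-provided secrecy: each owner sets their own credential, so a unit
# bought off the shelf tells an attacker nothing about anyone else's.
fleet_per_user = [{"id": i, "password": secrets.token_hex(16)}
                  for i in range(3)]

mine = fleet_per_user[0]["password"]
assert not any(d["password"] == mine for d in fleet_per_user[1:])
```

This is why a manufacturer secret behaves like part of the mechanism: it is repeated in every instance, so it only has to be uncovered once.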
Good to know, but the current MacOS High Sierra version is 10.13.6, so that issue appears to have been fixed two updates ago.
Still thinking about 10.14 Mojave - mostly because I may rebuild the system from the ground up instead of just letting the update loose (even on Macs, it's occasionally a good idea to clean out the cobwebs). I also like to wait until the first patch has arrived - a habit I have kept from my Windows years (some habits are worth keeping, IMHO).
Every cpu security system has been blown wide open. Sometimes it seems that while one part of a cpu team is working hard to secure something, the other part is working hard to undermine the security.
Having managed many SDLC programs, I spent more time going on detective missions where I'd eventually find out a team had slipped in a web server, a diagnostic tool, a debug process, etc., without informing us or even documenting it.
And once a device was out in the field there were almost no recalls and the support staff were hooked on to the easy diagnostics (see, no passwords required).
One famous chip vendor’s software team turned off static code analysis because it was giving out too many criticals.
One server code base I scanned had 98,000 criticals. Yup.
In both cases the decision was made to hide everything from my team. Fortunately the CEO stepped in ...