Way to go Intel!
You're up to Microsoft grade fuck ups now! Welcome to the big leagues!
You can remotely commandeer and control computers that use vulnerable Intel chipsets by sending them empty authentication strings. You read that right. When you're expected to send a password hash, you send zero bytes. Nothing. Nada. And you'll be rewarded with powerful low-level access to a vulnerable box's hardware from …
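For anyone curious how an empty string can beat a password check, here's a minimal sketch in C of the pattern the public write-ups describe. The function and variable names here are mine, not Intel's; the point is simply that strncmp() with an attacker-supplied length of zero compares nothing and therefore reports a match:

#include <stdio.h>
#include <string.h>

/* Hypothetical reconstruction of the reported check. If the client sends
 * an empty response, response_len is 0 and strncmp() compares zero bytes,
 * so it returns 0 ("equal"). */
static int check_response(const char *computed, const char *response,
                          size_t response_len)
{
    return strncmp(computed, response, response_len) == 0;
}

int main(void)
{
    const char *computed = "3f8a-expected-digest";
    /* Attacker sends nothing at all. */
    printf("empty response accepted? %s\n",
           check_response(computed, "", 0) ? "yes" : "no");
    return 0;
}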
Yeah, totally new.
It's not like Intel endeared themselves to us with the FDIV bug (aka the approximation bug) or the F00F bug, for starters.
I could also rattle off many, many other hardware bugs, from various vendors, from old '286 BIOS bugs onward, but it'd be encyclopedic. Going back to things like the Award BIOS's handling of 32 and 64 GB hard drives, where the WD software and the BIOS interacted badly and trashed WD hard drives.
I remember being told about it in about 2005 and reading a doc from the server vendor; it sounded really nice (far better than the IPMI and serial port stuff we had at the time anyway, though not as good as HP iLO or (modern) Dell DRAC), at least for servers. I never managed to see it appear in any servers I've had. My last couple of laptops at least have AMT options, though without more software it doesn't seem to do anything (I was sort of expecting an iLO-like experience: be able to connect to a web server on the management processor, etc.). I guess it's geared more towards corporate desktops these days.
Dug up my email from early 2006; the board the vendor was talking about was the Intel SE7230NH1LX, which was a Pentium D board. Looking online I don't see a reference to AMT with that board, so maybe it was an add-on option.
Come now, no one in their right mind would put an iLOM / DRAC style management port on t'Internet! At least they are (usually) on a separate physical port.
If you do want remote management you should be jumping in through a VPN first as a minimum as they are notoriously buggy and insecure.
Also not mentioned: Is this Intel vulnerability also exposed over WiFi? Could add a whole new set of fun & games available on public WiFi hot spots!
"Have you heard what passes as modern IT execs ? There is no mind, only a buzzword echo chamber. If it saves 20c it will be ordered to be so."
Part of that problem is that in the, ahem, extremely unlikely event of something going horribly or tragically wrong with this week's fashion, the cost to recover doesn't usually come from the relevant IT exec's bonus or even from that budget. Some other bunch of suckers usually end up paying for it - often customers, other employees, or both.
So for example, offshoring can still look like value for money, if all that people look at is the short term impact on a narrow definition of costs and benefits. Look at the bigger picture and even corporate failures like BT Retail have realised that offshore customer service is not necessarily good for business.
As for corporate data protection: when will they learn? Holding a few board-level people up might focus the relevant minds. But in the UK there's going to be another 'bonfire of the red tape'...
what kind of coder am I?
Rhetorical question. I know what kind of coder I am. I am the kind of coder that lied on my CV, got the job and now I copy/paste the code of clever people into my work. Even though I do not fundamentally understand what I am doing or what them function thingies accept as arguments or return as values... My code compiles.
Come off it, everyone who has ever written software has done something like this, which is why code should be reviewed before being committed. And I reckon too that anyone who has reviewed a piece of code has missed a clunker like this from time to time - noob or otherwise.
"Come off it, everyone who has ever written software has done something like this"
Sure, but it's disappointing that their system for reviewing code before it makes it into firmware didn't catch such an obvious mistake. Human error happens, but the review process should be designed to cope with that.
"Human error happens, but the review process should be designed to cope with that."
Something that I continually strive to achieve in our information security shop, as a hedge for when I make one of my legendary fuck-ups.
You know the type, such as that hibachi accident at Hiroshima in 1945, to which the US quite nicely accepted the blame for my accident.
Another bug that highlights how the lack of a proper string type in C, and related operations, is dangerous from a security perspective. Any language I know that has a native string type also stores the length, and checks it at the first comparison step.
Unluckily, clinging to design flaws that were acceptable when punched-card programs didn't require much string manipulation is what's shaking many IT foundations.
It's purely bad coding, not C's fault, that someone decided to use strncmp instead of strcmp. Looking at the code snippet we can be fairly sure that the two strings have already been validated and stored in their own string buffers, so why not just use strcmp? You'd get the same error in BASIC if you'd decided to use LEFT$ instead of = for some crazy reason.
And code review and QA should catch it. The fact that it didn't means AMT is probably full of other bugs.
No, it is C's fault: the lack of a string type with proper operators, and then the need for n functions to perform the same task, each slightly different from the previous one, each attempting to fix its predecessor's issues.
It's the arrogance of the Unix/C people who believe they got the perfect design by divine suggestion over forty years ago, and nothing needs to be changed, that is creating innumerable issues in software. Face it, how applications work has changed a lot from the times of punched cards and batch jobs, little memory and slow CPUs. In many languages, that bug is simply impossible.
To have secure systems we need big changes in CPU designs, OSes, and programming tools. Otherwise, it's just a whack-a-mole game.
And how would a string type fix the fact the programmer used a substring compare function instead of a full string compare function?
In many languages, that bug is simply impossible.
There are languages without substring compare? Tell me which ones they are so I can avoid them.
Oddly, during my code monkey era, if I nobbed a bit of code, I examined the hell out of it and figured out what it did, how and why.
Of course, I date back to before the era of compilers being common. We used to do dev parties, where a few maniacs actually wrote raw object code.
While things have moved on, I can still disassemble code and figure out what that compiled code actually does, even when the disassembly looks inefficient. While that's rubbish for complex code, such as office software or an entire OS, it's eminently useful on malware samples.
It is almost impossible to map all states to be honest, and if you call external libraries (which is pretty hard to avoid) then you'll have to map those out as well 8) This is a bit of a blinder though, on what must surely be a code path that can be reasonably easily audited. As it is the gatekeeper, it should surely warrant quite a lot of inspection.
Given how face-palmingly obvious it is and how long this has been out there we can assume that lots of cracking has been perpetrated via this channel. It is quite hard to not extrapolate to a conspiracy ...
"This is a bit of a blinder though, on what must surely be a code path that can be reasonably easily audited."
Never attribute malice to that which could be better attributed to being close to lunchtime or quitting time.
I've hastily reviewed documents at both times, to re-review, to reacquire my train of thought later, and was horrified at what I missed and then had to fix and review a bit farther back. And those were simple things, like mission plans (military) and policies.
Eventually, I narrowed down my window of distraction, stopped doing such reviews until later in the day, and pursued other items that required my attention in the meantime. The change of task was distracting enough to avoid such errors.
"Real Men" don't ask for directions, so the man page is totally out of the question.
Although, I'll admit, I've coded such an abomination more than once, then caught it while going over the code, thinking, "WTF was I frigging thinking?!".
Though my best coding was in security, authentication and verification systems.
I've also an infamous habit of "fucking off", aka taking an additional break. I was productive enough to be able to do so in each career I've had, which has now reached a half dozen, all high-level successes, until the field faded. Before I could get fatigued enough to stop paying attention and miss things, it was time for a fuck-off break. Where I circulated among peers, resolved their problems, went out for a smoke, conversed for a bit, then went back to work.
In one corporate environment, efficiency analysts were annoyed at my waste of time, so I insisted on a re-examination and agreed to an observation period with no additional breaks. Shop productivity dropped by 30%, morale dropped even more and my own production dropped.
They re-examined their data and, via interviews with the observers, noted my interactions and troubleshooting, all while I still managed to work my way to the remote smoking area.
Yeah, after that, they recommended doing things my way. Alas, only for me.
Frigging idiots. Drove off other people, who would otherwise have advanced to such an SME level.
Who then worked for competitors.
Thanks for that - very useful.
It makes the point that a firewall running on an AMT-enabled system cannot properly secure the system (i.e. has the packet you sent to the firewall been intercepted by the management processor rather than the firewall CPU?). I suspect that may affect a lot of security people's assumptions about their network setup if the firewall is running on an AMT platform through pfSense, virtualisation or similar. And I'm very interested to see if any vendors come out with firewall patches. As for any environment you can't physically validate yourself...
I tend to go for stupidity over malice when looking for explanations for this type of thing, but I'm going to add a bit of tinfoil to my headware just in case...
This problem requires some sort of direct network access. If you have a router based on a Dell/HP/IBM/whatever Core i5 or 7 and your WAN connection comes from the NICs that are onboard then this could be an issue for you.
E.g. you repurpose an old server system (with AMT) as a pfSense-based router and plug an onboard NIC into the WAN. That NIC is not directly accessible by anyone other than your ISP - in theory. Mind you, who knows what is on your ISP's network anyway?
You get the idea.
Such interfaces, DRAC, this, various other management interfaces, should always be on an internet blind VLAN, accessible only from the management VLAN, which also does not have access to the internet.
*That* is the damage limitation.
WTF would you put a management *anything* openly accessible to the entire frigging internet?! If anything, it should be via authorized VPN connections that are allowed to access the management server's VLAN, which can access that VLAN only.
Christ on a crutch! This isn't complicated!
The firmware update mentioned security and said "HP strongly recommends", so I'm pretty sure it is the one. Luckily I had just bought a new laptop last fall, so it is still actively supported. I'll have to see whether Dell ever releases a firmware update for my old laptop. Since I never use either one wired, I'm not really too worried.
Due to a relocation, with the associated change of lab and production networks and loss of critical equipment, I'm now down to two potentially vulnerable systems.
A previously desired reconfiguration will be advanced to next weekend.
There's a big plus in having enterprise networking equipment at home. :)
There's lots of badly written, never-audited software with hardware privileges running in modes inaccessible to the operating system.
For example, when USB came to the market, operating system and BIOS vendors couldn't be arsed to implement it; after all, it's a rather complex protocol. So the vendors shipped a special binary blob which used the CPU's System Management Mode (SMM) to emulate standard PC input devices even though you actually had USB ones. On bad laptops even things like battery control are done by SMM software.
Depends what you mean by "AMD based". If the motherboard chipset is Intel but the CPU is AMD then the system would be vulnerable.
Definitely rare though; a quick google suggests the last boards that could do that predate this vulnerability, so unless you have something like this...
https://www.extremetech.com/extreme/225839-there-can-be-only-one-new-msi-modular-motherboard-will-support-both-intel-and-amd-processors
...then you should be safe.
Erm, this is enterprise specific hardware, not consumer geared hardware.
So, 99.9% of the userbase on the planet are not vulnerable to this bug.
So, my wife's hardware isn't vulnerable. Some of my hardware might be. :/
A bit of network reconfiguration would take care of that issue. :)
Having personally known quite a few such personnel (though not being involved with their activities), I'll suggest: no.
Too clumsy. FVEY is a *bit* more clever, adding authentication of certain sorts, which I shan't discuss.
They don't leave shit wide open and hope for the best, their own equipment included.
China and Russia, the same.
This looks like a classic human foul-up, likely due to a poor selection of copy-pasted code and a distracted, probably pressured code review - if review was even present and this wasn't an inherited abomination that never got reviewed at all.
To be perfect, divine. To foul up, quite human.
But, that's this analyst's opinion.
I'll now go to bed. To get 8 hours of sleep.
For the record, for fun also, it's 4 AM where I am.
I love the irony that if they had used strcmp instead, there wouldn't be a bug. Ironic because the programmer probably thought "shouldn't use strcmp... that might be insecure or cause a crash". Probably a form of hypercorrection. It's not strcmp's fault if another bit of your code fails to null-terminate a string.
Still on the subject of strncmp, surely it would be a good idea for the compiler (or a debug version of the C lib) to warn if the call is/can be a no-op? Obviously, I can think of some places where this might have a valid use (like exiting from a partitioned search when you've either found the right string or end up with a partition size of zero; checking which case it was can be deferred to outside the loop) but for the most part, a no-op wasn't what you expected, so it probably indicates a bug like this one.
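As a rough sketch of that debug-lib idea (the names here are made up, and there's nothing standard about it): a wrapper that flags any comparison which can never compare a byte.

#include <stdio.h>
#include <string.h>

/* Hypothetical debug wrapper: warn at run time when strncmp is asked to
 * compare zero bytes, which always reports a match. */
static int dbg_strncmp(const char *a, const char *b, size_t n,
                       const char *file, int line)
{
    if (n == 0)
        fprintf(stderr, "%s:%d: warning: strncmp with n == 0 is a no-op "
                        "and always reports a match\n", file, line);
    return strncmp(a, b, n);
}

#define DBG_STRNCMP(a, b, n) dbg_strncmp((a), (b), (n), __FILE__, __LINE__)

int main(void)
{
    size_t attacker_len = 0;             /* e.g. length of an empty response */
    if (DBG_STRNCMP("expected", "", attacker_len) == 0)
        puts("match (spuriously)");
    return 0;
}

In a debug build you might route the project's calls through a wrapper like this; a release build would use plain strncmp.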
I'm unclear on why the vendors are involved. What hoops do you need to jump through to patch the microcode on an Intel processor and why are Intel themselves not able to do this? Have they really got themselves into a situation where the door to their processor is unlocked and they are unable to fix it because they don't have a key?
What if I've bought a system from some random box builder? Who do I go to for a patch?
Yet another reason why NAT is still important and exposing stuff via IPv6 is maybe not so smart!
I hope that Intel and motherboard manufacturers promptly report all affected components and say if/when a fix for all the management vulnerabilities will become available - caution first, then relief.
A
1) Intel employs at least one developer who is a total f**kwit
2) Intel employs no code review process whatsoever for a system that will run code which will be very difficult to alter.
B
1) Intel management were approached by some part of the US Intelligence Community to ensure an advanced persistent threat exists in as many processors as possible on the planet, one that cannot be circumvented by "the bad guys" (as opposed to the real goal of being able to spy on anyone's PC use, anywhere, anywhen, forever).
2) Intel management agrees to do so.
I'll leave others to decide which one sounds more likely.
Step A3 appears to be absent without leave, at least according to Charlie Demerjian at SemiAccurate:
A3: People outside Intel warn people inside Intel, repeatedly and with specifics, over a period of years, that Intel's management technology has world-class vulnerabilities. Intel's response until a few weeks ago:
-- is what some are calling this vuln.
Mattermedia blog on disabling AMT.
So my Lenovo ThinkCentre H430 is not listed as affected, but in a terminal:
XXXX@XXXX-XXXX:~$ lspci|egrep -i 'mei|heci'
00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
No obvious AMT options in BIOS. Have to investigate further, unless a savvy commentard knows whether this particular machine is vulnerable.
Lovely... I guess one can run bloody TAILS or TENS or Kodachi or Whonix, and because this vuln kicks in before the OS boots, then your cheese may be stolen anyway.
My i7-toting HP EliteBook is one such machine, although out of the box AMT, vPro etc. are not enabled.
Accessing the ports listed in the article just gives a message saying there's no active Intel Management Engine (IME) available to do anything. I went a-wandering in the BIOS and found some related options which I've studiously left disabled. Not good, though, that there's a webserver on those ports telling you anything in the first place. Sounds like some router-based port blocking is in order...
Not wishing to downplay the severity, but from what I understand of vPro etc., if your machine (like mine) has discrete graphics, the VNC remote control option is not available. It only works with Intel embedded graphics, which a lot of machines don't have enabled at all. Business users would probably have no need for discrete graphics (Interwebs, Excel etc.) but as a home user, the GPU does come in handy for video editing (although it's not a patch on something like an RX 480).
However, in a corporate environment I can see this creating a veritable sh1tstorm. If you can get physical access to the corporate network, then I shudder to think what sort of nasties could end up doing the rounds.
The nasties are no doubt already doing the rounds. There's no moss growing on these guys and this is a lot of cheddar.
Many many thousands of devices worldwide are vulnerable on their Internet facing ports and who knows how many devices in their LANs. It's going to be ugly.
Agreed.
And don't forget that vulnerabilities are daisychained to get things like VNC type access - AMT is very low level, so could be used to pipe into/out of any hardware, including a video card or USB device (webcam included) using high or low level hardware commands.
... The fruity behemoth never bought into AMT and that stuff is explicitly *not* enabled in the processors in the hardware shipped by them. There are some questions in the Fruity Support forums about when they'd be supporting it (and the answer was a fuzzy 'probably never' or some such).
That said, the question is whether it's possible to switch that on in Bootcamp and then let it continue to run whilst in macOS...
Possibly because some well-meaning twat in the compiler division wrote a non-standard "deprecated" attribute into the string.h header file, and so any attempt to use strcmp() is now rewarded with a compiler warning, whereas using a less-safe-but-more-obscure function compiles cleanly.
Actually, strike that. Almost certainly because of the above.
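Purely speculative, but the theory is easy to illustrate: if a vendor SDK header redeclared strcmp() with a GCC/Clang deprecated attribute like this, every use of it would draw a warning, nudging people towards strncmp() even where a full compare was exactly what they wanted. (This is my sketch of the idea, not anything known to be in Intel's toolchain.)

#include <string.h>

/* Hypothetical vendor header addition: redeclare strcmp with a
 * deprecation message, so callers get warned and reach for strncmp. */
__attribute__((deprecated("use strncmp() instead")))
int strcmp(const char *s1, const char *s2);

int check(const char *a, const char *b)
{
    return strcmp(a, b) == 0;   /* warning: 'strcmp' is deprecated ... */
}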
This looks to be a classic example of where the requirements were probably inadequate. If there had been a proper requirements-based design process in place, and the design had been reviewed before code was cut, it is more than likely that anyone with a half-decent security background would have spotted that some length validation was required.
That requirement could then have been traced all the way through to the code and test artifacts.
General point - those working on security-related projects need to adopt the processes that have been mandated for safety-related projects since year dot.
"the processes that have been mandated for safety-related projects since year dot."
Are you thinking of a particular industry, or standard (or set of standards)?
The industry I'm most familiar with (starting in pre-history days with DEF STAN 00-55) currently considers DO-178 and DO-254 and friends to be at the heart of its design/code/test processes.
They're not bad, as processes go, but when I last looked in detail a couple of years back there seemed to be a move to de-emphasise the detail at the back end of these processes because they were "over engineered" (ie costing the company concerned too much money, taking too long before stuff could be shipped).
The management in question didn't seem to have any real documented justification for doing so, or for doing various other things which diminished the trustworthiness of the end product (e.g. testing a *model* of the desired system's behaviour, rather than the executable code itself as produced by the relevant toolchain). This led to the idea that a change from one processor to another radically different one (e.g. MIPS to Motorola or vice versa) didn't require any additional testing to check for target-dependent errors in the toolchains.
There may be trouble ahead...
That is a problem in the compare routine. If the length of the strings is different it should return a mismatch. Checking if two strings are identical means a byte for byte match. If one string is different in length from the other it is an immediate fail. You don't even need to waste compute cycles to start comparing the byte arrays.
Speed-optimized code would first check that both character arrays are the same length, and only then attempt to compare; on the first mismatch, exit with a fail.
Just, in C you don't know the length of a string until you scan it for the terminator. So, for the sake of optimization, you don't check the lengths, you just scan. And because you don't know which is longer, you may need to pass an upper limit. That's only because C still refuses to admit strings aren't just arrays. The Creators couldn't be wrong... and you shall put every burden on the programmer; the compiler is your Master, and it can't be bothered to manage things for you...
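For illustration, a minimal counted-string sketch of the sort these comments are alluding to (the struct and function names are invented): the length travels with the data, so a mismatch in length fails before any bytes are compared.

#include <stdbool.h>
#include <string.h>

/* Hypothetical counted string: length stored alongside the data. */
struct str {
    size_t len;
    const char *data;
};

static bool str_equal(struct str a, struct str b)
{
    if (a.len != b.len)                 /* different lengths: immediate fail */
        return false;
    return memcmp(a.data, b.data, a.len) == 0;
}

int main(void)
{
    struct str user = { 0, "" };                /* empty "response" */
    struct str expected = { 8, "expected" };
    return str_equal(expected, user);           /* 0: lengths differ, no match */
}

With a type like this, the "compare only the first n bytes" foot-gun doesn't arise unless you explicitly ask for it.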
"That is a problem in the compare routine. If the length of the strings is different it should return a mismatch."
It is not a compare routine. That's the mistake that the programmer made. strcmp() is a compare routine with the semantics you describe.
strncmp() is explicitly a "just compare, at most, the first n characters" routine. To be honest, I can't imagine that this is a common enough requirement to justify inclusion in any kind of standard library, but it's probably a historical accident and we're probably stuck with it now. One could, I suppose, mark it with some compiler extension like __declspec(this_does_NOT_do_what_you_think_it_does) and a stern note in the manual explaining why, but idiots switch off compiler warnings and don't read manuals.