You missed one of the Sophos recommendations: “Make regular backups, and keep them offsite and offline where attackers can’t find them.” Probably going to be tricky in these cloud-based backup times.
A kernel-level Windows driver for old PC motherboards has been abused by criminals to silently disable antivirus protections, and hold files to ransom. Sophos this month reported that an arbitrary read-write flaw in a digitally signed driver for now-deprecated Gigabyte hardware was recently used by ransomware, dubbed …
> Probably going to be tricky in these cloud-based backup times.
Many of those systems at least have some level of "version history" and new versions take a bit of time to upload.
Me and my friends? Shared drives in diverse locations. When we're near each other we swap backups. Wellington could be taken out by a meteor/nuke/quake/storm etc but my data will survive - though I may not :) Several backups will disappear with anything I have local, but something of the data will remain.
Fortunately my music and movies are 'recoverable' fairly easily, my photos (and family videos) are safe in a LAS array (large-area Sneakernet), as is other stuff, and my email is "safely" ensconced in the providers' servers. I hope.
Can Microsoft not revoke the signature for the driver, or would that invalidate all Gigabyte drivers?
Loadable kernel modules were always a security exploit waiting to happen. Why bother enforcing process memory protection if you can load arbitrary code into ring 0 that can modify arbitrary memory? Sure, you have to jump through a few hoops to get there, but in the end you are no more secure than MS-DOS.
Hardware vendors can't be trusted to write secure drivers, and can't be relied upon to update them when exploits are discovered.
The driver was signed with a certificate from a Symantec-owned company, not Microsoft. The driver has been updated by Gigabyte to remove the exploited flaw. Presumably revoking the certificate would stop machines that haven't been updated from working. If the certificate is revoked, the original Sophos article points out that there are plenty of other signed drivers that could be used instead.
>Hardware vendors can't be trusted to write secure drivers, and can't be relied upon to update them when exploits are discovered.
OS vendors (eg. MS) can't be trusted to write secure drivers, and can't be relied upon to update them when exploits are discovered.
Application vendors (eg. Adobe, MS) can't be trusted to write secure drivers, and can't be relied upon to update them when exploits are discovered.
Better stop using these things called computers...
Actually this exploit nicely illustrates another aspect of the security problem - preventing the old insecure stuff out-in-the-wild from executing.
It would seem that code signing, whilst giving confidence in the provenance of a driver, isn't particularly useful when you need to revoke that driver's security clearance. Not saying that revoking execution rights isn't going to be a minefield, just that it doesn't seem to be possible today at the granularity of a single driver version.
> Can Microsoft not revoke the signature for the driver, or would that invalidate all Gigabyte drivers?
It would certainly block existing, otherwise working, drivers.
But there is another option with code signing: when the code was signed, the timestamp itself should include a certificate attesting that it was a valid timestamp at the time of signing. (This, in Windows, appears as a countersignature in the signature details.)
The code signing certificate can thus be invalidated from a given date by the issuing authority.
Code with a signed timestamp from before that point is still valid and passes its check. Code signed after that date, or without a signed timestamp does not.
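The rule described above can be sketched in a few lines. This is a toy model, not real Authenticode verification: the function name and dates are invented for illustration, and a real check would also validate the countersignature chain itself.

```python
from datetime import datetime, timezone

def signature_still_valid(countersign_time, cert_invalidated_from):
    """Toy model of timestamp-aware revocation: a signature carrying a
    trusted countersigned timestamp from before the certificate's
    invalidation date keeps passing; anything signed after that date,
    or with no signed timestamp at all, fails."""
    if countersign_time is None:      # no countersignature present
        return False
    return countersign_time < cert_invalidated_from

# Hypothetical dates: the CA invalidates the cert from mid-2018.
cutoff = datetime(2018, 7, 1, tzinfo=timezone.utc)
old_driver = datetime(2016, 3, 14, tzinfo=timezone.utc)  # signed earlier
new_binary = datetime(2019, 1, 2, tzinfo=timezone.utc)   # signed later

print(signature_still_valid(old_driver, cutoff))  # old driver still loads
print(signature_still_valid(new_binary, cutoff))  # later-signed code fails
print(signature_still_valid(None, cutoff))        # no timestamp: fails
```

The catch for this particular case: the vulnerable Gigabyte driver was countersigned well before any plausible cutoff, so date-based invalidation alone wouldn't stop it loading.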
I'm not condoning what they're doing with this technique but hats off to them for the approach. I'm curious though as to why the digital certificate wasn't revoked as soon as a replacement driver was released without the vulnerability. I'm not sure if the limitation is that Microsoft doesn't allow you to revoke on a per-binary basis and you need to go through the hassle of getting a new certificate for each version or if Gigabyte were just incompetent.
This is a perfect demonstration of the issues associated with drivers - just because they are signed doesn't mean that they are safe.
Confirms the sense in Apple's move to ditch kexts (kernel extensions) in future macOS builds and replace them with dexts (driver extensions built with DriverKit) that run exclusively in user space. So when issues are identified in a driver, they don't provide a gateway to kernel space.
"Plus it wouldn't stop flaws in the kernel itself being used to escalate privileges."
It does reduce the attack surface, though.
"Plus how do you deal with latency-sensitive stuff without context thrashing?"
It depends how latency sensitive it is. VMs seem to manage. But maybe some drivers will have to be partially or wholly in the kernel. That's still a gain if most drivers are user space.
>How about high-throughput networking which requires very low latency to avoid choking?
Simple: follow what we did in the '80s, when we had the network stack running on the network adaptor. Suspect it could now be run within its own VM/container on its own thread/physical CPU with similar performance gains, without the costs (and security risks) of having an intelligent network adaptor...
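The dedicated-CPU part of that idea is easy to sketch on Linux, where a process (say, a user-space network-stack worker) can be pinned to one core so it never contends with the rest of the system. A minimal sketch, assuming a Linux host; `sched_setaffinity` is not available on all platforms, hence the guard:

```python
import os

# Linux-only: pin the current process (imagine it is the user-space
# network-stack worker) to CPU 0, leaving the other cores alone.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})       # pid 0 means "this process"
    print(os.sched_getaffinity(0))     # the set of CPUs we may run on
```

Whether this recovers "similar performance" to an offload NIC is the speculative part; the pinning mechanism itself is standard.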
Err, no, it confirms the commonsense rationale of the decades-old four-ring OS security model (supported as far back as the 286). Can't think of a modern mass-market OS - including Windows 10 - that supports more than a two-ring model; conclusion: all mass-market OSes are inherently insecure...
I don't think anybody wants to revisit the horrors of 286 protected mode. But the four privilege levels are still present, even in 64 bit chips, and could be used.
In reality, however, it's hard to grade security that way. (Being "a little bit kernel" is like being "a little bit pregnant".) A capability-based model is a far better bet - a driver runs in user space but with the permissions it needs to do special things. Hardware support for that comes in the form of the I/O port bitmap, which allows userspace processes to be granted access to specific ports. (See ioperm(2).) But ports are only part of the story, and giving a process access to ports may grant it more power than it needs.
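The capability idea above can be illustrated with a toy model: a driver process holds an object granting access to an explicit set of ports and nothing else, roughly what ioperm(2) does with the x86 I/O permission bitmap. The class and port choices here are invented for illustration:

```python
class PortCapability:
    """Hypothetical capability: grants access to an explicit set of
    I/O ports, and nothing more. A user-space driver would hold one of
    these instead of blanket ring-0 privileges."""
    def __init__(self, ports):
        self._ports = frozenset(ports)

    def check(self, port):
        return port in self._ports

# A user-space "serial driver" granted only the COM1 range 0x3F8-0x3FF.
serial_cap = PortCapability(range(0x3F8, 0x400))
print(serial_cap.check(0x3F8))  # its own port: allowed
print(serial_cap.check(0x60))   # keyboard controller: not granted
```

The point of the model is the last line: a compromised driver can't reach hardware it was never granted, which is exactly what ring-based schemes fail to express.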
Even at my level, it is clear that any organization that needs certs needs to be issuing its own, and in large quantities.
What I would do:
Copy the cert signed by the outside CA onto write-once physical media. One copy locked in a file cabinet in the CTO's office, one or two backups at different sites with strong physical security. That cert is used to create a "master" cert whose access is limited and logged. Each department that needs to create certs is made a CA, and that CA cert is signed by the "master". The department certs again have limited and logged access. The departments aggressively create new certs.
An ops department will have one per virtual server in their cloud--they may create a CA inside their cloud subnets to simplify management of those certs.
An apps department will have one per blog--they may create a CA per product line to simplify management of those certs.
And yes, there are multiple calendar alerts for when the corporate cert is expiring.
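The hierarchy described above (master cert signs department CAs, departments issue leaf certs aggressively) can be modelled in a few lines. This is a deliberately fake scheme for illustration only - the "signature" is just a hash of a toy secret plus the payload, standing in for real X.509 signing, and all the names are invented:

```python
import hashlib

def sign(secret, payload):
    # Stand-in for a real signature: hash of signer secret + payload.
    return hashlib.sha256(secret + payload.encode()).hexdigest()

class CA:
    def __init__(self, name):
        self.name = name
        self.secret = hashlib.sha256(name.encode()).digest()  # toy key

    def issue(self, subject):
        return {"subject": subject,
                "issuer": self.name,
                "sig": sign(self.secret, subject)}

def verify(cert, issuer):
    return cert["sig"] == sign(issuer.secret, cert["subject"])

# master -> department CA -> per-server leaf, as described above.
master = CA("master")
ops = CA("ops")
ops_cert = master.issue("ops")          # department CA cert, signed by master
leaf = ops.issue("vm-042.internal")     # one cert per virtual server

print(verify(ops_cert, master))   # department chains up to the master
print(verify(leaf, ops))          # leaf chains up to the department
```

In practice the same shape falls out of `openssl ca` or a tool like step-ca; the value of the model is just showing that verification walks the chain one issuer at a time.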
I think its name was Bronski, or something similar.
For a brief time this thing seemed "airborne" as it was infecting anything in the office. Even air-gapped machines quickly fell.
Turned out it was something that got into the driver area of USB sticks (the little bit of firmware that tells the OS how to handle the stick). Undetectable to scans of the sticks, and we had a policy that sticks were formatted and reloaded only on certain machines, but somewhere someone cross-contaminated a group of machines.
I did get to set up a new Linux machine just for managing sticks, and we made it policy that using sticks outside of a specific pool was grounds for dismissal. Couldn't get management to spring for a big bag of sticks and use a brand-new one for every transfer. The backup policy I pushed for (lots of copies, kept offline, read-only, kept far away) was somewhat vindicated :)
I can't recall if this thing hit anything later than Vista, maybe it was only XP. I'm not sure that we had any 7 machines around the office at the time. Was a bit worrying when air-gapped machines that only got a "clean" USB stick (formatted and re-scanned) in them got infected.
"when the ransomware infects a computer – either by some other exploit or by tricking a victim into running it – and loads the driver, the operating system and antivirus packages will allow it because the driver appears legit."
If you're entirely relying on client side anti-malware, you're wide open anyway. There's plenty of evidence of exploitable holes in it, quite apart from any side issues such as this. You need to block malware before it reaches the client, not just before it reaches the 'desktop'. The gateway, or even better, the cloud, is the place to do it first. By all means also on the client, but not on the client alone.