"Signed using the author's private encryption key"
from a supposedly compromised machine - not sure that holds too much water any more, does it?
Multiple servers used to maintain and distribute the Linux operating system were infected with malware that gained root access, modified system software, and logged passwords and transactions of the people who used them, the official Linux Kernel Organization has confirmed. The infection occurred no later than August 12 and …
"according to an email John 'Warthod9' Hawley, the chief administrator of kernel.org, sent to developers on Monday. It said a trojan was found on the personal machine of kernel developer H Peter Anvin"
It wasn't his machine that was compromised.
One thing I always wondered: if a source repository is hacked and its contents modified, what is there to stop the attackers modifying the list of hashes too? What with all the (in)security issues with websites, it doesn't seem too far-fetched for such an eventuality to occur.
The hashes get distributed quite widely on release, so if you missed the very short window between release and other servers making a copy of the hash, you would have to hack thousands of disparate systems to change all the copies of the hash data. I assume this is intentional.
The overwhelming majority of production Linux installations get their software (source and binary) from one of the major distributions' repositories (e.g. Red Hat, Debian, SuSE). These and the kernel developers use git, which was not affected. The source repositories on the compromised machines are mostly there for publication, study and use by experimenters.
Other informed reports suggest that the server misbehaviour was noticed and the investigation started quite quickly, but it took a couple of weeks to trace the problem --
www.h-online.com/security/news/item/Security-breach-at-kernel-org-1334642.html
Still pretty embarrassing for the server admins. But kudos to them for being so open about it. How long does it take the large commercial sites to own up to having lost their customers' details following a security breach?
Short of being able to replace a commit with something with the same SHA-1 hash there really isn't a way to hack the sources on kernel.org. Even that probably wouldn't help because subsequent commits for the same code would no longer apply. There isn't a period in which you can modify the sources because by the time the commit appears on kernel.org the hash has already been created (I'm pretty sure that people using kernel.org push commits there rather than committing changes directly).
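To see what that hash actually covers, you can dump the raw commit object from any clone (HEAD here is just whatever commit you happen to have checked out):

    # Print the raw commit object; its SHA-1 is computed over this whole
    # blob, which names the hash of the source tree *and* the hash of the
    # parent commit, so changing anything earlier changes every later hash.
    git cat-file commit HEAD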
Yeah, I was thinking the exact same thing. I mean, surely the kernel dev team can't still be of the opinion that it's "not Windows" so we don't need IPS/tripwire/some other detection service? Really? It's like Foundry/Cisco/Juniper not having firewalls or Symantec not running AV (oh wait, scratch that one). But come on? Really?
I mean IIRC this isn't the first time this has happened.
But still: 17 /days/ ? Yes, this is going to sound like a sneer but honestly I was expecting much better intrusion detection being put to work here. Sure; "nothing was affected". Not trying to sound like a sore SOB but would you guys have shared such info when something /had/ changed?
With all the current (high-stakes yet still fragile) Linux commercial interests going on?
I have some /serious/ doubts there.
The average time between a vulnerability becoming known in Windows and a patch being issued is over two months. The latest CA business hurdle showed that a certificate audit is done no more often than every few months.
As for the story, that's why I set long, completely random passwords for user accounts and actually use encrypted (using long pass-phrases) private keys to login, then another different pass-phrase for root access.
It is quite weird that they didn't have somewhat tighter security there (tripwire, etc.); on the other hand, you can't protect yourself from a user who's too lazy to encrypt the private SSH key he uses for remote root access...
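For what it's worth, a rough sketch of the kind of setup I mean (the key size and filenames are just examples, not anything kernel.org mandates):

    # Generate a 4096-bit RSA keypair; ssh-keygen prompts for a passphrase
    # and stores the private key encrypted with it.
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_kernel

    # Add or change the passphrase on an existing (possibly unencrypted) key.
    ssh-keygen -p -f ~/.ssh/id_rsa_kernel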
It's apples and oranges. The time between a vulnerability being discovered and patched isn't the same as the time between a server being exploited and that exploit spotted. So what's your point?
Still, thanks for the info about your password policy. A different pass phrase for root access? No shit! No wonder linux is so secure!!! I wonder if you can do that on windoze....
Windows, Mac fans should ask themselves ... if the same thing happened to the kernel repository inside the corporate HQ of your favorite company, would you ever get to hear about it?
More likely, the company would keep the exploit a secret. For similar reasons, you don't know what procedures are in place to detect such an exploit and recover from it. What you do know is that the number of developers eyeballing the code is way smaller. You also know that there are documented cases of the company being informed about a security-related issue, and choosing to do nothing about it for months or even years until the issue leads to real-world exploitation with malicious intent.
Still feeling smug, looking through your security-by-obscurity glasses?
BTW the greater risk by far is not corruption of the code base by penetrating the repository, but corruption of the code base by corrupting a contributor: submission of a well-concealed backdoor in a legitimate patch. Again, something that could happen to anyone, but probably easier to do to a closed-source project with relatively few people having access to the code than to an open-source project with many times more developers.
Your response seems to be "Yes, but, look over there, big corporation...."
This isn't a mac or a windows article, it's about security in the open source community. As you point out, anyone could infiltrate these open projects and corrupt them.
The bit where you lose me is where you claim it's easier to infiltrate a small corporate team than a big open one. Can you explain that? Surely getting to work on a project at MS or Apple is trickier than contributing to an open source project?
And as for calling other people smug, that's exactly how your post sounded to me...
"The bit where you lose me is where you claim it's easier to infiltrate a small corporate team than a big open one. Can you explain that? Surely getting to work on a project at MS or Apple is trickier than contributing to an open source project?"
Go and write a kernel module and ask for it to be merged into the main branch. While you wait, apply for a job at Microsoft. Then you can go through both interview processes at the same time and see which is easier/quicker.
Seriously - "It said a trojan was found on the personal machine of kernel developer H Peter Anvin and later on the kernel.org servers known as Hera and Odin1"
So a Trojan infection - was this a Linux machine or a Windows PC?
If Linux, then the Trojan would have had to trick a (presumably) tech-savvy user into installing it and giving it appropriate privileges. If Windows, then the same would apply (and presumably said tech-savvy user would surely have had up-to-date AV / anti-malware programs running).
The real question I suppose is how it happened and not the why
Seriously, not having a go or doing my-OS / your-OS posturing, just wondering how he got the Trojan
(may already be in the article, going to have a re-read now)
... one would think that he must have installed a legit piece of software that had already been compromised. It is to be hoped that he is not installing trivial software on his dev machine, so somewhere further down the food chain some other significant server with proper security oversight has been compromised. How far down does this go? Was this server actually targeted or is this just the first in a random chain of machines to notice the trojan?
""" One thing I always wondered was if a source repository is hacked and its contents modified, what is there to stop them modifying the list of hashes too? """"
Nothing. But that won't stop it from being detected.
Kernel.org folks are using 'git' for source control management. It's a completely distributed system. Each person that uses git to fetch source code from kernel.org downloads the full repository - checksums and all.
So the attackers cannot go back and change the history of the source code without being detected. Not unless they manage to hack every person that has downloaded the source code in the past... which is hundreds of thousands of people.
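Anyone who wants to check their own copy can do so; this recomputes the checksum of every object in the clone and walks the whole history:

    # Verify the SHA-1 of every object and the connectivity of the history;
    # a tampered or corrupted object is reported as invalid or missing.
    git fsck --full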
Their target wouldn't be the source code. Their target would be to gain access to the developer's machines by monitoring their activity and recording their passwords.
So if they do anything like:
* Use the same passwords for multiple systems
* Use 'scp' or 'sftp' to copy data from the server to their workstation (typing their passwords in the process) or to another machine - that is, if they 'push' the data from an SSH session on the compromised system instead of 'pulling' it (see the sketch after this list).
* Ssh to the compromised machine then from that session ssh to others...
And things like that. Those are very common bad habits among SSH users.
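To make the push-vs-pull point concrete, a rough sketch (the hostnames are made up):

    # BAD: logged in to the possibly-compromised server and pushing the
    # file out, so your workstation credentials get typed where a
    # keylogger can see them:
    #   hera$ scp data.tar.gz me@my-workstation:
    #
    # BETTER: stay on your own machine and pull the file across, so the
    # only place you ever type a passphrase is your local keyboard:
    scp me@hera.example.org:data.tar.gz .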
Once the developer's machine is compromised, the attacker gains access to signing keys and email and to that developer's private git branch (which nobody else pulls from). From there they can intercept and modify patches submitted to mailing lists, inject vulnerabilities into the developer's source code and things of that nature.
THAT is how you compromise the Linux source code. Piggy back on legit patches and hope people don't audit them too closely... which given the history of Linux development is quite likely.
One particularly irritating thing is this statement:
"""“It's sort of surprising,” said Jon Oberheide, one of the Linux security researchers briefed on the breach. “If this was a very sophisticated attack, it's very unlikely that the attackers would use an off-the-shelf rootkit like Phalanx. Normally if you were to target a high-value target you would potentially use something that's more more tailored to your specific target, something that's not going to be flagged or potentially detected.”"""
Hey, fuckwit. Use your brain.
Why the hell would they use custom software to hack kernel.org when:
A) Off the shelf open source software works well enough (why would they want to make it harder?)
and
B) kernel.org is not the main target... it's a proxy to gain access to developers' vulnerable systems.
THEN, when they gain access to the vulnerable developer systems, they will use their secret techniques to consolidate control over those systems in an undetected manner.
Moron developers, who think they know much more than they really do about security, will just download some shit 'root kit detection' software and say:
"NO Phalanx here!! The shit root kit detector says so. So even though I used the same shit password everywhere, and I ssh'd from the compromised systems back to my workstation and other people's computers... I am ALL SAFE. I now can stop paying attention!!! Yay!"
Then once people 'resecure' kernel.org it will just get hacked again, and again, and again. This time using much stealthier techniques.
Wipe the fucking systems.
Don't let people ssh to them anymore.
Don't let people have shells on them.
Don't let people use their ssh keys with them.
Don't let people choose shit passwords.
etc etc.
The only way to "secure" the system is to eliminate the chances that some tard open source developer is too lazy to use proper security on their machines.
So irritating.
Are you sure the devs don't use SSH public/private keypairs, as opposed to passwords? If the latter, your analysis makes some sense. If the former, you are way off the mark - if the server was using public keys to authenticate its users, then presumably all that is compromised on the server side are the public keys, which the devs are happy to publish anyway.
from ssh_config(5):

    PubkeyAuthentication
        Specifies whether to try public key authentication. The argument to
        this keyword must be “yes” or “no”. The default is “yes”. This
        option applies to protocol version 2 only.
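The server-side counterparts live in sshd_config(5), not ssh_config; purely as an illustrative snippet (not what kernel.org actually ran), a host that only trusts keys would carry something like:

    # /etc/ssh/sshd_config - accept public keys only, never passwords,
    # and refuse direct root logins.
    PubkeyAuthentication yes
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PermitRootLogin no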
The security of a distributed development effort like the Linux kernel is going to be only as strong as the weakest link in the chain. With hundreds of contributing individuals out there on the internet, it's always going to be difficult to ensure that they're each as careful / prepared / patched / etc. as everyone else. Humans as individuals aren't very good at being so consistently self-disciplined.
Whereas in an internet-isolated development environment (in which I imagine the likes of Windows are developed) there's a BOFH, rules, corporate oversight, contracts of employment, and no direct internet connection. To attack such a setup means getting a suitably motivated person in on the inside. That's much harder to achieve I should think. It's certainly less convenient for the attacker.
Perhaps the OSS community needs to be a bit more open-minded? I don't know for sure, but I suspect that all the main servers holding the Linux source are running Linux. A homogeneous collection of servers is much easier to compromise on a large scale than a heterogeneous set. If kernel.org used something else (FreeBSD? Windows even?) as well as Linux to host the source, then an attacker's life would be much harder. With reference to the canine world, mongrels are much more resilient than pure-breds. It won't stop some individual developer's personal machine being hacked and leaking passwords, but it does complicate the matter of how to exploit that to attack the servers. Microsoft famously turned to Linux servers when a serious problem emerged with Windows a few years back. Perhaps it's time to return the compliment?
OK, it's not good PR to say that you don't totally trust your own OS, but then we're clearly past that now aren't we? Doesn't this hack underline that? Wouldn't it be quite mature to acknowledge that nothing, not even Linux, is perfect? Surely it's better to provide a more robust offering than maybe being a little bit fanbois-ish about the perfection of one's own creation?
As for 17 days, isn't that a mighty long time to notice that something's wrong on such an important set of servers? Was everyone away on holiday?
No online system is hack proof. Anyone who claims otherwise is lying. Therefore you practice defence in depth. The first defence is to identify threats (which could be inside or outside) and do your best to meet them. You firewall your machines, you shut down unnecessary services, you compartmentalise your data (e.g. web server and database are separate machines), log everything, produce checksums and signatures to ensure the integrity of your data and put in triggers so you get notified if something happens that shouldn't happen.
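A minimal sketch of the checksums-and-triggers idea (the paths and mail address are made up; real deployments would use something like tripwire or AIDE, and keep the manifest off the box being checked):

    # Build a manifest of file hashes once, from a known-good state.
    find /srv/www -type f -print0 | xargs -0 sha256sum > /var/lib/manifest.sha256

    # Later (e.g. from cron): re-check and mail the admin if anything changed.
    sha256sum --quiet -c /var/lib/manifest.sha256 ||
        mail -s "integrity check failed" admin@example.org < /dev/null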
I think the Linux kernel.org is fortunate in many ways. It must be one of the most widely mirrored websites, which should make it easy to test whether files have been tampered with. Also, git has explicitly enabled strong authentication from the beginning precisely so it can detect tampering or even just file corruption. Checkins are checksummed and may optionally be PGP signed too. Additionally, there are so many clones that tampering with one is going to do nobody any good.
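For example, if you have the release manager's public key in your GPG keyring, a signed release tag can be checked from any clone (the tag name here is just an example):

    # Verify the GPG signature embedded in an annotated tag.
    git tag -v v3.0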
So as far as kernel.org is concerned, it's bad they were compromised but ultimately there are so many safeguards there (precisely for this kind of eventuality) that there is no long term damage.
Yet another null pointer deref issue? I am thoroughly sick of dealing with these. Isn't it about time mmap() at zero was globally disabled and anything that relies on this broken, insecure behaviour reworked so that it doesn't? Stopping the thing from getting root so it can install itself should be the first step, not piddling about blaming OpenSSH for insecure storage.
BTW, I did get a heads-up on this last week from a colleague in academia who got owned by a similar beastie. Looks like the same issue.
In the linked article on Phalanx and evolutions of the same: "The attacks appear to use stolen SSH keys to take hold of a targeted machine and then gain root access by exploiting weaknesses in the kernel." Which is, coincidentally, what my peer told me had happened on his network recently before this story broke.
It was, of course, communicated as a vuln in OpenSSH. IMHO, it's nothing of the sort. OpenSSH is doing what it was designed to do, allow ssh access to anyone with the correct credentials that can reach whatever port it is bound to with the correct client and protocol. Once you're in as a local user, you might as well be sat at a serial console with those same credentials (IINYCAM). Your problem is credential and privilege management and those are OS functions. Everyone is so eager to shift the blame across to Theo and the "masturbating monkeys" of BSD that they're conveniently ignoring this little fact.
So just how does Phalanx II get root, hmm? In the recent past, local privilege escalation attacks have been predominantly via null pointer dereferencing errors and the OS's impotence where these happen. I *am* thoroughly sick of those, as are any number of people who have been bitten by them, so much so that the security.bsd.map_at_zero sysctl was created in double-quick time and my standard CFLAGS set now contains -fno-delete-null-pointer-checks. In fact, you'd better pray that it is because, if it isn't a null pointer deref, you've got some serious crap on your hands; in that case nobody yet knows what the hell the mechanism is or where it is and it has been proven to be exploitable from an unprivileged user's shell, which means this could bite people from many, many angles in the not too distant future. Either that or some numpty has been careless with bloody sudo, in which case they got what they so richly deserved.
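On the Linux side the equivalent knob is vm.mmap_min_addr, which has been there for a while; roughly (exact values and how you persist them vary by distro):

    # Refuse to let unprivileged processes map the bottom of the address
    # space, so a kernel NULL-pointer dereference can't be steered into
    # attacker-controlled code at address 0.
    sysctl vm.mmap_min_addr              # check the current value
    sysctl -w vm.mmap_min_addr=65536     # tighten it; persist via /etc/sysctl.conf

    # The FreeBSD sysctl mentioned above (0 = disallow mapping at zero):
    sysctl security.bsd.map_at_zero=0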
It's either a failure on the part of the admin to enforce standard operational policy or a weakness in the OS. I'd much rather take the conclusions of a group of security experts than some random commentard who hasn't realised yet that all software sucks and it's only degree of suckage that separates it all.
So, uninformed or spreading FUD? You decide, dear reader. I've neither the time nor the patience for this ad-hominem shit. There's a problem. Arse covering isn't going to help. Whose fault it is matters very little in the grand scheme of things to anyone but a PHB. Looking in the right places for vulnerabilities, however, does matter. Greatly.
>> So the attackers cannot go back and change the history of the source code without being
>>detected. Not unless they manage to hack every person that has downloaded the source code in >>the past... which is hundreds of thousands of people.
OK ... so what if, let's say.... the "hundreds of thousands of people" use crontab to automatically refresh their code sources.......
It's not exactly a good idea to keep your hashes in the same place as the code you're trying to protect by their use !
Normally you'd pull in commits on the branches you're interested in when you need them, rather than pulling in all branches in a cron job. Still, if you did have such an automated process and someone messed with the source repository the pull would fail.
An attacker could however add commits at the tip of any branch and those would get through.
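If you really must refresh from cron, you can at least make the job verify what it fetched and refuse anything that rewrites history; a rough sketch (the paths are placeholders, and note it won't catch malicious commits added at the tip through the normal process):

    #!/bin/sh
    # Hypothetical nightly refresh with basic integrity checks.
    cd /srv/mirror/linux || exit 1
    git fetch origin     || exit 1
    git fsck --full      || exit 1          # recheck every object's hash
    git merge --ff-only origin/master       # bail out if history was rewritten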
It's been shown that two different pieces of data can be constructed with the same MD5 hash (though I think it is still not possible to do this and make the changed source file look like reasonable code - you need to include a lot of random-looking gibberish).
Some weaknesses have been detected in SHA1 - but there are no known practical attacks. So your guess that "something has been submarined into the Linux code base, and the respective files have been tweaked to ensure that the SHA1 hash matches" is almost certainly wrong (it's more likely that I'll win the lottery by finding a winning ticket lying in the street on my way home).
If/When there is a credible attack on SHA-1, git can be fixed to use a new secure hash (at the cost of re-building any repository using the new hash).