Microsoft contributing security fixes to Samba
The world has truly gone mad.
Linux admins were sent scrambling to patch their boxes on Monday after a critical vulnerability was revealed in Samba, the open source Linux-and-Windows-compatibility software. The bug, which has been designated CVE-2015-0240, lies in the smbd file server daemon. Samba versions 3.5.0 through 4.2.0rc4 are affected, the Samba …
Also, the CVE entry was filed in November, so the patch took more than 90 days... probably MS understood that compromised Samba servers are a risk for Windows users as well.
While Google prefers to act like a bully, without caring what could happen to its users as well - or probably, since they are more products than users, it doesn't care at all.
That's why maybe actually paying for software is not that bad... "free" software may be paid for in several different alternative ways... including some you may not like.
I don't fully support Google's 90-day limit, but I understand there needs to be a limit, otherwise some vendors will just prevaricate until the end of time.
Perhaps Google needs to enhance its rules - give the vendor 90 days to announce a patch release (with the detail redacted), and the release must occur within the following 30 days (which should catch most vendors' scheduled release cycles). Rushing out patches is as dangerous as delaying them longer than necessary.
You can't really know how long a patch takes to be developed, or how many other patches are already in the pipeline. Nor did anybody vote Google - a private company - the "Security RoboCop" (TM) of the world.
And I'm sure Google is not interested at all in securing other products - just in putting them in a bad light as much as it can. Otherwise it would collaborate on deciding when and how to disclose a patch - not bully its competitors into adhering to its unilateral policy.
It's because when you connect to a Samba server on GNU/Linux it forks a new process under the credentials that you're accessing with, which is sensible enough. But only root can fork processes as another user so the Samba daemon itself has to run as root.
I guess it's an artefact of grafting support for the MS protocols onto GNU/Linux rather than having a true remote login. You need to be able to act as different users without an actual direct login as them... so root it is.
"It's because when you connect to a Samba server on GNU/Linux it forks a new process under the credentials that you're accessing with, which is sensible enough."
No that's not sensible - both for security and for resources. It should start a new thread and the thread should impersonate the user. This is how it is done in Windows.
"But only root can fork processes as another user so the Samba daemon itself has to run as root."
Even Windows runs network processes with minimum privileges. This is a big hole in the Linux ACL model - in Windows you can allocate just the rights required for a process via Kerberos constrained delegation. In Linux you have to start as root to do anything privileged. That really sucks and should be fixed ASAP.
>>"No that's not sensible - both for security and for resources. It should start a new thread and the thread should impersonate the user. This is how it is done in Windows."
That would still require the Samba daemon to run as root. Within the constraints of the UNIX security model I'd be interested to hear of any approach that could work without this. If you want to argue that the Windows security model (Vista onwards) is better than the UNIX model, I agree with you. But I don't see a fault here on the part of Samba's design.
Also, I'm not sure the resources criticism holds up. Why do you think it makes any relevant difference?
> I'd be interested to hear of any approach that could work without this.
Well, you could have an "impersonate user" privilege that lets you make an O/S call to become that user requiring a password to authenticate with, which would mean the process itself wouldn't have all the other stuff that root users can play with and wouldn't even be able to impersonate any user without a password.
The idea of "root" on Unix systems is widely considered to be a poor privilege model, it's just historically important. VMS had a "BYPASS" privilege, which was similar but almost no processes ever had it.
"I guess it's an artefact of grafting support for the MS protocols onto GNU/Linux rather than having a true remote login."
A remote login would also require a root process in order to be able to fork a process under the eventual user ID. e.g.
ps -ef|grep getty
root 3849 1 0 08:55 tty1 00:00:00 /sbin/getty 38400 tty1
etc. It's a consequence of the Unix security model.
Unless I've missed something here, the steps of forking another process and performing a setuid/seteuid are still separate calls. It's not the fork() that is the problem, it is the fact that in order to perform the setuid/seteuid() the process changing its credentials must be running as root.
So you have a root-owned samba process that forks another root-owned samba process, which then changes its credentials to the user's.
This is the way it works for all traditional UNIX processes that first acquire a user's credentials - things like login, sshd, telnetd, ftpd, etc. As people point out, it is a fundamental feature/flaw depending on how you look at it.
This is changed significantly if SELinux is turned on (or another RBAC system on other UNIXes), whereby you need to have the correct roles assigned to a process for it to be able to perform actions, which includes syscalls. Thus, I think that Linux already has a more controllable authorisation system, it's just not turned on in most systems, as it's foreign to the way that most Linux/UNIX sysadmins think.
Even though I understand the concept, I'm one of the sysadmins who've never set up an RBAC/SELinux system in anger, so I still have to go through the learning curve for this.
Quite a few things - auth, setting the actual access perms to the user accessing the share, etc. The actual file access and serving runs with the user perms most of the time though. So it is still better than having most of it in-kernel at elevated privs as in that... other... OS.
Hu-huh.
Last year a colleague had a Windows 2012 Storage Server box (supplied by HP I think) in his lab that we spent ages battling with. Our first impressions were that it was reasonable (it worked well for storing VMware guests). Then we tried net-booting a 'nix box off it and the fun began.
We really tried to like it but soon gave up fighting it and loaded up FreeNAS on it instead. If there was any performance difference it certainly wasn't noticeable and we had it running reliably in less than an hour.
Did anyone ever claim that Open Source was completely bug free? Is the claim that this bug would not have existed if this were closed source? That would obviously be a ridiculous claim, so what are you trying to say? As far as I can tell you're just creating a strawman to attack, as no-one here has claimed such a thing.
And if you're trying to argue that the ability to review the Source Code doesn't help, that's plainly not true, as otherwise Microsoft would not have been able to review the code, find this problem and submit a patch. Unless in your hypothetical universe of closed source Linux they were sending copies of their source to their chief competitor whilst hiding it from the public... "huh?"
The real unarguable benefit of Open Source is not that it will always have fewer vulnerabilities than closed source software, but that it protects against deliberate subversion. It may or may not have accidental flaws but it's very hard to put a statement in there saying "if blnNSA == True..." And that's important.
The other critical thing is that in most cases, open source software is also Libre software, which means people can build on it. I've been involved in Libre Software for over fifteen years and I don't recall us ever arguing our code would be immaculate. Instead we argued "Free as in speech", "Usually free as in beer", but never, that I can recall, "Free as in free of all bugs".
Yes, there is an advantage to the "thousand eyes" principle for security - you're posting on a story about a patch that wouldn't have existed without it - but you're basically strawmanning against something no-one here has claimed.
"So thats why microsoft and adobe products have so few patches and leaks"
Microsoft's OSs had fewer holes than most others inc. Linux last year:
http://betanews.com/2015/02/22/os-x-ios-and-linux-have-more-vulnerabilities-than-windows/
@AC
>Microsoft's OSs had fewer holes than most others inc. Linux last year:
>http://betanews.com/2015/02/22/os-x-ios-and-linux-have-more-vulnerabilities-than-windows/
It all depends on whether you count Internet Explorer in the mix, given that it is "built into" Windows. Even if you take the GNU tools, Firefox and OpenOffice together, you have fewer holes than Windows+IE; then you can count MS Office, and you are well out in the blue.
"It all depends if you count Internet Explorer in the mix"
It's part of the OS so it's already counted in the Windows OS figures. The separate IE vulnerability numbers quoted are across all versions of IE so include a lot of duplications.
"The real unarguable benefit of Open Source is not that it will always have fewer vulnerabilities than closed source software, but that it protects against deliberate subversion. It may or may not have accidental flaws but it's very hard to put a statement in there saying "if blnNSA == True..."
If that's what you want to believe, you might want to read say:
http://bsd.slashdot.org/story/10/12/15/004235/fbi-alleged-to-have-backdoored-openbsds-ipsec-stack
and
http://en.wikipedia.org/wiki/Dual_EC_DRBG
>>"If that's what you want to believe, you might want to read say:"
And if you think those contradict my post, you may want to read what I had to say: "it's very hard...".
In Closed Source code, you have to compromise the vendor and that is job done - yes, it's possible that outside parties might find evidence of backdoors from decompiling, but it's difficult and time-consuming and, after all, we're talking about the ease of getting backdoors in there, not the relative merits of how hard they are to find (which Open Source also wins, btw). Whereas with Open Source, you have to camouflage your backdoor well enough to pass inspection by some very skilled people. Seriously - read your own link on the Dual Elliptic Curve Deterministic Random Bit Generator exploit and try to tell us again that this isn't far, far, far harder to pull off than a few IF statements.
It's hard to hide "if NSA==True" in code, but not impossible. You need a C compiler for that, right? So you'd better check your C compiler sources. They look fine, so you compile your C compiler. With a C compiler binary. What could possibly go wrong?
>>"You need a C compiler for that, right? So you better check your C compiler sources. They look fine, so you compile your C compiler. With a C compiler binary. What could possibly go wrong?"
Not this again. Yes, there can be exploits hidden in a compiler but again, you seem to be responding to my statement that it is very hard to hide such backdoors in Open Source software with examples of things that are (surprise!) very hard to pull off. You need a compiler from somewhere to get started on the process, even if you're then compiling your own compiler afterwards. So where does it come from - well, somewhere reputable. You can check the hash of the file. The hash of this file will be the same as the hash of the file for that same compiler in a lot of other places. You think someone wouldn't notice that a gcc binary was different on one set of servers to another, even though it was supposed to be the same? Of course that would be noticed. So now you're talking about having sneaked your backdoor code into all the places that distribute those binaries. Places that compile them independently from source!
Seriously, we are talking Moon Landing levels of Conspiracy to pull this off and to keep it hidden. You can pull it off maybe for very targeted attacks (still hard as any serious user is using an enterprise distribution and differences would stand out), but that does nothing to contradict my point about it being very hard to hide backdoors in Open Source software. Your link, btw, is to a proof of concept. Good luck actually getting that out there into general Open Source that people had on their computers. In contrast to proprietary where you only have to compromise the vendor.
I don't know why some people are so determined to turn everything into a My Team better than Your Team fight. In any two systems that are different, there are going to be advantages and disadvantages, otherwise they would not be different. It does no good to deny an advantage or disadvantage because it's not to one's liking. It doesn't mean one is utterly better than another in either direction, it's just called recognizing not everything is five-year-old simple.
"Don't go blaming Microsoft for vulnerabilities in its protocols, either. This is strictly a bug in Samba's open source implementation of the stack..."
Didn't the Samba team pay MS £10,000 a few years back to get MS's SMB code so they could get it to work better? Wouldn't surprise me if this comes from a few routines they copied over. MS found this bug ages ago, patched their own systems, sat on it for a few months, then pretended they found it in Samba code...
"Wouldn't surprise me if this comes from a few routines they copied over"
No Microsoft coder would ever normally write something so inefficient that it started a new process per user. You would always use a new thread or fibre. And Windows 'network' processes never normally run as admin / root. It's only the security and architecture limitations of Linux that requires this sort of kludge.
There is still a *huge* difference in a thread and a process, even in Linux: threads -> same address space, processes -> different address spaces, and need of IPC to make them communicate.
If tasks are separate enough, then only the startup overhead is a matter. But when tasks need to communicate and coordinate, that overhead may start to matter.
"If tasks are separate enough, then only the startup overhead is a matter."
And the shutdown overhead. Processes are more heavyweight than threads, and have a higher startup and shutdown cost. Forking costs more than pthread_creating because of copying tables and creating COW mappings for memory. Interprocess communication (IPC) is also harder and slower than interthread communication.
>There is very little difference between a process and a thread in Linux
>- generally it does not suffer the penalty overhead that Windows
>does of starting a new process
Which would be why no code copied from Windows would ever include starting a new process like this?
No, this was about the EU/Microsoft case - the important part being the workgroup protocols. Microsoft was forced to open up its protocols and pay a few $. The Samba team was the only "team" that did not give up.
An all-American case except for Opera. Sun gave up, or was given something, halfway through.
The case as a .pdf you find at:
http://ec.europa.eu/competition/antitrust/cases/dec_docs/37792/37792_4177_1.pdf
(Case COMP/C-3/37.792 Microsoft)
It's a very well written text - a trip about ten years down memory lane.
Too lazy to find out about the 10,000 you mention, perhaps MS was given the right to charge something for the documentation. Ask Andrew Tridgell.
Why do some people insist on capitalizing the first letter in each word of 'open source,' as in: "Open Source"? Did it become a brand name somewhere along the line? Are we now heaping all open source projects under one umbrella? It niggles me nearly as much as when I see the plural of Unix written as "Unices," where the transmogrification of the letter x into a c makes no sense given that Unix is not a word of Latin derivation, but rather a trade name. So, to summarize: open source is not Open Source, and Unixes is not Unices.
Greets from the dirty pedant!
>>Why do some people insist on capitalizing the first letter in each word of 'open source,' as in: "Open Source"? Did it become a brand name somewhere along the line? [...] Greets from the dirty pedant!
You don't get much more pedantic than I do. I capitalize Open Source because I am referring to a specific category of software known by that name. I.e. it is a proper noun, similarly to how in the same post I capitalized Libre rather than just saying "libre software" which could refer to things other than those I meant. I.e. Open Source is a proper noun in this context.
And to anticipate any extreme pedants about to claim that it is a proper name rather than a proper noun because it is more than one word, you are wrong. There is no good foundation for such an arbitrary rule and you are just attempting to sound clever.
EDIT: And to the AC I am replying to, you have used an icon that is incorrect by convention. It should be the icon you see in the top right of this post when attempting pedantry. ;)
My pedantry has been bested not once, but twice; a defeat most unsavoury for me to digest. Surely I have felt the knife of conceit quicken to my very soul. But be assured, as certainly as I have played the fool's mummery today, having been thumped by pikes of haughtiness most solid, shall I tomorrow return on the grand stage (with the right avatar, of course).