
Jeez
Just as well everyone is diligently monitoring for updates and applying them as soon as possible.
Microsoft is urging everyone to install an emergency security update for all supported versions of Windows to fix a remote-code execution vulnerability. Details of the vulnerability were found and reported to Microsoft by security researchers poring over internal memos leaked online from spyware-maker Hacking Team. This …
"Microsoft Windows does not release any OpenType fonts natively. However, third-party applications could install them and they could be affected by this change"
Sure the library is broken, sure it might well be Adobe's shit code, but the decision to run a Font library in kernel mode was all Microsoft. This particular class of problem has been pointed out to MS and the user community on numerous occasions - going back to at least NT 4.0.
I am hoping the fix stops running that library in kernel mode in addition to fixing the code, but the fact that MS & their fans have expended more energy on burying the bad news than fixing the problem so far doesn't give me much hope.
This post has been deleted by its author
Also, how about this crazy idea. Perhaps some servers don't need a GUI at all and can do without a font attack surface in kernel mode. I understand that for many computing roles it may be necessary to have some of the graphics subsystem run in kernel mode. You are going to have a tough time convincing me, however, that this is anything but ancient legacy gunk that Microsoft knows is not good, but which, being basic infrastructure, would be prohibitively disruptive/expensive to do right now.
Sure, for a *server* it's wrong - actually, a server could even work without a graphics card.
For a desktop, it's needed to achieve the required performance, and not for gamers alone. From some perspectives, games are even easier to manage because usually they get exclusive use of the screen. From the same perspectives, it's more complex to handle multiple applications, each with windows that need rendering while overlapping.
That's why the next version of Windows will introduce a *server* version without a GUI.
And that's why Linux is introducing newer graphics frameworks running in the kernel - if you want performance on the desktop side you can't do without.
>>> Perhaps some servers don't need a GUI at all and can do without a font attack surface in kernel mode.
But but but ... all compute devices should be running a unified OS. I mean it's not like there's an actual *reason* to have different server, desktop, tablet and phone capabilities. I mean they are all used for the same thing...
>But but but ... all compute devices should be running a unified OS. I mean it's not like there's an actual *reason* to have different server, desktop, tablet and phone capabilities. I mean they are all used for the same thing...
I hope the upvoters actually caught your sarcasm instead of agreeing literally.
It's not impossible. It's slooooooooooooooooooow.
That's why Linux is used by 1.46% of desktop users, and mostly by people who don't go beyond an SSH shell to a server. Try to use anything graphical and it's a slug, compared to other systems.
Just take the time to look at how font rendering works, and how a graphic card works, and maybe you'll understand why. And you'll also learn why most Linux desktops and windows managers are pure crap.
"That's why Linux is used ... mostly by people who don't go beyond an SSH shell to a server."
This is a new and ingenious one. Is this your own material or do you have a script-writer?
Back in the day I did use Windows for more or less this purpose. The first example was when Visionware in Leeds did a nice package which included Windows/286 or 386 (look it up - it was a thing) and an X-server; it was a very good way of getting multiple sessions from a PC.
"Sure, Linux doesn't use Direct Rendering Manager in kernel, does it?
Face it, the amount of processing required by actual application requires most of pixel calculations and settings to happen close to the VRAM and GPU..."
If anyone is genuinely interested in finding out a bit more around the topic, I suggest that they read up some papers on how SGI implemented their early 3D accelerator hardware, drivers & libraries. Might be a bit of a hunt - they were published in the early 90s, I think I found them in IEEE Computer Graphics & Applications back in the day.
Anyway - in SGI's case Performance was a tougher problem for them - as they were working with slower silicon than the NT 4.0 bods, and yet they decided to pay a *lot* of attention to stopping people from cracking the kernel and applications via the graphics hw & libraries. I can't believe that all those techniques passed Microsoft by, especially as they actually *hired* some of the SGI folks... Perhaps MS were simply too cheap to license the tech.
>Anyway - in SGI's case Performance was a tougher problem for them - as they were working with slower silicon than the NT 4.0 bods, and yet they decided to pay a *lot* of attention to stopping people from cracking the kernel and applications via the graphics hw & libraries. I can't believe that all those techniques passed Microsoft by, especially as they actually *hired* some of the SGI folks... Perhaps MS were simply too cheap to license the tech.
Well, shoving the whole GDI system haphazardly into kernel space had to be a lot cheaper, and considering most of their customers don't know the difference, which company is still around in the end (in all but name)? Sad but true, though people did learn something and haven't shown a lot of interest in Microsoft engineering in mobile.
"Sure the library is broken, sure it might well be Adobe's shit code, but the decision to run a Font library in kernel mode was all Microsoft."
Well, consider that font handling is a basic OS function (meaning it gets used all the time) AND that graphics drivers are in kernel space for performance reasons, how else are you going to get smooth and speedy font rendering without tons of time-wasting context switching?
NT was a server OS; it doesn't really matter how fast a server OS repaints windows or scrolls the console screen. I don't remember NT 3.51 being noticeably slower than Windows 3.1 for general use, and as it was a server OS nobody was running Photoshop on it.
They should have gone with the idea of merging the Windows 95/NT lines into XP but having two XPs. A server XP done properly, a desktop XP which had stuff in the kernel if necessary, and one compatible API to rule them all. Instead their idea of a server OS is the same as a desktop OS with a registry key to be able to use Active Directory.
"Well, consider that font handling is a basic OS function (meaning it gets used all the time) AND that graphics drivers are in kernel space for performance reasons,"
I suspect that convenience and slinging the software out of the door as fast as possible also played a part.
"how else are you going to get smooth and speedy font rendering without tons of time-wasting context switching?"
There are a number of techniques you can use to reduce context switching without running complex application code in the kernel. Two of the simple and obvious ones are:
1) Build up a display list (usually made up of primitives), then render the list all in one go.
2) For fonts and other oft-replicated items you can cache the rendered glyphs so you don't need to keep re-rendering them - see the sketch below. Some systems even cached glyphs in off-screen display memory as well.
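To make (2) concrete, here's a rough sketch in C of the sort of user-space glyph cache I mean (names and sizes are invented for illustration): the expensive rasteriser runs at most once per glyph, and every later use is a cheap lookup of the cached bitmap, so none of that complex code needs to go anywhere near ring 0.

/* Rough sketch of a user-space glyph cache (all names/sizes invented for
 * illustration). The expensive rasteriser runs at most once per glyph;
 * after that the cached bitmap is reused, so no kernel-mode font code
 * is needed to keep text rendering fast. */
#include <string.h>

#define MAX_GLYPHS 256            /* toy cache: one slot per 8-bit char code */
#define GLYPH_W 16
#define GLYPH_H 16

typedef struct {
    int valid;
    unsigned char bitmap[GLYPH_W * GLYPH_H];   /* 8-bit coverage values */
} Glyph;

static Glyph cache[MAX_GLYPHS];

/* Stand-in for the complex (and potentially buggy) rasteriser - the bit
 * that belongs in an ordinary user process, not in the kernel. */
static void rasterise_glyph(unsigned char c, Glyph *out)
{
    memset(out->bitmap, c, sizeof out->bitmap);    /* pretend rendering */
    out->valid = 1;
}

/* Returns the cached bitmap, rendering it only on first use. */
const unsigned char *get_glyph(unsigned char c)
{
    if (!cache[c].valid)
        rasterise_glyph(c, &cache[c]);
    return cache[c].bitmap;
}

int main(void)
{
    const unsigned char *a1 = get_glyph('A');   /* rasterised here */
    const unsigned char *a2 = get_glyph('A');   /* served from the cache */
    return (a1 == a2) ? 0 : 1;
}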
If you want to learn more there are a lot of books & papers out there on the topic, and there are millions of lines of production code you can read through (for free). In my case I used to religiously read through every copy of IEEE Computer Graphics & Applications and every databook I could get my hands on. You may find hardcopies of early 90s CG&A hard to find, so it might be worth a look at computer.org to see if they have digital editions of the back-issues. If you are lucky you will find a corporate tech library or university that will be only too happy to have you take away all their old copies - just ask them.
Well, consider that font handling is a basic OS function (meaning it gets used all the time) AND that graphics drivers are in kernel space for performance reasons, how else are you going to get smooth and speedy font rendering without tons of time-wasting context switching?
I think this is the root cause: x86 is dreadful at context-switching which is why the decision was taken to put stuff that had deliberately been kept out of the kernel into it. I suspect it didn't make as much difference on the DEC Alphas that early on were given equal status to x86. Sigh, another instance of where the Wintel duopoly stifled innovation and quality.
Believe it or not, it wasn't a gamer requirement. It was a requirement for complex graphical applications, including DTPs, vector and bitmap graphics, CAD, etc.
Try to scroll a complex document with lots of text in different fonts, antialiasing, kerning, etc., and some complex graphics, and try to render it smoothly while the user scrolls or zooms it...
Then ask yourself why none of these applications are available for Linux - I mean *professional grade* applications...
I could argue that as well, but fine, let Windows be used for engineering workstation stuff. I am also grateful, though, that the manufacturing production servers I support don't have a bunch of legacy Windows 3.11 crap shoved into the kernel (not on Linux either, though Red Hat is now going down the same desktop-first-and-only path). The "Where have you been for the last 20 years?" post below nails the issue.
"Try to scroll a complex documents with lots of text in different fonts, antialiasing, kerning, etc., and some complex graphics, and try to render it smoothly while the user scrolls or zooms it...
Then ask yourself while none of such applications are available for Linux - I mean *professional grade* applications..."
Then ask yourself why so many of those applications were originally Unix applications ported over to Windows NT. In some cases there were Linux ports as well, since one reason for Linux's success was the ease of porting from Unices. Then ask yourself whether the real reasons were commercial rather than technical.
Try to scroll a complex document with lots of text in different fonts, antialiasing, kerning, etc., and some complex graphics, and try to render it smoothly while the user scrolls or zooms it...
"But... but ... MUH OPTIMIZATIONS! I can't do it! HERP! DERP!"
I agree the situation would be completely hopeless if practically the whole company consisted of low-grade fakers unable to even understand how this "Operating System" that they are supposed to own even works. Well thought-out optimizations and proper architecture would be right out, and everybody would think it a good idea to shit all over everything and do insecure stuff where it shouldn't be done.
As I suppose this is not the case at MS, some other factor must have been very important.
"It's the financial models that limit the kind of "professional" apps that Linux can support as well as the overall success of the OS."
What are you talking about? Commercial software sells on its own merits. They don't have to worry about how the OS is distributed. And there's absolutely nothing preventing you from selling commercial software for Linux. Licensing is on a program-by-program basis. The only things they're concerned about are appropriate market penetration and support. That's why there was tons of Mac-only software in the old days: because it was the tool of choice of certain professional niches.
"What are you talking about? Commercial software sells on its own merits. "
Agreed. For years I've been using seriously expensive protein modeling software that was written for Unix/SGI/Linux. The sort that needed a license server (or worse, only had a few copies available via a token system).
"It was a requirement for complex graphical applications, including DTPs, vector and bitmap graphics, CAD, etc."
Would you include video in this? In my household TV is very rarely watched directly but via MythTV. Because of the constraints of a domestic environment this runs on an old fan-free Intel mini-ITX board with just standard Intel graphics, simultaneously shuffling multiple streams from the receivers (note the plural) onto disk and the watched programme off it.
I don't have many requirements for some of the other stuff you mention although LibreOffice & PDF viewing works quite well under KDE on the Debian laptop on which I'm typing this.
But I believe that another Unix-derived system is quite popular for such complex graphics. You may have heard of it. It's called OS X & comes from a little company called Apple. As they use a Mach-style kernel, where the principle is to shove as much stuff as possible into userland, I'd be surprised if it handled fonts or the like in kernel space, and I'm sure there are plenty of folk here who can give chapter & verse on that.
Twenty years ago, I was fiddling around with "fly through" views of 3D CATIA datasets at Boeing. Targeted at the high end IBM AIX workstations, they ran just fine on my little Dell PC running Slackware/X11 (back in the 1.2 kernel days). And these were being generated by clients running remotely (20 miles away, over leased lines).
Back then, the problem with running full up CATIA on Linux boxes were proprietary I/O drivers patched into the AIX X11 implementations, mainly for the specialized input h/w. Fast forward to today: Most of the high performance graphics stuff is licensed to individual apps (games, CAD systems, etc.). Ask for a license to hook some proprietary GPU API to an open X11 server? Forget about it. Graphics optimization must still be done on a per application basis. It's just buried deeper in Windows than in Linux.
"Believe it or not, it wasn't a gamer requirement. It was a requirement for complex graphical applications, including DTPs, vector and bitmap graphics, CAD, etc.
Try to scroll a complex document with lots of text in different fonts, antialiasing, kerning, etc., and some complex graphics, and try to render it smoothly while the user scrolls or zooms it...
Then ask yourself why none of these applications are available for Linux - I mean *professional grade* applications..."
DreamWorks might disagree with you, for one example...
Files run in the kernel in Windows. The Pope is Catholic. ATMFD.dll is such a file. Ursines defecate in areas surrounded by trees. Changing that particular process would mean that far too many third-party programs would have to be changed. Fonts are a particular issue as they require both software (documents) and hardware access (printers) to the system. If not, then things would crawl when printing documents or displaying them.
It's all so "luvvies" can make pretty Powerpoints and Word docs.
We've all been down this road before. Have you been living in a bunker somewhere for the last 20 years?
"Files run in the kernel in Windows" - with such an incomprehensively vague and bizarre comment, I do wonder if you might bow out of further discussions. You'll only irritate those who understand stuff, and confuse those who don't.
How about "Flies run in the wake of the mature banana." Perhaps you could pick that up as your mantra and run about the woods at night. That'll keep you nicely out of the way of ordinary, decent everyday folk.
>Files run in the kernel in Windows. The Pope is Catholic. ATMFD.dll is such a file. Ursines defecate in areas surrounded by trees.
His point, I think (admittedly the writing is confusing), is to state some obvious things up front to prove his point. More interesting to me is his implication that, in general, desktop optimization comes before all other competing interests, even in the core of the OS (which is so wrong, but also where Linux is headed).
Fonts. In kernel mode.
FREAKING FONTS IN KERNEL MODE? The hell were they thinking in Redmond? That's almost as bad as passwords stored in plaintext.
I get that some applications need that kernel mode access for graphics. But fonts? Something that could be rendered in user mode with no lag on a 386? Why the hell would you ever put something like that in kernel mode??
Why not add in rasterized vectors?
You spout a lot of tech, but in the end all we're talking about is displaying text on a screen. If Windows was the only OS that could do that, then your argument might work. Unfortunately, there are plenty of other OSes that can do so as well, without endangering the stability of the whole by using kernel operations.
So find some other argument.
> 386 were having issue to display bitmaps fonts on VGA displays, and were doing it writing directly to the video buffer..
OK, let's assume it's 1994 and we buy that excuse. Fine, but it's frigging 20 years later. Next you will say fonts are naturally something that kernels should be dealing with directly.
>OK, let's assume it's 1994 and we buy that excuse. Fine, but it's frigging 20 years later.
And MS have "rewritten the OS" right? right? We've also got brand new versions of applications too?
Load a large document into Word on a multi-core machine and try paging through it quickly while observing how many cores get maxed out.
We should be able to do things quickly now with faster silicon. I'm not sure if it's relevant, but I can ssh -X across my network to a core2duo E7500, fire up libreoffice and the experience is very close indeed to my local specced-out i7-3930k. The core2 has slower disks and 1/8th of the RAM clocked far slower, plus serialisation for the network. I can also ssh -X from my core2duo 2.4Ghz imac (which I believe does X in userspace) across the network and run libreoffice just fine.
I'm not suggesting this is a good solution for gaming, but if userspace rendering was at least the default, set at application launch time or even boot time, there's a whole raft of issues which just go away. If I'm browsing and I suddenly get a request to launch kernel-rendering to read a document, I'll have a clue that something is up.
Thing is, your screen will get rendered kernel-side here, too--either by the source or by your machine. As others have noted, it's a performance issue: namely a constant context switching issue since fonts are graphics-related and the graphics drivers, for performance reasons, are kernel-mode.
"Thing is, your screen will get rendered kernel-side here, too--either by the source or by your machine."
This is really basic stuff, you should do some reading about it instead of trying to guess what happens. Here's a clue for you: You can map a framebuffer into userspace. Windows could do that too.
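In case anyone thinks that's hand-waving: on Linux an ordinary process really can mmap() the framebuffer device and poke pixels directly, with no kernel-mode rendering code involved. A minimal sketch (assumes /dev/fb0 exists and is in a 32 bits-per-pixel mode; error handling kept to the basics):

/* Minimal sketch: map the Linux framebuffer into userspace and draw one
 * pixel from an ordinary process. Assumes /dev/fb0 in a 32bpp mode. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/fb.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        perror("ioctl"); return 1;
    }

    /* Map the whole framebuffer into this process's address space. */
    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Write one white pixel at (100, 100) directly from userspace. */
    size_t off = 100 * fix.line_length + 100 * (var.bits_per_pixel / 8);
    *(uint32_t *)(fb + off) = 0x00FFFFFF;

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}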
>> 386 were having issue to display bitmaps fonts on VGA displays, and were doing it writing directly to the video buffer..
>OK, let's assume it's 1994 and we buy that excuse. Fine, but it's frigging 20 years later. Next you will say fonts are naturally something that kernels should be dealing with directly.
Don't give this idiot the slightest attention: an Apple LC could render both vector and raster fonts pretty amazingly, with a 16MHz CPU, all while running Quark.
We always get the exact same "excuses" from window cleaners and surface experts; we got the same for the vuln found in http.sys... Windows is inherently slower because it runs a gazillion obsolete subsystems, the dependencies are a complete mess even for veteran kernel developers, so they are forced to shove everything they can into kernel land, making the whole system insecure.
This post has been deleted by its author
Hmm, I read this thread with some interest - and to the guy quoting Quark on an Apple LC: it rendered well, and that's one reason why it was the de facto standard in the publishing industry, the principal reason being a proper understanding of CMYK and device gamuts - professional printing presses are all about those two.
However it was never particularly quick. x86s were worse, and even with VESA BIOSes coming along all that did was give you tricks like triple-buffering; you still had to calculate the whole lot on the CPU, so for speed alone I fully understand doing this (back then). Then a year or two later, from the 486 era onwards, with the advent of graphics controllers plugged into the VL-bus, the Mac dominance was broken. This was all done with ultra-quick 2D rendering - forget your polygons, that was purely secondary back then - it's all about bezier curve calculations, also a key part of PostScript Level 2 (re: breaking Apple dominance). The VL-bus was directly wired to the CPU, so it isn't a surprise at all that this was all kernel mode. Aldus PageMaker and then Quark moved quickly to the PC domain and that was that for Apple for many years. Back then you weren't even connected to anything by TCP/IP unless you'd installed a TCP/IP stack; security was a different world prior to '95, so be careful about applying 2015 standards to back then.
Of course it's a load of cobblers to still have fonts rendered in the kernel in this day and age - it should have been dropped by the late 90s at the latest - but back then, it was commercially important.
"The VL-bus was directly wired to the CPU so it isn't a surprise at all that this was all kernel mode."
VLB reduced the latency and increased the bandwidth to the graphics hardware, so if anything there was even less excuse for running third party application code in ring 0. I found this out the hard way with a logic analyzer, a 'scope and a misbehaving RIP.
Perhaps it is the gazillion copies of svchost that seem to spring up out of nowhere and consume lots and lots of CPU that are to blame?
They can't all be doing the same thing?
As has been said, there is a lot of old cruft in the Windows OS that should have been consigned to the great bit bucket in the sky years ago.
Bring back the ASR-33's and acoustic couplers. No graphics nonsense with them.
If we are to believe the white paper ( https://technet.microsoft.com/library/cc750820.aspx ) then the GDI could be taken out of the kernel and brought back into userland without affecting compatibility. When they moved GDI into the kernel they left a GDI userland stub that Win32 programs continued to call and so maintained compatibility between NT 3.51 and NT 4.
"Application developers are not affected by this move. The Win32 APIs are unaffected. And though Window Manager and GDI are now contained within the Windows NT Executive, all Win32 APIs are still accessed with the same User32 and GDI32 interfaces."
But we don't know if we are to believe the white paper, as they spend most of the end of it protesting too much about how it didn't affect stability when it so obviously did. And let's savour the following final paragraph in the summary...
"Ultimately, integrating the Window Manager and GDI subsystems into the Windows NT Executive is simply another step in the continuous improvement of Windows NT. It allows Windows NT to continue to define, within the framework of the Windows 32-bit operating system family, a new standard for personal computing that is both high-end and mainstream at one and the same time."
Yes, Windows was all things to all people then as it is now. One size fits all, then it was desktop and server and now it's desktop, tablet, and phone. How's that clusterfuck working out for them...?
then the GDI could be taken out of the kernel and brought back into userland without affecting compatibility.
Thanks for the detail!
Didn't Microsoft kill GDI in Vista? Certainly on any machine beefy enough to handle WPM the font-handling should be the responsibility of the graphics engine, hopefully running on the card.
People who have never seen, let alone run a 386 really ought not spew in the forums.
Yes, I was running a Windows 3.11 workstation on a 386 with math coprocessor back in the day, never had a problem with displaying bit mapped fonts on either my paper white (DTP work) or color monitor (CAD). And your whole CAD argument is just right out of your arse. Granted I was running Autocad which most folks don't consider real CAD, but the relevant fonts for it were additional drawings.
Modern fonts are applications in and of themselves. They might even be Turing-complete.
Rendering a font means running code provided by the font author, one hopes within a decent sandbox.
Quite why this sandbox needs the privilege of living in kernel-land is beyond me - especially as Apple (of all people) have proven it to be unnecessary under Windows.
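For anyone doubting the "unnecessary" part: an ordinary unprivileged process can rasterise a TrueType glyph, hinting bytecode and all, with FreeType. A rough sketch (the font path is just an example; build against freetype2, e.g. cc demo.c $(pkg-config --cflags --libs freetype2)):

/* Rough sketch: rasterise one glyph entirely in userspace with FreeType.
 * The font path below is just an example. */
#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

int main(void)
{
    FT_Library lib;
    FT_Face face;

    if (FT_Init_FreeType(&lib)) { fprintf(stderr, "init failed\n"); return 1; }
    if (FT_New_Face(lib, "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
                    0, &face)) {
        fprintf(stderr, "could not open font\n");
        return 1;
    }

    FT_Set_Pixel_Sizes(face, 0, 32);                /* 32 px tall glyphs */
    if (FT_Load_Char(face, 'A', FT_LOAD_RENDER)) {  /* runs the hinting VM */
        fprintf(stderr, "render failed\n");
        return 1;
    }

    /* The rendered coverage bitmap is now plain memory in this process. */
    printf("glyph 'A': %ux%u pixels rendered in userspace\n",
           (unsigned)face->glyph->bitmap.width,
           (unsigned)face->glyph->bitmap.rows);

    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}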
Perhaps it's for the same reason a certain Mr. L. Torvalds has included support for f*cking *game pads* in his own, little-used kernel? (I forget its name. Minux? Torix? It's got an 'x' in it somewhere.)
But please, do continue with your rant, O Great Sage! I'm sure no other OS has ever included code designed in an era when dial-up Internet access from home was still a novelty even for most IT experts.
"But please, do continue with your rant, O Great Sage! I'm sure no other OS has ever included code designed in an era when dial-up Internet access from home was still a novelty even for most IT experts."
In the PC space, agreed. However NT was actively marketed as a replacement/alternative to UNIXen at the time of 3.51 & 4.0, and it shipped with TCP/IP & IPX support out of the box, so lack of knowledge about networking would be a very weak excuse IMO.
The best excuse I can come up with for MS's naivete is their products mostly ran on single-user boxes. By contrast the multi-user OSes of the time had been secured against hundreds of users trying to steal resources and play pranks on each other for a couple of decades.
That is an excuse though, it's not a good reason. :)
"Perhaps it's for the same reason a certain Mr. L. Torvalds has included support for f*cking *game pads* in his own, little-used kernel? "
That 'little-used' kernel is running on more devices than any other (for better or worse). Don't forget that desktops constitute a tiny portion of computers in use.
If there were a bug in Linux that caused all copies to simultaneously stop tomorrow, the world would grind to a rapid halt.
I say this as somebody who uses Windows and OS X daily as well. I just don't bury my head in the sand when it comes to keeping the entire technology landscape into perspective.
Hope I didn't make a mistake installing this patch on the day of release. Laptop has been on "Preparing to configure Windows" screen for at least 30 minutes. The spinner's spinning and the dot is moving so I hope something good is happening. Would be nice to have more info. Finally got tired of waiting and fired up desktop to complain. This is the only patch it's installing.... There has got to be a better way. Windows 7 SP1. Forget the exact specs on laptop but it's an HP w/dual-core AMD about 2.7 GHz and 8-GB RAM.
Edit to add: A slow install of updates occurred a few years ago just as the power failed. The UPS shutdown sequence began and there was no way to override it, so this PC lost power mid-update. What a cluster-xxxx that was.
I had a similar situation happen to me while updating a laptop last year. Things had gotten to the 60-minute mark and I was just about to push the power-off button. (I'd read online that anything past 30 minutes meant that the update was hung, so you had to power off and then go through a rather extensive manual recovery process). But then I got a phone call which distracted me for a while, and when I came back to it at around the 90-minute mark, all was well. !?!?
Just leave it, if you interrupt it it's worse. If you've got an Atom netbook then that might mean leaving it the whole damn day.
All because MS won't release any more SPs for Windows 7 above SP1, if the release cycle were like XP's it should be on SP3 or SP4 by now. Got to push people onto the latest and greatest trainwreck you know.
Microsoft are in a complete mess with windows. They don't even have time to step back and consider their best strategy. They are too busy firefighting and even while they are firefighting they are adding new features that no one wants. Windows is clearly in a state of unmanageable complexity. Banks shouldn't touch Windows. They should think about putting some simple, manageable OS on ROM for their critical systems.
Kind of funny hearing people want to kill Flash. Flash never had, and never will have, as many exploits as Windows has. So, whenever I hear people saying we should kill Flash they should really think about killing Windows along with it. Windows and Flash are perfect for each other. Both have gaping security holes and we'll be entertaining their 0-days for years to come.
This makes no sense. It's not a logical argument if you have any clue what is going on at all.
Flash is not an operating system.
Flash is not a browser.
Flash is a plugin for a browser that requires an operating system.
So let's do the math here. Windows Exploits + Internet Explorer Exploits + Flash Exploits. This holds true for other operating systems as well. Linux Exploits + Firefox Exploits + Flash Exploits.
"Windows might have it's holes, but it has fewer than most of the competition."
The problem with that statement is you are comparing apples to oranges. Closed source development hides faults so that the customers don't get scared off. Barely a month goes by without a vendor silencing a security researcher, that should tell you all you need to know about the accuracy of vulnerability counts for closed source.
"Closed source development hides faults so that the customers don't get scared off. "
Thanks to Open SSL etc, we know that the quality of Open Source code is often awful with zero proper security reviews in 18+ years...so being in public view doesn't mean anything is secure.
"that should tell you all you need to know about the accuracy of vulnerability counts for closed source."
It doesn't tell me anything about vulnerability counts for Microsoft OSs. However (seeing as Linux isn't really used on the desktop), try comparing defacement via remote exploit rates for internet facing webserver OSs, or malware levels on mobile phone OSs then...
"However (seeing as Linux isn't really used on the desktop),"
Linux is really used on the desktop but not generally by casual computer users. A lot of scientists/engineers/academics use it for example. 1.6%, of course, is actually a very good number considering that many people have to get off their ars*s and install it themselves.
In answer to AC earlier (perhaps you could mention it to him !) the graphics performance is often the reason for using desktop Linux. I and many of my colleagues use if for hardware stereo 3D graphics for protein modeling. Rotating a large protein complete with all its bonds smoothly in 3D whilst running further computationally intensive modeling programs is about as demanding as it gets BTW.
Firstly, thanks for making the effort to engage Vogon. :)
"Thanks to Open SSL etc, we know that the quality of Open Source code is often awful with zero proper security reviews in 18+ years..."
OpenSSL is one project out of many, just as MS is one vendor out of many. Just because MS decided to throw third party code into ring 0, I don't assume that IBM pulled the same stunt with z/OS.
"so being in public view doesn't mean anything is secure."
Quite correct, I am in violent agreement with you on that score.
Bad code can happen anywhere; the trick is to identify it & mitigate it before it burns you. In the case of OpenSSL quite a few outfits forked it because they couldn't get their patches or vuln reports accepted (and this was a common complaint levelled against OpenSSL for a very long time). In the case of Windows we've known about the risks of running third party code at ring 0 for decades, and MS just hasn't listened or decided it's enough of a problem until there are heavily publicised attacks out in the wild. From the point of view of the end user the material difference is that an MS font rendering vuln gives root to the attacker whereas vulns such as Heartbleed compromise user processes.
At the end of the day it's your choice to make excuses for vendors with massive margins, personally I would like them to actually fix the defects in the products that folks buy from them. Hell, even if I didn't pay MS anything I'd want their stuff fixed because those flaws cost productivity and that impacts my spending power.
I note that as well as the previously mentioned Open SSL remote get root exploits used by Slapper, today we find that Open SSH can allow 10,000 logon attempts per 2 minutes!!
It's a shame Linux doesn't have sensible and modular architecture that can control authentication centrally and not allow an application to compromise something so basic as account lockouts!
See http://arstechnica.com/security/2015/07/bug-in-widely-used-openssh-opens-servers-to-password-cracking/
"on the other - badly written OS pawned by displaying text"
Which if you read the details at least requires end user interaction.
The Open SSL exploit originally used by Slapper and the password issue above could be used to attack *NIX systems with no user interaction.
"Open SSL remote get root exploits used by Slapper"
Apparently that requires the OpenSSL process to be running as root - which is possible, but SOP is to run web servers and other network services as anything-but-root to mitigate the risk of a remote attacker being able to root the box ;)... In the case of services like OpenSSH that *really* need root, privilege separation can be used to mitigate the risk of remote root exploits.
"It's a shame Linux doesn't have sensible and modular architecture that can control authentication centrally"
Why doesn't PAM (http://www.linux-pam.org/whatispam.html) qualify in your estimation ?
"Why doesn't PAM (http://www.linux-pam.org/whatispam.html) qualify in your estimation ?"
Because apparently it doesn't prevent thousands of authentication attempts happening against privileged accounts on a default install of any Linux that has Open SSH enabled.
Also a quick read of the RFC that you link to (which you apparently didn't) implies that PAM does not deal at all with unified lockout and password policies, etc - and is just an API that sits between the code that does and other applications.
"Because apparently it doesn't prevent thousands of authentication attempts happening against privileged accounts on a default install of any Linux that has Open SSH enabled."
I have found that my own Linux boxes are not vulnerable, so no the vulnerability doesn't actually affect all Linux boxes (and I haven't even looked at my OpenSSH + PAM configs either).
"Also a quick read of the RFC that you link to (which you apparently didn't) implies that PAM does not deal at all with unified lockout and password policies, "
I hate to state the obvious here but an RFC is just a document, PAM is code. They are not the same thing.
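To spell out the distinction: PAM is a C API that applications call, and the actual policy (lockout, password rules, whatever) lives in whichever modules the admin stacks behind a service name. A minimal sketch of the application side - the "example" service name is made up, and you'd build against libpam and libpam_misc (cc demo.c -lpam -lpam_misc):

/* Minimal sketch of an application using the PAM API. The policy itself
 * (lockout, password quality, 2FA, ...) is whatever modules the admin has
 * stacked for the "example" service - that service name is made up here. */
#include <stdio.h>
#include <security/pam_appl.h>
#include <security/pam_misc.h>

static struct pam_conv conv = { misc_conv, NULL };  /* prompt on the tty */

int main(int argc, char **argv)
{
    const char *user = (argc > 1) ? argv[1] : "nobody";
    pam_handle_t *pamh = NULL;

    int ret = pam_start("example", user, &conv, &pamh);
    if (ret != PAM_SUCCESS) {
        fprintf(stderr, "pam_start failed\n");
        return 1;
    }

    ret = pam_authenticate(pamh, 0);       /* the stacked modules decide how */
    if (ret == PAM_SUCCESS)
        ret = pam_acct_mgmt(pamh, 0);      /* e.g. lockout / expiry checks */

    printf("result: %s\n", pam_strerror(pamh, ret));
    pam_end(pamh, ret);
    return (ret == PAM_SUCCESS) ? 0 : 1;
}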
I still don't understand why this OpenSSH + PAM vuln justifies a vendor (that gets paid many billions for it's product) failing to fix poor design that was pointed out to them over and over and over again for over a decade.
FWIW *if* I wanted to remove the possibility of that vuln happening I could fix the OpenSSH code myself. Fixing the font lib vuln myself just wouldn't be an option, and if I chose to publish such a fix I'd be open to all kinds of legal crap from the DMCA to copyright infringement. The fact remains that closed source is inherently harder for the user community to fix.
"It's a shame Linux doesn't have sensible and modular architecture..."
I hate jumping into this conversation, but a Windows fan stating that Linux doesn't have a modular architecture shows an unholy amount of ignorance in this industry.
Linux and many Unices are about as modular as operating systems get. Windows by the same token is very monolithic. I'm not talking about labels we apply to kernels (micro versus monolithic kernel, etc etc), I'm talking about the OS.
Try building any popular Linux distro from the ground up using packages. It's easy and you get an incredibly granular level of tools and services. Kinda of like Lego really. It's so modular there are often many solutions to the same problem - hence the multitude of desktop environments available for example.
Install Windows and you get very few choices. Desktop versions only come with a GUI, and only on a server can you leave it out. Install an application and it roots itself so deep into the OS that it's difficult to completely remove. For all intents and purposes it simply becomes a part of the monolith.
Given that this arose from a discussion on authentication systems, Unix/Linux wins hands down for a modular architecture thanks to PAM. You can have a system authenticate against practically anything imaginable, it just needs a PAM module. Contrast that to Windows, where you get a choice between local accounts and AD (i.e. more Windows). That's it.
I'm not saying that AD is bad, in fact it has a huge number of merits. Central policies, a reasonably logical directory structure, ease of deployment and administration. It is a known quantity to many folks and is largely predictable, has commercial support available if you need it and does cover a number of possible use cases. Likewise for Windows clients, and the whole shebang is designed to integrate quite well with itself.
Many PAM modules and even Samba don't include account lockout policies as you state. Luckily 'nixes are so flexible we can choose to easily join 'nix hosts to an AD domain to take advantage of those. If it suits the use case involved then it's a great option. We get choice here too; any of LDAP + Kerberos, Samba/Winbind, PBIS or realmd can do this for us.
Don't want to run an AD domain? Then use Samba 4, or IPA, or build your own with any number of LDAP servers available. Or NIS if you are feeling oldschool. The key thing here is the option for choice; you can pick the right tool for any given job.
By contrast with Windows, the only choice for auth you get is more Windows. That's OK in many situations too; look at many corporate environments these days. There's a lot of stuff to tweak and the policy enforcement generally works very well. But more modular and flexible it isn't - it's designed that way! That's the point that was being made.
The day we can claim that Windows is more modular than Linux is the day I can install it on my broadband router. But I can't because it's completely impractical to do. And even if it weren't, we rely on Microsoft to set the direction as we can't easily get under the hood and modify it to that degree ourselves.
The moral? We have different tools for different use cases, and that's a really good thing. But let's not lose sight of what the real differences are, or ever pretend that any one platform can do absolutely everything the best way.
"the material difference is that an MS font rendering vuln gives root to the attacker whereas vulns such as Heartbleed compromise user processes"
Linux has had plenty of remote vulnerabilities that either give root directly or can be combined with numerous privilege escalation vulnerabilities to get root. For instance the original Slapper Worm - which spread via Open SSL!
Microsoft have released a patch for Windows 10's preview build 10240. As commented upon by others, because W10 hasn't officially been released, it isn't included in the formal patch notices and seems to only be available via the W10 update process.
What amuses me is that MS could (unwittingly) preserve functionality (ie. vulnerabilities) across many Windows versions that enables malware to work, whilst at the same time causing problems when you try and use legitimate programmes across the same versions...
Strangely, this patch doesn't show up for me on Windows Update (running Windows 8.1). Or maybe it did install it, but I just don't know what to look for... But there has been no indication of any Critical Update being available or installed in the last 3 days.
Anyone know what I should be seeing?
For Windows 10, Microsoft are going to finally finish their performance tuning programme first started with Windows NT 4, and move Flash, Java and IE into the kernel.
Said a spokesperson: "We already allow the running of arbitrary code by anyone from anywhere at any time via our font engine, so this next step seems logical. For the short time your PC is still under your control, you'll probably really notice the slight speed increase."
When asked how Mac OS X managed to be faster at pushing pixels around a 5k screen than Windows and still managed to keep much of their code in userland, the spokesperson was frank: "most of our managers are idiots, we've lost our best people through stack ranking, morale is lower than ever, and we've outsourced most of our development to Accenture. But we've got some really nice looking icons in the latest Office - well flat they are. Please buy Windows 10."
Dude, don't be an idiot about it. Yes, running it in userland is fine now. You're saying modern Macs (some of the most expensive consumer computers on the planet) can render fonts in userland very fast, when Windows on a 386 couldn't? What an amazing point to make.
Hate these forums when illogical, rubbish reasoning gets posted, and upvoted just because it (nonsensically) ridicules what other people disagree with.
People are urged to install the update as soon as possible, and long before miscreants begin to exploit the vulnerability to spread malware and misery.
You're already too late. The whole reason we know about this vulnerability is precisely that the miscreants found it, were subsequently hacked, and the exploit was posted to the internet.
Roo made a statement that comes closest to a question I have.
"At the end of the day it's your choice to make excuses for vendors with massive margins, personally I would like them to actually fix the defects in the products that folks buy from them."
[http://forums.theregister.co.uk/forum/containing/2577028]
So my question is as follows.
Which part of the Windows kernel is its trusted computing base? That is, which part is responsible for guaranteeing the invariant of the operating system?
In this case, I think it is about much more than just "defects". The problem is closer to conceptual. Who has sat down, considered, and decided the question of what will be invariant in Windows? What was the outcome of those deliberations? That is, what did they decide will be the invariant the system must guarantee?
The entire OS kernel can't be the trusted computing base. That is far too large (last I heard, the Windows kernel was more than a million lines of code). Only then can I make sense of a question like what would motivate putting something like graphics or font drivers in the kernel.
"Which part of the Windows kernel is its trusted computing base? That is, which part is responsible for guaranteeing the invariant of the operating system?"
The following paragraph from the article linked by Dan 55 may answer your question:
"Finally, it's important to understand that this design is not fundamentally "risky." It is identical to the ones used by existing I/O Manager drivers (for example, network card drivers and hard disk drivers). All of these drivers have been operating within the Windows NT Executive since the inception of Windows NT with a high degree of reliability."
NT had no trusted computing base to start with, and MS were quite happy with that...
Here's the 'Security' section of that article, quoted verbatim (it is one of the shortest sections):
"Due to the modular design of Windows NT moving Window Manager and GDI to kernel mode will make no difference to the security subsystem or to the overall security of the operating system this will also have no effect on the C2 or E3 security certification evaluation, other than making it easier to document the internal architecture of Windows NT."
I really can't decide if that paragraph is a result of ignorance or corporate fecklessness.