Backwards compatibility
If only Windows offered the same level of backwards compatibility for programs as it seems to do for exploits.
Windows operating systems from XP to version 10 can be popped with a single bit, researcher Udi Yavo says. The hacker, formerly chief of the electronic warfare unit for Israeli defence contractor Rafael, detailed how the local privilege escalation vulnerability (CVE-2015-0057) fixed in this week's Patch Tuesday update could …
Well, this is largely down to a bad design decision in Windows NT 4. They moved the GUI code into the kernel for performance reasons... Now there is no way for them to take that back, as it would break compatibility. Even if they could, all the software would probably need to be recompiled. That's obviously not going to happen, as much of the commercial software still in use is no longer maintained by its manufacturer.
Now contrast that with your typical Linux system. There you _can_ change such things, because you have the source code for your applications. Your distribution can decide to make some architectural change and simply recompile everything for you, or even modify the software that needs changing.
This has nothing to do with bad design. This bug could have been in any other, non-GUI-related code and would have had the same effect. Just look at Linux, which has dozens of new kernel-level vulnerabilities every year even though it doesn't include a GUI subsystem in the first place.
You're also wrong about source code, recompilation and backwards compatibility. Those have nothing to do with each other either. If this is indeed a design flaw, it can't be fixed just by recompilation: a design flaw is only fixable - by definition - by redesigning the affected system and components, which also requires semantic changes to the programs that depend on that particular behavior. And if it's not a design flaw and could be fixed with recompilation, then it could also be done at the binary level, with no recompilation needed.
You really have no clue about how Windows works and what an API is... As long as the userland API calls stay the same - and a userland application always calls userland APIs - it doesn't really matter where most of the GUI code lives. When the change was made in NT4, no Windows application needed recompilation for that reason. The same applications that ran on Windows 95 ran on NT despite the very different underlying architectures. Why? Because the APIs were the same.
It's the API code itself that has the privilege to call into the kernel when needed - because only the kernel can access the hardware. Part of the GUI code has always been in the kernel (drivers...) because in a properly designed OS, userland applications should not be allowed to access the hardware directly (especially in a multiprocess/multiuser one). What Windows changed was to minimize the switches from kernel-space code to user-space code made inside the GUI APIs, because those switches are "expensive" - for a full explanation why, feel free to read http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html.
Even on the "typical" Linux system you may not have the source code for everything running on it. I run Oracle on Linux, and I don't have its source code. And it's not the only commercial application delivered on Linux *without* source code. Your Linux video drivers may be closed source too...
What Windows changed was to minimize the switches from kernel-space code to user-space code made inside the GUI APIs, because those switches are "expensive"
Yes, a major irritation with the i286, and to a lesser extent the i386, was the cost of using the features that supported memory protection, task isolation and OS functionality. By 'cost' I mean not only CPU clock cycles (an important consideration given the 286 ran at 10MHz) but also OS design terms, where the instruction set effectively straitjacketed you into creating something not too dissimilar to iRMX-286 (there are pros and cons to this; in our case it was a con, as it prevented us getting the real-time performance we needed).
I think, given the orders-of-magnitude performance difference between the i286 and current Intel CPUs, perhaps now is the time for MS to put all legacy Windows development into a VM/container - like it did with XP Mode on Win7 - and design a new version of Windows from the bottom up based on Intel iRMX and INtime technologies... I think this would also help get rid of many of the stack/buffer overflow issues that bedevil all current versions of Windows. The only problem is that the core OS would need to be written in ASM...
For that matter, with the 286 Intel proposed a far stronger security model using all four processor rings instead of just two. The model was: core kernel in ring 0, the kernel I/O subsystem in ring 1, OS userland code in ring 2, and applications in ring 3.
Because that model was not portable (other processors supported just two "rings"), more complex, and slower due to the many more ring transitions, it was never used, AFAIK. Other features allowed far better control over what executing code could do with memory segments. Some were so seldom used that they are being removed in the 64-bit architecture - and in my opinion AMD really designed a bad one, with only performance and easy 32-bit portability in mind, not security.
Face it: if there's one thing Windows has always done well, it's exactly offering excellent backward compatibility for applications, to the point that some issues can be traced directly to the need to keep old software working - often software written by clueless developers who kept coding on Windows 2000 and later as if it were still Windows 3.1.
You can have some fun reading Raymond Chen's "The Old New Thing" blog and finding out what applications MS developers have to cope with to keep them running even when they are really badly coded - often because they're in use by the classic "Fortune 500" customer, to whom you can't say "hey, you wrote crappy code, rewrite it per our OS specifications, and read those at least once every ten years".
I still see developers, often those using some RAD tools, fossilized in very old programming habits dating back more than twenty years, and unable to evolve as the OS does. Dinosaurs Windows still needs to support...
Which other OS still supports running *binary* code written more than twenty years ago? Can you run Mac OS9 applications on the latest OS X?
With Linux, you may have issues even recompiling code written a few years ago, because some library APIs changed enough to create trouble... Sure, if someone else maintains that software for you, you don't notice, but try to recompile some older application nobody maintains... or worse, run the binaries...
Hey, Microsoft apologist: the issue is performing GUI operations in the kernel. It might have been a neat idea for a single-user, disconnected computer (which was the main use case of Windows at the time), but not for today's computers, which need to read data and execute code from untrusted sources. It's inappropriate for this era, and the performance gains are negligible given the hardware improvements of the past two decades.
There's no need to get aggressive or upset because someone isn't completely across the internals of the Windows API (only Microsoft and security researchers are), especially when you appear to be a little confused about what you've just googled.
As for backwards compatibility, Microsoft know that without it they're fucked - but they're also getting fucked having to maintain it. The killer feature of Windows is that it can run legacy software; without that, it has no advantage. That's the only reason I have a Windows image lying around: for my old code.
I didn't need to "google" because I started working with Intel x86 protected mode long before Google ever existed. You can criticize the decision to have most graphics code in the kernel, but at least do it correctly - if there's someone really confused here, it's you. As you say yourself, you're not aware of Windows internals (there's a good book with that very title, a very good read).
The decision to keep that code in the kernel or not has - again - nothing to do with compatibility, except maybe for graphics drivers.
Is today's hardware powerful and optimized enough to remove the issues that led to that decision back then? Maybe - or maybe not. Even a SYSENTER/SYSCALL is still slower than a simple CALL, although faster than the old INT mechanism (CALL gates were rarely used), and switching to the kernel and back involves more than that: there can also be CPU cache and TLB effects slowing it down.
Graphics performance is still very important because we demand more, and it's what most users "feel" as the main indicator of system performance.
That's why almost nobody uses Linux for GUI-intensive applications - it's Windows or OSX. Ask yourself why... and even Linux, with DRI/DRM, moved some graphics code into the kernel.
Moreover, as someone wrote in another post, that kind of bug could have happened in any other call that ended up in the kernel. Removing some code surely reduces the attack surface, but you can't have *all* the GUI code in userland: the code that talks to the graphics card needs to be in the kernel, and in today's graphics subsystems a lot of operations are offloaded to the GPU.
Sure.
I'm not criticising the decision. It was the right decision at the time it was made, and I'd stretch to say it's still the right decision if the responsiveness of the UI should trump the integrity of the system.
Also, I wasn't saying Linux is better (I didn't even mention it) - there's give and take when picking between systems. It depends on what your priorities are. Personally, I don't want a font or an image (for example) to be able to execute code at a higher privilege than my browser.
Each to their own.
Sorry, I believed you were the OP of the comment I answered. My fault.
Vector fonts are trickier to handle because you don't just shuffle pixels around - you actually need to calculate which pixels to show and how (because of antialiasing, kerning, etc.) - and to display text - often a lot of text - there's a lot of work to be done under the hood. Userland code can't write to kernel buffers, and kernel code still goes through the protected-mode access checks whenever accessing userland buffers, which slows things down.
Moreover, users want to exploit their expensive graphics card GPUs... you got Direct2D and hardware-accelerated browser rendering - browsers, being UI intensive, are today among the applications that benefit most from more performant UI code.
Anyway, this wasn't a bug due to fonts - it's how the scrollbar kernel code accepts data from userland without proper checks. It only allows flipping a couple of bits, but that's enough to trigger a buffer overflow if properly exploited.
"Scrollbar kernel code". That really sums up all that's wrong with the Windows architecture.
If you're writing true kernel code - say low-level sync primitives (OK, maybe not spinlocks), or the device driver interface layer, or the scheduler - then hopefully you are experienced, an excellent coder, and of the particular mindset that understands exactly what to do with userland-sourced pointers, data and buffers.
If you are writing a helper function for scrollbars, then you're probably an intern, or a bit bored, or you've been given the job of fixing bug-1675432 because no-one else wants to do it. You won't be the best coder in the shop - they're working on the kernel. You'll be thinking about your next coffee, or your rent, or mortgage, or..., not the ramifications of running your slightly slipshod code with ring-0 privileges.
And you've just left an exploit for someone cleverer than you to use.
"Which other OS still supports running *binary* code written more than twenty years ago?"
VMS? Twenty years ago, VMS was already in its late teens, and though it may not have been very visible since HP inherited it, it is still around.
VMS on VAX was around in 1978. Still works, still binary compatible.
VMS on Alpha was around in the mid 1990s (that's your twenty years). Still works, still binary compatible (and via binary translation can run many VAX applications without a recompile and with very little performance penalty - this is a translator not an emulator).
VMS on Itanium? Yes well, shall we move on? But it is still selling, it does still work, and in a sense it's still binary compatible (though best performance on newer hardware may well require a recompile owing to the particularly sensitive nature of VLIW processors in general and IA64 in particular).
VMS on x86-64? Stay tuned. HP finally woke up and handed the future development of VMS to a company with a clue. More info at
http://vmssoftware.com/ (roadmaps for IA64, x86-64, etc)
"Thing is, graphical apps did run dog slow on VMS."
Depends on when you looked, and what box you looked at, and what the application was.
In days gone by there have been VMS workstations (and/or Xwindows applications running on VMS and displaying on a box with X11 display) which have been more than adequate for routine office desktop apps, and there have even been successful graphic-intensive workstation applications in fields like CAD, SCADA, and so on. You may well not have heard about them.
But in general, graphic-intensive apps on VMS aren't necessarily going to be performance leaders.
On the other hand, if your interests include robustness and security as well as performance, there may be some interesting tradeoffs to think about.
As HP chose to exit the VMS workstation market some time ago, that option hasn't really been generally available to customers for a while, even if they wanted it.
The new owners of VMS development may in due course see things differently than HP have done. Or they may not. Who knows. If you might potentially be interested, they're likely to be far more responsive than HP have ever been re VMS.
"I wonder if they will also see the light over PDP-11 and RSX-11M ..."
?
What light is there to see with PDP11? Lovely entry level architecture, a joy to program (so long as your program and data are small enough) and to build IO cards for (on Qbus), but largely irrelevant today except maybe as a teaching aid and the general teaching aid market has gone in other directions. Maybe also a fun plaything for those that way inclined (e.g. Canadian nuclear power plants).
[Roland6 probably knew this next bit already, others may not]
RSX11M hasn't been fully owned by HP or its predecessors for some time. The picture is somewhat unclear, but as far as I know, Mentec in Ireland acquired some rights (e.g. distribution) for a while when they were handed the PDP11 business in general. When Mentec went away, those rights were picked up by someone who seemingly (a) doesn't have any inclination to do anything with them (except prevent anyone else doing anything) and (b) doesn't want his name known, or the fact that he owns a company that owns the rights.
I think I did work out a while back, courtesy as usual of public LinkedIn info, who was in the frame here. You too might be able to do it with a search engine, some intuition, and some simple and obvious keywords: e.g, you could start with pdp11 rsx mentec linkedin, or for a different view with more history (and more noise), try pdp11 rsx mentec licence
Hmm, I couldn't find that quote in the page you linked to. In fact, there was only one mention of the word 'codebase' and that was (ironically) in the phrase 'A big complex codebase can have many lurking holes that will take many years to uncover.'
I'm going to have to agree with the previous poster about you not understanding what quote marks are for (as applied to Google search terms).
Look for "the entire codebase of Windows has been rewritten from the ground up" and you will only find reference to your post (plus possibly this one now that I've repeated it).
The closest I could find (within the edit window of this post) is this:
"“We have re-imagined Windows from the ground up.” ~ Steve Ballmer"
from here
https://techpinions.com/8-questions-for-windows-8/10010
Another OS with very good long-term binary compatibility is IBM AIX, but IBM z/OS probably holds the grand prize here: it's derived from OS/MVS of the early 1970s and, AFAIK, binaries from that era STILL run on the latest version as long as the mainframe has the equivalent facilities configured.
... "long term binary compatibility is IBM AIX" ... "z/OS" ...
@pierce: I nearly hit the report abuse link there 8) That's what you would be doing to those binaries from long, long ago. However, somewhere (banks?) it will still be going on. Mind you I'll bet a fair few of those binaries got a refresh just before 01/01/00.
Cheers
Jon