@Don Mitchell
"It's a tired cliche that Microsoft never innovates, repeated by people who don't really know anything about technology"
And that's a tired straw man argument put up either by Microsoft sock puppets or by people who are deeply ignorant of the *history* of the technology they are talking about. On the off chance you are the latter, prepare to learn.
Microsoft "innovates" by buying up companies, hiring staff *after* they have made innovations, or buying patents to enforce, then getting saturation PR to describe the result as ground-breaking. This is not innovation as most people understand it.
"Microsoft's NT operating system was light years ahead of UNIX in 1990,"
Debatable. What features did you have in mind?
What's not debatable is that the Director of Windows NT Development, David Cutler (and most, if not all, of his team), came from DEC, where he designed VMS. The most compatible part of Windows NT with Windows 9x was calling it Windows.
"and Linux is just now catching up to it in terms of feature set."
Which ones did you have in mind?
"Windows 95 had a pretty good UI for its day, many of its ideas sneaking their way back into Apple's system later."
Neatly airbrushing out OS/2 and the long-term work done by Xerox, which pre-dated all Apple and Microsoft work in this area. Windows keyboard shortcuts are straight out of the IBM Common User Access guidelines, BTW.
"OLE Automation was a major innovation that allows extensible interfaces on applications."
Setting inter-process communication standards (which is what OLE, ActiveX or whatever you're going to call it are) is a lot easier when you have a proprietary OS architecture (Amiga's ARexx, anyone?) or a virtual monopoly on a very limited number of hardware architectures (x86-compatible and what others these days?). The IBM "message queue" approach works across hardware platforms.
CORBA is the platform- and OS-neutral standard for this, though I don't know how widely it is used. The others were, and are, real cross-application scripting and control options that were around when OLE came out. Consensus in a monopoly is easy. It's tougher in a community, and tougher still to explain why having it is a *good* thing. I note the usual lack of qualification in the claim: all of this applies to non-Windows platforms too, just with significantly greater difficulty.
"Windows was far more modular than UNIX, with dynamic libraries and object interfaces (COM), and device driver interfaces -- all ideas copied years later in Linux. "
Funny, part of Microsoft's claimed reason for resisting the un-bundling of IE was that the Explorer code was so entwined with the core modules it was impossible to remove.
I note you're now moving the goalposts from Linux to Unix in general. Unix always had the idea of high-level "block" or "character" mode I/O devices rather than a whole different API for each device. IIUC, Linux made drivers loadable without a full-scale kernel re-compile, something (I think) even the original Unix developers noted under "What we would have done differently." Linux might have gotten DLLs from Windows. Or from OS/400, an OS (and apps) originally designed to run as a large set of DLLs to reduce disk and memory footprints and to allow running from disk directly: hard-core eXecute-In-Place, for the palmtop developers reading this. I don't know enough other architectures in depth to say whether there could have been other sources. The Burroughs B5000, for example?
"If you start to look at multi core hardware, NT is still leading in terms of its Amdahl-law characteristics (Linux levels of at 4 processors, NT at 8 and Microsoft's new red dog is rumoured to reach 128 processors before reaching asymptotic performance). "
NT (do you not mean an NT-derived OS like Vista?) would still be behind VMS (same architect), which IIRC topped out at a 64-processor cluster. And Amdahl's law warns that the speed-up hits the buffers when any part of the program cannot be parallelised: you'll need an app that is less than 0.78% serial (roughly 1/128) to actually exploit that.
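To put numbers on that: a quick sketch of the Amdahl's-law arithmetic (the 0.78% figure is just 1/128; `amdahl_speedup` is my own illustrative function, not anything from the sources discussed):

```python
# Amdahl's law: overall speedup on n processors when a fraction s
# of the program's work is strictly serial (cannot be parallelised).
def amdahl_speedup(s: float, n: int) -> float:
    return 1.0 / (s + (1.0 - s) / n)

serial = 1 / 128  # a ~0.78% serial fraction

# Even at this tiny serial fraction, 128 processors deliver only
# about half their nominal speedup, and no processor count can
# ever beat the 1/s ceiling of 128x.
for n in (4, 8, 64, 128):
    print(f"{n:3d} processors -> {amdahl_speedup(serial, n):6.1f}x")
```

With any larger serial fraction the curve flattens even sooner, which is why "reaches asymptotic performance at 128 processors" is a claim about a very narrow class of workloads.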
"They don't understand what goes on under the hood, and under the hood the NT kernel is still superior technology to the Mac or Linux operating systems"
I'm not sure how much you understand either. "Superiority" is debatable. Expensive, insecure, proprietary, bloated and designed to force customer lock-in? Definitely.
To anyone who grew up using only Microsoft kit it can seem very impressive. To those of us who know the wider IT world it's rather less so.
"It's a tired cliche..." "...repeated by people who don't really know anything about technology"
I know a little of the history of this technology. I have described it on the off chance you are merely ignorant of it.