Windows 11 Patch Tuesday preview is a glitchy disaster
When DEC was running TOPS-10 they required regression testing for any patch.
Microsoft just throws it at the wall and sees if it sticks.
The only UNIX system with a good applications universe I have ever owned was the AT&T UNIX-PC from 1985.
I actually had a licensed multiuser copy of real Microsoft Word on that system. On UNIX!
I think the MS Word license cost $450 in 1985.
AT&T UNIX System V release 2 with the 4.2 BSD virtual memory. And dynamically loadable device drivers.
This was a Motorola based workstation, not a PC clone.
You could even buy ORACLE for it. As well as INFORMIX. I think the ORACLE license was about a thousand dollars.
I ported Berkeley SPICE 3C1 to that system.
I later ran Solaris on a dual-Xeon Intel motherboard for a few years, and it was the same story as Linux: poor-quality applications for home PC use.
I ran Microsoft XP with the SFU Unix package on my IBM/Lenovo T60 laptop and that was good because it wasn't an emulator. Microsoft actually merged AT&T SVr4 UNIX into the guts of XP.
My favorite Linux is SUSE Enterprise Linux.
I have run ORACLE 10g Enterprise RAC on that.
It has a lot of the look and feel of AT&T UNIX.
But again the applications are disappointing in quality for home PC use.
So you're kind of forced to put up with Microsoft if you want a good home PC experience.
I have never liked Apple PCs except for the Mac IIfx. They are typically slow and the user interface is so frustrating that you feel like you want to just rip the top off so you can get to the guts of it.
The Mac IIfx was insanely fast but cost around $7,500 in 1992.
I'm sort of resigned to the fact that you have to put up with Microsoft Windows if you want a decent home PC experience.
I miss XP with SFU which I had on my ThinkPad T60. That was not an emulator but a true merger of XP and SVr4 UNIX on the same filesystem.
You could use microEmacs to edit Windows scripts, for instance.
I had a similar experience in the mid-90s developing software for an IBM mainframe running MVS, DB2, MVS/TSO, CICS and VM/CMS.
Documentation was on MS Office.
Official email was on VM/CMS.
We had a big AIX pSeries server.
I installed eXceed on my desktop and could access everything from the pSeries server in separate windows on my X11/Windows desktop, and cut and paste seamlessly between any of them. It had active icons that showed a miniature of the session screen, so you could see when a compile job finished or monitor performance without opening the window.
I primarily developed software in Emacs on the pSeries for COBOL, CICS and DB2 on the mainframe. The mainframe COBOL had special hardware acceleration tied into DB2 within MVS. I learned that from K.R. Hammond, one of the principal developers of DB2 on MVS.
My coworkers hated me.
They used dumb terminal emulators that took over the whole desktop, so they had to log out of CICS or TSO just to check their email.
I could do everything all at once from the Windows desktop and stay logged in to everything.
"Wow, we just bought this VAX 780 and now we can free ourselves from leasing time on that IBM mainframe and control our own culture."
Shift forward a few decades and we're floating toward the Cloud, inhaling its mephitic vapors from the hookah.
Does any of this sound familiar?
To be serious for a moment, I think the post about the rise in the noise level is relevant. It took Max Perutz 30 years to elucidate the structure of h(a)emoglobin.
Most scientists would be fired by MBAs if they took more than 5 years even to discover the Grand Unified Theory of Everything.
"Although VMS is therefore relevant to the lingering on of Itanium, another significant factor is perhaps the fact that no port of HP-UX to x86 or x86-64 has ever been released."
I suspect the reason for no x86 port of HP-UX goes back to AT&T SVr4. Sun's Solaris 2 was developed from scratch to be dual-endian.
HP decided to incorporate SVr4 into HP-UX without a full rework, so it stayed big-endian only.
If you look at the source code there is so much low-level bit twiddling that it would be impractical to try to "port" HP-UX to a little-endian CPU with anything short of a complete ground-up rewrite (a sketch of the kind of code involved follows below).
There were a couple of perfunctory stabs at it that were cancelled early on.
Since most customers are primarily interested in application environments above the OS, porting HP-UX to x86-64 was less attractive than migrating them to Linux.
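To make the bit-twiddling point concrete, here is a minimal, hypothetical C sketch (not taken from the HP-UX source) of the kind of byte-order assumption that makes a big-endian-only codebase painful to move to x86:

#include <stdint.h>

/* Hypothetical example: read a 32-bit big-endian length field straight
 * out of a byte buffer by casting. This "works" on big-endian PA-RISC
 * but returns a byte-swapped value on little-endian x86 (and may also
 * fault on unaligned addresses). */
static uint32_t read_len_unportable(const unsigned char *buf)
{
    return *(const uint32_t *)buf;
}

/* Portable version: assemble the value byte by byte, independent of the
 * host CPU's byte order. */
static uint32_t read_len_portable(const unsigned char *buf)
{
    return ((uint32_t)buf[0] << 24) |
           ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |
            (uint32_t)buf[3];
}

Multiply that by thousands of instances, often buried in macros and on-disk structures, and a mechanical port stops being practical.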
This is typical of a corporate environment that places stock buybacks above creativity and innovation. IBM has been engaged for over a decade in deliberate age discrimination mainly to cut the payroll.
And the company's customers have suffered from it.
At this point as a customer I would never do business with IBM, largely because they have nothing to offer.
As I recall, the impetus for EPIC was a soon-discredited research paper implying that there was a huge amount of unexploited parallelism in existing source code that was missed by compiler peephole optimization and RISC runtime reordering.
I tested this with a DEC Alpha in 1993 running OSF/1. The compilers had the option to do deep/wide optimization across the entire set of source files for an application.
I compiled some large biomedical imaging and genomic applications and found the difference in runtime performance was about 3% or less - not worth the effort.
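For anyone curious what that whole-application optimization looks like in a present-day toolchain, here is a rough analogue using GCC link-time optimization; the OSF/1 compiler options themselves are not reproduced here, and the file names and helper function are invented for illustration.

/* scale.c (hypothetical) */
double scale(double x) { return x * 2.0; }

/* main.c (hypothetical) */
#include <stdio.h>
double scale(double x);
int main(void)
{
    double total = 0.0;
    for (int i = 0; i < 1000; i++)
        total += scale((double)i);
    printf("%f\n", total);
    return 0;
}

/* Build with link-time optimization so the compiler can analyse and
 * inline across translation units:
 *
 *   gcc -O3 -flto -c scale.c
 *   gcc -O3 -flto -c main.c
 *   gcc -O3 -flto scale.o main.o -o app
 */

The trade-off is the same one described above: the cross-module analysis only works when the whole application is rebuilt at once, and in the tests described here the runtime gain was marginal.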
Given the lack of runtime optimization and the nondeterministic nature of cache misses and memory access, the only way to get Itanium to work was to put the entire working set in cache.
It would have been interesting to see what the highly regarded PA-RISC design team could have done had they not been displaced.
Of course the same goes for Alpha.
IBM is so far behind Amazon and Microsoft they might as well just punt and put what little is left behind Watson. As if there's much more promise there.
They took one of the best computer companies in the world and turned it into a disaster while destroying the careers of their employees.
We've heard of too big to fail, and IBM used to fall in that category.
Now IBM seems to be exploring "too dumb to fail" as a business model.
I had an early DEC 3000 Alpha 400/400S AXP workstation. It had a C compiler that did extremely deep/wide source parallelism analysis across an entire application, which is what EPIC later promised. I experimented and found that the degree of "hidden" parallelism discovered in no way compensated for the inconvenience and time required to recompile the entire application from source just to edit one module. Yes, I conducted detailed, controlled studies at (then) DCRT at the NIH, around late 1992 or early 1993. Oh yes, you could do EPIC on an Alpha in 1992! Just pick the right compiler options and compile the whole source at once.
This whole disaster could have easily been avoided had someone at HP or Intel simply repeated what I did a decade earlier with that Alpha.
Yet IBM decided I didn't deserve to have a job selling HP Itanics to Mickey Mouse (he evidently likes HP-UX better than IBM POWER with the COBOL-like AIX OS), so I was RA'd.
Go figure...
Hewlett and Packard wanted to make the best quality engineering kit possible. It was costly but generally worth it if you could afford it.
When I visited a large HP facility around 1999 or so I asked about their quality assurance testing. Once the laughter subsided I was told: "If it powers on, it ships!"
I think I also vaguely recall a time when employers saw critical minded, skilled employees as a valuable resource. By the 1990s they were a disposable asset. Now they're just a cost to be cut.
A significant issue is the prevailing social problem of complete lack of ethics that has overtaken nearly everything. When the people in control of doing things have no interest in quality or potential risks, it really doesn't matter. Private sector or public, big or small, if there's no intent to do good to begin with, neither regulations nor market selection pressure will matter.
From reading the descriptions of most Windows-related vulnerabilities, the developers would only have needed to type-, size-, bounds- and sanity-check inbound data. All incoming data, every time. This is hardly news, and is certainly less difficult than the time some suits at a former unnamed employer decided it would be a nifty idea to mix big- and little-endian app servers in an n-tier SAP environment. "Well, the marketing rep SAID it would work..."
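As a minimal sketch of what that kind of checking looks like, here is a hypothetical C routine (the message layout, names and limits are all invented) that size-, bounds- and sanity-checks an inbound length-prefixed message before copying it anywhere:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 512   /* arbitrary limit for this sketch */

/* Parse a message consisting of a 2-byte big-endian length followed by a
 * payload. Returns 0 on success, -1 if anything about the input is off. */
int parse_message(const uint8_t *buf, size_t buf_len,
                  uint8_t payload[MAX_PAYLOAD], size_t *payload_len)
{
    /* Size check: is there even room for the header? */
    if (buf == NULL || payload == NULL || payload_len == NULL || buf_len < 2)
        return -1;

    /* Read the declared length without trusting it yet. */
    size_t declared = ((size_t)buf[0] << 8) | buf[1];

    /* Bounds and sanity checks: the declared length must fit both the
     * buffer we actually received and the fixed-size destination. */
    if (declared > buf_len - 2 || declared > MAX_PAYLOAD)
        return -1;

    memcpy(payload, buf + 2, declared);
    *payload_len = declared;
    return 0;
}

Nothing exotic: every field is checked against what actually arrived before it is used.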
This was the old selling point when Cisco produced that white paper nearly two decades ago about "at will" employees being able to pick and choose the best compensation. Unfortunately, when everyone is going down the H-1B route there aren't many "opportunities" to choose from.