Nice museum piece
...but I'll bet half those floppies don't work anymore
A version of OS/2 2.0 from Microsoft, not IBM, just surfaced on eBay. This pre-release version came out after Windows 3.0. How much might you be willing to pay to own a piece of operating system history? Would you pay two-thirds of a grand? No, us neither, but fortunately for posterity, Reg reader Brian Ledbetter is. He wrote …
There is a reason for this (over and above Sod's Law). Duplication was often done using 'B grade' floppy disks. When I asked what this meant - hey, I was young and naive then, it could have been 'B' for 'Better quality' for all I knew - it was explained that the disks were, ahem, not normal quality but were somewhat cheaper. The rationale was that they were only expected to be used a few times, i.e. to install, maybe reinstall, and add a driver or two at some point, whereas regular floppy disks were designed to be used hundreds or thousands of times and to last a number of years. After all, why use the finest polished rust on something which was basically regarded as disposable?
A few years back I sold my old Apple ][ on eBay. I dusted it off and cleaned the floppy drive heads. It booted just fine from the original Apple boot floppies, which were dated 1978. The buyer was thrilled to have original boot disks that worked. They paid stupid money for the old thing. I bought a nice new mid-range CNC machine for what they paid.
Did it come with a control computer running something proprietary like Microsoft Windows? If so, do you have a support contract to ensure updates for the expected operational life of the machine?
Several times now, we have seen commenters pop up with mentions of some ancient factory machine that still has to be controlled via Windows 98 on some old PC that cannot be replaced, or something stupid like that. You’re not going to fall into that trap, are you?
The control PC runs Debian. Everything related to control of the machine is open source. I have a rule at my house: No new Windows PCs.
The CAM software I use runs on Linux or Windows. The same with the CAD software.
Soon, the only Windows PC at home will be my work-supplied laptop.
It wasn't the manufacturer's decision. The machine came with a questionable copy of bloated Windows-based control software.
I never even set up the Windows-based crap. I made some hardware modifications/upgrades, and set up LinuxCNC to run the machine. I planned to do this before I even bought the machine.
The software and electronic hardware side I can build and maintain myself. The precision machining, welding and other metalworking, not so much. At least not before buying this machine!
I know you're putting up the 'joke' icon, and you're probably right - in terms of reading them as-is in a random 5.25" drive, some disks probably don't work.
However, there have been recent articles about what can be achieved with a Greaseweazle, an oscilloscope, and a waveform editor. With careful selection of a good floppy drive, and manual correction of sectors where the data are unreadable (by studying the peaks and troughs in the waveform and correcting them), it can be possible to recover the data completely.
I remember in 1994 I paid $400 for Microsoft Office on umpteen floppy disks. Within a year or so some of those floppies had sectors that couldn't be read, so you'd get to disc 12 or so and the installer would barf. Microsoft Office 95 was available on CD-ROM by then... I borrowed me one.
I replaced CP/M with the early Microsoft releases and upgraded all the new office computers every time a new version was released, putting the replaced original disk boxes in the store room. When the business closed I had to clean it up. Maybe I should post my collection of disks (including Windows NT etc.) on eBay. I guess the shipping cost might be at least $80 for the complete box of all the disks and original Microsoft disk box covers.
The 8086 could address 1MB of memory in real mode (20-bit addressing), while the 80286 could address 16MB (24-bit addressing) in protected mode, and the 80386 4GB (32-bit addressing).
That 20-bit address was formed from a 16-bit offset (kept at 16 bits for 8080 compatibility) plus a 16-bit segment register shifted left by four bits.
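To make that arithmetic concrete, here's a minimal C sketch of it (the function name is made up for illustration):

#include <stdio.h>
#include <stdint.h>

/* Real-mode effective address on the 8086: physical = (segment << 4) + offset,
   giving 20 significant bits. */
static uint32_t real_mode_addr(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* The highest address a real-mode program can form: */
    printf("FFFF:FFFF -> %05Xh\n", real_mode_addr(0xFFFF, 0xFFFF));
    /* Prints 10FFEF: nearly 64KB past 1MB. The 8086 wraps that back to
       0FFEF; a 286 with the A20 line enabled does not, which is where the
       later "high memory area" trick came from. */
    return 0;
}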
Through experimentation MS engineers found that the 80286 in real mode could address 20 segments, giving a usable address space of 4MB - a surprise to MS/IBM and Intel! Because the PC-AT could take a maximum of 4MB anyway, the whole 286 protected mode (the original reason for OS/2) became completely redundant.
Windows 3 didn't start as a breakup with IBM, but as a way to push win-Word and Excel (which had been ported from the Apple Macintosh) and get a head-start over IBM DisplayWrite, WordPerfect and Lotus.
Windows/386 (the variant of Windows 2) was a faster GUI platform because it used top-left origin for CRT (the standard for TV) rather than bottom-left of IBM terminals. Translation of screen addresses meant OS/2 was always going to be a little bit slower than Windows 3.
The break-up came because IBM didn't want OS/2 to be a multi-processor OS to protect its mainframe platform. Windows NT was originally going to be OS/2 3.
Steve Channell: “Because the PC-AT could take a maximum of 4MB anyway, the whole 286 protected mode (the original reason for OS/2) became completely redundant.”
That would be news to the rest of us. My understanding is that IBM paid Microsoft to write OS/2. MS hastened slowly on OS/2 while working on Win 3 and pretending to promote OS/2 and crashing it at demos.
See IBM's doomed operating system
--
1990: What Makes The OS/2 Platform Great For Server Applications
• Efficient multitasking
- Pre-emptive scheduler
- Multiple threads/processes
• Memory protection
• Virtual memory
[Author here]
> That would be news to the rest of us.
Well, yes...
> My understanding is that IBM paid Microsoft to write OS/2.
No... it was a co-development project. IBM paid for a lot of it. MS was relatively small then.
AIUI IBM insisted on it being 286-specific when MS wanted (correctly, IMHO) to go 386-only.
> MS hastened slowly on OS/2 while working on Win 3
Naah. Win3 came later, out of a skunkworks project. MS put tonnes of R&D into OS/2 1.0, 1.1, 1.2 and to some extent 1.3, which shipped _after_ Win 3.0.
You mistake cause and effect, I think.
Win3 happened _because_ OS/2 1.x flopped. MS came up with an escape plan, and at first, the industry thought it was mad, because Windows was universally held to be junk.
> and pretending to promote OS/2 and crashing it at demos.
Nope. MS really believed in it, in the early days.
The real blame for the flop falls at IBM's feet, for wanting to keep its promise to 80286-based PS/2 customers -- who didn't care.
> AIUI IBM insisted on it being 286-specific when MS wanted (correctly, IMHO) to go 386-only.
The 286 manual set Intel gave out at the UK launch, whilst very good (especially the OS writers' guide), did contain many placeholders for the 386, implying it was more of a marketing release of work in progress than a completed chip.
So I suggest someone of influence at Microsoft read the manuals…
The laugh now is that we have forgotten just how big a step up in performance the 286 (10MHz) was over the 8086/8088 (2MHz).
just how big a step up in performance the 286 (10MHz) was over the 8086/8088 (2MHz)
Actually, 6MHz and 4.77MHz (original). Or 8MHz and 8MHz (volume).
The big step in performance was the 8088 to the 80186, where a big stack of the original microcode was implemented in silicon, so operations that took 20-25 cycles now took 2-3 cycles.
Clock speed on the 80186 was limited because the silicon also included peripherals (DMA), so it was power and clock limited to around 10-12MHz.
The 80286 was a hotted-up (bus speed) 80186 that was cheaper to produce. They dropped the peripherals (DMA etc.) off the 80186, and got the bus speed up to 20-25MHz. So late models were twice as fast as an 80186, which was five times as fast as an 8086.
I've always thought that the real reason the 80186 isn't listed as a processor "generation" is that IBM never made an 80186 PC, but whatever: in spite of the extra processor commands in the 286, Intel lists the 186 and the 286 as the same processor generation.
[Author here]
Great comment.
> I've always thought that the real reason the 80186 isn't listed as a processor "generation" is that IBM never made an 80186 PC,
A fair point, but some did. The RM Nimbus used one, and so did the BBC Master 512.
One of the big "advantages" of the 8086/8088 that Intel fans kept pushing at the time (early/mid 1980s) was that the instructions were not microcoded, so it was "faster". Unlike that "dog" the MC68000, which had microcoded instructions. There again, we used to keep a list on the office wall of 8086/8088 instruction sequences not to be used because they crashed the processor. Never needed one for the MC68000. It just worked. Apart from the first MMU.
Don't remember any huge speed-up in clock cycle counts either with the 186/286, in the general-use instructions. Apart from the usual adding of barrel shifters for bit-shift operations and getting the clock count down on muls/divs by throwing more transistors at the problem. A lot more. Only solved eventually with on-chip FPU instruction channels. Eventually one FP result every 1 or 2 clock cycles, if you got the instruction mix just right. There again, trying to work out real-world cycle counts for instructions was always very hit and miss with Intel docs. Unlike the MC68000, which was very easy to memorize.
I know the Pentium was fully microcoded, but I'm not sure how much was done with the 386 and 486. Given the various instruction bugs and how long it took Intel to fix them, I suspect not all was microcoded. If any. There again, I did not pay that much attention to this area at the time, as it was the least of your problems trying to get code to work in Intel land.
There's a few 80186 PCs, but they're rare. One reason is that the 80186 has some incompatibilities with the 8086/88 that the 286 doesn't suffer from.
The 80286 is, in general, faster than the 80386 at the same clock speed - but it does depend on what you're running and which processor a program was written for.
The 286 was really, really fast at text-based applications with some graphics at the time, but if you had the software to take advantage of a 386, it was clearly better.
"... of a marketing release of work in progress than a completed chip"
Nah! I do believe this was a real – albeit flawed – attempt to create the next generation of CPUs. Someone at Intel apparently truly believed that the majority of DOS software would either just run in 80286 protected mode unaltered or could easily be fixed. Digital Research even wrote an operating system for the 286 that was supposed to replace MS-DOS. But it looks like the Intel engineers never took a good look at real-world 8086 software to see what it was really doing and missed the part where everybody was accessing segments wherever they pleased. There was no way to create a protected mode operating system that could deal with that real mode mess. It looks like Intel also believed that programmers really liked dealing with this segmentation scheme and that flat memory models would never catch on. They even extended segmentation to 32-bit on the 386.
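To illustrate the mess: real-mode code did arithmetic directly on segment values, which has no meaning once the same register holds a protected-mode selector. A minimal C sketch, with a deliberately simplified descriptor struct (not the real GDT entry layout):

#include <stdio.h>
#include <stdint.h>

/* Real mode: the segment register is just a number, so programs freely
   added to it, e.g. +0x1000 to advance 64KB through a big buffer. */
static uint32_t next_64k_real(uint16_t seg, uint16_t off)
{
    seg += 0x1000;                        /* legal and very common */
    return ((uint32_t)seg << 4) + off;
}

/* Protected mode: the same register holds a selector -- an index into a
   descriptor table plus ring bits -- so that same +0x1000 now picks an
   unrelated (or invalid) descriptor and faults. */
struct descriptor { uint32_t base; uint16_t limit; };   /* simplified */

static uint32_t linear_protected(const struct descriptor *table,
                                 uint16_t selector, uint16_t off)
{
    const struct descriptor *d = &table[selector >> 3]; /* low bits: ring/table */
    return d->base + off;   /* base comes from the table, not the selector */
}

int main(void)
{
    struct descriptor gdt[2] = { { 0x00000000, 0xFFFF },
                                 { 0x00100000, 0xFFFF } };
    printf("real mode:      %06Xh\n", next_64k_real(0x1000, 0x0000));
    printf("protected mode: %06Xh\n", linear_protected(gdt, 0x08, 0x0000));
    return 0;
}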
[Author here]
> Digital Research even wrote an operating system for the 286 that was supposed to replace MS-DOS.
Yes they did. Unfortunately they built it using pre-release 286 hardware, and the shipping silicon removed the feature that CDOS 286 depended on.
https://en.wikipedia.org/wiki/Multiuser_DOS#Concurrent_DOS_286_and_FlexOS_286
Reporting at the time:
https://books.google.cz/books?id=2y4EAAAAMBAJ&pg=PA17&redir_esc=y#v=onepage&q&f=false
It was state of the art stuff, comparable to VAX/VMS:
https://www.tech-insider.org/personal-computers/research/1985/05.html
> But it looks like the Intel engineers never took a good look at real-world 8086 software to see what it was really doing and missed the part where everybody was accessing segments wherever they pleased. There was no way to create a protected mode operating system that could deal with that real mode mess.
There was, and DR did it. By the time Intel restored the missing features, it was too late.
I have written about this on the Reg:
https://www.theregister.com/2022/08/04/the_many_derivatives_of_cpm/
> It looks like Intel also believed that programmers really liked dealing with this segmentation scheme and that flat memory models would never catch on.
No, the thing that is generally missed is that segmented architectures were a known thing then and more sophisticated OSes such as Multics could apply different CPU protection rings to different segments.
https://en.wikipedia.org/wiki/Protection_ring
Result: hardware enforced execution protection.
The code in segment #13 can only read memory elsewhere. Segment #42 can read and write but not execute. Segment #32 can execute, but only in user mode, no kernel code, no I/O.
And so on. Fantastic idea, worth the pain of segment arithmetic.
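A toy model of the idea in C -- purely illustrative, not the real x86 descriptor bit layout:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { PERM_R = 1, PERM_W = 2, PERM_X = 4 };

struct segment {
    uint8_t perms;  /* any combination of PERM_R / PERM_W / PERM_X */
    uint8_t ring;   /* least-privileged ring allowed to touch it, 0..3 */
};

/* The hardware checks every access: the segment must grant the permission
   and the caller must be running at a privileged-enough ring. */
static bool access_ok(const struct segment *s, uint8_t want, uint8_t cur_ring)
{
    return (s->perms & want) == want && cur_ring <= s->ring;
}

int main(void)
{
    struct segment kernel_code = { PERM_R | PERM_X, 0 };  /* ring 0 only */
    struct segment user_data   = { PERM_R | PERM_W, 3 };  /* any ring    */
    printf("%d\n", access_ok(&kernel_code, PERM_X, 3));   /* 0: user code can't run it */
    printf("%d\n", access_ok(&user_data,   PERM_W, 3));   /* 1: allowed */
    return 0;
}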
But braindead PC OSes used only ring 0 until the 1990s, and then only two of the four rings.
More sophisticated OSes leaned heavily on this, but the PC industry squandered it because, as I've written before, it regressed to super-simple single-tasking OSes, such as CP/M and MS-DOS, then reinvented half the wheels that had been removed, and did it badly. We threw away the complex but flexible system for something simple and brain-dead -- flat 32-bit code, no protection rings -- then reinvented hypervisor mode (ring -1) -- and in the end all we got was half-assed junk like the NX bit.
https://www.theregister.com/2023/12/25/the_war_of_the_workstations/
Don't listen to the loud voices of empty heads. When something was built that was complicated and later got replaced by something simple, the reason is usually that someone somewhere didn't understand the complicated thing and wanted something easier, and didn't realise it was a lot less capable. Complexity is usually there for a reason. This is the principle called Chesterton's Fence.
The later summary everyone repeats is "that was dumb so we junked it" and it usually means they didn't understand what they were throwing away.
Compare with how modern UIs are crappy broken things:
https://www.theregister.com/2024/01/24/rise_and_fall_of_cua/
P.S. Hat tip to Reg commenter MarkMLl for explaining some of this stuff to me years ago. I occasionally quote his wisdom in the Reg. His comments are worth reading.
@liam
I'm not disagreeing with what you wrote, but will give a possible counter perspective.
The complexity of providing segmentation has a cost. Material cost of the CPU (I'm supposing - chip design is not my forte) and development time, both for the CPU and for those programming for it.
If you were sitting down to write a single-user desktop OS in the 80s, with limited processing power and memory, I think it unlikely you'd be thinking "the CPU has *got to* provide segmentation".
When something was built that was complicated and later got replaced by something simple, the reason is usually that someone somewhere didn't understand the complicated thing and wanted something easier, and didn't realise it was a lot less capable.
This is true 99% of the time.
As an aside, I don't think Intel really innovated after the 4004. And MS never did.
> The complexity of providing segmentation has a cost.
Oh, definitely, yes, you're right.
But the thing is...
> If you were sitting down to write a single-user desktop OS in the 80s
The people who wrote the OSes were not the people designing the chips. The people designing the OSes did some absolutely minimum-viable-product type efforts, and they threw the baby (hardware memory protection) out with the bathwater (keeping the design as simple as poss.)
I am not pointing fingers here: DR did it, MS did it, Commodore and Atari did it...
What is interesting to me is that DR put some of this back in early, but got rough treatment from MS and from Intel, and though it survived for a while and sold lots of DR-DOS, it didn't thrive.
Meanwhile MS and IBM came up with increasingly complicated hacks to get this working on newer chips and take advantage of them in vaguely compatible ways... while Atari, Commodore and Acorn failed to, and paid the ultimate price. Apple failed too but managed to buy in a solution, and it did it late enough that its bacon was saved by VMs.
Which, now I come to think of it, is how I'm proposing rescuing Plan 9 from oblivion... Huh.
> "Nah! I do believe this was a real – albeit flawed – attempt to create the next generation of CPUs."
That was the i432, but you are right in a way: Intel needed to build on the success of the x86 and deliver something more capable of supporting a minicomputer operating system, such as Unix.
ISTR the 386 was actually designed before the 286, but once designed it was simply too complex for the fab facilities of the time. That would certainly fit in with the rest of the industry, which was at the time beginning the transition to 32-bit, mostly skipping 16-bit, as Intel had a jump on everyone there.
The 286 was essentially a stopgap until fab processes matured, and effectively ensured the mass market stayed on 16-bit all the way until Windows 95.
I can understand the 286 being a fab-process stopgap, but that's a different matter from it enforcing the mass market staying on 16-bit.
As I've mentioned, on reflection I believe the market was driven by 1) memory - all the 'proper' protected-mode OSes tended to eat memory for breakfast, and it was expensive - and 2) applications. People thought I was a bit mad getting a 486 with 8MB of memory to run OS/2 2.1 in 1993, and really that was the reasonable minimum, not the comfortable amount.
Even if OS/2 and NT are discounted (not entirely unfair, as neither achieved the mass market), the mass market was *heavily* using 32-bit well before 95. At the low end there were DOS programs escaping some of DOS's limitations with DOS extenders, predominantly DOS/4GW. Outside DOS, as soon as Windows 3.1 was released in 1992 it was very clear that its 286-supporting protected mode was a second-class citizen, and real mode had been dropped entirely. In the run-up to Windows 95, various pieces of 32-bit code had been added for disk and network drivers, and Win32s operated as a stopgap enabling a subset of the Win32 API to run on Windows 3.11.
OS/2 failed because the PS/2 - Let's Close The IBM PC Architecture - move by IBM failed. The PS/2 dog wagged the OS/2 tail.
Even before Win 3.0 shipped in 1990, OS/2 was dead in the water. Major ISVs had ported their products to OS/2 but sales (if they shipped at all) were crickets. The market by that stage was clones, with IBM shuffling off into the PS/2 ghetto.
The Win 3.0 codebase was yet another attempt to fix the Win 1.x/2.x debacle. Same software gene pool. The NT/Win32 codebase is what came from the "Portable OS/2" project that IBM had mostly blown off. All the gory details are in the book Show Stopper. And yes, David Cutler really was that nasty.
Haven't looked recently, but in Win2k/XP/Win7 most of the kernel "Portable OS/2" code was still in there. Which looked remarkably like the code from "The Mill" in Maynard, MA. I wonder how that happened. There again, most of the Win16 code was still in there too.
Ah, the good old days.
> > and pretending to promote OS/2 and crashing it at demos.
> Nope. MS really believed in it, in the early days.
Maybe at some level MS "really believed in it", but not everyone, always.
The crashing anecdote is well known:
"6. Play dirty. When IBM and Microsoft were still partners of some sort, Microsoft sabotaged everything IBM was doing. During the OS/2 era, Ballmer took a floppy disk into the IBM booth at a COMDEX and installed malware to prove that OS/2 was not crash-proof. It was a hilarious dirty tricks stunt that did nothing for his reputation."
https://www.pcmag.com/news/why-does-everyone-hate-ballmer
[Author here]
> Through experimentation MS engineers found that the 80286 in real mode could address 20 segments, giving a usable address space of 4MB
[[citation needed]]
> Because the PC-AT could take a maximum of 4MB anyway
OS/2 1.x was mainly aimed at the PS/2, not the PC-AT.
> the whole 286 protected mode (the original reason for OS/2) became completely redundant.
I would not say that, even if you can back up your claim. It wasn't all about RAM. OS/2 had multitasking, a better filesystem, better IPC, better networking, and much more besides.
(I evaluated OS/2 1.0 in my first paid job in 1989 or so.)
> Windows 3 didn't start as a breakup with IBM
It was not the start of it: it was the _reason_ for it.
> but as a way to push win-Word and Excel
Not true. Both ran on Windows 2.0. I supported Excel on Win2 in production.
> (which had been ported from the Apple Macintosh)
True but I am not sure it's relevant.
> and get a head-start over IBM DisplayWrite, WordPerfect and Lotus.
WordPerfect and Lotus both believed the MS/IBM marketing and ported to OS/2 1.x.
DisplayWrite was already dead in the water, along with MultiMate.
> Windows/386 (the variant of Windows 2) was a faster GUI platform because it used top-left origin for CRT (the standard for TV) rather than bottom-left of IBM terminals.
[[Citation needed]]
> translation of screen addresses meant OS/2 was always going to be a little bit slower than Windows 3.
I don't buy it. I think the overhead would be minute and trivial.
> The break-up came because IBM didn't want OS/2 to be a multi-processor OS to protect its mainframe platform.
[[Citation needed]]
> Windows NT was originally going to be OS/2 3.
Yes it was... but IBM gave it to MS for free in the "divorce".
I thought I was doing well remembering back 35 years. Citation is difficult because none of the publications at the time were on the Internet.
I developed on OS/2 (1.1-1.3) - it was rubbish for services unless you disabled real-mode applications (a processor reset was used to switch from protected mode back to real mode, which lost interrupts).
I have no evidence that IBM wanted OS/2 to remain single-CPU, but the 80486 was the first x86 processor to have CMPXCHG, needed for multi-CPU critical sections.
The reason for OS/2 was multitasking and memory protection. Even 16MB became a noticeable limit for certain OS/2 applications reasonably early on. OS/2 should have been released for the 386 to start, but early 386 steppings were extremely buggy, it wasn't a cheap processor, and IBM had made commitments to bring OS/2 to the 286.
There's a lot of reasons for OS/2's failure and the breakup, but the biggest was the success of Windows 3.0, due to its applications and, crucially, its reduced memory usage. OS/2 required more memory than Windows up until the mid-nineties, when it no longer mattered. NT had similarly high requirements, but the majority of users chose 3.x and 9x instead due to lower hardware requirements, greater driver support, and, prior to 2000, certain features such as USB and DirectX.
"because IBM didn't want OS/2 to be a multi-processor"
Do you mean multiple CPUs in a single system, or did you mean more than one processor platform?
I ask this because OS/2 3.x was ported to PowerPC by IBM (although there were some problems), and from what I remember from when I worked in the AIX Systems Support Centre in the early '90s, there was actually going to be some synergy between OS/2 and AIX on PowerPC hardware, with common hardware and LPAR support (running OS/2 and AIX on the same physical system concurrently) in the roadmap. I even heard rumours of a common kernel, possibly even written as a micro-kernel, with AIX and OS/2 personality layers on top.
I no longer have any copies of the documents I saw, so it is all from memory, but I did see OS/2 running on a pre-production 7020 40P (actually, although the hardware was mostly the same, it was probably a PowerSeries 440, but apart from the covers, it looked like the 40P that I had in my herd of systems).
An individual segment was up to 64K and used 16-bit addressing, but the segment register was also 16-bit. To compute an effective address, the segment was left-shifted four bits and then added to the offset. Thus segments could begin on any 16-byte boundary, meaning there were multiple segment:offset combinations that could access the same physical address.
You'd find this plenty of times in the assembler guides of the era; different sources would use seemingly different addresses to refer to the same hardware register.
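A quick C demonstration of that aliasing, using the colour text buffer at B8000h as the hardware address (my example, not from any particular guide):

#include <stdio.h>
#include <stdint.h>

static uint32_t phys(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

int main(void)
{
    /* Three different segment:offset spellings of the same byte: */
    printf("B800:0000 -> %05Xh\n", phys(0xB800, 0x0000));
    printf("B000:8000 -> %05Xh\n", phys(0xB000, 0x8000));
    printf("B400:4000 -> %05Xh\n", phys(0xB400, 0x4000));
    /* All three print B8000. */
    return 0;
}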
As an aside, although the 386 was limited to 4GB of physical memory, it did have a 64TB logical address space using a similar segmentation system. However, the undeniable simplicity of a flat 4GB address space was such that people seem to attach some kind of mysticism to segmented models now.
> ... IBM didn't want OS/2 to be a multi-processor OS to protect its mainframe platform.
Not sure about that - is it documented?
There's always individual manager paranoia, but I don't think that was the thinking at the C-suite or even in the Mainframe shop.
Mainframe performance was and is about the peripheral sub-systems - buses and intelligent devices. They would not have looked at CPU performance as the measuring stick of a threat to the MF business. And the first time I became aware of Intel themselves invoking MF performance as a comparison was the i860.
To foresee the x86 as a threat would have been to foresee myriad developments: x86-64, PCIe, multiple cores (not discrete CPUs), ongoing fabrication improvements (basically everything behind the Moore's Law prediction) and the "Personal Computer" prevalence. As well as the "failure" of alternatives that could have prevailed: specialized-CPU supercomputers, RISC, IA-64, co-processor farms, ...
I could quite easily believe that IBM management did not see x86 as more than a commodity terminal point, and made short-term choices.
Protected mode on the 80286... that brings back some bad memories. Instead of using descriptor tables to extend the global address space, I rather wish that Intel had just included a new flag in the FLAGS register that toggled between 16-byte and 256-byte segment granularity (i.e., 0001:0000 goes from 10h to 100h), with the memory-mapping muck limited to a V86-style mode for legacy code that couldn't handle the larger offsets.
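A back-of-the-envelope check of that wish (a hypothetical design, nothing Intel ever shipped): a 256-byte paragraph would have stretched the same 16-bit registers across roughly the 286's whole 24-bit address space.

#include <stdio.h>
#include <stdint.h>

/* shift = 4 is the real 8086 scheme (16-byte paragraphs);
   shift = 8 is the hypothetical 256-byte paragraph. */
static uint32_t addr(uint16_t seg, uint16_t off, unsigned shift)
{
    return ((uint32_t)seg << shift) + off;
}

int main(void)
{
    printf("16-byte paragraphs:  FFFF:FFFF -> %07Xh\n", addr(0xFFFF, 0xFFFF, 4));
    printf("256-byte paragraphs: FFFF:FFFF -> %07Xh\n", addr(0xFFFF, 0xFFFF, 8));
    /* 010FFEF (~1MB) versus 100FEFF (~16MB): the bigger paragraph covers
       essentially all of the 286's 24-bit (16MB) physical address space. */
    return 0;
}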
Fools and their money. Some people place great value on valueless things just because it tweaks their ego to have something with some form of rarity or authenticity. Why in the everlasting fuck would I want some skidmarked pantaloons just because they were worn by Elvis or somebody?
Software has got to be the most ridiculous memorabilia; if I wanted OS/2 for something I'd just go find it. Humans are pack rats... there are archives.
Yes, if you want 'OS/2' and aren't picky about the variety, you can go and buy a modern release right now in the form of ArcaOS, hit eBay and get an historic copy (mostly 3.x or 4.x; 1.x and 2.x are less common), or visit a few Internet sites where you can 'obtain' a number of disk or ISO images for quite a few releases, including barely released products such as OS/2 PowerPC.
However this is an extremely early release of OS/2 2.0 that isn't archived anywhere. There's no guarantee that even Microsoft has a copy or would be prepared to release it.
Sometimes software is lost to history without backups. This is especially true if it's a pre-release no longer considered useful, or for games, where source control was frequently lacking (once a title had shipped for a while, many companies historically weren't concerned with keeping it for the future).
"Fools and their money. Some people place great value on valueless things just because it tweaks their ego"
If someone's enjoying doing something that has zero effect on your existence, leave them the fuck alone and be happy for them.
It would be a boring world if we all had the same interests.
I remember OS/2 1.3 EE, which included DBM - an early version of DB2/2. When IBM came to port that to NT it ran faster, so they held off, as being faster on NT than on OS/2 was not allowed. It turns out the OS/2 file system HPFS had a bug that Microsoft fixed in NTFS, so IBM held back releasing a version for NT until OS/2's HPFS was fixed. By then IBM had lost both the OS war and the database war.