
Summary....from way back then......
OS/2 -- Half an operating system...
PS/2 -- Piece of S**t number 2...
The resurfacing of a 1995 Usenet post earlier this month prompted The Reg FOSS desk to re-examine a pivotal operating system flop … and its long-term consequences. A 1995 Usenet post from Gordon Letwin, Microsoft's lead architect on the OS/2 project, has been rediscovered. To modern eyes, it looks like an email, but it wasn't …
Yes, the whole story completely misses the MCA vs ISA debacle … a fight won by neither of them: PCI stole the market.
Portable OS/2 rebranded as Windows NT is a bit of a leap/wide of the mark. Dave Cutler may have input for a subsequent story.
Windows NT was a huge pile of shit until NT4
Windows NT4 was a piece of shit until Windows 2000 (largely) unified NT and 95 .. and USB started working.
… with the eventual offspring/unification of Windows XP hitting the home run.
Two things I _really_ disagree with here.
> Windows NT was a huge pile of shit until NT4
No it absolutely was not. I deployed and supported 3.1, 3.5 and 3.51 in production. I ran 3.51 as my own work desktop for several years, and long after I moved on from the day job where I maintained a mission-critical server running NT 3.51 Server, my own home server ran NT 3.51 Server for years until it was replaced -- by an NT 4 Server.
NT 3.x was amazingly good for what was in effect a 1.x product, and 3.51 was one of the peaks of the product line's entire history. Sure it looked plain, but if you think that matters a damn then you have fallen victim to the 21st century "it's no good unless it's pretty" syndrome, and if you're old enough to think you can pass judgement on NT 3.x then you should be mature enough not to fall victim to such a miserable fallacy.
Give me an ugly but rock-solid product over a pretty but flaky one any day.
(If it's flaky _and_ it's ugly then to hell with it, and that's common now.)
You could tweak NT 3.51 into something very sleek and usable. Hotkeys for all your apps, so that you never needed to interact with Program Manager at all. I used Ctrl+Alt because I didn't have a Windows key:
C+A+W and _bang_ you're in Word.
C+A+E, flicker of an LED, you're in Excel.
C+A+D, blink, DOS prompt.
C+A+F, blink, File Manager, still a good solid tool even after 3 decades of Explorer. Legions of people still pay for "orthodox file managers" such as Total Commander.
C+A+M, Mail. Leave it open all day.
C+A+N, Netscape.
Etc. Basically, 26 different apps all right there at the press of a button.
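The mechanism is nothing cleverer than a lookup table from the hotkey to the program to launch. A hypothetical Python sketch of the idea (illustrative only; Program Manager actually stored a shortcut key per icon, not a table like this):

```python
# Hypothetical hotkey -> application table, in the spirit of Program
# Manager's per-icon shortcut keys. Program names are illustrative.
HOTKEYS = {
    "W": "winword.exe",   # Ctrl+Alt+W -> Word
    "E": "excel.exe",     # Ctrl+Alt+E -> Excel
    "D": "cmd.exe",       # Ctrl+Alt+D -> DOS prompt
    "F": "winfile.exe",   # Ctrl+Alt+F -> File Manager
    "M": "msmail.exe",    # Ctrl+Alt+M -> Mail
    "N": "netscape.exe",  # Ctrl+Alt+N -> Netscape
}

def launch_for(key: str) -> str:
    """Return the program a Ctrl+Alt+<key> chord would start (KeyError if unbound)."""
    return HOTKEYS[key.upper()]
```

One binding per letter, 26 possible chords: that is the whole trick.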
Doesn't matter if the app was open or not; a decently specced SCSI PC was pretty quick and extremely responsive. I had a Pentium/66 -- yes the big fat old 5V ones -- and 32MB of RAM, and I removed its EIDE controller altogether for that all-SCSI all the time vibe.
> Windows NT4 was a piece of shit until Windows 2000 (largely) unified NT and 95 .. and USB started working.
Yes and no. No, because you fail to correctly identify the big problem.
As a desktop workstation or as a network server, NT 4 was great. You got very solid reliability but a good UI as well. What's not to like?
It was missing features, yes, sure. No FAT32, no USB. Those proved problematic. But they would have been easily fixable.
Win2K fixed both, and it remains peak Windows 25 years later.
But you focus on the cosmetics and miss the deeper malaise.
NT 4 was pretty snappy, but some marketing-lizard pillock of a manager at MS achieved that by moving the GDI into the kernel, and it took 15-20 years to even start disentangling that mess. Even Server Core still has the GDI and the window system; it is literally inextricable.
*That* was the core mistake of NT 4, but you didn't even mention that.
XP wasn't a unification of anything. Never was. That was marketing spin which you've uncritically swallowed.
2/10. Must try harder. See me after class.
Have to agree with Liam... Windows NT 3.51 was very very solid. I only switched to NT 4 because of USB (because my ISP only provided USB DSL modems - don't ask). And NT4 only stabilised properly a couple of Service Packs in.
But there we are. A lot of historical artefacts killed OS/2... Warp 3 wouldn't install on a specific laptop platform (Cyrix processors with built-in SCSI connector) because it insisted on a specific DMA channel (?) that was taken by the SCSI interface and couldn't/wouldn't reconfigure to ignore the SCSI interface (I had an IDE drive big enough, I didn't need SCSI, but Warp *insisted*).
I still have that box here somewhere...
Of course WNT was great, but it was hardly a v1 OS.
It was basically a VMS clone (V++ = W, M++ = N, S++ = T), a "re-creation" done by Dave Cutler (and team?), who had been lured to Microsoft from DEC.
As such it had tons of multi-user credentials and a properly designed security model, using the 386's ring isolation to keep the kernel out of harm's way (i.e. away from device drivers) and thus stable... like a VAX.
Of course a CPU-driven pixmap GUI wasn't part of VMS's design, so having to push pixels through security barriers made for unacceptable GUI performance on VGA hardware, especially when you add a 16-bit ISA bus twixt CPU and screen on your typical 386 clone.
I mostly ran the Citrix variant of NT 3.51, but again modified by an X-terminal vendor (was it Tektronix?) and on X-terminals, so a lot of the pixel pushing was instead translated into much higher-level X11 line, pixmap and text calls rendered on the client side, which resulted in rather good multi-user performance for office apps. The Citrix ICA variant could use normal 32-bit RAM for rendering, but still just wouldn't scale to higher resolutions (1024x768 or even 1280x800 was becoming popular) at 8-bit color (or deeper).
In Windows NT4, graphics and other device drivers were moved to ring 0, which meant that badly written printer drivers for stuff like ink-jet printers could kill a dual-CPU, 50-user NT4 terminal server in the blink of an eye because they weren't written to be thread-safe: I very distinctly remember seeing this happen (and hunting down the cause). It didn't help that Microsoft had cut Citrix off from access to the NT4 sources, either, unless Citrix gave them access to their technology.
The ";XX" suffix at the end of filenames, which shows WHICH VERSION of a file is saved in VMS, is something I miss sooooooo much in Windows NT!
Plus VAXclustering, where I could hook up as many VAXes together as I wanted to make a massive monolithic supercomputer from individual and inexpensive parts! I think we got up to 47 VAXes in our Calgary office at one point doing 3D graphics-based oil and gas reservoir modelling, and that was in 1989! It BLEW AWAY our much more expensive IBM mainframe setups at number crunching!
The automatic versioning of files (and the "purge" command to get rid of older versions) was already present on DEC's PDP-11 machines, or rather their operating systems.
Can't actually speak for RSTS, because I never used that, but RSX-11, where I spent a few years, had it too. DCL, DEC's variant of a [shell] command language, was quite nice generally, and there was some early cross-pollination to DOS via CP/M, whose programmers were evidently familiar with PDPs too, since a few commands and even utilities like PIP (Peripheral Interchange Program) were purported to have been inspired by RSTS.
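For anyone who never met it: each save of a file created NAME;1, NAME;2 and so on, opening without a version number got you the newest, and PURGE threw away everything but the latest. A toy Python sketch of that behaviour (my own illustration, nothing to do with DEC's actual implementation):

```python
class VersionedStore:
    """Toy emulation of VMS-style file versioning: every save creates
    NAME;N with N incrementing, and purge() keeps only the newest version."""

    def __init__(self):
        self._files = {}  # name -> list of (version, content), oldest first

    def save(self, name, content):
        """Write a new version; returns the versioned name, e.g. 'LOGIN.COM;2'."""
        versions = self._files.setdefault(name, [])
        version = versions[-1][0] + 1 if versions else 1
        versions.append((version, content))
        return f"{name};{version}"

    def purge(self, name):
        """Like DCL's PURGE: delete all but the highest-numbered version."""
        if name in self._files:
            self._files[name] = self._files[name][-1:]

    def read(self, name, version=None):
        """Open NAME;version, or the newest version when none is given."""
        versions = self._files[name]
        if version is None:
            return versions[-1][1]
        for v, content in versions:
            if v == version:
                return content
        raise FileNotFoundError(f"{name};{version}")
```

The semicolon syntax and PURGE semantics are the real VMS/RSX behaviour; everything else here is just a sketch.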
True, the VAX cluster facilities never quite made it to mainstream appeal on Microsoft's Windows, mostly I guess because Wolfpack came at the same time that NT4 let device drivers run at ring 0, obliterating the main security advantage that would have made it feasible: clusters can't help against broken software.
And I don't know if IBM's cluster products were older than VAX clusters, but the latter can only be called "inexpensive" compared to what IBM keeps charging for mainframes (or Tandem for NonStop).
Ken Olsen eventually led DEC into ruin by trying to emulate ECL mainframes via the VAX 9000 at a time when IBM itself was going CMOS, while on the other hand trying to conquer the PC market at minicomputer prices via the DEC Rainbow.
I can see him shaking his head at a Raspberry Pi emulating a VAX 9000 (or a Cray X-MP, for that matter) faster than the real thing ever ran.
I was somewhat involved in the HPC motivated Suprenum project during my Master's thesis, back when even (Bi)CMOS CPUs were unsoldering themselves from their sockets at 60 MHz clocks (first Intel Pentiums), so I've always retained an interest in scale-out operating systems, which would present a single-image OS made from huge clusters of physical boxes connected via a fabric (e.g. Mosix, by Moshe Bar).
But with currently 256 cores on a single CPU die (or thousands on a GPU) each delivering large SIMD vector results per multi-gigahertz clock cycle, that (scale-out operating system) domain has become somewhat irrelevant, or rather transformed far beyond recognisability and often rather proprietary.
LOL! My other older boss in the Commodities Trading/Financial Services industry is STILL running his 1989/1990-era VAX-9000s using COBOL-based financial transaction software. They started converting everything still running on the VAX-9000 systems into Java about 25+ years ago, and five years after that built their own in-house real-time multi-core Lazarus-style IDE with a multi-core GPU-enhanced, highly speed-optimizing Object Pascal compiler that has a super-fast C/C++-like pre-processor added in for extra functionality!
Parts of it are now running on a few racks of the newer dual 128-core AMD EPYC motherboards with four AMD server GPUs on each mobo and hundreds of terabytes of globally shared system RAM. That project is now in its 21st year and should be fully done by 2028, so those VAXes still have to run for another 3 years as the primary runtimes beside the new system! The in-house programmers are making a fortune and will probably start retiring after the project is finally done and fully deployed! The interesting thing is that the VAXes are STILL holding their own on certain types of simultaneous high-input/output commodities transactions even though the new hardware is LOADS FASTER than the VAXes! I am told that once everything is done, deployed and tested, he will use the in-house software to scale up and scale out all the hardware inside a custom-built datacentre located in northwestern Ontario so he can take advantage of the cheap hydro-power electricity.
He should have bought a more modern IBM Z-frame but I think my old boss has a soft-spot for those VAX-9000's which he is eventually going to put into his own personal home computer systems museum (i.e. he has HUNDREDS of vintage computing systems ALL still working and some are from the 1950's -- And YES! He has a very large house in a rather posh area of greater Toronto to fit all that vintage gear in!)
The problem is, it was so good that they spent the next couple of decades weakening its security, because it was too restrictive for programmers and users who were used to the lax security of Windows 9x and would forsake security rather than learn to do things properly.
Don't forget Token Ring networking. That was another own-goal in IBM's personal computing dream, because for all its supposed technical advantages, it was rapidly eclipsed by cheaper and easier Ethernet. If (like my employer of the time) you were fully paid up with PS/2 machines running OS/2, connected via Token Ring, then you'd wrapped yourself into a very expensive solution, just as other people offered better software cheaper, better PCs cheaper, and better networking cheaper. And it was made worse by IBM's distribution channels, which were entirely designed around selling big iron to big companies, so the salesmen were hostile to any form of mix-and-match of technologies and only used to selling to those with deep pockets.
When you look back, IBM did nothing right in the personal computing space, and reaped the rewards of that.
No.
Token Ring was good but expensive.
Ethernet was nasty and cheap.
I sold IBM Token Ring products in 1994/5. Our customers wouldn't touch Ethernet because their businesses depended on their networks.
Then Ethernet improved - switching, higher speeds, mainly with no change needed for the cheap and nasty PCs.
The world moved on.
Token Ring was of its time and good when it was, duplicate MAC addresses gave fail-over corporate solutions very early in the day, but Ethernet got better.
In 1994 Ethernet wasn’t good enough. IBM didn’t work out that it soon would be.
IBM backed ATM as a future local area network technology and this was just wrong. Cisco and Ethernet won.
I banned Token Ring from a global transportation company in 1994. IBM threw $1 million of free consulting at the unstable LANs and still got their backsides handed to them when I implemented an extremely stable Ethernet LAN for 250 users at pennies on the dollar compared to Token Ring.
I knew absolutely nothing about networking back then & relied on a very small network team. I wish that I knew then what I know now to better understand what went so wrong.
I'll add that OS/2's GUI perplexed everyone, including my significant Unix team who were working with 3 different GUIs.
I had to rely on an app vendor to install their excellent native OS/2 app. Fine with the app, just cursed a lot at the OS GUI.
I ran the largest civilian T/R network in the country (IL) at that time... nearly all were Madge cards. Kicked IBM out of my site for AST (a story for another time)... and even so, ended up getting hired by IBM. Ethernet couldn't compete back then. We used to call it OS/Tfoo! (think spitting sound). But the main reason Windows took over for us was the BiDi support.
> There was a period when Token Ring was superior to Ethernet,
It handled heavy loads much better until 10BASE-T _switches_ replaced hubs. Then it was all over. Given switches everywhere, suddenly Ethernet scaled well under load, even on larger networks.
100BASE-T just sealed the deal.
Initially we ran IBM TR and DECnet 10BASE5 networks at the same time.
We also used 10BASE5 between buildings to interconnect a number of 10BASE2 LANs. The joy of someone moving a 10BASE2 computer and accidentally/ignorantly disconnecting the cable, or putting the earth connector at both ends (or neither end), or connecting the cable directly to the end PC without the terminating resistor and "T"...
In one lab, I set up a TR test bed with cables snaking around door frames and suspended above head height by string. After a couple of weeks, when everything was bedded in, I asked the cable guys to wire it up properly over the weekend. I came in on Monday morning and every station had a wall pattress with TR sockets. I plugged everything in and connected the MAU -- nothing worked. After a bit of head scratching, I removed an access panel that had one of the pattress plates and saw that the wiring behind it was standard POTS cable. When the chief cable guy came to look, I asked him why they had cabled it that way. He said "It's just twisted pair, it should be OK". I suggested that he consider why the standard TR leads looked as though they could be used to moor a medium-sized boat. They came back the next weekend and replaced it. About a year later we standardized on Ethernet, so it was all replaced. The good news was that the existing TR pattress plates were rewired with 10BASE2 and double 50 Ohm connectors, so everything looked tidy.
A previous employer moved into a new building in 2017. It had structured cabling running to all of the wall ports and floor ports. Looked super, until they moved in and nothing worked!
Turned out the previous tenant had still been using Token Ring, so the ports were all wired "wrong"...
I also remember agreeing with most of it.
When looking at the corpse of OS/2, everyone sees the bullet holes in the body. IBM points to the bullets in the head that Microsoft put there, like a sniper. They ignore the many, many more bullets in the feet that were put there by IBM. There are so many it looks like IBM used a Gatling gun.
I worked at IBM (on contract) doing OS/2 applications from 1990-1992. I didn't work on OS/2 itself, although I had friends that did. I did get to see, from within IBM, the breakdown of the JDA with Microsoft. The JDA was the IBM/Microsoft Joint Development Agreement. It basically stated that IBM and Microsoft shared the OS/2 kernel, that Microsoft owned the GUI, and IBM owned the database and networking (what was known as the Extended Edition) features.
When the JDA broke down, IBM's internal attitude was that OS/2's new goal was to be "not Microsoft". I saw numerous instances of OS/2 being changed, usually needlessly, and far too frequently to its detriment, simply to be different from Windows. Working functionality would be scrapped when a necessary component was changed, solely for the purpose of making it different from Windows.
The belief from upper management seemed to be that the corporate market drove the personal market (I disagreed), and since corporations trusted IBM more than Microsoft (I agreed with that), they would standardize on OS/2 (which many did), leaving Windows to die. By making OS/2 incompatible with Windows (except for a WinOS2 layer) it would make migrating OS/2 applications to Windows extremely difficult. That would starve Windows of application development, and kill Microsoft.
"Kill Microsoft" was clearly a goal of many at IBM, especially the marketing and business direction types, who'd been stung by the failure of the JDA.
The problem was that the corporate market didn't dictate the market in 1992 the way it had in decades past. IBM management was told that repeatedly, but they refused to believe it. The IBM internal fora (like Usenet, but internal only) were absolutely filled with rank-and-file employees screaming at the top of their lungs that it wasn't 1980 any more. Parents were not buying PC 5150 DOS machines and awing their children with this majestic new technology. In fact, the kids were often the ones explaining to parents what an Apple ][, or Atari, or Commodore 64 was. That may not have been true in households with parents working at IBM, but the vast majority of households cared less about what computers their company used, and more about what their kids' school used, and what they saw for sale at Sears, local electronics stores, and Circuit City.
Developers were not going to develop a massive application for OS/2 and then carve away functionality to make it run on Windows, the way IBM (executives) believed they would. They'd start from the bottom up, making it work for the easy case of Windows first, and then expand and extend it for OS/2. Or they would have, if IBM hadn't deliberately done everything they could to make that as difficult as possible.
I had a small DOS application that I'd written in 1988 and had sold to a number of local law firms. Many were curious and asked about Windows and OS/2 versions. When I asked Microsoft, they sent me a WIN32 Developer Kit, for free. It was a beast, and incredibly klunky to work with, but it worked. When I tried to talk to IBM about OS/2, I was sent a price list that showed C/Set2 tools, starting at $500, and that was it.
Microsoft went out of its way to court developers. Often they went too far, to the point where they were practically bribing people to develop Windows apps. In contrast, IBM held non-corporate developers in contempt. As one magazine of the time put it, "IBM would garner a lot more support for their OS/2 operating system if they stopped treating potential developers for it like child molesters".
OS/2 was technologically far ahead of Windows, especially version 3.x. It was still technically better than Windows 95. But in real world terms, for consumers and developers, IBM was simply too difficult to deal with.
At home, I ran OS/2 1.x from 1990 to 1991, and OS/2 2.x from 1991 (beta versions) to 1996, when Windows NT 4.0 came out. I dual booted between them. Remember MOST, the Multiple Operating System Tool, that IBM included with OS/2? Long before GRUB, we had MOST. But once NT 4.0 came out, with the stability of NT and much of the application base of Windows 95, OS/2 was simply too far behind to ever catch up.
You need to include the small business/everything else* market, which certainly wouldn't have come over IBM's horizon as corporate, but which I would have thought was bigger than the home PC market and which, like the home market, wasn't signing up for OS/2.
* e.g. laboratory instrumentation
> e.g. laboratory instrumentation
One that did move over to OS/2 in the days of version 1.2/1.3 was Spectra-Physics with their software to support data collection and analysis from HPLC (High Performance Liquid Chromatography) systems. This was from around 1991/92. The alternative systems in the market were running on DOS, or, in the case of one particular system, "DOUBLE DOS"
Towards the mid-90s, Spectra-Physics sold their LC business to Thermo-Fisher Scientific.
> ...everyone sees the bullet holes in the body. IBM points to the bullets in the head that Microsoft put there, like a sniper. They ignore the many, many more bullets in the feet that were put there by IBM. There are so many it looks like IBM used a Gatling gun
That is perfectly put! Looking at the hollow walking corpse that is IBM now, it seems nothing has been learned.
I was born over a decade earlier but have the same impression. What was the last good innovation from IBM? Was it virtualisation? Was it the System/360 thing of a common target platform? Or is there something more recent? The AS/400 family has its fans, but does it count as a lasting success when most software today is "a bit shit"?
> What was the last good innovation from IBM? Was it virtualisation?
Very subjective question - what's your definition of 'good innovation'. The innovative parts from 1980 forwards for IBM (in my book) would be:
1. PC - obvious.
2. PS/2 - brought several enhancements and set new PC standards: VGA, PS/2 ports and more.
3. OS/2 - first (?) pre-emptive multitasking OS (with GUI) on PC. The GUI itself was innovative. HPFS was great.
4. Thinkpads set the standard for laptops for quite some time, the butterfly keyboard anyone?
5. Deep Blue - first system to beat the reigning human chess champion.
Each of those could be thought of as standing on the shoulders of giants, with incremental additions to pre-existing innovations, and so on.
I'm sure Deep Blue's achievement would have come sooner or later from somewhere else, thanks to ever faster computers, and better chess algorithms have (probably) been written many times in the last 25+ years.
Mwave was even worse than that.
While I don't remember the drivers being so bad (outside of IBM not producing drivers for many Mwave-based devices under Windows 95 and NT), the DSP would run out of memory and then you'd have to stop using, or actually turn off, a particular function. A good example of this comes in the form of the Audiovation card. Not only did it offer almost everything you'd expect from a decent sound card of the day, it also came with software to decode JPEG images on the DSP. (One thing the Audiovation notably lacked was Sound Blaster compatibility from pure DOS. You only got that when running DOS programs under Windows.)
Unfortunately, there was too little memory available to the DSP for it to do very many things at once. Rather than perhaps bumping up the memory available to the DSP, IBM's solution was a software control panel that let you do things like turning the audio line input on or off (yes, doing this really saved DSP memory), or unloading the JPEG encoder. I think you could even turn off the game/joystick port to further free up DSP memory. Of course, you'd only find out about this when you tried to do a certain thing with the Audiovation board and the drivers would tell you that there was not sufficient DSP memory available.
Mwave did see a little use outside of audio and modem applications. There's the previously mentioned JPEG decoding, and also the Waverunner ISDN adapter.
It was a great theory, marred as only IBM could in its implementation. I've never used any of them, but I have heard that the various Mwave based modems in the Thinkpad line were pretty awful. I have also heard the same about the Mwave sound (and modem?) cards included in IBM's Aptiva line from around the same time. I even dimly recall that there might have been a lawsuit over the limitations of Mwave based hardware...
I started my first computer courses in 1980: BASIC on a Tandy TRS-80, Fortran and Cobol on an IBM mainframe.
The 3270 screen for the IBM was just beautiful: 80x25 characters that were just wonderfully chiseled in bright green on black and IBM keyboards were like Steinway grand pianos. The TRS-80 was washed out dots on a bad TV and the keyboard a nightmare.
But BYTE magazine convinced me very quickly that little was more desirable than having your own computer: I've always bought PCs and then tried to turn them into mainframes ever since. With Microport Unix on my 80286 I thought I had gotten close... I always used original IBM keyboards on my cheap PC clones, but put a pilfered IBM metal sticker on the chassis.
My professional life has mostly been spent replacing the mainframe. But that has never kept me from admiring the admirable parts. Gene Amdahl's forward-looking 360 architecture was certainly one of them, but as with virtualization you could argue that more was invented *at* IBM, even against its management's wishes, than *by* IBM: TSS vs. VM/370 is one of many such stories.
One IBM architecture which I feel still undervalued and much more advanced than even today's mainstream operating systems is what started as System/38 and became AS/400.
I've never consciously used one and they weren't exactly personal computers, but as a technical architect I've at least come to admire their forward looking principles, the single level store and capability based addressing, both of which might have saved unimaginable man years and trillions of IT spending, had they been more affordable or even open source.
Unix was a hack that turned everything into a file, because the PDP it was born on had too short an address bus to support a Multics-like virtual memory system. Its designers were so embarrassed by its success that they developed Plan 9 and Go, just so they'd have something done properly to be remembered for.
And who would want files, when they could have persistent objects, like on Smalltalk machines or at least a database like on AS/400?
But these days I'm reminded ever more of the fact that humans started out as segmented worms and were also not designed to sit in front of a computer for a day of work: we might have evolved there, but the design is anything but optimal for the job, or where do those back pains come from?
Gosh, I tried OS/2 thinking it must be the way things are going to go. At the time, my conclusion was that I was out of my depth because I just found it awkward in every way. I probably was out of my depth but being in business, manufacturing real things, we needed our systems to work and work for everyone.
I find it hard to say, but Windows did just that. And we ran it on 'clones', despite dire warnings on 'compatibility'.
> I was out of my depth because I just found it awkward in every way
No. It just was. And I speak as an enthusiastic user of the first 2 32-bit versions.
Some of it was its own traditions and practices. It inherited a lot from 16-bit OS/2 1.x, and of course nothing from Windows.
Some was IBM being its idiosyncratic self.
Some was just short sightedness.
Why the hell was dragging only done with the right mouse button? What conceivable benefit did that have?
Why weren't long filenames truncated? DOS & Windows apps couldn't see anything with longer than 8.3. That was just daft.
Both Win9x and NT could be bootstrapped with a DOS installer. OS/2 never could. You needed to customise boot disks, FFS. Ludicrous.
I didn't use OS/2, so a couple of those don't sound as daft to me as they do to you.
"Why weren't long filenames truncated? DOS & Windows apps couldn't see anything with longer than 8.3. That was just daft."
Because that was a limitation that is better when you no longer have it. Forcing every limitation on something in the name of compatibility probably wouldn't have helped either, because then it would have been identical to Windows and Windows, being cheaper, would also have won. When the differences make OS/2 better, it makes sense to keep them.
"Why the hell was dragging only done with the right mouse button? What conceivable benefit did that have?"
This one isn't a good thing to me, but presumably the conceivable benefit is that, at some future point, a long press on the left button could be assigned another meaning, making the mouse more capable. Of course, if they never assigned that, it doesn't end up being very useful, but that's the primary reason I can think of for doing it that way, and it has precedent because that's why there were three buttons on the mouse instead of one in the first place.
Most mice on Intel PCs at the time only had two buttons. The IBM mouse you bought for first generation PS/2 systems, for example, definitely only had two.
Macs had one button mice, and most UNIX workstations had three.
It's a fair while since I've seen OS/2 Warp running on anything, but for a time it was on my PC at work. I did not find it that difficult to work with, although a lot of the time I was just running X11 applications through the PMX addon to OS/2. I was not really using Windows much at the time (I did have a Win95 system at home, mainly to run accounts packages), but when the kids weren't playing games, that system ran Linux.
"Why weren't long filenames truncated? DOS & Windows apps couldn't see anything with longer than 8.3. That was just daft."
Because that was a limitation that is better when you no longer have it. Forcing every limitation on something in the name of compatibility probably wouldn't have helped either, because then it would have been identical to Windows and Windows, being cheaper, would also have won. When the differences make OS/2 better, it makes sense to keep them.
Granted, but backwards compatibility is a thing (it was back then too, but not many folks back then understood that). So rendering a whole bunch (perhaps all) of the existing application base unusable, for an underutilized and under-supported OS, was... well, not a good plan.
Micros~1 came up with a mechanism that allowed compatibility with the 8.3 convention, and also allowed long filenames for apps that could accept them. Was it a kludge? Yeah, probably. It certainly was the butt of much derision (including on my part, as you can see above). That said, it did work, and added life to "legacy" applications in the brave new world of NTFS and beyond. That IBM couldn't (or wouldn't) be arsed to come up with a similar solution is yet another nail.
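Roughly, that mechanism derives an 8.3 alias from the long name by stripping illegal characters, truncating, and appending ~1, ~2, ... until there is no collision. A much-simplified Python sketch (the real VFAT algorithm has extra rules, e.g. hash-based suffixes after a few collisions; this is only an illustration):

```python
def short_name(long_name: str, existing: set) -> str:
    """Very simplified sketch of VFAT-style 8.3 alias generation.
    Real Windows applies more rules (case handling, hash suffixes, etc.)."""
    base, dot, ext = long_name.rpartition(".")
    if not dot:                         # no extension at all
        base, ext = long_name, ""
    # Drop spaces and other non-alphanumeric characters, uppercase the rest.
    clean = "".join(c for c in base.upper() if c.isalnum())
    ext = "".join(c for c in ext.upper() if c.isalnum())[:3]
    suffix = "." + ext if ext else ""
    # Short enough and unique: use it as-is.
    if len(clean) <= 8:
        candidate = clean + suffix
        if candidate not in existing:
            return candidate
    # Otherwise truncate and append ~1, ~2, ... until we find a free alias.
    for n in range(1, 10000):
        tail = "~" + str(n)
        candidate = clean[:8 - len(tail)] + tail + suffix
        if candidate not in existing:
            return candidate
    raise RuntimeError("no free 8.3 alias")
```

Hence a "Microsoft Office" directory shows up as MICROS~1 to DOS-era apps, while long-filename-aware apps see the full name.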
"Why the hell was dragging only done with the right mouse button? What conceivable benefit did that have?"
I don't remember this one - I only came to OS/2 Warp, but I _like_ this idea. The amount of users I've seen click and drag things by mistake, I'd be really grateful if things were only dragged with the right mouse button. It would certainly save a lot of "and then all my files disappeared" type calls!
It was two decades ago, around 2005: a die-hard OS X developer had to ship a Windows exe for some reason. I can't remember his name or the product.
The exe kept crashing, and somehow MS became aware of it.
Microsoft contacted him, explained why it was likely failing and how to fix it, and sent the necessary documentation.
The guy was so impressed that he blogged about it on an OS X developer blog.
Agreed, yet we are in a Windows and Mac dominated home environment …
- Commodore Died at Amiga
- Atari Died at ST
- Sinclair died at QL
- Acorn/BBC died at Archimedes (though kudos given sired are the entire ARM legacy)
Indeed, Apple almost died twice: first at the Apple II vs Mac transition, and secondly in the Pepsi era, until Jobs was brought back, unified MacOS with NeXTSTEP's Unix, and, utilising the boon of ARM (from above), created portable iThings.
Honourable mentions for Linux and Raspberry Pi.
"Microsoft went out of its' way to court developers."
They sure did! In the late '80s or early '90s, I was learning Visual Basic and remember having trouble implementing a 3rd-party extension (DLL). I called the support number from the VB manual and spoke to a native-English-speaking Microsoft rep, who asked for my address so he could send me a floppy with a bunch of their own extensions and documentation on how to integrate them into VB projects. I still laugh at how a total nobody could call MS, talk to an actual person, and just for the asking they'd copy a disk, put a label on it and mail it across the country totally free of charge.
I have a box filled with old floppies, Zip and Click discs, CD-Rs and maybe a SyQuest or two. I bet that floppy from MS is still in there!
Admittedly, the first two major releases were both costly and crap, but Warp was a far better piece of software than any DOS-based version of Windows. Basically, customers expected an OS that was free (or whose cost was covered by the inclusive deal). OS/2 was always a costly extra and, by the time it was any good, customers had bought big time into Windows 3.x. The insistence on loading the registry into already restricted RAM made Windows 95 inferior to 3.x in my opinion, but Microsoft's huge marketing effort made folks want the shiny and new. Luckily for me I was able to go from 3.1 to 2000 via Warp 3 and 4. Before anybody asks, yes, I had lots of compatibility issues with 2000, but not one that couldn't be solved.
I disagree. All the 32-bit releases were pretty damned good. There was no great leap in quality after Warp.
Technically far better than Win9x... But the 9x user experience was better in every single way. The technical underpinnings were kind of ugly, but NT fixed that.
This! I was one of the early OS/2 2.0 adopters, and I was very happy with it. And theeeeeeen every single update to (2.1, Warp, 4.0...) caused it to boot even slower, and it was already a freaking pig to start with. You could boot into Windows 3 in about 15 seconds (depending on hardware of course). OS/2 2.0 was well over a minute on my machine at the time. God forbid you'd shut down uncleanly, because then it was going to take way longer.
OS/2 would also occasionally corrupt itself while playing demanding games - not often, but reinstalling from two dozen floppies every couple of months was hell. And then there was the growing compatibility gap where it became harder and harder to run new Windows software on OS/2 - native OS/2 software was few and far between, mostly some rare utilities, because as you pointed out Windows was the standard. And Win95 was a much better user experience.
Then at some point Win2K came out and my OS/2 install ate itself again and I said 'eff it' and installed Win2K and it was superior to OS/2 in every way. I had zero desire to ever go back. But RIP OS/2, I used you for almost a decade.
> And theeeeeeen every single update to (2.1, Warp, 4.0...) caused it to boot even slower
Absolutely this.
I didn't game on it, but what you say is true of badly-behaved DOS apps that hit the metal. Fractint was the one for me: it could reliably bring down OS/2 with its custom video modes.
It was running full screen, obviously if it is tweaking video modes. Either the OS should let it screw around with video card registers as it wished because the OS was just going to reset the video mode when you returned to the desktop (as I recall, that's what Win9x did) or the OS shouldn't let it mess with the hardware (which IIRC is what NT did.)
I never gamed on it, because the only game that interested me then was Doom, and that was because of network deathmatches. Playing against another human is much more interesting. I built a 2-PC LAN at home, just for Doom deathmatches. (My home wifi still has the same name, nearly 30 years later, because at some point one or another computer was always on the preceding network.)
OS/2 didn't come with a network client. That was a cost-extra edition, "OS/2 Warp Connect." Windows 95 did and at first it could even run over a Laplink cable between 2 parallel ports: configure Direct Cable Connection, and bind IPX/SPX to it. Bosh, Doom could talk.
That's another thing MS got right and IBM got wrong, but it was so much later in the story that I did not think it was relevant.
IBM tried to "nickel-and-dime" not only customers but also developers: a network stack cost more, dev tools cost more, etc.
MS threw in everything it could to sweeten the deal.
Of course, later on, the bean-counters at MS were given too much power and you started to get Windows Home, Windows Pro, Windows Enterprise, Windows Ultimate... and Windows Server, Small Business Server, Enterprise Server, Datacentre Server, whatever.
The IT business is like Seinfeld. Nobody remembers, nobody learns, nobody changes.
So with Microsoft's next OS, Windows NT, that's what it did. It ended up eventually dominating the industry because it aimed at future hardware: it was designed to run on the i860, specifically the Dazzle motherboard, then the MIPS R3000, as well as x86 machines.
And not forgetting the DEC Alpha
> And not forgetting the DEC Alpha
You are missing the point of what I was getting at.
Sure it ran on other things. That's not the point here.
The point is it was developed on i860 (never released), then ported to MIPS, and only then was it ported to x86.
NT is not a native x86 OS.
It started out on RISC & moved to a 2nd RISC before the first x86 version was developed.
Since OS/2 died out in the 1990s, many people had switched to Windows NT, and by 1999 we were running Windows 2000 on Daytek motherboards (i.e. the monitor maker who also made motherboards!) with DEC Alpha CPUs running at 240 MHz, which made AMAZING graphics workstations! My boss bought a few of those MASSIVELY HEAVY 21 inch CRT Sony Trinitron tubes that were the first multi-megapixel displays ever, at 2048 by 2048 pixels, originally made for FAA air traffic control! We had to build custom graphics boards and drivers to drive those displays at a full 48 Hz, but they worked! We used them for satellite-based Earth and deep-space imagery acquisition and display!
You could get Windows NT for x86, MIPS, PowerPC and DEC Alpha! I still have my Windows 2000 Server and Workstation CD-ROM discs with all the multi-CPU-system installation sub-folders! Microsoft used a system called HAL (Hardware Abstraction Layer) which lets people run the user interface and NTFS file system on top of hardware-drivers for varying GPU displays, disks and motherboards. It was a great idea at the time and even today you can run Windows 10/11/12 on variable CPUs by writing custom HAL drivers! I can even run the Windows 12 Beta on IBM S1024 48-core Power-10 CPUs using HAL which is AWESOME! I am even running Windows 12 on our in-house-built "Haida Gwaii" system which is a custom-built many-core supercomputer system on a HAL-based driver set!
Once in a while Microsoft gets it right! And the HAL (Hardware Abstraction Layer) of the Windows code-base is definitely one of the best ideas EVER and is STILL being used internally and for developer use if you ask nicely and pay a big fee!
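The HAL idea praised above is simple enough to sketch: the portable parts of the kernel call only an abstract interface, and each platform supplies its own implementation underneath. This is a toy illustration in Python, not Microsoft's actual HAL API; all class and method names here are invented:

```python
from abc import ABC, abstractmethod

class Hal(ABC):
    """Toy illustration of a hardware abstraction layer: the 'kernel'
    above only ever talks to this interface, never to the hardware."""

    @abstractmethod
    def platform_name(self) -> str: ...

    @abstractmethod
    def write_console(self, text: str) -> None: ...

class X86Hal(Hal):
    def platform_name(self) -> str:
        return "x86"
    def write_console(self, text: str) -> None:
        # On real hardware this would poke VGA text memory or a UART.
        print(f"[x86] {text}")

class AlphaHal(Hal):
    def platform_name(self) -> str:
        return "Alpha"
    def write_console(self, text: str) -> None:
        print(f"[Alpha] {text}")

def kernel_banner(hal: Hal) -> str:
    # Portable 'kernel' code: identical on every architecture.
    return f"Booting on {hal.platform_name()}"
```

Swapping AlphaHal for X86Hal changes nothing above the interface, which is the portability that the x86/MIPS/PowerPC/Alpha builds of NT mentioned above relied on.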
"... the HAL (Hardware Abstraction Layer) of the Windows code-base is definitely one of the best ideas EVER"
That was called the BIOS in CP/M and was Gary Kildall's bright idea to abstract the underlying hardware implementation from the software layer so that the OS could run on just about any 8080 computer by just implementing a BIOS for the architecture. As usual Microsoft just took somebody else's idea and ran away with it.
YUP! You are correct! The BIOS was the first attempt at hardware virtualization! Anyone remember the great BIOS battles between the Phoenix and AMI (American Megatrends Inc.) clone BIOSes and the IBM BIOSes?
I still have a LOT of DOS code with low-level INT 13 and INT 21 interrupt-based code running. Windows 10/11/12 maps the interrupt calls to virtualized functions now BUT you can STILL call the BIOS if you need to emulate or call old code! Serial and Parallel Port communications, disk operations, memory mapping all used fancy interrupt calls!
Sorry but CP/M's BIOS was no HAL and HAL wasn't particularly novel or powerful.
IBM's 360 architecture (by Gene Amdahl) which allowed a single instruction set to span a large range of machines that differed significantly in terms of capabilities and physical architecture was much more forward looking. It basically had a virtual instruction set, some of which even the smaller machines could execute in hardware, while any of the more complex (e.g floating point) ones, would be emulated in microcode, fully transparent even to the OS.
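The S/360 trick described above can be sketched in a few lines: instructions the "hardware" table knows run natively, everything else traps into software emulation, and the program above can't tell the difference. A toy Python model, with invented opcode names:

```python
def emulate(op, a, b):
    # Software emulation of the 'optional' instructions, e.g. the
    # floating-point ops a small machine lacks in hardware.
    if op == "fmul":
        return a * b
    raise ValueError(f"unknown opcode {op}")

def run(program, hardware_ops):
    """Execute (op, a, b) tuples; ops missing from hardware_ops are
    'trapped' and emulated in software, transparently to the program."""
    results = []
    for op, a, b in program:
        if op in hardware_ops:
            results.append(hardware_ops[op](a, b))   # native execution
        else:
            results.append(emulate(op, a, b))        # software fallback
    return results

# A small machine: integer add in hardware, no floating point.
small_machine = {"add": lambda a, b: a + b}
```

The same program runs unchanged on a bigger machine whose hardware_ops table includes "fmul"; only the speed differs, which was the whole point of the single instruction set spanning the range.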
CP/M had to run on S-100 machines, where few ever had the same hardware so a BIOS had to be written (or adapted) for each machine, much like run-time libraries in the 1950's.
And HAL was Microsoft's insurance, both against a multitude of ISAs, but also against a PC platform which had zero abstractions or support beyond a CP/M style BIOS in ROM.
I've never investigated the abstraction capabilities of HAL, but everything in PCs went to the quickly evolving metal when it made GUIs look better or stuff run faster, which nobody could have ever anticipated when HAL was designed.
Congrats on deriving value from a code base that old, but I can't think of INT 13 BIOS or INT 21 DOS calls as "fancy". They were a primitive replacement for CP/M's CALL 5 (BDOS), which was necessary because of the 8088's segmented memory and because it lacked a proper system call instruction. And they were so incredibly primitive and slow that everyone who could bypassed them whenever possible.
Intel itself was so embarrassed by them that they overcompensated by really fancy system call instructions and mechanisms like task and call gates for the 80286, which OS/2 was designed for. But while they only cost a single instruction to call and seemed to offer good process isolation and protection, they were so incredibly slow to execute, that Linus had to replace all that code to make his initial OS perform anywhere near to BSD386 levels.
And later not even Intel managed to keep track of all the registers that actually needed saving and restoring, which is why it was removed for the 64-bit ISA.
This provided more abstraction than you might have thought. I had a bad experience with the HAL on an early Windows XP system, where there were actually two HALs for x86 systems, depending on whether they were uniprocessor or multiprocessor capable.
I replaced a failed motherboard for a system (keeping processor, memory and I/O cards), but the new board for some reason had a multiprocessor capable support chipset (even though it was running a single core Athlon processor), and Windows would not boot because of the different HAL required (it may have been multi to single, I don't really remember). I tried doing an in-place re-install, and then had different ACL security keys for the OS specific regions of the NTFS filesystem (the bits you rarely have to look at, or sometimes don't even see from inside Windows), which caused all sorts of problems that I didn't know how to fix without doing a full wipe and re-install.
As a slight aside, the dual-boot Linux installation on the same system took the new motherboard in its stride, and didn't blink an eyelid.
It was mostly three HALs by the time XP came along.
Before ACPI arrived (which is the *third* HAL) and allowed almost everyone to run the same standard HAL (with a lot of complexity, the BIOS has to play ball and all the things included in the ACPI tree these days aren't straightforward) there was MPS for multiprocessor support. You had to choose between a single or multiprocessor MPS HAL, and switching was as you say a bit of a pain. Not to mention power saving, which was also largely standardised via ACPI, and rather a nightmare before then.
There were then a few other HALs for medium sized iron usually from Compaq or Unisys but most people wouldn't use them.
back in the early 80s. As I mentioned above, my company at the time tried one PS/2 computer but ditched it after a short time. In the same office we were using DOS, early Windows, Commodore PETs, DEC Rainbows, an Apple II, Husky handhelds (for data capture), an ICL mainframe, and the CAD dept had their own mini system. There were no real standards. Eventually Windows became dominant due to the availability of software and ease of development in house. The mantra became "is it Windows PC compatible?" Originally we used IBM hardware (20 MB hard drives and 5¼" floppies), then lots of cheaper Windows-compatible kit mass-produced in Asia.
The other thing was that you could network Windows computers relatively cheaply and (for the time) easily. Sure you could network DEC's stuff if you didn't mind paying out the wazoo, or IBM if you didn't mind paying your entire wazoo. But for PCs you'd just pop in some (relatively) cheap 3Com cards or whatever. This was before everything was TCP/IP, so a lot of it was what we'd consider horribly proprietary and weird today, but by golly for 'only' a couple hundred bucks (cheap at the time) you could have your Windows (and DOS!) PCs on a work network without complete vendor lock-in - you were only locked in for the networking portion.
At the time I preferred Apple IIs (a 1 MHz Apple //e was demonstrably faster in practice than a 4.7 MHz IBM PC because the PC was so bogged down by the BIOS, slow video, slow floppies, etc.) but I quickly realized which way the peripheral and networking winds were blowing, especially after Turbo Pascal hit.
Windows networking was based on LanMan, which came from IBM and was included in OS/2. (I think you needed an OS/2 Enterprise Edition) You were still locked in, you paid for it in client licenses on your server. Didn't matter if your server was LanMan, Netware, or something else. I don't remember any of them offering peer-to-peer sharing, like we have now.
NetBIOS, NetBEUI, AppleTalk, IPX: all of it was a mess at the time. I found NetWare 3.x to be the easiest to set up on all the OSes. DECnet seemed fairly problem free, as long as you didn't mind spending $250 for NICs and a license. We basically used it so we could run terminal apps on PCs to log into our VAX cluster. As long as no one kicked their cable loose and killed everyone's network on the string.
Winsock and the Internet moved us into TCP/IP as the standard. (Which was also available for Win95, via Trumpet)
DECnet assumed that all NICs were DEC and hence had the upper however-many bits of the MAC all the same. The lower bits were the DECnet IDs.
We needed to have HP-UX talk to a VAX, so we bought a DECnet implementation for the HP (I dread to think of the hoo-hah we might have had with the VAX team to do it the other way around). What we'd forgotten to allow for was that when we ran it, it modified the HP's MAC to fit in. All the PCs lost their network connections for a minute or so until they rediscovered the server and updated their ARP tables.
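The MAC rewrite described above is mechanical: as best I recall the DECnet Phase IV scheme, the 6-bit area and 10-bit node number pack into a 16-bit address, stored little-endian behind DEC's fixed AA-00-04-00 prefix. A small Python sketch:

```python
def decnet_mac(area: int, node: int) -> str:
    """Compute the MAC a DECnet Phase IV node forces onto its NIC.

    Phase IV packs the 6-bit area and 10-bit node number into a
    16-bit address and prepends DEC's fixed AA-00-04-00 prefix, with
    the 16-bit value stored little-endian -- which is why starting
    DECnet *changes* the interface's MAC, as described above.
    """
    if not (1 <= area <= 63 and 1 <= node <= 1023):
        raise ValueError("area must be 1-63, node 1-1023")
    addr = (area << 10) | node
    return "AA-00-04-00-%02X-%02X" % (addr & 0xFF, addr >> 8)
```

So DECnet address 1.13 puts AA-00-04-00-0D-04 on the wire, and every PC's ARP cache entry for the HP's old MAC goes stale at that moment.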
Lan Manager was Microsoft's networking
Lan Server was IBM's
Both were basically NetBIOS (NetBEUI, TCPBEUI... SPXBEUI? I can never remember if there was an acronym for NetBIOS over SPX).
I have to admit whilst I used OS/2 extensively as a client or server, for an actual file server it was mostly Netware. Setting up Lan Server atop OS/2 was substantially more fiddly and error prone than Netware.
The only exception to that being the Netware client under OS/2, which spoke ODI rather than the NDIS model used by OS/2, Windows, and DOS. You could run a pure Netware model and speak to ODI drivers, but if non Netware connectivity was required an ODI2NDI driver provided the necessary translation. Mostly this was pain free, but a couple of the cheaper NE2000 NDIS drivers were poorly coded, and if Netware and non Netware software were used at the same time a black screen of death tended to result. Other NE2000 drivers and practically all 3Com drivers were fine.
" Even now, Windows 11 won't run on a perfectly serviceable kit."
And that's a problem now, where it used to be a solution. The technology simply is not advancing at the same rate, at least from a user perception level. A 2025 PC or laptop is not really all that much better than a 2020 PC or laptop unless you are a hard core gamer/builder. The enforced added security of having TPM 2.0 may or may not be a good thing, but it's deprecating a lot of kit that, as Liam points out, is perfectly serviceable. One possible upside is a likely glut of decent, second hand laptops on the market reducing prices or at least precluding price increases.
Heck, my primary personal system is a 2012 Asus that's faster than every other high end laptop I've used from customers. I haven't gamed in 15 years, but just fired up a few over the holiday break & felt like I was running them on a dedicated 2024 gaming tower.
If we're not restricting to Wintel PCs, I can tell you a 2024 Macbook, or even the 2021 M1 Macbook, is very much better than any 2020 laptop. The hardware is leagues ahead, and they have somehow not thrown away that power on ads and crapware.
It shows the potential is there if MS & OEMs work hard, but it's too hard.
The best proof of Microsoft's lame excuses about old hardware is produced by Microsoft itself.
It's called Windows 11 IoT Enterprise LTSC and does away with nearly all restrictions, except 64-bit ISA and POPCNT support.
I'm running it on anything Sandy Bridge and up, or simply on anything that I also used for Windows 10.
No TPM (unless it's a travel laptop and has one), no HVCI (I run VMware Workstation as type 2 hypervisor), no OneDrive (not stupid), no Co-Pilot (not that stupid), no Edge (that would be *really* stupid) nor many other "improvements".
It was released in October 2024 and comes with support until 2034.
And to deploy, I simply take a minimal install that I keep current on a Windows To Go USB stick, with all my applications and the various drivers for older and newer hardware, and put that on the target's boot storage; the MAKs and ISOs that came with MSDN remove all activation hassles.
After perhaps a reboot or even two to reconfigure the hardware it's good for longer than the hardware will likely still last, since some of it is already more than 10 years old.
And I find it somewhat embarrassing that it's easier to transplant than most Linux variants and across a vast range of systems ranging from Atoms and small laptops to powerful mobile or tower workstations with all sorts of storage, NICs, integrated or discrete GPUs.
And if a brand new laptop comes with some "OEM enhanced" pre-built image? I just plaster it with the live image from the stick, because OEMs are just badly imitating the abused notion which Microsoft has copied from the Fruity Cult: that they own your personal hardware including your data.
Windows Server 2025 is and works pretty much the same, btw., I'm running the Data Center edition as "to Go" on a nice Kingston Data Traveller 1TB USB 3.2 stick that isn't quite NVMe, but will go 2x SATA speeds on matching hardware. Actually Windows server is mostly a PoC, because it's a bit rough on AMD desktop hardware due to AMD's penny pinching and Microsoft charging extra for server signatures.
Every Windows 11 release has installed perfectly fine without any issues on VMs running on much older hardware, including things like device pass-through (e.g. GPUs for CUDA or gaming) on KVM/Proxmox/oVirt: all those blocking checks are only performed on physical hardware by SETUP.EXE.
And even if Windows to Go also no longer officially exists, Rufus will help you out for any edition Microsoft produces.
And no, I cannot imagine Microsoft ever blocking security updates to LTSC IoT editions based on hardware generation. Application vendors are the far bigger risk to long-term viability: some games now refuse to run without TPM (could be inconvenient) and Facebook might be next... no problem for me, except when you're forced to use them to do your tax returns next year.
Back in the day I had a dedicated xterm and wanted to build another using a PC. I managed to do an end run round whatever obstacles BT procurement might have put in the way by buying an xterm kit which comprised a network card, the xterm S/W, and something called Windows/386 (actually a 386 flavour of Windows 2.x, from before the Windows 3 era).
Someone else in BT rang me up for sourcing details, basically because he wanted what was essentially the same network card; the same card was available via BT procurement, but in typical BT fashion they'd broken its leg by specifying some modification for some particular purpose which made it unsuitable for anything else.
> in typical BT fashion they'd broken its leg by specifying some modification for some particular purpose which made it unsuitable for anything else.
Ah, like the Newbury labs terminals which were all fitted with special ROMs that had non-standard cursor control sequences so would only work with BT-ized software packages...
1992 I was cranking out C++ to do CAD/CAM things. Working with Windows 3 was horrible - having to chunk stuff into 64K segments, background processing being a nightmare. Got hold of a copy of OS/2 2.0 and it was a revelation. When I went to a dev launch thing for Win95 a few years later, I recall the whole audience reacting with revulsion as it was revealed how it actually worked.
IBM did kneecap OS/2 by not having a 386 version; even in the 1.x days the 16MB 80286 memory limit was being reached, not to mention the other advantages V86 mode provides.
However, a great deal of it was to do with the cost of RAM. Protected-mode OSes are more memory hungry. It was cheaper to run DOS programs, which fitted in at most 640KB, or Windows 2.x, which ran 'OK' on a 286 with as little as 1MB of memory. When I first started using OS/2 2.1 in 1993 I had 8MB, which was very unusual for a consumer, and quite expensive. 8MB was still at the lowest reasonable limit of acceptability for OS/2, and NT 3.x realistically needed 12MB+.
Sorry, but OS/2 2.x did not 'work in 4MB'. It may have booted, but it would have swapped to the point of unusability. Even at 8MB the swap file was in use, but not to the point that it impacted performance too much. When Warp 3 came out it was tuned properly to 'run in 4MB', which still meant it ground more than you'd want but could run some programs. Like it or not, OS/2 2.x onwards has always required 8MB or more, and 16MB is a decent target.
It was possible to develop for Windows 2.x and what's more include it as a runtime if Windows was not installed, and as a commentator notes above, the high cost of IBM development tools did not help.
Acceptable amounts of memory and cheap development tools provided a foothold into the market, as did improved driver support. I agree that the OS/2 battle was lost before even OS/2 2.0 was released.
There's a whole boatload of other things that went wrong. IBM's funding and insufficient monitoring of third party software (such as Borland C++ for OS/2, which is brilliant in some parts and unusable in others), the lack of driver support for several critical years, some very poor design decisions that were never properly addressed, and the ridiculous debacle of OS/2 for PowerPC or basically expecting non Intel platforms to be viable full stop.
OS/2 was a glorious hot mess of an operating system for a few critical years, and I don't particularly regret spending personal and professional time on it. Still got two separate OS/2 systems here, an ArcaOS system, plus a model 43P I'm trying to persuade to run OS/2 PowerPC. Naturally my day to day operating systems are Unix and Windows.
However, even if OS/2 had somehow managed to get a low-memory runtime 1.x version to capture the market in the 80s, even if it had released a slightly less crippled 2.x and 3.x release in the 90s, it would *still* have had to face the spectre of its truly weird single-user design later on, whereas NT and Unix for the most part had a fundamentally decent design they could build on.
(even then Microsoft notably impacted its own market in the early 00s when it voyaged up its own arse with grandiose designs finally reined into sanity with the initially buggy Vista. Of course at that stage there was no effective competition, Android didn't exist, OS X required Apple hardware, Unix still wasn't capturing the market.)
> Sorry, but OS/2 2.x did not 'work in 4MB'.
It absolutely bloody did.
I bought 2.0 with my own cash.
I ran it on 2 different 4 MB 386sx boxes before I could afford to buy a 486 with 8 MB, which was the highest spec machine I used it on while it was contemporary.
With only DOS apps and the occasionally 16-bit Windows one, it was fine. Not fast but still vastly better then Win 3.0!
Running 'only DOS apps' is hardly using OS/2! You might as well have used Desqview or Desqview/X instead..
I bought OS/2 2.1 with my own money too. With fewer than 8MB OS/2 was beyond grindingly slow. You could free up a fair bit of memory with judicious config.sys editing, and using MSHELL/TSHELL, but then not using the WPS removed a lot of the point of using OS/2 too.
> Running 'only DOS apps' is hardly using OS/2! You might as well have used Desqview or Desqview/X instead.
Running DOS apps under OS/2 was better than Desqview. Multitasking was better and you didn't get screen bleed through if a background task wrote to the screen buffer directly. A lot of DOS apps were not Desqview friendly. It has been 30 years, so I don't remember how we changed our apps to display graphics safely under Desqview.
One of the things people used to do to show off OS/2 was to start a floppy format and then change apps and show that everything hummed along nicely. With Desqview that brought all of the other tasks to a grinding halt until it completed.
OS/2 was particularly nice for BBS operators that ran multiple nodes or wanted to be able to use their computer without bringing down the BBS.
"However, a great deal of it was to do with the cost of RAM. Protected mode OS are more memory hungry. It was cheaper to run DOS programs which ran in upwards of 640Kb, or Windows 2.x which ran 'OK' on a 286 with as little as 1MB memory. When I first started using OS/2 2.1 in 1993 I had 8MB which was very unusual for a consumer, and quite expensive. 8MB was still at the lowest reasonable limit of acceptability for OS/2, and NT 3.x reasonably needed 12MB+."
Was OS/2 2.0 purely 32-bit? I recall one of the benefits of Windows 3.x and 95 being a mix of 32-bit and 16-bit code is that it helped keep memory requirements down. My first PC clone was a 486/66 with 4MB of memory and it ran Windows 3.1 moderately well. I also recall most apps still being Win16 during the time, not Win32s or Win32, which also helped keep memory requirements down.
And speaking of cost, weren't 386 systems prohibitively expensive those first few years? I wonder if IBM thought that it had more time before it needed to make the transition to 386 code.
> Was OS/2 2.0 purely 32-bit?
Purely, no.
This is a dramatic over-simplification but it was a bit more akin to:
OS/2 2.x was an OS with major 16-bit components, but running on a true 32-bit kernel.
Win9x was an OS with major 32-bit components, but running on a 16-bit kernel.
> I recall one of the benefits of Windows 3.x and 95 being a mix of 32-bit and 16-bit code
O_o Very few of us saw that as a _benefit!_
> is that it helped keep memory requirements down
No no no no no! It did nothing of the kind. What it did was allow Win9x to run usefully on a system where it only had 16-bit drivers.
If you had something like a sound card or a CD drive or even a network card with only DOS-type drivers, W9x could still use it.
Worst case, if your main disk controller or something only had DOS drivers, then you'd get a warning message in System Properties saying "Windows is running in BIOS compatibility mode" or something. It killed performance, *but it still worked* and that was the key thing.
This also applied to the setup and installation process.
The great Raymond Chen went into some depth on part of this recently:
"Why did Windows 95 setup use three operating systems?"
https://devblogs.microsoft.com/oldnewthing/20241112-00/?p=110507
Also, the point that - forgive me if I've missed it - no-one's made here yet is that OS/2's ability to run Windows applications was both brilliant, and extremely dumb.
Brilliant, because it opened up an immediate catalogue of software to OS/2 users.
Dumb, because why would developers now bother to write a separate OS/2 version, when they could simply write for Windows - as mentioned above, with enthusiastic support from Microsoft - and have both the Windows and OS/2 markets? And so in the longer term, it guaranteed OS/2's irrelevance.
Ballmer was right about one thing - it really is "developers, developers, developers". Forgive me if I don't leap around the stage and scream that, though.
> Dumb, because why would developers now bother to write a separate OS/2 version
I disagree.
People said the same about WINE 25 years ago. If Linux could run Windows apps there'd be no market for native ones.
Well, it's done all right, and now Microsoft offers Linux apps.
The point is that OS/2 native apps could have delivered a much better experience. There was a marketing justification.
But MS outdid IBM again. 16-bit Windows 3 apps ran better on NT than they did on Windows 3 itself. All the resource limits went away, and each app could have its own separate memory space if you had enough RAM and wanted that. So if one Win3 app died the others were unaffected.
This did nothing to hinder development of 32-bit apps, note, once there was a consumer 32-bit Windows.
> I disagree. People said the same about WINE 25 years ago. If Linux could run Windows apps there'd be no market for native ones. Well, it's done all right, and now Microsoft offers Linux apps.

I think there's a significant difference here. OS/2 was a commercial product whose developers had to justify funding to the accountants. If the OS or an app didn't have the revenue to satisfy the bean-counters, things stagnated or were just dropped entirely.

WINE and Linux didn't have this problem. They could weather the lack of use and revenue, the bugs, and the drawn-out dev time for decades until they were actually usable and mainstream, because there was no revenue/profit driver.
"People said the same about WINE 25 years ago. If Linux could run Windows apps there'd be no market for native ones."
It was a lottery that an application would run on one version of WINE but not on the next or one version of an application would run and another wouldn't. Then they decided they really didn't like 24-bit graphics H/W so there that when the system reported that it substituted 32 because it would align with a 32 bit word & run games faster or the like - the fact it wouldn't run an application at all was not seen as a problem. Enterprise Architect crashed on its splash screen. Stylus Studio would, I think, have run but there were other issues - IIRC the licencing didn't work.
I did a bit of bisection on the code changes until I found that out, removed the nonsense, recompiled and it worked fine but I had to keep doing that for every new release until they finally mad a big change & I never located where they were doing it or even if there was a provision for 24-bit any more. It was easier to run anything that mattered on a W2K VM. Being long retired there's nothing from that era that I really need to run any more.
> Dumb, because why would developers now bother to write a separate OS/2 version
> I disagree.
> People said the same about WINE 25 years ago. If Linux could run Windows apps there'd be no market for native ones.
This was an argument that we had in the 90s. IBM had to be aware of it. Many did their best to support vendors that released native apps, but most often we were told to just use their Windows version because they weren't going to port. I probably still have Stardock's OS/2 Essentials and Galactic Civilization around here somewhere. Virtual Pascal too.
The reason Linux businesses have survived is because it wasn't pushed as an alternative to run Windows apps via WINE. Native 'Free' software is what has driven Linux. People simply aren't running Windows apps there. But who knows, maybe 2025 is the year of the Linux Desktop.
> The reason Linux businesses have survived is because it wasn't pushed as an alternative to run Windows apps via WINE. Native 'Free' software is what has driven Linux. People simply aren't running Windows apps there. But who knows, maybe 2025 is the year of the Linux Desktop.
That was probably true until the Steam Deck and other equivalent devices showed up. Remains to be seen if they'll trigger a long-term shift in the market (I doubt it) but the boost in compatibility that Valve's investment in Proton has made is already lowering the barriers to entry considerably in other areas too. What's really required for the shift, though, isn't just better compatibility, but more effort to reduce the friction Linux often still has for new users. This was Valve's biggest triumph, in my opinion - making the Deck's Linux underpinnings totally opaque to end-users - but they're far from the only ones working to make Linux more accessible to consumers.
What's been interesting to me of late is the way Microsoft's strategy with Windows 11 has been prompting the various non-techies in my life - those switched on enough to know about Win 10's arbitrary EoL deadline - to ask about alternatives for perfectly serviceable kit that can't run Win 11, rather than buying an overpriced new 'AI PC', and there are a few Linux distros out there that focus heavily on compatibility. I've personally been recommending Zorin OS to a few folks, as it's got some of the best out-of-the-box compatibility in terms of both the technical side and its 'look and feel', and the support from the company and community is excellent too. This reduction of the learning cliff for migrating to Linux is essential to making it succeed.
Microsoft will retain the business market for the foreseeable future, but if Linux can keep on maturing away from its elitist past, and businesses like Valve and Zorin can keep on making it more accessible to people who don't care what their computer runs - so long as it's familiar, easy, and just works - it could easily keep chipping away at Microsoft's consumer market with every little misstep they make.
"People said the same about WINE 25 years ago. If Linux could run Windows apps there'd be no market for native ones."
And I think those people were right. For two separate reasons, that didn't end up having much of an effect:
1. Not that many people used Linux desktops, so most people didn't bother to make software to work with it, either natively or through Wine. It didn't look like there were a lot of Wine applications because there weren't, but that wasn't because people chose to make native ones. They just ignored Linux users. They mostly still do.
2. The reason why Microsoft writes Linux applications: Wine was never quite good enough for companies to sell things that required it. Sure, some things ran perfectly, but many others did not. That sometimes changed. A company wanting to make a Linux variant might try making a customized version of Wine that ran their software well and then wrap their application with that version, but that was a much more substantial effort than trusting that anyone who wanted their software enough would do it for them. Had Wine been perfect and unchanging, it probably would have been used. A lot of the software that Microsoft makes for Linux, for example, uses cross-platform runtimes like .NET (admittedly, they wrote a lot of the Linux support into that themselves) and Electron, so the effort required to have a Linux version is much lower.
Most of the native software available for Linux was written for Linux first. Most of the native software for Windows was written for Windows first. A lot of the time, the authors don't bother branching out if changing the compiler options doesn't do all the work.
> Handicapping it to use the pi**poor 286 pretty much f**ed themselves.
That's very true, but I seem to recall that Intel had promised that the 286 would allow pre-emptive multitasking and it didn't work. So they shipped 286s with those features disabled, with the result that they were basically an 8086 with some additional large-memory-space addressing registers.
The 80386 was pretty much what the 80286 was supposed to have been. Quite why IBM didn't jump ship over to it pronto is hard to understand (even with the benefit of hindsight). The article talks of "promises" to corporate customers, but a marketing promise is just that - marketing. It would be interesting to know more about what contractual commitments IBM had with Intel at the time.
> That's very true but I seem to recall that Intel had promised that the 286 would allow pre-emptive multitasking but it didn't work.
Not quite, no, but this is awfully close to what happened, and if the editors permit, I am planning a follow-on article that goes into this in more depth.
The 80286 could and did support pre-emptive multitasking, and SCO Xenix 286 used it very well.
I never got to try it myself, but so did MWC Coherent 3.
https://www.abortretry.fail/p/the-mark-williams-company
Not a developer, but I was reading 80286 history. The problem seems to be that the 286 can't get out of protected mode once it has switched into it - e.g. you run a modern GUI OS but you also have critical MS-DOS software. On Unix there are no such archaic issues, of course.
To proceed further down the rabbit hole: all 286-equipped PCs can get out of protected mode, it's just very slow. Or rather, it wasn't obvious that it didn't need to be.
IBM's original solution, as used by the BIOS, was to leave appropriate annotations in RAM and full-on reset the CPU via the keyboard controller, which takes several milliseconds and fully resets its state.
A faster solution is to triple-fault the 286; triple-faulting on x86, on the 286 and later, causes a reset - a very quick one, though you still have to pick things back up in real mode from a full processor reset, so it's slow compared to a 386 or any other modal processor feature that you're intended to switch both on _and off_ during execution. You still don't want it to be something that happens often.
It's the difference between 15ms and 800µs per Raymond Chen of Microsoft.
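To put those two figures in perspective, here's a back-of-the-envelope comparison. (A sketch only: the 15 ms and 800 µs values are the ones quoted above from Raymond Chen, not my own measurements.)

```python
# How often could a 286 OS afford a round trip back to real mode?
# Figures from the thread above (Raymond Chen's numbers).

KBC_RESET_S = 15e-3      # full reset via the keyboard controller
TRIPLE_FAULT_S = 800e-6  # reset via a deliberate triple fault

# Maximum real-mode round trips per second, if the CPU did nothing else.
kbc_rate = 1 / KBC_RESET_S      # roughly 66 per second
tf_rate = 1 / TRIPLE_FAULT_S    # roughly 1250 per second

print(f"keyboard-controller reset: ~{kbc_rate:.0f} switches/s")
print(f"triple-fault reset:        ~{tf_rate:.0f} switches/s")
print(f"speedup: ~{KBC_RESET_S / TRIPLE_FAULT_S:.1f}x")
```

Either way it's far too expensive to do on every DOS call, which is why the 386's virtual 8086 mode (no reset needed at all) was such a relief.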
I don't remember (or probably never knew) the details, but I've always known that OS/2 failed because IBM refused to be on the leading edge of PC development. IBM preferred to ship stable, reliable products. Some (most?) of OS/2's compatibility issues have been attributed to IBM not allowing for (or refusing to stoop to the level of) the vagaries of clone hardware. For example, they had a lot of complaints about printer handling, which turned out to be because OS/2 expected everyone to be using hardware signalling with Centronics connections. The idea that someone would ship a cheap cable or cheap adaptor that required the computer to poll the printer flummoxed them. And I once saw a post on CompuServe where an OS/2 developer was bemoaning the unreliable RAM refresh rate on clones.
Microsoft just wanted to ship whatever they could persuade enough people to pay for (and still does).
OS/2 suffered the fate of a lot of really great products. The creators wanted to ship the best product they could. By the time they felt they had a worthy product they'd missed the boat.
It would depend on what they perceived as "best". Taking your printer example, is it best to make assumptions about hardware signalling or to work with as many printers (and cables) as possible?
I take it as an axiom that any assumption you make when developing S/W becomes a limitation of the product.
Yes. I think IBM were too strict about 'best practice'. I've always thought that it should be a sliding scale. For important stuff like airplanes, nuclear reactors, medical equipment etc the hardware and software needs to be fully specified, documented and development tightly controlled with formal QA and certification throughout. But for the other end of the market - basically 'consumer goods' - we can be less rigorous in most areas.
The general public won't/can't pay for the highest quality goods and innovation is rampant so a relative drop in quality should be accepted.
However there need to be certain aspects of hardware and software that are inviolate. Things that are never skimped on (security and electrical safety being the obvious two). The difficulty is probably going to be coming up with a system for consumer goods that allows for QA to be concentrated in just those areas that need it.
For products which need to interface with others the best approach is a double one. For providing a service follow the official specification. For receiving a service allow for the possibility that other providers might not be so good and be generous in accepting departures. The second, however, has the risk of something deliberately nasty being sent down the line which means that being generous comes with the cost of carefully vetting the data.
In case you don't know, web based version of Space Cadet:
https://alula.github.io/SpaceCadetPinball/
Left/Right mouse for paddles, hold middle mouse and release for the ball.
Overall cost was a big issue for small businesses and home use. OS/2 Warp v3 launched big, and a lot of people tried it. Sure, you could get DOS/Windows for 'free' on a new Compaq or IBM, but a lot of people were building compatibles at the time, because Compaq and IBM were expensive. IBM forced higher costs into OS/2 because it required better CPUs (remember the 386SX vs DX?). Then it required more RAM to run well, and more disk space (in the days when we were buying megabyte-sized hard drives and data CD-ROMs were relatively new). Then there was the "buy hardware from the approved list or take your chances" problem. Nothing like fighting to get your PC to boot because you chose the wrong brand of RAM, or bought a 'clone' NIC instead of a 3Com. Of course, hardware was also limited by available drivers (mouse, sound card, video, etc).
Then Warp 4 dropped and it was the same problems with higher hardware requirements. Once the novelty wore off people went back to Microsoft whose product worked well enough. Most people didn't care about true multitasking or hardware abstraction. And nobody but developers cared about the OOP in Presentation Manager, extended file attributes in HPFS, or short cuts that actually followed the files.
Personally I only used Win 95 at work. At home I went DOS/Desqview, Warp 3 & 4, then NT 4 to 2000 and up. I skipped most of 95 fun, all of 98, ME and Vista.
I have never used Win95. I stuck with OS/2 until Win98 came along. OS/2 was a great platform for DOS and Windows development. If you screwed something up all you crashed was your VDM. Launch a new one and you'd be back on stream within a few seconds. I was writing data recovery software at the time and OS/2 emulated BIOS calls so it was fine.
This post has been deleted by its author
This is one of those occasions.
Better DOS than DOS, better Windows than Windows was, in fact, precisely the same premise that sold Windows 3 as a GUI that could also run your existing DOS programs, possibly more than one at the same time if you had the right hardware.
OS/2 v2 made much the same claim for Windows programs, with proper pre-emptive multitasking and real inter-process memory protection. In this respect it was far superior to Windows 95 when it came to Win16, which was pretty much the entire ecosystem at the time. I never tried Doom, but Commander Keen ran beautifully, either full screen or in a window.
What scuppered OS/2 was the perception, encouraged by both Microsoft and IBM, that you needed a PS/2 to run it. IBM liked it that way, obviously, and Microsoft came to realise that it left a wide open opportunity for something entirely made at home to clean up in the clone market. That this perception, with the exception of PS/2's watchdog timer to mitigate the system-hanging potential of the single global message queue insisted upon by Microsoft, was actually false mattered not one jot.
It's subjective, but the first time I saw the Windows 95 UI my first reaction certainly wasn't that it was superior, but that, when compared with Windows 3, how much it resembled IBM's Workplace Shell. Windows NT at the time still featured the full Windows 3.0 experience.
-A.
The whole 386 thing was a bit of a disappointment to me, because there were a multitude of memory-management hardware solutions out there for 286 chipsets, supported by DOS extenders, and then MS choose to support only the Intel offering.
EMM386.sys was a real win for Intel, effectively killing the motherboard-based extended memory chipset solutions from other foundries.
Intel then lost the naming fight, and everybody else eventually brought out "386" processors, but it was a massive setback for the rest of the industry.
While it is true there were various memory management products you could bolt onto a 286, they didn't solve three fundamental problems that are fixed in the 386.
The first two are true multi-tasking and memory protection**. With the 386 you can implement a true multi-user operating system - or, put another way, truly isolate running programs from each other.
But the third thing, which killed the promise of OS/2 for me as a developer, was that all the APIs were written for the 16bit segmented memory architecture of the 286 rather than the flat-memory model available in the 386.
Having programmed on both types of memory architecture at my day job, the thought of writing code and interfacing with the OS via a soon-to-be-obsolete segmented memory model with unprotected memory turned me off completely as a developer-friendly platform. And, having just ported a 286 Xenix application to the 386 and then spent months fixing all the SEGFAULTs identified by the CPU, my disdain was far more than theoretical.
I presume they fixed all of this in OS/2 2.0, but that initial turn-off was a killer for me and effectively influenced my career choices.
(** Yes, the 286 had a half-hearted "protected mode" but I don't think early OS/2 used that as it precluded DOS compatibility - another thing fixed in the 386 with virtual 8086 mode).
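For anyone who never had the pleasure: the pain of the 16-bit model is that a "pointer" is a segment:offset pair, not a single number. A quick sketch of the real-mode arithmetic (physical = segment x 16 + offset) shows why - many pairs alias the same byte, and no single offset can reach past 64 KB. (286 protected mode swaps the shift for a descriptor-table lookup, but keeps the 64 KB offset limit that made the OS/2 1.x APIs so awkward.)

```python
def real_mode_phys(segment: int, offset: int) -> int:
    """8086 real-mode address translation: physical = segment*16 + offset."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

# Many segment:offset pairs alias the same physical byte:
print(hex(real_mode_phys(0xB800, 0x0000)))  # 0xb8000 (the CGA text buffer)
print(hex(real_mode_phys(0xB000, 0x8000)))  # 0xb8000 again

# And no offset can reach past 64 KB without reloading the segment register,
# which is why 16-bit APIs forced 'far' pointers and 64 KB object limits.
assert real_mode_phys(0xB800, 0x0000) == real_mode_phys(0xB000, 0x8000)
```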
OS/2 1.x absolutely *did* use protected mode, it was a fully preemptive multitasking, protected memory OS, even in the 16 bit versions.
Of course as it was 286 protected mode, you're still stuck with segmented memory which majorly sucks as you mention.
OS/2 1.x also included 'DOS compatibility'. It could run a single DOS box that looked very much like DOS. I advise not looking at how it was architected unless you want to run away screaming.
Even when OS/2 2.x onwards arrived, it used a particular 'tiled' layout of the 386's flat memory model, which limited OS/2's process size in order to provide easier compatibility with 16-bit programs. In these modern days of web browsers and multi-gigabyte Rust compilers, that's now a real issue (building modern Firefox, especially).
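The arithmetic behind that tiling limit, as I understand it: every 64 KB of flat address space in the compatible region gets a matching 16-bit selector, and a Local Descriptor Table holds at most 8,192 of them, which caps the tiled region at 512 MB no matter how much the 386's flat model could otherwise address. (A sketch of the numbers only; the exact arena sizes varied by OS/2 version.)

```python
# Why OS/2 2.x's 16-bit-compatible 'tiled' region tops out at 512 MB:
# each 64 KB tile of flat address space is shadowed by one 16-bit LDT
# selector, and an LDT can hold only 8192 descriptors.

TILE = 64 * 1024      # one 16-bit segment's worth of flat space
LDT_ENTRIES = 8192    # maximum descriptors in a Local Descriptor Table

tiled_limit = TILE * LDT_ENTRIES
print(f"tiled region limit: {tiled_limit // (1024**2)} MB")
```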
I was lucky enough to completely miss the 386 era for personal machines. By ‘87 I already had an Archimedes (one of the handmade prototypes) and just floated past the whole thing. Except for managing a project that supported ObjectWorks & VisualWorks on Windows 3.1 , NT 3.5, and OS/2, but I held on to my RISC OS world all through that for sanity’s sake. For the windows stuff we had to run a sortakinda screensaver approach in order to get any responsiveness. On os/2 we had constant nightmares with device drivers that didn’t. NT was almost bearable by comparison; and it was quite fast on the Alpha box we had. I do remember with horror that one os/2 test machine just decided to dive deep into some sort of disk garbage collect navel gazing for hours. And hours. We left it to see what would happen and a *couple of weeks* later it just stopped and sat there looking all innocent and ready to go.
Shouldn't say that Windows 11 "won't run" on serviceable hardware. It has arbitrary limitations requiring newer hardware that have nothing to do with performance capability. It will in fact run well enough on decade-old hardware. The days when you needed 4 times the "minimum" RAM and CPU requirements to actually get good performance are past (the OS itself runs tolerably even below the minimum that are required to permit installation), and disk space is cheap now.
Interesting to read the line:
...designed it for machines that it had already sold. It did not want to let existing customers down.
While not quite the same situation, Microsoft is effectively abandoning existing PC users who have machines that could probably run Windows 11 perfectly well, but are being prevented from doing so by MS design decisions. None of my machines is considered capable of running it, despite being decent-spec machines, their only failing being that the CPUs are considered too old. Of course, MS still have that big stick to beat people with: if they stop supporting Windows 10 and key applications are only available on Windows, industry has to give in and buy new machines, whereas home users will hang on with their last version of Windows 10 while their PCs are still good enough, and security will start to fall apart because undoubtedly it still has some embarrassing vulnerabilities.
I do still run OS/2 in a VM on my Linux machine. It was a nice OS, shame it got screwed over.
Until Windows 11, Microsoft was always more concerned with market share as in share of PCs running Windows, not "market" share as in how many licenses got purchased, or purchased at normal pricing. They would rather have had Windows and Office pirated than those machines running Linux and alternative office suites, or someone switching to Mac (though you'd have to be insane to think switching to Mac was cheaper, obviously). Other than token efforts to stop cracks being profitable for anyone, they didn't do much to stop pirating, and were fine with gray-market licenses. They still don't care about gray-market licenses, and pirating is actually really easy now.

This is the first time they've ever truly made an effort to eliminate multiple generations of machines, and according to them it's not because of money but because of security. And I actually halfway believe them on that, because of their history of not caring about driving new purchases.

They know they'll get a huge number of new purchases anyway, at some point, because of businesses and users that don't know much and just buy a new machine whenever anything goes wrong. They don't greatly care about the remainder, which is why the code still allows workarounds to install without supported hardware: they still retain those few users on Windows rather than losing them to Linux, and get them off Windows 10, and the majority of users will be on supported hardware with the security capabilities they want to push (which happen to also help tie you to the Microsoft semi-walled garden).
> Microsoft, is effectively abandoning existing PC users who have machines that could probably run Windows 11 perfectly well, but are being prevented from doing so by MS design decisions.
Not design. These are purely marketing decisions.
Marketing has driven Microsoft since Interface Manager was renamed as Windows.
-A.
OS/2 2.1 was a decent OS, and so was Warp. I think I remember the 8580 MSRP being around $8k USD at announcement, and that was for a basic unit without extra memory. Those PS/2 option cards were expensive, and the OS/2 drivers for them could be a pain in the neck to install. Until Warp, all the OS/2 installs I did were on 1.44MB diskettes... lots of them, far more than Win 3.0 or Win 3.1. Once you got OS/2 up and running, many apps people wanted to run were just not available, but were available for Windows. Prices few small businesses or consumers were willing to pay for hardware or software, lack of software, unexpected crash-and-burns requiring a full reinstall of OS/2 to a freshly formatted hard drive... that's what I remember.
Yep. The prices were eye watering.
A cheap IBM clone PC and a pirated DOS/Windows and you were in business. OS/2 could not compete with that. Nobody could. And the cheap IBM clones begat the cheap peripherals. Sound cards. Video cards. Modems. Drives. If you couldn't afford top-of-the-line peripherals, there were dozens of cheap, albeit problematic and unreliable, alternatives.
It all went geometric growth from there.
IBM from the release of the first "PC" carefully trimmed their products at the knees, elbows, and eyebrows trying to keep them from cannibalizing other product lines.
My TRS-80 interpreter BASIC was faster executing than a 5150... They'd simply recompiled the 8-bit BASIC into x86 opcodes...
From my memory, it was 4 or 5 *years* before IBM admitted that it was possible to do word processing on a PC. (daisywheel, very NIH)
"Oh, let us show you our DisplayWriter line for professionals..."
"Planar" LOL I still say it sometimes today at Lenovo (and the youngins look at me like I'm high). We called it a planar because of how the mobos were designed and manufactured in layers or planes. Planes were either power or signal; so if you had a 4S2P planar it had four layers of signaling planes and two power planes. That also brought back the other word that differentiated us from our competitors: 'hard file' instead of the now standard hard drive. The early modern PC development era were some of the best times of my professional life. Just wanna say this is why I love el Reg. Great article with great comments and repartee.
.....with the original PC design:
(1) Apart from the BIOS, the whole machine design was open source!
(2) And the backplane was designed to take plug-in boards from anyone at all!
By comparison, IBM lost their bottle with the PS/2 where the backplane needed expensive (proprietary) add on boards.......
....while in the mean time PC clones were cheaper....and so were the PC add-on boards.....
....and all this BEFORE we get to the 386 CPU or Windows 3.......
....and I know this because I bought a PC AT clone with a 286 CPU in 1986......
Because the PC was essentially a skunkworks-type project done by people from the wider industry. PS/2 was corporate IBM. Corporate was, I think, already moving in on the AT. Ever looked at the cable you needed to make up to connect a 9-pin D to the headers on the board? Whoever specified that didn't know that in the outside world the pin numbering on a board header isn't the same as on a D connector.
Sadly, there was a lot of junk in the 286 motherboard market... I had a BUNCH of 286 systems that ran OK with real-mode software like DOS and Windows 1/2, but crashed with protected-mode software like Unix/286, Windows 3, etc. IMHO, Windows didn't even begin to get stable until WfWG 3.11, and Win95 was an order of magnitude better. Win98 another order of magnitude.
From what I remember, Advanced Logic Research (ALR) shipped 386 machines shortly before Compaq. FWIW, I bought one of the first Compaq boxes in fall 1986 and the original motherboard had an 80287 as Intel wasn't shipping 80387's. Another unique feature of the 16MHz Deskpro 386 was the use of static column DRAM instead of using an SRAM cache as on the 20MHz Deskpro 386.
An annoyance with the ISA-bus 386/486 machines was that DMA only worked with the bottom 16MB of RAM. This was one place where the PS/2 machines were better than the clones, though that went away with the introduction of the EISA bus, then later with Intel's PCI bus.
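That 16MB ceiling falls straight out of the ISA DMA hardware's 24-bit addressing, so drivers had to "bounce" any buffer that landed above it through low memory. A sketch of the check a driver would make (the helper name is mine, just illustrating the arithmetic):

```python
# ISA DMA can only form 24-bit addresses -> only the bottom 16 MiB of RAM.
ISA_DMA_LIMIT = 1 << 24   # 16 MiB

def needs_bounce_buffer(phys_addr: int, length: int) -> bool:
    """True if any byte of the buffer lies above the 16 MiB ISA DMA limit."""
    return phys_addr + length > ISA_DMA_LIMIT

print(needs_bounce_buffer(0x00F00000, 4096))  # False: fits below 16 MiB
print(needs_bounce_buffer(0x01000000, 1))     # True: starts at 16 MiB
```

(Real drivers also had to worry about ISA DMA transfers not crossing a 64 KB page boundary, which is a separate wart I've left out here.)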
IBM tried to take back the open nature of the PC with their proprietary MicroChannel Architecture with its planar boards and fixed disks because if you own the language on the interface and human level then you win.
But they partnered with their chief software competitor, a known scurrilous rogue, to develop an OS to compete with itself. It doesn't take Carnac the Magnificent to figure out how that's going to go.
I Looooooove this OS! I'm still running it on an IBM PS/2 Model 80 tower which I use as the server host for my secret dial-up post-and-reply text-based BBS (Bulletin Board Service) for us insane old-timers who discuss and showcase 1980's-era ASCII/ANSI p0rn and how to over-engineer a working -75 Celsius Okanagan Pale Ale and Ice Wine fast-chiller! I think OS/2 Warp and the REXX scripting language it uses for its batch OS commands is probably one of the FINEST operating systems ever created! Only VAX VMS comes even close!
I also use it for my word processing and report writing, because IBM XyWrite on OS/2 is STILL one of the fastest and best word processors out there. Since I also have the 21-inch Sony Trinitron 1024-by-768-pixel RGB CRT display, it still looks great even in 2025! I even got a version of Firefox up and running on OS/2 Warp!
V
(Bulletin Board Service)
Bulletin Board System
If you were a Service, then you were commercial in the eyes of your local Bell mafia and needed to pay triple the price for your telephone lines.
Our local group negotiated with our Baby Bell that a BBS utilizing 4 or fewer lines and not charging a membership could continue to pay residential rates.
Was in Calgary, Canada at the time and we had a whole network of Gandalf Hybrid modems, and AGT (Alberta Government Telephones) was HEAVILY regulated and could NOT charge the Arm and a Leg that AT&T did in the U.S.!!! Our monthly rates were a LOT CHEAPER than yours in the USA or the UK!!! Today, I run a "fake" serial port modem line from my Gigabit Ethernet IP home phone system to the IBM PS/2 Model 80 tower, which THINKS it's a normal Hayes 2400 baud Smartmodem! Can you say ATDT in order to connect???
It's used for fun by a bunch of old-timer hardware and software engineers using an old 1980's-era OS/2 BBS system that does post-and-reply messaging! You post a reply and A FEW ***DAYS*** LATER you would eventually get some replies and you would post another reply to the original question. NO instant text messaging here! I think our "online" conversations are higher quality because we have to THINK longer and harder about our questions and answers. Again, this is done for fun as sort of a virtualized late-Sunday-night or mid-week beer-sipping convos about ANYTHING and everything for a bunch of late-work-life, semi-retired and fully-retired engineers gabbing around a digital firepit.
Back in the heyday of local Calgary BBS'es, there were home setups that had 64-phone lines coming into a suburban HOUSE that were quite inexpensive to run even under AGT business rates! Membership fees were typically a yearly membership ($80 CDN) or a monthly $6 to $10 CDN a month where you could pick from a MASSIVE TEXT MENU of downloadable files and subjects ranging from programming source code to imagery and scientific stuff to esoteric subject matter! I think the whole BBS era ran from about 1983 to about 1994 when the first ADSL modems came out and dial-up just died right-out except for rural and deep suburban places that could NOT get high-speed internet! I had 10 Megabits per second ADSL in 1994 with AGT in Calgary and 10 Gigabits Per Second Fibre-based Internet now with Telus in Metro Vancouver in 2025! That shows just how fast Ethernet Frames and TCP/IP packet speeds have increased in just a few decades! If I want to, Telus now offers 100 Gigabit fibre to the home in some parts of Vancouver!
V
I gave up my BBS mid 90s partly due to y2k. But mostly due to Fidonet dying. Usenet just wasn't the same, and Fido Echomail via Usenet was a bitch to moderate.
There used to be a protocol stack for OS/2 that allowed people to connect to your BBS via TCP/IP. It looked like a modem to your BBS software. At the time you needed a static IP, but now you could use dynamic DNS. Local software was the same DOS BBS/Mailer that we ran since the late 80s. By that point though I wasn't hosting any longer, I just ran a private node and pulled the Fido groups that interested me.
Another factor, IMHO: OS/2's GUI insisted on using a different API that had no real advantages over the Windows API; it just added more layers of complexity. The driver layer in OS/2 1.1 was pretty awful too, with lots of very poorly documented complexity in implementing a display adapter driver.
Ah but the Workplace Shell was (and is) far superior to the Windows GUI. It could be fickle (having only one message queue prior to Warp 4 was a silly decision in my opinion) but object oriented desktops are great once you understand the concept. For instance cc:Mail was integrated into the desktop really nicely(*) and you could do amazing things with REXX and WPS.
Almost any application could be implemented as an extension of the shell.
(*)Apart from some stability issues which I blame on Lotus, not IBM.
Somewhat related YouTube : the development disaster behind macOS https://www.youtube.com/watch?v=5fD5q_LShdY
There's a cameo there by IBM re. Taligent, though apparently some of the screwups there came in from the Apple side of things. And how Apple screwed things up with Copland is very intriguing, management-wise.
An interesting side note with Taligent (i.e. Pink), the object-oriented operating system that Apple and IBM were co-developing: THEIR CODE set in motion the start of JAVA and Javascript, and two HUUUUUGE names coming out of it that I remember were two Canadian guys from Sun Microsystems and Oracle - James Gosling, who invented JAVA, and Mike Duigou, an UTTER GENIUS assembler coder with 30+ patents who did stuff for Adobe too! James started the mainstreaming of object-oriented programming above and beyond what NeXT started! Mike Duigou's code started the peer-to-peer networking revolution and object-oriented peer-to-peer communications, which ended up starting the social media revolution, including the codebase that ended up in Facebook and even BitTorrent! Those two CANADIAN guys literally helped start ALL of the modern internet object-oriented data storage and peer-to-peer/social media communications revolution we see today!
V
> one of the main single points of failure of OS/2 was its complete lack of printer support
Oh come on!
We were fortunate enough to have as an IT contractor the extremely pleasant chap who wrote the OS/2 printer driver 'system'.
Wish I could remember his name, but it was a Long Time Ago...
You had to buy an old Epson FX-80 dot-matrix printer, which worked with EVERYTHING, in order to get printing on OS/2! For its time it was an AMAZING piece of technology which was a LOT cheaper than the fancy and FAST line printer or laser copier that IBM was selling for like $50,000 USD as a networked printer system! Epson sold a LOT of FX-80 dot-matrix printers to DEC VAX, IBM mainframe and UNIX or OS/2 shops back in the day, because so many people were writing custom hardware/software drivers for it that you could download the source code from various BBS'es!
V
It did have printer drivers and a working printing system but for example my HP 690C which was a massively popular inkjet wasn't supported. In Warp 3. Perhaps HP shipped a driver later.
The problem is said to be they didn't work with vendors. MS on the other hand even physically sends people to help some vendors.
Not really. The operating system without printer drivers was BeOS. OS/2 was fine provided you looked up driver support *before* you bought a printer.
Too many people didn't realise or ever accept that running a non mainstream OS requires a bit of planning, rather than just buying any only rubbish in the shop and expecting it to have drivers, and for the drivers to work.
I'd accept that driver support was annoying elsewhere, especially decent display drivers for a few years, but printer drivers weren't a particular issue.
I recall working for Big Blue and receiving a ThinkPad with Win95; rather than handing it over to IT for an OS/2 image, I just kept the Win95 platform and installed Lotus Notes on it.
Win95 worked fast and perfectly, but Big Blue continued to flog the dog that was OS/2 until it eventually died.
Good riddance to it
I am fairly sure that was RBC (aka Royal Bank of Canada), which had like 100,000+ copies of OS/2 running as ATM operating systems and for internal banking operations, EVEN into the 2000s! Then they switched to Windows XP, which I STILL see running on some machines in 2024/2025!
That was at Microsoft's insistence. The IBM guys knew from the outset that it was a bad idea, and put a hardware mitigation into the PS/2.
> MS obsessed over making sure the mouse pointer never froze
Fat lot of good it does if you can't actually click on anything! It's the hope that gets you.
-A.
Seem to remember MS resorting to a hell of a lot of FUD, which the media ate up, and also some anti-competitive practices they got fined for many years later. A significant part of the story too.
OS/2 was a better OS than both Win 95 and Win NT 3.1/3.51 for a time: it ran faster and smoother than both if you knew which hardware it liked. I built a machine from the ground up in 92/93 (DFI motherboard, Intel 486, 16 MB RAM at £400 alone, S3 805, 80 MB HDD and a Creative Labs SoundBlaster/CD-ROM combo) and it was rock solid (this was v2.1/2.11). Win 95 came out with a lot of fanfare for sure, and Warp 3 soon after (September versus August), and it had Internet support; I seem to remember a browser was an optional extra on Win 95. NT 3.1 was a resource hog, not fixed till 3.51.
The biggest loss from OS/2 was the incredible object-oriented Workplace Shell, which you could use to build whole systems with before any coding was needed. Win95 was prettier, but my goodness, what a triumph of form over function.
"OS/2 was a better OS than both Win 95 and Win NT 3.1/3.51 for a time - it ran faster and smoother than both if you knew which hardware it liked"
OSs were originally developed by H/W manufacturers for their own H/W, and OS/2 belonged to that world view. Unix had already changed things: an OS was now becoming a common S/W platform over a widening variety of H/W. In that world view, not needing to know which H/W it liked was a merit. That made OS/2 a better OS only for a world that was already passing into history.
In this context I meant HW that was known and had optimised drivers, i.e. the video card with an S3 805 chip was well supported, and the Creative Labs sound card and CD drive were supported out of the box. The DFI motherboard was a higher-quality unit for its time, and even having a good-quality ALPS floppy drive helped while working through the numerous install floppies. OS/2 demanded good-quality hardware; no-name cheap crap might run, but if it was outside of specified standards then it would probably fail.
"if it was outside of specified standards then it would probably fail."
Which standards did the HW fail to adhere to, if it worked perfectly fine under DOS and Windows? Mechanical? Electrical?
The real reason was that OS/2 driver support from many companies was sometimes just a one-man afterthought, and as such the drivers were often buggy and very limited.
My Cirrus Logic 5426 and S3 968 support was poor both on 2.1 and Warp 3, and my Creative Labs AWE32 never received model specific drivers at all. (it "worked" as an SB16, but e.g. MIDI was limited to AdLib FM synthesis and no support for Soundfonts and such)
Things got way better with SciTech drivers when Warp 4 moved to a new graphics driver model (GRADD) - if only proper HW support had been there from the early 90s...
> if you knew which hardware it liked
That was quite likely a huge factor in slow adoption by consumers. The early PC ecosystem was a wild west, especially where major brands were concerned. Buy a Sony, or an HP, and it might not like whatever hardware you tried to put into it. Paradoxically, getting no-name PCs was less problematic: they couldn't dream of imposing their own "standards".
Now, with IBM, we all knew they had gotten burnt with the PC's openness. So it was hard to look at the PS/2 and OS/2 combination and credit it with something other than an attempt to re-wall their garden.
That's why MS "double-crossed" them on OS/2: they didn't want to hand that power back over to IBM.
Just a point of note - Warp 3 somewhat predated Windows 95. What came later was Warp Connect, with built in networking, standard Warp 3 was dialup only.
Warp 3 didn't look that bad, its defaults were certainly prettier than those of OS/2 2.1 which was designed to be easy on the eye but majored in grey.
"Well you would when only high-end systems could render 256 colours"
You have your years badly mixed up there. The Amiga had 4096 colours from 1985. True colour (24-bit) graphics for the PC was first introduced back in 1984.
OS/2 2.0 was released in 1992, and every new computer had at least VGA - that's 256 colours. My cheap-and-slow Trident 8900 SVGA bought years earlier had 256 colours on higher resolutions and it was "low-end fodder". (I should have opted for Tseng Labs...)
In the same year (1992) Cirrus Logic came out with their quite affordable true color cards.
Miro, #9, and several other high-end cards (IBM XGA as well) with 24-bit colours had been available for years by then.
As Sandtitz says you've got your years mixed up. In many ways the Amiga parallels OS/2 - innovative fully multitasking system, both with REXX.
Everyone knew the Amiga was dead when DOOM was released in 1993, but at that point the Amiga was already a dead man walking. In 1985 it had a huge advantage over PC-based systems. By 1990, 256-colour games were pretty much mainstream on the PC, PCs had sound via add-on cards, there were platformers with acceptable performance, and PC CPUs were outclassing the 68000 that the Amiga wouldn't move on from for the majority of its user base.
True colour was available even for average systems in 1989; it just wasn't standardised or particularly high resolution. By 1992 it was standard and widely available.
The Amiga was kept alive in the late 1980s by the Video Toaster, a linear video editing system in a box which NO ONE ELSE HAD at any price! For less than $10,000 USD you could buy yourself an NTSC or PAL television studio in a box that let you do corporate video cheaply, quickly and with reasonable quality! Our own Betacam SP/HDCAM/Pinnacle-3D/Quantel editing suite was originally $1.5 million, and the various versions of the Video Toaster made it all redundant once we put them into our trucks.
In between computer programming and IT work, I did a lot of high-end broadcast video work including EFP camera and editing work for corporate video and even big league sports. Even though we had lots of EXPENSIVE component RGB/YCbCr broadcast gear running into the many millions of USD, we STILL used the Video Toaster running on an Amiga for high-end aerospace company video and world-wide broadcast sports events! We even put a radiation-shielded Amiga and a Video Toaster INTO SPACE for a short period of time doing custom 29.97 fps live real-time 480i resolution space video imaging and broadcast!
Nowadays, Blackmagic's DaVinci Resolve running on an AMD Threadripper system can do live cutting and mixing for 4K/8K broadcast, and since I'm going to the Cortina 2026 Olympics for camera/editing work, I expect we will use systems that fit on workstation-grade laptops and can stream LIVE in DCI-4K and DCI-8K resolutions. We will be using 8K-resolution Canon R5 IIs (or R5 IIIs by then) as the video cameras, so it's just a really SMALL amount of gear compared to the MANY TRACTOR TRAILERS we used to use for such live events!
While this was the end of OS/2, Windows compatibility is proving to be a vital winning point for Linux with Wine and/or Proton installed. Wine and Proton have proven fully capable of running virtually anything I've required of them on Linux, with the caveat that games take much longer to initialize, because Wine/Proton needs to map all that Windows code and data to Linux data structures before it can actually run it. Slick "not an emulator" approach, but it doesn't make for the fastest startup times!
That startup-time issue is the sole reason Ubuntu can't just "take over" from Windows in the real world yet. You're stuck with web interfaces for so much, but then again, so are a lot of other users out there, especially those on tablets, Chromebooks and cell phones. (Face it: most cell phone "apps" are just a pre-downloaded web UI skin that "runs" on Android but does all the work on the server, not the client.) Most users aren't willing to wait as long as I do for a program to load; they think it means their computer is "broken" and start clicking and re-clicking things, making the situation much worse. I know what a sweatshop job Wine and Proton are actually doing, so I wait. And wait. And wait.

But it runs in 90% of cases. Only "Ori - Will of the Wisps" and "Assassin's Creed Unity" have proven incompatible, and I suspect the issues with the latter could be resolved by investing in a PlayStation-compatible controller, as Xbox controllers are only "partially supported" by that game, and it is the controller giving me grief no matter what I do. Twelve mouse-and-keyboard games and eleven controller games are installed and work fine, including big-name titles like Witcher 3, Baldur's Gate 3, Cyberpunk 2077, Elden Ring, Kingdom Hearts HD 1.5+2.5, Marvel's Midnight Suns, The Outer Worlds, Red Dead Redemption 2, and Starfield. I have another 20 games in my "A-LINUX-PLAY-LATER" folder that have also been tested and verified as working, ready to be reinstalled and played in the future.
Nope. You do NOT need Windows for Steam gaming anymore; I'm 100% happy with my Ubuntu 24.04.1 installation as a replacement for Windows 11 Pro. Bye, Bill. Bye, Satya. Have fun hoovering up users' data; you'll not have mine any more.
Don't forget: I've lived in *nix land since the mid-80s when BSD4.2 on a VAX was a thing.... Windows has always been "foreign" to me and what I used to play games and run business software that work required. It isn't even that to me any more. I am free!
the sole reason? really?
I am still trying to figure out how to set touchpad sensitivity on Ubuntu 24.04, on a laptop that supports it. OK, maybe I am a dimwit, but I am in good company: that question gets asked all the time. And a lot of people say: disable it when you have a real mouse plugged in. Ah, the joys of seeing nerds argue about libinput vs synaptics drivers!
Or trying to suspend to hibernation. Apparently I get stuck in shallow hibernation, with significant power draw. Not cool when your battery is sucked dry on repeat. There are tons of contradictory articles on what to do, but a lot of answers end up: "well, no great problem, I mostly use my laptop while plugged in".
Does this ever get solved? No, but we do get to endure countless debates about the finer points of X vs Wayland and the subtle improvements from release to release in the bling factor of the desktop environments.
Again, I don't claim a really high level of knowledge. But I understand most of the architecture more than most users and am quite happy on the command line. I just object to having to tinker for hours at a time to tune UI behavior, based on tribal knowledge of very uncertain authority.
Love Linux, as a server OS fully manageable from the CLI. As a desktop? Well it is better than Windows 10... And on the plus side, I agree that Steam works A-OK on it so it has grown up to be a very capable gaming platform as well.
I remember that period very distinctly, because I had just sunk the equivalent of a used Porsche (or a new Golf) into an IBM PC-AT clone with an EGA graphics adapter, basically two years' savings from freelance programming work while I was studying computer science.
I then wrote my own memory extender so my GEM based mapping application could use extended memory, while GEM and DOS were obviously tied to x86 real-mode. It basically switched the 80286 into protected mode for all logic processing, and then reset it into real-mode via a triple fault to do the drawing bits.
It worked, but every PC had its own little differences in how to trigger the reset or do the recovery, because the mechanism might have been IBM intellectual property (IBM used the keyboard controller to toggle reset).
Anyhow, having worked with PDP-11 in the form of a DEC Professional 350 and with VAX machines, I was utterly bent on overcoming the CP/M feel of my 80286, and also ran Microport's Unix System V release 2 on the machine, which included a free Fortran compiler that unfortunately produced pure garbage as code.
It also included a working DOS box, long before OS/2 could deliver that, using the same reset magic I'd exploited for my personal extender. I ran a CP/M emulator on that with WordStar inside, just for the kicks of running CP/M on a Unix box!
Then the Compaq 386 came along. One even arrived at my doorstep: the dealer I had purchased the 80286 from came to my house, rang the bell and told me he had a 386 for me.
You see, when these machines were the price of a new car, house deliveries up the stairs and setup of the machine were actually part of the service...
Can you imagine just how painful it was to tell him that I had not ordered it? And finding out that in fact my father had ordered it for himself? Including a full 32-bit Unix that actually worked like it would on a VAX?
BTW: that Compaq wasn't slow. Perhaps that ESDI HDD wasn't super quick, but the RAM was 32 bits wide and way faster than anything on my 8 MHz 80286. And Unix apps don't typically block on physical disk writes.
Anyhow, finally going on topic here:
OS/2 was an OS tailor-made for the Intel 80286. The 80286 was very similar to a PDP-11 with its discrete MMU, which kept processes and users apart by allocating their code and data into distinct smallish memory segments (16-bit offset addresses) and protecting them from unwarranted access. Unless your program was permitted access to a memory segment, any attempt to load and use it would result in a segmentation fault via hardware and program termination by the OS exception handler.
The 80286 went a bit further yet and allowed for a full context switch between processes via call and task gates, putting almost the entire logic of a process switch into microcode which could be executed via a single call or jump.
That was continued on the 80386 and caused an overexcited young Linus Torvalds to think that writing a Unixoid OS couldn't be all that difficult and would fit on a page or two of code!
It wasn't until Jochen Liedtke of L4 fame carefully dissected just how horribly slow those intrinsic microcoded operations were that Linux gained the performance which enabled its wider adoption, by ditching all those Intel shenanigans and eventually discarding segmentation entirely with the transition to 64-bit.
The 80286 didn't have that option, nor did OS/2.
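The segment-protection model described above can be sketched in miniature: every memory access goes through a descriptor carrying a base, a limit and permission bits, and anything out of bounds faults instead of touching memory. A toy Python model for illustration only (all class and method names are my own invention, not real hardware behaviour):

```python
# Toy model of 80286-style segment protection: accesses are checked
# against a descriptor's limit and permissions, and violations raise
# a fault instead of reaching memory.

class SegmentFault(Exception):
    pass

class Descriptor:
    def __init__(self, base, limit, writable):
        self.base = base          # start of the segment in "physical" memory
        self.limit = limit        # highest valid offset within the segment
        self.writable = writable  # write-permission bit

class Memory:
    def __init__(self, size):
        self.cells = bytearray(size)

    def read(self, desc, offset):
        if offset > desc.limit:
            raise SegmentFault(f"offset {offset:#x} exceeds limit {desc.limit:#x}")
        return self.cells[desc.base + offset]

    def write(self, desc, offset, value):
        if offset > desc.limit:
            raise SegmentFault(f"offset {offset:#x} exceeds limit {desc.limit:#x}")
        if not desc.writable:
            raise SegmentFault("write to read-only segment")
        self.cells[desc.base + offset] = value

mem = Memory(0x20000)
code = Descriptor(base=0x0000, limit=0x0FFF, writable=False)
data = Descriptor(base=0x1000, limit=0x7FFF, writable=True)

mem.write(data, 0x10, 42)          # fine: in-bounds write to a data segment
print(mem.read(data, 0x10))        # prints 42

try:
    mem.write(code, 0x10, 0x90)    # fault: code segment is read-only
except SegmentFault as e:
    print("fault:", e)

try:
    mem.read(data, 0x9000)         # fault: offset past the segment limit
except SegmentFault as e:
    print("fault:", e)
```

The real 80286 of course did this in hardware via descriptor tables, but the shape of the check, limit first, then permissions, is the same.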
Linus grew great by acknowledging, enabling and encouraging others to do better than he did. Perhaps the size of his early mistake burned that lesson in extra strong.
Whatever you say about the politics, IBM and their ill-fated Micro Channel machines have very little to do with the fate of OS/2.
It was doomed by being an OS designed for the 80286 and its 64K segments, very similar to a PDP-11 and its various operating systems.
32-bit CPUs and virtual memory made for a completely different OS design and Microsoft clearly understood that it called for a complete restart.
They snatched Dave Cutler to get their hands on one of the best virtual-memory operating systems available at the time that wasn't Unix.
And the rest is history.
The so-called 32-bit versions of OS/2 weren't really a 32-bit OS. To my understanding they were a lot like DOS extenders, in that the kernel and many base services mostly remained 16-bit code but allowed 32-bit apps with virtual memory.
A re-design of OS/2 for 32 or 64 bits wouldn't have been OS/2, because the segmentation model and its hardware security mechanisms were really at the heart of the OS.
I bought Gordon's OS/2 book when it came out, read it, and it spelt out its tight integration with the 80286, and thus its doom, on every page. I chucked it into the recycling bin decades ago. With some lingering regret, since I had spent my fortune on the wrong box, but boy am I glad I wasn't in Gordon's place and didn't misspend a career!
I had to read the details on the 80286 architecture to make my extender work.
And I remember reading about those tasks gates and call gates and feeling a pull somewhat similar to what Linus must have felt.
I also remember reading about the Intel iAPX 432. Intel has a penchant for designs that look great on paper.
But by then I had an 80486 running BSD386 and/or various "real" Unix variants, as well as various closed source µ-kernels GMD/Fraunhofer was developing at the time. And my focus was on getting smart TMS34020 based graphics cards to work with X11R4, so I wasn't biting.
I also had access to Unix source code, so why should I settle for something amateur?
After finishing my thesis porting X11R4 to a µ-kernel with a Unix emulator built on Unix source (thus unpublishable), I actually got a job where I was to create a TIGA graphics driver for OS/2 so it could run the PC as an X-terminal. Got the SDK and went diving deeply into OS/2... for a month, after which I was called away to work for my dad's company.
I was glad to go in a way, because even if the technical challenge was interesting and so called 32-bit variants of OS/2 had emerged by then, the smell of death was too strong.
DOS boxes whetted my appetite for VMs and containers, and I've built my career on crossing borders or merging operating systems of VMS and Unix lineage, with far fewer µ-kernels than I ever thought likely. Nor did scale-out operating systems like MOSIX ever really take off, or clusters ever become significant at the OS level, except in niches like NonStop.
OS/2 to me is the iAPX 432 of operating systems: dead on design.
How Intel then crippled the 80386 to not support full 32-bit virtual machines is another story.
As is how Mendel Rosenblum, Diane Greene and team overcame that limitation via the 80386SL/80486SL SMM (system management mode) and code morphing.
Intel wasn't amused and it's nothing short of irony how Gelsinger came to head the company that destroyed Intel's CPU business case.
I see where you're coming from, but if you're going to nitpick, I seem to remember even Intel x64 is still segmented in theory. It's just that everyone sets up one extremely large segment because there's little point in anything else, and yes, the ring system present in 386 protected mode has been substantially trimmed for x64.
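The "one extremely large segment" trick can be made concrete. Here is a small Python sketch that packs a standard x86 GDT segment descriptor; with base 0, a 20-bit limit of 0xFFFFF and 4 KiB granularity, one descriptor covers the whole 4 GiB address space, the classic flat model. The encoder function name is my own invention; the byte layout is the documented x86 format:

```python
# Pack an 8-byte x86 GDT segment descriptor (32-bit base, 20-bit limit).
# The flat-model descriptor shows why segmentation is effectively unused:
# one segment spans all of memory, so segments add nothing.
import struct

def encode_descriptor(base, limit, access, flags):
    """Encode base/limit/access/flags into the 8-byte descriptor layout."""
    assert limit <= 0xFFFFF and base <= 0xFFFFFFFF
    return struct.pack(
        "<HHBBBB",
        limit & 0xFFFF,                        # limit bits 0-15
        base & 0xFFFF,                         # base bits 0-15
        (base >> 16) & 0xFF,                   # base bits 16-23
        access,                                # present/privilege/type byte
        ((flags & 0xF) << 4) | (limit >> 16),  # flags + limit bits 16-19
        (base >> 24) & 0xFF,                   # base bits 24-31
    )

# Flat 4 GiB ring-0 code segment: base 0, limit 0xFFFFF, 4 KiB granularity.
flat_code = encode_descriptor(base=0, limit=0xFFFFF, access=0x9A, flags=0xC)
print(flat_code.hex())  # ffff0000009acf00, i.e. the well-known 0x00CF9A000000FFFF
```

That final value is the one that turns up in virtually every x86 OS bring-up tutorial, precisely because nobody bothers with anything other than the flat model any more.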
Anyway, OS/2. No, it's not 16 bit. OS/2 1.x was entirely 16 bit. It also had some utterly horrific re-entrant real mode shenanigans to support a DOS box, which is best not used.
OS/2 2.x, and in fact all future versions of OS/2, do have a kernel with a lot of 16-bit code in it and convoluted use of the x86 ring system. From what I remember, if you're operating at ring 0 you're still using the 16-bit segmented memory model. However, that's the kernel itself; it's entirely possible to write a number of device driver types almost entirely in 32-bit code. Even then there are oddities such as XFree86 for OS/2, which managed to do direct port I/O from user-mode programs at ring 3 at high speed via a special driver and a lot of call-gate and thunking tricks. Check EDM2 for details if you're interested.
OS/2's kernel : '16 bit'. Many drivers can be 32 bit. I really should read the architecture books to see how the VDMs worked, but that's going to be 32 bit code.
Gpi : Has always had a 32 bit API even in the 1.x days. 16 bit implementation up to and including OS/2 2.0. 32 bit from OS/2 2.1 onwards.
Windowing : 16 bit implementation up to and including OS/2 2.1. 32 bit from Warp 3 onwards.
GRADD drivers for easier video driver development : Warp 3 onwards.
WPS : It's user mode code. 32 bit from its inception.
OS/2 vio (text mode windows, direct keyboard and mouse control in those windows) : 16 bit in all OS/2 versions, unless you count OS/2 PowerPC where it is fully 32 bit.
So yes, OS/2 is pretty much a 32 bit OS, but the bitness is not what killed things. If you want to talk tech rather than critical mass of software, or funding/supplying development kits cheap or free :
The display driver model was much easier to code for from GRADD onwards, but was fundamentally a botched design due to politics. It could have been streamlined from the start.
The synchronous input queue. That's synchronous, not single. Have an app that won't take messages off the message queue and interaction stops, even if code keeps running. Windows NT did not make this mistake when preserving Win16 compatibility.
So many technologies that worked to a large extent but had limitations or bugs. The WPS. OpenGL. The dead at birth OpenDoc.
Memory. Not an issue in the early to mid nineties, but OS/2 never supported PAE or x64 mode, which is a real issue now.
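The synchronous input queue point above can be illustrated with a small simulation: assume one shared queue where the system waits for each app to consume its event, versus NT-style per-app queues. A hypothetical Python sketch (not OS/2's actual code, and the function names are invented):

```python
# Toy comparison of OS/2's synchronous input queue (SIQ) with NT-style
# per-application queues. In the SIQ model, one app that stops pulling
# messages stalls input delivery for every other app.
from collections import deque

def siq_dispatch(events, hung_app):
    """One shared queue: dispatch stops dead at the first event for a hung app."""
    delivered = []
    queue = deque(events)
    while queue:
        target, ev = queue[0]
        if target == hung_app:
            break                    # app never takes the message: input freezes
        queue.popleft()
        delivered.append((target, ev))
    return delivered

def per_app_dispatch(events, hung_app):
    """NT-style: each app has its own queue; a hung app only loses its own input."""
    delivered = []
    for target, ev in events:
        if target != hung_app:       # other apps keep receiving events
            delivered.append((target, ev))
    return delivered

events = [("editor", "key"), ("hung", "click"), ("editor", "key"), ("terminal", "key")]

print(siq_dispatch(events, "hung"))      # only the first event gets through
print(per_app_dispatch(events, "hung"))  # everything except the hung app's events
```

In the SIQ run, the editor's second keystroke and the terminal's input are stuck behind the hung app's unconsumed click, which is exactly the "mouse moves but nothing responds" failure mode described above.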
What did it in the market I was in at that time, was the software pirating.
Software - very especially games - could be easily copied for the "IBM clones". Cue all the young boys convincing parents to buy a cheap Windows machine "for education". That led to professionals and small business folk seeing the affordable potential.
From then there was no holding it back.
As an aside, that's also what did for the Archimedes. A good machine, heavily punted at the time by Acorn with a PC emulator. Until enough people realised you could buy a real PC for half the price.
Don't forget the marketing! I saw this in PC Magazine (I think?): IBM had bought something like a *50*-page ad for OS/2, and like half the pages were "Well, it had this bug but it's fixed." Obviously fixing bugs is good, but that is not what you put in advertising you're paying for to get someone to buy your product, LOL. Seriously though, I used OS/2 for a bit in the "OS/2 Warp" days, between Windows for Workgroups 3.11 and Win95 (I had switched to Linux already by the time Windows 95 came out). But the advertising was awful and I can't imagine anyone ever buying it based on how it was marketed.
This was a great article, and I always appreciate a fresh new story regarding OS/2 history! I especially enjoyed reading the many comments and various viewpoints from all the participants here. Some got good up votes while others not as much. However, one thing is clear to me, and that is most of the people here have a very vast knowledge of these old systems which is nice to see.
As I read through all these excellent comments, it really makes you appreciate the value of human experience and contribution to a topic like this. I say this because of the current world we are living in, with the rise of artificial intelligence and ChatGPT. Reading through all these comments and the accumulated knowledge from all the real-world people and their experiences makes you realize that this is a perfect example of something very unique that artificial intelligence will never be able to duplicate or replace.
So kudos and thanks to everyone.
The article implies that there is some connection between the two ("OS/2 NT was rebranded as Windows NT"), but there isn't. (The linked article has correct info.)
The only thing in common was that NT for a while had an OS/2 subsystem, capable of running 16-bit text mode OS/2 applications.
(Similarly NT has a Win32 subsystem, capable of running Win32 applications.)
See https://learn.microsoft.com/en-us/sysinternals/resources/inside-native-applications
We used OS/2 v2, v3 and v4 a *lot*, on account of it being for many years the only 32-bit operating system able to run Windows applications.
It was better at this than the first versions of NT.
Then VMware released their beta in 1999, which for a lot of use cases was better and more stable.
OS/2 was around for a long time especially in financial organisations. And don't forget it was powering cash points until at least 2020.
Personal experience of it started in 1993, so already v2 by then, oddly enough working for a bank. It was so bloaty that IBM did us a custom version with a load of features taken out so it would run on their ValuePoint range of PCs. As a server OS it wasn't too bad, it did what it said on the tin, but it wasn't really suitable for the desktop. I'm sure some will remember the desktop with the sliding front cover; I think it was the 330 range.