
Software vendors either played ball with Microsoft
“software vendors either played ball with Microsoft or didn't get access to Windows or Microsoft Office.”
Feb 1998: Who is Microsoft's Secret Power Broker?
Today, thanks to Android and ChromeOS, Linux is an important end-user operating system. But, before Linux, there were important Unix desktops, although most of them never made it. Way back in 1993, I oversaw a PC Magazine feature review on Unix desktops. Yes, that's right, before I was a Linux desktop user, I was a Unix user. …
Quote: "....a big fan of Linux containerized desktop applications, such as Red Hat's Flatpak...."
Well.....fine.....but just make sure that:
(1) There's PLENTY of room in /usr ......
(2) ....and if you remove a Flatpak, take care to manually remove all the HUGE unused software in /usr which the removal failed to remove.
For real. I went to install a software package, wasn't paying attention, and inadvertently grabbed the Flatpak version, which weighed in at 2 GB compressed, 3.5 GB uncompressed. The same package in native deb format was less than 100 MB. That's one hell of a difference in both disk space and bandwidth!
It's a valid complaint, but when, to pick a couple of real examples, the packages you want to download are current in Flathub and literally years out of date from the distro, I'll pick the Flatpak version every time. And given that I am currently spoiled with disk space (thank you manufacturers who briefly over-produced SSDs), I don't care about the extra file space.
Yes, the first Flatpak will get the whole base image, but subsequent packages will often depend on some of the same images and utilize deduplication through OSTree. Nvidia packages tend to take a lot of space though, as they're "special".
If you mix native and Flatpak it'll still take up some extra space of course, though less than you'd think because of the deduplication. If you use an immutable distro that mainly uses Flatpak, the sum total is not too different.
I take a mixed approach on openSUSE Tumbleweed, but I'm fortunate to have enough space that I don't care. I just enjoy that I don't have to grant root access to install third-party apps and they're kept separate from my "OS filesystem". But it's not everyone's cup of tea and that's fine.
While I observed the Unix Wars from afar, using a NeXT box and various SGI boxen at various jobs, I had to chuckle at the author's assertion that we are not seeing the same thing in the Linux world, particularly given his mention of the containerization of Linux packages via Snap or Flatpak.
I mean....there are two. Three if you want to make a handwaving argument about AppImage. So....three versions of containerization, each incompatible with the others. Only one (AppImage) is platform agnostic. That is to say, while you CAN use Flatpaks on Ubuntu, you have to go twiddly fiddly with the command line for a while in order to set up Ubuntu to use Flatpaks, as Ubuntu does NOT do that for you: they support their own in-house containerization scheme, namely Snap. And the same holds true for Red Hat and SUSE: they have native Flatpak support, but you have to go twiddly fiddly with the command line for a while to set up Canonical's Snap containerization scheme.
Now you might say...well...let me use AppImage instead since it's platform agnostic. Well, you have to right-click your AppImage program icon and select "Run" in order to launch the program. Sorry....this is the 21st Century. Here is the process we have had since the original Apple Macintosh in 1984: download the program, an icon is placed on the desktop, single- or double-click the icon to launch. Only in the nerd land of Linux is it acceptable to force a user to learn how to launch a program all over again.
And let's not even discuss that before the Linux Wars of 3 different containerization schemes we had the Linux Wars of RPMs vs DEBs, and the continuing Linux Wars of DEs such as, but not limited to, KDE, GNOME ( that's GNOME 2 vs GNOME 3, so a war inside of a war ), Xfce, Cinnamon, MATE, LXQt....and the list could go on ad infinitum, ad nauseam.
Oh...right....the ChromeOS DE, from which the majority of the Windows 11 DE was copied. And speaking of ChromeOS and Chromebooks, here is THE MOST successful Linux desktop in history. And it can't even run Linux programs natively, because it only uses the Linux kernel, not the entire Linux desktop and userspace bits that would make it a "real" Linux desktop. BUT....as Linux nerds will retort...YOU CAN RUN LINUX PROGRAMS IN A CONTAINER CALLED CROSTINI. And my reply is....LINUX WARS !!! Now we have 4 Linux container schemes: AppImage, Snap and Flatpak, which only run on actual Linux desktops, and the fourth, Crostini, only running on Chromebooks, because ChromeOS actually isn't a Linux platform even though it uses the Linux kernel.
So....in the end....how are the Linux Wars of today any different from the Unix Wars of yesterday other than the closed source nature of Unix back in the day vs. the open source nature of Linux today?
And it can't even run Linux programs natively because it only uses the Linux kernel
I take it you don't use a Chromebook? Despite all of your screaming that it's something else, running Linux apps is still as simple as Settings > Advanced > Developers > Linux Development Environment > On.
I taught an entire Linux programming course at a major university on my Chromebook. Because... It's Linux. Same commands, same package system, same apps.
No, Jumbotron64 is correct.
The Linux you describe is run in a virtual machine that runs on the main chrome OS.
If that counts, then you could also say that Windows runs Linux because of VMware, Linux runs FreeBSD because of KVM, FreeBSD runs Linux because of bhyve, etc.
Next time you start up your Chromebook environment, type "uptime". It will not be the same uptime as the host.
In fact, Android Apps are run the same way - the android-tweaked kernel and userland all run in a VM too.
P.S. Not my downvote!
The Linux development environment runs in a container, not in a VM. It's isolated from the main Chrome OS in the same way as daemons launched by systemd can be made isolated from each other and the rest of the system. The kernel allows for the creation of processes with a private view of the filesystem, network interfaces, even the process ID namespace, almost anything. So even though you can't see the Chrome OS processes, filesystem etc. from inside the development environment, you could have seen everything from the Google OS side -- if only Google had provided a proper terminal in it. But there is no virtual machine, no second kernel running inside. See, for example, the docs for systemd-nspawn or any container solution to see how it works.
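For anyone curious what that kernel facility looks like at the API level, here is a minimal, purely illustrative C sketch of a PID namespace (not how ChromeOS itself is put together, and it needs root or CAP_SYS_ADMIN to run):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* The child gets a private PID namespace: inside it, it sees itself as
       PID 1, much like the init process of a container. No VM, no second
       kernel -- just the one kernel giving the process a different view. */
    static int child(void *arg)
    {
        (void)arg;
        printf("inside the namespace I am PID %d\n", (int)getpid());
        return 0;
    }

    int main(void)
    {
        static char stack[1024 * 1024];
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWPID | SIGCHLD, NULL);
        if (pid == -1) {
            perror("clone");          /* usually means insufficient privilege */
            exit(EXIT_FAILURE);
        }
        printf("from the host's point of view it is PID %d\n", (int)pid);
        waitpid(pid, NULL, 0);
        return 0;
    }

Add CLONE_NEWNS, CLONE_NEWNET and friends and you have the same isolation recipe that systemd-nspawn, LXC and the rest build on.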
"Well, you have to right click your App Image program icon in order to select "RUN" in order to lauch the program."
@Jumbotron64
I just downloaded the Inkscape 1.3 appimage. One time only I had to right-click, select Properties and the Permissions tab, and tick the little box that said 'Allow this file to run as a program'. Then I can double-click the icon(*) and start Inkscape forever after.
Strikes me that some kind of user intervention should be needed before running a random downloaded file, but I think perhaps a popup 'set permission' box might be a good idea like on Winders.
* I prefer to make a simple inkscape.desktop file and stick it in ~/.local/share/applications/ myself so that the application appears in the menu and I can add a launcher - but that is probably too much 'twiddly fiddly' by your definition.
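For what it's worth, such a desktop entry is only a handful of lines. Roughly like this - the Exec path and icon name are placeholders for wherever you actually keep the AppImage:

    [Desktop Entry]
    Type=Application
    Name=Inkscape (AppImage)
    Exec=/home/me/Apps/Inkscape-1.3.AppImage %F
    Icon=inkscape
    Categories=Graphics;
    Terminal=false

Drop it into ~/.local/share/applications/ as described above and most desktop environments will pick it up in their menus.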
Maybe the article was a bit too positive on this, but I would argue that there's a significant difference between having to write software for separate UNIX variants, and just a bit of fiddling to get it on the Linux variants. The little fiddling to get snap or flatpak set up, or making an AppImage executable, are way lower barriers to entry.
Linux may be quite polymorphic still, but this has far less of an impact. Everything is still largely interchangeable and compatible.
I just ran across this video, and this comment thread seems like as good a place as any to share it.
Thank you for that link. What a video.
As a programmer, I have found proof of why I should be happy to program in LotusScript because dear God, writing to the metal is an awful experience.
I'm certain that I will be going back to this video, if only to remind myself that there are people out there who are vastly more intelligent than I am.
Hands down the best 34 minutes 13 seconds I have spent since entering the world of Linux over 20 years ago and since first using a NeXT computer and assorted SGI IRIX machines even further back. This should be required viewing for anyone interested in Linux or already enmeshed in it.
Totally. "Everything is a file" is a terrible abstraction. Or even, "everything is a text stream".
I find many developers are incapable of representing things with anything other than character strings. Their idea of an API is to sprintf a bunch of variables into a structured string and then parse them back out later... You can see how that mentality would think that /proc etc. was a good idea.
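A tiny, made-up C illustration of the pattern being criticised - fields serialised into a string and parsed back out later, instead of just being passed around as a struct:

    #include <stdio.h>

    struct reading { int sensor_id; double value; };

    int main(void)
    {
        struct reading in = { 42, 3.5 }, out;
        char buf[64];

        /* the "API": flatten the fields into a string... */
        snprintf(buf, sizeof buf, "%d:%f", in.sensor_id, in.value);
        /* ...and parse them all back out again somewhere else */
        sscanf(buf, "%d:%lf", &out.sensor_id, &out.value);

        printf("round-tripped: %d %.1f\n", out.sensor_id, out.value);
        return 0;
    }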
SCO on the laptop was my choice partly because I could run development versions of Informix products on it, so I could support clients running Informix on SCO servers. I never met Interactive as a desktop product, but as Onix it provided my first Unix server; I believe they also ported the original AIX.
But let's look at "Does that [the Unix wars and need to compile multiple versions of applications*] sound familiar? That kind of thing is still a problem for the Linux desktop, and it's why I'm a big fan of Linux containerized desktop applications, such as Red Hat's Flatpak and Canonical's Snap."
Go and look at the download page for LibreOffice. At any time LO offers two versions for any platform, the leading edge and trailing edge versions. Check either of them. What options are offered for each version? For Linux there's 64-bit RPM and Deb. For Windows there's 64 and 32 bit. For Mac there's Intel and Apple silicon. That's right, there are no more versions offered for Linux than for Windows or Mac. Why is Linux considered to present more of a problem?
Now let's look at another staple on my desktop, Seamonkey. We have a choice of 64 and 32 bit Linux, 64 & 32 bit Windows and just x64 Mac with a choice of languages (it looks like macOS is the difficult one here, not Linux). They've even removed the RPM vs Deb choice because all that has to be done (and it's all the LO options automate for you) is copy the download over to /opt and extract it. The same method has been used for years for installing non-distro applications. What Flatpak and Snap are ostensibly solving is a non-problem, something that's never been a problem, a straw man. What they are very clearly doing is creating their own little walled gardens. What sounds familiar about them is that they're reviving the Unix wars for exactly the same reason the original wars were conducted - to conquer territory.
* In part the need to recompile was driven by multiple H/W architectures: DEC, HP, IBM, Intel, MIPS, SPARC, Zilog and various others. The only H/W choice at present is between 64 and, where it survives, 32 bit Intel** and Apple.
** And that itself is really an OS rather than a H/W choice.
Your argument breaks down at this point.
Can you download, install and run a Snap package on Red Hat or SUSE WITHOUT FIRST doing some twiddly fiddly on a terminal to download and install all the necessary files needed to download, install and run a Snap package on a Linux distro where Flatpaks are native?
The answer is manifestly no.
Can you download, install and run a Flatpak package on Ubuntu WITHOUT FIRST doing some twiddly fiddly on a terminal to download and install all the necessary files needed to download, install and run a Flatpak package on Ubuntu, where Snaps are native?
The answer is manifestly no.
Now...do you need to open a terminal, do some twiddly fiddly in said terminal to simply download, install and run a 32 bit Windows program on a 64 bit version of Windows OS?
The answer is manifestly no.
Now...do you need to open a terminal, do some twiddly fiddly in said terminal to simply download, install and run a 32 bit Intel MacOS program on a 64 bit Intel based Mac?
The answer is manifestly no.
Now...do you need to open a terminal, do some twiddly fiddly in said terminal to simply download, install and run a 32 bit Apple Silicon MacOS program on a 64 bit Apple Silicon based Mac?
The answer is manifestly no.
Here's a tricky one....
Now...do you need to open a terminal, do some twiddly fiddly in said terminal to simply download, install and run an Intel based MacOS program on an Apple Silicon based Mac?
The answer is....manifestly no....most of the time. Because, at least in my experience, the first Intel based MacOS program I am now currently running on my Apple Silicon based MacBook M3 Pro automagically downloaded and installed Rosetta 2, the Intel to Apple Silicon real time translator, before downloading and installing said Intel based MacOS program. And now every other Intel based MacOS program just downloads, installs and runs because Rosetta 2 was already automatically downloaded and installed. I've heard that sometimes you have to download and install Rosetta 2 yourself because your first Intel based MacOS program did not download Rosetta 2 automatically.
Either way....you were NOT forced to open a terminal and do some twiddly fiddly in order to download and install Rosetta 2 in order to download, install and run an Intel based MacOS program on an Apple Silicon based Mac.
So which is the ONLY computing platform that still insists on making it hard for end users by continuing to engage in Unix-like Wars ( and really, given that Linux was created with the express purpose of being "Unix-like", doesn't it stand to reason that the Linux world would eventually engage in its own version of the Unix Wars? ) with 42 flavors of DEs, 5 app container schemes ( Snap, Flatpak, AppImage, Crostini on ChromeOS, and whatever Google has baked into Android ), not to mention RPMs and DEBs?
I'm not sure who you're replying to as I said FlatPak & Snap were solutions to a problem that didn't exist.
However let's consider your first two questions.
Firstly, the whole idea of both those systems is that the basic package, Flatpak or Snap, provides a set of prerequisite libraries for the applications packaged for those platforms. I'd expect any application that requires any additional libraries to have them included in the application package itself.
So we then have to ask: can the base Flatpak and/or Snap packages be installed without manually downloading any pre-requisites? The packaging approach in both the RH and Debian based worlds has been for the system to automatically identify any additional packages in their repositories and include those, including pre-requisites of the pre-requisites. So let's see how I would do that on Devuan (which in practice means Debian for anything not systemd related) with the KDE desktop:
I can click to open my main KDE menu, select and click Synaptic, click Search, enter Flatpak and be presented with a list of Flatpak related options (that includes stuff for builders as well as installers - how many of your preferred non-twiddly options provide that?), mark it for installation, see a couple of required packages added, click on Apply and have Flatpak and the prerequisites installed, all without any use of terminals - should I so desire. I can do the same for Snap. No command line in sight.
One thing that might be slightly different from Windows, and, indeed, Ubuntu, is that on opening Synaptic I'm prompted for a root password. This is because Debian, like many Linux distros, follows Unix in being in principle a multi-user system and has appropriate security built in. Ubuntu differs in that it would request the user's own password as some measure of security. But even in Windows the system provides you with a warning dialog and asks you to click to approve, again as a sort of security measure. (I don't know what macOS does in this respect).
So in Debian/Devuan-land the answers to your first two questions are manifestly YES.
I doubt that things are essentially different in the RH/Suse world to what I've described above and I'm quite sure they're not different in the Ubuntu world. Again I would expect the answers there to be manifestly YES.
You can indeed go through the rigmarole of installing stuff from the command line. You may have read installation instructions on various how-to sites but for things which are in the big distro's voluminous repositories you don't have to.
The fact that you aren't aware of this makes it glaringly obvious that if you have any experience at all of the Linux world it must be a couple of decades out of date. It's always easy to spot those whose professed knowledge of such things is based on reading comments of others who are similarly out of touch.
You may well know what you're talking about for Windows and/or Mac. For Linux you don't.
Given that Apple got rid of 32-bit support a while ago, there were some inaccuracies about macOS too. :-)
I do agree with the article about the many proprietary flavors of UNIX being part of what gave Microsoft an opening to take over. I'm not as sure that the same situation exists with Linux. Linux is free software, so there isn't quite the same competition as there was with proprietary UNIX. There are basically 2 Linux ecosystems, RH-based and Debian-based. (SuSE was originally RH-based and I still consider it to be in that ecosystem.)
I decided on Debian 30 years ago and have stuck with Debian-based distros. They've been a comfortable place for me all this time, and they've allowed me to do my work. I still work with Ubuntu Server on a daily basis.
I do use Linux on the desktop via a Raspberry Pi and also a PC with Linux Mint. I'm happy with this situation. I also use a Mac for work and I find that to be pretty comfortable to use.
I tend to pick software that does what I want and that I can actually use. I don't worry about software from a different ecosystem, and I don't worry about what software _you_ choose to use. That's been working for me for a very long time. :-)
You can download the entire source code of the Linux kernel. You can tweak any bit of it you think needs tweaking to make your very own version, not Linus's, not anyone else's. Yours. You can compile your tweaked version (assuming it's still syntactically correct after your tweaks). When/if you've compiled it you can run it (assuming your tweaks didn't make it crash).
Try to repeat that for the Windows kernel.
Now do you understand what proprietary means?
Now do you understand what proprietary means?
I do. Do you? It means that it has an owner, who has the right to decide what other people can, and cannot, do with it.
I was making the rather tongue in cheek point that, as the article points out, the Linux kernel is in good shape because Linus acts as a fairly strict owner who decides, proprietorially, what changes are allowed (often in very colourful language). The code may be freely available, but the official kernel effectively only gets the changes that Linus permits.
The code may be freely available, but the official kernel effectively only gets the changes that Linus permits.
That's not actually quite true. Distros set their own configuration and in some cases (e.g. Blue Hat) apply their own patches to the kernel. That's outside of Linus' control. As an extreme example, what goes in the Android version is totally out of his hands.
Then there's minor forks like the Zen version of the kernel. It's not just free as in beer but also free as in speech.
Linus only really has control of *his* version of the kernel. This is regarded as the official or stock version due to consensus/meritocracy, i.e., power that people choose to give him. A proprietor has power ultimately backed up by the state.
I realise your original comment was tongue in cheek, and agree that having one (competent) person coordinate has helped development greatly.
Yes but general considerations can be wrong.
Anything generally considered to be good should be treated as a warning flag that there is a general push to avoid further examination within a specific situation.
That should be taken with suspicion and used to motivate specific examination and exceptional exploration for alternatives.
These general goods rely upon a lot of assumed goals that may or may not be in play.
Some things seem good only until you examine the underlying philosophies and methodologies and goals, find you disagree with some of the foundational assumptions, and so invalidate the entire model.
Undeclared assumptions can make abstract wishes seem to be facts.
There's Linus's kernel, then there are the various LTS kernels which are managed by Greg K-H. Then there are the kernels which might or might not have additional tweaks. And, as I pointed out, your own kernel if you want one. But where is the non-MS Windows kernel?
I've pointed out elsewhere that the maintainer system which FOSS projects have adopted is a good solution to the problem posed in TMMM of how to coordinate multiple developers while maintaining clarity of vision. It doesn't put the maintainer in the position of a proprietor because anyone can fork it and become a maintainer without reference to the original maintainer or anyone else. It's an important aspect. Let's not obscure that, even tongue in cheek.
"who has the right to decide what other people can, and cannot, do with it" Was that a gunshot I heard? I'll call you an ambulance... your foot appears to be...elsewhere!
You have hit precisely on the crux of OSS and Linux in particular; Ain't nobody telling anyone what they can, or cannot, do with their code!
If you were a device developer, that is exactly what you could do. All operating systems have source licences for device developers - what you couldn't do is ship the product.
I loved "the land before Linux" story, but couldn't work out which episode of star-wars it was a parody for, but am looking forward to the the "return of the jedi" edition. The only thing missing is the plot hint about Xenix for the "I'm your father Luke" line.
Those familiar with UNIX SVR4, will remember the SunOS boot sequence where the Microsoft (C) is included alongside AT&T. The original "Unix on the desktop" outfit got the gig for PC-DOS because [1] IBM was already in the room shopping for a Basic interpreter [2] The CP/M author had snubbed IBM [3] MS had pitched Xenix to IBM for the desktop - IBM thought "if you can do Xenix, you can do DOS".
MS was a Unix shop before OS/2 (they developed and ran office apps on DEC Unix) - but changed as PCs became more powerful. ~Nobody should forgive them for the Ballmer years, but we shouldn't rewrite history either.
My goodness, the amount of people who had to write a thesis about this. It's a tongue-in-cheek point that didn't need a thousand paragraphs dedicated to etymologizing every letter. Obviously it's not really proprietary... Duh.
Point being: There are forks of Linux, but none amount to anything more than tuned spins compared to the UNIX landscape of the old days.
(I never check responses, so dear reader, please feel free to nitpick this to death like the OP.)
> but Linux has already become a top end-user operating system, thanks to Android and Chrome OS
Isn't Chrome OS still a tiny minority? Some schools were tricked into buying a fleet and some elderly use it as a web browser thin client perhaps?
Saying that Android is providing Linux to the masses is very true but I personally find it a little bit... sad. I feel Linux can do so much better than that as a proper general purpose operating system. And this is wasted when misused as an over-engineered phone OS.
Not sure about the definition of "tiny", but certainly not enough. For the desktop market in 2023, Windows had about 70%, OS X about 20%, and Linux much of the rest, with roughly 1/3 Chrome OS and 2/3 GNU. Perhaps a significant portion of GNU is in a Windows Subsystem for Linux container, though.
Android works fine as a Linux that is the biggest seller of all in personal devices, but I truly miss and would love to see a revival of Nokia's GNU Linux phones, the spiritual descendant of my late beloved N900 and the briefly sold but well-received N9 successor that Microsoft killed with a $1 billion check. But I'm probably just weird. *sigh*
"Isn't Chrome OS still a tiny minority? Some schools were tricked into buying a fleet and a some elderly use it as a web browser thin client perhaps?"
A few things:
1. Schools weren't "tricked" into buying ChromeBooks, they did it (and still do) because it's a reliable, easy to manage platform without all the shit that comes with Windows.
2. ChromeOS, or better ChromeBooks, together with Google Workspace, are widely used by small and large businesses (not everyone uses MS365 and Windows, especially U.S. tech companies tend to stick with Google).
3. Not all Chromebooks are cheap-ass $299 laptops sold at Walmart, but actually includes a wide range of models up to some high-end models (usually business models)
I previously worked in a business installing computers/networks in schools. I've been in 100s of schools in the UK. I've seen plenty of Chromebooks. They always lived in a cupboard unused or a charging trolley that was unplugged. They all tended to be Atom processors with tiny onboard EMMC storage (probably too small to even run an update). They were never used.
Maybe they were mis-sold in the first place and the school was conned into buying them, maybe the purchaser should take some blame because they had high expectations from a cheap device, whatever the story, they're useless for anything other than being sent for scrap. They were useless at the time they were purchased, and are doubly so now.
Schools (almost universally - certainly at primary level) tend to use iPads in this day and age. Those who've had Chromebooks would never buy them again (in my experience), and certainly not the higher-end models. Why buy an expensive Chromebook when you've just binned 100 of them and there are heavy education discounts available on Apple devices?
Point 1.
I can verify this after supporting 280,000 grade school students. It was picked because it's dirt easy to manage compared to everything else. I had no idea user admin could be this easy.
Do I LIKE Chromebooks? No. Not for anything other than general simple use, i.e. grade school students or low level employees to perform rote tasks.
But being the system admin of those Chromebooks is eyeopening in its simplicity. Tech support even easier.
And those 280,000 Chromebooks were used every day, even for Zoom classes. And they worked fine.
Supporting a mostly useless platform is easy.
It booted up and it still sucks: everything is as expected, so go drink some coffee and watch a movie.
Administering a useful network built on a platform that is useful to its users is more difficult, because the versatility and user mods needed to be useful are what create the work for the administrator.
Don't forget Xenix - which was a System III. We did have one machine that ran it...
Personally, I cut my teeth on Edition VII, then System V, on Interdata/Perkin Elmer/Concurrent minis rather than desktops. Then Masscomp came into the fold with more desktops running RTU (Real Time Unix), which was a System V / BSD blend as far as the superstructure went.
No, Microsoft did NOT have a UNIX.
What Microsoft had was a license to sell leases of the bog-stock PDP11 UNIX Version 7 source. AT&T retained ownership. Other companies, such as SCO[0], did the porting.
[0] The real SCO, not the zombie SCO of insane litigation fame.
Citations are overrated.
Stack enough invalid citations referencing each other and no one will notice the difference unless they directly tested the idea itself.
It's really gotten quite out of control in the world of publication and citation.
Since you have to directly test a concept to really know, you might as well skip right to it, or admit the validity is unknowable until personal experimentation has concluded.
Back then we had about half a dozen major vendor proprietary versions of Unix and another few dozen minor players. Mostly kinda compatible. Sort of. Now we have about half a dozen major distros of Linux and another few dozen minor distros. Mostly kinda compatible. Sort of.
Back then we had about half a dozen various windowing GUIs on Unix. None with dominant market share. Now we have about half a dozen various windowing GUIs on Linux. None with dominant market share.
Back then Unix was a big player in the mini and specialized workstation market. Now Linux is a big player in the server and specialized dev PC market.
As for embedded Linux. We had QNX etc back then.
And I did my first build from source of Minix in 1989. The year I first cracked open my copy of the Burgundy Book. For X-Windows.
Nothing has really changed.
Big difference between then and now: all the proprietary Unix vendors were trying to lock you into their particular platform with their incompatible extensions.
Nowadays, you have something like 10× the number of Linux distros than you ever had proprietary Unices, yet they offer few or no barriers to interoperability between them.
In other words, nowadays you actually have a free, competitive, level playing field.
Not really. The lock-in back then was with the vendors proprietary OS. Not so much with the Unix they offered.
Pretty much all the vendor-specific Unix differences were there to fix well-known (or perceived) weaknesses in whichever part of the Unix gene pool their Unix came from. The big split was BSD v System V. Mostly. Although, it being Unix, some guys decided to roll their own. But lock-in was never the primary intention.
Porting software from one proprietary Unix to another, at least on the workstations, was about the same level of difficulty as porting a non trivial end user application on Linux today. I base this opinion on what the most successful Unix workstation end user application back then looked like. Mostly ifdefs rather than completely different modules.
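A made-up but representative C fragment of that ifdef style - one source tree papering over BSD vs System V library differences, where BSD_FLAVOUR stands in for whatever per-target configuration macro a real build would have used:

    #include <string.h>
    #ifdef BSD_FLAVOUR                       /* hypothetical per-port macro */
    #include <strings.h>
    #define CLEAR(p, n) bzero((p), (n))      /* BSD-derived libc */
    #else
    #define CLEAR(p, n) memset((p), 0, (n))  /* System V / ANSI C */
    #endif

    int init_buffer(char *buf, unsigned len)
    {
        CLEAR(buf, len);
        return 0;
    }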
In HP-UX and Ultrix land etc. sure, you could not mix and match processor hardware from third parties, but in SPARC/68k workstation land you had some leeway. In later years even competition. But the fact that I can buy a generic x86 box to run any particular distro (mostly) is not some huge improvement over the world of the 1980s / 1990s. Because most people bought Unix systems to run particular software packages. A large part of low end mini sales of Unix systems was just as dominated by VAR software considerations as in the workstation market. The majority of Unix hardware was bought to run a particular VAR software package, not the other way around.
What people might have seen in universities or research institutions back then bore little relationship to how most of the Unix hardware / software market worked. By $ revenue. Universities / research installations might have been high profile but were actually a small part of the market. By revenue and installed base.
"A large part of low end mini sales of Unix systems was just as dominated by VAR software considerations as in the workstation market."
Yes. My experience was there'd be a package of H/W, O/S, (Informix) RDBMS and application. That included the company's product when I worked for a VAR, or some industry-specific package which was the main reason for the purchase when I worked in admin. Even where the main application was home-grown and the original purchase stopped at the RDBMS, we also bought in an accounts package to work with it.
But actually building the application for distribution would be a problem for vendors, given the number of H/W architectures they had to deal with. I certainly hosted at least one local VAR to build a version of their product on our HP-UX box (always keep on good terms with your local S/W house, you never know when you might want a job with them).
The company I knew best about what was in their Unix software build had a big multi-page list (very multi-page) of all the hardware configs it worked with out of the box. So to speak. Nothing ever worked out of the box with Unix VAR software packages. Now they had one team whose job was to create a custom build for any customer who was going to write a big enough check (think at least 7 figure $) for a very specific non-supported hardware / software configuration. And sign up for the (long) support contract.
But with bigger VARs / ISVs, having a team to do custom build configs for large customers / partners etc. is pretty standard. And if they are not busy doing custom customer builds the team can always do internationalization. Which is usually just a custom build with funny typefaces.
"Linux distributors and developers have learned their Unix history lessons."
Not so much now with the passage of time but I wouldn't underestimate the contribution of Unix developers to the initial success of Linux.
I certainly recall very early Linux days when a Unix type was using our teaching lab's SunOS 4 workstations to cross-build code for Linux. He was working on getting g++ working to port his Unix C++ applications.
MacOS is officially allowed to use the “Unix” trademark, unlike any of the BSDs or Linux. However, when we talk about “*nix” systems, and the traditional conventions for how they should work, MacOS doesn’t really conform to those expectations.
As evidence, I offer the frequent occurrences of “if(UNIX AND NOT APPLE)” in the build scripts of cross-platform apps.
Just to add to the above, I only today came across this report that Ken Thompson, one of the original Bell Labs crew that created Unix, has abandoned the last standing official “Unix” in favour of Linux.
I have used too many UNIX and unix-like OSes over the past 35-or-so years to remember all of them. I definitely do consider macOS to be part of the Unix family, and I felt the same about NeXTSTEP and OPENSTEP.
There have been plenty of differences between all of these, but the overall flavor and feel was much the same. I don't expect them to be exactly the same, and I certainly expect to have to conform to some of the local environment, but they're all Unix to me. Did I find some of them to be kind of weird? Yep. They were all Unix though.:-)
It's well established that the legal issues around 386BSD were what thrust Linux into popularity - even Linus has said if it wasn't for the lawsuits, he'd probably never have even created Linux. In 1993: "If 386BSD had been available when I started on Linux, Linux would probably never have happened."
https://klarasystems.com/articles/history-of-freebsd-part-2-bsdi-and-usl-lawsuits/
If that were true, why isn't BSD taking off now that legal issues are resolved? As Linus said: BSD was not available on 386.
No-one likes doing free work for other companies to take. That's why the MIT license is such a bad license.
Well, that’s a take.
The reason - and by the way, interest in the BSDs is growing, albeit slowly, largely due to foot-guns like systemd - is that while there was FUD around BSD, GNU/Linux grew roots and established itself. GPL v2 is also a much less onerous license than v3 - note that Linus is not a fan of v3 - and far friendlier for business generally.
"No-one likes doing free work for other companies to take."
So no-one developed BSD? Or did somebody develop it and then have an "Oh shit!" moment when they got round to reading the licence? Or is it possible that those developing BSD knew exactly what the licence implied and were not only OK with the implications but welcomed them? Should they have gone to some random A/C on the internet for instruction on what licence they should have used?
"So no-one developed BSD? Or did somebody develop it and then have an "Oh shit!" moment when they got round to reading the licence? Or is it possible that those developing BSD knew exactly what the licence implied and were not only OK with the implications but welcomed them?"
I have been contributing to what we now call FOSS since before BSD was BSD.
Over the years, I wrote code, tested it, chased down bugs, created patches, wrote documentation, and all the other bits & bobs that go into FOSS because I am extremely selfish. I wanted it to work for ME, my way, in my time. Once it worked the way I wanted it to work, it solved a problem that I had, which more than paid for the time and effort that I put into it.
Then I released it to the wild, without caring if anyone else needed it. It's MINE, it scratched my itch ... now, if you have the same itch feel free to make use of my scratching post. No point in you re-inventing the wheel to do the same job ... and better, it frees you up to work on something to scratch another itch.
Thankfully, over the years many other people have had many other itches. In aggregate, we have created something useful.
Without money bags getting under foot.
"Should they have gone to some random A/C on the internet for instruction on what licence they should have used?"
::snort::
"As Linus said: BSD was not available on 386."
386BSD was first publicly available in early 1992 ... but if you knew who to ask (info easily found on Usenet), you could have had access to its roots in 1990. Earlier, if you knew people at Berkeley.
I don't know if I'm happy that Linus didn't know this, or not ... one thing is very clear, though. If he had, the world would probably be a very different place.
"No-one likes doing free work for other companies to take."
I strongly disagree. See my reply to Doctor Syntax, elsewhere in this thread.
Companies do like to take, they are less enthusiastic about giving back. I have often heard claims that non-copyleft licences like BSD and MIT are more “business-friendly” than copyleft ones like the GPL. Yet the GPL is more “competition-friendly”: it helps to ensure a more level playing field, where one company cannot take what others have given without giving back. Companies may complain about the GPL, yet they are forced to accept GPL’d software when their competitors do.
David Wheeler made this point some years ago.
Maybe if you read the link I posted, you'd have your answer?
Also, huh? What do you think the "386" in "386BSD" stands for? It was "unavailable" because of the lawsuits, which was the whole point of my post!
Finally, like many GPL zealots, you don't even know your own license. Companies can quite happily take and use your "free work" if they want to.
Some developers just want their code to be used and shared. They put less conditions on that than do some other developers.
I think it's kind of silly to argue about licenses in this way because what license to use is just down to the preference of the developer.
It's like that lake with the long name in Massachusetts with the possibly apocryphal translation of its name being, "You fish on your side, we'll fish on our side, and nobody fish in the middle." Stick to the side (license) you want and let others stick to their side. :-)
(It's Lake Chargoggagoggmanchauggagoggchaubunagungamaugg)
Yeah, it's a genuine pain in the ass to write code for literally over a dozen different Unix variants. But when it has to be done, it just has to be done.
Once upon a time, oh so many moons ago, I worked for a company that sold a terminal box for IBM mainframe operator consoles. The box replaced the console, and plugged into a Unix workstation. We had two Linux customers, and the rest spread across a full dozen Unix variants, mainly on HP-UX and Solaris. So yeah, it was necessary to keep a UI and software compiling for the customers.
I'm not going to say that the "infighting" was a problem. It was the customers wanting to move away from some expensive workstation to a cheap off-the-shelf solution. What's the point of spending $30,000 on a workstation when you can buy a $1,000 PC to do the same job? Yes, some customers had to have a 64-bit CPU and OS because of their data load. But a lot of them didn't need that. And later on came the AMD 64 bit and Windows 64 bit. And then there was less of a reason for the big bucks for big hardware.
Another problem was the manufacturers themselves. Each OS was there because everybody had a competing hardware solution. I worked on the Celerity workstation hardware, using the first 32-bit CPU. What happened to Celerity? The management shot themselves in the foot, bled out, sold the business to FPS, FPS shot themselves in the foot, and those remains went to ... uh ... Cray? I don't remember, it's been a while.
Rather than a fragmented market, it's bad management that is the real business killer.
Is this one of those things where the editor or someone else who didn't write the article writes the headline? I clicked hoping to hear about CDE, OpenView, etc. Sure, NeXTStep was given its due but... this really isn't an article about "UNIX desktops" as much as it is yet another article explaining why Linux succeeded. Waste of my time.
I think the article misunderstands the purpose of the Unix software standard that did succeed - POSIX.
POSIX came about largely because of the edict of the US DoD that it would not accept procurement bids from companies unless the design complied with open standards for both hardware and software. VME got chosen as the hardware standard (and is still alive, supported and functional today, though there is now also the far quicker OpenVPX too). POSIX became the software standard. The reason the DoD gave this edict is that, previously, it had been paying extremely large support costs for bespoke processing system for things like radar, communications, etc.
Whether or not the article is correct in saying that POSIX is "too general" to have succeeded by the article writer's terms, POSIX most certainly did succeed from the Department of Defence's point of view. The vast majority of technology-based systems across NATO are based on VME / OpenVPX, and POSIX. Software can be ported from generation to generation with minimal effort (compared to before POSIX). The price of development and support paid by the DoD for its very complex systems dropped very significantly.
And, if you can believe it, the risk in procurement has dropped. Essentially it is easy for equipment to at least pass the environmental testing it'll be subject to. The hardware manufacturers have become good at designing for the military environment. The DoD's, MoD's engineering standards have excellent data on what different environments are like in terms of temperature, shock / vibe, electrical supply, so it's been possible to make sure that the component parts survive. It doesn't mean the whole system works, but it should at least not fall to pieces!
If I were to guess, the problem being faced by a lot of these military systems is the slow demise of Xorg. There are quite a few military systems based on the X server.
Windows!
Surprisingly, the open standards hardware that the DoD mandated opened the window for Windows to play a part. The hardware manufacturers that glued down Intel chips into VME cards, or OpenVPX cards, essentially chose to make them PC compatible. So it became possible to install Windows. There are a fair few systems based on Windows, largely because of the availability of developer resources and Microsoft (by then) having a well deserved reputation for backward compatibility.
The irony now of course is that Windows itself is now an excellent platform on which to run POSIX compliant software, in WSL. WSL is interesting because you can in principle run an old / out of date Linux plus software combo, with security handled by fully-patched Windows. Like it or not, Linux has become one of the key POSIX platforms in military systems, but is now being dragged in all sorts of unhelpful directions by RedHat (systemd, gnome, etc), and increasingly the best option for long-lived Linux software systems that do not want to upgrade every 3 months is to run inside WSL.
Future Direction, Unintended Consequences of RedHat's Trajectory
DoD still mandates POSIX, and increasingly Linux isn't POSIX compliant (thanks to SystemD).
For example, for decades C code has done name resolution by making a few library calls, and these library calls are the same on Linux, *BSD, Unix, VxWorks, INTEGRITY and other militarily significant operating systems. SystemD has introduced an alternative that involves making a request via D-Bus. Now, for the moment, SystemD has not displaced those well understood library functions; the D-Bus route for name resolution is just an option. But for how much longer? They're already re-routing conventional library-call DNS requests to resolved by messing with the default configuration files.
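The library calls in question are presumably the likes of getaddrinfo(), the POSIX-specified interface. A minimal sketch of the portable way of doing it, which behaves the same whether glibc, a BSD libc or systemd-resolved happens to be answering behind the scenes:

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void)
    {
        struct addrinfo hints, *res, *p;
        char host[256];

        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_UNSPEC;       /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        /* one standard call; no D-Bus, no resolver-specific API */
        int err = getaddrinfo("example.com", "80", &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }
        for (p = res; p != NULL; p = p->ai_next) {
            if (getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof host,
                            NULL, 0, NI_NUMERICHOST) == 0)
                printf("%s\n", host);
        }
        freeaddrinfo(res);
        return 0;
    }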
Given the attitude of RedHat / IBM, and their SystemD / Gnome teams, I would not put it past them to deprecate the library calls, and use their weight within the Linux distro world to make that stick.
If SystemD does start gutting Linux's compliance with POSIX at the software API level, this will cause military equipment / system providers a bit of a problem; they really cannot go that way. So there could be some very monied companies looking for a Linux alternative, with the motivation to put money into it. FreeBSD strikes me as a very strong candidate going forward.
This could hurt Linux badly as there might be strong demand for things like FreeBSD instances on AWS. Someone has already tried that I gather. And if there is plentiful supply of non-Linux based resources out there in the world, there may be others keen to get away from systemD. Certainly, with RedHat's current messing around with licenses causing no end of anguish, one has to consider the consequences of RedHat's grip on things like SystemD / Gnome. If RedHat were to buy Ubuntu (not impossible), possibly that'd be Linux in effect becoming owned by RedHat. There may still be a Linux kernel project, but if RedHat has bought Ubuntu then there'd not be many distros out of their control, and the only Linux kernel anyone is running comes from RedHat.
If RedHat are motivated to lock software and users in to their version of Linux (their corrupted version of POSIX), it won't be just a few OSS enthusiasts unhappy about that. It will be the military-industrial complex too and, indirectly, Uncle Sam.
That's the possibility. It probably won't be a black'n'white, one-small-deprecation-cuts-out-RedHat thing. For example, changing init is something that other Unixes have done (e.g. Solaris) to no ill effect. But there is a tolerance threshold, and RedHat / IBM are moving towards it and not staying still with respect to it.
It all smacks of IBM / RedHat management having no idea what their developers are really doing, no idea exactly how much this stuff matters to users, no idea why some of their most influential customers care about this kind of stuff and are probably of the quiet-until-provoked variety. The management surely appreciate that DoD and pals are a big potential customer for their services, but have totally failed to connect the dots between their developers' arrogant "We know best" stance and making life harder (and not easier) for customers from one of the most monied government work areas there is.
For me the warning signs started when RedHat started price gouging the fees they charged for RedHat MRG (remember that?). They made Oracle and Microsoft look like rank amateurs. I ditched RedHat at that point in time and haven't gone back.
Even the first Slackware distribution, and the subsequent Mandrake and others, had a way better GUI, even though it crashed a fair bit compared to HP-UX's CDE or the Solaris one. The UNIX issue was not one of stability but a lack of applications. On the other hand, Windows, to its credit, did have a large number of apps in its ecosystem, including a large number of back-end integrations, but faltered on stability and security. I guess the Mac was at about the same level as the first Linux distros, but with the bit of better stability that BSD provided at the time, plus the GUI wizardry for the addicts. Linux has grown so far now that instead of borrowing from existing UNIX, Windows and macOS, it is actually contributing to and influencing them. A big leap indeed.
The problem with Unix workstations was that they were expensive to buy and maintain and fiddly to work on. I had this brought home to me when a Sun workstation had a hard disk failure. The hard disk was a generic hard drive but we had to buy the special Sun one at a ridiculous markup. Reinstalling Unix was no picnic, either. I've used Linux since whenever and even in its most primitive forms it was a lot easier to work with -- commodity hardware, commodity parts and a relatively simple setup.
The one feature that Windows had which Linux lacked which probably made it attractive to software vendors was that Windows always had some kind of license management built in. Linux was always philosophically opposed to any kind of DRM but in taking this line they shut out a huge potential base of software. This isn't so important these days because there are other mechanisms for evaluating and enforcing licensing but it gave Windows a huge push at just the right time.
I agree with the first part of your statement.
Back in the mid 1990's our desktop Sparc, NeXT and SGI machines were around 5 times the cost of a PC running NT of the same specs.
Peripherals were a rort, I remember having to get a Sun SCSI cable for a few hundred dollars, where a no-name one was a fraction of the cost.
I worked for Data General during the time of "Soul of a New Machine". They sold proprietary hardware (they even drew their schematics with internal part numbers so third-party service companies couldn't repair the boards) and proprietary software to run on it. If you bought a DG system, you were locked in to their OS, their software and their service organization.
Then we in Engineering got Sun workstations and I learned UNIX. Big difference, but as others have mentioned, each vendor supplied their own version of UNIX, and if you wanted to run free software on your system, you needed to compile it from source to account for your vendor's UNIX peculiarities.
Next came Linux. Now we had a flavor of UNIX that ran on the new "generic" hardware (such as it was at the time). You could buy a PC made by anyone, load any flavor of Linux (SLS, Slackware, Mandrake, etc) on it, and away you went. Generic hardware AND generic OS. As the PC and Linux became more powerful, that setup began to compete with the proprietary hardware, which was expensive to make and maintain. People began to think of a standardised HW/OS platform as being competition for the old proprietary systems. Within about 10 years, DEC and DG were out of business. Generic hardware had killed them...it was just so cheap that even the (minimal) performance advantage of proprietary hardware couldn't compete with multiple generic processors.
I used plenty of generic SCSI drives on Sun SPARCs. There wasn't any need to buy a special Sun HD.
The story is different if you actually worked in the enterprise with Sun servers, because there you would likely have purchased a support contract and it's not a great idea to toss random HDs in your $100K Unix server. You buy spares from Sun because they come with a warranty, are guaranteed to work, and will be replaced by a Sun field engineer within 4 hours if there's a problem.
Unix workstations weren't fiddly to work on. They were more expensive in many cases, but the quality of the hardware was much better than PCs, and better hardware costs more money, so that accounts for much of the difference.
I used to hear Windows PC fanbois whine about how Macs were more expensive too. However, they always compared some off-brand white-box PC crap to a Mac. Hardly an Apples to Apples comparison.
Re-installing Solaris was no big deal. Once you had a Jumpstart server, it was pretty easy to install on whatever systems you wanted to. As for being easier to work with, Solaris was easier to work with at the time. Linux became easier to work with, and to some extent commodity PC hardware got better, but Solaris x86 benefited from a lot of that better hardware too, so it's not just Linux.
Solaris x86 never really did well because for a long time Sun didn't want it to succeed. They were selling RISC hardware and didn't want x86 to compete with that. By the time they did start trying to sell Solaris x86 it was too late. The whole company was on a downward spiral. Then they were purchased by Oracle and it was all over. People started moving to AIX or to Linux like rats fleeing a sinking ship. Or like people fleeing a stinking Larry.
Windows didn't have license management built in for 3rd-party apps that I remember. It did have some sort of license code needed for Windows itself, but MS didn't really care to enforce that until they had the market sewn up and had everyone over a barrel.
FlexLM was one of the license managers I remember, and there was also a bunch of software that required hardware dongles. Solaris, HP UX, and Windows had these. I think Linux had versions too, but hardly anyone at the time would buy the software that needed these because real UNIX on RISC hardware was so much more capable that nobody was buying most of this software for use on Linux.
In the end you should use what hardware and software you prefer, and I have no problem with that. I just don't think that your analysis of things from the past is correct.
Hübner said the city has struggled with LiMux adoption. "Users were unhappy and software essential for the public sector is mostly only available for Windows," she said.
She estimated about half of the 800 or so total programs needed don't run on Linux and "many others need a lot of effort and workarounds".
Hübner added, "in the past 15 years, much of our efforts were put into becoming independent from Microsoft," including spending "a lot of money looking for workarounds" but "those efforts eventually failed."
CLIX was the UNIX derivative for the Fairchild/Intergraph OS for the Clipper CPU. Intergraph sold Tangate, the (awful) chip place and route software, to Cadence in exchange for porting their software to the Clipper/CLIX. I was the CAD manager for the chips, and was told to go in to Cadence to port their chip design software to Clipper/CLIX. I met their internal porting team, which was maxed out at the time supporting nine different UNIX ports. They were too overworked to handle another port. I only had a few weeks to get the software ready on my platform, which had these huge 28-inch screens with one megapixel! No other UNIX vendor had anything like it at DAC, the yearly conference I was to demo the software at. I came at the port with a different approach with the team I brought with me and managed to make it. By the time it came to pack the machines up and go to Vegas, Cadence's entire porting team offered to quit and come work for me. I was getting hostile vibes from Cadence management, which I had to report back to my management, and I decided to personally move the machines to Vegas myself without any help from Cadence. Mission accomplished, before this article was written. But who knows about CLIX now?
Linux is a kernel, FreeBSD is an operating system.
What follows is not actually a Stallmanesque rant, but I think an important point that needs to be made.
GNU, Android, and ChromeOS are all operating systems that use the Linux kernel. Microsoft Office is available on Android, but I don't think it is very easy to run it on GNU (eg Debian / Fedora / etc) without running it on some sort of Android emulator or Virtual machine. You can I think run it in ChromeOS.
So in that sense, there is fragmentation in "Linux".
That isn't necessarily a bad thing though. People have tried laptops running Android, and they were mostly useless, because Android isn't designed for that form-factor, and trying to run a desktop operating system on a phone would be an equally bad idea.
https://notes.technologists.com/notes/2008/01/10/a-brief-history-of-dell-unix/
https://notes.technologists.com/notes/2019/07/01/koko-exploring-nextstep-486/
https://notes.technologists.com/notes/2019/07/01/koko-reviving-timbls-worldwideweb-browser/
https://notes.technologists.com/notes/2019/09/26/koko-welcome-to-eight-jurassic-o-s-on-1992-dell-486d-50/
https://notes.technologists.com/notes/2021/01/19/koko-dell-unix-sustainable/
Wow -- Dell UNIX, I was in the team who ported/developed/supported it for many happy years and the linked article was like a time machine. There were only a dozen of us in engineering working on Dell UNIX, which was kind-of hated by the sales 'roids who couldn't wrap their heads around UNIX.
I kind of recall the article coming out and noticed my/our 2 DNS root servers dell1.dell.com and dell2.dell.com were listed instead of our very early www.dell.com (which was despised even though it had a screenscraping script to take your service tag and look it up in our Tandem mainframes). Perhaps that's why my DNS servers took a hammering!
I seem to remember that we had Unix desktops at Essex University in the early to mid 1980s. They were an in-house build called "Senate". I remember climbing around dusty buildings installing the 10BASE5 (thick ethernet) cables to support them. They had 20MB hard disks and almost all of it was taken up by the operating system!
I don't know a lick of Linux or BSD or whatever. I just know that I took a 20 year old program made for Windows 98, tried to run it on Windows 10 with all the compatibility modes turned on, and the thing simply refused to run. Well, at least Windows managed to show an error message stating a no-go.
If any flavor of Linux can support that age-old code, or trigger a self-update to support old code in the same conditions at least, then it's eons ahead.
And I'll drink to that.
I have had good luck running older Windows 3.1 and 9x software under Windows 10 32-bit edition. There was still a lot of 16-bit code floating around in '98, which doesn't run under 64-bit (x86-64) editions of Windows. If you received an error about 16-bit code, try throwing a 32-bit edition in a virtual machine and try again.
Well this has all reminded me of being a callow yoof at polytechnic, and learning to use SunView/SunTools on some kind of workstation with a giant CRT monitor. Also DEC Ultrix on workstations with those weird round mice. Then worked for quite a few years with SGI Irix from the old Indigo all the way up to the days of the O2, Indigo2 etc.
Must dig out that Jurassic Park poster SGI were hawking around at the time, and get it framed. It's probably worth several hundred pence these days ;-)
>So, how did Linux come to win? Well, it had two major advantages over the Unix distros. The first was that it was open source.
This claim is totally false.
The kernel, Linux, was proprietary software in 1991 and only freed in 1992, under an ambiguous GPLv2 licence (only clarified as GPLv2-only in 2000).
Linux was only free software from 1992-1995 (the concept of "open source" did not exist until 1998), until the Linux developers added the first proprietary program to it; in the following years, more and more proprietary peripheral software was added, with most of it merely moved to the oh-so-separate "linux-firmware" and only ever being actually removed when such software became totally obsolete (i.e. appletalk).
Ironic isn't it - despite being the poster child of "open source", it isn't even fully source-available!
GNU/Linux won in 1995, as you could finally use a modern computer in freedom again - too bad many people have taken that win away from most of the users with the relentless addition of proprietary software to Linux and systemd/Linux distros.
>After all, if all it took for success were open source code, we'd all be running pure BSD operating systems such as FreeBSD, DragonflyBSD, and GhostBSD.
All BSD systems contain a lot of proprietary software without source code and many unlicensed files, therefore no BSD is "open source".
The original BSD developers needed quite a bit of convincing to not distribute under proprietary terms and then to not distribute under the potentially burdensome BSD 4-clause terms.
What was needed for success was the unparalleled drive of GNU developers to achieve GNU/Freedom no matter what it took.
HyperbolaBSD is still working on replacing unlicensed files and once that's done, that'll be the first free software BSD that is actually source-available.