OpenSolaris, anyone?
Kind of similar to what Oracle did to http://www.opensolaris.org/
RIP
Oracle claims IBM is trying to kill open source competition among Linux distributions to boost its bottom line, and has pledged to keep distributing Oracle Linux source code for free. Yes, the same Oracle that in January 2019 ended free public updates to Oracle JDK for non-Oracle customer commercial users – prompting Red Hat …
Linux killed Solaris. Why buy expensive boxes from Sun/Oracle, which funded the spaghetti-code maintenance, when you could change it yourself or pay someone else less? (I'm not saying it's better, just cheaper.)
Really, the whole argument about Red Hat/IBM removing src RPMs is aimed squarely at Oracle, and everyone knows it. Every time Red Hat salespeople walk into a large account, Oracle people are there or are about to be, saying they'll do it for free, and oh, can we sell you a database/ETL...
Wrong and wrong. Solaris has many features that were, are, and will be superior to Linux. The only reason Larry isn't letting it reach its full potential as FLOSS is because he can't find a way to enslave the coders and sue people for using it without his permission.
Do not, under any circumstances, anthropomorphize The Lawn Mower!!
https://www.youtube.com/watch?v=-zRN7XLCRhc&t=33m40s
Watch the next seven minutes when you get seven free minutes and watch all of it when you want an education.
I'd always found Solaris better to work with: it had 20+ years of development making it an enterprise-capable operating system, while Linux had 20+ years of people pulling it in different directions (desktop, server, embedded controller, etc.). Live Upgrade was a great tool, allowing low-risk patching/upgrades (we even upgraded Solaris 8 to 10 with it) with an easy backout, especially when integrated with ZFS. SMF sped up system boot times in the same way systemd does, but it also fell victim to the same rabbit hole of integrating features it really shouldn't, like moving resolv.conf into some complex svccfg commands.
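For anyone who never met it, this is roughly what replaced editing resolv.conf by hand (a minimal sketch; the SMF property names are the Solaris 11-era ones as I remember them and may differ by release, so treat it as illustrative, not a tested recipe):

```
# Sketch of driving SMF's DNS client config instead of editing
# /etc/resolv.conf directly. Property names from memory of Solaris 11;
# they may vary by release.
import subprocess

def run(cmd: str) -> None:
    print("#", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Set the search domain and nameservers as SMF properties...
run("svccfg -s network/dns/client setprop config/search = astring: example.com")
run('svccfg -s network/dns/client setprop config/nameserver = '
    'net_address: "(192.0.2.1 192.0.2.2)"')
# ...then refresh so the service regenerates /etc/resolv.conf for you.
run("svcadm refresh network/dns/client")
```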
That said, Solaris was dying before Oracle took it over. The flip-flopping over x86 hardware support meant no one wanted to rely on a roadmap for cheap hardware, and SPARC was expensive. OpenSolaris was an attempt to win back customers, but it was too late to stop the exodus to Linux/x86. I still believe Sun should have pushed aggressively on x86 and worked with Intel/AMD to develop some of the SPARC features like hot-swap CPUs and hardware resilience, but they didn't want to lose the cash cow of SPARC servers.
Can't dispute the truth of any of that; it's a top-notch enterprise OS. In my first job we used Solaris on SPARC, and it was utterly unbreakable - but expensive. A big part of that, alongside the very sound design, was tight integration between the hardware and the software. But when we got to the point where Linux distributions were being certified on x86 servers, it was all over.
Agree completely with your second paragraph. They could have saved both Solaris and SPARC by supporting x86 as a gateway drug.
GUI apps could be securely isolated from one another using a combination of security labels and access controls, for starters. It also had working drivers which wouldn’t lock up at random when using an NVIDIA 6200SE TurboCache with an ATI IXP chipset, which couldn’t be said for Linux at the time!
Had Oracle not killed it, it would have rapidly become a Linux-killer. We also might have got ourselves: X12 instead of myriad Wayland compositors, a viable competitor to ALSA for audio drivers instead of a myriad of userland sound servers… and a proper successor to SMF instead of systemd.
Thanks a bunch for ruining our future Larry!
Hmm. "it worked well with my graphics card" is great and all, but I rather suspect that Linux at that time worked with a far wider range of hardware!
Much as I'd be happy to assign blame to Oracle, it was all over before they came along. The Solaris engineering team stiffly resisted licensing the code under the GPL, forcing them to come up with this CDDL daftness. I also don't think it's Oracle's fault that Sun screwed up the x86 side of things. As for X12, I am sure it would have ended up looking something like Wayland; if the issue is community leadership, I am not sure Sun's involvement would have improved matters much ...
You don't need any Oracle-branded hardware to run Solaris. It runs just as well on Dell, HP, or even some "brand X" home-made system. I don't think that running it on Oracle hardware is even a requirement for a service contract.
What Oracle *did* definitely kill is their own chip architecture - SPARC.
I disagree. The market killed SPARC, along with MIPS and POWER. I think that leaves IBM as the only vendor selling enterprise IT platforms based on a proprietary architecture (which I understand is basically a heavily modified POWER arch under the hood?). We'll see how long that lasts ...
What you said is only partly correct. There are things that the last few true UNIXes standing do that Linux just doesn't, just as chuckufarley said above.
I maintain AIX systems. The RAS features of AIX on Power are unmatched by anything Linux does. There are predictive failure reports, automatic disabling of failing components to keep a system running even when it's in a little trouble, dynamic replacement of many failed components, live LPAR migration from one system to another, and many others; they've even borrowed back from Linux by implementing live kernel patching!
What this enables you to do is keep AIX systems running for a long time without outage. And that is the crux. With Linux, it's all about scalability and resilience by going wide, with many small(er) systems in parallel such that losing one because of maintenance or a hardware issue will only have a marginal effect on the service, rather than keeping a single large OS instance running.
But that requires your application to be written a certain way, and not all workloads conveniently fit into that deployment model. I know the monolithic deployment model of small numbers of high-powered systems is seen as old hat nowadays, but I sometimes wonder whether that is just fashion - that's the way the commodity hardware works, so that's the way applications should be written. But the monolithic model is older and well proved, and has survived many challenges to its use over the years (and has even subsumed many models, such as the Open Systems move in the '90s and '00s). As long as you can keep scaling up the systems to keep up with the workload, it is still a perfectly valid way of running systems, as many banks and other financial institutions will tell you, if you ask.
Whilst I have seen them, and modern hardware platforms do allow it, you just don't really see mainframe-challenging single-instance Linux systems being used. It's much more normal to slice these large systems up into virtual machines than to deploy a huge Linux system (I know, Power systems are also sliced up, but there it is much more normal to see a couple of large LPARs doing the grunt work, with smaller LPARs surrounding them for the application interface and management functions, with system-image isolation).
AIX is dying. I know that (but I'm close to retirement, so I care less than I did). Slowly, Linux is getting some of the features that the financial orgs want (and I think one of the few benefits of IBM/Red Hat is that Power RAS features may end up being better supported in Linux), but Linux is also becoming different from UNIX. It's now almost impossible to back-port a modern piece of software from Linux to a traditional UNIX (goodness, it's becoming difficult in some cases to port to Linux systems that don't run systemd!)
I can see a further slow tail-off of traditional UNIX, and whether it will stabilize in a niche like mainframes have is an interesting thing to watch for, but I don't think it will.
Oracle have done almost nothing with Solaris and SPARC systems since they bought the technology. It is very, very unlikely IMHO that we will ever see new iterations of SPARC processors, now that Fujitsu seem to have transferred their attention to 64-bit ARM processors. There is nowhere for any customers still running Solaris to go other than moving to another platform. HP-UX is all but dead. IBM AIX and Power? Well, as long as IBM invests in the Power roadmap, I think AIX will survive in some form, but it will become increasingly legacy as time goes by.
You are so spot on. I do not think IBM ever realized what a treasure it had with AIX and Power hardware in the early 2000s.
It is a lasting loss that IBM failed to do mainstream virtualization with AIX and Power hardware using VMware, instead of this cumbersome LPAR system they built.
The love for Solaris is hard to understand; Solaris was an implementation of AT&T System V Release 4 from 1991.
For people who do not use it on a daily basis, SVR4 is a weird, over-engineered and over-complicated hog.
The simple elegance of the BSD 4.2-based SunOS 3 was much better; I never understood why Sun adopted AT&T Unix at the time.
Maybe Linux source code can be protected in an ARM-like structure, so everyone who needs to build something can count on the availability of safe and maintained source code.
They have. Power and AIX have moved on in the last decade or two. There are Power systems under the OpenPOWER banner that do not have the Power hypervisor burned in, but use one of the Open Firmware projects, and can be managed as an adjunct to a VMware environment, using many of the same tools. These systems only run Linux though, as far as I am aware.
Interestingly, Power10 systems have a RESTful API to the hypervisor and the newer soft HMCs, so they can also be managed with more standard tools, to a degree. There is a cloud offering of AIX on Power in Azure and IBM Cloud (although I don't think there is any in AWS or Google Cloud) managed by an outfit called Skytap.
IBM have obviously been thinking about this, because a number of changes have been made to AIX to allow blank, vanilla system images to be cloned at the storage level and pick up their configuration at boot time. This is very clever, but it tripped us up when we built some traditional systems using NIM, which lost all their network customizations whenever we rebooted them, because this option was enabled by default!
I had a brief play with it early last year, and it was interesting, but IMHO unlikely to be used like a cloud service (although the possibility is there, with fast system deployment and spin-up). The problem we found was that accessing Azure storage blobs was not as easy as it should be, and although you had rapid deployment, the best option was to internalize all the storage within the Skytap environment as virtual Fibre Channel storage.
The evaluation I was involved with fell down when we were working out how to import a couple of tens of terabytes of data, and could not work out how to get it onto AIX storage in the Skytap environment within the outage window the end client had outlined. I also think it looked like it would be quite costly in the long term compared to migrating to in-house managed systems, but I don't know for certain, because I was on the technical side, not the financial side.
It all felt a little not-quite-ready, and as such, I fear it will miss the boat. Hopefully, in the last year, they've ironed out some of the issues, and it is a bit more slick.
When looking at SVR4, you have to remember when it was defined, and what it was trying to achieve.
It was intended to be Unified UNIX, allowing software to be easily ported from SVR3, BSD 4.2, SunOS and Xenix. It was backed by AT&T, Sun, the original SCO, ICL, Amdahl and Fujitsu (although differentiating between the last three was difficult), and IIRC also included Dell, Acer and a number of other PC manufacturers. But it did not have buy-in from IBM, DEC or HP, who saw it as a risk and set up the Open Software Foundation, with their own brand of 'standardized' UNIX, in competition.
SVR4 was cumbersome, to a degree, because of all of its roots. The initial reveal was in 1987 (I was at one of the developer conferences), and I then got to play with some Sun boxes running it in 1988.
To me, with a Bell Labs/AT&T background, it felt very much like AT&T UNIX, though at the time I was using R&D UNIX, which already had many of the features that AT&T contributed to SVR4. It had some really nice features. I particularly liked the ABI (Application Binary Interface), which meant that shrink-wrapped software for a particular processor architecture could be installed on any SVR4 system with the same architecture, even from different manufacturers. I saw the same packages installed on SPARC systems from Sun and ICL, and also (another package, obviously) on Sun i386 systems and another Intel system - was it a Dell?
Because of the time it was set up, many of the things that we take for granted, such as user interfaces like Motif, CDE and the various spin-offs, just hadn't been invented (and Windows was at version 1 or 2). So comparing it to systems even a few years later is actually disingenuous. If I remember correctly, they decided to use a Display PostScript rendering model rather than X11 (well, I suppose X10 really) because the latter was not really mature.
SVR4 was a product of its time and IMHO was good, but it was left to wither on the vine because it just did not get traction after IBM, HP and DEC all pushed into the UNIX marketplace with their proprietary UNIX systems (yes, IBM and HP, having been fundamental in setting up OSF, abandoned it after poisoning Unified UNIX, and kept their own flavours of UNIX; only DEC had a real OSF product). Sun were really the only company that gained mainstream popularity with SVR4, although ICL/Amdahl/Fujitsu did try, and Sun put a very SunOS-like view on their systems.
When the remains of UNIX System Laboratories (set up by the SVR4 founders to promote SVR4) ended up with Novell, and the business was licensed on to the SCO Group, which had bought some of the IP from The Santa Cruz Operation, it was only the UnixWare line that remained alongside Solaris. Where it is now is anybody's guess.
Sun converted from BSD to SVR4 to try to end the Unix wars which set Unix back several years. On balance this was sensible.
Given that a lot of SVR4 IS SunOS, it wasn't a major technical challenge but it did result in a fair bit of change for existing users without any immediate benefit.
Sun was very innovative for many years. SunOS and then Solaris introduced a load of advanced features - some mentioned previously - so if you were on Solaris, you could do things that other people couldn't. In the early years there were NFS, NIS and RPC, and then later SMF, Zones, ZFS, DTrace etc.
Working with Solaris was a good place to be for a long time.
I actually got to look at some of the source. The kernel really was a merged system. It had quite a lot of both Sun and AT&T code in it, but one of the major features was that they tried very hard to remove all #ifdef'd code, so that much of the internals had to be rewritten to make this happen.
When it comes to userland, there were multiple versions of tools that used different flags. You could select your 'personality' by setting the order of the directories in your PATH to pick up your preferred flavour. The same thing could be done to select the library flavours with LD_LIBRARY_PATH when using dynamically linked libraries (yes, I'd forgotten about that until just now!)
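A rough re-creation of the trick, for anyone who never saw it (Python purely for illustration; /usr/ucb and /usr/ucblib are the real SVR4/Solaris BSD-compatibility directories, the helper itself is made up):

```
# The SVR4 "personality" trick: whichever directory comes first in PATH
# wins, so /usr/ucb ahead of /usr/bin gets you the BSD-flavoured tools;
# the same ordering idea applies to libraries via LD_LIBRARY_PATH.
import os
import shutil

BSD_FIRST = ["/usr/ucb", "/usr/bin"]    # BSD personality
SYSV_FIRST = ["/usr/bin", "/usr/ucb"]   # System V personality

def set_personality(dirs):
    os.environ["PATH"] = ":".join(dirs + os.environ.get("PATH", "").split(":"))
    # BSD-flavoured libraries lived in /usr/ucblib on Solaris.
    os.environ["LD_LIBRARY_PATH"] = ":".join(
        "/usr/ucblib" if d == "/usr/ucb" else "/usr/lib" for d in dirs)

set_personality(SYSV_FIRST)
print(shutil.which("ps"))  # which `ps` you get now depends on the ordering
```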
So I wouldn't say that SVR4 is SunOS, nor would I say it was SVR3+. It was bits of all of these things.
>> instead of this cumbersome LPAR system they built
AIX inherited that from the IBM mainframe world (AIX was originally supposed to be the IBM version of a little mainframe - most of the resilience features come pretty much directly from the mainframe world). I was a TPF programmer in the early '90s, using IBM 3090J units; our main dev system had 6 processor units, any of which could be upgraded live, although for some operations (like complete replacement) the CPU had to be taken offline.
So it was developed for a very good reason well before things like AIX were in development, and was retained in AIX because businesses (especially banks) were migrating away from mainframes but were reassured that this new-fangled AIX had all the resilience features they were already happy with.
More here: https://en.wikipedia.org/wiki/Logical_partition
Perhaps commercial UNIX isn't as viable or common as it once was in the server space. However, as far as workstations go, a commercial UNIX, again on expensive proprietary hardware, is the second most common desktop OS. In the server space, it could very well be that the big UNIX vendors just didn't adapt, or didn't make a compelling enough case to adopt or continue using their product over Linux.
I presume you're talking about MacOS here.
Well, it's a bit debatable whether MacOS is a true UNIX. Yes, it passes the Single UNIX Specification verification suite (or at least it did about 10 years ago), so there is merit in calling it UNIX. But, and in my view this is a big BUT, it is not a genetic UNIX, i.e. one derived from AT&T sources.
The kernel is some strange mashup of the Mach kernel and some stuff developed by Apple to produce their XNU kernel. From the Mach side, it inherited code from BSD 4.3, but this is mainly AT&T-free except for the ancient stuff, due to the settlement between AT&T and UCB.
The MacOS CLI userland is straight from BSD, and has quite a few differences from AT&T-derived UNIXes, and quite a different feel to it. People who know BSD will be quite at home, but to me, mainly a System V person and its ports, it feels more like the archaic UNIX Edition 7 than SVR3/4. And besides that, most people very, very rarely use the shell on a Mac. They use a proprietary GUI (not even X11-based), which is what users mostly see.
So yes, it is a UNIX system, but few people actually use it as such.
If you look, the main reason why UNIX did not survive on the desktop was cost. In the mid '90s, I was putting AIX systems on people's desks, both using thin clients (IBM X-Stations and PowerPC RS/6000 models 220, 230 and 250), and also full-fat systems like 43Ps. But the cost was HUGE compared with even IBM-priced PCs. Using 6091-19 monitors (the 1091-17 monitors were cheaper, but late to market) at close to £2K a pop, IBM Model M keyboards at £108, over £40 for a three-button mouse, and then adding the cost of the system unit itself, again running into the thousands, it was just cheaper to put a decent ThinkPad and a good-sized monitor in front of the user (and more in line with IBM's desktop policy).
IBM was expensive, but I think all of the UNIX vendors put a premium on their workstations. They just thought customers would pay for the perceived better performance. None of them ever produced systems running their proprietary UNIX at a price that could compete even with high-end PCs (look at IBM's PS/2 models 70 and 80 running AIX 1.2, and Sun's i386 systems running SunOS/SVR4, and compare prices - and those were Intel systems!) Even the OS licenses were much more expensive than Windows.
When you look at the route Apple took, they only put a UNIX on their Mac systems after being unable to offer decent systems with their original, completely proprietary OS, and effectively bought in a UNIX platform when they acquired NeXT. By that time, commodity processors had achieved enough power and features to comfortably run a UNIX-like OS.
I think the point at which UNIX really lost out was when IBM decided on the 8088 for their PC platform (and the rest of the desktop world followed IBM). If, by a twist of history, the 68000 had been ready enough (and at the right price) to be used, I suspect that by the mid-to-late 1980s desktop PCs would have had all the necessary hardware to run UNIX, and I think we would have seen UNIX rise instead of Windows, because at the time it was just so much more mature and capable an operating system than MS-DOS.
Why did Solaris take off in a big way? SunOS had been doing well enough on M68K, but when SPARC was released it wiped the floor with the competition. Still, relatively low volume and high profit. Fast forward about 10 years to the late '90s: chips like the Pentium III have now caught up with SPARC (chips like the UltraSPARC II) but are cheaper.
Skip forward another 5 years to 2003 and AMD releases the Opteron. By now the SPARC is no longer competitive on cost/performance. But at that time Sun had a big opportunity. Sun already had several years of experience of 64bit OSes, the V9 SPARC architecture having been released about 8 years earlier. Sun also released a 64bit amd64 version of Solaris 10 in 2005.
Though Linux had already supported amd64 for a few years, my memory is that it was all very flaky. Sun did push fairly hard for amd64, but they had also done some serious damage, as already noted, when they dropped Solaris 9 x86 for a short while. Even though Sun was selling Opteron workstations and servers, most customers were using them for Windows or Linux. And though the writing was on the wall, Sun was still making most of its money from SPARC and couldn't bring themselves to switch focus to amd64. OpenSolaris was too little, too late. And finally along came My Little Pony and finished the job off: https://www.theguardian.com/technology/blog/2010/feb/04/jonathan-schwartz-sun-microsystems-tweet-ceo-resignation
Part of the problem is that while Sun defined SPARC, they did not initially want to make processors. They wanted silicon foundries to license and actually bake SPARC processors, in a similar fashion to what ARM do now.
But the problem was that only a couple of chip makers decided to pick up SPARC designs, and eventually Sun had to commission the creation of processors for their own use themselves, although Fujitsu did manufacture SPARC processors for a long time.
As a result, although SPARC performed well for its time, when HP started to up the clock speed of their PA-RISC architecture, IBM produced the POWER processors and DEC produced Alpha, SPARC did not stay competitive. They tried going massively parallel for a while, and then tried to up the thread count by increasing the number of integer units in each core, but they never really produced a killer SPARC implementation again. Fujitsu did, and Sun/Oracle did use some Fujitsu chips in their 64-bit lines, but they just did not have the resources to remain competitive. The so-called Rock chip, which was supposed to hold much promise, never saw the light of day. It was a long and slow death.
Eventually Sun ran out of money and was consumed by Oracle, probably the worst place they could end up! I know IBM currently has a bad rap sheet because of Red Hat, but it has often crossed my mind what would have happened if IBM had bought Sun.
>> Why did Solaris take off in a big way? SunOS had been doing well enough on M68K, but when SPARC was released it wiped the floor with the competition.
Not really. When the first SPARCstations and SPARCservers came out in 1989, there already were RISC systems from other vendors - namely SGI (using the MIPS R3000) and HP (with their own PA-RISC processors).
A year later IBM launched its first POWER based systems (RS/6000 POWERstations and POWERservers), and the following year DEC's Alpha AXP appeared on the scene as well.
In terms of raw performance, SPARC mostly traded blows with SGI and its MIPS processors, but had difficulty matching the other RISC architectures (PA-RISC especially performed better than SPARC pretty much all the time, and that gap only widened when both architectures moved to 64-bit).
Two things helped Sun sell SPARC systems. One was that Sun workstations were pretty common around universities (not just because Sun was literally founded in one, but also because Sun priced aggressively for the edu sector). The other was that Sun sold SPARC processors to other computer manufacturers, including Cray, Tadpole and Fujitsu, as well as for ruggedized/embedded applications (usually aerospace and defense).
>> Fast forward about 10 years to the late 90s. Chips like the Pentium III have now caught up with the SPARC (chips like the Ultra SPARC II) but are cheaper.
That may be, but the P3 was still a 32-bit processor, clinging to crutches like PAE to overcome the 4GB memory limit, while the RISC platforms were all 64-bit already. And the USII wasn't exactly breaking performance records when it came out, either.
In case any reader is unaware of the illumos (roughly, the continuation of OpenSolaris) and OpenIndiana projects, they might wish to have a look.
Nice to be able to compare three families of *ix kernels (SysV, BSD, Linux) and the distros built over them.
I recently had reason to look at the SmartOS virtualization offering, which is built on the illumos kernel and offers both KVM and bhyve virtualization options. Worth a look, if only out of curiosity.
It's not quite the same: Oracle decided they didn't want to do it anymore, so the community forked it into illumos and OpenIndiana. It's still a thing. OpenZFS is an illumos project, if I'm not mistaken.
This is more IBM being belligerent toward a community that pretty much worked as free advertising for RHEL for a long time, to squeeze a couple of bucks out of it. I don't know how many times I sold RHEL because I had the client try out CentOS. Now, if I'm doing something stable where Fedora isn't appropriate but FreeBSD is overkill, it's mostly Debian derivatives like Ubuntu that I wind up using, and I'm not their biggest fan, which says a lot. I wish I trusted Oracle enough to use theirs, but I don't.
"... but no one (except Red Hat) believes that this business model is in the spirit of the GPL and FOSS.""
No, they don't believe that either, they just don't give a f**k and IBM lawyers told them that it's totally legal. That's how you do screwups like this: Lawyers do not understand "spirit" at all. Especially when they are royally paid not to.
Effectively, IBM are free-riding on all those upstream contributors, who devoted their time (or their employer's) on the understanding that there would be reciprocity downstream. Whether technically in compliance with the GPL or not, IBM have violated the norm here.
Which is interesting, as the norm from upstream projects is to provide free support to and accept patches/PRs from downstream. The latter is as much, if not more, beneficial to the downstream, as they get free testing and don’t have to do a bunch of increasingly painful merging as source trees diverge. Upstreams, of course, are not mandated to provide any of this.
I wonder what would happen if some key upstream projects (the kernel, glibc etc.) asked people posting to forums and submitting PRs to declare that they aren’t IBM employees, or acting on behalf of IBM? I suspect that first of all IBM would lose many of their core committers. Second, would you buy support from a company with no upstream assistance or influence?
It would require some gumption as obviously Red Hat is a really big committer. But ultimately, it would benefit the community to show there are consequences.
However, changes to the licence require all the copyright holders to agree, and that's not actually possible for the Linux kernel - and probably a lot of other projects.
Hence the "or later" clause that became common, as that allows a project to relicence under a later version of the GPL that fixes unintentional loopholes.
They *have* new versions of the GPL. The problem is they are not "upward compatible". To re-license the Linux kernel under a new license would require the approval of *all* the people / companies that have contributed code that *still* resides in today's kernel version. Similarly for other parts of the distribution.
When you say "the community", who do you mean? Taking the Linux kernel, from what I can tell it's pretty much other large corporations - AMD, Intel, Google, alongside IBM/RH. Strip out contributions made by those who are on their employer's clock and what have you got left? I'm not saying this to diminish the core contributions made by volunteers and enthusiasts, but proportionately, most of the work is done under corporate sponsorship.
I'll bet that "the community" who are really concerned with running a CentOS-like OS for free are almost exclusively businesses who are annoyed at the idea they might have to pay for something that is of value to them (noting that academia is being caught in the crossfire). I'm sure there are exceptions, but why would the sort of enthusiasts who contribute to OSS routinely want to run a boring, enterprise-focussed OS that is basically out of date on the day it is released? I appreciate that for many people there is a principle at stake here, but how many enthusiasts and volunteers are really affected by this?
Corporations around the world, many of them not directly involved in IT services, are making money off the back of open source contributors all the time. They contribute nothing back to the community - they're not asked to do so - but sell their products and services using open source frameworks. Across the world, businesses use tools such as the Linux OS, databases, editors, compilers and drivers to generate what must be $trillions in revenue without anyone ever complaining that they're making money on the backs of volunteers. Then Red Hat come along, contribute a ton of stuff upstream, ask for payment for their stabilized, boring downstream distribution, and suddenly they're worse than Genghis Khan.
Go figure, as they say.
Forcing companies to make the source behind your binaries open, without even a public release, would make it no longer free software. That is what happened when the RPL (Reciprocal Public License) was written, to get rid of the so-called privacy loophole, and the whole fairness thing: it was either code or cash. You should look into the Open Watcom compiler from Sybase.
I must say I feel confused.
My understanding is that the GPL gives recipients the right to redistribute the source code WITHOUT RESTRICTION. Red Hat threatening to take away a customer's licenses and support is an extremely large restriction. The fact that Red Hat claims they have a right to do that is irrelevant - it doesn't prevent it from being a restriction.
It is a restriction on the support contract, not on the license to the software. If you violate it, they will terminate your support contract, not your license to use/modify the software.
You will be fired as their customer but you can continue enjoying the code as you see fit under the GPL.
But that is it. Once you do it you will have no support, no further updates, no access to future source code releases, no possibility of becoming a Red Hat customer ever again.
Distributing the code is a one-shot option that you will not be able to repeat. All in accordance with the GPL and RH's contract.
I don't like it, but that's how it is.
I think there is a time limit. Under GPLv2's written-offer option, they have to keep providing the source for the binaries you already have for a period of at least three years, though where that period starts is probably a moot point. And they can charge a reasonable media and transport fee for providing it (this goes back to the time of distributing code on physical media like tape or some form of disk). Nowadays the Internet is perceived to be the way to do it, but GPL2 goes back to when the Internet was quite primitive.
But after terminating the support contract, they have no obligation to provide either updated binaries or the associated source code for the new binaries.
As it's a consequential act.
However, it would take a court to decide whether or not it is a GPL licence breach in this context, and IBM have very deep lawyer pockets.
So IBM can afford to string it out until whoever sued goes bankrupt, or the heat death of the universe, whichever comes sooner.
I'm sure IBM made that calculation before the announcement.
There's a lot of copyright and contract effects that are entirely based on it being too expensive to sue.
From the article:
In an essay shared on Monday, Edward Screven, Big Red's chief corporate architect, and Wim Coekaerts, head of Oracle Linux development, laid into IBM for trying to profit at the expense of the open source community.
Pot calling kettle black?
Regardless, the only thing anyone can ever say about the motivations and desires of the copyright holders of a piece of source code is whatever is written in black and white in the license they've released the source code under. If a piece of code is licensed under GPL2, then by definition they are content for anyone else to do whatever they want with it, within the terms of GPL2.
If that means making a ton of money out of it, fair enough. Even the GPL2 text is clear that you can sell the binary, and even charge a reasonable fee for the source code. Whether we like it or not, we have to accept that whatever is being done that is compliant with GPL2 is within the expectations of the copyright holders, and we cannot complain about it, because it's their code, not ours.
The problem seems to come when new people come into a project with different views to the original founder(s). But, that's their choice. If they didn't want anyone exploiting their code for commercial gain, they shouldn't have contributed to a GPL2 licensed project.
In the case of Linux, not even Linus Torvalds seems motivated to attempt a re-licensing.
Given Oracle's history, I can't really believe that their current position is any more than marketing noise.
Oracle Linux - at a guess, it runs Oracle's Cloud, and it's deployed elsewhere just to run Oracle databases? I'd be very surprised if anyone is running a good chunk of their infrastructure on it. Maybe they wanted the Red Hat compatibility so it would play nice with all the enterprise kit without having to nag other enterprise-level suppliers to add Oracle Linux to the supported list, but that may be history now if they are on the supported lists anyway?
Well if you don't trust Oracle (& why would you)
SUSE have now also thrown their hat into the ring.
https://www.suse.com/news/SUSE-Preserves-Choice-in-Enterprise-Linux/
And if you happen to run Linux on HPE HW this will come with the added benefit that you can run in secure boot mode without relying on M$ acting as the CA since HPE already load SUSE's keys into their FW.
What would be the point of Oracle Linux if it wasn't bug-for-bug compatible with RHEL?
They can continue to draw from the CentOS Stream source and try steering the general direction RHEL is taking, but if it isn't compatible then no other software house is going to support code on it. Software houses are well aware of Oracle's reputation and aren't going to want to go there.
No supported apps == no point in the platform.
If you're not worried about supported apps you'd go for one of the other distros. If you don't worry about formal support contracts you'd have gone Debian years back and wouldn't be worrying about anything Red Hat related.
The quotation, "but equally, the Hat can respond to them doing so by terminating their customer contracts, and that is 100 percent compliant with the GPL", is at best wishful thinking. Go from https://access.redhat.com/articles/5112 to the https://www.gnu.org/licenses/gpl-3.0.html it links to. Section 6 of GPLv3 requires source code distribution to object code recipients. Check. Section 10 states, "Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License." The Red Hat EULA is consistent with this clause. Otherwise, Section 8 ("Termination") would take effect. An agreement that a recipient would not be able to further convey would violate GPLv3.
Aside from the fact that GPL3 isn't relevant to all of the code (for example, the kernel)...
By now, we all know which clauses are relevant.
Everyone agrees that Red Hat are pushing the wording right up to the breaking point and are hateful for doing that.
But have they pushed it past the breaking point? Kuhn fears not, Perens disagrees.
>Everyone agrees that Red Hat are pushing the wording right up to the breaking point and are hateful for doing that.
One has to be slightly careful with that. The only people entitled to "a view" that matters are the copyright holders. And the only true expression of their view is the license they've released the source code under - GPL2 in this case. For all we know, they're all entirely happy with this sort of thing going on, and may thoroughly disagree with the suggestion that what RedHat is doing is "hateful".
Of course, with a project like Linux with umpteen copyright holders there's probably a mixture of views, but if they don't like the terms of GPL2 they shouldn't have joined in. And there is always the option of relicensing; if all the copyright holders agree, the license can be changed.
What Could an Alternative License Be?
Not sure. It feels difficult to have any kind of license that locks in a distributor of binaries forever and a day for all future versions.
An alternative would be a license that obliges one to publish source code on a publicly available server if you're distributing binaries at all. GPL3 nearly does this but appears to stop short; you only have to make the source as accessible as you made the binary (so both can be behind a paywall). I don't know if GPL3 stopped short because it is impossible (in law) to insist on the source being on a public server, or because there's another flavour of it that does go that far.
What Else Could be Done?
As RedHat are doing with their customers, so the Linux kernel project could do to RedHat. If the condition of getting a copy of the kernel source was "you must buy a compiled one for $1, and we won't sell you another if you give the source to RedHat", they could refuse to sell one to RedHat and deprive them of access to the kernel source. That way, if the wider community really didn't like what RedHat were doing, they'd honour the terms laid down by the kernel project.
This works even if some of the source code is copyrighted by RedHat. There's nothing in GPL2 that obliges a repo to make a copy of a copyright holder's source available to them, unless you ship them a binary.
Obviously, that'd be the nuclear option for the kernel community, but if they really, really wanted to stop this kind of thing going on then this is an option that is open to them.
Bad Habits on Repos
I notice that these days, rather than litter the tops of source code files with license notices, SPDX tags are used instead. This is against the explicit advice of GPL, and in principle you need the copyright holder's permission to remove the license notice text from a file.
My point is that whilst developers and build systems may well understand what the tags mean, they probably have no weight in law. If one receives such a file without any other context, the file fairly appears to be uncopyrighted, with no hint of who the copyright holder might be, when it was copyrighted, what license conditions apply, etc. A court dealing with the file in isolation might well ask, "where is the copyright claim and license notice?", and may not accept an answer along the lines of "they're separate and only discoverable if you build the code in our build system". Nothing I've found on the SPDX wiki or meeting minutes (so far) says anything about whether source files with only SPDX tags are legally sound.
This kind of thing could become important in future, in dealing with companies like RedHat. There's probably no point trying to say RedHat has no license for the Linux kernel source, unless the legal status of SPDX tags is first established, if source files are going to be devoid of the text that the GPL license advises.
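For concreteness, this is what the two styles look like at the top of a file (a sketch using Python-style comments; the kernel's C sources carry the same one-line SPDX tag, and the author name here is made up):

```
# Style A: the terse SPDX tag now common at the top of kernel sources.
# SPDX-License-Identifier: GPL-2.0-only

# Style B: the traditional per-file notice the GPL's "How to Apply
# These Terms" appendix recommends keeping.
# Copyright (C) 1999  A. N. Author <author@example.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
```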
Perens
Seems to be being a bit quiet at the moment. I think he's in a tricky position. He let rip over GR Security essentially doing the same thing, carefully couching what he said in terms of an opinion. He can't now say, "What RedHat are doing is perfectly OK", because that'd then compromise his previously firmly stated opinion, and potentially his appearance as an expert witness in other court cases related to actual violation could be retrospectively challenged. If he repeats his GR opinion but aimed at RedHat, well, I think they could significantly out-lawyer him and may get a different result to the rather perfunctory case that GR Security was able to fund. He's probably out of the business of appearing as an expert witness or making public comments on it, because he could get into very deep water if someone like IBM gets lawyered up against him.
In the sections you've quoted, there's nothing there that obliges RedHat to sell future versions to a customer of a previous version, or supply customer support, or do any further business with that customer. They cannot be obliged to sell something to someone that they don't want to sell to. No binary, no entitlement to the source.
Obviously, if RedHat make a binary version freely available on their website and you download it, you're then entitled to the source (even if the binary is saying "cough up a customer support fee" when / if you run it). However, the recent change in stance from RedHat is a prelude to RHEL binaries becoming hard to get, unless you're already a paid up customer.
And Linux is GPL2, not GPL3.
https://access.redhat.com/documentation/en-us/red_hat_update_infrastructure/3.0/html/system_administrators_guide/gnu_general_public_licenses
"3. [...]do one of the following: [...] give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code[...]" The other options are to provide the source code with the binaries or, for noncommercial distribution, a copy of the source code offer as received with the binaries.
Red Hat is citing GPLv2 here. See https://www.kernel.org/doc/html/latest/process/license-rules.html (Linux kernel licensed under "GPL-2.0")
The posted quotation: "The GPL only obliges Red Hat to provide source code to parties to whom it has provided binaries, and not to the rest of the world." Once again, a little research would provide a different answer. I knew I had seen that before, but could not pin it down at the time.
Red Hat's basic position here is saying "The problem with the open source model is that other people profit off our work, which prevents us from profiting off other people's work".
And this is the fundamental problem when commercial vendors get involved in open source. Eventually they discover how hard it is to make excess profits on a pure support model where anybody can offer support and expertise of the same common source code. That kind of fair competition is anathema to traditional enterprises. In other words, they begin to see why commercial closed source exists.
So what Red Hat wants is to recreate the corporate benefits of closed source while exploiting the benefits of open source.
Of course, Oracle is no better here. It wants source code to be freely available when that benefits Oracle (Linux) but closely controlled when that benefits Oracle (Java).
In other words, commercial companies are gonna act like commercial companies.
All RHEL commits are still public in the CentOS Stream repo; the only uncertainty is when they trickle down into RHEL. Can't you just build RPMs with all of the cumulative commits found in the public CentOS repo and then compare the checksums of the binaries to those of the RHEL binaries (freely available via the developer subscription)? That way you'd know when the commits actually land in RHEL, without having to "illegally" use the RHEL source code to build your clone. (See the sketch below.)
Or am I missing something?
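Something like this is the sort of comparison I mean (a minimal sketch with hypothetical paths; it assumes you have your Stream-built RPMs and the RHEL ones locally, and it will only ever match if the builds are bit-for-bit reproducible, which signing and embedded timestamps usually defeat, so in practice you'd compare the rpm2cpio payloads rather than whole files):

```
# Sketch: hash locally built Stream RPMs against the RHEL binaries.
# Paths are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

ours = {p.name: sha256(p) for p in Path("stream-build/RPMS").glob("*.rpm")}
rhel = {p.name: sha256(p) for p in Path("rhel-mirror/RPMS").glob("*.rpm")}

for name in sorted(ours.keys() & rhel.keys()):
    print(name, "match" if ours[name] == rhel[name] else "differs")
```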
That is what the Linux Foundation was saying in 2017... They actively discouraged the development of more Linux kernel talent - yes, they did; word was going around at the conferences, such as their OSSNA conference.
Doesn't this play right into the hands of the vendors?
Remember when "it was/is fun working on it"?
While Oracle may very well be like IBM when it comes to its proprietary software, when it comes to Linux its business model is more open than Red Hat's. Oracle distributes Oracle Linux binaries *AND* source for free, and doesn't restrict what you can do with it, as long as it's legal under the GPL license. You can use it, modify it to your heart's content, and/or redistribute it.
Oracle charges purely for *support* in the Linux arena, *IF* you want it - and for many enterprise-type customers that's basically a requirement from their auditors. And companies *DO* pay for that. Oracle has its own team who enhance and package that distribution and, as far as I know, isn't losing money on the deal.
Why IBM/Red Hat apparently have trouble doing similarly, I don't understand. Unless they're just getting greedy.
You can't impose extra conditions on users on top of the GPL'd code.
IBM is screwed here.
I'm not blaming RedHat here, they walked that very fine line very well. This is a toe over.
I'm an ex IBM employee, one of my internal comments was that if I hadn't already retired I'd have quit over this stupidity, some of that code was mine. (A very small part).
Why it's stupid: if it ends up in court, IBM loses either way. They either lose the right to GPL code or, if they win, someone more feral will come along and rip the carpet from under them.
I remember seeing Oracle Linux when it first came out and was surprised it made it past the lawyers. It was a pretty blatant rip-off of Red Hat. The boot-up screen even used to say "Welcome to Red Hat Linux". I'm surprised Red Hat didn't throw a sue ball just for trademark infringement, if nothing else.