openSUSE Tumbleweed team changes its mind about x86-64-v2

Tumbleweed is changing course once again, but it's due to popular demand, and it means broader compatibility for more people. Saying that, it's looking for someone to help maintain its 32-bit support. Back in November, the openSUSE project announced that it was changing the minimum CPU requirement for Tumbleweed on x86-64 to …

  1. Smirnov

    Tumbleweed is the future of openSUSE.

    "Tumbleweed is taking on increased importance. Its main corporate sponsor SUSE is aiming its next-gen enterprise distro towards an immutable root filesystem and containerized workloads. That pulls the rug out from underneath openSUSE Leap, which is the current stable-release version of openSUSE. Leap releases have been synchronized with SLE since 15.3, which means that if SLE is replaced by ALP, Leap no longer has a base to draw from."

    Seems the author doesn't really understand the relation of the various SUSE Linux distributions.

    The source is Factory, where the packages are built and QA'd. Once they pass, they go into Tumbleweed. Certain snapshots of Tumbleweed that are mature enough then form the basis for both SUSE Linux Enterprise (SLE) and openSUSE Leap.

    Essentially, Tumbleweed is to SUSE what Fedora is to Red Hat, and openSUSE Leap is what CentOS was before it was "repurposed" by Red Hat.

    ALP is one potential(!) candidate for the next major version of both SUSE Linux Enterprise and openSUSE Leap. ALP is also based on MicroOS, which sits between Tumbleweed and SLE/Leap (although closer to the latter). Should ALP end up becoming the next major version of SLE and its openSUSE Leap equivalent (whatever it will be named), which is still highly questionable as no decision has been made (at this time, ALP is more of an experiment with an uncertain future), then those versions will still sit behind Tumbleweed (which might also be named differently by then).

    In any case, SLE 15 and with it openSUSE Leap still have many more years to come, and even if the decision is made to go ahead with ALP, it will take several years before it becomes a real product. So any concerns about the future of openSUSE Leap are pointless at this point, and whatever the next versions of SLE and the openSUSE LTS-style release turn out to be, there will surely be an easy migration path once SLE 15 and Leap 15 reach EOL.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Tumbleweed is the future of openSUSE.

      [Author here]

      > Seems the author doesn't really understand the relation of the various SUSE Linux distributions.

      This is of course entirely possible, yes.

      But there are complexities here that your comment skips over.

      Yes, broadly openSUSE is to SLE as Fedora is to RHEL.

      But the mapping is not 1:1.

      Fedora is versioned. RHEL is built from snapshots of given versions of Fedora every few years.

      So, for example, as I said when covering RHEL 9, it's based on Fedora 34:

      https://www.theregister.com/2022/05/10/red_hat_enterprise_linux_9/

      RH only takes "about 7 Fedora releases":

      https://www.linux.org/threads/the-fedora-redhat-connection.41000/

      Whereas SLE and Leap are synched.

      Leap 15 ~= SLE 15.

      Leap 15.1 ~= SLE 15 SP 1.

      Leap 15.2 ~= SLE 15 SP 2.

      Leap 15.3 = SLE 15 SP 3 (because they moved to a common core at this release)

      Leap 15.4 = SLE 15 SP 4.

      It is not yet 100% certain that ALP will be the only future of commercial SUSE, but that is what the company seems to be planning right now.

      As for openSUSE, as Tumbleweed is a rolling release, it's not equivalent to any Fedora version. If anything it is more like Fedora Rawhide.

      But for RH the cycle goes:

      Rawhide -> (twice a year) Fedora -> (every 3-4 years) RHEL

      Whereas for SUSE there is not a directly equivalent path.

      MicroOS has its own version numbering scheme, and openSUSE does not have LTS versions.

      I think your attempt to shoehorn SUSE and its various lines into direct equivalence with RH and its various lines does not work.

      As for the future, what I am trying to indicate with my SUSE-related coverage is that the direction is changing: there have been quite a few changes in the last year or two.

      As I wrote, after SUSE's Liberty Linux distro was cancelled, the product director for SLE, Kai Dupke, left the company. He went to CloudLinux, the backer and main sponsor of AlmaLinux.

      https://www.theregister.com/2022/02/28/almalinux_85_powerpc/

      I suspect that this lies behind the degree of uncertainty about future SLE releases.

  2. Henry Wertz 1 Gold badge

    Sensible

    Since I'm not too familiar with these terms, I googled them. There's a whole list of instructions newly required by each level, but the highlights: x86-64 is, as the name implies, 64-bit x86; SSE and SSE2 already existed by then, so they can be used. x86-64-v2 requires SSE3, SSSE3, SSE4.1, and SSE4.2. x86-64-v3 requires AVX and AVX2. x86-64-v4 requires AVX-512 (which I assume won't be that popular as a minimum, given that some of Intel's newer chips have removed AVX-512: the performance cores supported it until it was disabled through microcode, but the Atom-based efficiency cores did not).
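
    If you're curious which level your own CPU meets, a recent compiler can tell you directly. A minimal sketch, assuming GCC 12 or newer (older compilers only accept individual feature names like "avx2", not the level names):

        #include <stdio.h>

        int main(void)
        {
            __builtin_cpu_init();  /* populate the runtime CPU feature flags */

            /* The argument must be a compile-time string constant, hence no loop. */
            printf("x86-64-v2: %s\n", __builtin_cpu_supports("x86-64-v2") ? "yes" : "no");
            printf("x86-64-v3: %s\n", __builtin_cpu_supports("x86-64-v3") ? "yes" : "no");
            printf("x86-64-v4: %s\n", __builtin_cpu_supports("x86-64-v4") ? "yes" : "no");
            return 0;
        }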

    Sensible. To be honest, video players, encoders, and games generally already do runtime detection of which MMX, SSE, and AVX instructions are supported, and use them or not on the fly. The software-defined radio (SDR) stuff I used would generally run an ahead-of-time "calibration" step: it determines which instructions the CPU supports, then runs timing tests on various hand-optimized routines for Fast Fourier Transforms and whatever other number-crunching, to find which routines give the best performance (the run-time test is needed because some CPUs were actually faster running a combination of older instructions than using the newer ones, for whatever reason). These are the programs that would get by far the largest speedup from these instructions, and they already use them. Gaining a percentage point or two of speed in LibreOffice or whatever, while having it not run on the older systems that might actually benefit from those few percent, is kind of pointless.

    hwcaps is cool, because you can ship side-by-side libraries (and I think binaries) for any packages that would actually benefit: the system chooses the x86-64 lib if the CPU doesn't support certain instructions, and the x86-64-v2/v3/v4 libs if it does, on the fly. That could make things interesting if there were compiler bugs... but I haven't heard of any distro having a serious problem with that for years and years, so I think it'd be smooth sailing.
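
    As an illustration of that on-the-fly selection, here's a minimal sketch of the function-pointer dispatch pattern (mine, not lifted from any particular player or SDR package; the transform routines are placeholders):

        #include <stdio.h>

        /* Placeholder routines standing in for hand-optimized variants. */
        static void transform_baseline(void) { puts("baseline x86-64 path"); }
        static void transform_sse42(void)    { puts("SSE4.2 path"); }
        static void transform_avx2(void)     { puts("AVX2 path"); }

        /* Chosen once at startup; all later calls go through the pointer. */
        static void (*transform)(void) = transform_baseline;

        static void select_transform(void)
        {
            __builtin_cpu_init();                      /* populate feature flags */
            if (__builtin_cpu_supports("avx2"))        /* roughly the v3 tier */
                transform = transform_avx2;
            else if (__builtin_cpu_supports("sse4.2")) /* roughly the v2 tier */
                transform = transform_sse42;
        }

        int main(void)
        {
            select_transform();
            transform();  /* real code would also time the candidates, like the
                             SDR calibration step described above */
            return 0;
        }

    hwcaps does the equivalent one level down, in the dynamic linker: with glibc 2.33 or later, a library shipped under e.g. /usr/lib64/glibc-hwcaps/x86-64-v3/ is preferred over the baseline copy on CPUs that qualify, with no change to the program itself.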

    That said... my oldest system currently is Ivy Bridge. Requiring x86-64-v3 would mean it could not run the software (it has AVX but not AVX2). It does support x86-64 and x86-64-v2, though, so this change wouldn't affect me. But again, I also don't see the benefit, given that the programs getting by far the bulk of the benefit already use those instructions, or not, on the fly.

    Side note -- isn't this amusing. Both macOS and Windows have cut hardware support to machines under about 5 years old (macOS Ventura requires x86-64-v3 due to AVX2 usage, and also drops drivers for numerous older devices, even pre-USB3 USB controllers... and Win11 has the TPM requirements, etc.), while the Linux distros are debating dropping support for roughly 15-year-old hardware and deciding not to do it.

    1. Anonymous Coward

      Re: Sensible

      "isn't this amusing"

      So doesn't that make it obvious that the commercial OSes are being bumped so that new hardware can/will/must be sold? Anyone care to reveal the current state of the vampire/werewolf dyad - which one is on top these days?

      1. IGotOut Silver badge

        Re: Sensible

        Unsupported and not getting the latest OS are two very different things. To be honest, I'd rather they cut the dead crud out of the latest OS, so long as they continue to support the older ones as well. For example, my 7-year-old iPad mini just got an update, but it's not on the latest version.

        Think about it logically. If you are still running ISA cards with parallel and serial ports, do you actually NEED to be running the most up to date OS?

        1. martinusher Silver badge

          Re: Sensible

          Once again we see the consumer trap, which also neatly highlights the difference between Linux and monolithic OSes like Windows. Supporting legacy hardware is a matter of installing an appropriate driver. In a multicore system, one of the (slower) cores might be assigned the task of peripheral management. But instead it's the "all or nothing" approach -- we're changing a handful of instructions, so we absolutely must ditch all legacy hardware and go for the latest version.

          The only reason why support for legacy technologies gets dropped in Linux is that there comes a time when there just isn't the user base to warrant spending the effort maintaining it.

        2. Robert Carnegie Silver badge

          Re: Sensible

          As far as I know, Apple doesn't publish an end of support date for iThings, so your old iPad may get a security update for the next discovered flaw that affects old and new devices, or it may not. I would like to know. But they don't say.

          Here is an article...

          https://nerdschalk.com/iphone-7-support-end-date/

          ...that admits to being "just educated guesses".

          Reading between the lines myself, the prudent thing to do is to upgrade when your device won't run the latest iOS, or to use the device very, very carefully. That may mean upgrading your main device anyway and keeping the old one as a spare, or looking for a trade-in offer, though it may be too late for that.

          Conceivably, the extension of support depends on whether some large customers of Apple are willing to pay for that, as well as whether Apple is willing to provide it.

          I suppose too that advertising an end-of-support date may embolden hackers to devise their exploits and save them up until the point when bugs are no longer going to be fixed on older devices.

          In the opposite direction of policy, don't Chromebooks have a built-in end-of-life date, regardless of whether the hardware is still working?

    2. Nate Amsden

      Re: Sensible

      I remember back in the 90s efforts to optimize by compiling for i586 or i686, for example; then there was the egcs compiler (which I think eventually became gcc?), and then Gentoo came along at some point, maybe much later, targeting folks who really wanted to optimize their stuff. FreeBSD did this as well to some extent with its "ports" system (other BSDs did too, but FreeBSD was the most popular at the time, and probably still is). I personally spent a lot of time building custom kernels, most often static kernels, as I didn't like to use kernel modules for whatever reason. But I tossed in patches here and there sometimes, and only built the stuff I wanted. I stopped doing that right around when the kernel got rid of the "stable" vs "unstable" trees, as the 2.4 branch was maturing.

      Myself, I never really noticed any difference. I've said before to folks that if there's not at least, say, a 30-40% difference, then I likely won't even notice (not referring specifically to these optimizations, but to upgrading hardware or whatever). A 20% increase in performance, for example, I won't see. I may see it if I am measuring something, such as encoding video. But my computer usage is fairly light on multimedia things (other than HandBrake for encoding; I have ripped/encoded my ~4000 DVD/BD collection, but encoding is done in the background, so 20% faster doesn't mean shit to me; double the speed and I'll be interested, provided quality is maintained). All of my encoding is software; I don't use GPU encoding.

      I haven't gamed seriously on my computer in over a decade, and I don't do video editing, photo editing, etc. I disable 3D effects on my laptop (MATE + Mint 20), even though I have a decent Quadro T2000 with 4G of RAM (currently 17% of video memory is used for my 1080p display). I disable them for stability purposes (not that I recall specific stability problems with them on; I also disable 3D acceleration in VMware Workstation for the same reason). I've never had a GPU that required active cooling, and I have been using Nvidia almost exclusively for 20 years now (I exclude laptops, since pretty much any laptop with Nvidia has fans, but of the desktop GPUs I have bought, none has ever had a fan).

      I really don't even see much difference between my 2016 Lenovo P50 (quad-core i7, SATA boot SSD plus 2 NVMe SSDs, 48G of RAM, Nvidia Quadro M2000M) and my new (about 2 months old now) Lenovo P15 (8-core Xeon, 2 NVMe SSDs, 128G of ECC RAM, Quadro T2000). The P15 is a bit faster in day-to-day tasks, but I was perfectly happy on the P50.

      My new employer insisted on supplying me with new hardware, so I said fine, if you want to pay for it, this is what I want. They didn't get it perfect: I replaced the memory with new memory out of pocket and bought the 2nd NVMe SSD (not that I needed it, I just thought fuckit, I want to max it out). I was open this time around to ditching Nvidia and going Intel-only for video, but it turns out the P15 I wanted only came with Nvidia (even though it's hybrid, I think..). Since the Nvidia chip is there anyway I might as well use it; I've never had much of an issue with their stuff, unlike some others who like to run more bleeding-edge software. I expect a 6-10 year lifespan out of this laptop, so I think it's worth it.

  3. drankinatty

    Good for you, Dominique

    I knew cooler heads would prevail.
