Linux kernel to drop 486 and early 586 support

Kernel 6.15 is taking shape and it looks like it will eliminate support for Intel's 486 chip and its contemporaries. The next version of the Linux kernel is progressing towards release. Release candidate 5 appeared over the weekend, and we expect the kernel itself will probably officially arrive around the end of May or early …

  1. Anonymous Coward

    Hubble Telescope.......

    ....uses both Intel 386 and Intel 486 processors.

    They may be obsolete.......but they are still doing sterling work!

    Why would Linux remove support for useful (but old and obsolete) processors?

    P.S. Coincidence - Hubble and Linux are approximately the same age!

    1. imanidiot Silver badge

      Re: Hubble Telescope.......

      Because keeping support for those processors means 15,000 lines of code that have to be kept in the kernel and maintained, including all relevant interfaces and drivers. It's a lot of work for processors that are probably not that commonly used anymore. Even in embedded applications I doubt they're nearly as abundant as they once were. Nor am I aware of anyone still making 486 instruction set processors new. Sure, there are still processors out there that are doing fine work, and they can keep going for a long time on the current kernel version. But there comes a point where supporting (very) old instruction sets just isn't worth the effort.

      1. demon driver

        "anyone still making new"

        "Nor am I aware of anyone still making 486 instruction set processors new" – I never thought that would come to be a criterion for whether Linux should support a processor or not. When I started to become interested Linux, one of the reasons that actually did make it interesting was the fact that it supported a plethora of old and new architectures and processors some of which I hadn't even heard about.

        1. Charlie Clark Silver badge

          Re: "anyone still making new"

          Linux has always been pretty pragmatic about hardware support, but the move towards x86_64, ARM-64 and all the new hardware devices does make maintaining a monolithic kernel more demanding. NetBSD is the OS that has always championed maintaining support for as many architectures as possible, but this comes at its own price.

        2. Liam Proven (Written by Reg staff) Silver badge

          Re: "anyone still making new"

          > one of the reasons that actually did make it interesting was the fact that it supported a plethora of old and new architectures and processors

          Well, not any more.

          Linux is a corporate tool now. It powers most of the world's internet servers, and increasingly, the kit that interconnects the Internet.

          If you want an OS that supports old and interestingly weird kit, then look to the BSDs -- especially NetBSD. That's why I specifically called out the return of the FPU emulator in NetBSD 10.

          Fairly soon Linux is going to go 64-bit only, I think. (Although FreeBSD will be first: FreeBSD 15 will be 64-bit only, across all architectures.)

          That's a good thing: it widens the gap between the Linux kernel and the BSD kernels, and that could help BSD.

          1. Anonymous Coward

            Re: "anyone still making new"

            Why is helping BSD a good thing?

            1. Irongut Silver badge

              Re: "anyone still making new"

              Because variety is the spice of life and it helps prevent a monoculture which would be just as bad if it were Linux or any other OS as a Windows monoculture.

            2. Jamie Jones Silver badge

              Re: "anyone still making new"

              Why is helping BSD a bad thing?

              1. Anonymous Coward

                Re: "anyone still making new"

                Well…. They are well known for having taken GPL code and simply stripped off the license header.

                That one springs to mind.

            3. bombastic bob Silver badge
              Devil

              Re: "anyone still making new"

              You should try FreeBSD and find out. There are features (like built-in ZFS) that have been there for years, there is NO systemD (nor will there EVER be one) and it's not encumbered with license paranoia [you can have "non-free" modules in the kernel if you want, or ship a modified kernel as a binary]

              packages and ports ALWAYS install with libs and headers for building things that depend on them. It's meant for "build from source", actually.

              There are downsides, too, but in general if you're a software dev it's MUCH better than Linux for that kind of thing In My Bombastic Opinion.

          2. williamyf Bronze badge

            Re: "anyone still making new"

            Embedded systems are part of the corporate tool box. Via is still producing (as in fabbing, in a fab somewhere) plenty of "i686"-type processors. For Vortex, this means a big chunk of their product line is toast now....

            x86-32 is alive and well; it just sits in the background where we don't see it.

            1. OhForF' Silver badge

              Re: "anyone still making new"

              The Civil Infrastructure Platform plans to support the SLTS v6.1 kernel until August 2033 so there's still more than 8 years before corporations would have to organize their own (kernel) patching for i486/i586 hardware. That is far longer than most corporations will provide software updates for their products.

              My guess is that the great majority of devices based on hardware that 6.15 will no longer support run kernel versions that have already been EOL for years, and the applications in userland most likely use outdated libraries as well.

          3. the spectacularly refined chap

            Re: "anyone still making new"

            Did NetBSD ever actually mandate an FPU on i386? I recall the discussions from the 3.0 or 4.0 era, where essentially support was turned off by default. If you wanted the FPU emulator you needed to compile it yourself and enable appropriate build flags. From memory it was more than a custom kernel build, the userland also needed to be compiled with the appropriate flags set to avoid particular instructions. From very dim memory you're looking at the NO_FANCY_MATH build parameter if anyone wants to search the NetBSD mail archives and try for themselves.

            1. John Klos

              Re: "anyone still making new"

              There are two common ways to deal with the lack of floating point hardware. One is to compile the OS and software with software floating point routines. The other is to use traps which are called when floating point instructions are run which emulate the floating point instructions as though you have a real FPU.

              The first is faster and more efficient for systems without FPUs, but even if those routines automatically use floating point hardware where it exists, they still create a lot of overhead on systems that do have an FPU.

              The second approach's instruction emulation traps have a lot of overhead - for every single floating point instruction, you need to do a context switch, which is expensive.

              Since the vast majority of the computers used in the world have floating point hardware, the second is usually the default. There was even a recent discussion about this for NetBSD running on m68k.

              There's a third way, which is to patch binaries as they're loaded and/or run to do an exception when a floating point instruction is run, but then also replace that instruction with a call to a software floating point routine. This is what can be done on AmigaOS with instructions that aren't available on the m68060 and are emulated. It might be worth considering something like that for NetBSD, although modifying running code in a protected memory OS is hardly trivial.
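
              As a rough user-space illustration of that trap-and-emulate idea (not NetBSD's or Linux's actual emulator), the sketch below assumes x86-64 Linux with glibc and gcc defaults: an instruction the CPU won't execute raises a fault, a signal handler does the work in software, and execution resumes after the faulting instruction. The 2-byte ud2 opcode and the flag are stand-ins for a real FPU instruction and real floating point state.

              #define _GNU_SOURCE
              #include <signal.h>
              #include <stdio.h>
              #include <ucontext.h>

              /* Toy trap-and-emulate demo (x86-64 Linux/glibc assumed): the
               * "unsupported instruction" is the 2-byte ud2 opcode and the
               * "emulation" just sets a flag; a real kernel FPU emulator would
               * decode the instruction and update FP register state instead. */
              static volatile sig_atomic_t emulated;

              static void on_sigill(int sig, siginfo_t *si, void *ctx)
              {
                  ucontext_t *uc = ctx;
                  (void)sig; (void)si;
                  emulated = 1;                         /* "emulate" the op    */
                  uc->uc_mcontext.gregs[REG_RIP] += 2;  /* skip the ud2 opcode */
              }

              int main(void)
              {
                  struct sigaction sa = {0};
                  sa.sa_sigaction = on_sigill;
                  sa.sa_flags = SA_SIGINFO;
                  sigemptyset(&sa.sa_mask);
                  sigaction(SIGILL, &sa, NULL);

                  asm volatile("ud2");                  /* faults, then resumes */

                  printf("emulated=%d (note the cost: one full trap per instruction)\n",
                         (int)emulated);
                  return 0;
              }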

              1. the spectacularly refined chap

                Re: "anyone still making new"

                From memory there were definitely two aspects to it: a kernel module to be loaded and changes to how userland is compiled. A quick search reveals no_fancy_math is itself a GCC flag to avoid sin, cos and sqrt functions.

                In software those would naturally be Taylor series; calculating to the limit of precision could take a hundred or more primitive floating point ops (see the sketch below).

                As you say, a trap would be processed without user context, even though it occurs synchronously, which among other things means it is not counted against the process's CPU time. Possibly a half-and-half approach was taken: trap the simple stuff but ignore the stuff that takes all day. From memory those instructions were present even on the 8087.
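
                As a minimal sketch of the series expansion being described (illustrative only, not how any real libm is written - a production library would use range reduction and tuned polynomials), the toy function below shows why full precision costs so many primitive floating point operations, each of which is itself a software routine or a trap on an FPU-less machine.

                #include <stdio.h>

                /* Toy Taylor-series sine: sin(x) = x - x^3/3! + x^5/5! - ... */
                static double taylor_sin(double x, int terms)
                {
                    double term = x;   /* current term, starts at x^1/1! */
                    double sum  = x;
                    for (int n = 1; n < terms; n++) {
                        /* next term = -previous * x^2 / ((2n) * (2n + 1)) */
                        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
                        sum  += term;
                    }
                    return sum;
                }

                int main(void)
                {
                    /* sin(pi/6) = 0.5; a handful of terms gets close, but each
                     * extra digit of precision costs more multiplies and divides. */
                    printf("%.15f\n", taylor_sin(0.5235987755982988, 10));
                    return 0;
                }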

      2. williamyf Bronze badge

        Re: Hubble Telescope.......

        In the embedded space Vortex and Via are very active with 486+ (processors that have ALL of the 486 instructions and SOME (but not all) of the 586 instructions) and 586+ (again, processors that have all of the 586 instructions but only SOME of the 686 instructions). They sell new processors even today.

        Via has 686 processors, while Vortex does not.

        Here's hoping that the i686 (i.e. Pentium II or Pentium III or similar) becomes the mandatory new baseline for x86-32 Linux going forward... Support would simplify greatly.

    2. Charlie Clark Silver badge

      Re: Hubble Telescope.......

      It only affects future versions of the OS. Existing ones will be just fine and, especially if you're running 386s, you probably don't want anything new anyway, due to increased demands on memory, etc.

      1. LybsterRoy Silver badge

        Re: Hubble Telescope.......

        Thank you - saves me posting the same.

    3. jake Silver badge

      Re: Hubble Telescope.......

      Virtually every version of the Linux kernel that has ever been released is still available for download, and likely will be until after the heat-death of the Universe. All your old hardware will still be runnable for a very long time.

      Also, note that the 6.12.x SLTS Kernel will be supported until 2037, or thereabouts.

      Most people have absolutely no reason whatsoever to be running the latest, bleeding edge kernel available. This is doubly true of scientific equipment.

      NB: The HST design process was started officially in 1977, when Linus was 8 years old. It was launched and in operation in 1990, about a year and a half before Linus unleashed Linux on the unsuspecting planet. HST does not run Linux, it runs an RTOS called VRTX.

      1. Simon Harris Silver badge

        Re: Hubble Telescope.......

        Damn you, Jake!

        I could have saved myself a fair amount of research into satellite operating systems (although I have to admit, it was quite enjoyable) if I'd read your post before I started writing mine!

      2. RedGreen925

        Re: Hubble Telescope.......

        "It was launched and in operation in 1990, about a year and a half before Linus unleashed Linux on the unsuspecting planet. HST does not run Linux, it runs a RTOS called VRTX."

        There you go doing the unthinkable, injecting facts into the debate, while the oh-so-hard-done-by crowd is doing its usual whining without a single solitary fact to back up its claims, as it always does. To be complaining about dropping support for decades-old processors at this point in time is just typical of them, same with the 32-bit whiners. If you have to have that support then scratch your own itch and do it yourself. Do not expect people to do it for you; their job is hard enough as it is without providing support for who knows how few people actually need it. And if they need it that badly then the time to pony up some cash and support its continuing development is now.

        1. Anonymous Coward

          Re: Hubble Telescope.......

          The problem with that rant is that the OP never claimed the HST ran Linux...

          1. doublelayer Silver badge

            Re: Hubble Telescope.......

            True, but they implied that the presence of that chip inside the telescope was relevant. It isn't, because the telescope doesn't run Linux, and it wouldn't be relevant even if it did, unless they were pushing kernel updates - which, for something that's really hard to fix if it ever didn't like a kernel update, they wouldn't be doing. Nor is it relevant for any machine with one of these CPUs unless the users of that machine have actually been updating the kernel. There are indeed a lot of old machines with these processors in them; almost all of the ones I've seen are running old software on the old hardware. Linux 6.12 is the last LTS version with support? Some of these things are still running 2.6 kernels, and the more updated ones are on 4.x. If you're running a 486 with a 6.x kernel, I challenge you to explain why, and only then will I start to worry about dropping support for it. If people haven't updated before when they could have, then I won't be bothered about cutting off a stream of updates they weren't using anyway.

          2. Simon Harris Silver badge

            Re: Hubble Telescope.......

            I think the point of the original post wasn’t that Hubble’s 486 used Linux (it doesn’t), but ‘hey, look, some people like NASA still use 486s. Ergo, why can’t we still have an active 486 Linux development process?’

            Hubble is a special case though - the last service mission was 16 years ago, next week. If we still had Shuttles doing satellite servicing missions I wouldn’t be at all surprised if Hubble’s computers might have been upgraded again to something more modern (the 486s were, after all, an upgrade from something more primitive). But we don’t have the Shuttle now, so Hubble is stuck with what was space qualified in the first few years of the millennium.

            Apart from nostalgia and ‘because I can’, I suspect the majority of 486s still running are special cases because they can’t be upgraded for some reason, and are probably running an OS tuned to that application.

            1. Simon Harris Silver badge

              Re: Hubble Telescope.......

              ... On the other hand, I just remembered one of the cancelled Shuttle missions (STS-144) was slated to bring Hubble back to Earth to put it in a museum, so maybe it wouldn't have been deemed worthy of more upgrades after all. Glad it's still up there doing science, even if it is running on vintage microprocessors!

      3. Missing Semicolon Silver badge
        Unhappy

        Re: Hubble Telescope.......

        The old kernel may be available, but the userland comes from the repos of distros that use these kernels. Once *they* stop, there is no Linux install for your old netbook. Sniff.

        1. jake Silver badge

          Re: Hubble Telescope.......

          Slackware's userland is still available back to Slack 1.0, from July of 1993. Including source. See Eric Hameleers' archives, and mirrors.

          I believe the Debian userland is still available nearly as far back. I assume the source is still archived somewhere on debian.org

          The Internet Archive has most of those, and many, many other ISOs of distros dating back about that far.

          Nobody is going to jerk them away from you any time soon, so relax.

    4. Simon Harris Silver badge

      Re: Hubble Telescope.......

      Dropping 486 Linux support shouldn't affect Hubble, as that runs VRTX for its operating system.

      https://en.wikipedia.org/wiki/Versatile_Real-Time_Executive

      Intel stopped shipping the 486 18 years ago, so if you still absolutely need to run a 386 or 486 instead of something more modern, you're probably either running some piece of bespoke hardware, in which case you probably have very specific O/S requirements, or doing it for old times' sake. I'm not sure Linux's aim is to be in the nostalgia business.

    5. rg287 Silver badge

      Re: Hubble Telescope.......

      Because nobody is using Linux on those processors. Not modern Linux anyway. Maybe an old kernel that doesn't include lots of "bloatware", or modern affectations like ARM or x86-64 support! For best results, download a version of Damn Small Linux from 2008.

      In 2018 these nutters got modern (Gentoo) Linux running on a 486. It took 11 minutes to boot to CLI.

      And in fairness, they did get it playing music and serving a webpage via nginx. It also cloned a git repo at a magnificent 50KiB/s, took ~4 seconds to return "python3 --version" and 15 seconds to run a "Hello World" script.

      If you're actually doing real workloads on a 486 you're probably not using Linux - you're using an RTOS in some embedded application. Booting it into a complete modern Linux environment is a woeful use of a 486's useful but limited resources.

      1. Uncle Slacky Silver badge
        Stop

        Re: Hubble Telescope.......

        There's also AOSC Retro, which manages to run recent kernels on ancient hardware: https://lunduke.substack.com/p/aosc-osretro-a-linux-distro-for-486

      2. rafff

        Re: Hubble Telescope.......

        "In 2018 these nutters got modern (Gentoo) Linux running on a 486. It took 11 minutes to boot to CLI."

        At some point I had a batch of diskless 386/25 machines. I managed, eventually, to get them to network boot to a GUI. I don't now remember dates or the Linux flavour; possibly Gentoo, and it would have been at least 25 years ago.

        Obviously I did not build the kernel on a 386/25; I had IIRC a 486DX2 dual processor for that. Working out which part of their filesystems could be common and what had to be private to each processor was entertaining.

        BTW my shrink says that I am now fully recovered.

    6. phuzz Silver badge

      Re: Hubble Telescope.......

      As others have pointed out, you can always just use an older kernel that does support your hardware, and that's the point really: there's no point in using a newer kernel on an old machine, because (eg) your 386 box won't need NVMe support.

      1. daviduvi

        Re: Hubble Telescope.......

        The problem is the risk of attack. For a long time now, setting up an internet connection has been one of the first steps of an OS install. And, on the other side, the internet is constantly being scanned for vulnerable systems, with the lists of results available for free. If the kernel is not patched up to date, it may take minutes or even seconds for your recently deployed machine to be found and fall victim to an attack. Maybe no personal or private information is stored on that machine yet, but it can still be compromised and become the pivot point into your other systems.

        1. jake Silver badge

          Re: Hubble Telescope.......

          "From a long time back, internet connection has been one of the first configurations in OS install."

          For consumer-grade, one-size-fits-all, kitchensinkware OS distributions like MacOS, Windows or Ubuntu, yes.

          For sane variations on the theme, such as Slackware, you have to specify it during the install (or add it later).

          For serious scientific systems such as HST? They are usually controlled via simple serial connections. HST doesn't speak TCP/IP, no need.

        2. navarac Silver badge

          Re: Hubble Telescope.......

          << internet connection has been one of the first configurations in OS install >>

          Way back when I had a 486 PC (or a 286 and 386), the word "internet" was not in the vocabulary!! Maybe BBS, Prestel or MicroLink, but not internet or web. Happy days, as they say.

          1. jake Silver badge

            Re: Hubble Telescope.......

            We called the ARPANET "the internet" at least by 1970, mainly because it had become an internet ... By 1974, the word "Internet" was even ratified in the RFCs ... See Cerf & Kahn's take on the subject in RFC-675 ... and note that the research (read "bullshit sessions") that resulted in RFC-675 had started several years earlier. The name was already embedded in the collective psyche by then.

            The 486 was released in 1989.

    7. DrXym

      Re: Hubble Telescope.......

      I imagine they're getting rid of support for the same reason a lot of open source projects drop old and obscure platforms - lack of volunteers interested in taking on the role of maintaining or testing that code.

    8. bombastic bob Silver badge
      Devil

      Re: Hubble Telescope.......

      It's not likely Hubble will need the kinds of updates that "more modern" devices need. That and PC-104 devices and other embedded things.

      I use an older version of Debian for an old Toshiba laptop that in my opinion still has uses... and the Debian distro had stopped supporting it around 8 years ago.

      So basically, if you have an old CPU, you'll need to track (and maybe fork+maintain) an older version. Works for me, anyway.

      FreeBSD has traditionally supported ancient hardware. However, according to Grok, "starting with FreeBSD 13.0, the minimum requirement shifted to an i686-class processor (Pentium Pro or better)".

    9. Mage Silver badge
      Unhappy

      Re 386 & 486 & Pentium 32 bit

      Effectively, for many, they are already gone, because various apps and frameworks only support x86-64/AMD64 CPUs. The last 32-bit Mint was 19.3. There is no mainstream browser, and now no apps using current Qt, as it no longer supports 32-bit.

      I agree it's a shame.

      Hubble is a bad example as it won't get new Linux code.

    10. Anonymous Coward

      Re: Hubble Telescope.......

      yeah because hubble is running sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y && sudo apt-get autoremove -y && sudo reboot every day

  2. wolfetone Silver badge

    I'm being a bit dim today - but does this mean that the new kernel won't run on newer 486 chips that you can buy that power embedded systems?

    1. jake Silver badge

      There is nothing stopping you from choosing a down-rev version of Linux if you want to run an "obsolete" core.

      However, note that many (most) embedded systems don't use Linux.

    2. Liam Proven (Written by Reg staff) Silver badge

      > does this mean that the new kernel won't run on newer 486 chips

      Depends on their ISA level.

      If they support CX8 and TSC and have an FPU, then it will work.

      If they are 1989-level ISA, then no, it won't.
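
      As a hedged sketch of that kind of check (illustrative only, not the kernel's actual probe): CPUID leaf 1 reports FPU, TSC and CX8 (CMPXCHG8B) in EDX bits 0, 4 and 8, so a quick test with GCC/Clang's cpuid.h helper looks like the one below. It obviously only runs on chips that have CPUID at all, which genuine 1989-era 486s don't.

      #include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */
      #include <stdio.h>

      /* Illustrative only: report the leaf-1 EDX feature bits that the new
       * x86-32 baseline relies on (FPU = bit 0, TSC = bit 4, CX8 = bit 8). */
      int main(void)
      {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
              puts("CPUID leaf 1 not available: below the new baseline");
              return 1;
          }
          printf("FPU: %s\n", (edx & (1u << 0)) ? "yes" : "no");
          printf("TSC: %s\n", (edx & (1u << 4)) ? "yes" : "no");
          printf("CX8: %s\n", (edx & (1u << 8)) ? "yes" : "no");
          return 0;
      }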

      Note, the onboard FPU was the primary distinguishing feature of the original 80486. Yes, there were other less-visible changes and yes, other vendors leapt on board and rebadged their 386SX clones as 486s because they supported the handful of extra instructions -- but this was just lying sales propaganda, really.

      The real 486 had an on-die FPU. That was the big thing.

      Then Intel discovered "Marlboro marketing" -- undercut yourself by selling a cheaper version of your premium product; you make less profit on each unit, but you make it back and more on larger total sales. Added side benefit: thus displace your budget rivals.

      That led to the first 486SX chips, with defective FPUs turned off. They sold, and that led to second-generation 486SX parts made without FPUs.

      That was the beginning of the rot that led to junk like the Celeron, "Pentium Dual Core" and other crippled skipware.

      1. Ian Johnston Silver badge

        Was there not also a 487 "coprocessor" which was actually a bog standard 486 with two pins (power and ground?) swapped but cost more?

        1. Simon Harris Silver badge

          I think the pin change is a myth.

          However, it does appear that the i487SX has one extra pin for positional registration with the socket (but electrically not connected), while another pin indicates that the coprocessor is inserted; when active, this turns off the i486SX, and the i487SX, which is actually a complete 486, does all the work.

          Source:

          https://www.os2museum.com/wp/lies-damn-lies-and-wikipedia/

      2. ThomH Silver badge

        In case anybody else is curious like I was, this Retrocomputing StackExchange post has die shots of the 486DX, the original 486SX and the later redesigned 486SX. Eyeballing it puts the FPU at about a sixth of the total surface area.

      3. Sam Shore

        junk like the Celeron

        As the former owner of both the Abit BP6 and VP6 with dual Celerons, I can confirm the performance was anything but junk, though at the price you may have thought you were buying junk.

        IIRC each chip was roughly half the price of the Pentium it was based upon, and benchmarked at 90% of the speed.

        No Windows software at the time supported dual processors, but running two apps that could each max out a CPU was very doable.

        Linux users were quite fond of it. I sold many systems to a software house that used them in a Linux-based compute farm they developed and sold time on.

        1. Chz

          Re: junk like the Celeron

          Celerons have see-sawed back and forth between "junk" and "pretty good deal" a number of times over the years. Mendocino and Tualatin Celerons? Brilliant for the money. Anything Netburst-based? Horrible junk. When they first went dual core back in Sandy Bridge days, they were a good deal again, but became junk as Intel kept them at only two cores for 10 years. The very last Alder Lake ones, with 1xP and 4xE cores, were decent again. I suppose there are also the Atom-based ones, which really depended on usage (for a NAS? Great. For a laptop? Junk).

        2. Liam Proven (Written by Reg staff) Silver badge

          Re: junk like the Celeron

          > IIRC each chip was roughly half the price of the Pentium it was based upon, and benchmarked at 90% of the speed.

          I did watch with admiration at the time the motherboard -- the Abit BP6? -- that let you put two Celerons into modified "slocket" daughterboards and run them in SMP, including overclocked from a 66MHz FSB to 100MHz (IIRC).

          I *was* paying attention.

          I didn't do it, because I've updated benchmark suites to measure the performance delta between a smaller or larger on-chip L1 or on-package L2 cache, and the delta between uniprocessor and dual-processor performance.

          At the time, and for the money, I'd rather have one faster chip, thanks.

          But the bigger message is, as ever, missed:

          If Intel could turn off FPUs and sell the result cheaper and still make a profit, or fit less on-package cache SRAM and still sell the result at a profit, then what that _really_ indicates is that there was a large profit margin on the full-spec full-price products. So big that a large chunk of it could be thrown away and more money spent to create a cheap version, and the company _still_ cleaned up.

          It's dishonest marketing and I personally hate that.

          Intel _could_ have lowered its prices on the full-fat products more, and still sold more units without selling intentionally crippled parts.

          AMD and Cyrix and others had ways to undercut it. The IDT WinChip I mentioned in the article is an example: a lot of those went into SuperSocket 7 systems when Intel was fooling around with Slot 1.

          1. doublelayer Silver badge

            Re: junk like the Celeron

            Why is having a massive profit margin dishonest marketing? Annoying, I get. Choosing to buy someone else's product that's reasonably priced, definitely. But if the answer is that their product is actually better, that they can make it cheaply, and that they're choosing not to let you buy it cheaply, that's just normal. It's the same reason I would probably have accepted a lower salary when my current employer hired me, but they offered a certain number and I accepted it or even negotiated it up. It's how everything works, and they aren't lying when they say that this is how much you have to pay to get one of these things.

      4. Mage Silver badge

        Re: beginning of the rot

        And Atoms that couldn't address enough RAM, so cheap tablets / "netbooks" with those had 32-bit Win10, though 64-bit Linux works fine on them with the limited RAM if you fix up the boot loader to be 32-bit to suit the BIOS. Even ONE more pin!

    3. imanidiot Silver badge

      The vast majority of those don't run Linux, and if they do, the more modern ones all have the requisite features (TSC, support for CMOV) to keep running the Linux kernel for a while yet. One of the oft-used options is the Vortex86, many of which have the needed support for the i686 instruction set.

      And even the "obsolete" chips can still run just fine with older kernel levels for a long time to come.

    4. doublelayer Silver badge

      In addition to all the comments about what will still be supported, there are also a lot more choices of embedded processors which support Linux just fine. There are probably many cases where an old design does require a 486-compatible chip, but most embedded devices can work as well if not better with a different processor running a different ISA, with more features, lower power requirements and, because everyone's using them, a much lower price.

  3. rgjnk Silver badge

    What's the benefit?

    If you dig through the kernel there's support for all sorts of weird & wonderful stuff, and while it might keep those with a tidiness fetish happy to scrap 'obsolete' stuff, the benefit is normally minimal - there's rarely any ongoing maintenance effort involved on those chunks of code, and as you already have *many* branches of CPU support in the kernel there usually isn't any wider simplification, as you still have to cope with all the other processors & their needs & features.

    Plus as has been mentioned embedded 486 isn't completely dead yet.

    So is this genuinely useful or just a cleanup for the sake of it? Using lines deleted as the metric suggests the latter.

    1. jake Silver badge

      Re: What's the benefit?

      Because the vast majority of people/use cases have no use for the old stuff ... and support for the old stuff will continue on as it always has, with no need for modern hardware support.

      IOW, if you have the need, just use an older kernel. They are not going anywhere any time soon.

      1. Anonymous Coward

        Re: What's the benefit?

        > IOW, if you have the need, just use an older kernel.

        And what if I need Linux to support that whizzy PCIe card with the latest version of USB 4 on my 486 Amstrad Mega PC?

        Did you ever consider that, you insensitive clod?

        1. jake Silver badge
          Pint

          Re: What's the benefit?

          Did Linux ever support the Amstrad Mega PC? (That's an Amstrad PC with a Sega Genesis grafted onto it, for those who don't know.)

          Regardless, I'm fairly certain that no Amstrad PC ever shipped with a PCIe bus. If I'm wrong, I'm sure someone will say so here. Ta in advance for educating us, have a beer.

          With that said, if such a kludge ever existed, and Linux drivers were ever written for it, they still exist somewhere, so you can still run it.

          If the kludge ever existed, but nobody ever wrote the drivers to run Linux on it, you are still free to write your own drivers.

          Have fun!

    2. Mendy

      Re: What's the benefit?

      If you change to a later version of GCC etc. then you need to make sure all the older code can still be compiled, and fix any new checks/warnings.

      1. jake Silver badge

        Re: What's the benefit?

        You can still run aging versions of GCC when working with old equipment. Hell, sometimes I still run PCC when working on ancient kit.

        My IBM 1401 still runs development tools written in 1959, occasionally ... although I admit I usually choose the updates built in the mid 60s, or a trifle later.

  4. sedregj Bronze badge

    My first 80486 based PC had 4MB of RAM and a 40MB hard disk, which was absolute luxury compared to the 80286 I had before it, let alone my C64, Speccie, or ZX81.

    The 6.11 kernel on this PC is 15MB with a 75MB initrd.

    If you need to maintain hardware from the '90s then you will be familiar with "make config" already, and you won't be using the current kernel either.

    1. Simon Harris Silver badge

      That must have been an early 80486 machine. Even my 286 machine from 1987 had a 40MB hard drive. My 486 home machine had a 250MB drive, about 1993-ish, can't remember now if it had 8MB or 32MB RAM.

      1. Rob Daglish

        Without turning into the Yorkshireman sketch, I'm not sure my first 486 was that well stocked - it definitely had 4MB of RAM, and I remember upgrading the hard disk to 127MB... It was definitely pre '93, possibly 90/91 as I seem to recall having it before I started secondary school in 1991.

        It eventually got 8MB, with the hard disk growing to 500MB, and at some point I believe 16MB (4*4MB SIMMS, which now live in my Korg Triton keyboard!)

      2. John PM Chappell

        Aye, same. My 286 had a 40 MB hard drive, which was double the size of the school server's, and *that* was just an 80186, too (Nimbus - education-only supplier, IIRC). Some schoolmates didn't believe I actually owned such a beast. :P

  5. Anonymous Coward

    Surprising number of 486 CPUs on ebay

    Used and purportedly new (unused).

    I am sure if you had a 486 itch to be scratched one of the PRC's Intel licensees could do you a limited run.

    It might even be practicable to knock up a 486 in a commodity FPGA.

    I suspect a modern RISC processor - an ARM, say - could more than adequately emulate a 486. Encapsulate the ARM chip, memory and support electronics in the original 486 packaging and you could have a classic microcoded CISC processor.

    I wouldn't imagine any embedded 486 would be running a kernel much later than the 2.2 or 2.4 series and then with a very customised (minimal) kernel so any maintenance or back porting would be very limited ... probably non-existent.

    I managed a few days ago to get into a device of mine from ca. 2010 running Linux on a MIPS processor, and found a 1.x kernel. Bleeding edge? - only from the rear. :)

    1. dharmOS

      Re: Surprising number of 486 CPUs on ebay

      There is a 486SX soft core that runs on an Altera FPGA, ao486 (https://misterfpga.org/viewforum.php?f=13), but even that seems to leave out the FPU of the 486DX.

    2. Anonymous Coward

      Re: Surprising number of 486 CPUs on ebay

      If bleeding from rear persists, visit a doctor.

  6. Mockup1974

    Good news for NetBSD

    I'm actually glad it got dropped. As someone with an interest in OS diversity, this strengthens NetBSD's niche proposition of running on old and obscure hardware. Because let's be honest - on a normal piece of x86_64 hardware you would just run FreeBSD or Linux instead.

  7. BinkyTheMagicPaperclip Silver badge

    Nice to hear about the NetBSD FPU emulation

    It's largely useless, of course, but just strengthens NetBSD's hacker OS credentials.

    I would *not* use it as a daily driver - been there, tried that, ran away. However, as a target for hacking, or for creating an embedded system where you're prepared to maintain the software end to end, it's remarkably flexible and permissively licensed.

  8. 45RPM Silver badge

    When will support for the 68030 be dropped? The CPU series (680x0) so good that Linux won’t let it die!

    1. John Klos

      There's no reason to drop it, nor the m68020 with m68851. Motorola had much more of a true 32 bit processor with all the trimmings in 1984 than Intel had with the i80386.

      1. 45RPM Silver badge

        Don’t get me wrong, I love those Motorola CPUs. They’re genuinely very well thought out and they’re nice to program. But…

        Every time Linux drops something I always expect 68k to be on the chopping block. And it never is. Long may that continue!

  9. Anonymous Coward

    Errata

    According to this article (https://www.tomshardware.com/pc-components/cpus/intel-itanium-is-finally-laid-to-rest-after-linux-yanks-ia-64-support) (Be sure to click the affiliate links to buy your needed Itanium processors)

    Linux dropped the Itanium architecture in 2023. Probably had something to do with GCC dropping support. Still, 11 years.... (when did DEC Alpha go? AS/400? (is that an architecture?))

    1. jake Silver badge

      Re: Errata

      If you prefer an article on dropping Itanic support that was written a trifle closer to home, try this one:

      https://forums.theregister.com/forum/all/2023/11/21/saving_linux_on_itanium/

  10. Henry Wertz 1 Gold badge

    support timeframe

    Long story short, it appears (not planned, but de facto) that Linux now has about a 30-year support timeframe before they may yank kernel support for your hardware.

    Besides this 486 and early Pentium support, they've also started within the last year or so dropping drivers for hardware I used back when I started using Linux in the 1990s (well, I *started* on a 386, all ISA bus, 4MB RAM and an ST250R 40MB RLL hard drive I inherited from an 8088 system. Started with an Oak VGA then a Cirrus Logic VGA card.) But the Matrox Mystique 220 and G400 I used in the mid-1990s or so have both had their accelerated drivers removed (apparently they would still work as a frame buffer.)

    The earliest Pentiums (the ones with the F00F bug that support is being dropped for) came out in 1993.

    I did begin to wonder if they'd EVER drop hardware support (well, they did drop 386 support previously, but are dropping more drivers and CPUs now). Apparently yes, but with about a 30-year support timeframe.
