Linus Torvalds suggests the 80486 architecture belongs in a museum, not the Linux kernel

Linux boss Linus Torvalds has contemplated ending support for the i486 processor architecture in the Linux kernel. The ancient architecture was up for discussion last week in a thread titled "multi-gen LRU: support page table walks" that considered how the kernel can better handle least-recently-used (LRU) lists – a means of …

  1. Joe W Silver badge

    Lessons?

    "And it appears the processor will also soon pass from Linux history to Linux legend."

    The quote this is based on continues along the lines of "and some things, that should not have been forgotten, were forgotten" (lazy me does not look this up). I just hope it will not come back to haunt us, with this being a bit of an "unbidden oracle" quote.

I found it weird when they stopped supporting i386, I find it strange that now 486 is on the list, but it does make sense. The hardware is 30 years old, which is an eternity in this field. As much as I like to reminisce about those simpler times (oooh, calculating the Vmode for the X conf? The hassle with DOS and the brain-dead memory management, the tweaking of autoexec.bat - ok, computers are easier to use now), I think our Emperor Penguin is right with that suggestion (not that I have anything to say in that discussion).

    1. jake Silver badge

      Re: Lessons?

      They may have stopped supporting the i386 code in new kernels, but that doesn't mean i386 support has been removed from the old kernels ... some of which are still maintained. And of course all but a few of the very oldest kernels are archived for anybody to download and run whenever they feel the need. There will be no loss of hardware support.

      This stuff isn't exactly written on papyrus and only known to the scribes and high priests.

      1. Anonymous Coward
        Anonymous Coward

        Re: Lessons?

If it's compiled for i386, has any of that compiled code changed recently... even in a decade?

      2. Snake Silver badge

        Re: No loss of hardware support

        But what about the important thing, the thing that actually *counts* on a computer: software support.

        Can an old kernel support all the latest software packages?

        Note that this is a serious question, not a troll. I haven't been in Linux in decades and I certainly won't be trying an old kernel just to test this question out, so if anyone knows please reply.

        1. Anonymous Coward
          Anonymous Coward

          Re: No loss of hardware support

It shouldn't have to support all the latest software packages. If I run an old computer in a museum, I'll give it an old distro, like the 2005 version of Debian, where the old kernel and old packages have been tuned to work with each other, and it is more than capable of doing anything I would conceivably want to do on that old hardware. In fact I already have such a system for connecting to old PDAs over the serial port (because I can't get USB serial to work properly on more recent kernels). It works fine, and I'm not going to break it by running the latest software on it.

          If on the other hand you want to connect this old computer to the Internet then you might be in more trouble. But the Raspberry Pi makes a much better server and saves electricity, so having an old 486 as a server doesn't really make sense anymore.

          1. J. Cook Silver badge
            Trollface

            Re: No loss of hardware support

And some of those older machines generated a LOT of heat and drank a fair amount of power for that small amount of compute, too...

            1. Corey Winston

              Re: No loss of hardware support

Can you please provide an example of a configuration that even required a heatsink? I'm having trouble remembering anything like that. Desktop CPU power consumption was rated at about 5W back then. I remember trying to make a decision between getting a 486DX2-66 with a bus speed of 33MHz or getting a 486DX50 with a 50MHz bus speed when I heard about a guy who overclocked a 486DX50 to run at 66MHz. I was intrigued so I asked him how he did it. He said he replaced the clock crystal on the motherboard and mounted a heatsink/fan on the processor. It was the first time I had ever heard of overclocking. A cooling fan seemed like a novel idea to me. If I recall correctly, when Pentium II processors started consuming over 30W, that's when Intel began integrating their own cooling fans.

              1. jake Silver badge

                Re: No loss of hardware support

                My 1990 NeXT Cube has a (small) heatsink on its 25MHz 68040.

I know I remember heatsinks on i486s, but I can't remember where/when that started, and I can't be arsed to start taking covers off boxen to find out. I remember adding and increasing the size of heatsinks on AMD processors, especially when overclocking.

                1. NickHolland

                  Re: No loss of hardware support

The very first 486 system I put my fingers in was a Zenith server with a 486/25MHz, probably around 1990 or 1991. And there was a little heatsink on it. We all had a good look and laugh at how hot this thing must run to put a heatsink on a digital IC. Yet some later 486/33 systems did NOT have heatsinks. Same with 680x0 chips -- iirc, I added a heatsink to my Mac Quadra 650 as it was doing week-long builds, but Apple shipped it without.

486/25 didn't NEED a heatsink if there was any air flow. Heck, I saw a PPro running "reliably" (worst computer system I'd ever seen, but it wasn't overheating) with the fan and heatsink sitting on the bottom of the case (not on the proc!), running Novell Netware, which idle looped, didn't halt the proc when it had nothing to do, so it was HOT, but chugging along. I also saw a P133 Netware server running with a failed fan -- it had over 1000 days uptime before the thing crashed due to running out of RAM, not due to the bad fan. Sure, the heat sink was there, but those stupid fans on the heatsink turn into blankets when they fail -- worse than no fan.

                  Early on, I think there were a lot of engineers, like me, that didn't like hot chips, so they put heat sinks on things that got warm. Considering the average computer had an expected usage life of less than five years, I suspect a lot of older machines would be just fine without a heatsink.

                  Another extreme -- Evergreen 486 accelerator mods. AMD 5x86 133MHz proc upgrade for 486 systems. Tiny little heatsink on a tiny proc, no fan, got so hot they would literally cause pain if touched (never tried to hold my finger on long enough to see if I'd get a physical burn), and in the days of idle-loop OSs, so it would run hot the entire time. Apparently, they were sufficiently reliable.

                  Long way of saying, "yeah, heat sinks were on 486 systems early on. But often not needed".

                  1. Corey Winston

                    Re: No loss of hardware support

                    Desktop processors (then as now) were rated to run at up to 85 degrees C (185 F). That's hot enough to cause third degree burns on contact but apparently not hot enough to damage silicon.

                    As I recall, when AMD seriously began to compete with Intel's Pentium II with their Athlon line in the era of cutting edge 250 nm manufacturing technology, the market clamored for constant increases in MHz. Many reviewers became obsessed with overclocking. Heatsinks and cooling fans were a necessity. I once posted a message in a forum saying I had good results with a particular CPU cooler that was equipped with a 60mm fan and I was called an idiot for praising anything less than an 80mm fan. Personally I never wanted an extreme machine that expelled a lot of heat. I remember laughing to myself when I heard a guy tell a salesman he wanted a case with four fans.

                    I don't want to get off topic, but if I may mention it, I'm curious how engineers felt about the increase in power requirements. I saw a few references to the impossible task they were given to increase processing speed without exceeding the power limits. Their bosses demanded it because that's what the market wanted.

                  2. Mungo Bung

                    Re: No loss of hardware support

DX2 processors had heatsinks on them, but not fans. I've had more than one Intel 486DX2/66 run reliably with a 40MHz bus speed with the addition of a small fan blowing directly at it, and a Cyrix DX2/80 running with a 50MHz bus speed. I think I also had a DX4/100 running reliably with a 40MHz bus speed, making it run at 120 internally, but that had a fan fitted from the start.

                    I ran a few of these things intermittently over the course of a few years with no failures, although they were technically already destined for landfill when I got them. I figured that they were free, and if the smoke leaked out of them all I was out was whatever I was running on them at the time.

              2. Hubert Cumberdale Silver badge

                Re: No loss of hardware support

                Ah, those were the days – when soldering on your motherboard was even a possibility. I grew up with things like my dad installing a Replay system on our BBC Master (which was a fantastic innovation for the time), so I view the passing of those times with a little sadness.

                I do recall messing with the jumpers in our DX2-66 (which, I'm pretty sure, had no heatsink, let alone fan) and finding a configuration that made it race along (as judged by the frenetic sound of the memory test click) at what I'm guessing must have been a higher clock multiple (100?, 133?), but I was too scared to leave it like that in case I fried what was then a multi-hundred-pound processor (iirc).* My pocket money would not have covered it, in any case.

                *An unusual example of what was probably a sensible decision regarding tinkering at that age. I usually worked on the "if it ain't broke, f##k with it until it is" principle. To be fair, that did teach me a lot.

              3. CynicalOptimist

                Re: No loss of hardware support

                As a kid, I remember my Dad upgrading a 486 SX2 50MHz to a 486 DX2 66 MHz. The new chip required a small heat sink. We also increased the ram from 4 MB to 8 MB, after which Transport Tycoon Deluxe ran like a dream.

              4. PRR Silver badge
                Flame

                Re: No loss of hardware support

                > a configuration that even required a heatsink?

                Some did, some didn't. IIRC at any given MHz there would be an early release that needed heatsink, and a couple years later one that didn't.

                They were frequently 40x40x11mm or 49x49x11mm with cheezy plastic clips and sometimes a 49-cent fan.

http://www.evercoolusa.com/?p=2038

                https://www.worthpoint.com/worthopedia/intel-486-dx4-100-mhz-socket-2002987805

                I clearly recall the sink falling off a 486DX2-66. The clips failed, and SuperGlue didn't last long. It did not die but sure was hot and throwing occasional lockups.

                > Ah, those were the days – when soldering on your motherboard was even a possibility.

I did some of that. Part of every speed increase was more power. In those days we had to convert the PSU voltage to a lower number to not burn up the CPU. Very low voltage switchers were still glitchy, so the mobo makers often used a power PNP transistor. At first the difference was just a few Watts, and a naked TO-220 package would throw that off. But a really fast 486 "fell off a truck" so I used it. Ran like greased lard. But I noticed the voltage-drop transistor wobbled. It was so hot it had melted the solder holding it in the PCB! I didn't know silicon could work that hot! Well, it does, poorly, but the voltage-drop transistor was not a precision thing. (Its control chip told it what to do.) But life would be down from decades to days. So I *soldered* extender leads to an ex-CPU heatsink (which were already a glut in the shop) to cool the transistor better.

Don't actually miss those days. If a new/refurb Amazon Fire locks up twice, I just send the fool thing back. (Waiting for the truck right now.)

                ****** I was gonna say I ran a webserver on a 386SX. That half-bus 386 cut-down. It was a far better server than browser! Simple pages (tens of KB) could take 2 seconds to serve but 75 seconds to render. No proof remains. The first public server machine in my department was already hot-rodded to--

                1991 Iverson 386SX case and power supply

                OAK 16-bit ISA video card, Iverson mono monitor

                Pentium-133 on HX motherboard

                32 megs RAM (usually only 25M are in use)

                Quantum 6G EIDE hard drive

                16X CD drive

                WinNT, MS PWS(!!)

                That is Oct 1999. Note that I skipped the 486 completely. (Lots of desktop 486, but that boat sailed before a departmental web-server became essential.)

          2. Matej101

            Re: No loss of hardware support

            Just a heads up with the USB to serial:

Had the same problem; apparently brltty claims the CH340 because some braille displays use the same serial chip. Remove brltty and it starts working.

        2. aerogems Silver badge

          Re: No loss of hardware support

          It really depends on the software and its dependencies. If any part of the toolchain requires kernel functions that are no longer supported, it will fail to work properly or at all. Based on the little I've seen about this particular discussion elsewhere, they're looking to remove some function that would probably be needed by almost every app, so it would effectively be leaving any still functioning 486 systems behind with a hard cutoff.

        3. MJB7

          Re: No loss of hardware support

Few modern software packages will run on a 486 these days. They almost all assume the existence of SSE/SSE2/AVX deep in their core, and simply won't run. It's dead, Jim.
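For anyone curious what the missing guard looks like: below is a minimal sketch (assuming GCC or clang targeting 32-bit x86, with the messages invented for illustration) of the run-time check that SSE2-assuming builds simply skip. Note that the earliest 486 steppings don't implement cpuid at all, which __get_cpuid duly reports as failure.

#include <stdio.h>
#include <cpuid.h>   /* GCC/clang helper for the x86 cpuid instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1, EDX bit 26 advertises SSE2. __get_cpuid returns 0 if
     * the CPU has no cpuid instruction (true of early 486s). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & bit_SSE2))
        puts("SSE2 present - a modern binary would run");
    else
        puts("no SSE2 - an SSE2-compiled binary dies with SIGILL here");
    return 0;
}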

        4. Missing Semicolon Silver badge

          Re: No loss of hardware support

          If you want to run an archived Linux version, you will have to get hold of the correct versions of the source for all of the applications supported at the time, and their dependencies. Then build them all, and recreate the package archive. Basically, respinning an old distro. A job that is far from trivial.

          1. Michael Wojcik Silver badge

            Re: No loss of hardware support

            It's not like there aren't copies of old distributions sitting around. I probably still have some on CD-ROM, and I definitely have some on old laptop SATA drives. I'm sure plenty of people have various old distros squirreled away.

          2. doublelayer Silver badge

            Re: No loss of hardware support

            Outsource it. Go to archives of existing distros, get an old image, and find a mirror of the package repos that is still online. Clone the important files you want from that onto something else on your isolated network (because those distros don't get security updates anymore), and tell your distro where the new mirror is. If you rely on this old machine for production use rather than curiosity, keep that mirror on standby.

    2. Dave K

      Re: Lessons?

The thing is, how many people are still using 486 machines? And of the few that are, how many are trying to run the latest, bleeding-edge Linux kernel on them? As people have rightly said, older kernels will still support the 486, but there has to be a time when support for old architectures is put out to pasture. And let's be honest, 30 years of support is deeply impressive and is longer than you'd get from pretty much any other OS vendor.

Plus of course if someone really, really does want to keep 486 support going, there's nothing to stop them forking the kernel and maintaining security patches for it themselves, although you'd need a lot of skill to pull it off.

      1. GuldenNL

        Re: Lessons?

        Agreed! And saying "You, fork it!" does sound more polite than "Fork it YOU!"

      2. FIA Silver badge

        Re: Lessons?

        And let's be honest, 30 years of support is deeply impressive and is longer than you'd get from pretty much any other OS vendor.

        Isn't the point though that it's not supported anymore? Code is written to support it, but no-one ever tests if it works or not.

So really this is just going from 'It might work if you're lucky, but more than likely not, or with bugs' to 'It definitely won't work'.

    3. StrangerHereMyself Silver badge

      Re: Lessons?

      But what about decades old equipment that's still in use? In healthcare, government, industry and defense this is commonplace.

      Yes, they could still continue using it as the source code for older kernels is still available, but having mainline support would obviously be better.

      1. captain veg Silver badge

        Re: Lessons?

        We're not only talking about Intel parts, either. Cyrix and AMD sold clones/derivatives.

        -A.

      2. Lorribot

        Re: Lessons?

It's highly unlikely these are updated at all, and the software is likely too old to run on newer hardware or OSes. This is why there are loads of old kit running out-of-date versions of Windows, such as NT up to Windows 7.

        I have seen modern ships being built with Windows 7 logon screens in the background, even in the control centre of the new Elizabeth line in London I saw a Windows 7 logon screen, probably running some signaling software or some such.

        None of this stuff is designed to be upgraded and so never does get upgraded.

It is time the likes of Linux supported only recent architectures. If you were writing an OS now you would not support any 32-bit or earlier code, and definitely not old hardware, so why do it in a current OS?

        Linus is definitely right on this one and should go even further I would say.

      3. doublelayer Silver badge

        Re: Lessons?

I'm sure that equipment exists, but I doubt that you could find me a single production 486-based machine where they're running the latest kernel. I doubt you could even find one where they're using a 5.x or 6.x kernel of any kind, including LTS versions. When people keep antiquated hardware running, they usually keep the original software on it as well. They might install kernel patches for security updates, but they almost never update the kernel to a new minor version. If they're willing to go to that level of effort, they could update the machine to use more modern hardware instead; doing so brings maintainability benefits, so usually the reason they don't is cost.

        1. Anonymous Coward
          Anonymous Coward

          Re: Lessons?

          I doubt you could even find one where they're using a 5.x or 6.x kernel of any kind, including LTS versions.

          Now there's a challenge.

        2. jake Silver badge

          Re: Lessons?

          "I doubt that you could find me a single production 486-based machine where they're running the latest kernel."

          I have seen this. I do not agree with it. Modern kernels bog 486-class systems down something awful. You are always better off with a SLTS kernel of one description or another, if you actually have to run such ancient hardware. Unless you have the staff to backport security patches (etc.) to your own in-house kernel, of course ... but that's an entirely different kettle of worms.

      4. LybsterRoy Silver badge

        Re: Lessons?

I have often wondered about the old equipment. Serious question: how much of this old equipment wants to have an upgrade? If it's been running for decades, how likely is it a new issue will raise its head rather than things just chugging along?

        1. doublelayer Silver badge

          Re: Lessons?

          "Serious question: how much of this old equipment wants to have an upgrade, if its been running for decades how likely is it a new issue will raise its head rather than things just chugging along?"

I'm not sure whether I understand your question entirely, but most of the institutions running this stuff don't want new equipment, for the reasons you state. If they update things, maybe it won't work and it will be extra effort to fix all the problems involved. Using the older stuff will be just fine, because how likely is it to break if it's been running fine for decades?

This is a classic failure to consider the costs, and it sometimes goes bad. For the same reason, the companies with the hardware often don't bother keeping spares or having any plan for problems. This means that when some part finally does let out the magic smoke, they don't have one and it's so old that you can't just go to the store and get a replacement. Let's say it's the hard drive. You can probably go find a hard drive with the required interface somewhere, but it will likely be used, no warranty, for significantly more cost than it should be, and you have to find it at the last minute. After you get that, do you have a full recent backup of the old disk, something that can write that backup to the new disk, a way to check the behavior when you put your imaged disk back into the equipment, and a guarantee that these things have been tested and work? Usually, the answers to most or all of these are no.

          Upkeep of systems takes time, effort, and money. If you don't do it by having as portable a system as possible where you can use any modern hardware with minimal effort, you can also do it by having a resilient hardware setup with plans and preparations for everything that could fail. If you do neither, then you'll have lower costs if nothing goes wrong, and if something does then you could be stuck for a long time and end up with a much higher bill.

          1. FIA Silver badge

            Re: Lessons?

            This....

            I worked for a company using old Sun kit, and a nicely out of support version of Oracle.

            They dealt with this by paying a company to keep the old systems running.

Then there was the day the system broke as a few bits of hardware died in rapid succession..... turns out the company they were paying weren't actually that good and couldn't immediately provide replacement hardware.... This was then compounded as the initial failures meant hardware from dev was rapidly put into production... which promptly failed.....

            There were a few days when the entire business was basically 1 hardware failure away from destruction.

            But I'm sure they saved a few quid along the way.

            1. BPontius

              Re: Lessons?

              Sounds like a financial black hole, both paying for bad support and scraping up old hardware that can be difficult and expensive to acquire. Can't see them saving any money with bad support and labor costs for OT and idle employees due to crap systems and software.

      5. hammarbtyp

        Re: Lessons?

        But what about decades old equipment that's still in use? In healthcare, government, industry and defense this is commonplace.

Only a problem if you need to fix specific bugs or issues, and for kit this old that is highly unlikely. So what about security? Well, again, kit this old is unlikely to have much connectivity to the outside world.

But as you say, if there really is an issue and the customer is willing to pay for it, the source is still there, and will be in the future. The question is whether there is any benefit to supporting, in future releases, hardware that no one is making anything with, at the cost of performance and complexity.

    4. Aladdin Sane

      Re: Lessons?

      Are you saying Intel poured their cruelty, their malice and their will to dominate all life into the i486?

    5. sidk
      Coat

      Re: Lessons?

      "The quote this is based on continues along the lines of "and some things, that should not have been forgotten, were forgotten" (lazy me does not look this up)."

      From memory (I was too lazy to check too) the quote is from Lord Of The Rings - Gandalf relating the history of the one ring either to Frodo or to the Council of Elrond.

  2. simonlb Silver badge
    Trollface

    "the kind of maintenance burden we simply shouldn't have"

    If only he'd apply that philosophy to systemd and revert to init.

    1. Anonymous Coward
      Anonymous Coward

      Re: "the kind of maintenance burden we simply shouldn't have"

      "If only he'd apply that philosophy to systemd and revert to init."

      This is not a kernel matter, but a distro matter.

      Linus is not in charge of distros ...

  3. Anonymous Coward
    Anonymous Coward

    But there's a 486 in the Hubble telescope.......

    ......I hope Hubble isn't running Linux!!

    1. Anonymous Coward
      Anonymous Coward

      Re: But there's a 486 in the Hubble telescope.......

      Planning on popping out there and updating the kernel were you?

    2. Anonymous Coward
      Anonymous Coward

      Re: But there's a 486 in the Hubble telescope.......

      It's not!

    3. Plest Silver badge

      Re: But there's a 486 in the Hubble telescope.......

Even if it were, they're unlikely to be auto-running yum/dnf/apt updates every day, are they?

      1. jake Silver badge

        Re: But there's a 486 in the Hubble telescope.......

        Of course not. They'd manually run slackpkg, as gawd/ess intended.

    4. Anonymous Coward
      Anonymous Coward

      Don't panic!

Those old kernels will still run, folks. Frankly, there hasn't been an OS update for my Osborne CP/M machine or my original family computer, an Apple ][, in more than 20 years, and if either won't boot, it's a hardware problem. No internet connections on either, so no reason to mess with a stable build.

Also, NASA has enough brains lurking around to take care of kernel patches, for the occasions that they choose to risk an in-deployment firmware patch for the ol' bucket. I assure you the HST was not set to auto-update at 3am Sunday, and I doubt very much it is even running a recent kernel build (just a very, very thoroughly QC'd one).

      1. Binraider Silver badge

        Re: Don't panic!

The Commodore KERNAL v2 is still doing well. Developers are still releasing software and even hardware for it!

    5. jake Silver badge

      Re: But there's a 486 in the Hubble telescope.......

      Nobody else has said it, so ...

      The HST design process was started officially in 1977, when Linus was 8 years old. It was launched and in operation in 1990, about a year and a half before Linus unleashed Linux on the unsuspecting planet. HST does not run Linux.

      1. Graham Dawson Silver badge

        Re: But there's a 486 in the Hubble telescope.......

        Yet...

        1. Anonymous Coward
          Anonymous Coward

          Re: But there's a 486 in the Hubble telescope.......

          Yes...there is a 486 in the Hubble telescope:

          - Link: https://www.theregister.com/1999/12/27/hubble_telescope_gets_intel/

          ....and 1999 would be a fine time to implement Linux......

          I think we should be told!!

    6. FrankAlphaXII

      Re: But there's a 486 in the Hubble telescope.......

It's probably on an ancient version of an RTOS called VRTX. There are other electro-optical and synthetic aperture radar satellites looking the other direction which use it as well, which is why I'm not getting into detail here, but it's a known quantity.

  4. Binraider Silver badge

Generally agree with Linus, but just last month we were commenting on features for the Atari Falcon being updated in the kernel too. A very much more deceased platform!

I could be even more sinister and suggest stripping disused driver code too. Do you really need that token ring ISA card in the kernel?

    On the dead hardware front I would not blink if anything up to and including the Pentium 4 and K7 ceased to be supported.

    1. Dan 55 Silver badge

Generally agree with Linus, but just last month we were commenting on features for the Atari Falcon being updated in the kernel too. A very much more deceased platform!

      68K didn't have a crazy architecture though and is still in use in embedded with ColdFire.

    2. jake Silver badge

      What you are missing is that all that old code is still going to be available. If you need a kernel for a mid 1990s HP server with Token Ring cards sometime 30 years from now, you'll be able to download it (both binary and source) from the archives. I'd recommend keeping it airgapped, but you should be able to recover any data from it and/or run an ancient program in situ should the need arise.

      Linux 0.99pl12 will likely still be running on something, somewhere, until roughly the heat death of the Universe.

      1. Binraider Silver badge

        For museum and legacy releases, keeping the old source around is fine.

But why maintain the capability in the current release? I'll grant that hardware accepting modern CPUs and an ISA card does exist, albeit rarely. A handful of industrial systems can do it. Keeping the cruft in means there's a bunch of basically untested material that IMO could be removed from current releases on the same grounds i386 support was pulled.

        Long lived embedded systems are an interesting case. Hubble, as noted in the comments. Navigation computers on the 757 and 767 had 386's in them too. I'm pretty sure neither of those cases were Linux ofc.

        1. Nick Ryan Silver badge

          If something relies on that old hardware then updating it to a modern OS is just not going to happen. As long as some numpty doesn't decide to connect it directly to the Internet, which has happened a few times, then the old hardware running an old OS may as well continue as long as there is hardware for it. Hell, if the hardware is important enough then the hardware will still be available somehow, although whether or not this is affordable is another problem. Essentially, if it's not broken, leave it. Something that has run stably for the last twenty years doesn't need the latest revision of any OS applied to it just because it's available.

          1. LybsterRoy Silver badge

            == As long as some numpty doesn't decide to connect it directly to the Internet, which has happened a few times ==

I've seen this sort of post a number of times and I have to wonder - "so what". Is this yet another case of "I've thought of it, so it's possible, so it's 100% certain it will happen"?

If someone connects an old DOS-based machine to the internet, what are the chances of 1) a hacker spotting it, 2) a hacker having any idea of what to do with it, and 3) it actually being connected to anything that could come to harm?

            1. jake Silver badge

              Yeah, chances are that the kiddies they call "hackers" today (most of them are actually skiddies and wannabe crackers) wouldn't have the first idea what to do with it. And even if they did, it would not be useful to them. Too old, too slow, single user, single tasking, incapable of running their tools, etc.

              The problem lies in other machines connected to the same LAN becoming vulnerable ... so be careful out there if you are playing with this old stuff and trying to go online with it.

              1. Michael Wojcik Silver badge

An old machine is more likely to be used to pivot to something more interesting. Many IT-crime gangs have a bot army probing addresses for a whole collection of vulnerabilities, and there's no real incentive to remove old vulns from that, so an older machine might well turn out to be exploitable. When a bot breaks into it and notifies its C&C server, the next step will be for someone to see if anything more interesting is reachable from the compromised system – like a SCADA system, for example.

                That's the danger of having old equipment on the public Internet. It's potentially a route into your private network.

  5. Geoff Campbell Silver badge
    Windows

    <raised eyebrow>

    On the one hand, yeah, sure, who needs it, strip it out and be done with it.

    On the other hand, this rather implies that support in the kernel for old processor architectures is on the critical execution path for current architectures, which rather makes me wonder if there isn't a fundamental design problem in there somewhere?

    Me, I'd leave the code in, but make it a conditional compilation option, so that if you *really* want to run ancient hardware, you can, but the code isn't included on mainstream builds.

    GJC

    1. Adrian 4

      Re: <raised eyebrow>

He does comment that it isn't tested. Doubtless that's why: it's there, but almost nobody turns it on.

      1. Tom 7

        Re: <raised eyebrow>

Every now and then (the gaps getting exponentially longer, mind) I drag out my 50MHz 486 DX with a Gig of ram and switch it on. The lights dim and there are complaints of CMOS batteries, and then the bloody thing boots up and runs and smells of all sorts of weird shit! I look sadly at the stack of more recent machines that clog up the office and will probably never run again due to money-saving manufacturing, and then think "leave it you twat" and pop indoors to a ZeroW which cost 1/200th of the 486 and all its best cards etc and knocks it into a cocked hat.

        Nice to know it still works though!

        1. Sp1z

          Re: <raised eyebrow>

          "50MHz 486 DX with a Gig of ram"

          You sure about that RAM?

          1. captain veg Silver badge

            Re: <raised eyebrow>

            I was gifted a retired 386 machine that contained a full-length ISA card stuffed with RAM chips sufficient to make... 16MB.

            Not sure if this makes the 1GB claim less or more plausible.

            -A.

          2. khjohansen
            Holmes

            Re: <raised eyebrow>

            a "Gig of HDD" was generous back in the day!

            1. Nick Ryan Silver badge

              Re: <raised eyebrow>

              The first brand new hard drive that I bought myself was 16Mb! It wasn't exactly a large amount of storage but a hell of a lot better than not having one and also a hell of a lot better than the MFM drives that preceded it. It also had auto-park which was another stress removed from earlier HDDs.

          3. Nick Ryan Silver badge

            Re: <raised eyebrow>

            I can't help thinking the original poster meant Mb! I remember having a seriously well specced system which had 4Mb of RAM. That was not remotely cheap either.

            1. david 12 Silver badge

              Re: <raised eyebrow>

              My Dad had a 386 with 8MB RAM. When the order went in, the supplier called back to ask if that was a mistake? Just to check he hadn't confused RAM and HD.

              (The 8MB was because the unix clone COHERENT didn't have any memory virtualisation)

          4. Kevin McMurtrie Silver badge

            Re: <raised eyebrow>

            I recall people running 256 MB and 512MB in Mac IIfx computers. It was possible with enough money and system extensions. (To be fair, the IIfx ran faster than many Gil Amelio era computers that followed)

          5. sreynolds

            Re: <raised eyebrow>

            "Gig or RAM"....

Does someone work for Sky News - they never let the facts get in the way of a story.

          6. ShortLegs

            Re: <raised eyebrow>

            "You sure about that RAM?"

I'm with you... PC adverts in those days advertised motherboards able to accept 64MB.

            Whilst the 486 could address 4GB memory space, the memory controller was external.

            In 1995 when Win95 launched, the "average" amount of RAM in a system was 4-8MB.

            By 1996, 16MB was serious enthusiast territory.

And let's not forget that 4MB of RAM would cost you about £200 ($530 / £320, Oct 1995)

            1GB? That would have cost an awful lot of money. 256 lots of £200.

Even when RAM fell to about £45 per 8MB, that's still £5,760 worth of RAM

          7. eldakka

            Re: <raised eyebrow>

The 486 has 32 address bits, so it supports up to 4GB (2^32 bytes) of RAM. Theoretically, then, 1GB is definitely possible.

            However, in the early 90's, 1GB RAM would have cost several thousand dollars (in 90's dollars, probably more like $10k inflation adjusted). Therefore outside high-end 'workstation'-type situations, university research computers, etc., it would be unlikely.

            1. jake Silver badge

              Re: <raised eyebrow>

              In 1990, RAM was about a hundred bucks per meg. By ~'92 it was hovering between $92 and $95 per meg, where it stayed until about 1997 when the price started to plummet. By '98 or so you could get SIMMs for about $5/meg. So it's conceivable that the OP has a computer from that era that was upgraded to a gig of RAM ... but chances are it wouldn't have all fit on the motherboard, he'd have needed an expansion card. Or two.
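Run the numbers on a gig using those figures: 1024 megs at ~$95/meg is roughly $97,000 in early-90s money, and even at the ~$5/meg of '98 it's still about $5,100. So whatever the OP's machine had, it was never a cheap upgrade.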

      2. Graham Cobb Silver badge

        Re: <raised eyebrow>

        Back when I used to run a software team, my favourite line was "if it isn't tested, it doesn't work".

        I mostly used it to stop architects telling us we had to implement some arcane option in the spec that, as far as anyone knew, no implementation had ever used. But I occasionally used it to tell my team to go back and find some way to create an automated test for their sexy new feature to make sure it will continue to work in the future.

    2. Anonymous Coward
      Anonymous Coward

      Re: <raised eyebrow>

      This is how our DNA got to be so big and complicated ….

      1. cosmodrome

        Re: <raised eyebrow>

        Who might have thought that it can be so easy? Try CreationST DNA-Optimiser! God would have wanted you to.

      2. khjohansen
        Coat

        Re: <raised eyebrow>

        I don't think our DNA really compiles with support for H. Erectus or Neanderthalis either

        1. doublelayer Silver badge

          Re: <raised eyebrow>

          Well, since it's building us rather than the other way around, you're only getting complete support for you (support not guaranteed, this is not an LTS release and the hardware only has minor error correction features), but we have a lot of DNA from other branches of humans. This indicates that we were reproducing with those branches at some point, meaning that, if they were still around, we could likely still do so for the time being. So I would say there's a likelihood of backward compatibility in there, though like 486 support in the modern kernel, it's untested and for almost everyone untestable.

        2. David Hicklin Bronze badge

          Re: <raised eyebrow>

          > support for H. Erectus or Neanderthalis either

          Are you sure?

    3. tswsl

      Re: <raised eyebrow>

As I understand it, it's not so much the compiled code that is the concern, but rather the need to ensure, when writing code in the first place, that changes made and features added will work within the limitations of those older processors (or include workarounds, as noted in the article).

      So it's more of a burden on developers than anything else

      1. heyrick Silver badge

        Re: <raised eyebrow>

        Can't they split it off as a different architecture, like "x86 ancient" or something?

        Given how much the x86 has changed in thirty odd years, stuffing it all in together seems...not entirely logical.

    4. James Anderson

      Re: <raised eyebrow>

The fundamental design problem was the i386: bodging a 32-bit processor to be compatible with a 16-bit instruction set which was itself bodged to be backwards compatible with an 8-bit processor.

486 cleaned up some of the worst clunks, but the weirdness was baked into all subsequent Intel chips.

Mind you, the alternative approach taken by Motorola – replacing your very successful 8-bit processor with a new and rather beautiful 16-bit instruction set – did not work that well commercially.

    5. Steve Graham

      Re: <raised eyebrow>

      I imagine it's something like

#ifdef MODERN_PROCESSOR
    /* use this register */
#else
    /* lots of code to simulate the missing register */
#endif
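In kernel-flavoured C the gate is usually a specific CPU feature rather than a blanket "modern processor" flag. Here's a self-contained sketch of the pattern using cmpxchg8b, a Pentium-era instruction the 486 lacks; HAVE_CMPXCHG8B and the spinlock are invented for illustration, not the kernel's actual names. Build 32-bit (e.g. gcc -m32), with or without -DHAVE_CMPXCHG8B to pick a path.

#include <stdint.h>
#include <stdio.h>

#ifdef HAVE_CMPXCHG8B
/* Pentium and later: one atomic instruction does the whole job. */
static uint64_t cmpxchg64(volatile uint64_t *p, uint64_t old, uint64_t new)
{
    __asm__ __volatile__("lock; cmpxchg8b %1"
                         : "+A"(old), "+m"(*p)
                         : "b"((uint32_t)new), "c"((uint32_t)(new >> 32))
                         : "cc", "memory");
    return old;   /* the value actually observed at *p */
}
#else
/* 486 path: the instruction doesn't exist, so take a crude spinlock
 * and do the compare-and-swap in plain C. Slower, and exactly the
 * kind of rarely exercised fallback being discussed here. */
static volatile int fallback_lock;

static uint64_t cmpxchg64(volatile uint64_t *p, uint64_t old, uint64_t new)
{
    uint64_t cur;
    while (__sync_lock_test_and_set(&fallback_lock, 1))
        ;   /* spin until the lock is ours */
    cur = *p;
    if (cur == old)
        *p = new;
    __sync_lock_release(&fallback_lock);
    return cur;
}
#endif

int main(void)
{
    volatile uint64_t v = 42;
    cmpxchg64(&v, 42, 99);   /* v was 42, so it becomes 99 */
    printf("v = %llu\n", (unsigned long long)v);
    return 0;
}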

  6. TonyJ

    Genuine question...

    ...but isn't one of the benefits of Linux that you can use it on very old hardware? Would this stymie that?

    1. werdsmith Silver badge

      Re: Genuine question...

You could still use it on old hardware; nobody is stopping anybody from using an older kernel if they prefer.

    2. jake Silver badge

      Re: Genuine question...

      Not only will you always be able to use older kernels, some of them are kept updated as needed. For example, LTS kernel 4.4 (released in very early 2016) will be maintained until at least 2026, and probably until 2036 ... and possibly beyond, if there is a need. There are other niche kernels that get backports for security issues and the like, and the folks who need them know where to look.

    3. Anonymous Coward
      Anonymous Coward

      Re: Genuine question...

Yeah, sure you can run Linux on a 486. You can get version 1.0 and some gcc version 2 and compile the whole thing. Isn't that the point of maintenance releases?

Or you could spend the resources to emulate the product. I don't understand why things like COBOL are still alive when, software being a "living" thing, it tends to be updated to run with more modern features and languages. The problem is that people see it as too much of a risk to "maintain" something that is working, and go for these new mega-projects that end up in failure.

      1. Tom 7

        Re: Genuine question...

        "I don't understand why things like COBOL are still alive when as software is a "living" thing it tends to be updated to run with more modern features and languages. "

        Have another go. You'll get there eventually.

      2. vtcodger Silver badge

        Re: Genuine question...

        "I don't understand why things like COBOL are still alive"

        It's because there are large business and banking systems -- think hundreds of thousands of lines of business logic -- written in COBOL half a century ago still in use today. Replacing them with a language that you like better would cost a fortune and would probably create thousands of new and potentially devastating bugs. It's easier, cheaper, and safer to scour the Earth for a Cobol programmer to do the occasional enhancement than to rewrite the stuff.

        I'm not a COBOL programmer BTW. I don't think I have the patience. But of all the dozens of programming languages I've encountered since I wrote my first program in 1961, COBOL is by far the most readable.

        1. aerogems Silver badge

          Re: Genuine question...

          I personally question whether that's true. I understand the argument, I just don't know if I believe it.

          Let's just use Perl as an example replacement language. Perl developers are a dime a dozen and you could hire someone to produce a program that takes the same inputs and produces the same outputs instead of trying to copy the exact logic. That would probably only take a couple months, tops. Then you run the two in parallel for a year or however long you feel is necessary, testing the exact same inputs against both versions of the app to make sure you are getting identical outputs. Then when you're satisfied you can pull the plug on the old Cobol app. And of course you can replace Perl with basically any other language you want, I just figured Perl is known in particular for its text processing abilities, which is something Cobol was also sort of known for, but you could probably use Python or Ruby or almost any other language if you wanted. Then you're not paying out the nose for someone who can still understand Cobol and you don't need to run ancient hardware that's really expensive and almost impossible to fix, or run special emulators.
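To make the "run the two in parallel and compare" step concrete, here's a rough sketch of the sort of harness involved; the two program names and the input file are placeholders, not anything from a real migration:

#include <stdio.h>
#include <string.h>

/* Run a program with the batch input on stdin; read its report via a pipe. */
static FILE *run(const char *prog, const char *input)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd, "%s < %s", prog, input);
    return popen(cmd, "r");
}

int main(void)
{
    /* Placeholder names: the legacy COBOL job and its rewrite. */
    FILE *old_out = run("./legacy_cobol_report", "batch.dat");
    FILE *new_out = run("./replacement_report", "batch.dat");
    if (!old_out || !new_out) { perror("popen"); return 2; }

    char a[4096], b[4096];
    long line = 0;
    int rc = 0;
    for (;;) {
        char *pa = fgets(a, sizeof a, old_out);
        char *pb = fgets(b, sizeof b, new_out);
        line++;
        if (!pa && !pb)
            break;                      /* both reports ended together */
        if (!pa || !pb || strcmp(a, b) != 0) {
            fprintf(stderr, "outputs diverge at line %ld\n", line);
            rc = 1;                     /* a human now has to look */
            break;
        }
    }
    pclose(old_out);
    pclose(new_out);
    return rc;
}

Of course the hard part is knowing that "batch.dat" exercises the edge cases at all, which is exactly what the replies below take issue with.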

          1. Nick Ryan Silver badge

            Re: Genuine question...

            You really missed the parts about the scale and about professional development and software.

These financial systems have been operating reliably for decades. There are hundreds of thousands of lines of code, interacting with each other in previously defined ways, with a lot of edge cases, almost all of which have been encountered and dealt with by now. There's a fair chance that the documentation for some of this is just not there. Is this code perfect? Probably not, but it's known code.

Replacing this with "dime a dozen" perl programmers just will not work. Ever. You are suggesting replacing working code that has stood the test of time with untested code almost certainly written by developers who have not got a clue what error handling is, let alone testing, and even less likely have the remotest clue about full dependency testing. When developing a stable system that one wants to work reliably for any period of time, linking to gigabytes (including duplicated dependencies) of third-party libraries, any of which can be changed at any arbitrary point in time, is an utterly stupid way to write software. It's bad enough for web sites and services that will just be replaced in a couple of years anyway, but for something that needs to be stable for many years? Forget it.

            1. Joe W Silver badge

              Re: Genuine question...

              Another problem is speed. The "database" structure (forgot what it was called) and the whole OS are optimised for throughput. It is fast. And yes, there are a few companies still building mainframes (like IBM). And those are bloody fast. Yeah, they are also bloody expensive.

So there are banks, the board of trade, insurance companies, governments (all of them early adopters of computing technology) still stuck with "ancient" architecture, programs written in assembler and, only if you are lucky, in COBOL, and if you are extremely lucky the binaries have not been altered manually (I have colleagues that read and write machine code), so there is no guarantee you even have the source code for your applications. COBOL can be compiled for modern OSes, but there are a bunch of system-specific undocumented features that are used for speed.

              1. jake Silver badge

                Re: Genuine question...

                It's not so much about the raw speed, rather it's about the I/O capability. Transactions per second.

              2. Stork Silver badge

                Re: Genuine question...

Add phone companies to the list. And Maersk Line used to be on mainframe, bills of lading and all.

            2. jake Silver badge

              Re: Genuine question...

              "There are hundreds of thousands of lines of code,"

              Best-guess at the moment is usually given as somewhere between 2 and 4 billion lines. A couple folks I view as quite credible suggest it might be somewhere north of 6 billion. But nobody knows for sure. I think a reasonable man would accept "about 5 billion lines of COBOL are in use today", with all the obvious caveats.

            3. aerogems Silver badge
              FAIL

              Re: Genuine question...

              Whoosh! You should run for political office since you addressed the points you wanted to address, not those in my comment.

              1) Sure, the code may be working now, but how much longer can you sustain finding people who know Cobol to come in and fix it the next time there's a Y2K type issue, or even just some small fix?

              2) If it's running on original hardware, how long until a capacitor fails or something else and you can't fix it?

              3) If it's running on original hardware, how much is it costing you in terms of electricity? Both the machine itself and the necessary cooling, etc.

              4) Even if it's running on virtualized hardware, you still have #1 to deal with

              5) If you took all of these programs together they might make up hundreds of thousands of lines of code, but each individual program is probably only a few hundred, maybe a few thousand, lines

              6) Not sure what your point was about error handling and testing, since I explicitly covered testing

              7) Same about the dependencies, not sure what your point is, since it's not like you'd just be updating these systems every time there's a new Perl package released, and you do realize you can download and use local copies of any additional libraries you may use, right? ... right? ... ... right? I have to ask, because you seem genuinely confused on that point.

              8) I rather explicitly said Perl was being used as an example, which you seem to have conveniently ignored. It could be any language you want, I just picked Perl because its strengths lie in text processing, but as I said in my earlier post, you could use Python, Ruby, Rust, whatever.

              So, basically... we should respect your old timer wisdom because you had to walk uphill 15 miles in the snow, barefoot, both ways, to school every day. If you wanted a coat you had to kill and skin a bear with your bare hands, etc, etc. If you want the youngins to respect you, maybe start by respecting them enough to actually listen to what they're saying. Maybe don't be so arrogant as to think that you managed to think of every conceivable way of doing things, including those that hadn't even been invented at the time, and that someone besides you might have a good idea. And no, I'm not necessarily referring to myself, I'm referring to literally anyone who isn't you.

              1. jake Silver badge

                Re: Genuine question...

                So. In your mind, the combined wisdom of many financial institutions, in business for over one hundred years in some cases, collectively responsible for a quadrillion dollars (plus or minus a few trillion), employing the finest minds (and coders) that money can buy for their IT departments; has somehow managed to miss the obvious answer.

                We must all immediately recode all of it in something else and save gobs of money. Of course! How simple! How could we have missed it?

                The hubris in this youngster is breathtaking.

                1. Johan-Kristian Wold 1

                  Re: Genuine question...

                  The inherent conservatism of beancounter centrals and banks is astonishing. Old systems are kept because they have been working reliably for all these years, and the cost of failures and bugs can be astronomical.

An acquaintance of mine does the books for a building supplies shop, and this shop is still on a system based on old code running on an AS/400 successor. They connect to a central server through a VPN, and the system still requires a Java-capable browser to work. Cue calls for help when Java support was disabled by default in newer browsers...

                  The old IBM mainframe and minicomputer environments have evolved through the years, and new hardware (IBM Power systems) with recent operating systems (IBM i) is still available.

                2. Stork Silver badge

                  Re: Genuine question...

I was at the in-house supplier to a major user of mainframes when old stuff was falling out of fashion, and a new outfit (of whom it was said that the Sun was shining out of unusual places) was brought in to recode everything for a modern platform, probably in C++.

                  After a few years very little more was heard of it.

                  IOW, it’s bloody difficult.

              2. SCP

                Re: Genuine question...

                OK, not my field but I will take a stab at offering some counterpoints ...

                1) Sure, the code may be working now, but how much longer can you sustain finding people who know Cobol to come in and fix it the next time there's a Y2K type issue, or even just some small fix?

You do not bring in a random "dime a dozen" programmer to quick-fix something in an existential-level business-critical system. Even if you were so reckless as to do so - how would your language-du-jour rent-a-hack fare with a body of code that has been auto-translated from a source with a design based on the precepts of COBOL?

                How would they go about re-validating/re-certifying this new system?

                2) If it's running on original hardware, how long until a capacitor fails or something else and you can't fix it?

Whilst there is a lively market in legacy hardware for these types of "must have" systems, there is also a strong market for high-fidelity emulation. (At my former employer, the Charon VAX emulator was in use for a number of legacy projects.) And, pre-empting 4), the business-software concerns of point 1) remain unchanged, while the new software/hardware issues of the virtualization are a new business concern which offsets the concerns of obsolete hardware. These are both unavoidable and need to be dealt with carefully.

                3) If it's running on original hardware, how much is it costing you in terms of electricity? Both the machine itself and the necessary cooling, etc.

Insignificant in comparison to other costs and the business value.

                5) If you took all of these programs together they might make up hundreds of thousands of lines of code, but each individual program is probably only a few hundred, maybe a few thousand, lines

                The complexity of many such software systems often lies in the emergent behaviours arising from the interactions of its components rather than the inherent complexity of any individual components. Good design typically tries to control and manage these interactions - but shit happens (sometimes for good** reasons at the time).

                Having software auto-converted to another language preserving all possibly important behaviours would leave you with a mess of software preserving idiosyncrasies of the COBOL and adding idiosyncrasies from the new language. This would be accompanied by a loss of knowledge about the system as those lifers who previously understood the system would now be faced with something unfamiliar.

                6) Not sure what your point was about error handling and testing, since I explicitly covered testing

                Whilst side-by-side output comparison might provide some reassurance for commissioning final integration of a replacement system it seems unlikely to replace all the intermediate level testing stages - which would also need to be migrated (along with the software). Quite often such legacy testing environments can be just as convoluted as the design and deployment environment. The whole development process needs to be considered.

                7) Same about the dependencies, not sure what your point is, since it's not like you'd just be updating these systems every time there's a new Perl package released, and you do realize you can download and use local copies of any additional libraries you may use, right? ... right? ... ... right? I have to ask, because you seem genuinely confused on that point.

                These inherited dependencies also expand your code base - which might add further concerns as you will not have design and test knowledge for this code. Some of this inherited code might give some reassurance based on its wide-spread use - but that might be uncertain if the code has been subject to frequent updates (prior to the point you choose to freeze at). You then have to decide how critical any future updates to this inherited code are - for which you will need expertise that you probably do not have.

                8) I rather explicitly said Perl was being used as an example, which you seem to have conveniently ignored. It could be any language you want, I just picked Perl because its strengths lie in text processing, but as I said in my earlier post, you could use Python, Ruby, Rust, whatever.

                Or, and just going out on a limb here, COBOL. Its strengths are that it is oriented to your problem domain, you have a great deal of experience with it, and you know it works.

                You don't really want the itinerant programmer looking for easy money and something to fluff up their CV. You should be more interested in getting somebody capable of making sound engineering decisions based on a thorough understanding of your needs and who can determine potential problems that might lie in different potential solutions.

                COBOL is a dated language with its own peculiarities - but this is likely to pale into insignificance compared with the oddities of the corporate software system. If somebody is only concerned with COBOL not looking good on their CV, rather than being able to set out what they actually did with it, then they are probably not the right person to entrust your business critical system to.

                I am not saying that battling the demons of stupidity in the muddy trenches of software maintenance is glamorous good fun - but it can give you an interesting perspective on the complexity of developing large scale systems, and some useful experiences/stories to carry forward.

              3. Sandgrounder

                Re: Genuine question...

                Ok. I feel qualified to answer this. I was tech lead on a project for a major bank that needed to migrate a few hundred small programs from running on old/obsolete x86 architectures to a modern, fully supported and maintainable environment. These programs were not the core banking applications, these just ran in the gaps between the main systems, doing one or more jobs necessary to keep data processed and flowing across the enterprise.

                Sounds a straightforward task:

                - we had the knowledge and skills to read the old code;

                - all the programs were in a small, isolated area, walled off from most other systems;

                - there were minimal functional/logical changes, just rewrite it to do the same as it did;

                The project ran for over a year and had still not been completed when I moved on. We found significant challenges in multiple areas:

                a. No-one knew what each program was doing or why. There was no documentation. There were no business users who remembered why the program had been needed. There was no IT knowledge as to why something was done.

                b. No-one understood what the data was. As much was scraped from application screens as was written to database tables. Many fields had multiple types of data in them: mixes of dates, numbers and alphanumeric codes. Many fields were bit masks for other fields.

                c. The enterprise world is dynamic and ever-changing. One example springs to mind: a program takes data from source A and outputs it to destination B, takes data from B, mixes it with sources C and D, and writes out to destinations E, F and G. But when source C is a mainframe screen with 15 fields, of which only 12 still exist, and an update part way through the rewrite removes that screen completely - what now? Does anyone know where the data can still be found?

                d. There were 100s of hard-coded edge cases, all interwoven into an impenetrable web of conditional logic statements, trees and branches.

                e. The test data was not fit for purpose in most cases. With 20+ years of industry mergers, new products, obsolete products, half-migrated systems etc, there could be 4,000 different types of data in the records of a single table.

                f. The number of test cases required to test every possible combination of data was rising towards infinity. We stopped calculating past 100 billion.

                g. For the majority of the programs it was not known how many systems would be impacted by an individual program, let alone how to plan a proper regression test.

                Yes, we had the original source. Yes, we could write it again to do the same apparent thing in the new environment. That was usually the trivial bit. Finding out whether it still did the same thing in every use case, in every combination, with no impact on anything else? Almost impossible.

                These programs dealt with moving transactions between hundreds of thousands of current accounts, savings accounts, loan accounts, bad debtor accounts, accounting systems and so on every day, with total revenue in the hundreds of millions.

                To anyone who has never seen the complexity and scale of enterprise systems, it is like trying to hand over air traffic control duties at Heathrow to a parking attendant in a small car park.

                So save the smug old-timer comments for the playground. Realise that there is a whole world of IT technology, skills, knowledge and implementation that has been achieved by the efforts of many from several differing generations across the last 50 years. You think you know it all. Think again.

                1. Stork Silver badge

                  Re: Genuine question...

                  - the new system is going live and the old one will be switched off in August!

                  - which year?

                  - not decided yet.

              4. Nick Ryan Silver badge
                Stop

                Re: Genuine question...

                aerogems: So far the only thing that you have demonstrated is that you should not be allowed near anything of importance until you gain some real experience. Hacking a crappy website or web service together and re-inventing the same old thing over and over again? Maybe, but there'll almost certainly be a trail left by you of failed or hacked systems in a year or two. But to be allowed near large scale existing deployments? Absolutely not.

                You used Perl as an example. What has "conveniently ignored" got to do with it at all? You're not making any point at all.

                A couple of other posters have already answered your points very well. Replacing these old legacy systems is something that does need to happen at some point, but pretending that "dime a dozen" developers can do it is nonsense. Even if you got in highly skilled and experienced developers and started again from scratch, targeting the supposedly easy 90% of processes that often make up the bulk of such work, you'd have to implement it in a highly professional, stable manner and be as certain as you possibly can be that whatever you implement it in will still be available in 20 years' time - virtualisation is incredibly important here, of course.

                That leaves the remaining 10% of processes that can't easily be re-implemented, usually because how or what they do is a mystery. Doing something about them will take 100x the effort of the easy bulk, and even then you can't be sure they'll work in the same way as they did previously - and working the same way is critical. Therefore the old system will continue to be left running. You now have two systems running in parallel, and that will almost certainly continue for many years. Eventually the hope would be that the processes running on the old system stop being used, but that could take some time.

                I've worked on systems where we fixed a bug in the pricing calculations and then had to put the damn bug back in, because the customer used the prices that the system generated in their brochures, and these had already been printed and distributed for use the following year. This meant we had to put a conditional around the bug to ensure that the incorrect code was kept in use until the following year, when the fixed code could take over. Naturally enough the old code path had to remain, because historical reporting would be out if the exact same code was not in use. This is just one example of what happens in large, real systems - these things are annoying and stupid, but that's reality.
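
                For the curious, the shape of that fix was roughly the following Python sketch (from memory, so the names, the rate and the cutover date are all invented):

                from datetime import date

                # The brochures printed from the buggy figures ran out at year
                # end, so the gate keys off the transaction date rather than
                # today's date - historical reporting must reproduce the old
                # numbers exactly.
                CUTOVER = date(1996, 1, 1)  # invented date

                def buggy_price(net):
                    return round(net * 1.175)     # original: rounds to whole units

                def fixed_price(net):
                    return round(net * 1.175, 2)  # corrected: rounds to pennies

                def price(net, transaction_date):
                    if transaction_date < CUTOVER:
                        return buggy_price(net)   # keep the printed prices honest
                    return fixed_price(net)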

          2. anonymous boring coward Silver badge

            Re: Genuine question...

            "Let's just use Perl as an example replacement language. Perl developers are a dime a dozen and you could hire someone to produce a program that takes the same inputs and produces the same outputs instead of trying to copy the exact logic. That would probably only take a couple months, tops. Then you run the two in parallel for a year or however long you feel is necessary, testing the exact same inputs against both versions of the app to make sure you are getting identical outputs. Then when you're satisfied you can pull the plug on the old Cobol app. " and so on...

            I sincerely hope you are not a software developer.

            1. aerogems Silver badge

              Re: Genuine question...

              I sincerely hope you're not a book editor.

              1. jake Silver badge

                Re: Genuine question...

                Fiction, maybe.

              2. anonymous boring coward Silver badge

                Re: Genuine question...

                "I sincerely hope you're not a book editor."

                Well, I'm not. But if I tried to be one, I would make a good effort of it, not assume that I could do it based on ignorance of what it entails.

          3. druck Silver badge

            Re: Genuine question...

            20 years ago Perl programmers might have been a dime a dozen, and could have rewritten your COBOL program in Perl. 5 years ago you would have been complaining that Perl was dead and it all needed converting to Python. Tomorrow you'll be saying Go is the only future.

            Meanwhile, for better or worse, that COBOL program will be running long after you have retired.

            1. jake Silver badge

              Re: Genuine question...

              "Meanwhile, for better or worse, that COBAL program will be running long after you have retired."

              I was just offered a contract extension for the maintenance on a bit of COBOL that I first wrote over forty years ago. They want a 10 year extension, with an option for another extension when this one expires. I'm going to take it (after my lawyer vets their paperwork for bugs and re-writes the bits that need re-writing, of course). Should be fairly easy money. I'll probably offer the extension to my Daughter in ten years ... she's been helping out with it since she was a teenager. Or maybe my granddaughter will take it on.

              No, it's not running on IBM, it's on VMS.

    4. bombastic bob Silver badge
      Meh

      Re: Genuine question...

      you can use it on very old hardware

      older hardware really does not need bleeding edge code

      (I have an old laptop with an old distro of linux on it, useful for some things anyway)

      what *I* hate is things like web sites NOT working with old browsers... different problem

      1. heyrick Silver badge

        Re: Genuine question...

        These days some are actively hostile to older browsers. Google won't let you log in with Chrome (their own browser!) if it's more than a couple of months out of date (but oddly they're okay with a much older Firefox?).

      2. Hans Neeson-Bumpsadese Silver badge

        Re: Genuine question...

        what *I* hate is things like web sites NOT working with old browsers... different problem

        I've had at least one customer who until fairly recently was still standardised on IE6 for internal/intranet-based applications for that very reason. I expect they saved quite a bit of money by keeping what wasn't broken, versus investing a load of effort into making it compatible with more recent browser versions (and risking introducing bugs along the way).

        1. Nick Ryan Silver badge

          Re: Genuine question...

          While standardising on IE6 is crazy, developing to defined web standards rather than whatever fad bullshit JavaScript library is the flavour of the month is a reliable, long-sighted way to work. Code a website or service using HTML, CSS and the minimum of JavaScript (i.e. using JavaScript to enhance, never to implement, functionality) and the website should continue to operate stably and reliably for years. When a website is vomited out requiring JavaScript just to show the home page, let alone the navigation, and the JavaScript is repeatedly used to replicate standard browser behaviour, then it will never work reliably nor for a long period of time.

    5. StrangerHereMyself Silver badge

      Re: Genuine question...

      AFAIK Linux gave up on that a long time ago. Most Linux distros need Gigabytes of RAM just like Windows.

      1. Plest Silver badge

        Re: Genuine question...

        What distros are you running?! Try Alpine: the x64 version will happily run in 256MB (the 32-bit edition will run in 128MB!), which means you can fire up tons of VMs, with extreme ease, on the bog-standard 64GB multi-core laptop most people have now.

        1. StrangerHereMyself Silver badge

          Re: Genuine question...

          Linux Mint. Requires 2GB, with 4GB recommended for the "best experience".

      2. jake Silver badge

        Re: Genuine question...

        Try Slackware. I have -stable running on an older laptop (~17 years old) with 256MB.

        Before anyone says it, it's quite snappy.

    6. ShortLegs

      Re: Genuine question...

      Because "run linux on an old 486 as a firewall" kinda died around 2005.

      Long after using ipchains as a firewall died. Which was about 1999.

  7. TonyJ

    I love El Reg forums

    Nowhere else would you get a downvote for asking an honest question.

    1. werdsmith Silver badge

      Nowhere else would a downvote matter to anyone over 12 years old.

      1. TonyJ

        To me? Beyond perpetuating certain stereotypes within the Linux community, it's just funny.

        To others? Dunno but it possibly puts them off asking questions.

        As for your comment - ad hominid is all I'll say.

        1. jake Silver badge

          They are meaningless, but they are also used in childish attempts at bullying. That's why I generally don't see them.

          That's "ad hominem", but I rather like your version. In this context it brings to mind monkeys flinging shit :-)

          You do know that ad blockers work on more than just ads, right?

          1. werdsmith Silver badge

            I agree, it's ad hominin. If the cap fits.

            1. Irony Deficient

              I agree, it’s ad hominin. If the cap fits.

              The Latin preposition ad requires the accusative, which is why it’s ad hominem rather than ad *homo (hominem is the accusative singular of homo). If you prefer to use “hominin” rather than “person” in the expression, its Latin analogue would be homininus, which would result in ad homininum.

              1. David 132 Silver badge
                Headmaster

                Re: I agree, it’s ad hominin. If the cap fits.

                I think you forgot your icon :) -->

                Now I shall go and write out Romani Ite Domum a hundred times.

              2. Ace2 Silver badge

                Re: I agree, it’s ad hominin. If the cap fits.

                In theory, I would love to learn Latin, but then I read explanations like this and think, “Maybe later…”

              3. Anonymous Coward
                Anonymous Coward

                Re: I agree, it’s ad hominin. If the cap fits.

                Ha ha you said homo....

    2. Flocke Kroes Silver badge

      Re: Nowhere else would you get a downvote for asking an honest question.

      Really? Try browsing the internet some time.

    3. anonymous boring coward Silver badge

      Downvoted. But only to prove your excellent point! So take it as an upvote!

  8. Fazal Majid

    Heh, a blast from the past. I first installed Linux on my 33 MHz 486DX in 1991-1992 or so:

    https://groups.google.com/g/comp.sys.mac.advocacy/c/7MdzcPwmPFs/m/r89Mb88DzsUJ

    1. ArrZarr Silver badge
      Joke

      And you've only just recently got it running *just so*?

      1. jake Silver badge

        I'm sure many commentards have "just so" stories.

        Nowt wrong w'that.

    2. Tom 7

      A friend made me an HD with it installed and I ran it on a 486, and it seemed like a rocket compared with the Win NT 3.5 I'd got for coding without 64k boundaries fucking everything up. Managed to get it running on a 16MHz 386 with 4MB of RAM and a 2048x2048 desktop on a 768x480 screen. Seemed like all the world's problems were being solved!

      1. Nick Ryan Silver badge
        Facepalm

        Thanks for reminding me about 64k boundaries. Shudders. Now back to the pills..

  9. Altrux

    DX2

    Fond memories of my 486 DX2 50MHz, in the early 90s. Fastest thing on the block, until my friend got the 66MHz version!

    1. Zuagroasta

      Re: DX2

      I had a DX2 to play X-Wing (and do a few other things, like write a thesis and other cruft)... I must have burned out that poor CPU killing Imperials. Good times.

    2. Troutdog

      Re: DX2

      Those were the days of the "turbo" button, before engineers decoupled software timing from the CPU clock rate.

      1. Nick Ryan Silver badge

        Re: DX2

        The odd thing is that even on the 8-bit CPUs it was known that timing things to the CPU clock rate was a silly thing to do, if only because of the differences between NTSC and PAL systems, let alone different generations of the same hardware. That the early PC game developers didn't get this was quite disturbing... and also highly amusing when doing a drive-by "turbo button press" while someone was playing Star Wars on it... :)

        1. martinusher Silver badge

          Re: DX2

          The clocks were timed to facilitate the generation of color output on televisions. It's one of those weird shortcuts that people live to regret.

        2. Michael Strorm Silver badge

          Re: DX2

          On the Atari 800 (at least), the actual difference in CPU speed between NTSC and PAL systems was minor (1.79 vs 1.77 MHz).

          The reason US games written for NTSC came out slower on PAL systems is that they used the Vertical Blank Interrupt (VBI) - which occurred between screen refreshes - for updates and timing. Since PAL refreshed 50 times a second rather than 60, games ran slower.
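
          A toy illustration of the difference - not Atari code, just the arithmetic in Python:

          PIXELS_PER_FRAME = 2  # sprite moved a fixed amount per VBI

          def frame_locked(refresh_hz, seconds):
              # One update per vertical blank: speed depends on the refresh rate.
              return refresh_hz * seconds * PIXELS_PER_FRAME

          def time_based(pixels_per_second, seconds):
              # Scale movement by elapsed time: refresh-rate independent.
              return pixels_per_second * seconds

          print(frame_locked(60, 1.0))   # NTSC: 120 pixels in a second
          print(frame_locked(50, 1.0))   # PAL: 100 pixels - same game, slower
          print(time_based(120.0, 1.0))  # identical on both systems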

    3. BOFH in Training

      Re: DX2

      I had a DX2 66 with 8MB RAM, a 420MB HDD and a Sound Blaster 16, with a 2x speed CD-ROM drive.

      Came with Windows 3.11 for Workgroups.

      Don't recall which version of DOS, maybe 5 or 6.

      About 6 months later I had my first linux adventure when I installed Slackware from one of the Slackware CDs on it.

      Have been meddling with Linux on and off since.

      1. Joe W Silver badge

        Re: DX2

        Win for Workgroups should have been DOS 6, likely version 6.2. That was the time...

        1. David 132 Silver badge
          Happy

          Re: DX2

          Windows for Playgroups.

          There. FTFY.

          And yes, it was usually DOS 6.2 or (in my experience) 6.22 with it. As I recall - and a quick zip to Wikipedia confirms my decades-old memory - the point releases of DOS 6 were mostly concerned with disk compression/checking and the aftermath of the Stacker lawsuit.

          1. BOFH in Training

            Re: DX2

            Yeah 6.22 seems to ring a bell ;)

            1. Alistair
              Windows

              Re: DX2

              So I went DR-DOS and GEM. I think I've been stuck on the anti-MS bus ever since.

              And Yes, Slackware!

    4. bazza Silver badge

      Re: DX2

      The DX2 was revolutionary - or rather, very counter-revolutionary. At the time semiconductors had hit a clock speed roadblock, and parallel architectures like the Transputer looked like the way forward. Then the DX2 came out, and we all sank gratefully back into single-core normality for the next 10+ years. And then, when multicore CPUs started coming out, it was SMP all the way!

      Nowadays we're rather wondering if that wasn't a massive, lazy mistake. Problems like Spectre and Meltdown owe their origins to this, and wouldn't have happened on parallel NUMA architectures like the Transputer.

      1. anonymous boring coward Silver badge

        Re: DX2

        Intel was the king of cranking up the clock frequency. I liked the Motorola stuff better. No damn fans.

  10. spoofles

    NUMA-tic

    Sequents, think of the Sequents!

    Poor things, whence the DYNIX.

  11. martinusher Silver badge

    If it works don't mess with it

    There's a lot to be said for not trying to upgrade old software to 'the latest'. We're all trained to continually do this because of security issues but typically we don't ask ourselves what these security issues might be and how they might be relevant to vintage hardware. Especially because a modular system like Linux should be able to pick and choose the subsystem components that go into the OS.

    I've built a lot of things on systems that are a lot less powerful than a 486. But then I'm doing a job with these systems, not trying to run a graphical user interface complete with visual special effects. I wouldn't use a 486 or similar processor on a project simply because it's large and a bit of a power hog (and, like many Intel processors, it needs a fleet of support chips to run it).

  12. Anonymous Coward
    Anonymous Coward

    Asking out of complete ignorance...

    Is it possible and easy for someone like me (an advanced user, but not expert level, say) to have a modern Linux installed without all the crud in the kernel that I don't need (like old CPU support, Bluetooth support, etc)? And if yes, would it make much of a difference performance-wise? I'm just curious, because I have long had to buy refurbed kit as I can't afford new, so anything that can make things go a tad faster is of interest to me, so long as it doesn't involve anything too complicated/potentially dangerous if you get it wrong.

    1. Roo
      Windows

      Re: Asking out of complete ignorance...

      I used to worry about that kind of thing - anything to reduce the load on the cache & memory was good - but that hasn't really been much of an issue since the AMD K7 (Athlon) appeared (IMO). These days I suspect the biggest yield in performance would come from running older, less memory-intensive apps, and/or a lightweight desktop, e.g. XFCE.

    2. doublelayer Silver badge

      Re: Asking out of complete ignorance...

      Some things in the kernel codebase aren't running and would give you no performance advantage if removed. For example, if you strip out the 486 compatibility, your kernel will run at the same speed (unless removing it lets the developers add something they couldn't before - but if the only change is removing stuff, then no speedup). Some hardware support, drivers for instance, could be removed and make things a little more efficient, but mostly that would cut a little of the RAM usage rather than anything with CPU time.

      If you have performance problems, the kernel is not likely to fix them. Tuning the kernel can produce some advantages, but what's likely using your resources is the services and programs that run in the background. You can probably get faster by either disabling them in the distro you're already using, or by switching to a lighter one that doesn't have as many included by default. You could start analyzing what's using your resources with resource-monitoring tools, and by checking which services are enabled in systemd or init (also remembering to check tasks they might run, such as cron jobs).
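
      As a starting point, a quick Python sketch along these lines (using the third-party psutil package, so "pip install psutil" first - this is illustrative, not distro-specific advice) will show the top RAM users:

      import psutil  # third-party: pip install psutil

      procs = []
      for p in psutil.process_iter(["pid", "name", "memory_info"]):
          mem = p.info["memory_info"]
          if mem is not None:  # None if we weren't allowed to read it
              procs.append((mem.rss, p.info["pid"], p.info["name"]))

      # The ten largest resident-set sizes, biggest first.
      for rss, pid, name in sorted(procs, reverse=True)[:10]:
          print(f"{rss // (1024 * 1024):5d} MiB  pid {pid:>6}  {name}")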

    3. Anonymous Coward
      Anonymous Coward

      Re: Asking out of complete ignorance...

      Many thanks for the replies! I shall not concern myself with the stuff I mentioned, then! :-)

  13. jonfr

    Older CPU

    Anything below the Pentium D (2005) should be removed from the Linux kernel. It only takes up space and is not used today, to the best of my knowledge (but the world is a large place and all that, so I might be wrong and not wrong at the same time). Same goes for AMD: anything older than 2005-2010 should be removed from the kernel.

    https://en.wikipedia.org/wiki/List_of_Intel_processors

    https://en.wikipedia.org/wiki/List_of_AMD_processors

  14. trindflo Bronze badge

    Teaching tools

    The only use I see for that architecture these days is as a teaching tool. Use FreeDOS and maybe a DOS extender to demonstrate how to do certain things. Just looking at the difference between a PIC and an APIC reveals that interrupts stop everything on a single-core processor. Linus is right to say so, and has maybe been overly patient in waiting so long to say it.

  15. gnasher729 Silver badge

    Supporting the 80486 takes developer time. And what if a change in the kernel required a 486-specific change - is there anyone who would test it? Or even just try it out? What is 486 support worth if nobody has even tried whether it works? I think Google is following Apple, with a long delay, in dropping support for 32-bit ARM. (And Apple has even removed the ability to run 32-bit ARM code from all its processors.)

  16. Fading
    Linux

    Erm, isn't Intel's CSME running on a 486?

    Here we go - https://www.theregister.com/2020/03/05/unfixable_intel_csme_flaw/

    I wonder if Minix will also drop 486 support.........

  17. Luiz Abdala
    Joke

    Airplanes use old cpus, right?

    I heard a Boeing 737 uses a 286-era CPU - loosely comparable to a 286, not an Intel design. But still.

    Just because it went out of fashion doesn't mean nothing is using it.

    And I think Boeing has it covered, anyway.

    And they are not using Linux, for sure.

    I hope.

    1. fredesmite2

      Re: Airplanes use old cpus, right?

      Are they running 6.0-rc kernels ?

      No

      Likely an old, abandoned branch - because it's the only one that works.

  18. Bob H

    FDIV

    It'll be great to see when the kernel no longer looks for the Pentium FDIV bug!
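
    For anyone who never saw it: the classic detection divides two magic constants and checks the residue, which is ~0 on a correct FPU and 256 on an affected Pentium. A sketch of the idea in Python (which will of course always pass on anything that can run Python - the kernel does the equivalent in its x86 setup code):

    x = 4195835.0
    y = 3145727.0
    residue = x - (x / y) * y  # ~0 on a correct FPU, 256 on a flawed Pentium
    print("FDIV bug!" if abs(residue) > 1.0 else "FPU looks fine")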

  19. BPontius

    486 processors are 33 years old; it is 15-20 years past time to dump support. Ridiculous!! Just as with Windows: people want Microsoft to continue supporting 20+ year-old protocols and hardware, yet complain that Windows is insecure while refusing to apply updates. Demanding continued support for old protocols like SMBv1, NetBIOS and Telnet. People won't move on from Windows 7 or XP (13 and 21 years old) and cling to hardware, drivers and software from that era even after upgrading to Windows 10.

    I realize people have financial reasons and other issues for not upgrading but to insist on continued support for such niche markets is unreasonable.

    1. anonymous boring coward Silver badge

      You think 15 years is a long time?

      You must be quite young.

  20. fredesmite2

    kill it

    A lot of stale garbage in the kernel that is no longer needed or used

    Who makes an i486 server?

  21. anonymous boring coward Silver badge

    I'm not that into the lowest level stuff on CPU programming any longer, so here's an actual question:

    Why is 486 an "architecture"? I thought 386, 486, Pentium, Whatever were basically the same architecture with some miscellaneous instruction set extensions?

    Did I miss something drastic happening?
