Intel mulls cutting ties to 16 and 32-bit support

Chip giant Intel has proposed something rather unusual: a potential simplification of the x86 architecture by removing old features. A technical note on Intel's developer blog puts forward a rather radical change to the x86-64 architecture: a new x86S architecture, which simplifies the design of future processors and a PC's …

  1. John Robson Silver badge

    Backwards compatibility

    The anchor that either gives reliability or drags everything down to the lowest common denominator...

    Modern hypervisor tech means it's probably time for 32 bit native hardware to be put out to pasture - the ability to pass through devices properly really helps

    1. Anonymous Coward
      Anonymous Coward

      Stupid question, but...

      I could understand the issue with keeping all that legacy cruft if "x86" chips still had to implement all that native x86-descended architecture in hardware.

      But that hasn't been the case for a *long* time now, not since around the time the Pentium Pro and Pentium II came out. Modern "x86" processors are really an x86-to-microcode translation layer around a RISC-like (*) core.

      I'd have thought that made it *much* less of an issue to implement in "software" (or rather, microcode). It's not like they should need to change the design of the chip optimised for modern 64-bit apps, or even need to devote time or resources to making sure it runs well. All that matters is that it works at all- it's going to run such ancient code orders of magnitude faster than the machines it was originally designed for regardless.

      (*) Okay, some people have disputed that they're RISC-like at all (**), but the point here is that the internal architecture *isn't* native x86 or anything close to it

      (**) Besides which, since the internal architecture and implementation isn't- or shouldn't be- user-accessible under normal use, it's quite possible that the internal design could have changed completely between different families of chips, so long as it retains external x86 compatibility.

      1. Mostly Irrelevant

        Re: Stupid question, but...

        Less complexity for those instruction decoders. 64-bit software has effectively retired entire classes of instructions, like the old x87 floating-point stack, in favour of SSE2. At this point, they might as well: you functionally can't run older OSes on current hardware anyway - Windows versions before 11 don't support Intel's asymmetric-core chips (12th-gen Core and later), as one major example.

        1. Justthefacts Silver badge

          Re: Stupid question, but...

          I wonder if this is actually about the implications for test, rather than design considerations themselves. Having all these modes must be multiplicative on the amount of testing that needs doing. When you're simulating tens of millions of CPU gates in software at only tens of instructions per second, adding hundreds or thousands of instruction-cycles of minimal boot to every single test, before you can do anything at all, is a real problem.

          The question is always “why now?”. I note that Intel had a disastrous roll-out of their 10nm designs, reaching spin #30 before they had fully functional silicon. Adding just a few extra days of test *per spin* probably really poked them in the eye. They’ve likely had a real root-and-branch analysis of “what do we need to do to reduce and speed up test during tapeout, from functional all the way through post place-and-route?”. This one probably had them shouting “why the hell are we still doing this?”

  2. abend0c4 Silver badge

    Spite, I tell you, spite!

    One of the first things I had published was a review of the 8086 in which I complained about its awkward architecture. Clearly, Intel have held a grudge ever since and have deliberately pursued me with this monster throughout my career, constantly prolonging its life without any other apparent reason. Now I've retired, they suddenly seem to have had a change of heart. QED.

    Of course, some may say it's not about me at all and that not even the power of marketing can keep inferior technology alive for more than 45 years, but I know what I choose to believe.

    1. Anonymous Coward
      Anonymous Coward

      Re: Spite, I tell you, spite!

      @abend0c4

      "Of course, some may say it's not about me at all and that not even the power of marketing can keep inferior technology alive for more than 45 years, but I know what I choose to believe".

      Huh, some may say that, but of course it is spite. They are wrong and probably Millennials, Goths, or Tarot card readers, or teenagers. And what do they know?

      And yes, I know I was a teenager once upon a very long time ago, but I was a perfectly good teenager. A swot really. Hmmm, could that thumping sound I can hear be my Mum and Dad spinning, at quite a high rate of revolutions, in their graves?

      1. heyrick Silver badge
        Happy

        Re: Spite, I tell you, spite!

        Hey, leave Goths out of it! The segmented memory model of the early x86 is a Lovecraftian horror that even us lovers of darkness, bats, and crypts try to avoid.

        1. Mike 137 Silver badge

          Re: Spite, I tell you, spite!

          "The segmented memory models of the early x86 is a Lovecraftian horror"

          If I remember right, the segmented model was designed to support multi-user systems (one segment per user). For that, it made perfect sense and was a big improvement. I do remember with some horror a multi-user office system based on a single Z80. Segmentation just persisted beyond that need and became a nuisance when the entire 640k was required for one user.
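
          (A minimal C sketch of the arithmetic behind that model, for anyone who escaped it: the 20-bit real-mode physical address is segment × 16 + offset, so the same byte can have up to 4,096 different segment:offset names.)

              /* Real-mode 8086 address arithmetic: a 16-bit segment and a
               * 16-bit offset combine into a 20-bit physical address.
               */
              #include <stdio.h>
              #include <stdint.h>

              static uint32_t phys(uint16_t seg, uint16_t off)
              {
                  return ((uint32_t)seg << 4) + off;  /* segment * 16 + offset */
              }

              int main(void)
              {
                  /* Two different seg:off pairs naming physical 0x12345. */
                  printf("0x%05X\n", phys(0x1234, 0x0005));
                  printf("0x%05X\n", phys(0x1000, 0x2345));
                  return 0;
              }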

          1. mtp

            Re: Spite, I tell you, spite!

            I have some shuddering flashbacks to near and far pointers and then I drifted dangerously close to remembering EMM386 before I managed to save myself.

            1. Anonymous Coward
              Anonymous Coward

              Re: Spite, I tell you, spite!

              I remember tiny/small/compact/large/huge memory models. Those were the days.

              I also remember working with an early 386 DOS Extender. It had an interesting feature in that if you called a virtual function on a NULL pointer, it would jump to CS:00000000 - which was the top of main. So your program would run along happily until it mysteriously started again.

        2. Francis Boyle Silver badge

          Nonsense

          Even a Lovecraftian horror has a certain elegance to it.

        3. phuzz Silver badge
          Unhappy

          Re: Spite, I tell you, spite!

          Hey, leave Goths out of it!

          Fair's fair, they did sack Rome.

    2. EricB123 Bronze badge

      Re: Spite, I tell you, spite!

      Damn, you took the words out of my very mouth!

      Oh, I majored in VLSI at university. NOW it would have been useful. Anyone interested in a 66-year-old EE?

    3. Mike Lewis

      Re: Spite, I tell you, spite!

      The infamous demented register architecture.

      1. John Brown (no body) Silver badge
        Happy

        Re: Spite, I tell you, spite!

        That's no way to talk about this august publication!!!

  3. MacroRodent
    Boffin

    8080 and 8086

    The 8086 was never binary compatible with the 8080/8085, but you could map 8080 instructions and registers more or less 1-1 to 8086. There were translators that would take 8080 assembler and produce the corresponding 8086 assembler. Of course such programs were limited to using only a single 64k segment, and you had to convert the OS interface. However, for the most common porting case, the original MS-DOS API was so close to CP/M that porting was easy.
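
    (For illustration, a toy C sketch of the register substitution such a translator performed. The mapping below is the standard documented one - BC→CX, DE→DX, HL→BX, so the 8080 memory operand M becomes [BX] - but the translate() helper itself is hypothetical and handles bare register names only.)

        /* Toy sketch of an 8080-to-8086 register mapping table, in the
         * spirit of Intel's source translators: A->AL, B->CH, C->CL,
         * D->DH, E->DL, H->BH, L->BL, and memory-via-HL -> [BX].
         */
        #include <stdio.h>
        #include <string.h>

        static const char *map[][2] = {
            {"A", "AL"}, {"B", "CH"}, {"C", "CL"}, {"D", "DH"},
            {"E", "DL"}, {"H", "BH"}, {"L", "BL"}, {"M", "[BX]"},
            {"SP", "SP"},
        };

        static const char *translate(const char *reg8080)
        {
            for (size_t i = 0; i < sizeof map / sizeof map[0]; i++)
                if (strcmp(map[i][0], reg8080) == 0)
                    return map[i][1];
            return "?";
        }

        int main(void)
        {
            /* 8080 "MOV A,M" comes out as 8086 "MOV AL,[BX]". */
            printf("MOV %s,%s\n", translate("A"), translate("M"));
            return 0;
        }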

    1. Simon Harris

      Re: 8080 and 8086

      Interestingly, NEC's V20/V30 drop-in replacements for the 8088/8086, as well as slightly enhancing the 8086 instruction set, also included an 8080 mode to run 8080 code natively.

      1. MacroRodent

        Re: 8080 and 8086

        Yes. I swapped the 8088 for a V20 in my PC/XT clone. It was a good and cheap upgrade, because the NEC V20 executed some instructions faster than an 8088. Multiplication in particular was quite a bit faster, which made the chip appear better than it really was in some benchmarks. I think the extended x86 instructions were the same as in the 80186, like ENTER/LEAVE. One could use the 80186 target option in the Microsoft C compiler and get a slightly smaller executable.

        1. John Brown (no body) Silver badge

          Re: 8080 and 8086

          Same here. It was still early enough in the "PC" market we know now that having CP/M running on a PC clone, using older CP/M software like WordStar etc., was still a viable choice, especially if you already owned a library of CP/M programs.

    2. Arthur the cat Silver badge

      Re: 8080 and 8086

      The 8086 was never binary compatible with the 8080/8085

      [All from old and failing memory, may be wrong.]

      However the 8086 could use 8085 peripheral chips, which hardware designers were familiar with. Which was probably a good job because Intel were rather slow at bringing out peripheral chips actually designed for the 8086.

      1. Simon Harris

        Re: 8080 and 8086

        I remember the 68000 did the same thing. As well as the standard clock signal there was a slow clock output at 1/10 the CPU clock speed, typically 0.8MHz for an 8MHz CPU, which was used to feed the E input of 6800 family IO devices as Motorola were a bit slow getting 68000 IO devices out. If I remember correctly, there were a few 68000 opcodes specifically designed to access 8 bit 6800 family devices that only used half the data bus.

        1. CowHorseFrog Silver badge

          Re: 8080 and 8086

          Do you mean movep ?

          I remember many years of playing with an Action Replay and never seeing a MOVEP anywhere.

          1. Simon Harris

            Re: 8080 and 8086

            That’s the one!

  4. mark l 2 Silver badge

    Let's remember that had Itanium become the successful next-generation CPU architecture that Intel had envisioned back when it was launched, we might all be using PCs with IA-64 chips in them by now.

    The x86 line might have fizzled out and only exist in a few chips for legacy industries, or through some sort of emulation layer in the OS.

    1. Ken Hagan Gold badge

      So both x86 and x64 exist because Intel tried to reinvent the universe but only delivered it too slow, too expensive and too late.

      1. Anonymous Coward
        Anonymous Coward

        IA-64 wasn't a complete failure in the beginning, so its eventual demise could be because AMD64 had backward compatibility (which loops back to the first post's lowest-common-denominator reference).

        1. Michael Wojcik Silver badge

          IA-64 wasn't a complete failure in the beginning because of marketing. It was a lousy CPU which was dreadful to use, particularly if you were trying to write a compiler backend for it, or (god help you) debug something in assembly.

          Even the things that seemed like a good idea in theory – hey, a trap representation for integer registers! – were terrible in practice because they weren't handled well by the OSes that ran on IA-64. Which was mostly HP-UX.

          1. Roo
            Windows

            IA-64 was always DoA.

            This reply isn't really aimed at you Michael as I'm fairly sure you know all this already. :)

            The criticism of IA-64 that stuck was the fact it promoted static optimization over dynamic (on-chip) optimization - at a point in history where silicon economics made dynamic (on-chip) optimization tricks (already pioneered on big iron in the 60s & 70s) viable. It was a very backward design that might have done well in the early 80s, but made zero sense by the late 90s with transistor budgets skyrocketing.

      2. Mostly Irrelevant

        There was also the issue of the IA64 architecture not being too great for desktop workloads. Intel was on a super-pipelined chip design kick, which is the reason Pentium 4s perform so badly for their clock speed. Long pipelines make branching computationally expensive, which is one of the reasons Itaniums were often used in supercomputers for scientific problems with large datasets. The design had its pluses but didn't make that much sense for desktops.

        1. Roo
          Windows

          IA64 didn't exactly set the high end alight either, and let's face it big memory was (and remains) not a huge technical deal and a niche market. AFAICT they were just trying to keep the old POWER vs PARISC fight going because they had no other market where they could compete.

    2. Martin Howe

      Of course, let's not let Intel off the hook for burying AXP in favour of Itanic; AXP + the JIT translator that Microsoft put in NT/2K for AXP would have been a lot better for everyday workloads and given people a good path to the future. And yes I have run Doom on AXP, but in 2022 when it should have been in 1995 :)

      1. Anonymous Coward
        Anonymous Coward

        Let's not let Intel off the hook for burying AXP in favour of Itanic

        I'm much more pissed about HP burying PA-RISC in favor of Itanium, which in my view was a bigger loss than AXP.

        "AXP + the JIT translator that Microsoft put in NT/2K for AXP would have been a lot better for everyday workloads and given people a good path to the future."

        If you're talking about FX!32, that didn't come from Microsoft but from DEC. And it had a number of severe limitations which made it only really useful for undemanding x86 apps like MS Office (or the original DOOM).

        We had a range of Alphas back in the late '90s. Native code performed somewhat better (on average around 30%) on EV6 than on a Pentium II, but then those P2 HP Kayak workstations cost a third of what the DEC XP900 Alphas we had did.

        As far as WindowsNT was concerned, pretty much anything x86 ran faster on the Kayak than under FX!32 on the Alpha. If it even ran under FX!32, that is.

    3. NeilPost Silver badge

      … or they could have junked Itanium for Alpha 64 as that was already working, and well.

      1. bazza Silver badge

        Ah, there's nothing like "not invented here" to get in the way of sensible ideas...

        It was the same with Arm. I'm sure you recall that Intel inherited a licence and design for that too, and ignored it. I think they've been paying the price ever since.

    4. Charlie Clark Silver badge

      Itanium had one real aim: kill DEC's Alpha chips and it succeeded in this. Other than that, Intel knew better than anyone else that keeping x86 largely the way it was, was the best way to enforce lock-in. Switching architectures imposed huge costs on developers and users, who were supposed to buy the same software twice. Even now, with heaps of excellent compilers, it's still by far the dominant desktop and server chip because migration on Windows is not entirely possible, is expensive, and not necessarily faster.

      Microsoft is preparing for an x86_64-only world, with probably only the huge investments that companies have made around the 32-bit version of MS Excel holding it back. That, and people still wanting to buy their own machines rather than renting them from Microsoft.

      1. Anonymous Coward
        Anonymous Coward

        Itanium had one real aim: kill DEC's Alpha chips and it succeeded in this.

        >> Itanium had one real aim: kill DEC's Alpha chips and it succeeded in this.

        Intel didn't need to kill HP's Alpha chips; Alpha was already a zombie after Compaq bought DEC and before both became part of HP.

        And it's not just AXP that went extinct in favor of x86/x64; the same is true for MIPS, PowerPC and PA-RISC.

        >> Other than that, Intel knew better than anyone else that keeping x86 largely the way it was, was the best way to enforce lock-in.

        You mean the lock-in that comes through x86, an architecture for which a number of 3rd parties (like AMD) hold licences so they can make their own x86 processors, versus the open nature of a processor design that is wholly owned and controlled by Intel (Alpha AXP)???

        >> Switching architectures imposed huge costs on developers and users, who were supposed to buy the same software twice. Even now, with heaps of excellent compilers, it's still by far the dominant desktop and server chip because migration on Windows is not entirely possible, is expensive, and not necessarily faster.

        That sentence doesn't make sense. Switching architectures can be difficult but for a lot of modern software it's not a massive issue (just look at the number of architectures Linux supports, including esoteric stuff like S390).

        Modern Windows is based on WindowsNT which was designed around multiple platforms (NT supported x86, Alpha AXP, MIPS and PowerPC), and later included support for IA64 (Itanium) and ARM as well. And Windows ARM shows that, in fact, migration isn't a huge deal, as it already comes with a very decent x86/x64 emulation layer so the majority of x86/x64 programs run just fine.

        >> Microsoft is preparing for an x86_64 only world with probably only the huge investments that companies have made around 32-bit version of MS Excel holding it back. That, and people still wanting to buy their own machines rather than renting them from Microsoft.

        Microsoft doesn't give a damn about the architecture (aside from the fact that Azure already runs both x86 and ARM), because for them it doesn't matter. As long as businesses voluntarily enslave themselves to the Microsoft ecosystem of applications and services they are ensured a steady revenue stream.

        1. Charlie Clark Silver badge
          FAIL

          Re: Itanium had one real aim: kill DEC's Alpha chips and it succeeded in this.

          DEC being bought by Compaq was partly because of the failure of the Alpha. While NT did run on Alphas, Microsoft wasn't keen on the work needed to maintain it, and developers couldn't simply recompile for Alpha or ship fat binaries. NT was supposed to provide the hardware-agnostic base so that the chip architecture wouldn't affect applications, but by NT 4 this had been ditched to make x86 run faster (and less securely). Windows on ARM only looks okay because modern chips are so fast; the underlying problem of needing to compile GUI apps for specific architectures has not changed.

      2. John Brown (no body) Silver badge

        "Microsoft is preparing for an x86_64 only world with probably only the huge investments that companies have made around 32-bit version of MS Excel holding it back."

        Yeah, I wonder how closely Intel and MS are co-operating on this? Can even Win11 and enough relevant drivers work in a pure 64-bit only environment? I bet there's still loads of legacy code still in there.

  5. Steve Channell

    Eliminating support for ring 3 I/O port accesses

    While application access to ports is generally a security vulnerability, it might be used by some implementations of user-mode TCP to eliminate buffering.
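
    (For a concrete picture of what ring-3 port access looks like today, here is a minimal sketch using Linux's ioperm(2) and glibc's inb()/outb() helpers - x86-specific, and it needs root or CAP_SYS_RAWIO. Port 0x80, the POST diagnostic port, is chosen only as a traditionally harmless target; what it reads back is chipset-dependent. This is exactly the user-mode path x86S proposes to remove.)

        /* Minimal sketch: ring-3 port I/O on Linux. ioperm() asks the
         * kernel to open a port range to this process; after that,
         * inb()/outb() run in ring 3 with no syscall per access.
         */
        #include <stdio.h>
        #include <sys/io.h>   /* ioperm, inb, outb (x86 Linux, glibc) */

        int main(void)
        {
            if (ioperm(0x80, 1, 1) != 0) {     /* enable port 0x80 only */
                perror("ioperm");
                return 1;
            }
            outb(0x42, 0x80);                  /* write direct from user mode */
            printf("port 0x80 reads back 0x%02x\n", inb(0x80));
            ioperm(0x80, 1, 0);                /* drop the access again */
            return 0;
        }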

  6. Chris Evans

    ARM Comparison?

    I'd be interested to know what similar changes ARM have made or proposed to their architecture.

    1. heyrick Silver badge

      Re: ARM Comparison?

      The 26 bit shared PC+PSR model was ditched ages ago.

      It looks like 32 bit is on the way out (and some chips only support it in user mode anyway).

    2. diodesign (Written by Reg staff) Silver badge

      Re: ARM Comparison?

      Arm is phasing out 32-bit support gradually from its higher-end CPU cores, eg the Cortex-X2 for personal computing. 32-bit will most likely live on in the lower end, but at the high end, today's Arm engineers can't wait to ditch it and simplify the architecture.

      C.

      1. J.G.Harston Silver badge

        Re: ARM Comparison?

        I've examined the ARM64 architecture in the context of writing an assembler for it, and I can't say that ARM64 is simplified compared to ARM32.

        1. Anonymous Coward
          Anonymous Coward

          Re: ARM Comparison?

          It's simpler if you don't have to support both.

    3. DS999 Silver badge

      Re: ARM Comparison?

      When AMD created their 64 bit ISA (AArch64), spec'd in ARMv8, they allowed implementations to choose whether or not to also support their 32 bit ISA (AArch32) as well as their condensed ISA "Thumb" (which was only used in memory-starved embedded roles, never in Apple or Android smartphones), and ARMv9 essentially deprecated AArch32, so it is unlikely to be supported at all with ARMv10.

      That allowed Apple 4-5 years ago to drop AArch32 entirely and have 64 bit only CPUs. Android is in the midst of doing a similar transition now, and ARM's own implementations are starting to drop 32 bit support in some cores as diodesign notes.

      Even if this new ISA is adopted by Intel & AMD, it doesn't eliminate 32 bit support like Apple has and the rest of the non-embedded ARM world is doing right now; it just reduces its scope to user mode only. Theoretically they could have dropped 32 bit user mode as well and relied on JIT translation to handle running 32 bit x86 binaries on the now 64-bit-only Windows/Linux operating systems, but it looks like they were afraid that if they took things too far, people wouldn't accept this new ISA. The result being that they will have to take another dramatic step like this in the future to finally drop 32 bit support for real.

      1. NeilPost Silver badge

        Re: ARM Comparison?

        Don’t you mean ARM created ?

  7. IGnatius T Foobar !

    It's about time.

    Pretty much everyone wants a machine that uses UEFI that boots straight into 64-bit long mode, and then runs an operating system with a 64-bit kernel. As long as 32-bit software can run once the operating system has booted, no one is going to lose any sleep over it. Linux users moved on a long time ago, no one is running MS-DOS on bare metal anymore, and even Windows stopped being able to run Win16 binaries quite some time ago.

    It's time to face the fact that the 8086 architecture simply wasn't elegant enough to maintain compatibility throughout the ages in the way that, for example, the IBM 360 architecture was. Let it go.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: It's about time.

      [Author here]

      > Pretty much everyone wants a machine that uses UEFI that boots straight into 64-bit long mode

      This is true, insofar as it goes. However, in order to get into 64-bit long mode, the machine starts in 16-bit real mode, transitions to 16-bit protected mode, transitions to 32-bit protected mode, and thence into 64-bit mode.

      To the best of my knowledge, on current processors, it's the only way to get there.

      And while it is true that it hasn't quite equalled the IBM 360's longevity yet, x86 has come closer than anything else in history.
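
      (For what it's worth, user code can at least ask whether the CPU offers long mode at all. A minimal sketch, assuming GCC or Clang on x86, using <cpuid.h>: the LM flag is bit 29 of EDX in CPUID leaf 0x80000001. Whether the firmware and OS have actually climbed through those modes into long mode is a separate question.)

          /* Minimal sketch: query CPUID leaf 0x80000001 for the LM
           * (long mode) capability bit - EDX bit 29. This reports
           * whether the CPU *can* reach 64-bit long mode, not whether
           * the OS has switched into it.
           */
          #include <stdio.h>
          #include <cpuid.h>   /* GCC/Clang builtin wrapper */

          int main(void)
          {
              unsigned int eax, ebx, ecx, edx;

              if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
                  puts("extended CPUID leaf not available");
                  return 1;
              }
              printf("long mode supported: %s\n",
                     (edx & (1u << 29)) ? "yes" : "no");
              return 0;
          }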

      1. Michael Wojcik Silver badge

        Re: It's about time.

        x86 has come closer than anything else in history

        Has it? Last I looked, there were still a lot of Z80 cores inside little dedicated-purpose embedded chips, for things like kitchen-appliance controls. That era may have passed now too, but it was still the case circa 2010, which would give the 8080 architecture a good 35+ years. Wikipedia says there's still a Z80 variant in TI calculators as of 2015. True, x86 is only 5 years younger, so it won't take x86 long to surpass 8080 after the last version of the latter ships, but for the moment I believe it's still telling its 16/32/64-bit younger sibling to get off its lawn.

        (Of course you could pretend that 8080 is part of the x86 family, if you really wanted to loosen your definitions. And you could pretend the ROMP is part of the POWER family, which makes it more or less contemporary with x86. In any case, 360 beats them all handily, with a solid decade on 8080.)

        1. Binraider Silver badge

          Re: It's about time.

          Z80 is not only still used, but still in production!

          It's not without idiosyncrasies, but a (simple) architecture with adequate power for control tasks that is essentially unhackable short of directly connecting to the hardware is an incredibly useful thing.

          The failures arise when you try to do more with something than it was intended to do. Enter x86.

        2. heyrick Silver badge

          Re: It's about time.

          It's a little younger than the Z80, but the 8051 is still around in plenty of little embedded devices (bread makers, microwave ovens, etc).

          Nowadays you can get them with DSPs and such bolted on. Which isn't quite as weird as it sounds given the ST1 range of audio players was a Z80 core with a 24 bit DSP for handling MP3s.

    2. DuncanLarge Silver badge

      Re: It's about time.

      > Pretty much everyone wants a machine that uses UEFI

      Not me.

      All my machines use BIOS or UEFI with CSM and will do for many more decades.

      I'm waiting for UEFI to stabilise enough to run something other than Windows without caveats/hoops/patches. Once that happens I may boot as UEFI.

      Plus, a load of my hardware must use a BIOS or CSM to load in its own BIOSes, such as my SCSI cards.

      1. phuzz Silver badge

        Re: It's about time.

        Most Linux distros install just fine with UEFI, no jumping through hoops required, and that's been the case for years.

        (If you wanted yet another alternative, technically OSX uses EFI, and if you don't mind some hoop jumping, even Haiku does UEFI booting)

  8. An_Old_Dog Silver badge

    Ramifications

    1. Killing off older hardware.

    2. Killing off older software.

    3. Increasing the amount of hardware "churn" (from the factory, to the end user, to the landfill).

    4. Boosting the "Well, of course you have to have (Microsoft) Windows on a computer!" mentality.

    Re: 1: Older hardware frequently has firmware updates and profiles which can only be loaded using a manufacturer's program, which runs only under some 16-bit operating system: MS-DOS, PC-DOS, FreeDOS, Win3.x, or Win9x. Seagate provided hard drive firmware updates via FreeDOS. Even Microsoft's "Windows Memory Diagnostic" program does not run under Windows!

    Re: 2: Older industrial, machine, and application programs run only under (some form of) DOS.

    Re: 3: Techies (used to) have all sorts of DOS-based and bare-metal diagnostic and testing programs. If PCs will no longer boot into a 16-bit mode, these programs won't work. Many of these programs were hobbyist- or small-company-written, and their functionality will not be effectively replaced by the larger software houses who "do Windows" ... because there is insufficient paying market for it. Without those tools, techies will have to spend more time diagnosing and testing things, which means a higher (potential) bill for the end user, which means things which previously were economical to repair no longer are.

    Re: 4: With older OSes no longer able to boot, they will be less seen, less written about, and less talked about. Removing the visibility of alternative IBM-PC-compatible operating systems makes it less likely people will even think about the possibility, let alone desirability, of non-MS-Windows operating systems.

    Liam Proven dismisses the need for booting into 16-bit mode by writing, "UEFI has already effectively eliminated the ability to boot 16-bit operating systems on bare metal, and barely anybody noticed." What he ignores is that the first thing a tech does with a PC they are diagnosing hardware issues on is to go into the BIOS and enable Compatibility Support Mode, so they can boot their super-duper diagnostics-filled flash drive ... and then changes it back to UEFI when they need to boot MS-Windows.

    1. DS999 Silver badge

      Re: Ramifications

      How does this "kill off older hardware" or "kill off older software"? Intel still sells 80286 CPUs, because certain customers (military, industrial and the like) require very long time periods during which they can get replacement parts. You will still be able to buy x86 CPUs like today's 30 years from now, for the tiny niche that needs them. Why drag around all that cruft for the 0.00001% of people who need to boot a 16 or 32 bit OS and so forth? It is stupid.

      Nobody is "booting DOS" for diagnostics in 2023, unless they are so backwards they haven't learned anything new in the past 20 years and haven't heard of Linux.

      1. John Brown (no body) Silver badge

        Re: Ramifications

        "Nobody is "booting DOS" for diagnostics in 2023, unless they are so backwards they haven't learned anything new in the past 20 years and haven't heard of Linux."

        OEM diagnostics and set-up tools from both Lenovo[*] and HP (at least) still do. Well, to be fair, I've not delved into either deeply enough to say what the OS is underneath, but it's text-mode, command-line, and doesn't show any obvious Linuxisms when booting. Most likely it's FreeDOS under the hood. Likewise, Yumi for a multiboot pendrive is still the easiest and simplest to use, IMO. UEFI Yumi still seems experimental, and Ventoy still seems a bit primitive, if usable.

        [*] The OEM setup tool for "branding" a factory-new motherboard runs on some DOS-a-like anyway. Lenovo's bootable diagnostics run on Linux.

    2. doublelayer Silver badge

      Re: Ramifications

      "the first thing a tech does with a PC they are diagnosing hardware issues on is to go into the BIOS and enable Compatibility Support Mode, so they can boot their super-duper diagnostics-filled flash drive ... and then changes it back to UEFI when they need to boot MS-Windows."

      What OS do you have on your flash drive? I have Linux on mine. It boots into an Arch environment with all the utilities I remembered to install already present. For a while, I used the 32-bit version which UEFI could boot just fine in case I used it on an old computer. Then I tried to install some new packages and realized that hanging back had caused some problems installing and updating, so I had to choose between updating that image more frequently or just using the 64-bit version. I chose the latter and since doing so, I've found zero computers that I needed to use it on which didn't accept it.

      "Older hardware frequently have firmware updates and profiles which can only be loaded using a manufacturer's program, which runs only under some 16-bit operating system"

      You're already out of luck on that one. I can't boot those operating systems natively on the newest hardware anyway. If I have such a device, I'd either get an old computer whose entire purpose would be running that software, or I'd see if I could pass enough stuff through to a VM to make it do that. Also, while I'm sure there's plenty of hardware with such limitations out there, I question whether it's really getting firmware updates that still need a 16-bit uploader. If the company is really still updating them, that company can make a more modern firmware uploader.

      "Older industrial, machine, and application programs run only under (some form of) DOS.": Again, I know it's true, but those machines tend to include their own computer. They don't just let me slot in a new computer, which is fine because the new computer can't run DOS already and probably lacks the interfaces and/or custom chips they've built into theirs.

      "Techies (used to) have all sorts of DOS-based and bare-metal based diagnostic and testing programs. [...] Many of these programs were hobbyist-or small-company-written, and their functionality will not be effectively replaced by the larger software houses"

      Do you have a single example? I knew a few of those tools, and they have either been replaced by something open source or they no longer do anything particularly useful because, when the author of the 16-bit version stopped working on it, it stopped being useful for problems that happened on newer hardware.

      1. An_Old_Dog Silver badge

        Re: Ramifications

        @doublelayer: I don't have just one OS on my flash drive. What I have is GRUB4DOS, which boots various Linux, DOS, Windows, and no-OS images. In no particular order: Clonezilla, Debian, Hiren's Boot CD, Ultimate Edition (Ubuntu-based), Knoppix, System Rescue CD, and Ultimate Boot CD for DOS, which contains mostly DOS-based programs and images, and some Linux images. I also have some bare-metal images: Memory Test 86 Plus, Memory Test 86, Windows Memory Diagnostic, and various boot loaders. UBCD contains many DOS-based diagnostics and configuration programs. I have some DOS images (MS-DOS, PC-DOS, and FreeDOS). My most-used DOS program is "Ranish Partition Manager v2.43b (Mutha)". Another useful one is Joan Riff's (poorly-named) "DIAGS.EXE", which is a printer/printer-port analyzer. A second flash drive has a Windows 8 installer image, which I use for its pre-installation utilities (DISKPART, etc.).

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Ramifications

          [Author here]

          You really should try Ventoy. It can do this, and it's a great deal easier. And it works in both BIOS and EFI boot modes, so a single key will work on both a BIOS PC and an EFI PC, including Intel-based Macs.

          And yes, it can boot DOS.

    3. Liam Proven (Written by Reg staff) Silver badge

      Re: Ramifications

      [Author here]

      > Older hardware frequently have firmware updates and profiles which can only be loaded using a manufacturer's program, which runs only under some 16-bit operating system

      It certainly used to be the case, until not all that long ago. However, this is increasingly untrue of any x86 hardware that is new enough to still be under guarantee.

      The reason is quite simple: to carry the Microsoft Windows compatible branding, a PC has to support booting in Secure Boot mode. Secure Boot only works in EFI Boot mode. You can't have Secure Boot and have legacy boot enabled at the same time.

      So, in order to support Secure Boot, which you need to do in order to get that Windows sticker and the financial incentives that go with it, there is strong pressure on manufacturers to only support EFI boot mode by default. That means that the BIOS compatibility mode of EFI, usually known as CSM (for Compatibility Support Module, I believe), is increasingly disappearing. None of the recent Lenovo machines which I have reviewed for the Register support it any longer, for example.

      As it happens, I have a personal hobby side project which involves booting MS-DOS from USB keys, and it saddens me that it no longer works on most modern hardware. However, this is just one of the many things about the modern PC and computer industry in general which saddens me. The sad fact is that legacy or BIOS boot is going away, and is already missing from many machines manufactured since the late 2010s.

      This is also why PCs increasingly support Windows-based firmware updates, and in turn, that has the beneficial side effect for Linux users that Linux-based firmware updates become much easier. The GNOME firmware tool supports updating not only system firmware, but that of SSDs, hard disk drives, network controllers, and so on.

      So while it is true that until just a few years ago it was useful to be able to boot many PCs from a DOS USB key in order to update their firmware, that is becoming rare: on more current PCs this technique no longer works at all. It's 2023. Any PC that was manufactured before the 2020s is now probably out of warranty. And that probably means that any PC which supports booting from DOS is now obsolete from the point of view of the accounts department. It's been depreciated, and if you ask nicely, it's the kind of kit that you can get to keep when you leave the company. So that does mean there is probably more of it out there in the secondhand channel, but the secondhand channel for PCs is not a big one, because most people are unable to judge if the kit is in good condition and still works and so forth.

    4. DuncanLarge Silver badge

      Re: Ramifications

      I totally agree. There is a ton of legacy stuff out there, including where I work, and it simply is not possible to NOT support 32-bit; heck, we probably have some 16-bit stuff about as well.

      It is legacy but required. It can't/won't easily be re-written without significant investment. It's taken the best part of the last 10 years to even get started re-writing just one bit of software we have here, and that's because the original coders are long gone. It works just fine, but we have an issue that it needs a rewrite so we can move it onto another OS - but that will still be 32-bit.

      > Re: 3: Techies (used to) have all sorts of DOS-based and bare-metal based diagnostic and testing programs. If PCs will no longer boot into a 16-bit mode, these programs won't work. Many of these programs were hobbyist-or small-company-written, and their functionality will not be effectively replaced by the larger software houses who "do Windows" ... because there is insufficient paying market for it. Without those tools, techies will have to spend more time diagnosing and testing things, which means a higher (potential) bill for the end-user, which means things which previously were economical to repair no longer are.

      Don't worry. Us techies hoard the hardware and software needed to run that stuff. When all the new kids/IT apprentices are trying to wrap their heads around figuring out why something isn't working, and getting lost because M$ constantly hides behind "Something went wrong" messages instead of real error messages, we pull out our magic boxes of flashing cursors and we get the job sorted easily.

      When we retire in 30/40 years (when I do, at least) those skills will be all but lost. I'll be pottering about in the garden with a DOS PC running in the shed, playing about with old Linux distros and programming PIC controllers via RS232 happily, having taken all my magic boxes with me, leaving all those that follow me in IT struggling and capitulating to the IT service desk version of ChatGPT.

  9. An_Old_Dog Silver badge

    iAPX-432

    ... looked awesome from a software perspective. Too bad Intel couldn't make it work well enough.

    1. Arthur the cat Silver badge

      Re: iAPX-432

      looked awesome from a software perspective

      I'd have said baroque rather than awesome. It was pretty much the last gasp of the ultra-CISC. Didn't it even have support for both OOP and garbage collection in the hardware? (Which may have had something to do with it not working well enough.)

      1. Ken Hagan Gold badge

        Re: iAPX-432

        Surely IA-64 was the last gasp of the ultra-CISC?

        I remember more than one commentator at the time remarking how IA-64 was not Intel's first attempt to sink x86 and also to do it with a chip so complicated that no-one else would be able to make it.

  10. JohnSheeran
    Pint

    Fantastic article. Thank you.

  11. mevets

    4 ring OS.

    QNX4 also used rings 1 & 2; specifically privileged items like drivers ran in ring 1; the microkernel in ring 0; normal apps and servers in ring 3.

    https://cseweb.ucsd.edu/~voelker/cse221/papers/qnx-paper92.pdf

    1. Anonymous Coward
      Anonymous Coward

      Xen as well

      Xen admittedly only used Ring 1 and it was in PV mode with 32-bit DomU kernels: https://wiki.xenproject.org/wiki/X86_Paravirtualised_Memory_Management

      Would be nice if they redefined the rings in a subtle way instead of ditching them:

      CPL0 - Hypervisor / Virtualisation

      CPL1 - Supervisor / Kernel-mode

      CPL2 - User-mode drivers (ports allowed)

      CPL3 - Applications / Utilities

      1. bazza Silver badge

        Re: Xen as well

        CPL2 gives me a little difficulty.

        Agreed that one would need to fully trust drivers anyway, just like we do today. If it's able to control hardware the driver can potentially access any memory using the hardware (via DMA attacks). One would need to trust it to not do that.

        The question is, in a separate ring to user apps, what's the value? There's still a mode switch when the application makes a call involving the device. Does one envisage this being quicker than a full mode switch to ring 0?

        1. mevets

          Re: Xen as well

          You don't need to fully trust all drivers. There is something to be said for your keyboard driver not being able to over-write your page tables.

          The real lack was that Intel didn't provide any sort of ring & privileged-opcode configurability wrt CPL.

      2. Liam Proven (Written by Reg staff) Silver badge

        Re: Xen as well

        [Author here]

        > Would be nice if they redefined the rings

        Intel *has* redefined the rings. The introduction of Intel VT, its hardware-accelerated virtualisation technology, introduced a new ring into the model. However, because all the existing numbers were used, it brought in a ring -1. So on a 64-bit machine with x86S, there will be ring -1, ring zero, and ring three.

        Personally, I am not against protection rings, or even segments. They were a potentially useful addition to the architecture, and if operating system developers had used them properly, they would be a desirable thing. However, they didn't. Microcomputer operating systems evolved from single-tasking, single-user things like CP/M, and as they gradually gained multitasking in the 1980s and 1990s, those new facilities mostly didn't use memory protection and so on.

        That is in part what killed the Amiga: it could do multitasking, but without memory protection on the 68000, and the way they did it meant that they couldn't take advantage of the new facilities of the 68030 without breaking compatibility with all older software.

        Apple had multitasking in the Lisa, but that was one of the things that got cut from the Macintosh so that it could sell profitably at a quarter of the price.

        On the other hand, the problems Apple had with Copland are the reason that it had to buy NeXT, and that's what saved the company in the long run.

        So the small details of the story also played out in the big picture. When Ritchie and Thompson designed UNIX, one of the many things from the Multics design they threw away was lots of protection rings. UNIX was minimal, and used the minimal number of protection rings. So in the 1990s, when Apple and Microsoft both moved to designs at least inspired by UNIX, they adopted operating systems which made minimal use of protection rings. The IBM/Microsoft co-designed OS/2 used more of them… but of course it ultimately flopped and was supplanted by Windows NT, modelled on UNIX and VAX/VMS, with Unix's minimal use of rings.

        It's all just like hemlines and flared jeans really.

        1. bazza Silver badge

          Re: Xen as well

          So the small details of the story also played out in the big picture. When Ritchie and Thompson designed UNIX, one of the many things from the Multics design they threw away was lots of protection rings. UNIX was minimal, and used the minimal number of protection rings.

          It feels very much like the use case for the middle-of-the-road rings has gone away anyway. OS/2's use of Ring 2 allowed privileged programs to access I/O ports. Well, in this day and age of reprogrammable device firmware, bus-mastering DMA, etc., giving any code any access to ports feels pretty much like potentially handing over the entire machine. If so, it may as well be Ring 0, and at least get as heavily scrutinised as any other kernel-level code.

          The INTEGRITY operating system is quite interesting. Device drivers are assigned to a process space (and this is the only process space which can then make use of that device; tasks in that process space can use it, tasks in another cannot), but from what I recall (I'd have to go look in the books) the kernel mediates, checks and controls access to ports, DMA, etc. It relies on the correctness of the kernel's code rather than on the hardware features of the CPU - a useful thing for portability. I think it achieves something like the end result of Ring 2 (though everything still goes through the kernel in Ring 0, so it's not necessarily fast), but without the risk that a driver in a process space can use the device to access memory other than its own process space memory.

          If you ever get a chance to do any programming with INTEGRITY, do - it's pretty good.

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: 4 ring OS.

      [Author here]

      > QNX4 also used rings 1 & 2; specifically privileged items like drivers ran in ring 1; the microkernel in ring 0; normal apps and servers in ring 3.

      Ahh, that is fantastic information, thank you very much! This probably explains why it is so difficult to run QNX successfully under hypervisors as well.

      The late, great Dan Hildebrand was of course the architect of the amazing QNX demo disc, which fitted an entire multitasking GUI OS plus a TCP/IP stack onto a single 1.44 MB floppy disk. I very much enjoyed playing around with that in the 1990s.

      http://toastytech.com/guis/qnxdemo.html

      Sadly Dan H was taken from us far too young by cancer as with so many other people...

      http://onqpl.blogspot.com/2008/07/in-memoriam-dan-hildebrand_07.html

  12. JoeCool Bronze badge

    I remember when x86 rings were shiny and new

    But how about we get a design from AMD - they did a better job on 64-bit.

    1. aerogems Silver badge

      Re: I remember when x86 rings were shiny and new

      It's not so much that AMD did a better job as that Intel wanted to create a brand-new instruction set while AMD just bolted something on top of x86. Intel's Itanic didn't have any compatibility with x86 apps, and at the time Intel was riding high on the smell of its own farts, so things didn't go as planned by Intel execs.

      1. Anonymous Coward
        Anonymous Coward

        Re: I remember when x86 rings were shiny and new

        "and at the time Intel was riding high on the smell of its own farts"

        The more things change... ?

  13. Vometia has insomnia. Again. Silver badge

    "Unused by modern software"

    Removing rings 1 and 2 (which are unused by modern software).

    Doesn't VMS, just recently released on x86, use all four rings? It's been quite some time and things may have changed over the decades, but that has the potential to be quite annoying.

    1. Mockup1974 Bronze badge

      Re: "Unused by modern software"

      oof poor VMS guys, they just ported it to x86 and now Intel wants to pull the rug from under their feet

    2. Norman Nescio Silver badge

      Re: "Unused by modern software"

      Maybe you are thinking of Kernel, Executive, Supervisor and User mode in VMS. They don't map precisely to Rings 0-3, and if you have two modes/rings, you can emulate as many as you want.
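
      (To make the "emulate as many as you want" point concrete, here is a minimal sketch - not VMS code, and every name in it is illustrative: the kernel tracks a per-thread software mode and gates each service on it, so four VMS-style modes can ride on hardware that only distinguishes kernel from user.)

          /* Toy sketch: four software privilege levels on two-mode
           * hardware. The kernel records a mode per thread and checks
           * it before each privileged service; lower value = more
           * privileged, in the VMS style.
           */
          #include <stdio.h>

          enum sw_mode { KERNEL = 0, EXEC = 1, SUPER = 2, USER = 3 };

          struct thread { enum sw_mode mode; };

          /* A thread may use a service only if it is at least as
           * privileged as the service requires. */
          static int mode_ok(const struct thread *t, enum sw_mode req)
          {
              return t->mode <= req;
          }

          int main(void)
          {
              struct thread t = { SUPER };
              printf("exec-level service: %s\n",
                     mode_ok(&t, EXEC) ? "allowed" : "denied");  /* denied  */
              printf("user-level service: %s\n",
                     mode_ok(&t, USER) ? "allowed" : "denied");  /* allowed */
              return 0;
          }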

      Careful reading of this Google Groups thread gives some background: "Why (conceptually) does executive mode code need unrestricted kernel mode access"

      The Wikipedia article on "Protection rings" gives further background.

      NN

      1. Vometia has insomnia. Again. Silver badge

        Re: "Unused by modern software"

        Thank you, that is indeed what I was thinking of! I shall have a good look at your links as my incorrect presumption has become quite ingrained over the years...

  14. Anonymous Coward
    Anonymous Coward

    I think Sod's Law for removing features dictates that some crucial use for more than 2 rings will surface in the next 5 years.

    1. Arthur the cat Silver badge
      Happy

      some crucial use for more than 2 rings will surface in the next 5 years

      There's a chap by the name of Sauron nodding vigorously.

  15. aerogems Silver badge

    Seems like it's time. Microsoft got rid of 16-bit app support in the 64-bit versions of Windows, and now that Windows 11 is 64-bit only, 16-bit app support is effectively dead. So dumping that from the CPU seems perfectly reasonable. I know some Linux users will likely squawk because they still use some decades abandoned app that is 16-bit or something, but I'm confident the kernel devs can work out a way to transparently translate the old 16-bit instructions into their closest 64-bit analogs.

    1. doublelayer Silver badge

      "I know some Linux users will likely squawk because they still use some decades abandoned app that is 16-bit or something, but I'm confident the kernel devs can work out a way to transparently translate the old 16-bit instructions into their closest 64-bit analogs."

      I doubt it. Most of the oldest Linux software was completely open source, which means that, even if people are still using it without updating it, it is more likely to work if they recompile it to a native binary than trying to run the old binary. Admittedly, if the code is that old, neither are likely to work, but not because of 16-bit support but by using parts of the system that are no longer present.

    2. TheWeetabix

      I don’t understand your reasoning

      You act like everyone using a Microsoft product is right up to date on the latest version (just a hint: they aren't - literally any manufacturing or warehouse facility you go to will have a selection of PCs running various versions) and like people using Linux are more likely to be using crusty old versions (again, no - binaries can be built against new libraries or against different flavours, and updates are safer). If we're talking about the same hardware, you can keep updating and rebuilding binaries for longer on Linux than you can on Windows, all things equal.

      Considering most of the hardware and operating systems in datacentres, Microsoft making design decisions isn't really a driving factor for chipmakers anymore.

      1. Richard 12 Silver badge

        Re: I don’t understand your reasoning

        Anything currently running on old hardware is kind of irrelevant.

        Aside from very specialist (expensive/embedded) environments, the main reason why really old stuff is still used is because it still works.

        There is usually something newer and slightly better to shift to - the reason they didn't shift before is because they didn't need to, not because they couldn't.

        So it'll stay on that hardware until it physically dies.

        Then it'll either be obsolete and not needed, get updated to whatever hardware is easy to buy at the time of death, have the hardware replaced like-for-like from eBay, or have a lift-and-shift into a virtual machine of some kind.

  16. Aarb

    Breaking free of AMD's x64 licensing?

    Part of me is wondering if/how this will affect Intel's licensing of the 64-bit extensions from AMD. Intel isn't doing this out of the goodness of its heart, that's for sure...

    1. bazza Silver badge

      Re: Breaking free of AMD's x64 licensing?

      Possibly they're thinking that tidying up the core design is a way of shedding transistors, and gaining a bit of performance as a result? Who knows. Thing is, AMD could do the same thing too...

      AMD holds the upper hand, ultimately owning AMD64. The cross-licensing deals are far too complicated for me to follow, but I'm wondering about the same thing. If there's something about "in x86-compatible chips" in the deals, and Intel does a chip with AMD64 that is not x86-compatible, it could get exciting! Has "x86" evolved to mean "AMD64"? Is AMD64 actually a derivative work?

      The cross-licensing deals exist partly to keep anti-trust regulators away from Intel, AMD being perceived as the underdog at the time. I'd say that, in the interests of not going through all that again, it'd be best if AMD and Intel agreed on it rather than messing around suing each other.

  17. Mockup1974 Bronze badge

    But what about my ArcaOS?

    1. demonfoo

      Considering many modern machines don't have a BIOS CSM anymore, you'll probably have bigger problems running e.g. ArcaOS than x86-S...

      1. dry

        ArcaOS 5.1, which is an OEM version of OS/2, doesn't require CSM, though it is recommended to enable it as it helps set up the hardware for a 32-bit OS - things like having the framebuffer below the 4GB barrier. Also maximising available memory: a lot of modern machines only leave a couple of GB or less available below 4GB.

        And while you don't need ring 2 access, you do if you want to use DOS programs, including Win3.1, and of course it needs ring 0 access for the kernel and device drivers.

  18. Cybersaber
    Boffin

    Anyone notice something off about the picture?

    The 80386DX-20 in the picture, with its bent pins, laid gently on the back of a motherboard you can tell has PCI-E slots soldered onto the other side?

    Sorry, I was around when those were current, and it just struck me as funny and took me down memory lane, comparing the solder points to what it would have looked like on a period-correct mobo with ISA slots, the old PGA or soldered-on BGA socket, and so on.

    Aaah those were the days when configuring jumpers didn't mean adjusting clothing on your children. :)

  19. martinusher Silver badge

    I'd put my money on emulation

    Based on my experience with a Z80 simulator that runs CP/M, there's no point in running older code natively; you might as well just emulate the target environment. As for that environment, it's probably a good idea to switch to a more efficient architecture, the only reason for staying with x86 being Windows, and the only reason for staying with traditional Windows being inertia.
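
    (To illustrate how little is involved at the core of such an emulator, here is a toy fetch-decode-execute loop in C handling three real 8080/Z80 opcodes; everything a usable emulator needs beyond this - the full opcode set, a memory map, I/O and CP/M call emulation - is omitted.)

        /* Toy fetch-decode-execute loop for three real 8080/Z80 opcodes:
         * 0x3E = load immediate into A, 0x3C = increment A, 0x76 = halt.
         * The "program" loads 42, increments it, and halts with A = 43.
         */
        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint8_t mem[] = { 0x3E, 0x2A, 0x3C, 0x76 };
            uint8_t a = 0;     /* accumulator     */
            uint16_t pc = 0;   /* program counter */

            for (;;) {
                uint8_t op = mem[pc++];               /* fetch */
                switch (op) {                         /* decode + execute */
                case 0x3E: a = mem[pc++];      break; /* MVI A,n / LD A,n */
                case 0x3C: a++;                break; /* INR A  / INC A   */
                case 0x76: printf("HALT, A=%u\n", a); return 0;
                default:   printf("bad opcode 0x%02X\n", op); return 1;
                }
            }
        }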

  20. Binraider Silver badge

    Provided that emulation does the job of allowing legacy content, I have no real problem with this.

    It’s a harder sell at MS and unpicking stuff like the bootstraps of the OS. Who knows what cruft lurks there!

  21. Old Man Ted

    O for old timers

    This is just to stop me (a pre-atomic baby, a doddery old 79-year-old) from enjoying Leisure Suit Larry visiting his pleasure palace and bar, played off an old DR-DOS 5" floppy on a 286 unit which I while away the time with, plus loads of prehistoric games which I still regularly play.

    1. BPontius

      Re: O for old timers

      Leisure Suit Larry - what a hoot! Haven't played in a loooong time.

  22. StrangerHereMyself Silver badge

    Legacy

    Intel is completely wrong about this, since backward compatibility will be needed for decades to come. Windows' success can at least partly be explained by its commitment to supporting old software all the way back to Windows 3.11 and MS-DOS.

    If Intel goes through with this it will lose out to AMD big time, and it could even sink the company. I also don't see any advantage for them, except freeing up some die space, for which the savings are negligible.

    1. IGotOut Silver badge

      Re: Legacy

      It will lose out big time?

      How big? A few hundred out of several million a year?

      And you could always still buy the older chips.

      1. StrangerHereMyself Silver badge

        Re: Legacy

        Like heading for less than 50% market share. That would be a landmark in computing, since Intel has dominated the PC market for decades.

    2. BPontius

      Re: Legacy

      Windows 10 & 11 have already dropped native 16-bit support. Microsoft doesn't even make 32-bit Windows anymore, as even 32-bit software is fading fast.

      AMD dropped 16-bit compatibility in its processors several generations back.

    3. kain preacher

      Re: Legacy

      Except AMD is on board

  23. Ozan

    Well, some people noticed the lack of 16-bit support in 64-bit Windows. For me, it was the Need for Speed: Most Wanted installer. The bloody thing was 16-bit. At that time, I remember installers being 16-bit. Also, I remember installer makers made tools available so we could install our games. That was a long time ago; I don't remember it that well anymore.

    1. Binraider Silver badge

      X-Wing Alliance has the same problem: a 16-bit installer on a 32-bit game.

      Fans hacked together a replacement, but still, a pain.

  24. BPontius

    All for it!

    I am all for Intel cutting 16- and 32-bit compatibility. 16-bit compatibility needs to end, and 32-bit is close behind.

  25. jglathe
    Megaphone

    Do it.

    Could greatly improve things. Reducing complexity might be a worthwhile step.

  26. 45RPM Silver badge

    Arguably, the modern computer industry runs on ARM - especially if we consider mobile phones, tablets and set top boxes to be computers (and I do). That’s before we even consider the dash to ARM in the server room.

    So, whilst Intel is still very relevant, it's the legacy, backward-compatibility platform which exists to support software from the past - and (for the time being) games.

    /devils-advocacy-mode
