back to article Will anybody save Linux on Itanium? Absolutely not

No sooner is Intel IA64 support removed from the Linux kernel than the complaining begins… but the discussions are fascinating. Although the proposal to remove support for Intel's infamous Itanium architecture - aka Itanic - from Linux was rebuffed in February, just weeks ago, in October, the move was approved for kernel …

  1. Paul Crawford Silver badge

    Texas Instruments had a VLIW family of DSP processors around the late 1990s that I had the sad misfortune of working on. Again the promise was 1 GIPS of performance from a 200MHz or so clock rate (which seems nothing now, but then was seriously impressive), but that was only possible on very specific code segments when the various internal units (integer cores, MACs, loop counters/index, etc) could all run code in parallel. Which was rare. What made it worse was the piss-poor compiler tools that hardly managed to optimise C code for that sort of situation, an assembly language whose rules life is way too short to learn, and, to cap it off, a long instruction pipeline that was flushed, with a serious performance hit, any time there was an instruction branch (i.e. an if statement or break in a loop).

    The end result was mediocre performance in reality, and a few years on it was beaten on performance by x86-style chips that had out-of-order execution and branch-prediction capabilities. Not to mention far better compilers for PCs, many of which were also free, and greater ease of debugging.
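    To illustrate the branch problem the comment describes (a hypothetical sketch, not TI's actual code or toolchain output): on a statically-scheduled VLIW, a data-dependent branch inside a hot loop flushes the long pipeline, so the hand-optimisation folklore was to rewrite selects as branch-free arithmetic the compiler could pack into the parallel units. Both functions below compute the same clipped sum; only the second gives a VLIW scheduler something to work with.

    ```c
    #include <stdint.h>

    /* Branchy version: the if inside the loop is a data-dependent
     * branch, which on a statically-scheduled VLIW costs a pipeline
     * flush whenever it goes the "wrong" way. */
    int32_t sum_clipped_branchy(const int16_t *x, int n, int16_t limit)
    {
        int32_t acc = 0;
        for (int i = 0; i < n; i++) {
            if (x[i] > limit)          /* data-dependent branch */
                acc += limit;
            else
                acc += x[i];
        }
        return acc;
    }

    /* Branch-free version: the select is expressed as mask arithmetic
     * with no control flow, so every iteration has the same schedule. */
    int32_t sum_clipped_branchless(const int16_t *x, int n, int16_t limit)
    {
        int32_t acc = 0;
        for (int i = 0; i < n; i++) {
            int32_t v = x[i];
            int32_t over = -(v > limit);     /* all-ones mask if clipping */
            acc += (over & limit) | (~over & v);
        }
        return acc;
    }
    ```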

    1. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > Texas Instruments had a VLIW family of DSP processors

      That's an interesting point, but a DSP isn't a general-purpose thing of course -- as GPUs are not today.

      (As an aside, a reader also wrote in to note that the Russian Elbrus processors apparently have a VLIW design, but then they are not mainstream, either.)

      You don't seem to hear so much about dedicated DSPs any more, but related kit still exists, like Sophie Wilson's FirePath processors at Broadcom.

      I think, like Intel's doomed "Native Signal Processing" initiative, in the end it just moved into the main CPU core, possibly in some generalised form like SIMD instructions, which is good enough for mainstream general-purpose computers.

      I reckon this is the eventual fate of discrete GPUs. They are already ridiculously large and hot and expensive, and as cryptocurrencies thankfully fade away, now banks of racks of the things are being used to run glorified neural network simulations in datacentres to power these allegedly-intelligent chatbots all the technologically-illiterate-but-excitable currently love.

      I reckon Apple has the right plan here. Make a smaller simplified version that you can build right into your SoC where it will be much more closely coupled to your main processor and memory, and as long as the drivers are good, that will do 80% of what 80% of your users need, for 20% of the cost and a lot less than 20% of the electricity and heat output.

      If and when Intel make core-integrated GPUs that are good enough for most people most of the time, the ludicrous ugly kludge of laptops with discrete GPUs and some ugly switching system will just go away and become a faintly-remembered historical embarrassment.

      1. thames

        Many years ago I worked for a company that used DSP accelerator boards from a major vendor (number one in their field) in PC based test equipment that we built. The DSP chip was the DSP32C from AT&T. I wrote the software, and benchmarks showed that we could only meet the required performance in terms of how long a test took by offloading the heavy data array calculations from the PC running the overall system to the DSP board.

        A few years later I was at a seminar run by our hardware vendor about their new products. They told us they were dropping the DSP board product line as it was no longer really necessary. The latest mainstream x86 CPUs had gotten faster, and more importantly they now had SIMD instructions. It was the SIMD instructions in the DSP which had made it so fast. I later did some benchmarks on newer hardware and found this was so.

        One big advantage of integrating the SIMD in the general purpose CPU by the way is that you no longer lose so much time shuffling arrays of data back and forth over the bus as you do different sorts of operations on it.
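        As a toy sketch of that point (hypothetical code, not the original benchmark): once the SIMD work runs on the host CPU, successive stages of an array pipeline can reuse the same buffers in place, where a DSP board needed a bus transfer between every stage. With `-O2` a modern compiler will auto-vectorize both loops into SIMD instructions.

        ```c
        #include <stddef.h>

        /* Stage 1: dot product. The whole computation stays in host
         * memory -- no DMA out to an accelerator board and back. A
         * modern compiler auto-vectorizes this loop at -O2. */
        float dot(const float *a, const float *b, size_t n)
        {
            float acc = 0.0f;
            for (size_t i = 0; i < n; i++)
                acc += a[i] * b[i];
            return acc;
        }

        /* Stage 2: scale the same array in place. On a DSP board this
         * would have meant another round trip over the bus before the
         * next operation could start. */
        void scale(float *a, float k, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                a[i] *= k;
        }
        ```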

        There was still a market for specialized DSP chips, but it was increasingly in certain specialized embedded applications where close integration with application oriented hardware features was important.

      2. Palebushman

        Liam Proven - Reply Icon [Author here]

        What a delightful and enlightening reply Liam. I like you; it's a shame though that there are not more exquisite brains like yours in this field of media technology writers. Well done!

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Liam Proven - Reply Icon [Author here]

          Gosh. Thank you!

      3. bazza Silver badge

        DSPs aren’t general purpose and that’s what sunk them. As soon as projects started thinking “whole system” and realised that, as well as the DSP functions, there was generally a whole lot of general compute to do as well to make a practical and useful system, the inadequacy of a DSP started to become blatant.

        And then Motorola did a PowerPC with AltiVec (Intel's SSE and AVX are the later equivalents). Motorola really nailed AltiVec, and had access to better silicon processes too. This, to certain key projects, was the answer, and surprisingly small piles of PowerPC CPUs could do anything one needed done. With the right COTS kit it was perfectly possible to swallow fat signal streams and process them in real time, and do other things too.

        PowerPC remained competitive for a surprisingly long period of time. A 400MHz 7410 using carefully crafted libraries could hold its own against a 4GHz Pentium something or other.

        And there’s a lot of systems still using them. Whilst CPUs have since marched off into the distance, the spectrum hasn’t got any wider, and as PowerPC was adequate then, it still is today. Today other factors come into play, like the cost of rewriting and requalifying code for modern CPUs compared to the cost of renewing moderately old hardware. This is also why GPUs haven’t quite taken off in this field.

        A modern Intel or AMD CPU is of course a fearsome DSP beast, but one runs out of ideas as to how to keep one busy doing only DSP. One may as well run one’s DSP workload on, say, 10 cores and use the remaining 118 for something fun for the system operator.

      4. Geoff Campbell Silver badge

        Intel integrated graphics and some other stuff

        I've been using Intel's on-chip graphics stuff for years in my main PC, to simplify the PC build and keep costs down. They've been more than good enough for my needs for a long time now, with any serious 3D gaming kicked off onto an XBox.

        The new Snapdragon X stuff from Qualcomm is looking even better than that, with a welcome return to a nice simple CPU core. I personally think that we went down a serious dead-end as soon as we started doing things like OoOE and massively complicating the CPU core as a result. In any world that contains decent multi-threaded processing, that sort of complication just isn't needed for general-purpose computing.


        1. Stuart Castle Silver badge

          Re: Intel integrated graphics and some other stuff

          At home, I have a MacBook with an Apple M1 CPU/GPU. It's actually for work, so I don't really game on it. However, as an experiment, I installed Parallels, and Windows on it. This was actually for work as well. I support Macs at work, and I know that at some point, I'm going to get handed a shiny new MacBook someone has bought and get told to put Windows on it, even though it's Apple silicon, not Intel.

          Because I'd heard good things about the performance, I installed Steam and a couple of games (Just Cause 4 and Arkham City IIRC). Both performed well, considering the ARM CPU was emulating an x86, and while I didn't get any frame times or anything like that, both were perfectly playable at 1080p. I doubt it would have handled the latest Call of Duty or other AAA game, and don't get me wrong, I do have a fairly decent spec gaming PC and a Steam Deck that I generally game on, but I was impressed that it emulated an x86_64 fast enough to play any game at a reasonable frame rate.

    2. John Riddoch

      I remember the early Itanium announcements, declaring how much faster it would be because you'd optimise code at compile time rather than runtime. That struck me as being a "better" way to do things, because you didn't mind if compilation took ages, provided it resulted in an efficient binary. Of course, initial compilers were poor and the resultant assembly wasn't as good as expected. I recall Intel finally releasing a compiler which made it better, but by that time other architectures had moved on, and Microsoft, IBM and Sun had ditched the idea of porting their operating systems to Itanium (or were very close to it).

      The article seems to explain it concisely - a seemingly good idea aged poorly and no-one wanted to give up on it after investing however much time and money into it.

      1. Gordon 11

        how much faster it would be because you'd optimise code at compile time rather than runtime. That struck me as being a "better" way to do things

        That was the theory.

        In practice we tried out a system for an HPC application. Running the compiler with an analyser for ~10 hours overnight yielded a <1% improvement over not using the analyser. With no guarantee that it wouldn't actually be worse for a different input. And it wasn't any faster than the 64-bit MIPS systems we had.

        And this application's code was updated about once a month.

        It just wasn't worth the effort.

        Then AMD came along with amd64 and we all breathed a sigh of relief.

        1. Michael Strorm Silver badge

          > That was the theory. In practice [it wasn't]

          To quote Donald Knuth, "The Itanium approach...was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write."

          Anyway, some good info on the failure of Itanium in this previous Register discussion, in the replies to this post by myself, this Stack Overflow question and here and here.

    3. Justthefacts Silver badge

      It was a DSP

      TI DSPs were phenomenal… particularly the C6455 and C674x. "Only possible on very specific code segments" - you mean like FIR filters, FFTs, matrix operations, which were pretty much the purpose of buying a DSP chip? It's pointless complaining that you needed to deeply understand and optimise code based on the microarchitecture, not fire-and-forget like a general-purpose chip. Because you had to spend weeks if not months converting to fixed-point anyway. Manual loop-unrolling was the *least* of your problems. Sounds like you were trying to use it for the wrong applications, if you are complaining about the performance with branches in.
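      For readers who never touched these parts, here is a hedged sketch (hypothetical code in Q15 fixed-point format, not vendor library code) of the kind of hand-shaped inner loop described above: a fixed-point FIR tap loop, manually unrolled by four so a VLIW scheduler can keep its multiply-accumulate units fed.

      ```c
      #include <stdint.h>

      /* Q15 fixed-point FIR tap loop, manually unrolled by 4 -- the kind
       * of hand-shaping the C6x tools rewarded. Coefficients and samples
       * are int16_t in Q15; the 32-bit accumulator holds full products.
       * NTAPS is assumed to be a multiple of 4 here for brevity. */
      #define NTAPS 8

      int16_t fir_q15(const int16_t *delay, const int16_t *coef)
      {
          int32_t acc = 0;
          for (int i = 0; i < NTAPS; i += 4) {   /* unrolled x4 */
              acc += (int32_t)delay[i]     * coef[i];
              acc += (int32_t)delay[i + 1] * coef[i + 1];
              acc += (int32_t)delay[i + 2] * coef[i + 2];
              acc += (int32_t)delay[i + 3] * coef[i + 3];
          }
          return (int16_t)(acc >> 15);           /* rescale back to Q15 */
      }
      ```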

      1. heyrick Silver badge

        Re: It was a DSP

        "TI DSPs were phenomenal…"

        For a long time I used a gizmo called a "Neuros OSD". It was an ARM9 running at about 200MHz with a DSP core bolted onto it (running at about 100MHz). The actual chip was a TI TMS320DM320.

        The UI was slow and clanky because the machine was running some early Qt/Debian combo and the processor wasn't really up to doing anything much quickly.

        But when it handed the reins over to the DSP, that thing could record analogue (SD) TV in real-time (outputting H.263, sort of XviD style videos) at up to 2500kbps (which looked pretty good) and AAC audio (128kbit).

        So, yeah, that DSP was impressive.

        Sadly the Neuros plan of making the thing open source never really went anywhere because all the interesting bits were closed source binary blobs, so it just wasn't possible to do cool things like flag the video as being anamorphic, or one that I had planned - capture the teletext captions and insert them into the video as timed subtitles.

      2. Paul Crawford Silver badge

        Re: It was a DSP

        There are more algorithms than FFT/FIR that are well optimised in the libraries.

      3. martinusher Silver badge

        Re: It was a DSP

        You'd be insane to use a TI DSP of any family for general purpose computing. It's not what they're sold for. I'm used to the 28C family, the ones used for motor and power control. Although some of the newer ones feature ARM cores, the original ones have a really naff 16-bit processor (and to make it more of an acquired taste it was a true 16-bit machine, no byte support). This wasn't so important, though, because the parts were all a collection of highly specialized, very efficient intelligent peripherals that needed the processor to set everything up and monitor the system's operation.

        A general purpose processor could probably do just as well but not at TI's price point.

    4. kuiash

      Sounds something like the 32XXX series. Yeah. They could be tricky but I love coding in ASM so I thought it was great!

  2. Binraider Silver badge

    How long does the writing need to be on the wall before a company takes notice of the changes it needs to implement?

    Itanium has been a dead man walking for well over ten years at this stage. Complaining about lack of support now isn't sensible. It's incompetence of the same sort that gave us un-mitigated RAAC schools.

    DO the capital maintenance. Or expect failure. There is no alternative.

    1. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > Itanium has been a dead man walking for well over ten years at this stage.

      I would strongly dispute that.

      I need to see some evidence of it ever even standing up, let alone walking.

      It's not a dead man walking at all. It's a sort of coffin-shaped tub full of decaying giblets immersed in formaldehyde, but with a lot of life-support tubing going in and out of a small number of cadavers, keeping a few bits of them inflating and deflating in some horrible parody of metabolism.

      That's the point of the summary I included: if anyone really cared, they'd have been maintaining it already. They weren't, which is why it's gone. They are 100% not going to do it independently out of tree either, so it is not going to happen.

      There are definitely people running Itanium in production, yes. Maybe as many as hundreds of them. But they're running OpenVMS or maybe HP-UX and they'll keep it limping along until it can be emulated.

      The sales numbers were much, much worse than generally realised. The Intel and HP charts looked good if you didn't know how to read graphs... but if you did, you looked at the scales on the axes and saw that while x86 numbers were in millions of units shipped, the Itanium charts were unscaled: they were total sales, thousands of units per year.

      Very broadly speaking, HP scammed Intel into making a doomed failure of a successor to PA-RISC while at the same time ditching its Arm devices and killing Alpha, and HP got away with it.

      It's dead, Jim. It was never really alive but it's definitely dead...

      And it never walked. It never even shambled with arms outstretched moaning DAAAATAAA CEEENTEEERRRRSSS...

      1. anothercynic Silver badge

        Chapeau for the fantastic description and including the words giblets, cadavers and metabolism. :-)

      2. milliemoo83

        "It's not a dead man walking at all. It's a sort of coffin-shaped tub full of decaying giblets immersed in formaldehyde, but with a lot of life-support tubing going in and out of a small number of cadavers, keeping a few bits of them inflating and deflating in some horrible parody of metabolism."

        Akin to a dodgy kebab shop store room?

        1. Liam Proven (Written by Reg staff) Silver badge

          [Author here]

          > Akin to a dodgy kebab shop store room?

          Please imagine Willow, from the 'Bad Willow' episode of Buffy the Vampire Slayer, saying simply:


          (I did love that episode especially. But then I am, originally, English. And Englishmen are famous for loving the sound of leather upon Willow.)

          1. milliemoo83

            Off to watch some Willow.... and I don't mean the film. Also, Alyson Hannigan is a fantastic dancer... currently doing Dancing With the Stars.

      3. heyrick Silver badge

        "in some horrible parody of metabolism"

        You owe me a new nose as tea just exploded out of the current one...

        ... worse, I'm on break at work. How the hell do I explain this in French...

        1. milliemoo83


          1. bazza Silver badge

            Nah, just say “Liam Proven”. They’ll nod knowingly…

      4. Sunset

        Ah, yes. The total failure that was a US$4bn business in 2008, at a time when Opteron system revenue was barely half that.

        It was niche. It was always niche. It was high-end mission-critical, just like SPARC and Power, but in that niche, it did fine at one point (until the secular decline in that part of the server market as a whole.)

        As for "hundreds" - the number is in the thousands, even now. There are individual HP-UX sites with nearly a thousand machines. I would guess there are at least a thousand OpenVMS/IPF sites and at least two thousand running UX. I wouldn't be surprised if the numbers are 2-3 times that. (Remember that while VMS is available on x86 now, the ISV ecosystem still largely is not - Rdb for instance.)

        1. Liam Proven (Written by Reg staff) Silver badge

          [Author here]

          > As for "hundreds " - the number is in the thousands, even now.

          OK, I can accept that. :-D A decimal place out? Close enough for government work.

          It's a tiny tiny market and it always was, but I am happy to concede that it could be 10× bigger than my back-of-a-fag-packet estimate.

        2. A Non e-mouse Silver badge

          It was niche. It was always niche

          But the whole point was that it wasn't supposed to be niche: It was supposed to replace the x86. It ended up niche 'cause it failed. Miserably.

          1. Paul Crawford Silver badge

            <== THIS!

            Itanium was hailed as the 64-bit replacement for x86, then AMD came along with x86 compatible 64-bit support and it was dead in the water. But it shambled along for a lot longer than expected...

        3. Bitbeisser

          "As for "hundreds " - the number is in the thousands, even now. There are individual HP-UX sites with nearly a thousand machines. I would guess there are at least a thousand OpenVMS/IPF sites and at least two thousand running UX. I wouldn't be surprised if the numbers are 2-3 times that. (Remember that while VMS is available on x86 now, the ISV ecosystem still largely is not - Rdb for instance.)"

          Even thousands of Itaniums still running are not even a droplet in the ocean of computing these days. It was even a niche within a niche at its best times.

          And you mentioned those vast numbers of thousands of Itanium machines running HP-UX and OpenVMS, which just underlines that there is no real need for Itanium support in Linux. How many Linux distributions were ever seriously offered? And how many instances of those?

      5. Falmari Silver badge

        @Liam Proven "There are definitely people running Itanium in production, yes. Maybe as many as hundreds of them. But they're running OpenVMS or maybe HP-UX and they'll keep it limping along until it can be emulated."

        The company I worked for stopped compiling the production engine of our software for SUSE 11 about 4 years ago. Before I retired 2 years ago we had deprecated OpenVMS, and I would not be surprised if HP-UX has suffered the same fate (it was on the cards), as HP-UX is due to drop out of support in the next couple of years. We have customers running Itanium in production but we dropped support as the OSes were no longer being supported.

        I'm not sure who is left to complain about dropping Itanium support from the Linux kernel. Are there any distros that still support Itanium? SUSE 11 was the last major distro still supporting Itanium when its support ended in 2019.

      6. Palebushman

        Liam Proven is spectacular!

        Only a well seasoned genius at his skills could come up with a paragraph like this:

        "It's not a dead man walking at all. It's a sort of coffin-shaped tub full of decaying giblets immersed in formaldehyde, but with a lot of life-support tubing going in and out of a small number of cadavers, keeping a few bits of them inflating and deflating in some horrible parody of metabolism."

        Absobloodylutely gorgeous Liam!

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Liam Proven is spectacular!

          Thank you very much. :-)

          The core image that I was riffing on was from a Neal Asher novel I read some years back, which had a *splendidly* macabre concept called "reefs" -- reified individuals: uploaded copies of human minds, running in silicon, occupying their original bodies re-animated with technology keeping the dead meat body up and walking around.

      7. Porco Rosso

        Raymond Chen reads TheRegister...

        Windows Pinball on Itanium ...

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Raymond Chen reads TheRegister...

          [Author here]

          It could be and it would be most gratifying if so, but it wasn't an exclusive of mine. :-)

      8. Anonymous Coward
        Anonymous Coward

        I started with Bull in 2005, and during the on-boarding, they proudly explained how they were transitioning their GECOS mainframe architecture to Itanium.

        Then I was sent to work at a customer, which was running one of their Itanium-based HPC super (a Top500 member at the time), which was using a modified RHEL3, I think.

        Fortunately, I didn't have much to do with it, merely got requests for totally regular systems like X terminals running from a Sun V120 or the odd MC88K box with SysVR3.

    2. Mage Silver badge

      for well over ten years

      Well over 15, nearly 20.

      Wikipedia (I remember it, and it was the first 64-bit Windows since the prototype 64-bit NT 4.0 for Alpha):

      Two versions of Windows XP 64-Bit [Itanium] Edition were released:

      Windows XP 64-Bit Edition for Itanium systems, Version 2002 – Based on Windows XP codebase, was released simultaneously alongside the 32-bit (IA-32) version of Windows XP on October 25, 2001.[37]

      Windows XP 64-Bit Edition, Version 2003 – Based on Windows Server 2003 codebase (which added support for the Itanium 2 processor), was released on March 28, 2003.[38]

      This edition was discontinued in January 2005, after Hewlett-Packard, the last distributor of Itanium-based workstations, stopped selling Itanium systems marketed as 'workstations'

      Windows XP Professional x64 Edition (for AMD-type x86-64 CPUs) was released in 2005.

      Obviously Itanium was "dead man walking" by 2004 and essentially dead in 2005. The only other totally pointless Windows versions were maybe ME & Win 8. Win7 was essentially a Service Pack for Vista.

      1. Paul Crawford Silver badge

        Re: for well over ten years

        On related project work, there were Windows NT machines running on Alpha, at the time throttled to just below 300MHz (the limit for US export control then). When MS then dropped support for it along with the other non-x86 architectures, it was a serious wake-up call that MS can't be trusted on new technologies. In fact, on anything...

      2. A Non e-mouse Silver badge

        Re: for well over ten years

        There's a bit more to it than that. In one of Dave Cutler's interviews on the Dave's Garage channel he talks about Microsoft's internal arguments about the XP-derived 64-bit OS and the Server 64-bit OS. (Spoiler alert: the XP-derived one was very unreliable and the server one took over.)

      3. Liam Proven (Written by Reg staff) Silver badge

        Re: for well over ten years

        [Author here]

        I find your post rather hard to follow TBH and I am not sure what you are trying to say here, but you seem to have your information a bit muddled.

        NT 3.x and 4 ran on Alpha, yes, but only in 32-bit mode. There were no 64-bit builds of NT.

        An unreleased copy of a prototype 64-bit Windows 2000 was recently uncovered and I wrote about it:

        The Reg wrote about the forthcoming x86-64 XP nearly 20 years ago:

        I also looked at x86-64 XP recently:

        That story carefully tries to explain the differences between the two different 64-bit editions of XP, which it seems to me that you are muddling up.

    3. NeilPost Silver badge


      The irony of the time, effort, money and hubris expended on Itanium… playing out in parallel with Intel ditching their (at the time) market-segment-leading ARM processors, XScale/StrongARM, just as modern cellphones (and everything else running ARM) rose, is not lost on me.

      Apple losing their joint-venture stake in the predecessor to ARM Holdings (due to Jobs vs Sculley) is almost as bittersweet.

      1. druck Silver badge

        Re: StrongARM/Xscale

        The only Itanium systems I've come across were so slow, you could probably emulate them faster on the ARM chip in a Raspberry Pi 5.

  3. Anonymous Coward
    Anonymous Coward

    like video games localization

    "You people who still want the architecture, maintain an out-of-branch patch set for the architecture for a year and we'll consider making it mainline again."

    It makes me think of the clueless gamers who apparently want either a native Linux port or a certain voice localization, saying all over forums "Linux or no sale".

    It usually turns out the gigantic effort required for either localization or a port would barely bring in a dozen more sales.

    Same goes for Itanic support it seems. Effort vs. benefit ...

    1. Pete Sdev Bronze badge

      Re: like video games localization

      I have more than a few games from Steam, all native Linux. The availability is larger than one might think.

      If there's not a Linux version of a game, I won't buy it. Not a problem. I certainly wouldn't go around demanding one.

      I've not noticed people going around posting "Linux or no sale". Sometimes people ask "will there be a Linux version?" or the developers themselves say "Linux version in the future time permitting". Mind you, I don't spend a particularly large amount of time on gaming forums.

      1. Anonymous Coward
        Anonymous Coward

        Re: like video games localization

        "I have more than a few games from Steam, all native Linux. The availability is larger than one might think."

        Same here, but with Proton, native Linux ports are a thing of the past now. There are still plenty of pre-Proton Linux ports: Borderlands 2, Hitman 2016, XCOM 2, and so many more.

        The point is, with Proton, almost no studio is even bothering to try a Linux port purely on the grounds that it would be native rather than Proton, for unmeasurable gains in sales.

        This doesn't prevent a very vocal tiny minority from demanding them. The same seems to go for Itanium on Linux.

        Funny story: a long time ago, my company went Linux on Itanium. The clueless CIO wanted to show he could do Linux. At the maintenance costs of HP-UX on ia64. LOL. That's the only use case I've ever seen for Linux on IA64 !

    2. ExampleOne

      Re: like video games localization

      Is a wine/proton port a "native port"? I suspect for the ideologues, it isn't.

      My take is that an officially supported wine/proton port is effectively a native Linux port of the game, so long as the developer is willing to commit to addressing issues that arise only on wine/proton. I don't dictate the game engine; why should I refuse to accept a usable stable set of libraries over the API they speak?

      1. Blade9983

        Re: like video games localization

        I would agree with you. As would Valve.

        If it runs on Proton and it runs well, I don't see why it matters. The Stallmans of the world probably don't like it. But video games are, for the most part, proprietary software.

      2. Pete Sdev Bronze badge

        Re: like video games localization

        For the record, I wasn't including emulation in my previous comment, but genuine native versions.

        With Proton, thanks especially to the Steamdeck, the choice widens considerably.

        It's slightly curious that often the smaller or indie studios manage a Linux port (the dominant game engines having Linux versions AFAIK) where the bigger guns don't. <shrug>. They get my money, and gladly.

        Regarding availability, for me it's a bit like seeing shoes I like the look of but aren't available in my size (my shoe size being ~ as niche as gaming on linux). I think "oh well, that's a shame" and buy something else.

        1. desht

          Re: like video games localization

          Ah, so Wine/Proton/Steamplay are the insole of the gaming world!

        2. Nick Ryan Silver badge

          Re: like video games localization

          I suspect that while the larger studios are more likely to have the capacity to support Linux, their existing toolchains and in-house processes and libraries probably don't support it, and adjusting the momentum of these is likely to be hard, particularly with accountants looking at the figures.

          Smaller teams are more likely to start fresh and to want to include as many potential markets as possible from the outset, as this is easier to do from the start.

  4. Mike Pellatt

    Other VLIW systems

    Those of us who were around at the time, following comp.arch with utter fascination, remember the Multiflow TRACE - 125 or so sales. The book by Josh Fisher's wife is well worth reading.

    Before that, we also followed the massive parallelism of the Thinking Machines range.

    Arguably, both architectures suffered from compilers not delivering what was needed/promised.

    1. dinsdale54

      Re: Other VLIW systems

      If somebody says they have a new architecture and part of the pitch is 'we can fix problem [X] in the compiler', you should probably run rather than walk away.

      At the couple of places I've worked where there was a compiler team - they were usually very cautious about what was achievable. It was the processor architects with some clever pet project who handwaved away significant problems as something for the compiler.

  5. Sunset

    There are many VLIW systems

    Many, many VLIWs exist (and ship annually.) Some general-purpose-ish examples are the Kalray MPPA (alive) and the Tilera TileGX (less alive.) Of less-general-purpose VLIWs, a large and visible example is the Qualcomm Hexagon, which ships in numbers of billions of cores per year.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: There are many VLIW systems

      [Author here]

      > Many, many VLIWs exist (and ship annually.)

      I will freely admit that I'd never heard of the Kalray, and while I did know about Tilera, I thought they only made massively-multicore MIPS chips. I'd never heard of the TileGX before.

      These would seem to be forms of DSP, though, and not what I'd call a general-purpose processor: something that runs a conventional OS and can be used to power a desktop, laptop or server.

      POWER, SPARC, MIPS, Alpha, ARM, x86 of course, and yes, Crusoe, or MC680x0, and so on... those I'd call general purpose processors. A DSP I would not. Is that unfair or a nonstandard distinction?

      It's not an area I keep up with. From a very quick and cursory search'n'skim, it does indeed seem to be something of a stronghold of VLIW designs. I am amazed, but rather pleased, to discover this.

      1. Sunset

        Re: There are many VLIW systems

        Neither TileGX nor Kalray are really DSPs. Tilera was manycore "general-purpose-ish" - it ran Linux (a whiteboxed RHEL) in one image across the system, mostly for networking applications. They went in normal rackmount systems. I have one. Kalray runs Linux too (though they're still in the process of getting their Linux port mainlined) but can't do a single SMP image across all x hundred cores on the chip - they get sliced into partitions of a dozen or so cores, each with its own Linux image.

        Tilera never made MIPS stuff. TPM decided they were MIPS based on... nothing, as far as I can tell, and then the claim acquired a life of its own. Their ISA was a 32b and later 64b VLIW with a dense 3-instruction 64-bit bundle.
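
        [Ed: the "dense 3-instruction 64-bit bundle" idea is easy to illustrate. Below is a hypothetical Python sketch of packing three instruction slots plus a stop bit into one 64-bit word; the field widths are invented for illustration and are NOT the actual TileGX (or IA-64) encoding.]

```python
# Hypothetical VLIW bundle packing: three 21-bit instruction slots plus a
# 1-bit "stop" marker in a single 64-bit word. Field widths are invented
# for illustration; this is NOT the real TileGX (or IA-64) encoding.
SLOT_BITS = 21
SLOT_MASK = (1 << SLOT_BITS) - 1

def pack_bundle(slot0, slot1, slot2, stop=False):
    """Pack three instruction slots (and an optional stop bit) into 64 bits."""
    for s in (slot0, slot1, slot2):
        assert 0 <= s <= SLOT_MASK, "slot does not fit in 21 bits"
    bundle = slot0 | (slot1 << SLOT_BITS) | (slot2 << 2 * SLOT_BITS)
    if stop:
        bundle |= 1 << 63  # bits 0-62 hold the slots, bit 63 is free
    return bundle

def unpack_bundle(bundle):
    """Recover the three slots and the stop bit from a 64-bit bundle."""
    slots = [(bundle >> (i * SLOT_BITS)) & SLOT_MASK for i in range(3)]
    stop = bool(bundle >> 63)
    return slots, stop
```

        The point of the exercise: the hardware decodes a fixed-size bundle with no dependency checking, so deciding which three operations can safely share a bundle falls entirely on the compiler.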

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: There are many VLIW systems

          FWIW the MIPS claim was widespread at the time. Wikipedia mentions it, and it came up on the Reg as well.

  6. neilo

    This story reminds me of ESR's commentary "The Italic Disaster".

    Very old, and told with a focus on SCO.

    1. Lennart Sorensen

      How do slanted characters become a disaster?

      1. Richard 12 Silver badge

        They fell over

      2. Bill Gray

        How do slanted characters become a disaster?

        Think! about! Yahoo!

  7. Detective Emil

    Once I've got through LotR again …

    … I must cue up HHG. Have one of these for slipping in the references -->

  8. sarusa Silver badge
    Thumb Up

    Branch seems reasonable

    A branch seems like a very reasonable solution. If you still need Linux on your doomed zombie workstation then just branch a distribution where Itanium is still supported and use that forever. It won't get any more updates, unless you care enough to port them, but neither will your hardware.

    And the 'put up or shut up' is fair too - nobody will put that much work into modernizing a dead architecture.

    Meanwhile, everyone else benefits from having tens of thousands of lines of cruft removed from the kernel that no longer have to be modernized for future major kernel upgrades.

    1. containerizer

      Re: Branch seems reasonable

      You don't even need to do any branching. Just use kernel 6.6, which is going to be supported in LTS form for another 3+ years. Then you can branch it.

      But of course branching should not be a problem, as these folks will presumably already have branched their own compilers and distributions all of which dropped support for this arch long before the kernel did ..

  9. Bitsminer Silver badge

    The influence of Itanium

    Aside from nearly killing Intel ("look at that IBM mainframe revenue... let's make one and leave Pentium for the plebeians...")

    SGI put a lot of effort into making NUMA work in the Linux kernel for their Itanium-based Altix systems. They had single systems with 1,024 CPUs to support, and already had form with MIPS and hundreds of CPUs.

    Everyone with a 128-core Epyc running Linux needs to bow to Mountain View for that.

    The compiler writers spent a lot of time thinking about VLIW. In the end, the speculators took over and invested in silicon, not software, and we know how that turned out.

    AMD looked at Intel and said "64-bit Pentium, we can do that", and put Intel on the back foot for a decade.

    Back in the day me and $WORK delivered a few SGI Itanium systems in the 64 to 128 CPU range, running SUSE. The satellites came down before the computers were decommissioned...

    The King is dead, long live the King.

    1. Ilgaz

      Re: The influence of Itanium

      Dave Cutler recently said, in a David Plummer interview, that he was immediately impressed by the concept of AMD64, and that MS started AMD64 development right away.

      Another legend, Knuth, criticized Itanium in a 2001 interview for its complexity and its lack of compatibility with existing software. He stated, "Itanium is a very complex machine, and it's not at all clear that this complexity is going to buy us anything in terms of performance." He also expressed concern that Itanium would not be able to run legacy software, which he considered a significant drawback.

  10. Cruachan

    I only ever heard of Itanium because I got MSDN access through my then job in around 2006. Since then, having worked for more companies than I can count as a consultant and doing small business support, I have still never seen an Itanium processor outside a catalog.

  11. david 12 Silver badge

    I personally despise the architecture

    The only other VLIW machine that the FOSS desk knows reached the market was Transmeta's Crusoe

    That's the ex Transmeta employee talking. Does Linus hate the Itanium because it was different to the Transmeta Crusoe -- or because it was the same as the Transmeta Crusoe :)

    1. Ilgaz

      Remember that Crusoe had great x86 support, achieved in an interesting fashion. It wasn't a bad chip; it was acquired.

      The issue with Transmeta was the gigantic hype created just by Linus himself being there. We checked the Transmeta page, and its page source, daily for any sign of the "miracle chip" coming. The company and Linus did nothing to stoke the hype; other people did. When the chip was finally released, its battery life genuinely impressive, the reaction was "So this is it?". It's reminiscent of the Segway, except that there it was Kamen himself and the Silicon Valley billionaires who hyped it to insane levels.

      On the other hand, Elbrus - a respected Russian outfit - hyped up their crazy-spec x86-compatible chip, which most people thought was a pipe dream. The E2K or something. The chip really does exist; it ended up as a military chip.

  12. khbkhb

    Other VLIW…pre-IA64

    Cydrome Cydra5

    Multiflow Trace 7, 14, 28

    Not microprocessors, but VLIW systems that shipped commercially significantly before IA-64. Indeed, some of the designers of each ended up at HP and Intel, which no doubt helped lead to the tie-up.

  13. Tuto2

    Like always with Intel: just as the first Pentium was designed by a Russian (Vladimir Pentkovski) together with a group from the Russian Academy of Sciences, the Itanium was also based on the Russian VLIW architecture, of which the Russians are the masters. Intel decided to make it a "proprietary" architecture with a twist and call it EPIC, moved the Russians aside, and brought in a group from Hewlett Packard to deliver the new package. Well, it turns out they couldn't deliver the programming and the logic without the Russians, and the thing started to implode on itself (logic programming, and internal speed bumps working against itself) into a mess of slow processing... There are some people who could save it and advance it, but current conditions are not ideal for seeking their help. When everything fails, go to the masters (the teachers) to straighten out your mess; after all, the Russians have at least 60 years of playing with this stuff!

  14. Displacement Activity

    i860

    I did a lot of work on the i860 back in the early 90's - boards, a kernel, and so on. This was at a time when a 40 or 50MHz processor was state-of-the-art. At the time, my view was that it was the first thing that Intel had got right since DRAM, and the 8080 and 8085. The 386 architecture was a disaster, and it only survived thanks to Microsoft.

    The 860 wasn't a general-purpose processor, though - it was all about high-speed floating point, and you had to hand-write your assembler to get the most out of it. It was essentially targeted as a co-processor. It had almost zero competition - the DEC Alpha went nowhere, and Fujitsu's uVP never took off. They also cost me £1000 a shot in the early days, which didn't help. It was, incidentally, "VLIW", because of all the extra bits required to specify how to wire up the FPUs and datapaths.

    I don't think it ever really died, though. I'm pretty sure a lot of the technology ended up in the Pentium. I think it was probably seen internally in Intel as a testbed for the Pentium, but I haven't seen this confirmed anywhere.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: i860

      I think the tech went into the Pentium *Pro*, the P6, not the P5.

  15. Anonymous Coward
    Anonymous Coward

    I remember when I was at Uni, the VLSI professor telling us that Superscalar, VLIW and out of order execution were the next big thing that was going to revolutionise computing.

    Then a few years later I worked for a company that bought a couple of HP Itanium boxes. Both suffered motherboard failures within hours of being switched on. Their engineer came out and discovered that the motherboards were completely different to the spares he'd been given and it took several weeks of messing about to find the right parts. After that fiasco, we lost confidence in it as a solution we could resell to our customers.

    Even with the fancy, expensive compiler, the performance was not noticeably different to our other HP-UX boxes.

  16. Tuto2


    VLIW has long been a Russian programming approach that Intel was keen to implement in the Itanium, but in the end the Russians were pushed aside and the concept was copied. The mistake was bringing in HP to help with the programming (you remember the hated-and-praised HP calculators with Reverse Polish Notation, RPN). Some people praise RPN and some people hate it; I hated it with a passion, since it made no sense to me at all, and in a collaborative programming environment a lot of folks are bound to have problems making sense of Polish-notation algorithms. That put them on a collision course, especially once they twisted the programming to fit a new instruction set called "EPIC", which turned out to be an epic failure... Everything could have turned out right if they had let the original programmers finish the job, but no, they needed to make a new, unique instruction set called "EPIC". The Russians could still salvage it if asked! They like this kind of stuff!
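
    [Ed: as an aside for readers who never touched one of those calculators, Reverse Polish Notation puts each operator after its operands, so "(3 + 4) * 2" becomes "3 4 + 2 *" and no parentheses are needed. A minimal stack-based evaluator - a generic illustration, nothing HP- or Itanium-specific - can be sketched in Python:]

```python
def eval_rpn(tokens):
    """Evaluate an RPN token list, e.g. ['3', '4', '+'] -> 7.0."""
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # the right-hand operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# "(3 + 4) * 2" in RPN:
print(eval_rpn("3 4 + 2 *".split()))  # -> 14.0
```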

  17. Fingers_UK

    HPE SuperDome

    Having spent many years supporting systems utilising several generations of Itanium on HPE SuperDome 2 running HP-UX, I have to defend it a bit.

    We were able to successfully host multiple environments (database counts into the hundreds) on each vPar partition with minimal cores. The system performed well and was extremely stable.

    However... we saw the writing on the wall and re-platformed the applications onto x86-64 and Linux a few years ago. RIP Itanium.

  18. Sparkus

    Sounds to me as though....

    someone bought the last 10,000 Itanium chips on speculation. Without compiler and kernel support, there will be no way to off-load them to the proles.

  19. Mockup1974 Bronze badge

    It's probably easier to do a port of NetBSD to Itanium. Surprisingly, nobody has ever done that.

    1. BinkyTheMagicPaperclip Silver badge

      Look a little harder - NetBSD code for ia64 has been around since 2005. It's a tier 2 port, aka 'in an indeterminate state'. To summarise where it reached, looking at the mailing list:

      It never ran outside the ski emulator

      There's been no notable mailing list traffic since 2016

      There was an offer of a *free* SGI Itanium system (US/Canada only) back in 2015. No responses appear to have been gathered.

      GCC itanium support appeared to be dropped as of 2020.

      There was a thread this May about the removal of EFI support (maintained by Linux, impacts BSD), which boiled down to generally what Liam said: Itanium isn't being used aside from legacy OpenVMS and HP-UX; a small group of people would like it to keep going simply because additional architectures lead to more portable software (not because they actually want to run it); and no-one is prepared to do the work to improve the port.

      It's dead, Jim.

  20. BinkyTheMagicPaperclip Silver badge

    There's *far* more interesting platforms out there, but dropping architecture support is hardly new

    If you have a free moment, it's worth browsing the OpenBSD list of discontinued ports. I was slightly sad when sgi support was dropped, as it's an interesting architecture with some fun custom chips (generally only usable under Irix). The OpenBSD sgi support was 64-bit, even on the O2, which differed from most other platforms. It was generally more usable than sgi Linux (although I understand that has since improved, at least on the Octane); in fact, at the time, NetBSD sgi (32-bit) was more fun to play with.

    However, in reality it took several minutes on boot to do a kernel relink on a lower-end R4000 O2, it was certainly not light on memory usage compared to a legacy OS like Irix, and when I recompiled Xorg it took an entire week!

    Sometimes modern software simply outgrows hardware, and it needs to be left as-is.

    Got free time and money, and want to use an interesting platform? Get an IBM or RaptorCS-based POWER system, and work to improve either the OpenBSD powerpc64 port (which is big-endian - of *course* OpenBSD are going their own way, again) or the other BSD/Linux ports, which are little-endian. It's not a cheap system, but you may get something approximating fairly modern performance[1]

    [1] On little-endian. Everything is now designed little-endian first. Big-endian is a fun check for making portable software, but it suffers hugely for driver support (typically LE-only), JavaScript engines, and other software.
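
    [Ed: the endianness point is easy to demonstrate. A short Python sketch showing how the same 32-bit integer lays out differently in memory under the two byte orders:]

```python
import struct
import sys

value = 0x01020304

# Pack the same 32-bit integer in both byte orders.
big = struct.pack('>I', value)     # big-endian: most significant byte first
little = struct.pack('<I', value)  # little-endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Code that naively reinterprets raw bytes breaks when moved between them:
assert struct.unpack('<I', big)[0] == 0x04030201

# The host's native byte order (little-endian on x86-64 and most shipping ARM):
print(sys.byteorder)
```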

  21. Tuto2


    Nobody can save Itanium now that HP has got its hands on it. They don't play well with the open source community; they are locked into their proprietary Hewlett Packard world and their HP-UX, which nobody uses any more. They never contributed anything to open source, only chipped away at its work to stay alive. Once Intel got them involved with Itanium, the open source community dissipated and progress stopped dead, as if against a brick wall, all of their own doing. They simply didn't have the know-how to implement the long instruction set, or enough programmers available for it. So much for the EPIC instruction set...

  22. paulwratt

    Absolutely Not: Absolutely Wrong


    Firefox for IA-64 Itanium? One small DISTRO still holds out against PLANNED OBSOLESCENCE DELETION!

