'It's dead, Jim': Torvalds marks Intel Itanium processors as orphaned in Linux kernel

The Linux kernel will no longer support Intel Itanium processors following a decision by Linus Torvalds to merge a patch marking the architecture as orphaned. "HPE no longer accepts orders for new Itanium hardware, and Intel stopped accepting orders a year ago," said Torvalds in a comment on the code. "While Intel is still …

  1. trevorde Silver badge

    Gone but not forgotten

    by both people still using it

    1. Sandtitz Silver badge
      Pint

      Re: Gone but not forgotten

      "by both people still using it"

      I think it's down to one now - I haven't seen Matt Bryant here for a looong time now.

      1. Intractable Potsherd Silver badge

        Re: Gone but not forgotten

        Now there's a name I'd put to the back of my mind. Didn't he go the same way as "Eldon"? (I think that's the right name.)

        1. Peter Gathercole Silver badge

          Re: Gone but not forgotten

          It was Eadon.

          Funny, I've just tried to find his comment history, and it looks like it's been expunged from The Reg. comments history. All of the comment trails lead back to a "This post has been deleted by a moderator".

          I remember I had some run-ins with him, but none as memorable as the ones I had with "Kebabert".

          Thinking back, there's a lot of previously frequent commenters who have disappeared. Is the readership of The Register aging that fast, or are people just losing interest?

          It would be interesting to have a chart of the top 100 commenters every year since The Register started keeping stats, and follow up on the people who no longer comment. Maybe we should send the neighbors round to check that they're OK!

          1. Gritzwally Philbin
            Meh

            Re: Gone but not forgotten

            I think lots of folks just have little left to say. God knows, while not a chatterbox, I've taken to lurking and just reading the news and I've been on the site for a rather long time.

            1. ForthIsNotDead

              Re: Gone but not forgotten

              Indeed. This is my 20th year as a reg reader and commentard. Was introduced to it in 2001 IIRC.

          2. Pirate Dave Silver badge
            Pirate

            Re: Gone but not forgotten

            "Is the readership of The Register aging that fast, or are people just losing interest?"

            Things change, popularity wanes, interests trend elsewhere, and AMFM made a few posts that were comprehensible. All strong portents of doom.

            I admit, I sort of lost interest after Lester passed, something just "changed" somehow in the following year or two. But maybe that was me more than El Reg.

    2. Anonymous Coward
      Anonymous Coward

      Re: Gone but not forgotten

      Funny, I must have dreamt all that money I made selling them until the end of last year then?

    3. Michael Wojcik Silver badge

      Re: Gone but not forgotten

      HP-UX on Itanium is still one of our supported platforms. It would be nice to drop it for new releases, since its C++ implementation is woefully out of date.

      Itanium has some other traps for the unwary. Its registers have a not-a-value state (the NaT, "Not a Thing", bit) which can trip up poor code. One I spent some time investigating was an intermittent SIGILL (Illegal Instruction) which eventually turned out to be due to compiling some very old C code without the correct feature macro. That macro enabled ISO C function declarations, so the prototypes weren't being included, which meant that external functions were implicitly given an "int" return type.

      Some of those functions were actually defined as having "void" return type.

      On Itanium, a void function does not move a value to the register used for the return value, since it doesn't return anything. So whatever's in that register stays.

      Meanwhile, the K&R caller doesn't know anything about "void", so it tries to move the value out of the return-value register when the call returns.

      If previous operations have left that register in the not-a-value state, then you'll get a CPU trap, which HP-UX translates to SIGILL (for lack of a more-appropriate signal).

      This one baffled the folks on comp.unix.hp-ux. I didn't figure it out until someone with Itanium knowledge here mentioned this little quirk of the architecture, and it occurred to me to check whether the code in question was being built with C90 features enabled.

      Having a trap state for a register isn't necessarily a bad idea, but the cause was really not obvious (particularly since triggering it was dependent on environmental factors).

    4. Corporate Scum

      Re: Gone but not forgotten

      Just popped in to say "Itanic" one last time for auld lang syne.

      My best to all the old timers! Authors, editors, mods, and fellow commentards. It's been a strange ride. Our vulture is all grown up, but I still remember the days when I wondered if they were selling more tshirts than ads.

  2. Anonymous Coward
    Anonymous Coward

    Itanic industrial mistake

    I've gone through every step of this industrial debacle:

    - mid-90s with HP people announcing proudly they were dropping PA

    - 2000s, with IA64 being an HP-UX-only CPU, and a couple of bozos installing Linux (why would you do that on such expensive boxes anyway?)

    - 2000s again, with HP-UX being utter shite (mostly the storage layer) vs. the competition, and every IA64 CPU being many years behind any x86

    - late 2000s with HP-UX 11iV3 being the first (and only I think) version of HP-UX having a good storage layer

    - the Oracle lawsuit revealing HP had to pay millions to Intel for Itanic life support, making every customer understand why the blip this platform had such insane support costs

    With all the above, surely the platform was doomed. I'm actually surprised they didn't pull the plug like 5 years back! There were probably quite a lot of rich and captive customers here. Probably MPE played a role, as I know at least a couple of companies that were still using it not too long ago.

    1. HPCJohn

      Re: Itanic industrial mistake

      Bozos? You mean like Rolls Royce? And the hundreds of SGI Altix supercomputers?

      These ran SuSE Linux and they weren't constructed in a shed by some wild eyed open source evangelists.

      1. Anonymous Coward
        Anonymous Coward

        Re: Itanic industrial mistake

        Pretty sure the "bozos" remark was meant ironically. The Itanic VLIW arch was great at regular FP64 computations and the Itanic SGI Altix had some limited success, but for apps like databases, SPARC was much better. But SPARC didn't save Sun and Itanic didn't sink Intel and HP, proving that big companies can f* up and survive, while smaller companies walk on a knife's edge despite better tech.

        1. Anonymous Coward
          Anonymous Coward

          Re: Itanic industrial mistake

          > but for apps like databases, SPARC was much better.

          The only way that SPARC got anywhere near the Itanium's database performance was by Oracle refusing to sign off any HP TPC benchmarks after they bought out Sun. Even then it was about 5 years before they managed to catch up with the single-image TPC-C figures. Before Oracle bought Sun they tended to partner with HP on big database deals. Running big Oracle databases is what the vast majority of HP-UX servers did.

          1. This post has been deleted by its author

    2. ssieler

      Re: Itanic industrial mistake

      Re: " Probably MPE played a role, as I know at least a couple of companies that were still using it not too long ago.".

      Thousands of HP 3000s are happily running today (PA-RISC, and probably under a dozen Classics).

      One that I know of has one thousand users log on every day (i.e., at any given time during a normal work day, there are 1,000 logged-in users) ... and that computer was probably made about 20 years ago!

      IIRC, a very preliminary port of MPE/iX from PA-RISC to Itanium had been running in the lab, and might even have been announced at the 2001 Interex conference, just before MPE was cancelled.

      Stan

      1. BurnedOut

        Re: Itanic industrial mistake

        I think you do remember correctly regarding MPE and Itanium, but I cannot remember where I once read about the limited work that was done on the possibility. As it is, there's no doubt that because MPE was never released on Itanium, it had no role in the lifecycle of Itanium in the 21st century. In fact MPE ran so well on PA-RISC that as far as I know it was not released on any servers more powerful than the N-class (or RP7400?) in the early 2000s, and therefore not on the PA-RISC versions of SuperDome.

        It's ironic that HP went on to release OpenVMS on Itanium, after the Compaq merger (Compaq I think had initiated that port) and continued selling that combination, bearing in mind that VMS had been a major competitor for MPE.

        Although VMS is therefore relevant to the lingering on of Itanium, another significant factor is perhaps the fact that no port of HP-UX to x86 or x86-64 has ever been released.

        1. Stoneshop Silver badge

          Re: Itanic industrial mistake

          It's ironic that HP went on to release OpenVMS on Itanium, after the Compaq merger (Compaq I think had initiated that port) and continued selling that combination, bearing in mind that VMS had been a major competitor for MPE.

          But even when you have IA64 VMS running your environment there would be little incentive to port things to HP-UX (or MPE) once you saw HPE loading a gun and opening the barn back door. When you're forced into porting or rewriting anyway, selecting a target environment that's not as tied to a single vendor is probably one of the items on your shopping list. Rather near the top, even. And that's not just because of that single vendor for the hardware and OS, it's also the small pool of ISVs writing software.

          Although VMS is therefore relevant to the lingering on of Itanium,

          To some extent, yes. But HPE had already decided against updating VMS to run on the newer Itanics, so that writing had been on the wall for several years already.

  3. HPCJohn

    SGI Altix also

    Itanium was a great architecture for CFD work and meshing.

    Not only HP machines - SGI Altix were constructed from Itanium processors. NUMA machines which could address huge amounts of memory.

    When a blade was replaced in an SGI Altix, the blade would join the system when the machine was rebooted.

    Of course there were export control regulations - Uncle Sam did not want $nation to make supercomputers by buying up spare blades.

    So when a blade was replaced the SGI engineer had to phone up a number in the USA and be given a code number to type in at boot time.

    Or the blade would not be recognised.

  4. Steve Channell
    Facepalm

    multi-core killed Itanic

    While the AMD64 Opteron killed the market for Itanic, it was the multi-core approach that killed its projected performance advantage, leading to a change in software design. The remaining use cases for very long instruction words (VLIW) were killed by GPGPU.

    It was not just a commercial failure, but an architectural one: good riddance to bad rubbish.

    1. Mage Silver badge
      Unhappy

      Re: multi-core killed Itanic

      64 bit XP for Itanic was very short lived, killed off years before 32 bit x86 XP.

      It was the 2nd 64 bit Windows. The first was a version of NT4.0 for the 64 bit Alpha.

      The demise of DEC and the Alpha wasn't really anything to do with the doomed Itanic. That was a more complex thing, and it was also a great pity that Intel got the DEC StrongARM and that HP got Compaq and DEC.

      Nearly 11 years ago:

      https://forums.theregister.com/forum/all/2010/04/05/microsoft_pulls_plug_itanium/#c_733422

      1. sw guy

        Not the 2nd 64 Windows

        From the ground up, Windows NT was built to be really, really portable.

        Proof: its 4 initial platforms:

        - 32b x86 (nobody had ever thought of a 64b x86 at that time)

        - 64b Alpha

        - 64b MIPS (I got one at work!)

        - PowerPC (I saw references inside the doc., and possibly one of my colleagues used one); I do not remember if it was 32b or 64b

        Itanium came later.

        1. Mark Honman

          Re: Not the 2nd 64 Windows

          Pretty sure PPC was 64b - we had one, running Windows even - before we switched it to AIX.

          1. Peter Gathercole Silver badge

            Re: Not the 2nd 64 Windows

            Although the PPC architecture included 64 bit models, these were an optional part of the feature set, implemented later in the doomed 620 processor, and then the Amazon and Apache Power processors from IBM Rochester for the AS/400 line of systems, and later merged back into the RS64 and Power ranges.

            I saw NT4 running on a prototype IBM PowerPC desktop system in the 'Think' range. It strongly resembled an AIX RS/6000 system called a 40P (model 7020, the predecessor to the long-lived 43P desktop and deskside workstation). It used a PowerPC 601 processor, which was a 32 bit part. My (40P) system was also a prototype, and eventually I got an 'upgrade' kit that turned it into a production spec system (although the system was only available for marketing for a very short period, at least in the UK). This upgrade replaced the entire main board and several other components, and unlike most IBM systems, it was a real bitch to get apart!

            NT running on PowerPC just looked like NT.

        2. Lennart Sorensen

          Re: Not the 2nd 64 Windows

          NT 4 was only ever 32 bit, and that was all that ever ran on the powerpc, mips and alpha. There was 64 bit development work on the alpha, but it was canceled before release, so only itanium got 64 bit windows released initially, to be joined by x86 later, and eventually arm.

          As for being portable, well, maybe for Microsoft code, but it only works on little endian, which certainly prevents some CPU targets from ever running windows. Only the fact that powerpc, mips and arm can run both ways allowed windows to be ported to them, since they always run in little endian mode for windows. Alpha and x86 of course were only ever little endian. Motorola 68k of course would never have had a chance to run windows.

          1. sw guy

            Re: Not the 2nd 64 Windows

            Thanks for the clarification; I assumed kernel code was compiled to the "preferred" register size of the CPU.

            Well, I reckon this is ambiguous for MIPS, but as I remember it (and I may be wrong), the Alpha was kinda 64b-only, with support for 32b.

            I totally agree regarding endianness. Actually, I almost noted it, but didn't, just to keep it short.

            Anyway, the NT kernel was able to run on beasts with very different ways of interfacing with peripherals, for instance, and I never heard my colleagues writing drivers say they had to take care of this.

        3. Dazed and Confused

          Re: Not the 2nd 64 Windows

          It was even ported to HP PA-RISC - ported but not released. HP had to add little-endian support to let it happen.

        4. Daniel von Asmuth
          Windows

          Re: Not the 2nd 64 Windows

          IMHO all the initial NT architectures were 32-bit, little-endian. Microsoft planned a 64-bit version of Windows 2000, but it was cancelled, so Itanium got the first 64-bit Windows, followed by AMD64.

          I suppose x86 (and its 64-bit cousin) are (brain)dead as well.

        5. ssieler

          Re: Not the 2nd 64 Windows

          initial platforms...

          I don't know if it would qualify as initial, but Windows NT ran on HP's PA-RISC ... remember seeing a CD with the OS bits on it. (Our company did a *lot* of PA-RISC work :)

        6. bazza Silver badge

          Re: Not the 2nd 64 Windows

          NT on PPC was 32bit; 64bit PPCs didn't come along until much later

      2. Colin Bull 1
        Mushroom

        Re: multi-core killed Itanic

        My impression is that there was a lot of leverage applied by Intel to kill off the Alpha. That leverage could be inducements for HP to buy other Intel processors cheaply, or backhanders to senior execs.

        The Alpha was superior at that time and had a large following.

        1. StripeyMiata

          Re: multi-core killed Itanic

          We got a brand new, recently launched Compaq AlphaServer ES40 at work to replace a VaxCluster 4000. As planning/doing the migration would take a few weeks, it was sitting doing nothing, so as an experiment I ran Seti@Home for OpenVMS on it for benchmarking. It was the 7th fastest computer in the world at that time.

          1. Yet Another Anonymous coward Silver badge

            Re: multi-core killed Itanic

            At some point DEC were selling Alphas with NT for 1/2 the price they wanted to charge for the same Alpha running VMS. So we bought loads of them and installed Linux.

            The only tricky part IIRC was that the screen was single frequency and so you had to login with a serial terminal to get the xconfig right before you could see anything.

        2. Anonymous Coward
          Anonymous Coward

          Re: multi-core killed Itanic

          My impression is that there was a lot of leverage applied by Intel to kill off the Alpha. That leverage could be inducements for HP to buy other Intel processors cheaply, or backhanders to senior execs.

          Except it wasn't HP which killed off Alpha; the decision and announcement were made by Compaq well before the merger talks with HP.

          A long way prior to this, back when DEC were still DEC and had great plans for Alpha's future.

          So a long way back.

          HP bought container loads of Intel processors, but not the x86 kind. These processors were destined to end up in printers, which HP sold a lot of; therefore there were a lot of processors involved here.

          Intel were killing off the i860 line, so the printers needed to find a new processor; the x86 line wasn't suitable, it would have made the printers far too expensive. One possible solution was a level 0 PA processor, but HP weren't going to be in the fab business for this. So Intel and HP started talking, and HP mentioned the PA3 project as being their intended next generation. PA2 was the 64-bit version - oh, and the level is nothing to do with the version, BTW - and the PA3 VLIW project had kicked off at about the same time as the PA2 project but was a much longer-term venture.

          Intel wanted a way forward: RISC was kicking its butt from a performance perspective, and there was this damn AMD problem with someone else being allowed to make x86 processors. What HP were investigating looked like the answer to their problems, and Intel taking on much of the cost of development and all the fab'ing side was the answer to HP's problems. So they got into bed and Itanium happened as a joint venture. But the first generation (not designed by HP) was massively late to market; in fact at one point it looked like the HP-designed Mk2 was going to overtake it.

          I'm less aware of the details on the DEC/Alpha side, but my understanding was that there was a legal dispute between DEC and Intel, and to make this go away Intel bought a bunch of assets off a cash-strapped DEC; this included the Alpha development team. People don't like being sold, so they buggered off en masse to AMD and were responsible for the AMD 64-bit x86 processors.

          Intel had never intended there to be a 64-bit x86; they wanted x86 to die so they wouldn't have to share the market place.

          Most manufacturers eventually got on the Itanium bandwagon, at least for a while. The lateness of Merced killed a lot of this.

          1. NeilPost Silver badge

            Re: multi-core killed Itanic

            The irony of Intel getting StrongARM/XScale, pissing it away, and eventually flogging it to Marvell for a pittance is not lost on me - esp. as Intel still don't really have a mobility strategy.

            The rise of Qualcomm and Apple silicon ties back to this as XScale dominated in Compaq/HP iPaq, Dell Axim and much else.

            1. Yet Another Anonymous coward Silver badge

              Re: multi-core killed Itanic

              Almost as if being a $Bn monopoly supplier for 20 years harms your ability to pivot into new areas which will destroy your existing business.

              1. NeilPost Silver badge

                Re: multi-core killed Itanic

                Yes but...

                - Intel inherited a tidy business selling XScale/StrongARM partnered with Tier 1 customers Dell, HP/Compaq, Palm, Sharp, Blackberry, Kindle

                - Intel still barely have any mass market mobile penetration - phones, tablets etc. 15 years after flogging it to focus on x32/x64 ‘mobile’.

                - XScale was the successor to its own RISC i860/960.

    2. Charlie Clark Silver badge
      Stop

      Re: multi-core killed Itanic

      As soon as it was clear that Intel was not really behind it, it had little chance in the x86 world because it was shit at running x86 code. And, as long as Intel kept producing x86 chips, they had little incentive to favour another architecture.

      x86_64 meant that people could have their c86k and 64t it without feeling the need to replace every bit of software on their systems. That was always going to be a big ask in the pile it high sell it cheap world of x86.

  5. gnasher729 Silver badge

    Back in the day I read up quite a bit about the Itanium architecture. They had a lot of excellent ideas. There was one problem back then: There were _too many_ excellent ideas; creating a compiler making use of all of them was very, very hard, and making a JIT compiler making use of any of them was even harder.

    There is a bigger problem right now: all their great ideas have better solutions in modern chips. Itanium could read three instructions in a bundle; your iPhone processor can read 7 to 9 depending on the model. Itanium had some good tricks to avoid the need for out-of-order processing; your iPhone processor can handle a few hundred out-of-order instructions. Itanium had some pretty poor hardware to execute x86 code, much slower than an Intel processor, and much slower than a current ARM processor with the right software.

    1. Paul Crawford Silver badge

      TI also made a DSP using the same VLIW style of architecture. Its performance on typical compiled C code was rubbish compared to the advertised performance, as that assumed all 8 logic/arithmetic units could be used in parallel.

      The compiler very rarely achieved even 2 parallel instructions, and decision branches dropped its pipeline killing throughput. Unless you were willing to waste your precious life learning the assembler and all bizarre limitations of what could work with what, and how to structure your algorithms to avoid many decisions (e.g. conditional loop break), it was just not worth it.

  6. Grease Monkey Silver badge

    Having left the whole server arena about ten years ago I thought Itanium would have died a looooong time back so I was quite surprised to learn it was still available. Especially when the Xeon has always done a decent job.

    I wonder how much of a distraction continuing Itanium development for so long was for Intel.

    1. Mage Silver badge
      Coat

      Re: left the whole server arena about ten years ago

      Technically available doesn't mean alive. It was mostly dead nearly 12 years ago and no-one could persuade Miracle Max to resuscitate it.

      1. Colin Bull 1

        Re: left the whole server arena about ten years ago

        The people that were buying this type of processor were stipulating that it had to be commercially available and supported for 10 years. And these companies are not ones you would mess with - even if you were Intel.

    2. Anonymous Coward
      Anonymous Coward

      Not so much a distraction for Intel as it was for HP Enterprise what with the HP California crowd - such as Whitman etc - waiting in vain for what she even termed "proper computing" to return, which of course it never did. Meanwhile ProLiant - which by implication had to be "improper computing" - was paying all the bills ....

      1. Anonymous Coward
        Anonymous Coward

        Actually, I meant Livermore rather than Whitman.

  7. Binraider Silver badge

    Always been curious to try the steaming brick pile out. Second-hand bits never really got to throwaway, don't-care prices, unfortunately.

    I'd be even more interested in trying out recent incarnations of SPARC, but ludicrous pounds.

    1. druck Silver badge

      We had a few different Itanium variants in a test lab along with everything else under the sun (groan). They were all stupefyingly slow compared to contemporary SPARCs and POWERs, and of course left for dead by Xeons.

  8. MacroRodent Silver badge

    Not even in NetBSD

    Out of interest, I checked if NetBSD still supports Itanium, and surprisingly even it has dropped it, or never had it (the Tier II table has "none" for the latest release for IA64). Curious, because NetBSD supports many platforms you can find only in a museum, or a fleamarket with a particularly slow turnover...

    1. kaszak696

      Re: Not even in NetBSD

      As far as I know, NetBSD never had a functional ia64 port that could boot on real hardware with any sort of success. I don't know why it's even listed as Tier II.

  9. Blackjack Silver badge

    That was fast

    Considering how the Linux kernel still supports things that are way older and deader.

    1. b0llchit Silver badge
      Coat

      Re: That was fast

      Well, IA64 has been a dead ghost walking since its first cores hit the market. So, calling it only recently deceased may be an underestimate of how dead the platform is.

      The ghost has been haunting the house for quite some time. So killing it in the Linux kernel can be considered mercy to all other architectures.

    2. Dazed and Confused

      Re: That was fast

      It probably didn't help that there were no drivers for the newer systems; it was only the older, pre-blade HP Itanium systems which could run Linux (and I guess Windows). The SuperDome2 and the blades couldn't run Linux as they never bothered to do the drivers for the chipsets.

      Wasn't there a story here a couple of years back about a new Linux kernel release that had added a new feature for Linux on PA?

      1. Yet Another Anonymous coward Silver badge

        Re: That was fast

        Also not many enthusiasts running Itanium for fun.

        1. Damage

          Re: That was fast

          I have about a dozen blades from a superdome. Also odds and ends like the PSU. Couldn't find anyone who wanted them so they are in temporary storage.

          The superdome itself will be converted into an impressive looking beer fridge when time permits.

          1. Dazed and Confused

            Re: That was fast

            Ha, I once bought an old 827 server as it came in a nice rack and I wanted the rack. The rack with the server was much cheaper on eBay than a rack without one. I soon found a volunteer who wanted a free server.

            1. BurnedOut

              Re: That was fast

              Those were tough racks and tough servers. I was once involved with the commissioning of an HP9000/827 mounted in its HP 1.6m rack, which had been allowed to fall over on its side in the car park during unloading. After it had been stood up again, and moved to the computer room, it looked wonky, with considerable denting to the side panels, but everything worked fine.

    3. J27

      Re: That was fast

      But with fewer users? I think that's the important metric here. Almost no one ever used it and it's been dead for 10 years.

  10. Pascal Monett Silver badge

    "killed off competing efforts such as DEC Alpha"

    And that is a bloody shame.

    1. Bitsminer Bronze badge

      Re: "killed off competing efforts such as DEC Alpha"

      DEC killed off Alpha, nobody else did.

      DEC was married to Big Iron, selling extremely expensive machines built around $2,000 chips. So, for instance, when they did an "Alpha workstation" (Multia) it was crippled with few IO options and small and unexpandable RAM. Because said workstation could have kicked the big iron to the sidewalk, if only it had the IO. A classic case of the new tech eating the old tech's lunch; Sun and others took advantage.

      Later, the Alpha 400 workstation came out; $WORK used it for an NFS server for a few years (40 SCSI disks!). I inherited that one, ran Linux and OpenBSD; it was still on the US Export Control List at the time.

      1. Anonymous Coward
        Anonymous Coward

        Re: "killed off competing efforts such as DEC Alpha"

        " when they did an "Alpha workstation" (Multia) it was crippled with few IO options and small and unexpandable RAM."

        Fake news, fact check needed. See e.g.

        https://en.wikipedia.org/wiki/DEC_Multia

        http://www.brouhaha.com/~eric/computers/udb.html

        Multia (aka "Universal Desktop Box") was never a workstation. It was initially aimed as a "thin client" with local processing in a tiny box (what HPQ might nowadays call a USFF box).

        The Alpha Multia was based around what could later almost be classed an SoC, the Alpha 21066/21068 chip. Later on there was a Pentium version of the Multia too.

        Later in the life of actual Alpha workstations, the Personal Workstation family were actual workstations, a shared desktop enclosure available with either x86 or Alpha processor daughtercards depending on target market. These were based around the "industry standard" (but not widely used) NLX motherboard form factors.

        https://en.wikipedia.org/wiki/Digital_Personal_Workstation (Alpha and x86 flavours)

        https://www.youtube.com/watch?v=g0Qc5RDTQmQ (Alpha flavour)

        Hth.

  11. GrumpenKraut Silver badge
    Thumb Down

    Not satisfied

    I used an Itanium as a bottle opener once. Bad idea, that thing had *very* sharp edges and my hand was bleeding quite a bit. Cannot recommend.

    On a more serious note: I tried it for integer computations and it sucked, as in elephants through a straw. The CPU was rumored to be at 70k dollars, by the way.

    There were seriously large pieces of silicon inside, determined using a Dremel on the bottle opener mentioned above. Sadly I left it on my table and the next day it was gone; a cleaner had thrown it away.

    1. A.P. Veening Silver badge

      Re: Not satisfied

      I used an Itanium as a bottle opener once. Bad idea, that thing had *very* sharp edges and my hand was bleeding quite a bit.

      So you can claim first hand experience on the origin of "bleeding edge".

      1. GrumpenKraut Silver badge

        Re: Not satisfied

        > "bleeding edge".

        LOL, yeah. Sadly also hurt like fuck.

  12. JohnSheeran

    It's sad that actual 64-bit processor architectures haven't taken off in the mainstream. Between Itanium, DEC Alpha, SPARC, Power (though SPARC & POWER9 are still around) and I'm sure there are others, the market had a future that could have been ramped up, much like the current (polish a turd) x86 architecture (which is actually IA-32 for Intel and RISC64 for AMD via NexGen). Now with x86 we are increasing actual performance only very incrementally, and we could have seen a bigger leap by now. Of course, that requires everyone to adopt 64-bit processing, and a lot of code would have to be redone no matter where you run it. It's not just 64-bit memory addressing, which didn't necessarily need such a major overhaul.

    Oh well, such is life.

    1. Lennart Sorensen

      Unfortunately it is much easier to switch over a few programs at a time to 64-bit x86 while keeping your existing 32-bit code running using x86-64 than it is to migrate everything to a new platform all at once.

      At least AMD did a better job of cleaning up a bit while adding 64-bit than Intel has ever done in the past when extending the x86 architecture. x87 had to die, and adding more registers was desperately needed. AMD did a very good job of polishing Intel's turd.

    2. jotheberlock

      I'm curious why you think x86-64 isn't an 'actual' 64 bit processor. It's not just 64 bit addressing, it has 64 bit GPRs and ALUs too, just like MIPS-64, Sparc 64, PowerPC 64 or indeed AArch64.

      And as, indeed, AArch64 shows quite clearly, the ISA isn't the main constraint on increasing performance; we would not all be suddenly using 10GHz CPUs if MIPS had won out over x86.

      1. JohnSheeran

        Thumbs up for the excellent technical discussion.

        I'm not implying that clock cycles have anything to do with this. In fact, I'm suggesting exactly the opposite. The core processing on x86 is still 32-bit which is why everyone in the world didn't have to recode their app in order to work at all on x86-64.

        1. doublelayer Silver badge

          "The core processing on x86 is still 32-bit which is why everyone in the world didn't have to recode their app in order to work at all on x86-64."

          That's not really how I'd phrase it. AMD64 can run 32-bit X86 code natively, but that doesn't make it 32-bit. If you compile for AMD64, you use 64-bit capable instructions, which this has. It's not just a 32-bit processor with larger addressing. So I'm not sure what you're trying to say with the part I quoted. I have two ideas:

          1. "AMD64 is 32-bit even when you compile to its ISA natively": That's incorrect, but I don't think that's what you're saying.

          2. "It would be better if the transition to 64-bit required everyone to recompile for it so we got the benefits faster": I get the idea, but I don't know that it's been a major problem. We've had 64-bit desktops and laptops for over a decade now, and you can pretty much guarantee that most users today have a 64-bit OS and most of the performance-sensitive programs they run on it are also 64-bit. The occasional old or small program still runs under X86, but that's only a problem if it will actually benefit the user by using the faster instructions or more memory. Quite frequently, such programs don't need to be that fast.

          For that matter, we also have ARM64, which is like AMD64 in that it can coexist with previous versions of the ISA. Still, most mobile devices that are powerful enough (phones, tablets, not the SOC running the embedded devices), are using a 64-bit OS and apps compiled natively to it. ARM is even planning to drop 32-bit support in their next range of high-end cores because so many people never use the 32-bit capabilities.

          Meanwhile, the ability to run stuff without having to recompile it means people will adopt 64-bit hardware faster. When the software supporting it comes out later, they already have the ability to run it, and having the hardware themselves, they can also compile and test their stuff to run under it as well. The overlay method makes some sense given those benefits.

          1. JohnSheeran

            OK, I'll admit that I had not revisited this in some time, but a little research indicated that I was still on target, though I can see why people would vote it down.

            X86-64 can operate in Long Mode and potentially support 64-bit operands. However, it defaults to 32-bit operands.

            I'll admit that I am having a hard time finding any confirming data, but the general feeling is that there are many apps using the 64-bit virtual address space but not as many using 64-bit operands. I admit that I may be wrong about that, however.
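            [The operand-size default being discussed is visible in the instruction encodings themselves. A minimal sketch, assuming the standard Intel MOV encodings (B8+rd for `mov r32, imm32`; REX.W + C7 /0 for `mov r64, imm32`), with NASM-style mnemonics in the comments:]

            ```python
            # Hand-assembled x86-64 machine code bytes.
            # In long mode the default operand size is still 32 bits;
            # a 64-bit operand requires the REX.W prefix byte (0x48 here).
            mov_eax_1 = bytes([0xB8, 0x01, 0x00, 0x00, 0x00])              # mov eax, 1
            mov_rax_1 = bytes([0x48, 0xC7, 0xC0, 0x01, 0x00, 0x00, 0x00])  # mov rax, 1

            assert mov_rax_1[0] == 0x48        # REX.W prefix opts in to 64-bit
            print(len(mov_eax_1), len(mov_rax_1))  # 5 7 - the 64-bit form costs extra bytes
            ```

            [So 32-bit operations stay the encoding-cheap default even in compiled 64-bit code, which is consistent with the point above about operand widths versus address widths.]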

    3. J27

      ARM64 is growing in popularity, that's a pretty modern architecture free of cruft. It's not even backwards compatible with ARM32.

      1. Paul Crawford Silver badge

        Yes, but most use is for phones, etc., which have very little "legacy" or bespoke software for which customers can't buy, or possibly can't afford to buy, replacements.

        The server market on Linux is a bit better, as it has long been the case that you wrote code assuming it should compile and run on multiple platforms. Not that everyone did, of course, but it's far less of a monoculture than Windows / x86 became.

  13. Anonymous Coward
    Anonymous Coward

    Enjoy?

    “enjoy a supported version of Unix, HP-UX 11, until 31 December 2025.”

    Tolerate... yes for certain workloads. Enjoy? No. Not a chance!

    1. phuzz Silver badge

      Re: Enjoy?

      Well, if you prefer, there's always Windows XP 64-bit ;)

      (I wonder how many people actually ran that)

  14. wleslie

    Going to miss it

    I'm still irritated that Intel killed it by pricing it out of the hands of developers so that it could play in the high-margin workstation space. My McKinley (14 years old; not the OoO Poulson that executes 12 instructions per clock per core) absolutely destroyed on memory-heavy integer workloads if the instructions were lined up just right, until it died a year ago. I'm hoping someone "does an M1" at some point and shows that, outside of interpreted workloads, it was really just Intel failing to care rather than a real performance issue.

  15. keitai

    All the technical merits or problems with Itanic are kind of irrelevant to why it failed. The key problem with Itanium was simply that it was way too expensive! Every revolutionary platform that has broken through was first ridiculed as cheap crap - which eventually gained performance and quality with age. "Here, this Itanium platform is just as expensive and proprietary as your old UNIX servers, migrate to them" is only an attractive sales pitch to CIOs who spend too much time golfing with their suppliers. Meanwhile their engineers are already playing with x86 machines running "crappy Linux" that somehow appears reliable enough and saves a lot of money...

  16. Steve Chalmers

    The processor chess game was won long before...

    Processor instruction set architectures have always been a critical mass game, not a technical merit game.

    The x86 won even though its instruction set and memory model were a piece of shit (I designed with it in 1978). It won due to the IBM PC design-in, which in turn got the PC-compatible design-in, which in turn got the Compaq server line design-in.

    In the chess game, Intel got one move from 32 to 64 bits (due to available industry software design resources, the same constraint Windows got to first, sending IBM's subsequent OS/2 into oblivion). Presumably because the x86 memory model and instruction set were so clumsy, Intel spent its move on a fresh sheet of paper architecture (I knew a lot of the Itanium architects at HP but was not one myself). But AMD almost immediately checkmated Intel by pushing an upward compatible 64 bit extension of x86 -- a much easier compatibility story for customers and software developers -- and Intel had no choice but to respond by following. At the instant Intel made the decision to respond to AMD, Itanium predictably had no path to critical mass, for business reasons.

    Linus Torvalds was very kind to wait until after Bill Worley's death to make this final decision.

  17. fishman

    Itanic

    The article author must be relatively new to the Register - it was called "Itanic" by the Reg authors back in the day.

    And what's with the removal of the "Biting The Hand That Feeds IT" slogan?

    This place is heading to Hades.

    And stay off my lawn!

  18. Henry Wertz 1 Gold badge

    I wonder how many they *really* sold?

    First, a comment on Intel making a huge mistake -- sales on this thing were a disaster, and that aspect was a real disaster for Intel. However (I don't know if Intel was this smart intentionally or it's just how it worked out), this eliminated (SGI) MIPS, (HP) PA-RISC and (originally Digital, but HP by then) Alpha, and temporarily Sun SPARC (it ultimately came back, but Sun mulled going to Itanium and stopped SPARC development for a while). This left only IBM Power (IBM made it clear at the time they intended to keep making their own CPU line). PA-RISC, Alpha, and SPARC all cleaned Intel's clock performance-wise, but after those vendors stopped development for several years to try Itanium, their several-year-old designs could not keep up with some Xeons, so they ended up using Xeons.

    Second... I wonder how many they *really* sold. I'm not claiming the (already quite low) sales figures were fabricated; I'm sure they were sold at some price. But, for example, I remember at the U of Iowa here, a department got an HP (Superdome, I think) with Itaniums. They commented that at the time, due to the compiler not handling the VLIW very well, their several-year-old PA-RISC HP system handily outran it. But they were happy to take the Itanium system -- in HP's zeal to show sales, the supposedly $250,000+ Itanium system ended up costing the department something like $1,000. I wonder how many of these were sold at anything like a regular price versus at an extreme discount like this?

    Torvalds' comment may be right -- it seems extraordinary to drop support for a CPU that is still in production, but HP has gone to extraordinary lengths throughout the life of this CPU to keep support going, to show sales at any cost, and so on, well past the point where it was clear it was not going to pay off. Sales of this CPU were so low even back in the 2000s, when they were at their highest, that present-day sales could well be actually 0.

  19. GSTZ

    Itanium was often misunderstood ...

    ... which soon led to misery, despite some quite promising points. And of course it did not help that the initial Itanium release had significant flaws, quickly leading to the deadly "Itanic" nickname.

    The main problem was that Intel communicated poorly, so Itanium was widely misunderstood in the market. Many people dismissed it simply because Itanium did not reach the same shipment numbers as x86 chips. Such people would also not understand why trucks are more costly and somewhat slower than cars, yet make very good sense in many cases.

    Many programmers did not fully understand the implications of Itanium's EPIC architecture and the need to design and code differently than on x86. Not willing to adapt, they got less-than-ideal results and put the blame on the chip - not on their own ignorance.

    In a wider context, we do not find much big iron in IT any more. Per the above analogy, trucks have become very rare, and we use way too many cars to get things moving. Not very efficient, and pretty expensive in total (although the individual boxes are indeed cheap). Too many drivers are needed, and IT budgets did not shrink.

    Back to Itanium - yes, this relatively modern architecture is now dead, while we still use a 40+ year old architecture despite all its shortcomings. Is this a good idea?

    Just a small example - Itanium is not affected by Meltdown, Spectre and other speculative processing vulnerabilities.

  20. Dominic Sweetman

    Poppadom

    Itanic was doomed by the time it was announced. The idea of VLIW was to increase parallel execution, but out-of-order CPU implementations exploited the same opportunities much better. Itanium was conceived at a time when out-of-order was first being contemplated and fought, and found to be terrifyingly difficult; I got the impression that early OOO attempts like the MIPS 8000 burned out half a generation of brilliant engineers... But as often happens, people worked it out and suddenly it all seemed possible.

    The conspiracy theorist in me suspected that the smart people at Intel were not really blind to the obvious flaws, but saw it as a great way to divert corporates who might otherwise turn to RISC architectures, at least until Intel could make something which worked (as it turned out, until AMD could make something that worked). But it might have been that the decision makers got caught up in their own hype.

    At least the Register called it pretty early...


Biting the hand that feeds IT © 1998–2022