The Register just found 300-odd Itanium CPUs on eBay

Intel has stopped shipping the Itanium processor. In January 2019, Chipzilla issued an advisory [PDF] warning that last orders for the CPU must be lodged by January 30, 2020, and that final shipments would head out the door on July 29, 2021. Which was yesterday. So concludes an odd story that started in the age of the …

  1. bazza Silver badge

    One of the reasons some people liked Itanic was because it had a fused multiply-accumulate instruction in its vector core, which is the building block for a lot of signal processing routines and a fair few image processing filters. x86/x64 didn't (PowerPC did).

    That kept Itanium ahead, then Intel finally added the instruction to Xeons in about 2011, after which there was zero reason for Itanium.

    1. Dazed and Confused

      It was a feature of PA-RISC, and Itanium started life as PA3.

      The instruction was heavily used in graphics too, and at one point HP's top graphics card for their workstations had 6 PA processors, albeit without the memory management bits. Meanwhile the workstation itself would only have 1 PA processor.

      1. bazza Silver badge

        Indeed yes. I often thought back then that Intel weren't putting FMA into x86 simply to make Itanium look better than it really was.

        They did this in other aspects of x86 too. In the Nehalem architecture they added a bunch of maths registers that couldn't be accessed from 32-bit code. This made 32-bit code look a lot slower than 64-bit code, even though those registers could have been added to the 32-bit ISA without breaking existing 32-bit code. It's all been a bit artificial. OK, so no surprises there, but play games like that too much and someone like AMD can come along and, overnight, make one look foolish...

  2. Anonymous Coward
    Anonymous Coward

    Abandonware

    "'The Register just found 300-plus Itanium CPUs on eBay'. It's true! Some of the ads – like this one depicting a brass rhinoceros – may not be entirely convincing. However, plenty of others depict working servers, CPUs, or other components an Itanium user may find handy."

    You have now completed your journey to Abandonware, IA64.

    What's next? An IA64 emulator and dumped images of popular IA64 SW builds?

    1. Sgt_Oddball
      Holmes

      Re: Abandonware

      I feel like I should see if I can find a rack mount since I've got space in the basement wooden rack for more servers...

      Bring it out every now and again and try to do something even remotely useful with it.

      *checks eBay.....*

      On second thoughts let's not. Especially since there's only blade servers available for sane money. The only one I could get home and run is £2,400 (free shipping, bonus!) so sod that for a game of soldiers.

      1. Joe Montana

        Re: Abandonware

        I have a couple at home, bought a few years ago when linux distros and microsoft were still providing token support for it.

        These machines are basically worthless now, but you have people who will continuously relist them on ebay for 50% of what they cost when new and wonder why they never sell.

        A lot of companies will just junk the machines as scrap, or occasionally you'll find someone who wants to get rid of a whole pallet load of them for the cost of shipping.

      2. Binraider Silver badge

        Re: Abandonware

        I've always wanted to try out the steaming brick, but parts never got to chuck-away pricing territory to warrant it.

      3. Alsibbo

        Re: Abandonware

        I scored an HP Superdome 2 (16-socket quad-core, 256GB RAM) and a few IOX enclosures for £500 on eBay last year :) keep your eyes out. Can't be many people with one of those by the telly :)

  3. John Riddoch

    Optimised in compiler

    I seem to recall that part of the Itanium design was that the compiler would optimise the code rather than the CPU trying to figure it out at runtime, with all the out of order execution, branch prediction etc silicon that makes other CPUs more performant. To me, at the time, that sounded like a good idea; spend extra time at compile time to get efficient code rather than letting the silicon figure it out. In retrospect, it should have been obvious that there were several flaws here:

    - It requires a good compiler; they were lacking at the time.

    - It requires proper coding to maximise the potential of the chip. A comment in the article highlights that it was difficult to code for.

    - Programmer time is expensive, more so than faster chips from Sun, IBM or even x86.

    - CPU designs move over time; code optimised for CPU x will probably not work quite the same way on CPU y.

    Some good ideas in the chip, but the market moved before it was ready and it was practically obsolete by the time it shipped. AMD showed that you could have 64 bit CPUs which would still run legacy 32 bit code, so you didn't need to ditch everything to get 64 bit capabilities.

    1. Dazed and Confused

      Re: Optimised in compiler

      > AMD showed that you could have 64 bit CPUs which would still run legacy 32 bit code, so you didn't need to ditch everything to get 64 bit capabilities.

      This is perhaps the biggest issue.

      When the main part of DEC's Alpha dev team objected to being sold to Intel, they upped sticks and moved en masse to AMD, with the result that they brought out the AMD64 architecture. It could run the legacy x86 code, so it had a massive marketplace.

      Intel had wanted x86 to die out; other people could make it. They'd hoped that IA64 would corner the market, giving them a monopoly. There was never supposed to be an x86-64. But AMD and the Alpha design team forced their hand.

      In the end they proved that it didn't matter how crap the instruction set was, what mattered was the market size as this controls the investment in research.

      Intel had wanted to use that to drive Itanium, and this is why just about everyone in the business had signed up for Itanium initially. The expectation was that all the money was going to be thrown at IA64.

      There were other issues too, of course. The initial version of the chip, Merced, which was designed at Intel, was years late. The mark 2 processor, McKinley, which was designed at HP, nearly overtook it, as was detailed in stories here on El Reg at the time. Merced ended up being little more than a developers' platform.

      But then AMD64 arrived, and so many developers went off in that direction and took the marketplace, and therefore the money, with them. It doesn't matter how good your HW is; without SW you can't sell any of it. I used to fill show stands with eager people demoing Unix workstations, but when the buyers realised there were no apps to do what they wanted they wandered off.

      AMD64 allowed large flat memory models and so killed off the Unix workstation marketplace in the blink of an eye. Turns out it only existed because PCs had not been able to hold enough RAM to do many tasks. Linux could run most of the Unix apps with little or no tinkering. Lots of the dev teams for Unix apps already had their code running on Linux anyway, as it was a free "yet another build platform" on which they could test their portability. Free to the extent that it was often run on HW destined for the skip because it wouldn't run the latest version of Word on some admin's (not sysadmin's) desk.

      1. ChrisC Silver badge

        Re: Optimised in compiler

        "In the end they proved that it didn't matter how crap the instruction set was, what mattered was the market size"

        Oh, I think Intel were well aware of that lesson long before this - from a technical perspective, x86 should have been dead in the water next to the elegance of the 68k architecture, but the marketshare dominance of "Intel Inside" PCs more than compensated for its technical limitations. Alas.

        1. Dazed and Confused

          Re: Optimised in compiler

          Intel were well aware of that aspect, but a 68000 system made a contemporaneous x86 system look like an abacus with rusty rails. This was the other reason they were aiming to get off x86. They knew they were losing the performance battle with the RISC processors.

          But the Alpha design team showed that the right designers could make an old-fashioned instruction set perform well. After that, so much time, effort and, basically, cash has been spent on the x86 designs that it has made up for the inherent weaknesses.

          1. Michael Wojcik Silver badge

            Re: Optimised in compiler

            To be fair, it wasn't just the Alpha team. NexGen showed it was feasible to ship an x86 CPU that decoded the CISC stream into RISC instructions to speed up the ALU, for example – and more importantly, showed Intel that if they didn't do it, someone else would. A lesson they'd have to learn all over again when AMD started shipping x64 CPUs.

            (AMD also acquired NexGen, of course. And eventually some of the stuff from Cyrix after its acquisition by National Semi.)

            1. Anonymous Coward
              Anonymous Coward

              Re: Optimised in compiler

              Arguably there aren't any mainstream CISC processors today. They're all basically RISC processors under the hood with a hardware abstraction layer sitting over the top of it.

              1. Scene it all

                Re: Optimised in compiler

                Even the various IBM 360 processors in the 1960s were really RISC hardware microprogrammed to emulate the "IBM 360" instruction set. Different technologies used to design the hardware in various models allowed for different cost/performance across the product line.

                1. Anonymous Coward
                  Anonymous Coward

                  Re: Optimised in compiler

                  And AMD were already masters of that - the 2900 bit-slice family was around before Intel got off 8 bits.

                  And the 68000 was the worthy successor of the LSI-11. The PDP-11 is still one of the most elegant general-purpose CPUs.

        2. Robert Sneddon

          Compatibility

          from a technical perspective, x86 should have been dead in the water next to the elegance of the 68k architecture,

          The 8086 architecture allowed for backwards compatibility in both hardware and software to its predecessor 8080 -- the system bus could use existing 8080-family chips and a subset of the register and addressing modes matched the 8080's internals making it easy(-ish) to rewrite existing 8080 software and get it to work on 8086 hardware.

          The 68000 was certainly an elegant clean-sheet design but it had no support chips for a long time after the first CPUs came out. The system bus wasn't directly compatible with any of the 68xx support chips or the 8080-family chips so it required bodges and/or lots of TTL around it to make it work at all. Any software for the 68k platform needed to be rewritten from the ground up rather than 'simply' being refactored.

      2. katrinab Silver badge
        Gimp

        Re: Optimised in compiler

        Or, it takes someone like Apple to successfully migrate to a new CPU architecture. They've managed it 3 times.

        If Microsoft do manage to switch over to ARM, and get the rest of the developer base to move with them, it will most likely be due to lots of people wanting to run Windows in Mac virtual machines.

        1. Anonymous Coward
          Anonymous Coward

          Re: Optimised in compiler

          > Or, it takes someone like Apple to successfully migrate to a new CPU architecture. They've managed it 3 times.

          Or HP.

          They started HP-UX on the Focus processor, then they introduced it on 68xxx. These ran in parallel.

          Then they had PA-RISC, which again ran in parallel for a while.

          Then they moved it to IA64 (OK, so they shafted the team that did the main part of this port).

          Unlike Apple, they designed three of those four themselves from scratch. I know Apple design their Arm stuff, but a lot of the technology is licensed. HP even fabbed the Focus and the early PA chips.

          I guess the dig was at MS. Windows has proved an issue to port to other architectures. When NT looked promising on non-x86 platforms, it proved easier to make other CPUs run little-endian than to make NT run on a big-endian system.

          Apple have the advantage with the OS that it's borrowed from BSD and that was written to be portable.

          1. Michael Wojcik Silver badge

            Re: Optimised in compiler

            IBM moved the AS/400 from a proprietary CISC architecture (IMPI) to POWER around the same time Apple switched to PPC. Many binaries (those that were "observable", i.e. included debug information) didn't even need to be recompiled, because the '400 used a binary format (TIMI) that was compiled to machine code at load time.

            And, of course, we must have the obligatory mentions of various portable-software systems such as UCSD p-code, Java bytecode, and OSF's ANDF (meant to be converted to native code at installation time). Not to mention Micro Focus's own INT format, which came after UCSD p-code but preceded the other two.

          2. dedmonst

            Re: Optimised in compiler

            Or how about OpenVMS? That went VAX->Alpha->IA64 and is now going to x86:

            https://vmssoftware.com/about/openvmsv9-1/

            1. Lorribot

              Re: Optimised in compiler

              Unfortunately it is not moving quickly enough.

              The publishing industry widely used a product called Vista for all its financial and warehousing systems; it runs on VMS, and many are still using Itanium hardware as there are no (current) virtualisation options for this. Moving away from it is massively costly, as you are looking at ERP stuff like SAP, which means business process changes, increased support costs and often many more servers and support staff.

              VMS was HP's, so it moved to Itanium, but then HPE got bored with it, open-sourced it, and then sold all rights to it.

              Itanium was the right sort of idea, just badly executed. They should have offered licensing to other chip vendors as they did with x86; introducing a closed ecosystem into a crowded market that had no interest was never going to work, especially given the cost of moving these things, even back in those days of Server 2008.

              1. Dazed and Confused

                Re: Optimised in compiler

                VMS was HP so moved to Itanium

                VMS was ported to Itanium before the merger had even been mooted. Compaq had committed to IA64. Interestingly the VMS port was done on HP N-Class servers with Merced processors installed. This wasn't a product that was ever scheduled for release. The N-Class was sold with PA processors but was based around the Merced bus.

                As to virtualization, the last VMS sites I dealt with (banks) were virtualizing their Itanium VMS boxes under the HP-UX virtualization product which allowed them to run on the last generations of the Itanium boxes.

      3. TeeCee Gold badge

        Re: Optimised in compiler

        Oh, it was worse than that for Intel.

        They already had a shelved 64 bit Pentium design and when the Fat Lady started clearing her throat, they dusted it off.

        The slight snag was that world+dog already had stuff out for AMD64 and this had significant differences to Intel's original design.

        They were forced to hit it with a big hammer until it looked enough like AMD64 that it would run what was already out there. Hence the egg-frying sack of shite that was Netburst, which they got out the door just in time for AMD to introduce the world to the concept of dual-core X86.

        Rinse. Repeat. Fade...

      4. This post has been deleted by its author

    2. Jim 68
      FAIL

      Re: Optimised in compiler

      As I recall, the impetus for EPIC was a soon-discredited research paper implying that there was a huge amount of unexploited parallelism in existing source code that was missed by compiler peephole optimization and RISC runtime reordering.

      I tested this with a DEC Alpha in 1993 running OSF/1. The compilers had the option to do deep/wide optimization across the entire set of source files for an application.

      I compiled some large biomedical imaging and genomic applications and found the difference in runtime performance was about 3% or less - not worth the effort.

      Given the lack of runtime optimization and the nondeterministic nature of cache misses and memory access, the only way to get Itanium to work was to put the entire working set in cache.

      It would have been interesting to see what the highly regarded PA-RISC design team could have done had they not been displaced.

      Of course the same goes for Alpha.

  4. mark l 2 Silver badge
    Joke

    "The Register just found 300-plus Itanium CPUs on eBay"

    So ALL the Itaniums ever sold are now for sale on ebay?

    1. Anonymous Coward
      Anonymous Coward

      300 sold ever?

      I doubt that many have actually parted with money to have one; some of them must be free samples.

    2. Lunatic Looking For Asylum
      Joke

      How long before we see Vintage/rare/collectible in the titles (possibly all at once...)?

  5. Anonymous Coward
    Anonymous Coward

    Come on lads, do some research

    >The product spluttered along through new releases in 2012 and 2017, but its death notice came in 2019 –

    >and in February 2021 it was ejected from the Linux kernel.

    Read your own article that was linked. It got marked as orphan in MAINTAINERS which means no one came forward to maintain it.

    It didn't get "ejected" however and the code is still in mainline to this day:

    https://github.com/torvalds/linux/tree/master/arch/ia64

    Whether anyone actually runs mainline on an ia64 machine is another thing but it'll probably only get removed when someone cares enough to submit a series to remove it and that will probably only happen after some treewide cleanup that it gets in the way of. Even then there might be one weirdo out there that puts their hand up to keep it.

    1. Dazed and Confused

      Re: Come on lads, do some research

      > It got marked as orphan in MAINTAINERS which means no one came forward to maintain it.

      And the maintainers gave up a long time before the end of the line for Itanium. After they moved away from the SX2000 chipset (I guess zx2 on the baby boxes) there were never any drivers. So you could run Linux on a rx8640 but nothing newer such as the BL8[679]0c blades, SD2s or rx2800 boxes. I don't think you could even run it in their VMs from version 6 onwards.

  6. pmsrodrigues

    When is HPE keen on us using second-hand kit? You can't even download BIOS and firmware (ILO, etc) for their last gen servers without a service agreement.

    1. J. Cook Silver badge

      When is HPE keen on us using second-hand kit? You can't even download BIOS and firmware (ILO, etc) for their last gen servers without a service agreement.

      ... and if you have a service contract for the hardware, you might get a refurbished part, aka "used but tested OK", or, if you are slightly lucky, remanufactured/recertified, which means that they cleaned up all the scuff marks and ran it through the manufacturing QA process again. And if you were truly lucky, you get NOS (new old stock), which means it spent most of its life on a warehouse shelf.

      We happen to have a couple old gen 8 DLs sitting in the racks at [RedactedCo], only they have Cisco paint on them.

    2. Anonymous Coward
      Anonymous Coward

      > You can't even download BIOS and firmware (ILO, etc) for their last gen servers without a service agreement.

      Have you tried recently?

      I think that everyone can set up an account on their support site now and download the full ISO image again. I know they stopped you doing it for years, but this policy seems to have changed again. I got the supplemental full image from them in June and I don't have a support contract or a machine in warranty.

    3. dedmonst

      As others have pointed out - you clearly haven't looked recently - for most of their product range, HPE just require you to have an account on their site and you can access patches/updates etc.

      Although their Integrity (IA64) systems are an exception to this I think - why you would want to operate something like that without a support contract is beyond me.

      And with regards to getting HPE to support 2nd hand kit - of course they will with some provisos:

      - You can expect to pay a "return to service fee" in addition to a standard service contract.

      - You can expect them to ask you to prove any HPE software on the platform is properly licensed.

      - In some cases you can expect the first 30-60 days to incur T&M charges (for those lovely folks who try to return their system to support only when it has failed!)

    4. Smirnov

      When is HPE keen on us using second-hand kit?

      "You can't even download BIOS and firmware (ILO, etc) for their last gen servers without a service agreement."

      Wrong!

      While you needed a live warranty or support contract to download BIOS updates, firmware for iLO and controllers has always been free.

      And starting with Gen10 servers I believe even BIOS downloads no longer require a live warranty/support contract, just a free HPE account.

  7. DJV Silver badge

    Itanium did score plenty of decent wins, but never set the world on fire.

    But Samsung had a damn good attempt with the Samsung Galaxy Note 7.

  8. Ross 12

    "Explicitly Parallel Instruction Computing, and its IA-64 instruction set"

    Wait, is AMD's EPYC a sly two-fingered salute at Intel?

  9. bjzq888

    Free Supercomputer!

    A long time ago, my educational-organization employer was offered a free Itanium cluster; it had been one of the early installs at the NCSA, and it came complete with Myrinet switches, racks, and all. Unfortunately, I did a bit of research and discovered Red Hat was about to discontinue Itanium support, and we were a 99% RHEL shop. Also, it was going to take a semi trailer to hold it all, we'd have to move it ourselves, and we had no place to put it all anyway. So we politely declined. It wasn't really going to get us much in the way of performance, since we didn't do any clustered workloads anyway.

    Edit: Here it is:

    https://ncsa30.ncsa.illinois.edu/2002/04/titan-cluster-comes-online/

    1. Korev Silver badge
      Boffin

      Re: Free Supercomputer!

      And these days the GPU that is playing your cat videos has more power. I love living in the future...

    2. J27

      Re: Free Supercomputer!

      Old supercomputers are also really bad for power consumption. Probably would have eaten your organization out of house and home.

      1. Anonymous Coward
        Anonymous Coward

        Old supercomputers are also really bad for power consumption.

        Don't tell me, around 2004 I had a quad-plex of HP Convex V2600 servers. Each of them was essentially a large cube (approx 1.70m on each side), equipped with 32 PA-8600 processors and 32GB RAM, each needing a single-phase 230V 32A connection. I had to get dedicated power outlets installed just to power them.

    3. bjzq888

      Re: Free Supercomputer!

      Upon further research I think it was NCSA Mercury:

      https://www.csm.ornl.gov/workshops/SOS8/NCSA-SOS8.pdf

      Mercury, phase 1 TeraGrid:

      - Intel Itanium 2 1.3 GHz IBM cluster
      - 512 processors + head nodes
      - 2.662 TF peak performance
      - GPFS, 60 TB
      - Production Jan 2004

      At some point they had upgraded it with Myrinet, and later on with some other changes. I saw it in place once on an informal tour. The move would have been a massive undertaking, and by the time we were offered ownership (2010 or so) it was seriously underpowered on a per-server basis compared to the newest Xeons we were getting.

      http://www.ncsa.illinois.edu/news/story/a_workhorse_retires

  10. Bitsminer Silver badge

    The SGI Itaniums were OK

    $FORMERJOB sold a number of multi-CPU SGI machines based on Itanium, some 48p and 64p machines. Back in the day when "p" meant a single-core chip. It all fit in a single rack. SGI had magic chips that provided NUMA memory controllers for up to hundreds of processors, all seeing the same RAM.

    One year there was a 3700X (or was it a 'BX?) with 64 CPUs and 128GB memory on sale on the trade floor at the Seattle Supercomputer show for a mere $300k. So, on behalf of $FORMERJOB I bought one.

    1. HPCJohn

      Re: The SGI Itaniums were OK

      I managed several SGI Itanium systems. They were very good for CFD work.

      True tale - when a blade had to be replaced on an Altix, the SGI engineer had to phone a number in the States and get a unique code. Else the blade would not join the system.

      Preventing $COUNTRY from assembling a supercomputer by buying spare parts.

  11. steviebuk Silver badge

    So thats what Wang were

    Going to school in Boston Manor, and waiting for the E1/E8 to go home at the Great West Road bus stop (guaranteed a seat then, as it was before the school). The Wang building used to be on the corner. 1000 was the address number if I remember right. Never knew they did computers.

    1. Anonymous Coward
      Anonymous Coward

      Re: So thats what Wang were

      Ah yes... the company that thought it would be a good idea to brand their support and maintenance contracts as "Wang Care"

      (Later changed to "Wang Guard", which isn't all that much better)

      1. Paul Crawford Silver badge
        Coat

        Re: So thats what Wang were

        So was the top guy there the Wang King?

        Yes, I should have got my coat and left earlier...

      2. Michael Habel

        Re: So thats what Wang were

        Wang is up today after closing down yesterday.

        1. Ken Shabby
          Paris Hilton

          Re: So thats what Wang were

          Famous for the advertising slogans I used to see on Route 9? Been a while though, I could be dreaming.

          "Wang Cares!" and "My Wang never goes down"

  12. Geez Money

    If the Alpha hadn't been killed by arguably the worst CEO in tech history it likely would have survived and possibly even run the show now. Alpha DNA via a poached engineering team is where Athlons come from, before that AMD never really managed a competitive CPU and the Athlon and Alpha had similar killer features (like moving the external memory controller onboard, something Intel was a follower on). It's frustrating when the technologically superior solution dies because the company that has it is run by nimrods. Good on AMD for spotting their moment and saving us all from an Intel design monopoly, though.

  13. /dev/null

    Minicomputers?

    Not sure why the author starts off talking about minicomputers - the minicomputer architectures of Prime, DG etc were already old hat by the mid-90s - what Intel was trying to kill with Itanium were the proprietary *microprocessor* architectures of the Unix workstation/server vendors like Sun (SPARC), SGI (MIPS), DEC/Compaq (Alpha) and IBM (POWER). These were not “minicomputers”. To some extent, then, it was a success, as it did indeed kill off MIPS (as a mainstream architecture, it of course lived on in the embedded world) and Alpha. Even Sun and IBM hedged their bets at one point with OS ports to Itanium.

  14. Steve Channell
    Meh

    Rose tinted view?

    "Intel liked the idea of having another product line" is a rose tinted view. Intel wanted to kill the x86 architecture and the licence agreements IBM had forced it to sign with AMD and other fab vendors. Intel thought it could end-of-life the x86 architecture line, and move everything (including desktops) to Itanic.

    It wasn't a "wrong call", it was outflanked by AMD's dual-core Opteron processor and the AMD64 extended instruction set.

  15. spireite Silver badge
    Coat

    AMD discovered the Opterons?

    .... for years I thought it was Captain Scarlet

  16. Anonymous Coward
    Anonymous Coward

    A closer analysis of eBay kit...

    There's plenty of SPARC, Power and Alpha kit available on eBay too if you go hunting around. Not sure what your point is?

    What you won't find much of is the last generation HPE Integrity i6 units (the systems with the last generation 9700 IA64 CPUs). These are the ones that HPE will support out until 2025. A lot of what is on eBay is already end of support as far as HPE are concerned. Folks that still run this stuff need it to be supported.

    And where you will find i6 systems, they are definitely not cheap. If you happen to have a room full of i6 systems you don't want or need, you'd do well to speak to someone at HPE Financial Services' Certified Pre-Owned team - I think you'll find they would give you quite a bit of cash for them.

    But but but - that would mean there's actually some demand for these systems? That doesn't fit with el Reg's narrative on Itanic over the last 10 years does it? So "lots of cheap kit on eBay - ha ha" it is then.

  17. Henry Wertz 1 Gold badge

    Had one at U of I

    Had one at the U of I -- when HP found they were not selling very well, and wanted to show sales, they sold an Itanic-based HP Superdome to the U (engineering department) for something like $1,000 (this was something like a $250,000 machine). I talked to someone in the department; within days they'd already scrapped their plans to scrap the old PA-RISC-based system they had, since it was easily outrunning the Itanium one.

  18. Henry Wertz 1 Gold badge

    New Coke?

    I still don't know (on Intel's part, not HPs!) if this was an unmitigated disaster or a brilliant move.

    New Coke -- people say it was some disaster, and the New Coke itself was; but it greatly increased sales of regular Coke. I suspect it was a colossal mistake on their part that just happened to increase sales in the end... but there are those who claim it was all one elaborate campaign to increase overall sales.

    Itanium -- so this cost Intel dearly, but they knocked a lot of competitors off the market -- HP had both PA-RISC and Alpha (Alpha from DEC)... which, BTW, various PA-RISC and Alpha models were taking turns being the fastest CPU available on the planet; Intel at that point wasn't even close. Got SGI to quit using and developing MIPS. IBM may have considered it (no more PowerPC/POWER) but decided not to drink the Kool-Aid. I'm sure there were a few other architectures that ended then but I can't recall them off the top of my head. I don't think they expected AMD to introduce a 64-bit chip. Just saying, it cost Intel dearly, and my guess is it was just a colossal mistake on their part... but (other than IBM) after all these workstation vendors tried then ditched Itanium, they had stopped development of their own CPUs for too long to restart their programs and catch up, so they all ended up buying Xeons instead!

    1. Anonymous Coward
      Anonymous Coward

      Re: New Coke?

      Yes, it did. My dad bought the entire inventory of two grocery stores' Coke. The Coca-Cola company resumed bottling non-New Coke when he had less than a dozen 2 liter bottles left.

  19. BurnedOut

    It was never quite what was expected

    As we know, there was a very long wait between early indications that HP and Intel were working on the new architecture, in the first half of the nineties, and the availability of any systems running HP-UX on the new hardware (around 2001). Then when they did arrive it was an anti-climax - applications needed recompilation for optimum performance (although installing new executables for ISV software like Oracle was not much work) and I don't recall any stories of night-to-day performance gains, because PA-RISC development had not stood still.

    I have a recollection that not only was the early announcement a shock to those working with PA-RISC systems (which had only been shipping since 1986 and had in many cases provided great performance compared to their predecessor systems), but also the impression was gained that the new processor would be able to run both existing PA-RISC and x86 code. I don't know what the situation was with Windows-based applications (there were some deployments of Windows on HP's Itanium based servers in the early 2000s), but for HP-UX the "Aries" emulation feature (which turned out to be the way to run HP-UX PA-RISC code if not recompiled for Itanium) did not seem to get much use for production applications.

    This was in disappointing contrast to the previous PA-RISC launch in the late 1980s, when a lot of HP3000/MPE applications ran much faster under emulation on PA-RISC than they had done on the old 16-bit predecessor systems, and HP9000/HP-UX on PA-RISC had shown significant price/performance advantages compared to other systems of the time like VAX/VMS. Of course HP3000 was another story - it was already a fading business in the later 1990s and there was no production port of MPE to Itanium, so it had to stay on PA-RISC for its remaining years.

  20. Sparkus

    Shall we re-hash

    IBM and

    AIX 5L

    project rattler

    project monterey

    and the never-ending SCO lawsuits?

    Seems like an apropos time...

  21. Anonymous Coward
    Anonymous Coward

    Pentium Pro and its descendants were a hell of a CPU architecture in hindsight. And then AMD64 finished Itanium.

  22. HPCJohn

    Remember that the world was meant to go Windows NT. Dave Cutler was hired from DEC to produce WNT.

    WNT was going to run on desktops through to mainframes. What really killed the Alpha was Microsoft pulling the port of WNT to Alpha, if I recall correctly.

    1. Aitor 1

      Alpha

      Alpha was way too expensive, and having to recompile all programs meant no SW available at launch... no user base = no SW... so there it goes.

      Transmeta had a nice plan, but failed too.

      We had pa-risc, tested Itanium performance... And decided not to use it.

  23. jollyboyspecial Silver badge

    Back in the day

    Around the turn of the century we bought a bunch of servers with quad Xeon 550 processors. Pretty beefy at the time, given that most of our Intel servers had dual or even single Pentium II or III processors. The organisation still had Motorola minicomputers in use running some proprietary software. The senior IT management were keen that we should buy similar boxes for this job, as they thought anything with an Intel processor was a desktop toy. However the project manager had decided on MS SQL, and so Intel was seen as the best architecture, especially once the bean counters saw the relative prices.

    When the next project needed servers, Itanium surfaced. Having been told Intel was now pretty much compulsory, senior IT management were very keen on Itanium. However we were lucky that the bean counters didn't like the idea one little bit.
