The Titanic architecture: only sinking a bit slower than its namesake.
GCC 15 dropping IA64 support is the final nail in the coffin for the Itanium architecture
GNU Compiler Collection (GCC) 14 should appear any month now, and its documentation reveals that the upcoming iteration, GCC 15, will no longer build binaries for IA64 – or Itanic, as The Reg dubs it. Development work on version 14 of GCC is still under way but there's an interesting paragraph in the Caveats section of the …
COMMENTS
-
-
Friday 12th April 2024 19:10 GMT Michael Strorm
Linus Torvalds in "*can* actually be diplomatic when it suits him" shocker!
Interestingly, that's one case where he *does* appear to have been uncharacteristically diplomatic in his response:-
> That said, I'd be willing to resurrect itanium support, even though I personally despise the architecture with a passion for being fundamentally based on faulty design premises, and an implementation based on politics rather than good technical design.
> But only if it turns out to actually have some long-term active interest (ie I'd compare it to the situation with m68k etc - clearly dead architectures that we still support despite them being not relevant - because some people care and they don't cause pain).
> So I'd be willing to come back to the "can we resurrect it" discussion, but not immediately - more along the lines of a "look, we've been maintaining it out of tree for a year, the other infrastructure is still alive, there is no impact on the rest of the kernel, can we please try again"?
As "Mewse" noted here (seen via this Register article) this was a "reasonable proposal" that could be translated as
> "If the people who are complaining about needing more time for this change (*), suddenly find the time to modernize the code they don’t want removed, it can be re-added. It will not happen and I imagine everyone knows it won’t happen."
He didn't explicitly say that because he didn't need to - it was a more diplomatic way of avoiding a fuss and (apparently) giving them the chance to get what they wanted, while in reality putting the ball in their court and removing any basis for complaint if no-one was willing to put *their* effort or resources where their mouth was.
(*) Mewse noted: "They delayed removal of the architecture for as long as possible, and then when they finally committed to removal, they received the inevitable complaint from the one person on the planet who still uses the architecture."
-
-
Monday 15th April 2024 04:08 GMT Grogan
Re: "celebrated industry diplomat Linus Torvalds"
It's mostly just pretentious people that complain about him. He's actually a nice guy. I used to follow the kernel mailing list (though I haven't in some years), and any "rudeness" was more him just being jovial in admonishing people for bad code etc. He's been nice to me personally (and spent more of his time than I wanted to take) during interactions to get drivers fixed.
I'm a jackass too, I say what I want... so I couldn't criticize anyway.
-
-
Monday 15th April 2024 10:28 GMT Glen Turner 666
Re: "celebrated industry diplomat Linus Torvalds"
Linus has been diplomatic for much of his career. A reason Linux took off the way it did is that getting a bug fixed in a Linux system was far easier than getting a bug fixed in BSD Unix. Sure, BSD eventually fixed that, but by then it had lost the opportunity to be the major '386 Unix.
There was a period where Linus was far far too blunt for the size of the community which had grown around Linux. Force multipliers like fan channels and the trade press meant that previous behaviour was now amplified into being unwelcoming. But Linus is no fool, has demonstrated personal growth throughout his career, and got on top of that issue.
In this case, Linus was diplomatic. His offer was generous, and whilst he didn't expect it to be fulfilled, it was a clever way of demonstrating the actual resources available to maintain an Itanium port (ie, expected to be zero).
-
-
Friday 12th April 2024 12:42 GMT HuBo
Rolling rolling rolling ... rawhide
Love that EPIC ILP concept (thanks for the link!). Itanic may have been ahead of its time with it (and under-invested?) leading to a lack of buoyancy against alternative icebergs. Still, in 32-bit ARM, and in RISC-V with compressed instructions, compilers can generate a mix of 32-bit and 16-bit instructions, where the shorter ones can be read two-at-a-time from memory, and executed in parallel if sufficient execution ports exist, and there is no dependency hazard (or branch). I can imagine the same thing going with 64-bit or 128-bit instructions (aka 4-wide and 8-wide relative to 16-bits), with potential speed-up relative to uniform 32-bit ISAs.
Vector processing instructions that are so useful in HPC and AI/ML are already longer than 32-bits, and so standardizing on larger instruction widths, within the perspective of EPIC ILP, may just be where we end up heading anyways. Much as FP64, FP32, FP16, FP8, and FP4 are used in MxP and AI/ML, one could imagine EX128, EX64, EX32, and EX16 as "scalable" instruction formats in 128-bit compute (needed beyond Exascale, and good for vectors too). Itanic sunk, but the useful parts of the underlying concepts should certainly be fruitfully recycled into the next arch advancements IMHO.
RISC-V, in particular, seems to have quite a bit of room to expand instruction widths beyond 32-bits, making it a good potential test-bed for this (which could well solve its 1W2R design limitation in 32-bit).
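A minimal sketch (my illustration, not part of the original comment) of the length rule that makes that 16/32-bit mix workable: a RISC-V front end can tell an instruction's width from the low bits of its first 16-bit parcel, which is what lets a decoder pull two compressed instructions out of one 32-bit fetch.

#include <stdint.h>

/* Standard RISC-V instruction-length encoding, decided by the
   low bits of the first 16-bit parcel. */
static int riscv_insn_len(uint16_t parcel) {
    if ((parcel & 0x03) != 0x03) return 2; /* compressed (C extension) */
    if ((parcel & 0x1f) != 0x1f) return 4; /* standard 32-bit */
    if ((parcel & 0x3f) == 0x1f) return 6; /* reserved 48-bit encoding */
    if ((parcel & 0x7f) == 0x3f) return 8; /* reserved 64-bit encoding */
    return 0;                              /* longer encodings not handled */
}

Those reserved 48- and 64-bit formats are exactly the headroom for wider instructions mentioned above.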
-
Friday 12th April 2024 17:41 GMT Ken Hagan
Re: Rolling rolling rolling ... rawhide
Probably all true, but it doesn't help you. The central problem for ia64, known at the time and still unsolved, is that real-world code is gnarly. It has data dependencies every half dozen instructions and branches every couple of dozen.
You could have an infinite number of execution units and feed the entire program to the CPU in one truly epic instruction, and you still wouldn't have any more parallelism than a modern out-of-order CPU.
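To make the dependency point concrete, a toy C fragment (mine, not Ken's): no amount of issue width speeds this up, because each iteration's load feeds the next.

/* Pointer chasing: the address of the next load is only known once
   the previous load completes, so the loads run strictly in series
   regardless of how many execution units are available. */
struct node { struct node *next; long payload; };

long sum_list(const struct node *n) {
    long sum = 0;
    while (n) {
        sum += n->payload; /* the adds could overlap with other work... */
        n = n->next;       /* ...but this load serialises the whole loop */
    }
    return sum;
}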
-
Friday 12th April 2024 18:29 GMT DS999
Re: Rolling rolling rolling ... rawhide
Under invested??
Both Intel and HP devoted billions to it, with HP migrating their entire very successful RISC business to it - though to be fair they originally developed it and only brought Intel in as a partner because they knew they couldn't continue to fab it themselves as they had done with PA-RISC. Intel planned to make Itanium the only option for customers who wanted to enter the 64 bit world, relegating x86 permanently to 32 bits. That would allow them to slowly migrate everyone from x86 (where they had competition) to patent-protected IA64.
That's why AMD "beat them" to 64 bit for x86: Intel didn't want x86 to ever gain 64 bit support. They actually implemented 64 bit support in their x86 chips sold to the public well before AMD did, but didn't activate it. They intended it as a backstop in case AMD went 64-bit, so they could enable it and say "here's the REAL 64 bit version of x86!" and hose AMD - Intel's version of 64 bit playing SSE to AMD's 3DNow!. But Microsoft hosed them first when they announced they would support AMD64, and informed Intel that was the ONLY 64 bit implementation of x86 they would be supporting.
One can argue about the reasons Itanium failed, but "lack of investment" was not one of them!
-
Friday 12th April 2024 19:30 GMT Michael Strorm
Re: Rolling rolling rolling ... rawhide
> They actually implemented 64 bit support in their x86 chips sold to the public well before AMD did, but didn't activate it, they intended it as a backstop in case AMD did that, so they could enable it and say "here's the REAL 64 bit version of x86!"
I've genuinely never heard that one before- if true, it sounds very interesting. Do you have any concrete evidence that this was the case?
-
Friday 12th April 2024 21:02 GMT Roo
Re: Rolling rolling rolling ... rawhide
I don't buy that story at all. What I do know is that, before EPIC was released, Intel's Andy Glew did reveal that Intel had done some fairly well-advanced studies into a 64bit x86, but that this avenue was shelved in favor of EPIC, which was seen as having more growth potential. From what I recall of Glew's posts, Intel were keeping the same kind of register count but widening the registers; AMD took a different route which allowed for more growth - increasing the register count and binning the 8087-shaped barnacle. Thus we have monstrously quick AMD64 chips that can pretend to be a Pentium Pro on the rare occasion that they run 32bit code.
-
-
Saturday 13th April 2024 12:58 GMT Roo
Re: Rolling rolling rolling ... rawhide
I had forgotten that the initial "Yamhills" shipped to consumers were crippled - although I'm pretty sure the Xeon versions were shipped 64bit from day 1.
Timeline wise AMD published the first specs of AMD64 back in 1999 and released their first Opteron in 2003. References to Intel's Yamhill seemed to pop up in early 2002 - at which time AMD were publicly calling out Intel to jump on to their AMD64 bandwagon (and Microsoft had announced they were developing for the AMD64 - so presumably they already had AMD silicon samples to use and Intel could have seen how much of a threat the AMD silicon was).
All that said, there is evidence of a predecessor to Yamhill, corroborated by whispers on USENET and at least one statement from AMD in early 2002 pointing out that their proposed 64bit chips would support running 32bit legacy code.
-
Saturday 13th April 2024 13:26 GMT Michael Strorm
Re: Rolling rolling rolling ... rawhide
I did so, and Google returned numerous results confirming that Intel had been working on a 64-bit implementation of x86 in secret. That in itself is hardly surprising. (*)
But none of them appear to back up your far more wide-ranging claim- the one I quoted and made clear I was replying to!- that
> They actually implemented 64 bit support in their x86 chips sold to the public well before AMD did, but didn't activate it, they intended it as a backstop in case AMD did that, so they could enable it and say "here's the REAL 64 bit version of x86!"
So, as I said, if you have any evidence that Intel were secretly including their own implementation of 64-bit in their chips long before they officially did so, I'd be interested to see it. Otherwise, you'll excuse my scepticism.
(*) While Intel would never have publicly admitted that while pushing Itanium as the only route to 64-bit, it'd have been far more surprising if a company so large and so tied to x86 *hadn't* at least hedged their bets by doing so, even if only to keep their options open- and as a fallback to the expectation that AMD might try- on the assumption they *probably* wouldn't need it.
-
Saturday 13th April 2024 20:03 GMT DS999
Re: Rolling rolling rolling ... rawhide
Yamhill WAS their own implementation of 64 bit. Do you have any idea of the development timeline for a CPU? Its 64 bit was not AMD64 - its development began well before AMD announced that, let alone when the first AMD64 chips appeared on the market so they couldn't have included AMD64 even if they wanted to.
Intel obviously did not want to let AMD lead in that, they just assumed an AMD effort at 64 bits would get little market traction, similar to AMD's copycat MMX/SSE instructions. Intel was caught totally off guard when Microsoft said they would support AMD64 as the only x86 64 bit extension, and they had to modify their designs in the pipeline to be compatible with AMD64. That was made easier since they already supported 64 bits, so it was just different opcodes and maybe more registers (I have a gut feeling Intel would have wanted to cripple 64 bit x86 by not adding registers like AMD did), but it still damaged Intel's reputation as the x86 "leader" when they became the follower in 64 bit as far as the average person was concerned (who had never heard of Itanium and didn't care about it).
-
Saturday 13th April 2024 21:09 GMT Michael Strorm
Third time lucky?
Whether or not all that is the case is beside the point. I hadn't asked about whether Intel already had a 64-bit x86 design for chips that were "in the pipeline". (*)
What I made quite clear I was interested in - and was specifically asking about - was your claim that they had "actually implemented 64 bit support in their x86 chips sold to the public well before AMD did, but didn't activate it".
In other words, you appeared to be suggesting that Intel had already released and sold nominally "32-bit" chips that contained secret, unactivated 64-bit capabilities just waiting to be unlocked?
*That* would be somewhat interesting if it turned out to be true, but- given the subsequent lack of evidence to back it up- I suspect that it isn't.
(*) As I already said, it would have been far more surprising if Intel *hadn't* been hedging their bets by secretly working on one or more 64-bit extensions to the x86 design for several years prior, regardless of what it suited them to admit publicly.
-
-
-
Sunday 14th April 2024 13:26 GMT Michael Strorm
Re: Third time lucky?
That's great, thanks for finding that.
The article doesn't definitively prove anything in itself- it's a speculative analysis of chips that hadn't been released at that time. However, when I looked up "Prescott" (the revision of the P4 that the article refers to), the Wikipedia article and this reference do seem to provide official confirmation that Prescott was designed to support 64-bit, but that the first versions released to consumers wouldn't have it enabled.
So, from that point of view what OP originally said was at least partly correct.
What it doesn't back up is the part that says Intel was doing so "in their x86 chips sold to the public well before AMD did", since AMD's first 64-bit offerings came out in April 2003, almost a year before Prescott.
And it's not clear from that alone whether Yamhill started out initially as an incompatible, rival version of 64-bit x86 that they were forced to make compatible with AMD's. (AMD apparently released the spec for "AMD 64" in mid-2000, but Intel may well not have felt obliged to go along with that at the time, creating an incompatible spoiler instead, and I suspect they'd have already started developing their own 64-bit extensions by then regardless).
But all this is interesting anyway.
-
-
-
-
-
-
-
Saturday 13th April 2024 15:25 GMT Steve Channell
segment registers were the difference
The Intel 80386 introduced configurable-size segments, which all the OS vendors used to set segments to 32-bit with a single address space, the segment selector registers all pointing at this one large segment. Without much change it would have been possible to increase the address space from 4GB to 16GB by using separate {code, stack, data, extra} segments; all the effort would have fallen to OS and compiler vendors.
AMD didn't just increase the addressing from 32 to 64 bits; they remapped segment registers as general purpose registers to ensure that AMD64 programs would always be faster than "whatever Intel called their version" (was it x86-64 or ia32-64? I can't remember). Fortunately Intel was persuaded to build AMD64-compatible chips.
-
-
-
Friday 12th April 2024 20:53 GMT Roo
Re: Rolling rolling rolling ... rawhide
EPIC was doomed before it left the drawing board for three reasons - all of which were already apparent from the short life of Multiflow Computer Inc - and these reasons were (IIRC) articulated by a number of well-seasoned computer architects on USENET (which back then included folks who worked on the DEC Alpha, ARM, PA-RISC, MIPS, SPARC, POWER and x86 (incl x86-64) - see Andy Glew's posts on the topic in particular).
1) Dynamic Scheduling is always better than static scheduling because a) it adapts to what is actually happening in real time and b) it can work in conjunction with static scheduling.
2) Static scheduling requires large register files and wide datapaths in whatever fabrication process you choose. This means that a VLIW-style design will inevitably have a lower clock rate in a given implementation technology relative to, say, its RISC peers. The only way a VLIW machine can compensate for this is to widen the datapaths - which again forces the clock to be slower (to manage skew), thus compounding the problem.
3) Advances in hardware fabrication & implementation vastly outpaced any benefits attributable to VLIW.
Full disclosure - I liked the idea of VLIW; it's just that whenever it was implemented in hardware it ended up slower and more complicated than whatever else was current.
On the plus side, Multiflow's efforts were not totally wasted. They did have to write a very good compiler, and that compiler was very well regarded and licensed by competitors who promptly put it to work compiling code for RISC chips (during the peak period of the compiler wars)... The big register files are still with us, but they are implementation details rather than architectural details, so modern processors can (and do) take advantage of wide issue and huge register files when and where it suits the implementation...
I think the real point of EPIC was to sink the competitors but leave the x86 alive at the low end, which didn't pan out because AMD spoilt the party with AMD64.
IIRC Linus Torvalds spent some time working for Transmeta on a VLIW architecture - which also sank without (much) trace...
-
Saturday 13th April 2024 20:13 GMT DS999
Re: Rolling rolling rolling ... rawhide
Dynamic Scheduling is always better than static scheduling
While that's something everyone accepts and "knows" now, it was far from clear in the early 90s when HP began development on PA-RISC 3.0 which became Itanium. The first superscalar chips were out but many believed they could never go more than 3 or at most 4 wide, because of how many more transistors and how much more power was required simply to go to 2 way superscalar execution.
People like to talk about how Moore's Law is ending today, but there has been talk of Moore's Law ending basically forever. There are always roadmaps out 10 years or so (there still are, in fact) but beyond that is terra incognita, where it isn't clear whether we'll be able to continue going smaller. HP's engineers had the additional pressure of seeing the trend line for cost for building fabs, and they knew that even if Moore's Law carried on into the 21st century that HP may not continue to fund it (and they were right about that, hence the partnership with Intel)
As for Intel, I think they were in love with the idea of a patent-protected ISA that would have no pesky cloners like Cyrix and AMD - something built from the ground up with the future in mind, not the legacy-encumbered x86.
While Intel probably believed they could keep pushing Moore's Law a lot longer than HP feared, HP had this nifty Dynamo runtime compiler that could take optimized compiled code and actually execute it faster using JIT type technology. Intel was incredibly impressed by that, and believed that static scheduling in cooperation with Dynamo technology could surpass what dynamic scheduling in hardware could do. It turned out Dynamo's gains were limited, and dynamic scheduling had much further to run than anyone would have believed in the mid 90s.
-
Monday 15th April 2024 19:22 GMT Roo
Re: Rolling rolling rolling ... rawhide
"While that's something everyone accepts and "knows" now, it was far from clear in the early 90s when HP began development on PA-RISC 3.0 which became Itanium."
Well, not everyone is a computer architect or designs microprocessor front-ends for a living ... However within that community dynamic scheduling was already recognized as a win by the 70s, perhaps the most famous example of it was the CDC 6600 (released in 1964). Superscalar processors were old hat by the 90s - there was plenty of data out there from the boat-anchor machines to show the benefits (eg: CDC 6600, IBM S/360/370/etc) of dynamic scheduling.
By the late 80s/early 90s superscalar microprocessors had just started appearing and transistor budgets were sky-rocketing year on year with no plateau in sight. So from my PoV in the 90s HP/Intel & EPIC (1997) were very much swimming against the tide - everyone else was going superscalar: AMD 29K (1990), Motorola 88110 (1991), Alpha EV4 (1992), POWER2 (1994), HyperSPARC (1993), MIPS R8000 (1994), AMD K5 (1996), and even the Intel P6 (1995)... Even INMOS had a superscalar hack called the "Grouper" designed for the T9000 in 1990.
I like oddball CPUs and would not have begrudged EPIC some success if it had been a good fit for the problem - but that wasn't what I was seeing before or after its release.
-
-
-
Saturday 13th April 2024 11:36 GMT StargateSg7
Re: Rolling rolling rolling ... rawhide
Meh! At NCA (North Canadian Aerospace) we've been doing 128-bits wide for 20 years now and have SEPARATE instruction processing streams for 128, 64, 32, 16, 8, 4 and 2 bit signed/unsigned integer processing and also have SEPARATED-OUT the processing streams for Fixed Point and Floating Point arithmetic.
All six streams (signed and unsigned FP, Fixed and Integers are handled separately!) run in parallel and since we DO NOT USE virtualized cores (i.e. Hyperthreading) but rather have single-core for single-thread processing on our combined-CPU/GPU/DSP/Vector super-chips, it means we don't have issues with pipeline stalls and nor do we have issues with predictive branch crashes that Intel, AMD and IBM have.
We use a many-core architecture so our super-chips are all 1024 cores and that means super-fast and low-latency 1024 available processes/threads! Each core is OPTIMIZED for a set of specific bit widths so we have divided the 1024 cores up into 32-core processing-groups for each of the 128, 64, 32, 16, 8, 4 and 2-bit values used for the SIGNED and UNSIGNED Floating Point, Fixed point and integer operations. Think of these processor groups as hardware-hyperthreads that work in cooperation and use ONLY THEIR OWN shared-within-processor-group RAM and/or local-core-only Cache memory.
This means you can assign parallelized tasks to SEPARATE processor groups at different bit-widths that can be scheduled at your whim and then started, stopped and saved whenever one needs. No other 32-core processor group can intrude upon another processor group and their memory spaces are completely secured and separated-out in their own sandbox for best security practices. AND within each 32-core processor-group, applications can assign tasks as a 1, 2, 4, 8, 16 or 32-core hyper-process so that sub-tasks can be separated out to single or multiple cores and then processed on an individual sequential or parallel basis within the greater 32-processor-group.
We can optimize application-threads to work ONLY ON specifically-sized data that best fits the intended application (i.e. 64-bits for unsigned integer RGBA pixel processing or signed 128-bit fixed point real values for Astronomical orbit calculations) and since processing results for each 32-core group can be sent to a GLOBAL SHARED HEAP, it also means we can cooperatively process different types and bit-widths of data in parallel BUT synchronize the use of all final results for downstream processes that need or have a rigorous and pre-determined time schedule (i.e. such as audio/video streaming!)
I think our setup is FAAAAAAAAAAR superior to what Itanium EVER envisioned and our 20 year 128-bits wide super-chip experiment is proof-positive of good chip design since we are now publicly disclosing some of the hyper-advanced technologies that these super-chips have discovered for us!
(i.e. we now disclose our 64k by 64k array of Trapped Xenon Particle Nanowells technology used for instantaneous quantum entanglement-based communications at MANY Petabytes per Second datarate network communications using less than one watt of power in a one cm square microcircuit with first working chips and designs out for WORLD-WIDE FULLY FREE AND OPEN SOURCE disclosure under GPL-3 licence terms right today -- ALL the telecoms and ISP's ARE NOW TOAST!!! --- UTTERLY FREE ultra-high-bandwidth global communications forever!)
V
-
-
Saturday 13th April 2024 12:21 GMT StargateSg7
Re: Rolling rolling rolling ... rawhide
OH YEAH BABEEEE we did legalize the devil's salad in Canada .....BUT..... I would rather have my shots of Asbach Uralt German Brandy or a nice shot of Sambucca OR a 25 Year old Scotch!
I only PRETEND to be insane! But in reality we actually DO HAVE the world's fastest supercomputer at 20 YottaFLOPS at 128-bits wide! AND we actually DO HAVE multiple 160 IQ+ super-intelligences working and playing 24/7/365 making ASTOUNDING scientific discoveries which are just now being announced and disclosed publicly!
We have a list of 60+ discoveries that are being WORLD-WIDE FULLY OPEN SOURCED under GPL-3 licence terms! All designs, source code and manufacturing processes are being fully disclosed publicly for your perusal and download! Even our super-chip Tape-Out design is being open sourced in a few hours! We make our super-chips out of Borosilicate glass substrates and while we did use Gallium Arsenide opto-electronics for quite a while, we finally went for a far simpler manufacturing process that you can even do at home or at the office with a simple DIY vacuum chamber, a Green Laser etcher and some powdered copper and carbon black! We just run the chips at a higher voltage using 540 nanometre line widths for our chip designs which can be easily etched onto large 20 cm by 20 cm borosilicate glass panels with a cheap green laser! We can make a 128-bits wide 50 PetaFLOP super-chip for less than $25 USD and you can too!
V
V
V
-
-
-
-
Friday 12th April 2024 12:57 GMT trevorde
Unloved & unmissed, even by its parent
Worked for a company who had one of the early Intel Itanium dev boxes. One of our devs spent 6 months porting the viewer part of our product to it. We demoed the viewer at our world user conference to great indifference. We assumed there was even less interest in running our main product on Itanium. Shortly after that, the project was canned and the dev box was put into a cupboard. Fast forward a few years and the sysadmin was having a clear out. He contacted Intel about returning the dev box and was told they didn't want it back and to keep it. He offered it around the office but there were no takers.
-
-
Friday 12th April 2024 19:25 GMT Michael Strorm
But that's MS all over. They see someone else's success (in this case, Flash), want to create their own version, tell everyone that it's going to be their next big thing and use their market power to railroad developers into investing their time and resources supporting it.
Then, after that latest attempt at throwing mud at the wall doesn't turn out to be the instant success they hoped for, it's dropped and forgotten about without a second thought as they move on to the (would-be) Next Big Thing. Leaving any developers and consumers who fell for it with nothing to show for it except knowledge/possession of another piece of discarded and obsolete MS technology.
Remember Windows RT? Exactly. Just another reason why one shouldn't waste time with anything MS releases until it's clearly shown that it's here for the long haul.
-
Sunday 14th April 2024 14:27 GMT MacroRodent
Porting
About that:
> I spent much of 1999-2004 porting software to it, which was a prime waste of time and money.
I wonder how much of it really was due to Itanium quirks, and how much the general hassle of porting code that has too long assumed "longs" and pointers have the same 32 bit size?
If the latter, then AMD64 would cause the same pain, and the work of making code 64-bit clean for Itanium was not wasted.
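A hypothetical example of the kind of code that made those ports painful (my sketch, not MacroRodent's): anything baking in the ILP32 assumption that int, long and pointers are all 32 bits broke the same way on Itanium and AMD64 alike.

#include <stdio.h>

int main(void) {
    long mask = 0xFFFFFFFF; /* "all bits set" - only on a 32-bit long */
    printf("sizeof(long)=%zu sizeof(void*)=%zu\n",
           sizeof(long), sizeof(void *));
    printf("mask == ~0L? %s\n", (mask == ~0L) ? "yes" : "no"); /* "no" on LP64 */

    /* Stuffing a pointer into an int truncates it on any 64-bit ABI -
       a classic source of crashes that only appear after the port. */
    int truncated = (int)(long)&mask;
    (void)truncated;
    return 0;
}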
-
-
Friday 12th April 2024 14:48 GMT squizzler
I do wonder if IA-64 could become an open ISA along with so many others. And if so, Intel could use its foundry services to offer hobbyists a single board computer based on the ISA. It would be a massive comeback! And cheaper than trying to buy their way into RISC-V by acquiring SiFive, as they considered a few years back.
This is discussed in the linked post. The Reg has always had an unhealthy obsession with Itanium - mostly with dissing it - so if you fancy publishing it as an article, please knock yourselves out!
https://heartofwalesbikes.wordpress.com/2024/04/12/raise-the-itanic-could-intel-revive-ia-64/
-
Friday 12th April 2024 15:39 GMT Liam Proven
Allegedly much of the design was inspired by the Russian Elbrus project.
https://www.theregister.com/2002/05/24/elbrus_the_itanium_slayer_returns/
https://www.theregister.com/2004/05/24/intel_elbrus_deal/
I think there are still VLIW chips selling in Russia. It's also quite popular in the DSP market, I learned last time around.
But this one? No.
-
Saturday 13th April 2024 02:53 GMT bazza
Itanium was quite good for DSP only because it included a fused multiply-add (FMA) instruction in its SIMD / vector unit, which is good for FFT implementations, and x64 didn't have one. And because the Itanium came from Intel (ergo, it must be "the best"), it was supposed to be the one to use. To emphasise the point, Intel didn't put an FMA into the x86/x64 line up until, what, 2013?
Trouble was that Altivec on PowerPC included an FMA, which made chips like the PowerPC 7400, 7410 and 8641D surprisingly competitive, especially against Intel's x64 line up. Itanium artificially extended the useful lifetime of the 8641D because Itanium really wasn't embeddable. It wasn't until Nehalem came along that x64 (through brute force alone) started beating the 8641D, but only for FFT sizes that overflowed cache (Nehalem's superior memory bandwidth won out over the 8641D's slower memory subsystem).
Nehalem was good enough for a lot of actual DSP work (on things like radars, EW systems), meaning that 1) one could finally move on from the 8641D, and 2) one didn't have to go to Itanium. With the writing on the wall, Intel finally added an FMA instruction to the x32/x64 line up in about 2013 - about 13 years too late in my opinion. There's a lot of kit that got built around the PowerPC 7410 and 8641D; it's still in service, and MLUs (mid-life upgrades) seem to involve re-manufacture of ancient parts (something that is surprisingly cost effective) rather than porting software to newer x64 based systems. This is happening even though some system suppliers have full API compatibility through nearly 3 decades of product line up. Altivec really was the right core extension at the right time, with execution speeds well attuned to signal bandwidths, and those bandwidths aren't really changing that much; it's not like there's more spectrum available today vs 25 years ago.
As for the Cell processor; well, that was quite the beast. It took Intel well over a decade to make anything that outperformed that.
-
-
Saturday 13th April 2024 08:50 GMT bazza
FMA is not about precision; it's all about time / compute performance. Intel were using FMA in Itanium to differentiate it in the supercomputer marketplace, and that worked to a limited extent. But they blew it.
With an FMA instruction, a CPU (well, its vector / SIMD core(s)) is a lot, lot quicker than a processor where the FFT butterflies have to be calculated as separate multiply and add. It's not just that two maths operations are merged into one instruction executed in a single clock cycle; it's also a whole lot more friendly to caches. That is, more work is being done per datum loaded DRAM->L3->L2->L1->pipeline than without an FMA, so the benefit is multiplied for FFT lengths that exceed, say, L1 cache size.
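For illustration, a scalar sketch of a radix-2 butterfly written around fused multiply-adds (my code, not bazza's - real DSP libraries would use SIMD intrinsics rather than scalar math.h calls). Each fmaf() is one instruction and one rounding on FMA-capable hardware, where a chip without FMA needs a separate multiply and add for each.

#include <math.h>

typedef struct { float re, im; } cfloat;

/* One radix-2 DIT butterfly: t = w*b; a' = a + t; b' = a - t */
static void butterfly(cfloat *a, cfloat *b, cfloat w) {
    /* complex multiply t = w*b, expressed as fused multiply-adds */
    float t_re = fmaf(w.re, b->re, -(w.im * b->im));
    float t_im = fmaf(w.re, b->im,   w.im * b->re);
    b->re = a->re - t_re;  b->im = a->im - t_im;
    a->re += t_re;         a->im += t_im;
}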
For comparison, a 400MHz PowerPC 7410 (with Altivec) was quicker at 1K floating point complex in-place FFTs than a much newer 4GHz Pentium 4. For the smaller FFT sizes that could be accommodated in L1 cache, the 1GHz 8641D (which also had Altivec) was still quicker than the Nehalem-vintage Xeons. Itanium was likely equally performant, having an FMA, but it was too difficult a chip to integrate into the kind of embedded hardware that was used in things like aircraft, tanks, etc.
Itanium wasn't attractive enough to the supercomputer manufacturers either. Fujitsu / Riken famously turned to customised SPARC CPUs for the K machine, and AMD's Opteron line was quite popular too. The kind of applications that supers get used for (computational fluid dynamics, protein folding, etc) don't necessarily benefit from an FMA instruction anyway, so Itanium's "advantage" over the rest of the x64 line up that existed back then was much smaller, and the disadvantages (shite compilers) were stark.
The Cell processor was way quicker than any of them. It was only when Xeons grew to >8 cores, FMA and memory bandwidths close to 100GByte/sec that they started outstripping the Cell processor, and even then they ran a lot hotter. The entire high-end military embedded systems market was poised to adopt Cell wholesale, only for IBM to drop the chip. One of the other attractions of Cell was that they'd majored on single-clock-cycle completion for all SIMD operations at the expense of precision. The reasoning was that for most applications (like DSP, or graphics for games, etc) speed of execution trumped outright numerical correctness; better to get an approximate answer now than an exact answer in another clock cycle.
-
Saturday 13th April 2024 14:12 GMT Roo
The IBM BlueGene processors looked to be better balanced than Cell processors to me - and they had FMA with direct pipes to memory (bypassing cache). Their power efficiency was exceptionally good, and they got much closer to hitting their theoretical peak performance than pretty much anything else at the time.
-
Saturday 13th April 2024 15:48 GMT Bitsminer
The SGI Altix line was originally on Itanium. The chief benefit was "single OS image", what we today call a multi-core computer, with any core count you could afford up to 512.
$WORK used it for the very large RAM and many cores (32 to 128 processors) that were otherwise unavailable. The SUSE and Redhat Linux of the day were not so efficient, so only 2/3 of the processors could be used for the application; the rest had to be left for Linux to run the filesystem, memory management, etc.
As I recall our big complaint was the single-cycle delay in floating-point operations between the decode and operation. That's what one fellow said anyways; in a pipelined CPU I didn't see the issue. He had the PhD in computer architecture and I didn't.
FFTs were very good if you used the Intel libraries (not so much the SGI library) and the GCC/icc open-source versions sucked wind. Good thing FFT was only half our workload....
-
Sunday 14th April 2024 06:59 GMT CowHorseFrog
Itaniums ran at less than half the speed of an Intel chip at many times the price. Even comparing a single Itanium FMA against an Intel multiply-and-add, the Intel is going to do far more.
Your comparison is broken; another way of thinking about it is that for the money of one Itanium you could buy quite a few Intels, and in the end those Intels would do far more multiplies and adds than the single Itanium.
-
-
-
-
-
Friday 12th April 2024 19:29 GMT doublelayer
There are legal and IP problems, but theoretically they could deal with those and open the thing up. Let's assume they did it - which they won't, because the checks to do so properly take employee time and they don't see a reason to waste it. Even then, who is going to build the computer? Someone has to design a processor using the architecture that can be usefully built on modern fabs. Someone needs to manufacture and test a lot of them. Someone needs to build a board that uses that chip. Someone needs to write the firmware and get a kernel which can run on it. Who is going to do each of those four things, and why do you expect the result to be any more interesting than the many ARM or RISC-V based SBCs that are easily purchased? It probably won't be cheaper or lower power, and it certainly won't be more compatible with anything, and if they make it fast then AMD64 becomes a valid competitor as well, so what's the reason for a user to buy it? If there isn't one, nobody will bother making it.
-
Friday 12th April 2024 21:23 GMT Michael Strorm
I had a look at the linked article. It claims that "Software compilers were initially not up to the task, although researchers did make strides early on. There is no reason to believe they would not have continued to do so if interest had not waned in the architecture.".
Really? This sounds like wishful thinking. Donald Knuth- who ought to know- commented in 2008 that "the "Itanium" approach [was] supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write".
"Basically impossible" doesn't sound like he's leaving *any* leeway that they would have been "possible" simply by throwing more resources at them. Quite the opposite- it sounds like he's saying that Intel released the Itanium assuming- and *relying* on the assumption- that the problem *could* be solved AT ALL... until it turned out it couldn't.
I mean, Intel and its partners invested multiple *billions* of dollars in Itanium. At one point I read something saying that (in effect) Itanium was a bet-the-farm move for Intel. That may have been wrong, since they're still here, but it makes clear how much money was at stake.
And there's no way they wouldn't have happily thrown hundreds of millions more into compiler research if that had a plausible chance of recovering their investment in Itanium.
But- while I'm no expert when it comes to CPU design- the article doesn't really make clear *why* we should want Itanium back on a technical level, beyond the fact that more architectures would be better.
My understanding is that Itanium was a product designed around the concerns and assumptions of early 90s chip design. And that by the time it came to market in the early 2000s, many of those concerns were already proven to be misplaced (e.g. the idea that out-of-order, dynamic and speculative execution wouldn't be workable contradicted by CPUs already on the market) and many of the assumptions were turning out to be wrong (e.g. that efficient parallelism and resource allocation could be done statically at compile time).
-
-
-
Friday 12th April 2024 16:27 GMT DaemonProcess
Take some credit
I think the Register needs to take some of the credit for helping destroy the credibility of the chip.
Be careful who you diss, Reg, because 20 years later we are stuck with only 2 types of mainstream CPU, with a 3rd only just coming in. Back in 2000 we had a whole bunch more to choose from, and life was more interesting. The trend to do everything in software is going to hold back hardware innovation going forwards. You could point out how much is being offloaded to graphics processors these days, but how many architectures of those do we have - 2?
Having said that, the first iteration of the Itanium really was poor. The 2nd was faster but very complicated.
The other thing that killed it was HP themselves. I've never seen a company where the management was so wholly set against each other as HP in the early post-Compaq/DEC-merger years. No wonder Hurd went to 'Orrible.
-
Friday 12th April 2024 18:22 GMT Richard 12
Re: Take some credit
It was doomed at inception due to the deliberate lack of compatibility with existing software.
You cannot assume everyone is going to be willing to port everything to a new target architecture.
Absolutely everyone has better things to do, and there's plenty of software that simply cannot be ported at all for various reasons.
That's why AMD succeeded and Intel failed.
-
Friday 12th April 2024 22:47 GMT Anonymous Coward
Re: Take some credit
One of the good reasons, to be sure, but another is that the architecture was only sufficiently "better" for a narrow range of workloads. Most applications would never see large gains from an Itanic-optimized port, other than gains in the number of truly vicious bugs they would have to sort. This has been the bane of pretty much all of Intel's dedicated server accelerators as well. A weird architecture that offers too few performance benefits to justify the cost of re-training and re-building everything should be a formal anti-pattern by now.
This was just the wrong model of parallel computing for a general purpose computer, which is why most of the customer base dried up. What remained were a couple of large deep pockets, and the poor fools who had sunk-cost themselves into a budgetary corner where bleeding money for trivial gains was less immediately painful than re-architecting the entire deployment back to commodity hardware.
Then, like a nuclear half-life, the Itanium market decayed year on year through decades of obvious decline. Its architectural curiosity aside, the last-gen Itanium processors are now inferior to cell phone parts or some smartwatches. Support for it should probably have been culled a decade or two ago.
-
Sunday 14th April 2024 07:48 GMT ianbetteridge
Re: Take some credit
"You cannot assume everyone is going to be willing to port everything to a new target architecture."
I mean, that's exactly what Apple has done, and I think what Microsoft would love everyone to do with ARM when/if Qualcomm pulls its finger out. So it's possible, but the way Intel approached it was not ideal.
-
Monday 15th April 2024 01:46 GMT Jumbotron64
Re: Take some credit
Apple can do it, and has done it 3 times (Motorola 68k to PowerPC, PowerPC to Intel, Intel to ARM/custom in-house), simply because they control both the OS and the hardware. Plus, for each transition/breakage Apple had various compatibility schemes to soften the blow. Intel was never going to be able to do that.
-
Monday 15th April 2024 10:16 GMT Ianab
Re: Take some credit
To be fair, Apple didn't switch to their new ARM CPUs "cold turkey". They already had much experience with them in their phones and tablets; things like compilers and software libraries already existed, and they would have had programmers on staff already familiar with the architecture. Now, I'm sure it still cost $millions to port their OS and software to the new silicon, but it wasn't a "start at square one", and there were sensible economic reasons to make the move. Their new ARM chips are more power efficient, and I bet they cost a lot less than they were paying Intel for their previous CPUs.
So Apple had both the resources ($ and staff), and a technical / economic advantage to swap over.
-
-
-
Friday 12th April 2024 19:18 GMT aerogems
Re: Take some credit
Hurd wasn't given much of a choice, given he apparently watched too much Mad Men and thought he could hit on his secretaries like it was the 1950s and 60s. Then he ran to 'Orrible, as you call it, because Ellison was the only one willing to offer him a job.
Granted, he was still better than Apotheker (or however you spell it) who, IIRC, didn't even make it 1 Liz Truss as CEO, and couldn't even show up for his first day of work because he was ducking process servers from his last employer.
Coat icon because both of those individuals were told to get theirs and exit the HP premises.
-
Friday 12th April 2024 20:01 GMT Michael Strorm
Re: Take some credit
> I think the Register needs to take some of the credit for helping destroy the credibility of the chip.
Nah. I don't think The Register was quite *that* influential, at least not back in the early 2000s when it was a far more UK-centric publication, and I doubt it was *that* widely-read among the movers and shakers in Silicon Valley compared to US publications.
Let's remember that Donald Knuth himself "dissed" the Itanium when he said "The Itanium approach...was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write."
> Be careful of who you diss Reg because 20 years later we are stuck with only 2 types of mainstream CPU with a 3rd only just coming in. Back in 2000 we had a whole bunch more to choose from and life was more interesting.
You make 2000 sound more like the mid 1970s to the late 1980s. I remember 2000 as being a time when that diversity of CPU architecture was already in decline and everything seemed to be moving towards x86 and Wintel PCs in general. Even Apple gave in and moved to x86 a few years later.
Yes, there were still some other architectures around from the past decade or two, but- even more obviously in hindsight- most of those seemed to be already in decline, the past rather than the future.
Today Apple has moved away from x86 with the "M" series processors, and ARM is a huge player. Granted, ARM has been around since the late 80s- long before 2000- but it's a big player and a big deal in a way it never used to be. You couldn't have seriously imagined a data centre full of ARM-based computers twenty years ago.
And let's remember the reason that Intel didn't implement 64-bit on the x86 was because they hoped by not doing so, they could force those who wanted 64-bit onto the Itanium and have the market to themselves again. If Intel had their way, everyone would be using Itanium by now, and AMD would have been shut out.
So let's avoid seeing Itanium through rose-tinted glasses.
-
-
Sunday 14th April 2024 22:06 GMT squizzler
Re: Take some credit
Clearly the barb in your post got under the skin of some people. At the time of writing, seven people have downvoted the post versus only me giving it the thumbs up. Ideally we would not downvote posts made in earnest just because we disagree with the content; otherwise I suspect the Reg might become a bit of an echo chamber.
-
-
Friday 12th April 2024 20:30 GMT jake
Credit where credit is due.
"Itanic, as The Reg dubbed it."
Seems it wasn't ElReg who came up with the nickname, it was ElReg reader Andrew N. See this article, from the days before paragraphs and commentards were invented:
https://www.theregister.com/1999/10/28/amd_vs_intel_our_readers/
Hint: It's in the final line.
-
Friday 12th April 2024 22:52 GMT Anonymous Coward
Re: Credit where credit is due.
Ah, the golden years. It really is interesting to review the old bookmarks now and then. If you were around long enough you got to see little bits of culture and history being written. Strange how some of us have been here longer than most of the staff at this point.
Quote not the deep forum lore to me, I was there when it was written?
-
-
Friday 12th April 2024 22:59 GMT ldo
Guess Whom The “Howls Of Dismay” Were Coming From ...
You see the type in the comments sections on this very site, don’t you? Those who love to pontificate about how Open Source doesn’t quite suit their needs. When it is pointed out that the code doesn’t write itself, that it all happens because somebody cared sufficiently to get off their bum and actually do something about it, the response is either bluster or silence.
Torvalds was right to call their bluff.
-
Sunday 14th April 2024 02:39 GMT aerogems
Re: Guess Whom The “Howls Of Dismay” Were Coming From ...
Indeed. The "if it ain't broke" crowd. If everyone followed that philosophy we'd still be a bunch of nomadic hunter-gatherers. Civilization never would have formed, and we certainly wouldn't be here discussing it. As I've gotten older and wiser, I have generally adopted a similar attitude. Sometimes I don't like the changes made to an app or game, but since I'm not the one writing the code, I figure I have very little room to complain. I can say that I don't like a particular change and I wish they would revert it or make it an option people can toggle, but since I'm a mediocre programmer on my best days, I don't throw a giant tantrum like a spoiled toddler who didn't get the toy they wanted at the store. If it bothers me that much, I just find a different app to do the same basic function/stop playing the game. I suppose if there were no other options I'd try my hand at making my own version of the app, but that's never been necessary to date.
-
Friday 19th April 2024 13:00 GMT johnny-mnemonic
Re: Guess Whom The “Howls Of Dismay” Were Coming From ...
I'd recommend doing better research, or not relying on hearsay, before accusing someone of not putting his money where his mouth is.
Because we are happily keeping Linux/ia64 alive - for over two mainline releases already, in fact. You can get the picture from [1] and [2]. So far we have solved every issue on the way, and more: for example, we brought back HP Sim platform support - up to mainline, that is. You can reassure yourself by checking, for example, [3] that **all** Linux stable release (candidate) kernels are working happily inside Ski running on x86_64. We are also regularly testing mainline release (candidate) kernels built with the latest GCC snapshots w/LRA enabled on actual hardware (rx4640, rx2620, rx2660, rx6600, rx2800 i2).
[1]: https://lore.kernel.org/all/fe5f6e9b-02a2-42e9-8151-ae4b6fdba7e3@web.de/
[2]: https://lore.kernel.org/all/145da253-b3bc-43da-a262-a3ebdfbea5a2@web.de/
[3]: https://github.com/johnny-mnemonic/linux-stable-rc/actions/runs/8747837492
We put our money where our mouth is and welcome everybody doing the same. Cheers.
-
Monday 22nd April 2024 04:52 GMT dr.shuppet
Re: Guess Whom The “Howls Of Dismay” Were Coming From ...
I'd also add that, along with keeping the Itanium patchset up to date with the latest mainline kernel, we also fixed a bug that had been in the kernel for some time and prevented it from booting on Integrity machines with a certain memory topology (e.g. the rx6600) [1]. This issue is not fixed in 6.6-stable, due to the kernel having dropped Itanium.
[1] - https://github.com/lenticularis39/linux-ia64/commit/14fd001ae32642884f27e750144e73a96d88a81a
-
-
-
Saturday 13th April 2024 01:23 GMT BinkyTheMagicPaperclip
eh, who cares
It was never good enough to use in a commercial setting (except for a very specific set of workloads), and it was a pain in the arse to fiddle with at a hobbyist level. The price and form factor of second-hand kit never reached the level where it would be anything other than a noisy, slow, power-hungry rack-mount system sat in the corner of your room.
It speaks volumes that some open source lists offered free Itanium systems to people willing to develop for them, and no-one was bothered.
I suppose in a way it's really quite sad. There was Windows for Itanium, which is vaguely interesting, as is the early use of EFI, but if you look at other architectures such as POWER, SPARC, PA-RISC (prior to Itanium), Alpha, ARM and MIPS, they all have something more interesting, or different systems based upon them.
No viable NetBSD port, which is saying something - the Sharp Zaurus and PReP platforms are better supported! No OpenBSD port. The FreeBSD port worked somewhat better but was never supported beyond a VGA console.
It was Intel's new-architecture failure moment of the 00s. IBM tried vainly to switch the market to Workplace OS and POWER-based PReP in the mid 90s and crashed and burned, taking OS/2 with it. Intel didn't learn from this and decided that developers would just switch to Itanium, honest. How many historical precedents does it take to prove that unless your new platform has an order-of-magnitude improvement over the old one, incremental improvement is better - and that even an order of magnitude may not be sufficient?
-
-
-
Saturday 13th April 2024 12:22 GMT BinkyTheMagicPaperclip
Re: IBM tried vainly to switch the market
Depends how you measure it. The last Wii U physical game was 2020, the last eShop game 2022. The eShop stopped new purchases in 2023, and Nintendo's Wii U gaming servers were turned off only this week. The eShop still remains available for downloading previously purchased content.
Lovely console, despite the flaws[1], and a spotty line up of games in its lifetime.
[1] The number of cables needed to use it to its full extent is excessive: a power cable for the console; a power cable for the gamepad; *another* power cable for the gamepad, because the first one is plugged into a gamepad dock but the battery only lasts three hours, so an extended play session needs another power source; a power cable for external USB storage, because the internal storage fills up fast; a cable for the sensor bar; and a power cable for a USB hub, because the built-in wireless is awful so you need USB Ethernet, but you also want to run a USB-based Lego Dimensions pad and the USB-based external storage without using the front USB ports.
-
-
Saturday 13th April 2024 11:53 GMT BinkyTheMagicPaperclip
Re: IBM tried vainly to switch the market
I didn't say the architectures were all dead - just mostly. There's Raptor Computing Systems, who will sell you a reasonably priced (for POWER, but still not particularly cheap) POWER9-based, completely open source system, and it's certainly the closest thing to a non-x86 system performing at roughly the same level.
Sometimes I do wish I had one, but it's a lot of money for a system I probably wouldn't use enough.
-
Saturday 13th April 2024 12:24 GMT Anonymous Coward
Re: IBM tried vainly to switch the market
Can't wait to see that Spinal Tap CPU in a live performance: -mcpu=power11
-
-
-
Tuesday 16th April 2024 07:12 GMT fajensen
Re: eh, who cares
It speaks volumes that some open source lists offered free itanium systems to people willing to develop and no-one was bothered.
An outfit I worked for disposed of a bunch of AMD Athlon space-heaters by putting them on a pallet in the street.
It's a university town, and the students here are like ants: someone finds a crumb somewhere and they all show up.
-
-
Friday 19th April 2024 13:00 GMT johnny-mnemonic
GCC 15, not GCC 14
Regarding:
> Support for the ia64*-*- target ports which have been unmaintained for quite a while has been declared obsolete in GCC 14. The next release of GCC will have their sources permanently removed.
This comes from - most likely - the future changelog for GCC 14 (https://gcc.gnu.org/gcc-14/changes.html). When that page speaks of "The next release of GCC" having just mentioned GCC 14 in the previous sentence, it of course means GCC 15, not GCC 14. I think this is straightforward to understand. And if in doubt, this could have been easily confirmed by talking to the GCC release manager, like I did.
So yeah, ia64 was marked obsolete during the development cycle of GCC 14, but it's not gonna be dropped with the release of GCC 14.
And for the future of ia64 support in the GCC please refer to this thread here: https://gcc.gnu.org/pipermail/gcc/2024-March/243432.html