"...everyone who's still onboard the Itanic..."
Very nice. Cheers, Liam.
Good news, everyone – well, everyone who's still onboard the Itanic, anyway. GCC 15 will de-deprecate Linux support for Intel's original 64-bit chip. The development team of the GNU Compiler Collection has accepted a code patch from hero developer René Rebe of Berlin-based ExactCODE, with the crowd-pleasing announcement: The …
Well, exactly. That sounds much like what Donald Knuth said in 2008, when he commented that "the "Itanium" approach [was] supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write".
Since the entire philosophy and design of the Itanium was built around, and completely reliant upon, the assumption that instruction scheduling and ordering should and *could* be done at compile time rather than dynamically at run time (thus freeing up die space that would otherwise be required for scheduling hardware), the fact that such compilers turned out to be not just difficult but "impossible" to write was a pretty fundamental problem...!
It sounds essentially as if Intel released the Itanium assuming, or (by that stage) hoping, that such compilers would and could be created if they and the industry just threw enough time and resources at the problem. And that they were wrong.
I suspect that by the time of release they already knew this was a much bigger problem than first assumed. Most likely they'd started development taking for granted that the required compilers would be at least doable and ready by the time the chip came out... and by the time it did, and they weren't, Intel were already in way too deep, to the tune of many billions of dollars, to pull out.
Ironically, the concerns that Itanium was meant to address (including that dynamic, out-of-order and speculative execution couldn't be calculated and scheduled efficiently at runtime) were already being proved unfounded by CPUs including Intel's own newer x86 models.
As far as I'm aware, Itanium also had numerous other problems, including the poor performance of its x86 compatibility mode, and the fact that the architecture in general was aimed at the needs and state of the early-90s market, which had already moved on by the time of its release.
More interesting Itanium info:-
* Here
* See the *replies* to this comment of mine
To be fair, avoiding "dynamic, out-of-order and speculative execution" could have avoided whole classes of security vulnerabilities (Spectre, Meltdown etc), the mitigations for which can have a severe impact on performance.
I wonder if Itanium might actually compare reasonably well now, if compared to x86 with all those mitigations enabled?
Anybody here still running Itanium CPUs and care to explain why? I installed/maintained one Itanium HP-UX system years ago, but that was just a stop-gap meant to be used for a couple years while the application was ported and QA'd. The writing was on the wall pretty early on that it wasn't going to be around long, so a terrible ecosystem to start to buy into.
If you need higher single-core speeds or better RAS than x86-64, IBM/POWER seems to be the only option left. Their exorbitant prices for equipment and support keep them afloat despite ever-declining market share.
Though the latter would be easily solved if Dell/HP/etc servers had a BIOS option to run CPUs in a "mirror mode" the way they allow you to do so with RAM... Cheap commodity servers could have the best RAS around.
"Though the latter would be easily solved if Dell/HP/etc servers had a BIOS option to run CPUs in a "mirror mode" the way they allow you to do so with RAM... Cheap commodity servers could have the best RAS around."
We are using VMware Fault Tolerance for some specific workloads, which in effect mirrors the CPUs and memory of VMs between hosts. On cheap commodity servers with the mirrored RAM as well.
VMware FT itself requires enterprise licenses so it's not exactly cheap to provide proper RAS for a single system, but if you already have the VMware infrastructure then FT is easy and cheap to implement.
Do other hypervisors provide the same feature? I dunno.
> Now, can someone come up with an emulator for the things, please?
Aren't they dead easy to find for free lying behind most skips?
I have a couple of HP models, actually quite fond of them, but I just can't really justify running them because the energy usage isn't great for the performance they output.
> Aren't they dead easy to find for free lying behind most skips?
No. They're nearly as scarce as hen's teeth.
Sales were just barely in the tens of thousands of units in their best years.
https://www.theregister.com/2005/02/28/itanium_04_sales/
The improved but incompatible American model of the ZX Spectrum, the Timex Sinclair 2068, sold 80K units.
https://www.retrothing.com/2009/05/timex-sinclair-2068-computer.html
There are probably more TS2068s out there than Itanic boxes.
I have an Itanium CPU that I pulled from a server at work that was being scrapped a number of years back, mainly just grabbed it as a random collectible oddity, but that does remain the only Itanium system I've ever actually come across in my career. Every other server at that place was x86.
How good is the gcc compiler for a VLIW machine such as the Itanium?
A couple of decades ago we had a lab full of different machines for testing the client software which talked to our hardware boxes. This included Xeon E5 v3, Opteron, Power 8, Sparc64-X, plus an Itanium 9100 and 9540. The Itaniums were the slowest of the lot by quite a way when running code compiled by g++ with -Ofast. I'd always assumed there must have been a commercial compiler from Intel which could take better advantage of the architecture, but was there?
> I'd always assumed there must have been a commercial compiler from Intel which could take better advantage of the architecture, but was there?
As far as I'm aware, no- and that was the entire problem.
The Itanium's design was fundamentally built around, and entirely reliant upon, the assumption that instruction scheduling, reordering, allocation et al would, and more importantly *could*, be done better in advance by the compiler rather than in hardware at runtime.
And it turned out that this assumption was wrong and that, as Donald Knuth himself later noted (and I mentioned here), "the "Itanium" approach [was] supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write".
"the assumption that instruction scheduling, reordering, allocation et al would, and more importantly *could*, be done better in advance by the compiler rather than in hardware at runtime.
And it turned out that this assumption was wrong and that, as Donald Knuth himself later noted (and I mentioned here), "the "Itanium" approach [was] supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write"."
This is interesting in itself, viz. that run-time resource allocation and scheduling can access a much larger problem space than any static algorithm, unless "impossible to write" means impracticably difficult for human compiler constructors, rather than "possible to write" being provably false (in all models).
Probably simplistic, but I could imagine a compiler that uses the hardware's dynamic algorithms to perform simulated execution on some intermediate representation before generating optimised code.
At least we won't have to face the consequences of dodgy AI-written ia64 compilers generating hallucinating binaries.
I don't think Knuth literally meant "provably impossible" back then, just practically, but coming from him that's damning enough regardless.
Somewhat like you, I've assumed previously that the compile-time approach to scheduling failed because it was both static and unable to be aware of the state of the system at runtime and take advantage of that knowledge.
That does smack of being a possibly fundamental limitation with *any* compiler-led approach, but whether that's provable, let alone proven, isn't something I'm aware of having been done to date?
TI also had a VLIW-architecture DSP chip around the millennium, and it had the same issue: it only delivered anything like the promised speed for a very few hand-crafted library calls. Anything that relied on the C compiler was very hit-and-miss, mostly miss, as it struggled to deal with the issues of parallelism in the chip. It also had a deep instruction pipeline, so on any code branch you lost a significant amount of time. For some code that was not an issue, but in my experience it was very disappointing for 90% of the jobs it had to do.
I know, in an ideal world with infinite resources it's 'nice to have', but frankly I'd rather see someone properly emulate IBM PReP systems before they bother with Itanium; at least then the general populace could run some rare operating systems such as OS/2 PowerPC (the alpha that is labelled 'release'), or Solaris PPC.
Given that no-one cared when offered a free Itanium system on the NetBSD mailing list, and that every time I saw an Itanium system on eBay my thought was 'why bother?', even when they dropped to a couple of hundred quid, I just can't see this flying. There are still 'bargains' to be found, but it doesn't run anything recent. NetBSD never had a formal release. FreeBSD and Linux dropped support years ago. Windows releases are ancient. OpenVMS is a chargeable option, and why not just run it on x64?
You'd be better pursuing POWER systems if you want to do non Intel, at least they have some future.
> You'd be better pursuing POWER systems if you want to do non Intel, at least they have some future.
Do you mean the POWER/PowerPC architecture? Even that seems to be legacy at best rather than having a serious future.
I got the impression that the world, or at least big business, was moving towards ARM-based systems as the serious non-x86 alternative to Intel in datacentres et al (in addition to ARM's longstanding use in lower-end devices).
There's still a new Power generation on the horizon, and more in the roadmap.
AIX may be approaching legacy status, but the synergy with the mainframe, plus the customers who still want to run IBM i and even AIX will keep Power going for a few cycles more.
Power 10 and later are neither POWER (the original RS/6000 processor) nor PowerPC any more. It's evolved beyond both of these.
That's very informative, thank you. That said, I'm not sure it entirely contradicts my assertion about it being "legacy"- it does sound like even its future and development is being driven mainly by areas it's already established in rather than entirely new uses, the latter of which seem to be dominated by talk of ARM and speculation about RISC-V.
Power 10 and the like are large and expensive processors. The newer versions will continue in this vein. While ARM and RISC-V processors will continue to evolve upward to approach the performance of Power and x86_64 processors, there will be quite a few workloads that are better run on high speed, super-scalar processors. Not everything looks like a Web server.
The smaller Power and remaining PowerPC processors that are still manufactured are surprisingly quite popular as embedded processors, but as these are relatively limited function devices, there's nothing really that needs changing. ARM and RISC-V are already displacing some of these devices, but in general, the embedded PowerPC processors are still pretty cheap and well understood, and do what they have to do quite well, in the same way that Motorola 68x00 derivatives, and even Z80s, up until they stopped making them, did.
Hi Liam,
yes, Qemu does cover PReP, but it is incomplete. It doesn't handle endian switching properly, and even then it's likely to need extra work to get the IBM BIOS running on it.
As far as I'm aware, PReP systems were only produced by IBM. When they changed to CHRP Apple got on board.
I was sure I had seen IA64 support in QEMU and after a little searching I found support was dropped in version 2.11 (2017). If anyone wants to spend months brushing the dust off this it would be interesting to find out if emulation on a modern CPU is now faster than real hardware.
Something like 15 years ago HP-UX/IA64 was quite a popular platform for SAP, and I saw many such systems then. But it's years since I last saw one. I think it was about 5 years ago, in one of the banks, running a huge archiving database. Probably migrated already, because that HW was pretty old even back then.
But one of my friends is currently working on a large OpenVMS migration from almost-20-year-old Itanic-based HW. He says it's so complex that these old boxes will probably still be running in 5 years.
> Also, this is just the GNU C compiler. It doesn't mean that you can build new versions of the Linux kernel. Kernel 6.7 dropped Itanium support and it still doesn't look like it's coming back.
Yeah, well, in reality we, that maintain Linux/ia64, didn't miss a single mainline release or release candidate. **Everything** since and including v6.7-rc1 runs on our machines (details below).
If you don't want to rely on hearsay, myths or outdated information about Linux/ia64, head over to http://epic-linux.org/ and get the relevant information.
****
Out-of-tree support for Linux/ia64 is there and maintained (Tomas' fork was created beginning of November 2023 already, see "created_at" in [1]), as well as for the glibc. Just check the repos at:
https://github.com/linux-ia64/
For source code per Linux mainline release (candidate) you can refer to:
https://github.com/johnny-mnemonic/linux-ia64
[1]: https://api.github.com/repos/linux-ia64/linux-ia64
In addition: Linux mainline release (candidate) kernels are regularly tested on actual hardware (rx4640, rx2620, rx2660, rx6600 and rx2800 i2) and in ski under x86_64. Also all currently active stable kernels are build-tested and run in ski on x86_64, too. For the latter you can check for example the results on [2].
[2]: https://github.com/linux-ia64/linux-stable-rc/actions
HTH
P.S.
Seeing that other comments seem to get through moderation quickly (even on Sunday), I am sending this again in the hope that it gets published now. My first comment, identical up to the P.S. section, seems to have been in limbo for 21 hours for an unknown reason.
[Author here]
This is absolutely fascinating.
From this, I have also found http://epic-linux.org/
(HTTP in 2024? No wonder it's not widely discussed!)
That contains personal attacks against me by name. I am not happy about that. It says a correction was sent. I checked; it means you left a comment.
Firstly, you should know that most Reg authors follow the simple rule of "never read the comments." I'm one of the only ones who does, and as a rule I only check for a day or two, if that.
So: for future reference, leaving a comment *isn't sending a correction.*
Secondly, we have a corrections address, as described here:
https://www.theregister.com/Profile/contact/
Thirdly, to contact any Reg author, there's a comment form right at the top of every Reg article. As far as I can see you have not used that.
As a counter-example: Tomáš Glozar of Red Hat Brno _did_ contact the editor about the GCC 14 article and we changed it.
Now I know about this, I will go look for more info.
But emailing us is the way to tell us about errors, or to let us know about new projects, continuations, etc. Leaving a comment is *NOT* how to do it. Our hero moderators do their best, but it's a low-priority task, which is why you got duplicated comments here.
@Liam Proven:
What I can read on http://epic-linux.org/ are statements that point out a fact, i.e. that someone's wrong. Since when does that equate to personal attacks? Well, it's a public community website from the Linux/ia64 community for the Linux/ia64 community and everybody else interested, so everybody can convince themselves of the quality of the statements there. And as of this afternoon there is even a copy of it in the Internet Archive, so those statements are frozen in time for the foreseeable future. Thanks to whoever did that. Good websites should absolutely be in the Internet Archive.
I think we can leave it at that now. And sorry for my ignorance regarding the procedures at the Register. I now know how things work if errors are made again, and you have the whole picture about Linux/ia64 now.
I consider that a productive outcome for everybody involved.
Cheers.
P.S.
Coincidentally, I have been working on getting more use out of ski for a while now, and in the meantime consider it quite useful, i.e. compared to nothing at all for ia64 emulation. ;-) I am not aware that it is only a partial CPU emulation; I think they even integrated the IA-32 emulation into it (or at least a "subset" of it, see page 66 of [1]), but I haven't studied it in all detail. So expect more guidance on using it in the future.
[1]: https://github.com/trofi/ski/blob/master/doc/manual/SkiManualMasterDoc.pdf
Hi -- just to say, yes, if you see something wrong with an article, please contact corrections@theregister.com, at least so that we editors are aware of the problem, and we'll act on it ASAP.
Leaving a comment is like putting a bug report in a GitHub issues comment, when really, you should be opening a new issue (emailing us in this case). Cheers,
C.
> I consider that a productive outcome for everybody involved.
Well, good.
As far as I can see, though, you still haven't emailed me. I suggest you do that and we can talk about the effort and then, given some more info, I could write about it.
Meme quotes aside... It would be cool if we could look at CPU architectures again, given how x86 is even more absurd than it was when AMD bent humanity over that one time. The number of (major/security) bugs that have happened since, which simply wouldn't have happened on Itanium, is nuts.