HPC won't be an x86 monoculture forever – and it's starting to show

Remember when high-performance computing always seemed to be about x86? Exactly a decade ago, almost nine in ten supercomputers in the TOP500 (a list of the beefiest machines maintained twice yearly by academics) were Intel-based. Today, it's down to 57 percent. Intel might once have ruled the HPC roost but its influence is …

  1. FIA Silver badge

    Feature Remember when high-performance computing always seemed to be about x86?

    Not related to the story, but that has just made me feel very, very old, as I remember when high-performance computing considered x86 a joke...

    <lays a solitary flower at the grave of Alpha>

    1. seven of five Silver badge

      Same. ASCI White. Those were the days, when I was young(er).

  2. williamyf Bronze badge

    «RISC-V backed away from that standard. They said "this is absolutely insane," he explains. "Why don't we do a clean from-scratch design, clear the slate, clear the room, clear the whiteboard, and do things right from the get-go?"»

    And then they turned around and based 90% of RISC-V on the venerable (and old) MIPS architecture, which also had plenty of baggage.

    1. DS999 Silver badge

      Yes RISC-V is what you get when a bunch of academics who have never done high performance CPU design or cutting edge compiler design create a new ISA.

      It was never created to be a general-purpose real-world ISA; it was created as a THREE MONTH summer project to fill a research need. It was designed to be small and simple and to perform only the minimal tasks they needed for that project. It was extremely bare bones, but more and more stuff has been added over the years, without any grand design to lead it all. If they had wanted to create a viable new general-purpose high-performance ISA, they would have created one that had everything it needed from the start, with plans for growth and control over what could be officially added, rather than letting anyone implementing one go their own way.

      While there is a "standards body" of sorts for it, it has no real power, so if some big player like Qualcomm were ever induced to switch from ARM, their market power would dictate that whatever they did became the de facto standard, even if it conflicted with what others are doing in the RISC-V space. So the people who want that sort of thing to happen should be careful what they wish for.

      It is hilarious how that POS architecture has got so much love in the tech community just because it is open source, and they see it as a counter to ARM's licensing payments or to dragging along x86's 45 years' worth of useless legacy. Not that it is perfect, but PowerPC is open and it is far better than RISC-V, though a clean-sheet ISA made by people who really know their stuff would beat them all.

      1. jake Silver badge
        Pint

        Couldn't have said it better myself. I'll bet we both get downvoted by the ignorant GreatUnwashed :-)

  3. jake Silver badge

    Helpful hint

    Rewrite the article from the perspective of the AI bubble bursting. That way you'll be ready when the inevitable happens.

    1. Like a badger Silver badge

      Re: Helpful hint

      Indeed. Like newspapers with complete libraries of obituaries of the not-quite-dead, ready to be deployed the moment the relevant person's sand timer runs out. Which makes me wonder, how soon do they start? At what age were Musk or Zuck's obituary-to-date first written?

      1. Roland6 Silver badge

        Re: Helpful hint

        > At what age were Musk or Zuck's obituary-to-date first written?

        Don’t know, but my money is on Musk’s being published first…

        1. Anonymous Coward
          Anonymous Coward

          Re: Helpful hint

          Yeah, it's odd we're still waiting for that sweaty cage deathmatch showcase showdown ... that could really benefit the whole of humanity, with great efficiency, imho! ;)

  4. bazza Silver badge

    Fugaku…

    …is still growing I think.

    I like Fugaku because it’s pure CPU/Vector processing, and a lot of its performance comes from the very efficient Tofu Interconnect fabric welded into the CPUs. It can really shift data like nothing else.

    1. Korev Silver badge
      Coat

      Re: Fugaku…

      Tofu is a tasty interconnect

  5. This post has been deleted by its author

  6. borje

    I remember the time when different CPU and systems architecture were good for different applications and type of workloads.

    If you wanted good performance on the SGI NUMA machines, you just used half of the CPUs and made sure you ran one process on each 2-CPU part. If you wanted it to perform really badly, you ran more processes than you had CPUs, and after a while all processes were running against remote (slow) memory. On a Sun Starfire (E10000) you saw only a very small slowdown when overloading the system.

    Different CPU architectures were good for different applications, and of course correct use was critical. I remember one HPC app that ran almost 2x faster on SPARC after changing from standard 8k pages to 4M large pages. And to get to the point: I think the default page size is a bigger problem than a few old instructions lying around unused in the silicon. My first PC had 384kB of RAM, and back then a 4kB page size was quite reasonable. But using that same 4kB page size when you have TB of memory means a lot of overhead, as TLB entries need to be repeatedly loaded from memory. I also have the feeling that many applications are no longer well optimized.
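    To put rough numbers on that page-size point, here is a back-of-the-envelope sketch in Python. It assumes a naive flat table of 8-byte page-table entries (real MMUs use multi-level tables, so treat this as an order-of-magnitude illustration only), comparing 4 KiB pages against 4 MiB large pages when mapping 1 TiB:

    ```python
    # Rough page-count and page-table overhead for mapping 1 TiB,
    # comparing a 4 KiB page size against a 4 MiB large page size.
    # Assumes 8 bytes per page-table entry and a flat table; real
    # MMUs use multi-level tables, so this is only illustrative.

    TIB = 2**40
    PTE_BYTES = 8

    def overhead(page_size, mapped=TIB):
        pages = mapped // page_size       # number of pages needed
        table_bytes = pages * PTE_BYTES   # naive flat table size
        return pages, table_bytes

    small_pages, small_tbl = overhead(4 * 2**10)   # 4 KiB pages
    large_pages, large_tbl = overhead(4 * 2**20)   # 4 MiB pages

    print(f"4 KiB pages: {small_pages:,} pages, {small_tbl / 2**30:.1f} GiB of PTEs")
    print(f"4 MiB pages: {large_pages:,} pages, {large_tbl / 2**20:.1f} MiB of PTEs")
    ```

    Each 4 MiB page covers 1024 times the memory of a 4 KiB one, so the TLB can hold the same working set with roughly a thousandth of the entries – which is where the large-page speedup comes from.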

    I do hope that in the future we will see more varied CPU architectures in HPC. Someone claimed that RISC-V has a lot of MIPS heritage, and that might not be so bad – at least that heritage is less of a burden than what x86 carries.
