Upcoming Intel GPU to be compatible with Arm

Intel is diversifying its GPUs to work with architectures other than x86 chips, which could be a step toward making the chip maker a manufacturing-first company. Intel's upcoming GPU, called Ponte Vecchio, will be used in supercomputers with Arm-based chips made by SiPearl. Those computers will be in production in 2023 …

  1. JassMan Silver badge

    Oh the irony!

    Intel did not return request for comment on Intel GPU compatibility with ARM processors in PCs and servers.

    If they start selling these for bog standard PCs, they will have to drop the price to a more reasonable level. People will start asking why they are paying more than 10 times the price of a Raspberry Pi just for a CPU especially when ARM licences are so cheap.

    The world would be a better place if they did. Once the legacy x86 code is defunct, it should mean quicker builds for developers.

  2. Ace2 Bronze badge


    What is the claimed incompatibility between ARM and today’s GPUs? Endianness used to be a concern back in the PPC days. Or is it just that ARM support needs to be added to Intel’s GPU SDK?

    1. Nick Ryan Silver badge

      Re: Compatibility?

I was wondering this too. There are quite a few examples of high end GPUs being connected to RPis; the issue appears to be of Intel's making and nothing more.

      1. Tom 7 Silver badge

        Re: Compatibility?

I wonder if RISC-V compatibility is possible too? If now is not the time for these things to be built around open standards, is it already too late? Because if it is, Intel will probably lose out in the long run.

        1. Bartholomew Bronze badge

          Re: Compatibility?

Maybe? A 100% open source design, right down to the transistors, comes with the Power ISA for the hypervisor, which controls access to the GPU and the (quad) RISC-V ISA cores.

It is that way in part because of the RISC-V inner circle NDAs (read the "PowerPC" section of )

There will be some mind-blowing 1960s tech in LibreSOC, which they did attempt to get added to RISC-V:

          ".... and we’re looking for inspiration on how to make a modern, power-efficient 3D-capable processor, only to find it in a literally 55-year-old design for a computer that occupied an entire room and was hand-built with transistors!

          Not only that, but the project has accidentally unearthed incredibly valuable historic processor design information that has eluded the Intels and ARMs (billion-dollar companies), as well as the academic community, for several decades."

          1. Anonymous Coward
            Anonymous Coward

            Re: Compatibility?

            LibreSOC didn’t get their stuff into RISCV because they don’t know anything other than how to apply for taxpayer grants. Blaming the unfair submission process was a lot easier than admitting that the design was flawed.

In any case, that was many years ago and that project has predictably not delivered a GPU. Last I heard they decided not to deliver a SoC to go along with that…

            1. Bartholomew Bronze badge

              Re: Compatibility?

              > LibreSOC didn’t get their stuff into RISCV because they don’t know anything other than how to apply for taxpayer grants. Blaming the unfair submission process was a lot easier than admitting that the design was flawed.

As far as I know they were looking for the process to add, or request the addition of, VPU and GPU instructions to the RISC-V architecture. But to gain access to read the process required signing an NDA! And that is just fricking weird. Imagine if IANA required that an NDA was signed before you could read the process that might allow you to ask to be allocated a port, e.g. TCP port 25 (SMTP) or TCP port 23 (telnet) or ... (For about 30 years the process for the entire Internet was to contact Jon. Read RFC 2441 and RFC 2468 if you have no idea who Jon Postel was.)

              > In any case that was many years ago and that project has predictably not delivered a GPU.

              So the planned request for new PowerISA instructions is just for fun ?

              If you read their Non-Recurring Engineering (NRE) ballpark estimates to produce a 22nm chip they do plan to target multiple markets some of which may not even require a GPU.

But they do still need to develop the 3D GPU and VPU extensions and actually submit the request to the OpenPOWER Foundation for inclusion in the Power ISA. Part of it will be SVP64 for the 16 instructions.

              > Last I heard they decided to not deliver a SOC a to go along with that…

              So are you saying that the Libre-SOC prototype tape-out for IO and SRAM cell at 180nm was just done for giggles ?

              Can you give a reference to where you are getting this bad information AC ?

If the LibreSOC people were buying a bunch of off-the-shelf closed source IP and slapping blobs everywhere, they could have a functioning chip out the door 18 months after announcing it. But fully open source with no blobs is not easy, and adding innovations as well is definitely not a short, fast, easy path. E.g. even HDMI is a problem (the PHY includes HDCP, which requires closed source firmware), so DVI is an option, but that limits you to a maximum resolution of 2560 x 1600. Just identifying all the potential problems takes time. If you look into what these people are trying to do, it is amazing.

              And the big deal about the end product is that you could spin your own silicon and do a full security audit of everything once they are finished. You can't do that for an Intel or AMD CPU both of which have encrypted blobs that can never be audited.

      2. jotheberlock

        Re: Compatibility?

I would assume what this really means is 'Intel will provide ARM drivers and official support'. The only actual hardware issue I can think of between a CPU and a discrete peripheral would be a 64-bit peripheral where the CPU is 32-bit and can't generate 64-bit reads/writes (or where the peripheral has more than 4 GB of mappable memory that the CPU needs to talk to directly, of course - GPUs maybe?). Endianness is certainly a pain but not theoretically insuperable.

        1. Anonymous Coward
          Anonymous Coward

          Re: Compatibility?

These would be 64-bit CPUs, but it may be some other difference in memory model. Arm (and POWER, and RISC-V by default) has weaker memory ordering than x86 (or RISC-V TSO). The drivers might need barriers added to ensure the GPU sees accesses in the expected order.

          Also, the drivers may contain code (e.g. texture compression) that's been hand-optimized for x86 or makes use of x86-specific libraries, and this would need porting.

    2. sreynolds Silver badge

      Re: Compatibility?

Closed source drivers, and probably the LLVM-based compiler that builds code for the GPU not having been ported to ARM.

      1. Henry Wertz 1 Gold badge

        Re: Compatibility?

My guess -- and this is just a guess -- is that although current Intel GPUs present an interface to the programmer that appears to be PCI Express, they are actually using whatever interface suits Intel (since they are generally built into an Intel chipset, not on an add-in card). So Compute Express needs to be a standard that other vendors use (if it catches on at all), rather than some "standard" that Intel and only Intel uses (no problem when it's an Intel-compatible board taking only Intel CPUs to connect to an Intel chipset).

The standard on ARM (for on-board peripherals) is AMBA -- this was originally an ARM-only "standard", but is now used by MIPS, PowerPC, RISC-V, etc. systems, simply due to it being royalty-free, and additionally because ARM's "momentum" meant there were far more AMBA-compatible peripherals available than for a MIPS etc. (These peripherals are not usually purchased as physical chips; they are "resource blocks" placed on the main chip -- the ARM, MIPS, or RISC-V CPU and the peripherals are laid out on a single chip.)

I wonder if this Compute Express is going to be like an AMBA extension (so it can fall back to AMBA speeds), or if the CPU must support Compute Express? I mean, either way it's fine, but of course Intel may sell more GPUs if a vendor can decide to support full-speed Compute Express, or not bother and get whatever speeds they can over AMBA.

    3. Anonymous Coward
      Anonymous Coward

      Re: Compatibility?

If you can wire an ARM chip up to a GPU, I think most Linux GPU drivers compile for ARM. Whether they work or not largely depends on whether someone has debugged them.

There are YouTube videos of people who have got AMD GPUs to work.

  3. MikeLivingstone

    Nvidia needs taking down a peg

This is good news. Also, Intel has a good capability to integrate graphics into processors, so perhaps some firms will respond with Arm chips with integrated Intel GPUs in time. That said, Arm also has some great mobile GPUs. This type of innovation is definitely a reason to block Nvidia taking over Arm. I suspect that over time Nvidia and their expensive and energy-inefficient devices will become less relevant.

  4. mark l 2 Silver badge

I wonder if in a few years' time Intel will once again become an ARM licensee and start making their own ARM SoCs? After all, they have a precedent, as they were an ARM licensee up until 2006, when they sold their XScale architecture to Marvell.

Intel are pretty much cut out of the phone CPU arena, having already tried getting Atom x86 CPUs into mobile phones and failing to gain any market share, but coming back with an Intel ARM SoC could be a success for them this time?

    1. Anonymous Coward
      Anonymous Coward

      Re: "Intel will once again become a ARM licensee"

Were Intel ever really an ARM licensee by choice, or did they just happen to pick the licence up as part of their settlement with DEC (and in particular, Digital Semiconductor) that ultimately killed off Alpha? Edit: and then Intel sold off their ARM business anyway (Marvell?)

      1. Richard 12 Silver badge

        Re: "Intel will once again become a ARM licensee"

        I always assumed it was to use the M-class ARM silicon to handle CPU startup and chipset tasks, as it's probably cheaper to buy those designs in than to build your own microcontroller.

  5. Snapper Bronze badge


    Why on earth are Intel getting out of Taiwan? Do they know something we don't?

    1. Anonymous Coward
      Anonymous Coward

      Re: Taiwan

Redundancy. You really don't want to be building all your products in one country, especially when that one country is claimed by China. Nuclear weapons are currently the only reason we don't have a major war, and all it takes is one country to get one up on that and that's it. China does seem to be getting very cocky lately, which may or may not have something to do with their recent missile test reported in the news but claimed as a space mission. Who knows.

  6. Binraider Silver badge

    Intel is leaving Taiwan, because the Middle Kingdom is making it abundantly clear from all its sabre rattling that it wants to reunite the two countries by the 100th anniversary of the revolution.

In other news, Intel supporting ARM with its discrete GPUs is an intelligent choice, because ARM server hardware is a thing. ARM desktop is also a thing, albeit with a limited selection at this time. They can get ahead of AMD again by being at the spearhead of an architectural transition that, I think, we see the beginnings of in Alder Lake. On a global scale, how much power could be saved by using Arm instead of x86? Necessary evolution.

  7. amanfromMars 1 Silver badge

    For NEUKlearer HyperRadioProACTive IT Enrichment ....

    As chip design marches ever closer to the optimum singularity of excellent processing paths/gates/secure instruction sets for knowledge and action delivery systems servering general overall prime premium programs and projects purposes, is the vulnerable vital attack area for crack systems hacking concentrated on ever fewer disparate centres to overwhelm.

    As unlikely as it may seem, there is no human defence against myriad SMARTR chips communicating and processing information amongst themselves for themselves with humans left out of the loops which decides who and what knows what and whom ...... and what is to be done with what is then generally unknown and unfolding in fields resulting in future events.

    Many say that George Orwell's 1984 isn't an instruction manual and Colossus: The Forbin Project is just an entertaining film, however others may know something altogether fundamentally different.

  8. druck Silver badge

    Marketing incentive

It would seem strange to want to create a supercomputer from ARM chips paired with by far the weakest GPU maker. Could it be a marketing incentive? Or rather, several sack loads thereof.

    1. Richard 12 Silver badge

      Re: Marketing incentive

      It's entirely possible that Intel could eventually make a good GPU.

Or at least, one that's better than the others at similar price and/or power points when hosted on specific ARM and Linux platforms.

Power consumption is certainly one area with a fair amount of space for competitive improvement, and that's more important in supercomputer and datacentre environments than in gaming and workstations.
