Nvidia's accelerated cadence spells trouble for AMD and Intel's AI aspirations

In the mad dash to capitalize on the potential of generative AI, Nvidia has remained the clear winner, more than doubling its year-over-year revenues in Q2 alone. To secure that lead, the GPU giant apparently intends to speed up the development of new accelerators. For the past few generations, a two-year cadence was enough to …

  1. Vincero

    "But the point remains: Nvidia intends to roll out new GPUs faster than ever."

    What are you talking about...? Early on, Nvidia used to release a new GPU family every year: Riva 128 (1997), TNT (1998), TNT2 and GeForce (1999), GeForce2 (2000), GeForce3 (2001), etc.

    Sure, there were a few misses along the way, but Nvidia are one of the few companies that consistently pushed a yearly release cycle back in the day. Even if you didn't get a new architecture, you'd get an improved offering in some way, be it a process shrink or architectural tweaks (e.g. G80 > G92, which improved texture processing even though the main core architecture stayed more or less the same).

    Intel's Tick-Tock methodology wasn't revolutionary either. But thanks to slowing improvements in manufacturing and changing market conditions, neither company has really stuck to the same release schedule.

    Also, let's be realistic: the faster they release new chips, the less time they have to recoup the cost of bringing each one to market. And lowering the product tier of existing chips (which may not get much cheaper to make) so they can sell alongside the new top-tier chip may not stay sustainable over time.

    Much like in the early 3D accelerator days, there are still some new developments to take advantage of to boost performance, but I expect the rate of development will slow far sooner than it did for 3D graphics.

    1. diodesign (Written by Reg staff) Silver badge


      OK, fine: back in the day, as you say, it issued PC GPUs annually. We're writing in the context of today, and specifically Nvidia's response to AI GPU demand. Lately Nvidia has been on a two-year cadence, as I'm sure you know.

      I've tweaked that sentence. If you spot something wrong, do let us know, please.


      1. Vincero

        Re: History

        Yep, and this will only be the case so long as there is money to be made. Once the advancements dry up and shareholders start to question why profits versus R&D costs are dropping, we will no doubt see the same malaise we saw when AMD stagnated with the later GCN-era parts (Fury/Vega) and Nvidia could just leverage Pascal/Turing for a longer period, or when Intel kept remixing Skylake whilst AMD transitioned to Zen.

        In terms of the AI space, I suspect they are reacting more to Intel's sudden uptick in performance and focus than to AMD, as Intel also has the in-house resources needed to compete in edge device scenarios, which AMD doesn't have yet.

      2. Headley_Grange Silver badge

        Re: History

        In the good old days there was a link below articles for reporting errors.

  2. IgorS

    NVLink attached NIC?

    "The NVLink mesh is also only good for GPU-to-GPU communications"

    I don't see why NVIDIA could not just add NVLink support to their NICs.

    They own the whole stack, so it is within the realm of possibilities.

    The larger problem I see is the partnership with other vendors, like Cray, who would have a hard time doing the same.

    1. Anonymous Coward

      Re: NVLink attached NIC?

      Or just put Mellanox directly on the GPU card.

      Who needs your pathetic PCIe bus?

    2. Matt1685

      Re: NVLink attached NIC?

      NVLink doesn't include a physical layer. It uses the PCIe physical layer or, with POWER, it used what OpenCAPI uses.

  3. Yet Another Anonymous coward Silver badge

    Boeing vibes ?

    (Intel) backed out of the latter amid a restructuring of the division and cost-cutting measures.

  4. Henry Wertz 1 Gold badge

    NVlink expansion

    So, NVLink supports up to 256 devices. That sounds like they are essentially using a single byte in the protocol for addressing. But it's proprietary and only used by Nvidia: you have Nvidia cards using NVLink to connect through Nvidia-supplied NVLink switches, so they don't have to concern themselves with compatibility with other devices, or with following any industry standard. It seems to me they could just extend things to use a two-byte address (or go to four bytes if they're concerned they could have a cluster with over 65,536 devices), either as an incompatible NVLink change, or with some method to fall back to 256-device NVLink when newer cards are used in an older system with older NVLink hardware.
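    The arithmetic behind that guess is just address-field width: each extra byte of address multiplies the number of distinct endpoint IDs by 256. A minimal sketch (illustrative only — NVLink's actual frame format is proprietary and not publicly specified, so the single-byte assumption is exactly that, an assumption):

    ```python
    # Illustrative only: how address-field width caps the number of
    # addressable endpoints in a fabric. The 1-byte case matches the
    # 256-device limit discussed above; wider fields are hypothetical.
    def max_devices(address_bytes: int) -> int:
        """Distinct endpoint IDs an N-byte address field can encode."""
        return 2 ** (8 * address_bytes)

    print(max_devices(1))  # 256
    print(max_devices(2))  # 65536
    print(max_devices(4))  # 4294967296
    ```

    The backward-compatibility idea above would amount to negotiating the address width per link, much as PCIe devices negotiate link width and speed, so older 1-byte-address hardware could still join a fabric capped at 256 devices.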

  5. Matt1685

    I think Nvidia wouldn't need PCI-E for its networking if they had an alternative physical layer such as an optical PHY.
