AMD sharpens silicon swords to take on chip and AI rivals

Once the relative minnow of the chip industry, AMD senses blood in the water following a series of missteps by arch-rival Intel, and head honcho Lisa Su is wasting no time in talking up its game plan to investors. The fabless semiconductor biz has long played second fiddle to that other Santa Clara chipmaker, as well as Nvidia …

  1. DXMage

    But AMD has clearly stated that they've given up on GPU leadership

    AMD has clearly stated that they have given up on taking the GPU crown for now, so as to try and take the middle tier instead. All Nvidia has to do is shave their memory speeds a bit and then they take the middle tier as well. Unless AMD aims to cut prices to practically zero margins, they won't take market share from Nvidia, unless they bring something huge, like WAY better RT that spanks Nvidia in the mid tier and costs significantly less.

    1. Jr4162

      Re: But AMD has clearly stated that they've given up on GPU leadership

      I don't think Nvidia or AMD care about the gaming market. To be fair, the layoffs in the gaming industry wouldn't encourage me to make it a significant priority if I was in their shoes.

      I'd probably use lower-binned chips or chiplets from my AI products for gaming and focus my energies on AI and AI-based cards that are able to game, but not the other way around. This means getting rid of the low-memory / low-end stuff and releasing cards with 16GB VRAM as a minimum. For the cheaper cards, use slower GDDR5 VRAM to make the card affordable compared to an 8GB card with GDDR6 if necessary.

      1. Anonymous Coward

        Re: But AMD has clearly stated that they've given up on GPU leadership

        A dizzyingly missed opportunity really, seeing how gaming's often the "gateway drug" to coding, a stepping stone (pun intended) for getting the youngest ones addicted to speed (another pun?), spending red-eyed sleepless night after red-eyed sleepless night refactoring an algorithm, refining it, fixing it, for the highest kicks in performance, that smoke the competition ... of other youthfully addicted coders.

        Get them hooked to your product lines when young, and they'll likely continue using it into their professional capacities later on. Nvidia's CUDA perty much followed that path into becoming the virtual "cocaine" of AI programming IMHO (not literally, of course ...). Gotta look at these things long-term, and engineer the conveyor belt just right (and without evil)!

        1. Anonymous Coward

          Re: But AMD has clearly stated that they've given up on GPU leadership

          The people on my uni campus who play video games in that way are NOT the people developing useful (non-game) software.

          Just an observation.

    2. IGotOut Silver badge

      Re: But AMD has clearly stated that they've given up on GPU leadership

      These days they probably make more money from a single hyperscale bit barn than from all the gamers combined.

    3. Fido

      Re: But AMD has clearly stated that they've given up on GPU leadership

      When I read the Tom's Hardware interview with Jack Huynh, AMD's senior vice president and general manager of the Computing and Graphics Business Group, I initially had the feeling that leadership was clueless and lacking.

      After reflection, I decided that trying to sell what the engineers made is better than promising zettascale compute.

      I think AMD unified the engineering behind Radeon video cards with the Instinct GPUs but badly presented this as deprioritising gaming.

      Right now AMD's CUDA equivalent--ROCm--is not well supported on Radeon video cards. As games may soon include a nontrivial element of AI inference, getting ROCm to work on all AMD GPUs seems essential to me. Unfortunately, this unification seems to have led to the next-generation Radeon cards not being competitive for high-end gaming.

      Although a senior vice president can only sell what the engineers can make, I would have characterised the lack of a flagship gaming card as a necessary unification of two diverging GPU architectures, so that ROCm-driven AI libraries can run everywhere.
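
      For what it's worth, a quick way to check whether ROCm actually sees a given Radeon card is via a ROCm build of PyTorch (a minimal sketch, and my own choice of tooling, not anything AMD prescribes; on ROCm builds the torch.cuda API is backed by HIP):

          import torch

          # On ROCm builds of PyTorch, torch.version.hip is set and the
          # torch.cuda.* API is backed by HIP rather than CUDA.
          print("HIP runtime:", torch.version.hip)  # None on CUDA-only builds

          if torch.cuda.is_available():
              for i in range(torch.cuda.device_count()):
                  print(f"Device {i}: {torch.cuda.get_device_name(i)}")
          else:
              print("No ROCm-visible GPU; this card may not be supported "
                    "by this ROCm release")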

  2. Anonymous Coward

    A key issue is availability. Try to buy a laptop with an AMD CPU and GPU. Most offerings are Intel/Nvidia or AMD/Nvidia. There is a lot of demand for them from Linux users, given their open-source drivers and good power/performance profile, but they're nigh impossible to find (or, if you find them, only at a premium).

    It matters little if your offering is better in FLOPS/Watt or FLOPS/$ if it can't be bought (laptops, that is; I can't speak for server/desktop).

    1. David Hicklin Silver badge

      > A key issue is availability. Try to buy a laptop with an AMD CPU and GPU

      But that is not the fault of AMD - that is the laptop manufacturers being wedded to the WINTEL model and unwilling to change.

      1. O'Reg Inalsin Silver badge

        Or is it that catching up with NVIDIA firmware/hardware is hard?

        1. Dostoevsky

          My laptop's RTX 3060Ti works flawlessly. Even Linus has said Nvidia went from being on his shit-list to being a useful contributor.

  3. John Savard

    Nvidia isn't Intel

    If AMD is gunning for the top spot in AI accelerators, then indeed it could overtake Nvidia in gaming GPUs as well, since what AMD lacks isn't raw shader power, but the fancy AI-related features that Nvidia pioneered in GPUs.

    But I think I'll wait for some evidence that they're achieving results before getting my hopes up.

    And Intel can either correct its missteps, or go fabless - I don't expect it to be behind AMD forever, either.

    But as long as AMD remains a formidable competitor, it can always become #1 again even if it can't permanently claim the top spot.

  4. StargateSg7

    The AMD Instinct GPU cards are SURPRISINGLY EXCELLENT due to their 192 GB of onboard memory, which is AMAZING for A.I. and advanced data-mining tasks. Calculation-wise they don't approach the individual NVIDIA GPU cards, BUT their larger memory size and wider memory bandwidth more than make up for it, AND they are CHEAPER, so you can buy two and get more performance overall than the NVIDIA options.

    If I were the CTO (Chief Technology Officer) of a Top-Ten-Thousand-Companies-To-Work-For company and wanted to take some A.I. compute and data-mining tasks in-house and AWAY from Amazon AWS or Microsoft Azure Cloud, I would be seriously thinking about spending some company money on a 1,000-card bulk buy of AMD Instinct MI300X GPU cards (5.3 TB per second memory bandwidth) at $17,000 USD per card (i.e. total cost: $17 million USD), racking them high in a local bit-barn warehouse someplace with really cheap electrical power like British Columbia or Quebec (i.e. at 10 cents a kilowatt-hour), and making that 1,000-GPU network do advanced A.I.-based data mining and product-development science 24/7/365.

    At 16-bit precision and 2.61 PetaFLOPS per GPU card, a 1,000-card array would give my deep-data-mining software and my A.I.-centric LLM (Large Language Model) and Stable Diffusion image-generation models over TWO EXAFLOPS of 16-bit supercomputing horsepower for in-house product and services development! That would make me some SERIOUS MONEY on new-found products and services, with a less-than-two-year return on investment (ROI).

    It's a HECK OF A LOT CHEAPER than an NVIDIA GPU array processor solution!
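
    A quick back-of-envelope sanity check on those numbers (my own sketch using the figures quoted above; the 750 W per-card power draw is my assumption, not an official spec):

        # Back-of-envelope check of the figures quoted above.
        cards = 1_000
        price_per_card_usd = 17_000      # quoted price per Instinct card
        pflops_fp16_per_card = 2.61      # quoted FP16 throughput per card
        watts_per_card = 750             # ASSUMPTION: rough MI300X-class board power
        usd_per_kwh = 0.10               # 10 cents per kilowatt-hour (cheap hydro)

        total_cost_usd = cards * price_per_card_usd          # $17,000,000
        total_eflops = cards * pflops_fp16_per_card / 1_000  # 2.61 EFLOPS

        annual_kwh = cards * watts_per_card * 24 * 365 / 1_000
        annual_power_usd = annual_kwh * usd_per_kwh          # ~$657,000/year

        print(f"Hardware: ${total_cost_usd:,}")
        print(f"Aggregate FP16: {total_eflops:.2f} EFLOPS")
        print(f"Power bill: ~${annual_power_usd:,.0f}/year for the GPUs alone")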
