Intel wants to run AI on CPUs and says its 5th-gen Xeons are ones to do it

Intel launched its 5th-generation Xeon Scalable processors with more cores, cache, and machine learning grunt during its AI Everywhere Event in New York Thursday. The x86 giant hopes the chip will help it win over customers struggling to get their hands on dedicated AI accelerators, touting the processor as "the Best CPU for …

  1. HuBo
    Thumb Up

    Nice gem

    Granite Rapids probably remains the upcoming chip to watch, but I see a couple of interesting takeaways on this 5th-gen Emerald Rapids chip. One is that, at equal core count (64), it is claimed to deliver not only equal or better performance than the EPYC 9554, but also better performance per watt. That's very nice if it holds up to scrutiny. The other takeaway is that this chip can run AI inference in INT8, via matrix operations (AMX), which I think is still rare in other CPUs (not 100% sure which Arm cores have SME -- the Scalable Matrix Extension -- yet).
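    For the curious, here is a minimal sketch of what those INT8 matrix operations look like through the AMX tile intrinsics -- a toy 16x16 tile multiply, assuming Linux 5.16+ (for the tile-permission syscall) and GCC/Clang with -mamx-tile/-mamx-int8; the tile shapes and test data are purely illustrative:

    ```c
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define ARCH_REQ_XCOMP_PERM 0x1023  /* opt in to AMX tile state */
    #define XFEATURE_XTILEDATA  18

    /* Tile configuration block consumed by _tile_loadconfig (palette 1). */
    struct tilecfg {
        uint8_t  palette_id;
        uint8_t  start_row;
        uint8_t  reserved[14];
        uint16_t colsb[16];   /* bytes per row of each tile  */
        uint8_t  rows[16];    /* number of rows of each tile */
    } __attribute__((packed));

    int main(void) {
        /* Linux 5.16+ requires an explicit opt-in before using tile registers. */
        if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
            perror("AMX tile data not permitted");
            return 1;
        }

        struct tilecfg cfg = {0};
        cfg.palette_id = 1;
        cfg.rows[0] = 16; cfg.colsb[0] = 64;  /* tile 0: C, 16x16 int32 accumulator   */
        cfg.rows[1] = 16; cfg.colsb[1] = 64;  /* tile 1: A, 16x64 int8                */
        cfg.rows[2] = 16; cfg.colsb[2] = 64;  /* tile 2: B, 16 rows of 64 int8 (VNNI) */
        _tile_loadconfig(&cfg);

        static int8_t  A[16][64], B[16][64];
        static int32_t C[16][16];
        memset(A, 1, sizeof A);
        memset(B, 1, sizeof B);

        _tile_loadd(1, A, 64);    /* load A with a 64-byte row stride             */
        _tile_loadd(2, B, 64);    /* load B (already in the 4-byte-packed layout) */
        _tile_zero(0);            /* clear the int32 accumulator tile             */
        _tile_dpbssd(0, 1, 2);    /* C += A * B, signed int8 dot products         */
        _tile_stored(0, C, 64);   /* write the accumulator back to memory         */
        _tile_release();

        printf("C[0][0] = %d\n", C[0][0]);  /* 64 products of 1*1 -> 64 */
        return 0;
    }
    ```

    (Build with something like gcc -O2 -mamx-tile -mamx-int8 amx_toy.c and run on a Sapphire or Emerald Rapids box; amx_toy.c is just a placeholder name.)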

  2. Omnipresent Bronze badge

    good luck.... you are going to need it.

    64 cores might be just enough to run the new Windows. It will surely be sneaking around behind your back while you sleep.

    Plan on twice the battery consumption as well.... next year.... sometime, if they don't turn off half the cores to make you buy a new computer in an "update".

    That's assuming the AI didn't off you first.

    1. HuBo
      Pint

      Re: good luck.... you are going to need it.

      I was looking at benchmarks for these Emeralds on Phoronix, ServeTheHome, and Tom's Hardware, and many showed that more cores produce better performance (e.g. HPCG, Graph500), as expected, but a few, including OpenFOAM and SciMark, showed the opposite trend, where chips with just 32 cores beat those with 48, 64, and 128 (on Tom's Hardware, iirc). Something to do with memory access bottlenecks (number of channels and throughput), I think.
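      Back-of-envelope, the bandwidth ceiling shows up pretty quickly. A rough sketch below -- the 8 channels of DDR5-5600 are the published spec, but the per-core demand figure is a made-up assumption, just to show the shape of it:

      ```c
      #include <stdio.h>

      int main(void) {
          /* 5th-gen Xeon: 8 channels of DDR5-5600 per socket (published spec). */
          double chan_gbs   = 5600e6 * 8 / 1e9;       /* ~44.8 GB/s per channel    */
          double socket_gbs = 8 * chan_gbs;           /* ~358 GB/s peak per socket */

          /* Assumed, illustrative figure: what one busy solver core might pull. */
          double per_core_gbs = 12.0;

          double crossover = socket_gbs / per_core_gbs; /* past this, cores mostly stall */
          printf("Peak socket bandwidth : %.0f GB/s\n", socket_gbs);
          printf("Bandwidth-bound after : ~%.0f cores\n", crossover);
          return 0;
      }
      ```

      With those (made-up) per-core numbers you run out of memory bandwidth around 30 cores, which would line up with the 32-core parts coming out ahead on the bandwidth-hungry runs.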

      1. Snake Silver badge

        Re: good luck.... you are going to need it.

        That's assuming most people's workloads can scale to 48+ core usage in the first place.

        1. HuBo
          Pint

          Re: good luck.... you are going to need it.

          Indeed -- it could be inter-core communication bringing the perf down on SciMark and OpenFOAM (vs the other benchmarks).
