Intel takes on AMD and Nvidia with mad 'Max' chips for HPC

Intel's latest plan to ward off rivals from high-performance computing workloads involves a CPU with large stacks of high-bandwidth memory and new kinds of accelerators, plus its long-awaited datacenter GPU that will go head-to-head against Nvidia's most powerful chips. After multiple delays, the x86 giant on Wednesday …

  1. Anonymous Coward

    Hot news

    Next year's CPU faster than last year's. Makes a fine space heater too.

  2. Anonymous Coward

    Does it come with a Power Station?

    350W?

    Aren't we supposed to be looking at reducing Data Centre power and water consumption?

    Nice one, Intel. I hope that you have a few dozen trucks loaded with solar panels and a 10MW Tesla battery with each sale?

  3. Bitsminer Silver badge

    1 Gbyte/core

    give the CPU ... more than 1GB per core

    Boring. With very fast accelerators like AVX, the memory for one core is wayyyy too small.

    And the critical tech here is HBM, which is not manufactured by Intel.

    1. Anonymous Coward

      Re: 1 Gbyte/core

      The data is not entirely unique every cycle, so 1GB is fine. In truth I feel we're at a point where the CPU is the limiting factor. 20 years ago, in 2002, it was the RAM that was the limiter. I know nothing of the technicals, but it's clear that the latest RAM specs are beyond what CPUs are utilizing. Well, a single CPU at least, as I guess RAM like HBM2e is designed to handle the max bandwidth of multiple CPUs simultaneously (well, at least 2).

      1. Anonymous Coward

        Re: 1 Gbyte/core

        Each HBM2e stack (in this case a stack of 8 DRAM dies) has 16 pseudo-channels, each of which is an independent chunk of 1GB address space (kind of like a DDR4 DIMM, or half a DDR5 DIMM). So these Xeons must have 4x HBM2e stacks integrated in-package.

        In an ideal world, then, the data would be laid out in 56 pseudo-channel-sized chunks (since 56 cores are present) so that each core has its own working storage.

        Compared to DDRx channels, that also means the access pattern on any one pseudo-channel should be more efficient, as long as the application code for each core is optimized to generate predominantly sequential memory accesses.

        Although it's been a while since I've worked on HPC code, that figure of 1GB/core is probably reasonable because modern HPC is designed to scale out over huge numbers of cores. So the HBM advantage is either (a) HPC runs finish faster, or (b) since today's performance increases come mainly from increased core count, you use more cores to solve bigger problems in the same time.
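        A minimal sketch of the sums above, assuming the commenter's figures of 4 HBM2e stacks, 16 pseudo-channels per stack and 1GB per pseudo-channel shared across 56 cores; the constants come from the comment and the article, not from an Intel datasheet:

        ```python
        # Back-of-the-envelope check of the figures in the comment above.
        # Assumptions (from the comment, not a confirmed Intel spec): 4 HBM2e
        # stacks in-package, 16 pseudo-channels per stack, 1GB per
        # pseudo-channel, and 56 cores sharing the lot.

        STACKS = 4
        PSEUDO_CHANNELS_PER_STACK = 16
        GB_PER_PSEUDO_CHANNEL = 1
        CORES = 56

        total_channels = STACKS * PSEUDO_CHANNELS_PER_STACK    # 64 pseudo-channels
        total_hbm_gb = total_channels * GB_PER_PSEUDO_CHANNEL  # 64GB, matching the article
        gb_per_core = total_hbm_gb / CORES                     # ~1.14GB per core

        print(f"{total_channels} pseudo-channels, {total_hbm_gb}GB HBM2e, "
              f"{gb_per_core:.2f}GB per core")
        ```

        That lines up with the 64GB per socket quoted further down the thread, and leaves each of the 56 cores a little more than one pseudo-channel's worth of HBM.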

  4. NeilPost Silver badge

    HBM

    High Bandwidth Memory on CPU can be used as system memory.

    ... so it's like Apple's M1 and M2 then?

    1. FIA Silver badge

      Re: HBM

      The M1 and M2 use LPDDR4X and LPDDR5 respectively. The memory is just packaged on the SoC rather than being external.

      Whilst the memory is closer, it's still using LPDDR4/5 signalling and timings.

      1. Anonymous Coward

        Re: HBM

        That's apparently how it is with this "Max" too, unless you've seen proof otherwise? Apple's is GDDR4/GDDR5 I believe... or am I wrong about that too? I thought Apple's M2 was a SoC that has pretty much hit a wall for future performance uplifts, just like most ARM-based SoCs (and x86 for that matter).

        1. FIA Silver badge

          Re: HBM

          From the article:

          "With 64GB of HBM2e, a dual-socket server with two Xeon Max CPUs will pack 128GB total."

        2. Anonymous Coward

          Re: HBM

          Apple's description of M1/M2 as having "high-bandwidth memory" confused me too - I thought they meant HBM the technology rather than generic memory with improved bandwidth. HBM the technology (JESD235A/B/C) has been around for a while - its most notable use up till now has been in Nvidia's data centre GPGPUs like the A100.

  5. Anonymous Coward

    So, Rapids still for non-AI?

    Mentions Max is the new name for one Xeon type, but is it still Rapids for the desktops and non-AI Xeons?

  6. Lordrobot

    It's All New... It's X86...

    Well, the NAME was changed!

  7. xyz123 Silver badge

    Dead tech, since on Jan 10 Intel is switching to pay-as-you-go processors.

    Any datacenter fancy paying $50/month PER CPU just to use the processor they purchased for $1,000+?

    Imagine a supercomputer with 100,000 CPUs having to BUY the processors... so there's your first $100,000,000.

    Then a "subscription" fee of $5,000,000 a month JUST to keep the CPUs' features active.
