Nvidia's MLPerf submission shows B200 offers up to 2.2x training performance of H100

Nvidia offered the first look at how its upcoming Blackwell accelerators stack up against the venerable H100 in real-world training workloads, claiming up to 2.2x higher performance. The benchmarks, released as part of this week's MLPerf results, are in line with what we expected from Blackwell at this stage. The DGX B200 …

  1. beast666 Bronze badge

    Train your models to hallucinate in less than half the time. Great!

  2. harrys Bronze badge

    what a big huge massive humongous elephant in the room.....

    not a single mention of power consumption in the article

    crazy crazy crazy

    we're doomed i tell ya, we're all doomed!

    1. Korev Silver badge

      Re: what a big huge massive humongous elephant in the room.....

      On paper, the B200 is capable of churning out 9 petaFLOPS of sparse FP8 performance, and is rated for a kilowatt of power and heat. The 1.2 kW GPUs found in Nvidia's flagship GB200, on the other hand, are each capable of churning out 10 petaFLOPS at the same precision.
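
      For what it's worth, a rough perf-per-watt calculation from those quoted figures (a sketch only, using the vendor ratings rather than measured training draw):

          # Sparse FP8 throughput per watt, from the figures quoted above
          for name, pflops, watts in [("B200", 9, 1_000), ("GB200 GPU", 10, 1_200)]:
              print(f"{name}: {pflops * 1000 / watts:.1f} sparse FP8 TFLOPS per watt")
          # B200: 9.0 TFLOPS/W; GB200 GPU: ~8.3 TFLOPS/W

      On those ratings the two parts are within roughly ten per cent of each other on efficiency; the GB200's GPUs trade a little perf-per-watt for higher absolute throughput.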

  3. Herring` Silver badge

    I see all the computing power, all those FLOPS, and I think: couldn't this be used for something useful? i.e. not GenAI.

    1. Korev Silver badge
      Boffin

      Sadly they're optimising GPUs for half precision, which means that a lot of more traditional codes won't run as well (there's a quick sketch of the effect after this thread). In HPC land, Nvidia GPUs only became useful once they'd introduced double precision.

      1. Herring` Silver badge

        Years ago I worked on a thing for doing modeling. Loads of doubles, the same calcs performed over a bunch of data points...
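
        A toy illustration of the precision point above (a minimal NumPy sketch, not any real HPC code): summing 10,000 small terms, an FP16 accumulator stalls once its rounding step outgrows the terms, while FP64 returns the expected answer.

            import numpy as np

            # 10,000 terms of 1e-4; the true sum is about 1.0
            terms = np.full(10_000, 1e-4, dtype=np.float16)

            acc16 = np.float16(0.0)
            for t in terms:
                acc16 = np.float16(acc16 + t)       # FP16: additions stop registering near 0.25

            acc64 = terms.astype(np.float64).sum()  # FP64: ~1.0

            print(acc16, acc64)                     # roughly 0.25 vs 1.0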
