Microsoft claims it has spun up a top-five AI supercomputer for its pals at OpenAI – but won't reveal the full specs

Microsoft has spun up a custom AI supercomputer in its Azure cloud for OpenAI that ranks within the top five publicly known most powerful supers on Earth, it is claimed. The behemoth is said to contain “more than 285,000 CPU cores and 10,000 GPUs,” with up to 400Gbps of network connectivity for each GPU server. If this beast …

  1. Anonymous Coward

    Mine's bigger than yours

    And you have to trust me on this.

    1. Korev Silver badge
      Coat

      Re: Mine's bigger than yours

      That's a mighty big one you're linpacking there...

  2. IGotOut Silver badge
    Joke

    Pah...

    I've got one with a BILLION GPUs and a gazillion CPUs.

    It's the NO1! In the world.

    You just need to trust me.

    OK.

  3. Fading
    Windows

    285,000 CPU cores and 10,000 GPUs

    And it still falls over when encountering a 2GB Outlook pst file.....

  4. Woodburner

    285,000 CPU cores and 10,000 GPUs

    Just testing the minimum hardware requirements for the next version of Windows.

  5. tip pc Silver badge

    285k CPU cores and 10k GPUs

    Does the fourth Tuesday of every month have it on its knees?

    Does it still get its monthly Tuesdays?

  6. Claverhouse Silver badge
    Coat

    My rocket ship's in the garage.

  7. StargateSg7

    An NVIDIA Tesla supercomputing-specific GPU is the most LIKELY hardware for the GPU portion, at about 50 TeraFLOPS for 32-bit Single Precision Floating Point number crunching (or 100 TeraFLOPS for 16-bit Half-Precision Floating Point performance), and I suspect Microsoft went with AMD Rome 7742 CPUs for the 64-bit number crunching, which max out at 2.3 TeraFLOPS for 64-bit Double Precision Floating Point operations or 4.6 TeraFLOPS for 32-bit Single Precision number crunching.

    Since MUCH supercomputing work is done at Single Precision 32-bit number crunching, the total performance with 10,000 NVIDIA Tesla GPUs and 4,454 of the 64-core AMD EPYC Rome 7742 CPUs calculates out to 500,000 TeraFLOPS for the GPU portion and 20,488 TeraFLOPS for the CPU portion, for a Grand Total of 520,488 TeraFLOPS at 32 bits wide, or about 520 PetaFLOPS. If you compare it against SUMMIT, which is 64-bit throughout, that means the Microsoft supercomputer could do about 210 PetaFLOPS at 64-bit Double Precision Floating Point number crunching.
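
    For anyone checking those sums, here is a minimal Python sketch of the back-of-envelope arithmetic, taking the commenter's per-chip throughput figures as given (they are assumptions about the hardware, not published specs for the Azure machine):

        # Back-of-envelope FP32 throughput, using the commenter's assumed per-chip figures
        gpu_count = 10_000
        gpu_fp32_tflops = 50          # assumed NVIDIA Tesla-class FP32 rate per GPU

        cpu_cores = 285_000
        cores_per_cpu = 64            # assumed AMD EPYC Rome 7742
        cpu_count = -(-cpu_cores // cores_per_cpu)   # ceiling division -> 4,454 sockets
        cpu_fp32_tflops = 4.6         # assumed FP32 rate per 7742 socket

        gpu_total = gpu_count * gpu_fp32_tflops      # 500,000 TFLOPS
        cpu_total = cpu_count * cpu_fp32_tflops      # ~20,488 TFLOPS
        grand_total = gpu_total + cpu_total          # ~520,488 TFLOPS, i.e. ~520 PFLOPS FP32

        print(f"CPU sockets:  {cpu_count:,}")
        print(f"GPU portion:  {gpu_total:,.0f} TFLOPS FP32")
        print(f"CPU portion:  {cpu_total:,.0f} TFLOPS FP32")
        print(f"Grand total:  {grand_total:,.0f} TFLOPS (~{grand_total/1000:.0f} PFLOPS FP32)")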

    I should note, however, that in MULTIPLE locations within the sprawling US military-industrial complex, I know of at least FOUR ExaFLOP+ scale 64-bit Double Precision supercomputers now operating!

    Sooooooo, that’s actually not too bad for a civilian supercomputer!

    P.S. Ours is the MOST POWERFUL SUPERCOMPUTER OF ALL !!! 119 ExaFLOPS at 128-bits wide Floating Point Operations SUSTAINED on 60 GHz GaAs substrates using combined-CPU/GPU/DSP super-chips !!!

    WE STILL WIN BY A FAAAAAAAAAAAAAR MARGIN !!!!!!


  8. Steve K
    Coat

    Can it play (and win) at Crysis

    OpenAI Five, a bot capable of playing Dota 2

    But can it play Crysis?

    (sorry)
