
Need to know....
Does it run Doom?
Nvidia revealed its most powerful DGX server to date on Monday. The 120kW rack scale system uses NVLink to stitch together 72 of its new Blackwell accelerators into what's essentially one big GPU capable of more than 1.4 exaFLOPS performance — at FP4 precision anyway. At GTC this week, we got a chance to take a closer look at …
Or 3DMark. I remember getting the 'over 9000' Steam achievement, and since it has a high-score table, I'd love to see what this beast could do. I know it's not the intended application, but I've always found it somewhat disappointing that it doesn't seem possible to use systems like this as actual graphics cards. Then again, I doubt there are any monitors that could support the potential frame rate.
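For scale, the headline figures quoted above work out as follows. A quick back-of-the-envelope Python sketch, using only the round numbers from the article summary; the per-GPU split is my own arithmetic, not an Nvidia spec:

# Back-of-the-envelope scale check using the figures quoted above:
# 72 Blackwell GPUs, "more than 1.4 exaFLOPS" at FP4, 120kW per rack.
GPUS = 72
FP4_EXAFLOPS = 1.4        # lower bound quoted in the article
RACK_KW = 120

pflops_per_gpu = FP4_EXAFLOPS * 1000 / GPUS   # exaFLOPS -> petaFLOPS
kw_per_gpu = RACK_KW / GPUS                   # whole-rack budget, not die power

print(f"~{pflops_per_gpu:.0f} PFLOPS FP4 per GPU")    # roughly 20 PFLOPS
print(f"~{kw_per_gpu:.2f} kW of rack power per GPU")  # roughly 1.67 kW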
Mine would be 48V direct to the PCBs. The stonking thick red/black DC cables that connect each PCB to the busbars look good enough for 100A @ 48V, i.e. 4.8kW, to supply each pair of Blackwell modules, each having 2 GPUs, so around 1.2kW per GPU.
That of course would mean that the busbar itself would be carrying 2.5kA @ 48V.
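If the cabling has been eyeballed correctly, the arithmetic checks out. A rough Python sketch below; the 100A rating per cable pair and the 2-GPUs-per-module layout are guesses from the photos, not datasheet figures:

# Rough check of the 48V distribution guess above (assumed figures, not specs).
VOLTS = 48
AMPS_PER_FEED = 100       # guessed rating of each red/black cable pair
GPUS_PER_FEED = 4         # a pair of Blackwell modules, 2 GPUs each
RACK_KW = 120

kw_per_feed = VOLTS * AMPS_PER_FEED / 1000    # 4.8 kW per board feed
kw_per_gpu = kw_per_feed / GPUS_PER_FEED      # ~1.2 kW per GPU
busbar_amps = RACK_KW * 1000 / VOLTS          # 2500 A along the busbar

print(f"{kw_per_feed} kW per feed, {kw_per_gpu} kW per GPU, {busbar_amps:.0f} A on the busbar")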
This is one hell of an expensive water heater...
Nvidia's got a good thing here, a great base for stunning mirification of occasional museum visitors, schoolchildren on field trips, and gardeners, but desperately needs the expertise of a visually explosive voodoo tribal arts plastician to fully exploit the aesthetique potential of their new hardware (it seems). For example, last month marked the 10th anniversary of the introduction of the visually stunning (and performant) "Death Star" Mac Pro that remains to this day, in our collective subconscious, a timeless icon of cylindrical design elegance. Couldn't the highly performant NVL72 rack be similarly reformulated into a long-lasting, evocative geometry, or shape, beyond that of a Space Odyssey's prismatic monolith?
In view of the human-face-like anthropomorphism of the GB200 (2nd and 3rd photos, and upside-down 4th), two exploratory concepts readily spring to mind. First, a retro-futuristic homage to paleolithic cave-dweller paintings, with broad strokes of GB200s (and associated arrays of stitching switches (photo 8), evocative of a hunter-gatherer's bounty, possibly fish), strategically exhibited on a large wall, and ceiling, of the target museum locus. Second, a series of totem poles, with GB200s representing the faces of departed ancestors, their strengths, beliefs, and lineage (again, retro-futuristically). Cabling could run on the inside of these magnificent structures, with interconnects through the wings of the avian-descended members of the clan. The totem poles could be seated on pairs of shofar ram horns (penultimate photo), wherefrom liters per second of cooling fluids could spiritually enter and exit the piece, for maximum effect, and performance.
Such inspirationally mind-expansive projects of computational arts should indubitably be commandited, sponsored, and even patroned, IMHO -- the opportunity is just too great to miss (for posterity)!
Not quite yet, but it may be sinking in (some) ... buoying, floating, and capsizing, in the IntelAIocean that is not wAItered down by drops of sea sAIlt ... getting there, slowly but shirley, sherlock, through a lock that is sure to press the buttons of even the most wrinkled of shirts! (or somesuch ... not as easy as it looks) Your turn.
All those little boxes with fat, fat cables connecting them...
Reminds me of the SGI Altix 350 (the smaller Altix, with up to 16 individual dual-CPU nodes and a big switch in the middle). They had two very fat cables for each node.
Fun fact: a node was called a "brick" in an unfortunate coincidence of terminology.
Bigger Altix like the 3700BX2 used the same cables to interconnect multi-CPU blades in a larger chassis, with multiple chassis per rack.
The 64-processor BX2 ran about 20kW in one rack, if I recall correctly. Somewhere I have a screenshot of vi occupying 13GB on a 128GB machine...
We spent more time on the phone with SGI Minnesota negotiating power and grounding needs than we did with the salesman to buy the thing.
Yes, the article said 120kW of compute.
So it probably only includes power consumed by the GPU dies, and not the 72-Arm-core "management" chip on each node, nor the losses in DC power conversion/supply.
But also, the cooling is designed for 20°C @ 2L/s, which works out to about 167kW, so it probably hits that in some sort of worst-case power-eating test for commissioning the cooling system.
Nevertheless, the 8-rack superpod is going to need its own substation at close to 1MW...
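For what it's worth, the 167kW figure is consistent with reading "20°C @ 2L/s" as a 20°C coolant temperature rise at 2 litres per second. A quick Python check, assuming plain water and that interpretation of the spec:

# Heat carried by the coolant loop: Q = flow * density * c_p * deltaT
# Assumes plain water and that "20C @ 2L/s" means a 20 degC rise at 2 L/s.
FLOW_L_S = 2.0
DELTA_T = 20.0
C_P = 4186.0        # J/(kg*K), water
RHO = 1.0           # kg per litre, near enough for water

q_kw = FLOW_L_S * RHO * C_P * DELTA_T / 1000
print(f"cooling capacity: ~{q_kw:.0f} kW")    # ~167 kW, the worst-case figure above

racks = 8
print(f"8-rack superpod: ~{racks * 120} kW of compute alone")   # ~960 kW, hence the ~1MW substation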