Xeon 28-Core 2.0GHz with 16GT/s UPI: The Ultimate VM Spooler or a Parallelism Pipe Dream?
Right, you magnificent bastards. I'm speccing a new box to act as a primary host for a virtualised development and test environment, which essentially means running a few dozen VMs that are mostly idle, but all need to spin up at once without complaining.
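To put a shape on "spin up at once": I haven't settled on the hypervisor yet, so treat this as a sketch assuming a libvirt/KVM stack (the connection URI, worker count and error handling are all illustrative, not my actual tooling). The boot storm I'm worried about is essentially this:

```python
# Hypothetical boot-storm sketch: start every defined-but-inactive guest at once.
# Assumes the libvirt Python bindings and a local qemu:///system hypervisor.
from concurrent.futures import ThreadPoolExecutor
import libvirt

def boot_all(uri="qemu:///system", workers=28):
    conn = libvirt.open(uri)
    # Grab only the guests that are defined but not currently running.
    inactive = conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_INACTIVE)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # dom.create() starts a defined domain; each one is a short burst of
        # vCPU work plus a pile of storage reads, all landing simultaneously.
        futures = {pool.submit(dom.create): dom.name() for dom in inactive}
    for fut, name in futures.items():
        try:
            fut.result()
        except libvirt.libvirtError as err:
            print(f"{name} failed to start: {err}")
    conn.close()

if __name__ == "__main__":
    boot_all()
```

Dozens of those create() calls firing together is the whole workload in miniature: a wall of mostly serial per-VM work, plus a storm of reads against the array.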
I'm looking hard at one of those 28-core 2.0GHz Xeons with the 16GT/s UPI (https://serverorbit.com/cpus-and-processors/xeon-28-core/2-0ghz-16gt-upi). On paper, it's a core-packed beast for massive VM density. But my spidey-sense is tingling, and it's not just the 40 cups of coffee.
The Core vs. Clock Conundrum: For this kind of "idle-but-must-instantiate" workload, are we in "cores-over-clocks" territory, or is the 2.0GHz base clock going to be a painful anchor, making the entire system feel sluggish? Am I buying a car park for 28 Mini Metros instead of a garage for 8 Porsches?
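Here's the back-of-the-envelope version of that worry, with entirely made-up numbers (the per-VM boot cost and linear clock scaling are assumptions, not benchmarks): a boot storm is N largely single-threaded boots, so cores decide how many run at once and clock decides how long each one takes.

```python
# Napkin boot-storm estimate. Every input here is an illustrative assumption.
import math

def storm_time(num_vms, cores, clock_ghz, work_ghz_seconds=30.0):
    """Rough wall-clock time to boot num_vms guests.

    work_ghz_seconds: assumed single-threaded CPU work per VM boot, in GHz*seconds
    (30 means roughly 15 s of boot on a 2.0 GHz core).
    """
    per_vm_s = work_ghz_seconds / clock_ghz   # one boot on one core
    waves = math.ceil(num_vms / cores)        # boots that have to queue behind each other
    return waves * per_vm_s

# 48 mostly-idle guests: 28 slow cores vs 8 fast ones.
print(storm_time(48, cores=28, clock_ghz=2.0))  # 2 waves x 15.0 s = 30.0 s
print(storm_time(48, cores=8,  clock_ghz=4.0))  # 6 waves x 7.5 s  = 45.0 s
```

On that crude arithmetic the Mini Metros win the storm but lose every individual interaction, which is exactly the trade-off I can't decide on.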
The I/O Bottleneck Tango: This thing will be backed by a proper all-flash array. With 28 cores potentially hammering the storage controllers, does the 16GT/s UPI become the unsung hero preventing a mutiny in the memory lanes, or does the real bottleneck just shift downstream to the PCIe lanes hanging off the chipset?
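Napkin maths on where the ceiling actually sits, per direction (the 2-bytes-per-transfer UPI payload and the PCIe encoding figures are my working assumptions from the usual Intel numbers, not from this SKU's datasheet):

```python
# Rough per-direction bandwidth comparison. Payload widths and encodings are assumptions.
def upi_gb_s(gt_s=16, payload_bytes_per_transfer=2):
    # A UPI link carries roughly 2 bytes of payload per transfer, per direction.
    return gt_s * payload_bytes_per_transfer

def pcie_gb_s(lanes, gt_s=16.0, encoding=128 / 130):
    # PCIe Gen3/Gen4: 1 bit per transfer per lane, 128b/130b encoding.
    return lanes * gt_s * encoding / 8

print(f"one UPI link @ 16 GT/s   : ~{upi_gb_s():.0f} GB/s")              # ~32 GB/s
print(f"PCIe Gen4 x16 HBA slot   : ~{pcie_gb_s(16):.1f} GB/s")           # ~31.5 GB/s
print(f"PCIe Gen4 x4 NVMe drive  : ~{pcie_gb_s(4):.1f} GB/s")            # ~7.9 GB/s
print(f"chipset uplink, x4 Gen3  : ~{pcie_gb_s(4, gt_s=8.0):.1f} GB/s")  # ~3.9 GB/s
```

The way I read it, the UPI only earns its keep if this ends up a dual-socket box with storage and memory on opposite sockets; anything hung off the chipset uplink is sipping through a straw no matter how heroic the UPI is.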
The Real-World Vulture View: Has anyone actually deployed these specific chips for a similar "wide" VM farm? Did the core count deliver the glorious parallelism we're promised, or did it just give you 28 different contexts in which to observe latency?
In short: Is this a genuinely smart play for maximising idle-but-ready VM count, or am I about to learn an expensive lesson about Amdahl's Law the hard way?
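Since I keep invoking Amdahl without doing the sums, here they are (the parallel fractions are illustrative, not measured from any real boot storm):

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the work and n is the number of cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.95, 0.99):
    print(f"parallel fraction {p:.0%}: 28 cores buy you {amdahl_speedup(p, 28):.1f}x")
# 50% -> ~1.9x, 90% -> ~7.6x, 95% -> ~11.9x, 99% -> ~22.0x
```

In other words, unless the storm is overwhelmingly parallel, a good chunk of those 28 cores will be queueing politely behind whatever serial bit of the stack refuses to scale.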
Your cynicism is eagerly awaited.