Keep trying...
At US 26c/CPU hr, the best performing system (Azure H16) is still almost 3x more expensive than the commercial rate on a large HPC cluster (10c/CPU hr).
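The "almost 3x" figure is easy to check from the two per-CPU-hour rates quoted above; a quick sketch (rates as quoted, nothing else assumed):

```python
# Cost comparison from the comment: Azure H16 vs. a large HPC cluster.
azure_rate = 0.26  # USD per CPU-hour (Azure H16, as quoted)
hpc_rate = 0.10    # USD per CPU-hour (commercial HPC rate, as quoted)

ratio = azure_rate / hpc_rate
print(f"Azure costs {ratio:.1f}x the HPC rate")  # 2.6x, i.e. "almost 3x"
```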
The best clouds are genuinely competitive with do-it-yourself high performance computing – and Microsoft's top-tier Azure is the best of the lot. That's the conclusion from research conducted by Mohammad Mohammadi and Timur Bazhirov of Exabyte and offered as a pre-print on arXiv. The two boffins took a simple approach to …
1: "The researchers believe the biggest determinant of scalability is the interconnect in place"
Well, well, well. Who would have thought? I finally understand why a distributed-memory set-up with Infiniband interconnect is better than an old Beowulf-style cheapo Ethernet interconnect. </sarcasm>
2: Speed-up isn't always the best measure: I might achieve near-linear speed-up, but without knowing the performance on a single machine, I still know little about actual performance.
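That point can be made concrete: two systems can show identical speed-up curves while one is an order of magnitude slower in absolute terms. A minimal sketch (the system names and timings here are entirely hypothetical):

```python
# Two hypothetical systems with near-identical speed-up but very
# different single-node performance. Speed-up = t(1 node) / t(n nodes).
timings = {
    "fast_system": {1: 100.0, 32: 3.6},    # seconds per job (made up)
    "slow_system": {1: 1000.0, 32: 36.0},  # seconds per job (made up)
}

for name, t in timings.items():
    speedup = t[1] / t[32]
    print(f"{name}: ~{speedup:.1f}x speed-up on 32 nodes, "
          f"job takes {t[32]}s")
# Both report ~27.8x speed-up, yet one finishes the job 10x sooner.
```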
3: Linpack is ubiquitous, but it doesn't necessarily reflect real-world applications (although much code is written on the back of Linpack). I suspect some of our code would not run well on the cloud (we are testing it).
4: 32 nodes is not huge by HPC standards, unless each node is really massive in terms of numbers of cores. The Edison system has 5586 nodes. Scaling computation over those numbers is a very different ball game.
As other comments have already mentioned, this isn't really telling you anything sensible. Using speed-up rather than Rmax is a poor choice: that metric doesn't tell you what you think it does. It assumes Linpack is sensitive to network performance, but it really isn't, and it compares two different hardware generations (the HPC system runs on CPUs at least one generation older than the Azure resources, and likely two generations older). Given that Linpack is really just a flop/s engine, you can't compare CPUs from different generations like this.