Interesting, but ...
1: "The researchers believe the biggest determinant of scalability is the interconnect in place"
Well, well, well. Who would have thought? I finally understand why a distributed-memory set-up with an InfiniBand interconnect is better than an old Beowulf-style cheapo Ethernet interconnect. </sarcasm>
2: Speed-up isn't always the best measure: I might achieve near-linear speed-up, but without knowing the performance on a single machine, I still know little about absolute performance.
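To illustrate the point, here is a minimal sketch with made-up timings (the numbers are hypothetical, not from the study): the system with the better speed-up can still be the slower one in wall-clock terms.

```python
def speedup(t1, tn):
    """Speed-up relative to the single-node time t1."""
    return t1 / tn

def efficiency(t1, tn, n):
    """Parallel efficiency: speed-up divided by node count n."""
    return speedup(t1, tn) / n

# Two hypothetical systems running the same job on 32 nodes:
# System A: slow nodes, near-linear scaling.
# System B: fast nodes, poorer scaling -- yet faster in absolute terms.
t1_a, t32_a = 3200.0, 110.0   # seconds
t1_b, t32_b = 800.0, 50.0

print(f"A: speed-up {speedup(t1_a, t32_a):.1f}x, "
      f"efficiency {efficiency(t1_a, t32_a, 32):.0%}, time {t32_a}s")
print(f"B: speed-up {speedup(t1_b, t32_b):.1f}x, "
      f"efficiency {efficiency(t1_b, t32_b, 32):.0%}, time {t32_b}s")
# A shows the better speed-up (~29x vs 16x) but B finishes in less
# than half the wall-clock time.
```

Which is why a speed-up curve without absolute single-node timings tells you little about actual performance.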
3: Linpack is ubiquitous, but it doesn't necessarily reflect real-world applications (although much code is written on the back of Linpack). I am sure some of our code would not run well on the cloud (we are testing it).
4: 32 nodes is not huge by HPC standards, unless each node is really massive in terms of number of cores. The Edison system has 5586 nodes; scaling computation over those numbers is a very different ball game.