HPC bar goes lower and wider

The more things stay the same, the more things are likely to change, and clear evidence of that could be seen today at the announcement of the latest Top500 Supercomputers league tables at the International Supercomputer Conference in Dresden. The tables, compiled every six months, show the fastest-performing systems …


This topic is closed for new posts.
  1. Stephen Booth

    What about the memory

    All the really big HPC systems on the Top500 have only a couple of cores per node. Beyond that, memory bandwidth saturates. For HPC-type applications, most of the parallelism comes from large node counts. Unfortunately, using multiple nodes is much harder than multi-threading.

    In my opinion, very large core counts will only work for niche applications unless we start to see some innovation in memory system design, but the memory manufacturers seem only to be interested in making larger chips of the same old basic types rather than investing in significantly new technologies. Whatever happened to Rambus?
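    [Ed.: the bandwidth-saturation point above can be sketched with a back-of-envelope calculation. All numbers below are illustrative assumptions, not figures from the article or the Top500 lists.]

    ```python
    # Rough model: a bandwidth-bound kernel stops scaling once the cores'
    # combined memory demand exceeds what the node's memory bus can supply.

    def max_useful_cores(node_bandwidth_gbs, per_core_gflops, bytes_per_flop):
        """Cores beyond which a memory-bound kernel gains nothing from more threads."""
        demand_per_core_gbs = per_core_gflops * bytes_per_flop  # GB/s each core wants
        return node_bandwidth_gbs / demand_per_core_gbs

    # A STREAM-triad-like kernel, a[i] = b[i] + s*c[i], moves 24 bytes
    # (two loads, one store of 8-byte doubles) for every 2 flops: 12 bytes/flop.
    # Assume a hypothetical node with 10 GB/s of memory bandwidth and
    # cores that each sustain 4 Gflop/s.
    cores = max_useful_cores(node_bandwidth_gbs=10.0,
                             per_core_gflops=4.0,
                             bytes_per_flop=12.0)
    print(round(cores, 2))  # prints 0.21: a single core can already saturate the bus
    ```

    Under these (made-up but plausible) numbers, even one core out-demands the memory system on a streaming kernel, which is why adding cores without adding bandwidth buys little for this class of code.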

  2. Anonymous Coward
    Anonymous Coward

    Now That You've Got All Those Cores...

    ...beware geeks bearing parallelizing compilers. They're still generally a poor substitute for talented software engineers who know parallel algorithms and can cut good code.

    Vendors will nevertheless swear your developers won't have to expand their body of knowledge (much). Just like "if you know C, you know C++." Only more so.
