And the ones not on the list?
I wonder where GCHQ's systems fit in.
Japan needs a little good news these days, and it comes from the International Supercomputing Conference 2011 in Hamburg, Germany, as the K supercomputer, a massively parallel SPARC-based cluster built by Fujitsu, has taken the lead in the number-crunching race as gauged by the June 2011 edition of the Top 500 supercomputer …
Most of the US's top supercomputers are owned by the Dept. of Energy (DoE), which happens to be tasked with monitoring our nuclear stockpile; I guess a lot of the oomph goes into simulating nuclear decay and the effects of bombardment on electronics, so we know when various bombs need to be decommissioned. In their spare cycles, they carry out various complex simulations for understanding the effects of different sorts of war events and for modeling new kinds of weapons.
Wikipedia probably has good (and more detailed) entries on the primary missions of each of our big supers.
Here's a clue (the icon).
The land of the free is becoming far from free. The US authorities have everyone whipped up into such fear that most people can't see the growing tyranny appearing all around them.
The irony is that one of the so-called "Founding Fathers of the United States" said it best: "Anyone who trades liberty for security deserves neither liberty nor security" - Benjamin Franklin
The thing that convinces the government to fund them has historically been nuclear weapons research. We don't get to test bombs anymore, so wouldn't it be nice if we could simulate every conceivable aspect of them? That's pretty much what they want and, really, it's the only sensible thing to do if you've got a nuclear arsenal and aren't allowed to pop the things off now and then. It's no coincidence that the biggest ones usually end up getting built at places like LANL, although universities are starting to acquire them too.
However, once they're built, there's usually a lot of spare capacity, and that goes into everything from biophysical simulations to designing antennas. A great deal of American scientific work--even totally innocent stuff like cancer research and figuring out ways to clean up toxic things--benefits from the defense budget, and this is one of the ways. Any computer this powerful will have people lining up to use it, and many of them aren't even weapons engineers.
How does the efficiency of a machine like the Tianhe-1A scale with the number of nodes used? I know it's a proprietary interconnect, but from what is known, could the efficiency per node be inferred? E.g., does efficiency go up when only a portion of the nodes is used, as it would be if the system ran multiple jobs on multiple partitions rather than single monolithic jobs?
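Not an answer for Tianhe-1A specifically (per-partition efficiency numbers aren't public as far as I know), but the usual way to frame the question is parallel efficiency: speedup on N nodes divided by N. Under an Amdahl-style model efficiency falls as N grows, so a job on a small partition generally runs more efficiently than one spanning the whole machine. Here's a toy Python sketch; the serial fraction is a made-up placeholder, not a Tianhe-1A figure, and 7168 is just the machine's approximate compute-node count.

# Toy Amdahl-style parallel-efficiency model, NOT real Tianhe-1A data.
# efficiency(N) = speedup(N) / N, with speedup(N) = 1 / (s + (1 - s)/N)

def efficiency(n_nodes, serial_fraction):
    """Parallel efficiency on n_nodes for a code with the given serial fraction."""
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_nodes)
    return speedup / n_nodes

serial_fraction = 0.0001  # placeholder; depends entirely on the code being run
for n in (16, 256, 1024, 7168):
    print(f"{n:5d} nodes: efficiency ~ {efficiency(n, serial_fraction):.1%}")

So yes, splitting the machine into partitions for many smaller jobs should keep per-node efficiency higher than one monolithic job, at least under this kind of model; the real answer also depends on how the interconnect topology gets partitioned.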
[PS I think Roadrunner said "meep, meep"]
No: NSA/DoD systems don't show up here (hush hush). DoE has two parts: the NNSA (Roadrunner, Cielo) oversees the nuclear stockpile and does some related research, while the Office of Science (Jaguar, Hopper) does energy-related research (what it says on the tin). For instance, you can see some of the projects for Jaguar below.
http://www.doeleadershipcomputing.org/wp-content/uploads/2011/01/2011INCITEFactSheets.pdf
Also, most of the US machines (by count) aren't government-owned, so they are probably doing drug discovery or trading stocks.
http://www.top500.org/charts/list/37/segments
Links, to what?? Cluster computing is a group/cluster of individual machines which are linked together with an interconnect (a network). Interconnects are either:
Ethernet based, for capacity computing (think particle physics work, or finance, e.g. QM), or
Capability based, whereby you require very fast, low-latency message passing between distributed codes, e.g. CFD or computational chemistry codes. Low-latency interconnects are: Quadrics, SCI, Myrinet and InfiniBand, plus a few proprietary ones from Cray, IBM, and this Fujitsu. (A minimal latency sketch follows below.)
All compute nodes are standard Linux boxes, running a fairly standard Linux OS, although you might have added specialist maths libraries. That's why the "scalable Linux" statement was silly.
If you still want links, look up "high performance computing".
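To make the latency point concrete, here's a minimal ping-pong microbenchmark of the sort people run on these interconnects, sketched with mpi4py (assumes a working MPI install and exactly two ranks; message size and iteration count are arbitrary):

# Minimal MPI ping-pong latency sketch; run with e.g.: mpirun -np 2 python pingpong.py
# Ethernet round trips are typically tens of microseconds; InfiniBand/Myrinet/
# Quadrics-class interconnects get down to a few microseconds or less.
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

iterations = 10000
msg = bytearray(8)  # tiny message, so timing is dominated by latency, not bandwidth

comm.Barrier()
start = time.time()
for _ in range(iterations):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)
        comm.Recv(msg, source=1, tag=0)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=0)
        comm.Send(msg, dest=0, tag=0)
elapsed = time.time() - start

if rank == 0:
    # Each iteration is one round trip; one-way latency is half of it.
    print(f"one-way latency ~ {elapsed / iterations / 2 * 1e6:.1f} microseconds")

Capacity workloads barely notice that difference; tightly coupled CFD or chemistry codes live or die by it.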
Link? The Top 500 website. Click on every single computer at the top (and on about 98% of the entire Top 500 list) and you'll see:
OS: Linux
Be it a single OS or thousands of Linuxes, they still need to communicate very effectively to grab 98% of the entire Top 500 list.
It's not like someone could replace all these Linuxes with, say, the ATARI 512's TOS, and still grab 98% of the entire Top 500 list.
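If you'd rather count than click, the tallying is trivial once you grab a list export. This is just a sketch: it assumes you've saved the list as a CSV with an "Operating System" column, which may not match the actual download format on top500.org.

# Rough sketch: count OS families in a Top 500 list export.
# Assumes a local CSV with an "Operating System" column; the real
# export format/column names on top500.org may differ.
import csv
from collections import Counter

counts = Counter()
with open("top500_june2011.csv", newline="") as f:  # hypothetical filename
    for row in csv.DictReader(f):
        os_name = row["Operating System"]
        family = "Linux" if "linux" in os_name.lower() else os_name
        counts[family] += 1

total = sum(counts.values())
for family, n in counts.most_common():
    print(f"{family}: {n} systems ({n / total:.0%})")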
Yes, but Todd Rundgren is correct.
These supercomputers are basically a large cluster on a fast switch. You just add a new PC to the network, and voila, you have increased performance. So it has nothing to do with scalability when we talk about one large SMP computer, such as an IBM POWER 795 with as many as 32 POWER7 CPUs, or an Oracle Solaris M9000 server with as many as 64 CPUs.
When we talk about one single large SMP computer, Linux is never run on it, because Linux scales badly vertically. Linux scales to ~32 cores or so on one large server.
Linux scales excellently in a large cluster with lots of PCs (good at horizontal scaling), but extremely badly on one single large server (vertical scaling). Linux's merits are on large clusters. Google runs a large cluster of Linux servers. There exists no Linux server with as many as 32 CPUs. But there exist large supercomputers which are basically a cluster, for instance the SGI Altix with 1024 cores, which is just a bunch of blade PCs on a fast network.
Sure, it runs 2048 cores, just as the SGI Altix server does. But it's just a bunch of PCs on a switch.
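For what it's worth, the "just add another box" point is easy to illustrate: when the work is embarrassingly parallel and the pieces share no state, throughput grows roughly with the number of workers. A toy sketch with worker processes standing in for cluster nodes (a real cluster adds an interconnect and MPI on top, but the scaling idea is the same):

# Toy illustration of horizontal scaling: independent workers, no shared state,
# so throughput grows roughly linearly with worker count.
import time
from multiprocessing import Pool

def crunch(chunk):
    # Stand-in for a compute kernel that needs no communication.
    return sum(i * i for i in range(chunk))

if __name__ == "__main__":
    chunks = [200_000] * 64
    for workers in (1, 2, 4, 8):
        start = time.time()
        with Pool(workers) as pool:
            pool.map(crunch, chunks)
        print(f"{workers} workers: {time.time() - start:.2f} s")

The contrast being drawn is with codes that share memory and locks on one big SMP box, where simply throwing more cores at the problem helps far less.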
Let me ask you: have you thought about this?
IBM's biggest Unix server, the P795, has 32 CPUs
IBM's biggest mainframe, the z196, has 24 CPUs
Oracle's biggest Unix server, the M9000, has 64 CPUs
HP's biggest Unix server (Integrity?) has 64 CPUs (I think)
And they fiercely fight for benchmarks. IBM was so proud of their P595 TPC-C benchmark world record. Why can't IBM simply put 64 CPUs in the P795? Why did IBM have to rewrite the old and mature enterprise AIX, which had run on big Unix servers for decades, when the P795 was to be released? The P795 has 256 cores, and that was too much for AIX to handle. The earlier P595 Unix server had 128 cores, which was manageable by AIX. Why doesn't IBM put in 64 CPUs? Or even 128 CPUs? Are there some difficulties when you don't do clusters?
Why does Linux stutter on SMP servers with 32-48 cores?
The oomph of the SPARC64 VIIIfx comes from a custom-designed HPC-oriented vector instruction set called HPC-ACE. The scalar components look like slightly tweaked, cache-starved, low-clocked versions of the existing (slow) SPARC64 VII core. An 8-core SPARC64 VII at 2GHz with a smaller cache doesn't exactly sound like the Holy Grail of commercial computing, so I can't see why they would be "fools", as you said, not to commercialize it.
I have to say that the "K supercomputer at RIKEN" looks like an impressively big room. Although I pity the poor engineers who have to descend into those dark server tunnels between the racks. It would be best to tie a rope around their middles before they go in, so that if they pass out from the server heat, they can be dragged back out again before they cook! ;)
Do the poor Japanese have permission to turn it on more than once a year? As a result of the recent disasters they are experiencing power cutbacks. Offices are running aircon at 80 degrees (at a very hot/humid time of the year), and factories are changing their working week to flatten out demand too. Turn this beastie on and half the nation's lights will go out.