Department of Energy
Needs this to figure out how to produce enough power to run it.
The US is set to regain the crown for world's fastest computer – for the first time since 2012 – with the unveiling of the Summit supercomputer. Summit is capable of an extraordinary 200 petaFLOPS (200,000 trillion calculations per second) and will leapfrog the current fastest supercomputer, the Middle Kingdom's Sunway …
Would be interesting to get some details on "classified" systems.
When I was a teen (90s), I was convinced by "Grapevine" and other hacker mags that the NSA was listening to every call in the US for certain keywords. After entering the industry, I realised this was boll*cks. These days, I suspect the classified military systems are kept under wraps just because they are probably running Pentium CPUs, which would make the related departments global laughing stocks... But still, I love a good conspiracy theory :D
“Classified” computers are not included in the lists, IIRC.
Oak Ridge and anything there has a security rating.
As far as the "classified" computers go, they may indeed have a combined compute power in that area, but not on any of the LINPACK-style benchmarks used to rank Supers. You do not need MPI or any high-speed interconnect to crack crypto. You can do it on a bog-standard cluster connected together by bog-standard Ethernet at normal datacentre ToR/leaf speeds.
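To illustrate the point: a keyspace search is embarrassingly parallel, so the nodes never need to talk to each other while they grind. Here's a toy Python sketch of the idea - the NODE_ID/NUM_NODES environment variables and the target hash are made up for illustration, not any real scheduler or workload:

```python
# Toy embarrassingly-parallel keyspace search: each node takes every
# NUM_NODESth candidate and works entirely independently -- no MPI, no
# fast interconnect, just N machines started with different NODE_IDs.
import hashlib
import itertools
import os
import string

TARGET = hashlib.sha256(b"zzap").hexdigest()  # stand-in "intercepted" hash
ALPHABET = string.ascii_lowercase
NODE_ID = int(os.environ.get("NODE_ID", "0"))
NUM_NODES = int(os.environ.get("NUM_NODES", "1"))

for i, combo in enumerate(itertools.product(ALPHABET, repeat=4)):
    if i % NUM_NODES != NODE_ID:
        continue  # not this node's shard of the keyspace
    candidate = "".join(combo)
    if hashlib.sha256(candidate.encode()).hexdigest() == TARGET:
        print(f"node {NODE_ID} found it: {candidate}")
        break
```

The only traffic a real version would need is one "found it" message at the end, which is why bog-standard Ethernet is plenty.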
I can't really say definitively, but the ones I know about in academia are kept around for "lesser tasks", which cuts down, somewhat, on the pipeline for the new machine.
Aside: "Zacharia said he was confident US scientists would be able to hit that milestone." Involving the Intel that just had to go back to the drawing board on these new exa-FLOP machines 'cuz the CPU's couldn't cope? Good luck with that.
I wonder about the sense in re-using old supercomputers - bang per watt tends to be a lot less than on newer ones, and I'd imagine old code makes fine configuration testing for the new ones, which are near impossible to run at 100% for a while.
Having said that, I'd be more than happy to take something that peaks at 3.8 kW so I can run it for the 5 or 6 hours my PV is maxing out on these summer days. And if the cooling water is warm enough I might be able to brew up some ale while I'm about it!
e-waste recycling is reasonably profitable from what I understand. Once you grind everything up into tiny bits, you can extract things like gold, silver, copper, lead, aluminum, and other materials that are recyclable in one way or another. The oil-based stuff could be recycled the way you recycle any kind of plastic, I would imagine. In any case, the extracted materials have enough value to make it worth doing.
Better to recycle than fill up landfills with this stuff at any rate, especially when it's more profitable to recycle.
https://www.industryleadersmagazine.com/recycling-e-waste-becoming-big-profitable-business/
"What do they do with all of the old kit? "
Lots of tech firms specialise in breaking up older systems and selling off the individual parts, as most of these supercomputers use pretty standard Intel or AMD CPUs and RAM.
I got a rather nice ex-Google IBM 32-core Xeon server mobo for just a couple of hundred shekels and it is now running BOINC on a number of projects, when it isn't doing its normal workload.
The TDP is nice and low, and DRAM prices are low too.
In my past experience (which I have little reason to think is outdated, but I can happily be corrected), after three years the machines are no longer state-of-the-art and need upgrading or replacing to remain on the curve; if not replaced, they retain some useful life in a secondary role for about two more years.
After five years, the performance compared with the running costs (electricity and cooling, even if not staff) means that money is saved by scrapping the hardware and replacing it with new equipment. However, many organisations have ring-fenced capital and recurrent budgets, and continue to run old equipment because they have no replacement money. This is vastly wasteful of money if the bigger picture is considered, and can lead to a perception that there is value in extremely old equipment which really isn't justified.
Recycling to donate to another institution generally does not make much sense, but sometimes it can, for example to enable an institution to extend the life of its old equipment in the knowledge of replacement capital funding in the near future.
Take a look at the stock image on this article. It's huge slabs of custom-fabricated metal and PCBs - super high cost and super high labor. Such a machine can't be maintained once it reaches end-of-life. It will be used by researchers until it completely fails, then it's scrapped. Nobody wants to build custom replacement components for an old computer rack that needs 12 kW of power to perform what a modern desktop computer does in 300 W.
You can find parts of old supercomputers and military hardware at electronics surplus stores. People will inspect it like museum pieces but rarely buy it.
Sez here in today's Milwaukee Journal Sentinel* (in one of those "It Happened Ten Years Ago" retrospective thingies) that IBM sold a supercomputer for 100 million dollars that could do "1,000 trillion" calculations per second. Yeah, like any second-rate junkbox of a cellphone can't do twice as much in half the time using a quarter of the energy.
* I sometimes wonder why the newspaper didn't toss in "picayune" and "courier" while creating an already overlong name. The Milwaukee Journal Sentinel Picayune Courier rolls off the tongue and has a nice ring to it.
Well, I can't comment on exotic systems like Summit. I have, however, installed and managed many commodity HPC clusters. I say "commodity" as they are built from commodity Intel or AMD servers; the things which make them an HPC cluster are the fast interconnect (InfiniBand or Omni-Path) and the fast parallel storage.
In short, as they are commodity servers there is a three-year warranty. For the last academic tenders I answered, it was common to request the price for five years of support.
I was also asked to work out the lifetime electricity costs of the servers, which could approach the actual purchase cost (see the rough sums below). So after five years it really is time to start looking for more energy-efficient compute.
Short answer is three to five years, and as they are commodity servers, WEEE disposal.
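The rough sums behind "electricity could approach the purchase cost", in Python - every figure here is an illustrative assumption, not a vendor number:

```python
# Back-of-envelope: five years of 24/7 operation for one commodity node.
power_kw = 0.5        # assumed average draw of a loaded 2U compute node
price_per_kwh = 0.15  # assumed tariff, ignoring cooling overhead
hours = 5 * 365 * 24  # five years, running around the clock

energy_cost = power_kw * price_per_kwh * hours
print(f"~{energy_cost:,.0f} per node over five years")  # ~3,285
```

A few thousand per node really is in the same ballpark as a commodity server's purchase price, and cooling overhead can add a large fraction on top.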
A smart thing to do would be to ship the stuff to Cuba - seriously, there was a talk at FOSDEM two years ago on HPC in Cuba; they need kit because US export restrictions still apply.
After five years the high-speed network is outdated, and will probably be junked.
I almost wept once when I saw chassis Myrinet switches and fibres lying in a corner of a server room. But who would take the time and trouble to revive something like that, even if the drivers could be found for the latest kernel versions?
NOPE! The fastest supercomputer in the world is a 128-bit custom GaAs Opto-Electronic CPU array located in Vancouver, British Columbia, Canada, running at a SUSTAINED 119 ExaFLOPS....YES!!! You Saw That Correctly! -- One Hundred Nineteen Quintillion 128-bit wide Floating Point Operations Per Second -- It's a completely private under-the-radar system running Whole Brain Emulation Software (i.e. a neurologically-oriented electro-chemical simulation) --- 'Nuff Said !!!
ORIGINAL QUOTE:
"But Canada has no lair-covering extinct volancoes!
How is this even possible?
In b4 Trudeau #MeTooed for a lame assgrab 20 years ago because he wanted to go public on the CANADA LAIR next week."
---
It's a PRIVATELY FUNDED custom CPU design where EACH CHIP has a 65536 by 65536 array of microcores that do simple Add, Subtract, Multiply, Division, Nth Root, Power/Square/Cube and an up to 11 x 11 item convolution engine (i.e. for high-pass/low-pass filters, SOBEL edge detect, etc). For each microcore, there are 128 local registers (i.e. stack-like storage locations), each 128 bits wide, that represent 128-bit FLOATING POINT or FIXED POINT values (64-bit integer portion and 64-bit fractional portion). A one-bit flag sets each register (i.e. operand) as a Floating Point or Fixed Point value. The reason we have 128 registers is for the 121 items used in the 11x11 convolution filter plus extra filter and multiplier operands.
The chips are GaAs (Gallium Arsenide) on Sapphire substrate (i.e. glass-like Aluminum Oxide ceramic) for maximum heat-wicking capability. We tried a diamond substrate but that was waaaaaaay too hard to produce into wafers. The Al2O3 wafers were MUCH easier to manufacture. Clock speed is synchronous across ALL CPU devices, now at 20 GHz, with some parts truly opto-electronic (i.e. networking between individual chips is a glass light pipe from a UV light source emitter/receiver set embedded ONTO the chip substrate going at multi-Terabit speeds). ALL of our networking uses our custom-design IP-3000 packet exchange infrastructure which is something never before seen!
We immerse the motherboards in a fast-circulating, nearly unbreakable synthetic thermal fluid (i.e. it doesn't easily decompose to constituent parts and comes from a specialty Canadian manufacturer) which dumps the heat out to a radiator-like heat exchanger array which heats common water which is then piped to the company swimming pool! (That part works REALLY GREAT!)
The instruction set is custom but, of course, HIGHLY VECTORIZED (i.e. lots of array processing) with heavy use of SIMD (Single Instruction, Multiple Data) and MIMD (Multiple Instructions, Multiple Data) op-codes. The 64k by 64k 128-bit integer processing array works the same way BUT you can act upon 16 of 8-bit signed and unsigned integer operands, 8 of 16-bit operands, 4 of 32-bit operands, 2 of 64-bit operands or one 128-bit integer operand during each operation for each RISC-like microcore (i.e. Reduced Instruction Set Computer). The pre-defined Fixed Point and Floating Point sizes are 16, 32, 64 and 128 bits.
Since memory access is fully synchronous with the CPU microcores at a clock speed of 20 GHz there is no delay time or DMA-like (Direct Memory Access) collision issues. Each microcore has its OWN localized SRAM-like (Static RAM) 16 megabytes of storage memory in addition to the 128 local registers, for about 64 GB per chip in multiple layers (it's a stacked 3D layered Array Processor chip).
The array processors send final data to a massive SRAM-like global shared memory sub-system which is attached to a 128-bit wide CISC (Complex Instruction Set Computer) CPU array (also custom designed, with a custom instruction set) for final display on a 16-by-32 monitor display wall (DCI 1.89:1 aspect ratio 4K monitors) for a truly square 65536 by 65536 64-bit RGBA pixel display area.
The whole setup is only about the size of a large home swimming pool but it BLOWS AWAY ALL of the Top-500 computers COMBINED !!! We are completely under the radar and it runs a molecular-bonds simulation using common rules of electro-chemical interaction within specified types of neural tissue.
We are NUMBER ONE on the Top-500 List !!!!
We've done Linpack, various sieves, raytracing and video rendering runs at 119 ExaFLOPS SUSTAINED at 128-bit wide Floating Point, so THERE IS NO EQUAL TO OUR SYSTEM !!!
And YES we've even emulated Crysis on it SO YES IT DOES RUN CRYSIS at 64k by 64k at max settings with custom textures of course !!!
They REALLY need to add a long-term "Edit My Post" button on these forums...I saw a lot of spelling mistakes on my earlier post SOOO...here is some additional info....
1) Microcore Arrays are 65536 by 65536 at 128 bits wide of Fixed Point, Floating Point and Integer values, but each local register CAN BE PACKED with 8-bit, 16-bit, 32-bit and 64-bit SIMD-like operands for multiple simultaneous operations.
2) And YES we've even emulated Crysis on it (via a one-time conversion of x86 and MMX instructions into our own op-codes as we are not legally allowed to do a real-time x86 emulation) --- SO YES IT DOES RUN CRYSIS at 64k by 64k at max settings with all-custom high resolution textures !!!
3) And YES the neural structures being emulated HAVE EVOLVED human-like reasoning capabilities INCLUDING multi-user natural language interaction and advanced vision and audio recognition.
I would say it's about the equivalent of a high-functioning teenager in terms of a very narrow suite of comprehension and interpretation tasks. ...BUT... It's only a mere matter of time and ROTE TEACHING to get it to beyond-human-level capabilities (i.e. adult post-150-IQ level!)
4) AND NO! It's NOT connected to the Internet! It's ALREADY surprised our team with unanticipated conclusions and highly unusual data exploration/interpretation pathways, so we are loath to give the "Expert System" unhindered access to unfiltered Internet data!
5) Functional Emulation of Human Reasoning is A DONE AND FOREGONE CONCLUSION --- DEAL WITH IT !!! Human neural tissue structures simulated at the molecular/electro-chemistry level IS A PROVEN METHOD to get evolution of executive reasoning functionality!
There's a history of boasting of achieving some great milestone that ends up as some version of a technology tourist trap.
Was it T-1 or T-2 that was reported to have had no programmers for at least 10 years due to a lack of talent?
Any wagers on reports of the current status of T-1?
I wouldn't worry about running Pentiums when one entry near the top of the supercomputer list some years back was made from about 1,000 PlayStation 3s chained together using their built-in network ports. The tech said the hardest thing was removing them from their casings; the rest was easy.
I'm wondering where they are on the list now - could they combine them all and get to the top again?
"I'm wondering where they are on the list now, could they combine them all and get to the top again"
Not a chance. Unless you continually upgrade your kit, you'll tumble down and out of the TOP 500 chart pretty quickly - it's not unusual for a machine to debut in a double-figure position, but be outperformed by the 500th-and-last machine on the list within two years.
Chris, that would be a damn good plot: a scatter of initial position in the Top500 against years spent in the Top500.
Cost isn't given AFAIK, so you couldn't do a cost-per-year analysis on the Top500 data.
I guess I can answer this question myself, but are all the CSV files downloadable from the site?
I guess I also have to assume that when a system is in the list for a second year it retains an identity you can track.
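For what it's worth, here's a minimal sketch of that plot in Python, assuming you've saved one list per year as CSV and that machines do keep a stable name between lists - the file names and the "name"/"rank" columns are my assumptions, so adjust to whatever the site actually serves:

```python
# Sketch: debut position in the Top500 vs. years spent on the list.
# Assumes files top500_1993.csv ... top500_2018.csv, one per year,
# each with (hypothetical) columns "name" and "rank".
import glob
import pandas as pd
import matplotlib.pyplot as plt

debut_rank = {}  # first rank ever seen for each machine
years_seen = {}  # number of yearly lists each machine appears on

for path in sorted(glob.glob("top500_*.csv")):
    df = pd.read_csv(path)
    for _, row in df.iterrows():
        name = row["name"]  # relies on names being stable year to year
        debut_rank.setdefault(name, row["rank"])
        years_seen[name] = years_seen.get(name, 0) + 1

names = list(debut_rank)
plt.scatter([debut_rank[n] for n in names],
            [years_seen[n] for n in names], s=10)
plt.xlabel("Debut position in the Top500")
plt.ylabel("Years on the Top500 list")
plt.show()
```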
"Summit has the ability to calculate 30 years of data saved on a desktop computer in one hour,"
Erm... do most desktop computers store lots of data which was expensive to calculate? I don't think mine does.
Perhaps they actually mean that it can calculate in one hour what a desktop computer can calculate in 30 years. That would make it about 263,000 times faster than a desktop computer.
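That factor checks out - 30 years expressed in hours:

```python
# 30 years of desktop number-crunching compressed into one hour.
hours = 30 * 365.25 * 24
print(f"{hours:,.0f} hours")  # 262,980 -- roughly the quoted 263,000x
```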
...that we in the States have reclaimed the crown.
And now for something completely different:
In Star Trek: The Next Generation, it has been stated that Data's positronic brain has a top speed of about 60 trillion calculations per second, or 60 TFLOPS. So 200,000 / 60 = 3,333.33... So with one computer, we can have 3,333 and 1/3 Datas. That would constitute a lot of Star Trek technobabble. In case you are wondering, the two episodes where I remember it being mentioned are "The Measure of a Man" and "The Offspring."
Something else that bears mentioning: if you have watched The Animatrix, "The Second Renaissance" is about AI-powered androids. A whole race of disposable people... artificial, but still... At what point do we call them "lifeforms" and assign rights to them? Or do we continue to treat them as slaves since they are not alive? Based upon history, I don't see the future as bright.
"... Based upon history, I don't see the future as bright."
We are having enough problems giving equal rights to the lifeforms known as 'Man', never mind getting hung up on the theoretical future artificial lifeforms that we may create.
These problems are a long way away, and if we do not deal with Homo sapiens first we will not get to the future to create these simulacra.
Just a thought !!! :)
I'm intrigued - what are China, Japan, the US and Switzerland actually doing with these computers? Or to flip it: what are they investing in, that they have so many computers, that the UK is not investing in? In some regards it looks like a something-size-waving competition, but are we missing out by not having a top supercomputer?
The UK is indeed investing in supercomputers.
Look at the Snowden computer at Daresbury Lab.
Look at the Isambard supercomputer at Bristol - built with Arm processors, and there is a lot of interest in that. The Government recently announced funding for machine learning, as reported on El Reg.
UK Met Office is currently at Number 15, which is pretty respectable.
So what are they doing?
UK - weather forecasting. Materials simulations. Jet engine simulations (Rolls Royce).
CFD (Formula 1, Bloodhound, Rolls Royce etc etc)
AWE - can imagine that would be atomic weapons design then.
GCHQ - hello!
US National Labs - nuclear stockpile stewardship. Simulating how materials age with radiation, whether the bombs are safe to store, and whether they are going to go off when used.
NOAA - weather simulations
NASA - astrophysics and all the other stuff NASA does
Texas TACC - simulations of the human heart etc etc.
Swiss - probably lots of interesting research. But the Swiss centre also houses their meteo centre, so weather forecasting at least. And weather forecasting over Swiss terrain needs a lot of detail.
China - no idea!