But can it run Cryses?
A Raspberry Pi 4 has over 4 GFLOPS, while a Pi 5 is reported to have over 10 GFLOPS.
This year marks the 30th anniversary of the Top500 ranking of the world's publicly known fastest supercomputers. In celebration of this fact and with the annual Supercomputing event underway in Colorado, we thought it'd be fun, if a bit silly, to see how cheaply we could achieve the performance of a top-ten supercomputer from …
I remember, circa 2004, buying the latest Sun UltraSPARC number-crunching server for our scientific research institute. It had 12 dual-core CPUs and a then-insane 48GB of RAM, plus a still-useful-now 3TB of Fibre Channel high-speed drives. And it cost about the same as a 3-bed house at the time (after the educational discount!). Now, less than 20 years on, you could easily smoke that with a custom-built PC in the <£1.5k bracket.
When I started in IT, the "New Thing" from Sun was the E10K: 64 UltraSPARC II CPUs @ 400MHz, 64GB of RAM, etc, etc. It was a full-rack system you'd wheel into your data centre, hook up to power and ethernet, and it cost in the region of £1m fully loaded.
About 10 years later, the T5220 was a 2U rackmount server with 64 threads and 128GB of RAM for a fraction of the price, and I'm pretty sure it would outperform the E10K.
That realisation was kinda scary...
100 gigaflops to 1 exaflop is an increase of seven orders of magnitude. That's much faster than personal computers have improved, and a good bit faster than the old Moore's law. I guess that's because, in addition to silicon getting better, spending on supercomputing has also massively increased.
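To put a number on "much faster": a quick sketch, assuming the roughly 30-year span of the Top500 list discussed above, puts the implied growth at around 1.7x per year, versus roughly 1.4x per year for a two-year Moore's-law doubling.

```python
# Rough comparison of supercomputer growth vs a Moore's-law doubling (illustrative sketch only).
# Assumes ~30 years between a ~100-gigaflops top machine and a ~1-exaflop machine.
total_growth = 1e18 / 1e11                   # 100 GFLOPS -> 1 EFLOPS = 1e7, i.e. seven orders of magnitude
years = 30
annual_top500 = total_growth ** (1 / years)  # ~1.71x per year
annual_moore = 2 ** (1 / 2)                  # doubling every two years ~1.41x per year
print(f"Top-end supercomputers: ~{annual_top500:.2f}x/year, Moore's law: ~{annual_moore:.2f}x/year")
```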
If the trend holds, we'll run out of prefixes within my lifetime.
For the time being the trend seems to hold, if not accelerate: Google's Cloud TPU already reaches 10 exaflops when training large language models, and although it isn't listed on the official TOP500, and that score may not be directly comparable to Linpack, it is at least achieved doing something useful. And if you're working on AI, a 400-teraflop cloud TensorFlow accelerator is available at roughly $1.20 per hour spot price, which works out to 0.3¢ per teraflop-hour. Insane times.
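The per-teraflop-hour figure does check out; a minimal sketch using the numbers quoted above ($1.20/hour spot price and 400 teraflops are the poster's figures, not an official rate card):

```python
# Cost per teraflop-hour from the quoted spot price (figures as quoted above, not an official price list).
spot_price_usd_per_hour = 1.20   # quoted hourly spot price for the accelerator
teraflops = 400                  # quoted peak throughput
cents_per_tflop_hour = spot_price_usd_per_hour / teraflops * 100
print(f"~{cents_per_tflop_hour:.1f} cents per teraflop-hour")   # -> ~0.3
```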
Not going to lie, I am no expert on PC setups or outputs, but reading this and comparing across a 30-year window, I mean WOW.
As a matter of interest, what would Moore's Law have you EXPECT to achieve, from a 1993 baseline to today?
Sort of interesting to see if it DOES correlate with the doubling every 18 months :o)
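A rough sketch of an answer, assuming a 1993 baseline and a 2023 endpoint: the 18-month doubling gives 20 doublings, roughly a million-fold, while the two-year form of the law gives about 33,000-fold.

```python
# Expected Moore's-law multiplier from a 1993 baseline to 2023 (back-of-the-envelope sketch).
years = 2023 - 1993
for doubling_period in (1.5, 2.0):           # 18-month and two-year variants of the "law"
    factor = 2 ** (years / doubling_period)
    print(f"doubling every {doubling_period} years -> ~{factor:,.0f}x")
# doubling every 1.5 years -> ~1,048,576x
# doubling every 2.0 years -> ~32,768x
```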
The Cray CPU was small. But the Cray system was a lot more besides the CPU: it had a whole raft of storage units, I/O, cooling systems, motor-generator sets, test equipment, etc., all of which had to be there for the system to function. I have a site planning manual with pop-out cardboard templates for all the different units that could be arranged on your local building plan. Plus there was all the space needed for field engineers, operators, etc. Along with several tons of equipment Cray would throw in one swivel chair weighing exactly 11.4kg, but the customer was responsible for providing two wastebaskets. I spent my first morning on the job just mounting tapes. So the site was rather like a kind of high-tech lab with a whole lot of industrial-looking equipment, and then off to one side there was this thing with leather seats, looking rather like a street urinal designed by Ettore Sottsass.
All supercomputers these days are clusters. This means that Moore's-law considerations don't really apply to how high the benchmark numbers can go: if you want a more powerful supercomputer than the top ones, you could just add more nodes to one of them. While it's not completely simple and linear, you are not limited to the performance of a single system. That has always been possible, but the software for treating a cluster as one machine has improved a lot faster in recent years than it did in the 1990s.
The first PC I worked on was an IBM clone (an Olivetti, I think) in 1984. It ran an 8086 at 4.77MHz and cost about £1,500 (all in).
Today my home PC is running a 5800X at 4.7GHz and would cost about £1,500 to build from scratch tomorrow.
Moore's law (doubling performance for the same price roughly every two years) equates to about 42% per year. 39 years at 42%, starting from 1, gives about 869,452.
My current PC is about a million times more powerful than that first one, so Moore's law is still holding.
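For anyone who wants to reproduce that compound-growth figure, a quick sketch of the arithmetic:

```python
# Compound growth check for the figure above: 42% per year over 39 years (1984 to 2023).
rate = 0.42
years = 39
factor = (1 + rate) ** years
print(f"~{factor:,.0f}x")   # roughly 869,452x, i.e. about a million-fold
```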
I thought I would opine here by stating that the StargateSG7 fellow, while in real life (having now met him in person) actually quite insane, in addition to his massive boxer's frame, has proven to me that he personally holds several design and manufacturing patents and awards for his supercomputing works. It was also amply demonstrated to me that his company does secretly lead the Top-500 ranks by having yotta-scale computing in their underground far-north server farm facility.
After having to wear a number of jumpers and a bulky anorak just to keep warm in the cold Canadian north, I found the tour of their facility quite impressive, with over four million square metres of floor space inside a most impressively sized mountain range which makes the Cairngorms look like minor earth mounds!
I think for you Yanks that would be about 43 million square feet of floor space, racked from floor to ceiling with in-house-designed opto-electronic processors, each about the dimensions of a typical 500-millilitre box of Milfina Creme you can buy at any Aldi. A most impressive display, I must say!
It seems the superpowers of supercomputing have moved on from fifth form and are now at Oxbridge showcasing their technological talents.
While I was not given a demonstration of the whole-brain emulation of which he has often spoken in his online postings, I no longer doubt the technical ability, if not the veracity, of their facility to host multiple instances of such constructs!
I must give him points, even with his in-person insanity, most frightening physical bulk and brusque, military-like affect. It was an impressive tour, and it seems the StargateSG7 bloke has shown me that he was indeed not spouting mere gibberish during his outlandish ministrations in the Register posting sections.
It seems the Top500 list is missing numerous secretive facilities already at the yotta-scale of computational power and beyond.
Probably not, although it depends on how expensive power is where you are. For example, the current price in the UK appears to be £0.270/kWh, so £5.00 should be able to buy you 18.52 kWh. If you use the same amount of power consistently through the month, that allows for an average power consumption level of 25.7 W. That level of power consumption won't be sufficient for a desktop, but you have two ways to make it work. The first is to create an optimized system for getting as much computing as you can from that power limit. You can get some pretty good CPUs in that power limit, along with some SSDs for low-power data storage. The other way is to share hardware with someone else who is paying for their computing needs. Either way, if you do one of those, your power bill will likely be lower than the rental cost for this machine. Of course, you wouldn't necessarily need to run either the rented machine or your own machine at all hours, in which case both bills would decrease.
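The arithmetic behind that estimate is easy to reproduce; a small sketch using the quoted £0.270/kWh price, the £5.00 monthly spend, and an assumed 30-day month:

```python
# Reproducing the power-budget arithmetic above: £5.00/month at £0.270/kWh.
price_gbp_per_kwh = 0.270
monthly_budget_gbp = 5.00
hours_per_month = 30 * 24                        # assuming a 30-day month
kwh = monthly_budget_gbp / price_gbp_per_kwh     # ~18.52 kWh
avg_watts = kwh * 1000 / hours_per_month         # ~25.7 W of continuous draw
print(f"{kwh:.2f} kWh -> {avg_watts:.1f} W average")
```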