Not enough buzzwords
You need to add Cloud, Big Data, AI, Robotics and PaaS (Pi As A Service) before you can market this.
Nobody in management who holds the budget purse strings will buy it based on how fast or useful it is.
The Los Alamos National Laboratory will this week reveal its latest "High-performance computer" - a cluster of 750 Raspberry Pis. The Lab's Gary Grider had the idea of trying to package up serious clusters, not to replace its supers – the three fastest at the moment are the Grizzly, Fire, and Ice systems, all in the 1-1.5 …
Many years ago, Linux was never mentioned in connection with large computers; now it's never mentioned again because it has become the default. More than 50% of the TOP500 list ran Linux before they even started counting it. I don't give a shit, sort of, but it certainly shows the importance of marketing. Once, for a very short time, there was one running "Apple", and wow, the ink about that. Just imagine if MS paid somebody enough to try to tweak Windows onto the list. Sometimes I have this feeling that if there ever is Linux on the desktop (19 years for me), it will happen in a similar way: no fanfare. Most cars have four tires; ever told anybody about that?
In a way Windows and Linux are very similar: if there is a new virus, Windows is never mentioned, because it's the default.
People don't really use desktops much anymore; the computer they use most is the phone in their pocket, and the majority of those run Android, which is based on Linux.
Of course, it's using Linux to run proprietary Java apps, so it's not exactly what the open source zealots were expecting...
Anyone using an actual Raspberry Pi for a 750-node cluster is an idiot. It would make far more sense to use an Orange or Banana Pi and get an actual 1Gbps Ethernet connection instead of dicking about with a 100Mbps one. That's a ten-fold improvement in one of the most critical components of a Beowulf-style cluster.
The sensible thing is to stick 48 Orange/Banana Pis in a 1U and marry them up to a 48-port gigabit switch, then tie 16 of those together with a 10Gb switch for 768 nodes. That's 32U of rack space, leaving enough left over for a more normal x86 server to act as a file server.
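The arithmetic behind that layout can be sketched out; these are the figures from the comment above (48 boards per 1U tray, one 1U switch per tray, 16 pairs), not vendor specs:

```python
# Rack-layout back-of-envelope for the proposed Orange/Banana Pi cluster.
# Assumptions (from the comment, not a spec): 48 single-board computers
# per 1U tray, each tray paired with a 1U 48-port gigabit switch,
# 16 such tray/switch pairs uplinked to a 10Gb aggregation switch.

BOARDS_PER_TRAY = 48
TRAY_SWITCH_PAIRS = 16

nodes = BOARDS_PER_TRAY * TRAY_SWITCH_PAIRS   # total compute nodes
rack_units = TRAY_SWITCH_PAIRS * 2            # 1U tray + 1U switch each

print(f"{nodes} nodes in {rack_units}U")  # 768 nodes in 32U
```

That leaves 10U free in a standard 42U rack for a file server and the 10Gb aggregation switch.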
Oh and my day job is an HPC system administrator...
I certainly could be wrong (never used any Pi), but I thought I'd read that the Ethernet on the Pi runs off the USB bus (not sure if that's still the case). As you say, probably not a very good setup beyond a simple toy; the exception may be setups that aren't network-bound (e.g. download a batch of data to work on and then work on it from local storage/memory).
Even if it's only 100Mbps, as long as it's on the PCI bus (not USB), I'd think it would be a major improvement over anything running on top of USB.
Yes, the Ethernet and the 4-port USB hub all go via a single USB port into the SoC. Yes, it is a bottleneck; no, it hasn't really stopped a lot of sales, because in general people are not too worried about it. Those who really need the throughput get alternative devices and live with the less useful support.
As for whether this is a toy, I'd suggest that if Los Alamos National Lab thought it was a good idea to make one, then it's not really a toy. After all, it's not as if the Pi is the new kid on the block with an unexpected design that will catch them unawares.
This is a project for testing code in an environment similar to the real HPCs, so you don't take up valuable compute time ironing out the kinks. It gives you 750 quad-core A53 devices running at 1.2GHz, with slow interconnects and only 1GB RAM per node, but costs less than $35k or so (my figures, not sure of the final cost, assuming $35/Pi ex VAT plus costs for the BitScope racks).
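A quick sanity check on that cost estimate, using only the assumptions stated above ($35 per board ex VAT, 750 boards; the rack hardware cost is unknown and left out):

```python
# Rough cost check for the figures quoted above. The $35/board price
# and 750-node count come from the comment; rack/Blade costs are not
# included because they aren't known.

PI_UNIT_COST = 35   # USD per board, ex VAT (assumption)
NODES = 750

board_cost = PI_UNIT_COST * NODES
print(f"Boards alone: ${board_cost:,}")  # Boards alone: $26,250
```

So the boards come to about $26k, leaving several thousand dollars of headroom for racks and power within the quoted "less than $35k or so".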
Raspberry Pi makes, what is it, about 500k units a month or something ridiculous, so buying in quantity is not a problem.
As for using something like a Banana Pi or an Orange Pi: yes, they do have faster Ethernet, but the OS support is absolutely rubbish. Since this is an educational project rather than a serious high-performance computing project, the Ethernet is probably not that important anyway.
> how many FlOpS
The Pi 3 GPU reportedly provides "24 GFLOPS of general purpose computing performance", which would make a not-too-shabby 18TF for the cluster of 750, if you can code for the GPU.
OTOH, the Pi B reportedly did 0.065 GF on single-precision LINPACK, which is rather less impressive.
A single NVidia Titan Xp claims >10TF by itself.
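The aggregate figure quoted above is easy to check; note that the 24 GFLOPS per board is the claimed GPU number from the comment, not a measured result:

```python
# Aggregate throughput for the cluster, using the per-board figure
# quoted above (24 GFLOPS per Pi 3 GPU is a claimed number, and only
# reachable if your workload can actually run on the GPU).

GPU_GFLOPS_PER_BOARD = 24
NODES = 750

total_tflops = GPU_GFLOPS_PER_BOARD * NODES / 1000
print(f"{total_tflops} TFLOPS")  # 18.0 TFLOPS
```

Which puts the whole 750-node cluster's theoretical GPU peak at less than two of the Titan Xp cards mentioned above.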
The latest Pi is a quad-core Arm A53 running at 1.2GHz, so considerably faster than the original Pi model. The latest models also support NEON, which can give you a massive speed improvement if you can take advantage of it.
There's a thread on the Raspberry Pi forums with a lot of LINPACK-style testing, which might be worth a look if you're interested.
3W per node * 150 nodes per module * 5 modules in the cluster = 2250W. Or about 47A at 48V. Or around 20 standard Ethernet cables using all 4 pairs at 600mA per pair (802.3bt).
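That arithmetic spelled out, with the assumptions made explicit (3W per node is an estimate, and 802.3bt allows up to 600mA per pair across all four pairs at a nominal ~48V):

```python
# Power back-of-envelope for a 750-node Pi cluster.
# Assumptions: 3W per node (estimate), 48V supply, and an 802.3bt
# cable carrying 600mA on each of its 4 pairs.

import math

WATTS_PER_NODE = 3
NODES_PER_MODULE = 150
MODULES = 5
SUPPLY_VOLTS = 48
POE_PAIRS = 4
AMPS_PER_PAIR = 0.6

total_watts = WATTS_PER_NODE * NODES_PER_MODULE * MODULES   # 2250 W
total_amps = total_watts / SUPPLY_VOLTS                     # ~46.9 A
watts_per_cable = POE_PAIRS * AMPS_PER_PAIR * SUPPLY_VOLTS  # ~115 W
cables = math.ceil(total_watts / watts_per_cable)

print(total_watts, round(total_amps), cables)  # 2250 47 20
```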
I'd have thought a 13A plug would be more appropriate - or one of those washing machine outlets, since it's the US.
The Raspberry Pi 3Bs get their power from the BitScope Blades.
Each Blade Quattro mounts four Raspberry Pis. The Blade gets its power through mounting holes at either end.
The Blades attach to two power plates: positive at one end and ground at the other. In this way BitScope can mount up to 15 Blades in each power pack. Depending on the Blade used (Quattro or Duo), 60 or 30 Raspberry Pis can be mounted. There is no power cabling other than to the power plates.
BitScope then uses multiple power packs to build out a Module. In this case the node count is 150 - with integrated networking and power.
It is a VERY inexpensive but massively parallel system for research, not just to buy but to operate as well.
A typical 42U Rack will hold seven modules - 1050 nodes.
The only cabling is for network connections, and it carries networking only - NOT PoE.