Fun to watch
Other institutions like CSCS (the Swiss National Supercomputing Centre) run their weather codes on Nvidia GPUs; it could be rather fun to see how these architectures perform in the medium to long term.
A fellowship of four UK universities, along with HPC veteran Cray and the Met Office, have been handed £3m to build a 10,000+ ARM core supercomputer. The project could settle whether ARM-based supercomputers can beat Xeon ones on cost while offering the right performance. The scheme is called Isambard, after 19th century Brit …
Dunno about 'fun' but I'm very interested to see how it goes.
Although seemingly primarily Arm-based, with that mixture of Arm, x86 (Xeon and Xeon Phi) and GPU it looks more like a test/evaluation system. I can't see a good reason to incorporate all of those technologies in a single machine unless it is to compare the different processor architectures within the same system architecture, so perhaps they are testing the system architecture as well as the Arm technology.
Wow, £3m. That's a massive 20,000sq ft of London office space (for a year).
For comparison, the House of Commons Administration reports: "In 2015-16 net income of £16.4 million was generated predominantly from commercial activities including retail, tour activities, catering and from property receipts."
Could be a great day for ARM.
I find it hard to believe that an ARM array at the same clock frequency, and with appropriately sized caches on each processor, would not beat an Intel code museum.
The results should be very interesting, but I suspect not without dispute.
Not really.
1) Supercomputing code generally is written by scientists and runs horribly. I've done multiple tests and found that I often can rewrite their code and perform better on 40 processor cores and 4 GPUs than they do on 3 million pound computers.
2) We're not comparing ARM to x86 here. That comparison can be accomplished far better with a few desktop systems. Performance-wise, you're assuming that performance is related to instruction set. It's generally about instruction execution performance and memory performance. Intel uses more transistors on their multipliers than ARM uses in their entire chip. This may sound inefficient, but it is those things which give Intel an edge. Let's also consider that memory performance is almost all about management of DDR bursts and block alignment. ARM has much tighter restrictions on those things. Also, more often than not, the scientific code renders the cache all but useless. Ask a scientist working on this code whether they can describe the DDR burst architecture, or cache coherency within the CPU ring bus, or the process of mutexing within a NUMA environment.
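To make the "scientists' code vs. rewritten code" point concrete, here's a toy sketch (my own illustration, not anything from the project): a 5-point stencil written as an element-by-element Python loop versus the same arithmetic as whole-array NumPy slices. The slice version streams contiguous memory, which is exactly the burst-friendly access pattern being talked about; the loop version pays per-element overhead and scattered reads.

```python
import numpy as np

def stencil_naive(u):
    # Element-by-element loops, the style often seen in quickly written research code.
    n, m = u.shape
    out = np.zeros_like(u)
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                + u[i, j - 1] + u[i, j + 1])
    return out

def stencil_vectorized(u):
    # Same arithmetic as whole-array slices: contiguous, cache- and burst-friendly reads.
    out = np.zeros_like(u)
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
    return out
```

Both produce identical results; on large grids the vectorized form is typically one to two orders of magnitude faster, without touching the hardware at all.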
This is about whether shitty code costs less to run on one computer 100 times larger than it should be vs another.
For 3 million pounds, I would imagine they could have bought a gaming PC and a programmer.
I think this chip is interesting:
https://www.parallella.org/2016/10/05/epiphany-v-a-1024-core-64-bit-risc-processor/
The thing I kinda worry about is that it might be as difficult to program as one of those old Cell processors. A supercomputer was actually built out of Cell processors but abandoned after three years; reading between the lines, it seems no one could, or was interested enough to, program the thing.