
Welcome to the machine
Given some of the use cases they've suggested, their choice of "Welcome to the Machine" as its anthem seems a little disturbing.
"It's all right we know where you've been"
HPE is undertaking the single most determined and ambitious redesign of server architecture in recent history in the shape of The Machine. We'll try to provide what Army types call a sitrep on The Machine, HPE's "completely different" system: its aims, its technology and its situation. Think of this as a …
Hehe, I've just finished watching the techno-thriller series 'Person of Interest', in which an all-seeing AI is dubbed 'The Machine'. The soundtrack includes Portishead and DJ Shadow amongst others, but it refrains from using Pink Floyd's Welcome to the Machine until the end of its 4th season, and to great effect.
If you only want to watch it for the techno stuff, there's a case to be made for jumping in at the beginning of the 3rd season, in which it breaks from its police-procedural format and then some. It was created by Jonathan Nolan (Memento, The Prestige, Inception, Dark Knight, Interstellar, Westworld) and addresses many of the themes he's known for.
A quantum leap is the smallest change a system can make. A quantum leap would be to take a conventional server... and remove a screw.
What's needed here is a revolution. The kind of revolution that used to be common in the 1970s and 1980s, where the "next" machine was commonly 2-10 times faster than the previous one.
Actually, to be specific here, and pedantic, a hyperspace jump could be a few millimeters.
What we need here is a hyperspace jump across galaxies located enormous distances from each other. Here we assume that the physical distance is an allegory for technological progress. ;)
As I understand it (and it may become clear shortly that I am not a physicist), a quantum leap in computing would result in a machine that executes any task it is handed during its own lifetime, in an arbitrary order. Processing would commence with an "Oh boy" statement, whereupon the main thread has to identify where and when it is located, resolve some kind of dilemma, and move on to the next process. Meanwhile a secondary thread will slap a small piece of perspex and spout percentage chances regarding the purpose of the current job, all of which will turn out to be incorrect.
It seems a little bit complicated TBH.
A "quantum leap" is a state transition that cannot possibly happen in a gradual way. Just because these effects in "real life" (aka the realm governed by the laws of physics) are most pronounced in the microscopic world doesn't mean they are a measure of the "smallest possible" at all.
A "revolution" in that sense isn't a state transition within the same system either, but one that replaces one system with another. In the realm of computing, think of going from fingers and sticks to maths on an abacus, from the abacus to the Analytical Engine, and from there via the general-purpose programmable computer to the generalised quantum computer.
Where "The Machine" will fit in we'll hopefully find out soon!
a universal pool of non-volatile memory is accessed by large numbers of specialised cores ... in which data can stay still while different processors are brought to bear on either all of it or subsets of it.
It's a long time - over 35 years - since I worked on a multiprocessor shared-memory architecture. Presumably HPE reckon that they can implement monitors or semaphores - to control access to shared memory - in such a way as not to adversely affect the Machine's performance.
"It's a long time - over 35 years - since I worked on a multiprocessor shared-memory architecture. Presumably HPE reckon that they can implement monitors or semaphores - to control access to shared memory - in such a way as not to adversely affect the Machine's performance."
The state of the art has moved forward a little in that time period (lockless algorithms, LL/SC etc) - the fundamental problem of data sitting on the end of a high latency pipe hasn't changed though. :(
Seems like HPE have tripped over some Denelcor HEP brochures while clearing out some greybeard dens.
Love to play with something like this, but I don't think it does anything to fundamentally change the way systems are built or perform at the base level. The system software and software architecture that sits on it may well deliver a difference - but I'm somewhat sceptical about the chances of that being an advantage that would uniquely apply for this particular assemblage of components.
It really doesn't matter if you are logically moving processes to data or data to processes, or keeping both in situ, data still has to travel down those long distance high-bandwidth links... Those long distance high-bandwidth links will still need plenty of Watts in silicon to drive them regardless of whether the data travels down fibre or wire. :(
Even if you look at it as being a bog standard server with terabytes of storage that's as fast as RAM, that's still pretty useful. You could have a huge database that behaves as if it's entirely in RAM, but without worrying about flushing it to disk.
This would explain a recent hiring spree in HPE's storage division, locally. Didn't jump on that bandwagon. Wonder if it might actually lead somewhere, with all-flash arrays... A quantum leap would be no longer hiring system administrators who actually aren't capable of doing the job, despite having certs and degrees...
For me, The Machine smells like a loss leader. There is just enough conceptual information to whet our appetites without giving any software tools we may use to start experimenting with. This act is designed to get the industry saying "HPE" a lot. If HPE were academically interested in creating a new approach they would have to be more open about it. Remember that PC architecture only became widespread after companies other than IBM joined the effort.
Whilst on the subject of architecture, the description sounds a lot like mainframe parts in a server enclosure.
May I suggest taking a look at these articles? ...
Programming For Persistent Memory Takes Persistence (April 21, 2016) ... https://www.nextplatform.com/2016/04/21/programming-persistent-memory-takes-persistence/
First Steps In The Program Model For Persistent Memory (April 25, 2016) ... https://www.nextplatform.com/2016/04/25/first-steps-program-model-persistent-memory/
Hope this helps.
Chris Melnor – You wrote “It also appears that the SOCs contain a processor and its cores and also cache memory; so, if this is the case, we have a CPU using a memory hierarchy of cache, DRAM, local NVM and remote NVM – four tiers.”
That was, at first, my take as well. As suggested here, it is indeed only remote nonvolatile memory, not remote volatile memory, that can be accessed by another node's processors. It happens, though, that the scope of cache coherency is limited to individual nodes for both volatile and nonvolatile memory accesses. So, yes, Processor A's cache (on a Node M) can contain data from a remote Node N's nonvolatile memory, but changes to that same memory by a Node X's processors won't be visible to software running on Node M.