It will not work... a sad, desperate attempt from a company that's past its sell-by date
IBM boffins have unveiled new work on in-memory computing: doing processing inside Phase Change Memory with no external CPU. Traditional computing requires memory to hold data and an external processor to which the data is transferred, processed, and then written back to memory. This is the Von Neumann architecture and can be …
Tuesday 31st October 2017 09:28 GMT Lee D
Timing synchronisation would seem to be the problem, especially if you're talking temperature critical operations.
I don't see how you'd be able to do these computations any faster than with current technology; they might even be quite a bit slower. Certainly not without then having to push the results somewhere else to actually make use of them.
It probably has some niche applications somewhere... no RF emissions? No central clock / space missions where clocks might not work reliably / time might change because of speed / etc.? I don't know. But as a mainstream technology? I don't see it.
Tuesday 31st October 2017 09:58 GMT Destroy All Monsters
Sounds interesting, but of course only useful for some algorithms (those with characteristics of "physicality": they are local, and far-away nodes need to be messaged first). In computers, memory is used as a "super-lightspeed" communication medium.
But yeah, basically the Connection Machine 1 without the hypercube communication channels? (On which note, it is always interesting to re-read The Book. It's likely floating around on the 'net somewhere. I mean, come on MIT Press, USD 34 for a 30-year-old extended paper in B&W?)
Tuesday 31st October 2017 11:21 GMT Doctor Huh?
I'm glad I'm not the only one who came up with this, ahem, connection.
The basic idea of moving the computation to the data has been around for decades and pops up frequently. My most recent favourite is the Netezza appliance, which essentially implemented smart disk storage using FPGAs and hard drives. SSD storage would have solved the low MTBF of the mechanical devices. But two factors have put that technology on life support:
1. About 5 years ago, Hadoop became the shiny new thing (now it's Spark, and tomorrow it will be...?), and interest in dedicated appliances waned as interest in on-demand Cloud-based Hadoop grew.
2. IBM bought Netezza. IBM buying your company is a more effective kiss of death than the one Michael Corleone gave Fredo, because IBM doesn't wait for Mom to die before putting out the hit.
I can see Seth MacFarlane doing a whole bit on how this memory chip is an improvement in rain detection over the "weather rock" present in so many places.
Tuesday 31st October 2017 10:03 GMT Doctor Syntax
Tuesday 31st October 2017 10:18 GMT DropBear
I don't think so, in anything like its current form - I see zero interconnectivity in this. Just a bunch of cells, each one reacting locally to some external stimulus in a horribly rudimentary way. Without all the interconnectedness (and much more importantly, the part that can _modify_ the connections in a meaningful way) this sounds nothing like neural networks...
Tuesday 31st October 2017 11:11 GMT Dave 126
There has been success in using neural networks that have some areas of just local connections. Indeed, it seems the bottlenecks that these introduce are essential to keep the amount of information low enough to be processed efficiently. The image in this first link below illustrates this well:
More theoretical stuff here:
Tuesday 31st October 2017 22:44 GMT Doctor Syntax
Tuesday 31st October 2017 10:16 GMT frank ly
From the description of heating effects causing changes to the electronic state, and hence to the derived data value, it sounds like it could be used as a signal integrator (with a fixed natural leakage rate), with a trigger output if the heating rate due to the signal (minus the natural cooling rate) exceeds a certain value.
I suspect it would be quicker, easier and more flexible if you multiplexed all the inputs into an A/D converter and then used 'traditional' digital programming methods to perform computations on them.
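The integrate-and-trigger behaviour described above can be sketched in a few lines. This is a toy model, not device physics: the leak rate and trigger threshold are made-up illustrative values standing in for the cell's natural cooling and the phase-change point.

```python
def integrate(samples, leak=0.05, threshold=1.0):
    """Leaky integrator: accumulate input each step, subtract a fixed leak,
    and emit a trigger event whenever the level crosses the threshold."""
    level = 0.0
    events = []
    for t, x in enumerate(samples):
        level = max(0.0, level + x - leak)  # signal heating minus natural cooling
        if level >= threshold:
            events.append(t)  # trigger output
            level = 0.0       # reset after firing
    return events

# A weak steady input leaks away and never triggers; a strong burst does.
print(integrate([0.04] * 20))            # []
print(integrate([0.3, 0.3, 0.3, 0.3]))   # [3]
```

As the example shows, the leak makes the device sensitive to the *rate* of input, not just its total, which is exactly the integrator-with-trigger behaviour described above.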
Tuesday 31st October 2017 10:19 GMT steelpillow
The return of analog
It does sound like a return of analogue computing, in at least some respects. As I understand it, repeated small SET pulses may be applied to nudge the device conductance until sufficient material is affected by the phase change to register a 1. The pulses are summed analog-fashion in the conductance of the device, i.e. the amount of material currently changed. If so, then it is rather like the way human memory works, by strengthening already-existing synapse connections between neurons. I wonder if it can accumulate partial RESET signals to weaken memories, too. Now that would be something!
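The accumulate-until-flip behaviour described above can be modelled as a toy cell. All names and numbers here are illustrative assumptions, not real PCM device parameters: SET pulses nudge the conductance up, partial RESET pulses nudge it down, and the read threshold decides whether the cell registers a 1.

```python
class PCMCell:
    """Toy model of a phase-change cell whose conductance accumulates
    analog-fashion: repeated small pulses strengthen or weaken the state,
    and a read compares the conductance against a threshold."""
    def __init__(self, step=0.2, threshold=1.0, max_g=2.0):
        self.conductance = 0.0
        self.step = step
        self.threshold = threshold
        self.max_g = max_g

    def set_pulse(self):    # strengthen, synapse-style
        self.conductance = min(self.max_g, self.conductance + self.step)

    def reset_pulse(self):  # partial RESET: weaken ("forget")
        self.conductance = max(0.0, self.conductance - self.step)

    def read(self):
        return 1 if self.conductance >= self.threshold else 0

cell = PCMCell()
for _ in range(4):
    cell.set_pulse()
print(cell.read())  # 0 -- four pulses (0.8) are not yet enough
cell.set_pulse()
print(cell.read())  # 1 -- the fifth pulse crosses the threshold
cell.reset_pulse()
print(cell.read())  # 0 -- a partial RESET weakens the "memory" again
```

The last step is the speculative bit from the comment above: if partial RESET pulses accumulate the same way, the cell can gradually weaken a stored state as well as strengthen one.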
Tuesday 31st October 2017 20:58 GMT Colin Tree
I'm waiting for three-state memory
+ve -ve zero
true, false, don't care
true, false, maybe
true, false, stop bit
Especially as a stop bit, serial storage, variable word length,
a true or false flag could be stored as f+1,
a very long number could be stored in n+1 bits,
variable length instruction, execute
how much redundancy is there in fixed-length words?
Wednesday 1st November 2017 09:02 GMT annodomini2
Please correct me if I am wrong...
There seem to me to be two possible avenues, both with time-delayed operation:
1. Slow-switching transistors (phase change), but with retained state: you could set a switch state but only action it when needed. This, I think, is where the intended power savings in the circuit come from.
2. Potential analogue-computer routes.
The main issue will always be the mechanical and thermal gradients in the circuit; I doubt you'd be overclocking this to 7 GHz (not that it would stop someone from trying).
But for applications where speed is less of an issue and integrity requirements are high, there could be products.