Big Blue's binary bonce 'the scale of a frog brain'
Project Boss : "OK guys, switch it on and let's see what happens!"
Techie : "OK here goes...... aaah, I think it just croaked"
IBM researchers developing chips that mimic mortal brains say they've built a 4,096-core processor that simulates a million neurons. The SyNAPSE silicon, fabricated by Samsung using a 28nm process, has 5.4 billion CMOS transistor gates, consumes 70mW of power, and uses a processor architecture completely unlike today's CPUs. …
If it were able to catch a fly for its dinner, Dr. Modha would most likely be earning himself a Nobel prize.
Unfortunately, Dr. Modha is known for sensationalistic announcements (several years ago it was a "cat" brain, which sadly did not do much either) and little real material.
Putting a bunch of simplified models of neurons together is nothing new. It has been done dozens of times before:
- In 2008, Edelman and Izhikevich made a large-scale model of the human brain with 100 billion (yes, billion) simulated neurons (http://www.pnas.org/content/105/9/3593.full)
- Since then, there have been numerous implementations of large-scale models, ranging from a million to hundreds of millions of artificial neurons
- Computational neuroscience is my hobby, and I managed to put together a simulation with 16.7 million artificial neurons and ~4 billion synapses on a beefed-up home PC (http://www.digicortex.net/). OK, it was not really a home PC, but it will be in a few years
- And, of course, there is the Blue Brain Project, which evolved into the Human Brain Project. The Blue Brain Project had a model of a single rat cortical column, with ~50,000 neurons, but modelled to a much higher degree of accuracy (each neuron was modelled as a complex structure with a few thousand independent compartments and hundreds of ion channels in each compartment).
--
All of these simulations have one thing in common: while they do model biological neurons with varying degrees of complexity (from simple "point" processes to complex geometries with thousands of branches), and while they all show "some" degree of network behavior similar to living brains - from simple "brain rhythms" which emerge and are anti-correlated when measured in different brain regions, to more complex phenomena such as the acquisition of receptive fields (so that, e.g., neurons fed with a visual signal become progressively "tuned" to respond to oriented lines) - NONE OF THEM is yet able to model large-scale intelligent behavior.
To put it bluntly, Modha's "cat" or "frog" are just lumps of sets of differential equations. These lumps are capable of producing interesting emergent behavior, such as synchronization, large-scale rhythms, and some learning through neural plasticity, which results in simple neuro-plastic phenomena.
But they are NOWHERE near anything resembling "intelligence" - not even of a flea. Not even of a flatworm.
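To make the "lumps of differential equations" point concrete: a single model neuron of the kind used in these large-scale simulations is just a couple of coupled equations. Below is a minimal sketch of the Izhikevich spiking-neuron model (the same family Edelman and Izhikevich's simulation is built from); the "regular spiking" parameters and the half-step Euler update follow his published demo code, while the constant input current and one-second run are my own toy choices:

```python
# Minimal Izhikevich neuron, "regular spiking" parameters (a, b, c, d).
# v = membrane potential (mV), u = recovery variable, I = input current.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0
dt, I = 1.0, 10.0                  # 1 ms time step, constant drive (toy value)
spikes = 0

for t in range(1000):              # simulate one second of model time
    if v >= 30.0:                  # spike: reset v, bump the recovery variable
        spikes += 1
        v, u = c, u + d
    # Two half-steps for v, as in the original demo code, for numerical stability.
    v += 0.5 * dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    v += 0.5 * dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)

print(spikes, "spikes in one simulated second")
```

Wire a few million of these together with plastic synapses and you have exactly the kind of "lump" described above: rich dynamics, no intelligence.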
I do sincerely hope we will learn how to make intelligent machines. But we have much more to learn. At the moment, we simply do not know what level of modelling detail is needed to replicate intelligent behavior of a simple organism. We simply do not know yet.
I do applaud Modha's work, as well as the work of every computational neuroscientist, AI computer scientist, AI software engineer, and every developer playing with AI as a hobby. We need all of them to advance our knowledge of intelligent life.
But, for some reason, I do not think PR like this is very helpful. AI, as a field, has suffered several setbacks in its history thanks to too much hype. There is even a term, "AI winter", which came about precisely as a result of one of those hype cycles, very early in the field's history.
I am also afraid that the Human Brain Project, for all it is worth, might lead us to the same (temporary) dead end. I do hope the HBP will achieve its goals, but the announcements Dr. Markram has made in recent years, especially (I paraphrase) "We can create a human brain in 10 years", will come back to haunt us in 10 years if the HBP does not reach its goals. The EU agreed to invest one billion euros in this - I hope we picked the right time, but I am slightly pessimistic. Otherwise we will be in for another AI winter :(
The public are fed such BS that they believe AI is capable of far more than it actually is. I get told that "they" have now made a computer as intelligent as a mouse, but I know that isn't true because AI just isn't there. I know full well that what they've really done is create something analogous to the structure of a mouse brain. But how can I argue with newspaper articles? Everyone wants to believe hard AI is just 5 years away, and they don't like me sounding like some neo-Luddite when I say it's rubbish. People don't realize how frickin' stupid AI still is, and how ludicrously ahead of reality some of the proposed pie-in-the-sky ideas (e.g. "Google self-driving cars") and the like are. I think gamers perhaps have a better appreciation of how crap AI really is, because they get to see the results (or lack thereof) of attempts to get code to do something truly intelligent - and that's in a controlled environment!
This is hardly new: experts replicated a politician's brain in 1952 using only a 6W light bulb, 2 valves and a heap of horse manure.
More seriously, this is very interesting to see. I think the closer we get to being able to accurately duplicate the physical structure and workings of brains, the more we may come to understand about the difference between a lump of brain and actual sentience. Whilst I have no doubt the goal behind this will be to make something wacko and DARPA-ish, like artificially intelligent jihadi-seeking rocket dolphins, it could end up answering some very fundamental questions about ourselves (or just prove we will stop at nothing to create new ways of exterminating ourselves).
That's what it's designed for. See the PRNGs in the diagram? They're there to provide the unreliability*, essentially.
* Specifically, they're there to add noise to the spike thresholds for the neurons. The desired effect is that neurons may fire before their threshold is reached (?intuition?) or may not fire when they "should" (?anyone got a good term for this -- ironically, I can't think of one!?).
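In code terms, the mechanism amounts to something like the toy sketch below; the threshold and noise figures are made up for illustration, not the actual chip's numbers:

```python
import random

THRESHOLD = 1.0   # nominal firing threshold (arbitrary units)
NOISE_SD  = 0.1   # standard deviation of the per-decision threshold jitter

def fires(membrane_potential):
    """Spike decision with a noisy threshold: the neuron can fire just below
    the nominal threshold, or stay silent just above it."""
    return membrane_potential >= THRESHOLD + random.gauss(0.0, NOISE_SD)

# A potential sitting exactly at the nominal threshold fires only about half the time.
hits = sum(fires(1.0) for _ in range(10000))
print("fired", hits, "times out of 10000 at the nominal threshold")
```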
"or may not fire when they "should" (?anyone got a good term for this -- ironically, I can't think of one!?)."
laziness.
very important for intelligence :-)
Anyway, do you all remember that simulation from years ago where an NN steered and parked a "car"? It was trained until it worked well, then some of its weights were randomly cut, and afterwards it *DID* (attempt to) park, like someone driving under the influence.
A deterministic computer program would not have been able to park at all, if several of its lines were randomly erased.
It's going to be a fascinating time developing computer languages to program these things. Maybe a bit like INTERCAL?
It is difficult to know what this chip really does from the marketing waffle, but it certainly does not model real neurons. It might be modelling perceptrons, which are actually useful for engineering.
But then there is mention of a "binary synapse", which suggests that it is just a huge programmable logic array (PLA). The use of the word "synapse" is particularly awful. Hype hype hype.
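For anyone who has not met one, a perceptron really is just a weighted sum plus a hard threshold. A minimal sketch, using the classic perceptron learning rule; the toy AND-gate data, learning rate and epoch count are my own choices:

```python
def predict(weights, bias, x):
    # Weighted sum followed by a hard threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]   # logical AND
w, b = train(data)
print([predict(w, b, x) for x, _ in data])                    # expect [0, 0, 0, 1]
```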
Don't bite my head off if I misunderstand, but I thought that perceptrons don't have a time dimension; the axon activities are represented by a (floating-point) value, whereas in biological NNs a "high" value means a spike train with multiple spikes in rapid succession, and a "low" value means a spike train with only a few spikes. If you think about those spikes, the important properties are NOT just coded in the frequency of the spikes, but also in their rhythm. You can't easily do a Fourier transform either, because excitations start at a certain time and stop after a while as well (because the neurotransmitters get tired and need some fresh ATP?), so it's not just a (superposition of) clean sine waves and you need to keep at least two-dimensional time. Instead I'd imagine you'd have to use more complicated functions, maybe Daubechies wavelets or something (imagine me handwaving in the air here, hoping nobody finds out I actually don't know much about Lotka-Volterra kernels, spike-timing-dependent plasticity etc.)
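To put the rate-versus-rhythm point in concrete terms, here is a toy comparison (all numbers invented) of two spike trains with identical average firing rates but completely different timing; a pure rate code cannot tell them apart:

```python
import random

random.seed(1)
duration_ms = 1000

# Twenty spikes scattered at random across the second (roughly Poisson-like)...
scattered = sorted(random.uniform(0, duration_ms) for _ in range(20))
# ...versus the same twenty spikes packed into two short bursts.
bursty = sorted([random.uniform(0, 50) for _ in range(10)] +
                [random.uniform(500, 550) for _ in range(10)])

for name, train in (("scattered", scattered), ("bursty", bursty)):
    rate = len(train) / (duration_ms / 1000.0)            # spikes per second
    isis = [b - a for a, b in zip(train, train[1:])]      # inter-spike intervals
    print("%-9s rate = %2.0f Hz, mean ISI = %5.1f ms, max ISI = %5.1f ms"
          % (name, rate, sum(isis) / len(isis), max(isis)))
```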
I appreciate your relatively realistic sketching of your position on this topic, but aren't you under-emphasising the fact that alongside the 'frequency-spike and rhythm-spike patterning aspect' of any contents of (not least our) largely unconscious (including selectively unconscious) brains, there is also a '9-dimensional spatial patterning aspect'? ;-)
It also doesn't look trainable. I'm guessing it's the type of architecture you'd see used to do hardware acceleration of things like machine vision and classification. The chip can be simulated for training purposes by a conventional supercomputer, sucking up a few megawatts for a couple of months to train the thing - but once it's trained, you can mass-produce the little power-sippers and stick them in smartphones and appliances. In twenty years, you might see one in your car deciding if the thing that just stepped onto the road is a plastic bag, a fox or a child.
Always amazing how random commentards have already invented everything, thought everything through and found it all wanting after scanning through an El Reg article in 30 seconds. Probably while not having reached their first course on differential equations yet.
Well, you eejits have proved you can actually read, so feel free to read some more.
As for using the word "synapse", it has been standard procedure to use it for the connection element since, like, 1943 ("A Logical Calculus of the Ideas Immanent in Nervous Activity"). Yes, a real synapse is far more complex than anything implemented yet. So what?
The roughly 10^10 neurons and 10^15 synapses run on < 400 W, despite being 3D packed.
Binary synapses are not without uses. IIRC the WISARD machine used them and demonstrated facial recognition in 1/30 sec, i.e. one TV frame, in the late '80s.
I wonder if some of the publicly announced DARPA projects are just decoys to try and get the Chinese and/or Russians to waste shedloads of money on totally useless and impossible projects, while the really important stuff is in secret projects. Just a thought - and this project does not seem to be one of the decoys.
They should include a NAND gate at the output. So if their prototype is constantly making bad decisions, then they can simply set one bit to invert the output. Bad decisions instantly become good decisions. Decision Inverters are very useful for any system that constantly makes bad decisions with a rate higher than 50%. For example, Microsoft's OS dept. desperately needs one installed.
If the above proposal doesn't work, then their entire neural system must be a *perfect* RNG with perfect 50/50 randomness in the output. If so, then it has applications in crypto. That would also be a useful result.
So this project simply cannot fail. It either works, or it works with a Decision Inverter, or it works as an RNG. Brilliant!
Pattern recognition is awesome to think about. It must be that arrays of neurons in the brain operate something like a hologram... interfere (non-linearly?) a certain fixed combination of lower-frequency carrier waves with an array of internal wave emitters to output a pattern. If you fire in the same carriers, you get the pattern out. Fire in the pattern, you get the carrier out. Allow the carrier outputs to propagate to higher levels, build in a feedback mechanism where nice strong locked signals are favoured and reinforced, so once a pattern and a carrier start to associate, they lock together strongly. :D
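That "fire in the pattern, get the association out" idea is roughly what a Hopfield-style associative memory does. A tiny sketch with a single stored pattern (my own toy example, nothing to do with the chip in the article):

```python
import numpy as np

# Store one +1/-1 pattern with a Hebbian outer-product rule, then recall it
# from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)              # no self-connections

cue = pattern.copy()
cue[:3] *= -1                       # corrupt the first three bits

state = cue.copy()
for _ in range(5):                  # synchronous updates; settles quickly here
    state = np.sign(W @ state).astype(int)

print("recovered stored pattern:", bool((state == pattern).all()))
```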
I wonder if the first artificial brain will be 3D printed... I can envision the first robotic brains being painstakingly built up in layers with as yet to be created nanolevel 3D printers. And as they continuously toss out botched brains the first will finally emerge from the clean room: a reflective brown lump being carried gingerly in white gloved hands. I wonder how much wattage it will require? How will we interface with it? Will we need to work on duplicating nerve fibers and creating them in carbon filament form? Little doubt in my mind that the first artificial person will look at us and think how squishy and fragile we are. And will its next thought be one of contempt or compassion? Will it skip all of the silly god nonsense and realize that as corporeal beings we are all in this together and could only benefit from cooperation? Or will it seek out the nearest atomic weapons and before accessing the launch controls utter the words "Let there be light..." ;)
Oooh, I like your ideas :D Google probably have enough compute in a warehouse somewhere to simulate a human brain entirely in software; they just need the will and the structural definition of the network topology to do it! My idea is to set a load of agents running in a simulator with a fractally grown simulated brain, provide sensory information about food sources, and use genetic algorithms to kill off the ones that circle aimlessly while letting the ones that identify food sources and move towards them to eat survive. Then give them the ability to communicate with each other, to kill some time when not eating, and start to increase the size parameter for the neural network... :D
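The "kill off the aimless ones" part is a plain genetic algorithm. A toy sketch in that spirit - the agents, fitness function, mutation rate and population size here are all invented for illustration:

```python
import random

random.seed(42)
FOOD = (7.0, 3.0)                    # location of the food source

def fitness(agent):
    # An agent is a pair of steering weights; it "walks" 20 steps and its
    # fitness is how close it ends up to the food (higher is better).
    x, y = 0.0, 0.0
    wx, wy = agent
    for _ in range(20):
        x += wx * (FOOD[0] - x) * 0.1
        y += wy * (FOOD[1] - y) * 0.1
    return -((FOOD[0] - x) ** 2 + (FOOD[1] - y) ** 2)

def mutate(agent):
    return tuple(w + random.gauss(0.0, 0.2) for w in agent)

population = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # the ones that found food
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("best steering weights after 30 generations:", max(population, key=fitness))
```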
Enough AI at home, in the car, or in fieldwork to: 1) recognise new and exceptional conditions, and 2) send an encrypted situation assessment to the owner of a subsystem that shuts out Google and less capable entities (like the Chinese Army) from your business.
This research and development, like all of its predecessors, represents real progress towards the goal of building a computer which can perform some of the functions of the human brain. There will possibly never be a computer capable of fully performing the functions of our brain, since this organ is always evolving. Still, this is the least of the reasons for not attempting to develop a machine which, like the computers of today, will perform repetitive functions speedily, accurately and tirelessly to free humans to focus on how to improve our lives in practical and affordable ways. The sour-grapes comments from the usual boo-birds are the natural consequence of competition in this dog-eat-dog world in which we live. Kudos to IBM and its partners for their accomplishments in the field of computation.
Any resemblance to anything to do with real brains is just that. It doesn't help with mimicking a real brain, as we don't know:
* How a brain really works. Saying a brain has neurons isn't useful.
* What is intelligence exactly? If someone gives me a spec, I'll have a slow AI running on my server once I implement it. Storage and existing data are no issue with an Internet connection, as long as speed doesn't matter. It can learn very much quicker than a baby, child or adult once basic English is programmed in. Just give me a spec.
* What is sentience?
* What is self-awareness? (though you can test animals with mirrors and blobs of paint. Some behave in a manner that suggests a concept of self and other instances of the animal. But then cats hardly react to reflections or video because the smell is absent.)
Exactly how, ultimately, is this different from Transputers, other than simpler elements and more of them, using smaller geometry?
Indeed, and for the brain stuff, you wanna go to the Blue Brain Project.
What is intelligence exactly? If someone gives me a spec, I'll have a slow AI running on my server once I implement it.
Clearly this question does not make sense, unless the acceptable answer is that intelligence is the capability to solve problems better than initially expected.
Exactly how, ultimately, is this different from Transputers, other than simpler elements and more of them, using smaller geometry?
It is the number that counts. Upgrading by a few orders of magnitude allows you to run more interesting experiments.
"...and matches the emulator model exactly – modulo yield faults – so you can develop the model on the emulator and then download it to the chip for real-time use..."
Modulo? MODULO?!
x mod y < y
So there is something worth having that is less than the number of yield faults?
"...and matches the emulator model exactly – minus yield faults – so you can develop the model on the emulator and then download it to the chip for real-time use..."
FTFY
But, then again: “Words strain, Crack and sometimes break, under the burden, Under the tension, slip, slide, perish, Decay with imprecision, will not stay in place, Will not stay still.”
And back to Eeyore mode.