A section of rat brain
I'd bet it's absolutely nothing like ANY biological brain, as no one yet knows exactly how a brain works. They only know what some of the biological components do.
A section of rat brain has been digitally reconstructed by a huge team of 82 scientists, some of whom had been working on the project for decades. The research was led by scientists at the Swiss Federal Institute of Technology in Lausanne (EPFL), who said that they had only completed a "first draft reconstruction of a piece of …
OK, let's be a bit less gnomic, and try to place Henry and the rest of Blue Brain's work into an IT context.
A cortical microcircuit is really the minimal functional unit of a brain. An analogy is to consider a neuron as a transistor and the microcircuit as some sort of generic IC. One of the achievements of this work is to provide us with a provisional count of the number of distinct neuron (i.e. transistor) types. When you consider that biology has what we'd call an "extremely high process variation", and that neuroscientists are basically given a pile of 37,000 different cells and asked to classify them into "morphologies", you get an idea of what's needed.
Another key idea in this paper is that the places where connections are made occur geometrically, that is wherever the "wires" get sufficiently close to permit connections (synapses) to form. Obviously, it remains for the results to be confirmed by other labs, but if this result proves true, then again our task becomes a bit easier, as the biology becomes a bit less tentative.
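The geometric rule can be sketched with a toy example: sample points along two "wires" and call any pair that comes close enough a candidate synapse. The point clouds, the uniform scatter and the 2 µm touch distance below are all made-up stand-ins, not the paper's actual morphologies or parameters:

```python
import math
import random

random.seed(42)

def cloud(n=200, size=100.0):
    """Hypothetical morphology: n sample points (in µm) along a 'wire'."""
    return [(random.uniform(0, size),
             random.uniform(0, size),
             random.uniform(0, size)) for _ in range(n)]

axon_pts = cloud()      # presynaptic axon of neuron A
dendrite_pts = cloud()  # postsynaptic dendrite of neuron B

TOUCH_DISTANCE = 2.0    # µm: "sufficiently close" threshold (assumed value)

# A potential synapse (an "apposition") forms wherever the two wires
# pass within the touch distance of each other.
appositions = [
    (i, j)
    for i, a in enumerate(axon_pts)
    for j, d in enumerate(dendrite_pts)
    if math.dist(a, d) < TOUCH_DISTANCE
]
print(f"{len(appositions)} candidate synapse locations")
```

The point of the rule is that connectivity falls out of geometry alone: no per-synapse bookkeeping is needed beyond the shapes of the wires.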
To me, as someone tasked with providing an even more simplified version of this circuit (but running in real time, rather than taking hours to simulate a second, as the model reported in the original article does) the true significance is that we have a "reference semantics" against which we can compare the behaviour of the SpiNNaker model.
A test version of SpiNNaker-2 was taped out in July, and we should get the results back just before Christmas. Although the test chip is just trying out ideas (in 28nm), our[*] aim is to permit a cortical microcircuit of 40,000-100,000 neurons[**] to be realised on a single one-watt chip.
There are of course any number of features which Henry's model does not yet include, but the intriguing thing is that the model already makes a number of predictions about the results of future experiments.
[*] Sebastian Hoppner (TUD), Christian Mayr (TUD), Steve Furber (UMAN), Dave Lester (UMAN).
[**] Both the number of neurons and the number of connections increases as the animal's brain complexity increases, e.g. for a macaque we're looking at about 80,000 neurons with a fan-in/out of ~5,000. For the rat we're looking at 37,000 neurons with a fan-in/out of ~1,000.
In the US, it could just be three lines of text:
1) Do what the most senior person in your party does
2) If you are the most senior person, do the opposite of the other party
3) If the other party hasn't done anything, do what the nice man with the briefcase of money tells you
So where are the sexbots we have been promised?
Well, I suppose PETA will be on the case within a week or so.
But now... can we use genetic algorithms to compress the exorbitant computational power needed to run the ratslice™ into a symbol-processing quasi-equivalent running on a few Xeon CUDA-capable processors?
right, so the average volume of a human brain = 1130 cubic centimetres
= 1,130,000 cubic millimetres
= a factor of ~3,424,242 bigger than this experiment (assuming the same neuronal density)
using Moore's Law, 3,424,242 = 2^(x/2), where x is the number of years needed to achieve this
x = 2 × log₂(3,424,242) ≈ 43
So about 43 years to emulate a human brain purely in terms of neuronal interaction.
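The back-of-envelope arithmetic above, as a script. The 0.33 mm³ slice volume is inferred from the quoted factor, not taken from the paper:

```python
import math

brain_mm3 = 1_130_000   # average human brain: ~1130 cm^3
slice_mm3 = 0.33        # slice volume implied by the quoted factor (assumed)

factor = brain_mm3 / slice_mm3          # ~3.4 million times bigger

# Moore's Law read as "capacity doubles every two years": factor = 2**(x/2)
years = 2 * math.log2(factor)

print(f"factor ~ {factor:,.0f}; years ~ {years:.0f}")
```

Which comes out at roughly 43 years, matching the estimate above.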
Yes, I was a bit bored during the enforced viewing of Strictly
Schultz writes: "Is that a financial stimulus package or do they expect to get 1.2 billion worth of scientific knowledge out of their supercomputer?"
Well, the results reported here are part of the Blue Brain Project, which has been primarily funded by the Swiss Federal Government. The Human Brain Project -- of which I'm a part -- is funded on a biennial basis contingent on results, probably to the tune of about €0.5B in ten years. This funding originated in FET-ICT ("Future Emerging Technologies for ICT") whose mission is to foster industrial collaboration between different ICT companies in the EU. Currently, the money is coming from DG Connect, which is part of the EU's Digital Agenda, which I think was established by The Register's favourite recently-retired EU Commisioner: Neelie Kroes.
The University of Manchester contribution with Technical University of Dresden is a new 28nm SpiNNaker component. SpiNNaker-1 cost the UK research council ~£5M for a 130nm component. Our HBP funding is order €10M.
It does seem rather a lot of money to be spending on massively complex models of the brain when we are only just beginning to understand how the individual cells function.
E.g. http://jonlieffmd.com/blog/is-the-primary-cilium-a-cells-antenna-or-its-brain
@David Lester: I'm not a neuro-scientist but I like to follow the research/read articles etc. It seems like there are complexities in synapse and neuron function that would need to be accounted for to make the model valuable.
I'm sure you are aware of many more examples, but two I've been reading about recently:
1 - Dendritic preprocessing of information: it seems like there is a lot more going on there than previously thought; not sure if the models take that into account.
2 - Neurons switching between slow and fast firing types depending on conditions.
Are the models being used (for synapse and neuron activity) good enough to think the entire model will provide a reasonable simulation of the real thing?
R Olsen writes: "@David Lester: I'm not a neuro-scientist but I like to follow the research/read articles etc. It seems like there are complexities in synapse and neuron function that would need to be accounted for to make the model valuable."
Well, I'm not a neuroscientist either!
You've hit on exactly the right question. What level of modelling accuracy is required to obtain the results you're interested in? Steve and I are particularly interested in "plasticity and learning", i.e. mechanisms that allow animals to learn and remember their responses to previous stimuli.
"I'm sure you are aware of many more examples, but two I've been reading about recently:
1 - Dendritic preprocessing of information: it seems like there is a lot more going on there than previously thought; not sure if the models take that into account.
2 - Neurons switching between slow and fast firing types depending on conditions.
Are the models being used (for synapse and neuron activity) good enough to think the entire model will provide a reasonable simulation of the real thing?"
Let's answer this in two parts: Henry's model does indeed feature dendritic computation, and its neurons also switch modes. Our simplified SpiNNaker models do not currently feature dendritic computation (instead we model the currents passed into the neuron as a linear rather than a multiplicative property), but by using the Izhikevich neuron we do get the burstiness property.
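For the curious, here's roughly what an Izhikevich neuron looks like in code. This is a generic forward-Euler sketch using the textbook "chattering" (bursting) parameter set, not the actual SpiNNaker implementation:

```python
# Izhikevich (2003) point-neuron model, forward-Euler integration.
# Parameter values below are the standard "chattering" bursty regime.
a, b, c, d = 0.02, 0.2, -50.0, 2.0

v = -65.0        # membrane potential (mV)
u = b * v        # recovery variable
dt = 0.25        # time step (ms)
I = 10.0         # constant input current (arbitrary units, assumed)
spikes = []

for step in range(4000):              # 1 second of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                     # spike detected: reset
        spikes.append(step * dt)
        v = c
        u += d

print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Two coupled variables and a reset rule are enough to reproduce a surprising range of firing patterns (tonic, bursting, chattering) just by changing a, b, c and d, which is why it's such a convenient fit for a software-defined system like SpiNNaker.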
Previously we'd have been able to say "well our model shows many of the same properties as Henry's more complicated models", but this wouldn't say anything about how well that matched biological reality.
So, for us the interesting next test is: "Do we need non-linear dendrites?" Because everything in SpiNNaker is done in software, we can make this change, but it will affect the speed and/or density of function we can achieve. One thing to point out is that the systems we're developing in HBP can model one brain area in high fidelity and the rest of the brain at a much lower level of detail.
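To make the "linear vs non-linear dendrites" question concrete, here is a toy contrast. The sigmoid branch non-linearity and its gain and threshold are illustrative assumptions -- one common abstraction from the modelling literature, not anything HBP-specific:

```python
import math

# Synaptic drive arriving on one dendritic branch (made-up values).
inputs = [0.4, 0.3, 0.6, 0.2]

# Linear dendrite (the current SpiNNaker simplification): the soma
# just sees the arithmetic sum of the input currents.
linear = sum(inputs)

def branch(x, gain=4.0, threshold=0.8):
    """Saturating sigmoid subunit -- one common dendritic abstraction.
    The gain and threshold values here are illustrative assumptions."""
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))

# Non-linear dendrite: the branch transforms its local sum before
# passing it on to the soma.
nonlinear = branch(sum(inputs))

print(f"linear drive = {linear:.2f}, non-linear branch output = {nonlinear:.2f}")
```

The practical difference: in the non-linear case, coincident inputs on the same branch can matter more (or less) than the same inputs spread across branches, which is exactly the kind of behaviour a purely linear sum cannot express.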
"[...] the systems we're developing in HBP can model one brain area in high fidelity and the rest of the brain at a much lower level of detail."
If a small portion of a brain's function could be emulated in real time - then it raises an interesting possibility of replacing a damaged portion of tissue. The rest of the real brain acts as the "lower detail" part of the model.
Like a hardware emulator of a system component, it only needs to produce the same interface activity when plugged into a live system. How it models that internally is irrelevant. Only the interface has to be inserted into the brain itself - although that might be tricky if it needs too many connections.
As far as the Reg article goes, what's being reported here is fascinating and clearly a significant achievement for the team involved. A remarkable effort over an extended period of time to get to where they are today. Kudos to everyone involved.
On a lighter note, and purely for comic effect, you understand, the simulation results may turn out to be replicated with a simple piece of script that emits :
[CHEESE, SEX, CHEESE, SEX, BATHROOM_BREAK, SQUEAK]
every few minutes. Tip of the hat to Terry P. for the SQUEAK, obviously.