Artificial brains could point the way to ultra-efficient supercomputers

New research from Sandia National Laboratories suggests that brain-inspired neuromorphic computers are just as adept at solving complex mathematical equations as they are at speeding up neural networks and could eventually pave the way to ultra-efficient supercomputers. Running on around 20 watts, the human brain is able to …

  1. An_Old_Dog Silver badge
    Joke

    If the Human Brain

    ... is so good at solving partial differential equations, why do they give so many people difficulties in their maths classes?

    Speaking of hardware/problem fit, it's always bugged me that fad-followers did so much to destroy and suppress analogue computers, when they are an excellent fit, and a better one than digital computers, for certain classes of problems and simulations.

    My uni had an old, multi-rack analogue computer in the E.E. department, but I had no idea how to work it; I saw no docs for it in that room.

    1. Neil Barnes Silver badge

      Re: If the Human Brain

      The wonder is not that the dog dances so well, but that it can dance at all...

      The first computer I ever saw was analogue... I do wonder how much more efficient, and smaller, they might have become (for appropriate problems, of course) had the transistors in an operational amplifier followed the size changes of those in logic chips. Though I suspect that a decreasing signal-to-noise ratio at small geometries might have become a limiting factor somewhat sooner (not my field; just speculating).

      1. Anonymous Coward
        Anonymous Coward

        Re: If the Human Brain

        Yeah, might not be able to fix a modern analog computer with a hammer (unlike a 2015 Soyuz capsule) ... but it looks like they can solve Ordinary DEs with "neither saddle nor girdle" iiuc ... yummy!

        The human brain meanwhile might do all this through (intricate) "eyeballing" ... which the right hammer could either help or hinder, I guess? ;)

    2. m4r35n357 Silver badge

      Re: If the Human Brain

      The universe IS analogue - it is not subject to O(N); it neither recognizes our mathematical algorithms nor is limited by them. It does not calculate - it simply IS, immediately and without processing delay. Analogue computing is real; digital computing is just mathematics. Anyone who thinks that digital computing can get us any further than solving differential field equations, in over-simplified mathematical abstractions, is missing the point entirely.

      1. Bebu sa Ware Silver badge
        Happy

        The universe IS analogue - … - it simply IS, immediately and without processing delay

        I wouldn't bet the farm on that proposition, especially "immediately and without processing delay".

        How much the Universe "just is" at the smallest conceivable scales is a fairly decent headache for theoretical physicists and philosophers, I suspect.

        As far as I can see, analogue computers — using charge, current, emf, capacitance, resistance and inductance as proxies for other physical properties that share the same differential equations — are just as remote from reality as digital computers applying the same differential equations either symbolically or numerically.

        Quantum computers with their complement of qubits would presumably count as analogue computers in some sense. Superposition would seem like reality indulging in speculative execution. ;)

        1. Groo The Wanderer - A Canuck Silver badge

          Re: The universe IS analogue - … - it simply IS, immediately and without processing delay

          I like your analogy of quantum "speculation."

        2. that one in the corner Silver badge

          Re: The universe IS analogue - … - it simply IS, immediately and without processing delay

          > As far as I can see analogue computers — using charge, current, emf and capacitance,resistance and inductance as proxies for other physical properties that share the same differential equations

          Absolutely.

          Whether you are computing with charge, Meccano, water or floating point you MUST always be aware that you are running a model, not The Real Thing. Your results are only as accurate as the product of the efficiency of your computer and the closeness of the match of your model to the processes in reality.

          You can set up some simple (!) flow models using all of those mechanisms to see how your swimming pool behaves. Until it fails to predict that somebody can run across the surface - because your model only copes with statistical fluids and doesn't include the catastrophe when the custard in the pool goes non-Newtonian. Or the pyroclastic flow does the same (but don't try the "running on the surface" trick without a pair of very stout boots).

          1. HuBo Silver badge
            Pint

            Re: The universe IS analogue - … - it simply IS, immediately and without processing delay

            Cool links!

        3. This post has been deleted by its author

        4. m4r35n357 Silver badge

          Re: The universe IS analogue - … - it simply IS, immediately and without processing delay

          I was over-cautious above; what I should have said is: "nature does not calculate". All simulations are equally invalid.

          1. Anonymous Coward
            Anonymous Coward

            Re: The universe IS analogue - … - it simply IS, immediately and without processing delay

            Hmmm. Your O(N) idea still has merit imho. I mean, in-memory compute pretty much seeks a more nature-matching "O(1) solution time complexity", say via crossbar arrays (CBAs) of memristors operating in the analog domain (or maybe just proximal interactions). And it should help tame the von Neumann bottleneck monster that so haunts our compute.
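            For the curious, here's a tiny idealised NumPy sketch (all numbers invented) of why a crossbar buys you that O(1)-in-time matrix-vector product: the physics does the multiply-accumulate, and the code below just emulates the arithmetic digitally:

                import numpy as np

                # Idealised memristor crossbar: a programmable conductance G[i, j] sits
                # at every row-i / column-j crosspoint. Driving the rows with voltages V
                # makes each column current sum_i G[i, j] * V[i] (Ohm's law plus
                # Kirchhoff's current law), i.e. a matrix-vector product in one read step.
                rng = np.random.default_rng(0)
                G = rng.uniform(1e-6, 1e-4, size=(64, 64))  # crosspoint conductances (S)
                V = rng.uniform(0.0, 0.2, size=64)          # row read voltages (V)

                I_col = G.T @ V  # column currents (A), computed digitally here

                # A CPU spends O(N^2) multiply-adds on this; the crossbar settles to the
                # same currents "all at once", which is the O(1)-in-time claim (settling
                # time, ADC readout and device noise are the real-world caveats).
                print(I_col[:4])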

            Spukhafte Fernwirkung ("spooky action at a distance") may be fundamental but it doesn't really survive microscopic and larger scales (in my experience).

    3. Jou (Mxyzptlk) Silver badge

      Re: If the Human Brain

      To be fair: being able to calculate differentials/integrals has no use in 99% of people's lives. Being able to understand what they mean (not calculate them) has no use in more than 95% of people's lives. So it does not matter if that percentage of people cannot get it, since they won't engineer anything, or check whether a given statistic not only sounds convincing but actually holds up. Being able to calculate percentages and a few things around them, for example, would be good. 100% of people should be able to do that, but in some "developed" countries the reality ranges from bad to downright frightening.

      1. An_Old_Dog Silver badge

        "No Use for 99% of Peoples' Lives"

        Mathematics is a tool which lets us model things and solve problems. Though I'm a techie, I haven't used trig and calculus often, but I have used them.

        Additional fit-for-the-problem tools in one's mental toolbag are always a good thing. You can use a power drill to pound in nails ... or just ignore the problem as "unsolvable", but doing either leads to a less-productive society.

        1. Jou (Mxyzptlk) Silver badge

          Re: "No Use for 99% of Peoples' Lives"

          I specifically wrote "differentials/integrals". I know them; if needed I could calculate differentials, and with a bit more time integrals, since I was taught them 30 years ago. The principle of what they are for and how to approach them is still in my mind. Similarly for imaginary and complex numbers: I know them, including where they are useful and how to apply them, but have not needed them since school, since I am not that deep into engineering or 3D programming. (Try using Blender, the 3D renderer, for higher-end stuff like the well-known BMW benchmark model without complex numbers: you will fail.)

          You write "trig and calculus", which is a much simpler math and more bound to normal life experience. I agree here with you: Every techie MUST be able to do that when needed, else he pays with body parts - either cut off, burned by fire or electrics.

  2. Jou (Mxyzptlk) Silver badge

    replace "point to" with "found somewhere"

    There are several news stories out there claiming "AI invented a new math and computing method", whereas "searched and found" would be more appropriate.

    Somewhere, someone posted a better solution, or a working solution at all, to some problem. And "Artificial Incompetence" simply had it in its index/model and presented it among the results with the typical "I found it!" boasting. Way too often, a classical search on those results turns up the original source, proving a human solved it before.

    1. Anonymous Coward
      Anonymous Coward

      Re: replace "point to" with "found somewhere"

      Yeah, and interestingly this Sandia work, while about neuromorphic compute, is not really about AI. Rather (afaics), it stems from their prior "review of non-cognitive applications for neuromorphic computing". Something about how a "system of distributed, spiking proportional-integral (PI) controllers" (for example) may be used to gain an "energy-delay advantage" in graph search, constrained optimization, solving PDEs, etc ... (eg. Figure 6 in that review paper).

      I would've thought this a use case for memristors (esp. with analogue crossbars) but apparently the sparse format of the target problems means "data movement dominates energy costs", which makes their method better.

      They also critique "physics-informed neural networks" that require "training data, and [...] have largely been optimized for the use of graphics processing units". Their tech, by contrast, is not LLMs on GPUs, nor deep learning and the like.

  3. Dan 55 Silver badge

    Recent introduction to SpiNNaker from Steve Furber

    Building a Brain with Professor Steve Furber CBE

    One point he made was that LLMs in use today waste a huge amount of resources: you could discard 98% of the calculations because their inputs haven't changed since the last time they were computed.

    Similar to the design of the ARM CPU back in the day, which he was also involved with, where Acorn did more with less compared to their US counterparts who were also developing RISC chips.

    1. Justthefacts Silver badge

      Re: Recent introduction to SpiNNaker from Steve Furber

      That is true, but there’s a tradeoff here that he’s not addressing.

      To know whether or not the inputs to each operation have changed, you effectively need to cache all the intermediate results at every layer. The spiking architecture does exactly that, but at the cost of providing an SRAM register local to each arithmetic operation. This is very power- and area-hungry. With current hardware technologies, it's often more efficient not to memoise stuff but to recalculate what you need when you need it (a software lesson that applies more generally than AI software).

      Some alternative architectures do cache intermediate results by storing them out to main SDRAM. But then it costs hundreds of times more energy to shuffle cached data into and out of external SDRAM than to do the actual computation. The price of at least doubling the amount of HBM memory you need might well exceed that of the execution units you are saving. So it really isn't obvious a priori whether "wasting 98% computing stuff that hasn't changed" is less efficient.
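      As a toy illustration of the tradeoff (a single linear layer, all names and sizes invented): the delta update below reproduces the dense result with a tiny fraction of the multiply-adds, but only because the previous inputs and outputs are cached somewhere, which is exactly the memory cost in question:

          import numpy as np

          rng = np.random.default_rng(42)
          W = rng.standard_normal((512, 512))   # weights of one linear layer

          x_prev = rng.standard_normal(512)     # last frame's input (cached)
          y_prev = W @ x_prev                   # last frame's output (cached)

          x_new = x_prev.copy()                 # this frame: ~2% of inputs change
          x_new[rng.choice(512, size=10, replace=False)] += 0.3

          # Dense recompute: every output from scratch, 512 * 512 MACs.
          y_dense = W @ x_new

          # Event-driven delta update: push only the changed inputs through,
          # 512 * 10 MACs, but it needs x_prev and y_prev held in local memory.
          changed = np.flatnonzero(x_new != x_prev)
          y_delta = y_prev + W[:, changed] @ (x_new - x_prev)[changed]

          print(np.allclose(y_dense, y_delta))  # True: same answer
          print("dense MACs:", W.size, "delta MACs:", W.shape[0] * changed.size)

      (Nonlinearities break this pure linear shortcut, of course; spiking schemes instead re-encode activation changes as events, which is where the per-neuron state cost comes in.)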

      I don’t believe there is any way to know without just building the best possible implementation of each architecture that you can, and finding out the fact of the matter. So it’s definitely great that people actually are doing this.

  4. Mike 137 Silver badge

    Not quite

    "Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball. These are very sophisticated computations."

    Actually, these are not "computations" in the math sense at all -- they're dynamic systems using sensory/motor feedback tuned by practice. The big mistake is to view the brain as primarily a computer or "thinking machine". As the late Bob Ornstein pointed out decades ago, the brain is primarily a body controller, which is why we do motor tasks so effortlessly. But this does imply that in the absence of a body to control, the "artificial brain" would not perform as expected (or maybe even at all). The functioning "brain in a vat" is merely a Hammer fantasy.
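    A toy sketch of that point (all constants invented): a proportional-derivative feedback loop "hits the target" without ever solving the equations of motion symbolically; tune two gains, as practice would, and the behaviour just emerges:

        # A unit point mass "reaching" for a target under PD feedback: no
        # trajectory is computed in the mathematical sense, the sensed error
        # is simply fed back as a force until the error dies away.
        kp, kd = 4.0, 2.5        # feedback gains, "tuned by practice"
        x, v = 0.0, 0.0          # position and velocity of the "hand"
        target = 1.0
        dt = 0.01

        for _ in range(1000):
            u = kp * (target - x) - kd * v  # force from sensed error and speed
            v += u * dt                     # unit mass: acceleration = force
            x += v * dt

        print(round(x, 4))  # ~1.0: the target is reached by feedback alone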

    1. retiredFool

      Re: Not quite

      I think Plato and other philosophers might disagree that the brain is just a body controller. Controllers don't think about when they are going to hit their MTBF (or maybe better MTTF, mean time to failure). One thing I've always granted doctors over mechanics: doctors don't get to turn you off, do the repair, and turn you back on again. Once you are "off" for, what, two minutes, you are unrepairable. Imagine a mechanic replacing a piston (or even a valve) while the engine was purring.

      1. Autonomous Mallard

        Re: Not quite

        > Doc's don't get to turn you off, do the repair and turn you back on again.

        They _do_ get to turn the "engine" off for a bit if needed though... https://en.wikipedia.org/wiki/Coronary_artery_bypass_surgery (see the procedure section)

        And somewhat more dramatically, even turn off blood flow entirely: https://en.wikipedia.org/wiki/Cardiopulmonary_bypass (see the uses -> hypothermia section)

        Admittedly stretching the car metaphor a bit... and if we're being philosophical about it, is the "engine" the heart or the brain?

      2. that one in the corner Silver badge

        Re: Not quite

        Compare and contrast:

        > I think Plato and other philosophers might disagree that the brain is just a body controller

        >> the brain is primarily a body controller

        (Hint: "just" != "primarily")

        Plato didn't have to contend with evolution and the resultant realisation that plain old body control predates cognition as the function of the bulk of that organ (leaving aside that IIRC he wasn't convinced that the brain is the thinky bit).

        And we can get into the whole discussion about "we can conceive taking an adult brain and paring away the non-cognitive bits, the just-a-body-controller parts, starting with disconnection from the rest of the central nervous system[1], to leave us with the perfect thinking blob; BUT if we have no plausible mechanism for how that adult part could have come about, in a functional state, without it having arisen from, been nurtured and sculpted by[2], all the stuff we cut away AND the observation that before that part was functional the overall organism was functional and could conceivably[3] remain so for a time comparable to the adult lifetime THEN we are left with the conclusion that the entire existence of the cognitive bit is secondary to, merely an epiphenomenon of, the body controller."[4]

        [1] Nice thing with thought experiments about the organ of thought, we don't worry about tedious little things like how long a brain can function, let alone sanely, cut off like that. Igor, the icepick!

        [2] Literally, what with the selective pruning and reinforcement of neural connections that makes the whole thing slowly turn on in the first place.

        [3] The conception including the continued ready provision of basic necessities, no need to think about going on the hunt.

        [4] This concludes your 2 a.m. second year dormitory discussion, please pass the ganja; man, man have you noticed "dog" spelt backwards is "god"! And, and, some words have changed meaning over time! Whoooah, dude!

  5. HuBo Silver badge
    Windows

    in-Memories of a FEM fatale

    It'd be cool to see this approach tested on Blumind's (Canucks) 100 billion neurons (1 quadrillion synapses) all-analog 12 Watt brain chip imho. Especially seeing how the Sandia folks "associate a small population [eg 8-16] of recurrently connected neurons with each mesh node" -- so the Blumind could help validate large-mesh scaling of their tech ...

    Also interesting that Aimone-Theilman's "published" (TFA link) paper refers quite a bit to a 2013 Franco-Portuguese (Centre for the Unknown) Open Access PLOS article that focused on motor-control ODEs, particularly some "2D arm controller" (position and dynamics). Aimone-Theilman augmented each of the Franco-Portuguese neurons "with an additional state variable that integrates the local residual error", which eliminated "steady-state error" and was key to accuracy when solving their linear-model elliptic PDEs. Very nice!
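    For flavour, here's a deliberately non-spiking caricature of that trick (a sketch of my own, not the paper's actual method): a leaky ODE relaxation of a 1D Poisson problem, where each node talks only to its mesh neighbours. The leak alone leaves a steady-state error; integrating the local residual, as described above, drives it to zero:

        import numpy as np

        # 1D Poisson problem A x = b with the nearest-neighbour (tridiagonal)
        # Laplacian: each "neuron" only ever sees its two mesh neighbours.
        n = 32
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.full(n, 1.0 / (n + 1) ** 2)   # uniform source term
        exact = np.linalg.solve(A, b)

        def run(beta, leak=0.5, alpha=1.0, dt=0.05, steps=40000):
            x = np.zeros(n)                  # node states
            s = np.zeros(n)                  # integral of the local residual
            for _ in range(steps):
                r = b - A @ x                # residual, from neighbours only
                x += dt * (alpha * r + beta * s - leak * x)
                s += dt * r
            return x

        x_p = run(beta=0.0)   # proportional only: the leak biases the answer
        x_pi = run(beta=1.0)  # PI: at equilibrium ds/dt = 0 forces A x = b

        print("P-only error:", np.max(np.abs(x_p - exact)))   # stuck, large
        print("PI error:   ", np.max(np.abs(x_pi - exact)))   # driven to ~0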

    Most impressive (to me) is that they figured out how to solve a PDE while their "neurons synapse only with their nearest neighbours in the mesh", as needed for the success of the in-memory compute architecture in this field (per TFAᖖ). It'll be nice to see how their approach fares on time-dependent (parabolic/hyperbolic) and nonlinear PDE problems, and also whether it can be adapted to non-spiking in-memory compute solution of PDEs (or is spiking fundamental to making this work?). Fascinating stuff!

    (ᖖ by comparison, linear algebraic matrix-vector solution methods, direct ones at least, would require filling in the whole band of the reverse Cuthill-McKee re-ordered matrix, which implies numerical connections to non-neighbour nodes afaict)
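    (To make that footnote concrete, here's a quick illustrative SciPy check: scramble the node numbering of a 2D mesh Laplacian and let RCM recover the narrow band that a direct banded solve would then fill in.)

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import reverse_cuthill_mckee

        # 2D 5-point Laplacian on an n-by-n mesh: nodes couple only to their
        # mesh neighbours, so the matrix is very sparse.
        n = 20
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()

        def bandwidth(M):
            coo = M.tocoo()
            return int(np.abs(coo.row - coo.col).max())

        # Scramble the node numbering, then let RCM recover a narrow band.
        rng = np.random.default_rng(7)
        p = rng.permutation(A.shape[0])
        A_bad = A[p][:, p].tocsr()
        q = reverse_cuthill_mckee(A_bad, symmetric_mode=True)
        A_rcm = A_bad[q][:, q].tocsr()

        print("scrambled bandwidth:", bandwidth(A_bad))  # roughly O(n^2)
        print("RCM bandwidth:", bandwidth(A_rcm))        # back to O(n)
        # A banded direct solve (LU/Cholesky) fills in this whole band,
        # numerically coupling nodes that are not mesh neighbours.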
