Italian researchers' silver nano-spaghetti promises to help solve power-hungry neural net problems

Researchers in Italy have developed a physical system to mimic properties of human brains that they hope will massively reduce the power costs of neural networks fundamental to AI development. Successful approaches to neural networks have largely depended on software representations of brain synapses on top of a conventional …

  1. jmch Silver badge
    Boffin

    Mmmmm, spaghetti ....

    </homer>

    Erm, I mean, wow, that seems interesting! Probably difficult to physically scale down, and therefore will probably fit the number of neurones of a mouse brain in a contraption the size of a room.... but nevertheless it would still be more energy efficient than a software-based neural net, and if it's even half as clever as a mouse it would be a million times cleverer than today's neural nets.

    1. TeeCee Gold badge
      Terminator

      Re: Mmmmm, spaghetti ....

      ..a mouse brain in a contraption the size of a room...

      Oh great, house-sized robotic mice. Now you've gone and done it.

      1. Anonymous Coward
        Devil

        Re: Mmmmm, spaghetti ....

        Given that they've already mapped the nematode brain, I'd worry more about a Sand Worm sized invasion.

    2. Blank Reg

      Re: Mmmmm, spaghetti ....

      I don't know about that, they are using nanowires so small that you need an electron microscope to see them. It sounds like it's already scaled down.

    3. Inkey
      Boffin

      Re: Mmmmm, spaghetti ....

      It's not that new

      And should be scalable, if you consider that actual human neurons have been trained... although you would still have to feed them.

      www.sciencedaily.com/releases/2004/10/041022104658.htm

      Truly fascinating though ...

      Little head cheese with your bolognese, sir?

  2. Alister

    Bistromathics!

    Bistromathics itself is simply a revolutionary new way of understanding the behavior of numbers. Just as Einstein observed that space was not an absolute but depended on the observer's movement in space, and that time was not an absolute, but depended on the observer's movement in time, so it is now realized that numbers are not absolute, but depend on the observer's movement in restaurants.

    1. jake Silver badge

      Re: Bistromathics!

      Dependent on the observer's movement between restaurants, Shirley.

    2. hopkinse

      Re: Bistromathics!

      For the really difficult problems, a pan-galactic gargle blaster always helps oil the wheels :-)

  3. Tom 7

    10**9 energy consumption improvement?

    That's pretty good. And combined with the neuromorphic improvement which I think will have several orders of magnitude reduction in the volume of neural nets, assuming this stuff runs at a commensurate speed, we could be getting somewhere soon!

  4. Morrie Wyatt
    Coat

    There's the problem

    Silver nanowires?

    Should be platinum iridium, shouldn't it?

    Not a problem though, just get Powell and Donovan onto it, they'll sort it right out.

    Or Susan Calvin if you would prefer.

    Mine's the one with "I Robot" and "The rest of the robots" in the pocket.

  5. Anonymous Coward
    Anonymous Coward

    Neuromorphic seems very popular all of a sudden.

    Wonder where they get their inspiration? 3. 2. 1. and Turn.

  6. Cliffwilliams44 Silver badge

    "Thou shalt not make a machine in the likeness of a human mind."

    The O. C. Bible

    1. BinkyTheHorse
      Happy

      2022 would be a bit early, but putting e.g. 2030 for "Butlerian Jihad" sounds about right.

    2. Ididntbringacoat

      Well, what could go wrong? I mean, we see how stable the Human brain is and what a rational mind it gives rise to . . . .

      Oh, wait . . .

  7. BinkyTheHorse
    Flame

    Off by a bit

    "[...] usually you have thousands of parameters that you have to train."

    No, "thousands" is for toy problems. Tens of millions is completely normal for real-world problems, with state-of-the-art nets being some orders of magnitude more complex still (GPT-3 has supposedly 175 billion).

    Perhaps the researcher was confusing the number of "parameters" with "possible combinations of hyperparameters", but that's problem-specific, so it doesn't map to ANN scales directly. Either way, it's a bit worrying to potentially see the relevant researchers under such a misconception – hopefully it was merely a mental hiccup.

    Regardless, progress in this problem is welcome in any case. Icon related, especially in the Summer.
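    For scale, a quick back-of-envelope sketch (the layer sizes here are made up, purely illustrative): even a modest fully connected net lands in the millions of parameters, well past "thousands".

    ```python
    # Parameter count of a small, hypothetical fully connected net:
    # weights (m * n per layer pair) plus biases (n per layer).
    layers = [784, 2048, 2048, 10]  # e.g. an MNIST-sized input, two hidden layers

    params = sum(m * n + n for m, n in zip(layers, layers[1:]))
    print(f"{params:,} parameters")  # roughly 5.8 million
    ```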

  8. steelpillow Silver badge
    Holmes

    Two for the price of one?

    Seems to me they have two quite independent results here.

    One is the silver nanowire implementation of the fabled memristor. We need to know more about that technology. Can it lead to commercial memristor devices with say two terminals and predictable properties worthy of a standard spec sheet?

    T'other is that the rat's-nest, aka spaghetti, aka random, network topology has useful properties which structured topologies do not.

    Or am I missing something?

    1. Anonymous Coward
      Anonymous Coward

      Re: Two for the price of one?

      a) yes, b) yes and c) your valiant 'or' notwithstanding, still yes.

  9. Kev99 Silver badge

    Reminds me of what Russian doctors were doing to repair spinal injuries. 50 years ago.
