DeepMind takes a shot at teaching AI to reason with relational networks

The ability to think logically and to reason is key to intelligence. When this can be replicated in machines, it will no doubt make AI smarter. But it’s a difficult problem, and current methods used in deep learning aren’t advanced enough. Deep learning is good for processing information, but it can struggle with reasoning. …

  1. Dave 126 Silver badge

One of the guys who founded DeepMind, Demis Hassabis, was on Desert Island Discs the other week. It was quite astounding. He chose a Prodigy track to remind himself of how Cambridge had been a "holiday camp" to him - he'd been home-schooled to allow him to train as a chess master, but at the age of twelve he had an epiphany that professional chess was a waste of human brain power. He then left home to become the chief coder on Bullfrog's Theme Park game. So Cambridge was the first time in his life he hadn't been working or training full time, and he partied. He still got a double first, though!

    He comes across as a really warm, articulate guy.

    Wow.

    1. Dave 126 Silver badge

      http://www.bbc.co.uk/programmes/b08qy1sl

      Desert Island Discs.

      "Born in 1976, he was introduced to chess aged four and, by the age of twelve, was the world's second-highest ranked player for his age. With his winnings, he bought himself a PC and taught himself to code. "

      Hats off!

  2. Destroy All Monsters Silver badge

    Short recall on classical AI architectures (GOFAI and Nouvelle AI)

    Lecture 19: Architectures: GPS, SOAR, Subsumption, Society of Mind

  3. SeanC4S

If this type of associative memory had been figured out in the 1980s, then very likely there wouldn't have been an AI winter (of discontent, in the UK?)

    https://groups.google.com/forum/#!topic/artificial-general-intelligence/C-LJSnjaz2c

    Demis could have programmed it up on his ZX Spectrum.

The question is: can you make a deep network out of such memory? The answer is no, because the information loss per layer is too great compared to the computational gain. I would say you could solve that by combining it with some aspects of reservoir computing. Actually, I think conventional deep networks have that information-loss problem as well, and it is sometimes resolved by hacks like ResNet.
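The ResNet "hack" mentioned above is the residual (skip) connection: each block computes y = x + F(x), so the layer's input passes through unchanged even if the learned transform F destroys information. A minimal numpy sketch (the layer sizes and weight scales here are illustrative, not from any particular paper):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """Compute y = x + F(x), where F is a small two-layer MLP.

    The identity path (the bare `x` term) carries the block's input
    forward untouched, so information is preserved across the layer
    regardless of what F learns.
    """
    h = relu(x @ W1)
    return x + h @ W2  # skip connection: identity plus learned residual

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
# Near-zero weights: the residual F(x) is tiny, so the block
# starts out close to the identity function.
W1 = rng.standard_normal((4, 4)) * 0.01
W2 = rng.standard_normal((4, 4)) * 0.01
y = residual_block(x, W1, W2)
```

With small initial weights, y is approximately x: the block defaults to passing information through, and training only has to learn a correction on top of the identity.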

  4. iTheHuman

A refrain I've heard from the Big Names in the field is one that Moravec first proposed in the '80s: that we should expect the two approaches (bottom-up and top-down) to meet prior to practical AGI.

    This is a very nice result
