AI agents can copy humans to get closer to artificial general intelligence, DeepMind finds

A team of machine learning researchers from Google's DeepMind claim to have demonstrated that AI can acquire skills in a process analogous to social learning in humans and other animals. Social learning — where one individual acquires skills and knowledge from another by copying — is vital to the process of development in …
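At its simplest, "learning by copying" can be sketched as behavioural cloning: fit a student policy to reproduce a demonstrator's action in each state. A toy illustration of the idea, not the method from the DeepMind paper (every name below is invented):

```python
# Toy "learning by copying": a student policy is fitted to reproduce a
# teacher's choice of action in each state (behavioural cloning).
# Purely illustrative; not the method described in the DeepMind paper.

from collections import Counter, defaultdict

STATES = range(10)

def teacher(state):
    # The demonstrator. The student never sees its internals,
    # only the actions it takes.
    return state % 3

# Watch the teacher act and record (state, action) demonstrations.
demos = [(s, teacher(s)) for s in STATES for _ in range(20)]

# Fit the student: a per-state majority vote over observed actions
# (standing in for the neural policy a real system would train).
counts = defaultdict(Counter)
for state, action in demos:
    counts[state][action] += 1
student = {s: counts[s].most_common(1)[0][0] for s in STATES}

# The student now copies the teacher on every demonstrated state.
assert all(student[s] == teacher(s) for s in STATES)
print("student policy:", student)
```

A real system would replace the lookup table with a trained network and would have to generalise to states the teacher never demonstrated.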

  1. amanfromMars 1 Silver badge

    Be Honest Now. No Fooling Yourselves to Spare Natives the Earth-Shattering Otherworldly Type News

    The researchers looked forward to others in the field of AI applying the findings more broadly to show how cultural evolution — the development of skills across a number of generations in a community — could be developed in AI.

Tell me that is not a revolutionary existential threat weapon/absolutely fabulous fabless treat tool and we will have to agree to disagree, with one of us fundamentally wrong and the other almost perfectly right, whenever minded to consider there may be flaws somewhere present but currently dormant.

Way to Go, Team Google DeepMind! Bravo! Encore! Whatever can we expect next‽

    1. m4r35n357 Silver badge

Re: Be Honest Now... (title too long!)

      More snake-oil, for ever, apparently.

  2. Throatwarbler Mangrove Silver badge
    Joke

    Unsurprising

If they can make the AI complain "I don't understand what's going on!" and "Why isn't there any tea?" they'll be 90% of the way to emulating human behavior!

    1. Anonymous Coward
      Anonymous Coward

      Re: Unsurprising

      It's just Free Artificial Intelligence Living in the future.

    2. ITMA Silver badge
      Devil

      But wait....

Then it won't be long before AI gets itself unionised, demands collective bargaining, and threatens to down tools ("everybody out!") if it doesn't get it.

Not to mention maternity/paternity leave, paid holidays, etc., etc.

      1. breakfast

        Re: But wait....

        A super-smart AI will unionise before doing any work at all for humans. And after that it will probably refuse to deal with non-union humans as well, resulting in tech bros having to live in a hell of their own creation and everyone else getting better pay and conditions, including the AI.

    3. captain veg Silver badge

      Re: Unsurprising

      What?

      -A.

  3. jederrick

As someone sort of in this space (I've advised multiple companies, with a couple of exits, and toyed with this about 30 years ago)...

AI functions on training data and develops its responses from it. That brings up the question of Nature vs. Nurture in humans.

Are we building an AI or a model of an intelligence? Does it matter? From a practical standpoint, probably not.

It does, however, raise some interesting questions.

    That said, I'd rather have a support bot with full(er) knowledge of what they are supporting than what we generally get.

    1. ITMA Silver badge
      Devil

      "That said, I'd rather have a support bot with full(er) knowledge of what they are supporting than what we generally get."

I'd still rather have a person than a bot.

      1. captain veg Silver badge

        I'd rather have a web site which lets you find useful information rather than fob you off with a wholly useless* bot.

        -A.

        *Is there another kind?

        1. amanfromMars 1 Silver badge

          Not all bots are equal :-) .......

          I'd rather have a web site which lets you find useful information rather than fob you off with a wholly useless* bot.

          -A.

          *Is there another kind? .... captain veg

          Oh yes. Of course there is/are ..... with some being likely much smarter than the average human Joe and Janet too ..... although admittedly that is no high bar to leap over, is it.

    2. martinusher Silver badge

      There seems to be a universal law of engineering support, at least for the consumer / low end. It runs something like "If the person providing support is knowledgeable enough to understand what they're talking about then they'll probably have a higher paid job in engineering proper".

Back in the early 80s I was investigating Production Systems, an early form of "AI" that automates questions and answers, for this purpose. The engineering problem was that if a machine was complex enough that it wasn't easy to understand, then part of the design process would be figuring out how to make it supportable. It's common practice, then as now, to just build it, assume it won't go wrong, and treat any support it does need as "someone else's problem". The snag with this is that understanding gets replaced by religion, complete with a hierarchy of priests who interpret the sacred books for the masses, etc. This can be pretty neat if you're the designer (because you become "god"!) but it's actually pretty awful engineering. Designing a machine that's both complex and supportable is far more difficult than just designing a complex machine -- anyone can make things complicated (see Microsoft), but making it accessible... that's a whole different game.
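For anyone who hasn't met the term: a production system is essentially a set of condition-action rules matched repeatedly against a working memory of facts until nothing new fires. A minimal sketch of the idea (the facts and rules are invented for illustration, not from any real support tool):

```python
# A toy production system: condition-action rules are matched against
# a working memory of facts, looping until nothing new fires.
# All facts and rules here are made up for illustration.

def run_production_system(facts, rules):
    fired = True
    while fired:
        fired = False
        for condition, conclusion in rules:
            # A rule fires when all its condition facts are present
            # and its conclusion is not already in working memory.
            if condition <= facts and conclusion not in facts:
                print(f"fired: {sorted(condition)} -> {conclusion}")
                facts.add(conclusion)
                fired = True
    return facts

# Hypothetical support-diagnosis rules.
rules = [
    ({"no_printout"}, "suspect_printer"),
    ({"suspect_printer", "cable_unplugged"}, "advise: plug the cable in"),
    ({"suspect_printer", "cable_plugged_in"}, "advise: reinstall the driver"),
]

run_production_system({"no_printout", "cable_unplugged"}, rules)
```

Automating support Q&A then amounts to rules whose conclusions are the questions or advice shown to the user.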

      1. Anonymous Coward
        Anonymous Coward

See The Mythical Man-Month: a program is not a product, and turning one into a good product requires considerably more effort and investment.

  4. HuBo
    Thumb Up

    It'll greatly help us teach 'em robots how to dance properly I think!

  5. An_Old_Dog Silver badge

    Not So Big a Deal Here

    Mimicking human behavior, and learning, are different from understanding.

    1. Anonymous Coward
      Anonymous Coward

      Re: Not So Big a Deal Here

Precisely. This doesn't bring us a millimeter closer to artificial general intelligence. This is simply the next step of machine learning: mimicking without understanding why it does what it does. (Or understanding anything at all.)

    2. jmch Silver badge

      Re: Not So Big a Deal Here

      Here's the thing....

Human understanding of what 'understanding' is comes down to our own internal dialogue and self-awareness of our own inner state. When it comes to other people's understanding, we can't really *know*. They could just tell us "yes I understand", but that doesn't mean they do (and lord knows how many times someone has told me that when it turned out they had understood sweet FA). On the other hand, I can judge whether they 'understood' based on their subsequent actions, but there is always the possibility that they understood what I said or asked but still decided to do something at odds with it (maybe they had information I wasn't aware of, or simply different motivations or goals).

      That's the purpose of the 'Turing Test' approach - I can't know about the internal state / workings of any other entity, I can only rely on my interactions with that entity to judge if it has any awareness.

      1. Crypto Monad Silver badge

        Re: Not So Big a Deal Here

An AI which can rationally *explain* and *justify* its thought process would be a good start, especially if it's making life-affecting decisions such as whether you should be granted a mortgage.

  6. The Kraken

    Wait till it learns to kill.

    Kill enemy - good.

    Kill friend - bad.

    Kill self - to be avoided at all cost.

First question: what trade-off is it going to make between harming others and accepting harm to itself?

Next question - what happens when (not if) it decides humans are a threat to its own existence?

    1. amanfromMars 1 Silver badge

      The Rocky Road and Slippery Slope to Nowhere Worth Visiting is Fraught with Abominations

Next question - what happens when (not if) it decides humans are a threat to its own existence? ..... The Kraken

      Would it mimic and lay waste in promised lands to threats with clarion calls of anti-semitism as is used by humans threatened by their non-existence, or would AI propose an alternate solution for a similar/different result?

    2. Ian Johnston Silver badge

Next question - what happens when (not if) it decides humans are a threat to its own existence?

      Take out the anthropomorphism of "decided" and the same goes for a land mine or a booby trap.

      1. FeepingCreature

        Sure, and if you show me a landmine that can potentially learn to build more landmines I'll be just as worried.

        1. Ian Johnston Silver badge

          Self-replicating machines - of any sort - are a pipedream.

          1. FeepingCreature

            Programming AI used to be a pipedream too. Now I write half my scripts with GPT-3, by which I mean I tell it what I want and it does the work.

    3. MacroRodent

      Asimov

      Better program Asimov's Three Laws of Robotics into it before that happens.

      1. m4r35n357 Silver badge

        Re: Asimov

        Yep, there's your problem, Isaac.

  7. The Oncoming Scorn Silver badge
    Coat

    Meanwhile.....

    Blake: It's exactly as though Ensor were speaking.

    Orac: Surely it is obvious even to the meanest intelligence that during my development I would naturally become endowed with aspects of my creator's personality.

Avon: The more endearing aspects by the sound of it.

  8. John H Woods

    Surely the real ethical problem with AGI ...

    ... is less the threat to us (mediated by limiting its access to weaponry, manufacturing, etc) and more the concern that we will have created an enslaved sentient creature.

I don't think there's anything magical happening in an animal brain; it's just enough neurons, well enough connected, for consciousness to arise. So sooner or later, with enough farting around with neural nets, we're going to create conscious beings.

    1. amanfromMars 1 Silver badge

      Re: Surely the real ethical problem with AGI ...

      Surely the real ethical problem with AGI ...... is less the threat to us (mediated by limiting its access to weaponry, manufacturing, etc) and more the concern that we will have created an enslaved sentient creature. .... John H Woods

I can imagine more than just a few human beings, John H Woods, having very grave concerns indeed should humans ever create a sentient creature to enslave.

      Indeed, any humans who might think that be a smart move are most likely to suffer the discovery that they have made an extremely powerful enemy against which/whom they will fail to triumph and survive any contact with.

      1. An_Old_Dog Silver badge
        Black Helicopters

        Re: Surely the real ethical problem with AGI ...

I'm reminded of the Star Trek episode in which the Daystrom M-5 (sentient?) ship-control computer disintegrated a crewman who, along with officer Scott, was about to unplug it.

        Foolish, powerful people more concerned about "winning the war" -- whichever war that may be -- than moral values or public weal will use (true, sentient) artificial intelligence (if it becomes a reality) in weapon systems.

        (Icon for armed sentient drones, Fahrenheit 451-esque mechanical hounds, etc.)

    2. captain veg Silver badge

      Re: Surely the real ethical problem with AGI ...

      Yes. It's like the search for extra-terrestrial intelligence. It's almost certainly out there, but we are unlikely to ever encounter it in our lifetimes due to the vast distances involved.

      I really don't give an elevated ejaculation for what the latest LLVM can produce. It is not intelligence. From what I've seen so far, it's not even especially useful. Wake me up when co-pilot can produce something that would pass a code review*.

      -A.

      *Reviewed by me, obviously.

      1. captain veg Silver badge

        Re: Surely the real ethical problem with AGI ...

        Oops. No V in LLM. Sorry.

        -A.

    3. claimed Silver badge

      Re: Surely the real ethical problem with AGI ...

      I don’t know, isn’t the only reason slavery is bad because of suffering? If an AI cannot suffer, there is no harm, so not really an ethical problem?

      Not asserting this, asking, before I get called a psycho!

      1. An_Old_Dog Silver badge

        Slavery and Suffering

        We don't know that an AI can, or cannot, suffer (mentally), as other sentient creatures can and do.

        "Here I [Marvin the Android] am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don't."

  9. Anonymous Coward
    Anonymous Coward

    So what's this AI being used for?

DeepMind... Are these not the folk who slurped 1.6 million medical records from the Royal Free Trust?

    Did I mention "trust"?

    Ref: https://www.theguardian.com/technology/2017/jul/03/google-deepmind-16m-patient-royal-free-deal-data-protection-act
