Google teaches robots to serve humans – with large language models the key

Google's largest AI language model is helping robots be more flexible in understanding and interpreting human commands, according to the web giant's latest research. Machines typically respond best to very specific demands – open-ended requests can sometimes throw them off and lead to results that users didn't have in mind. …

  1. Il'Geller

    Google finds information in its context and subtexts, which was a problem posed by the NIST TREC QA track. I solved this problem, after which I lost PA Advisors v Google. And after my loss Brin, Page and Zuckerberg sabotaged the development of the technology for 12 years. Remember DeepMind with Go? That ridiculous BERT model?

    Randall Ray Rader (born April 21, 1949) is a former United States Circuit Judge and former Chief Judge of the United States Court of Appeals for the Federal Circuit.

    1. sreynolds

      So they finally mastered....

      They finally got the bots to say "Would you like fries with that"?

  2. Anonymous Coward

    "Google teaches robots to serve humans..."

    Given that it's Google, I'm minded to ask: "Serve humans - to whom?"

    1. Mike 137 Silver badge

      Re: "Google teaches robots to serve humans..."

      An amazing Twilight Zone episode ("To Serve Man", March 2, 1962) explored this concept. An alien race landed on Earth, delivered numerous goodies, invited earthlings to visit their planet and shipped them there en masse. They left a book lying around with the title (when finally translated with difficulty) 'To Serve Man'. Unfortunately it eventually turned out to be a cookbook.

      (based on a short story by Damon Knight [Galaxy Science Fiction, November 1950]).

      1. Anonymous Coward

        Re: "Google teaches robots to serve humans..."

        Thanks for that - I knew I'd come across a story like that somewhere

      2. Roj Blake Silver badge

        Re: "Google teaches robots to serve humans..."

        The Simpsons also did it in a Treehouse of Horror episode.

        1. J.G.Harston Silver badge

          Re: "Google teaches robots to serve humans..."

          Originally a Damon Knight short story that became a Twilight Zone episode.

          Edit: Mike137, you got there before me. :)

      3. Pascal Monett Silver badge

        Ah, the Twilight Zone.

        Still relevant today, after more than 60 years.

  3. Paul Herber Silver badge

    If I had a robo-butler

    I'd call it Banter.

  4. Howard Sway Silver badge

    Asking the robot something like "I just worked out, can you get me a healthy snack?"

    So, you "work out" a lot and eat healthy food. And you are then going to buy a robot to fetch apples for you.....

    Let's face it, the only command it needs to understand is "more beer!".

    1. Paul Herber Silver badge

      Re: Asking the robot something like "I just worked out, can you get me a healthy snack?"

      Bite my shiny metal apple!

  5. Mike 137 Silver badge

    "going over to pick up the can, throwing it into a bin, and getting a sponge"

    So it's not smart enough to empty the can down the sink first? What a lovely mess at the bottom of the bin!

    Quite apart from which, why would one expect a robot to sort this trivial problem, rather than getting off one's butt and doing it oneself?

    What I need is a robot I can ask do things I can't do myself (like picking up the other end of that big piece of furniture to move it).

    1. Yet Another Anonymous coward Silver badge

      Re: "going over to pick up the can, throwing it into a bin, and getting a sponge"

      Throw can in bin? Aren't Google in California?

    2. TrickyRicky

      Re: "going over to pick up the can, throwing it into a bin, and getting a sponge"

      Never mind the business of throwing the can into the bin. The robot seems to upend everything it picks up. There'll be a trail of drink all over the floor on the way to the bin.

      1. Yet Another Anonymous coward Silver badge

        Re: "going over to pick up the can, throwing it into a bin, and getting a sponge"

        Job creation - it's in the same union as the Roombas

    3. Stoneshop

      Re: "going over to pick up the can, throwing it into a bin, and getting a sponge"

      So it's not smart enough to empty the can down the sink first?

      Depends. If there's still some coke in the can[0] the robot should just put it upright again after wiping down the underside[1], then clean the spill. If it's empty it can go into the bin. "And if you don't put it in the recycling bin as you should, you'll be going in there yourself after a reprogramming with a large axe."

      [0] surely it should be able to subtract the known weight of an empty can of the type picked up from the sensed weight.

      [1] oh wait, it's one-armed, so that should be "get sponge, put down sponge, pick up can, wipe can on sponge, put can down elsewhere, pick up sponge, wipe puddle"

    4. veti Silver badge

      Re: "going over to pick up the can, throwing it into a bin, and getting a sponge"

      For some reason, the comments on robotics (and AI) stories hereabouts always put me in mind of the quote from Charles Babbage:

      Propose to an Englishman any principle, or any instrument, however admirable, and you will observe that the whole effort of the English mind is directed to find a difficulty, a defect, or an impossibility in it. If you speak to him of a machine for peeling a potato, he will pronounce it impossible: if you peel a potato with it before his eyes, he will declare it useless, because it will not slice a pineapple.

      Only Steve Jobs ever made things happen all at once, because he shrouded development in such secrecy that the iPhone actually came as a surprise. For the rest of the world, development is incremental. This is a cool and exciting development. The time to carp about it not being finished and perfect is when Amazon tries to sell you one.

    5. Cuddles

      Re: "going over to pick up the can, throwing it into a bin, and getting a sponge"

      Also, what happens if the coke was actually in a glass? Or, given that the interpreted command is "find a coke can", what happens if there is more than one can in the vicinity? This is being sold as more flexible than standard robots, which can only respond to specific commands, but it's still limited to a small set of specific commands that can only be interpreted as part of a very specific, highly constrained scenario.

  6. Il'Geller

    By the way, how much better does the Yandex translator work? Please compare it to the best translators in America. People at Yandex surely use my lexical cloning technology. The sabotage and betrayal of Brin, Page and Zuckerberg led to the technological lag of the West, primarily the United States, behind Russia.

    1. Il'Geller

      While Brin, Page and Zuckerberg were killing me with silence, field tests began in Russia of an automatic sniper rifle based on a lexical clone of a real sniper: the rifle is already in Ukraine. As are a mine detector and a tank, which are all based on lexical clones. Putin said so.

      1. Yet Another Anonymous coward Silver badge

        >As well as a mine detector

        ie 'conscript with big boots'

        1. Il'Geller

          Google couldn't handle even its self-driving car because Brin and Page are thieves (PA Advisors v Google), and couldn't apply my new AI technology even to the car. However the Russians could and did; as evidence, please look at the way Yandex translates. In the West there is nothing even close! From which I conclude, especially considering the revelations of the Russian President Putin, that the Russians are already using lexical cloning technology for military purposes. From the literature and interviews it can be seen that this means sniping, mine clearance and tanks. Probably many other things, too.

          What Google presented is another proof that my lexical cloning is the true AI technology, even if Brin and Page hide this.

      2. Il'Geller

        Taking advantage of the absence of The Register's censor, which deletes 99% of my posts:

        Bios is the most valuable thing in AI technology. There is no simple general AI, because an AI without its bios is able to find and give out only nonsense, as OpenAI has convincingly proved by producing its completely meaningless AI-generated texts. Only personalization, only lexical cloning of individuals, can create AIs.

        Therefore, Brin, Page and Zuckerberg are afraid of AI like the devil is afraid of incense: they make and sell individual profiles based on my old technology (PA Advisors v Google), while AI technology has already begun to produce immeasurably better profiles (lexical clones) for one huge company, which means the ruin of these scoundrels at the hands of this one enormously big company. And I hope that the most merciful American public and press will not forgive the trio, because they stole at least $300 billion from me.

  7. Jan K.

    Some test commands to the bot, please...

    "Kill the [cat] [dog] [baby] [wife] [husband] [yourself]"

    Besides those tiny details, if I ever come across one of those things, it better understand simple commands like "Piss off!", "Go jump into nearest water", "GFY!" etc. etc. You surely get the gist...

    Up until now there's absolutely nothing from google I've ever wanted or needed. And with their record, there'll never ever be.

    Yes, I'm likely not a customer... ^.^
