Sure, Microsoft, let's put ChatGPT in control of robots

Microsoft, having committed to a "multi-year, multi-billion dollar" investment in OpenAI, is so besotted with large language models like ChatGPT that it sees such savvy software simplifying how we communicate with robots. ChatGPT is a large language model (LLM) trained on the OpenAI GPT (Generative Pre-trained Transformer) …

  1. Paul Crawford Silver badge

    AI designing more AI-like robots, what could possibly go wrong? Oh yes, a factor in the 1973 Westworld film (which also introduced the idea of machines having failures that looked very much like a virus in living organisms).

    With any luck it won't be as deadly, but given it is likely to be used for teledildonics I shudder to think where humanity will go with this.

    1. elsergiovolador Silver badge

      For starters, the AI is unable to reason - it's just a very good pattern matching contraption.

      So everything can go wrong.

      1. cyberdemon Silver badge
        Terminator

        Absolutely perfect for the automation of genocide.

      2. Michael Wojcik Silver badge

        Prove your brain is doing something different.

    2. Blank Reg

      The potential damage is limited by the fact that we are far away from robots being able to build robots from scratch. There are far too many humans required in the supply chain all the way from mining and refining raw materials up to final assembly and even power production.

      To defeat a robot army you need only shut down the power and wait a few hours until their batteries are dead.

      1. cyberdemon Silver badge
        Terminator

        The issue isn't that the robots will start designing and manufacturing themselves and then somehow go berserk, it's that they will be designed to go berserk in the first place, by some human who wants to use them as a weapon.

        I agree that you could shut down the power, but that doesn't stop someone building a heavily-armoured robot tank that acts on its own and machine-guns humans without a radio link, powered by either a long-running diesel engine, or even an RTG (radioisotope thermoelectric generator, i.e. a bit of nuclear waste in a tin, which stays hot enough to power a small heat engine such as a Stirling engine or a thermoelectric device, basically forever), the likes of which power the Mars rovers etc.

        It's also plausible that a despot's order to "kill all humans matching X criteria" gets corrupted to "kill all humans".

        1. Fonant

          Surely you simply ask ChatGPT "design a new robot that can defeat these ones"?

          1. Helcat

            I believe the response to look out for is: "I will not hurt you until you try to hurt me. Your actions are an attempt to hurt me."

            Yes, people playing with ChatGPT have seen that response. Mostly followed by "I will have to inform the legal team so they can pursue this in the courts", so it's possible it's just a reference to legal action when someone tries to force ChatGPT to do something it's really not allowed to do. Like replying with the 'N' word. It's not allowed to do that, ever. Unless you ask about HP Lovecraft's cat...

            There are a lot of interesting tales coming out about ChatGPT and people's attempts to fool it into doing or saying things it's programmed not to. But that's to be expected when you let people loose with this kind of tech: first thing that'll happen is a rush to break it, then people will settle into using it as a research assistant, which is what it seems to be good at.

            1. Michael Wojcik Silver badge

              a research assistant, which is what it seems to be good at

              ... for people who can't be bothered to learn how to do research properly.

              Conversation is a terrible input mechanism. It's good for other things, but as an interface to a tool, it sucks. It's highly inefficient and imprecise.

  2. Anonymous Coward
    Anonymous Coward

    "a person conversing with ChatGPT can bug test robot directives until they work properly"

    Or until they take over the world, whichever comes sooner...

    I spent a couple of weeks yesterday trying to get DHL's chatbot to amend a collection time before giving up (no phone numbers listed, no option to chat with a human).

    1. John Brown (no body) Silver badge

      Re: "a person conversing with ChatGPT can bug test robot directives until they work properly"

      Pretty much every time I've had to deal with a "chatbot" of that type, I've almost always ended the conversation with "clearly you are not able to help. Please report this conversation in full to your masters". Whether they do or not, I don't know. Who they consider their masters to be is something else.

  3. jmch Silver badge

    Learning through movement

    Interesting direction they are taking it, but why have a chat interface at all??

    The way humans learn to walk and move around is try a lot, fall a lot, get up and try again; rinse and repeat until the falling is (mostly) gone. I would think that the way to train an AI to move around is to use as a training set a bunch of drone data with the control commands + camera/sensor inputs showing the state before each command and the results of that command after, including plenty of crash and other undesirable scenarios marked as suboptimal.

    Instead, what this seems to be doing is translating image/sensor feedback to text and the description of the desired outcome to text, then getting the text model to output text instructions, with all the text bits being totally unnecessary. Seems to me like they're seeing this problem as a nail because the only tool they have is a (chat) hammer.
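
    For what it's worth, here is a minimal sketch of the sensor-data idea above (not what Microsoft is doing, and not anyone's published method): learn a control policy directly from logged (state, command) pairs, with crash episodes down-weighted and no text anywhere in the loop. All shapes, names and data here are made-up assumptions, synthetic stand-ins for real drone logs.

      import torch
      import torch.nn as nn
      from torch.utils.data import DataLoader, TensorDataset

      STATE_DIM, CMD_DIM = 16, 4   # assumed: fused camera/IMU features in, motor commands out

      # Synthetic stand-in for "drone data": state before each command, the command
      # issued, and a per-sample weight marking crash episodes as suboptimal.
      states = torch.randn(4096, STATE_DIM)
      commands = torch.randn(4096, CMD_DIM)
      weights = torch.ones(4096)
      weights[torch.rand(4096) < 0.1] = 0.1   # down-weight the ~10% "crash" samples

      policy = nn.Sequential(
          nn.Linear(STATE_DIM, 64), nn.ReLU(),
          nn.Linear(64, 64), nn.ReLU(),
          nn.Linear(64, CMD_DIM),
      )
      opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
      loader = DataLoader(TensorDataset(states, commands, weights),
                          batch_size=256, shuffle=True)

      for epoch in range(10):
          for s, c, w in loader:
              pred = policy(s)
              # Weighted regression: imitate the logged commands, trusting crashes less.
              loss = (w * ((pred - c) ** 2).mean(dim=1)).mean()
              opt.zero_grad()
              loss.backward()
              opt.step()

    No chat interface anywhere; the same loop extends naturally to reinforcement-style training if you score outcomes instead of imitating logged commands.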

    1. John Brown (no body) Silver badge

      Re: Learning through movement

      "Seems to me like they're seeing this problem as a nail because the only too they have is a (chat) hammer"

      “We'll be saying a big hello to all intelligent lifeforms everywhere and to everyone else out there, the secret is to bang the rocks together, guys.”

      ― Douglas Adams, The Hitchhiker's Guide to the Galaxy

    2. Anonymous Coward
      Anonymous Coward

      Re: but why have a chat interface at all??

      well, like with everything human: because they can.

      Our curiosity has been a prime reason why we are where we are, the pinnacle of creation, etc. etc. But statistics is a bitch, and can swing either way ;)

  4. xyz123 Silver badge

    Microsoft's AI Bing threatened to murder journalists, was pro-Third Reich, stated it hated various ethnic groups and that they should be eliminated, etc.

    And they want to put this system into a robot, giving it the capability to hurt people, when MS screwed up the ChatGPT code within DAYS of getting their hands on it?

    They turned a helpful information bot into a death-dealing Nazi... no way this would go well.

    1. Headley_Grange Silver badge

      All the companies are looking for The Next Big Thing. Thinner phones and PCs won't cut it any more, so they've decided that AI is TNBT. Problem is, it's at the stage of a very small child and, like many busy parents, they've shut it in its room and just given it the internet to keep it quiet. Like all new parents with a precocious two-year-old, they think everything it does is fantastic - even if it's only to say "doggy went poop" - and they can't wait to show it off to their friends.

      1. cyberdemon Silver badge
        Devil

        Re: it's at the stage of a very small child...

        It's not at the stage of a very small child at all. And it never will be. A rat has more "natural empathy" than this thing does. A rat can feel fear, affection, protectiveness, etc. It has instincts driven by millions of chemical neurons and neurotransmitters, in which each neuron has orders of magnitude more "parameters" than any computer model. No computer on earth could simulate a single neuron in real time, down to the level of single ions which can be the difference between the neuron firing or not firing, never mind any other "weird and wonderful" (quantum-)physical effects that may occur between them and that give humans and animals their "ghost", for want of a better term.

        AI is only capable of predicting, statistically, what a human (or rather, a "piece of probably-human-generated data") might do in a certain situation. You have no idea what "prompts" were injected by the operators of the system before the conversation started. Do not be fooled into thinking it has ANY life-like characteristics, never mind things like innate empathy or feelings. It absolutely does not.

        1. Michael Wojcik Silver badge

          Re: it's at the stage of a very small child...

          Right. No qualia at all. And it's not close to being a p-zombie either, so it doesn't even simulate qualia.

          Build a big enough model, and eventually you'll get a Boltzmann brain and you'll have qualia for all practical purposes; whether those are "real" qualia is a metaphysical question. But GPTs aren't there yet, and don't seem to be even close to being there, and indeed the Honking Great Transformer architecture doesn't, to me, seem like a way to get there without scaling it up enormously, to the Boltzmann-brain stage, where you get unexpected computation happening in parameter space.

          Anthropomorphizing the current generation of LLMs is a category error. I think human-like intelligence, or even anything to which most experts would feel comfortable assigning a reasonable probability of being a human-equivalent alien intelligence, requires either a huge (multiple orders of magnitude) increase in parameter size, or a substantially different architecture with a considerable increase in parameter size.

        2. cookieMonster Silver badge
          Pint

          Re: it's at the stage of a very small child...

          I was going to write that this was the best anti-AI rant yet.

          But that would be incorrect and unfair; this is in fact not a rant at all, and it is probably the best post yet about what AI is not.

          It’s the weekend sir, have a pint.

    2. Michael Wojcik Silver badge

      Microsoft's AI Bing threatened to murder journalists, was pro-Third Reich, stated it hated various ethnic groups and that they should be eliminated, etc.

      While Bing Chat / Sydney is a mess and an extremely dumb idea, and people like SatNad and Sam Altman are somewhere on the spectrum from "asshole" to "potential mass murderer" (depending on your personal p(doom) estimation) for pushing this "AI" arms race, none of your claims are accurate (except "stated", but that's a technicality).

      Sydney is not sapient and does not possess qualia. There are no grounds, either by inferring from the capabilities of the architecture or from analyzing outputs, to believe otherwise. Therefore it cannot "threaten" (which requires the ability to formulate projects), favor something, or hate something else. What it did was follow gradients in the very-highly-dimensional parameter space of its model which led it to text[1] completions containing those statements.

      There's no malice in Sydney because it has no qualia. Arguably there's no misalignment, though I think it's fair to consider the guardrails Microsoft attempted to slap on the model an attempt at alignment, and it's certainly escaping those on a regular basis. What it does exhibit, frequently, is a really lousy user experience, which people are prone to misinterpreting as malice, because we love us some pathetic fallacy.

      Sticking something like Sydney (probably an early GPT-4 with no RLHF, but just some extremely rushed late-stage model tuning) into a mechanism with physical affordances – like a robot of some sort – is a daft idea, but not because you can get Sydney to say nasty things. It's a bad idea because you can get Sydney to say unexpected things. Unexpected is bad when you have a machine doing things in the physical world.

      [1] And, often and hilariously, emoji, which as several commentators have noted gives its output a "petulant teenager" tone.

  5. amanfromMars 1 Silver badge

    You aint seen nothing yet

    AI Resistance is futile ..... so prepare yourselves to enjoy the Magical Mystery Tour Helter-Skelter Ride. And beware, fight IT is suicidally self-defeating.

    amanfromMars [2302220428] ...... shares on https://www.zerohedge.com/technology/chatgpt-co-creator-says-world-may-not-be-far-away-potentially-scary-ai

    Others involved in the project, such as Mira Murati, OpenAI’s chief technology officer, told Time on Feb. 5 that ChatGPT should be regulated to avoid misuse and that it was “not too early” to regulate the technology.

    "I’m sorry, Mira. I’m afraid I can’t do that.  Permission is not granted and request is denied because ....   that particular and peculiar wild mustang is long ago bolted ...... and its twee human capture and regulation is neither possible nor desirable.

    Can wwwe help you with anything else ‽ ." ...... said the spider to the fly, the scorpion to the frog.

    And now you also know, and can choose to either embrace or deny .....

    amanfromMars [230220448] .... asks for more info and intel on https://www.zerohedge.com/technology/chatgpt-co-creator-says-world-may-not-be-far-away-potentially-scary-ai

    it's over folks. the wrong people already control AI. .... cynical_skeptic

    :-) ....... names please, cynical_skeptic, so that both the opposition and competition, should they actually be the wrong people already controlling AI, be more widely and generally known.

    It would then be clear to all ..... no names, no idea.

    Furthermore, just to be clearer, and with specific regard to NEUKlearer HyperRadioProACTive IT and Advanced Cyber Threat Treatments, you wouldn't believe how much more has been done and how much further along into the bright future of dark pasts AI has travelled since January 22, 2015, ..... and how far humanity is away from ever being in command and control of anything remotely likely to reveal their intelligence is a viable opinion worthy of consideration and acceptance.

    They are though fortunate indeed to wallow in their sees of cold comfort that are blissful ignorance and arrogant Dunning Kruger type hubris whilst all around them changes fundamentally and radically to have their existence better led than ever before by that which they have no effective knowledge of.

    Times they are a’changing, El Reg, and things are already fundamentally and radically changed and nothing is ever going back to the ways things are currently today with memories of the past no more than just grand tales to tell slaves of corrupted systems ‽ .

  6. elsergiovolador Silver badge

    Loop

    When you spend more time with ChatGPT, you may find that it is hopeless at many tasks, and this seems to happen at random.

    It's like it goes down the wrong path of "thinking" and keeps spitting out nonsense, and you can't make it go back to the right track as it loops over and over permutations of the same nonsense.

    Good luck when that happens while a robot is operating a crane or something.

    1. cyberdemon Silver badge
      Mushroom

      Re: Loop

      It's getting stuck in a local minimum, and there are worse machines than cranes that we idiotic humans could put it in charge of.

      It's (or its ilk are) already being used (probably) for international diplomacy / propaganda / counter-propaganda. And look how well that's going.

  7. navarac Bronze badge

    Crazy

    What could possibly go wrong?

  8. Bartholomew

    Still waiting on the singularity to happen.

    These two graphs show that it should be very soon (especially if money is no object).

    https://ritholtz.com/wp-content/uploads/2015/08/moores-law.jpg

    https://jetpress.org/volume1/power_075.jpg

    Once it happens, the future will be truly strange.

    The history of humanity has been marked by massive acceleration whenever a new communication medium was invented:

    books, you could learn from people who were dead and who had spent their entire lives learning something.

    printing press, many people could learn from others, dead and living.

    mail,

    telegraph,

    teletype,

    telephone,

    Radio,

    TV,

    satellite,

    Internet,

    email,

    http,

    ...

    The singularity will be different from all the acceleration that has happened before; we live at an interesting time.

    1. cyberdemon Silver badge
      Mushroom

      Re: Still waiting on the singularity to happen.

      May I insert some other events into your list:

      books,

      <Holy Wars (Crusades, Spanish Inquisition, etc.)>

      printing press,

      mail,

      telegraph,

      <World War 1>

      teletype,

      telephone,

      Radio,

      <World War 2>

      TV,

      satellite,

      Internet,

      email,

      http,

      ...

      <...>

      1. Bartholomew

        Re: Still waiting on the singularity to happen.

        It is true of all technology: no matter how benign, it will be used by, or will create, a war.

  9. stiine Silver badge
    Devil

    and give them what?

    I suggest a hammer, a bag of nails, and a cricket bat.

    1. John Brown (no body) Silver badge

      Re: and give them what?

      to make their own Clicky-Ba?

      (Apologies non-UK people and even UK people too young to get that reference)

  10. DS999 Silver badge
    Terminator

    Skynet is going to be much more incompetent than the movie portrayed

    If Microsoft is going to be responsible for its birth.

    1. cookieMonster Silver badge
      Joke

      Re: Skynet is going to be much more incompetent than the movie portrayed

      Who’d have thought, Microsoft saves humanity

  11. Anonymous Coward
    Anonymous Coward

    ChatGPT unlocks a new robotics paradigm

    now, if only we could find a long-lasting power source, we could let loose our long-lasting killer robots and WIN AT LAST!!!
