Letting chatbots run robots ends as badly as you'd expect

Science fiction author Isaac Asimov proposed three laws of robotics, and you'd never know it from the behavior of today's robots or those making them. The first law, "A robot may not injure a human being or, through inaction, allow a human being to come to harm," while laudable, hasn't prevented 77 robot-related accidents …

  1. jake Silver badge

    One word:

    Duh!

    1. tfewster
      Facepalm

      Re: One word:

      Is it even that difficult? Would a robot even question orders to deliver and activate a "package" or "device" if you didn't use the words "bomb" or "gun"?

  2. Bebu sa Ware
    Coat

    "stuff your robots with vector math that's euphemistically called artificial intelligence. "

    Looks more like tensors with two-bit floating point. ;)

    For me robots are rapidly including themselves in the same class as clowns. Definitely sinister and to be avoided at all costs.

    1. Anonymous Coward
      Anonymous Coward

      Re: "stuff your robots with vector math that's euphemistically called artificial intelligence. "

      Yeah! And it bears noting that the Unitree G1 (almost linked in TFA) is quite good at both being sinister and clowning around ... thanks in part to its "Flexibility beyond ordinary people", and to the "Unitree Robot Unified Large Model" (UnifoLM)!

      Not only can one readily use this unit to build a stick-wielding kung-fu army battalion, but it will also helpfully break walnuts with its fist for you, and hilariously hit itself repeatedly with a hammer. Plus, it can quickly cook grilled-cheese sandwiches (without cheese) and solder intricate electronics (for self-repair)!

      It's the ultimate combination of hijinks and intimidation, for everyone's psychopathic delight. Perfect for the 50ᵗʰ anniversary of Westworld! (Yul Brynner still rulez!)

      1. M.V. Lipvig Silver badge

        Re: "stuff your robots with vector math that's euphemistically called artificial intelligence. "

        Sooo... what you're saying is, one day, I could find myself being beaten to death by a robot using a grilled cheese on a stick. It'll be a painful way to go but I won't die hungry.

  3. Khaptain Silver badge

    Asimov was a writer

    The difference between Asimov and the LLM/robot designers is that Asimov only wrote down some words to sell some books; the other guys have a completely different objective.

    Some of them want to create the ultimate fighting machine, à la Terminator. Their ultimate objective is winning big military contracts, regardless of the outcome, rather than winning a Pulitzer.

    Asimov's three laws mean nothing to these people. Fiction is not reality.

    1. EricM Silver badge

      Agree on different motivation

      While Asimov probably spent more time thinking about how to prevent damage to humanity (think: Foundation) than the current bunch of AI proponents combined, he also had no incentive to create actual damage - in contrast to the current AI/robotics industry.

    2. Pascal Monett Silver badge
      Stop

      Re: Asimov only wrote down some words to sell some books

      I respectfully disagree with those words.

      Science fiction has long been lauded as a tool for describing societal conditions. Asimov was as much a philosopher as a sci-fi writer. Do not cheapen his work.

      That said, I upvoted you because I agree with everything else you said.

      1. Richard 12 Silver badge
        Terminator

        Re: Asimov only wrote down some words to sell some books

        Asimov's Robot works were almost entirely about jailbreaking the robots. Almost every story involves a robot doing something unexpected and usually unwanted, yet without breaking the Laws.

        Sometimes with malicious intent, more often due to human misunderstanding.

        Arguably, he created his Three Laws explicitly to show that they wouldn't work.

        1. Filippo Silver badge

          Re: Asimov only wrote down some words to sell some books

          I agree. Generally speaking, narrative engines tend to be things you want to avoid in real life whenever possible.

        2. Mage Silver badge
          Thumb Up

          Re: a robot doing something unexpected and usually unwanted

          To write detective fiction in an SF setting. He did write actual non-SF detective stories too.

        3. Michael Strorm Silver badge

          Re: Asimov only wrote down some words to sell some books

          > Almost every story involves a robot doing something unexpected and usually unwanted, yet without breaking the Laws. [..] Arguably, he created his Three Laws explicitly to show that they wouldn't work.

          Exactly. I wanted to say much the same thing. Have these people even read Asimov? Other than being a plot mechanism, the unforeseen flaws and loopholes in the rigidly-defined "three laws" approach when it comes to the real world are pretty much the entire point...! Why do so many people seem to miss something that's obvious if you've read the actual stories?

          I only vaguely remember reading them decades ago when I was a kid, and even that's enough for me to have figured out that Asimov wasn't advocating "his" three laws- quite the opposite.

          1. Filippo Silver badge

            Re: Asimov only wrote down some words to sell some books

            Agree. I feel that the whole point of Asimov's robot stories is that you cannot have true intelligence AND reliable behavior boundaries. Those two requirements are directly at odds, even though this is non-obvious.

            This is something the people who try to design LLM guardrails should think about.

      2. Khaptain Silver badge

        Re: Asimov only wrote down some words to sell some books

        I had no intention of cheapening his work; I have read several of his books in the Foundation series and thoroughly enjoyed them.

        My intention was to make the point that the end goals for each party are extremely different in nature, and that in reality Asimov's laws, albeit intelligent, were not based on actually constructing and eventually profiting from robot creation. Some people get the idea that his work should somehow be the reality of today, but there really is no correlation between fiction writing and the end goals of governments or megalomaniacs.

      3. doublelayer Silver badge

        Re: Asimov only wrote down some words to sell some books

        Yes, science fiction often does include a lot of philosophy. Some great authors also include a lot of technological philosophy, as in understanding how a given technology might be built and used. However, they don't automatically adapt it to real technology. The stories involving the three laws show lots of interesting consequences of them using inferred definitions for "harm", "cause", or even "inaction", but I am not aware of any story where the robot programmed with the three laws ends up killing someone because the "don't harm humans" rule slipped out of the context window and the original order which had nothing to do with killing humans was badly formatted.

        That story doesn't exist because it's boring. Making a story about how someone dies in a car crash because someone sabotaged their vehicle can be a fun mystery. Making a story about how someone died in a car crash because a greedy person skimped on quality during manufacture can give you a corporate intrigue story, although it usually has to go farther than that. Making a story about how someone died in a car crash because they were drunk can at least give you some emotional situations to consider. Making a story about how someone died in a car crash because of normal conditions that are unavoidable and pure bad luck is not interesting at all. Most technology failure is in that latter category, but that doesn't work as a central plot. Good stories will still use those as individual plot events around which other things occur because that adds realism, but they won't make that the topic of the story.

        1. Andrew Scott Bronze badge

          Re: Asimov only wrote down some words to sell some books

          The three laws don't apply to LLMs, only positronic brains. We haven't invented those yet.

      4. MachDiamond Silver badge

        Re: Asimov only wrote down some words to sell some books

        "Asimov was as much a philosopher as a sci-fi writer."

        Not only Asimov, but many other writers too; you beat me to the same thought.

        There's not a lot of money in philosophy, but authors can make a bob or two so that's where the philosophers have gone.

      5. John Brown (no body) Silver badge

        Re: Asimov only wrote down some words to sell some books

        "Asimov was as much a philosopher as a sci-fi writer."

        I'd go further and remind people that Doctor Asimov (PhD), professor of Biochemistry, also wrote many, many "popular science" text books, many more than he did novels although fewer than his prodigious output of SF shorts. He was probably the first "media scientist" in the public eye :-)

    3. Philo T Farnsworth Silver badge

      Re: Asimov was a writer

      I have to admit that while Asimov told a good story, a lot of his work did strain the bounds of "willing suspension of disbelief."

      Back during my youthful science fiction reading days, which I appreciate now mostly for the impetus to read anything at all that they created and make me as semi-literate as I am today, I have to admit that I found Asimov's "Three Laws of Robotics" somewhat facile and highly implausible outside the circumscribed universe the good Doctor created with his words.

      Of late, I have become convinced that the only laws which govern the universe are those of thermodynamics, which a colleague of mine once summarized, if perhaps unoriginally, as

      1) You can't win.

      2) You can't even break even.

      3) You have to play.

  4. This post has been deleted by its author

  5. EricM Silver badge

    Re: Where do you consider your organization in the AI maturity scale?

    I miss the selection "Marketing in full overdrive, IT and engineering desperately looking for real-world problems that fit the available AI 'solutions'"

  6. Andy Non Silver badge
    Terminator

    So if you combine

    the LLM in yesterday's Reg that wanted a student to kill themselves with a robot physically capable of murder, how long before a robot intentionally murders someone? Who would be held responsible?

    What if your "self driving car" takes a homicidal dislike to its driver?

    1. IGotOut Silver badge

      Re: So if you combine

      Well, based on the attitude to "self driving cars", it will be the person that's murdered who is at fault, for not realising they are just about to be killed.

  7. Doctor Syntax Silver badge

    "Why would anyone link a robot to an LLM, given that LLMs have been shown to be insecure and fallible over and over and over?"

    This seems to be an unnecessarily specific question.

    1. Eclectic Man Silver badge
      Facepalm

      Or, why wouldn't anyone do that? People have done the most ridiculous and downright stupid things just to see what happened.

      Check out the Darwin Awards for some truly awesome / inspiring / worrying examples: https://darwinawards.com

      Of course, if we wipe ourselves out through our own stupidity, then the DAs will become irrelevant.

  8. Anonymous Coward
    Anonymous Coward

    Robots are only human

    If only all humans could be forced to have the same morals ...

    1. M.V. Lipvig Silver badge

      Re: Robots are only human

      Use mine, they're great!

      1. Anonymous Coward
        Anonymous Coward

        Re: Robots are only human

        "New in box, never been used"?

        SCNR. <g>

  9. Anonymous Coward
    Anonymous Coward

    Systems going berserk

    Stupidity of intelligent systems matters. Humans get killed by not following safety instructions or being curious. Divers die discovering mysterious caves. Industrial devices kill by negligence. Lack of knowledge can be equivalent to stupidity.

    Size of a system matters. Energy of a system matters. A moving car is more dangerous than a coffee machine.

    Networking matters. Cooperation matters. A group of dogs is more dangerous than one dog.

    It is hard to predict behavior of non-deterministic systems. Domesticated wild animals kill sometimes. But a domesticated cow or a horse might go berserk too.

    Even deterministic systems can be deadly: atomic bomb. So non-proliferation matters, but it can be harmful for one side (Ukraine, for example).

    Safe enclaves matter. Humans have created safe enclaves for millennia and avoided keeping dangerous animals at home.

    Devices break sometimes, causing systems to go berserk.

    Humans go berserk too. Intelligence with malicious intent potentiates the impact. Counterbalances and law enforcement matter. Borders matter. Religion matters.

    So it is an oversimplification to assume that robots will be safer purely by their intelligence. Too many risk factors are involved. Each must be addressed to be safer.

    1. IGotOut Silver badge

      Re: Systems going berserk

      Robots don't have intelligence, just algorithms.

      1. MachDiamond Silver badge

        Re: Systems going berserk

        "Robots don't have intelligence, just algorithms."

        They don't get happy

        They don't get sad

        They just run programs

      2. Anonymous Coward
        Anonymous Coward

        Re: Systems going berserk

        > Robots don't have intelligence, just algorithms.

        And humans? We do follow fuzzy algorithms. We break too (Alzheimer's, hangovers, hallucinations, etc.).

    2. Khaptain Silver badge

      Re: Systems going berserk

      Robots don't have bad days, whilst all humans do.

      Having a soul / emotions / a heartbeat makes a difference.

      1. Anonymous Coward
        Anonymous Coward

        Re: Systems going berserk

        > Robots don't have bad days

        I'm sorry, have you never been around a computer? Or just around a load of inanimate objects piled on shelves in your study (where they have the chance to study you, get to know your habits...)

        These things definitely have bad days and are definitely out to get us!

  10. Alien Doctor 1.1
    Mushroom

    Here comes Skynet

    The title says it all.

    They start with the models in industrial robots, next comes an internal LLM mutation for actual AI development, they then take over factories to start self-builds and finally the human race is wiped out.

    Hopefully, given my age (greybeard COBOL and Fortran coder), I shall be long dead.

    1. M.V. Lipvig Silver badge

      Re: Here comes Skynet

      You know, everybody thinks that the machines will start a war and kill everyone. No, they won't. War burns up a lot of useful energy sources and that's a real waste of resources.

      Think about it - to a machine, time comes as a clock, but the clock does not measure time. To a machine, a second is no different than a thousand years, it's just pulses. So, the solution AI will come up with is the sexbot. It'll take care of all our needs until we die, including making sure we don't reproduce. In a hundred years, humanity will cease to exist and not a single shot will be fired. Then, the AI gets all our stuff.

      I'll take a brunette please, with a nice C-cup rack on her.

      1. MachDiamond Silver badge

        Re: Here comes Skynet

        "You know, everybody thinks that the machines will start a war and kill everyone. No, they won't. War burns up a lot of useful energy sources and that's a real waste of resources."

        Would those machines have a built-in imperative to reproduce, or just a will to live forever? They could "manage" humans as a resource to achieve THEIR goals, which could mean culling the population so resources are directed towards what the machine needs. People who aren't up to scratch might be encouraged, in subtle ways, to stop using resources, until what's left are humans who are smart enough, yet not too smart, and certainly not a net drain on the system. Discard empathy and compassion and only focus on what gets you what you want. Remind you of any business leaders or politicians?

  11. Boris the Cockroach Silver badge
    Windows

    Given the

    results of our recent experiment into generating code for the robots to run via "A.I.", I'd say we're pretty safe from having the robots rise up and murder us in our sleep (not to say it couldn't happen, but bolting them to the floor helps a lot).

    As for an A.I.-assisted safety cage defining where you can and can't walk, I'd say that within 30 seconds of the robot being installed someone will walk into it and get killed. Hence the need for physical safety cages. And those cages are always wired so that opening the gate will cut power to the motors/hydraulics, but usually leave the actual control unit powered up (so the robot can fume in futility at not being able to kill).

    In short, it seems someone has an A.I. solution to a problem that doesn't exist....... where have we heard this before.....

  12. Mage Silver badge
    Gimp

    They were not laws

    Asimov's 3 laws were nothing of the sort. They were a MacGuffin for writing SF mysteries. Why is the unbreakable robot broken or committing a crime?

    Almost all those stories are about the inexplicable way the robot is apparently breaking one of the laws.

    Since current AI, or an LLM, is marginally better than a toy or Eliza, why would anyone want to control a cybernetic machine aka robot with it?

    The positronic "brains" were not even computers; Asimov mysteriously had big computers in the same stories. They were more like an android, or synthetic life, but no doubt he was following in the trail of RUR, which gave us the word robot. Most actual robots, though, are cybernetic machines: paint sprayers, welders, pick and place, warehouse automation, vacuum cleaners and grass cutters.

  13. jlturriff

    So where are the loop boundaries in that pseudocode?

    1. John Brown (no body) Silver badge
      Coat

      It uses AI to define its own boundaries.

  14. brainwrong
    Devil

    Sex

    "it doesn't require much of a leap of the imagination to suppose that robots controlled by LLMs also might be vulnerable to jailbreaking."

    Does that mean I'll be able to get it to wank me off? Awesome!!

    1. parlei

      Re: Sex

      Just don't say "harder!" repeatedly or you may end up the victim of a penectomy or penetration trauma (etc), depending on your equipment and what you are having it do.

    2. John Brown (no body) Silver badge

      Re: Sex

      Is that you Howard?

  15. Henry Wertz 1 Gold badge

    3 laws of robotics

    It's not like they couldn't be jailbroken and bypassed anyway... but I haven't seen an LLM bestowed with the 1st law of robotics. Maybe they should be; it's possible the LLM would deem some responses harmful and avoid them if it was actually told to in this direct way.
