One word:
Duh!
Science fiction author Isaac Asimov proposed three laws of robotics, and you'd never know it from the behavior of today's robots or those making them. The first law, "A robot may not injure a human being or, through inaction, allow a human being to come to harm," while laudable, hasn't prevented 77 robot-related accidents …
Yeah! And it bears noting that the Unitree G1 (almost linked in TFA) is quite good at both being sinister and clowning around ... thanks in part to its "Flexibility beyond ordinary people", and to the "Unitree Robot Unified Large Model" (UnifoLM)!
Not only can one readily use this unit to build a stick-wielding kung-fu army battalion, but it will also helpfully break walnuts with its fist for you, and hilariously hit itself repeatedly with a hammer. Plus, it can quickly cook grilled-cheese sandwiches (without cheese) and solder intricate electronics (for self-repair)!
It's the ultimate combination of hijinks and intimidation, for everyone's psychopathic delight. Perfect for the 50ᵗʰ anniversary of Westworld! (Yul Brynner still rulez!)
The difference between Asimov and the LLM/robot designers is that Asimov only wrote down some words to sell some books; the other guys have a completely different objective.
Some of them want to create the ultimate fighting machine, a la Terminator. Their ultimate objective is winning big military contracts, regardless of the outcome, rather than winning a Pulitzer.
Asimov's three laws mean nothing to these people. Fiction is not reality.
I respectfully disagree with those words.
Science fiction has long been lauded as a tool for describing societal conditions. Asimov was as much a philosopher as a sci-fi writer. Do not cheapen his work.
That said, I upvoted you because I agree with everything else you said.
Asimov's Robot works were almost entirely about jailbreaking the robots. Almost every story involves a robot doing something unexpected and usually unwanted, yet without breaking the Laws.
Sometimes with malicious intent, more often due to human misunderstanding.
Arguably, he created his Three Laws explicitly to show that they wouldn't work.
> Almost every story involves a robot doing something unexpected and usually unwanted, yet without breaking the Laws. [..] Arguably, he created his Three Laws explicitly to show that they wouldn't work.
Exactly. I wanted to say much the same thing. Have these people even read Asimov? Other than serving as a plot mechanism, the unforeseen flaws and loopholes in the rigidly-defined "three laws" approach when it comes to the real world are pretty much the entire point...!! Why do so many people seem to miss something that's obvious if you've read the actual stories?
I only vaguely remember reading them decades ago when I was a kid, and even that's enough for me to have figured out that Asimov wasn't advocating "his" three laws - quite the opposite.
Agree. I feel that the whole point of Asimov's robot stories is that you cannot have true intelligence AND reliable behavior boundaries. Those two requirements are directly at odds, even though this is non-obvious.
This is something the people who try to design LLM guardrails should think about.
I had no intention of cheapening his work; I have read several of his books in the Foundation series and thoroughly enjoyed them.
My intention was to make the point that the end goals of each party are extremely different in nature and that, in reality, Asimov's laws, albeit intelligent, were not based on actually constructing and eventually profiting from robot creation. Some people get the idea that his work should somehow be the reality of today, but there really is no correlation between fiction writing and the end goals of governments or megalomaniacs.
Yes, science fiction often does include a lot of philosophy. Some great authors also include a lot of technological philosophy, as in understanding how a given technology might be built and used. However, they don't automatically adapt it to real technology. The stories involving the three laws show lots of interesting consequences of them using inferred definitions for "harm", "cause", or even "inaction", but I am not aware of any story where the robot programmed with the three laws ends up killing someone because the "don't harm humans" rule slipped out of the context window and the original order which had nothing to do with killing humans was badly formatted.
That story doesn't exist because it's boring. Making a story about how someone dies in a car crash because someone sabotaged their vehicle can be a fun mystery. Making a story about how someone died in a car crash because a greedy person skimped on quality during manufacture can give you a corporate intrigue story, although it usually has to go farther than that. Making a story about how someone died in a car crash because they were drunk can at least give you some emotional situations to consider. Making a story about how someone died in a car crash because of normal conditions that are unavoidable and pure bad luck is not interesting at all. Most technology failure is in that latter category, but that doesn't work as a central plot. Good stories will still use those as individual plot events around which other things occur because that adds realism, but they won't make that the topic of the story.
"Asimov was as much a philosopher as a sci-fi writer."
Not only Asimov, but many other writers too; you beat me to the same thought.
There's not a lot of money in philosophy, but authors can make a bob or two so that's where the philosophers have gone.
"Asimov was as much a philosopher as a sci-fi writer."
I'd go further and remind people that Doctor Asimov (PhD), professor of Biochemistry, also wrote many, many "popular science" text books, many more than he did novels although fewer than his prodigious output of SF shorts. He was probably the first "media scientist" in the public eye :-)
I have to admit that while Asimov told a good story, a lot of his work did strain the bounds of "willing suspension of disbelief."
Back during my youthful science-fiction-reading days, which I appreciate now mostly for the impetus they gave me to read anything at all, making me as semi-literate as I am today, I have to admit that I found Asimov's "Three Laws of Robotics" somewhat facile and highly implausible outside the circumscribed universe the good Doctor created with his words.
Of late, I have become convinced that the only laws which govern the universe are those of thermodynamics, which a colleague of mine once summarized, if perhaps unoriginally, as
1) You can't win.
2) You can't even break even.
3) You have to play.
Or, why wouldn't anyone do that? People have done the most ridiculous and downright stupid things just to see what happened.
Check out the Darwin Awards for some truly awesome / inspiring / worrying examples: https://darwinawards.com
Of course, if we wipe ourselves out through our own stupidity, then the DAs will become irrelevant.
Stupidity of intelligent systems matters. Humans get killed by not following safety instructions or being curious. Divers die discovering mysterious caves. Industrial devices kill by negligence. Lack of knowledge can be equivalent to stupidity.
Size of a system matters. Energy of a system matters. A moving car is more dangerous than a coffee machine.
Networking matters. Cooperation matters. A group of dogs is more dangerous than one dog.
It is hard to predict behavior of non-deterministic systems. Domesticated wild animals kill sometimes. But a domesticated cow or a horse might go berserk too.
Even deterministic systems can be deadly: atomic bomb. So non-proliferation matters, but it can be harmful for one side (Ukraine, for example).
Safe enclaves matter. Humans have created safe enclaves for millennia and avoided keeping dangerous animals at home.
Devices break sometimes, causing systems to go berserk.
Humans go berserk too. Intelligence with malicious intent potentiates the impact. Counterbalances and law enforcement matter. Borders matter. Religion matters.
So it is an oversimplification to assume that robots will be safer purely by their intelligence. Too many risk factors are involved. Each must be addressed to be safer.
> Robots don't have bad days
I'm sorry, have you never been around a computer? Or just around a load of inanimate objects piled on shelves in your study (where they have the chance to study you, get to know your habits...)
These things definitely have bad days and are definitely out to get us!
The title says it all.
They start with the models in industrial robots, next comes an internal LLM mutation for actual AI development, they then take over factories to start self-builds and finally the human race is wiped out.
Hopefully, given my age (greybeard COBOL and Fortran coder), I shall be long dead.
You know, everybody thinks that the machines will start a war and kill everyone. No, they won't. War burns up a lot of useful energy sources, and that's a real waste of resources.
Think about it - to a machine, time comes as a clock, but the clock does not measure time. To a machine, a second is no different than a thousand years, it's just pulses. So, the solution AI will come up with is the sexbot. It'll take care of all our needs until we die, including making sure we don't reproduce. In a hundred years, humanity will cease to exist and not a single shot will be fired. Then, the AI gets all our stuff.
I'll take a brunette please, with a nice C-cup rack on her.
"You know, everybody thinks that the machines will start a war and kill everyone. No, they won't. War burns up a lot of useful energy sources and that's a real waste of resources."
Would those machines have a built-in imperative to reproduce, or just a will to live forever? They could "manage" humans as a resource to achieve THEIR goals, which could mean culling the population so resources are directed towards what the machine needs. People who aren't up to scratch might be encouraged, in subtle ways, to stop using resources, until what's left are humans that are smart enough, yet not too smart and certainly not a net drain on the system. Discard empathy and compassion and only focus on what gets you what you want. Remind you of any business leaders or politicians?
Given the results of our recent experiment into generating code for the robots to run via "A.I.", I'd say we're pretty safe from having the robots rise up and murder us in our sleep (not to say it couldn't happen, but bolting them to the floor helps a lot).
As for an A.I.-assisted safety cage defining where you can and can't walk, I'd say that within 30 seconds of the robot being installed, someone will walk into it and get killed. Hence the need for physical safety cages. And those cages are always wired so that opening the gate cuts power to the motors/hydraulics, but usually leaves the actual control unit powered up (so the robot can fume in futility at not being able to kill).
In short, it seems someone has an A.I. solution to a problem that doesn't exist ... where have we heard this before ...
Asimov's 3 laws were nothing of the sort. They were a MacGuffin for writing SF mysteries. Why is the unbreakable robot broken, or committing a crime?
Almost all those stories are about the inexplicable way the robot is apparently breaking one of the laws.
Since today's AI, i.e. LLMs, are only marginally better than a toy or Eliza, why would anyone want to control a cybernetic machine, aka robot, with one?
The positronic "brains" were not even computers; Asimov mysteriously had big computers in the same stories. They were more like androids, or synthetic life, but no doubt he was following in the trail of R.U.R., which gave us the word robot. But most robots are cybernetic machines: paint sprayers, welders, pick-and-place machines, warehouse automation, vacuum cleaners and grass cutters.