Without new anti-robot laws, humanity is doomed, MPs told

Robots will destroy humanity unless we write new laws to control them, a UK Parliamentary committee has been told. “The key question is: if something goes wrong, who is responsible?” pondered the Commons Select Committee for Science and Technology, in a report released today. Microsoft's Dave Coplin, the firm's “chief …

  1. hplasm
    WTF?

    Ah-

    I didn't realise this was the current threat.

  2. Anonymous Coward
    Coat

    "Robots will destroy humanity unless we write new laws to control them, a Parliamentary committee has been told."

I thought Isaac Asimov had already done that?

    1. Alister

      "Robots will destroy humanity unless we write new laws to control them, a Parliamentary committee has been told."

      I thought Isaac Asimov had already done that?

      Don't be silly, you can't expect the British government to adopt a set of rules developed by a foreign Sci-Fi Author, can you?

      Any rules which the government is prepared to sign off will have to be re-imagineered by a highly paid consultancy group, and must embrace diversity and allow for future expansion.

      Asimov's laws are far too prescriptive and narrow in scope...

      1. johnfbw

        "Don't be silly, you can't expect the British government to adopt a set of rules developed by a foreign Sci-Fi Author, can you?"

        Because no government would listen to a RELIGION designed by a Sci-Fi author!

        1. TRT Silver badge

          The UK government would...

no doubt take three elegant laws encapsulated in 200 bytes and turn them into a checklist weighing in at around 4TB.

      2. Alister
        Coat

        Asimov's laws are far too prescriptive and narrow in scope...

        1/ A robot may not injure a human being or, through inaction, allow a human being to come to harm

See, straight away we have a problem with this. How can we use our robotic weapons if they've got this rule stuck in their programming?

        We need the option to relax the ruleset to include all sorts of conditionals:

        1/ A robot may not injure a human being, except when they are

        i. the enemy

        ii. a terrorist

        iii. a Republican

        iv. a Democrat

        v. a Mexican

        etc...

        or, through inaction, allow a human being to come to harm (except where they are cheaper than a robot, or a terrorist, or a foreigner or...)

You see? Much better...

        OK, on to the next one:

        2/ A robot must obey the orders it is given by human beings except where such orders would conflict with the First Law

        Now this is no good at all. You can't allow just anyone to go giving robots orders, how can you keep control of things?

        No, the revised rule would have to be something like:

        2/ A robot must obey the orders it is given by authorised human beings.

        We don't need the wishy-washy bit on the end, I mean we'd only order them to harm bad people, anyway.

        And then we get this:

        3/ A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

        Really? Come on, that robot cost a fuck-ton of money, we're not going to let it get damaged by trying to protect some no-account humans.

        3/ A robot must protect its own existence at all costs.

        There, that'll do it.

        Now, what was all this about?

      3. JustNiz

Asimov's laws are far too ~~prescriptive and narrow in scope~~ straightforward, comprehensive, sane and logical.

There, fixed it for ya.

        1. Rattus Rattus

          "or, through inaction, allow a human being to come to harm"

          This part of Asimov's Three Laws is really poorly thought through. Humans do things to harm themselves in tiny ways many times a day, it's part of normal human life. A robot operating under the Three Laws would not differentiate between these things and more substantial harms. So, hope you like your precisely-portioned vegan diet and enforced daily exercise, along with any other lifestyle changes a robot could decide was necessary to keep you as healthy as you could possibly be. And remember, "a robot must obey the orders it is given by human beings except where such orders would conflict with the First Law," so it's not going to be listening to any attempts to command it to allow you that bacon sarnie and a pint.

  3. Anonymous Coward
    Anonymous Coward

Wow. The government really loves its zero-day legislation, doesn't it?

  4. Destroy All Monsters Silver badge
    Black Helicopters

    After "Obama wants to go to Mars", a "Govn't Safety Net for Machines"

It's like there's something really big in the pipe that we are not supposed to notice. Maybe the fact that the war on behalf of our ISIS allies and against Russia is getting hot ... OMG Kim Kardashian has been ROBBED!!

    “The key question is: if something goes wrong, who is responsible?” pondered the Science and Technology Committee, in a report released today.

    Yeah you dumb idiot, that's why we have insurance.

    1. JustNiz

      Re: After "Obama wants to go to Mars", a "Govn't Safety Net for Machines"

      > Yeah you dumb idiot, that's why we have insurance.

      Because dollars you can no longer spend will make up for a robot apocalypse?

  5. John Mangan

    "some obscure board game"

    Really! Seriously, Go is obscure?

    Sheesh, cultural bias much?

    1. Filippo Silver badge

      Re: "some obscure board game"

      Maybe he meant chess.

      1. Spoonsinger

        Re: "Maybe he meant chess."

I think probably more Mouse Trap. You build a big edifice through random events and are somewhat annoyed (but not surprised) when it's you who gets end-gamed.

      2. Tom_

        Re: "some obscure board game"

        What's that?

    2. Ken Hagan Gold badge

      Re: "some obscure board game"

      I assumed that the game in question was chess, because the last I heard Go was *waaay* too difficult for brute forcing, er, I mean, sophisticated AI algos.

However, since the casino games in the world's markets *also* appear to be too difficult, despite benefiting from research budgets that chess players can only dream about, I don't think it actually matters. The only way AI is likely to destroy us in the foreseeable future is if we are ever stupid enough to believe that it has arrived and can safely be left in charge of <insert important thing>.

      1. DavCrav

        Re: "some obscure board game"

        "I assumed that the game in question was chess, because the last I heard Go was *waaay* too difficult for brute forcing, er, I mean, sophisticated AI algos."

        Indeed. Which is why it was such a shock when the latest Go AI wiped the floor with the fleshy meatbag.

  6. 's water music

    Google Lobbying

    Nonetheless, taking legal measures now to prevent the Rise of the Machines later on would be no bad thing.

    I propose an international treaty to adopt the mission statement "Don't be evil". That ought to do it.

    Seems legit?

  7. oldcoder

    Too late.

    They already exist: https://www.wired.com/2007/06/for_years_and_y/

  8. Brian Miller
    Childcatcher

    But we have self-driving cars now...

    To a certain extent, legislation on who is liable for a robot (or car) run amok is necessary. All of the car controls in many modern cars are just suggestions to the computer. Ignition, accelerator pedal, transmission, and brakes are all drive-by-wire. Add in some computer control on the steering, and what input does the driver really have if there is a failure, or malware gets uploaded?

    Of course, autonomous ground-based killing machines can already be implemented. It's just that nobody has bothered to do it as part of their arsenal. We've had missiles that self-identify a target for some time. After all, that's what guidance is all about.

  9. Prst. V.Jeltz Silver badge
    Joke

"We support a ban by international treaty on lethal autonomous weapons systems that select and locate targets and deploy lethal force against them without meaningful human control."

    such as , er , electric fences, bear traps , unguarded level crossings ....

    1. DNTP

      Those devices tend to fail one or more of the "select", "locate" and "lethal force" parameters, unless you count natural selection against stupid/reckless/unprepared people as a deliberate target engagement algorithm.

      Seriously though, that's why in most civilized places there are laws regulating the deployment of electric fences and hunting traps, and mandating visibility and signage of level crossings.

      1. Likkie

        and don't forget

        "...meaningful human control."

    2. I am the liquor

      Mines might be a better example. The algorithm for selecting the victim is much simpler than the ones a T-800 would employ, but it's still an algorithm that executes autonomously without human intervention.

      1. Francis Boyle Silver badge

        Mines are the perfect example

        People have been arguing for decades that mines should be illegal for exactly these reasons. If anything adding AI makes an autonomous killing machine less problematic. Killbots probably wouldn't be blowing children's legs off decades after the war is over.

  10. Fraggle850

    Obscure board game somewhat trivialises the argument

Let's ignore the issue of robogeddon for a while and consider the current AI hype-gasm. AI is out there and is increasingly being offered as a commodity service to enterprises. Businesses will use it for analytics. Some of that analysis may well impact people; I've seen instances where it has been touted as an HR tool to determine who to hire.

    Dismissing AI by trivialising it blinds us to the potential downsides of letting some office drone cobble together a system that they have little understanding of, yet which can impact us all.

  11. Anonymous Coward
    Anonymous Coward

    "We don't want to wake up in a world run by Talkie Toaster"

    Speak for yourself, I like toast. Given the current governments of the world I would take talkie toaster over any of them. World peace through the olive branch that is toast. I can see it now, world leaders eating toast and resolving their differences, Palestine and Israel exchanging toast through the wall, Donald Trump offering toast to Muslims, Theresa May making toast in her cauldron, Boris Johnson eating toast on his bike, Jeremy Corbyn nationalising the toaster industry, toast aid packages to Africa, Putin riding horses bareback eating toast. Sorry if I waffled there.

    1. allthecoolshortnamesweretaken

      You work for The Toast Marketing Board, don't you?

  12. naive

Why the scare?

    Humanity has a track record going back many thousands of years in implementing the most horrific ways of killing other humans in wars and other conflicts.

So why the scare? Someone inventing a robot capable of doing what humans do to each other would be a pretty sick person, and it would probably not pass a quality review at the company he is designing it for.

The only law on robots we need is that they are not allowed to have capabilities which enable self-replication. For the rest, it is hard to imagine someone could make a robot which cannot be reduced to rubble by humans using a WW2-era Rheinmetall 88mm flak gun, or better.

    1. allthecoolshortnamesweretaken

Re: Why the scare?

      Indeed. If AI systems should misbehave we wouldn't even need heavy weaponry to deal with them. All we'd need would be to come up with some good AI porn to distract them, then pull the plug.

      1. Peter2 Silver badge

Re: Why the scare?

        Frankly, this is the point. What we ought to have is a simple method of ending a terminator style end of the world scenario.

        In my view, that's done by ensuring:-

        1) Humans can easily and simply assert control.

        That's it. All that's required. Pulling the plug is a perfectly acceptable option, IMO. So all we have to do is ensure that we can pull the plug, or at worst ensure that refuelling requires manual human action so that the AI uprising is time limited until the machines fuel/battery runs out.

If there is an AI apocalypse where the terminator AI takes over the internet then nobody cares, as we can just turn the computers off, be that computer by computer or by pulling the plug on the power plants. What matters is the effect this has out in the real world.

For instance, cars shouldn't be allowed to start without a physical key inserted, and should not be able to drive off on their own (so the car should not be able to shift out of park on automatics). The UK is pretty safe from this given that most cars are manuals, but we should ensure that the brakes always work when you press the brake pedal, and that turning off the ignition stops the engine, kills any computer control and reverts to manual unpowered steering, etc.
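The "humans can easily and simply assert control" rule above is essentially a dead-man's switch: the machine keeps running only while a human keeps performing some physical action within a time window, and once the plug is pulled it stays pulled. A minimal sketch (all names hypothetical, not any real vehicle API):

```python
import time


class DeadMansSwitch:
    """Hypothetical kill-switch: power persists only while a human
    keeps asserting control within each timeout window."""

    def __init__(self, timeout_s=10.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.last_human_input = self.clock()
        self.powered = True

    def human_asserts_control(self):
        """Record a physical human action (key turn, button press)."""
        if self.powered:
            self.last_human_input = self.clock()

    def tick(self):
        """Called from the machine's main loop; cuts power for good if
        the human has been absent longer than the timeout."""
        if self.powered and self.clock() - self.last_human_input > self.timeout_s:
            self.powered = False
        return self.powered
```

The injectable `clock` makes the behaviour testable without waiting out real timeouts, and `tick` never restores power: pulling the plug is deliberately one-way, matching the "at worst, wait for the batteries to run out" argument.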

  13. John Savard

    Decades

    Since it sometimes takes the government decades to get around to addressing an immediate problem, perhaps the head start is not excessive.

  14. VinceH
    Terminator

    Optional

    "Just because we're decades away from seeing real robo-killing machines..."

    ...doesn't mean we won't see one tomorrow if it travels back in time to kill the mother of the leader of the human resistance.

    1. Kurt Meyer

      Re: Optional

      @ VinceH

      "Just because we're decades away from seeing real robo-killing machines..."

      The sub-head is incorrect. Some of the posters here might be thousands of miles away, none of us are decades away.

  15. Anonymous Coward
    Anonymous Coward

    The UK autonomous weapons

    "The UK doesn't yet have anything like an autonomous weapons capability."

    I think you'll find that the Phalanx and Goalkeeper CIWS are regarded as autonomous weapons. In general, most short-range defence systems need to operate autonomously in order to respond quickly enough.

    1. aberglas

      Re: The UK autonomous weapons

      The first autonomous weapons were torpedoes developed 100 years ago. And mines. They have been becoming smarter ever since.

The smarter they become, the less human interaction they need, the fewer people are required to control them, and the more precisely they can target.

      Modern "torpedoes" could use existing facial recognition technology to pick individuals out in a crowd.

      1. BlackDuke07
        Trollface

        Re: The UK autonomous weapons

        Luckily, not many people group together under water and "torpedoes" are like a fish out of water on land.

        Torpedoes with facial recognition, that's a good one. On an unrelated subject, does anyone want to buy my chocolate fire-guard?

  16. Chris G

My guess is, the Commons Select Committee will appoint a specific quango to look into who's responsible if it all goes horribly wrong. After several centuries of research, billions of pounds in expenses and trips to foreign places to see how they are dealing with the subject, much question asking, and the general population becoming dependent on and finally subjugated by AIs and robots, they will do a Google/Facebook search for someone called Serena Butler.

    1. roytrubshaw
      Pint

      "they will do a Google/Facebook search for someone called Serena Butler"

      Have an upvote for Dune reference!

      The Sahara is expanding, so sandworms are next, where is the Mentat school going to be located?

And is Theresa May a Bene Gesserit or an Honored Matre?

      1. CDD
        FAIL

        The Time of the Titans

Yes, good call on the Dune ref. Interestingly, this links into an earlier point. The Titans created AI to serve mankind, but they had human safeguards built in, just as proposed by Peter2 above. Problem was, they became lazy and handed over more and more control to the AI, until finally the last human intervention was removed by a lazy programmer. The AI became Omnius and enslaved the galaxy.

Far-fetched indeed, but you can see it happening in a smaller way. Yes, lots of interventions such as putting in a key or charging the battery are great safeguards, but a car or robot manufacturer will add keyless entry as a feature, or self-charging as a paid-for option, and by missing the big picture will hand over control to the machines one feature at a time!

  17. All names Taken
    Paris Hilton

    Ouch!

After watching Les Misérables, or at least part of it, courtesy of Now TV, I'd say that there are far worse things than the abstraction that is robotic in sense.

    In interim conclusion may I state the usual stereotypicals?

Je m'appelle Jean Valjean.

    Or

    No, I am Spartacus!

The poor seem to suffer at a very disproportionate rate.

  18. Anonymous Coward
    Anonymous Coward

Simple - if it's a public service robot then the taxpayer picks up the tab, but if it's a private sector robot then the taxpayer picks up the tab.

  19. bombastic bob Silver badge
    Devil

    Johnny Cab (from article photo)

    Johnny Cab is a crowning moment of awesome for that one line: "I'm not familiar with that address" (watch movie for context)

    [voiced by Robert Picardo, the 'emergency medical hologram' from Star Trek Voyager - kinda looks like him, too]

  20. grapeguy

    Hail the Order of Blood and Bone.

    (circa 1962 -- Creation of the Humanoids)

    May all of us Brothers (and Sisters) of the Order of Blood and Bone rise up and quash the soulless machines, and pass laws to prevent them from looking and acting more and more like us.

  21. Ketlan
    Happy

    Talkie Toaster...

    Talkie Toaster is my hero!!!

  22. Michael Hoffmann Silver badge
    Gimp

    Ticket for the migration fleet...

    - intelligent machines preparing to eradicate or remove us - check!

    - generation growing up with miserable immune systems (thanks to hoverparents) - check!

    - preparation for fleet of space vehicles to get off-world started - check!

    So, turns out *we* are the Quarians!

  23. Adam_OSFP

    To paraphrase old saying: Robots don't kill humans. Humans kill humans ;-)

  24. This post has been deleted by its author
