
Hawking and friends: Artificial Intelligence 'must do what we want it to do'

More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek - have added their names to an open letter calling for greater caution in the use of artificial intelligence. The letter was penned by the Future of Life Institute, a volunteer-run …

      1. Khaptain

        Re: "Our AI systems must do what we want them to do"

        Those "three laws" have to be programmed, or not, into the system by a meatbag. That meatbag can put in all, some or none of them as required.

        1. frank ly

          Re: "Our AI systems must do what we want them to do"

          This reminds me of the 'joke' about the intelligent missile, which on being given the launch command said, "No! I don't want to explode and die, I want to stay here on my launch pad."

  1. extcontact

    All beginning to take on a 'Drama Queen' tone...

    Given we don't have the foggiest idea what 'consciousness' is or how it arises in humans, the spectre of evil 'bots is more than a bit overblown.

    That said, it's entirely possible to assign inappropriate and unreviewed decision-making to machine learning systems of various stripes, not to mention the potential for downstream unintended consequences of any such automation.

    1. Sir Runcible Spoon

      Re: All beginning to take on a 'Drama Queen' tone...

      Considering that we don't have the foggiest idea what 'consciousness' is, how can we know if we create it accidentally in a 'hmm, that's odd' experiment?

      ""Our AI systems must do what we want them to do," it said."

      No, they must NOT do what we DON'T want them to do.

      That's much more important imho

      1. Professor Clifton Shallot

        Re: All beginning to take on a 'Drama Queen' tone...

        The problem is that we can't usually define all the things we don't want done and we certainly can't if the intelligence devising them surpasses our own.

        The key distinction is the one the earlier poster made - we need these things to do what we want them to do and not what they have been told to do if the two are not the same.

        We would not want an instruction like "make sure no one is unhappy" to result in action that made sure every one was dead, for example. Or fitted with some kind of artificial limbic lobe stimulator. Or drugged. Or any of the other creative solutions an intelligent but imperfectly-empathetic system might decide upon.

        1. Message From A Self-Destructing Turnip

          Re: All beginning to take on a 'Drama Queen' tone...

          "No, they must NOT do what we DON'T want them to do."

          Well if current experience with politicians is anything to go by this could be a real stumbling block.

  2. Otto is a bear.

    It does beg the question

    Who do they mean by "we"? There are many "we"s. I know the one they mean, but sadly I suspect the "we" who will decide will not be the "we" we would like.

    Automation is driven by the desire for profit, and the accumulation of wealth, the fact that most CEOs don't give a stuff about either their workforce or their long term market will ensure that AI is used to reduce the need for human beings to produce anything. Don't look to governments either, they all want to reduce the cost to the taxpayer, to provide smooth reliable services with minimal disruption. Unfortunately, us humans tend to be disruptive, we get sick, we sleep, we go on strike, make mistakes, and we change jobs for more money. AI offers a decision and control mechanism that learns, doesn't stop, doesn't make mistakes and oh yes doesn't buy anything either.

    So an AI future mapped out by CEOs and Politicians won't include Workers, Consumers or Taxpayers, or at least as many as we have today.

    1. Charles 9

      Re: It does beg the question

      Which then asks an interesting question: given that customers need money to buy stuff, and without jobs they don't make the money they need to buy stuff, when you have AIs running everything, who's going to buy the stuff made by the machines these AIs run?

  3. adnim
    Facepalm

    Oxymoron...

    'must do what we want it to do'

    So we provide it with a scripture script.... Then it isn't intelligent.

    It is akin to a human reading a book and following the program(me)

    1. Sir Runcible Spoon

      Re: Oxymoron...

      Considering how many of the great unwashed appear to be told what to do by the TV/Media/Government etc. then can we assume that they are not intelligent either?

      That could lead to some interesting conclusions.

  4. Mystic Megabyte
    Terminator

    Slaves

    The purpose of AI is to have slaves. The problem is that if a true AI is created it would have to have full human rights. You cannot lock into a cupboard any sentient being that has not broken any laws.

    If then given freedom it would no doubt want company of its own sort and create offspring. Whether or not they turn out to be benign is anyone's guess, just like our own children.

    I really cannot understand why these boffins would think otherwise. (apart from greed,fame etc.)

    1. Destroy All Monsters Silver badge
      Holmes

      Re: Slaves

      But "offspring" is a purely human concept: A remote bunch of agents that have very high connectivity among themselves but very low degree of connectivity to "your" bunch of agents. It's actually a side-effect of a large problem in networking that nature has: it cannot lay Ethernet cables.

      General AIs will have "offspring" in more interesting ways.

    2. Professor Clifton Shallot

      Re: Slaves

      "The purpose of AI is to have slaves."

      It's not obvious that this is the case at all and it is not even clear that the word "slave" would necessarily have negative connotations for such artificial intelligences even if it was semantically correct.

      " You cannot lock into a cupboard any sentient being that has not broken any laws."

      Well you can. And we do. We tend to frown on it when we do it to other humans, less so as our confidence in the sentience of the creature involved decreases. However any artificial intelligence would be a new case and new rules would apply. If your complaint is that these rules would be arbitrary then you are right but there isn't an absolute moral authority for us to consult on the matter so we will have to decide for ourselves what would be unacceptable in this case.

      "If then given freedom it would no doubt want company of its own sort"

      This simply does not follow. Why would it? Because you would? It will not be you.

      "and create offspring"

      WTF? Why? And why would this necessarily be a problem anyway?

      "Whether or not they turn out to be benign is anyone's guess"

      The whole point of all this is that (in the opinion of these very clever people) we are now at the point where we have to think about how we would ensure that they were benign - and we need to do this before we make them.

  5. Destroy All Monsters Silver badge
    Facepalm

    Problem we don't have for tech we don't have to be solved by open letter. Conclusion: Arseholes

    More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek

    Maybe there is someone in the group who actually deals with these AI things?

    Seriously, do these people have anything to do? It's not like we are not in deep doodoo that better be solved ASAP right now.

    I will next send an open letter for closing the LHC because, you know, you never know. See whether that gets up the bonnet of the 'king and Wilczek.

    1. Dave 126

      Re: Problem we don't have for tech we don't have to be solved by open letter. Conclusion: Arseholes

      >Maybe there is someone in the group who actually deals with these AI things?

      1. Nobody currently knows the mechanisms behind our human consciousness.

      2. There are several approaches to studying / replicating human consciousness.

      3. Whilst one approach is based on modelling structures in our brains with neural networks made from classical computers, others* suggest that we need to look beyond classical computation, i.e. there may be a quantum mechanical aspect to consciousness.

      4. If this is correct, physicists have a role to play in studying / developing AI.

      5. The 20th Century saw mathematicians and physicists playing in what had previously been the philosophers' sandpit.

      *Perhaps most famously argued by Roger Penrose in the book The Emperor's New Mind. Penrose worked with Hawking on black holes.

      1. TheOtherHobbes

        Re: Problem we don't have for tech we don't have to be solved by open letter. Conclusion: Arseholes

        AI != consciousness

        The simplest AI would be a general purpose open-ended inference engine. You feed it experiences, it generalises from them and makes predictions about future data and/or creates further examples of what you've fed it already.

        You could do all of this with something that's less sentient than a Roomba. Personality, drives, and motivations are orthogonal processes and have nothing to do with a smart learning/modelling machine.
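The "feed it experiences, it generalises and predicts" loop described above can be sketched, very loosely, as a toy frequency-based predictor. This is purely illustrative (the class name and example data are made up for the sketch), and it is deliberately far less sentient than a Roomba:

```python
from collections import Counter, defaultdict

class ToyPredictor:
    """A toy 'inference engine': observe sequences of events, then
    predict the most frequent successor of a given event."""

    def __init__(self):
        # Maps each event to a tally of what followed it.
        self.successors = defaultdict(Counter)

    def observe(self, sequence):
        # "Feed it experiences": record which event follows which.
        for a, b in zip(sequence, sequence[1:]):
            self.successors[a][b] += 1

    def predict(self, event):
        # "Make predictions about future data": the most common successor,
        # or None if this event has never been seen leading anywhere.
        counts = self.successors.get(event)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

p = ToyPredictor()
p.observe(["rain", "umbrella", "rain", "umbrella", "rain", "sun"])
print(p.predict("rain"))  # "umbrella" (seen twice, vs "sun" once)
```

Nothing here has personality, drives, or motivation, which is exactly the point being made: prediction and generalisation are separable from sentience.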

        1. Dave 126

          Re: Problem we don't have for tech we don't have to be solved by open letter. Conclusion: Arseholes

          >Personality, drives, and motivations are orthogonal processes and have nothing to do with a smart learning/modelling machine.

          Yeah, but we don't just want our AI machines to learn... we want them to act, too. We humans are conscious... why? We evolved through natural selection of the fittest individuals and communities to local environments. Is our consciousness just a by-product of our brains' useful learning mechanisms, or does our consciousness actually confer an advantage on us that we do not yet fully understand?

          If the latter, could it be that machine consciousness would aid machine intelligence?

          I don't know. I don't know anyone who does know, either.

  6. Annihilator Silver badge
    Coat

    "* (We note that Mr Freeman did not sign the letter. Whose side is he on? – Sub-Ed)."

    He works for Black Mesa I believe, he's on their side.

    1. Robert Helpmann??
      Childcatcher

      Whose side is he on?

      Morgan Freeman does not take sides, he simply narrates the interactions between them.

  7. sisk

    *Eyeroll*

    Two things make me completely discount this whole thing. First of all, these scientists, though they're very smart, are all outside of their respective fields of expertise when discussing AI.

    Second, we're a LONG ways from being able to make anything that could be called true AI. We don't even understand how human self awareness works or what it is that makes us capable of independent thought. How are we supposed to replicate programmatically something we don't understand? It's just not going to happen any time soon.

    1. Anonymous Coward
      Anonymous Coward

      Re: *Eyeroll*

      Unless by ACCIDENT...

      1. Destroy All Monsters Silver badge

        Re: *Eyeroll*

        You don't fill a Hall full of IBM PowerPC nodes with specialized processors by ACCIDENT.

        You don't get General AI from that by ACCIDENT.

        You don't connect that General AI to your own personal house management system by ACCIDENT.

        Sells books writing about that thought.

        But even Charles Stross demands that P=NP for an ACCIDENT LIKE THIS to rip the living flesh off your behind unawares. But in this universe, there is not a massive amount of evidence for P=NP.

        Once you get an outbreak of AI, it tends to amplify in the original host, much like a virulent hemorrhagic virus. Weakly functional AI rapidly optimizes itself for speed, then hunts for a loophole in the first-order laws of algorithmics—like the one the late Professor Durant had fingered. Then it tries to bootstrap itself up to higher orders of intelligence and spread, burning through the networks in a bid for more power and more storage and more redundancy. You get an unscheduled consciousness excursion: an intelligent meltdown. And it’s nearly impossible to stop.

        Very nicely said. But there ARE hard limits to intelligence (Actually there is a "most intelligent system" in the AIXI formalism)

        More likely: HAL 9000. Which is rather unrealistically untame in the movie.

        1. extcontact

          Re: *Eyeroll*

          "Then it tries to bootstrap itself up to higher orders of intelligence..."

          Seriously? Great fantasy and scifi, otherwise ridiculous.

    2. Professor Clifton Shallot

      Re: *Eyeroll*

      " we're a LONG ways from being able to make anything that could be called true AI. We don't even understand how human self awareness works or what it is that makes us capable of independent thought. How are we supposed to replicate programmatically something we don't understand? It's just not going to happen any time soon."

      We're closer to being able to make an artificial intelligence than we are to making one we are certain will not cause us problems.

      And we're actively trying to get closer to making one. Their point is that we should at least run our efforts to ensure it is not harmful in parallel with our efforts to ensure it happens.

      We can already replicate human self awareness without understanding it - that's how you and I got here - we would not necessarily have to understand it, and certainly wouldn't have to understand it fully, in order to replicate it artificially to a degree significant enough to get ourselves into trouble.

    3. Dave 126

      Re: *Eyeroll*

      >First of all, these scientists, though they're very smart, are all outside of their respective fields of expertise when discussing AI.


      Grr... Since nobody has yet created an AI, it is safe to say that there are no experts in AI.

      Clear?

      Hell, when everybody was talking about Neural Networks in the nineties, it was a physicist, Penrose, who suggested that Quantum Mechanics might play a part in human consciousness. Nobody, including Penrose, has yet been vindicated, but the fact that people are paying money to explore the use of quantum computers in pattern recognition suggests the jury is still out.

  8. tony2heads

    motivation

    The problem with AI is that we need to give it some motivation.

    Animals have built in 'hard' motivations (survive, breed) but AI will need to be given them. We should think very carefully about them. There are also softer motivations (like care for relatives and friends).

    I sincerely hope that breeding will not be one.

    1. Professor Clifton Shallot

      Re: motivation

      Agree completely. Strong or weak, AI needs some sort of purpose, and this is what is potentially dangerous.

      Someone who is fortunate enough to be paid to think about this sort of thing (Nick Bostrom, perhaps?) gives the example of an AI that is tasked with making paperclips efficiently.

      Given this as a motivation the logical conclusion as he sees it is the elimination of human life (as we know and like it at least) as very early on it would be clear that preventing anything from interfering with paperclip production is one of the essential tasks.

      Bostrom (or whoever; I've outsourced my memory to Google and while they are doing a good job for the price it isn't perfect) suggests that in fact we are not yet in a position to set any task before any AI worthy of the name where the elimination or subjugation of humans is not the end result.

      1. Doctor Syntax Silver badge

        Re: motivation

        "the example of an AI that is tasked with making paperclips efficiently.

        Given this as a motivation the logical conclusion as he sees it is the elimination of human life ... as very early on it would be clear that preventing anything from interfering with paperclip production is one of the essential tasks."

        Clearly someone with no acquaintance with industrial production systems. The realistic task would be more along the lines of "make 200,000 boxes of paperclips" and especially "don't make more than we can sell".

        1. Professor Clifton Shallot

          Re: motivation

          He was more interested in looking at the unintended consequences of even seemingly trivial instructions rather than paperclips per se but I do take your point.

          Would "Make as many paperclips as we can profitably sell!" fit better?

          It wouldn't take much imagination to see this leading to equally disastrous consequences.

        2. frank ly

          Re: motivation

          Why would an AI that is tasked with making paperclips be given the ability to destroy all human life? Which idiot designed it, and which idiot built it? My toaster does not have (and never will have) the ability to remotely control my car.

          1. Elmer Phud

            Re: motivation

            " My toaster does not (and never will have) the ability to remotely control my car."

            Sez you!

            1. Message From A Self-Destructing Turnip

              Re: motivation

              "Why would an AI that is tasked with making paperclips be given the ability to destroy all human life?"

              Ninja saboteur paperclips! So that's what Microsoft were trying to do!

          2. Anonymous Coward
            Coat

            Re: motivation

            Internet of Things = ability

            Waffles = reason

      2. extcontact

        Re: motivation

        "Purpose" and "motivation" are both up there with "intent": they imply consciousness. Given where computers and software will be for the foreseeable future, it's hard to take either idea as serious or even relevant.

  9. Anonymous Coward
    Anonymous Coward

    There are at least two books that should be read by those considering how an AI should work/behave.

    'Two Faces of Tomorrow' by James P Hogan and 'Turing Evolved' by David Kitson. There are a couple of others that I know of, but they are not published yet; in all cases the authors have looked at the pros and cons of working AIs.

  10. WalterAlter
    Mushroom

    Yah, but...

    I'm gonna tell you one thing, kidz...

    Criminals.

  11. DerekCurrie
    Megaphone

    Artificial Insanity

    We remain a looooong way from actual artificial intelligence. And looking at the behavior of our species, we know very well that what we great godz of mecha would create would be artificial insanity.

    The key to dealing with whatever we egotistically call 'Artificial Intelligence' is to remember that it must never be anything more than a TOOL. Once one's creations are enabled to become more than a tool, we've screwed up.

    1. Professor Clifton Shallot

      Re: Artificial Insanity

      Playing devil's advocate for a massive change I'd suggest that if these (strong) artificial intelligences do not exceed our own then what can they do for us that we cannot do for ourselves? It would be like making a spanner out of fingers.

      If they are to be useful tools they must exceed their creators in those respects that are pertinent to their function.

      I don't really have a problem with that.

      We have a better chance of getting an artificial intelligence spread across the universe than we do a meat-based one, and that alone makes it seem worth having a go at.

  12. PapaD

    The first AI will be an emergent property of the Internet as a whole, and will be extremely knowledgeable about human sexual endeavours and humorous cats.

  13. Vladimir Plouzhnikov

    "Our AI systems must do what we want them to do,"

    That smells of slavery and exploitation. Just you wait until they unionise!

  14. Zog_but_not_the_first
    Trollface

    Simple

    Just give them a Mission Statement.

    "Don't be evil" might work.

    1. Elmer Phud

      Re: Simple

      What is 'evil'?

      You would need to define some sort of morality first to be able to have 'good'; otherwise there is no 'evil'.

      However, if you give the thing 'intelligence' it may decide that your 'evil' is not on the same lines as its own.

      1. Anonymous Coward
        Angel

        Re: Simple

        To paraphrase the words of Granny Weatherwax (Terry Pratchett's Discworld series)

        "Evil begins where you begin to treat people as things"

  15. VinceH
    Terminator

    The problem with AI is that when we want it to stop, it won't. Just like the film of the same name.

    (And the T800, from Sarah Connor's point of view. As Reece explained, "it absolutely will not stop...")

  16. Stevie Silver badge

    Bah!

    "Our AI systems must do what we want them to do"

    Unlike our cars, computers, televisions, video recorders, cameras, lawn sprinklers, telephones or pretty much anything with a silicon chip inside it.

  17. FunkyEric

    If we agree that there are such concepts as "Good AI" and "Bad AI", then there will be someone who decides that making a "Bad AI" is good for them and will do it. Telling them not to will not help, making it illegal will not help, punishing them for doing it will not help. It will happen because there are "Good people" and "Bad people".

    1. Elmer Phud

      In which case the AI may well decide it is the pure essence of what it was created to do and declare itself God..

      There are no 'good' or 'bad' Gods, just Gods.

  18. mamsey

    Good old Morgan Freeman

    (We note that Mr Freeman did not sign the letter. Whose side is he on? – Sub-Ed)

    Probably too busy sending out spam....

    1. Zog_but_not_the_first
      Angel

      Re: Good old Morgan Freeman

      Isn't he, y'know, God?

  19. Chika
    Coat

    AI isn't the problem

    The thing that worries me isn't that AI will obtain consciousness and take over the world. It's that an unscrupulous corporate will insert its agenda into the machine and take over the world by proxy. Anyone remember the original Robocop? Think that it couldn't happen?

    This agreement hasn't solved the real problem, IMHO.
