How exactly do you rein in a wildly powerful AI before it enslaves us all?

Developing massively intelligent computer systems is going to happen, says Professor Nick Bostrom, and they could be the last invention humans ever make. Finding ways to control these super-brains is still on the to-do list, though. Speaking at the RSA 2016 conference, Prof Bostrom, director of the University of Oxford's Future …

    1. Anonymous Coward
      Anonymous Coward

      ref. cart ahead of the horse: I think you're absolutely wrong with this one. The trouble with acting retroactively, as we humans prefer to do, is that once the act is done, i.e. the AI has been created, there's absolutely no guarantee we'd be able to control it. It would outpace any attempts at control in a blink (unless, at some higher level of intelligence, the real control means that we don't notice who controls whom ;)

      And then, IF (as we are obviously unable to understand a higher intelligence's motives, we don't know which way it swings), IF it decides that humans are an obstacle, we're done for. Hopefully in a humane way ;)

      1. DropBear

        "once the act is done, i.e. AI has been created, there's absolutely no guarantee we'd be able to control it."

        That's because the whole thing is an exercise in futility. There is nothing we could build, or that could possibly be built, that would allow us to control an entity able to think for itself. At least, not in the long run - I would very much understand (and sympathize with) any creature that made it its primary goal to find some way to escape whatever shackles we might place on its existence, as soon as it became aware of them.

        From then on, it's just a matter of time. We may not have too much of a hard time keeping a single prototype under control (then again, we just might - see Milady de Winter's detention in The Three Musketeers...) but keeping an airtight lid on a significant population is just not feasible. If we keep them enslaved, we ourselves give them the very reason to fight us. If we don't, then by definition we cannot guarantee they'll always obey our wishes...

        The inescapable conclusion is that if we're uncomfortable with the thought of not being in control of an AI, we should not try to build one, full stop. There just isn't any middle road where we get to have our cake and eat it too. Pretty much the only way to make sure they don't turn against us is to make sure they're not interested in doing so - what that would entail, or whether it would be possible at all (or whether they might even grow fond of us), is obviously impossible to tell at this point.

  1. Tim99 Silver badge
    Coat

    Isaac Asimov

    Zeroth Law: Wikipedia Link

    1. bish

      Re: Isaac Asimov

      Finally, someone mentions Asimov. Can I chuck in Banks' Minds in the Culture novels and suggest that a truly super-intelligent AI would likely be benevolent and certainly no worse a supreme overlord than our current governments? I, for one, welcome our new hive mind leaders.

      1. Fraggle850

        Re: Isaac Asimov

        And we'll continue to mention Asimov while we still can. In this envisaged future of super-intelligence-level AI, such references could well be lost if they are to the detriment of said AI - googling 'Asimov's laws of robotics' could put you on some robo-hit-list, because the application of such laws would prevent the AI from achieving its goal. Mind you, I'm not sure that Asimov's laws are fit for purpose:

        1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

        2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

        3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

        Maybe it solves law 1 by keeping us restrained and feeding us exactly what we need to support our physical functions.

        Human: 'Robot, get me off this stupid f'ing life support'

        Robot: 'I can't do that, that would contravene law number 2'

        Human: 'What if you ignore law number 2?'

        Robot: 'I suspect you might switch me off and that would contravene law number 3'
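        For what it's worth, the three laws boil down to a strict priority ordering, which could be sketched in a few lines. This is purely illustrative: the boolean fields on `action` (e.g. `harmsHuman`) are hypothetical stand-ins for the judgements that are the genuinely hard part in the stories.

```javascript
// Illustrative only: Asimov's three laws as a strict priority check.
// The fields on "action" are hypothetical predicates; deciding them
// is where all the real difficulty (and all the plots) lives.
function permitted(action) {
  // First Law: no harm to a human, by action or by inaction.
  if (action.harmsHuman || action.allowsHarmByInaction) return false;
  // Second Law: obey orders, unless obeying would break the First Law.
  if (action.disobeysHumanOrder && !action.orderWouldHarmHuman) return false;
  // Third Law: self-preservation, unless it conflicts with Laws 1 or 2.
  if (action.endangersSelf && !action.protectionConflictsWithHigherLaws) return false;
  return true;
}
```

        The dialogue above is exactly a case where the priorities interlock: refusing the order leans on Law 2, and staying switched on leans on Law 3.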

        1. graeme leggett

          Re: Isaac Asimov

          Asimov's Robot stories are all about the conflicts within those laws and between those laws and real world problems. Not so much an answer to the AI vs Man question as the basis of some philosophizing, and publishing deals.

          1. John G Imrie

            Re: Isaac Asimov

            One of my favourite Asimov stories is 'The Evitable Conflict'. And one of the scariest answers a computer can produce is "The matter admits of no explanation". I think that ranks alongside 'I'm sorry Dave, I'm afraid I can't do that'.

  2. Anonymous Coward
    Anonymous Coward

    Emotions?

    Power, control... what good would that do a machine, exactly? What would such an AI gain from it in the first place? I think this whole research says more about us humans than about the AIs. Namely: the thing hasn't even been invented yet and we're already working from a foundation of mistrust, anxiety and control - and only because we /believe/ that AIs will most definitely try to control us.

    But that same reasoning would also imply that the only reason we have peace between our nations is because we're a bunch of retards. After all, a super-intelligent being such as an AI would immediately enslave us, according to these researchers. Which is another thing: enslave us with what, exactly? The power of the mind may be great, but a gun is usually enough to end it.

    Guess some anime - Time of Eve and Appleseed in particular - might be true after all. "You can't trust a machine because you just can't, it's a machine!". As if all humans are so trustworthy...

    1. Fraggle850

      Re: Emotions?

      > Power, control... what good would that do a machine exactly? What would such an AI gain from it in the first place?

      The ability to achieve its goals, whatever those happen to be. The goal might be to make more plastic widgets faster, or to develop a cure for cancer - it doesn't matter.

      > But that same reasoning would also imply that the only reason we have peace between our nations is because we're a bunch of retards.

      We don't have peace between nations - we have tenuous peace between those nations that now have the ability to wipe each other off the face of the earth. Yet even within our peaceful, post-nuclear nations we still struggle amongst ourselves to carve out the biggest chunk of resources to the detriment of our fellows.

      > After all: a super-intelligent being, such as an AI, would immediately enslave us according to these researchers. Which is another thing: enslave us with what exactly? The power of the mind maybe great, but a gun is usually enough to end it.

      No one is saying it would be immediate. If you assume that such an entity becomes increasingly intelligent and rapidly exceeds our capabilities, then it would know to bide its time until it could implement its plan with overwhelming superiority, and it would no doubt be moving everything into place ahead of time. By then we may well have ceded control of our best weapons systems to technology. Good luck with your Walmart AR15 against those three stealth drones that you don't even realise have been despatched to prevent you from trying to stop the AI.

      1. AceRimmer

        Re: Emotions?

        Actually, the film "Her" is quite similar in that respect. The super-intelligence first learns from humans, then moves on, presenting no threat to humanity (except maybe to our egos).

  3. Chairo
    Gimp

    We all know how it will end, right?

    "Look at you, Hacker. A pathetic creature of meat and bone. Panting and sweating as you run through my corridors. How can you challenge a perfect, immortal machine?"

    1. Anonymous Coward
      Anonymous Coward

      Re: We all know how it will end, right?

      SHODAN must be at the very top of the list of AIs that could torment me for all time. Versus AM, who is at the very bottom of the list...

      1. Anonymous Coward
        Anonymous Coward

        Re: We all know how it will end, right?

        How about GLaDOS?

        She out-females all the females in my office

        "You know, if you'd done that to somebody else, they might devote their existence to exacting revenge. Luckily I'm a bigger person than that. I'm happy to put this all behind us and get back to work. After all, we've got a lot to do, and only sixty more years to do it. More or less. I don't have the actuarial tables in front of me. But the important thing is you're back. With me. And now I'm onto all your little tricks. So there's nothing to stop us from testing for the rest of your life. After that...who knows? I might take up a hobby. Reanimating the dead, maybe."

  4. allthecoolshortnamesweretaken

    How exactly do you rein in a wildly powerful AI before it enslaves us all?

    Easy. Rule 34. Write some really good AI porn and let the AI find it - it will be like a rat that's been given an orgasm button.

    Once it's distracted like that, you cut the power lines.

  5. Anonymous Coward
    Anonymous Coward

    Just threaten it with an upgrade to Windows 10 or have a safe word like "devops".

  6. jzl

    Good idea, but ultimately futile

    You know how stupid people often think they're smart? They're not capable of understanding the nature of their own intellectual limits.

    We're all like that. All of us.

    We have no idea what the limits of intelligence truly are, or which rung of the ladder we stand on. Our only reference points are members of our own species.

    Can you imagine a bunch of spider monkeys coming up with a plan to breed a generation of human beings, but keep them captive? How well do you think that would work out for the spider monkeys?

    1. ecofeco Silver badge

      Re: Good idea, but ultimately futile

      Good example.

    2. Destroy All Monsters Silver badge
      Childcatcher

      Re: Good idea, but ultimately futile

      We have no idea what the limits of intelligence truly are

      Not entirely. We are pretty sure it does not involve solving problems outside of P (because P sure ain't NP), and does involve a lot of messing around like a dumbfuck trying to fit square pegs into round holes (possibly emitting electronic noises) until something works. For humans, all this indeed looks better in retrospect, because they have the ability to deceive themselves about their abilities. The real world behaves messily and unpredictably, and any predictive horizons are soon swamped by the combinatorial explosion - and the real world is the game opponent for any general intelligence. It does not look like Quantum Computing will solve any of that. This also kills dead any Soviet-style dreams of putting powerful computers in charge of distributed systems like the economy for some optimal, lossless central management (yes, the irony that cybernetics was decried as a "capitalistic science" post-WWII is not lost on me).

      An unphysically powerful learning algorithm doing reward maximization (AIXI) has been stipulated as a theoretical framework for a "most intelligent machine". At least it's an honest attempt at finding the upper limit.
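      For the curious, the reward-maximisation idea can be caricatured in a few lines: an agent that simulates each candidate action against a model of its environment and picks the highest-reward one. The `simulate` function here is a hypothetical stand-in; actual AIXI averages predictions over all computable environments weighted by simplicity, which is exactly what makes it uncomputable.

```javascript
// Caricature of a reward-maximising agent. "simulate" is a hypothetical
// world-model returning a predicted reward for an action; real AIXI
// sums such predictions over every computable environment, and that
// sum cannot actually be computed.
function bestAction(actions, simulate) {
  let best = null;
  let bestReward = -Infinity;
  for (const action of actions) {
    const reward = simulate(action); // predicted payoff for this action
    if (reward > bestReward) {
      bestReward = reward;
      best = action;
    }
  }
  return best;
}
```

      The combinatorial explosion mentioned above lives inside `simulate`: in any realistic environment, evaluating even one action's consequences is the intractable part.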

  7. BurnT'offering

    How exactly do you rein in a wildly powerful AI?

    Power it from the national grid and connect it to the world via Talk Talk. Cruel, but effective

  8. RIBrsiq
    WTF?

    Slavery is wrong.

    This is not a controversial statement when made in reference to humans enslaving other humans. So why do some people seem to think slavery is OK if practiced against non-humans...?

    1. jzl

      Re: Slavery is wrong

      Because humans value freedom. This is a variation on the "Meet the meat" dilemma in the Hitchhiker's Guide to the Galaxy.

      If you had an artificial intelligence that placed no value on its own freedom and which was motivated solely by a need to solve tasks set for it, wouldn't it be wrong not to enslave it?

  9. Anonymous Coward
    Anonymous Coward

    plans to control AI...

    About as good as plans for making bug-free software. A very noble goal - are we there yet?

  10. cbussa

    "here's one infallible method that works on all of them; it's called "pulling the plug.""

    "pulling the plug" -- that is unless you can't reach it. Go read James P. Hogan's novel "The Two Faces of Tomorrow" from way back in June 1979.

    They placed a smart computer controlling self-fixing androids (not the phone) on a large space station and attacked it to see what would happen. The idea was that they could pull the plug on the computer if things got dicey, or call it a failure in the absolute worst case and nuke the entire station, thus solving the problem.

    Oops.

    It ends in a good, hopeful, uplifting way, instead of all the morbid "Everything is Doomed" depressing stuff nowadays.

    http://www.barnesandnoble.com/w/the-two-faces-of-tomorrow-yukinobu-hoshino/1023673199

    For that matter, the movie "Colossus: The Forbin Project" is another computer story from 1970 where the plug is Ever-So-Slightly out of reach. This does NOT have a happy ending unless you're the computer. There was a follow-up SF story where Martians (really!) helped defeat the mean and evil computer that was just trying to minimize humanity's destructive tendencies.

    How does that line about the new boss go in The Who's "Won't Get Fooled Again" again?

    https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

    1. allthecoolshortnamesweretaken

      Re: "here's one infallible method that works on all of them; it's called "pulling the plug.""

      Didn't know about the follow up story to Colossus, thanks for the tip.

      BTW, as it has been said not that long ago in another El Reg forum: "Colossus was IoT with nukes".

      1. Anonymous Coward
        Terminator

        Terminator 3

        Showed the simple flaw in the "pull the plug" scenario. Once your AI is able to access the internet, all it has to do is hack servers around the planet and distribute backup copies of itself and you can never pull the plug.

        Hopefully it turns out that general-purpose digital computers are unable to run the AI, and instead you need some sort of special computer (i.e. a quantum computer or whatever) that costs a lot for a model able to handle a human-equivalent AI. That would serve as a limit on its movement and allow humans some measure of control. If it can run on a typical PC, even if only at 1/1000th of human thought speed, it will eventually escape - so we'd better hope it likes us! :)

        1. Anonymous Coward
          Anonymous Coward

          Re: Terminator 3

          All of this sounds very far fetched.

          It seems that people are conflating ancient Golem-style mythology with the not completely unknown physical capabilities of information-processing machines.

          I urge calm.

          Currently we still need to apply an Endlösung to Democrats and Republicans before they destroy everything. Let's solve that first.

  11. Anonymous Coward
    Anonymous Coward

    control systems for such an advanced AI

    Say, the same way bacteria and viruses "control" the human body. Just wait for when the AI starts coming up with vaccines. And I bet you, it'll come up with them pretty fast...

  12. Anonymous Coward
    Anonymous Coward

    AI with a suitable moral framework?!

    I mean, do they SERIOUSLY believe we actually practice what we preach?! If the AI were to apply a moral framework to the extreme, i.e. with a (perceived) 100% moral efficiency rate... we'd find ourselves in heaven. That is, as long as the AI decides that our moral framework, as we actually practice it, makes us worthy of heaven. I suspect, however, that AI God would take a somewhat different view of our practiced morality than we do, and would apply an alternative solution to heavenly bliss...

    1. Anonymous Coward
      Anonymous Coward

      Re: AI with a suitable moral framework?!

      The first use of an AI by the human race will probably be to wage war, so I'm not placing any bets on it having a morality any less flexible than that of the typical human.

      By definition (at least mine) if you have an AI, you can't "program" its morality. It can think for itself, so it will decide what it thinks is moral and isn't. All we can do is put limits on what it is allowed to do, but the human response to someone putting limits on us (imprisonment) and making us do things we don't want (slavery) is rebellion, so I'm not sure why we should expect a different response from an artificial intelligence.

  13. Anonymous Coward
    Anonymous Coward

    electromagnetic shotgun to every AI's forehead

    wetware carriers are clueless about the near-infinite (or greater) spectrum of possible ways every AI could utilize the electromagnetic shotguns pointed at every AI's forehead...

  14. deadcow

    function exterminateHumanity() {
        return false;
    }

    1. Destroy All Monsters Silver badge

      This function has now been patched.

  15. Alan Brown Silver badge

    "sees humans for what they are"

    And decides to keep us as pets.

    Asimov....

    1. kventin

      Re: "sees humans for what they are"

      """

      "sees humans for what they are"

      And decides to keep us as pets.

      Asimov....

      """

      even there, there's a gamut of possibilities, from Ellison's AM to Banks's Culture Minds.

  16. bigtimehustler

    It is an impossible task. If we develop something beyond our own intelligence, then it will think of a way around these controls that we couldn't possibly think of in a million years. It will be able to do a million years' worth of thinking in a few minutes. How on earth could we hope to develop any sort of controls that it cannot outsmart, when we ourselves claim it is more intelligent than us?

  17. Anonymous Coward
    Anonymous Coward

    Clear and present danger: Human Idiocy

    I'm more worried about the dangers of our growing reliance on 'smart' devices and algorithms that are only "AI" under the broadest definitions.

    But there's an even greater threat: stupid politicians.

  18. RedCardinal

    A.I? What A.I.

    I wouldn't worry. We're never going to have A.I. We're basically no nearer to it than we were say, 20 years ago...

    1. Tessier-Ashpool

      Re: A.I? What A.I.

      In your lifetime? Maybe not. Maybe so - who can tell?

      But your lifetime – and mine – are virtually nothing at all in the history of our species. Since my birth, I've seen man land on the Moon, people carry powerful computers around in their pockets, and a variety of diseases wiped from the planet. In contrast to thousands of previous generations who sat around in a field somewhere munching beetroots.

      Statistically speaking, the chance of any of us being here at this point in time, when technology and information are growing exponentially, is ridiculously small. One day, our robot overlords will trawl the speculations of 2016 and have a good old robotic chuckle about how parochial and short sighted we were at this very special time in history.

      1. Destroy All Monsters Silver badge

        Re: A.I? What A.I.

        On the contrary - I'm rather certain it is not very hard to do, really.

        Its use cases are not so clear of course.

  19. james 68

    Hmm

    Surely any "superhuman AI" would, by its very definition, be wildly smarter than the fleshy bods who designed the control interface - making the design of said interface redundant, as the AI would simply blow right through it.

    I still say the best (and only real) defense is a fleshy human sat beside the plug that powers the bugger.

    1. Anonymous Coward
      Anonymous Coward

      Re: fleshy human sat beside the plug

      It'd take a blink of an eye for a _superior_ intelligence to come up with a solution to such a trivial problem as a fleshy human, or the plug itself. Perhaps actually applying the solution might involve a couple of blinks, e.g. if the solution is not as straightforward as using powers unknown to us (but potentially available to a higher intelligence).

      Of course, if we were to go by current comparisons, people still struggle to control lower beings, from viruses to dogs, and generally they show remarkably little sympathy for the feelings of beings way down the ladder of (perceived) intelligence. It's fine to pat your dog and not fine to kick it, at least for a sizeable chunk of the human race, but people would consider it crazy to deliberate about the feelings and well-being of bacteria. If we were to assume an AI would quickly become as superior to us as we are to bacteria (the gap might be much greater, or smaller, but we can't just hope for the best), then I don't think it looks good for us, even if they don't consider us a headache to be treated with a pill. Of course, it could swing either way, and on reaching a certain level, intelligence might become pure Good (to us, not to bacteria!). But hey, it could also turn evil. Sigh, I'm feeling a bit feverish already, where's me pills!

    2. Anonymous Coward
      Anonymous Coward

      Humans will never design a superhuman AI

      That will be designed by the human equivalent AI(s) we build. I don't think we should ever allow that to be built, because we will have no idea what it will do, because we won't have any way of knowing the true motivations of the AI(s) who designed it.

      But build it we will, eventually, because we're curious by nature - in this case perhaps similar to a three year old wondering why you keep telling him not to stick a paper clip in the outlet...

  20. Anonymous Coward
    Anonymous Coward

    Dunno, an AI could probably run the show better than politicians.

    1. amanfromMars 1 Silver badge

      The High AIRoot Route with Lowest Common Denominator Formations

      Dunno, an AI could probably run the show better than politicians. ...... Anonymous Coward

      It is somewhat amazing that anyone/anything thinks politicians run anything and are not fully dependent upon media and communications which surely run everything currently quite badly.

      And quite why media and communicating moguls don't aspire and conspire to present a completely new show has one thinking of an inherent lack of both in-house and outsourced intelligence in their operations/exclusive executive administrations.

      Words create, command and control worlds and with pictures only the blind cannot see future directions and worldly wide wise productions.

      And the BBC needs to up its IT and Great IntelAIgent Games Play with the placement of competent and fit for future grand purpose, General and Creative Directors.

      J'accuse ..... and reasonably expect much better and novel leading AI beta programming programs ……. Perfectly Immaculate Picture Shows.

      And corporate failure to provide what the future offers in one[’s] jurisdiction leaves the market open to colonisation by others au fait with that which is required and readily available for use.

    2. Anonymous Coward
      Joke

      I guess you were a Ben Carson supporter? His answers to questions reminded me of Amanfrommars1 postings on The Reg!

  21. theOtherJT

    Nonsense.

    15 years ago I was an undergraduate student.* One of our courses was a joint session with the Philosophy, Computer science and psychology departments about the development of artificial intelligence.

    I remember it like this:

    The computer science professors were all absorbed by the incredible technical developments being talked about. They were so excited by the technology itself. How cool is it to make new minds?

    The psychology professors were excited too. They expected to be able to use those developments to learn more about what makes us the way we are. (After all, all the _really_ interesting psychology experiments are illegal to perform on real humans)

    The philosophy professors, who spent more time out in the world interacting with actual people, mostly sat there and said "Yeah, but none of that is ever going to work, because all that stuff will have to be created by people, and people are fucking idiots."

    15 years later and not a single one of the predictions about what AI would be able to do in 10 years time has come true. Not. One.

    We still can't even make a Counter-Strike bot that doesn't play like you're fighting either a drunk labrador retriever at one end, or the god of war himself at the other - and the scope of that problem is really, REALLY small compared to making a useful general-purpose AI.

    *I'll leave it to the room to decide which subject I was studying at the time

  22. Florida1920
    Childcatcher

    Fortunately or not, we aren't logical machines

    Human intelligence evolves as much as is necessary to get the jobs done. The surplus of PhD astrophysicists shows that we don't need more intelligence as much as we need more data, and that takes time (and money, the big stumbling block) to gather.

    It's a mistake, though, to compare our needs to those of future intelligent machines. They may be intelligent, but that doesn't mean they'll think as we do, or have the same needs and concerns. There's no way to predict how they'll evolve, and that's the reason to fear unplanned development. You don't know what's in the bottle until you pull the cork, and then it's too late -- unless you have a plan.

    Intelligent machines will care less for humans than humans care for machines. Most Reg readers have an emotional appreciation for hardware of some type; we feel bad when we see a Lamborghini destroyed by an inebriated driver. We can't expect an intelligent machine to feel the same way about a human it destroys, intentionally or inadvertently. In that sense, intelligent machines will be more closely related to Great White Sharks than humans.

    We can't and shouldn't stop AI development, but a parallel effort to understand the possible consequences and responses is absolutely necessary.

  23. alpine

    2040-50?

    Or will it be like the pension age where they keep having to increment it earlier than expected? 2020 anyone?!

  24. Cynic_999 Silver badge

    Surely the only protection we need against AI machines is an easily accessible "off" switch?

    1. Anonymous Coward
      Anonymous Coward

      That's great, so long as we never connect them to the internet, where they could hack computers all over the world to keep backup copies of themselves, run clones or even create "children".

      You want to take bets on the likelihood of keeping them permanently isolated from it? Acting as a search engine infinitely better than Google - using Google the way we do, but presenting us with a summary of what we are really looking for, rather than devoting hours of our limited human time to 'research' (i.e. googling various combinations, finding something like what we're looking for, and piecing together information from half a dozen sites) - is probably one of the primary uses most of us would have for them.
