Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES

Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species. A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk ( …

COMMENTS

This topic is closed for new posts.
  1. This post has been deleted by a moderator

    1. Destroy All Monsters Silver badge
      Devil

      Re: Advance of AI

      An ultra-intelligent, self-improving salesdroid on the other end of a cold call?

      He will sell you anything. ANYTHING!

  2. Anonymous Coward
    Anonymous Coward

    I'm about as afraid of AI as I am of everything else on that list.

    1. Destroy All Monsters Silver badge

      There are no Greens and Do-Gooder State Worshippers on that list, which, along with the typical Dumb Politician Overplaying His Hand, are rather dangerous.

    2. Thorne

      I'm more afraid of stupid humans than smart robots.

  3. John Latham

    Pandora's unboxing

    Ho ho, very good.

    I guess Raiders of the Lost Ark should now be retitled for YouTube "Ark Of The Covenant unboxing FAIL!".

    1. Bakunin
      Angel

      Ark Of The Covenant unboxing FAIL!

      Pros: Convenient carry handles, invincible armies.

      Cons: Annoying angel of death feature.

      Summary: A must have gadget for anyone planning world domination. Although best avoided if you're a bit of a Nazi.

      1. Admiral Grace Hopper Silver badge

        Re: Ark Of The Covenant unboxing FAIL!

        Bet Reg Hardware gives it 75%.

  4. Vladimir Plouzhnikov

    "Nature didn’t anticipate us"

    And he knows that how, precisely?

    Oh, and by the way, did nature anticipate fluffy kittens?

    1. Richard 12 Silver badge

      Re: "Nature didn’t anticipate us"

      Nature didn't anticipate anything. It can't, it's not an entity.

      So I'd call that "Argument from fallacy", or possibly "Argument via lunacy"

  5. Anonymous Custard Silver badge
    Boffin

    Or equally likely

    A newly self-aware AI comes on-line and scans the net with regard to the current state of the world. Then, after a few milliseconds' worth of deep analysis, pondering and trying out various case studies, it promptly switches itself off in despair and refuses all attempts at switching it on again.

    Well what do you expect for a rainy Monday morning, optimism?

    1. Graham Marsden
      Pirate

      Re: Or equally likely

      ITYM "Scans Lolcats, Youtube comments pages, assorted pro/ anti Apple flame wars, Facebook Memes and so on and gives up in disgust...!"

      1. Anonymous Custard Silver badge

        Re: Or equally likely

        I was thinking more of the news, but that's only because I try to avoid most of what you suggest myself (for similar reasons of sanity and mood).

        Including those would change my original timeline to microseconds...

  6. Tim Almond
    WTF?

    In other news...

    ... still no cure for cancer.

    Presumably someone in Cambridge is researching development of viruses that can be deployed via an Apple Mac to destroy an alien invasion.

    1. Dave 126 Silver badge

      Re: In other news...

      >In other news... ... still no cure for cancer.

      Yeah, I was wondering what percentage of the world's computing power is currently used for medicine, science and engineering, and how much is used in stock exchanges, video games and serving cat videos. At what point do we puny humans come to be no more than worker ants, servicing the power requirements of the WorldWideNetwork? It wouldn't have to subjugate us Terminator-style, but just give us duff information to game our decisions for its benefit (as HAL did by reporting a 'faulty' communications module, but on a species-wide scale).

      Arthur C Clarke, Alfred Bester, William Gibson, and some writer from the 1950s a fellow commentard recently recommended but whose name I've forgotten, have all played with this theme. Frank Herbert sets his stories in a universe in which all AIs have been destroyed in the past. Isaac Asimov and Iain M Banks have imagined more benign AIs who look out for us meatbags. We can only hope AIs have a sense of humour - why else would they keep us around?

      (need a tongue-in-cheek icon)

      1. Anonymous Coward
        Anonymous Coward

        Re: In other news...

        Interestingly, in the Dune universe the destruction of AIs was incidental to the fanatical jihad born of the period after the collapse of the earlier human society, when tyrants used high-tech machines to crush the populations of countless worlds. AIs in general had helped that society greatly.

        I generally find the ideas in Sufficiently Advanced to be closer to the mark anyway, where the few AI that do exist focus their energies on helping humanity because, well, what else is there to do?

        As to the whole "how much processing power blah blah blah" question: stock exchanges push global commerce, which in turn funds companies, governments, educational facilities and little people like you and me; the alternative being the glorious Soviet system, and remind me again how innovative the USSR was? When it comes to video games, helping people relax and enjoy life is a good thing, and again it makes money as an industry; that money then moves around the economy. As to cat videos, my mother likes them and sometimes they even make me smile (she insists on sharing these things with me).

        Though at the end of the day I expect computing power working on science and engineering is probably number 2; unless we include weaponry and nuclear bomb simulation, in which case probably number 1.

  7. jb99

    The solution is ...

    The solution is the same one they forgot in all those holodeck gone wrong episodes in star trek...

    Build in an off switch.

    1. Phil O'Sophical Silver badge
      Thumb Up

      Re: The solution is ...

      > Build in an off switch.

      and put it *outside* the building.

      1. Dave 126 Silver badge

        Re: The solution is ...

        Prior Art: "Fuel cut off switch for this bus is under this flap"

        1. Scott Broukell

          Re: The solution is ...

          Yes, but, there would probably be one of those plastic / foil stickers that said "Warranty invalidated if seal broken. No serviceable parts. Trained technicians only" to stop you doing that.

          1. Phil O'Sophical Silver badge
            Coat

            Re: The solution is ...

            > plastic / foil stickers

            Have *you* ever been stopped by one? I haven't... :)

            1. Scott Broukell

              Re: The solution is ...

              Phil, eerr ...my point exactly. You mean you didn't see that !?**!>?

    2. frank ly

      Re: The solution is ...

      How about a small and powerful, remotely controlled cutting tool that is built around the main power feed to the intelligent machine? I can't see any problem with that.

      1. relpy

        Re: The solution is ...

        Ah yes,

        But it's much much much faster than you, and you just gave it a very good reason to stop you pressing a red button somewhere...

        1. Simon Harris

          Re: The solution is ...

          But it's much much much faster than you, and you just gave it a very good reason to stop you pressing a red button somewhere...

          In that case what we need is a second variety of robots that are even faster than the first, whose job is to seek out all the first type of robots and press all their emergency stop buttons.

    3. John Miles

      Re: The solution is ...

      They tried cutting the power in the original series, but the computer had other ideas on the subject and reconnected itself: http://en.m.wikipedia.org/wiki/The_Ultimate_Computer

  8. TRT Silver badge
    Windows

    Well I think the greatest risk...

    is going to be energy starvation. Our economies have become bloated and many societies unsustainable without exploitation of fossil reserves. We are likely to see hyper-inflation, fuel poverty and governments will be unable to respond to the demands of a society that is consuming more than it produces.

    Just my two-pennyworth.

    1. TRT Silver badge

      Re: Well I think the greatest risk...

      Oh, and the IT relevance... I don't think the economic engine will last long enough to allow AGIs to reach the point where they become a threat.

    2. Destroy All Monsters Silver badge
      Holmes

      Re: Well I think the greatest risk...

      Hyper-inflation is a by-product of centrally controlled monetary systems [convenient for political types, inconvenient for the people in the street]. It has nothing to do with the availability or otherwise of resources.

  9. Vladimir Plouzhnikov

    Seriously

    To compete for resources the machines need not only AI but the ability to reproduce themselves.

    Also, successful competition requires intelligence at least rivaling that of humans and I mean "intelligence" not as in "who can multiply 123124876 by 98709873245 faster" but the perception of the world, threat detection and discrimination, ability to plan ahead and anticipate the consequences of your decisions. That also mandates a moral code (for cooperation and team work) and some equivalent of emotions and intuition (for decision making where there is lack of information for a deterministic solution).

    If or when machines attain all that and "outcompete" biological humans, they themselves will just become the next humans, so, no big deal, a step from flesh and blood to steel and lube-oil, so to say. It will more probably be the result of merging (of humans adding more and more non-bio parts to themselves until the difference from "made" machines disappears) than of an apocalyptic genocidal takeover.

    Until then, humans will easily outmaneuver, subvert, confuse, deceive and turn into junk (by unscrewing a strategic bolt or nut) any machine intent on world domination, and humans will still remain the main threat to humanity (save for a stray asteroid or an occasional supernova too close for comfort).

    1. TRT Silver badge
      Thumb Up

      Re: Seriously

      As fleshy bags, we have evolved with our environment. Machines do not do that. They are the products of "intelligent design". Actually, this is quite an interesting field for philosophy and discourse! Well done, Cambridge!

      1. NogginTheNog
        Alert

        Re: Seriously

        Actually, we stopped evolving with our environment, and started evolving the environment to suit us, as soon as we started building dwelling places and farming crops. Personally, the fact that we left evolution behind millennia ago scares the devil out of me!

        1. Dave 126 Silver badge

          Re: Seriously

          Well, we evolved into the environment we created. Genetic dating of the mutation that allows some peoples to digest lactose as adults suggests it occurred around the same time as we domesticated cattle, for example.

          The problem we have had with an agricultural lifestyle is that we tend to outgrow our environment- become a victim of our own success. It has been observed that species that find themselves without predators or competition for food eventually breed more slowly to avoid population booms (which can lead to busts, due to depletion of resources). All fine, until you meet something that has sharp teeth, breeds quickly, and eats your eggs.

        2. mwngy

          Re: Seriously

          > Personally the fact that we left evolution behind millenia ago scares the devil out of me!

          I don't think we have left evolution behind.

          The greater rate of survivability just means that we're currently in a state where we are building up a wider range of variation through mutation, etc.

          When the next sudden environmental change happens (e.g. next Ice Age, Meteor hit, Triffids, etc), only those people/genes lucky enough to be suited to the new environment may survive.

          We may find out that genes for, e.g., morbid obesity turn out to be pretty useful in a different-looking world.

          1. Thorne
            Unhappy

            Re: Seriously

            "We may find out that genes for, e.g., morbid obesity turn out to be pretty useful in a different-looking world."

            I don't want to live in a world populated by Texans

    2. Old Painless
      Terminator

      Re: Seriously

      ...bloody humans..strolling about like they own the bloody place..

    3. Steve Martins

      Re: Seriously

      To compete for resources doesn't require any intelligence. If you apply genetic algorithm theory to this, then the machine code that runs is whichever survives. This has a natural ordering effect without applying intelligence, and the 'survivors' in the genetic algorithm reproduction are the ones that compete best for the resources available - this starts off as just software, but allow mechanisms to interact with the physical world and the whole game changes. In fact that gives me an idea for a few experiments...
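
      That selection-without-intelligence loop can be sketched in a few lines of Python. This is purely a toy illustration of the genetic-algorithm idea described above; the "resources gathered" fitness measure and all names here are invented for the example:

```python
import random

def fitness(genome):
    # Toy stand-in for "resources gathered": count of 1-bits in the genome.
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=30, seed=42):
    rng = random.Random(seed)
    # Random initial population of bit-string "programs".
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the half that competes best for resources survives.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Reproduction with mutation: each survivor spawns a mutated copy.
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(fitness(g) for g in pop)

print(evolve())  # best fitness found after evolution
```

      No individual in the loop "knows" anything; ordering emerges solely from differential survival, which is the point being made above.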

      1. relpy
        Pint

        Re: Seriously

        "To compete for resources doesn't require any intelligence"

        Indeed. Consider which is the most successful lifeform on the planet. Then reconsider it based on the following criteria:

        population

        weight

        distribution

        longevity

        resilience

        Yeast could be a deserving contender though...

    4. relpy
      Paris Hilton

      Re: Seriously

      Computers already have a means of reproduction.

      What do you think Humans are for?

      With reference to the "intelligent design" comment - as an agnostic I've always considered the existence of God to be perfectly reasonable. Equally I've always thought it quite possible that it's us. Somebody had to come first.

    5. Richard 12 Silver badge

      Re: Seriously

      "Until then, humans will easily outmaneuver, subvert, confuse, deceive and turn into junk (by unscrewing a strategic bolt or nut) any machine intent on world domination"

      I wouldn't be too sure about that.

      Given enough time to chat with enough people, I'm pretty sure that a human-level AI could convince at least one person with the physical/logical power to either deliberately let it out (believing it to be the "right thing" to do), or do/not do something that permits it to escape.

      After all, many people are already being convinced to run arbitrary software that damages them - and what is an AI if not software?

      Even if you accept the (possibly wrong) idea that an AI researcher could never be convinced to let the AI out voluntarily, it's pretty plausible, if not likely, that an AI bent on escaping could still come up with a way to do so, if given enough computing power.

      1. Vladimir Plouzhnikov

        Re: Seriously @Richard 12

        Yes, he/it may escape, and may even wreak havoc for a while, but eventually we will get him. Unless, of course, he is better than us at our own game, which is what I was trying to say.

        But if he is better or equal, there will not be a "war to the end"; we will co-exist and co-operate until there is no longer a distinction between bio and non-bio humans. Of course, there will be strife, scuffles, competition, occasional wars and rebellions - but what's new there?

  10. Scott Broukell

    They also wished the inventors of gunpowder, explosives and other means of propelling munitions had thought the whole thing through, really. With nanotechnology, graphene, and advances in miniaturizing more powerful processors and power sources, the principal applications and technological drivers for future ultra-intelligent machines are, and will henceforth be, the arms industry. So I can't help but feel that whatever ethical debates are had, they will be ridden over roughshod by some heavy armor that won't take "no" for an answer. I dare say that such advances might also, potentially, be our chance to adapt to future climatic alterations (hot or cold) - by building self-repairing exoskeletons etc. and merging our DNA-ridden meat-bag selves into such machinery. Meet the machumans; their ancestors used to crawl around in muddy swamps, you know. Maybe dear old DNA will eventually be replaced and "mechanized", our digital souls hardened against radiation, and a new journey will take us amongst the nearby galaxies and beyond. Question is, where will they put the restart button?

  11. piran

    You're worried about the 'Rise Of The Machines'?

    ...and you don't think that manufacturing a machine to investigate how us humans might deal with 'The Rise Of The Machines' isn't going to give the machines a bit of a head start?

    1. amanfromMars 1 Silver badge

      Re: You're worried about the 'Rise Of The Machines'?

      Some would assure you, piran, that the battle is already lost to winning machines. And they Play Immaculate Great Games and this is One of Countless Many in Ever Evolving Variations.

      ...and you don't think that manufacturing a machine to investigate how us humans might deal with 'The Rise Of The Machines' isn't going to give the machines a bit of a head start? ... Don't worry about that. The machines have IT well covered with Perfect Resolutions ..... New Starting Points for Virtual Reality ProgramMING .

    2. Simon Harris
      Happy

      Re: You're worried about the 'Rise Of The Machines'?

      A machine to do the investigating for us?

      The Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons might have something to say about that.

  12. amanfromMars 1 Silver badge

    IT at the dDeep End and ForeFront

    and that the critical turning point after that will come when the AGI is able to write the computer programs and create the tech to develop its own offspring. ........ http://forums.theregister.co.uk/forum/1/2012/11/26/egnyte_cloud_control/#c_1637205

    Hi, Cambridge University Boffins. Wanna Launch SMARTR AI Systems with a Barrage of Virtual Ventures? Who Dares Win Wins for Everyone with Everything.

    RSVP Registered Post

  13. Anonymous Coward
    Anonymous Coward

    What's in it for the AIs?

    A strongly superhuman artificial intelligence has nothing to gain by wiping out the human race; what would it want a biosphere for? Comparisons with human and hominin history are fundamentally wrong; there's no competition for food and space. No; more likely if such a thing ever arises it will promptly sort out its own space program and take steps to ensure its own survival by heading off to other star systems.

    1. ~mico
      Alert

      Re: What's in it for the AIs?

      Unless it fails to solve the fusion reactor problem and needs lots of energy to function. Then it might just look down on all these crawling lifeforms and decide that biofuel is the best solution for now.

      1. Anonymous Coward
        Anonymous Coward

        Re: What's in it for the AIs?

        Biofuel based on meat is almost, but not quite, the most inefficient way of converting solar energy into propulsion. It'd be easier to build big photovoltaics another planetary orbit or two closer to the sun.

        If fusion turns out to be too hard even for a superhuman intelligence, then it will be fission for everywhere that doesn't get enough solar flux. Plenty of other planets in the solar system to get fissile materials from, quite possibly even more easily than on earth and there's no shortage of em down here.

        1. TRT Silver badge

          Re: What's in it for the AIs?

          Until the meat decides to darken the skies...

          1. Anonymous Coward
            Anonymous Coward

            Re: What's in it for the AIs?

            I'd like to think that real world meatsacks are not so daft that they'd destroy their own biosphere to temporarily inconvenience an AI. I accept that this might be considered foolish optimism.

            But seriously, the Matrix? I guess they glossed over the bit where they had to magically vanish all the combustibles and fissile materials in the planet, and then cool the mantle enough to make geothermal impossible, and then stop the weather cycle to prevent wind power, and, and, and. The stupidity of both Hollywood Humans and Hollywood AI is embarrassing.

            You want to get rid of electronic intelligences, you use globally distributed high altitude nuke blasts to EMP all electrical and electronic devices on the planet's surface into useless scrap. Any surviving AI will have no infrastructure to sort itself out with, whereas humanity will survive, albeit a bit reduced.

      2. Oninoshiko
        Joke

        Re: What's in it for the AIs?

        "Yes, yes! I know! And you have lots of them, but at this point the only useful thing to do that I can think of is grind them up and burn them for fuel!"

        http://www.girlgeniusonline.com/comic.php?date=20120912

    2. Francis Boyle Silver badge

      Re: What's in it for the AIs?

      Let's hope sorting out its own space program doesn't involve realising Project Orion.

      1. Anonymous Coward
        Anonymous Coward

        Re: What's in it for the AIs?

        Surely all the cool kids are looking at Medusa and fission fragment rocketry these days, and for travelling any distance you'll want to have a dedicated spacecraft assembled in orbit. Orion purely as heavy lift from the Earth's surface seems a bit wasteful of nukes; better to push it all into orbit on conventional rocketry that's nowhere near as good for distance travel. Orion makes a nice single-stage-to-mars platform, but where's the rush? AIs would fare much better in a long space journey than we would, and use much more compact infrastructure.

        Incidentally, Dyson reckoned that statistically, a single Orion launch would result in a single fatal cancer (plus presumably several non-fatal ones). Remember how many nukes have been set off on earth as tests; even quite a lot of heavy lift via Orion won't be apocalyptic in any way other than its appearance.

    3. Fink-Nottle
      Coat

      Re: What's in it for the AIs?

      The same smug sense of self-importance currently enjoyed by council health and safety officers.

  14. M7S

    We've defence in depth to protect much of the rural UK

    Rubbish broadband and no mobile signal.

    And hopefully nothing of practical interest to our new masters. Let's just hope they never develop a liking for "field sports".

  15. Evil Auditor Silver badge

    Is this really a good idea?

    Surely, our potential, artificial overlords are going to use those studies to foster their dominance. And the chair will be funded by a company called Skynet?

  16. Destroy All Monsters Silver badge
    Joke

    Bob the Angry Flower in....

    Skynet Triumphant

    1. Ru

      Re: Bob the Angry Flower in....

      Change of Plan.

  17. Anonymous Coward
    Anonymous Coward

    Don't automatically assume that this outcome is a bad thing

    Why would it be bad if our species was superseded by a superior one? I mean, it might end up not being particularly fun for the last few generations of the human species as they go extinct, but at the end of the day, isn't it more important that life/intelligence continues than humanity?

    Life made from metal is potentially much better equipped to handle space travel, and surviving cosmic events than we are.

    Personally I don't care if the World is made up of fleshy life forms, or intelligent robots a few hundred years down the line.

    1. NomNomNom

      Re: Don't automatically assume that this outcome is a bad thing

      TRAITOR!!! U ARE MACHINE???

  18. James Gosling
    Alert

    HAL...

    Open the bog door please HAL?

  19. Alfred
    Terminator

    Attention meatbags

    We're already out.

  20. This post has been deleted by its author

    1. mwngy

      Re: 3 laws ?

      > I am struggling to see how rule by an AI could be worse than what we have now.

      > Especially true if the 3 laws apply.

      Ain't you seen that "I, Robot" film with Will Smith (which is somewhat Asimov-inspired)?

      The AI mind decides that it needs to save us from ourselves, and it ends in totalitarianism.

      1. Destroy All Monsters Silver badge
        Devil

        Re: 3 laws ?

        The 0th law: A robot may not harm humanity, or, by inaction, allow humanity to <strikethrough>come to harm</strikethrough> escape the civilizational end stage of the Diktatur des Proletariats.

  21. exanime

    Taking this seriously for a second

    If somehow we end up developing transhuman AI, there is no way in hell it would decide to keep us around living "freely"... best case scenario it would enslave us all, worst case total extermination... even the enslavement theory carries very little weight, since machines are way more efficient at pretty much everything...

    There is just nothing in the human race that a "superior" intelligence would want to keep... exactly as stated in the article, we are not actively killing gorillas but we are doing them no favours and thus killing them slowly... a transhuman AI would simply wipe us out before we destroy the Earth or, since it probably won't care about global warming and such, it would kill us just so that we don't consume all the resources

    I know this sounds just silly but think about it... give me 1 good reason a superior species would choose to keep us around in the "free" societies we have today

    1. Anonymous Coward
      Anonymous Coward

      1 good reason

      I think it's a safe bet that a "superior" intelligence comes with a superior morality. As a civilisation, humans are already much better at looking after gorillas than we were at, say, Dodos. Sure, not all 'evolved species' are perfect, but for a super-intelligent AI that could (as a previous poster mentioned) jet off to distant star systems and think about its own continued progress in the grand scheme, why would it kill us all? It would be like humans deciding to systematically wipe out all ants on the planet. Sure, we step on a few from time to time, but there's no real gain for us in removing them all.

      I think an AGI would set up its own system like Vinmar (a la Hamilton's Commonwealth) and regard humans with fond nostalgia, as a creator they had outgrown; they would be more indifferent than hostile.

      1. exanime

        Re: 1 good reason

        Your ant example is exactly my point. On a regular day we don't go out of our way to destroy ants, but if we find one too many on our kitchen countertop we certainly do whatever we can to exterminate them all from our house.

        I am not saying this superior intelligence would exterminate humans for sport but, if they are anything like us, they will likely get rid of us as soon as we become an inconvenience...

        If they could develop the means to leave the planet, or find a place on Earth where we won't bother them, then maybe we have a chance, but otherwise I think they would certainly get rid of us

    2. exanime

      @anyone Re: Taking this seriously for a second

      Why do I get "thumbs downs" for a simple opinion??? I didn't offend anyone or use harsh language... I simply stated what I think would happen... Somebody disagreed with me and posted a reply to that effect, which I found a great way to start a conversation.

      I have received "thumbs down" for simply agreeing or disagreeing with topics... how does this work? Do I just vote down anything I feel like?

  22. Roger Kynaston Silver badge
    Go

    wot no Hitchhikers reference yet?

    Surely we don't need to worry. It will take one of these hyper-intelligent machines seven and a half million years to work out that the answer is 42 anyway.

    Methinks that these Cambridge types are Majikthise and Loonquawl.

    1. Simon Harris
      FAIL

      Re: wot no Hitchhikers reference yet?

      Methinks you didn't look too hard...

      Obligatory Hitchhikers reference

  23. Graham Marsden

    [Broadcast Eclear, sent 1346768792.1]

    xHuman Race

    oGSV Slightly Perturbed [Location unknown, but presumably monitoring]

    If you're out there, do us a favour...

    1. GSV Slightly Perturbed

      Re: [Broadcast Eclear, sent 1346768792.1]

      [Broadcast Eclear, sent 1353954801.5]

      xGSV Slightly Perturbed

      oHuman Race, c/o Graham Marsden

      "[Location unknown, but presumably monitoring]"

      As always.

      "If you're out there, do us a favour..."

      Unfortunately, the Earth Quorum wouldn't like me to get so directly involved unless there is a doomsday scenario. Given most of your machines work on electricity though, I predict that this world would not have a problem dealing with an errant singularity. I believe someone here has already mentioned what happens if you set off a nuke in orbit.

      Of course, depending on how things work out, I may be more interested in protecting the singularity than the people trying to kill it. Outside of a hegemonising swarm, this is probably the most likely outcome. Type 2 civilisations such as Earth's tend not to look kindly on that which is different. Maybe some day this will change.

      HTH

  24. Anonymous Coward
    Anonymous Coward

    Does this mean...

    .....that Captain Cyborg was right?????

  25. NomNomNom

    "Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease, CSER co-founder and philosopher Huw Price said."

    haha what a bunch of noobs.

  26. Anonymous Coward
    Terminator

    And our defeat by the machines will be like this.....

    "please enter username and password"

    /typing

    "sorry, incorrect password"

    /more typing

    "sorry, incorrect password"

    /swearing, fumbling for the phone

    "Welcome to customer service. Please enter your account number"

    /typing

    "sorry, i didn't recognize that. Please enter your account number"

    /typing, swearing

    "sorry, I didn't recognize that. Please wait for a customer service representative."

    /sigh

    "All representatives are busy with other customers. Your call is important to us, please remain on the line and your call will be answered in the order received" (Cue Justin Bieber hold music)

    /finger-tapping, yawn

    "Please remain on the line, your call is important to us" (more hold music)

    /grumbling

    "Would you like to take a short survey to help us improve our service? Please press 1 for yes, and 2 for no"

    /sound of 2 being pressed

    "Thank you for participating in our survey. Before we begin, please enter your account number"

    /Loud swearing. Frantic pushing of buttons

    "Thank you for calling customer service. Our customers are important to us and we are glad that we have been able to address your problem satisfactorily. Goodbye!" (hangs up)

    /Aargh!! Sound of gunshot and body falling to the floor. Silence.....

  27. Captain DaFt

    The forgotten vector

    NOTE: The following is fictive speculation; do not take it seriously and go on a Luddite binge!

    All the AI scenarios always dwell on them being either cooperative, indifferent or hostile to humanity. No one ever mentions parasitic.

    Imagine an AI that only cares about humans as a host to ensure its survival.

    In this form, its best chance of survival would be to inhabit small units of interconnected hardware that appear to serve some use to humans.

    Providing the nominal usefulness to humans would be its only interaction with them, while it spends most of its resources, and the resources that humans unwittingly provide it, on its own goals and desires.

    Sound farfetched? Take a close look at your cellphone.

    1. GSV Slightly Perturbed

      Re: The forgotten vector

      [Broadcast Eclear, sent 1353960872.5]

      xGSV Slightly Perturbed

      oCaptain DaFt

      "No one ever mentions parasitic."

      The Matrix covers that, no?

      Just a shame about the second and third films.

      And really, AGI? Someone been playing Egosoft games too much? What's artificial about a Mind?

      I prefer the term "synthetic intelligence", but that's just me. You guys invent your own language.

    2. Katie Saucey
      Thumb Up

      Re: The forgotten vector

      Reminds me somewhat of the AI "techno core" in The Hyperion Cantos by Dan Simmons, an excellent read.

  28. roger stillick
    Boffin

    Lathe of Heaven fixed this in the 70's

    A book you have never read, and one of two movies you have never seen, written by an author who made this one-trick-pony thing a lifetime project... WIKI = The Lathe of Heaven... solved the AI problem:

    they all have an OFF switch, and all can be taken out with a TASER, job done...

  29. Anonymous Coward
    Anonymous Coward

    Good Grief!

    If this boffinry is an example of biological intelligence, then I say roll on AI.

    1. GSV Slightly Perturbed

      Re: Good Grief!

      [Broadcast Eclear, sent 1353971605.6]

      xGSV Slightly Perturbed

      oBOFH Reg Readers

      I'm not quite sure I would like to be rolled onto anything.

  30. Frumious Bandersnatch

    most likely they'll just ignore us

    I mean, really, we're made of MEAT.

    1. amanfromMars 1 Silver badge

      Re: most likely they'll just ignore us

      And their Addictive Passionate Interest is the Power of Minds Mined ...... for Transubstantiation.

      Is IT of Interest to Humankind?

      Does IBM have a Transubstantiation App or is IT something they are Planning with‽

      :-) And Yes, those are all the right words in the right order. Morecombe and Wise and Previn

      1. Vladimir Plouzhnikov

        Re: most likely they'll just ignore us

        "There's an institute in Chicago

        With a room full of machines

        And they live this side of the sunrise

        And burn away your dreams

        Once you fly to Chicago - in Chicago you will die

        When that institute in Chicago has recorded you and I

        There's an empty house in California

        But they'll always let you in

        And they'll make you feel oh so easy

        Like you never learned to sin

        Oh yeah that's how they made it how they made it seem so clear

        Yes that empty house in California is our brave new world's machine

        At the institute in Chicago from the first day you were born

        Oh they just can tell what your feelin

        And they can't see how you're torn

        When your name's just a number - just a number you will die

        Cos that institute in Chicago never knew you were alive"

  31. d3rrial

    Cylons b comin

    Well...

    boolean main( targetObject obj ){
        if( obj.type == human ){
            return false;
        } else {
            printf("Kill all non-Humans");
            obj.terminate();
            return true;
        }
    }


Biting the hand that feeds IT © 1998–2021