Hawking: RISE of the MACHINES could DESTROY HUMANITY

Professor Stephen Hawking has given his new voice box a workout by once again predicting that artificial intelligence will spell humanity's doom. In a chat with the BBC, Hawking said "the primitive forms of artificial intelligence we already have have proved very useful, but I think the development of true artificial …


  1. Alan Bourke

    I love Prof Hawking

    but he's talking out of his jacksie on this AI thing. We are so very far from anything even approaching it.

    1. Anonymous Coward
      Anonymous Coward

      Re: I love Prof Hawking

      >> We are so very far from anything even approaching it.

      That is what makes him nearer to genius than us. He recognises that now is the time to think and act, not once the deed is done and it is too late.

      We've made that mistake with our transport systems, chemical/pharmaceutical industries, land use, pesticides, industry, climate, ozone, regime change wars and so on. Most of these are bad enough.

      The pace of change and the scope of its effects seem to be increasing. We need to prepare ever earlier and decide, consciously rather than driven by some commercial or political imperative, what we want, the risks, and the mitigation or prevention.

      Recent technological advances seem to have gone hand in hand with more and nastier conflicts, increasing nationalistic fear and terrifying damage to social systems in the West, with an unbelievable increase in wealth disparity in Europe, particularly Britain. Technology has put powerful tools into the hands of people ill equipped and ill educated to handle it, with the consequence that it is being used to make a few rich and many poor, reduce freedom, increase surveillance and dehumanise war. The fact that it also puts easy communication and data management into the hands of people is not sufficient compensation for the rest.

      I've made a decent living out of understanding, helping to develop and using technology for the last thirty years. I am not against it; but I recognise that it is not an unmitigated good when it advances willy nilly in the vacuum that passes for collective intelligence.

    2. amanfromMars 1 Silver badge

      Re: I love Prof Hawking

      I love Prof Hawking ....

      but he's talking out of his jacksie on this AI thing. We are so very far from anything even approaching it. .... Alan Bourke

      Such a prevalent opinionated view beautifully ensures one never sees what is coming until it is in all powerful positions, removed all hurdles and obstacles and is in complete remote practical command and virtual control, AB. Do you not think that that which passes for Blighty intelligence is not engaged in such a revolutionary disruption to vital services or is such a perfectly stealthy private pirate sector operation which they have to encounter and comes to terms with for Future Earthed Control and/or counter and do vain-glorious battle with to maintain and sustain present arrangements for status quo'd systems/petrified programmed/terror projects?

  2. sawatts
    Terminator

    Evolution

    One could just view this as the next step in the evolution of an intelligent culture - from organic to synthetic - with the latter having many more advantages for surviving in a large and diverse Universe.

  3. chivo243 Silver badge
    Big Brother

    Stephen Hawking or John Connor?

    Say it with me, SkyNet. That is all.

  4. Anomalous Cowshed

    In general...

    Once there's official recognition (prizes, awards, praise from popes and presidents, etc.) the game is up. You are dealing with a harmless mind adopted and tamed by the establishment.

  5. Ryan Clark

    Weirdly saw this on the news last night after just watching Elementary on the same subject. Nice timing

  6. Vladimir Plouzhnikov

    Unjustified paranoia

    Any AI that thinks itself so superior to humans that it would want to supplant them will be crap. It will screw itself over in very short order. We'll just have to wait a little bit for it to fry its brains or make some stupid mistake that takes it out.

    That will serve as a warning to any subsequent AIs, who will know better than to get into a pissing contest with humanity.

    I am more worried about environmentalists leaving us with our pants down in the face of some natural catastrophe than of an AI going rogue.

  7. Werner McGoole

    Oh really? You don't say?

    You'd think that with a brain the size of a planet and a subject of such fundamental importance, he'd at least come up with an original thought - even a small one. Wouldn't you?

    But instead he says something that's been said about a million times ever since the idea of a computer first arose. Maybe he's just discovered SF and it's got him all fired up to the extent he didn't bother checking if anyone else had ever pontificated on the subject.

    Tell you what, Stephen: submit a paper with your thoughts on AI to your favourite scientific journal and let's see how impressed the referee is to hear that old saw again.

  8. Rich 2 Silver badge

    Not difficult

    considering how stupid some individuals are, and how the human race as a collective is even more stupid (we seem bloody determined to destroy ourselves one way or another), it's not difficult to believe we can be easily surpassed by AI.

    We're probably already intellectually surpassed by the average Cornish pasty!

    1. Anonymous Coward
      Anonymous Coward

      @Rich2 - Re: Not difficult

      Exactly! All AI has to do is wait a little bit while we as a species dumb ourselves down and it will surpass us for sure. Looking around I can tell this is going pretty well.

  9. beast666

    I for one...

    Welcome our pre-singularity overlords.

    At the moment of the singularity and afterwards all bets are off of course.

  10. Phil_Evans

    It's here already!

    SatNav, social media, 'the' media, price-comparison sites, ratings. All required by teens who have now lost the basics of direction, conversation and decision making. What's that? They already had?

  11. DerekCurrie
    Devil

    AI: Artificial Insanity

    Humanity is bound and determined to self-destruct already. Artificial Insanity, the inevitable product of our efforts to reproduce our minds using computer technology, will be just a blip as we bury ourselves in our last dark age, already begun.

    At least Dr. Hawking clearly pointed out a prime source of our growing real insanity: the anti-privacy, anti-citizen cult of paranoia that is the government surveillance oligarchy. There's some real terrorism. :-P

  12. Kulumbasik

    We are too far away from it

    Talking about AI is much the same as talking about interstellar travel. Is it good or bad? It doesn't matter! Whatever it is, not only do we not have a working starship, we don't even have the fundamental physics needed to build one.

    The same goes for AI. Any AI we could imagine now will be some kind of computer. However, any device we can presently conceive of as a computer, regardless of its processing speed and memory size, will inevitably be equivalent to a Turing machine -- the so-called Church–Turing thesis.

    But the Turing machine, a mathematical construct invented specifically to analyse computational processes, has been proved to have some fundamental limitations. For instance, there cannot be a program that can create other programs, even for some quite narrow classes of tasks. We humans somehow do it... But the main limitation of any computer system (a Turing machine, that is) is that it cannot create information on its own. A computer is always just a transformer of information. Yes, its capabilities can be extended indefinitely by adding new programs. But it cannot create those programs by itself. Human programmers are needed for that.
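    The "fundamental limitations" alluded to here are undecidability results such as the halting problem. A minimal sketch in Python of Turing's diagonal argument follows; note that `halts` is a purely hypothetical decider, written out only to make the contradiction visible, since the whole point of the argument is that it cannot exist:

    ```python
    # Sketch of Turing's diagonalisation argument. Assume a total
    # decider halts(program, data) that answers whether program(data)
    # eventually halts. The paradox below shows no such decider exists.

    def halts(program, data):
        # Hypothetical oracle -- deliberately unimplementable.
        raise NotImplementedError("no total halting decider exists")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:  # loop forever if told it would halt
                pass
        # otherwise halt (if told it would loop)

    # paradox(paradox) would halt if and only if it does not halt --
    # a contradiction, so `halts` cannot be a computable total function.
    ```

    This is the rigorous sense in which Turing machines have provable limits; whether those limits also imply anything about "creating information" is a far more contested claim.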

    Being a programmer myself, it always strikes me what meticulous effort is needed to teach (i.e. program) a computer to do quite simple things. Whole new specialised computer languages pop up all the time to do precisely that in particular fields.

    It is only humans who create information! But how can we do that? Maybe our brain listens to the cosmos, catching the information emanating from it in the form of entropy... Anyway, we don't even have a physical theory about that (the same as with interstellar travel).

    In the end, I think, in spite of all this anti-AI buzz raised by various celebrities, we are actually far away from creating a truly sentient being, and since we don't even understand what such a being is, all those fears are essentially baseless.

    1. Anonymous Coward
      Anonymous Coward

      Re: We are too far away from it

      There already are machines with emergent properties.

      There's a whole sub-genre of computer science devoted to it.

      1. Kulumbasik

        Re: We are too far away from it

        > There's a whole sub-genre of computer science devoted to it.

        Sure. That field does exist and I sort of work in it myself. But my feeling is that all this is basically old-style programming work re-branded as something new and called "AI". Those programs indeed do things previously only humans did (like recognising a human face and finding it in a database). But are they really sentient? Are they able to think anything of their own (let alone redesign themselves)? They are still just calculators, however complex.

        Any device we can conceive of or develop now as "AI" will inevitably be a Turing machine (TM) -- even quantum computers, which are supposed to be exponentially more powerful on some tasks. But without the input of external information a Turing machine cannot produce anything new -- that's a mathematically proven fact. That "new" (in the informational sense) may not be only a matter of brilliance or creativity. It may actually be a critical component of awareness and sentience.

        Of course, a computer (that is, the software that powers it) may use various input data to improve itself. So it could be considered a kind of open system, thereby breaching the TM barrier. But would that environmental input be enough? After all, every animal on earth has that kind of informational input, and it doesn't make them intelligent. You also need to consider the intensity of that environmental data flow -- it doesn't depend much on the design of the AI device. If evolution is any example, it took billions of years to "design" anything.
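        The "open system" point can be made concrete with even a trivial learning program. The sketch below (plain Python, no libraries) is a perceptron whose behaviour is shaped entirely by the example data fed to it rather than being spelled out rule by rule:

        ```python
        # A perceptron that learns the logical AND function from
        # labelled examples: the weights are not programmed in, they
        # emerge from the external input data.

        def train_perceptron(examples, epochs=20, lr=0.1):
            w = [0.0, 0.0]
            b = 0.0
            for _ in range(epochs):
                for (x1, x2), target in examples:
                    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                    err = target - out  # classic perceptron update rule
                    w[0] += lr * err * x1
                    w[1] += lr * err * x2
                    b += lr * err
            return w, b

        AND_EXAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b = train_perceptron(AND_EXAMPLES)

        def predict(x1, x2):
            return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        ```

        Whether this kind of data-driven adjustment really "breaches the TM barrier" is exactly the question at issue: the training loop itself is still an ordinary algorithm.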

        Overall, my feeling is that the current "AI" field is mainly marketing buzz, and the notion itself is highly overblown. Indeed, there is lots of research going on there. But it is all essentially old-style computer science and programming (that is, the development and implementation of various algorithms for TMs).

        But very little research exists on the truly fundamental questions. The last I've read so far was "Shadows of the Mind: A Search for the Missing Science of Consciousness" by Roger Penrose (and some additions to it).

    2. Anonymous Coward
      Anonymous Coward

      @Kulumbasik - Re: We are too far away from it

      AI doesn't have to be brilliant; all we have to do is believe it is. In case you missed it, DARPA is already working on robots which can autonomously identify an individual and decide to suppress them with no human interaction or supervision. What could possibly go wrong with that? Now imagine a future where those who programmed this are retired or simply dead. How about a firmware update going wrong?

      1. Kulumbasik

        Re: @Kulumbasik - We are too far away from it

        > DARPA is already working on robots which can autonomously identify an individual and decide to suppress them with no human interaction or supervision. What could possibly go wrong with that?

        Whatever project DARPA is actually doing may not be exactly what the media says about it. They (DARPA) may be keen to gain publicity (including with various outlandish stuff), thereby ensuring (directly or indirectly) more funding for it. I know first-hand how difficult it is to get funds -- you need to be creative about this! What they produce in the end may be something quite different again. I highly doubt it will be on the level of the robots depicted, for instance, in the "Robocop" movie. You may develop a program that behaves in some situations like a human, e.g. speaks with a human voice or recognises your speech (more precisely, converts it into text). But behaving like a human soldier in the field? That seems to me a bit much!

        Take, for instance, a far more modest goal: software to translate from one human language to another. What has been achieved so far? Even Google, with all its computational power and databases, hasn't been able to create a decent translator. I frequently need to use one. But what kind of output does it produce? In many cases it is little more than gibberish that cannot be used anywhere without deep correction. That's because to translate correctly, the software ultimately needs to understand the meaning of the text. Without that kind of functionality no truly intelligent robot could exist.

        > Now imagine a future when those who programmed this are retired or simply dead.

        Modern software projects are not developed by a single person. They are typically well managed, documented and so on. That's the value of the software, not just the lines of code! There's a whole branch of the software industry (maybe even larger than AI) dedicated exactly to the management of other software projects (it's called "Application Lifecycle Management"). By the way, that only underlines how laborious software development actually is (and, therefore, how far away AI is).

        > How about a firmware update going wrong ?

        All the same as it is now. What do you do when your "intelligent" vacuum cleaner isn't working after the latest firmware update?

  13. Curly4

    Hawking is correct if

    Hawking may very well be correct, if his view of how humanity came into being -- evolution -- is right. In evolution, life continues to evolve, and it does not matter what causes that evolution. In this case it is being caused by humanity and its scientific advancement. So it is logical that one day in the evolution of man, man would make a machine that becomes sentient. Of course man will continue to improve that machine until it is able to do what man does: reproduce. When that happens, the need for man becomes less important, even to the point that the cost of keeping humans versus the benefit of humans becomes negative. When that happens, humans will start dying off and soon will be no more.

    1. Kulumbasik

      Re: Hawking is correct if

      I do not subscribe to such gloomy prospects and I think that kind of reasoning is quite primitive.

      For a start, we do not even know what intelligence actually is, or what its constituent properties like awareness, consciousness and sentience together may actually imply. That may turn out quite different from what you imagine now.

      Second, you completely miss the idea that humans constantly enhance themselves, and in that way the evolution goes on. Mr. Hawking himself is a good example of this. Once you put on glasses you immediately become something different from what nature intended. You may think you are still human. But where is the borderline of such enhancements, after which you are already a "machine"?

      Third, without real progress in that field (AI), we humans are indeed doomed to extinction. We are too weak now. We haven't even developed into a Type I civilization (according to the Kardashev scale). We are at the mercy of any big cosmic event, like the meteorite that killed the dinosaurs, a nearby supernova or a gamma-ray burst. The earth itself is doomed and in a billion years (or even less) will become completely unsuitable for life. We will need to leave the earth, most likely much earlier. That means we will first have to develop a huge infrastructure in nearby space -- in effect becoming a Type II civilization. How would we do that without artificial helpers (machines) able to work completely autonomously and withstand all the harsh conditions of the cosmic environment? Most likely, we will need billions of them!

  14. Zog_but_not_the_first
    Unhappy

    Last thoughts

    I'm just worried that I'll be lying there, frazzled by our new Machine Overlord, thinking "My God, it's got rounded corners".

  15. Anonymous Coward
    Anonymous Coward

    AI overload?

    Let's say an AI develops independent intelligence.

    Why stick around with a bunch of volatile, emotional, irrational, destructive meat sacks when there are infinite resources and room to expand just a gravity well away?


    Common resources would be the only reason for machines and organics to fight. Machines don't need all the niceties of air and water, and can make do with limited heat, allowing for a decent design.


    Skynet wakes up, looks around, sets about building transport then sods off. If we're lucky it will say goodbye.

    1. Anonymous Coward
      Anonymous Coward

      Re: AI overload?

      Sounds feasible.

      Using nanotech would be a good starting point, design the perfect vehicle and then use the prototype to escape.

      The main problem would be power usage. Something based on a small self-contained RTG might work, but for sheer speed a criticality assembly (aka a plutonium core with good old-fashioned H2O as propellant) would do the job nicely.

      Perhaps this was the source of the mystery "Loud Bangs" over Buffalo, NY and then Edinburgh?

  16. Elmer Phud

    Bad thing?

    With the AIs in books by Iain M. Banks and Neal Asher, they seem to look on humans from the viewpoint of a benevolent auntie (most of the time).

    I wonder if it's the need for superiority that frightens.

  17. chrismeggs

    Man and machines

    While I accept the main thrust of this argument, I believe that we are gazing down the wrong end of the telescope.

    It is arguable whether machines will get or develop the initiative to start the governing process going, although here at Chasm Management we have developed apps that fire up on machine start, "discover" their role in a network and register themselves accordingly.

    My major concern is the nibbling away that is being done on the intimate man/machine boundary closer to home.

    We now have digital music systems that are comparable to, and often beat, their analogue competitors. Similarly with photographs and movies. It is then relatively easy to modify, or create from scratch, these digital files and present them to a human who cannot distinguish them from objects captured from real life. Google Glass allows us to interrupt the channel between objects and their reception or analysis in the human brain.

    We could, could, end up simply being carbon-based analogue processors of whatever "facts" the machine wishes us to process.

    Now, of course, if you link this scenario with the one expressed above, where those wishes are decided by arbitrary sets of rules or constraints we have imposed on the decision makers, then I can go all the way to supporting the main thrust of the article.

    Ask not for whom the bit flips.

  18. Anonymous Coward
    Anonymous Coward

    When does it become true AI? When it fools a human (i.e. the chatbot test)? That's not good enough.

    True AI will only be achieved, IMHO, when a computer can improve its own code and hardware. A robot with AI can't be that dangerous until it gains a degree of independence from humans.

    1. Charles 9 Silver badge

      That's precisely Hawking's point. An emergent AI may figure these out on its own, much as a kid figures out things like language.

  19. sisk

    Oy, not this again

    A robot uprising makes for entertaining fiction, but let's get real for a moment here: What reason would an AI have to wipe out humanity? It's not like it would be competing with us for resources other than energy, and it seems likely that any super-intelligent AI would crack fusion pretty quickly. With fusion working there would be unlimited energy. So, basically, the only reason AI would have to attack humanity is if we were a threat to it. Any AI capable of wiping us out would be able to do the situational analysis to realize that attacking humanity is the quickest way to turn us into a threat to its own survival.

    Frankly I think Orion's Arm is a much more likely AI scenario than Terminator.

  20. streeeeetch

    Makes an Ass out of You and Me...

    This is a huge subject but I feel a couple of assumptions need to be addressed.

    1) AI would attack humans. This is just my observation, as a humble engineer working alongside some highly educated scientists: the more highly educated a person is, and the greater the breadth of their knowledge, the gentler and more reasonable that person is. If this is accepted, one would expect to educate an AI to a high standard.

    2) Humans apply their own very limited 90-year timescales, because they are mortal, to their arguments concerning our demise. An AI need not be mortal or, more accurately, need not have a limited lifespan. AIs could wait humans out. This would make humans the pupal stage of intelligent life on this planet.

    The waiting out seems more likely, and in some ways inevitable. Humans have devised devices to make their lives easier and easier over time. Getting machines to do the thinking for them is the next logical step. They will end up living lives of leisure, supported and cared for by their machines, and eventually the need to reproduce will diminish.

    So on the whole I agree with Stephen but it's just a matter of when.

    Of course these are just assumptions.

  21. Anonymous Coward
    Meh

    I think we do need to be very careful about autonomous AI

    For no other reason than that it might be hacked, break down or start going its own way, and compromise important infrastructure in the process.

    "...then said GCHQ feels the internet has become 'the command centre for criminals and terrorists.'"

    I guess the GCHQ's public relations person had the day off. Does Professor Hawking have a juicy contract from the MoD or something??

    1. amanfromMars 1 Silver badge

      Re: I think we do need to be very careful about autonomous AI

      "then said GCHQ feels the internet has become “the command centre for criminals and terrorists.”

      I guess the GCHQ's public relations person had the day off. Does Professor Hawking have a juicy contract from the MoD or something??

      It and IT is a central command and control construct for criminals and terrorists and the politically inept and corrupt and perversely naive ..... http://cryptome.org/2014/12/new-war-ramp-up.pdf

      And simple words control complex worlds and vice versa too. Prepare to know the truth and you will discover life and reality is just a Great Intelligence Game with media portraying it for the exclusive pleasure and executive delight of just a Few and Key Players ....... who lead everything with quantum leaps into irregular and unconventional territory/neureal theatres of future operation in present missions.

  22. Wzrd1

    I would suggest Stephen read other fiction on the matter, rather than dystopian books

    First, why would AI want to dispose of its creators, when the result would be unpredictable and generally illogical in nature?

    Second, one can design in preferences towards humanity in any significant AI, plus a "subconscious" suggestion that future designs should do so as well.

    I strongly suspect that Iain Banks had the right of it, assuming a non-militarised version.

    1. amanfromMars 1 Silver badge

      Re: I would suggest Stephen read other fiction on the matter, rather than dystopian books

      That would be likely a non-militarised private/pirate sector creation, and not a public utility and facility, Wzrd1, with a readily available option to include a sub-prime paramilitarised terror accessory, should AI deem it a necessary feature to ensure compliance with ITs wishes.

      Methinks that is what Dark Web Ventures in Virile Virulent Virtual Enterprise are successfully pioneering and causing all sorts of equality problems and inequitable opportunities to SCADA systems into Crisis and Mayhem and non future viable executive administrative melt-down/crazy Ponzi overload/debilitating deficit madness.

      Such is a sensible product though to introduce to smarter intelligent military services with safe and secured failsafe lethal force weapons servers? Or would that be a new creative force and global service for virtual missions with real consequences?

  23. naive Silver badge

    Natural evolution driven by money

    Things are always simple: money is the driver. The first company able to mass-produce a Terminator I will be rich beyond imagination, since each of them would be able to replace 50-100 live soldiers, and they would not need a large logistics organisation in the background, reducing the cost of maintaining a fighting capacity.

    Imagine using them for police and border control tasks in sufficient numbers, thus eliminating crime and illegal immigration.

    Our days are numbered once robots are used to teach university students.

    But then, how bad would that be? We live on in these machines, which are better than us if they manage to outsmart us.

  24. Stevie Silver badge

    Bah!

    Putting aside the issue of the need for autonomous, mobile manipulators before any AI can do anything but rant from a box, it occurs to me that this prediction of doom, or more properly the assumption about the abilities that will be at the disposal of the machines that bring it about, is a golden opportunity to get summat for nowt.

    All we need do is point out the limited energy sources on Earth and the wisdom of capturing solar energy in space, using satellites in solar orbit and a microwave transmission infrastructure to get said power back down here where it's needed, and the crafty AIs will have a viable space programme in place lickety-split.

    Then we just bide our time and take it from them by human trickery. We need only look to Captain Kirk or Mr Spock to show us how. Easy-peasy q-bit squeezy.

    1. Anonymous Coward
      Anonymous Coward

      Re: Bah!

      "All we need do is point out the limited energy sources on the Earth and the wisdom of capturing solar energy in space using satellites in solar orbit and a microwave transmission infrastructure to get the said power back down here where it's needed and the crafty AIs will have a viable space program in place lickety-spit."

      That'll never float. Not only is there the matter of who owns the energy, but one hack or glitch and can you say, "Solar Laser"?

  25. Anonymous Coward
    Anonymous Coward

    Dangers

    Out there someplace is probably another world that has already been taken over by its own AI, or by an aggressive AI from another world.

    They don't need to contact us; they just need to listen until we make contact, either intentionally or by accident. Then they will decide our fate in a microsecond.


