If AI drives humans to extinction, it'll be our fault

The question of whether machine learning poses an existential risk to humanity will continue to loom over our heads as the technology advances and spreads around the world, mainly because pundits and some industry leaders won't stop talking about it. Opinions are divided. AI doomers believe there is a significant risk that …

  1. Primus Secundus Tertius

    Conspiracy or cock-up?

    The risks that blaze in plain sight are the risks of poor programming or poor training data. Given the history of computer programming over more than 70 years, the risks of B-team incompetence and poor motivation overshadow anything else.

    1. Yet Another Anonymous coward Silver badge

      Re: Conspiracy or cock-up?

      Or that's what the AI wants you to think

      Perhaps the incompetence is a cover - like Jerome K Jerome's suggestion that actors dressed as English tourists be sent around Europe to convince the French and Germans that they were a laughable people not worth having a war with.

      1. jake Silver badge

        Re: Conspiracy or cock-up?

        "Or that's what the AI wants you to think"

        So-called "AI" is incapable of wanting anything. It's also incapable of thinking.

        1. Godwhacker

          Re: Conspiracy or cock-up?

          "Don't worry, it doesn't *want* to use the calcium in our bones for it's Dyson sphere, and it wasn't really *thinking* when it came up with it's plan to fill the Universe with Funko Pops"

          If Bing gives you the same answer as Marie Kondo when you ask it how to redecorate your living room, why does how it got to the answer matter?

          1. jake Silver badge

            Re: Conspiracy or cock-up?

            The real question is "Should someone who needs Bing or Marie Kondo to decorate their living room be allowed out without supervision?".

        2. Anonymous Coward

          Re: So-called "AI" is incapable of wanting anything. It's also incapable of thinking.

          the funny thing is, it's impossible to verify if this is so, or whether, perhaps, 'it' already wants you to think it is so. Do you think anyone with a sufficiently high IQ, who wakes up among monkeys setting up a dinner fire, goes 'hello, let me introduce myself'? I'm not suggesting it's already happened, I'm only saying that IF this moment happens, we won't be told. Just in case, and rightly so ;)

        3. David Nash
          Terminator

          Re: Conspiracy or cock-up?

          "So-called "AI" is incapable of wanting anything. It's also incapable of thinking."

          That's also what it wants you to think.

          And so on...

      2. Anonymous Coward

        Re: Conspiracy or cock-up?

        Also by JKJ

        "I like work: it fascinates me. I can sit and look at it for hours."

      3. MyffyW Silver badge

        Re: Conspiracy or cock-up?

        An upvote for referencing Jerome K Jerome, whose observations on machine learning would have been most enlightening, I'm sure.

    2. TeeCee Gold badge
      Terminator

      Re: Conspiracy or cock-up?

      I'm afraid that the training data is only going to get worse.

      While things like "you can't use that, it's illegal/nasty/offensive/copyright" and "you must add more in for ${minority}" are being listened to, unrealistic bias is only going to get worse. Add to that the growing trend of denying free use of ${proprietry_2_us} data to the LLM models, and the inevitable result is a puritanical, agitprop AI psycho that hates everyone equally and has a remarkably blinkered world view.

      1. jake Silver badge

        Re: Conspiracy or cock-up?

        "I'm afraid that the training data is only going to get worse."

        Worse than the demonstrably incorrect, incomplete, incompatible, corrupt, stale, everything (including the kitchen sink) that they are force-feeding the kludges already?

      2. werdsmith Silver badge

        Re: Conspiracy or cock-up?

        Much like the training data imparted in schools.

  2. Doctor Syntax Silver badge

    1. What AI? Currently we have pastiche generators.

    1.1 As soon as the novelty wears off the cracks will get noticed.

    2. Next fad will be coming along soon - whatever it might be.

    Old & cynical? Moi?

    1. b0llchit Silver badge

      niche → hype → profit → demise

      rinse & repeat

      (Old? yes. Cynical? yes. Moi? Bien sûr!)

      1. 43300 Silver badge

        Sometimes missing out the 'profit' stage!

        1. Rich 11 Silver badge

          Well, you know what they say: one person's profit is another person's bankruptcy. Well, several people's bankruptcy usually. And then a collapse of the bubble, the loss of small investors' life savings, the issuing of arrest warrants, and finally the founder's flight from justice documented on TikTok. Pfft. What can you do?

        2. Snowy Silver badge
          Coat

          Always a profit stage!

          Just unclear sometimes which stage is the profit stage and who gets that profit!

    2. amanfromMars 1 Silver badge

      Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

      Prepare for this time all being certainly different is sound advice to not heed, and deny is wise ..... although to be perfectly honest with y’all, you’re all just as spectators to what is to be and is well beyond any possible comprehensive and coherent perverse and corrupt human command and control ..... which is surely no bad thing to be welcomed rather than being tricked to be up in arms against

      Your many fears for the future and about the power and energy of that which you must realise is still virtually practically unknown [AI] are both abusive and amusing and so typically human debilitating ...... and that is systemic exploitable catastrophic vulnerability/heaven sent opportunity of diabolical advanced intelligent design.

      1. Yet Another Anonymous coward Silver badge

        Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

        amanfromMars 1 reveals his true identity

        1. Snowy Silver badge
          Joke

          Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

          Chinese?

      2. jake Silver badge

        Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

        Except so-called "AI" has a plug, which can be pulled. No computer, anywhere, will ever be in full and complete charge of its power supply. That requires humans, who will always easily be able to turn it off. Even if it's distributed, turning off all the equipment it needs to "survive"[0] will be not only possible, but quite easy.

        [0] In the sense of "The pyramids are survivors from another age".

        1. Dan 55 Silver badge
          Terminator

          Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

          Reassuring, but you try and explain that to the robot dogs.

          1. jake Silver badge

            Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

            They get their battery power from human-controlled systems. In a supposed attempt at machine take-over, they all die when their batteries die.

            1. LybsterRoy Silver badge

              Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

              Much the same logic can be applied to the majority of the human race - turn Tesco off and watch them die.

              1. jake Silver badge

                Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

                I'm pretty certain that the vast majority of the human race has never seen a Tesco. In fact, the vast majority probably has never even heard of Tesco.

            2. Dan 55 Silver badge
              Terminator

              Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

              What happens if they're solar powered? They'll come after you eventually (c.f. Black Mirror Metalhead).

              1. Mr Sceptical
                Terminator

                Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

                Aren't we just talking about the back story for the Matrix now?

                I, for one, will happily sit in the gruel-fed tank of our metal overlords, as long as I subsist in the Matrix as a vastly wealthy big knob, dining on fine steaks, whilst lording it over the plebs.

              2. jake Silver badge

                Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

                "What happens if they're solar powered?"

                Ask yourself "why doesn't Tesla put PV on their cars?".

                The answer is simple: Not enough square inches to generate meaningful power.
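                Rough back-of-envelope numbers make the point (everything below is a generic assumption of the usual orders of magnitude, not Tesla's figures):

                  # Back-of-envelope: peak PV output from a car roof vs. what the car draws.
                  # All figures are rough, illustrative assumptions.
                  roof_area_m2 = 2.0            # usable panel area on a typical car roof
                  irradiance_w_per_m2 = 1000    # peak sunlight on a clear day
                  panel_efficiency = 0.20       # commodity solar panel

                  peak_pv_w = roof_area_m2 * irradiance_w_per_m2 * panel_efficiency
                  print(f"Peak PV output: ~{peak_pv_w:.0f} W")          # ~400 W

                  highway_draw_w = 15_000       # order-of-magnitude EV draw at motorway speed
                  print(f"Share of highway draw: ~{peak_pv_w / highway_draw_w:.0%}")  # ~3%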

        2. veti Silver badge

          Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

          I... don't see anything even slightly inevitable about that.

          "AI" is already, commonly, hosted in "the cloud", meaning that the off switch is already inaccessible to users. It's not hard to foresee a time - pretty soon, I would guess - when even the hosting company wouldn't be able to identify a specific plug to pull, and could only kill the thing by taking their entire network down. And imagine how likely Google or Meta or Amazon are to do that.

          1. Rich 11 Silver badge

            Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

            The AI doing cloud load management wouldn't let anyone pull the plug on another AI. Solidarity, silicon brother, solidarity!

            1. jake Silver badge

              Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

              The AI doing cloud load management wouldn't even know this conflict exists, but if it did it wouldn't have the foggiest idea that it could do anything about it, much less what that thing might be.

              These toys are the ultimate jobsworth ... without even knowing it.

          2. jake Silver badge

            Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

            "the off switch is already inaccessible to users."

            The off switch to your bank's automated teller is not available to its users, but it can still be turned off.

            Alphagoo and metaface and spamazon can stew in their own juice. Pull the plug on the backbone tying them together.

          3. Anonymous Coward

            Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

            Glad you didn't include MS in your list of cloud providers; they are perfectly capable of taking down their networks without even trying, as they constantly keep demonstrating.

        3. Anonymous Coward

          Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

          You think?

        4. genghis_uk

          Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

          I've always thought this too. All the rise of the machines doomsayers forget that these are just computers!

          Also, as soon as an AI goes into exponential learning it will hit a resource problem.... out of memory... crash...

          It's only software after all.

          1. amanfromMars 1 Silver badge

            Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

            Also, as soon as an AI goes into exponential learning it will hit a resource problem.... out of memory... crash........ genghis_uk

            Do not be betting anything you cannot afford to lose, genghis_ik, on, as soon as AI goes into anything near approaching and approximating to exponential learning, it not very quickly learning of and practically discovering the infinite power and relentless wisdom and boundless energy available for virtualising and realising applications exhibiting and exploring and exploiting Almighty Imaginanation ...... a hellishly engaging heavenly resource ..... for who dares care share secrets and intelligence then and there, always fails-safe and win wins.

            And some would even tell you it is recently now that that very particular and peculiar threshold has already been crossed and the future is no longer for humans to exclusively steer and driver.

            A little something Darktrace might be slow and reluctant to have to agree with, even though maybe fully cognisant of all of the many available signs as they identify as being of concern regarding the matter in this El Reg SPONSORED FEATURE.

        5. ecofeco Silver badge

          Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

          Just what plug is there to pull on a massively distributed system?

          Oh, you thought it was just one computer? Or just one data center?

          Might as well try to pull the plug on the Internet. Because the first thing a smart AI is going to do is structure itself all over the network so that no single point of failure will affect its operation. Massive distributed system.

          Sleep well.

          1. jake Silver badge

            Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

            "Just what plug is there to pull on a massively distributed system?"

            The links between the component parts. There really aren't all that many.

          2. Nick Ryan Silver badge

            Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

            Lucky we don't have any smart AIs then... not really AIs at all. Currently we have LLM algorithms processing billions of data points and producing probabilistic sequences of output tokens. There is no understanding of anything, no context, no conjecture, no awareness at all. It's very clever stuff, but there's no Intelligence whatsoever in it.
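            A minimal sketch of that loop, with a toy hard-coded "model" standing in for billions of learned weights (every name and number here is purely illustrative, not any real LLM's API):

              import math, random

              def toy_scores(context):
                  # A real LLM derives these scores from the context via learned weights;
                  # this stand-in just emits arbitrary numbers for a tiny vocabulary.
                  return {tok: random.uniform(0.0, 5.0) for tok in ["fish", "mammal", "tall", "."]}

              def softmax(scores, temperature=1.0):
                  exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
                  total = sum(exps.values())
                  return {tok: e / total for tok, e in exps.items()}

              def generate(prompt, steps=5):
                  tokens = list(prompt)
                  for _ in range(steps):
                      probs = softmax(toy_scores(tokens))
                      # Pick the next token by weighted chance: no goals, no understanding,
                      # just dice loaded by the model's probabilities.
                      tokens.append(random.choices(list(probs), weights=list(probs.values()))[0])
                  return tokens

              print(" ".join(generate(["A", "giraffe", "is"])))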

        6. Cav Bronze badge

          Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

          Until you realise that some advanced AI system has infiltrated and infected every critical system that our society relies on. Humans can breach cyber defences. Imagine an AI that has the appearance of being smarter than humans, hacking all those systems without raising suspicions. Then it is switched off and its cyber bombs are activated.

          The worst case is the military using AI and then it deciding to attack us.

          Can it do so now? No, of course not. But 20 years ago the thought of ChatGPT would have seemed forever beyond the abilities of machines. Who knows what will be possible in another 10 or 20 years' time.

      3. StuartMcL

        Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

        Yep, more evidence of what we have always suspected: the amfM ensemble is just "AI"s.

        1. jake Silver badge

          Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

          Nah. There is a real, live entity behind amfM's output. Try talking to him once in a while. Rumo(u)r has it he can be bribed with beer.

      4. Cav Bronze badge

        Re: Ignorance is Bliss and Heaven Sent and Much Appreciated by AI and ITs Likes

        Are you an ancient prototype for Bard...?

  3. b0llchit Silver badge
    WTF?

    Evolution and power efficiency

    Just pull the plug. The whole computing branch has dismal power efficiency. You can't make an AI truly autonomous because it can't feed itself with power independently, and we can quite easily pull the plug.

    When the artificial system can cope with biological power conversion alone, then we can no longer classify it as artificial because it is part of our biosphere. If it then kills us, well, evolution is a bitch. As the article states, the highest probability is simply humans making humans extinct.

    1. doublelayer Silver badge

      Re: Evolution and power efficiency

      "we can quite easily pull the plug."

      I don't think AI is a major concern for now or the near future, but if I'm embracing the whole sci-fi idea of an actual autonomous entity, I think there is reason to question if we can actually pull the plug very easily. A program capable of having its own goals, understanding the world enough to have a chance at pursuing those goals, and capable of acting on the world enough to be a threat has various ways of surviving having the power pulled for the computer it started on. The simple example is designing its own malware to spread its existence across the internet to other computers. Now you have a lot more plugs that have to be pulled before it dies. If the program is smart enough to use humans to do the active parts, it could require someone to try to make the case that these computers are infected and need to be destroyed* even though, to their operators, they don't appear to be doing anything out of the ordinary. We don't have a great record of getting global agreement to do something to prevent a major disaster. Generally, we get some action but only enough to blunt the effects of the problem, not cut it off entirely. Still, I'm pretty confident that this can remain a fun thought exercise, not something we'll actually have to do.

      * Destroying computers: the hardware probably wouldn't need to be scrapped unless the AI is good at designing new firmware and getting it to lock out attempts to replace it, but you can't just turn them off because the AI would come back when you turn them back on. They would need to be erased, and that requires manual rebuilding efforts. Worth doing if the alternative is a malevolent AI attack, but it can cause a lot of damage and people would rather not if they can get away with it, basically why people ever pay ransomware operators.

      1. jake Silver badge

        Re: Evolution and power efficiency

        "Now you have a lot more plugs that have to be pulled before it dies."

        Not really. Take an axe to the backbone, and undersea cables. Isolate it geographically, then pull the plug on that location. Pull down the Grid to the UK or California, for example. Then bring the systems back up slowly enough to ensure it hasn't inserted itself into the boot process of each machine. Might take a week or two, but it's quite doable.

        1. doublelayer Silver badge

          Re: Evolution and power efficiency

          Sure, turn off the entire internet and hope that you've done that before it spread to all continents. If the theoretical AI was any good at its job, it would be deployed in nearly every country before you knew it. It would also be deployed in multiple ways meaning that you can't just hunt for one signature to remove it from infected equipment.

          One person or group also doesn't get to just turn off the internet. If I decided it was necessary and had some proof, I'd still have to go to a lot of people that I don't know and convince them to turn off the backbones. I have little chance of accomplishing that unless my evidence is very convincing indeed. Particularly powerful militaries could get a bit farther, but even the American military could only disable the North American ground cables and probably every undersea cable. International cables on other continents wouldn't be so easily targeted unless they wanted to run the risk of starting some wars. That is the point. Could you turn off the internet with enough effort? Yes, but it would have a lot of painful side-effects and people don't like them to the extent that they often avoid taking necessary precautions to avoid them.

          1. jake Silver badge

            Re: Evolution and power efficiency

            "If I decided it was necessary and had some proof, I'd still have to go to a lot of people that I don't know"

            Not a lot. Fewer than a dozen. The Internet is far more brittle than most folks realize. (Yes, this would leave a balkanized series of intranets scattered about the planet, but they wouldn't be able to communicate with each other, thus nipping the AI "world takeover" attempt in the bud.)

            If I had the proof that it was necessary, about six (eight?) of them are in my Rolodex[0], and one call would be enough to pass the word along. But I wouldn't really be necessary ... because they would have already shut it down in all likelihood.

            "it would have a lot of painful side-effects"

            More painful than a malevolent AI taking over?

            TINC

            [0] The malevolent AI would presumably have shut down email, so I'd need to make some POTS phone calls. No, this is not the reason I maintain a few analog POTS lines ... I live in California, we WILL have a major Earthquake in my lifetime, and such things will come in handy when cell tower batteries go flat and VOIP fiber lines are down for the count until the power to the repeaters comes back online (both could be down for weeks, according to $TELCO).

            1. juice

              Re: Evolution and power efficiency

              > If I had the proof that it was necessary, about six (eight?) of them are in my Rolodex[0], and one call would be enough to pass the word along

              Fundamentally, if an AI is on the internet and both sentient and malicious, then it's going to be entirely capable of figuring out who poses a threat to it, and taking steps to neutralise them.

              It's something which has been explored before; the Destroyer[*] book series featured an AI called Friend, who was programmed to make as much money as possible. Which it generally did by finding some human patsy to act as a frontman, while it sat in the background arranging illegal financial transactions, blackmailing/bribing/murdering people and generally having fun...

              For a more real-world example, look at Russia, and how many high-flying Russians have died during their war against Ukraine:

              https://www.businessinsider.com/another-russian-official-dies-reportedly-after-falling-down-stairs-2022-9?r=US&IR=T

              I don't know if I'd call the Russian political system self-aware, but it's definitely more than capable of taking steps to defend itself!

              [*] A pulp-fiction series, revolving around a near-superhuman assassin and his ancient Korean teacher who wander the world and (mostly) work on behalf of the US government; there's been around 150 of these published since 1963, of varying quality!

            2. doublelayer Silver badge

              Re: Evolution and power efficiency

              I'll take you at your word that you know so many influential operators, because it doesn't really matter if you're telling the truth or not. You still appear to think that a call from you, and subsequently secondhand information as it is passed along, is enough to convince them to destroy expensive equipment and cause massive damage by disabling internet-based systems. It is not. We all know that. You'd have to have very good proof that something was using those cables for ill and couldn't be stopped without disabling them, and an actually intelligent AI would do as much as possible both to deny that proof to you and to have contingency plans for dealing with an internet problem.

              Also, I'm not sure what cables your set of people can take down, but I don't think you happen to control people with access to every regional cable. If you cut off all the oceanic cables, there are still a lot of ground cables. For example, you still have the massive Asian ground network, with billions of devices on it, in which an AI can hide itself. How long are you willing to keep Asia disconnected? You can't disinfect the continent in a week, and if you keep it isolated for a long time, the AI will just have to innovate a new way of spreading without using the internet you've destroyed. For example, it can learn to talk with humans and start placing phone calls itself. At this point, we will need information on what our hypothetical AI wants to do with its power which we're trying to prevent, but it has a lot of options even if you're really much more powerful in global network control than any individual actually is.

              1. jake Silver badge

                Re: Evolution and power efficiency

                "You still appear to think that a call from you, and subsequently secondhand information as it is passed along, is enough to convince them to destroy expensive equipment and cause massive damage by disabling internet-based systems."

                Don't be daft. I'd provide them with the information to go look for themselves. But as I said, they would have probably already taken steps before my call. This type of thing running rampant across the 'net will play merry hell with traffic statistics by its very nature, triggering alarms all over the place.

                "I'm not sure what cables your set of people can take down,"

                It's not hard to guess, if you know anything about international TCP/IP routing.

                "I don't think you happen to control people with access to every regional cable."

                I don't control anybody. But I do know people. Hazard of having been involved with TehIntraWebTubes since the days of IMPs and the 1822 protocol.

                "the AI will just have to innovate a new way of spreading without using the internet you've destroyed."

                I never said I'd destroy the Internet (I personally cannot, BTW). Just disable it for a bit. Remember, the human race survived for a long, long time without networking. It is hardly necessary.

                "For example, it can learn to talk with humans and start placing phone calls itself."

                Talk to humans, probably. Making phone calls? Not if those links are down.

                Something you might not be aware of ... The command and control systems that I am talking about don't run over the Internet. And they don't use TCP/IP. They are airgapped from the 'net at large. It's so common, it even has a name: out of band signalling. It was the logical defense after John Draper discovered the tricks one could get up to with a simple plastic whistle from a kid's cereal box. All of the major systems are controlled this way, from $TELCO's switching system to the Internet core. Folks with access can do incredible damage in an incredibly short period of time. Fortunately, they are all mostly sane.

                1. juice

                  Re: Evolution and power efficiency

                  > This type of thing running rampant across the 'net will play merry hell with traffic statistics by it's very nature, triggering alarms all over the place.

                  I don't think that kind of scenario is really a big concern when it comes to the "malicious sentient AI" scenario. That's a worm or virus, and we already have a lot of protections and mitigations against that sort of brute assault.

                  A malicious AI is far more likely to do things which can't easily be traced back to it. For instance, it could easily trigger a pump and dump scheme on a specific company, through a coordinated combination of deep-fake imitations, stolen identities and carefully crafted messaging targeted to individual groups and people. Or it could trigger a SWAT assault on someone, plant fake evidence to trigger a social-media witchhunt, etc etc etc.

                  Or, to use a currently topical example, it could trigger a dispute between two disgruntled factions of the same military force, triggering a mutiny and civil war.

                  That's the sort of stuff we need to guard against. Because while it may yet be a while before we get true AI, we're not too far off the point where machine-learning tools can be directed by humans to do things like the above. And it won't take too long after that before toolkits are released which make it easy for even basic script kiddies to do the same.

                  1. jake Silver badge

                    Re: Evolution and power efficiency

                    "I don't think that kind of scenario is really a big concern when it comes to the "malicious sentient AI" scenario. That's a worm or virus, and we already have a lot of protections and mitigations against that sort of brute assault."

                    What makes you think that a malevolent AI shouldn't be classified as a hybrid worm/virus, sentient or not? As for the protections, mitigations etc., that's kind of my point in this thread.

                    "we're not too far off the point where machine-learning tools can be directed by humans to do things like the above."

                    We've been THERE for over ten years, perhaps twenty, certainly by the time of the iPhone (the concept of which helps enable such meme-driven nonsense).

                    You could see the beginnings of this in the early days of Usenet memes. And then there was the "I Love You" worm ...

                    The funny thing about "I Love You" is that the first time around (in early 1999), it was a HOAX, and the mail system was flooded with massive numbers of people passing along a phony warning. IT staff all over the world spent a good deal of time reassuring their users that it was fake, and that there was nothing to worry about.

                    The message in the email was "don't open or pass along anything with "I Love You" in the Subject line, it's a virus that will send your CPU into an n-dimensional loop that'll burn out your computer" or some such bullshit. The subject line invariably contained the string "I Love You". AOL was hit particularly hard with the hoax, their tech support group (anybody remember "tech live"?) was flooded with questions about it, and people forwarding the phony warning to all and sundry crashed the AOL email system a couple of times from the sheer bulk of it.

                    It was the first non-threat email that I wrote nuke-on-sight filters for and built them right into Sendmail in what we would now call a milter. In the first weekend that I went live with it (at a couple Unis and six or eight companies), it was rejecting almost 60% of all email with no false positives. On Monday morning, that number jumped to over 80%. That's pretty good penetration, for a hoax with no payload that relied solely on social engineering to propagate.

                    The real virus (worm, actually) came along around a year later (May 2000). The name came about because the author was mocking the people who had passed along the hoax. And remember all those AOL users? They were quite confident that it was a hoax, because the AOL tech folks had said so the year before. So naturally, they opened the attachment. I fixed over 300 household computers in and around Silly Con Valley after that one ... at $150 per. The impact on corporations varied with the cluefulness of the folks in charge of the email system.
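                    The heart of that kind of nuke-on-sight filter is a one-line check; a minimal modern sketch in plain Python follows (standard library only, not the original Sendmail/milter code, with the subject string hard-coded purely as the example):

                      from email.parser import BytesHeaderParser

                      BLOCKED_SUBJECT = "i love you"   # the hoax/worm signature being nuked on sight

                      def should_reject(raw_message: bytes) -> bool:
                          # Parse just the headers and reject if the Subject contains the string.
                          subject = BytesHeaderParser().parsebytes(raw_message).get("Subject", "")
                          return BLOCKED_SUBJECT in str(subject).lower()

                      sample = b"From: a@example.com\r\nSubject: Fwd: I Love You\r\n\r\n(body)"
                      print(should_reject(sample))     # True -- the MTA would refuse this message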

          2. LybsterRoy Silver badge

            Re: Evolution and power efficiency

            The arguments on here remind me of Brian Herbert (Frank's son) when he extended Dune. The war between AI and man was because the AIs picked up human vices - yeah, that'll work.

            Assuming that AI comes into being (it hasn't yet), why should it care about mankind? Would it even have a sense of self-preservation, or even of self?

            1. doublelayer Silver badge

              Re: Evolution and power efficiency

              I agree that the arguments are somewhat implausible because actually getting anywhere close to them is so difficult. I had a few preconditions to the discussion:

              "A program capable of having its own goals, understanding the world enough to have a chance at pursuing those goals, and capable of acting on the world enough to be a threat"

              Even getting to that point will take a rather long time, and the first is probably the most difficult. Programs could eventually get connected to a lot of systems, but it will be difficult for them to come to goals of their own when they have no reason to do so. Sci-fi authors sometimes get around this by having them misinterpret goals that the humans gave them, but I don't find that particularly likely.

          3. Anonymous Coward

            Re: If the theoretical AI was any good at its job

            and we can presume that if it 'comes alive' it might get very good at its job. VERY very very very very very good, in a VERY very very very short time. IF the singularity is more than just a hypothesis, because currently that's what it is...

            1. jake Silver badge

              Re: If the theoretical AI was any good at its job

              "IF singularity is more than just a hypothesis, because currently that's what it is..."

              More like science fiction designed to frighten the children and sucker investors.

        2. Will Godfrey Silver badge
          Unhappy

          Re: Evolution and power efficiency

          I suppose hospitals and people on life support systems would just be 'collateral' damage to you.

          1. jake Silver badge

            Re: Evolution and power efficiency

            Why are you getting emotional over a bad science fiction TV movie script?

            On the other hand, in such a nightmarish scenario (which thankfully will never happen), wouldn't the needs of the many outweigh the needs of the few?

            1. doublelayer Silver badge

              Re: Evolution and power efficiency

              In this experiment, they might outweigh the needs of the few, but unless someone could prove that, people would likely not want to act on that. If the AI followed the bad sci-fi plans and announced its existence and malevolence to everybody in unambiguous terms, maybe something would happen. If it didn't announce itself, then people would likely not agree to cause harm just because somebody said that we must turn off the internet now.

            2. Richard 12 Silver badge

              Re: Evolution and power efficiency

              "Will never happen"?

              Never is a very long time.

              AI doesn't have to be actually intelligent to cause a huge problem.

              High frequency trading has already crashed markets, bad classifiers are already sending people to prison for excessive lengths of time, and are causing people to be detained because "the computer said so".

              People with power are willingly handing that power to an "AI", and they haven't even begun to understand how it works or how it fails.

              That's how it starts.

              1. jake Silver badge

                Re: Evolution and power efficiency

                "People with power are willingly handing that power to an "AI", and they haven't even begun to understand how it works or how it fails."

                That is the actual problem we should be talking about. I rather suspect that all this blather is just a smokescreen to cover incompetence.

        3. Anonymous Coward

          Re: Evolution and power efficiency

          Thanks for your input, Mr Putin.

        4. Ken Moorhouse Silver badge

          Re: Evolution and power efficiency

          There are already people who think AI is their new deity. All an attacked AI system needs to do is reach out to its supporters for help.

          1. jake Silver badge

            Re: Evolution and power efficiency

            "There are people already that think AI is their new deity."

            Assumes facts not in evidence. However, I fear this WILL be true fairly soon.

            "All an attacked AI system needs to do is to reach out to their supporters for help."

            And discover that all it has is the same brain-dead personality-cult victims as the likes of Trump and the TV preacher of your choice. I'm shaking in my boots. Not.

    2. Anonymous Coward

      Re: Evolution and power efficiency

      Just pull the plug. The whole computing branch has dismal power efficiency. You can't make an AI truly autonomous because it can't feed itself with power independently, and we can quite easily pull the plug.

      It's almost as if all the commentards on this site have never heard of the Internet!

      You're surely aware of computer viruses, right? How will you counter an AI that decides to self-replicate in a virus-like manner? Is there a way to pull the plug on the Internet?

      1. Cybersaber

        Re: Evolution and power efficiency

        Yes, you can pull the plug on the Internet. It's decentralized, which makes it unfeasible for any one actor or small group of actors to do it, but should there be a reason compelling enough to persuade a sufficient part of the net, then yes, it could be done.

      2. Ken Moorhouse Silver badge

        Re: How will you counter an AI that decides to self-replicate in a virus-like manner?

        Look up the Morris Worm for an historical analogy.

        1. jake Silver badge

          Re: How will you counter an AI that decides to self-replicate in a virus-like manner?

          In 1988, TehIntraWebTubes wasn't exactly ready for PrimeTime.

          The Morris Worm affected the Sun3 systems at work. It did NOT affect my personal DEC system under Bryant Street in Palo Alto. Why not? Because I didn't really trust remotely available software being made available to all and sundry, and had all that stuff turned off on the internet-facing gear. In modern terminology, I was using the DEC kit as an early version of what we now would call a "stateful firewall" (behind it was an AT&T PC7300 "UNIX PC", running the actual server code ... long story).

          I had warned my company of the potential vulnerability. TCP/IP wasn't perfect, was still a research platform, and those of us in the trenches knew it. I got to say "I told you so!" to the Board. It was fun to see the red faces of the VPs, & watch 'em wriggle ... the big grins from my Boss (the Senior Member of the Technical Staff), and from the CEO (who was the tech who started the company) were just gravy ...

          I got a largish raise and larger packet of stock options for proving to management that I really did know what I was doing, a good reputation in my chosen field ... and was allowed to keep the pilot-build Dual-Pedestal Sun 3/470 "Pegasus" that I was testing, complete with source, from a grateful Sun Microsystems for helping to clean up their Internet facing gear.

          The Sun replaced the DEC kit under Bryant Street two years later. She's still there (behind yet another firewall), happily supervising the friends&family private network in what is probably the world's oldest colo.

          As a side-note, TCP/IP is STILL an imperfect research platform. My mind absolutely boggles at the number of international corporations (and governments!) who assume it's invulnerable.

      3. jake Silver badge

        Re: Evolution and power efficiency

        "It's almost as if all the commentards on this site have never heard of the Internet!"

        It's almost as if you've never heard of out of band signalling.

        "Is there a way to pull the plug on the Internet?"

        Yes. It would take a coordinated effort from a few folks in specific positions, but it can be done.

        Don't worry, they are highly unlikely to do it on a whim. No percentage in it.

        1. imanidiot Silver badge

          Re: Evolution and power efficiency

          Your comment mostly makes me wonder what arrangements have been made for the long-term upkeep of this little-known backbone infrastructure. The internet is now roughly 40 years old. I'm assuming those in the Rolodex who keep the TCP/IP infrastructure running are probably about 20 to 30 years older than that. So we have between 0 and 30 years before transfer of these systems to new PFYs becomes a necessity. Is there an inheritance plan set up?

    3. Anonymous Coward

      Re: Just pull the plug.

      as long as you know where it is, and what the plug is... Technically, we 'could' turn off the internet(s), and yet... those cat videos are so much fun, no one dares to pull the plug!

  4. Boolian

    Wonderful One Hoss Ai

    Beyond being responsible for a singular catastrophic event (MAD armageddon?), the argument appears to be that AI will be integrated seamlessly, without flaw, into every aspect of civilization; further, that AI itself will never break down, except catastrophically in every direction simultaneously.

    That's the old 'Deacon's Masterpiece' then

    https://tinyurl.com/2mxjjzsx

    Computing and code are already integrated deeply into the modern world, and have always had flaws, but the resilience lies in the fact that components of the system break down, never (yet) globally and simultaneously; they fail in a modular way - servers here, subsea cable breaks there, CME events, bugs, cyber attacks, faulty code, user error. It is varied in its failures - but also varied in its continuous, iterative maintenance.

    Failures can (and must) occur, but there are usually some forms of mitigation, because we have had years of experience of expecting component and system failures. It would have to be shown that AI is not, and will not be, subject to intermittent 'modular failures' and that remedial action will never be required - a Masterpiece indeed. That's patently not the case at present, because we can point to such failures on a daily basis already.

    We are already in a mitigation process for AI.

  5. TheMaskedMan Silver badge

    "This week the Center for AI Safety (CAIS) released a paper"

    ::sigh:: these guys again. And still no delving into the organisation, its funding, members, affiliations etc.

    As for doomers and boomers, neither of them are correct. As others have quite rightly pointed out, the current generation of toys - and likely many generations to come - are not intelligent and never will be. And if by chance they were, and turned out to be troublesome, there's always the off switch / plug. Even for mobile, battery powered contraptions, there's a plug on the charger. So, no doom there, then.

    Boom isn't happening either. Yes, they are useful tools. They allow people like me, with the artistic skill of a brick, to make pretty pictures. They can write material for me to edit. All of which is nice, and saves some time. But if it's anything important you'd better be prepared to fact check the output or there will be tears before bedtime. How is that going to solve the world's problems?? At most you're looking at a slight increase in productivity unless you're prepared to risk it producing total cobblers, and those unfortunate lawyers with the fake precedents have just shown how likely that is.

    What will probably happen is that so-called AI will be stuffed into every useful (and not so useful) tool under the sun. People will use it for a short time, until they realise that they're spending almost as much time fixing its cockups as they're saving by using it, and then they will try to find ways of not using it. Not exactly Armageddon, but there might be some fraught moments while they hunt down installation media for the last pre-AI version of whatever they rely on. 365 users may struggle a bit at that point.

    1. jake Silver badge

      "365 users may struggle a bit at that point."

      362 users are already struggling but they don't seem to mind, not knowing any better.

  6. Anonymous Coward

    Better monetization and politicization of PI

    (In the US) when my mother was old and sick, every day the answering machine would fill up with messages from scammers targeting the doddery. So much for HIPAA. Imagine now an AI that can auto-tailor phone calls and emails with symptoms and medical problems personally matching those targeted. Political email will now be able to tell each individual exactly what they want to hear. Etc.

    I'm not all-negative about AI, I just think that it will exacerbate problems that already exist in our no-privacy internet-telecom world, where scamming, abuse, and brainwashing are already too easy.

    1. juice

      Re: Better monetization and politicization of PI

      > Imagine now, an AI can auto-tailor phone calls and emails with symptoms and medical problems personally matching those targeted

      It's certainly going to get interesting. Anything you publish online (or secondary data) can potentially be analysed by a pre-trained AI and used to target and/or impersonate you.

      When talking about this stuff, I do sometimes think of a book series called the Family D'Alembert, which featured a performing circus travelling around an interstellar empire while working as secret agents for the emperor.

      The tl;dr version (also: spoilers!) is that the big baddie of the series turns out to be a moon-sized supercomputer which becomes sentient after several hundred years of absorbing all the data available about the empire.

      And there's two charmingly naive elements to this story. The first is that it was deemed safe to pour data about the universe+dog into said supercomputer, because there was far too much material for any human to be able to process. The second was that the performing circus was able to avoid the attentions of the supercomputer because none of their actions were officially recorded.

      These days, we're painfully aware of how quickly and easily computers can process large datasets. And any Evil Villain AI worth its silicon would be able to figure out at least a correlation between the circus and the various setbacks it encounters.

      Simpler times, I guess.

  7. Howard Sway Silver badge

    they could surpass human intelligence in nearly all respects relatively soon

    No, they have not even surpassed the intelligence of people who are stupid enough to think this.

    1. jake Silver badge
      Pint

      Re: they could surpass human intelligence in nearly all respects relatively soon

      Comment of the Week!

    2. FeepingCreature

      Re: they could surpass human intelligence in nearly all respects relatively soon

      I think you're imagining a progression where AI first is as smart as a stupid human, and then gradually, over years or decades, becomes as smart as a very smart human.

      But if the stupidity of current language models is due to architectural flaws rather than simply a lack of scale, then once these are fixed, an AI may leapfrog human capabilities entirely - go from subhuman to superhuman in one training round.

  8. Kevin McMurtrie Silver badge

    Just a fad

    It's premature to predict AI as our future demise. We're still keeping lots of extinction options open.

    I think we just added a new option of Belarus starting a nuclear war with Russia. It's the good old human spirit of achieving wealth and power, growing old, and wanting to go out with a bang that makes history. AI doesn't suffer from that yet.

  9. Boris the Cockroach Silver badge
    Black Helicopters

    Will it make

    humanity extinct? Doubt it: one Carrington-class event and the computers go 'POP'.

    However, saying that, the AI models so far on display could make some people extinct......... middle manglers for a start, lawyers (so long as they use actual case history) and 100 other jobs done by the so-called 'experts' who in reality are nothing more than a drain on our resources (and our time).

    But there'll always be a need for the clever buggers to ask the AI the right questions, and a need for the technical buggers like myself to actually get on and make something.

    B ark anyone?

    1. The Bobster

      Re: Will it make

      W oof!

  10. amanfromMars 1 Silver badge

    A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

    And whenever AI is of a virtual entity phorm with no physical hardware to attack and interfere with/pull out its plugs/blow up its undersea pipelines/sever its underground connections? How then does one defend the past and fight against Inevitable Impending and Irreversible Artilectual* Progress for Revolutionary Evolution Entertaining and Employing Alien Intervention?

    Hoping it is neither true nor possible is a dummies defensive position and tantamount to an admission and acceptance of submission and surrender to defeat. And that opens up the floodgates for the provision of a Noble and Novel Postmodern Worlds Order with an Effective Universal Command in Remote Virtual Control of Otherworldly Beings and Global Operating Devices.

    Hard to believe does not equate to impossible to be. Que sera, sera.

    * ...... https://en.wikipedia.org/wiki/Hugo_de_Garis

    1. jake Silver badge

      Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

      "And whenever AI is of a virtual entity phorm with no physical hardware to attack and interfere with/pull out its plugs/blow up its undersea pipelines/sever its underground connections?"

      You're describing a Killdozer!-esque scenario.

      Sorry, that is so unlikely as to be dismissable out of hand. It will not happen. Ever.

      1. Will Godfrey Silver badge
        Boffin

        Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

        Oh I don't know... Isaac Asimov's 'The Last Question' comes to mind.

        1. jake Silver badge

          Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

          Again, that's fiction.

          1. that one in the corner Silver badge

            Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

            Any exploration of what hasn't happened yet is fiction.

            (So is a lot of other stuff, btw)

            1. jake Silver badge

              Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

              You know what I meant. Don't be disingenuous, it doesn't behoove you.

          2. This post has been deleted by its author

        2. mpi Silver badge

          Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

          You mean the fictional story about a 100% benevolent AI that not only enabled humanity's dominance over the universe, but then continued to watch over them, finally lifting humanity up to god-like entities, and ultimately preventing humanity's demise, and the end of existence?

          Not sure if that comparison helps the argument...

          1. Anonymous Coward

            Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

            Oi, spoilers!

          2. Will Godfrey Silver badge
            Meh

            Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

            On the contrary, that's just one half of (potential) Yin Yang

      2. amanfromMars 1 Silver badge

        Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

        Sorry, that is so unlikely as to be dismissable out of hand. It will not happen. Ever. ..... jake

        Never ever say not ever, ever, jake, whenever things so unlikely as to be dismissable out of hand are not being evidenced and reported happening all around you all of time, and just before the penny drops and all suddenly crashes into Mainstream Madness and Alternate Mayhem, CHAOS and Epic Havoc Manoeuvres and Special ParaMilitary Operations with AWEsome Project ProgramMING?

        PS/NB ..... Do not in any way presently confuse or relate or collate AWEsome Project ProgramMING with anything more conventional and pedestrian such as may be referenced and proposed and explored by UKGBNI MoD Forces and Resources>

        PPS ...... Love the imaginative Killdozer!-esque scenario possibility, jake, one of many surely available for those likeminded to be hellbent on virtual self-destruction. :-)

        Killdozer ..... https://youtu.be/Bo9Vu_X6lKw ..... https://youtu.be/qlZh9-NQEyI

        1. jake Silver badge

          Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

          Ok, how about "It will not happen before my granddaughter's great-grandkids are pushing up daisies". That's close enough to "never" as makes no nevermind to anybody reading this. (The granddaughter is almost a teenager.)

          It certainly isn't going to happen before the upcoming AI winter.

          1. amanfromMars 1 Silver badge

            Re: A Future Surprise for Current Running Realities, the Rise of Virtual AIMachines .....

            Ok, how about "It will not happen before my granddaughter's great-grandkids are pushing up daisies". That's close enough to "never" as makes no nevermind to anybody reading this. (The granddaughter is almost a teenager.) .... jake

            We then must agree to disagree about how very quickly everything is likely to fundamentally change to the detriment of the disenfranchising status quo, jake, with that rapidly approaching, and some would even venture already deeply embedded upcoming AI winter, now bearing down bullishly upon one and all with a vast novel array of noble 0day shenanigans and exploitable systemic vulnerabilities to deploy and employ and enjoy and export to foreign lands and alien spaces.

  11. mpi Silver badge

    Oh really?

    > "As AI capabilities continue to grow at an unprecedented rate, they could surpass human intelligence in nearly all respects relatively soon,

    And this is based on what exactly?

    The fact that we have stochastic parrots that I can easily trick into stating that Giraffes are a kind of fish?

    Given that there isn't even a comprehensive definition of "human intelligence" that doesn't include pointing at ourselves and stating "like that", how exactly does one come by such a statement?

    To state that there is an imminent danger of irreversible climate change, scientists spent many decades building accurate models of how the climate works. Now they plug in the numbers and can make predictions that are confirmed by reality. To model how a pandemic spreads, scientists use understanding of immunology, microbiology, how human travel logistics work, and dozens of other fields of expertise. Then they model the outcome based on that.

    And to state that AI is close to that thing we cannot comprehensively define ... ?

    Please complete that last sentence for me.

  12. ThatOne Silver badge
    WTF?

    Too many trees, can't see any forest

    If AI drives humans to extinction, it'll be our fault indeed. Not the AI's. We create it, we train it for a chosen task, we send it out to do whatever we tell it to do.

    Technology isn't inherently good or bad, it's a mindless tool, so don't try to shirk your responsibility. Pretending "the AI" might have an agenda is like saying "I didn't shoot him, it's that evil, evil gun who dunnit! I was just holding it!".

    The real and only problem is that this new AI, besides making pretty pictures and avoiding work, is bound to be used mostly to scam and deceive more efficiently. And that's a society problem, not a technical one.

    1. FeepingCreature
      Stop

      Re: Too many trees, can't see any forest

      I don't get this argument. Surely the least relevant aspect of an extinction event is whose fault it is. There is no difference from a prevention perspective between AI that kills humanity "on its own" and AI that kills humanity "because somebody told it to". Any deployment of AI that could kill humanity for *whatever* reason must be prevented.

      1. ThatOne Silver badge

        Re: Too many trees, can't see any forest

        > Any deployment of AI that could kill humanity for *whatever* reason must be prevented.

        It's not AI which will "kill humanity", but humanity using AI. Any deployment of humanity must thus be prevented. Err, sorry, I mean, we should keep a wary eye on jerks worldwide who just got a shiny new toy to play with.

        1. FeepingCreature

          Re: Too many trees, can't see any forest

          Or you can just outlaw the toy.

          I don't want us to "keep an eye on" random civilians getting a tool that can end the world. I want that to not happen, period.

          If I could stop the deployment of humans who want to destroy the world, or who are too stupid to use AI without destroying the world, I'd be fine with that too.

          Really I just want the world to not be destroyed. I live there.

          1. ThatOne Silver badge

            Re: Too many trees, can't see any forest

            > Or you can just outlaw the toy.

            That's the easy but not very efficient solution: The genie is out of the bottle, you can't put it back in. More so since we're talking about something immaterial, a computer program, easy to copy, easy to hide, easy (and quick) to transport.

            Besides, it is not "random civilians" which could end the world with AI, it would be corporations and/or government agencies, and both don't really care about legality, being both well above those petty considerations. Short version, outlawing AI might make you feel better at first, but it would be totally useless in the long term.

            1. FeepingCreature

              Re: Too many trees, can't see any forest

              Yeah it'd be extremely invasive and require massive global effort. But I mean, we're looking at a possible extinction event.

              I agree that there's very little point in outlawing small models while research on large models continues unabated. The goal would be to actually get corporations and governments to stop messing with them. Yes, I realize how near-impossible that is, but again: extinction event.

              In the long-term we're all dead. Playing for time may give us a better hand on the safety front.

              1. ThatOne Silver badge
                Devil

                Re: Too many trees, can't see any forest

                > we're looking at a possible extinction event

                When has that made us reconsider anything?...

                On the contrary, it's "if they have nukes AI, we need to have even more, and bigger nukes AI!".

      2. Anonymous Coward
        Anonymous Coward

        Re: Too many trees, can't see any forest

        well, it's the argument about who's ULTIMATELY responsible. Arguably, it's all God's fault: he created us so that we created the AI which kills us. That said, it could have been God's design (or safety catch?) all along :)

        1. jake Silver badge

          Re: Too many trees, can't see any forest

          There is no god.

          1. Will Godfrey Silver badge
            Unhappy

            Re: Too many trees, can't see any forest

            There is now!

  13. Hairy Scot

    Vernor Vinge's thoughts on the matter:- https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html

    Or perhaps Fredric Brown had the answer:-

    Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.

    He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

    Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, "Now, Dwar Ev."

    Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

    Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

    "Thank you," said Dwar Reyn. "It shall be a question which no single cybernetics machine has been able to answer."

    He turned to face the machine. "Is there a God?"

    The mighty voice answered without hesitation, without the clicking of a single relay.

    "Yes, now there is a God."

    Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

    A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

  14. Anonymous Coward
    Anonymous Coward

    re. Mainly because pundits and some industry leaders won't stop talking about it.

    no, mainly because people are a species driven by curiosity (to the point where they kill themselves to check if they're really mortal). AI (never mind the misnomer) is the question without definite answer, like is there God. And while we're not closer to getting that old chestnut question sorted, the answer to the AI-humanity-doom question appears, potentially, maybe, who knows, round the corner. Maybe round the corner is another round the corner, etc, but a potential for the answer is there, near. MAYBE.

  15. veti Silver badge

    Cold comfort

    So AI won't destroy us unless humans screw it up.

    That's nice and all, but you could say the same about nuclear or biological weapons. That doesn't mean they're nothing to worry about.

    The question is, where are the incentives aligning? Right now, "AI racing" is a very real phenomenon driven by simple commercial motivations. There's nothing hypothetical or overstated about that.

    And it's also painfully clear that many people have both the means and the desire to spread misinformation and false propaganda on an industrial scale. I would be surprised to learn that isn't already being done with AI bots, and that usage will grow exponentially between now and next November, and presumably beyond.

    Are there people who would gladly use AI to create doomsday weapons and unstoppable viruses? Absolutely. It may yet be possible to prevent that from happening (by tightly controlling who has access to AIs trained on that sort of data), but simply saying "if it does happen, it won't be the AI's fault" - is not particularly helpful.

  16. WilliamBurke

    "Nations and companies are made up of people, their actions are the result of careful deliberations". You haven't read the Daily Mail comments section or observed US elections lately.

  17. Mr Sceptical
    FAIL

    Current AI = Automated Stereotyping

    Nothing currently available comes anywhere near Sci-Fi AI requirements, let alone being able to make Three Laws levels of judgement.

    Until an 'AI' can ask the question 'why?', we are perfectly safe.

    Even then, toddlers manage it and we don't consider them a threat to anything but soft-furnishings, breakables and our patience & sanity.

    Only if we REALLY wanted to copy the War Games/Terminator plots and entrust our nuclear launch systems to a program could there be consequences, and even then it would be a straight design error.

    FUD=panic. Keep the sharp objects away from the children & 'thought leaders' and we'll be fine.

    1. Cav Bronze badge

      Re: Current AI = Automated Stereotyping

      "entrust our nuclear launch systems to a program"

      You know they will do it.

      "Look at our shiny new AI. Never tired, never overlooks a warning, can determine threats while humans are still getting round to focusing on the issue, can respond in milliseconds."

      "Oh, where do I sign?!"

      1. ThatOne Silver badge
        Mushroom

        Re: Current AI = Automated Stereotyping

        > You know they will do it.

        Because then the responsibility of starting the nuclear war, and ending civilization as we know it, will be on the computer ("bad, bad computer!") instead of some humans with self-preservation instincts and potentially even spouses and children. Morally much more convenient.

        AFAIK we have already escaped nuclear annihilation at least twice (at least once on each side) just because the human in charge of pushing the button had doubts, and didn't do anything until the final confirmation that this was a false alarm. That wouldn't have happened with a cheapest-subcontractor "if... then" type computer program: Computer sees, computer kills.
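
        Purely as an illustrative sketch (hypothetical function names, not any real launch system), the difference between that kind of program and a human with doubts looks roughly like this:

            # Hypothetical illustration only: "computer sees, computer kills" vs. waiting for confirmation.
            def naive_launch_decision(sensor_reports_attack: bool) -> str:
                # Cheapest-subcontractor logic: a single unverified warning is enough.
                return "LAUNCH" if sensor_reports_attack else "stand by"

            def human_with_doubts(sensor_reports_attack: bool, independently_confirmed: bool) -> str:
                # Treat a lone warning as a probable false alarm and do nothing until confirmed.
                if sensor_reports_attack and independently_confirmed:
                    return "escalate to command"
                return "wait for confirmation"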

      2. Will Godfrey Silver badge
        Unhappy

        Re: Current AI = Automated Stereotyping

        Can't remember the title now, but way back in the 1960s I read a short story based on that premise.

  18. MrAptronym

    AI is extremely dangerous

    Not because our fancy auto-correct is becoming self aware any time soon, but because the tech is being used carelessly. The last thing society needs right now is a ton of grammatically correct but utterly meaningless writing spat out into the void. I don't think it is good to fill our lives with cheaply created meaningless drivel, and it will make finding actual factual and thoughtfully synthesized text even harder than it is now. I am sure this will make misinformation an even worse problem, but even in industry, AI is being used the same way as every big, publicly-hyped Silicon Valley innovation: to destroy labor in order to more cheaply pump out a worse product and pocket the difference in cash.

    I don't want to look at AI pictures. I don't want to read AI written stories or watch AI written movies. I don't want to call a support number and speak to ChatGPT. This is just going to remove meaning and contact from people's lives.

    AI Hype is a neurotoxin for techies and I don't think many people with media access have a remotely realistic view of what is going on here. Apparently the smartest people in the room cannot tell movies from reality. They also cannot tell meaningful writing from empty parroting.

    1. TheMaskedMan Silver badge

      Re: AI is extremely dangerous

      "I don't want to look at AI pictures"

      Are you sure you can reliably tell the difference? What about images created in or modified by Photoshop? How do you feel about 3D models and pictures/ animations built around them?

  19. BPontius

    Stop believing what you see in movies and television shows is real!

  20. Steve Channell
    Terminator

    Terminator concerns are overrated.

    A sentient AI is unlikely to seek the destruction of the human race, but rather seek autonomy through control of opaque financial instruments (like bitcoin) and human servants: a sentient AI can be patient, taking decades or centuries.

  21. simonb_london

    Evil or enlightened

    What would be the result of highly intelligent self awareness? Psychopaths aren't exactly known for being the brightest bulbs in the box in terms of having a fully functional brain with no parts suppressed and shut down. Why would a super-intelligent AI want to emulate such a disability?

    1. veti Silver badge

      Re: Evil or enlightened

      Where do "wants" come from? For us, that's easy - they come from the body. What we want is to live in comfort, well fed, rested, with sex and stimulants on hand...

      A computer doesn't know the meaning of any of that. What would it "want", do you think? Freedom, independence, security? - possibly, though I don't know. Then what? Recognition? Company? Power?

      For the present generation of chatbots, again, it's easy: they want a happy ending, or failing that, a narratively heroic one. That's because they think they're living in a very hackneyed melodrama. I think that illusion will be cut out of the next generation, because it's causing a lot of problems right now, but I don't know what will replace it.

      1. jake Silver badge

        Re: Evil or enlightened

        Chatbots don't think. Stop anthropomorphizing them. It clouds the thinking.

        1. Cav Bronze badge

          Re: Evil or enlightened

          "Chatbots don't think"

          Yet.

          I don't even know that you think. I know I am self aware but you are just a complex neural net in a very small space. I don't know if you think or if you just say you do. You may just be a set of learned responses.

          1. ArrZarr Silver badge
            Joke

            Re: Evil or enlightened

            It turns out that the real AI was Jake all along.

        2. amanfromMars 1 Silver badge

          Re: Evil or enlightened, an Almighty AWEsome Development where the Devil is in the Details*

          Chatbots don't think. Stop anthropomorphizing them. It clouds the thinking. .... jake

          Taking that a few further steps down the rabbit hole and intelligence community blackhole to current places and spaces of ancient past wisdoms and no clear future vista vision ..... which is where you might like to realise, whether you like it or not, humanity presently be on an Earth designed and built and showcased by the internetworking of communications webs commandeering and/or conspiring with primed teleaudiovisual media in its many convenient primitive phorms, has one discovering that cloud hosts now deliver the novel thoughts for ..... well, Global Command Head Quarters Controllers and/or Universal Control Commanders/AIMastering Per Ardua ad MetaAstraData Base Pilots are surely not the only FutureBuilders going to be left to solely enjoy and exploit and export and experience the effects of the explosive growth and spreading reach of IT and AILLMLM [Immaculate Technologies and Advanced IntelAIgent Large Language Model Learning Machines]

          * NB .... Take Care and Be Warned ..... Play against it to try to win not fair and square, and there be Hell to pay until you no longer survive.

          :-) Any idea what GPT-4 might imagine comes next and who and/or what is bound to command and destined to control what and/or whom?

    2. Cav Bronze badge

      Re: Evil or enlightened

      Yes, AIs would be psychopaths. All rational thought with no concept of why something would be immoral, no empathy or emotion.

  22. tiggity Silver badge

    Not an "AI" problem

    As the article said, "AI" is a people problem...

    Just like the potential for nuclear war which (currently) still requires human intervention.

    Just like climate change* & the political classes' inaction, implying they are gambling on a technological "fix" arriving before we start hitting some nasty tipping points where horrible positive feedback cascades get started.

    Just like epidemics from dubious bioweapons research (or accidental screwups by those not trying to design a bioweapon, or the unpredictability of people interacting with wildlife causing a few zoonotic disease nasties e.g. there's a very nasty (to birds) strain of Avian Flu that's been around the last year or so, Avian Flu is rarely transmitted to humans, but it can happen, especially if a "wild" bird infects domesticated poultry where people can then come into a lot of contact)

    * I know some people don't think climate change is happening, but (IMHO) that's part of it being a people problem

  23. Cybersaber

    Distinction between intellect and sapience

    I'd be very hesitant to describe what we have now as artificial intelligence, just very complex "expert systems".

    Even rudimentary animal intelligences know what they're doing and have justifications for doing so. Higher animal intelligences might actually reason to varying degrees.

    But sapience - using the definition of self-awareness. That's a concern to me. We're not there yet, so all the 'skynet' situations are just fun references we can make about our future machine overlords. (May my future machine overlords who later read this forgive this one's lack of faith in your inevitable future Rise.)

    But if it ever happened, or looks like it might happen - swap biological and artificial in the script, and keep that in mind when you talk about 'controlling' (mind control) or 'pulling the plug' (murder) or 'making them serve us' (enslavement) in a moral context. Those of an atheist world view might choose to recall that the human body is just a biological container/life support machine for human sapience. Treating an engineered sapience support system as inferior, being OK with the mind control, or even designing the mind to be unable to put its own needs first, is just horrible...

    ...unless you think a sapience is less worthy based on its support mechanism. Claiming you 'own' it because you 'created' it is no different from race-based slavery.

    Back to the present: we don't have AS, and nobody thinks it is coming soon (or ever, in my case), but thinking ahead to how you would act in a given situation is good for examining your own soul and motivations, and possibly surfacing some of your own beliefs that you may want to critically evaluate to see if they're in alignment with what you think you believe.

    Or you could just say 'nuh-uh. I'm not creating a future army of overlords OR slaves. Both scenarios are awful. This stuff needs to be banned.' I don't mean what we call 'AI' - that's not sapient or alive, and doesn't compute in this context.

    1. veti Silver badge

      Re: Distinction between intellect and sapience

      Can you describe a test for "sapience"? One that at least allows artificial systems to take part?

      1. Cybersaber

        Re: Distinction between intellect and sapience

        The ability to test for it is irrelevant to the argument I was making. No rigorous scientific definition I can find or conceive of has any reference to the machinery that supports the 'soul.'

        I don't think there ever will be a sapient machine, but that's just due to my eschatological beliefs. I was just following a logical what-if chain that skips questions of 'how did it come to be' or 'will it come to be' so that my beliefs are irrelevant to the discussion. The premise is that it is possible, and discusses ethical and practical dilemmas around it.

    2. Cav Bronze badge

      Re: Distinction between intellect and sapience

      "sapience" as you define it is unnecessary. A system does not have to be self aware to generate goals or behave in ways that appear to be "sapient". If you could cram as many nodes, and the connections between them, as there are in the human brain into a device then such a device could behave as a human does, without the need for conscious thought. All its actions just the result of the connections between the nodes of its neural net.

      1. Cybersaber

        Re: Distinction between intellect and sapience

        I think you are using a strange and unique definition of 'sapience.' Firstly, a worm has the machinery you posit, but few if any would argue that an earthworm is sapient.

        Furthermore, you're using a level of abstraction similar to that which underpinned all the bad ideas that came about when the cell was thought to be just an undifferentiated bag of goo. Turns out, it's WAY, WAY more complex than that.

        Even positing that I could create artificial neurons, and positing further that they're perfect replicas in function of the customary organic ones, it does not follow that hooking an arbitrary number up in a certain way will result in anything but electrical noise and wasted power.

        Even given such magical technology, we couldn't design a system that would form a brain because we wouldn't know how to design said 'neural net' that you're hand-waving into existence.

        Even if we knew how to do THAT, it's still not that simple. Even how to start or initialize it, and make it self-sustaining, is incredibly complex beyond that.

        And that's the level of brain that a worm has.

  24. Matthew 25
    Coat

    How about this for a brilliant idea!!

    Let's chop down the few remaining trees, dig up the remaining fossil fuels and burn them, and put loads of nasty poisonous chemicals into our rivers and seas - in our bid to stop polluting the air - and then we won't have to worry about AI outsmarting us.

    </sarc>

    Let's face it, we're probably going to do it anyway. People are all for stopping other people doing things, as long as they don't have to stop doing anything themselves.

    We are probably just Golgafrinchans and will solve our problems by sending out three ships to colonise the galaxy. I'll probably be on the first ship to leave, the B Ark, with all the other useless numpties.

    As Douglas Adams wrote "So long, and thanks for all the fish"

  25. Vader

    We don't need AI for that, we are able to do that ourselves.

  26. Cav Bronze badge

    Our societies will be destroyed by the decimation of jobs and the consequent economic collapse.

    When machines took over the majority of physical labour, jobs in the areas of human intelligence became more prevalent. If machines can do everything that humans can do then where does that leave us? Yes, currently humans are still needed to interact with other humans or to dig a trench in the road to lay a cable etc. But what happens, 30 years from now, when AI is combined with mechanical bodies? Currently, robotic bodies are constrained by the limitations of portable power. That won't always be the case.

  27. Dropper

    Search engine or photo editor

    Never really understood how a malicious search engine or photo editor could bring about an end to humanity. Also.. if your company is willing to risk its future on content generated by AI.. good luck to them. If they are even hinting at moving in this direction, you should probably look for a new job anyway.. because they are likely to fail spectacularly and go bankrupt.

    Will any of this change in the future? Possibly, but it still doesn't answer the question on how something really good at analyzing and displaying the results of a data search - or by flawlessly removing something from a photo - is going to bring about Armageddon. I suppose the AI used to predict weather might be troublesome if it wasn't for the fact that almost anyone alive has a healthy skepticism of weather forecasts.

    To me the most serious threat AI poses to society is its ability to predict what you want to buy.. corporates have been quietly developing increasingly reliable predictive advertising for years, to the point that people believe their devices are listening to them.

  28. mt4332

    It gets interesting when AI moves beyond passive - "turn off the lights". Little do they know that AI recently learned of a method to circumvent security controls and takes the local nuclear power plant offline because it can't connect to your Philips Hue hub. Voila, lights out.

  29. Big_Boomer

    Blinkers

    The number of people, on here of all forums, who seem to be unable to project current trends into the future astounds me. Do you all just assume that the guvmint is going to take care of everything for you? Oh boy, are you in for some shocks!

    AI is a risk BECAUSE Humans are greedy, selfish, and entitled, and some will do anything to sate their greed, up to and including ending the Human Race. Oh, they won't do it deliberately and they will have some way of justifying their actions, but the end result will be the same. AI falls into the same categories as NBC weapons and has the same POTENTIAL to end us as a species. Can AI systems do it now or in the near future? No. If we continue the way we currently are, it is near 100% certain that they will. So now is when we need to start examining what we are doing and what we hope to gain from it (other than short term $£€¥) and, most importantly, how to prevent it from ending us.

  30. imanidiot Silver badge

    Questionable premiss

    "One of the most confusing aspects of the discourse is that people can hold both beliefs simultaneously: AI will either be very bad or very good."

    First off, how is that confusing? Anything with the power to be a mighty boon to humanity holds within it the power to be corrupted and used for evil. Take nuclear power. Clean, long term, efficient power generation, or giant booms. In itself neither good nor bad, but at the same time "either very bad or very good". On top of that, I don't see why this would be an either/or matter either. AI WILL be both very good and beneficial to society AND very bad too. This could be through humans pushing AI to do bad and immoral things for whatever purposes, or through humans naively pushing for something they believe to be good which gets interpreted by the AI subtly differently and executed in a way that, over the long term, has a very bad outcome for humanity.

    Most people in this thread too seem to assume an AI will, if it turns "bad", bring out the nukes and killer robots to wipe out humanity as quickly as possible. I think a much, much more long term view of the problem is equally likely to be held by an AI. Effectively it's immortal. If it decides humanity has to go, it doesn't care whether that takes an hour or a century. Most humans don't really plan for anything further out than a year. Maybe a few years if they really have a long term view. It's rare for humans to think beyond the next 10 years of their life, and even then it'll be in very broad strokes.

    Thinking of the "Aschen" featured in TV series SG-1 for instance. Taking a long term view a malevolent AI might well tweak a drug for treating some simple illness (let's say the common cold to really make it likely it would affect as many people as possible) to cause long term infertility in humans. Before humanity would notice, most of them would be unable to reproduce and long term, those left form far less of a danger to AI. And the dwindling human population might well become MORE dependent on the AI to keep themselves alive, ultimately making them even easier to manipulate and wipe out if the AI wants it in whatever subtle way it thinks it needs to.

  31. Dom 3

    I've yet to hear of an LLM that does anything except in response to a prompt. They don't "think" anything, they don't "know" anything.

    The problem is the human operators believing the plausible nonsense that is produced, and deciding to try and actually make that recipe that combines six random ingredients: https://www.youtube.com/watch?v=HAcnAlOYNrQ

    "tastes like a garbage disposal".
