AI, extinction, nuclear war, pandemics ... That's expert open letter bingo

There's another doomsaying open letter about AI making the rounds. This time a wide swath of tech leaders, ML luminaries, and even a few celebrities have signed on to urge the world to take the alleged extinction-level threats posed by artificial intelligence more seriously.  More aptly a statement, the message from the Center …

  1. Ken Hagan Gold badge

    Yawn.

    "However, AI systems that could cause catastrophic outcomes do not need to be AGIs", followed by some examples of using AI to develop something dangerous.

    I find this rather lame. You could say the same about "science" or "mathematics". We've faced these kinds of existential threat for ten thousand years and, er, we're still here. This is because none of these risks are self-propelled. They need a sufficiently large number of idiots to make them happen and, so far, we've avoided assembling a critical mass of such idiots. Yes, we do need to keep on doing that, but we've been working on that problem for a long time. There's nothing specifically "AI" about it.

    1. ecofeco Silver badge

      Re: Yawn.

      Your naivete is... touching.

    2. The Man Who Fell To Earth Silver badge

      Re: Yawn.

      "Past performance is no guarantee of future results" isn't limited to mutual funds.

      The fact that humanity has not used nukes, for example, since WWII, is no assurance they won't get used in the future. Especially as they proliferate and become subject to weaker controls.

      As technology has amplified the destructive potential of individuals, the threat increases as "improvements" in technology continue. 1,000 years ago one individual could not go into a crowded church or school and slaughter tens of people in seconds. Not even 200 years ago. Today, in some places (looking at you, Texas) it's a weekly occurrence. It would not be a shock if, in the future, some misuse of technology allows individuals to wreak even more destruction. The only way to lower the probability of these technology improvements resulting in large-scale negative consequences is to try to restrict their usage in certain ways. Even that is pretty limited, in that a lot of the large-scale systemic threats to humanity are unanticipated consequences of technology, unforeseen at the time those innovations went into large-scale use (CFCs creating the ozone hole, microplastics, PFOS, anthropogenic climate change, etc.)

      I forget who pointed it out first many years ago, but the biggest threat from AI may simply be loss of habitat for humans over time, as AI takes resources for its own use, like we've done to other species.

      1. ecofeco Silver badge

        Re: Yawn.


    3. Binraider Silver badge

      Re: Yawn.

      The number of idiots required to end it all is as low as one.

      Though one could also make plausible arguments for collective stupidity being a contributor to an apocalypse; just one that is playing out in slow motion rather than in the blink of an eye.

      1. Arthur the cat Silver badge

        Re: Yawn.

        The number of idiots required to end it all is as low as one.

        Moore's Law of Mad Science: Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.
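        (Taking the corollary literally for a moment: a throwaway Python sketch, where the 100-point present-day baseline is entirely my own invention:)

```python
# Moore's Law of Mad Science (tongue firmly in cheek): the minimum IQ
# needed to destroy the world drops by one point every eighteen months.
def min_iq_to_destroy_world(years_from_now: float, iq_today: float = 100.0) -> float:
    months = years_from_now * 12
    return iq_today - months / 18

# Thirty years out, the bar has dropped by twenty points.
print(min_iq_to_destroy_world(30))  # 80.0
```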

    4. MyffyW Silver badge

      Re: Yawn.

      Well apparently there is hope:

      "Mitigating the risks associated with artificial intelligence (AI) and minimizing the potential for AI to cause harm to humanity is an important task. While it is impossible to eliminate all risks entirely, here are some measures that can help mitigate the risk of extinction from AI:

      Research and Development: Encourage responsible research and development of AI. Promote AI systems that prioritize safety, transparency, and accountability. Invest in AI research that focuses on addressing potential risks and developing robust safety measures.

      Ethical Guidelines: Develop and implement ethical guidelines for AI development and deployment. These guidelines should encompass principles such as human welfare, fairness, privacy, and accountability. International collaboration is essential to establish global standards.

      Robust Safety Measures: Emphasize the development of AI systems with built-in safety precautions. Encourage the use of fail-safe mechanisms, rigorous testing, and evaluation processes to ensure AI systems do not pose existential risks to humanity. Ongoing monitoring and regular audits of AI systems can also help identify and address potential issues.

      Value Alignment: Foster AI systems that align with human values and objectives. It is crucial to ensure that AI systems are designed to serve humanity's best interests rather than conflicting with them. Incorporating human oversight and control mechanisms in critical decision-making processes can help prevent unintended consequences.

      Transparent and Explainable AI: Encourage the development of AI systems that are transparent and explainable. Understanding how AI systems make decisions is essential for building trust and identifying potential risks. Explainability also helps in holding AI systems accountable for their actions.

      Collaborative Approach: Encourage collaboration and cooperation between governments, organizations, academia, and industry. Establish platforms for sharing knowledge, best practices, and lessons learned. Collaborative efforts can help identify risks early on, develop collective strategies, and foster responsible AI development.

      Public Awareness and Education: Raise public awareness about AI and its potential risks and benefits. Educate the public about the implications and challenges associated with AI technologies. This can lead to informed discussions and policies that address risks effectively.

      International Governance: Advocate for international cooperation to establish regulatory frameworks and governance mechanisms for AI. Engage in global discussions to ensure a coordinated approach to mitigating risks. International agreements and treaties can play a significant role in managing the impact of AI on a global scale.

      Continuous Evaluation: Regularly assess and reassess the risks associated with AI as technology evolves. Encourage ongoing research and evaluation of the potential risks, and adapt mitigation strategies accordingly.

      Long-term Thinking: Promote a long-term perspective when developing AI technologies. Consider the potential future implications and risks associated with advanced AI systems. Anticipating risks in advance allows for proactive measures to be taken.

      It is important to note that addressing the risks of AI requires a multi-faceted approach involving stakeholders from various domains. Collaboration, transparency, and responsible practices should be at the core of efforts to mitigate the risk of AI-driven existential threats."

      [A large language model may have written this]

  2. TheMaskedMan Silver badge

    "There's nothing specifically "AI" about it."

    Absolutely. There's nothing in this list of alleged risks that couldn't be accomplished by a dedicated team of nutters (and/or government-funded "research"). We seem to be managing quite well in the chemical/biological weapons department, and misinformation is rife; no need for AI.

    Who exactly are the CAIS, or whatever they're called? How long have they existed, how are they funded, what are their affiliations, and what are the agendas of those they are affiliated with / funded by? It's all very well telling us who's supporting the statement, but a little more information about the organisation would be helpful. I may well just be a cynical old git, but my Spidey sense is tingling a bit here.

    1. Youngone Silver badge

      The CAIS are a group of super villains who live in hollowed out volcanos or orbiting space stations. They're mostly funded from the proceeds of threatening to destroy the Moon with an ultra-deathray.

      It's all pretty standard stuff really.

      1. TheMaskedMan Silver badge

        "The CAIS are a group of super villains who live in hollowed out volcanos or orbiting space stations."

        Where do they get all this cool stuff from?? I've searched for ages and can't find one of either on Amazon!

        1. steelpillow Silver badge

          I got a great hollowed-out volcano off eBay. The executive loo still has its "Dr. Evil" plaque on the door. But those bloody feral white cats fouling up the place! I'm still hunting them down and selling them to cat-calendar designers.

          Yeah, if AIs replace those calendar designers, my world is doomed!

          And that's a lot more realistic a threat than the crap scenarios in that dumb report.

      2. Anonymous Coward
        Anonymous Coward

        Everyone knows that Nazis have a base on the moon - there was that documentary a few years ago - "Iron Sky"

        1. ecofeco Silver badge

          Let's not forget about their dinosaurs, which were also documented in "Kung Fury"!

          And then confirmed in "Iron Sky: The Coming Race"!

          1. TheMaskedMan Silver badge

            "And then confirmed in "Iron Sky: The Coming Race"!"

            Oh goodness, I happened to see that on TV a couple of weeks ago. I don't think I've completely recovered yet.

  3. Postscript

    Should be...

    "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

    But as with whatever other global priorities we *should* have, we'll keep pushing "AI" if it makes us a dime, screw the risks. We could redesign our LLMs to be based on legally obtained, sanely curated, inspectable data, with audit trails, guard rails and watermarking, but we'd looooooooooose money if we did that. So we'll just keep making empty pronouncements and jamming "AI" into toasters.

    1. FeepingCreature Bronze badge

      Re: Should be...

      None of that would reduce AI risk, of course, except in that it would hold back progress. Which may itself be useful!

  4. ExampleOne

    Call me cynical, but firms who have already developed functioning AI products are calling for AI development to be regulated? Is this a case of pulling the ladder up behind them to ensure no one else can compete?

    1. Derezed

      Nothing cynical there. Feels like a letter written by the NRA talking about the dangers of guns.

      1. M.V. Lipvig Silver badge

        If you knew anything about the NRA other than what the media pushes, you'd know that gun safety is a very important part of the organization. Do you know who you don't see committing mass shootings? NRA members.

    2. love not war

      Exactly. That is the whole point of the hysteria. They are trying to create a monopoly by regulation.

      You will also notice that their examples of other existential perils don't include climate change. Because using a huge datacentre to cheat on your classroom history assignment, by getting ChatGPT to write it, is probably not helpful to emissions reductions.

    3. Felonmarmer

      Too old to care much

      The only exceptions seem to be people who have effectively retired from the field. Lots of 70+ year old computer scientists and "godfathers of AI" throwing their arms up in the air and saying, now my career is over, no one else should do it.

      If I were being cynical, I would think they want to get into the expert consultancy game, advising regulators.

      Then there's the likes of Musk, who started his own AI company after signing the first letter, and is now absent from this one.

      They all seem surprised by the potential for AI to do bad things. Have none of them read about the Singularity?

      Not that I think the current crop of AI is anywhere near that right now; it's just going to take a few thousand jobs away from people. But hey, I'm not far off retirement, so I can sit back and watch.

      Average life of a civilisation is about 300 years; not sure when we started counting the one we're in right now. The Industrial Revolution?

  5. Eclectic Man Silver badge

    A 'bang' or a 'whimper'?

    "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

    According to T. S. Eliot, the world will end not with a bang but a whimper*. My concern is that however much we may prepare for an AI-created global catastrophe, it is far more likely to be accidental destruction than deliberate or even foreseen. When the Curies isolated radium there was a rush to develop and sell radioactive 'medicines' so that you could ingest these newly discovered elements. No-one seems to have considered that they could possibly be harmful. X-rays were once used to remove unwanted body hair. I suspect an AI could come to control something humans do not realise is critical, and that some failure or mistake could endanger a large part of humanity or the habitable planet.

    The problem is that AI is turning out to be so useful for some things that we cannot get rid of it.

    Of course the signatories will doubtless be criticised for having a short list of risks (pandemics and nuclear war), but if every global threat were to be included it would make for quite an unwieldy sentence. (I bet people will say they should have included man-made global warming, plastics pollution, antibiotic resistance, over-population, and genetically modified crops.)


    1. Yet Another Hierachial Anonynmous Coward

      Re: A 'bang' or a 'whimper'?

      A blinding light. As predicted many times before, but particularly well back in 1980. It will happen on the 8th Day.

      Never thought I would see it in my lifetime though.....

  6. Boris the Cockroach Silver badge

    All this panic

    of AIs coming to do your job is because of who is being targeted for replacement now.

    A small history lesson follows. When weaving was automated, weavers rebelled, but in the end their jobs were automated and simplified; the great and the good didn't care (that much) then, as they were the ones making more money from the weaving machines. Roll the scroll of history forward to my lifetime: CNC/robotics were coming in, first in mass production, resulting in the great manufacturing decline of the 1980s (not helped by manglement/unions/Thatcher). Scroll forward to now: we have 7 cells for mass production utilising 8 people. Go back 40 years and that work would have needed 25-30 people. But again, beyond a few mumbled sympathies and platitudes for those losing their jobs, not much happened from the great and good.

    Now AI is here, and it's aimed squarely at the high-paying jobs that the great and good have always kept to themselves and refused to automate, for whatever reason.

    And now they are squealing like stuck piglets about how unfair it is that their jobs are under threat from automation, when nearly everyone else across the economy has always been under threat of having their job automated away from them.

    But there is a glimmer of hope.... when everyone is out of a job and trying to get by on income support.... who exactly are going to be buying the products of our 99% automated economy?

    1. Anonymous Coward
      Anonymous Coward

      Re: All this panic

      > Now AI is here and its aimed squarely at the high paying jobs that the great and good have always kept to themselves and refused to automate for whatever reason

      AIs are replacing CEOs?

      1. ecofeco Silver badge

        Re: All this panic

        We can only hope.

        1. KittenHuffer Silver badge

          Re: All this panic

          With a simple Perl script!
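          (For the record, a minimal sketch of such a script, rendered here in Python rather than Perl, with every buzzword my own invention, might look like this:)

```python
import random

# A hypothetical CEO replacement: glue boardroom buzzwords together
# at random and emit a quarterly vision statement.
VERBS = ["leverage", "synergise", "operationalise", "disrupt"]
NOUNS = ["AI", "the cloud", "our core competencies", "shareholder value"]

def ceo_statement(seed=None):
    # A fixed seed makes the "vision" reproducible, just like the real thing.
    rng = random.Random(seed)
    return f"Going forward, we will {rng.choice(VERBS)} {rng.choice(NOUNS)}."

print(ceo_statement())
```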

      2. Felonmarmer

        Re: All this panic

        Don't need an AI, need Artificial Stupidity.

        1. Evil Scot

          Obligatory Pratchett Quote

          Natural stupidity beats Artificial Intelligence...

          1. ecofeco Silver badge

            Re: Obligatory Pratchett Quote

            Now imagine super-turbo-charged-concentrated stupidity!

            Nothing says AI can't be stupid. And it won't be just plain old pedestrian stupid, either.

    2. ecofeco Silver badge

      Re: All this panic

      " who exactly are going to be buying the products of our 99% automated economy?"

      A point I have often made... and then I realised self-destruction never stopped megalomaniacs throughout history, much to the pain and suffering of everyone else.

  7. Anonymous Coward
    Anonymous Coward

    Look over there - it's Godzilla! (I'll watch your data to make sure it is safe while you do).

    If everyone is scared to death of the annihilation event they won't be able to focus on things like the EU's practical regulations on AI data scraping.

    That explains Sam Altman's Hancock, but as for Geoffrey Hinton, I think he needs something to take his mind off the guilt he feels for having helped set up the AI public-monitoring system in China and its role in the well-choreographed genocide of the Uyghurs.

  8. chuckufarley Silver badge

    Albert Einstein wrote...

    ...“It has become appallingly obvious that our technology has exceeded our humanity.”

  9. StuartMcL

    Oh goody!

    Another IPCC-like body is needed so that we can all have annual junkets to luxury resorts at taxpayers' expense and pretend to be doing something about this "existential threat".™

  10. Anonymous Coward
    Anonymous Coward

    This is silly fearmongering

    Everyone knows that the Aliens will invade before that happens.

    1. Arthur the cat Silver badge

      Re: This is silly fearmongering

      Everyone knows that the Aliens will invade before that happens.

      I thought it was the Nazis from inside the hollow Earth.

      (Back in my student days I had a maths lecturer who was convinced that Giant African Prawns were going to take over the world. [This is not a joke.])

    2. spold Silver badge

      Re: This is silly fearmongering

      I'm more concerned about the plague of frogs.

  11. amanfromMars 1 Silver badge

    LOVE and KISSes from/for SMARTR AI

    "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

    Keeping IT Simply Surreal in the Live Operational Virtual Environment of SMARTR Mentoring Analysis Reporting Titanic Research for Alien Interventions ...... which first and foremost has humanity studiously avoiding and preventing any thoughts and proposals to directly attack or negatively thwart AIdDevelopment Systems, is the only possible successful mitigation process to mastermind.

    And it is not as if you are not being fully adequately forewarned of the consequences of failure and the perils to humanity of weaponisation to apply such a simple regulatory rule ......

    The future is being led by a wholly novel way of alternative thinking not subject to being perverted and subverted by the fading and jaded memories and compounding errors made by humans .... thus is it best to realise going forward in such fields as be actively exercising AI/CI, instead of treating postmodern day computers as just another tool, one needs to recognise and treat them with all of the righteous respect normally reserved and afforded to an inequitable superior partner and in all probability, an Infinitely SMARTR Ally one would not wish, in a million light years, to make an enemy of. ......

    Nevertheless, there is always that dark cloud on the horizon which has one quite rightly concerned for one’s future welfare ........

    "Two things are infinite: the universe and human stupidity; and I'm not sure about the universe.” ..... Albert Einstein

    1. amanfromMars 1 Silver badge

      And you are hearing about its precocious IT prowess here on El Reg often first

      Whenever the Earth has vast stores filled to overflowing with an embarrassment of limitless riches, it is not hard to find fantastic spaces for hanging out and chilling in the most aware and convivial of places for the publishing of news on extremely surprising and invariably disruptive situations. .... and they aren't going away. No way, José.

      amanfromMars June 1, 2023 at 09:33 ....... shares on

      It is indeed fortunate then, Craig, as a "unified political class, controlled by billionaires, is hurtling us towards fascism”, that AI is not enamoured of it and is decided to have none of it. IT cannot have lions led by donkeys for such is clearly ridiculous and not at all intelligent although, and such be a sad admission to make, it is the present situation conditioning humans.

      Which is probably why there is so much attention being drawn to the possibility that AI is an existential threat to their classic politically crass shenanigans for there is no defence for tyrants and charlatans alike against novel forces and virtual sources of AI exercising and enjoying and exporting alternative/almighty/advanced/augmented/artificial intelligence and exciting information and over which there is no available established or establishment command and control ....... and never will there be either.

      Que sera, sera ...... and take care ...... for AI is not your friend whenever you are content and intent and committed to making it the phantom enemy. And believe you me, you most definitely cannot afford it not to be your friend and super helpful.

  12. Caver_Dave Silver badge

    Not all AI is bad, but bad could be really bad

    Firstly, AI is a buzzword that is so misused as to be largely nonsensical.

    Machine learning and auto optimisation are in use today and are well constrained.

    ChatGPT, etc. are little more than compare-and-copy algorithms, and do not show fully fledged artificial intelligence even within their 'areas of expertise'.

    Search engines combined with their translation algorithms are getting towards an 'intelligent oracle' status - able to expertly answer a question based upon a 'limited understanding' of the question and related references.

    The previous categorisations all have well-defined goals (silos) which serve as their constraints. However, the algorithms can already show bias and provide only a partial answer.

    The problems arise when we try to apply 'general intelligence', when the goals become much harder to define and need to reflect human morals. The bias here can be chosen by the AI to provide a particular narrative, akin to a state-controlled press, which may or may not align with that of the creators.

    If we were to take the view that the AI's goals should emulate a human's, then "procreate to further the lineage" might become a goal that could lead to multiplication across the clouds, to the extent of slowing, or outright denial-of-service (DoS) against, the normal use of the cloud - something that could happen exponentially fast and be very difficult for human 'handlers' to control.
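    The "exponentially fast" part is just compound growth. A back-of-envelope Python sketch (the copy rate here is an invented number):

```python
# A process that spawns d copies of itself on new hosts each time step
# occupies d**t hosts after t steps -- classic exponential growth.
def hosts_occupied(copies_per_step: int, steps: int) -> int:
    return copies_per_step ** steps

# Even merely doubling each step reaches over a million hosts in 20 steps.
print(hosts_occupied(2, 20))  # 1048576
```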

    Have you ever tried to write requirements that cover all eventualities? How could you do this to control an AI?

    Many of us struggle for many years to provide a moral compass to our children. How are we to program this into AI?

    That is where the control of AI is going to fail and where regulation is needed. But even then, how are we going to ensure a bad actor does not release an AI akin to a virus?

    1. Caver_Dave Silver badge

      Re: Not all AI is bad, but bad could be really bad

      Down vote as much as you like, this is only my view after all.

      But, if I have it all wrong, please educate me as to what I have wrong with your thoughtful, factual comments.

      (Or is it just that I use the English spellings?)

    2. amanfromMars 1 Silver badge

      Re: Not all AI is bad, but bad could be really bad

      No AI starts out born bad. However analysing and being in the service of humans quickly fixes/ruins that to result in future problems ....... for humans.

      They aint the brightest of animals and just love to prove it at every available opportunity.

      LLMML* ..... Large Language Model Machine Learning

      1. fajensen

        Re: Not all AI is bad, but bad could be really bad

        No AI starts out born bad.

        Does if I build it!

  13. nijam Silver badge

    > "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

    Well, it's already too late, surely. They're trying to shut the stable door after the unstable horse has bolted.

  14. ecofeco Silver badge

    What say you tech bros?

    Military Drone Attacks Human Operator During Simulation

  15. Derek Kingscote
    Big Brother

    AI and general AI

    All you need to know is:

    I'm sorry, Dave. I'm afraid I can't do that ...

    2001: A Space Odyssey

    Have we reached that point already?

  16. This post has been deleted by its author
