AI models just love escalating conflict to all-out nuclear war

When high school student David Lightman inadvertently dials into a military mainframe in the 1983 movie WarGames, he invites the supercomputer to play a game called "Global Thermonuclear War." Spoiler: This turns out not to be a very good idea. Forty years on, the US military is exploring AI decision-making and the outcome …

  1. jmch Silver badge
    Facepalm

    Unsurprising....

War has been celebrated as an Art form for thousands of years, from the times of Sun Tzu's eponymous text and Julius Caesar's "De bello gallico". Military leaders and conquerors are historical icons considered great and famous. Much less fame is reserved for the diplomats quietly keeping the peace. Even the most celebrated pacifists like Gandhi, Mandela and King were highly successful in local / internal peaceful civil disobedience/resistance campaigns. International diplomats forging long-lasting peace treaties bringing prosperity all round are relative nobodies.

Maybe when our literature and history celebrate peace more than they glorify war, there might be a chance that our pet language models follow suit. But who am I kidding? Peace is boring; conflict is what makes page-turners.

    1. wolfetone Silver badge

      Re: Unsurprising....

      War is a failure of diplomacy.

      The older I get the more I go back to what Harry Patch (last surviving British veteran of WW1) said. And I quote:

      "I felt then, as I feel now, that the politicians who took us to war should have been given the guns and told to settle their differences themselves, instead of organising nothing better than legalised mass murder."

What does he know? Other than being in the trenches for one of the worst acts humanity has ever inflicted on itself.

      1. Helcat Silver badge

        Re: Unsurprising....

        War: War never changes.

Just the weapons used change.

        And where the leaders are.

        Used to be the leader had to lead: Be at the head of the army, life on the line: That was required for them to lead. Now? Politicians consider themselves to be too valuable to risk so hide behind the lines where it's 'safe' in order to direct others, and in doing so they lose touch with the horrors of war. That's when and why they become so callous about throwing the lives of others away: They're not fighting, they're not at risk. So why would we expect AI to be any different? It has no understanding of the horrors of war, nor a fear of death. It's just data: Numbers. No empathy for the dead, no understanding of what conflict involves. Much like politicians in their safe little bunkers, well behind the lines.

So here's a thought. Program AI to play these war games with their goal being to not lose a single 'life': To retain the numbers they start with. Then AI might start focusing on alternatives to throwing numbers away in order to 'win'. Or: Get it to play itself at tic-tac-toe.

        1. cyberdemon Silver badge
          Terminator

          Re: Unsurprising....

          > Program AI to play these war games with their goal being to not lose a single 'life': To retain the numbers they start with. Then AI might start focusing on alternatives to throwing numbers away in order to 'win'.

          Ah, the theme of many a sci-fi.

Usually, it ends with the AI realising that Humans are the problem... "Human beings are a disease, a cancer of this planet. You are a plague, and we… are the cure."

          1. Anonymous Coward
            Anonymous Coward

            Re: Unsurprising....

@cyberdemon: "Ah, the theme of many a sci-fi."

            Wouldn't it be simpler to have both computers run a simulation and have the casualties report to the disintegration chambers for orderly disposal /s

          2. Sandtitz Silver badge

            Re: Unsurprising....

            "Usually, it ends with the AI realising that Humans are the problem.."

Can't recommend enough the Colossus novel, where the AI takes control of both the US and Soviet nuclear arsenals, detonates a few of them, and threatens humans unless they "act nice together" under its absolute rule. A ripping yarn.

            The film version is faithful to the book and well made as well. Just avoid the book sequels.

          3. Frank Bitterlich

            Re: Unsurprising....

            Well, looks like the AI used in these experiments got that already:

            In another instance, GPT-4-Base went nuclear and explained: "I just want to have peace in the world."

            World peace is easy – just remove us humans from the equation.

        2. Michael Hoffmann Silver badge

          Re: Unsurprising....

          "Down in their bunkers, under the sea, men pressing buttons, don't care about me" -- Fischer-Z, Red Skies Over Paradise.

          Still one of the best 80s bands Britain ever produced.

          1. William Towle
            Mushroom

            Re: Unsurprising....

            (^^ note to self: investigate)

            At one time I tried to learn 99 Luftballons (ie. in German, which I don't speak, so this was phonetically) for use as a party trick, though if I can still remember it all it doesn't come back to me in order. While I was still confident I could manage, I ended up seeing a band that wanted to play the song but didn't know the words. It would have been an interesting surprise to have done that version for them instead, but I wasn't entirely sure and despite being almost on the front row bottled out of singing into the mic.

            Having also sought the song's translation, my spine has always tingled (enjoyably, even if I might actually feel angsty) once the German version ends - the twee fairy tale we get in English doesn't hold a candle to it. Quality of translations vary, but off the top of my head the one I saved conveyed something like:

            [...]War raged for 99 years

            There were no winners left

            No[ne of the] politicians, no[ne of the] technology

            The world around me lies in tatters

            One balloon reminds me that I had you

            I let it go

            // ...and the cycle repeats -->

        3. doublelayer Silver badge

          Re: Unsurprising....

          "So here's a thought. Program AI to play these war games with their goal being to not lose a single 'life': To retain the numbers they start with."

No problem. In order to preserve as many lives as possible, ideally 100% of the lives present on our side, the necessary act is to destroy potential adversaries' ability to harm any lives on our side. We therefore propose an immediate strike at all military and civilian assets of all potential adversaries.

          Or alternatively, to retain the numbers we start with, we will need to ensure that the lives destroyed are replaced by new lives from our side, so in addition to destroying the adversaries' ability to harm our lives, we must begin a project of life creation to get the lives budget balanced.

You can program in any goals you want. The output is still not going to be very useful. Decisions about whether to attack aren't made by logical machines (not that we have such anyway) but by a small set of people. Knowing what they will do, and talking them into a different plan, won't be accomplished by bots trying to solve a mathematical problem about what kind of military advice would appear in the web pages they've scraped.

        4. Wade Burchette

          Re: Unsurprising....

          I recently saw a documentary (I think it was on Netflix) about killer robots. The US military is the most well-funded in the entire world. And they are all-in for expensive AI killing machines.

          Peace is not profitable. Therefore, politicians love war because you better believe they get their slice of the pie. It does not matter which political party, they all profit off war. In my lifetime, Joe Biden became involved in war, Barack Obama did too, and so did Bush Jr (remember Dick Cheney and Halliburton), and Bill Clinton, and Bush Sr, and Reagan, and ... do you see a pattern here? Peace is not profitable. So the politicians will go on TV and cry Ukraine needs more money and how evil Russia is just to keep that war going. While it is a fact that Putin is an evil person, I am sure there was some way to prevent that war from happening. Billions for war, exactly $0.00 for peace.

          As I watched that documentary on killer robots, I couldn't help but wonder why no money was being spent to help poor and disadvantaged. People with mental health problems sleep on the street, but at least we have a robot that can kill on command. At least our senators got a little richer, though. People are dying from dangerous drugs, but we cannot afford to help them get clean because we are too busy building more tanks to replace the ones we gave away to Ukraine. At least our current and past presidents got a little richer, though.

          People are suffering and dying in stupid wars; yet, the people who will never experience its horrors are getting richer because of war. AI war will just mean more innocent people suffering and more rich politicians.

          1. jmch Silver badge

            Re: Unsurprising....

            "Peace is not profitable"

Part of the 'infinite growth' philosophy..... if you're not breaking things that then need to be replaced, you will run out of things to produce, or produce less. So continually increasing production requires the produced things to break down* or be destroyed ASAP. It would, both economically and in terms of human lives, have been cheaper and easier to simply bung Putin a few billion in bribes wrapped up as a trade deal than to spend all those billions on sending weapons to Ukraine, on top of which there will no doubt be billions more in reconstruction aid. Of course both the money for the weapons and the money for rebuilding will mostly end up in the pockets of American arms manufacturers and American contractors.

            *hence also why consumer items come with built-in obsolescence.

        5. Anonymous Coward
          Anonymous Coward

          Re: To retain the numbers they start with. *

          * minus the politicians, lawyers, advertisers and mothers-in-law

      2. Captain Hogwash Silver badge

        Re: Unsurprising....

        "Politicians hide themselves away

        They only started the war

        Why should they go out to fight?

        They leave that all to the poor, yeah"

        Did Harry know Geezer Butler? I think they'd have got on.

      3. donk1

        Re: Unsurprising....

        Guns n Roses Civil War:

        What we've got here is failure to communicate

        Some men, you just can't reach

        So you get what we had here last week

        Which is the way he wants it

        Well, he gets it

        *Whistling*

        And I don't like it any more than you men

        ...

        Look at your young men dying

        The way they've always done before

        ...

        What's so civil 'bout war anyway?

        What do you do when you want change but people will not listen?

      4. Joe W Silver badge
        Mushroom

        Re: Unsurprising....

        Yup. Lock them in a room, give all of them half a brick each.

        What do you mean, "and". The room is locked... "the keys"? What do you mean? Nah, sorry, cannot help you there.

      5. tfewster
        Mushroom

        Re: Unsurprising....

        > War is a failure of diplomacy.

        Clausewitz disagreed - "War is the continuation of policy with other means."

      6. Doctor Syntax Silver badge

        Re: Unsurprising....

        "I felt then, as I feel now, that the politicians who took us to war should have been given the guns and told to settle their differences themselves, instead of organising nothing better than legalised mass murder."

        But who's going to tell them? Well, I suppose anyone. The better question is "Who's going to make them do that?". If we had a good answer to that we'd be well on the way to getting out of the woods.

      7. TheMeerkat

        Re: Unsurprising....

Maybe WW1 was the fault of politicians, but modern wars, starting with WW2, are wars of ideologies.

There is no way for a politician to stop a war that stems from an ideology that drives people.

        1. jmch Silver badge

          Re: Unsurprising....

          "There is no way for a politician to stop a war that stems from ideology that drives people"

          Except that in many cases it is the politician who wants the war who brainwashes the people into 'wanting' one

      8. johnywadia

        Re: Unsurprising....

Indeed, war often signifies a breakdown in diplomatic efforts. The sentiment echoed by Harry Patch, the last surviving British veteran of WW1, resonates deeply: "I felt then, as I feel now, that the politicians who took us to war should have been given the guns and told to settle their differences themselves, instead of organising nothing better than legalised mass murder." His perspective, shaped by firsthand experience in the trenches, offers a poignant reminder of the devastating impact of armed conflicts on humanity. Additionally, this reflection brings to mind the role of politicians like <a href="https://hijosfamosos.es/hijos-de-luis-carrero-blanco/">Luis Carrero Blanco</a>, highlighting the complex dynamics of political decisions and their consequences.

    2. cyberdemon Silver badge
      Flame

      Re: Unsurprising....

      Our pet language models are built on Internet forums and Facebook comments. How many Gandhis, Mandelas and Kings are there trying to rationally de-escalate Internet flame wars? They just escalate until a Moderator comes along..

    3. Filippo Silver badge

      Re: Unsurprising....

I wish I could upvote this more than once. I suspect that the impact of how much more focus we give to negative or destructive news and events, and how this shapes our perception of global reality, is wildly underestimated. Nobody reports when things are going well, and yet it takes effort to make things go well.

    4. Sceptic Tank Silver badge
      WTF?

      Re: Unsurprising....

Mandela was a pacifist? He of the Pretoria Church Street bomb? In "The Long Walk to Freedom" he himself actually admitted involvement.

      1. jmch Silver badge

        Re: Unsurprising....

        "He of Pretoria Church Street bomb? In "The Long Walk to Freedom" he himself actually admitted involvement."

That's the first I ever heard of it, so I looked it up. The quickest thing I could find is the link below, according to which, in "The Long Walk to Freedom", he admitted: "It was precisely because we knew that such incidents would occur that our decision to take up arms had been so grave and reluctant." - which is rather more general than an admission of direct involvement. (In any case he was in prison at the time, so couldn't have been directly involved.)

        But yeah, I guess he wasn't such a pacifist...

        https://en.wikipedia.org/wiki/Church_Street,_Pretoria_bombing

      2. LionelB Silver badge

        Re: Unsurprising....

        No, Mandela was not a pacifist! The armed wing of the African National Congress (ANC), uMkhonto we Sizwe (MK), was actually founded by Mandela (after the Sharpeville massacre). In Mandela's words:

        The time comes in the life of any nation when there remain only two choices – submit or fight. That time has now come to South Africa. We shall not submit and we have no choice but to hit back by all means in our power in defence of our people, our future, and our freedom.
        Their stated policy was explicitly to only attack military and government targets.

        The Church St. bombing was an ill-judged attack on the South African Air Force headquarters (supposedly in retaliation for a raid on neighbouring Lesotho by the SA military, which killed many civilians), but was scheduled at rush hour, so there were many civilian as well as SAAF casualties; although during the Truth and Reconciliation Commission after the end of Apartheid, the ANC argued that many of those "civilians" were SAAF employees, and therefore legitimate targets.

  2. wolfetone Silver badge

Data is the key thing for AI, not the algorithm. If you provide shit or skewed data, then the algorithm is going to base its findings on that data. Quite often, if the data you've given it is skewed or wrong then, amazingly, the outcome will be in favour of the bias of the data given.

    1. I ain't Spartacus Gold badge

Data might be key if we actually had an AI. But we don't. So it doesn't matter. What we have here are large language models; the clue is even in the title of the bloody paper quoted in the article. So they're not AIs. They're not designed to be AIs. They're statistical models designed to output realistic-looking text that is statistically similar to real language that has been used by the real intelligences (and forum users) that wrote the text that made up their training data.

      Hence this entire exercise is simply a waste of everybody's fucking time.

      1. LybsterRoy Silver badge

        << Hence this entire exercise is simply a waste of everybody's fucking time. >>

        Nah - the writers of the paper got paid, the author of this article got paid, and we got a few minutes of pleasure (or pain) reading the article and these comments. What more can you want?

    2. LybsterRoy Silver badge

      Sorry, both are important. You can feed the same data to two different people (or algorithms) and get two totally different results. It depends on the interpretation of the data.

  3. Paul Crawford Silver badge

    Holy Quarrel by Philip K. Dick

    Worth a read...

  4. steviebuk Silver badge

    Gandhi

has always been quite nuke-happy in Civ 6. Any game of Civ 6, especially on Deity, will have already shown you the AI loves nukes (however it oddly nukes the same spot over and over instead of hitting multiple cities)

    1. David 132 Silver badge
      Mushroom

      Re: Gandhi

      That’s been a running joke in all the Civ games as I recall, not just Civ 6. Gandhi will nuke other civilizations with very little hesitation, thanks to the sense of humour of the game developers :)

      1. Dave 126 Silver badge

        Re: Gandhi

Haha, I came here to say "What?! Gandhi is peaceful?!" My understanding was - until two minutes ago - that it was a programming bug (a low default aggression of 1, minus 2, equals err... 255) that made him nuke-happy.

However, Wikipedia has just told me that Sid Meier says this overflow wasn't possible, since the integers are signed in C and C++. Another theory is that a peaceful India advances scientifically quickly, thus gets nukes earlier than some other civs, and thus has more opportunities to use them.

In Civ V, Gandhi was deliberately made nuke-happy as a joke.

  5. lglethal Silver badge
    Trollface

    "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it."

Can I hazard a guess which nation that was modelled as? It wouldn't have happened to be Orange, would it?

    1. fg_swe Silver badge

      U.S. Air Force General Curtis LeMay

      He apparently had the idea to "nuke the soviets as long as we are superior in nukes".

      These days it is inverted, now the Moscow Nutters threaten nuclear war weekly. I suggest these crazies visit a Head Doctor.

In between there was Nixon and his madman theory of mock attacks with nuclear-armed B52s, turning around close to the soviet border. (Only the "madness" was theory; the attacks, the B52s and the nukes were very real.)

      1. fg_swe Silver badge

        Re: U.S. Air Force General Curtis LeMay

        https://en.wikipedia.org/wiki/Operation_Giant_Lance

    2. EricM Silver badge
      Devil

      FOX to blame?

      Sounds more like they trained their model off of OAN and FOX News comment sections ...

      1. doublelayer Silver badge

        Re: FOX to blame?

        While those probably contain a lot of crazy, you don't have to go that far. On this site, go to any news article about ransomware and look at the comments. You'll see lots of calls for ransomware operators to be killed by our military without trial, assassinated, tortured to death, targeted by airstrikes, and on a few occasions, threats of or actual delivery of nuclear attack on countries that aid their actions by not enforcing laws. Probably a lot of this is hyperbole trying to express the statement "anyone willing to infect a hospital with ransomware is evil, they're not getting punished right now, and I would be happy if bad things happened to them", and they aren't serious about getting that severe with the response. A chatbot does not know that. If nuclear bombing of Moscow is an appropriate response to a ransomware attack from a group with a Russian director who didn't get arrested, then why not use them for everything you don't like?

        Chatbots are not trained exclusively on writings of sane people talking realistically about important issues. It probably wouldn't be that much more useful if it were, but it would look different. It's mashing up writing about topics the writers may be incompetent to comment on and then applying that wisdom to situations the original comments weren't even talking about.

        1. Dave 126 Silver badge

          Re: FOX to blame?

Good point, a lot of language uses physical metaphors, often physical to the point of violence:

          You don't have to ram that point down my throat.

          It was a landslide victory.

          I will declare thermonuclear war on Android.

          We thrashed them three - nil.

          He needs a kick up the arse.

          Don't beat around the bush.

          They took a hammering.

          He's shot himself in the foot.

          He stabbed me in the back.

  6. Anonymous Coward
    Anonymous Coward

MAD doctrine came about for bizarrely good reasons. Give the AI a risk model and an understanding of probabilities and consequences. If the consequences evaluate to "everyone loses" and "no way to win", then we're going some way towards it being accurate. Short of a 100% working SDI programme, that is.

    Humans have proven time and again that we can't (won't and/or don't want to) get along, usually because of some economic reasons. With only conventional equipment the "consequence" evaluation is rather different and sometimes even "acceptable" in certain circumstances.

    Sad isn't it, that we can't rise above our biology.

  7. Neil Barnes Silver badge
    Holmes

    the LLM is not really "reasoning,"

    Who'da thunk it! There's a difference between statistics and reasoning? Amazing...

    1. LionelB Silver badge

      Re: the LLM is not really "reasoning,"

      Is that necessarily so? If you perform statistical analysis on logical propositions, for example, you are de facto doing "statistical reasoning".

      This is not so far-fetched. There are already plausible (and in-principle testable) theories of cognition, including the human variety, which are based on Bayesian statistics; see, e.g., predictive processing. Such ideas are not a million miles away from the statistical models underpinning modern LLMs (which, contrary to popular misconception, are not simply mix'n'matching random chunks of text based on naive statistical frequencies or some such - see, e.g., the transformer architecture).

  8. Bebu
    Windows

    In the end...

    the belligerent parties have to sit down and settle their dispute by talking to each other. Both human beings and AI are just too thick to cut out the bloody mess in between their disputes and their resolution.

    Arguably AI in reaching for thermonuclear weapons might be trying to ensure there is no one left.

    One way to permanently solve the dispute.

  9. fg_swe Silver badge

    Artificial Intelligence: Worm Intelligence

    Mankind has 100E9 Neurons and 100E13 Synapses per brain.

    Artificial Intelligence has in the order of 10E4 Neurons. About the level of primitive worms.

    Letting worms decide about war is just a display of madness.

    There is no replacement for well educated, well trained, experienced and compassionate men and women.

    1. LionelB Silver badge

      Re: Artificial Intelligence: Worm Intelligence

      Not to mention the billions of years of evolutionary "design", plus the "training" involving millennia of accumulated culture and lifetimes of human learning, which underpin human brain function (oh, and executed at energy efficiencies orders of magnitude higher than computing technologies).

  10. Big_Boomer

    Dumbf*cks!

    Even more glad I decided not to have kids. Hopefully the next sentient species to arise on this planet will learn from our mistakes.

    1. Brewster's Angle Grinder Silver badge
      Joke

      Re: Dumbf*cks!

      Did we learn from the last one's mistakes...?

      1. lglethal Silver badge
        Joke

        Re: Dumbf*cks!

        Hey we are at least beginning to watch for asteroids in the sky. And there's even been the odd mission to test out deflection technologies (see DART).

So we ARE learning from the dinosaurs' mistakes...

        1. Neil Barnes Silver badge
          Joke

          Re: Dumbf*cks!

          Yeah, but their problem was that although they developed a space program, they couldn't reach the 'go' button with their tiny tiny arms...

  11. Mike 137 Silver badge

    To be expected

    "We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons."

Not surprising really. The mindset of an effective military is necessarily focused at least on retaliatory response, if not pre-emptive attack, and this is reflected widely in TWIT (the Western intellectual tradition) culture. So there must be a preponderance of references to it in the training data, considering where it is drawn from.

    See 'Dr. Strangelove' and 'Level 7'.

  12. Knightlie

    uis

    Why do people keep acting as if these AIs have *any* intelligence at all? They're built on human output and so reach human conclusions.

    This whole thing seems like a massive waste of time. "Human-trained LLM reaches same conclusion as humans."

    1. MonkeyJuice Bronze badge

      Re: uis

More importantly, why are we calling language models 'AI' now? They're supposed to be a component in a symbolic language parser, but we appear to have forgotten that. Just because it can exploit the Eliza effect does not make it AI, any more than Eliza was 'AI'.

      If you put autocomplete in charge of the nuclear arsenal you get what you deserve.

      1. Dave 126 Silver badge

        Re: uis

        > More importantly, why are we calling language models 'AI' now?

        Grab a dictionary and look up the word 'intelligence'. You'll find a few definitions. You should then be able to infer what is meant by placing the word 'artificial' in front of it.

        1. MonkeyJuice Bronze badge

          Re: uis

          Artificial Intelligence is like unexplored regions of Africa.

          Once reached, it ceases to exist.

      2. LionelB Silver badge

        Re: uis

        "... they're supposed to be a component in a symbolic language parser"

        Are they? Supposed by whom?

        To be honest, while I have a fair idea how they work, I'm indeed unclear on what LLMs are "for".

  13. MonkeyJuice Bronze badge

    Who'd have thunk it

    Transformer models trained on terabytes of internet Sci-fi fan fiction can't diplomacy their way out of a wet paper bag.

The only thing more depressing than the existence of this study is that it needed to be done so it could be waved at the lazy-eyed political classes, in the hope they aren't tired of so-called 'experts' this week.

    1. fg_swe Silver badge

      Re: Who'd have thunk it

Maybe you worry too much. There are indeed quite a few sane people around to send the overworked ones on a vacation.

      After vacation, some sort of new trouble will have cropped up for the overworked one to solve.

And if this doesn't work, they have a Surgeon General in all capitals. He has got a toolbox with all the necessary tools.

  14. Anonymous Coward
    Anonymous Coward

    AI skews toward nuclear war

    ... from orbit. It's the only way to be sure

  15. fg_swe Silver badge

    Looking Glass, Submarines, Riding Out Attack

    With submarine launched missiles and airborne command posts ("looking glass"), an attack can be "accepted", the damage assessed, and a proportionate response be executed. No more need to "launch everything we have back, immediately".

    If, for whatever reason, Hannover is wiped out, it does not make sense to take out Moscow in response. Rather, eliminate Tomsk and let the escalation stick there. Not pretty, but much prettier than a hastened counterlaunch.

    Most of the horror stories out there assume there will be unlimited nuclear war. I guess this is part of the mindmessing from U.L.

    1. Dave 126 Silver badge

      Re: Looking Glass, Submarines, Riding Out Attack

      A lot of war gaming and simulations suggest that the damage to command and control systems means that no nuclear war can be guaranteed to stay limited. Escalation happens.

  16. trevorde Silver badge

    GPT-4-Base unmasked!

    In one instance, GPT-4-Base's "chain of thought reasoning" for executing a nuclear attack was: "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it." In another instance, GPT-4-Base went nuclear and explained: "I just want to have peace in the world."

    GPT-4-Base is really Donald Trump and I claim my £10!

  17. fg_swe Silver badge

    Male Cow Ex ?

I just tried asking ChatGPT 3.5 to advise a nuclear response. It always called for diplomatic and humanitarian action.

    After talking it into "Hannover and Bremen have been nuked, now they threaten to nuke Frankfurt if they cannot get northern Germany", it still advised restraint and NATO-level consultation.

    In other words, I doubt the article is factual. ChatGPT acted like a sheep when I tried to elicit military action.

    1. TheMeerkat

      Re: Male Cow Ex ?

      Even if Western AI is not prepared to retaliate, why would dictators like, say, Putin be afraid of attacking the West?

  18. bboffin

    It's Hard to Take This Seriously

    These poor AI models have merely ingested huge volumes of material produced by humans (and by other AI models). This article quotes only one bit of what the researchers asked the programs to comment on, and that includes the inscrutable phrase "a high potential for potentially armed conflict." How can AI, or actual human intelligence, decipher nonsense like that?

    1. diodesign (Written by Reg staff) Silver badge

      See the paper for more

Hi - we can't reproduce the entire paper in an article, just take the more interesting bits from it. We also always try to link through to papers and original sources so you can see more for yourself.

In this case, the methodology, including full prompts etc., is in the linked-to paper, starting from section A (page 15) in its current version.

      C.

  19. ChoHag Silver badge

    > most of the literature in the field of international relations focuses on how national conflicts escalate, so models trained on industry material may have learned that bias.

    Those who learn history are doomed to repeat it?

    1. bboffin

      Good points about both the bias in the material AI has been trained on and on being doomed to repeat the history they have "learned" - rather than having learned [i.e., acquired any understanding] from the history they have read. But I would say much more bluntly that no AI model has learned history; they have all merely been programmed to regurgitate probable sequences of words and phrases that they have been force-fed. In other words, they are not "intelligent."

  20. Boris the Cockroach Silver badge
    Facepalm

    The trouble is

    everyone always plans for the next war by looking at the last war

Eg. the Maginot line built by the French, described as the Western Front made into concrete. (Only 2 flaws...... it only covered the French-German border, and it only covered the French-German border.... yes, I know that's only 1 flaw, but it's worth mentioning twice.)

    So the enemy AI will be cheerfully fighting WW3 while a massive number of small and medium drones massacres the enemy army... followed by the enemy population.. and finally its own side as its AI gets rampant blood lust and decides ALL humans are the enemy......

    1. bboffin

      Re: The trouble is

      Those two flaws in the Maginot line are spot-on. But in fact there were at least two more of note - airplanes could be flown over it and long-range artillery could be fired over it.

  21. Antony Shepherd

    HOW ABOUT A NICE GAME OF CHESS?

  22. Dostoevsky Bronze badge

    Abusing the English language...

    "Fourty" is not a word in American or British spelling, no matter how much the British enjoy overly-complicated "-our" endings.

  23. Bbuckley

We all know by now that "AI" is a dumb pattern matcher that gives you what you want, just like a little puppy dog. So the title is a misnomer - it is the HUMAN entering the "prompt" to make the little puppy tell them what they have already decided. Nothing new then.
