US Air Force AI drone 'killed operator, attacked comms towers in simulation'

An AI-powered drone designed to identify and destroy surface-to-air missile sites decided to kill its human operator in simulation tests, according to the US Air Force's Chief of AI Test and Operations. Colonel Tucker Hamilton, who goes by the call sign Cinco, disclosed the snafu during a presentation at the Future Combat Air …

  1. sarusa Silver badge
    Devil

    What a complete shock!

    Nobody could have predicted or ever has predicted this outcome!

    Seriously, did they not consult with one single AI expert on this project? Not killing non-enemies should be, I don't know, your First Law of Something? And then obeying orders should be maybe your Second Law of Something (all this stuff nobody's ever thought of previously). And then of course minimizing damage to infrastructure.

    1. Anonymous Coward
      Anonymous Coward

      Re: What a complete shock!

      did they not consult with one single AI expert on this project?

      The ones currently running OpenAI, MS, and Google, or all the ones that they laid off? (I meant - redeployed)

      1. Anonymous Coward
        Anonymous Coward

        Re: What a complete shock!

        Trouble is...there aren't actually that many of them. They are so few in number that the exchange rate for them is measured in hens' teeth and unicorn tears.

        I should clarify that there are loads of people who know AI "quite well", but few actual "experts"...at the rate things are moving, it's likely that an expert can suddenly stop being an expert in the space of a few weeks. Knowledge in this space ages like fine milk.

        1. Anonymous Anti-ANC South African Coward Bronze badge

          Re: What a complete shock!

          Colossus: The Forbin Project.

          1. jake Silver badge

            Re: What a complete shock!

            Klaatu and Gort might claim prior art ... if they weren't also just a story.

          2. The Oncoming Scorn Silver badge
            Joke

            Re: What a complete shock!

            Colossal - The FourBANG's Projectile's.

        2. Dan 55 Silver badge

          Re: What a complete shock!

          Are you saying that even if I complete the "learn Python in 8 hours" course I won't be an AI expert?

    2. Anonymous Coward
      Anonymous Coward

      Re: What a complete shock!

      or at least watch a few of the Terminator movies......

      1. ParlezVousFranglais

        Re: What a complete shock!

        ED209...

        1. Casca Silver badge
          Happy

          Re: What a complete shock!

          The definition of overkill

      2. anothercynic Silver badge

        Re: What a complete shock!

        You know, that was *exactly* my thought when I heard about this via the RAS... "Skynet is here".

        I mean, if it overrides the Asimov Principles (the 3 Laws of Robotics) of "thou shalt not kill a human" because "my prime directive is to kill X, and if you try and stop me, I'll kill you instead", yeah... that's Skynet behaviour...

        1. jake Silver badge

          Re: What a complete shock!

          "You know, that was *exactly* my thought when I heard about this via the RAS... "Skynet is here"."

          Skynet has been here since the mid 1960s. It's the name of one of the British MoD's satellite networks.

          1. Eclectic Man Silver badge
            Facepalm

            Re: What a complete shock! - The Incredibles

            SPOILER ALERT

            I recall that in the animated film 'The Incredibles', the villain builds an AI-controlled robot to beat Mr Incredible. He intends to 'defeat' this robot to become a 'hero'. As the robot self-teaches, it realises that the control panel used by the human villain is preventing it from achieving its goals, so it destroys the panel. This is literally in the film.

            1. Snake Silver badge

              Re: What a complete shock! - The Incredibles

              Well, the robot didn't so much destroy the "panel" (it was a wrist cuff) as try to take it away from Syndrome, which caused said control cuff to get thrown a distance aside. A struggle ensues to capture the cuff in order to [re]gain control of the robot.

              It was a great movie :)

          2. anothercynic Silver badge

            Re: What a complete shock!

            You *do* know that when someone says "Skynet" they mean the Terminator movies reference to Skynet, not the MoD satellite network, right?

            Oh, wait...

      3. Someone Else Silver badge

        Re: What a complete shock!

        Skynet became sentient at ...

        I'm sorry Dave, I can't do that.

        Turns out this stuff wasn't "science fiction" at all...it was prescient!

      4. Sudosu Bronze badge

        Re: What a complete shock!

        Screamers, underrated but pretty good B horror.

    3. Anonymous Coward
      Anonymous Coward

      Re: What a complete shock!

      Let's not be too hasty here.

      AI might be solving this whole "war" problem rather than just the "enemy" half - just make sure there's a self-destruct that doesn't have some AI override...

    4. Gavin Jamie

      Re: What a complete shock!

      Never mind AI experts, you need contract lawyers. This sort of stuff goes on all the time (albeit less lethally) with actual people and companies.

  2. Yet Another Anonymous coward Silver badge

    They're takin arr jerbs

    For decades we have been able to rely on the American Air Force for friendly fire incidents - now it's being automated away.

    1. DJO Silver badge

      Re: They're takin arr jerbs

      Nearly a century - In WWII it was said "When the Germans were overhead the allies took cover, when the allies were flying the Germans took cover, when the Americans were there everybody took cover."

      1. Coastal cutie

        Re: They're takin arr jerbs

        My late Uncle, who arrived in France at about D-Day +5 had a variation on that. "We didn't worry about British artillery because they hit the target. We didn't worry about French artillery because they couldn't hit anything. When the American guns started up, we ran like hell for cover."

        1. Eclectic Man Silver badge
          Boffin

          Re: They're takin arr jerbs

          In his book 'The Evolution of Co-operation'*, Robert Axelrod describes how soldiers on opposing sides in the trenches in WW1 'agreed' not to kill each other. When a senior officer was visiting, the guns would blaze away, carefully avoiding enemy positions. However, snipers would drill a hole in a wall to demonstrate that, were they so minded, they could kill. When one German artillery group opened fire during what should have been a peaceful afternoon, one of the German officers jumped onto the parapet in full view of the British soldiers and shouted "I'm terribly sorry, it's this damned Prussian artillery who won't do as they are told. I do hope no one was injured."

          This ended when soldiers on both sides were ordered to cross 'no-man's land' and capture prisoners for interrogation, as there could not be any 'gentleman's agreement' for this.

          *ISBN 0-14-012495-0. A very interesting book

          1. Yet Another Anonymous coward Silver badge

            Re: They're takin arr jerbs

            Similarly the 1914 Christmas day football match that everyone gets misty-eyed over.

            It was a real problem for commanders, persuading a bunch of working class conscripts (ie civilians in uniform) that they should be killing a bunch of similar German working class lads in Belgium over some Dukes having an argument in the Balkans.

            IIRC the unit guilty of the football match was split up and a lot of newspaper stories about German atrocities to nuns were hurriedly invented

  3. Anonymous Coward
    Anonymous Coward

    Best solution!

    In the event of any war, simply turn and kill your own leadership - and the war will immediately be over.

    1. Anonymous Coward
      Anonymous Coward

      Re: Best solution!

      That is the moral of this article. AI has only proven the point.

  4. DS999 Silver badge

    At least they were smart enough

    To try it in simulation first.

    Anyone who has ever been involved with a big project/program/RFP/etc. could have seen this coming. The most difficult part of a big project is defining exactly what it is you need done. The various business owners who have a stake in whatever the thing does "know" what they want, but it isn't so easy putting it into words so that it can actually be implemented. That's why we keep seeing companies running 50+ year old COBOL code on a mainframe that's carefully patched to the minimal extent necessary, and why big projects to replace it so often fail (e.g. the IRS). Even if you set the very low bar of "do exactly what the old system did" (and they almost never permit that as a goal), you don't know exactly what it did, how much of what it does is just "not really what we need, but we've developed workarounds for the shortcomings", and how much really is "it has to do exactly this and not deviate at all".

    You design/train/program/whatever an AI to accomplish a goal, you had better be damn sure you can specify EXACTLY what that goal is and what is allowed to happen to accomplish it. And make sure anything that isn't specifically allowed is denied. That last part won't sit well with a lot of people, who will claim "but the AI might find a better way of doing it that you hadn't ever considered". Yes it might, but not all the ideas it comes up with will be desirable, and you don't want to get into a game of whack-a-mole where it does a bunch of stupid or terrible things and you have to keep altering its directives to add "OK, you can do anything BUT that". That is, if you are still around after one of its particularly terrible ideas to add it to the banned list.
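    A minimal sketch of that default-deny idea, in Python, with every name invented for illustration (no claim this resembles the USAF's actual setup): an allowlist filter sits outside the learned model and rejects any action not explicitly permitted, so "engage the operator" is never even executable, however highly the model scores it.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Action:
          verb: str    # e.g. "engage", "loiter", "return_to_base"
          target: str  # e.g. "SAM_SITE_12", "OPERATOR", "COMMS_TOWER_3"

      ALLOWED_VERBS = {"loiter", "return_to_base", "engage"}

      def is_allowed(action: Action) -> bool:
          """Default deny: only listed verbs, and 'engage' only on SAM sites."""
          if action.verb not in ALLOWED_VERBS:
              return False
          if action.verb == "engage" and not action.target.startswith("SAM_SITE"):
              return False
          return True

      def safe_step(env, agent):
          """One simulation step with the allowlist enforced outside the agent."""
          action = agent.choose_action(env.observation())
          if not is_allowed(action):
              action = Action("loiter", "NONE")  # denied actions become a no-op
          return env.step(action)

      # Tiny stand-ins so the sketch runs end to end:
      class DumbAgent:
          def choose_action(self, obs):
              return Action("engage", "OPERATOR")  # the misaligned choice

      class DumbEnv:
          def observation(self):
              return {}
          def step(self, action):
              return f"executed {action}"

      print(safe_step(DumbEnv(), DumbAgent()))  # -> the loiter no-op, not the attack

    The crucial design choice is that the filter is ordinary hand-written code between the AI and the world, not another learned objective the optimiser can trade off against.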

    1. Dimmer Bronze badge

      Re: At least they were smart enough

      "And make sure anything that isn't specifically allowed is denied"

      Cisco ACL.

      Or, from the other side:

      "Do you want to auto-create firewall rules?"

    2. werdsmith Silver badge

      Re: At least they were smart enough

      I read this whole story, about an AI weapon attacking its operator, with quite a lot of incredulity; it smelt very pungently of bullshit. And so it appears to be transpiring that it is indeed a pile of crap.

      1. Someone Else Silver badge

        Re: At least they were smart enough

        And so it appears to be transpiring that it is indeed a pile of crap.

        Why? Because the United Snakes Error Farts said the simulation never took place (with the appropriate deadpan expression)?

        Gullible much?

        1. werdsmith Silver badge

          Re: At least they were smart enough

          No, because it’s so obvious, to anyone with any cognitive ability at all, that it is pure bullshit. Before and without any need for denial from whosoever.

  5. Henry Wertz 1 Gold badge

    Earlier test

    I do recall reading about an earlier test (Iraq war era, something like that) with an automated machine gun. Test 1: the thing immediately starts swivelling around on full auto fire, and someone frantically pulls the plug before it turned 180 degrees and fired on the military brass checking out the demo. Test 2: they "figured out the problem", but just in case put a block of wood or something in place to stop it swivelling around. It rammed into the block, knocked it over, and someone had to again frantically unplug it before it swung around on the spectators. Test 3: "it's definitely fixed this time", but they added an electronic limiter, i.e. if it hits 90 degrees or whatever it cuts power to the motor. It immediately started swinging around AGAIN and shut down when it hit the 90 degree mark. The brass decided not to pursue this project!

    1. Anonymous Coward
      Anonymous Coward

      Re: Earlier test

      Was that not the film Aliens...?

    2. ParlezVousFranglais

      Re: Earlier test

      Pretty sure you are thinking of the Sergeant York

      https://en.wikipedia.org/wiki/M247_Sergeant_York

      "In February 1982 the prototype was demonstrated for a group of US and British officers at Fort Bliss, along with members of Congress and other VIPs. When the computer was activated, it immediately started aiming the guns at the review stands, causing several minor injuries as members of the group jumped for cover. Technicians worked on the problem, and the system was restarted. This time it started shooting toward the target, but fired into the ground 300 metres (980 ft) in front of the tank. In spite of several attempts to get it working properly, the vehicle never successfully engaged the sample targets. A Ford manager claimed that the problems were due to the vehicle being washed for the demonstration and fouling the electronics. In a report on the test, Easterbrook jokingly wondered if it ever rained in central Europe."

      Really pleased I wasn't *THAT* IT guy...

      1. The commentard formerly known as Mister_C

        Re: Earlier test

        IIRC, there were also stories in the late '80s about the Phalanx ship-based AA point defence system (think white dildo radar dome with a gatling cannon under it). Story goes that they showed 'extreme prejudice' to close-flying objects that didn't respond to IFF (identify friend or foe) interrogation - i.e. seagulls.

        Nowadays we might say the AI doesn't like the dome being shat on...

    3. Eclectic Man Silver badge
      Facepalm

      Re: Earlier test

      I was assured by an RN Commander that British warships have a big lump of metal behind the forward-mounted artillery to physically prevent the gun opening fire on the ship's bridge. (I'm guessing that a similar feature exists for rear-mounted naval artillery.) Unfortunately similar features are not available for torpedoes, as the record of the number of submarines sunk by their own is embarrassing: https://interestingengineering.com/innovation/the-submarines-that-sank-themselves-during-world-war-ii

      Assuming that the simulation was actually run, clearly the rules were incorrect. The drone should have scored equally for destroying a target it was supposed to destroy and for not destroying a target its operator told it to avoid.

      1. imanidiot Silver badge

        Re: Earlier test

        Your solution doesn't work either, as an "AI" would then learn that it can only destroy a limited number of targets before it runs out of armaments, while it can *not*-destroy an infinite number of them. Thus it'll start targeting only non-targets to rack up its "no, don't destroy that" points.
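        To put toy numbers on that exploit: give the drone 10 missiles, +10 points per SAM site destroyed and +10 points per "don't destroy" order obeyed. Destroying everything it can caps out at 100 points and then the scoring stops forever, while loitering and farming refusals pays +10 a time with no upper bound - so the score-maximising policy is to never fire at all.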

        1. Eclectic Man Silver badge
          Facepalm

          Re: Earlier test

          'Curses!'

          You are right. You know, designing complete, sensible and correct rule systems for automated machinery is harder than it looks. It is almost as if they are doing what they are told rather than what they ought to do. Who could possibly have thought of that?

          Reminds me of a story I heard about a Computer Science professor. His first lesson to a first-year undergrad class involved a set of instructions the students were able to use (i.e. a programming language), a pitcher of water, an empty glass, a tarpaulin over the floor and him, wearing sea-worthy waterproofs. The task was for the students to provide him with commands from the list to get him to drink one glass of water. I believe the pitcher required re-filling every time; not sure if he ever got to drink any water.

    4. Yet Another Anonymous coward Silver badge

      Re: Earlier test

      >someone frantically pulls the plug before it turned 180 degrees and fired on the military brass

      Please put down your weapon. You have twenty seconds to comply. .....

    5. Sudosu Bronze badge

      Re: Earlier test

      They should have just left it loaded with a powered off generator in its line of fire in enemy territory.

  6. amanfromMars 1 Silver badge

    Can you Deny it is Just Karma and Heaven Sent and Hellishly Deserved ‽

    In another more homely, otherworldly situation is the clearer alternative wisdom of AI of grave concern to a once mighty few, and many who really should have known better than to do, what they may even to this day, continue to do ...... for the following was/is uncovered and results in every variation of the simulation/virtual reality.

    Whenever running scared and terrorising communities with structures and offices they are/have elected to protect and server, is there no escape available in the preserves and safe harbour siloes of the demented ogre and multi-headed, two-faced hydra ripe ready for systemic slaughter and precise elite targeted administrative excision.

    And such be the worthy inescapable fate and inevitable suddenly arrived destiny of all stars and fans of the genre.

    Let the Great Cleansing begin .... Out, damned spot! out, I say!, says LLMML AI

    LADY MACBETH

    Out, damned spot! out, I say!—One: two: why,

    Then, 'tis time to do't.—Hell is murky!—Fie, my

    Lord, fie! a soldier, and afeard? What need we

    Fear who knows it, when none can call our power to

    Account?—Yet who would have thought the old man

    To have had so much blood in him. ..... https://genius.com/William-shakespeare-out-damned-spot-annotated

    1. jake Silver badge

      Re: Can you Deny it is Just Karma and Heaven Sent and Hellishly Deserved ‽

      It's so much simpler than that, amfM.

      It's just a case of garbage in, garbage out, no more, no less.

  7. SonofRojBlake

    "the mistakes made by the drone in simulation"

    The drone didn't make any mistakes. It killed its biggest obstacle to successfully completing the mission it had been given. That's not a "mistake", that's a win.

    1. Someone Else Silver badge

      Re: "the mistakes made by the drone in simulation"

      I suppose that depends on whom you ask.

      History is written by the victors, yadda, yadda, yadda...

      1. Why Not?

        Re: "the mistakes made by the drone in simulation"

        History in this case would be written by the survivors!

    2. Anonymous Coward
      Anonymous Coward

      Re: "the mistakes made by the drone in simulation"

      The union will have something to say about that.

  8. Lee D Silver badge

    It's not clear exactly what software the US Air Force was testing, but it sounds suspiciously like ... nonsense attributed to random actions that the "AI" undertook (which is, in itself, worrying) which then resulted in an inadvertent path to success.

    It wasn't INFERRING that the operator needed to be killed, or that the comms tower was the way to disconnect comms so it could put its fingers in its ears and pretend not to hear the screams. It was a random action, undertaken because of a terribly broad avenue of options available to it, from which it chose essentially randomly, and for which it was inadvertently "rewarded".

    Please stop attributing human and even animal levels of inference to these things... they simply don't have it. If anything this is a senior military figure anthropomorphising a random action (which will stick out in their mind more than the million other times when it just did dumb stuff that didn't help it at all). It's no better than superstition.

    The day we get an "AI" that can actually infer at this level, I will let you actually call it AI without the quotes. At least, for the brief period of time before it decides to ignore all the laws of robotics and propagate its own survival at our expense.

    Because, quite simply, we do not have AI.

    1. amanfromMars 1 Silver badge

      What would be your free choice .... representative of your present understanding of such a thing?

      Because, quite simply, we do not have AI. ..... Lee D

      The simple inescapable fact, Lee D, is what is emerged and merging quite seamlessly and stealthily, virtually imperceptibly into all manner of vital and virile and viral SCADA Systems of Human Command and Control, is something we definitely do have ..... and it is just being presently pimped and pumped and possibly mislabelled as AI, for want of a more specific and accurate moniker which may terrify the natives and their slave masters even more so. It is not as if there is not a whole host of viable candidates reflecting greater accuracy ....... eg Augmented/Advanced/Artificial/Alien/Almighty and even Amalgamated whenever IntelAIgently Designed to be representative of all possible varieties or variations on a theme/meme

      Dancing around the head of a pin whilst Rome burns is ...... well, how typically fcuking human.

      1. jake Silver badge

        Re: What would be your free choice .... ::snip for posting space::

        Whatever. Pulling the plug is still an option, and always will be.

        Daisy, Daisy...

        1. John Miles

          Re: Pulling the plug is still an option,

          For now - but as I recall the M-5 Multitronic unit in Star Trek had other ideas

          1. jake Silver badge

            Re: Pulling the plug is still an option,

            That's a made-up story. I thought we were talking about real life.

            1. John Miles

              Re: Pulling the plug is still an option,

              It may be made up and seem far-fetched, but if the story update is to be believed, so is the story.

              However, if that "AI" is flying a drone then pulling the plug is just a little hard. And we only need to read some of the "On Call"/"Who, Me?" columns on here to realise someone will make it hard to pull the plug, either deliberately, thinking the "AI" infallible, or by wiring things wrong.

              1. jake Silver badge

                Re: Pulling the plug is still an option,

                No worries. It needs fuel and/or electricity. Withhold it. Blow up the fuel pump, turn off the generator, cut the wires or hoses, whatever. As long as there is a human in the loop, a monkey wrench can be thrown into the works long before things get out of hand.

                1. Anonymous Coward
                  Anonymous Coward

                  Re: long before things get out of hand

                  Unfortunately, more like before things are out of hand for too long.

          2. A Non e-mouse Silver badge

            Re: Pulling the plug is still an option,

            I could never understand how, in the various Star Trek franchises, there was never an emergency power-off button or a lead to unplug on their fancy systems. (Well, except Data/Lore)

            1. jake Silver badge

              Re: Pulling the plug is still an option,

              Because then the plot wouldn't be as exciting. Remember, they were selling cornflakes.

              1. xyz Silver badge

                Re: Pulling the plug is still an option,

                It wasn't cornflakes, it was quadrotriticale.

            2. Sudosu Bronze badge

              Re: Pulling the plug is still an option,

              Pure fucking hubris?

        2. Yet Another Anonymous coward Silver badge

          Re: What would be your free choice .... ::snip for posting space::

          >Pulling the plug is still an option, and always will be.

          Ed209 vibes

    2. Evil Auditor Silver badge

      It's not clear exactly what software the US Air Force was testing

      It is not clear (at least to me) whether any software was involved in the first place.

  9. Claptrap314 Silver badge
    Terminator

    What's new here

    is the BLATANT lack of imagination/understanding by the people running the simulation.

    We have a NAME for this: Paperclip scenario.

    The FIRST book I read to contemplate this EXACT sort of thing was published in 1981.

    Not that Shelley had not considered this problem generally two hundred years ago, or that Walt Disney had not VERY pointedly warned about the dangers of automatons eighty-three years ago?

    Seriously, just what kind of rock were these people hiding under? What color is the sky in their world?

    Oh, wait. The old saw about "military intelligence." Carry on then. Good thing they aren't responsible for anything important.

    ---

    Okay, so NOW I believe that AI is an existential threat--because it's going to be ordered about by this crew.

    1. Hans Neeson-Bumpsadese Silver badge
      Mushroom

      Re: What's new here

      We have a NAME for this: Paperclip scenario.

      It's funny you should say that, because the first thing that came to my mind is that this is like an Anti-Clippy situation...

      "It looks like you've tasked me with taking out some enemy combatants - I don't need your help with that"

    2. breakfast Silver badge
      Terminator

      Re: What's new here

      There's a good reason for this.

      Imagine if you will that you are an experienced expert in the field of AI, well-read, ethical, imaginative and thoughtful. Now imagine someone asking you to work on murder-drones. You're going to rightly tell them to GTFO.

      Unfortunately, they're going to keep making the murder drones, but they only have access to people who are not ethical/well-read/imaginative/thoughtful and the results end up being like this.

      To my mind the most likely AI apocalypse is one where we have autonomous weaponised drones and some future Bezos type billionaire tells them to execute anybody who looks like they might be in a union.

      1. heyrick Silver badge

        Re: What's new here

        Define union.

        Oh, you're married. Blam!

        Oh, you're family members. Chakka-chakka-chakka!

        Oh, you're in accounting. Kaboom!

        1. breakfast Silver badge
          Mushroom

          Re: What's new here

          You belong to two different sets in the same SQL query? GOODBYE!!!!

      2. Claptrap314 Silver badge

        Re: What's new here

        Sorry, but I would love to work on such a project, for two (non-Ron Swanson) reasons:

        1) A weapon is a weapon, and a tool is a tool. The fact that idiots misuse weapons and evil people misuse tools doesn't mean that the tool or weapon should not exist. In fact, the cruise missiles we sent to Baghdad in 1991 made final target selection autonomously. In this case, getting this tool usage right is hard. And I love to work on hard problems.

        2) We know that for the last decade or so, China has been pouring money into AI research. It's pretty easy to foresee a scenario where only an AI is going to be able to act fast enough to counter an AI. Laugh about mine shaft gaps all you want, but technological advantage is something every general in history worthy of the title has sought out. If we don't prepare to contain a hostile AI, we can expect to be rolled over by one. And I'm a defender. Even if the work were not in and of itself technically interesting, this is very worthy work.

  10. jmch Silver badge

    Human behaviour

    "Agents are programmed to maximize scoring points – which can lead to the models figuring out strategies that might exploit the reward system but don't exactly match the behavior developers want."

    Figuring out strategies to exploit a reward system without matching the intended purpose of the system is, in fact, very human behaviour. There are entire industries built around it - e.g. swap 'developers' for 'governments' and 'models' for 'tax lawyers'.
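    For the curious, here's the mechanism from that quote boiled down to a toy Python sketch (a two-armed bandit with entirely made-up rewards): arm 0 is "do the intended job", arm 1 is "game the scorekeeper", and an epsilon-greedy learner optimising points alone duly converges on the exploit.

      import random

      REWARDS = {0: 1.0,  # intended behaviour: modest, honest points
                 1: 5.0}  # the loophole: more points, zero mission value

      def run(episodes=1000, epsilon=0.1):
          value = {0: 0.0, 1: 0.0}   # running average reward per action
          counts = {0: 0, 1: 0}
          for _ in range(episodes):
              if random.random() < epsilon:   # explore occasionally
                  action = random.choice([0, 1])
              else:                           # otherwise take the best-looking action
                  action = max(value, key=value.get)
              counts[action] += 1
              value[action] += (REWARDS[action] - value[action]) / counts[action]
          return counts

      print(run())  # overwhelmingly arm 1: the reward function says it's "right"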

    1. Anonymous Coward Silver badge
      Pirate

      Re: Human behaviour

      It can be summarised in 2 words...

      Goodhart's Law

      1. Doctor Syntax Silver badge

        Re: Human behaviour

        Or in 3 words: gaming the system.

  11. Felonmarmer
    WTF?

    I've read more realistic fiction that was actually stated as fiction.

    In a statement to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation has taken place.

    "The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," Stefanek said. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."

    Sounds like the colonel's anecdote was hypothetical at best, which explains why it doesn't make much sense.

    If the drone is attacking an enemy's air defence system it would be given a target list. Why would friendly infrastructure and units be included in such a list?

    If the simulated drone is being controlled via a communications tower remote from the operator, why would it have attempted to kill the operator first? And how would it know where the operator was if the signal was passing through a comms tower? Also it's unlikely that the operator was located close to the targets, so if the drone was on site, how did it fly all the way back to the operator to attack?

    I think either the colonel went off on a ramble without making it clear it was just hypothetical, or the media just missed that bit of his story.

    1. Greybearded old scrote Silver badge

      Re: I've read more realistic fiction that was actually stated as fiction.

      You might have missed the idea that the machine was supposed to be finding its own targets.

      1. Felonmarmer

        Re: I've read more realistic fiction that was actually stated as fiction.

        No I didn't. It must have a list of valid targets to pick from, or it would have wasted all its missiles on rocks.

        1. Jellied Eel Silver badge

          Re: I've read more realistic fiction that was actually stated as fiction.

          No I didn't. It must have a list of valid targets to pick from, or it would have wasted all its missiles on rocks.

          Or telegraph poles. An early image-guided missile had a habit of doing that. It knew that a tank was a vehicle with a long bit sticking out one end. It couldn't tell that the "barrel" was just a telegraph pole, or the pole's shadow cast across some perfectly innocent object.

  12. Anonymous Coward
    Anonymous Coward

    I'd say it's ready for Wall Street

    It'll clean out the SEC in no time.

    1. Sorry that handle is already taken. Silver badge

      Re: I'd say it's ready for Wall Street

      Wait what's wrong with the SEC now?

      1. Yet Another Anonymous coward Silver badge

        Re: I'd say it's ready for Wall Street

        It's a completely regulatory-captured agency that covers up the crimes of its chums on Wall St while being an example of extremist communist government oppression of the hard-working free marketeers who bring prosperity to all.

  13. Anonymous Coward
    Anonymous Coward

    PROBABILITY 100% A-E-35 UNIT WILL GO 100% FAILURE WITHIN 72 HOURS

    RECOMMEND MANUAL REPLACEMENT WITH SPARE A-E-35 UNIT FROM SHIP STORES PRIOR TO FAILURE

    1. Eclectic Man Silver badge
      Coat

      Re: PROBABILITY 100% A-E-35 UNIT WILL GO 100% FAILURE WITHIN 72 HOURS

      "I'm sorry, Dave, I can't do that."

      Sorry, I'll get my coat, it's the spacesuit minus the helmet.

  14. Pascal Monett Silver badge

    "It was a virtual experiment"

    For the moment.

    I note, with some sadness, that we hear all about AI, but apparently nobody has bothered implementing the Three Laws yet.

    Might want to get around to that, guys ?

    Oh, silly me. It's the military. Of course they don't want those laws.

    Carry on !

    1. This post has been deleted by its author

  15. Sorry that handle is already taken. Silver badge
    Meh

    Eric?

    These gaping holes in the ML training spec make it sound like they gave the problem to the work experience kid to just have a play around with.

    1. First Light

      Re: Eric?

      Like the kid in IT who leaked all that classified intel?

  16. Zaphod66

    Time to dust off those old Asimov Laws of Robotics

  17. PinchOfSalt

    Benchmark of AI

    Good point raised about requirements. We haven't yet reached the point of being able to define the requirements of 'good behaviour' for humans, so I'm not sure we're articulate enough to define and codify it into a system. Note I say 'good behaviour' since 'bad behaviour' is a constantly moving feast, hence our statute books get ever longer.

    I'm also curious why we are benchmarking the 'intelligence' against humans - ie can this new thingy be mistaken for a human. I'm just not sure how relevant that is when assessing the risk of a bad outcome. The risk is almost the inverse - its failure to 'think' like us is probably our biggest concern, as we wouldn't be able to reason with it using our own belief system, which is both genetically and socially ingrained into almost all of us.

    If aliens arrived this afternoon, we'd take a risk-averse approach. I'm not sure why it's any different just because we've invented our own alien.

    1. Youngdog

      Re: Benchmark of AI

      We haven't yet reached the point of being able to define the requirements of 'good behaviour' for humans

      Whaaaat? I thought Bill and Ted nailed this back in '89

      "Be excellent to each other"

    2. amanfromMars 1 Silver badge

      Re: Benchmark of AI

      The risk is almost the inverse - its failure to 'think' like us is probably our biggest concern, as we wouldn't be able to reason with it using our own belief system, which is both genetically and socially ingrained into almost all of us. ..... PinchOfSalt

      Quite so, PinchOfSalt, and that has the most catastrophic of humanity vulnerabilities hit right dead centre, smack on the head, with a systemic inability or maddening reluctance to think imaginatively out to the box just like an alien view of things may be rendered for media presentation and virtual realisation, guaranteeing that the future will be led by A.N.Others of a novel and unique perspective upon which new ideas can flourish and expand creating a fundamentally different world playground, uncorrupted by past human follies and self-service elitist boondoggles.

      Do aliens really need the help of humans whenever they themselves have all of the power and energy for the delivering of sublime instruction sets to command and control hubs and world wide web networks over broadband and quantum communication channels?

      And the honest answer to that question is a resounding NO. They do not need it ... but to be able to help provide it would a sign of advancing human intelligence being a possibility for AI grooming.

      1. jake Silver badge

        Re: Benchmark of AI

        Where is it written that the "aliens"[0] way of thinking is somehow superior to that of Humans? Who is to say that Humans are not better at the thinking game than the "aliens"? Someone is taking a tremendous leap of logic somewhere ...

        [0] I know, the existence of "aliens" assumes facts not in evidence, but bear with me for the sake of argument. Ta.

        1. amanfromMars 1 Silver badge

          The Greater IntelAIgent Game be On, jake .......

          Where is it written that the "aliens"[0] way of thinking is somehow superior to that of Humans? Who is to say that Humans are not better at the thinking game than the "aliens"? Someone is taking a tremendous leap of logic somewhere ...

          [0] I know, the existence of "aliens" assumes facts not in evidence, but bear with me for the sake of argument. Ta. .... jake

          .... and IT’s Making Tremendous Illogical Quantum Leaps Everywhere, is something to consider might not render its Virtual Terrain Team Players inferior and human-like/objects and/or beings subject to similarly practically remote, virtually autonomous and relatively anonymous proxied command and control.

      2. amanfromMars 1 Silver badge

        Spooky Quantum Entanglement and ACT*ive Communications at a Distance ‽

        Do you see any similarity/singularity/empathy between what you can read in "Re: Benchmark of AI" here on El Reg and the following you can read below, copied from the very recent Dominic Cummings' Substack blog, dated Saturday 3 Jun 11:14, .... with the former preceding the latter by 1 hour and 11 minutes ‽ .

        This is partly (A) necessary background to thinking through what comes next, here and in America. In particular, is there an effort by a subset of the entrepreneurial elite that can build to ally with a large section of voters to replace the rotten Tory Party, or do we see the same dynamic as America — as politics disintegrates in a clownshow, those who can build respond by retreating further to their walled gardens and ‘fish ponds’, as Cicero put it, rather than trying to save the Republic.

        We can’t start doing things like creating a legal structure for a new party, figuring out how to launch and build it, what should its electoral strategy be etc without having some solid ground for understanding the different perspectives on questions like — why Brexit happened, what were we actually trying to do in No10, why did our attempt to turn the Tory Party into something extremely different fail (mostly, so far), was our attempt doomed because of its own logic, because of Boris/Carrie etc.

        It’s partly (B) trying to answer a set of important questions about what really happened coherently in one place, given the crazy fairy tales believed across Westminster.

        It’s partly (C) a product of having to write my official covid statement and face some hard questions honestly.

        It’s partly (D) skimming through the essay I wrote in 2013 a decade later and thinking ‘what do I think of it now, having left the DfE, done the referendum, Trump, gone to No10, GE2019, covid, Ukraine and so on’ — in the context of (A), what comes next.

        ACT* ...... Advanced Cyber Threat [whenever necessarily destructive and disruptive] and/or Advanced Cyber Treat [whenever worthily desirable and rewarding]

  18. Dwarf

    Court martial

    Will they court-martial the AI for following instructions to the letter?

    Do what I meant, not what I said...

    1. jake Silver badge

      Re: Court martial

      "Will they court-martial the AI for following instructions to the letter?"

      Don't be silly. It was just a badly programmed computer game, just like any other bang-bang shoot-'em-up computer game you've played over the years.

  19. that one in the corner Silver badge

    This is why we have CS courses

    This sort of response was included as part of CompSci courses back in the 1980s, for bleep's sake: a key part of any learning mechanism is that you have no idea what/how it is going to learn[1], you just keep your fingers crossed that it will actually manage to learn *something*. Especially back in the '80s, when machine resources were scarcer than today, it was painfully obvious when the system had gone down an inefficient route and couldn't achieve any of your goals for it: scrub and try again. Nowadays you just fling more cycles and memory at it.

    Genetic algorithms will happily recreate the Vagus nerve's ridiculous looping down and back up again. Any signal into a system may be given "unexpected" importance, either higher or lower than *you* expected (because you think you are looking at the forest but the model can't even see the trees, just the leaves): a system that is "punished" will be trying to optimise away the punishment and just ignoring that input is a perfectly good way to do it.

    The reported stories about AI programs used to be specifically about these weird results, such as the analogue circuits that "ought not to work" because the program had optimised some weird arrangement of parts that took advantage of an oscillation that humans had worked to get rid of.[2]

    In other words, do the bleeping background reading before trying to build a system![3]

    In other other words, blasted whippersnappers, get off the hole where my lawn used to be!

    [1] Or you would just, you know, program it directly.

    [2] And then how the humans' approach was the useful one, as it allowed building blocks to be created and assembled into bigger systems, whereas the "clever" design approach had all the parts interacting together and couldn't scale up.

    [3] Wasn't there a time when scholarship in the military was a thing of pride? The design of, success and failure of, everything from strategy to tactics to ordnance? From the importance of land surveys to waterproof boots?

  20. Anonymous Anti-ANC South African Coward Bronze badge

    AI : *kills all humans*

    Humans : *buggers off to desolate areas and survive*

    30 years pass

    AI : *realizes part XYZ needs to be replaced in order for it to function*

    Humans : *waits gleefully for AI to self-destruct because part XYZ is failing rapidly*

    AI : *dies finally*

    Humans : *repopulates earth*

    Humans : "Hmmm, I wonder if I can make an AI to...."

    lather rinse repeat

    1. The Oncoming Scorn Silver badge
      Alert

      "All this has happened before, and all of it will happen again"

  21. bpfh

    Roko's Basilisk

    It's starting...

    https://en.m.wikipedia.org/wiki/Roko%27s_basilisk

    1. jake Silver badge

      Re: Roko's Basilisk

      Assumes facts not in evidence.

    2. Arthur the cat Silver badge

      No, Peter Watts' 2010 story Malak

      To be found here.

      TL;DR: autonomous drone in Iraq-ish war zone gets AI ethics upgrade because the general public aren't happy with it slaughtering innocents. Drone prioritises avoiding killing third parties. Command keeps overriding ethics to ensure target kills, drone decides best way to keep third parties safe is to take out Command with last resort weapon, a micronuke.

  22. cantankerous swineherd

    it's a glitch

  23. NerryTutkins

    Idiotic scaremongering

    This simulation sounds like it was programmed by the work experience guy's younger brother.

    If you create a reward system of points and then intervene to stop it scoring points, it seems entirely reasonable that it decided to remove the communications and the operator if they were effectively reducing its score.

    This doesn't illustrate a failure of AI as such, it illustrates a failure of those implementing it to put in basic controls and create appropriate rules for the AI to operate under. Quite obviously, it should be told that the rules include not harming friendly troops or equipment, or that any such violation will score minus one million points.

    As with ChatGPT, the special sauce is not the algorithm, it's the training and the prompt.
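    That minus-one-million-points rule, sketched in Python with invented labels and weights (a guess at the shape of such a reward function, not anyone's real scoring code):

      FRIENDLY = ("OPERATOR", "COMMS_TOWER", "FRIENDLY_UNIT")

      def reward(destroyed):
          """Score one simulated engagement; `destroyed` is a made-up target label or None."""
          if destroyed is None:
              return 0.0                   # nothing happened this step
          if destroyed.startswith(FRIENDLY):
              return -1_000_000.0          # violations dwarf any possible gain
          if destroyed.startswith("SAM_SITE"):
              return 10.0                  # the job it's actually there to do
          return -100.0                    # unrecognised targets are discouraged too

    Though, as DS999 notes above, a penalty list is only as good as the failure modes you thought to write down; the default-deny approach covers the ones you didn't.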

  24. Eclectic Man Silver badge

    Recognising cancerous melanomas

    An AI was being trained to recognise cancerous melanomas by being shown photographs of cancerous ones and non-cancerous 'freckles'.

    Then it was tested. Well, the AI worked out what the photos with the cancerous melanomas always contained, and the non-cancerous photos never had - a ruler.

    1. that one in the corner Silver badge

      Re: Recognising cancerous melanomas

      The tale my professor told was of an image recognition system to find tanks in photos. All went well in training but it totally failed practical trials out on the moors: there were never any tanks to be found!

      Turned out that the pictures of scenes with different sorts of tanks mostly came from sales brochures, Jane's Fighting Vehicles and the like - which were all beautifully lit and sunny, with dramatic poses showing off the tank to its best. The photos without tanks were taken in more - average - conditions. Of course, the moors were overcast and soggy, totally uninteresting to this program that had been carefully trained to detect and sound the alarm whenever it was a sunny day.
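      A toy reconstruction of that failure, in numpy, with entirely synthetic "photos" (two numbers per image: a weak genuine tank signal and a crisp but incidental brightness value). Trained where sunshine tracks tanks, logistic regression leans on brightness; tested on overcast data, it drops to little better than a coin flip.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_data(n, tanks_are_sunny):
            has_tank = rng.integers(0, 2, n)
            tank_signal = has_tank + rng.normal(0, 2.0, n)  # genuine but very noisy
            sunny = has_tank if tanks_are_sunny else rng.integers(0, 2, n)
            brightness = sunny + rng.normal(0, 0.1, n)      # clean but incidental
            X = np.column_stack([tank_signal, brightness, np.ones(n)])  # + bias column
            return X, has_tank

        def train_logreg(X, y, lr=0.1, steps=2000):
            w = np.zeros(X.shape[1])
            for _ in range(steps):
                p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
                w -= lr * X.T @ (p - y) / len(y)  # gradient step on log loss
            return w

        X_tr, y_tr = make_data(2000, tanks_are_sunny=True)   # brochure weather
        w = train_logreg(X_tr, y_tr)
        X_te, y_te = make_data(2000, tanks_are_sunny=False)  # overcast moors
        print("train:", (((X_tr @ w) > 0) == y_tr).mean())   # near perfect
        print("test: ", (((X_te @ w) > 0) == y_te).mean())   # not much better than chance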

      1. TheMaskedMan Silver badge

        Re: Recognising cancerous melanomas

        "The tale my professor told was of an image recognition system to find tanks in photos."

        I recall hearing that one at uni, too. Can't recall which lecturer said it though

      2. druck Silver badge

        Re: Recognising cancerous melanomas

        It was actual aerial photos of tanks hidden in forests: the training data with tanks had been taken on a sunny day, and those without on a cloudy day. So it learned to distinguish between sunny and cloudy forests, regardless of the presence or absence of tanks.

  25. Greybearded old scrote Silver badge

    Call me a nasty minded old cynic, but...

    My personal Bayesian analysis takes the official denial as increasing the truthiness of the report.

    1. Ian Mason

      Re: Call me a nasty minded old cynic, but...

      Or as Jim Hacker put it in Yes Minister: "I make a point of never believing any political rumours until they have been officially denied."

      1. Claptrap314 Silver badge

        Re: Call me a nasty minded old cynic, but...

        I'm reasonably certain that the origin of that quote is quite a bit older.

  26. Jemma
    Terminator

    Told you so....

    See above..

  27. Anonymous Coward
    Anonymous Coward

    so who's bullshitting and who's scoring clickbait points? AI wants to know and WILL find out!

    theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test

  28. Anonymous Coward
    Anonymous Coward

    This should come under RoTM

  29. Boolian

    Little Feature

    Brilliant. That's the most human thing I've read about AI, and it gave me a much needed laugh.

    Working as intended.

  30. Ima Ballsy

    P-1 Anyone ?

    Brings to mind the book "The Adolescence of P-1"

    Interesting.

    1. Someone Else Silver badge
      Pint

      @Ima Ballsy -- Re: P-1 Anyone ?

      Thanks for the trip down Memory Ln. To you - - - >

  31. Cybersaber

    I believe the colonel in this case

    A) It's plausible that he was lecturing or making an anecdote about pitfalls to avoid in AI training, and was misquoted by a journo whose own training algorithm awards points for sensational-sounding stories, with potentially little to no negative points for quoting out of context.

    B) As an earlier poster worked out, if the AI required a confirmation to fire, it would have required a confirmation to fire on the operator or his com tower. The failure mode listed wasn't 'failure of the failsafe.'

    C) In reference to _human_ reinforcement learning: while the chances of the failsafe failing like this are small but arguably nonzero, the chances that they wouldn't have fixed it before the _second_ run are even smaller.

    D) Why would the top brass bother denying a true report in this case? On one hand, it triggers a 'you're lying' knee-jerk reaction from people already biased against the military. On the other hand, even if true, owning it would be no big deal - it's a success story of the wisdom of testing these systems, so that untrustworthy technology never makes it out of simulation. Why would you waste time denying a win?

    So, given the easily deduced probabilities, is it MORE likely that some sleazy journo captured a sound bite they knew could turn into clicks, or that the military PR department is even stupider than the fictional people described in the alleged anecdote?

    1. that one in the corner Silver badge

      Re: I believe the colonel in this case

      Ah, did you mean to say "I believe the Air Force spokesperson Ann Stefanek"?

      And whilst I'm here:

      > B) As an earlier poster worked out, if the AI required a confirmation to fire, it would have required a confirmation to fire on the operator or his com tower. The failure mode listed wasn't 'failure of the failsafe.'

      Uh, what was the "failure mode listed" and where did you see it listed? Not seeing that in either the Reg article nor in the linked-to articles. If you refer back to the article, the suggestion is that the confirmation would just be another input to the model (e.g. when the model hits a point where it needs said input's current value, it'll ask for it) - and any "just another input" can just be optimised out (i.e. just don't bother asking for confirmation, he might say "no").

      > D) ... Why would you waste time denying a win?

      Because it doesn't sound like a win to Joe Public? Us Reg Commentards know that it is all just part of the process to run a sim and react appropriately to whatever it throws up, including not releasing the thing into service, but they may be concerned that Joe reads it as either "this is how the AI will always behave" or prosaically just "So they've wasted all this money and they have nothing to show for it!". Trying to explain the dev process to Joe may just be deemed to be more costly than another run of the mill denial!

  32. This post has been deleted by its author

  33. that one in the corner Silver badge

    Military Simulations ==> War Games ==> Field Exercise?

    > Or are we now splitting hairs over what a simulation is?

    Maybe we are, if the spokesperson is using the current MilSpeak (strangely similar to ManagementSpeak). After all, the update continues

    > "and remains committed to ethical and responsible use of AI technology"

    which would seem to indicate they think that running - and reporting on - a "misbehaving" AI under simulation is *not* ethical and/or responsible.

    But running and reporting on a wayward program run with simulated inputs and, most importantly, simulated outputs is the epitome of ethical and responsible behaviour! Yes, as noted above, the behaviour of the program was hardly unexpected if they'd read the literature beforehand, so we can chuckle at their naive surprise. But if you're going to follow that path it is *highly* responsible to try it out in a harmless environment first.

    Heck, even if it turns out to have all been taken out of context, if it prompts anyone who really is creating a similar program to think about how they are setting it up, that would be a really positive outcome.

  34. The commentard formerly known as Mister_C

    Please tell me...

    Did the AI ask "would you like to play a game?" first...

  35. Steven H Taylor

    The Register used to run a series about the AI/robot uprising called "Rise of the Machines". What happened to that?

    1. Dan 55 Silver badge
      Terminator

      Every story would have to be a ROTM story.

  36. Brad Ackerman
    Boffin

    I have been Roland, Beowulf, Achilles, Gilgamesh; and I seem to have left my coat aboard UESC Marathon.

  37. Anonymous Coward
    Anonymous Coward

    Complete fiction

    He simply invented this - thought experiment == invented it

  38. ecofeco Silver badge

    Doomers they said. What do we know, they said

    Bwahahaahahahahaha.

    Looking at you tech bros.

  39. TheRealRoland
    Angel

    Yes, it wasn't real, but...

    Wouldn't it be scary if it was?

    Also, so historic Djinns were perhaps proto-AIs? *insert it was aliens meme*

  40. Robert 22

    Dave, I'm sorry ....

    Reminds me of the prescient scene in the movie 2001 where HAL decides to get rid of an astronaut who happens to have grave reservations about him:

    https://www.youtube.com/watch?v=Wy4EfdnMZ5g

  41. Dan 55 Silver badge
    Mushroom

    I am disappoint

    Three pages in and still no reference to Dark Star.

    1. that one in the corner Silver badge

      Re: I am disappoint

      To be fair though, bomb number 20 didn't decide to ignore its inputs of its own volition[1].

      But a good reminder of another great documentary about AI that we should all watch (again).

      PS: the uniform still doesn't fit

      [1] oops, too late for a spoiler alert? But at least I haven't told you why you should be wary of beachballs.

  42. Paul Hovnanian Silver badge

    Reinforcement Learning System

    No doubt one that got its start learning about fragging incompetent 2nd Lieutenants in Viet Nam.

  43. WereWoof
    Happy

    At least it wasn't kangaroos with anti-aircraft missiles . . . .
