Experts scoff at UK Lords' suggestion that AI could one day make battlefield decisions

Experts in technology law and software clashed with the UK House of Lords this week over whether it was technically possible to hand responsibility for battlefield decisions to AI-driven weapons. During the AI in Weapon Systems Committee hearing on Thursday, the Lords struggled to draw the experts into the idea that it might …

  1. Pascal Monett Silver badge

    Is it technically possible?

    Of course it is. We already have movement detectors and infrared detectors. All you need is to couple one or both with a machine gun, tell it where to point and fire.

    I'm pretty sure that's technically feasible.
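
    Something like this, in spirit; a toy sketch where the sensor and turret objects (motion_sensor, infrared_sensor, turret) are all hypothetical stand-ins, not anyone's actual fire-control code:

    import time

    def run_sentry(motion_sensor, infrared_sensor, turret):
        # Fire at anything that trips both detectors. No judgement involved.
        while True:
            if motion_sensor.triggered() and infrared_sensor.triggered():
                turret.aim(motion_sensor.bearing())  # tell it where to point...
                turret.fire()                        # ...and fire
            time.sleep(0.1)  # poll the detectors at 10 Hz

    There's no "should it?" anywhere in that loop.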

    Whether it is the right thing to do is altogether a different question.

    1. katrinab Silver badge
      Pirate

      Re: Is it technically possible?

      But if you have that instead of a human guard, you just need to wave some object at the sensor repeatedly until the machine gun exhausts its supply of bullets, then walk past it.

      1. Dabooka

        Re: Is it technically possible?

        Or hide under a box

        1. Anonymous Coward
          Anonymous Coward

          Re: Or hide under a box

          My preferred method of not being seen is behind the bush, mobile or otherwise.

          1. spold Silver badge

            Re: Or hide under a box

            How not to be seen https://www.youtube.com/watch?v=VokGd5zhGJ4

      2. Elongated Muskrat Silver badge
        Alien

        Re: Is it technically possible?

        Wait for the automated sentry gun to exhaust its ammunition and then attack the humans through the suspended ceiling?

    2. Elongated Muskrat Silver badge

      Re: Is it technically possible?

      It's technically feasible for an automated gun to shoot at a target; that isn't what is in question. Whether it is feasible to have the gun determine whether it *should* shoot the target is another matter. It is a perfectly reasonable argument that only the battlefield commander may know the context of that potential target, and whether the gun should shoot it. Even then, if all information were known, a gun can't make a moral or ethical judgement on whether to shoot. It's arguable that humans can't either; certainly not correctly every time, with the benefit of hindsight.

      1. jake Silver badge

        Re: Is it technically possible?

        My veggie garden's automated anti-deer sprinklers[0] are supposedly programmed to ignore human targets.

        I get soaked every now and then ... I'd sue the manufacturer, but he is me.

        Now ask me why I use water, not my preferred paintballs.

        [0] Individually aimed 1.5 inch, 60 GPM @ 100 psi impact sprinkler heads ... the sudden noise scares the deer off as much as the water.

  2. xyz Silver badge

    Mmmmm let me think...

    An AI or some bloke from Eton calling the shots... Choices, choices.

    1. katrinab Silver badge
      Childcatcher

      Re: Mmmmm let me think...

      AI has an IQ of 0. The bloke from Eton might possibly have an IQ of > 0. So I would take my chances with the bloke from Eton.

      1. steelpillow Silver badge
        Joke

        Re: Mmmmm let me think...

        Some AIs are claimed to have passed the Turing test. Does anybody claim as much for Old Etonians?

        1. katrinab Silver badge
          Alert

          Re: Mmmmm let me think...

          Well yes, they pass as that overconfident know-it-all bs merchant down the pub that we've all met.

          I think it demonstrates that the Turing Test isn't a suitable test of intelligence.

          1. jake Silver badge

            Re: Mmmmm let me think...

            "I think it demonstrates that the Turing Test isn't a suitable test of intelligence."

            That's hardly a great revelation, considering that Turing himself called it "the imitation game".

        2. jake Silver badge

          Re: Mmmmm let me think...

          Passing the so-called "Turing Test" is fairly easy. Any idiot can do it.

          What is difficult is having the ability to take the test in the first place.

          The machines, being built specifically for the purpose, should be capable of this.

          Old Etonians? Maybe not so much.

    2. Doctor Syntax Silver badge

      Re: Mmmmm let me think...

      Perception is all. I momentarily misread Eton.

      1. Aladdin Sane
        Coat

        Re: Mmmmm let me think...

        It's a mess

        1. Paul Crawford Silver badge

          Re: Mmmmm let me think...

          At least they don't have rifles. Yet.

  3. Spazturtle Silver badge

    The UK already operates weapons that use autonomous target selection such as the Brimstone missile.

    None of these 'experts' seem to actually be defence experts.

    1. Doctor Syntax Silver badge

      Wikipedia mentions avoiding collateral damage. Perhaps the experts have a point.

      1. Anonymous Coward
        Anonymous Coward

        Hmm. Counterpoint: mines.

        Or in general any latent killing device that will lie in wait forever. When you lay a mine, you take the decision to kill anyone who steps on it. What you do not have is (a) a means to determine who will get injured or killed (which could be one of your own) and (b) a means to undo this and disable the mine. Which means it could be moved elsewhere and kill on your behalf, and that needn't be deliberate (the broken dam in Ukraine washed away a lot of them, so they're now all over the place).

        Add an AI to this and you could make the mine slightly more discriminate. Or have it commit suicide when moved (although: would an AI understand the concept of self-termination, and could it decide not to comply?).

        It's not an easy topic IMHO so I'm not sure it should be let loose on politicians just yet.

        1. Elongated Muskrat Silver badge
          Coat

          I'm not sure we shouldn't let an automated kill decision loose on politicians. Oh wait, wrong chat...

        2. Vincent Ballard

          I'm not sure about your point (b). I distinctly remember from the book Bravo Two Zero that they placed a claymore anti-personnel mine as part of their defences at one point (possibly while they slept) and dug it back up and disarmed it when they moved on. The person who placed it was responsible for digging it up because they knew exactly how they'd placed it.

          1. Anonymous Coward
            Anonymous Coward

            That's a small team. In bigger wars, the minefields tend to become the legacy long after the wars themselves have stopped. Look at areas of the world where there have been long conflicts: if the country is poor, nobody helps the people who have to cope with what's left behind.

          2. Richard 12 Silver badge

            It's only possible to move a mine when there are very few of them, perhaps only one, and the team has plenty of time to do so.

            It's also a high-risk activity, because of what a mine is designed to do, so it has only ever been done when the mine itself is a valuable and irreplaceable asset.

            In the situation you mentioned, they only had one mine and could not get any more.

            It was also fictionalised; the event may never have actually happened (although the team probably considered it).

        3. MrDamage

          >> (although - would an AI understand the concept of self termination and could it decide not to comply?).

          if moved: send_alert()

          The AI doesn't have to know that the wire to the piezo speaker also sends a signal to the detonator.
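
          To labour the joke with a sketch (every name here is invented; this is the hardware's view of things, not the AI's):

          class Speaker:
              def beep(self):
                  print("beep: mine moved")

          class Detonator:
              def fire(self):
                  print("boom")

          class AlertLine:
              # What the AI believes it is driving: just a piezo speaker.
              def __init__(self):
                  self.speaker = Speaker()
                  self.detonator = Detonator()  # quietly spliced onto the same wire

              def signal(self):
                  self.speaker.beep()
                  self.detonator.fire()  # the AI never sees this branch, so it can't refuse

          class MineAI:
              def __init__(self, alert_line):
                  self.alert_line = alert_line

              def on_moved(self):
                  self.alert_line.signal()  # "if moved: send_alert()" is its whole worldview

          MineAI(AlertLine()).on_moved()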

    2. Peter2

      Technically, we've had autonomous target selection for a long while. The SMART 155 artillery shell is a good example of this.

      However, these are still fairly closely human-controlled, in that they are fired by a human at a particular killbox which is assumed to contain a large concentration of armoured vehicles, and they then blow up something in that killbox. The only thing Brimstone brings along that things like SMART 155 don't have is pattern matching to prioritise a customisable target list; for instance, try to hit artillery first, then tanks, then APCs, etc.
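
      As a rough illustration of what that prioritisation amounts to (names and numbers invented; the real seeker logic is rather more involved, and rather more classified):

      # Customisable target list, highest value first.
      PRIORITY = ["artillery", "tank", "apc"]

      def select_target(detections):
          # detections: (target_class, position) pairs from the pattern matcher,
          # all inside a killbox a human has already chosen to fire at.
          for wanted in PRIORITY:
              for target_class, position in detections:
                  if target_class == wanted:
                      return target_class, position
          return None  # nothing recognised; better to self-destruct than guess

      # Artillery outranks a tank even if the tank was spotted first.
      print(select_target([("tank", (3, 4)), ("artillery", (1, 2))]))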

      I think what the politicians are looking at is a Dalek which can choose what it kills. [And notably that didn't end well even in fiction; practically the first autonomous decision to kill something the Daleks made was to exterminate their creator Davros...]

    3. Jason Bloomberg Silver badge
      Mushroom

      The UK already operates weapons that use autonomous target selection such as the Brimstone missile.

      Replacing manned anti-aircraft and anti-missile batteries with machines which can do the job better than humans, with fewer consequences when taken out, is quite a long way from having computers replace generals and those making the high-level decisions on how wars are fought.

      There are definitely things 'AI' would be good for. For me it's a question of how much control and decision making we can or should surrender to 'AI'.

  4. alain williams Silver badge

    Look at Wagner troops

    Do they take care to distinguish right from wrong targets?

    Neither Putin nor Prigozhin cares if the wrong people are killed. Do you think they would not deploy battlefield AI just in case it made mistakes?

    1. lglethal Silver badge
      Trollface

      Re: Look at Wagner troops

      The Wagner troops do take care to distinguish the right targets from the wrong ones.

      It's just that, for them, the wrong targets to shoot are other Wagner soldiers. Everything else (including civilians, other Russian soldiers, Russian planes, etc.) is the right thing to shoot.

      It all comes down to perspective...

      1. jake Silver badge

        Re: Look at Wagner troops

        Part of Wagner's issue with Putin is that he's sending unprepared civilians in to do what they perceive as a job for professionals. Sending children in as cannon-fodder (as Putin is doing) is, in the minds of professional soldiers, a despicable thing to do.

        Yes, it all comes down to perspective.

        No, I'm not defending the Wagner thugs. Read what I actually wrote, not what you think I wrote.

  5. lglethal Silver badge
    Go

    I think the whole point of this is who carries the can for failures. If you take out the Soldier at the bottom and replace them with an AI, then the Operator of the AI, who gives permission to fire or not, is responsible.

    If you take out the Operator, the Commander who tells the AI where to focus (even if the Commander no longer needs to give permission for firing) is responsible.

    Take out the Commander, and let the AI decide where it needs to focus, and give it firing permission, and it will be the Politician who gave permission for the system to be utilised who will be responsible.

    Make that the system of delegation, and I guarantee you there will always be an Operator, as Politicians and Commanders (who are effectively just politicians of a military stripe) will NEVER want to be held responsible for mistakes. They will always need someone under them to blame...

    1. AVR Bronze badge

      You've missed out the possibility of some organisational failure where no real person is the one left holding the bag. It's certainly known outside the military; I'd be surprised if it were entirely unknown in the armed forces, for all their insistence on the chain of command.

  6. Howard Sway Silver badge

    What about AI camouflage?

    Surely the race to develop AI targeting will spawn a similar race to develop AI camouflaging. So, if you're being invaded by the Duchy of Grand Fenwick, strap big photos of Grand Fenwick tanks flying their flag to the front of your tanks, and they won't shoot at you because they'll have been trained to avoid shooting at their own side.

    Conversely, putting up lots of photos of your own tanks and aircraft on billboards all over the place will get the Fenwickian AI firing at these decoys, wasting ammo and luring them into your trap.

    1. that one in the corner Silver badge

      Re: What about AI camouflage?

      I believe we have a recording of the last time that sort of subterfuge was attempted:

      Henry Crun: It's much too dark to see, strike a light.

      Seagoon: Not allowed in blackout.

      Minnie Bannister: Strike a dark light.

      Seagoon: No madam! Madam we daren't. Why, only twenty eight miles across the Channel the Germans are watching this coast.

      Henry Crun: Don't you be a silly pilly policeman.

      Minnie Bannister: Bravo Henry.

      Henry Crun: Pittle Poo.

      Minnie Bannister: Pittle Poo. They can't see a match being struck.

      Seagoon: Oh, all right.

      FX: [Striking match - bomb whistle - explosion]

      Seagoon: Any questions?

      Henry Crun: Yes, where are my legs?

      Minnie Bannister: Where are my legs?

      Seagoon: Now are you aware of the danger of German long range guns?

      Henry Crun: Mnk ahh I have it! I've got it, I've got the answer. Just by chance I happen to have on me a box of German matches.

      Seagoon: Wonderful! Strike one. Ha, they won't dare fire at their own matches.

      Henry Crun: Of course not. Now...

      FX: [Striking match - bomb whistle - explosion]

      Henry Crun: ...Curse... The British, the British!!!

    2. cookieMonster
      Joke

      Re: What about AI camouflage?

      Baldrick, is that you??

    3. Roj Blake Silver badge

      Re: What about AI camouflage?

      Not relevant, as Fenwick famously only launches invasions when the enemy are all in their shelters for an air-raid drill.

  7. Anonymous Coward
    Anonymous Coward

    In other news

    Common serfs scoff at UK Lords' suggestion that they could one day make good decisions.

  8. Boolian

    AM I AI

    Artificial Military Intelligence or Artificial Intelligence?

    There's an oxymoron in there somewhere, or at the very least, a moron.

    I'll assume AI will scrape the back catalogue of military tactics throughout the ages; in which case it will be a very short scrape, and very simple maths to code, as a zero is all that's required to sum.

  9. Arthur the cat Silver badge

    Of course you can

    … put an AI in charge of a battlefield.

    In much the same way as you can put a pyromaniac toddler in charge of a fireworks factory.

    Whether it's a good idea is an entirely different question.

  10. Anonymous Coward
    Anonymous Coward

    decisions, and not-decisions

    Even if the AI is not used to make Real Decisions (tm), it might still be used to process, filter, enhance, or select from available data, or obtain new data according to some AI process. This will be presented to the official Decision-Maker, thus biasing (for good or bad) the so-called Real Decision. If the AI's "not-a-decision" might be to label some feature as (e.g.) a probable weapon system (or not), should we really pretend that that is not a decision of consequence?

    It is all very well claiming/asserting/deciding that an AI will never "pull the trigger", as it were, but any AI presence in the decision-making environment is part of the decision-making, and might be able to strongly bias outcomes, despite perhaps only appearing to be some kind of useful minor assistant.
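
    A toy version of that worry, with everything invented for illustration: the human makes the Real Decision (tm), but only from the menu the AI chose to serve up.

    def ai_filter(tracks):
        # The AI's "not-a-decision": label contacts before any human looks,
        # and quietly drop whatever it labels as clutter.
        labelled = [(t, "probable weapon system" if t["speed"] > 200 else "clutter")
                    for t in tracks]
        return [(t, label) for (t, label) in labelled if label != "clutter"]

    def human_decision(presented):
        # The official Decision-Maker, choosing only from what was presented.
        return [t for (t, label) in presented if label == "probable weapon system"]

    tracks = [{"id": 1, "speed": 250}, {"id": 2, "speed": 40}]
    print(human_decision(ai_filter(tracks)))  # the outcome was shaped upstream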

  11. mark l 2 Silver badge

    It's probably not a good idea to have an AI in charge of firing weapons: although individual soldiers could be turned by the enemy to become double agents, if your military AI is compromised it could be used to launch massive strikes against your own troops, potentially in many different locations simultaneously.

  12. amanfromMars 1 Silver badge

    RAF Clubbing AIMaster Pilots Swoop to Politically Incorrect GCHQ and Army and Navy Rescue

    Pioneering High Fliers get to Share the Much Bigger Epic Pictures showing All the Truth that Both Reveals and Destroys a Great Wall of Deceitful Lies and Self-Servicing Obfuscation in the Employ of Arrogant Ignorance .... A Totally Unnecessary Present Evil and Clear Existential Danger Threatening the Evolution of Humanity.

    With El Reg leading the Fantastic Light Way, Biting the hand which feeds IT popular politically incorrect and unpalatable bullshit and chicken feed ....... Evil or enlightened, an Almighty AWEsome Development where the Devil is in the Details* .... https://forums.theregister.com/forum/all/2023/06/25/ai_rogue_paper_comment/#c_4688592

    I Kid U Not.

    What new exciting news did you hear about today? Anything fundamentally different from yesterday to give tomorrow every chance of being Earth shattering/groundbreaking ....... and with you invited to take part and play an ACTive role in ?

    I'm betting none, and that is your norm and the current petrified Globalised SCADA Systems Administrative Default ..... More of the Same for More of the Same ..... :-) and they Spin it to IT as Progress.

    Strewth ....... It is almost as if they are not capable of joined up future thinking.

  13. StrangerHereMyself Silver badge

    Inevitable

    I believe it is inevitable that AI will eventually make battlefield decisions, since time to act will increasingly become critical in bringing a battle to a successful outcome.

    So I'm going to row against the flow and side with the MPs on this one.

  14. TheBadja

    Of course it is technically possible

    If a vehicle can tell whether that's a pedestrian crossing the road, then a vehicle can tell whether it's a pedestrian it should shoot.

    1. Anonymous Coward
      Anonymous Coward

      Re: Of course it is technically possible

      A vehicle isn't supposed to decide whether a pedestrian is a good pedestrian or a bad pedestrian; it's supposed to avoid all pedestrians. Seeing how tricky even that can be, working out whether it's a bad pedestrian it's about to plough into is an order of magnitude harder. The only way to always get it right is to avoid all pedestrians, which is not very useful in the case of a war machine.
