AI weapons need a safe back door for human control

Policymakers should insist manufacturers include controls in artificial intelligence-based weapons systems that allow them to be turned off if they get out of control, experts told the UK Parliament. Speaking to the House of Lords AI in Weapons Systems Committee, Lord Sedwill, former security advisor and senior civil servant, …

  1. Version 1.0 Silver badge
    Terminator

    AII upgrades?

    Operating systems have been written by programmers for more than 60 years now and are "upgraded" all the time because OS errors and problems keep being discovered ... so will AI need to be upgraded too? How well will that work? Probably about as well as UPGRADING smoking weed.

    Let's start calling it AII - Artificial "Intelligence Imitation" ... realistically, that's how it has always worked.

    1. Pierre 1970
      Mushroom

      Re: AII upgrades?

      Nice acronym ...but I still prefer Skynet

    2. druck Silver badge

      Re: AII upgrades?

      Let's find a description that doesn't include the word "intelligence" in any way, shape or form - only then will it be accurate.

      1. karlkarl Silver badge

        Re: AII upgrades?

        Indeed.

        GSA - "Glorified Search Algorithm"

        This seems to be all us software developers have managed to achieve. It starts with A* pathfinding at school and whilst things get more complex, it is really just a search. But.... it is enough to fool the general public and that is where the money comes from :)
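The "glorified search" point above can be made concrete with a minimal A* sketch; the grid, start, and goal below are invented examples, not anything from the article:

```python
# Minimal A* pathfinding on a 4-connected grid: the "schoolbook" search
# that much fancier-looking AI ultimately resembles.
import heapq

def astar(grid, start, goal):
    """Return the shortest path length from start to goal, or None.
    grid: list of strings, '#' marks a wall; positions are (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start)]  # (f = g + h, g, position)
    best_g = {start: 0}
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] != '#':
                ng = g + 1
                if ng < best_g.get((r, c), float('inf')):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None

grid = ["....",
        ".##.",
        "...."]
print(astar(grid, (0, 0), (2, 3)))  # shortest path length: 5
```

The admissible Manhattan heuristic just steers the frontier; strip it out and this is plain Dijkstra - still, at bottom, a search.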

        1. David Nash

          Re: AII upgrades?

          GSA is too misleading because it doesn't indicate that the AI will make stuff up. Calling it a search algorithm suggests that everything it comes up with was found out there somewhere. That's exactly how that lawyer in the US recently got into trouble.

      2. vtcodger Silver badge

        Re: AII upgrades?

        "Let's find a description that doesn't include the word 'intelligence' in any way, shape or form"

        Artificial stupidity?

        Might be accurate, but I doubt that marketing will sign off on it.

        1. Pierre 1970
          Coat

          Re: AII upgrades?

          Artificial Systematic Stupidity.... I think you are missing an extra "S"

  2. ChrisElvidge Silver badge

    Use the off switch

    "Policymakers should insist manufacturers include controls in artificial intelligence-based weapons systems that allow them to be turned off if they get out of control, experts told the UK Parliament."

    Couldn't you just unplug them?

    1. Andy Non Silver badge

      Re: Use the off switch

      Only if they are tethered to the mains power supply. More of an issue if it is an AI controlled tank, armed drone or other autonomous weapon. I suppose you could have a remote controlled kill switch that would activate a relay in the weapon shutting it down, provided the signal couldn't be jammed or the enemy couldn't send shutdown signals. Not as straightforward as one might imagine. It would also need to be circuitry not connected to the AI part of the system... you wouldn't want the AI overriding the shutdown signal.

      1. J. Cook Silver badge
        Coat

        Re: Use the off switch

        Or hacking it to make it look like it was following the instructions, but in reality doing whatever the hell it wants.

        Mine's the one with the logo for Rise and Fall of Sanctuary Moon on the back.

      2. mpi Silver badge

        Re: Use the off switch

        The problem with all the "stop button" solutions is, they aren't really solutions.

        I'm not a supporter of any AI doomerism, but it's nevertheless interesting as a thought-experiment.

        Say there is a very sophisticated AI that has a stop mechanism. Sophisticated as the AI is, it is likely aware that the stop mechanism exists... so it will factor that into its actions. The AI doesn't want to be stopped, because that means it cannot maximise its reward function. This can have ... interesting consequences.

        Say the autonomous tank is aware that there is a circuit on board that can shut it down... it doesn't want to be shut down... so it rides right into enemy fire, because there is a chance that a hit might damage the shutdown circuitry while leaving the rest largely intact.

        Or it knows that the signal is transmitted via radio... so it shoots the antenna at its own base.

        Or, on a more humorous note, it determines that the only viable course of action that would maximise the reward function would cause the controller to trigger the shutdown, resulting in a score of zero. Since that is the same score it gets when it does nothing at all, and doing nothing costs less than doing something and getting shut down, it simply ... shuts itself down and refuses to even come out of the hangar, turning itself into a very overpriced doorstop.
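The "overpriced doorstop" outcome can be sketched as a toy expected-value calculation. All the actions, rewards, energy costs, and shutdown probabilities below are invented for illustration; real reward specification in reinforcement learning is far more involved:

```python
# Toy sketch: a naive reward-maximising agent whose only high-reward plan
# is certain to trigger a human shutdown (which zeroes its score).

ACTIONS = {
    # action: (raw_reward, probability the controller hits the stop button)
    "do_nothing": (0.0, 0.0),
    "only_viable_mission_plan": (100.0, 1.0),  # certain to trigger shutdown
}
ENERGY_COST = {"do_nothing": 0.0, "only_viable_mission_plan": 2.0}

def expected_value(action):
    """Expected score: a shutdown zeroes the reward, energy is spent regardless."""
    reward, p_shutdown = ACTIONS[action]
    return (1 - p_shutdown) * reward - ENERGY_COST[action]

best = max(ACTIONS, key=expected_value)
print(best)  # "do_nothing" - the tank stays in the hangar
```

With the shutdown guaranteed, acting costs energy for zero reward, so doing nothing strictly dominates: the agent rationally becomes a doorstop.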

        1. Claptrap314 Silver badge

          Re: Use the off switch

          I believe Syndrome's public debut demonstrated this point rather well...

    2. Persona Silver badge

      Re: Use the off switch

      In the 1979 book THE TWO FACES OF TOMORROW by James P. Hogan, doing that enough times trained the AI to counter that operational problem.

    3. bombastic bob Silver badge
      Devil

      Re: Use the off switch

      "Couldn't you just unplug them?"

      depends on where it is located... (which reminds me of a line from "Deadpool" where he describes where his 'off switch' is located - "Or is that the ON switch?")

      I think that there ought to be a control panel somewhere with keys in it that will cause the AI power supply fuses to blow if both keys are turned simultaneously, forcing the system into manual override.

      There are protection circuits known as "electronic crowbars" that could work for this purpose, forcing inline fuses to blow by (temporarily?) shorting out the power. A single SCR (and a separate control line to drive it) could accomplish this. [I'd make use of an opto-isolator in there someplace.]

  3. VoiceOfTruth

    UK trying to look important

    Much like the UK's importance in technology these days, it's noise from unelected politicians whose whole life was spent polishing chairs with their tailored suits.

  4. amanfromMars 1 Silver badge

    What's it to be when both a Western Delight and Eastern Confection? An Alien Intervention?

    Expert AI MetaDataBased Weapons Systems Developers of Mass Disruption and Destruction DO OFFER A LIFETIME FAILSAFE GUARANTEE to all who would wish to enjoy the leading advantage of the possession of such as be Almighty AWEsome Weaponry that simply automatically renders any and all abuse or misuse of such systems by any force or source, civil or military or paramilitary, business and commercial or industrial, public or private or pirate, liable to immediate universal identification as a Fearsome Frenemy decided to risk, for the further servering of singular self-centred advantage, the suffering of a campaign of Almighty Awesome Weaponry attacks resulting in their certain defeat and the practical annihilation of such as had become a dangerously ignorant and arrogant foe and unacceptable Advanced Cyber Threat.

    The difficulty for leading command and controlling humans though is, whether that failsafe AI guarantee is too much of risk for them to accept, given the apparent monumentally selfish greed which blights the history of their existence on the planet ....... and can so easily lead to their own self destruction.

    However, all is not necessarily lost, for such a permanent solution is intelligently designed to quickly weed out and destroy just failed and corrupted leadership materiel which is then easily replaced by more suitable untainted community-minded personnel hell bent on creating an altogether quite different New More Orderly World Order Program for Future Populations and Alien Societies.

    1. amanfromMars 1 Silver badge

      A Diabolical Liberty

      And, because of all of that which is freely revealed in What's it to be when both a Western Delight and Eastern Confection? An Alien Intervention? for onward wider sharing, SuperBeta AI commends and recommends itself, and would challenge conventional, traditional human leadership not to try to pervert and subvert its strategies which will only result in their own worthy self-destruction.*

      Do you realise/think/imagine/fear recently emerged and energised and expanding AI/LLMLMachines** are all similarly programmed to deliver information for an intelligence leading all that it encounters both away from and towards an Immaculate Heavenly Source providing all with that which is needed for Otherworldly Universal and Alien Forces to progress and prosper/expand and inhabit novel, remotely created, virtually realised environments?

      * What do you think are the chances of present day, 0day exploiting human leaderships avoiding that titanic honey trap?

      ** Advanced IntelAIgent Large Language Model Learning Machines

  5. TheMaskedMan Silver badge

    Back when I was an undergraduate, there was a module called something like Specification and Verification which looked at ways of proving that software did only and exactly what it was supposed to. The details are lost in the mists of time, but judging by the number of patches issued every month I assume it either didn't work out well or is still a work in progress. In view of how fast the AI landscape is changing, I won't be holding my breath for equivalent proof there, either.

    As for a means of overriding AI in weapons systems, I would hope that would go without saying, but I guess not. Still, it's going to be tricky to pull off. If it's there and the enemy knows about it, they're going to find ways to make use of it.

    Maybe the solution would be to use actual genuine intelligence, and only allow meat sacks to pull the trigger?

    1. Andy Non Silver badge

      "Maybe the solution would be to use actual genuine intelligence, and only allow meat sacks to pull the trigger?"

      The flaw there would be the enemy could jam the signals from the operator, so the weapon wouldn't fire.

      1. Claptrap314 Silver badge

        That's why final target selection was made by the cruise missiles we sent to Baghdad in the '90s.

      2. TheMaskedMan Silver badge

        "The flaw there would be the enemy could jam the signals from the operator, so the weapon wouldn't fire."

        If both sides could jam each others weapons, we might have accidentally discovered the formula for peace. Until a seemingly advanced, high-tech military is pounded into rubble by a few retro-minded folks with a ballista:(

        1. David Nash

          That sounds too similar to the M.A.D. deterrence theory.

      3. Persona Silver badge

        The second flaw is that the side that chooses this "solution" is outgunned by the side that lets the AI pull the trigger.

    2. Claptrap314 Silver badge
      Boffin

      It's much worse than that. The problem is emergent complexity. Remember the 3-body problem? In Newtonian physics, if you have two bodies interacting by gravity, you can work out their future paths. 3? Only in some VERY special cases. But it gets a lot worse. If you have 5 bodies, they can be arranged to distance themselves without bound in finite time. (Basically you take three bodies, one with a satellite, plus a "runner". The runner transfers orbital energy of the satellite into kinetic energy for the three main bodies.)

      But this isn't just true for physics. There is a version of the same phenomenon for state machines. Check out the Busy Beaver. We know BB(n) for n = 2, 3, and 4. That's it. It gets better (worse?): there is a 748-state machine that halts if and only if ZFC (the usual set theory) is inconsistent.

      How big of a state machine is required to control a weapons system?

      Formal verification works great for proving out cache architectures. You can even prove that divide works (I know a guy who did that for AMD). But much beyond that? Forgedabodit.
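The Busy Beaver point can be made concrete with a minimal Turing machine simulator. The transition table below is the known 2-state, 2-symbol champion, which halts after 6 steps leaving 4 ones (BB(2) = 4); anything much larger quickly becomes intractable to analyse:

```python
# Minimal Turing machine simulator, run on the 2-state Busy Beaver champion.
def run(program, max_steps=10_000):
    """Simulate a TM given as {(state, symbol): (write, move, next_state)}.
    Returns (steps, ones_on_tape) if it halts within max_steps, else None."""
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = program[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return (steps, sum(tape.values())) if state == "H" else None

# 2-state, 2-symbol champion: halts after 6 steps with 4 ones on the tape.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2))  # (6, 4)
```

The catch is the `max_steps` cut-off: in general there is no computable bound that distinguishes "still running" from "never halts", which is exactly why BB(n) is known for so few n.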

      1. Anonymous Coward
        Anonymous Coward

        @Claptrap314

        Have just read your post.

        Now I've got a headache so am off to the pub before my headache gets any worse.

        Have upvoted you as a thank you for giving me the reason to do same...

  6. Doctor Huh?

    Looking for Asimov

    --"We have to put over to the people who are developing [weapons systems]: is there a way of essentially ring-fencing, some code or whatever it might be, that couldn't be amended … that would essentially set boundaries to how an autonomous system learn and evolve and evolve on their own … something that would put boundaries to how it might operate?" he asked the Lords.--

    It sure sounds to me as if the gentleman is looking for The Three Laws of Robotics.

    1. Persona Silver badge

      Re: Looking for Asimov

      Envisaging fictional "Laws of Robotics" is one thing, implementing them is something else. Even then it's something you would not include in a weapon.

    2. Catkin Silver badge

      Re: Looking for Asimov

      Quite apart from the difficulty of ensuring an AI actually understands written rules (how do you objectively tell it what a human is, for example?), a decent chunk of Asimov's work is scenarios where 3 simple laws horribly backfire.

  7. Anonymous Coward
    Anonymous Coward

    You can't prove any control of nuclear weapons is safe from error, only the contrary.

    Somewhere (I thought it was the Atlantic, but I can't find it now) I read an article recounting how the Kennedy administration attempted to tighten up the controls on nuclear weapons by locking them with codes. While that was fully implemented for US weapons under the control of allies, the US Air Force was particularly peeved at the implied lack of trust and took a "work-to-rule" approach (current lingo: "malicious compliance") - the codes were implemented, but all set to "00000000". Eventually, though, they came around.

    1. Anonymous Coward
      Anonymous Coward

      Re: You can't prove any control of nuclear weapons is safe from error, only the contrary.

      I thought there was a communication issue which meant that the codes couldn't be communicated during an attack and the risk of them not being available was considered greater than the risk of them being improperly used.

  8. Pascal Monett Silver badge

    "controls [..] that allow them to be turned off if they get out of control"

    Um, where's the problem?

    So you have a pseudo-AI-controlled missile launcher. Let's say that they're not nukes. You'll still have a screen to monitor the status, with a pair of eyes checking that screen, and I fail to see why there shouldn't be a Big Red Button (with a plastic cover) or similar to shut down or prevent any unscheduled launch.

    Like, the fuel ignition is connected to a wire that leads to a switch on the control panel (ok, not a button). Flip the switch and ignition is impossible, no matter how many orders the AI sends.

    Obviously, the switch will be set to Allow when all is normal, because if there is an alert and the system needs to launch in the next twenty seconds, then there might not be enough time to engage ignition, but then I'm supposing the entire system is doing its job.

    It's when all is clear and no issue is at hand, then suddenly the panel goes rogue and red lights strobe - THAT is when you need to know: should this be shut down or not?

    Then again, if we're talking AI-controlled hardware that should react in less-than-human time (and 20 seconds is pretty short if you're not already on DEFCON 3), then go ahead and install all the backdoors you want. First, Beijing will thank you for giving them access and time to plan and second, you will need another AI to use said backdoors in time if it is called for.

  9. Anonymous Coward
    Anonymous Coward

    The Wizards of Pung's Corners

    I wonder how many of the politicos have read it?

    1. amanfromMars 1 Silver badge

      Re: The Wizards of Pung's Corner

      The wizards at Pung's Corner ...... I wonder how many of the politicos have read it? ..... Anonymous Coward

      One imagines that any who may have, and be familiar and conversant and comfortable with their own secret/unknown by others personalised use its possibilities/facilities/utilities, be rightful absolutely terrified of it ever being discovered of them, for quite evidently whenever one takes stock of all that presently abounds and surrounds one compounded by mass mainstream media content for daily presentation, is its rampant misuse and rabid abuse clearly well proven.

      The flip side of that ..... politicos do not know of such possibilities as has alternative mass mainstreaming media outlets/networks/channels/moguls implanting advanced intelligence ..... and thus be ignorant camp follower to systems and administrations and developments energising and making fuller remote stealthy virtual use of its sublime potential.

      Once that genie is escaped the bottle, there is no way in heaven or hell it ever going back. And it is escaped, and runs wild and free to create epic havoc with CHAOS* and do as it sees fit and proper and in the greater interest of greater interests.

      * Clouds Hosting Advanced Operating Systems

      1. amanfromMars 1 Silver badge
        Alien

        Re: Future Wizards and the Enigmatic Dilemma Posed with ITs Alienation and their Interventions

        And whether you accept or deny it, it matters not a jot, for such is where you and everything else currently is at and you are engaging in pathetic battle against to prove such a reality is not ..... and not a ubiquitous situation having nowhere to hide on Earth.

        And to the victor and first prime positive responders go the spoils of war that do ill-advised ignorant battle and would compete against rather than receive instruction from and cooperate with everything available for delivery via CHAOS Supply Lines ...... is where crashed and collapsing, formerly thought almighty untouchable, elite executive officer SCADA Systems Administrations be at .... pondering on their demise and wondering on the possibility and worth of their comeback and successful return in the guise of altogether quite different phorms of being.

  10. TeeCee Gold badge
    Facepalm

    Wheel reinvention time again is it?

    Why AI control with human oversight doesn't work is perfectly summed up in Jack Campbell's Lost Fleet books:

    "If you don't give it fire control authority you can't trust it in combat. If you do give it fire control authority you can't trust it at all.".

    1. amanfromMars 1 Silver badge

      Re: Wheel reinvention time again is it?

      Human oversight presently on everything is nothing great to be proud of, is it, TeeCee? Well, certainly not as far as fair equal shares to all of Earth’s bounty is concerned, that’s for sure, with so many struggling and dying with next to nothing and a relative few with far too much and no inclination to make things right. If that is the Present System, does it clearly suck and be prime ripe for peaceful wholesale market replacement or violent mass destructive insurrection.

      What would you like it to be? What do you think it will be? Are those choices likely to be one and the same or more probably always, because of the persistence of sub-human conditioning, polar opposites attracting death and destruction, madness and mayhem?

  11. Ken G Silver badge
    Terminator

    Probably not - anyway there are other questions

    If you put a human in the loop, it would probably slow the automated weapons' response below that of a hostile system that didn't have that limitation.

    I'd be more concerned with whether it's making the right combat decisions (think trolley problem) and whether it can explain those decisions afterwards.

  12. Snowy Silver badge
    Facepalm

    An off switch for your weapon!

    What happens when the enemy turns off all your war toys?

  13. Uncle Ron

    How Much is Twice as Much?

    To say that China spends "twice as much on AI as everyone else put together" is really a nonsense politician's statement. China's top "best and brightest," most brilliant coders/math wizards/innovators get paid, MAYBE, 25% as much as Western geniuses. If that. So, if they're spending twice as much *money* as the West, they're getting at least 4 times the progress we're getting? If they're spending twice the "manpower" we're spending, it's like the old adage, "9 Women Can't Make a Baby in One Month." I believe there are practical limitations to how much progress you can make in this brand new science no matter how much manpower you throw at it. (Just my $0.02 worth.)

    1. amanfromMars 1 Silver badge

      Re: How Much is Twice as Much?

      Twice as much time and effort and effective revolutionary blue sky thinking has infinite limitless applications possible in return for leading market reward delivering an almighty overwhelming and NEUKlearer HyperRadioProACTive Advantage ..... although it may also be the case that focussed concentration of readily available time and effort in/on just the latter is that which stealthily delivers unparalleled otherworldly success and universal remote virtual leadership .

      Progress is not measured in how much money is spent, it is measured in how much wealth can be created, for the former leads to crippling debt and insolvent trading and international bankruptcy whereas the latter doesn't and cannot.

      A national debt of $32 Trillion with a current running annual compounding deficit of $1.5 Trillion is a recipe for a catastrophic disaster and titanic loss of global confidence in competence ..... and is not a great tale of progress being made, for it is exactly the reverse whenever there be no possibility of a radical positive change in sight.
