'Slow AI' needed to stop autonomous weapons making humans worse

A public evidence session about artificial intelligence in weapons systems in the UK's House of Lords last week raised concerns over a piece of the AI puzzle that is rarely addressed: the effect of the technology on humans themselves, and specifically its potential to degrade our own mental competence and decision-making. The …

  1. bo111

    Issues to focus on for "AI dev pause"

    Computational AI has lots of benefits. If protection is needed, these are the areas to focus on:

    - Robotics. Robots should be built in such a way that they cannot physically harm people. Not too strong. Or, if strong, then without legs or wheels. Unable to self-assemble. Basically, incapacitate them in some respect. Keep them separate from one another. Make them only partially functional.

    - Limit communication channels and bandwidth. This one is already big in relation to malware.

    - Authenticate institutions and companies properly. It is now common for organizations to call from unidentifiable phone numbers. DEAR REGULATORS, please force all phone numbers used by public and private organizations to be identifiable, or at least cross-checkable on their websites. The same goes for non-private email senders and all of their sending domains. All messages from organizations should be cross-checkable on their official websites, at least via some hash codes or PINs (a rough sketch of that idea follows below).
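    A minimal sketch of that last cross-checking idea, assuming the organization publishes a plain list of message digests on its official website (the example message, the digest list and the function names below are purely illustrative, not any real service):

    import hashlib

    def sha256_hex(text: str) -> str:
        # Hex-encoded SHA-256 digest of a message.
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    # Hypothetical: the organization publishes the digest of every message it
    # sends on its official website; the recipient recomputes and compares.
    published_digests = {sha256_hex(m) for m in [
        "Your appointment on 12 May has been rescheduled.",  # example only
    ]}

    def message_is_verifiable(received: str) -> bool:
        # True only if the received message's digest appears on the published list.
        return sha256_hex(received) in published_digests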

    1. DJV Silver badge

      Re: Issues to focus on for "AI dev pause"

      Absolutely!

      I've just had to chastise part of the local NHS about their use of withheld numbers when phoning. In an effort to reduce the number of spam/scam calls, I have blocked any number that doesn't reveal its CLI. Yet, the NHS still regularly does this.

      1. bo111

        Authentication asymmetry

        It is puzzling that organizations contact you without authentication.

        But if you contact them, they demand that you prove who you are.

        1. DJV Silver badge

          Re: Authentication asymmetry

          Yes, I had this a couple of years ago with my $PENSION_COMPANY. I'd had an arranged online meeting to discuss pension arrangements with $COMPANY_PERSON1 which all went ok. $COMPANY_PERSON1 didn't indicate that I would be getting a follow-up call regarding how the meeting went.

          Then, a few days later, I had a call on my mobile phone with the number withheld (alarm bell 1 goes off) from someone (let's call her $COMPANY_PERSON2) who claimed to be from $PENSION_COMPANY. She wanted to talk to me about my "recent contact" (very vague - alarm bell 2 goes off) with the company. She then asked me to provide answers to security questions. I refused and asked her to prove that she really was from $PENSION_COMPANY, and why she was calling from a withheld number when that is now extremely frowned upon, if not actually illegal. I thought it reasonable to ask her to provide me with either one of my policy numbers or some digits (and their positions) from one of those numbers. She refused, saying it was personal information, and, after getting in a bit of a strop about my refusal to do what SHE wanted, eventually hung up on me.

          I immediately contacted $COMPANY_PERSON1 and told her about my experience. She agreed that it sounded very suspicious and asked if I wanted to officially report it, which I agreed to. She took the full details and said I would be hearing from someone in a few days.

          A few days later I received a call from $COMPANY_PERSON3 from a number that was associated with $PENSION_COMPANY and, as he had details about the "rogue" call and other things that only someone from $PENSION_COMPANY should have possessed, I was happy to talk to him. He apologised, as it turned out that the "rogue" call HAD come from someone employed by $PENSION_COMPANY who was working from home but hadn't routed the call via $PENSION_COMPANY's normal phone network as she should have. We spent some time discussing ways in which $PENSION_COMPANY could improve their ability to prove their own identity when asked for it (mainly the same as I'd asked $COMPANY_PERSON2 to do, which he thought was a reasonable way of going about things).

          Then he asked, "Is £75 compensation for all the hassle ok?" Having not expected anything of the sort, I readily agreed. This was duly paid into my bank account a few days later and, also around the same time, I received a package containing a written apology along with 2 bottles of wine and a box of chocolates!

          So, I think the lesson there is, if you complain properly, you can actually get good results and a proper company will learn from its mistakes. I do wonder, though, what sort of reprimand $COMPANY_PERSON2 got - hopefully, it was some decent training!

    2. imanidiot Silver badge

      Re: Issues to focus on for "AI dev pause"

      Your first point about self-assembly and mobility is entirely moot. Any sufficiently advanced AI can figure out how to combine robots or let robots cooperate. Put a robot on a mobile chassis and suddenly it's mobile. Have robot B be assembled by robots C, D and E: none of them can self-assemble, but working together they can.

      1. bo111

        Re: Issues to focus on for "AI dev pause"

        > Your first point about self assembly and mobility is entirely moot

        This is a fair point and introduction to a discussion. I am proposing something specific, because I see many generic articles in "serious" papers without any details.

        Why details matter is demonstrated by my third point, about authentication. I am not sure it is easy to implement, but it is puzzling that it has not been implemented; instead we only get those nonsensical GDPR cookie warnings everywhere.

        Let's start with details. Maybe something good comes out.

    3. Curious

      Re: Issues to focus on for "AI dev pause"

      Vis-à-vis

      "CAN do attitude: How thieves steal cars using network bus"

      Break a robot tank's headlight and talk directly to the swarm.

      It seems we need a few decades of slowdown before we have decent tools and education for managing public/private certificates and permitted roles on all of these previously trusted internal subcomponents (i.e. communication signed with cert A can call device B with function C, versions D-E, parameters F, and accept response G); a toy sketch follows at the end of this comment.

      An LLM will be needed just to create the decision tree.

      The Reg regularly lists multinational corporations, humans and alert systems having very public struggles with basic cert rollover on top-tier, revenue-critical services.
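      As a toy Python sketch of that capability idea (every name, device and parameter here is made up for illustration; nothing is a real API):

      from dataclasses import dataclass

      # Toy model of "communication signed with cert A can call device B with
      # function C, versions D-E, parameters F".
      @dataclass(frozen=True)
      class Capability:
          cert_id: str
          device: str
          function: str
          min_version: int
          max_version: int
          allowed_params: frozenset

      granted = {
          Capability("cert-A", "device-B", "set_speed", 4, 7, frozenset({"rpm"})),
      }

      def call_permitted(cert_id, device, function, version, params) -> bool:
          # Allow the call only if some granted capability covers it exactly.
          return any(
              c.cert_id == cert_id and c.device == device
              and c.function == function
              and c.min_version <= version <= c.max_version
              and set(params) <= c.allowed_params
              for c in granted
          )

      # call_permitted("cert-A", "device-B", "set_speed", 5, {"rpm"})  -> True
      # call_permitted("cert-X", "device-B", "set_speed", 5, {"rpm"})  -> False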

  2. Simon Harris

    Government by AI

    I remember reading a novel quite a while ago where the process of government had become so automated and efficient that things were happening too fast and a government department existed solely to mess things up a bit and slow things down to a human speed again.

    I have a feeling it might have been one of the Niven/Pournelle collaborations, but without going through my book collection I can’t be sure. Can anyone remember the book?

    1. DJV Silver badge

      Re: Government by AI

      Ask ChatGPT - it has probably read it!

    2. Anonymous Coward
      Anonymous Coward

      Re: Government by AI

      Frank Herbert, apparently. Specifically Whipping Star and The Dosadi Experiment

      1. Simon Harris

        Re: Government by AI

        Ah, thanks - there’s a fair sized Frank Herbert section on my bookshelf too!

  3. Scott Broukell

    This - ". . the potential of AI systems to enhance our own worst qualities . ." - So, the plan is to feed all this lovely data (about humans, made by humans, yes the very same humans who are so well known for NEVER EVER having made any cluster-f*cking mistakes throughout our entire history), have humans input the data - and you are expecting what exactly to emerge out the other end?!

    Ever heard the phrase "Garbage IN - Garbage OUT"?

    All right, granted, we do (should) tend to learn, a bit, from our own mistakes. But expecting man-made AI and ML to magically conjure up the answers to all our self-made problems is utterly ludicrous!

    It is also a stretch to even believe that the same AI and ML can teach themselves and actually correct the errors of our ways, simply because we ourselves are at the very chuffing root of the whole shebang! And anyway, that would put AI and ML (and whoever is behind it/them) in effing charge of all of us!

  4. amanfromMars 1 Silver badge

    Food for Thought and to Choke Over

    If the potential of AI systems to enhance our own worst qualities as human beings, and the fact that this effect is the last thing you want in the military with things like ill-considered decision-making, misunderstandings amplified by shortened reaction times, tactical missteps and the odd blustering lie, as well as the escalation of hostilities before the usual diplomatic procedures have time to kick in is to be considered a valid case and cause to be launched for any termination or temporary suspension of AI development/activity/employment, such doesn't bode well for the future of supposed democratically elected politicians with their very well documented catalogues of tactical missteps and odd blustering lies/dodgy dossiers/phantom WMDs, does it.

    What do you want to do about that glaring abomination and continuing deepening malaise, apart from diddly squat and nothing, that is, because ....... well, any advance on the fact that you be no greater than convenient idiots in that case is most welcome for consideration/AI development?

    With particular and peculiar regard to military and paramilitary grade AI interventionism, is one best advised to understand it is first and foremost a renegade rogue private pirate rich enterprise in what is most definitely to humans, alien territory, and thus is any nationalistic humanistic bent on resources/preference for supply of novel almighty forces, both obstructive and self-defeating, sound advice which Uncle Sam has been clearly enough made aware of, although whether inclined to do anything constructive and positive with it, will clearly enough be seen in the fullness of time, which ideally should really be in next to no time, however don't bet anything you can't afford to lose in accepting that bet .....

    amanfromMars [2304040821] .... points out on https://www.nationaldefensemagazine.org/articles/2023/4/3/investing-in-people-is-the-real-key-to-integrated-deterrence

    [Thank you. Your comment will be displayed soon after reviewing.]

    It is a mistake, easily able to prove itself catastrophic to parties intent on demonstrating a leading advantage, to not realise and engage with others of another nationality/state or non state actor mentality with significant abilities required for assured international security and universal safety.

    Any insistence on US citizenship as a red-line precursor for employment/engagement/payment of foreign services expected to be provided, and maintained in continuous good working order ..... for such services may be uniquely designed as far too complex for home based handover ..... is guaranteed to not deliver that which is available better and cheaper and more advanced in further and deeper stealthy developments exercising and experimenting in missions from/for Live Operational Virtual Environments elsewhere.

  5. Dr. Ellen
    Flame

    Accountable?

    "Afina added that "there will always be a human accountable. That is for sure."

    This is - dumb. All my life, I've watched governments, people and agencies alike, do stupid or evil things, things that would have had me or you in court or in jail. I haven't seen that many heads rolling, or even job losses. When there were job losses, they were mainly low-level folk caught near the calamity, not the people whose orders caused the calamity. Accountability in governance is even less effective than reliability and truth in ChatGPT.

  6. Bitsminer Silver badge

    confusion and wishful thinking

    These are, without a doubt, very intelligent and experienced and dedicated people who are, like many of us, scared shitless about this new AI "chat" technology. And some of its implications.

    But a number of the things mentioned are cause for concern, because these people somehow don't seem to get how systems are actually built and operated.

    Yasmin Afina says the point of slow AI is that it would allow us more robust, thorough testing, evaluation, verification and validation processes... Verification? Against what requirements? And what "validation" can we achieve for a system whose build takes maybe five years and which is designed for a world view that might be ten years old? The tempo of (military) systems deployment is measured in decades, not calendar quarters. (Even assuming rapid requirements changes are made to adopt AI, the systems are still going to be years late.)

    It's much more likely that new military systems will be built by "just putting together technologies from the commercial sector", which means drones, smallish missiles, autonomous guns, and not heavy or difficult things like tanks, aircraft, or ships. Especially since you can build hundreds of smallish missiles, guns and drones for half the price of a jet fighter. Coming soon to a non-state group near you.

    Afina is also quoted as saying "Only a handful of companies have access to these kinds of facilities." (This is the wishful thinking bit.)

    True but false. Only a handful of companies (and not a single nation-state, significantly) can build the models, but once released, anybody with a decent gaming rig, or even a laptop, can run a model (https://github.com/ggerganov/llama.cpp).
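    For instance, a rough sketch of that "run it on your own kit" point, assuming the llama-cpp-python bindings around the linked llama.cpp project and a locally downloaded model file (the path, prompt and parameters are illustrative):

    # pip install llama-cpp-python  (Python bindings for llama.cpp)
    from llama_cpp import Llama

    # Any locally downloaded model file llama.cpp understands; the path is made up.
    llm = Llama(model_path="./models/7b-chat.gguf", n_ctx=2048)

    output = llm(
        "Q: Summarise the risks of autonomous weapons in one sentence. A:",
        max_tokens=64,
        stop=["Q:"],
    )
    print(output["choices"][0]["text"])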

    Which means text-based AI, composed with other technologies like weather modelling, graph calculators, linear programming and any other interesting mechanical stuff, all put together with langchain (found at langchain.com, of course), makes anything possible. To anyone with an itch to scratch.

    If you aren't frightened now, you will be.

  7. Anonymous Coward
    Anonymous Coward

    Colossus: We can coexist, but only on my terms.

    Did nobody ever watch Colossus: The Forbin Project? Or 2001: A Space Odyssey? Or even Dark Star?

  8. Anonymous Coward
    Anonymous Coward

    help, I'm dumb

    I went wow as I went to the site, cause it was clearly laid out and had a big 'download' tab. Including an option to download 'audio', another wow (this is how every gov site should be, man!) - and then I started to scratch my head. I get why anyone would want to download a 12-second clip of an honourable gentleman scratching their balls, etc. But what if I want to download everything (EVERY! THING!!!!)? No such luck. OK, so I have to set... start time and end time. OK, I enter the f... numbers, to the second, start... end... then I confirm to the bard-captcha that I'm not a bot, enter my e-mail, because this is a great way to collect live email addresses, and then I get this charming, absolutely clear message, in apologetic red:

    End Time cannot be later than the meeting end time.

    aka WTF.

    Or is it that, to make it even simpler (elementary, dear Watson), I'm supposed to practice my calculations, i.e. subtract 11.52.36 - 10.05.58 = 1 hr, 46 min, 38 sec, and thus enter 0:00:00 - 01:46:38?

    Can anyone enlighten me?

    ...

    OK, I enlightened myself: the whole video, per the timestamps displayed, is LONGER than what they entered (?) in some download parameters. Why TF do they have to make something mega-awkward instead of providing a simple 'download' button?

    Quote:

    Download created successfully. You have 4 downloads remaining. This will reset in 24 hours.

    Wow.

  9. imanidiot Silver badge

    "which sees its parallels in European law, where the GDPR says data subjects have the right not to be subject to a decision based solely on automated processing."

    Unfortunately, as we all know, any bureaucracy finds its laziest point where it can still be said to be something akin to functional, which means that any human intervention/review/decision in an automated process will boil down to "computer says no, so no". No original thought will ever have entered their heads.

  10. Cliffwilliams44 Silver badge

    Don't shoot the Sheriff!

    As the Sheriff told Mr. Marley, "Kill it before it grows!"

    Mankind, thinking it can control this technology, especially once quantum computing becomes a real thing, is delusionally suicidal!

  11. Cliffwilliams44 Silver badge

    ZARDOZ

    This also reflects the result of immortality in the movie ZARDOZ!

    Humans, after achieving immortality (which was reserved only for the elites of society), had become lazy and complacent; men had lost interest in sex, reproduction came to a halt, and innovation had stopped.

    What will happen when there is no new human created knowledge fed into the LLM? Will all new knowledge only be machine created knowledge?

    Thankfully, I will not live to see this.
