MPs ask who's responsible when AI crashes the UK finance system

UK financial regulators must conduct stress testing to ensure businesses are ready for AI-driven market shocks, MPs have warned. The Bank of England, Financial Conduct Authority, and HM Treasury risk exposing consumers and the financial system to "potentially serious harm" by taking a wait-and-see approach, according to a …

  1. that one in the corner Silver badge

    There must be clarity on who is responsible:

    > the developers, the institution deploying the model, or the data providers.

    Take the earlier sentence

    > Yet trade association Innovate Finance testified that management in financial institutions struggled to assess AI risk. The "lack of explainability" of AI models directly conflicted with the regime's requirement for senior managers to demonstrate they understood and controlled risks, the committee argued

    and the answer is trivially clear: the institution.

How could the devs be responsible? Explainability - or the lack thereof - is fundamental* to the design *and* it is up to the institution to verify a system before they put it in place. And if they can't examine the workings then, even if the data providers are entirely useless, there is no way to demonstrate that, which means there is no standard to hold them to.

    * As we are clearly talking about LLMs whenever they say "AI" this time

    1. Chloe Cresswell Silver badge

      Re: There must be clarity on who is responsible:

      Pretty sure their view will be "the end user for not understanding the risks of using our AI systems"

    2. Pickle Rick

      Re: There must be clarity on who is responsible:

      A: the institution deploying the model

Referring to "AI" makes absolutely no difference, and it muddies the water. It's a data system, regardless of claimed "unknown unicorn magic", and as such any and all output used for decision making is the responsibility of the organisation deploying their _chosen_ systems. It's then up to them to take up any dispute with suppliers if they feel that's justified.

    3. Pete Sdev Silver badge
      Stop

      Re: There must be clarity on who is responsible:

      How could the devs be responsible?

It depends. We all (at least adults) have our own personal responsibility. Have they implemented something knowing it's ethically or functionally dubious? The "lack of explainability" of a machine-learning model doesn't excuse, for example, biases in input data, or training, or risk analysis.

      "I was only following orders" hasn't been a valid defense for 80-odd years. There's also the wonderful German word "Schreibtischtäter".

      1. Doctor Syntax Silver badge

        Re: There must be clarity on who is responsible:

        "Have they implemented something knowing it's ethically or functionally dubious?"

I'd rephrase that slightly. "Dubious" leaves a little wriggle room. Where there's doubt there's room for it to be either wrong or OK.

        "Have they implemented something without knowing it's ethically and financially sound?"

        1. BinkyTheMagicPaperclip Silver badge

          Re: There must be clarity on who is responsible:

          It's not developers' jobs to do that. Knowing whether it is ethically sound is debatable, unless you're literally coding a baby eating machine.

          Financially sound? You have *got* to be kidding. If an employee should resign if they think the company is making financially unsound decisions, most people would have to quit. Especially since you could quite convincingly argue that the way to fix a company is to spend more money on staff, etc. Where do you draw the line?

Leadership supposedly sets the company's direction: if it goes wrong, it is their responsibility.

    4. Eclectic Man Silver badge
      Facepalm

      Re: There must be clarity on who is responsible:

      Oh dear.

      The whole point of automating systems that make decisions is that no one is to blame, no one gets fired, and no one gets punished (apart from us underlings, and we were always expendable anyway, so not much of a loss). 'AI' is just another way of confusing anyone who tries to find a causal link or a decision making chain between a disaster and an identifiable living person.

I once worked for a company that was very badly managed. The two owner-directors had told one person to do a piece of work that was literally essential to the continuing existence of the company. They had said that if he failed they would 'cut his head off'.

      I then went to this actually very good and competent person and asked "do they understand the difference between delegation of authority and abrogation of responsibility?" Before I had finished he had already replied "No." (His work was fine and the company survived another few years.)

Just look at the astonishing number of corporations that settle out of court, without accepting liability, for an undisclosed sum, having forced a non-disclosure agreement on the victims. When Harry Stanley was shot by two armed police officers who thought he had a shotgun (it was a wooden chair leg he had been restoring, in a plastic bag), the verdict was that the officers were not to blame. There was no consideration of what would happen when the armed police officers told someone who did not have a gun to "put the gun down NOW!" Similarly, Jean Charles de Menezes was shot by accident, but the police officer who emptied a magazine of seven bullets from his automatic pistol into the back of his head while sitting on top of him was in full control of himself.

      Basically if you follow the rules and there is a disaster, you will be ok. It, whatever 'it' is, will not have been your fault. The people who wrote the rules will have moved on and there will be no-one left to take the blame in person. And if the rules are incomplete, contradictory, unclear, ambiguous or merely 'commercially confidential' so cannot be released for general review (like search engine results or social media 'recommendations') so much the better.

SO: "The committee said there should be clear lines of accountability when AI systems produce harmful or unfair outcomes" is pointless and never going to happen.

      1. ThatOne Silver badge
        Devil

        Re: There must be clarity on who is responsible:

        > the difference between delegation of authority and abrogation of responsibility

        Much too complicated! Actually the rule is as simple as it is universal: "If you succeed, it's my success, if you fail it's your failure"...

    5. doublelayer Silver badge

      Re: There must be clarity on who is responsible:

      The problem with that is that it's never that simple. All three of those things could have responsibility. For example, the devs or rather the institution employing them are responsible if they include criteria that make the program decide against someone, either deliberately (dev writes a rule to make sure his enemy will be rejected if they apply) or negligently (they don't clean irrelevant junk out of their training data and end up biasing their model and not testing it). The data brokers are responsible if they have stolen lots of data but not correlated it properly and end up blaming people for things they didn't do which then gets used to judge against them.

      In some ways, I think we'd start in the same place. The institutions can get the first round of blame and, after they've done whatever is necessary for restitution, they can make claims against the responsible suppliers. There will be a lot of legal agreements designed to prevent that from happening, which could be a problem for regulators.

    6. frankvw Silver badge
      Devil

      Re: There must be clarity on who is responsible:

      I shouted out

      "Who killed the Kennedys?"

      When after all

      It was you and me

      The great prophet Jagger was entirely correct...

  2. corb

    The obvious answer is that the people who deploy AI are responsible for the impact of their use of #AI

    But explicitly evading responsibility is one of the core attributes of the AI hucksters.

    1. Doctor Syntax Silver badge

      So caveat emptor

  3. ParlezVousFranglais Silver badge

    The real answer of course is that *nobody* will end up being held responsible, a public enquiry will be held in 10 years to decide that "someone should have done something and lessons will be learned", 5 years after that no lessons will have been learned, and the cost of the inaction, the public inquiry, and the unlearned lessons will be footed by the UK taxpayer

    1. 7teven 4ect

      Another real answer of course is that mogri will end up being held responsible until the question can be resolved appropriately

      mogri fixes a lot of problems caused by nobody

    2. Jamie Jones Silver badge
      FAIL

To add to that, if the failure is catastrophic for the financial institutions involved, we, the tax-payers, will be the ones to bail them out. "Too big to fail".

      Anything that would be considered "too big to fail" should have to follow this law, not just banks: https://www.theguardian.com/business/2022/jun/10/uks-largest-lenders-no-longer-too-big-to-fail-says-bank-of-england

      Banking Act 2009

  4. 7teven 4ect

    Mogrisponsible - Fuzzy responsibility

When you name the container, Mogri, rather than negating it, you open up the power of mogri for your disposal

    One can only hope there is a mogri derivative for your problem or a mogrivative solution in some works.

    Robot-assisted analysis follows:

MOGRISPONSIBLE – ONE-PAGE INQUIRY LENS

(Fuzzy Responsibility for AI-System Failure)

Problem:
When AI-linked systems fail at scale, responsibility cannot be cleanly collapsed onto a single actor without losing explanatory power. Asking "who is responsible?" produces theatre, not repair.

Container:
Mogri = the named container for distributed responsibility across time, actors, incentives, and silence.

Core Shift:
Responsibility is not an event. Responsibility is a gradient.

Inquiry Reframe:
Replace "who caused this?" with:

1. Where did responsibility diffuse faster than oversight evolved?

2. Which incentives were rewarded as warnings weakened?

3. Where was uncertainty suppressed rather than surfaced?

4. Which actors benefited from non-decision?

5. At what points could friction have been introduced but was not?

What to Measure (not who to blame):

- Override frequency and trend

- Uncertainty signalling vs suppression

- Update velocity vs comprehension capacity

- Human-in-the-loop authority in practice (not on paper)

- Kill-switch ownership and latency

- Budget pressure at deployment boundaries

- Time gaps between known risk and action

Failure Pattern (Typical):

- Distributed gains, centralised blame

- Local optimisation, global fragility

- Responsibility evaporates before impact

- Accountability invented after impact

Post-Crash Use:
Do not seek a neck. Map the mogri field. Allocate responsibility proportionally without pretending singular causation. Design controls where mogri thickened. Restore friction where gradients became too steep.

Outcome:
A system that survives being wrong without requiring a villain to function.

If Mogri is unnamed, failure is moralised. If Mogri is named, failure becomes legible.

    1. 7teven 4ect

      Re: Mogrisponsible - Fuzzy responsibility

      Super sorry, silly robot forgot to provide you links to the mogri assets - https://github.com/lumixdeee/CSP-105/tree/main/spec

      completely understand the thumb down ire, it won't happen again

    2. Doctor Syntax Silver badge

      Re: Mogrisponsible - Fuzzy responsibility

      Not even false.

      1. 7teven 4ect

        Re: Mogrisponsible - Fuzzy responsibility

        mogric

        you mean?

  5. elsergiovolador Silver badge

Reassurance

The MP likely just wants to know whether they'll become liable if they keep pushing the AI.

    The gravy train must roll.

  6. Anonymous Coward
    Anonymous Coward

AI is the end result of the UK Government’s policy of “outsourcing”. All done by stealth under the guise of (…insert saving of choice).

    Arms’ Length Bodies - NHSE, for example - put another layer between the “customer” and the Government. It just results in everyone giving up. Because, frankly, what’s the point?

  7. Tron Silver badge

    It's simple really.

    Committee: Who is responsible?

    Minister for Administrative Affairs, U turns, and AI: Not us.

    -Have you done anything to mitigate issues?

    -Yes, we have 'guardrails'.

    -Can you explain how they work?

    -No.

    -Do they work?

    -No. They are a palliative concept to calm the moral panic of the masses.

    -Are you not concerned about the risks?

    -No. The UK has been an economic and political shitshow since Brexit, and nobody will even notice if anything else goes TU, or if any other services fail.

    -Do you personally have a plan?

    -Yes. I have a villa in Barbados and a house in New Zealand.

  8. Neoc

    Let me use a non-AI, non-IT, real-world example:

I paid Company A to do some work. Company A decided to use subcontractors. Subcontractors did a piss-poor job. Who was responsible? Speaking from experience where Company A tried to pawn me off to the subcontractors, Company A is legally responsible (at least where I live). That's who my contract was with. It's up to them to fix my problem, not me to try and keep track of all the subcontractors they may or may not have used.

    Same thing here. Bank A decides to use AI. AI screws up my account/transaction/whatever. Bank A should be responsible for fixing my problem as THEY decided to use the AI. If Bank A then decides to turn around and extract a pound of flesh from whoever supplied them with the AI solution, that's up to them. But *my* dealings are with Bank A - not the AI supplier. It's not like I was given a choice to use their AI or not.

  9. amanfromMars 1 Silver badge

    A N Other Variety of Liz Truss lettuce*

    UK financial regulators must conduct stress testing to ensure businesses are ready for AI-driven market shocks, MPs have warned.

    Oh please, not yet another complete and utter waste of moronic effort and endless time. FFS.... get with the NEUKlearer HyperRadioProACTive IT Program.

    Those sorts of pie in the sky and pigs will fly actions are sure to be every bit as successful should it ever be discovered failing exclusive executive systems administrations and their fiat currency and debt based banking systems were advised ........

    UK financial regulators must conduct stress testing to ensure businesses are ready for Parliamentary Cabinet Office driven market shocks, AI have warned.

    Liz Truss lettuce* ...... https://en.wikipedia.org/wiki/Liz_Truss_lettuce
