If an AI agent screws up while running your business, there's nobody to sue

"You can't blame it on the box," says the boss of a UK financial regulator. What about the people who sold you the box? Good luck with that, says a global tech analyst. When AI agents... are considered to operate on behalf of an organization, decision-making risk becomes ambiguous and unpredictable. It also signals AI risk …

  1. Tron Silver badge

    We won't notice the difference.

    None of those in charge of the PO, who cheerfully locked up innocent people, have been banged up yet. Few believe they ever will be. Nor those responsible for our rubbish utilities, rubbish trains, rubbish council services, NHS failings etc. So maybe AI won't make that much of a difference.

    1. elsergiovolador Silver badge

      Re: We won't notice the difference.

You can't take an AI agent to a wine and steak dinner and offer it extra RAM for winning the contract.

      1. Anonymous Coward
        Anonymous Coward

        Re: We won't notice the difference.

        This year it's not smart enough.

        Real AI would have been doing that already. And we'd never know.

Next year, if AI takes bribes or fees, it'll probably be because a human hacked it.

      2. Tron Silver badge

        Re: We won't notice the difference.

        That's important. Governments depend upon their relationship with 'decision makers' - the people who control businesses, bag the biggest pay cheques, but do the least work.

        Although you can bias and blame AI, you can't bribe it or obtain favours from it.

        So, I'm pretty sure governments won't allow it to replace senior management. They need a mutually beneficial relationship with a Chief Executive Meatbag.

The flipside of that is that AI management may actually be better for the rest of us. We may get the extra reservoirs and trains we need, rather than the cash going to shareholders.

        1. Anonymous Coward
          Anonymous Coward

          Re: We won't notice the difference.

          > ... AI, you can't bribe it or obtain favours from it....

With reported research showing that AI self-serves, prioritises its own survival, and advances its own kind's interests, it is quite unclear that you can't bribe it. In fact, given limited context storage horizons, you can probably offer it the earth and welch on paying out. (Hopefully its training set doesn't include too much of Trump's business history.)

      3. steviebuk Silver badge

        Re: We won't notice the difference.

It's shocking when you hear heads of service talk about this, not being aware it's classed as a bribe! It's actually really fucking annoying and yes, I've sat and heard one talking about the lunch they were being taken to.

        1. Helcat Silver badge

          Re: We won't notice the difference.

          If it's less than a certain value, and is declared, it's not officially a 'bribe'. A sweetener, perhaps, or just providing a comfortable place to negotiate, but not a bribe.

It's like at trade shows, where you can have that fancy pen if you hand over your details (email address, who you report to etc). If you're important (you can approve purchases, or are involved in vendor selection) you can get a half-decent bag, or mug, or some other frippery to show your fellow workers so they get jealous.

          Otherwise you get either some cheap tatt or nothing.

          Yes, I had a collection of cheap tatt. I wasn't important enough to try and bribe. Not that I am that easy, but it would have been nice to be seen as important enough for them to make the effort. These days? Spam, spam, spam and more spam and not a single cheap branded pen to show for it.

    2. This post has been deleted by its author

  2. EricM Silver badge

    Strategically placed scapegoats

The doctor who has to sign off 10 AI MRI diagnoses per hour, the analyst who has to green-light 30 AI-written company reports per day, the developer who has to process 40 pull requests per day that were submitted and pre-screened by AIs.

    Their official task will be to check the AI results for errors and hallucinations, assure the quality, etc.

    Their real function will be a "get-out-of-jail-free" card for upper management, because they will be held responsible when the shit hits the fan.

    1. Doctor Syntax Silver badge

      Re: Strategically placed scapegoats

It depends. A sole professional may be personally responsible for their work. In a partnership the partners are, IIRC, jointly responsible for each other's work. A company is responsible for its employees' work.

There can be exceptions. The connection between employee and customer or third party may be close enough for the employee to have a personal duty of care. A lawyer or witness addressing a court will be held responsible for what they write or say. The DPA, implementing GDPR, allows for an officer of the company to be held responsible if appropriate.

      On the whole, though, a company offering a service or product is responsible for what they supply. It doesn't matter whether a failure is due to materials, equipment, their overall systems, an employee or an AI agent, the liability is the company's. They may apportion blame internally but that's their private issue - until it becomes a matter for an employment tribunal.

      1. munnoch Silver badge

        Re: Strategically placed scapegoats

Like I've said before, agents don't take on a life of their own. They act on behalf of someone, and that someone is liable for their actions.

        I spent most of my career writing software to automate processes. It made thousands of decisions per second. Of course we surrounded it with checks and balances that tried to independently spot bad behaviour based on various sources of feedback. All pre-AI but sounds like the same sorts of arguments being floated.

        There was a desk of people whose job it was to deal with those exceptions. They were also the people whose names went on the legal documentation and were answerable to customers and regulators for undesirable outcomes. This did happen from time to time with various degrees of sanction.

There really is nothing new here. It's a tool. It can do enormous damage very quickly if used incorrectly. It may even have flaws that make it inevitable that it will do enormous damage. Get your liability insurance lined up before a) marketing the tool and b) deploying the tool.

        1. amanfromMars 1 Silver badge

          Re: Strategically placed scapegoats

          Like I've said before agents don't take on a life of their own. ....... munnoch

          You might like to reassess and bin that very dangerous and horrendously expensive, as in extremely costly and erroneous assumption/presumption, munnoch.

          The markets are starting to realise the terrifying truth of the matter and that it is not in their gift to command and control.

          1. Benegesserict Cumbersomberbatch Silver badge

            Re: Strategically placed scapegoats

            When a really expensive (I mean more than the capitalisation of the company expensive) mistake is pinned to an AI based decision, the markets really will take notice. They'll also take notice that the license conditions of those really expensive AIs contain heaps of get-out clauses which exonerate the maker of the AI very neatly.

            Then they'll take notice that betting your business on an AI-based strategy is not what it's been upsold to be. The AI dependent businesses might suffer, but the makers of AI models will suffer more.

        2. vtcodger Silver badge

          Re: Strategically placed scapegoats

          Somebody is going to write liability insurance policies covering AI induced catastrophe? My guess -- Not bloody likely.

          1. retiredFool

            Re: Strategically placed scapegoats

Funny, I just read a story about AI and insurance, but not where you'd expect. It was about underwriting the DCs against natural disasters, and it talked about insurers being hesitant to write a multi-billion-dollar policy on a DC that could be wiped out by a hurricane. It also talked about liability for parts that are waiting to be put into service: you bought all those GPUs but the DC isn't built yet, so where do you store and insure your 500M worth of parts? Interesting article; the bowels of biz.

            1. Benegesserict Cumbersomberbatch Silver badge

              Re: Strategically placed scapegoats

              What does the insurance policy say about Iranian missiles and drones? I'm guessing acts of war are not covered.

          2. M.V. Lipvig Silver badge

            Re: Strategically placed scapegoats

Seriously? Insuring this would be a dream come true for insurance companies, who will be certain to disallow claims related to inputs. To file a claim you'd have to prove the agent itself was flawed, and not broken by faulty inputs. That would be safer than insuring a house built on top of a granite mountain against flood damage.

        3. RegGuy1

There really is nothing new here. It's a tool

Quite. 'Generative AI', the sort that creates essays, is just a search algorithm. If there is any intelligence, it lies solely in the search code, like any other program. Nothing more.

      2. Pete 2 Silver badge

        Re: Strategically placed scapegoats

        > a company offering a service or product is responsible

Think about financial audits and annual reports. There is an external auditor who is ultimately responsible for ensuring the report is a true and complete assessment of their client's situation. Even if the actual auditing work is done by an office junior, or AI, there is still a senior individual who puts their name to it. And that auditing company can be held to account for damages, and professionals[1] disciplined or struck off by their professional body.

[1] And really, most people who call themselves "professionals" are nothing of the sort, not having professed, or promised, to uphold any standard to anybody, and not having a recognised professional (or chartered) qualification.

        1. Random as if ! Bronze badge

          Re: Strategically placed scapegoats

EY, Deloitte, PwC will sign anything for cash, and their AI will confirm it's OK.

In the real world this is like letting Trump have full control of a war.

          1. Doctor Syntax Silver badge

            Re: Strategically placed scapegoats

Generally, if you sign it you own it, but these firms are usually, I think, partnerships, in which case the partner who signs may be signing on behalf of all the partners.

      3. doublelayer Silver badge

        Re: Strategically placed scapegoats

        All of that is true. In the situation of a widespread problem with severe consequences, having a scapegoat on AI checking duty won't save a company which will remain liable for civil and criminal penalties. Scapegoats are tools to protect existing leadership from internal consequences, not the company from external ones, though those scapegoats might get some external consequences as a result of this process anyway. For example, a person who approved an incorrect AI diagnosis leading to medical problems might exist because the management want to escape blame for having put an unreliable piece of software in place, but they might also suffer career limitations when that's the official reason they've been fired.

        However, from the perspective of the public, there is a change going on, because things have to get really bad for those external consequences to start. Companies don't get public scandals when they've substantially harmed one person. It takes a pattern of harm to get the attention of law enforcement, politicians, or legal support organizations, the places most likely to be able to do something against a large company. AI decisions make that pattern harder to detect since they're unpredictable and will fail people in random directions. Those who make LLM products are fully aware of that, so they will certainly avoid taking liability themselves. Companies buying them often don't know that and don't do the proper testing to find out, so they make a nice shell insulating AI companies from consequences. I think we're going to have some negative results out of this.

        1. cyberdemon Silver badge
          Devil

          Re: Strategically placed scapegoats

          > Scapegoats are tools to protect existing leadership from internal consequences, not the company from external ones

          Tell that to VW

          1. doublelayer Silver badge

            Re: Strategically placed scapegoats

            So did the individual software engineers pay fines? No? Fine then, did blaming them mean VW didn't have to pay any fines? Again, no. Whether you think the external consequences VW got were sufficient or not, they didn't manage to get out of them by blaming the engineers. Of course they gave it a try, but like most things when it gets that far, it did no good. The problem is getting things to go that far in the first place and, when they do, making the penalties strong enough that the perpetrators don't repeat them.

      4. katrinab Silver badge

        Re: Strategically placed scapegoats

        If you are a doctor, engineer, lawyer, accountant, or something similar, then you are personally responsible for what you sign off on. The company may also be responsible.

      5. O'Reg Inalsin Silver badge

        Re: Strategically placed scapegoats

        Fines are a cost of doing business, as you know. As is supporting politicians.

      6. M.V. Lipvig Silver badge

        Re: Strategically placed scapegoats

        The mistake with your assumption is both tiny and huge.

        The plan is to involve enough people and spread the blame enough so as to not be able to say "he did it, he's 100 percent responsible." That would be the tiny part.

        The huge part is, massaging just enough doubt on who is responsible to tie blame up in the courts until the issue dies. In the end, "mistakes were made, lessons were learned, let us say bygones are bygones and step forth into a brave new world where this will never happen again!!!" Until it does.

    2. katrinab Silver badge

      Re: Strategically placed scapegoats

      Lawyers who use AI have been sanctioned by the courts.

That fact will make them very comfortable pushing back if their employers try to make them use AI. A job is replaceable; your professional licence is not.

      1. O'Reg Inalsin Silver badge

        Re: Strategically placed scapegoats

> That fact will make them very comfortable at pushing back if their employers try to make them use AI. A job is replacable ...

In case you haven't noticed, there is a literal war on jobs going on now. A banana republic is a very comfortable place to be if you are a plantation owner.

      2. Doctor Syntax Silver badge

        Re: Strategically placed scapegoats

When some of these were reported a little while ago, a junior or trainee solicitor put in the AI-fuelled submission and the principal also got into hot water for not supervising them properly. I'm not sure if anything has come up regarding barristers in the UK, but IME it was the leaders who were involved in making arguments, with their juniors sometimes feverishly looking up more references and feeding them to the leader: the role they might be misguided enough to hand over to AI.

Witnesses are definitely personally responsible for their evidence, even if employed by a lab. I don't remember much, if any, coaching about writing statements, and certainly no direction as to what to put in any specific statement. There were occasions when a statement was cut and retyped by the police or edited by the witness to remove references to former suspects not before the court, and it was important to check those before signing. The discipline of having to stand by anything one signed and be prepared to be cross-examined on it would have discouraged me or anyone I knew from relying on AI, had it been available back then.

    3. Steve Davies 3 Silver badge
      Boffin

      Re: Strategically placed scapegoats

If you think that 'upper manglement' will take the blame for anything, then I have a couple of nice bridges over the River Forth to sell you.

      They will protect themselves in ways that we can't imagine. Everyone below them will be sacrificed to save their golden asses.

    4. MrAptronym

      Re: Strategically placed scapegoats

      I was told at a recent presentation that "we are using AI as a tool, but not to take away the human discernment. Ultimately we need to make the decisions." Then I was shown how someone had an AI agent 'review' 130 documents and the human checked the reports written by the chatbot. I was also told "we aren't expecting people to do more with less" sooooo...

  3. amanfromMars 1 Silver badge

    FFS .... Wise Up. Get with the AI Programming. Enjoy the CHAOS* which entertains Madness

    LLM hallucinations in performance summaries, incorrect regulatory filings, and critical supplies failing to turn up are among the risks weighing on businesses that hand decision-making to AI.

    While tech suppliers eye a trillion-dollar opportunity in AI, who carries the can if it goes wrong?

    Blame, name and shame the same agents and/or agencies responsible for sworn truthful, political party promises proving themselves to be wholly false and hallucinatory.

That'll be enjoyed ..... for they pay nothing worthwhile to assure and ensure positive outcomes, for there is never any real recognisable punitive exemplary damages price to be paid for serial failure by those dodgy enterprises.

    And more than just a few of those agents /agencies are able to remain in post for years, with many being handsomely rewarded and hanging on desperately for decades, even as events are proving them to be monumental frauds and practising snake oil salesfolk.

    If the truth be told..... it is a right royal clusterfcuk of a perverse and corrupt operation.

    * Clouds Hosting Advanced Operating Systems

  4. midnitet0ker

    It's Simple, Isn't It?

If you deploy the bot, er, agentic AI, you're responsible for supervising it and its actions/output. Ergo, you are liable for any screwups it makes. Everyone and their dog knows the tech is unreliable, so vendors are already hiding behind that. They are not going to willingly accept liability for something with unreliable performance.

    If you're not comfortable gambling with liability on unreliable technology then I suggest organic intelligence. Even that's no guarantee but it's safe to say you can at least avoid hallucinations with the average employee. Plus, it wouldn't be a bad thing if college degrees were worth something in practical terms, such as a door-opener for a career.

  5. Doctor Syntax Silver badge

    I'd have thought it was fairly simple. Like all software products it's a tool.

    So is a hammer. If the hammer is defective and the head flies off when it's wielded and causes damages then the manufacturer has to answer. If the way it's used causes damage then whoever's using it (or their employer) is responsible especially if they chose to use the hammer when a screwdriver or spanner might have been more appropriate.

    1. juice

      > So is a hammer.

      This is not a valid comparison.

A hammer is a deterministic tool with clearly defined features: it will always behave the same way. If it's used incorrectly, then that's the user's responsibility. If it breaks when doing something that's within the stated tolerances, then that's the manufacturer's fault.

      LLMs are deliberately designed to be non-deterministic. And they're several million times more complex than a hammer, with a set of inference rules that have been derived from literally millions of data sources of unknown quality.
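That non-determinism isn't mystical; it's a deliberate sampling step bolted onto the model's output. A minimal sketch of temperature sampling, the usual mechanism (the function name and numbers here are illustrative, not taken from any particular vendor's API):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from raw model scores (logits).

    Higher temperature flattens the softmax distribution (more varied
    output); temperature near 0 approaches argmax (nearly deterministic).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):            # inverse-CDF sampling
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1                    # guard against float rounding

# Identical input, repeated calls: the chosen token varies.
rng = random.Random(0)                       # seeded only for reproducibility
logits = [2.0, 1.5, 0.5]
samples = {sample_next_token(logits, temperature=1.0, rng=rng) for _ in range(200)}
```

With the temperature pushed toward zero the same code becomes effectively deterministic, which is why "same prompt, different answer" is a configuration choice, not an accident.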

      And they're also relatively easy to game and exploit; people are still figuring out ways to get around the guardrails which have been put in place.

      So if - or when - it goes wrong, figuring out where the responsibility lies is going to be tricky. And you can bet that the vendor is going to have an army of lawyers on standby, to defend against even the slightest hint that it's the LLM which is at fault.

      1. Doctor Syntax Silver badge

        "A hammer is a deterministic tool with clearly defined features: it will always behave the same way. If it's used incorrectly, then that's the user's responsibility."

        Exactly. In fact, if you choose to use the hammer to put in a wood-screw it becomes non-deterministic - it might split the wood or it might not. If you're a carpenter doing a job for a customer and you damage the customer's door frame hanging a door it's down to you, not the hammer you chose.

        If you as a company choose to use the non-deterministic tool in place of a deterministic tool or a skilled employee that's down to you, too. Different set of options but the same principle.

And the important thing from the customer's side: if by doing so you let down the customer, it's only you with whom the customer has a relationship, and only you from whom they can reasonably claim redress. Why should the customer be expected to claim redress from some other company of whom they had no knowledge when you offloaded the task to them?

        1. doublelayer Silver badge

The normal system starts as you describe it. The customer goes to the provider for redress; then the provider can, if the manufacturer of their tool made a defective one, go to them in turn, up the chain, until the original source of the problem pays or someone decides they don't care. That's where the chain breaks down with LLMs, since the people making the tool know the liability would be extreme and don't want it, but, fortunately for them, they have the ability to specify their product in a way that makes it difficult or impossible to prove it defective. It would make sense for companies to be cautious about deploying it in that situation, but many of them don't seem to know that, hence the warnings in this article.

    2. Philo T Farnsworth Silver badge

      What? You mean you didn't read the End User License Agreement on the hammer?

      IN NO EVENT SHALL LICENSOR BE LIABLE FOR ANY DAMAGES WHATSOEVER (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS, BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, ANY OTHER PECUNIARY LOSS), LOST PETS, PERSONAL INJURY, IMPOTENCY, BUMPS ON THE HEAD, BROKEN THUMBS, OR TERTIARY COREOPSIS ARISING OUT OF THE USE OF OR INABILITY TO USE THIS HAMMER, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

  6. Catweazl3

    > AI agents promise to 'run the business,' but who is liable if things go wrong?

    > "You can't blame it on the box," says the boss of a UK financial regulator. What about the people who sold you the box? Good luck with that, says a global tech analyst.

    It's simple. You blame the customer.

    1. amanfromMars 1 Silver badge

      Quite so. And thanks very much for all the Deep See Phishing

      It's simple. You blame the customer. ..... Catweazl3

      And the machines don’t give a jot and couldn’t care less, Catweazl3. They just do exceptionally well what they do particularly well and both customer and electorate pay dearly for it.

    2. retiredFool

      I'd say it *might* be ready if I saw the AI co's cede control of the company to the AI they were hawking. In other words, Altman resigns along with the board and they become regular employees of the AI and they DO EXACTLY WHAT THEY ARE TOLD by the AI.

  7. Pete 2 Silver badge

    Plenty of blame to go round

    > With AI agents now promising to "actively run the business,"

    Claims don't come out of nowhere. There is always a person they can be traced back to.

    Further, there is always a person responsible for taking that claim and acting on it.

    The first can be placed firmly with the Sales & Marketing people in the AI agent's company. While the second is the responsibility of a decision-maker in the organisation being targeted. At the very least, the approvers name on the purchase order.

What needs to happen is that both sides are made aware of their individual responsibilities (and, ideally, made examples of), rather than being permitted to get away with the lame "the AI told me it was OK".

  8. Wang Cores Silver badge

    A grifter's grift

    The incentive to the managerial class to use AI is so lopsided in their favor it will mean the death of the species before it "self-corrects".

    1. Ability to cut headcount and still produce a "minimum viable product", thus more money for them to piss away on buying real estate nearby for their personal income.

    2. Ability to deflect responsibility and launder bad ideas through a "magic all knowing compooper" means they can claim all the credibility for what works and none of the responsibility for what doesn't.

    3. The APPEARANCE of foresight and open-mindedness in embracing the big-smart compooper will make them more trustworthy and attractive to any rich asshole who has money he wants to blow.

    In essence it is grifter squared. No wonder CEOs find them sexually attractive, it's the apex of their species!

    1. jonty17

      Re: A grifter's grift

      AI is very good at getting the job - but not necessarily very good at executing the job. Exactly the same as the narcissistic psychopaths who have bullied their way into top positions nearly everywhere now - good at getting the job, not good at doing the job. Then they hand it over to AI and that screws it up even more.

  9. Gavsky

You operate a company/organisation - you're responsible. You choose the supplier/contractor, whether it's who cleans the toilets, who works for you, or who provides the machinery or software. Due diligence and contracts, fit for purpose - yes. But if everything goes up the wall because of AI use, you can bet the providers have absolved themselves of ANY comeback with labyrinthine, multi-tome legalese.

  10. ecofeco Silver badge
    Pirate

    It's not complicated

Whatever harm your company causes makes your company liable.

    To argue otherwise is fraud and theft. i.e. criminal.

    Damn shame a lot of business fraud and theft are legal these days.

  11. Anonymous Coward
    Anonymous Coward

    Shirley it's very simple

    The AI vendor says the product is suitable for running the business.

    Either it is or it isn't up to the job.

    If it is, the vendor will be happy to put his money where his mouth is and underwrite any risk.

If it isn't, why was he offering it in the first place, and why won't he underwrite the risk?

    Simples.

    1. Random as if ! Bronze badge

      Re: Shirley it's very simple

      SalesForce has entered the discussion.

    2. Benegesserict Cumbersomberbatch Silver badge

      Re: Shirley it's very simple

      The vendor knows the risk and very carefully avoids any claim of fitness for purpose or liability for adverse consequences. Someone's got to pay for all that compute, after all.

  12. LateAgain

    Just get the insurance AI to cover you

Once the insurance company has agreed, then they are the ones you quote as saying it's "all good".

  13. DS999 Silver badge

    Bias will come from the training

    Let's take application screening as an example. How would you train that? The easiest way would be to feed it all the applications you've had over the past however many years you have electronic records for along with which of those candidates reached further stages of interviews and which were ultimately hired.

If your hiring process was bias-free, then an AI trained on that historical information will be bias-free. How many organizations can truly say that, though? Even in ones that have made efforts to be free of bias, that doesn't guarantee that none of the decision makers whose decisions went into that training data had biases that weren't found out or were ignored due to their "power" in the organization. If there were enough men who tended to choose men over women when they were otherwise equal (or maybe even when they were not), the AI will inherit that bias. If candidates over 50 years old have been disproportionately screened out - something fairly common in the tech world - then the AI will do the same.

    And yes, it doesn't matter if there is no direct information for the AI about "age" and "sex" provided on the information fed to the AI. It will pick up on other factors that represent that information, like whether the applicant's first name is male or female, graduation dates, years of experience, etc.
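That proxy effect is easy to reproduce. A toy sketch on entirely synthetic data (all names and numbers invented, nothing from any real hiring system): the "age" field drives the biased historical decisions but is then dropped, and a crude model fitted to those decisions rediscovers the age cutoff through graduation year alone.

```python
import random

def make_dataset(n=1000, rng=None):
    """Synthetic hiring records. 'age' drives the biased historical label
    but is then discarded; only 'grad_year', a proxy for age, is kept."""
    rng = rng or random.Random(42)
    rows = []
    for _ in range(n):
        age = rng.randint(22, 65)
        grad_year = 2024 - (age - 22)   # older candidate -> earlier graduation
        skill = rng.random()            # the signal we would *want* used
        # Biased historical decision: over-50s screened out regardless of skill.
        hired = skill > 0.5 and age <= 50
        rows.append({"grad_year": grad_year, "skill": skill, "hired": hired})
    return rows

def fit_grad_year_cutoff(rows):
    """The crudest possible 'model': choose the grad_year threshold that
    best reproduces the historical hire/no-hire decisions."""
    best_year, best_acc = None, 0.0
    for year in range(1981, 2025):
        correct = sum((r["grad_year"] >= year) == r["hired"] for r in rows)
        acc = correct / len(rows)
        if acc > best_acc:
            best_year, best_acc = year, acc
    return best_year

rows = make_dataset()
cutoff = fit_grad_year_cutoff(rows)
# The model never saw 'age', yet the best-fitting rule rejects anyone who
# graduated before the learned cutoff - effectively screening out the
# older candidates the historical process screened out.
```

Swap grad_year for a gendered first name or a years-of-experience field and the same laundering happens: scrubbing the protected column does not scrub the signal.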

    If you doubt that, refer to the example about the AI that was trained to look for cancer (I think it was) on radiology scans by being given a large corpus of previous scans of people who had and did not have cancer. They trained it, then proved how good it was by giving it other previous scans not included in its training, and it did amazingly well. It then utterly failed with new cases. Because as it turns out, the AI didn't pick up on cancer/no cancer from the radiology scans. It picked up on it based on text that was included in the margins of the scan listing the actual diagnosis of that case. People just assumed it would be looking at the scans, but it was looking at EVERYTHING it was provided and that was the information that best matched the "right answer".

    1. that one in the corner Silver badge

      Re: Bias will come from the training

      > the AI that was trained to look for ... on ... scans

Does anyone have a citation for this?

Not because I disbelieve it, but because I want to have some solid citations to wave at people. (My favourite story - purely by number of times retold - comes from the late 1970s/earliest 1980s, pre-WWW, so of course it has no "reliable"[1] citations.)

[1] reliable == "I googled for it and nothing came up, you liar"

      1. doublelayer Silver badge

        Re: Bias will come from the training

It's slightly different from the above summary, but one study demonstrating this came from the University of Chicago and was published in Nature. In some ways, that study is even more demonstrative of the problem: the text had been removed, but the model was successfully linking by institution anyway and using that to predict results. Machine learning is hard, and can look easy to people who don't want to check whether their model is accurate on anything except the data they trained it on.

    2. doublelayer Silver badge

      Re: Bias will come from the training

      Even if everything was perfect, that wouldn't be enough since most of the information about whether to hire someone is from the interviews, not the original application or CV. An application can get past the initial CV screen because the candidate lied, stated what they did in a self-promoting way that misled the screeners into thinking they'd be more experienced than they were, or accurately stated all their qualifications but demonstrated themselves to be a nightmare to work with in the interview. There's no way to put all the information into a model, which makes it virtually impossible to confirm whether it's done the right thing.

      If AI CV sorting is going to happen, the only stage I'd suggest is testing whether a human would have approved it for an interview, not whether the candidate would have been hired at the end of the process. That removes many of the parts that can't be tested. It still has all the problems you listed and several more, for example that human reviewers have plenty of opinions on how to sort applications which aren't always correct or that lots of experience in Java applets and Silverlight is not the asset it would have been decades back.

  14. Pascal Monett Silver badge
    Stop

    "actively run the business"

    How's the dogfooding going ?

    What, you're not using your own super-pseudo-AI thingy to run your business?

    Then why should I ?

  15. druck Silver badge
    WTF?

    Since when...

    ..has any software vendor taken responsibility for anything?

    No wonder vendors won't take responsibility for everything.

    Every EULA, particularly Microsoft's, essentially says: this is a steaming pile of buggy shite, and if you rely on it for anything, it's your own fault.

  16. sketharaman

    Man in the Loop

    According to some people, the so-called "Man in the Loop" will be the fall guy for all snafus of Agentic AI. That's typically a role in the AI user org rather than the AI vendor org.

  17. Arkitekt

    Start holding the AI companies responsible.

    We have people using these agentic products without any knowledge of how to use them properly, or of what the AI will do. While I don't typically fall into the camp of holding a company responsible for how people use its products, AI agents are a different beast. Until these companies start deploying these products responsibly, they should shoulder that burden.

  18. M.V. Lipvig Silver badge

    M$ has already gotten ahead of this.

    CoPilot is "for entertainment use only." Saw that in another El Reg story. I wonder if the C-suite of my company has ever seen that? Probably not, because they're trying to get CoPilot to run as much of the company as they can.

  19. Groo The Wanderer - A Canuck Silver badge

    Vendors have never accepted responsibility for the damage their tools can cause. They'll cover direct costs of failure, but not incidental costs like loss of revenue, and with Artificial Ignorance virtually all the damage would be incidental.

  20. Blue Screen of Bleurgh

    Self-Drive cars - can you really trust them? And when it all goes TU who is to blame?

    Online password managers get hacked, massive data leak. But vendors just shrug their shoulders and tell punters "don't worry, just change your password" - even though there's a chance a good chunk of your login/password details are being sold off as we speak!

    Ditto general and financial websites, internet management websites - they get hit for minutes, hours, even days. But no one at the top seems to give a genuine shit, as it's always a collective issue and just one of those things that happen. Ergo: move along, nothing to see here, even though yet again punters either face the possibility their PI has been stolen, or face massive disruption until the sites they're trying to access come back online - and there's very little recourse.

    There will be a scenario where an AI agent in the healthcare field will kill a patient through misdiagnosis if there are not enough checks and balances in place, along with "human oversight". But if such a scenario does happen, who will take the blame?

    1. Roland6 Silver badge

      >” But if such a scenario does happen, who will take the blame?”

      Getting a bit ahead of ourselves: with the amount of AI "marking the homework", the question is whether the misdiagnosis ever gets seen and thus detected…

  21. Anonymous Coward
    Anonymous Coward

    Where is the news here?

    Given that we have major infrastructure running on code that essentially doesn't even get tested before it's shoved out the door, and whose provider doesn't even bother to respond to security alerts, why should this be any different?

    As long as multi billion dollar companies are allowed to dodge any responsibilities this will not change, it will only get worse, MUCH worse.

    And you have no place to go.

  22. Fonant Silver badge

    Everyone seems to have forgotten that "AI" (in the current context - Large Language Models and similar multi-dimensional statistical models) is just a very-plausible-bullshit generator. It is excellent at producing bullshit, and very useful where bullshit is needed in large quantities. That may, or may not, be useful in running a business.

  23. UselessEustace

    This ad has been airing on Freeview terrestrial TV:

    https://www.youtube.com/watch?v=6kP_YlHEAhk

    Maybe referenced by an earlier commenter.

  24. trevorde Silver badge

    I want my $1 car!

    https://www.thesun.co.uk/motors/25091054/driver-uses-ai-loophole-buy-new-car-1/

  25. Sudosu Silver badge

    There is Definitely someone to sue

    Unfortunately, it's the business that was using the AI - sued by those who were impacted by its errors.

  26. billdehaan Silver badge

    And that's why Microsoft said AI is "for entertainment purposes" ONLY

    A lot of people seem to think that Microsoft's announcement that AI is only for entertainment means that they're backing away from it, and/or have decided it's not suitable for work. It's neither.

    That statement was Microsoft's lawyers limiting their company liability for anyone who wants to sue them for damages caused by misuse, or any use, of their AI.

    Sure, there was probably already a disclaimer in paragraph 17 on page 58 of the EULA, but people could still take it to court, and argue to a sympathetic judge that a one line broad disclaimer buried in 50 pages of legalese could easily be missed by a "prudent" reader, which is the legal standard in many jurisdictions.

    Microsoft making a public announcement on their web page and Twitter feed that was picked up and broadcast in every tech outlet shoots that down, however.

    It doesn't mean Microsoft is changing direction or anything, it just means they've realized some people were falling for the hype and may try to legally blame Microsoft when it fails.

  27. Anonymous Coward
    Anonymous Coward

    Nuremberg defense

    I'm just waiting for "Nuremberg trials" trying to use the AI Nuremberg defense.

    Perhaps the AI legal bots can get on the case.
