Harmed by a decision made by a poorly trained AI? You should be able to sue for damages, says law prof

Companies deploying machine-learning systems trained on subpar datasets should be legally obligated under tort law to pay damages to victims who have been harmed by the technology, a law professor has opined. In a virtual lecture organised by the University of California, Irvine, Professor Frank Pasquale, of the Brooklyn Law …

  1. Eclectic Man Silver badge

    Medical training, and Facial Recognition

    I cannot help feeling that holding those using AI to account for the decisions is a good idea, but getting them to reveal their training data is fraught with problems, particularly if that data is personal information of other people, such as medical information, or the AI is a military system.

    There was an internet petition in the UK recently to get medical textbooks to include examples of people of varying skin colour suffering from specific diseases, particularly skin diseases, so that medical students would recognise them. Facial recognition systems were generally trained using white faces (see several other El Reg articles). And there is now a debate on autonomous AI weapons systems 'deciding' for themselves whether to kill a target without human intervention or approval.

  2. ThatOne Silver badge
    WTF?

    Theory and Reality

    That's all very nice, but you'd need to prove AI was somehow involved, and that its decision resulted from "poor training" and not from a conscious choice of the company (now that's a subtle difference!). The biggest problem is that "poor training" is a very vague and subjective notion, which most of the time will be almost impossible to legally prove.

    In short, "snowflake's chance in hell" comes to mind...

    1. Snake Silver badge

      Re: Theory and Reality

      That all may be true, but then it is the developer's responsibility to prove that the technology works as intended before that technology is applied as a solution. I don't see why "AI" gets a special dispensation regarding this in comparison to other creations; from medical devices to the latest gadget, things get tested before introduction lest the consumer sue for anything from false advertising to criminal liability.

      The consumer should never be the alpha or even the beta tester. The world tried that before, thousands upon thousands of times - remember Thalidomide??

      1. ThatOne Silver badge

        Re: Theory and Reality

        > I don't see why "AI" gets a special dispensation regarding this

        Because it's a black box: It's not like it gets a dispensation, it's just so much more difficult to find and prove there is a problem, unless it's utterly obvious. A slight bias for instance will mostly go unnoticed, especially if it matches the bias of the people using/testing that AI. Is that AI "poorly trained", or working as expected? It will depend entirely on your personal opinion on that issue, and IMHO it will be hard to legally prove in court there is a training problem. All you can prove is that the decision doesn't suit you.

        1. Ben Tasker

          Re: Theory and Reality

          > not like it gets a dispensation, it's just so much more difficult to find and prove there is a problem

          True

          > Is that AI "poorly trained", or working as expected?

          Well, that's where the law tends to be ahead of the technology, because that's not really the question that will be asked once you end up in court.

          Was the outcome equitable?

          I.e. if your AI has started flagging 1 in 20 black people for intimate searches while letting all the white people through, the outcome wasn't equitable.
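
          As a minimal sketch (made-up numbers, plain Python; the "black"/"white" groups and counts below are just placeholders for whatever the real logs show), that kind of equity check can be run on the system's outputs alone:

          ```python
          # Selection-rate comparison on hypothetical search-flagging logs.
          # All numbers are invented to mirror the 1-in-20 example above.
          from collections import Counter

          records = (
              [("black", True)] * 5 + [("black", False)] * 95 +  # 1 in 20 flagged
              [("white", False)] * 100                           # nobody flagged
          )

          flagged, total = Counter(), Counter()
          for group, was_flagged in records:
              total[group] += 1
              flagged[group] += was_flagged

          rates = {g: flagged[g] / total[g] for g in total}
          print(rates)                       # {'black': 0.05, 'white': 0.0}

          # Disparate-impact ratio: lowest selection rate over highest.
          # Well below ~0.8 is the usual "four-fifths rule" red flag
          # borrowed from US employment practice.
          lo, hi = min(rates.values()), max(rates.values())
          print("disparate impact ratio:", lo / hi if hi else 1.0)
          ```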

          So as the AI developer, the court would probably (hopefully) find against you.

          Even for more nuanced cases, the question is the same.

          If your design/training decisions have led to an unreasonable outcome and harmed someone else (harm being financial as well as physical) then the equitable outcome is that you carry some liability for that.

          There's no need to show that it specifically was a training problem, only that your system's output led to harm when it should not have. Or, as you put it "All you can prove is that the decision doesn't suit you.", much as a court's decision likely wouldn't suit you very much :)

          EDIT:

          Just to hammer the point home: the fact that AI is a black box, and you can't really discern why a neural net made the decision it did should not be a problem for society at large. It's very much an issue for the developer to deal with, because it is they that should carry liability when their creation starts causing harm.

          That's the basic principle that's followed with almost everything else we produce, so it seems unlikely a court will ultimately consider AI too differently. The alternative is that the purchaser (i.e. the company using it) holds the buck for liability rather than being able to sue the developer. At which point, you've got to ask how much of a customer base you're actually going to have after the first lawsuit.

          1. Doctor Syntax Silver badge

            Re: Theory and Reality

            "It's very much an issue for the developer to deal with, because it is they that should carry liability when their creation starts causing harm."

            I mostly agree but it shouldn't really be on the developer but on whoever's responsible for deploying it. It's up to them to determine whether it's fit for purpose. The developer might not even be aware of the purpose to which it was put, nor would they necessarily endorse it for that purpose if they were.

            1. Alan Brown Silver badge

              Re: Theory and Reality

              "The developer might not even be aware of the purpose to which it was put, nor would they necessarily endorse it for that purpose if they were."

              A developer actually saying that in court would be a huge blow against the marketers trying to push things

              "I had no idea my work was being used for this, nor do I feel it is fit for that purpose" is a rather damning indictement not of the developer but of those who misuse the work

          2. Chris G

            Re: Theory and Reality

            " The alternative is that the purchaser (i.e. the company using it) holds the buck for liability rather than being able to sue the developer."

            I agree with your entire post. As far as the above sentence is concerned, the purchaser using any particular AI for a given purpose should be at fault if an outcome is undesirable.

            If you buy any product and apply it to a particular purpose and it turns out to not be fit for that purpose you are responsible in the first instance for that result. With regard to customers, due diligence and duty of care are your responsibilities.

            1. very angry man

              Re: Theory and Reality

              The answer is not so clear to all.

              e.g. You buy a gun. Legal.

              You shoot your neighbour, noisy bastard with the loud TV on all night! Justified?

              Who is liable?

              the person who conceived the idea of a gun?

              the person who developed that idea?

              the person who designed that gun?

              the person / company that made that gun?

              the person who sold that gun?

              the person that used that gun?

              AI is far more dangerous than a gun; that's why I used a more acceptable example.

              Maybe a better example would be a bio/chem/nuke weapon.

              Oh SHIT lots of nut cases with those.

              1. Ben Tasker

                Re: Theory and Reality

                No, I agree with Chris, it's perfectly clear.

                Using your example - the person who's liable is the person who used the gun.

                If you're the neighbour who was shot, you sue the person who used the gun for shooting you (just varying slightly: if they didn't own the gun, you might also sue the owner if they were negligent - left it on the table for their kid to grab)

                Now, let's change the scenario.

                You leave your gun on the table, no one touches it, but it discharges and shoots your neighbour through the wall.

                - The neighbour sues you, your gun injured them

                - You might then sue the person who sold you a faulty firearm to recoup your costs

                - They might then sue the supplier for shipping a dangerous batch

                - Stretching to extremes, the supplier might find their designer was negligent and introduced a safety flaw as a result, and so sue them (more likely, fire them and settle).

                The flow for most damages is the same, at least where stuff has been sold legally (if the gun shop sold that gun to your neighbour despite them being a felon, you might also sue the gun shop).

                In the case of your bio/chem/nuke example, assuming it has been legally sold (which in turn, assumes it *can* be legally sold) then the flow would be exactly the same.

                In the case of AI, you'll sue the person who deployed it (they harmed you). My point in my original post (which on a re-read wasn't entirely clear) is that the person who deployed it should retain the right to then sue their supplier.

                i.e. the flow should still be

                * you sue the local council (say)

                * the local council sues Palantir (or whoever)

                What absolutely shouldn't happen is

                * You complain to the council

                * They say "sod off, talk to Palantir"

                * You complain to Palantir

                * They say "sod off, talk to the council"

                Or indeed, that you as an individual are expected to take on the well resourced supplier as a result of someone else's decision to deploy.

                1. Alan Brown Silver badge

                  Re: Theory and Reality

                  What absolutely shouldn't happen is

                  * You complain to the council

                  * They say "sod off, talk to Palantir"

                  * You complain to Palantir

                  * They say "sod off, talk to the council"

                  Funnily enough, this is EXACTLY what is currently happening between Surrey County Council/Surrey Police/district councils in the UK, resulting in a bunch of flagrant issues going unenforced (including major environmental crimes, until they get too big to ignore thanks to national newspaper coverage)

                  Then again, what do you expect from the country which made British Leyland such a world-beating technological and customer service success story?

              2. PeteA
                IT Angle

                Re: Theory and Reality

                Clearly rhetorical question answered: the person who used the gun. Exactly the same as any other murder weapon.

                Framing the AI question similarly: the person who chose to entrust [a particular] AI with a life-critical decision. The AI is just a tool; speaking purely personally, I'd like to see some _real_ intelligence before we move on to the artificial kind.

                Back in the real world, it's usually not so simple - in my experience, you often see organisations forming a "cultural identity" which is essentially a shared-belief-system. This can help promote workforce unity, but can have ugly consequences when a "freak wave event" of shared identity ("tribalism"), social pressure and external pressures are combined with dubiously ethical directives from "The Boss". Haven't personally experienced the phenomenon to this level of importance, but have seen the general sequence of events a couple of times.

                1. Alan Brown Silver badge

                  Re: Theory and Reality

                  " in my experience, you often see organisations forming a "cultural identity" which is essentially a shared-belief-system."

                  Yup, and the problem here is not "One bad apple, oh dear how sad"

                  The REAL problem is that that "one bad apple" will rot the ENTIRE FUCKING BARREL if not nuked on sight, all infected apples removed immediately and the rest then checked for contamination

              3. Alan Brown Silver badge

                Re: Theory and Reality

                The company which made the loud TV?

        2. Anonymous Coward

          @ThatOne - Re: Theory and Reality

          AI will get dispensation because it is sold as flawless, and this offers a golden opportunity to deflect responsibility. If someone makes a wrong decision he can (and will) be held accountable, but if he can prove he followed the recommendations of an AI advisor he is off the hook.

          The same thing happens with police using Tasers. That thing was sold as posing no lethal danger to a human being, so in those cases where someone died, if the police officer can show he used it correctly then it's nobody's fault.

          1. Alan Brown Silver badge

            Re: @ThatOne - Theory and Reality

            "AI will get dispensation because it is sold as flawless and this offers a golden opportunity to deflect responsability"

            THIS IS NOT THEORY

            It is already happening. The whole "Computer says...." mentality is based on it and that's been going on for 40+ years

      2. Alumoi Silver badge
        Trollface

        Re: Theory and Reality

        Remember Windows 10?

        1. This post has been deleted by its author

      3. very angry man

        Re: Theory and Reality

        and windows!

    2. Anonymous Coward

      Re: Theory and Reality

      FWIW, a chef's kitchen can be held liable if just one utensil has a speck of any of numerous substances on it, irrespective of training; thus one tool with any amount of bugs creates liability. For the chef, there are no "ifs", "ands" or "buts" when health is of concern.

      Note to self: create an "AI kitchen cleaning" program to help restaurants avoid liability (McDonald's will love this).

      1. ThatOne Silver badge

        Re: Theory and Reality

        > 1 tool with any amount of bugs creates liability

        Yes, but you can prove there is dirt on that tool. You cannot prove an AI has been "poorly trained", unless of course it's really blatant.

        Unlike your kitchen, you can't assess the inner workings of an AI; once trained, it's a black box. You put something in, something comes out, but you don't really know why.

        1. Neil Barnes Silver badge

          Re: Theory and Reality

          Which renders its use in *any* circumstance dubious at best, surely? Unless each and every decision it makes is supervised by a responsible adult...

          One sees so many examples (claims) of 'AI performs better than humans' but in what is essentially a statistical issue, what matters are the effects of both false positives and false negatives. What happens, for example, when two AI systems trained on different data give differing results? And of course, there are huge differences in the importance of the decision: e.g. medical tests, or self-driving systems, job screening, or immigration decisions might have more of a knock-on effect than deciding if the washing is finished, or food is cooked.

          Computer says no?
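
          To make the false-positive/false-negative point concrete, here's a minimal sketch (entirely hypothetical records of group, prediction and ground truth) of the per-group error rates that a single headline accuracy figure can hide:

          ```python
          # Per-group false-positive / false-negative rates from hypothetical
          # (group, predicted_positive, actually_positive) records. Overall
          # accuracy can look fine while one group bears most of the false
          # positives.

          def error_rates(results):
              stats = {}
              for group, predicted, actual in results:
                  s = stats.setdefault(group, {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
                  if actual:
                      s["pos"] += 1
                      s["fn"] += not predicted
                  else:
                      s["neg"] += 1
                      s["fp"] += predicted
              return {g: {"FPR": s["fp"] / s["neg"] if s["neg"] else 0.0,
                          "FNR": s["fn"] / s["pos"] if s["pos"] else 0.0}
                      for g, s in stats.items()}

          # Invented example: 10% of group A wrongly flagged vs 1% of group B.
          sample = ([("A", True, False)] * 10 + [("A", False, False)] * 90 +
                    [("B", True, False)] * 1  + [("B", False, False)] * 99)
          print(error_rates(sample))
          ```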

        2. Anonymous Coward

          Re: Theory and Reality

          I feel too fish-bowled in the AI context at this point, but isn't the overall point that if someone makes an error, they're at fault regardless of the tools chosen? I don't care if they used Future-Super-Happy-Grumpy-Gnome-Piston-Puncher's; the fault is on them.

    3. spold Silver badge

      Re: Theory and Reality

      The context of the article was the dear old USA... but on the other side of the pond there are some useful levers to pull....

      The GDPR has provisions both on profiling and automated decision making (without any human involvement).

      You can only carry out that type of stuff in very limited circumstances. You have to identify whether the processing of your data falls into that category; if so, then you have to give the data subject information about that processing, as well as provide the ability for them to request human intervention and challenge any decisions made. It's mostly in Article 22 if you want some bedtime reading. Also, there is a more extensive article on this from the UK ICO.
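
      As a rough sketch of what honouring those Article 22 provisions might look like in code (the class, field names and routing logic below are invented for illustration; nothing here is prescribed by the regulation):

      ```python
      # Hypothetical decision pipeline that keeps a human in the loop for
      # solely automated decisions with legal or similarly significant effects,
      # roughly in the spirit of GDPR Article 22. All names are invented.
      from dataclasses import dataclass

      @dataclass
      class Decision:
          subject_id: str
          outcome: str
          solely_automated: bool
          significant_effect: bool      # e.g. a credit refusal or benefit denial
          explanation: str = ""

      def finalise(decision: Decision, review_queue: list) -> Decision:
          # Solely automated + significant effect => the subject must be told
          # and offered a route to human intervention before it sticks.
          if decision.solely_automated and decision.significant_effect:
              review_queue.append(decision)           # route to a human reviewer
              decision.outcome = "pending human review"
          return decision

      queue = []
      d = finalise(Decision("subject-42", "refuse credit", True, True,
                            "score below threshold"), queue)
      print(d.outcome, "| queued for human review:", len(queue))
      ```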

      However, this also relies on people actually bothering to read the information - the usual lie, "I have read and understood.... <Click>".

      1. Alan Brown Silver badge

        Re: Theory and Reality

        "The GDPR has provisions both on profiling "

        The problem for the most part isn't "INDIVIDUAL" profiling.

        AIs are trained on populations, and once one learns that most people named "Fred" are thieves (based on a sample of 2 out of 3), it will proceed on that basis forevermore.

        It's not "artificial intelligence", it's artificial STUPIDITY, and that's far more dangerous.

        Anyone who's had to sit down and walk users through documentation will know that they will come up with the most insane interpretations of what's written down, interpretations you could never have thought of - and believe they're being entirely reasonable in doing so (and that's before you get into the basically naive "literal" interpretations that children might come up with).

    4. Alan Brown Silver badge

      Re: Theory and Reality

      "The biggest problem is that "poor training" is a very vague and subjective notion"

      "insufficiently diverse data set" and "data selection bias" come to mind

      It's not just healthcare. Bear in mind that AIs were disproportionately selecting/punishing black lawbreakers because history showed greater conviction rates and stiffer punishments - but looking under the covers showed that selective enforcement (white offenders more likely to simply be let off with a warning rather than arrested) and selective punishments (judges more likely to give white offenders probation rather than prison) had a lot to do with the stats.

      In other words, AIs were formalising institutionalised racial biases, and then people were using "Computer says..." as a reason for blindly going along with it.
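
      A small synthetic illustration of that feedback loop (all numbers invented): even if two groups offend at exactly the same rate, a model that learns its "risk scores" from conviction records shaped by selective enforcement will score one group as far riskier.

      ```python
      # Synthetic label-bias demo: both groups offend at the same rate, but one
      # group's offences are recorded (arrest/conviction) far more often. A
      # "model" that just learns P(label | group) inherits the enforcement bias.
      import random

      random.seed(0)
      TRUE_OFFENCE_RATE = 0.10                          # identical for both groups
      RECORD_RATE = {"group_a": 0.9, "group_b": 0.3}    # selective enforcement

      data = []
      for group, record_rate in RECORD_RATE.items():
          for _ in range(10_000):
              offended = random.random() < TRUE_OFFENCE_RATE
              recorded = offended and random.random() < record_rate
              data.append((group, recorded))

      # The "learned risk score" per group is just the observed conviction rate.
      for group in RECORD_RATE:
          labels = [rec for g, rec in data if g == group]
          print(group, "learned risk:", sum(labels) / len(labels))
      # Prints roughly 0.09 vs 0.03 despite identical underlying behaviour.
      ```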

  3. Sandgrounder
    Alert

    Running short on software patent infringements?

    This smells like a lawyer looking to open up a new front for huge paydays against any company that uses a system that could be, in the eyes of a lawyer, labelled AI.

    Been wronged by AI? Call us now to start your claim.

    Whilst the intention may be honourable, I'm sure the genius who came up with software patents was worried about protecting the little guy too. How well that works.

    There are so many issues here. For example, demographics change. Populations age. The % of each population group fluctuates.

    Does a company have to check the census every year to figure out how many Somalians are in a local population in case it has increased or decreased?

    How does a company validate the dataset for suppliers?

    Who decides whether the dataset is relevant to the task in hand?

    How much will it cost to retrain your AIs for every product, for every update? It will make server patching look like a walk in the park.

    How long is your dataset valid for?

    We will end up with an army scouring every possible product containing AI, looking for the slightest chance to claim a trivial infraction against anyone ticking a box on today's list of society's protected characteristics.

    Insufficient Latter-day Saints in your traffic modelling for your 10-year-old route planning app? Where there's blame...

    1. Doctor Syntax Silver badge

      Re: Running short on software patent infringements?

      I doubt a sample needs to be statistically representative, but it would need to cover the range of what the system might encounter.

  4. Doctor Syntax Silver badge

    Does an ML system get trained once and then continue operating on the basis of that one training set? If you think of human medics, as an example, they will be trained once, albeit over a period of years, but once in practice they will continue learning by experience.

    1. Alan Brown Silver badge

      "Does an ML system get trained once and then continue operating on the basis of that one training set?"

      Yes. Any "lessons learned" have to then be fed back into the training sets. They don't learn/improve "on the fly"

  5. ecofeco Silver badge

    He's right and MIT says so

    MIT has covered this in numerous reports and proven that not just AI but the algorithms currently in use do indeed discriminate on race, gender and financial situation.

    See MIT Technologyreview.com

    You already live in a Western version of Chinese "social credit".

    1. Alan Brown Silver badge

      Re: He's right and MIT says so

      It's because of the MIT reports that this is being talked about by lawyers

      The average middle-aged lawyer thinks that computers and the people who program them are "godlike". They really don't get that computers are "literal", or understand "garbage in, garbage out".

      I'm quite serious. They're frequently the most common proponents of "Computer says...", and the idea of statistics showing otherwise - or of humans twisting things by selectively tweaking the statistical inputs - is alien to them.
