When clever code kills, who pays and who does the time? A Brit expert explains to El Reg

On September 26, 1983, Stanislav Petrov, an officer in the Soviet Union's Air Defense Forces, heard an alarm and saw that the warning system he'd been assigned to monitor showed the US had launched five nuclear missiles. Suspecting an error with the system's sensors, he waited instead of alerting his superiors, who probably …


  1. This post has been deleted by its author

    1. Nick Kew Bronze badge

      Re: Accountability is important.

      Indeed. The article talks of "the programmers". Given that the word programmer commonly applies to some of the most junior folks in the $bigco, and that they may have little freedom to Get It Right, I can see two interpretations that work:

      (1) They mean holding the developer corporately liable.

      (2) They're already anticipating an exercise scapegoating the innocent.

      Why not talk about the interesting questions, like responsibility for software components (libraries, etc), and the distinction between proprietary and open-source?

      1. Primus Secundus Tertius

        Re: Accountability is important.

        @Kew

        Your point about libraries is important. It is easy to blame the coder for faults in top-level logic. But even that can be unjust. I have seen design reviews where top-level faults were nodded through because, for all its faults, the document was there, the milestone nominally met, and management did not want to "delay" the project.

        The only answer is to insist that corporations are liable. However, it will still take a few court cases where companies are punished before the beancounters accept that logic on a large scale must be done properly.

        But who is going to check the libraries used by specific apps, and the operating systems under which those apps run?

      2. Sir Runcible Spoon Silver badge
        Paris Hilton

        Re: Accountability is important.

        How is this AI situation any different from other 'created' devices?

        If a car manufacturer sells you a car that has a serious defect that causes you to crash and die, then the manufacturer is at fault (whether they knew about the fault and still sold the car is for the criminal courts to decide).

        If, on the other hand, you tinkered with the car after you bought it and the fault developed as a result, then the manufacturer is in the clear.

        How are AI-developed machines any different?

    2. Anonymous Coward
      Anonymous Coward

      Re: Accountability is important.

      The vendor should *always* be responsible. Full Stop.

      Any code that they have provided, be it labelled "AI" or whatever, remains their responsibility.

      1. This post has been deleted by its author

        1. Paul Crawford Silver badge

          Re: @ Oliver Jones

          That is an interesting but also seriously flawed argument:

          1) While parents are not held responsible for their children, companies are held responsible for the actions of their employees in the course of work (which is closer to the vendor/software model)

          2) When they are adults (and to some extent before then), children become liable for their own actions and can be punished by the courts. Unless AI has some concept of regret or self-preservation, that option is not available.

          Of course threatening to reprogram its data banks with an axe might just work...

          1. This post has been deleted by its author

        2. Doctor Syntax Silver badge

          Re: Accountability is important.

          "Just as self-aware AI also learns."

          The child learns, becomes an adult and is then liable for punishment at law for its adult errors. How do you propose to fine or imprison an AI entity?

          1. Rich 11 Silver badge

            Re: Accountability is important.

            The child learns, becomes an adult and is then liable for punishment at law for its adult errors. How do you propose to fine or imprison an AI entity?

            Cut off its Internet connection and send it to bed without any electricity.

    3. Christoph

      Re: Accountability is important.

      What of the client who neglects to mention a vital function in the specification? Or the manager who orders the programmer to get on with the main code and not spend a lot of time on that rare possibility? Are they liable?

      1. Doctor Syntax Silver badge

        Re: Accountability is important.

        " Are they liable?"

        Yes.

      2. Archtech Silver badge

        Re: Accountability is important.

        Very good questions indeed.

        Broadly speaking, I suggest that anyone who takes the decision to replace a "manual" (human-operated) process with a fully automated process must be responsible for any adverse consequences.

        But how the responsibility is allocated - that's a very tricky set of questions.

        There should certainly be some kind of precautionary principle: "If in doubt, don't".

    4. Anonymous Coward
      Anonymous Coward

      Re: Accountability is important.

      Do we think business, and in particular insurance, is going to let the chance to make some money pass?

      I also don't think a business is going to release something where they are responsible; why would you do it? It's better to get government to legislate you out of the problem.

      I agree it should be the vendor; they made it and I have no control over it.

    5. Doctor Syntax Silver badge

      Re: Accountability is important.

      "Only when AI has shown itself to be self-aware and competent to at least the level of a human equivalent, should AI be considered responsible."

      Underlying criminal law is the notion of punishment; it's what happens on a conviction for breaching the law. AI should only be considered responsible if the concept of punishing it is meaningful. Until then it's whoever is responsible for deploying it who is responsible. Not programming it, deploying it. The programmer may have been working under constraints that prevented proper testing, have been overridden by management or been given a task beyond their capabilities. The buck has to stop with whoever decided that the system was fit to be deployed. It's their responsibility to exercise due diligence in making that decision and their liability if it fails. Where the product in which it is embedded is a consumer product, that decision lies with the vendor: is the product fit to be marketed to the general public?

      And, given Kingston's sensible criterion, this applies to any S/W product, not just those which have been given an AI marketing gloss.

    6. Destroy All Monsters Silver badge

      Re: Accountability is important.

      If there is a patent, it should DEFINITELY be the patent-holder.

      1. Doctor Syntax Silver badge

        Re: Accountability is important.

        "If there is a patent, it should DEFINITELY be the patent-holder."

        You're assuming a single patent-holder. If there are multiple patents from multiple patent-holders the plaintiffs will die from old age waiting for it to be resolved. The lawyers will do very well from it, however.

        There has to be a single, easily identified entity carrying full responsibility.

  2. Anonymous Coward
    Anonymous Coward

    What about cases where a malicious actor alters the AI? How would you prove it? I also don't understand how you are going to prove negligence in something which can be very ambiguous and difficult to unravel. What if the failure was not caused by the programming but by the initial data set used to teach the system?

    I don't think we will have any answers to this until something does go wrong but the discussions still need to be had.

    1. This post has been deleted by its author

  3. John H Woods Silver badge

    *A* Brit Expert

    With all due respect to Dr Kingston and other experts in the field of artificial intelligence, it seems to me that perhaps it wouldn't hurt to co-author some of these papers with experts in law (e.g. see comments above arguing whether or not the vendor is always responsible) and philosophy.

    I don't know about now, but I always felt, when I was involved in biology, that some of my peers made the same mistake: doing detailed research into the ethics and law surrounding emerging biological science whilst somehow managing to forget that their very own institutions had whole departments devoted to the study of these subjects.

    tl;dr: interdisciplinary research ideally involves collaboration of people from different disciplines.

    1. TRT Silver badge

      Re: *A* Brit Expert

      Dr Kingston of the institution formerly known as Brighton Poly.

      It's a fine place, I'm sure. Must have come on leaps and bounds since I was at a neighbouring university.

    2. thames
      Boffin

      Re: *A* Brit Expert

      The whole premise of the theory is bonkers. A machine is not going to be held "liable" for anything. The police are not going to arrest your car and put it in jail.

      The people who are held accountable for how the software performs will be determined in the same way as the people who are held accountable for how the hardware performs. There are loads of safety-critical software systems in operation today, and there have been for decades. There is plenty of established legal precedent for deciding liability. Putting the letters "AI" into the description isn't going to change that.

      The company that designed and built the system and sold it to the public is 100% responsible for whatever is in their self-driving car (or whatever). They may in turn sue their suppliers to try to recover some of that money, but that's their problem. Individual employees may be held criminally liable, but only if they acted in an obviously negligent manner or tried to cover up problems. The VW diesel scandal is a good analogy in this case, even if it wasn't a safety issue.

      There are genuine legal problems to be solved with respect to self driving cars, but these revolve more around defining globally accepted safety standards as well as consumer protection (e.g. who pays for software updates past the warranty period).

      The people who have an interest in pushing liability off themselves are dodgy California start-ups who push out crap that only half-works, are here today and gone tomorrow, and don't have the capital or cash flow to back up what they sell. They might try to buy insurance coverage, but the insurers may get a serious case of cold feet when they see their development practices. Uber's in-house self-driving ambitions are going to run into a serious roadblock from this perspective.

      1. Anonymous Coward
        Anonymous Coward

        Re: *A* Brit Expert

        This isn't like today when a vendor sells a product that can be found to be as liable on the day it was bought as when it subsequently went rogue.

        The point is that the AI will learn / teach itself. So it will become a completely different entity (product). So different that the original designer / programmer may not understand its logic any more. In human terms, no different from a child becoming an adult criminal, which you obviously wouldn't punish the parents for.

        That is the Pandora's box we are facing.

        1. Anonymous Coward
          Anonymous Coward

          Re: *A* Brit Expert

          And that's before we get into the realms of AIs training other AIs.

          It's all going to end in tears.

        2. Doctor Syntax Silver badge

          Re: *A* Brit Expert

          "This isn't like today when a vendor sells a product that can be found to be as liable on the day it was bought as when it subsequently went rogue."

          If you chose to sell or deploy it, you're responsible. As simple as that. It was up to you as a vendor to decide whether to accept that long-term responsibility. Why should you think you should be able to shuffle that off?

          1. Anonymous Coward
            Anonymous Coward

            Re: *A* Brit Expert

            @Doctor Syntax

            "Why should you think you should be able to shuffle that off?"

            Because no one can predict what the AI will become or do in the future. Particularly regarding decisions that no human can understand, as happened in the latest Go competitions.

            Who is accountable when it decides to do something that a human could equally well have decided was beneficial but turns out to have catastrophic consequences? E.g. eradicating every mosquito and wasp species on the planet, due to the problems they cause for mankind, leading to ecological destruction and devastation of certain food chains.

            1. Kinetic
              Terminator

              Re: *A* Brit Expert

              "Because no one can predict what the AI will become or do in the future. Particularly regarding decisions that no human can understand, as happened in the latest Go competitions."

              Yes, in which case it's doubly important to hold the companies in question to account. Maybe, just maybe, they should have explored the unexpected consequences... and if they decided those were too unpredictable, then pulled the product and, I dunno, not risked killing everyone.

              There is a glut of positive thinking going on in the AI and robotics space. Seen the latest Killbot-lite video with fully autonomous drones that successfully "Hunt" humans through dense woods using computer vision? It's okay, because they are just filming their owners. Weaponisation in 3-2-1.... Ooops

              Don't get me started on Boston Dynamics, those guys saw Terminator 1 and cried at the end when Arnie got killed.

              Some serious accountability needs to get injected to start people reconsidering what their products can be re-purposed as, or how they might fail / go rogue.

              1. Anonymous Coward
                Anonymous Coward

                Re: *A* Brit Expert

                Well if that's going to be the benchmark, then there will be zero incentive to develop safety critical AI applications. Unless the companies just get hit with fines, like the banks and car manufacturers, if that's what you mean by accountability (ie no one is accountable).

                Also, there is no guarantee that the company that created the original AI would still be around when a "descendant" went bad. So who would pay in that instance?

                It is more likely that the real problem is going to be deliberately malicious / rogue AIs created by organised crime groups. That could end up with the state being involved in endless war and no possibility of winning.

                Pandora's box indeed.

        3. amanfromMars 1 Silver badge

          Needed ...... RAT Experts ...... for when WMD are to be both Exclusive and Excluded

          This isn't like today when a vendor sells a product that can be found to be as liable on the day it was bought as when it subsequently went rogue. ... Anonymous Coward

          The Virtual Machines and AI are all ROFLing for there are always useful idiots in offices of business and state administration to carry the can and prove systems liability non-existent/null and void thus rendering effective accountability a fantasy making fun of and destroying the fortunes of an applied virtual reality.

          Still today, just like yesterday, are there fool vendors selling wars on a false premise in order to secure shower room bragging rights in the industries and markets that need them to survive and prosper and prevent a colossal catastrophic economic at home and in dodgy marketplaces and spaces abroad, in foreign and alien lands.

          However, unlike yesterday and today, do the future present and ensure, assure and insure that such idiotic fool vendors have a crushingly greater opposition and crashingly grander competition out there in the MainStreaming Media MMORPG Energy Fields ..... with NEUKlearer HyperRadioProACTive IT Systems Applications for Greater IntelAIgent Games Play just one of the new revisions and additions making IT Totally Different from ever before.

          And that makes all internetworking things both practically and virtually impossible to handle just as easily as was done before. Changed Days with Changed Ways with Changing 0Days to Trade is Default Future Norm and urCurrent AIReality too.

          1. amanfromMars 1 Silver badge

            Re: Needed ...... RAT Experts ...... for when WMD are to be both Exclusive and Excluded

            Food for thought on the myriad phorms of contact made freely available for alien wares and cyber warfare ...... https://www.rt.com/news/419755-fear-robots-not-aliens-kaku/

            And .... there are many Surreally Advanced Weapons of Mass Destruction which have No Known Signature for Accountable Identification of Ownership and they can easily be targeted for terrorising the masses. Quite whether successfully rather than disastrously will obviously depend upon the level of intelligence used to paint the pictures for Mass Multi Media Presentation of a possible, but by no means certain, Personal See/Adopted Adaptive Collective View.

  4. Zog_but_not_the_first
    Boffin

    Definitions

    I'm probably out of touch but from what I read about "AI" much seems to be pattern recognition and decision trees running on really fast hardware (compared with the days of "Eliza"). Can someone point me to an example of AI that displays, well, intelligence?

    1. Anonymous Coward
      Anonymous Coward

      Re: Definitions

      Do I pass your instance of the Turing Test?

      1. Zog_but_not_the_first
        Boffin

        Re: Definitions

        Sadly, I'm met people who fail the Turing test so we may need a new yardstick.

        1. Anonymous Coward
          Anonymous Coward

          Re: Definitions

          Do you mean "I've met"?

          The irony ;)

          1. Zog_but_not_the_first
            Thumb Up

            Re: Definitions

            Good catch.

      2. Fruit and Nutcase Silver badge

        Re: Definitions

        @AC

        Do I pass your instance of the Turing Test?

        Hi Siri,

        No you don't.

        TTFN

    2. Nosher

      Re: Definitions

      Eliza's author, Joseph Weizenbaum (sometimes credited as the father of AI) had strong views on this, suggesting that a programmer who helped fake bombing data in the Vietnam War was "just following orders" in the same way as Adolf Eichmann, architect of the Holocaust. He said "The frequently-used arguments about the neutrality of computers and the inability of programs to exploit or correct social deficiencies are an attempt to absolve programs from responsibility for the harm they cause, just as bullets are not responsible for the people they kill. [But] that does not absolve the technologist who puts such tools at the disposal of a morally deficient society"

  5. Muscleguy Silver badge

    The main reason Petrov distrusted the signal is that it indicated only five missiles were launched. A US first strike would not have just used 5 missiles. Such an attack made no sense.

    An AI set up to do the same job could also have such a scenario built in.

    1. Paul Crawford Silver badge

      True, but then who is responsible for setting up the AI?

      Really it comes back to the first commentard's point - always hold the vendor responsible, otherwise they have no incentive to get it right and fix bugs as they are discovered.

      For example, why should my autonomous car insurance premium depend on the performance of the vendor's AI in crash avoidance? Flaws and problems and financial consequences should stop at the car company in this case.

      1. gnasher729 Silver badge

        "For example, why should my autonomous car insurance premium depend on the performance of the vendor's AI in crash avoidance?"

        Some cars are better, some cars are less good. You pay for what you get. You can surely call your insurance company before you buy a car, you may be told that car A is £500 a year cheaper to insure than car B, because it manages to extricate itself from dangerous situations very well. The manufacturer of car A may charge you more for the car than the manufacturer of car B does. Your decision which one to buy.

        You pay different premiums already depending on the average repair cost of your car.

        1. Doctor Syntax Silver badge

          "Some cars are better, some cars are less good."

          Essentially I insure myself to drive. If I'm an 18 year old I have to pay more. If I accumulate a lot of bad driving history I pay more. I can't actually do anything about the first of those except grow older but I can about the second. If I buy a self-driving car then I have no input at all into the quality of its driving ability nor any way to assess it*. If the vendor sells me the vehicle as being fit for use then they should have satisfied themselves that it was and accept liability if it wasn't; that liability can and should then be covered by their public liability insurance.

          *At least not as a consumer. A manufacturer buying the self-driving S/W as a component might have better opportunities to test it.

          1. Sir Runcible Spoon Silver badge

            "Essentially I insure myself to drive."

            Easy one to sort out. If the car manufacturer doesn't insure the car on your behalf, for life, then it isn't safe enough*.

            *Enough doesn't mean absolutely safe, just a lot better than meat-sacks.

      2. Destroy All Monsters Silver badge

        Remember, Remember...

        The old discussions about the unfeasibility of SDI ("Computer System Reliability and Nuclear War"), which journalists may not have heard about.

        And this didn't even involve AI, just button-pushing.


    2. Doctor Syntax Silver badge

      "An AI set up to do the same job could also have such a scenario built in."

      To have been included in version 2.0.

    3. Nosher

      "An AI set up to do the same job could also have such a scenario built in."

      Which nicely sums up where AI is at the moment - there's still no "intelligence" that can realistically consider situations like this, in the way a human can, outside of its programming. What if someone had even thought about this in advance and added a rule like "do not launch counter-attack if missiles <= 5"? What then if 6 "missiles" had been detected? Until such time as AIs can really play a hundred games of tic-tac-toe and come to the conclusion that "the only winning move is not to play", it's just not safe enough to work in this sort of application.
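      That brittleness is easy to demonstrate. Here is a minimal sketch (hypothetical names and threshold, nothing to do with any real early-warning system) of such a hard-coded rule, showing how one extra spurious detection flips the outcome:

          # Hypothetical sketch of a hard-coded "sanity check" rule; names and
          # values are invented purely to illustrate how brittle a fixed threshold is.
          FIRST_STRIKE_THRESHOLD = 5  # "this few detections is probably a sensor fault"

          def recommend_response(detected_missiles: int) -> str:
              """Recommend a response based only on the number of detections."""
              if detected_missiles <= FIRST_STRIKE_THRESHOLD:
                  # Petrov's reasoning frozen into a rule: a genuine first strike
                  # would involve far more than a handful of missiles.
                  return "treat as probable sensor error, do not launch"
              return "launch counter-attack"

          if __name__ == "__main__":
              for n in (1, 5, 6, 500):
                  print(n, "->", recommend_response(n))
              # Six spurious detections, one more than the threshold, flip the
              # recommendation: the rule has no judgement beyond the number itself.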

  6. sad_loser

    There are already some standards out there

    e.g. ISO13485, covering medical devices, which does specify code audits, input and output limits, etc.

    1. Anonymous Coward
      Anonymous Coward

      Re: There are already some standards out there

      "ISO13485 covering medical devices, that does specify code audit, input and output limits etc."

      How well does that actually work in practice?

      In a different safety-critical field, my observations in the last decade or so suggest that what gets deployed to production can often bear very little relationship (in hardware or software terms) to what gets audited, tested, certified. Readers will be able to work out why.

    2. Destroy All Monsters Silver badge
      Holmes

      Re: There are already some standards out there

      And luckily, too:

      Therac-25

      It was involved in at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation. Because of concurrent programming errors, it sometimes gave its patients radiation doses that were hundreds of times greater than normal, resulting in death or serious injury. These accidents highlighted the dangers of software control of safety-critical systems, and they have become a standard case study in health informatics and software engineering [which is weird, I always encounter 'software engineers' that haven't heard about it]. Additionally the overconfidence of the engineers and lack of proper due diligence to resolve reported software bugs, is highlighted as an extreme case where the engineer's overconfidence in their initial work and failure to believe the end users' claims caused drastic repercussions.

      I don't think anyone was ever successfully held to account for this clusterfuck on the level of reconverted web programmers. The company seems to have successfully weaseled out by denying and stalling.
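      For anyone who hasn't met the case: the core failure was, in essence, a race on shared state - a quick operator edit left the beam configured for one mode while the hardware was positioned for another. A toy sketch of that class of bug (invented names and structure, nothing like the actual PDP-11 code) looks like this:

          # Deliberately simplified sketch of a stale-state race of the kind described
          # in the Therac-25 reports; illustrative only, not the real machine software.
          import threading
          import time

          class BeamController:
              def __init__(self):
                  # Initial prescription: X-ray mode (high beam current, target in the beam path).
                  self.beam_current = "high"
                  self.target_in_place = True

              def operator_edits_to_electron_mode(self):
                  # The turntable starts moving the target out for electron mode...
                  threading.Thread(target=self._move_target_out).start()
                  # ...but the step that would lower the beam current is skipped,
                  # because the edit arrived after setup was already marked "done".

              def _move_target_out(self):
                  time.sleep(0.2)  # hardware movement takes time
                  self.target_in_place = False

              def fire(self):
                  time.sleep(0.5)  # treatment starts once the turntable settles
                  if self.beam_current == "high" and not self.target_in_place:
                      print("OVERDOSE: high-current beam delivered with no target in the way")
                  else:
                      print("dose delivered as prescribed")

          if __name__ == "__main__":
              c = BeamController()
              c.operator_edits_to_electron_mode()
              c.fire()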


  7. Paul Herber Silver badge

    it is unclear whether there would have been any lawyers left to prosecute the case

    "it is unclear whether there would have been any lawyers left to prosecute the case"

    A town that cannot support one lawyer can always support two.

    1. Mark 85 Silver badge

      Re: it is unclear whether there would have been any lawyers left to prosecute the case

      This presumes that there is a town left.

