Driverless car first: Chinese biz recalls faulty AI

Chinese driverless car company Pony.ai has scored an unwanted first: it's recalled some versions of its AI software from US roads after admitting to faults. Recalls of unsafe products are common in the auto industry but this is thought to be the first ever such action performed on self-driving software. Pony.ai was founded in …

  1. ShadowSystems
    Joke

    I'm too damned cynical...

    My first thought was that they were recalling it because the AI chose to mow down inanimate objects rather than squishy meat bags. Objects don't help the final tally, but making the meat go asplodey helps achieve those all-important high scores.

    "Naughty AI! Wrong target! No extra life for you!"

    *Cough*

    1. b0llchit Silver badge
      Joke

      Re: I'm too damned cynical...

      Nah, the recall was totally because hitting inanimate objects gives too low a score. Squishy meat bags, including the older, slower ones, are much better for the score card.

      So, yes, naughty AI hit the wrong target. This will be rectified soon.

  2. Pascal Monett Silver badge
    Thumb Up

    It would be easy to mock them

    However, given that we don't have an open-source Vehicle AI, it means that every company currently developing a vehicle AI (and there are quite a few) is doing so on its own, in the dark, and not sharing information (because valuable IP).

    That necessarily means that they are all on the test-until-it-works bandwagon, and some faults are not easy to detect immediately.

    Which means that recalls are inevitable.

    So good on Pony.ai for doing the right thing, whatever the PR cost.

    In my book, they're more serious than Tesla.

    1. Andy 73 Silver badge

      Re: It would be easy to mock them

      Open source has nothing to do with whether software products (and particularly 'black box' neural nets) have been tested sufficiently.

      Remember log4j is open source, and that's relatively simple, deterministic code.

  3. Geoffrey W

    RE: "That necessarily means that they are all on the test-until-it-works bandwagon, and some faults are not easy to detect immediately."

    And how does being open source exempt you from that situation? You open source guys do test stuff, don't you? And then fix stuff...until it works?

    And then someone decides it needs SystemD in it somewhere resulting in multiple forks. And...well, not convinced it would be any better really. We'll all be waiting indefinitely for Linux on the dashboard rather than Linux on the desktop.

    1. dafe

      "Enough eyeball make all bugs shallow."

      The problems are the same, but there is a lot more testing, more code reviews, and more proposed patches than any company by itself could afford to do.

      Sharing code helps. There are reasons why X was released for free. And XFree86 worked a lot better than the previous commercial release that Red Hat had licensed and continued to use and patch for years.

      But that goes against the medieval thinking of keeping the advantage of knowing something your competitors don't.

      1. Andy 73 Silver badge

        This is a fallacy.

        The average open source project has one point something developers working on it.

        Even the high-profile ones typically have a handful of usefully contributing devs, and a lot of people hanging on (and randomly forking) with no real understanding of the core code.

        In the most extreme cases, testing is done not by rigorous test harnesses, curated data and planned coverage, but by people encountering bugs *in the wild* and going in to fix those bugs for their particular use case. You kinda don't want to do that with driverless cars.

        None of the sharing of code is a guarantee (or even a vague promise) that code is tested before it can do something catastrophic.

        Yeah, yeah - code sharing helps - so does having someone come to my house and feed me biscuits.

  4. Anonymous Coward
    Anonymous Coward

    You're slipping

    No "AI a bit pony" subhead? <shakes head sadly>

    1. b0llchit Silver badge
      Trollface

      Re: You're slipping

      Pony identified as donkey. Recall required to cover up for stubbornness.

  5. Fonant

    The real problems are the ethical and legal ones

    Where on the risk-speed curve do societies want autonomous vehicles to be? How many deaths caused by AVs is acceptable? How fast should we allow them to drive?

    Who is responsible when things go wrong? Things will go wrong, and humans will die as a result, if we expect AVs to behave like normal heavy road vehicles.

    1. dafe

      Re: The real problems are the ethical and legal ones

      Human drivers cause a huge number of accidents, but that is considered normal and acceptable.

      Robot cars can be much safer than human drivers and still not be accepted. Robots are expected to honour the First Law, while humans are not.

      So by insisting that robots are 100% safe, we are continuing to keep the number of car accidents high.

      1. Paul Crawford Silver badge

        Re: The real problems are the ethical and legal ones

        The problem is how such "robots" deal with odd situations. After all, and in spite of the hype, AI is not intelligent: it has no internal/conscious model of the world, and no understanding of how to move a car while avoiding things. It is a neural net that gets loads of training data thrown at it, with the hope that all cases end up being covered.

        So it fails, often in cases that are obvious to any human. How safe can it be trusted to be in the world at large, i.e. beyond the specific training grounds used? Are the companies behind it going to prioritise safety, or profits? As we all know the answer to that one, how do we (as a society) make sure that they are punished financially, and with jail time as needed, for failing to maintain the highest standards?

        Yes, humans are not reliable, but the goal for an automated car is not "better than the human average", as that includes many bozos; it should be better than a good driver who is fully alert. I.e. it has to be well in the upper quartile of accident statistics.

        1. Anonymous Coward
          Anonymous Coward

          "it should be better than a good driver who is fully alert"

          Nope, as oft as it gets repeated, it's not remotely true.

          Killing or injuring someone through human incompetence is still killing or injuring them. I agree that the people who choose to let the car handle some or all driving functions are, and should be, accountable at all times for whatever happens. Full stop. Also, at this point they need to be able to supervise the vehicle systems at all times.

          However, arguing in favor of preventable deaths by holding driver assists to a different, undeliverable standard is a weak and terrible position.

          Argue that the "bozos" need better training and I'd agree with you. That said, let the safety systems save the lives they can, and let driver assists help drivers with physical limitations, ones that have nothing to do with their ability to remain alert and responsible behind the wheel, live a little more easily.

        2. Alan Brown Silver badge

          Re: The real problems are the ethical and legal ones

          "AI is not intelligent, it has no internal/conscious model of the world and understanding of how to move a car and avoiding things"

          All AI needs to do is understand it's out of its depth, STOP, and call for help(*). It doesn't need to be entirely self-contained in today's always-on, always-connected world.

          (*)Which is something far too many humans refuse to do, then end up fucking up spectacularly. What's particularly amazing is the number of European drivers who seem incapable of observing a low-speed hazardous situation (such as an oncoming HGV on a narrow road they must give way to) or reversing (to get out of the way of said HGV) - you have to wonder how they obtained their licence in the first place...
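
          In rough Python terms, "knowing it's out of its depth" is just a confidence-floor fallback. A minimal sketch; every name here (PerceptionResult, drive_step, the 0.85 threshold) is invented for illustration, not taken from any vendor's actual stack:

          # Hypothetical sketch: stop and phone home when the perception
          # stack's confidence drops below a floor. Illustration only.
          from dataclasses import dataclass

          CONFIDENCE_FLOOR = 0.85  # arbitrary example threshold

          @dataclass
          class PerceptionResult:
              scene_confidence: float  # 0.0..1.0: how sure the net is about the scene
              hazard_detected: bool

          def drive_step(perception: PerceptionResult) -> str:
              """Return the action for this control cycle."""
              if perception.scene_confidence < CONFIDENCE_FLOOR:
                  # Out of its depth: perform a minimal-risk manoeuvre, then
                  # escalate to a remote human operator over the network.
                  return "PULL_OVER_AND_REQUEST_HELP"
              if perception.hazard_detected:
                  return "BRAKE"
              return "CONTINUE"

          print(drive_step(PerceptionResult(scene_confidence=0.4, hazard_detected=False)))
          # -> PULL_OVER_AND_REQUEST_HELP

          The hard part, of course, is the confidence number itself: a net that is confidently wrong never trips the fallback, which is rather the point made above.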

      2. MOV r0,r0

        Re: The real problems are the ethical and legal ones

        "Robot [anything] can be much safer" often because humans are barred from the space around robots. This is "elimination" in the Hierarchy of Controls, except that the hazard is moved to a space away from people.

        The problem with robots in public spaces is removing the public from the space as we tend to get upset about things like that but I expect large parts of a country's public road network will have to be made driverless, riderless and pedestrian-less before passengers (in their safety cages) can enjoy automated road transport at scale.

        1. Alan Brown Silver badge

          Re: The real problems are the ethical and legal ones

          > "Robot [anything] can be much safer" often because humans are barred from the space around robots

          This is done for the simple reason that most robots have no way of sensing humans (or anything else) in their workspace. Times are changing and sensors are becoming normal. Bans on being in the way of robots in factories these days are increasingly because it interferes with production speeds (the robot will pause), not because it's likely to get you killed.

    2. Alan Brown Silver badge

      Re: The real problems are the ethical and legal ones

      "Where on the risk-speed curve do societies want autonomous vehicles to be?"

      Better on average than humans

      Unfortunately, this is NOT a high bar to achieve - and once it's achieved, insurance actuaries will drive rapid adoption.

  6. Mike 137 Silver badge

    Standards?

    "Recalls of unsafe products are common in the auto industry but this is thought to be the first ever such action performed on self-driving software"

    The mechanical components of cars have for years been designed according to increasingly rigorous safety standards. It's therefore extremely rare that mechanical design faults lead to safety issues. When it comes to software, however, it appears standards don't really matter. The software you rely on not to kill you is quite possibly designed and created with the same attention as the company web site.

    However, strictly speaking, this doesn't really matter, as the purpose is not actually keeping road users alive, it's maximising revenue.

    1. dafe

      Re: Standards?

      There are industry safety standards for self-driving vehicles, but they assume a workspace environment, not the open road.

  7. Anonymous Coward
    Anonymous Coward

    How long before driverless cars are weaponised?

    AI, set car to identify opposition, then hit them.

    Keep hitting till no sign of life in target.

  8. Fruit and Nutcase Silver badge
    Joke

    Off to the knackers yard

    The recall means three of Pony.ai's ten US-based vehicles will not return to the nation's roads…

  9. Anonymous Coward
    Holmes

    Crime and Punishment

    The public's acceptance of human caused deaths in motor accidents rests on the idea that the driver responsible can be personally punished.

    The problem with self driving cars and with cars in general is that corporations can't be punished in that way. If Tesla software fails to brake and kills someone, Elon Musk isn't going to jail; and if a Toyota has faulty brakes and someone is killed, Akio Toyoda isn't going to jail.

    We have come to accept this in other areas of our lives. The 737-MAX crashes didn't result in any criminal prosecutions. But a car is still too personal to accept that it can kill someone without some other person to blame.

    I am a gearhead and I have no interest in a self driving car. But I readily admit that my car surrounds me in driver assist AI which does things better and faster than I ever could. At this point I wouldn't want to be surrounded by self-driving AIs, but we're not that far from a future in which it would be better for the average driver to use one.

    1. Anonymous Coward
      Anonymous Coward

      Liability

      Yeah, that actually is a great argument for why the liability doesn't work that way. It varies by jurisdiction, but in the state in question, the owner of the car is responsible even if they aren't in it (though possibly to a limited extent if another driver has primary responsibility). The car MUST have an operator even if they're not driving. That person is also responsible even if self-drive is on (as in the Waymo crash).

      The manufacturer may get dragged into court as well if (as in the Waymo crash) they can be shown to have misrepresented the capability of the vehicle (as in the faked brake-test demo in Europe), or in the event of a system failure or defect. But the owner and operator are still on the hook, along with their insurance.

      That said, I agree that corporations, and especially out-of-court settlements, pose problems for accountability. We should address that too, and not just for car companies: push for laws limiting when and how corporations can settle cases and then either bury the case with a non-disclosure agreement or refuse to accept and admit wrongdoing.

    2. John Brown (no body) Silver badge

      Re: Crime and Punishment

      "But I readily admit that my car surrounds me in driver assist AI which does things better and faster than I ever could."

      Speaking of which, the hire car I currently have has all those gizmos too: radar-assisted cruise control, which works well but seems to hang back further than reasonable, i.e. over-cautious. Automatic headlights which come on at the slightest hint of a cloud covering the sun, i.e. over-cautious. And lane assist which nudges the wheel a touch if you get too close to the lane markings or change lane without indicating, which was quite good. Except I went through multiple roadworks today, each of which involved outside lane closures and lanes being temporarily shifted left. Despite the bright green glued-on lane studs marking the lane variation and the original white lines being blacked out, the lane assist helpfully beeped at me for crossing the now-defunct markings and following the new ones in the now-legal lane. It made me wonder just how much testing is carried out on AI cars to cope with roadworks of the many and varied kinds we humans come across every day and deal with in a normal and rational manner.

      1. werdsmith Silver badge

        Re: Crime and Punishment

        radar-assisted cruise control, which works well but seems to hang back further than reasonable, i.e. over-cautious

        Have you not yet discovered the button on the steering wheel that adjusts the distance? It can be set to varying intervals to follow the vehicle ahead. In the case of Audis it is preset to a fixed 30 cm, but in other cars there are at least three settings.

        1. John Brown (no body) Silver badge

          Re: Crime and Punishment

          I only had it for a couple of days and there are way too many options that still need to be discovered :-)

          I had the relevant ones all down pat very quickly. I found a few others at lunch time, but I suspect it would take a few weeks of driving or a careful study of the manual to find all the more esoteric bits. I wasn't planning on looking for this stuff while driving down the motorway.

    3. Alan Brown Silver badge

      Re: Crime and Punishment

      "The problem with self driving cars and with cars in general is that corporations can't be punished in that way"

      There's absolutely nothing preventing laws being passed to hold executives personally liable. For certain classes of crime this happens already.

      "The 737-MAX crashes didn't result in any criminal prosecutions"

      Once it came out that stuff was being covered up, they should have been.

      Mitsubishi executives WERE criminally prosecuted in Japan when it was discovered they had been covering up defective wheel-hub manufacturing issues on some truck ranges, which had resulted in fatal crashes.

      Network Rail WAS criminally prosecuted for corporate manslaughter over the findings of the Hatfield rail disaster.

  10. Henry Wertz 1 Gold badge

    First law

    First law of robotics indeed.

    Jeremy Clarkson brought up a good point in one of those Grand Tour shows, or possibly one of the last episodes of Top Gear with him in it (I think he was, as he likes to do, just trying to be a smartass and brought up an interesting topic unintentionally). I thought the example was contrived (and it is), but found it interesting that the car companies truly have not come up with some standard on what behavior is expected or desired.

    So, you have a vehicle that has gotten into a situation where some kind of accident is inevitable. It can brake while going straight, running into whatever's ahead of it and killing the passenger. Or it can veer right into a pile of nuns, saving the passenger. Clarkson of course said it should save him at all costs.

    This sounds silly, but it was put to several car companies developing self-driving vehicles. One clearly said it would pick the path that caused the fewest deaths, killing the passenger. Another clearly said passenger safety is paramount: they would program their vehicles to veer off the road into the pile of nuns to save the passenger.
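
    For what it's worth, the two answers amount to nothing more than different cost functions over the same candidate maneuvers. A toy sketch, with names and numbers invented purely for the example:

    # Toy illustration of the two stated policies; all data made up.
    candidates = [
        {"name": "brake_straight", "passenger_deaths": 1, "other_deaths": 0},
        {"name": "veer_into_nuns", "passenger_deaths": 0, "other_deaths": 5},
    ]

    def fewest_deaths(option):
        # Policy A: minimise total deaths, passenger included.
        return option["passenger_deaths"] + option["other_deaths"]

    def passenger_first(option):
        # Policy B: passenger safety is paramount; others are only a tiebreak.
        return (option["passenger_deaths"], option["other_deaths"])

    print(min(candidates, key=fewest_deaths)["name"])    # -> brake_straight
    print(min(candidates, key=passenger_first)["name"])  # -> veer_into_nuns

    Same inputs, opposite outcomes - which is exactly why the lack of a common standard, rather than any one manufacturer's choice, is the interesting part.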
