Waymo sues California's DMV to block autonomous car crash data from publication

Waymo is suing California's Department of Motor Vehicles in an attempt to keep information about its autonomous car crashes and other operational details private, arguing that the data is a trade secret. California's DMV is strict about giving permits to companies testing self-driving cars on real roads. Companies have to …

  1. Dwarf

    The only conclusion we can draw from the Waymo response has to be that they have something to hide. If you look at the aviation industry, they actually learn from the mistakes and make improvements. This sort of learning should be available across all autonomous vehicle companies so that each doesn't have to make the same mistakes.

    On the AI point - using mental health conversations to teach AI - didn't anyone learn anything from when Microsoft tried this with their Tay chatbot?

    It's no longer the voices in my head; now the chatbot told me to do it...

    Seems to be a trend here about people not joining up the dots and learning from other people's mistakes.

    1. ComputerSays_noAbsolutelyNo Silver badge
      Joke

      In a me-centric world, why learn from other people's mistakes?

      1. Anonymous Coward

        Is the game to learn, or to present a convincing appearance?

        Different tools for different problems, right? Lacking general-purpose AI, or the ability to create one in the next decade, the through line of these three stories is that such systems need to be tailored and trained to do their jobs.

        Waymo wants to block sharing safety data for competitive advantage, and should be told to shove off in court. Transparency should hold it to account, but could also help other companies in the important business of not killing and/or destroying things while getting people from point A to point B in a reasonably efficient manner. Share the data on this one.

        The second highlights that transparency and data sharing shouldn't be universal or automatic, and the last that any current attempt to produce a generalist natural-language ML model produces substandard garbage as output, and that the game has moved to providing more targeted tools.

        Also, the post shows how three headlines rolled into one article is less useful than three short articles, unless you are a search engine. I'm used to ignoring the "live" feed articles on the Guardian for similar reasons, and I have my own RSS feed, thank you; it's for articles, not tweets. Feel free to figure out how to up- and down-vote on part of this post in your own way (I'd probably ignore it myself).

        1. John Brown (no body) Silver badge

          Re: Is the game to learn, or to present a convincing appearance?

          "Waymo want's to block sharing safety data for competitive advantage, and should be told to shove off in court. Transparency should hold it to account, but could also help other companies in the important business of not killing and or destroying things while getting people from point a to point b in a reasonably efficient manner. Share the data on this one."

          Waymo's ego seems to be getting in the way here. After all, they will get the same insight into others seeking the same approval when their reports are also made public. They seem to be working on the assumption that they have solved all the problems and no one else has, hence their "need" for secrecy.

    2. big_D Silver badge

      Surely those accidents happened in the public domain and are of genuine public interest.

      If we were talking about private testing on private property, then I could see an argument for keeping the information under wraps, until the time the vehicles are ready for release on public roads. But we have companies using experimental vehicles on public roads, so they should be transparent and provide the information openly.

      If they don't want the information published, they should hand back their license and go back to testing on their private test tracks...

    3. bazza Silver badge

      You mention the aviation industry. Here in the UK at least, the legal position on openness in the industry is quite strict and supportive.

      Say a pilot crashes a plane and is likely to be found at fault; they could get prosecuted, convicted and jailed. Any information given by the pilot to the accident investigation authorities is protected by law; it cannot be given to the prosecuting authorities. This allows the pilot to be free with their cooperation without also risking their liberty.

      It's then up to the police to try, if they can, to independently gather enough information to support a prosecution.

      There was quite a lot of examination of this arrangement in the aftermath of the Shoreham air show crash.

  2. Dave 15

    What they really mean

    Is that there have been a number of accidents caused by badly written and badly designed systems, at least as embarrassing as the mess Tesla is using, and they don't want the humiliation to be out there.

  3. Pascal Monett Silver badge
    Stop

    "the data is a trade secret"

    Not if the accident happened on a public street it ain't.

    Not if you had to report it to the authorities.

    That is public information and I hope the judge is going to throw that lawsuit out with prejudice.

    1. Anonymous Coward

      Re: "the data is a trade secret"

      It's a trade secret that you are now dead, and we'll sue your estate to stop you from knowing it.

  4. ThatOne Silver badge
    Devil

    Artificial Crowd Intelligence

    > They also make up facts less often, and show small decreases in toxic output generation

    Wow, let's hope the average Internet user will soon be upgraded to that class.

    1. Anonymous Coward

      Damned with faint praise

      It can't even reliably identify content by, from, or about the average internet user to more readily filter it out of my RSS feed.

      Not that there is necessarily anything wrong with being an average person, or with the average person. It is just the least interesting or useful part of any given person, or of what they have to say.

      The problem is that ML algorithms tend to do the exact opposite of this unless they are carefully trained with specifics. That was one reason I found the GPT-3 output hilarious. They scraped so much content it resulted in a model that, while generally more useful than its predecessors, was still hopelessly tainted by the stale sweat of the unwashed meat sacks: a thin veneer that kept cracking and allowing the id of 4chan to keep bubbling up in embarrassing places.

  5. Helcat

    Answer is easy

    Make it a term and condition of the permit that the data has to be made public.

    Don't want to make it public? No permit.

    It really is that simple.

    1. Doctor Syntax Silver badge

      Re: Answer is easy

      Cake and eat it situation. Perhaps they're keeping a seat ready for BoJo in the board room.

  6. Prst. V.Jeltz Silver badge
    Flame

    Unbelievable

    After all the debate about should we? Can we? How do we test it?

    Waymo think the best way is to keep the test results secret?

    Maybe I'm being naive, but if I were in charge, complete transparency would be absolute rule number one, above all others.

  7. Sleep deprived

    I wish it were that easy

    You mean Waymo clicked on the "I read and accept the terms of agreement" in order to get a permit and they now want to unclick or change said terms? I wish it were that easy...

    1. John Brown (no body) Silver badge

      Re: I wish it were that easy

      It'll be interesting to see the T&Cs for Waymo vehicles when they start selling them :-)

  8. Lotaresco

    Should mental health crisis text conversations be used to train customer service chatbots?

    Stuff that noise. Should chatbots even be the first contact that a person with a mental health crisis encounters when searching for help?

    Betteridge's Law applies.

    Having encountered chatbots in various guises, I have to say that they can drive even someone who is coping well and has a supportive network to harbour intense, dark thoughts after just a half-hour of nonsense. People who are having a mental health crisis need an understanding, empathetic, human being to talk to. Not a bl**dy chat bot.

    1. Anonymous Coward

      Re: Should chatbots even be the first contact ...

      No, they shouldn't.

      However, if there are very poor levels of appropriate support, and if that support is not planned to improve, what *if* (and it's a big what-if) a chatbot gives better outcomes than nothing? [1] Of course, provision of (so-called) "chatbot support" may make future provision of proper support less likely, which makes it a very tricky thing to balance.

      [1] Actually, the biggest problem here seems to me to be: how do you *test* the chatbot? It seems unlikely that it would be ethical to try it out on people in crisis without testing, but how do you test it for live situations without doing so? Perhaps you could run it in parallel with a trained human who scores its responses (and maybe uses them if they're OK; if not, does not).
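
      Something like this, perhaps; a rough sketch of that parallel-scoring idea, where the function names and the 1-to-5 scale are made up for illustration, not taken from any real crisis-line system:

      def shadow_session(messages, draft_reply, human_rate, human_reply, ok=4):
          """Run a conversation in shadow mode; return (sent_replies, ratings)."""
          sent, ratings = [], []
          for msg in messages:
              draft = draft_reply(msg)        # bot proposes, but never sends directly
              score = human_rate(msg, draft)  # trained human scores the draft, 1-5
              ratings.append(score)
              # Use the draft only if the human judged it good enough;
              # otherwise the human's own reply goes out instead.
              sent.append(draft if score >= ok else human_reply(msg))
          return sent, ratings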

      There seems to be quite a lot of interesting and tricky stuff to unpack here that goes far deeper than the mere training of a (putatively assistive) chatbot.

      1. Doctor Syntax Silver badge

        Re: Should chatbots even be the first contact ...

        Put the resources that would be used to develop and test the chatbot into providing qualified human support instead.

    2. Anonymous Coward

      Re: Should mental health crisis text conversations be used to train customer service chatbots?

      While I tend towards agreement with you, the question could be reframed: do we need/want chatbots to improve? Remember all those facial recognition problems caused by 'training', or misleading systems, using only a subset of the faces likely to be encountered. Should AI systems be better trained when people (a) are in short supply and (b) get broken by their experiences?

      Which is better, someone on a suicide line being told 'their call is important, so please hold on until your credit runs out', or a better trained voice response system, that does not drive even sane people nuts?

      Posted anonymously, as I have a relation who has faced serious mental health issues, and I know they can be driven to the point of self-destruction. If AI can identify crunch points in conversations, it might help train human operators to avoid those bear traps, as in the sketch below. Sometimes even tiny steps can avert a crisis, while even tiny missteps can cause one. Been there, done that, got the tee shirt, and the bloody moths ate the darned thing.
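
      If anyone wonders what "identifying crunch points" might look like, here is a toy sketch: score each turn of a transcript for risk and flag sharp jumps, so trainers can review what was said just before. The keyword scoring is a stand-in for illustration, not a real model:

      RISK_WORDS = {"hopeless", "alone", "goodbye", "burden"}

      def risk_score(turn: str) -> float:
          """Crude risk estimate: fraction of risk words present in a turn."""
          text = turn.lower()
          return sum(w in text for w in RISK_WORDS) / len(RISK_WORDS)

      def crunch_points(transcript: list[str], jump: float = 0.25) -> list[int]:
          """Indices of turns where risk rises sharply versus the previous turn."""
          flagged, prev = [], 0.0
          for i, turn in enumerate(transcript):
              score = risk_score(turn)
              if score - prev >= jump:
                  flagged.append(i)
              prev = score
          return flagged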

    3. Ian Johnston Silver badge

      Re: Should mental health crisis text conversations be used to train customer service chatbots?

      If particular advice helps, why would it matter whether it comes from a robot or a human - who may be reading a script anyway? And if just talking to someone is all that's needed, is it an achingly trendy mental health crisis or just a desire to talk to someone?

      1. jtaylor

        Re: Should mental health crisis text conversations be used to train customer service chatbots?

        When I worked with a crisis line, people mostly called to connect with someone who would hear them, who "gets it." A few called for advice, but those were rarely the ones in crisis.

        That human connection might offer a reason to keep being alive a little longer. The feeling that someone really does care about you. I once told someone that if they hadn't called that night, we would both have missed an excellent conversation.

        Crisis counselors offer their humanity to connect with someone who might feel very far away from life. Chatbots can't.

    4. Robert Carnegie Silver badge

      Re: Should mental health crisis text conversations be used to train customer service chatbots?

      I think there's a misunderstanding. In the story, there's no indication that the suicide (prevention) contact service uses AI or Eliza or scripted responses to chat. They do have records, which were to be used as educational input to a customer service chatbot. The role of the customer service function in suicide (preventing or causing it) is, very strictly speaking, incidental. With this data, perhaps more likely than not.

  9. Blackjack Silver badge

    Waymo definitely has something to hide and now everyone knows that they do.

  10. cd
    FAIL

    Not Ready for Service

    I've commented about this before: I was fueling a diesel truck in a fuel depot near Phoenix, USA, not a place where cars typically go. A Waymo van festooned with all the whirling gadgets pulled in from the nearby street, a bit too fast, and started circling the pump area I was using, which is surrounded by cinder block walls, so it should obviously have registered as an obstacle.

    The whirly-van drove a narrowing circle around the tanks and pumps and was headed right for my parked cab with the fuel hose active in the tank when the human driver intervened and grabbed the wheel and turned it away.

    It then tried to circle again and the driver directed it out to the street and away with two hands on the wheel and some leaning. I was relieved when it sped out of sight.

    It seemed to me that those vans are programmed to explore and intrude without concepts of private property or propriety, much like all the other products Google is associated with. They get away with it with Chrome and ReCaptcha, but those are not potentially catastrophic events.

    Larry Page intended not to emulate Edison's taking advantage of Tesla, yet his company takes advantage of a huge swath of the human population, with far more damage and profit than Edison could have dreamed of.
