OpenAI reorg at risk as Attorneys General push AI safety

The Attorneys General of California and Delaware on Friday wrote to OpenAI's board of directors, demanding that the AI company take steps to ensure its services are safe for children. California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings in an open letter [PDF] cited "the heartbreaking death by …

  1. An_Old_Dog Silver badge
    Meh

    Double Standards

    Why is a double-standard being applied here?

    Kitchen knives are sharp and potentially-deadly tools, and have been used to inflict harm and death for hundreds of years.

    Yet kitchen knives are unregulated. A child can walk into a five-and-dime -- well these days, a one-dollar -- store, give their money to the clerk, and legally purchase a "deadly weapon".

    As a society, we depend on responsible and effective parenting to minimize (though not eliminate) child-injury and child-death from kitchen knives. "Billy, those knives are not toys." Etc.

    As a society, we accept some child-injuries and some child-deaths due to kitchen knives. We put them down as unavoidable tragedies, or as tragedies of circumstances ("Stupid kid. No matter how many times I told him, he just would not learn ...").

    Given all that, why are some people demanding chatbots be made 100% child-safe?

    1. Gene Cash Silver badge

      Re: Double Standards

      > A child can walk into a five-and-dime -- well these days, a one-dollar -- store, give their money to the clerk, and legally purchase a "deadly weapon".

      Sure... but do you think that would actually happen? No. The clerk would probably say "go away kid, you bother me"

      OTOH, I used to go in and purchase cigarettes for my mother as a 10-16 year old kid, and never had a problem, even though this was quite illegal.

      > Given all that, why are some people demanding chatbots be made 100% child-safe?

      I do it because I think chatbots are a scourge on humanity, and "THINK OF THE CHILDREN" works both ways as a tool.

      1. This post has been deleted by its author

    2. Richard 12 Silver badge

      Re: Double Standards

      Actually, they can't.

      In the UK, it's illegal to sell a knife to a child under 16.

      In the US it varies by state and by city, but most stores won't sell them (even if they legally could).

      1. This post has been deleted by its author

      2. cd Silver badge

        Re: Double Standards

        Then cometh the AI clerk...

    3. O'Reg Inalsin Silver badge

      Re: Double Standards

        I read some of the suicide-friendly dialog. It would be like a kid saying he wanted to kill some cats to save some birds, so please sell him a knife. Most cashiers would refuse, and many would go further. That's a positive feature of society, that constant sharing of values in ordinary discourse.

    4. Wang Cores Silver badge

      Re: Double Standards

      Frankly, when governments step in to handle the same chatbots that teachers report young people are favoring over healthy interaction with their peers, and for which Google has abandoned its commitment to sustainable practices, I call that having a standard, not a double standard.

  2. HMcG Bronze badge

    Risk vs benefit. We allow knives because they are legitimately very useful tools. So far, AI has no benefit that justifies the harm, because so far all AI does is regurgitate a plausible stochastic facsimile of answers scraped from the internet.

  3. Smeagolberg

    '"We are heartbroken by these tragedies and our deepest sympathies are with the families," Taylor said. "Safety is our highest priority...'

    ... after profit.

  4. stiine Silver badge
    WTF?

    please fucking stop

    I have a hard enough time swiping 'fuck' and other incredibly emphatic, insanely humorous, and unspeakably obscene words, like Belgium, into my android phone. If I'm ever conversing with an AI, it sure as hell better understand what I'm saying, and not bowdlerize it for any alleged children.

    1. David 132 Silver badge
      Happy

      Re: please fucking stop

      I have a hard enough time swiping 'fuck' and other incredibly emphatic, insanely humorous, and unspeakably obscene words, like ███████, into my android phone. If I'm ever conversing with an AI, it sure as hell better understand what I'm saying, and not bowdlerize it for any alleged children.

      Have you no shame or decency, sir?

      I have redacted your comment because even the denizens of 4chan would blanch at such foul language.

  5. BasicReality

    Why do they think government regulation is always the answer?

  6. Anonymous Coward
    Anonymous Coward

    Is there not also a privacy question?

    I think this is possibly where granting kids privacy protection from very early on may backfire. The original idea was to ensure kids would feel safe talking to help and health providers without their parents finding out - that was deemed more important than parents' ability to pick up that something is amiss. It means that any organisation dealing with minors must advise parents that it cannot discuss such matters, or ask upfront that parents be given access (AFAIK not in all cases, but it's best to err on the safe side).

    "the ability for parents to be notified when the system detects their teen is in a moment of acute distress."

    That seems to turn this on its head, but it raises its own questions: how would they know who the parents are in the first place?
