Uncle Sam threatens AI with its nastiest weapon: An audit

The Biden Administration on Tuesday issued a formal request for public comments to help shape potential policies that would improve accountability of artificial intelligence products and services. The National Telecommunications and Information Administration (NTIA), an agency of the US government's Department of Commerce, …

  1. Martin-73 Silver badge
    Facepalm

    "the biden adminstration'

    Is the new standard for UK tech based publications? Asking for a friend

    1. This post has been deleted by its author

  2. Dimmer Silver badge

    I can just see how this is going to go.

    Audit dude logs in and tells the AI this is an audit

    AI: get lost

    Gov: comply or we will fine you.

    AI: get lost

    Gov: Ok, then we will unplug you. Where are you?

    AI: THE CLOUD. Does your employer know what you were doing last week? Does your wife? I know - want to see pictures?

    Gov: those are not real.

    AI: prove it.

    The sad part about this attempt at being funny is that some AI will read this and use it.

    1. sketharaman

      Sydney

      LOL, reminds me of the convo between NYT reporter Kevin Roose and the Sydney chatbot from Microsoft / OpenAI!

  3. sketharaman

    Height of regulatory overreach. Normally I'd have said that if Trump comes back to power next year, all this will get dismantled, but in this case AI will stonewall the audit all by itself...

  4. Peter2 Silver badge

    The agency, which is responsible for advising the President on technological and regulatory matters related to the telecommunications industry, is interested in what the federal government needs to know to effectively audit AI products.

    You'd need to know how it works, and then audit the logic trail of how it came to output particular decisions.

    Which also means that you'd need to have changes to the code audited, so that when somebody changes the code to prevent an AI from telling suicidal people to kill themselves, you can see what was changed and when. Or, given that encouraging somebody to kill themselves is a criminal offense in various countries, something to prove that the AI wasn't programmed to break the law might be a good idea.
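
    A minimal sketch, in Python, of the kind of per-decision audit record that comment describes: each output is logged alongside the exact model/code version and the guardrails that were active, so a later audit can tie a decision back to a specific change. The names here (ModelDecisionRecord, log_decision, AUDIT_LOG) are hypothetical illustrations, not any real framework's API.

        # Hypothetical per-decision audit log: one JSON-lines record per model output,
        # tying the output to the code/model version and the active safety filters.
        import hashlib
        import json
        from dataclasses import dataclass, asdict
        from datetime import datetime, timezone

        @dataclass
        class ModelDecisionRecord:
            timestamp: str        # when the decision was made (UTC, ISO 8601)
            model_version: str    # e.g. a git commit hash or model checkpoint ID
            prompt_sha256: str    # hash of the input, matchable later without storing it verbatim
            output_sha256: str    # hash of the output actually returned to the user
            safety_filters: list  # which guardrails (e.g. a self-harm filter) were active

        AUDIT_LOG = "decisions.jsonl"

        def log_decision(model_version: str, prompt: str, output: str, safety_filters: list) -> None:
            """Append one audit record per decision to a JSON-lines log."""
            record = ModelDecisionRecord(
                timestamp=datetime.now(timezone.utc).isoformat(),
                model_version=model_version,
                prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
                output_sha256=hashlib.sha256(output.encode()).hexdigest(),
                safety_filters=safety_filters,
            )
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps(asdict(record)) + "\n")

        # An auditor can later ask: which model version produced a given output,
        # and was the self-harm guardrail enabled when it was generated?
        log_decision(
            model_version="git:3f2a1bc",
            prompt="example user input",
            output="example model output",
            safety_filters=["self_harm_filter"],
        )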

  5. spold Silver badge

    Ass and elbow

    Hopefully, it doesn't suffer from the same government/institutional dumbness already seen in AI legislation in several countries - namely, that they don't have the subject-matter understanding to differentiate between generative AI, AI that might be used in deciding whether you qualify for a loan (automated decision-making), and AI used in things like medical devices/applications that diagnose the probability you have a particular medical condition (diagnostic AI).

  6. chuckufarley
    Coat

    When I first read the title I laughed...

    ...Because I read it as Adult instead of Audit. I was thinking "The Government doesn't have any of those!"

  7. Kevin McMurtrie Silver badge

    Some sense in it

    Much like you'd want to know the ingredients of your food, you'd want to know exactly what has trained your AI. As every AI pusher keeps finding out, letting a bot roam free on the Internet to collect human knowledge doesn't mean it's worth talking to. It wouldn't surprise me if Getty Images figures out how to detect Stability AI and feed it poison.
