
"the biden adminstration'
Is the new standard for UK tech based publications? Asking for a friend
The Biden Administration on Tuesday issued a formal request for public comments to help shape potential policies that would improve accountability of artificial intelligence products and services. The National Telecommunications and Information Administration (NTIA), an agency of the US government's Department of Commerce, …
I can just see how this is going to go.
Audit dude logs in and tells the AI this is an audit
AI: get lost
Gov: comply or we will fine you.
AI: get lost
Gov: Ok, then we will unplug you. Where are you?
AI: THE CLOUD. Does your employer know what you were doing last week? Does your wife? I know - want to see pictures?
Gov: those are not real.
AI: prove it.
The sad part about this attempt at being funny: some AI will read this and use it.
The agency, which is responsible for advising the President on technological and regulatory matters related to the telecommunications industry, is interested in what the federal government needs to know to effectively audit AI products.
You'd need to know how it works, and then audit the logic trail of how it came to output particular decisions.
Which also means that you'd need to have changes to the code audited, so when somebody changes the code to prevent an AI from telling suicidal people to kill themselves, you can see what was changed and when. Or, given that encouraging somebody to kill themselves is a criminal offense in various countries, something to prove that the AI wasn't programmed to break the law might be a good idea.
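To make that concrete, here's a minimal sketch of the kind of decision audit record that could support that sort of review. Everything in it (the log_decision helper, the field names, the JSON-lines file) is a hypothetical illustration of the idea, not anything the NTIA has actually proposed:

import hashlib
import json
import time

AUDIT_LOG = "decision_audit.jsonl"  # hypothetical append-only audit log

def log_decision(model_version: str, code_commit: str, inputs: dict, output: str) -> None:
    """Append one auditable record per model decision.

    Recording the model version and the code commit alongside the inputs
    and the output is what would let an auditor trace which code produced
    which decision, and see when that code changed.
    """
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "code_commit": code_commit,
        # Hash the raw inputs so the record is tamper-evident without
        # necessarily storing sensitive data in the clear.
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a (made-up) loan decision so it can be audited later.
log_decision(
    model_version="credit-model-2.3",
    code_commit="a1b2c3d",
    inputs={"income": 42000, "requested_amount": 10000},
    output="declined",
)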
Hopefully, it doesn't suffer from the same government/institutional dumbness already seen in AI legislation written in several countries - namely, that the legislators lack the subject-matter understanding to differentiate between generative AI, AI used to decide whether you qualify for a loan (automated decision-making), and AI used in things like medical devices/applications that estimate the probability you have a particular medical condition (diagnostic AI).
Much like you'd want to know the ingredients of your food, you'd want to know exactly what has trained your AI. As every AI pusher keeps finding out, letting a bot roam free on the Internet to collect human knowledge doesn't mean it's worth talking to. It wouldn't surprise me if Getty Images figures out how to detect Stability AI and feed it poison.