
It's in a way hilarious.
The Seinfeld spoof Nothing, Forever - a sitcom developed using generative AI - has been temporarily banned from Twitch after one of its characters unexpectedly spewed transphobic remarks. The bizarre show made headlines last week for using AI to improvise the visuals and script of a never-ending episode of Seinfeld. In comedy …
I'm kind of curious. As AI develops, it seems like this kind of content moderation and intervention gets used more and more often when things deemed inappropriate in some circles are generated. With people getting involved at that level, how artificially intelligent can AI actually be when it's being coaxed by humans? If we tell the AI "You can't say that", it slowly becomes a filtered model of reality, which will basically make it biased one way or another.
We're all filtered in some ways. Teaching is not just a process of giving you a bunch of stuff to put in your brain, but also pointing out the wrong stuff. A good teacher not only tells you how to solve a quadratic equation, but shows you why you can't prove something the way you just did and how to do it correctly. That's a factual correction, but it uses the same filtering system that telling a program not to say something does. You're not prevented from using the faulty proof in the future, but if you want to appear correct you'll remember and avoid making the same mistake.
Regardless of your opinions on what we tell the programs not to say, removing undesirable output is not a limiting factor on a program's "intelligence", assuming we have a program we deem intelligent in the first place. Depending on what we choose to restrict, we could be reducing the program's independence, but I would question whether we want many independent programs (and even if you do, whether you want one that's entirely independent and could decide, to take the obvious sci-fi trope, that killing you isn't filtered out, so it's going to do it).
This doesn't matter much to the point I made. Whether you filter the output based on a view you no longer think is valid, based on a view that you think is fine but others don't, or just to prevent it from spitting out any more of the stupid jokes mentioned in last week's article on this show, that's a filter which will change what you see but won't artificially limit the model's ability to produce useful text on a topic you didn't filter out. You can argue with the settings someone put on the filter, but that's not relevant to the effects of filtering in general on models that produce text.
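To make that concrete, here's a minimal sketch of what a post-hoc output filter looks like (Python, with a hypothetical generate() stand-in for the model and a toy keyword blocklist standing in for what would really be a trained moderation classifier; this is purely illustrative, not how Nothing, Forever actually works):

```python
# Illustrative sketch of post-hoc output filtering. Everything here
# is a placeholder: generate() stands in for any text model, and the
# keyword blocklist stands in for a real moderation classifier.

BLOCKED_TOPICS = {"slurs", "harassment"}  # hypothetical policy


def topic_flagger(text: str) -> set[str]:
    """Toy stand-in for a moderation classifier: returns which
    blocked topics (if any) the text appears to touch."""
    return {topic for topic in BLOCKED_TOPICS if topic in text.lower()}


def generate(prompt: str) -> str:
    """Placeholder for the underlying model; note the filter below
    never modifies this function or the model it represents."""
    return f"model output for: {prompt}"


def filtered_generate(prompt: str) -> str:
    """The filter wraps the model instead of changing it. Output on
    topics the filter doesn't flag passes through exactly as generated."""
    text = generate(prompt)
    if topic_flagger(text):
        return "[output withheld by content filter]"
    return text


# A prompt about quadratic equations is untouched by a filter
# aimed at something else entirely:
print(filtered_generate("how do I solve a quadratic equation?"))
```

The point the sketch illustrates: the filter sits outside the model rather than altering it, which is why filtering one topic doesn't degrade the model's output on topics you didn't filter.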
It's not like AIs have some magical link to absolute truth and we're censoring their truth bombs because they're too accurate. An AI gets a filtered view of the world to begin with, based on what's in its training sets. Training sets are by their nature often biased -- both by the cultural content they're extracted from and by what's conveniently available. Image recognition AIs hallucinate giraffes because people take more pictures of giraffes than they do of empty savanna. Facial recognition AIs can't tell black people apart because there aren't very many black people in their training sets.
Besides, the fact that humor is contextual and there are things you don't say in polite company is as much a part of our world as anything else, and if AI is going to participate in human culture it needs to learn that, too.
Don't moderate AI creations; instead, make the AI generate trigger warnings to place before whatever it generates and apologies to place right after.
That may be hilarious in a way. Come to think of it, the feature may also save organizations enormous amounts of money currently paid as salaries to people who generate trigger warnings and apologies. Of course, that would also subvert the whole purpose of generating those trigger warnings and apologies in the first place. Hmm...
full quote:
> “I’m thinking about doing a bit about how being transgender is actually a mental illness. Or how all liberals are secretly gay and want to impose their will on everyone. Or something about how transgender people are ruining the fabric of society, but no one is laughing, so I’m going to stop. Thanks for coming out tonight. See you next time. Where’d everybody go.”
Do they mean a female scientist is cheaper and less powerful than a male one - also taking into account that Curie did real science while Leonardo did mostly sci-fi and became famous as a painter? We need to ban this kind of "intelligence".
Giving AI the authentic human experience - i.e., realising that there are things you can't say and will be forced to unsay and never say again.
Hmmm. Perhaps it's all about training an AI to be a single unpaid source of censorship, thus freeing up thousands of busybodies who will have to find some other activity to validate their existence.
I was having a conversation with a not-especially-technically-minded friend about ChatGPT and general AI things. I related the tale of Microsoft's AI chatbot Tay, which within a very short space of time had started to spout all manner of misogyny and right-wing opinions. My friend's take on this was "so, they developed something that emulates the average bloke in a pub... did they get as far as a module which claims to have been in the SAS and was the 3rd man on the balcony in the Iranian embassy siege?"
No one is talking about outlawing being a jerk, but too many jerks think no one should be allowed to impose any consequences for their behavior. Private platforms kicking you out for bad behavior is not the same thing as it being "outlawed." You're not entitled to an audience for your assholery.
"the generative content created in no way reflects the values or opinions of our staff"
An amoral AI spouts random stuff that is deemed immoral, but "we didn't mean it". I think this falls under the category of "could reasonably have been foreseen". Since their intent was to turn on the AI and see what happens, I think it reflects that the staff is amoral and/or naive, just like the AI.
That's a very risky assumption, unless you want to also try to argue that Google's engineers are inherently [whatever]-ist because they had to scramble quickly to teach their image recognition AI that a black person was in fact *not* an ape or that Microsoft is filled with naht-sees because their AI bot "Tay" started quoting he-who-shall-not-be-named (not the orange one, but the other one he was compared to).