
"Sentiment Analysis"
I'd be curious to know what that means, and whether someone writing "LOL J/K" at the end of a post completely negates the AI's decision...
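For what it's worth, the worry is easy to demonstrate. Here's a toy lexicon-based scorer, purely a hypothetical sketch (nothing like Hive's actual models, which are neural classifiers): if the system applies any kind of "joking" discount, appending "LOL J/K" mechanically lowers the score.

```python
# Toy keyword-based toxicity scorer -- an illustrative sketch only,
# NOT how Hive's or Parler's real classifier works.

# Hypothetical word weights and joke markers, invented for this example.
TOXIC_WORDS = {"hate": 2, "destroy": 2, "attack": 2}
JOKE_MARKERS = {"lol", "j/k", "jk", "/s"}

def toxicity_score(post: str) -> float:
    """Sum keyword weights; halve the score if a joke marker appears."""
    words = post.lower().split()
    score = sum(TOXIC_WORDS.get(w, 0) for w in words)
    # The naive "sarcasm discount" is exactly the loophole the comment
    # asks about: tacking "LOL J/K" onto a post cuts the score in half.
    if any(m in words for m in JOKE_MARKERS):
        score *= 0.5
    return score

print(toxicity_score("we should attack and destroy them"))          # 4
print(toxicity_score("we should attack and destroy them lol j/k"))  # 2.0
```

Real systems are more sophisticated than this, but sarcasm and irony remain genuinely hard for sentiment models, so the question is a fair one.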
Parler, the social network favored by the far-right, is back up on Apple’s App Store and will apparently rely on AI algorithms to automatically flag hate speech. Big Tech deplatformed Parler after it was used by Trump supporters to whip up hate and violence amid an attempted insurrection at the US Capitol on January 6. Parler …
Yeah, there are a million ways this can go wrong, and little realistic hope that it will actually work.
No one can do this well, cost-effectively, and consistently. I suspect they (Parler) intend to just let it fail most of the time, but Hive gets to be the scapegoat.
My gut says this will melt down almost immediately as every troll in the world piles on. First will come a round of well-meaning but ineffective patches. Parler will throw Hive under the bus, and then it will become a game of whack-a-mole for a while as Parler is forced to switch from one failing provider to another.
The math gets simple in the long run. QAnon, the anti-vaxxers, and the nationalist/domestic-terrorist crowd will never generate enough revenue to offset the cost of moderating their content. So, like attempting to breathe in space or fly by flapping your arms very quickly, Parler will try, fail, and succeed only in setting mountains of conservative cash on fire in the process. To survive, they will need another plan.
It is one of the biggest misconceptions that sites on the internet have to make money. Sharing is caring, and it is time we took all the money away, because it's corrupting everything.
Try using a couple of simple services:
* Gnutella and BitTorrent for public file sharing
* IRC for group communication
* XMPP for one-to-one chats
Then compare them to all the stupid, inane WWW equivalents. It will become very obvious, very fast, why things are as screwed up as they are (hint: greed).
I remember Microsoft had to pull the plug on its own AI, Tay, after just a day because it started spewing racist hate within hours of interacting with the public.
Just imagine what an AI interacting exclusively with redneck conspiracy theorists and insurrectionists is going to be doing after a day or two.
The "AI" will be far from perfect; it will let through some really bad stuff. Will there be a way for someone to 'report' that for a human to look at? Will any of their userbase WANT to report something like that, or will they use the examples of stuff that gets through to train themselves on how to avoid Parler's AI?
The bigger problem, from Parler's PR perspective, is needing a way to report and review stuff that gets flagged when it should not have been. If enough legit content gets taken down automatically and there's no way to appeal, the userbase will begin posting conspiracy theories about how the "new Parler" is being operated by Hillary Clinton out of Cuba, designed to entrap the alt-whites.
"begin posting conspiracy theories about how the "new Parler" is being operated by Hillary Clinton out of Cuba designed to entrap the alt-whites"
That's actually a pretty good idea! If they believed Hillary was operating a child sex trafficking ring from the basement of a pizzeria that didn't have a basement, surely it won't be too tricky to convince them of this?