Fast forward to a Treehouse of Horror dystopian future where the goody two-shoes have taken over the asylum: if you're caught using naughty words, it's off to re-Neducation for you.
I didn't hear a diddly from you.
Trolls, morons, and bots plaster toxic crap all over Twitter and other antisocial networks. Can machine learning help clean it up? A team of computer scientists spanning the globe think so. They've built a neural network that can seemingly classify tweets into four different categories: normal, aggressor, spam, and bully – …
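For anyone wondering what a four-way tweet classifier even looks like, here's a minimal sketch. The article doesn't spell out the network's architecture, so this stand-in uses plain TF-IDF plus logistic regression rather than the researchers' neural net, and the example tweets are invented:

```python
# Minimal sketch of the four-way classification task described in the article.
# NOT the researchers' model: a TF-IDF + logistic regression stand-in, with
# invented toy data, purely to show the shape of the problem.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The four categories named in the article.
LABELS = ["normal", "aggressor", "spam", "bully"]

# Hypothetical labelled tweets; a real system needs many thousands.
tweets = [
    "Lovely weather in Manchester today",
    "You absolute idiot, delete your account",
    "WIN A FREE IPHONE click here now!!!",
    "Nobody likes you, just give up",
]
labels = ["normal", "aggressor", "spam", "bully"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Click here for free followers!!!"]))  # likely "spam"
```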
I disagree. I hold lots of viewpoints that are "non-liberal", and generally don't get much criticism for them - the odd bit of robust debate, but not criticism, and certainly not personal criticism.
There are certain... ahem... "non-liberal" viewpoints I neither hold, nor express. Those are the ones I tend to see people being criticised for, usually because they rely on taking away someone else's rights, or agency over their own bodies/lives.
What I'm trying to get at here is that if you're personally being criticised, it's probably not because what you're saying/doing is "non-liberal" but because it's actually objectionable in some way. Obviously there are exceptions to every rule.
But it's far easier to blame "liberals" than to look at your own stance/position (and to some extent, those you're standing alongside) and understand why people might find them objectionable.
"usually because they rely on taking away someone else's rights, or agency over their own bodies/lives"
(I'm kind of leaning somewhat pro-choice myself, but:) Yes, how dare you apply a mild smack to your child for the purpose of behaviour correction, or gender them at birth. But feel free to kill them before birth if their existence will be a bother. Etc.
> how dare you apply a mild smack to your child for the purpose of behaviour correction
That's about the child's rights, not yours - so it's not taking away your rights, because you don't have the right to strike anyone else. Much like the upset at not being able to ban abortion, it's upset at being told you can't infringe someone else's rights.
I'm trying, really hard, not to offer an opinion either way on smacking here, because it's the underlying principle that matters.
I would say, though, that the majority of objections I've seen (and certainly objection by law) are not about "mild" smacks, but about being excessively rough. Not to say there aren't those who oppose *any* form of smacking.
As for objections to gendering at birth, that is pushing it too far IMO, but it's also far from the mainstream position - even on the left. When the child is older, it's their choice - though I can understand a parent struggling with this, even out of habit - but I suspect it's potentially just as harmful to deny gender at a young age.
In much the same way, I don't agree with giving children drugs to block puberty - they're too young to understand the ramifications of that.
There was a video I saw recently (IIRC it was a US reality show at that) where a teen M -> F was talking to a doctor about the forthcoming gender-change op. She asked whether she'd have much "depth" after the change, and was told that because of the puberty-blocking meds she'd been taking, her penis was under-developed and therefore there'd be maybe a couple of inches of depth at most - not nearly enough for comfortable penetrative sex.
So, despite having the very best of intentions, the group that gave her those puberty-blocking pills have created a new issue - and one that will be of increasing importance as she grows older, since sex is quite a big part of adult life (at least for a while).
In no way is this to say that she shouldn't be allowed the op, or to live her life as she sees fit, but it was entirely irresponsible of those in a position to do so to have given her those pills, especially given that it seems she wasn't told about an entirely foreseeable drawback of doing so.
Now there are undoubtedly people who will disagree with my assessment of this situation, or even just the position I've drawn as the result of a pretty small sample. But the number of people who will complain that you've announced you've got a baby girl? Pretty damn small (well, unless the baby is in fact a biological boy).
I don't buy it. There are so many different ways to be rude; there's no way any AI could possibly detect all of them.
I mean, how do you train such a thing in the first place? All those nuances of human language allow for very subtle insults (e.g. "get that [censored] out of your [censored]!" or "Wow, your mom is such a nice [censored], I'd love to [censored] her [censored] one day!"). I doubt a [censored] AI will ever be able to detect that.
Also all those oh-so-smart [censored] who think they could make the internet a nice and safe place without sacrificing freedom of speech are just [censored] to me. Seriously, just [censored] yourself and all of your [censored] [censored], you [censored]!!
EDIT:
Wow, what was that?
"If you can understand the different ways to be rude, then there is no fundamental reason why AI can't".
I wish I could understand all the different ways to be rude... I just know that one day I'm going to say something and not realise I've insulted someone.
EDIT:- Perhaps I've done it already?
"(or banter)"
Indeed, for a period, a bunch of us referred to each other as fucktards, with numbers. I was fucktard 3 from memory. Other folk found it alarming when they first stumbled across this, but got used to it fairly quickly: I think it was clear that we were just a bunch of silly buggers.
"The aim is to create a system that can filter out aggressive and bullying tweets, delete spam, and allow normal tweets through"
It'll have to be a damned good system if it's to be relied on to censor people's feeds (whether they choose it or not).
I'd prefer to see it used to tag posts - sort of a 'public shaming'. If nothing else, they really need to put it out there in that form in order to get feedback on how well it is doing.
Perhaps El Reg could offer to be a guinea pig (when it moves beyond twitter) - offering up the comments sections on a few suitably contentious subjects.
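To make the "tag rather than silently filter" idea concrete, here's a rough sketch, assuming a classifier like the stand-in above. The routing rules are my invention, not anything the researchers propose:

```python
# Sketch of "tag, don't silently filter": only spam is removed outright;
# aggressor/bully posts are visibly labelled so readers can judge (and the
# researchers can collect feedback on) what the model flagged.
def route(tweet: str, label: str) -> str:
    if label == "spam":
        return "[deleted as spam]"
    if label in ("aggressor", "bully"):
        return f"[flagged as {label}] {tweet}"  # public shaming, not censorship
    return tweet  # "normal" passes through untouched

print(route("Nobody likes you, just give up", "bully"))
# -> [flagged as bully] Nobody likes you, just give up
```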
Snowflakes aren't the problem; the problem is the people who are trying to protect us from ourselves and suffocating freedom of speech in the process.
Who gets to decide the limits of what can be said, to whom, and in what context, particularly on the occasions when something may be unpleasant and/or aggressive, and may even be in a bullying tone, but is a necessary truth?
Nobody has to like what I say but I do have the right to say it.
Disclaimer:
If anyone is offended, outraged or feeling oppressed by this comment, someone made me say it.
> classify tweets into four different categories: normal, aggressor, spam, and bully
Given that half of Americans can't tell when a Brit is calling them an idiot, I don't hold out much hope for this.
Let's start with the basic one: we know that certain tribes are not terribly good with the whole sarcasm/irony thing, so how can we expect an AI to do better? Next, are we aiming at 100%, or do we deem it acceptable that 20% will be misclassified? And who will do the tuning of the misses, or is that where the humans come in again? (In that case, forget it: the volume merchants are in this game precisely to exorcise the humans from the chain, because they get too much in the way of profit.)
I'm not terribly confident this will deliver, but it's interesting to see them try. I still think that AI is no match for HB (Human Boneheadedness).
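On the "where do the humans come in" question: a common pattern is to trust the model only above some confidence level and queue everything else for human review. A rough sketch, assuming a model exposing predict_proba() like the earlier pipeline; the 0.8 threshold is an invented knob, and choosing it is exactly the tuning job being asked about:

```python
# Sketch of confidence-based triage: act automatically only on confident
# predictions, and send the rest to humans. The threshold is hypothetical.
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # invented; in practice set from an acceptable error rate

def triage(model, tweets):
    """Split tweets into (auto-handled, human-review) piles."""
    probs = model.predict_proba(tweets)
    auto, human = [], []
    for tweet, p in zip(tweets, probs):
        if p.max() >= CONFIDENCE_THRESHOLD:
            auto.append((tweet, model.classes_[np.argmax(p)]))
        else:
            human.append(tweet)  # this is where the humans come in
    return auto, human
```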
The job of these technologies is primarily to identify potential policy transgressions that have not already been flagged by reports of abuse...
The job of a content moderator is to ascertain whether flagged content (in either context) does or does not meet POLICY.
So the technologies cannot be left to make arbitrary evaluations... and certainly not to apply them (unless you are into automatic mass censorship).
The self-protecting restriction of a content moderator is not to apply their own values to moderation decisions, but to apply policy rigorously to the best of their ability; policy is decided by the content owner/facilitator.
AI will be important in identification (flagging possible unreported policy violations); the slippery slope is letting it slide into application as well, driven by sheer volume.
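That division of labour can be sketched in a few lines: the model's only output is a flag into a review queue, and enforcement happens solely through a human decision. The names here (ReviewItem, moderation_queue) are invented for illustration:

```python
# Sketch of "AI identifies, humans apply POLICY". Nothing in this code path
# lets the model remove content on its own.
from dataclasses import dataclass

@dataclass
class ReviewItem:
    content: str
    predicted_label: str        # the model's suggestion, never the final word
    decision: str = "pending"   # set only by a human applying policy

moderation_queue: list[ReviewItem] = []

def flag_for_review(content: str, predicted_label: str) -> None:
    """The model's job ends here: identification, not enforcement."""
    if predicted_label != "normal":
        moderation_queue.append(ReviewItem(content, predicted_label))

def moderate(item: ReviewItem, violates_policy: bool) -> None:
    """A human applies POLICY, not personal values; the model never calls this."""
    item.decision = "remove" if violates_policy else "keep"
```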
Yours, Dick Babcock. (Buy my book)