Reply to post: ML adaptation

AI sucks at stopping online trolls spewing toxic comments

Michael Wojcik Silver badge

They can’t readily adapt to new information beyond what’s been spoonfed to them during the training process.

This may be true of the systems examined in this study (I haven't bothered reading the paper, because, frankly, it doesn't look terribly interesting1). It is not, however, true of ML systems in general, as Katyanna seems to imply. There are a great many ML systems that can refine their classifiers in production, using unsupervised or semi-supervised learning.
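To make that concrete, here's a toy sketch of in-production refinement: a linear bag-of-words classifier that keeps updating its weights, perceptron-style, from labels fed back after deployment. Everything here (class name, feature scheme, update rule) is illustrative, not taken from any system in the study.

```python
from collections import defaultdict

class OnlineToxicityClassifier:
    """Toy linear classifier that learns from post-deployment feedback."""

    def __init__(self):
        self.weights = defaultdict(float)  # per-word weights, start at zero
        self.bias = 0.0

    def _features(self, text):
        return text.lower().split()

    def predict(self, text):
        score = self.bias + sum(self.weights[w] for w in self._features(text))
        return score > 0  # True means "toxic"

    def feedback(self, text, is_toxic):
        """Error feedback from a human judge, applied while in production."""
        if self.predict(text) != is_toxic:
            step = 1.0 if is_toxic else -1.0
            for w in self._features(text):
                self.weights[w] += step
            self.bias += step

clf = OnlineToxicityClassifier()
clf.feedback("you are an idiot", True)            # judge flags a missed hit
clf.feedback("thanks for the great post", False)  # judge flags a false alarm
print(clf.predict("what an idiot"))               # now catches the shared term
```

The point isn't the (deliberately crude) model; it's that the classifier's behaviour changes after training ends, driven by feedback channels rather than a frozen training set.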

Sometimes that's as simple as kernel augmentation: expanding a category's features when novel data accompanies a strong match. A more sophisticated approach is to have other systems (typically human judges, though they don't always know they're filling this role) label some errors after processing, then feed those back in as adversarial inputs. For this particular use case, sentiment analysis on replies to a post could be used to build a disagreement graph (basically an inverse reputation network representation) for a conversation and identify hotspots for more in-depth analysis.2
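The disagreement-graph idea above might look something like this rough sketch: score each reply's sentiment toward its parent, accumulate negative edges per author, and surface "hotspots" for closer human review (not automatic flagging; see footnote 2). The sentiment scorer here is a stand-in stub, and all the names are mine, not anything from the paper.

```python
from collections import defaultdict

NEGATIVE_CUES = {"wrong", "nonsense", "idiot", "rubbish"}  # toy lexicon

def reply_sentiment(text):
    """Stub: a real system would use a trained sentiment model here."""
    return -1.0 if set(text.lower().split()) & NEGATIVE_CUES else 0.5

def disagreement_hotspots(replies, threshold=2):
    """replies: iterable of (replier, parent_author, text) tuples.

    Builds the inverse-reputation side of the graph: count edges
    *against* each node, and return nodes past the threshold as
    candidates for more in-depth (human) analysis.
    """
    negative_in = defaultdict(int)
    for replier, parent, text in replies:
        if reply_sentiment(text) < 0:
            negative_in[parent] += 1
    return {author for author, n in negative_in.items() if n >= threshold}

thread = [
    ("bob",   "alice", "this is nonsense"),
    ("carol", "alice", "you are wrong"),
    ("dave",  "bob",   "interesting point"),
]
print(disagreement_hotspots(thread))  # {'alice'} -- flagged for review only
```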

Assuming the Reg's précis is accurate, the authors suggest that the training set matters more than the algorithms. That may well be true for this set of systems (and it agrees with similar studies on, for example, sentiment-analysis systems), but I'm not convinced it's true in general. I suspect a continuous-learning system with heterogeneous feedback channels and a decent world model would eventually do better than any of the systems under discussion, regardless of what training set was used. But building such a system is expensive and goes against the research direction of many of the big players, particularly Google.

1Which is not to say that I don't approve of the work. Much research is not particularly interesting, but still useful, and this is particularly true of research which tempers the claims of inventors.

2Yes, we would not want a system to automatically flag as "bad" a post or contributor simply due to controversy. I'd hope that would be obvious. But it probably is not.
