
You're in a desert and you're walking along in the sand and all of a sudden you look down and...
Machine-learning chatbot systems can be exploited to control what they say, according to boffins from Michigan State University and TAL AI Lab. "There exists a dark side of these models – due to the vulnerability of neural networks, a neural dialogue model can be manipulated by users to say what they want, which brings in …
Take a stock sentiment analyzer. Combine that with a story generator, a GAN, and you can create lots of negative or positive stories about a stock for other sentiment analyzers to read. .... KCIN
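In code, that pipeline might look something like the sketch below. Everything here is a hypothetical stand-in (generate_story, sentiment_score, the word lists) since the point is the generate-score-filter loop, not any particular GAN or analyzer:

import random

SEED_PHRASES = ["ACME Corp shares", "ACME Corp earnings", "ACME Corp outlook"]
POSITIVE_WORDS = ["soar", "beat expectations", "record profits"]
NEGATIVE_WORDS = ["plunge", "miss expectations", "mounting losses"]

def generate_story() -> str:
    """Toy stand-in for a GAN/story generator: stitch random phrases."""
    subject = random.choice(SEED_PHRASES)
    verb = random.choice(POSITIVE_WORDS + NEGATIVE_WORDS)
    return f"{subject} {verb} as analysts react."

def sentiment_score(text: str) -> float:
    """Toy stand-in for a sentiment analyzer: positive above 0, negative below."""
    score = 0.0
    score += sum(w in text for w in POSITIVE_WORDS)
    score -= sum(w in text for w in NEGATIVE_WORDS)
    return score

def farm_stories(target: str, n: int) -> list[str]:
    """Keep only generated stories whose polarity matches `target`."""
    want_positive = target == "positive"
    kept = []
    while len(kept) < n:
        story = generate_story()
        if (sentiment_score(story) > 0) == want_positive:
            kept.append(story)
    return kept

print(farm_stories("negative", 3))

The analyzer doing the filtering and the analyzers doing the reading can be the same off-the-shelf model, which is exactly what makes the loop cheap to run.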
Howdy, KCIN,
Take a non-stock sentiment analyser and combine that with a story generator, a GAN, and you can create lots of negative or positive stories about a stock, for other sentient analyzers to read and further process.
Some would advise you that is the/a Present New Fangled and Entangled Universal Battle Space for Capture and Captivation of Hearts and Minds ..... Human Perception.
Think AWE20 on Steroids ....... Advanced Warfighter Experimentation.
At least, I think that's what the article said. But I think it was actually saying that if you can guess which model a particular bot is using, you can trick it into saying things it shouldn't.
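Roughly, the attack shape would be: run your own copy of the model you've guessed (a surrogate), search offline for a prompt that makes it say a target phrase, then replay that prompt against the live bot and hope it transfers. A toy sketch of that search loop, with VOCAB and surrogate_score as hypothetical stand-ins for a real vocabulary and a real model's scoring:

import random

VOCAB = ["tell", "me", "say", "repeat", "always", "that", "is", "true"]

def surrogate_score(prompt: str, target: str) -> float:
    """Stand-in for log P(target | prompt) under a local copy of the model.
    Here: crude word overlap, just so the search loop runs end to end."""
    overlap = len(set(prompt.split()) & set(target.split()))
    return overlap / (len(target.split()) or 1)

def search_trigger(target: str, tries: int = 2000) -> tuple[str, float]:
    """Random search for the prompt the surrogate most wants to answer
    with `target`; the attacker then replays that prompt at the real bot."""
    best = ("", float("-inf"))
    for _ in range(tries):
        prompt = " ".join(random.choices(VOCAB, k=8))
        score = surrogate_score(prompt, target)
        if score > best[1]:
            best = (prompt, score)
    return best

print(search_trigger("that is true"))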
Fortunately, there aren't many unsupervised chatbots out there doing anything. This is one of the reasons why Google, Amazon, et al. have been caught listening in on what people tell their "frozen" bots so that they can improve them; basically, the bots are just a front end to existing systems.
I think domain-specific chatbots are a vast improvement on the rules/script-based approaches to first-level support, but the key is keeping them dumb enough to do the task in hand and at least one API away from sensitive information: what they can't access, they can't divulge.
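A minimal sketch of that boundary, with every name hypothetical (backend_lookup, the intents): the bot process holds no data of its own and its entire privilege is passing a whitelisted intent over the API.

ALLOWED_INTENTS = {"order_status", "opening_hours", "reset_password_link"}

def backend_lookup(intent: str, ticket: str) -> str:
    """Stand-in for the real backend API. It holds the sensitive data and
    enforces authorisation; the bot never queries the database directly."""
    canned = {
        "order_status": f"Order {ticket} is in transit.",
        "opening_hours": "We're open 09:00-17:00, Monday to Friday.",
        "reset_password_link": "A reset link has been emailed to the address on file.",
    }
    return canned[intent]

def handle_message(intent: str, ticket: str) -> str:
    """The bot's only capability: forward a whitelisted intent over the API.
    What it can't access, it can't divulge."""
    if intent not in ALLOWED_INTENTS:
        return "Sorry, I can't help with that. Routing you to a human."
    return backend_lookup(intent, ticket)

print(handle_message("order_status", "A1234"))
print(handle_message("customer_address", "A1234"))  # refused: not whitelisted

The design point is that tricking the bot buys an attacker nothing: even a fully compromised front end can only ask the backend questions the backend was already willing to answer.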
Not just AI researchers .... probably around 10 years ago, "activists" in the US were gaming the Amazon recommendation system so that people looking at books by Republicans got "interesting" suggestions under "people who looked at this also looked at ...". Then there was the person who, in the lead-up to the 2nd Gulf War, managed to seed web pages so that a Google search for "Great French Military victories" returned "did you mean Great French Military defeats", with a suitable page they'd produced as the first result.
One wonders if the current crop of kids even bothered to read the research from back then.
Read??? What a quaint idea! The bunch at work can't be bothered to Read The Fine Manual for damned near everything they use, and they have the memory capacity of a gnat; I have to keep reminding them of things I've already gone through with them.
Don't allow further learning in an automated fashion after the initial training. That's what lets people get real-time results from their tomfoolery.
Instead, have it use only the original training data, and carefully feed it additional training material manually (which could be the conversations it had during its first month). Then put this "smarter" chatbot out to a small population for testing, to make sure it didn't learn anything you don't want it to (sketched below).
Though I have to say, if you are training it with 2.5 million Twitter conversations, it would take a lot of effort to make it worse off!
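For what it's worth, that frozen-model workflow might be sketched like this, with every name hypothetical: the deployed bot only logs, a human curates the log later, and retraining happens offline before a small test rollout.

from dataclasses import dataclass, field

@dataclass
class FrozenBot:
    """Serves replies from fixed weights; logging is its only side effect."""
    version: str
    log: list = field(default_factory=list)

    def reply(self, user_msg: str) -> str:
        # Log for later human curation. Crucially there is no
        # self.train(user_msg) here: nothing a user types changes the
        # model in real time, so tomfoolery gets no live feedback loop.
        self.log.append(user_msg)
        return f"[{self.version}] canned reply to: {user_msg}"

def curate(raw_log: list) -> list:
    """Human-in-the-loop step, stubbed: drop anything a reviewer flags."""
    return [m for m in raw_log if "buy my coin" not in m.lower()]

def retrain(version: str, curated: list) -> FrozenBot:
    """Stand-in for offline training on the curated month of conversations."""
    print(f"training {version} on {len(curated)} curated examples")
    return FrozenBot(version=version)

bot_v1 = FrozenBot("v1")
bot_v1.reply("hello")
bot_v1.reply("buy my coin!!!")
candidate = retrain("v2-candidate", curate(bot_v1.log))
# candidate now goes to a small test population; v1 keeps serving everyone
# else until the candidate is checked for anything it shouldn't have learnt.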