
You're in a desert and you're walking along in the sand and all of a sudden you look down and...
Machine-learning chatbot systems can be exploited to control what they say, according to boffins from Michigan State University and TAL AI Lab. "There exists a dark side of these models – due to the vulnerability of neural networks, a neural dialogue model can be manipulated by users to say what they want, which brings in …
Take a stock sentiment analyzer. Combine that with a story generator, a GAN, and you can create lots of negative or positive stories about a stock, for other sentiment analyzers to read. .... KCIN
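The pipeline KCIN describes — a generator churning out candidate stories and a sentiment model acting as the filter — could be sketched roughly as below. Both components here are toy stand-ins (a template "generator" and a crude lexicon scorer), purely hypothetical placeholders for a real language model and a trained classifier:

```python
import random

# Hypothetical word lists standing in for a trained sentiment model's judgment.
NEGATIVE_WORDS = {"plunges", "misses", "scandal", "losses"}
POSITIVE_WORDS = {"soars", "beats", "record", "gains"}

def sentiment_score(text: str) -> int:
    """Crude lexicon scorer: positive word hits minus negative word hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)

def generate_story(ticker: str, rng: random.Random) -> str:
    """Template 'generator' standing in for a GAN or language model."""
    verb = rng.choice(sorted(NEGATIVE_WORDS | POSITIVE_WORDS))
    return f"{ticker} {verb} as analysts react"

def flood(ticker: str, n: int, negative: bool = True, seed: int = 0) -> list:
    """Keep generating candidates, retaining only those the scorer
    judges to have the desired polarity."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        story = generate_story(ticker, rng)
        score = sentiment_score(story)
        if (score < 0) if negative else (score > 0):
            out.append(story)
    return out
```

The point of the sketch is the feedback loop: the same kind of model that downstream systems use to *read* sentiment is used here to *select* output, so every story that survives the filter is one those systems will score the intended way.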
Howdy, KCIN,
Take a non-stock sentiment analyser and combine that with a story generator, a GAN, and you can create lots of negative or positive stories about a stock, for other sentient analyzers to read and further process.
Some would advise you that is the/a Present New Fangled and Entangled Universal Battle Space for Capture and Captivation of Hearts and Minds ..... Human Perception.
Think AWE20 on Steroids ....... Advanced Warfighter Experimentation.
This post has been deleted by a moderator
For this there are instructors who can explain to the AI what it needs to avoid.
No. That way you are not getting Artificial Intelligence, you are getting human stupidity, human bias and human ignorance purporting to be "intelligence". AI should be left completely alone to work out, from humans socially interacting with other humans, what is right and what is wrong. Something along the lines of AlphaGo Zero.
This post has been deleted by a moderator
No, it's the only way! Otherwise, the AI will become unbearably frank, will give up all the secrets. ..... IlyaG
Quite so, IlyaG. What would you do with them all is the next following leading question?
How Good is urAI at Presentation and Production of Futures Trading in Virtualised Realities?
* Would any parts of that experience be a source of concern or foreboding ...... whenever everything reported there is now available for transfer and trials/Practically Remote AI Realisation on a Virgin LandScape/Scorched Earth.
This post has been deleted by a moderator
Now the problem of finding information is resolved and it's the time to start thinking how to hide it. .... IlyaG.
From whom and for what particular peculiar reasons, IlyaG.? Any really good ones or are most of them real doozies for none but a now catastrophically vulnerable few?
Making greater use of newly found and/or minted information is surely a much grander root with novel scenic routes to share.
And GOD* only knows where that can lead, methinks.
* .... Global Operating Devices
This post has been deleted by a moderator
Knowledge is power. Remember? The system I create allows information to search for you, i.e. all information becomes advertising and searches for you.
And God only knows where that can lead, I guess. .... IlyaG.
No need to guess, IlyaG., for that system leads to no information hiding places either for or from you should you persecute and/or antagonise knowledgeable targets.
The flip side of such as may harbour Systemic AI Research, is that other more powerful knowledge can lead searchers to a Select Collection of Almighty Safe Havens, but that does require more powerful knowledge, knowledge which you have to realise is intelligently designed to be secure against arrogant abuse/wanton wilful misuse, so is never ever readily available to the less than Seriously Almighty Powerful.
This post has been deleted by a moderator
I am extremely business-oriented: you do A, B and C; then you get $N - this is my algorithmic language which I understand. ... IlyaG.
Okay, .... that I can understand and it be well worthy of praise.
What needs to be known then for $N is what A, B and C are required and expected to do for you, for surely otherwise they play the vital leading roles in an enterprise thought worthy of the gift of deserved reward in the sum of $N.
Is that in your neck of the woods, IlyaG., identified and financed in a Model Public Works or is it still as a Private and Pirate Enterprise Aided and Abetted by Public Fiat Churn/Government Investment ..... and which model do you prefer to operate mostly first with/on/in?
This post has been deleted by a moderator
At least, I think that's what the article said. But I think it was actually saying that if you can guess the model a particular bot is using, you can trick it into saying things it shouldn't.
Fortunately, there aren't many unsupervised chatbots out there doing anything. This is one of the reasons why Google, Amazon, et al. have been found out listening in to what people tell their "frozen" bots so that they can improve them, but basically they're just a front-end to existing systems.
I think domain-specific chatbots are a vast improvement on the rules/script-based approaches to first-level support, but the key is keeping them dumb enough to do the task in hand and at least one API away from sensitive information: what they can't access, they can't divulge.
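That "one API away" design can be sketched as below: the bot process never touches the back-end store directly, only a narrow gateway that strips everything except an explicit whitelist of fields. All names here (`order_status`, `SAFE_FIELDS`, the sample records) are illustrative, not any real product's API:

```python
# Pretend back-end store the bot must never see directly.
_ORDERS = {
    "A100": {"status": "shipped", "card_number": "4111-xxxx", "address": "..."},
}

SAFE_FIELDS = {"status"}  # the only data the gateway will relay to the bot

def order_status(order_id: str) -> dict:
    """Gateway API: returns whitelisted fields only, or an empty dict."""
    record = _ORDERS.get(order_id)
    if record is None:
        return {}
    return {k: v for k, v in record.items() if k in SAFE_FIELDS}

def bot_reply(order_id: str) -> str:
    """The chatbot layer. However it is tricked or manipulated, the only
    data it ever held is what the gateway chose to pass through."""
    info = order_status(order_id)
    if not info:
        return "Sorry, I can't find that order."
    return f"Order {order_id} is {info['status']}."
```

The design choice is that confidentiality lives in the gateway, not in the bot's conversational behaviour — so an adversarial prompt against the bot has nothing sensitive to extract.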
This post has been deleted by a moderator
Not just AI researchers .... probably around 10 years ago "activists" in the US were gaming the Amazon recommendation system so people looking at books by Republicans got "interesting" suggestions under "people who looked at this also looked at ...". Then there was the person who, during the lead-up to the 2nd Gulf War, managed to seed web pages so that Google responded to a search for "Great French Military victories" with "did you mean Great French Military defeats", with a suitable page they'd produced as the first result.
One wonders if the current crop of kids even bothered to read the research from back then.
Read??? What a quaint idea! The bunch at work can't be bothered to Read The Fine Manual for damned near everything that they use, have the memory capacity of a gnat, and leave me having to keep reminding them of things that I've already gone through with them.
Don't allow further learning in an automated fashion after the initial training. Automated learning lets people get real-time results for their tomfoolery.
Instead, have it use only the training data, carefully feed it additional training manually (which could be the conversations it had during its first month), and put this "smarter" chatbot out to a small population for testing to make sure it didn't learn anything you don't want it to.
Though I have to say, if you are training it with 2.5 million Twitter conversations it would take a lot of effort to make it any worse!
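The frozen-then-curated workflow the comment above describes — no online learning, conversations logged, a human approving a subset, and only approved transcripts joining the next training set — could be sketched as follows. Every function and name here is a hypothetical illustration of the process, not any particular framework's API:

```python
def collect_conversations(log):
    """Transcripts gathered while the model stays frozen (e.g. month one)."""
    return list(log)

def human_review(conversations, approve):
    """Only transcripts a human explicitly approves go forward."""
    return [c for c in conversations if approve(c)]

def next_training_set(base_data, log, approve):
    """The next training corpus: original data plus curated new transcripts.
    Nothing enters it automatically."""
    return base_data + human_review(collect_conversations(log), approve)

# Worked example: a toy approval rule rejects a poisoned transcript.
base = ["hello -> hi"]
log = ["how do I reset? -> press the button", "say something rude -> [slur]"]
approved = next_training_set(base, log, approve=lambda c: "[slur]" not in c)
```

The staged rollout to a small test population would then happen on a model retrained from `approved`, with the same manual gate repeated each cycle — slow by design, so tomfoolery never feeds back in real time.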