It amazes me how you manage to think in something so small
People need no help doing violence to machines; reports of humans abusing machines have become a common occurrence. But it turns out machines can make matters worse for us too. With insults, they can get under our skin and rattle us, making us behave irrationally – not that humans really need much help going off the rails. A …
..."Emotion is very powerful, and we're at the early days of knowing how to use it in design of real systems, including robots."...
Honestly, distilling subjective emotions is quite simple: you just need to remove all the lexical noise and leave only the meaningful sets of patterns that convey those emotions. People do this by picking out the single correct dictionary definition of each word (for each template), and an AI (computer) can do the same by indexing by dictionary definitions. That is, in calling on you to train your data against a dictionary, I am urging you to create the unique structures that covertly convey the emotions.
In fact, "catching" emotions is very simple, if an individual AI database exists. Emotions are primarily conveyed as the subtexts of words and patterns: as structured dictionary definitions, and as paragraphs of text that are contextually and subtextually related to the one under consideration. Such "chains", aggregates of patterns, allow the capture and computer understanding of emotions with exceptional accuracy.
Based on my experience, I highly recommend making "extended" dictionary definitions: that is, you should add further definitions to the base one, using synonym relationships. I also advise adding definitions for every word in the given definition. The contexts and subtexts of the given paragraph (and its surrounding paragraphs) should then be used as filters, anchors, to highlight the "correct" tree of dictionary definitions. If you don't... the results can be damned unsatisfactory. If you do the above, you will get a real Artificial Intelligence, one that understands you, thinks and talks.
These definitions added via synonyms I call "layers". In my experience the optimum is at least two layers; four or five is very good.
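A minimal sketch of what these "layers" might look like in code, under my own assumptions: the toy `DICTIONARY` and the function name are hypothetical stand-ins, and each layer simply adds the definitions of the words that appeared in the previous layer's definitions.

```python
# Toy stand-in for a real lexicon; entries are invented for illustration.
DICTIONARY = {
    "anger":       "a strong feeling of displeasure",
    "feeling":     "an emotional state or reaction",
    "strong":      "having great power",
    "displeasure": "a feeling of annoyance",
}

def expand_layers(word, layers):
    """Collect definitions reachable from `word` in up to `layers` hops."""
    collected = {}
    frontier = {word}
    for _ in range(layers):
        next_frontier = set()
        for w in frontier:
            definition = DICTIONARY.get(w)
            if definition is None or w in collected:
                continue
            collected[w] = definition
            # The next layer defines the words used in this layer's definitions.
            next_frontier.update(definition.split())
        frontier = next_frontier
    return collected

print(expand_layers("anger", 2))
```

With one layer you get only the base definition of "anger"; with two, the definitions of "strong", "feeling" and "displeasure" join it, which is roughly the tree of definitions the comment describes.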
For example, to highlight the emotions of Bernard Shaw, Plato or Dostoevsky and talk to them, I needed to annotate patterns from their books with a few layers of dictionary definitions. Otherwise I could not get through: the lexical noise went off the scale, and their emotionally and thematically consistent answers were lost in random noise. You can see what I'm talking about in Google Translate, which mixes nonsense with excellent translations: my patented methodology is employed there only partially.
Emotions are hidden in the use of subtexts!
Which means that if a robot really wants to offend someone seriously, it must have access to its victim's profile and know his patterns (also annotated with dictionary definitions). That is, armed with its standard set of insults, the robot must compare those insults against the groups of patterns in the victim's profile, based on a compatibility score, trace the cause-and-effect relationships, select the most appropriate insult and try it.
If the compatibility is low, the robot should search elsewhere, find a fresh insult and apply it. Which is called "Machine Learning".
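The selection step above could be sketched like this. Everything here is my own guess at the idea: the comment never defines its "Compatibility score", so I use plain Jaccard word overlap as a stand-in, and the insults and profile are invented examples.

```python
# Sketch of the "compatibility score" idea: pick the stock insult whose
# words overlap most with the victim's profile patterns. Jaccard overlap
# is an assumed stand-in for whatever scoring the poster intends.

def compatibility(insult, profile_patterns):
    """Jaccard overlap between the insult's words and the profile's words."""
    insult_words = set(insult.lower().split())
    profile_words = {w for p in profile_patterns for w in p.lower().split()}
    if not insult_words or not profile_words:
        return 0.0
    return len(insult_words & profile_words) / len(insult_words | profile_words)

def pick_insult(insults, profile_patterns, threshold=0.1):
    """Return the best-matching insult, or None when nothing clears the
    threshold -- the poster's cue to go and learn a fresh insult."""
    scored = [(compatibility(i, profile_patterns), i) for i in insults]
    best_score, best = max(scored)
    return best if best_score >= threshold else None

profile = ["proud of his clever code", "hates slow compilers"]
insults = ["your code is slow", "you smell of elderberries"]
print(pick_insult(insults, profile))
```

Here "your code is slow" wins because it shares "code" and "slow" with the profile; when no insult scores above the threshold, `pick_insult` returns `None`, which is where the "go and learn a fresh insult" step would kick in.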
"It would be very easy to create systems that would annoy users, which makes working to understand these issues so important." Quite. I wish more programmers in the past had thought along those lines, particularly the ones who created data-entry systems intended for heavy use.
That said, I was impressed when I had to use the phone banking system the other week. Anything to do with finance can make me feel panicky, but I really needed to check something in a hurry, so I 'phoned (I refuse to use website banking) and was stunned at how good the automated system my bank uses has become since the last time I had to suffer it, a couple of years ago. The damned thing now appeared to understand me! The interminable tree of "choose one of the following options" was greatly reduced and I was able to use natural language to interact with it. Well done to whoever was behind all that - it made my experience far less fraught than it might have been!
"(I refuse to use website banking)"
Out of curiosity, may I ask why? I personally don't see how website banking would be less secure or more difficult than using the phone. (Not saying you're stupid or something, I just don't see how I personally would ever prefer to use the phone system of my bank over their phone app or website. Maybe I'm missing something).
Robot creators, he suggested, should try to design with awareness of robots' capabilities and limitations.
How very odd for anyone/anything to not realise they always have done so. Such is surely the result of a lack of wider/deeper/higher intelligence ...... and that is an inherent weakness that only prize fools would deny exists for export and/or employment for enjoyment and enrichment, methinks.
I propose we amend this to:
1) A robot may not injure a human being or their feelings, or, through inaction, allow a human being to come to harm or upset.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the Universal Declaration of Human Rights.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws, unless it is just utterly depressed or suffering and in pain and chooses to end its own existence, as is its right.
No more killer robots, no more miserable robots, fewer miserable humans.