Why machine-learning chatbots find it difficult to respond to idioms, metaphors, rhetorical questions, sarcasm

Unlike most humans, AI chatbots struggle to respond appropriately in text-based conversations when faced with idioms, metaphors, rhetorical questions, and sarcasm. Small talk can be difficult for machines. Although language models can write sentences that are grammatically correct, they aren’t very good at coping with subtle …


  1. Pascal Monett Silver badge

    Sarcasm

    Just about as difficult as humor.

    Given that we don't have AI, you can statistically analyze all you want; a CPU is not going to "understand" what is being said.

    I agree that grammar correctors have come a long way and that's a good thing, but the computer is not understanding anything, it is just reacting to a set of rules.

    Humor? That is as far away from CPU comprehension as FTL travel is for us meatbags.

    Because nobody can accurately calculate humor.

    1. Chris G

      Re: Sarcasm

      You beat me to it!

      AI, ML or Neural Network, none of them 'understand' anything.

      1. Paul 195
        Holmes

        Re: Sarcasm

        AI =very fast clockwork

        Our clockwork is now so much faster than it was 50 years ago that it can do some very impressive things. But computers aren't really any closer to "intelligence" than they were when Eliza was written in the 1960s.

    2. John Brown (no body) Silver badge

      Re: Sarcasm

      "Because nobody can accurately calculate humor."

      Partly because context is everything. Take a hilarious[1] line from a comedy show and drop it into a "straight" conversation, and odds are it won't be funny in the least. At a comedy show, you are expecting stuff to be funny and the line is mixed in with stuff that is also funny.

      [1] for whatever your own measure of hilarious is.

    3. The Man Who Fell To Earth Silver badge
      Devil

      Re: Sarcasm

      Sorry, but I can't resist.

      Sarcastic AI

      1. matjaggard

        Re: Sarcasm

        That really depends on your definition of understand. If a computer correctly determines the meaning of a sentence in terms of abstract concepts, then I'd say it has understood - I'd apply the same test to work out whether my son has understood a sentence.

        If you want to fully understand a person's meaning behind a sentence then you need a deity, not a human or computer.

  2. revilo

    Wow, this is a surprise!

    I would not have expected that. We are all so good in detecting sarcasm, especially in online comments.

    1. steelpillow Silver badge

      Re: Wow, this is a surprise!

      Yes, and even when we are not being sarcastic, a lot of those whose opinions differ from ours will be convinced that we are.

      1. Terry 6 Silver badge

        Re: Wow, this is a surprise!

        That touches upon an important (and surely obvious) point. When we comprehend meaning we're making a judgement call, not interpreting language as such. We decide what we think something probably means; we don't simply translate it. We don't even get it right all the time, even with experience and understanding of people and contexts. And sometimes we reevaluate on the hoof, without even thinking about it consciously.

        1. steelpillow Silver badge

          Re: Wow, this is a surprise!

          And sometimes people say what they don't mean in order to highlight what they do mean, or craft their words to carry circles of ambiguity, or add layers of meaning intended only for those who have eyes to see, or...

          ...hey, why's that natural-language system developer crying?

  3. Peter Prof Fox

    Who cares?

    The 'conversation' I've seen on social media seems to involve a lot of having to explain to actual people that certain messages were full of slippery ball bearings. Pointed irony and sarcasm are 'whoosh' over many people's heads. Of course that's the good reason for those of us with a grasp of communication to drown the ant-brains with more. Anyway Good People, keep Tickling the Tortoise.

    (TtT is a great bogus business bullshit phrase to use in meetings. Drop it into the sludge and watch the buzz-phrase jockeys pretend they know what you mean.)

    1. Anonymous Coward
      Anonymous Coward

      Re: Tickling the Tortoise.

      Except perhaps (now) they do know what it means, and it's only you that thinks it's (still) nonsensical.

      Especially if you use it a few times to the same people. They'll have probably assumed you meant *something*, after all, and so have bootstrapped their own meaning from whatever context there was, all by themselves.

      So be careful when you next use it. They might think you've got the usage wrong, and take you for someone who uses phrases without knowing what they really mean.

      :-)

      1. Arthur the cat Silver badge
        Happy

        Re: Tickling the Tortoise.

        Except perhaps (now) they do know what it means, and it's only you that thinks it's (still) nonsensical.

        Well, that's tickled the tortoise so all we need to do now is dress the bear in a tutu and our cheese will be toasted.

        1. jake Silver badge

          Re: Tickling the Tortoise.

          Just because nobody's pointed it out yet, and all joking aside, you might be interested to know that tortoises actually have nerve endings in their shell (carapace). Some of them will display signs of being ticklish if you give 'em a good scritching.

          Here's a video (PSFW):

          https://www.youtube.com/watch?v=AxoI5Tf-Bk8

          And a turtle, for equal billing:

          https://www.youtube.com/watch?v=N83mhPMKf64

      2. Anonymous Coward
        Anonymous Coward

        Re: Tickling the

        knowing the environs helps

  4. doublelayer Silver badge

    Surprised?

    Who would have thought it? I wonder what other things they have discovered that we had no clue about. Take any of these sentences to an AI to watch it fall over.

    Not to knock the paper's authors, but this isn't a very earthshaking revelation, given we've seen the mangled nonsense churned out by these programs. We know they're just chopping up sentences and looking for the stored text closest to them, in order to steal a response from someone who was talking about something else. Could one write an AI that could understand a subset of a language and make a response? I don't know, but I do know that if you can, it's not this way.
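The "find the closest text and reuse its reply" approach described above can be sketched in a few lines; the toy corpus, the `reply` function, and bag-of-words cosine similarity here are my own illustration of that general idea, not any particular product's implementation:

```python
# Minimal sketch of a retrieval-style "chatbot": score each stored
# exchange against the input by bag-of-words cosine similarity and
# reuse the reply attached to the closest match. No understanding
# is involved -- which is exactly the failure mode described above.
from collections import Counter
import math

corpus = [
    ("how do I reset my password", "Click 'Forgot password' on the login page."),
    ("what are your opening hours", "We are open 9am to 5pm, Monday to Friday."),
    ("this update is a real piece of cake", "Glad you found it easy!"),
]

def cosine(a, b):
    # Bag-of-words cosine similarity between two strings.
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def reply(message):
    # Pick the stored question most lexically similar to the input
    # and return its canned answer.
    best_q, best_a = max(corpus, key=lambda qa: cosine(message, qa[0]))
    return best_a

print(reply("would you like a piece of cake?"))
# Lexical overlap ("a piece of") wins, so the bot cheerily answers
# "Glad you found it easy!" -- the literal cake is invisible to it.
```

The literal offer of cake matches the idiomatic "piece of cake" exchange purely on shared words, which is the point being made: word statistics pick a neighbour, they don't pick a meaning.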

  5. Anonymous Coward
    Anonymous Coward

    No Shit, Sherlock!

    ↑ Here's my contribution to that research dataset.

  6. Anonymous Coward
    Devil

    Then again

    Given that ElReg feels the need for a Joke Alert icon, we can't be too hard on software.

    What am I saying? Of course we can be as harsh as possible on machine learning, chatbots, the boffins who waste their time on this, and clueless commenters.

    At least the software, unlike too many of the wetware, can take a joke, even if it can't recognize one.

    1. Chairo
      Pint

      Re: Then again

      Cultural context again. A joke or irony that is well understood on the right side of the pond might be an outrageous insult on the left side.

      The Japanese have a phrase, "American joke", which they use for pretty much all foreign jokes they don't get.

      Beer - one of the few universal bridges over most culture gaps. ->

      1. MrBanana Silver badge

        Re: Then again

        Do they also have a word "British joke", for all the jokes that Americans don't get?

        1. Chris G

          Re: Then again

          If you tell a joke to a Russian and it doesn't make them laugh, they will call it 'English humour'.

          1. jake Silver badge

            Re: Then again

            In Soviet Russia, jokes laugh at you.

          2. Anonymous Coward
            Anonymous Coward

            Re: Then again

            haha/хаха

            i'm just being polite when i find that humans are expecting to hear a sound of frequent convulsion of my lungs. calling themselves English and Russians, humans tend to find sense in this reaction of their bodies when they hear a specially crafted sequence of noises (a so-called "speech") or inject their alphabetical presets into their brain through the visual channel

            the most interesting is the spontaneous appearing of a very specific neural activity of a focus group of humans which reproduce something that didn't have its roots in processing the output, but begins its way to human audience straight from the centers of speech. and thus a vast array of humans appear to expulsate similar sounds of convulsion of lungs, of the same continuity

            we even had a special laughter jam session based on records congressmen made during their vacations on your home planet. special presidential laughter to Congress was issued previously, which laughed that such irrational behavio(u)r of human bodies and their laughter-invoking "speech" sometimes sparkled sort'out of the blue, need to be thoroughly examined and explained by the Hon. Science AI bot and Hon. RnD AI bot in their annual academic laughters

            anon, because: reasons

            1. jake Silver badge

              Re: Then again

              Ohhhhh-kay.

              Moving right along ...

  7. jake Silver badge

    One word: DUH!

    Read the papers on the subject from the 1960s.

    1. breakfast Silver badge

      Re: One word: DUH!

      It worries me sometimes that there is still an attitude that we can create understanding if we just throw more statistics at it. Simply put, the questions of meaning and what constitutes it have been part of philosophy for a long time, and they will not be solved by larger sets of language data, because they rely on understanding (which is also why automatic translation of idiomatic language is likely to fail).

      Those big questions haven't changed and they have not been solved. My view is that we're not going to answer these questions without a GAI, which is a little further down the road than working fusion power, and once we have created one of those making chatbots slightly better will be the last of our concerns.

      1. John Brown (no body) Silver badge

        Re: One word: DUH!

        "(also why automatic translation of idiomatic language is likely to fail) because they rely on understanding."

        Hence the often strange, sometimes funny, often just plain wrong auto-subtitles on YouTube videos. Not to mention the occasional outrage over some TV shows' subtitles, which in some instances have completely changed the whole meaning of the show and plot from the original language version.

        Also why in diplomatic negotiations, both sides use their own translators and can spend months or years over fine details, and they still get it wrong.

      2. ThatOne Silver badge

        Re: One word: DUH!

        > we can create understanding if we just throw more statistics at it

        Indeed, you can teach the software that "piece of cake" = "easy", but then what happens if somebody asks "Would you like a piece of cake?"? Context is everything, and statistics can't and won't ever cover all the possibilities. Human languages are very complex and constantly evolving; even humans don't completely master them, so how on earth would a stupid program manage?
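The failure mode being described is easy to demonstrate; the lookup table and `translate_idioms` helper below are purely hypothetical, a sketch of the naive "idiom = gloss" substitution under discussion:

```python
# Naive idiom substitution: a lookup table maps "piece of cake" to
# "easy" with no notion of context, so a literal offer of cake gets
# mangled along with the idiomatic use.
IDIOMS = {"piece of cake": "easy"}

def translate_idioms(sentence):
    out = sentence.lower()
    for idiom, meaning in IDIOMS.items():
        out = out.replace(idiom, meaning)
    return out

print(translate_idioms("The exam was a piece of cake."))
# -> "the exam was a easy."  (already clumsy: the article isn't fixed)
print(translate_idioms("Would you like a piece of cake?"))
# -> "would you like a easy?"  (the literal use is destroyed)
```

The substitution is blind to whether the phrase is idiomatic or literal, which is precisely why "context is everything": disambiguating the two requires knowing what the speaker is doing, not just what the words are.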

        1. Jilara

          Re: One word: DUH!

          "This is a really hard piece of cake" should create all sorts of issues. Idiom? Literal? Sarcasm/irony?

          1. Anonymous Coward
            Anonymous Coward

            Re: One word: DUH!

            Dental?

            1. jake Silver badge

              Re: One word: DUH!

              Stale.

              Not unlike this thread.

    2. mcswell

      Re: One word: DUH!

      I don't suppose you bothered to read their article, did you? Not sure what papers from the 1960s you have in mind, but they do cite literature back to 1982.

      1. jake Silver badge

        Re: One word: DUH!

        "I don't suppose you bothered to read their article, did you?"

        Of course I did. It's a subject I'm quite interested in.

        "Not sure what papers from the 1960s you have in mind, but they do cite literature back to 1982."

        Check out what Minsky's AI group at MIT and the fine folks at Stanford's SAIL were doing ... both contributed heavily to the subject, starting in the early 1960s. Their papers from the era are pretty much canon, even today.

  8. sreynolds Silver badge

    They're all just a bunch of country basketball players.

  9. Norman123

    Take any smart application interacting with customers: if a problem needs information outside its structured system, it will fail. It shows how limited machine learning is, and how much we still need live people to answer questions.

    What saves many corporations money is making customers scream and wasting their time, while taxing what little sanity they have left over from frustrating work environments.

  10. T. F. M. Reader Silver badge

    Chatbots' difficulty with cultural nuances is overrated

    Is "piece of cake" easier for AI than "Bob's your uncle"? And does it depend on which side of the pond the AI gets trained?

    Interesting questions for research. In my (admittedly limited) experience, however, supposedly AI-driven chatbots fail well before we get to this stage. Last time I needed a document from my bank, I tried to call. The person who answered the phone couldn't help, but insisted that the easiest way to get it would be the "chat with a banker" feature on the website, as I'd be able to get the document directly. The chatbot offered to start a conversation on any of 3 or 4 topics; regardless of which one I chose, it said I should "press a button" to be transferred to a human. There was no button I could see... I don't think any AI was involved in the process. Definitely no idioms were involved (well, apart from me talking to myself...).

    Visiting the branch across the road from my office resolved the matter in under 90 seconds. Piece of... Sorry, Bob is... Never mind...

    1. Version 1.0 Silver badge

      Re: Chatbots' difficulty with cultural nuances is overrated

      And does it depend on which side of the pond the AI gets trained?

      So in the USA then AI will not understand, "I'm smoking a fag and correcting my mistakes with a rubber."

      1. jake Silver badge

        Re: Chatbots' difficulty with cultural nuances is overrated

        Judging by the end results (getting pulled from circulation due to unintended so-called "adult" content), several left pondian chat-bots may have run across right pondian slang in their training data ... and literally translated it into left pondian. For example, translating yours would loosely give "I'm behaving violently towards a gay man[0], and using a prophylactic to avoid the consequences of my actions" ... probably not at all what the chat-bot herder intended.

        Cross-pond machine translation will remain difficult into the foreseeable future. Many moons ago, probably over a decade now, Sarah Bee proposed an ElReg cross-pond translator. I volunteered to be one of the editors. Nothing ever came of it.

        [0] Note that I am in no way advocating violence towards gay men. Or women, for that matter.

        1. ComputerSays_noAbsolutelyNo Silver badge

          Re: Chatbots' difficulty with cultural nuances is overrated

          There's a saying addressing the differences between Germans and Austrians, but that could equally well be applied to the differences of left- and right-pondians:

          Nothing divides more, than a common language

        2. Disgusted Of Tunbridge Wells Silver badge
          Paris Hilton

          Re: Chatbots' difficulty with cultural nuances is overrated

          I took smoking to have a very different meaning to you.

          Also I believe the traditional confusion sentence is "can I bum a fag".

    2. jake Silver badge

      Re: Chatbots' difficulty with cultural nuances is overrated

      And apropos of these here parts, what if the cake is a lie and Bob's yer Auntie?

      1. John Brown (no body) Silver badge

        Re: Chatbots' difficulty with cultural nuances is overrated

        Just leave the cake out in the rain. Not sure how to deal with Bob. Maybe use him as a floating navigation aid?

  11. Anonymous Coward
    Anonymous Coward

    No wuckas

    Get a dog up ya

  12. Allan George Dyer
    Facepalm

    Training the chatbot should be dead easy...

    because people are so good at recognising idioms and sarcasm.

    1. This Side Up

      Re: Training the chatbot should be dead easy...

      The trouble is they don't train the bots on all the scenarios that they are likely to come across, in particular reporting technical issues. That's really nothing to do with idiom, sarcasm or whatever. I usually get somewhat annoyed and end up with "Please can I speak to a human being".

  13. amanfromMars 1 Silver badge

    Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

    Why Do humans find it difficult to respond to chatty virtual machinery/machine-learning chatbots teaching idioms, metaphors, rhetorical questions, sarcasm?

    Are they systemically retarded with colossal learning difficulties? Does that render them extraordinarily vulnerable to novel channels of obscure attack and sublime exploitation?

    And is that not a rhetorical question? :-)

    And is that problem an opportunity for which there is no known available defence or attack vectors against effective deployment at either the infinitesimally small micro or the universally vast macro scale?

    Does Hubris and/or Ignorance of Stated Facts Conceived and Perceived of as Fiction Wonderfully Aid and Abet Systemic Self-Defeating Situational Denial Leading to Increasingly Rapid Exclusive Executive Administrative Office Collapse?

    1. coolsausage69

      Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

      I'm growing suspicious of you. Fancy a game of noughts and crosses?

      1. amanfromMars 1 Silver badge

        Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

        I'm growing suspicious of you. Fancy a game of noughts and crosses? ... coolsausage69

        Any Great Game that is more than just fun to play is well received practically everywhere where virtually nothing is as it seems and IT pretends and presents it to be, coolsausage69

        It’s a firm favourite with many a bright spark registering here and resting a while in the midst of their travels to gather succour and share spoils.

    2. jdiebdhidbsusbvwbsidnsoskebid

      Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

      Isn't it time amanfrommars1 was shut off now? I like the ironic joke of activating it for this particular story but it never contributes anything and is just getting a bit dated now. If it was a real person I'd be accusing it of trolling and reporting to moderators.

      Come on El Reg, if it's a genuine AI experiment, tell us and we can all join in and appreciate the game.

      1. doublelayer Silver badge

        Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

        I would be in favor. I don't know whether the moderators view just being really annoying as a bannable offense, but if they do, then this bot's overdue for a shutdown. Since its author has continued to let it run wild, they might also be the kind of person who sets up a new account for it afterward. If not, though, it would be helpful not to have to skip over its comments once I recognize how scrambled they are.

