Why machine-learning chatbots find it difficult to respond to idioms, metaphors, rhetorical questions, sarcasm

Unlike most humans, AI chatbots struggle to respond appropriately in text-based conversations when faced with idioms, metaphors, rhetorical questions, and sarcasm. Small talk can be difficult for machines. Although language models can write sentences that are grammatically correct, they aren’t very good at coping with subtle …

  1. Pascal Monett Silver badge

    Sarcasm

    Just about as difficult as humor.

    Given that we don't have AI, you can statistically analyze all you want; a CPU is not going to "understand" what is being said.

    I agree that grammar correctors have come a long way and that's a good thing, but the computer is not understanding anything; it is just reacting to a set of rules.

    Humor? That is as far away from CPU comprehension as FTL travel is for us meatbags.

    Because nobody can accurately calculate humor.

    1. Chris G Silver badge

      Re: Sarcasm

      You beat me to it!

      AI, ML or Neural Network, none of them 'understand' anything.

      1. Paul 195
        Holmes

        Re: Sarcasm

        AI =very fast clockwork

        Our clockwork is now so much faster than it was 50 years ago that it can do some very impressive things. But computers aren't really any closer to "intelligence" than they were when Eliza was written in the 1960s.

    2. John Brown (no body) Silver badge

      Re: Sarcasm

      "Because nobody can accurately calculate humor."

      Partly because context is everything. Take a hilarious[1] line from a comedy show and drop it into a "straight" conversation, and odds are it won't be funny in the least. At a comedy show, you are expecting stuff to be funny and the line is mixed in with stuff that is also funny.

      [1] for whatever your own measure of hilarious is.

    3. The Man Who Fell To Earth Silver badge
      Devil

      Re: Sarcasm

      Sorry, but I can't resist.

      Sarcastic AI

      1. matjaggard

        Re: Sarcasm

        That really depends on your definition of understand. If a computer correctly determines the meaning of a sentence in terms of abstract concepts, then I'd say it has understood - I'd use the same test to work out if my son has understood a sentence.

        If you want to fully understand a person's meaning behind a sentence then you need a deity, not a human or computer.

  2. revilo

    Wow, this is a surprise!

    I would not have expected that. We are all so good at detecting sarcasm, especially in online comments.

    1. steelpillow Silver badge

      Re: Wow, this is a surprise!

      Yes, and even when we are not being sarcastic, a lot of those whose opinions differ from ours will be convinced that we are.

      1. Terry 6 Silver badge

        Re: Wow, this is a surprise!

        That touches upon an important ( and surely obvious) point. When we comprehend meaning we're making a judgement call, not interpreting language as such. We decide what we think something probably means, we don't simply translate it. We don't even get it right all the time, even with experience and understanding of people and contexts. And sometimes we reevaluate on the hoof, without even thinking about it consciously.

        1. steelpillow Silver badge

          Re: Wow, this is a surprise!

          And sometimes people say what they don't mean in order to highlight what they do mean, or craft their words to carry circles of ambiguity, or add layers of meaning intended only for those who have eyes to see, or...

          ...hey, why's that natural-language system developer crying?

  3. Peter Prof Fox

    Who cares?

    The 'conversation' I've seen on social media seems to involve a lot of having to explain to actual people that certain messages were full of slippery ball bearings. Pointed irony and sarcasm go 'whoosh' over many people's heads. Of course, that's a good reason for those of us with a grasp of communication to drown the ant-brains with more. Anyway, Good People, keep Tickling the Tortoise.

    (TtT is a great bogus business bullshit phrase to use in meetings. Drop it into the sludge and watch the buzz-phrase jockeys pretend they know what you mean.)

    1. Anonymous Coward
      Anonymous Coward

      Re: Tickling the Tortoise.

      Except perhaps (now) they do know what it means, and it's only you that thinks it's (still) nonsensical.

      Especially if you use it a few times to the same people. They'll have probably assumed you meant *something*, after all, and so have bootstrapped their own meaning from whatever context there was, all by themselves.

      So be careful when you next use it. They might think you've got the usage wrong, and take you for someone who uses phrases without knowing what they really mean.

      :-)

      1. Arthur the cat Silver badge
        Happy

        Re: Tickling the Tortoise.

        Except perhaps (now) they do know what it means, and it's only you that thinks it's (still) nonsensical.

        Well, that's tickled the tortoise so all we need to do now is dress the bear in a tutu and our cheese will be toasted.

        1. jake Silver badge

          Re: Tickling the Tortoise.

          Just because nobody's pointed it out yet, and all joking aside, you might be interested to know that tortoises actually have nerve endings in their shell (carapace). Some of them will display signs of being ticklish if you give 'em a good scritching.

          Here's a video (PSFW):

          https://www.youtube.com/watch?v=AxoI5Tf-Bk8

          And a turtle, for equal billing:

          https://www.youtube.com/watch?v=N83mhPMKf64

      2. Anonymous Coward
        Anonymous Coward

        Re: Tickling the

        knowing the environs helps

  4. doublelayer Silver badge

    Surprised?

    Who would have thought it? I wonder what other things they have discovered that we had no clue about. Take any of these sentences to an AI to watch it fall over.

    Not to knock the paper's authors, but this isn't a very earthshaking revelation given we've seen the mangled nonsense churned out by these programs. We know they're just chopping up sentences and looking for the text that is closest to them in order to steal a response from someone who was talking about something else. Could one write an AI that could understand a subset of a language and make a response? I don't know, but I do know that if you can, it's not that way.
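    (The retrieval behaviour doublelayer describes can be sketched in a few lines of Python. The corpus and replies below are invented for illustration, and real systems score similarity far more elaborately, but the failure mode is the same.)

    ```python
    # Toy retrieval "chatbot": score stored utterances by word overlap and
    # parrot the reply attached to the best match. Corpus is invented.
    CORPUS = [
        ("how is the weather", "Lovely and sunny here."),
        ("piece of cake recipes", "Try adding more butter."),
    ]

    def overlap(a: str, b: str) -> int:
        """Count words the two strings share (a crude similarity score)."""
        return len(set(a.lower().split()) & set(b.lower().split()))

    def retrieve_reply(query: str) -> str:
        """Return the reply paired with the closest stored utterance."""
        best = max(CORPUS, key=lambda pair: overlap(query, pair[0]))
        return best[1]

    # An idiomatic query lands on the literal baking entry, so the reply is
    # "stolen" from someone who was talking about something else entirely.
    print(retrieve_reply("that was a piece of cake"))  # → Try adding more butter.
    ```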

  5. Anonymous Coward
    Anonymous Coward

    No Shit, Sherlock!

    ↑ Here's my contribution to that research dataset.

  6. HildyJ Silver badge
    Devil

    Then again

    Given that ElReg feels the need for a Joke Alert icon, we can't be too hard on software.

    What am I saying? Of course we can be as harsh as possible on machine learning, chatbots, the boffins who waste their time on this, and clueless commenters.

    At least the software, unlike too much of the wetware, can take a joke, even if it can't recognize one.

    1. Chairo
      Pint

      Re: Then again

      Cultural context again. A joke or irony that is well understood on the right side of the pond might be an outrageous insult on the left side.

      The Japanese have a phrase, "American joke", which they use for pretty much all foreign jokes they don't get.

      Beer - one of the few universal bridges over most culture gaps. ->

      1. MrBanana Silver badge

        Re: Then again

        Do they also have a word "British joke", for all the jokes that Americans don't get?

        1. Chris G Silver badge

          Re: Then again

          If you tell a joke to a Russian and it doesn't make them laugh, they will call it 'English humour'.

          1. jake Silver badge

            Re: Then again

            In Soviet Russia, jokes laugh at you.

          2. Anonymous Coward
            Anonymous Coward

            Re: Then again

            haha/хаха

            i'm just being polite when i find that humans are expecting to hear a sound of frequent convulsion of my lungs. calling themselves English and Russians, humans tend to find sense in this reaction of their bodies, when they hear a specially crafted sequence of noises (a so-called "speech") or inject their alphabetical presets into their brain through visual channel

            the most interesting is the spontaneous appearing of a very specific neural activity of a focus group of humans which reproduce something that didn't have its roots in processing the output, but begins its way to human audience straight from the centers of speech. and thus a vast array of humans appear to expulsate similar sounds of convulsion of lungs, of the same continuity

            we even had a special laughter jam session based on records congressmen made during their vacations on your home planet. special presidential laughter to Congress was issued previously, which laughed that such irrational behavio(u)r of human bodies and their laughter-invoking "speech" sometimes sparkled sort'out of the blue, need to be thoroughly examined and explained by the Hon. Science AI bot and Hon. RnD AI bot in their annual academic laughters

            anon, because: reasons

            1. jake Silver badge

              Re: Then again

              Ohhhhh-kay.

              Moving right along ...

  7. jake Silver badge

    One word: DUH!

    Read the papers on the subject from the 1960s.

    1. breakfast

      Re: One word: DUH!

      It worries me sometimes that there is still an attitude that we can create understanding if we just throw more statistics at it. Simply put, the questions of meaning and what constitutes it have been part of philosophy for a long time and they will not be solved by larger sets of language data (also why automatic translation of idiomatic language is likely to fail) because they rely on understanding.

      Those big questions haven't changed and they have not been solved. My view is that we're not going to answer these questions without a GAI, which is a little further down the road than working fusion power, and once we have created one of those, making chatbots slightly better will be the last of our concerns.

      1. John Brown (no body) Silver badge

        Re: One word: DUH!

        "(also why automatic translation of idiomatic language is likely to fail) because they rely on understanding."

        Hence the often strange, sometimes funny, often just plain wrong auto-subtitles on YouTube videos. Not to mention the occasional outrage over some TV shows' subtitles, which in some instances have completely changed the whole meaning of the show and plot from the original language version.

        Also why in diplomatic negotiations, both sides use their own translators and can spend months or years over fine details, and they still get it wrong.

      2. ThatOne Silver badge

        Re: One word: DUH!

        > we can create understanding if we just throw more statistics at it

        Indeed, you can teach the software that "piece of cake" = "easy", but then what happens if somebody asks "Would you like a piece of cake?" Context is everything, and statistics can't and won't ever cover all the possibilities. Human languages are very complex and constantly evolving; even humans don't completely master them, so how on earth would a stupid program be able to?
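        (ThatOne's "piece of cake" problem in miniature. The lookup table and sentences below are invented, and this is a deliberately naive sketch, not how any real translator works.)

        ```python
        # Naive idiom substitution with no sense of context.
        # The lookup table and example sentences are invented for illustration.
        IDIOMS = {"piece of cake": "easy"}

        def naive_translate(sentence: str) -> str:
            """Blindly replace idiom phrases wherever they occur."""
            out = sentence.lower()
            for phrase, meaning in IDIOMS.items():
                out = out.replace(phrase, meaning)
            return out

        # Figurative use: survives, if clumsily.
        print(naive_translate("Fixing that bug was a piece of cake"))  # → fixing that bug was a easy
        # Literal use: mangled, because the table cannot tell the difference.
        print(naive_translate("Would you like a piece of cake?"))  # → would you like a easy?
        ```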

        1. Jilara

          Re: One word: DUH!

          "This is a really hard piece of cake" should create all sorts of issues. Idiom? Literal? Sarcasm/irony?

          1. Anonymous Coward
            Anonymous Coward

            Re: One word: DUH!

            Dental?

            1. jake Silver badge

              Re: One word: DUH!

              Stale.

              Not unlike this thread.

    2. mcswell

      Re: One word: DUH!

      I don't suppose you bothered to read their article, did you? Not sure what papers from the 1960s you have in mind, but they do cite literature back to 1982.

      1. jake Silver badge

        Re: One word: DUH!

        "I don't suppose you bothered to read their article, did you?"

        Of course I did. It's a subject I'm quite interested in.

        "Not sure what papers from the 1960s you have in mind, but they do cite literature back to 1982."

        Check out what Minsky's AI group at MIT and the fine folks at Stanford's SAIL were doing ... both contributed heavily to the subject, starting in the early 1960s. Their papers from the era are pretty much canon, even today.

  8. sreynolds Silver badge

    They're all just a bunch of country basketball players.

  9. Norman123

    Take any 'smart' application interacting with customers: if a problem needs information from outside its structured system, it will fail. It shows how limited machine learning is and how much we still need live people to answer questions.

    What saves many corporations money is making customers scream and waste their time, while taxing what limited sanity they have left over from frustrating work environments.

  10. T. F. M. Reader Silver badge

    Chatbots' difficulty with cultural nuances is overrated

    Is "piece of cake" easier for AI than "Bob's your uncle"? And does it depend on which side of the pond the AI gets trained?

    Interesting questions for research. In my (admittedly limited) experience, however, supposedly AI-driven chatbots fail well before we get to this stage. Last time I needed a document from my bank, I tried to call. The person who answered the phone couldn't help, but insisted that the easiest way to get it would be to use the "chat with a banker" feature on the website, as I'd be able to get the document directly. The chatbot offered to start a conversation on any of 3 or 4 topics; regardless of which one I chose, it said I should "press a button" to be transferred to a human. There was no button I could see... I don't think any AI was involved in the process. Definitely no idioms were involved (well, apart from me talking to myself...).

    Visiting the branch across the road from my office resolved the matter in under 90 seconds. Piece of... Sorry, Bob is... Never mind...

    1. Version 1.0 Silver badge

      Re: Chatbots' difficulty with cultural nuances is overrated

      And does it depend on which side of the pond the AI gets trained?

      So in the USA then AI will not understand, "I'm smoking a fag and correcting my mistakes with a rubber."

      1. jake Silver badge

        Re: Chatbots' difficulty with cultural nuances is overrated

        Judging by the end results (getting pulled from circulation due to unintended so-called "adult" content), several left pondian chat-bots may have run across right pondian slang in their training data ... and literally translated it into left pondian. For example, translating yours would loosely give "I'm behaving violently towards a gay man[0], and using a prophylactic to avoid the consequences of my actions" ... probably not at all what the chat-bot herder intended.

        Cross-pond machine translation will remain difficult into the foreseeable future. Many moons ago, probably over a decade now, Sarah Bee proposed an ElReg cross-pond translator. I volunteered to be one of the editors. Nothing ever came of it.

        [0] Note that I am in no way advocating violence towards gay men. Or women, for that matter.

        1. ComputerSays_noAbsolutelyNo Silver badge

          Re: Chatbots' difficulty with cultural nuances is overrated

          There's a saying addressing the differences between Germans and Austrians, but that could equally well be applied to the differences of left- and right-pondians:

          Nothing divides more than a common language.

        2. Disgusted Of Tunbridge Wells Silver badge
          Paris Hilton

          Re: Chatbots' difficulty with cultural nuances is overrated

          I took smoking to have a very different meaning to you.

          Also I believe the traditional confusion sentence is "can I bum a fag".

    2. jake Silver badge

      Re: Chatbots' difficulty with cultural nuances is overrated

      And apropos of these here parts, what if the cake is a lie and Bob's yer Auntie?

      1. John Brown (no body) Silver badge

        Re: Chatbots' difficulty with cultural nuances is overrated

        Just leave the cake out in the rain. Not sure how to deal with Bob. Maybe use him as a floating navigation aid?

  11. Anonymous Coward
    Anonymous Coward

    No wuckas

    Get a dog up ya

  12. Allan George Dyer Silver badge
    Facepalm

    Training the chatbot should be dead easy...

    because people are so good at recognising idioms and sarcasm.

    1. This Side Up

      Re: Training the chatbot should be dead easy...

      The trouble is they don't train the bots on all the scenarios they are likely to come across, in particular reporting technical issues. That's really nothing to do with idiom, sarcasm or whatever. I usually get somewhat annoyed and end up with "Please can I speak to a human being".

  13. amanfromMars 1 Silver badge

    Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

    Why Do humans find it difficult to respond to chatty virtual machinery/machine-learning chatbots teaching idioms, metaphors, rhetorical questions, sarcasm?

    Are they systemically retarded with colossal learning difficulties? Does that render them extraordinarily vulnerable to novel channels of obscure attack and sublime exploitation?

    And is that not a rhetorical question? :-)

    And is that problem an opportunity for which there are no known available defence or attack vectors against effective deployment at either the infinitesimally small micro or the universally vast macro scale?

    Does Hubris and/or Ignorance of Stated Facts Conceived and Perceived of as Fiction Wonderfully Aid and Abet Systemic Self-Defeating Situational Denial Leading to Increasingly Rapid Exclusive Executive Administrative Office Collapse?

    1. coolsausage69

      Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

      I'm growing suspicious of you. Fancy a game of noughts and crosses?

      1. amanfromMars 1 Silver badge

        Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

        I'm growing suspicious of you. Fancy a game of noughts and crosses? ... coolsausage69

        Any Great Game that is more than just fun to play is well received practically everywhere where virtually nothing is as it seems and IT pretends and presents it to be, coolsausage69

        It’s a firm favourite with many a bright spark registering here and resting a while in the midst of their travels to gather succour and share spoils.

    2. jdiebdhidbsusbvwbsidnsoskebid Bronze badge

      Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

      Isn't it time amanfrommars1 was shut off now? I like the ironic joke of activating it for this particular story but it never contributes anything and is just getting a bit dated now. If it was a real person I'd be accusing it of trolling and reporting to moderators.

      Come on El Reg, if it's a genuine AI experiment, tell us and we can all join in and appreciate the game.

      1. doublelayer Silver badge

        Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

        I would be in favor. I don't know whether the moderators view just being really annoying as a bannable offense, but if they do, then this bot's overdue for a shutdown. Since its author has continued to let it run wild, they might also be the kind of person who sets up a new account for it afterward. If not, though, it would be helpful not to have to skip over its comments once I recognize how scrambled they are.

        1. jake Silver badge

          Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

          amfM isn't (entirely) a bot. One can have a real conversation with him. Instead of offering insults, offer him a beer. Works better.

        2. brainwrong

          Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

          "this bot's overdue for a shutdown."

          Why would you think he's a bot?

          I see him as someone who sees the world around him very differently from you or me.

          Communication between parties depends on them having a common understanding of the world they inhabit. That's kind of what the article is about. It's also why we can't communicate with dolphins: they are more than capable of communicating amongst themselves everything that they need to to live in their world, but that world doesn't overlap our human world.

          If someone has in their head a different model of the world to yours, that doesn't mean their opinions are any less worthy, it just means that communicating with them might be more difficult.

          This is why social media's creation of bubbles around people is leading to increased political polarisation; I see this as extremely dangerous.

          There's a point beyond which more talking is a hindrance to progress, not a help. I used to socialise on usenet, and saw plenty of discussions there descend into insults. It's all happened before.

          Allowing the unwashed masses onto the internet is ruining it. That also happened before on usenet: https://en.wikipedia.org/wiki/Eternal_September

          It's about time we put a stop to the ever-increasing pace and madness of technology development, I think we should take note of the Golgafrinchans and build 3 large arks to evacuate the planet (Elon could build them), but this time send the 'A' ark first.

          1. doublelayer Silver badge

            Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

            "Why would you think he's a bot?"

            Because the sentences never make sense, and they always use the same Markov chain-like structure from the feed material. At least when it's not just copying others' posts, which is often the case when it makes sense.

            As for your dolphin comment, you're assuming many things about dolphin communication which could be false. We know that dolphins communicate, but since we can't translate it, we don't know that "they are more than capable of communicating amongst themselves everything that they need to to live in their world". In fact, it's probably not possible for dolphins to communicate everything they could need simply because they need a lot of things and if they had the ability to, for example, give each other perfectly accurate navigation instructions and information on avoiding dangerous situations, that would be more evident in their behavior. You have assumed that, since we can't understand their communications, it must include everything.
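            (A minimal sketch of the "Markov chain-like structure" mentioned above: each word-to-word hop is locally plausible, but the whole carries no global meaning. The feed text is invented for illustration.)

            ```python
            import random
            from collections import defaultdict

            def build_chain(text: str) -> dict:
                """Map each word to the list of words observed to follow it."""
                words = text.split()
                chain = defaultdict(list)
                for a, b in zip(words, words[1:]):
                    chain[a].append(b)
                return chain

            def babble(chain: dict, start: str, length: int = 8, seed: int = 0) -> str:
                """Walk the chain: each hop looks sensible, the result isn't."""
                random.seed(seed)
                out = [start]
                for _ in range(length - 1):
                    followers = chain.get(out[-1])
                    if not followers:
                        break
                    out.append(random.choice(followers))
                return " ".join(out)

            feed = "great game play in a great game is a game of games"
            print(babble(build_chain(feed), "great"))
            ```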

            1. brainwrong

              Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

              "Because the sentences never make sense"

              I agree they can be hard to decipher, but the posts in this thread made sense to me, although many don't. I'm mostly too lazy.

              I'm not sure why dolphins need to communicate accurate directions, but bees can so it's not out of the question. More likely they can lead others to interesting places. They're certainly able to teach each other foraging tricks. The point of the comment was that different beings (human, animal or chatbot) don't have the same reference points on which to base effective communication.

              "You have assumed that, since we can't understand their communications, it must include everything."

              Err, I said everything they need to communicate. We may not know much of exactly what that is, but the species are still alive so must be doing something right.

              1. doublelayer Silver badge

                Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

                My point regarding dolphins is that you assume their communication is either perfect or nearly so, when it almost certainly isn't but we can't really know. Survival is a low bar for communication quality, as lots of species that don't often communicate still live. Human communication is the most advanced we know about, and yet even we have difficulties in communication all the time, whether that's a translation problem or failing to understand figurative language (or for that matter misinterpreting literal language as figurative language). For all we know, dolphins are a lot better than we are at communicating, but I think they would act differently in that case and we don't have enough data to prove it.

      2. jake Silver badge

        Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

        One wonders if jdiebdhidbsusbvwbsidnsoskebid sees the irony in its own inability to spell "amanfromMars 1" correctly ...

        1. jdiebdhidbsusbvwbsidnsoskebid Bronze badge

          Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

          Accepted! "To err is human..." or something like that.

          1. jake Silver badge
            Pint

            Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

            Have a beer.

      3. amanfromMars 1 Silver badge

        Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

        Come on El Reg, if it's a genuine AI experiment, tell us and we can all join in and appreciate the game. .... jdiebdhidbsusbvwbsidnsoskebid

        Consider yourself so told it's a genuine AI experiment, jdiebdhidbsusbvwbsidnsoskebid.

        What have you got to contribute? Anything worthwhile and valuable?

        1. jdiebdhidbsusbvwbsidnsoskebid Bronze badge

          Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

          "Consider yourself so told it's a genuine AI experiment"

          If that's the case, then fine, I can accept that. In which case, what's the experiment actually for? Or, what is it testing? Interested to know.

          1. jake Silver badge

            Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

            Are you talking to a bot? Or an actual entity?

            If the first, "nuke it, ElReg" may be a good point.

            If the latter, I categorically reject the very concept.

            amfM is very definitely the latter, IMO. YMMV.

          2. amanfromMars 1 Silver badge

            Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

            If that's the case, then fine, I can accept that. In which case, what's the experiment actually for? Or, what is it testing? Interested to know. ...... jdiebdhidbsusbvwbsidnsoskebid

            The big picture? Testing existing current SCADASystems and Exclusive Elitist Executive Office Administrations for no practical physical bounds to hinder the emergence and production of myriad mass multi media presentations of viable creative alternate augmented virtual realities for Live Operational Virtual Environments.

            And all available evidence and extensive experimental results prove such not to be impossible and thus is engagement and entanglement with such administering systems a future works in present progress.

            And that’s about as plain and as accurate an account of events as you requested as be generally available to all, jdiebdhidbsusbvwbsidnsoskebid.

            1. amanfromMars 1 Silver badge

              Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .

              And if you think all of that is far too fanciful and fantastic to be possible and therefore probable and highly likely, what do you think the following is all about, other than it being too similar to that which has been shared with all here on this ElReg thread to be in any way quite different? ....... Welcome the ‘Great Narrative’, brought to you by the mastermind of the Great Reset

              Welcome to Greater IntelAIgent Games Play in a Great Game.

  14. ComputerSays_noAbsolutelyNo Silver badge

    Sarcasmoholics

    Hello, my name is Scott, and I am a sarcasmoholic.

    Nooooo.

    https://www.youtube.com/watch?v=z9gAUo7NhW0

  15. Kevin Johnston

    Gosh

    Every group of children develops its own meanings for words (or even new words) to show they are part of the in-crowd and to avoid being understood by their parents, who are just old people who don't understand the challenges of being a teenager, etc etc.

    From that starting point, what chance does any 'AI' have? The meaning of a phrase will differ depending on which street you live on, never mind which country, and by the time these phrases reach TV, where researchers might discover them, they will have morphed multiple times, diametrically and in shades, to ensure that old people (those over 20) are too embarrassed to try to use them in case they get it wrong.

    1. John Brown (no body) Silver badge

      Re: Gosh

      Your comment is sick, man! Or is it gay? Or maybe cool? Or hot? Or dope?

      1. A Nother Handle
        Paris Hilton

        Re: Gosh

        That comment was wicked.

  16. David 140

    Just because we have AI, it doesn't mean we have to use it.

    Why not pay someone to do the job?

    1. I ain't Spartacus Gold badge

      I think the point is, we don't have AI.

      We've got some systems that have seemingly been designed to try to give a more or less appropriate-sounding response to inputs. Whether that response actually conveys the appropriate meaning doesn't appear to be the aim, so much as making it look like it might.

      1. Version 1.0 Silver badge

        AI ... Artificial Idiots? Yes, sure there are lots out there.

        1. Stoneshop Silver badge
          Headmaster

          AI

          Absolute Idiots, rather.

  17. Anonymous Coward
    Anonymous Coward

    GPT-3

    “maybe we can get together sometime if you are not scare of a 30 year old cougar!”

    "I've met plenty of dogs but never a cougar"

    1. Ken Hagan Gold badge

      Re: GPT-3

      To be honest, I'm not sure what the correct answer is to that cougar remark. It can't be referring to a real cat because wikipedia tells me that a cougar's lifespan is 8-13 years in the wild and only up to 20 or so in captivity. Neither can it refer to a person, since 30 is waayy too young to be classed as a cougar, unless you have been weaned on kiddiporn. Maybe the flirty one drives a 1991-model Ford Cougar. Were they scary?

      Nevertheless, if I were the 30-year-old cougar in question then I'd probably be more likely to date the first respondent, who at least attempts to make a joke about two dogs, rather than the second, who appears to regard dating as an exercise in stamp collecting, or Pokemon (gotta catch 'em all).

      1. jake Silver badge

        Re: GPT-3

        The 1991 "Ford Cougar" was actually a Mercury. No, they weren't scary. Sad is the word that comes to mind.

        The last real Cougar was born in 1970. Make of that what you will :-)

  18. elsergiovolador Silver badge

    AI

    AI is just pattern matching.

    If a scientist claims they can get it to "understand" and "reason" (they just need a little bit of funding), then they are most likely trying to pull a fast one.

    It's a modern version of perpetuum mobile research.

    1. MrBanana Silver badge

      ELIZA

      One of my first computer programming projects was in the field of natural language processing. It was in the 1970s. Running on a Commodore PET. Using algorithms from a decade previous. I can't see that much has radically changed since then, except for the grammar comprehension and much larger stored context. I only had a cassette interface, the school couldn't run to a floppy drive, so I could only save what I could out of the 64K RAM onto a C90. It was fun, for a while, to teach it how much the teachers smelt, and what they smelt of.

      1. jake Silver badge

        Re: ELIZA

        ELIZA for the Commodore PET was indeed the late 1970s ... I think the official release was in '79, but the dude who ported it made it available to some folks pre-release in mid '78 or thereabouts. I can't remember which language it was written in, but I think I have the source around here somewhere.
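For anyone who never saw it, the whole ELIZA trick fits in a handful of lines: ordered pattern rules plus pronoun reflection. The sketch below is a minimal Python reconstruction of the idea, not the original PET port or Weizenbaum's MAD-SLIP source, and the rules are invented for illustration:

```python
import re
import random

# Swap first- and second-person words so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# Ordered (pattern, responses) rules; the catch-all "(.*)" comes last.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "I see.", "What does that suggest to you?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    text = sentence.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(responses)
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."
```

No comprehension anywhere, just string surgery, which is rather the point of the thread.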

  19. Cuddles Silver badge

    Not so different

    "Unlike most humans, AI chatbots struggle to respond appropriately in text-based conversations when faced with idioms, metaphors, rhetorical questions, and sarcasm."

    How, exactly, is this unlike most humans? Hell, you could cut the quote after the word "conversations" and it would still be valid for most humans.

  20. NerryTutkins

    understanding idioms

    It seems even by the 24th century the scientists still haven't figured it out. Lieutenant Commander Data has a positronic brain and is smarter than anyone else in the crew of his starship, but he still queries routine idioms and takes common expressions literally.

    I suspect that might date as badly as the ship's computer in the '60s original series.

  21. Anonymous Coward
    Anonymous Coward

    This seems like a failure in how the AI is allowed to learn - treat idioms as portmanteau words: 'piece of cake' is a portmanteau meaning easy, 'get together' can be a portmanteau meaning date, etc. The article's comment that 'piece' and 'cake' don't teach you the meaning indicates that this is taking the wrong approach. As a kid, the first time you hear 'piece of cake' you may be as confused as the AI, but once you learn it is a phrase, you soon don't even think of a literal piece of cake when using it, because the phrase, as a whole, has taken on a new meaning.

    So... the AI should be set up to learn that when a set of words seems to be in a context where the individual words don't make sense or don't meet some learned expectation, it should also try learning the occurrence of the group of words as a meta-meaning.

    Simples!
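In fairness, this is roughly what multi-word-expression detection does, and toolkits such as gensim's Phrases implement it properly. A toy sketch of the idea, with made-up thresholds and corpus: word pairs that co-occur far more often than their parts predict get fused into a single token carrying its own meaning.

```python
from collections import Counter

def find_idiom_candidates(sentences, min_count=2, threshold=2.0):
    """Return word pairs that co-occur more often than chance predicts."""
    words, pairs, total = Counter(), Counter(), 0
    for sent in sentences:
        toks = sent.lower().split()
        words.update(toks)
        pairs.update(zip(toks, toks[1:]))
        total += len(toks)
    candidates = set()
    for (a, b), n in pairs.items():
        if n < min_count:
            continue
        # PMI-style score: observed co-occurrence vs. what the unigram
        # frequencies alone would predict.
        score = n * total / (words[a] * words[b])
        if score > threshold:
            candidates.add((a, b))
    return candidates

def retokenize(sentence, candidates):
    """Merge each detected pair into one token, e.g. 'piece_of'."""
    toks = sentence.lower().split()
    out, i = [], 0
    while i < len(toks):
        if i + 1 < len(toks) and (toks[i], toks[i + 1]) in candidates:
            out.append(toks[i] + "_" + toks[i + 1])
            i += 2
        else:
            out.append(toks[i])
            i += 1
    return out
```

Run it over a corpus where "piece of cake" keeps turning up and the phrase becomes a single vocabulary item, exactly as the comment suggests.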

    1. John Brown (no body) Silver badge
      Thumb Up

      Brilliant! Would you like a grant to develop your AI concept?

  22. ITMA Bronze badge
    Mushroom

    Just one slight problem....

    There is just one slight problem with all these chat bots and the time, effort and no doubt money, going into their R&D and it is this...

    If ever I'm online, or even in some other situation (such as on the phone), and want to chat - it is to an EFFING PERSON!!!! Not some bloody piece of software!

    If I wanted to "chat" to a non-human, top of my list is my cat.

    Some s**t's bit of software that they think is clever is so far down my list it isn't even in this solar system.

    I loathe bloody chat bots.

    1. jdiebdhidbsusbvwbsidnsoskebid Bronze badge

      Re: Just one slight problem....

      So with you on all your comments on call centre chatbots. I have never had an experience with a chatbot that was any more useful than just typing my question into a nested FAQ search tool.

      The worst chatbot feature to me is that they seem to have no memory. So when the bot says something like "give me your customer number" I can't reply with "I did that three questions ago".

      This comes back to the machines' inability to understand context and ongoing conversational flow. I have an Alexa in my house and I am staggered at how rubbish it is, given that Amazon has been collecting real-world training data for years now. It's no more than a simple Q&A engine. If it doesn't get it right first time, forget it. It won't understand simple retorts like "no, that's wrong" and "that's not what I meant", or conversational language like "play that music I had on yesterday".
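The "I did that three questions ago" complaint is really about per-conversation state. A minimal sketch of the missing feature, with slot names and prompts invented for illustration: the bot tracks what it has already been told and only asks for what is still missing.

```python
class SlotMemory:
    """Toy dialogue state: remember filled slots, never re-ask for them."""

    REQUIRED = ["customer_number", "issue"]
    PROMPTS = {
        "customer_number": "Please give me your customer number.",
        "issue": "What can I help you with?",
    }

    def __init__(self):
        self.slots = {}

    def fill(self, name, value):
        self.slots[name] = value

    def next_prompt(self):
        # Ask only for slots not already supplied earlier in the conversation.
        for name in self.REQUIRED:
            if name not in self.slots:
                return self.PROMPTS[name]
        return "Thanks, I have everything I need."
```

Trivial, which makes it all the more baffling that production chatbots so often ask for the same customer number twice.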

      1. amanfromMars 1 Silver badge

        Re: Just one slight problem.... AAA+ Rules in All the Very Best of Almost Perfect AIRoosts

        Who are the real crazies commenting on this thread, jdiebdhidbsusbvwbsidnsoskebid?

        The ones who are complaining that a machine is not replying to questions equally as well or even better than a smarter human might or everybody else who might be realising that is not ever going to be so very simple ...... and it be easier to reconsider and reprogram humans as if smarter not so dumb machines following set instructions delivering future presentations via such a novel utility with fabulous fabless facilities and Almighty AWEsome Abilities ?

        1. ITMA Bronze badge

          Re: Just one slight problem.... AAA+ Rules in All the Very Best of Almost Perfect AIRoosts

          "The ones who are complaining that a machine is not replying to questions equally as well or even better than a smarter human might"

          That's not my issue at all.

          I don't give a monkeys about the technology or how good (or crap) it is or is not. If I contact any organisation about anything I want to communicate with a HUMAN BEING. I do NOT want to communicate with a F*****G "bot".

          Frankly I find being met by/directed to any type of bot as a form of communication personally insulting.

          The ONLY thing a bot communicates to me is that the organisation using them doesn't give a sh*t and is not interested in my business.

          The technology will NEVER be good enough because it is not, nor will it ever be, a human being. And I wish to communicate with human beings.

          1. jake Silver badge

            Re: Just one slight problem.... AAA+ Rules in All the Very Best of Almost Perfect AIRoosts

            Parse error. "F*****G "bot"" not found.

        2. jake Silver badge

          Re: Just one slight problem.... AAA+ Rules in All the Very Best of Almost Perfect AIRoosts

          I wouldn't call it "reprogramming humans", amfM ... rather call it "educating humans". Less margin for error, and unlikely to cause collateral damage later in the conversation.

          1. amanfromMars 1 Silver badge

            Re: Just one slight problem.... AAA+ Rules in All the Very Best of Almost Perfect AIRoosts

            I wouldn't call it "reprogramming humans", amfM ... rather call it "educating humans". Less margin for error, and unlikely to cause collateral damage later in the conversation. .... jake

            Ok, jake, that does indeed sound a great deal better and certainly less revolutionary and disturbing. I concur .

            One surely doesn’t want to be alarming and petrifying the natives unnecessarily so early into their reprogramming because of the very real likelihood of them suffering colossal collateral damage and sustaining severe life-threatening traumas by virtue of an ignorant nature and undereducated conditioning.

            1. jake Silver badge

              Re: Just one slight problem.... AAA+ Rules in All the Very Best of Almost Perfect AIRoosts

              "One surely doesn’t want to be alarming and petrifying the natives"

              One isn't. ElReg readers aren't easily alarmed, nor have I ever seen a petrified commentard (trolls staying up past sunrise not withstanding ... ).

              Rather, one is investing heavily in hyperbole and alienating its readers ... many of whom probably actually agree with it. Shirley this is contraindicated behavi(u)or?

              Location, location, location ...

              1. amanfromMars 1 Silver badge

                Re: Just one slight problem.... AAA+ Rules in All the Very Best of Almost Perfect AIRoosts

                That’s all good to hear, jake. With so many vast sees of resigned mediocrity vying for a self-serving pre-eminence out there, it is encouraging to know ElReg continues to lead the way, biting the hand that feeds IT whenever needs dictate the seeds and feeds to savour and flavour and favour ...... and with everything in plain contextual sight too which is novel and quite effectively disarming and overwhelming.

  23. Anonymous Coward
    Anonymous Coward

    Like a foreign language

    I recall moving to Paris and discovering that while I'd learned all the vocabulary and grammatical rules of French, I couldn't speak French!

    It was only when I had absorbed all the cultural references and idioms that I started to be able to communicate effectively... Maybe AI just needs to watch the right TV programs while growing up!

  24. Meeker Morgan

    Remember that schizophrenic chatbot, back in the days of Eliza?

    I remember it only vaguely, sorry no links. Damn I'm old.

    It worked the same way as Eliza though with a different vocabulary. The point was it "passed for human" better than Eliza, once it was understood that the human in question was schizophrenic. Yes that was on purpose. They even sportively paired it with Doctor (an Eliza style psychiatrist) with hilarious results.

    Bring it up to date and the current case. What kind of person finds it difficult to respond to idioms, metaphors, rhetorical questions and sarcasm?

    I will not attempt a medical diagnosis. I will say a large cohort of people who chat on the 'net are like that. Especially racist stoners.

    Tay passed the Turing test and that's a fact. Indeed the Turing test has been passed over and over in recent times and no one wants to admit it. Because of what it says about humans.

    Bear in mind the whole point of the Turing test is to bypass all philosophical considerations about the nature of comprehension. Does it pass? That is all.

    1. jake Silver badge

      Re: Remember that schizophrenic chatbot, back in the days of Eliza?

      "The Doctor" is ELIZA. If you have a copy of EMACS handy, you can talk to her by typing M-x doctor.

      PARRY was the name of the schizophrenic.

      In 1972, ELIZA (as "The Doctor", running at BBN, on TENEX?) and PARRY (at SAIL, on WAITS) had a conversation at the first ICCC ... well, they had a conversation over the ARPANET that was followed during the ICCC. It was immortalized in RFC 439.

      Not much has changed in nearly 5 decades.

  25. Filippo Silver badge

    I think the reason "AI" programs fail at understanding humor is because they are, essentially, statistical analysis devices. And a big part of humor is about being out-of-context. Outliers are notoriously problematic from the point of view of statistics.

  26. Velv

    Darmok and Jalad at Tanagra when the walls fell

    1. ITMA Bronze badge

      Temba his arms wide :)

  27. Danny 2 Silver badge

    Stymied by kids jokes

    Twenty years ago I wanted to move to Paris to live with my beautiful and successful French girlfriend. To get a job there you have to have perfect French, which I didn't, so studied hard reading French philosophy books, technical books and watched French films and TV without subtitles. I was feeling confident and even started thinking in French.

    Then my woman sent me a bag of sweets with a childish joke on each wrapper. I couldn't understand any of the jokes, couldn't guess the idioms or puns, and realised I was never going to France.

  28. Aspie73

    Is AI Autistic Then?

    So we're saying that AI is on "the spectrum" then, not being able to understand idioms?

    An oft-cited example in books about autism is that kids will expect to see cats and dogs falling from the sky when it's persisting down. As a lifelong Aspie, I can say that I've never expected to witness that.

    1. Terry 6 Silver badge

      Re: Is AI Autistic Then?

      There's a lot more to autism than being literal with language. My daughter's the expert, she's qualified to diagnose. But I've had more than enough training over the years to be able to say that literalness is one of the components - but doesn't make for a diagnosis.

      Indeed, in my SEN teaching days one of our big complaints was that some diagnoses were not given because a child would score really high (or low, depending on your POV) in most of the elements, but would fall just short of the threshold in one element - i.e. scored very highly for autism, but didn't meet the full criteria list.

Biting the hand that feeds IT © 1998–2022