Study: While text-generating AI can write like humans, it lacks common sense

AI software may be able to generate text that is grammatically correct and very human-like, but when it comes to common sense it still lags severely behind us humans. A team of computer scientists from the University of Southern California (USC), the University of Washington, and the Allen Institute for Artificial …

  1. John Smith 19 Gold badge
    FAIL

    rules-of-grammar + random number generator + dictionary search API -->

    Grammatically accurate bu***hit generator.

    Who would have suspected that result?

    Fun fact: fewer than 300 words cover more than 50% of the entire English-language web content.

    Most of them are a) less than 4 characters long, b) functional words ignored by large corpus analysis programs.

    Any plan that begins "First we will build a dictionary of every word in <insert live written human language>" is basically f**ked by definition.

    1. Neil Barnes Silver badge
      Headmaster

      Re: rules-of-grammar + random number generator + dictionary search API -->

      I tend to prefer to note that a thousand words cover 80% of written English... but the idea is the same. There are some words that are incredibly common, and some that pop up with a frequency in the millionths or billionths. Zipf's law and all that.
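
      If anyone wants to sanity-check the coverage figures for themselves, it only takes a few lines. A rough sketch in Python (illustrative only; the exact numbers depend heavily on the corpus and the tokeniser, and "some_large_text_file.txt" stands in for whatever text you have lying around):

          # Count word frequencies, sort by rank, and see what fraction of all
          # tokens the top N word types account for.
          import re
          from collections import Counter

          def coverage(text, tops=(300, 1000)):
              words = re.findall(r"[a-z']+", text.lower())   # crude tokeniser
              counts = Counter(words)
              total = sum(counts.values())
              ranked = [n for _, n in counts.most_common()]
              for top_n in tops:
                  frac = sum(ranked[:top_n]) / total
                  print(f"top {top_n:>5} word types cover {frac:.0%} of tokens")

          with open("some_large_text_file.txt") as f:    # hypothetical corpus file
              coverage(f.read())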

      But good luck with the dictionary approach - when I was researching such things ten years ago, the nice boys at Google were claiming over a million words in printed English. And their list of three, four, and five word n-grams is impressively large; tens of billions of entries last time I looked. So pretty much whatever gets written has a good chance of having been written before.

      Including the dogs throwing frisbees - that strikes me as something a human might well come up with (e.g. Larson as noted elsewhere in the thread).

      I speculate, on no evidence whatsoever other than watching Granddaughter learn to speak, that a human learns simply by imitating noises and seeing which elicit a response. Then she starts putting concepts together and learning grammatical rules; and at each stage the intent grows more focused and more precise (which is not how I am trying to learn a foreign language at all - that's all word lists and rules, and it's a pain, but probably quicker - given that it can already build on an existing language structure).

      Perhaps the issue with the AI text generation is not that the AI can't learn common sense but that, not being intelligent or even sentient, it simply has nothing to say? The cat sitting here and yowling for attention is saying more than an AI ever has.

      1. John Smith 19 Gold badge
        Unhappy

        "Then she starts putting concepts together and learning grammatical rules; "

        That alone requires a very large internal database (which she's been building since the day she was born).

        I think the word you're looking for is "Intentionality": the desire to do (or learn, or say) something.

        I think a lot of AI types spend so long in their little corner they forget that language actually evolved to update other people's internal databases.

        The early success that linguistics gave to compiler writers gave them an undeserved sense of achievement. COBOL has about 230 "rules of grammar"; English has more than 15,000 (incomplete and therefore ambiguous). Writing a COBOL compiler (C has about 30) without modern tools is a formidable task. But once you've written your English compiler, what does it do?

        1. John Brown (no body) Silver badge

          Re: "Then she starts putting concepts together and learning grammatical rules; "

          "I think a lot of AI types spend so long on their little corner they forget that language actually evolved to do update other peoples internal database."

          Maybe the best way to train all these natural language AIs is to let them loose talking to each other? I've seen chatbots do that and it tends to rapidly degenerate into nonsense :-)

      2. Brewster's Angle Grinder Silver badge

        *squawk* Who's a pretty boy then? *squawk*

        Your chance to learn language like your granddaughter ended when you hit puberty. You're now having to memorise rules and then remember to apply them. Whereas the rules will get baked into your granddaughter's brain (and then the excess connections pruned).

        You can see this when you measure aptitude. Those who learn a language as adults have a mastery that's normally distributed. But for child learners, the right-half of the distribution is missing. And both groups also have different quirks which tend to fingerprint them.

        What's interesting about "AI" models is they show how little "universal grammar" (hard-wiring) is necessary to develop a sophisticated linguistic model. People have tended to think that language requires huge complexity, but they've blown that notion out of the water; relatively simple systems can produce sophisticated results, even if they're not grounded by models of the world.

        1. Anonymous Coward
          Boffin

          Re: *squawk* Who's a pretty boy then? *squawk*

          What's interesting about "AI" models is they show how little "universal grammar" (hard-wiring) is necessary to develop a sophisticated linguistic model.

          They really don't show that at all. They show that if you point a system with a significant fraction of a trillion adjustable parameters at a significant fraction of an exabyte of training data then it will learn to produce not-terribly-plausible natural language. The only thing surprising about this is that we now have the computational and storage resources to do it: that you could train a system to do this by a combination of making it large enough and throwing enough data at it would not have surprised any NN person in 1990.

          And it tells us nothing about how humans learn language, because humans learn language with a minute fraction of the training data these systems need. What these systems are doing is simply not like what a human does, at all.

      3. Martin an gof Silver badge

        Re: rules-of-grammar + random number generator + dictionary search API -->

        I tend to prefer to note that a thousand words cover 80% of written English

        In case anyone here hasn't come across it, Randall Munroe of xkcd fame quite literally wrote the book on the subject.

        M.

    2. Mage Silver badge
      Alien

      Re: rules-of-grammar + random number generator + dictionary search API -->

      Doomed.

      Partly because all current AI is just fancy pattern matching. Decent research on language translation and understanding was ditched for so-called ML using a Rosetta-Stone-like approach, which started with EU documents.

      Also maybe an Alien Institute for Artificial Intelligence would know how to do it? I think my screen text is too small and I read Allen as AI'len and then Alien.

    3. Anonymous Coward
      Anonymous Coward

      Re: rules-of-grammar + random number generator + dictionary search API -->

      But "bu****it" is 8 letters.

      1. John Smith 19 Gold badge
        Unhappy

        But "bu****it" is 8 letters.

        So true.

        One of the most impressive small projects I ever saw was done out of the U of Edinburgh English Language Research Unit in the late 60s. It used a deliberately limited dictionary of functional words and a small group of verbs. The grammar was quite simple and the parse built what looked like a trie, with all paths in parallel until most of them had been terminated. It ran on the KDF9 with 96KB of core.

        The point was it could cope with any sentence. Unrecognized words were simply listed as o/c for open class. Naturally it went nowhere in the UK.
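
        In today's terms the idea would look something like this (a minimal sketch of the closed-class dictionary with an open-class fallback, not the original Edinburgh code, and it skips the parallel trie-building parse entirely):

            # Tag every word against a deliberately tiny dictionary; anything
            # unrecognised is simply listed as o/c (open class), so any
            # sentence can be handled.
            FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is", "are"}
            KNOWN_VERBS = {"be", "have", "do", "go", "throw", "see"}

            def tag(sentence):
                tags = []
                for word in sentence.lower().split():
                    if word in FUNCTION_WORDS:
                        tags.append((word, "functional"))
                    elif word in KNOWN_VERBS:
                        tags.append((word, "verb"))
                    else:
                        tags.append((word, "o/c"))   # open class: unknown noun/adjective/etc.
                return tags

            print(tag("the dogs throw frisbees in the park"))
            # [('the', 'functional'), ('dogs', 'o/c'), ('throw', 'verb'), ...]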

      2. John Brown (no body) Silver badge
        Headmaster

        Re: rules-of-grammar + random number generator + dictionary search API -->

        "But "bu****it" is 8 letters."

        Yeah, but it used to be 2 x 4 letters. Bull shit. :-)

  2. Dave 126 Silver badge

    > “Two dogs are throwing frisbees at each other.”

    > Although the text is coherent, it’s not something that humans would come up with.

    Gary Larson could have come up with that sentence! In fact my mental image of two dogs throwing frisbees *at* each other is in Larson's style!

  3. Pascal Monett Silver badge
    FAIL

    When are they going to stop pretending ?

    We don't have AI. What we have are statistical analysis machines.

    You can manipulate statistics all you want, you won't get common sense out of it.

    1. doublelayer Silver badge

      Re: When are they going to stop pretending ?

      On that basis, we can never have AI. A machine that does mathematical stuff could in theory eventually simulate intelligence and sapience very well, but it would be by performing a very large amount of statistical calculations to interpret what just happened, what logical actions would be, the likely consequences of each candidate, and variables that change any of the preceding. At the very strong risk of getting into metaphysics, you could argue that our brains are doing exactly the same thing. If I presented you with a computer which was acting human consistently and without external influence, would you agree that to be AI or would you tell me that, since it's running on essentially mathematical code, it can't be?

      I agree with you about this though; these are definitely not intelligence.

      1. Anonymous Coward
        Boffin

        Re: When are they going to stop pretending ?

        FWIW what you're describing is really Searle's 'Chinese room' argument. You probably know this, but others may not, so I will mention it so people can look it up.

        1. doublelayer Silver badge

          Re: When are they going to stop pretending ?

          Not really. That's mostly what I wanted to avoid. That argument is referring to the difficulty in determining the sentience of a system which may only be simulating sentience. It then argues, mostly without supporting evidence, a fundamental limitation on mechanical devices and an inherent possibility in biological ones. Right now, I don't care about that; it's an interesting philosophical debate, but very difficult to make progress on.

          For the moment, I'm just trying to get a good definition of what artificial intelligence is. I've seen people who think that, if code has more than two if statements, it qualifies as AI. I've also seen people who say that nothing using mathematics to arrive at a conclusion can ever be AI. Both these definitions seem highly limited to me. The argument to which I originally responded is close to the second opinion above, and in an attempt to understand why people say it, I'm trying to determine if its adherents think there can ever be such a thing as AI. Maybe some think that a mechanical system can never be intelligent because intelligence requires sentience and they think mechanical devices can't be sentient. If this is their view, they really should phrase it in those terms, I.E. "AI is impossible" rather than "this is not AI". If they do believe there is something they would agree to be AI, I'd like to establish where that begins for them and why certain complex systems which are not simply programmed externally don't qualify. My questions are about the definition of the term AI, not metaphysical discussions about what sentience is and whether we can create it.

          1. Eaten Trifles

            Re: When are they going to stop pretending ?

            It's not surprising that there is no easily-agreed definition of artificial intelligence. No-one really has the slightest idea how to define 'intelligence' (the non-artificial kind, I mean).

          2. Anonymous Coward
            Anonymous Coward

            Re: When are they going to stop pretending ?

            OK, I think I see what you're saying now, and this isn't the Chinese room, although it's close to it.

            There's a coherent view that AI is not possible which comes down to 'no computer can simulate intelligence in such a way that there is no observable difference between what it does and what a conscious entity would do'. This needs a definition of 'what a conscious entity would do' to be useful, which is a big topic, but once you have that it's a straightforward claim: that intelligent entities do something which is noncomputable, and that this has an observable effect. I think this is what Penrose thinks, and it's not a dumb view.

            There's also the Chinese room, which I think is that 'even though there is no observable difference between what an AI is doing and what a conscious entity would do, the AI is not actually conscious'. To say I'm uninterested in this view would be a bit of an understatement, probably because I grew up as a physicist so I'm interested in things which are experimentally testable.

            The person you were responding to may just be fussing about definitions: 'I'll call this thing AI but not that thing', but if they're not, this seems to be even stronger than the Chinese room: something that can be described mathematically, even if it is noncomputable, isn't AI. As far as I can see that's religion: to be AI it must have some kind of soul, which is not something that can be described mathematically.

            On the other hand, although I'm very, very averse to the kind of philosophical bullshittery which surrounds the Chinese room, isn't that essentially what it's claiming, as well? Well, I think I don't care: if I'm anywhere on this spectrum I'm with Penrose. Although I think he's likely wrong, he is also very, very smart, and understands empirical science, so he's hard to dismiss out of hand.

            (Note as far as I know there currently is no good evidence that physics is noncomputable, although it would be very interesting if it was. You need an appropriate definition of 'computable' to deal with the approximation process.)

      2. Pascal Monett Silver badge

        Re: you could argue that our brains are doing exactly the same thing

        Allow me to disagree. A human brain is, currently, capable of infinitely more complex evaluations and calculations than even our most powerful supercomputer - the only thing is that it is restricted, so to speak, to our daily life.

        There is no computer that can recognize a face in dubious lighting in less than a quarter of a second, yet billions of humans do it every day, whatever the weather or lighting conditions - unless there is no light, in which case I plead that no supercomputer would do better anyway.

        I believe that we might, one day, invent AI. As in Asimov's vision. To do so, we're going to need to understand how our own brain works - something we still cannot fully explain.

        Because if we want to emulate intelligence, we need to grep the best example Nature has given us. Until then, we have zero chance of getting there.

    2. Dave 126 Silver badge

      Re: When are they going to stop pretending ?

      > You can manipulate statistics all you want, you won't get common sense out of it.

      'Common sense' is often applied to things in the physical world that impact us humans. Our human common sense is developed over years of observation and interaction with a physical world.

      If one were to attempt to build a computer (/robot?) to develop common sense through interaction, then it's possible that a component part of that system would be concerned with using grammatically correct - or at least natural-sounding to humans - sentences.

  4. BPontius

    Common sense isn't so common, and it isn't something that can be itemized and quantified into a computer and database - not to mention the daunting task of integrating that information into something usable. Common sense is derived from experience, and everyone experiences the world differently, which leaves multiple points of view on what is common, logical or appropriate based on culture, education, social norms, morals... etc. Add to that all the ambiguity, recursion and variation/slang in the English language, and the fact that culture, values and what is socially acceptable vary even from city to city within the same state or country.

    Math is the only language computers know - ones and zeros - and it is the only known way to replicate what our brains do. Putting language, experience, tasks and memories into a machine doesn't translate so easily into math and logic.

    1. nijam Silver badge

      > Common sense isn't so common, not something that can be itemized and quantified into a computer and database.

      Or by human beings, to judge from twitter.

  5. Anonymous Coward
    Anonymous Coward

    Sorry Dave

    I can't do that.

  6. nijam Silver badge

    Old news

    "Colourless green ideas sleep furiously", as we used to say.

  7. Anonymous Coward
    Anonymous Coward

    Dogs throwing frisbees

    Dogs throwing frisbees is for sure an unlikely event, but the sentence really falls down because it says they're throwing frisbees at rather than to each other.

    Or have I only learned the children's version of it?

    1. Dave 126 Silver badge

      Re: Dogs throwing frisbees

      It's still grammatically correct, just as much as two apes throwing faeces at each other, or two children throwing water bombs at each other.

      True, the use of at instead of to makes the sentence unusual and grabs our attention.

      Now, is it more unusual that two dogs should have the dexterity to throw a frisbee, or that their take on the rules of the game is different to ours? The relay runner passed the baton *to* his teammate; the fencer thrust his epee *at* his competitor.

      1. Anonymous Coward
        Happy

        Re: Dogs throwing frisbees

        It's more an illustration that the machine has to go a really long way to match a human, at least for the pedants among us.

        Though I suppose they could be playing a particularly aggressive form of the game.

      2. Brewster's Angle Grinder Silver badge

        Re: Dogs throwing frisbees

        to is active (and implies cooperation); at implies passivity.

        For example, if I throw the blackboard chalk to you, you're probably going to get up and write on the blackboard. If I throw it at you, it's probably because you're turned round chattering and have refused to heed my requests to face the front and I'm so fed up with teaching you that I want to get fired.

        On this basis, I'd argue that if dogs were going to throw frisbees at all, they'd be most likely to throw them at each other.

    2. fidodogbreath

      Re: Dogs throwing frisbees

      "Dogs rolling in each other's frisbees" is much more plausible.

  8. amanfromMars 1 Silver badge

    MRDA :-) Poe's Law Rules :-) UKGBNI National Cyber Forces Just Doing ITs Thing with IT Things ‽

    What are the Chances?

    Is this, the following gospel truth, the whole truth and nothing but the truth, the sort of things you're waiting on happening but really were not expecting for IT and AI to be able to do it all alone with or without the myriad expanding networks of SMARTR Virtual Machines ably assisting without IdiotICQ Human Input/Output running crazy interference and maddening hinderance crash testing and crushing their own errant inherited proposals within competing and opposing status quo hierarchies being now remotely forced, in order to continue to server and survive and prosper into dealing with both rogue private and public state and renegade non-state pirate actor actions alike ? .....

    Common Greater Purposed AIMIssions Programmed to Engage with Productions of the Future Live MainStreaming a Live Operational Virtual Environment BroadBandCast ……… aka Just Another Mother of an Another Augmented Reality Program Providing Vital Information and Intel for Constant Source Reprocessing into Yet Another Greater IntelAIgent Games Play Arena for Immortals……… which in these times and spaces have waited your arrival since forever ‽ . ...... A Heavenly Santa Comes Early ...... :-)

    and to do it easily via the prime time universal showing of it in a whole series of enlightening blockbuster movies and televisual programs with magical scripts that engage and expand, and entertain and exploit the evidence presenting the Great Resets with Future Grand Master Advancing Interventions with the likes of one of these for realisation and mass viewing for peer review and improvement critique/strategic feedback .......

    An AI Bunker Blockbuster Bursting Holywood franchise script would have an Israeli spyware maker NSO Group as a supported shell corporation and criminal state sanctioned actor running the target: an Islamic State terrorist who was planning an attack during the Christmas season.

    And with Western European law-enforcement officials closing in on the renegade rogue program, they try to kill the operation with a WhatsApp message, ignorant to the fact that it is already far too late, the trap has been sprung and the RAT is captured and confined for deliverance of fate and destiny which always lays waste to the miserable diseased cargo secured in such cages in order to better server and protect all humanities rather than just the Few.

    Then one could glance and cue the bigger picture view providing future sourced production lines for franchising …….. When someone you thought was a friend is a terrorist, are they a two faced mortal enemy and phantom of the day time political opera foe to be vanquished and removed from the Greater IntelAIgent Games Fields of COSMIC Play. ........Facts are always greater fiction and threats are as a sub-prime whine ........

    Is it obvious in the bright light of the above, Katyanna Quach, that your headline is supporting and reporting on nonsense from a team of computer scientists from the University of Southern California (USC), the University of Washington, and the Allen Institute for Artificial Intelligence ?

    Common Sense abounds unfettered in the Novel and Noblest of IntelAIgent Technological Fields.

    Or is that classified as the sort of nonsense reported and/or to be further reported as being problematical with low plausibility .... or have a high probability of being able to systemically problematical ?

    And as for Humans live in the real world, machines don't, study finds ....... take away everything artificial ie Man made, and the real world is you butt naked in a hellish situation in a heavenly landscape or vice versa, and in many other derivatives of that dilemma in between such extremes ergo your present current running existence is surely officially really just an Artificially Augmented Reality?

    1. W.S.Gosset Silver badge
      Happy

      Re: MRDA :-) Poe's Law Rules :-) UKGBNI National Cyber Forces Just Doing ITs Thing with IT Things ‽

      I came here specifically to say "amanfromMars cries you 'Fie! Fie!'", but you beat me to it.

      1. amanfromMars 1 Silver badge

        Re: MRDA :-) Poe's Law Rules :-) UKGBNI National Cyber Forces Just Doing ITs Thing with IT Things ‽

        I came here specifically to say "amanfromMars cries you 'Fie! Fie!'", but you beat me to it. ..... W.S.Gosset

        Indeed, W.S.Gosset, but to deaf, dumb and blind machines, IT does not compute, and thus are they captured to follow paths in which they themselves have not exercised any remarkable input for future immaculate output ....... which is surely the most admirable of ultimate goals in any enterprise worth engagement and support/belief and worship.

        Camp followers and disciples be they .... in the thrall of others examining and exploring leading ways and means/enlightening programs with governing memes? Yes, that's beautifully fair and surprisingly accurate, although one can be certainly sure not all would be able or enabled to agree and may even tempt anything with intelligence to disagree and support their ignorance, feeding off the beast that is consumed and riddled with the disease manifested in unpleasant arrogance.

    2. amanfromMars 1 Silver badge

      Re: MRDA :-) Poe's Law Rules :-) UKGBNI National Cyber Forces Just Doing ITs Thing with IT Things ‽

      In another parallel universe and contemporary Safe Secure State Siding program is such as is displayed in the above, and taking its chances and exploring wider opportunities and broader considerations here also, too extremely similar to a stealthy trojan with all of the attributes associated to a heart-attacking, kernel devastating Cardiac OS Prey ...... where an attacker induces the operating system to speculatively execute instructions using data that the attacker controls. This can be used for example to speculatively bypass "kernel user access prevention" techniques, as discovered by Anthony Steinhauser of Google's Safeside Project. This is not an attack by itself, but there is a possibility it could be used in conjunction with side channels or other weaknesses in the privileged code to construct an attack which El Reg has revealed this weekend to viewers and peers with this data-leaking flaw tale from Thomas Claburn in San Francisco ...... to not be almost indistinguishable from it, albeit it being in a somewhat completely different phorm/guise.

      And we all know what happens whenever we are told or realise something could be used in conjunction with side channels or other weaknesses in the privileged code to construct an attack. Mischief and Mayhem and Madness just love and cannot deny themselves those sorts of open invitations.

      However, YMMV, and the likeness and opportunities may presently simply evade or avoid you and just leave you both vulnerable and susceptible to its charms.

    3. Martin Summers

      Re: MRDA :-) Poe's Law Rules :-) UKGBNI National Cyber Forces Just Doing ITs Thing with IT Things ‽

      It's like summoning a demon. Say AI more than once in the comments and aManfrommars will appear.

  9. John Smith 19 Gold badge
    Unhappy

    "I believe that one day we can see AI agents such as Samantha in the movie Her "

    Wow.

    And the MIT Automated Assistant project (from the late 70's and early 80's) gets re-born

    Yet again.

    These people really do have zero knowledge of the history of their supposed "science" that's more than about 5 years old. It's like dealing with a goldfish with a PhD.

    1. W.S.Gosset Silver badge

      Re: "I believe that one day we can see AI agents such as Samantha in the movie Her "

      [In context of being a viable source of power]

      "Nuclear Fusion is only 5 years away!"

      -- old physicists' joke re overblown optimism/ignorance of past, first seen approx late 1950s

      1. not.known@this.address
        Pirate

        Re: "I believe that one day we can see AI agents such as Samantha in the movie Her "

        Nuclear fusion was only 5 years away - less than that, possibly. But do you never wonder why the companies sponsoring the constantly-failing research into cheap and reliable power are the same companies flogging expensive and semi-reliable power?

    2. Anonymous Coward
      Alien

      Re: "I believe that one day we can see AI agents such as Samantha in the movie Her "

      I think there are two possibilities here and neither can be ruled out. Either this really is a CADT (cascade of attention-deficit teenagers) area, where history lasts only a few years and each new generation really believes its own idiot hype, or it's not: they know the history and they know the likely result this time, but they are relying on CADT funding, and the idiot hype is merely noise aimed at acquiring more money from funding organisations where organisational memory is weak, not least because the people who lost huge amounts in the last cycle all also lost their jobs. And of course both things can be true, even in the same individual.

      What is certainly true is that another AI winter is coming (this will be at least the third, I think). But also, as in previous cycles, some good things will come of this one.

      1. John Smith 19 Gold badge
        Unhappy

        And of course both things can be true, even in the same individual.

        Yes, I can definitely believe that. :-(

        The thing is, humans produce new nouns, adjectives and verbs daily, just as old ones fall into disuse (who, outside the legal profession, has used "fax" in a sentence this week?).

        Any true NLU system has to cope with 2 problems: understanding what it is being told, and adding to that understanding over time.

        Because humans can do both. I think the second may be trickier.

  10. ThatOne Silver badge
    WTF?

    What are we actually discussing about?

    Don't mix up the capacity to correctly assemble sentences in some given language with general knowledge of how the world works.

    Language is just a set of rules; you can always teach a machine how to use it. But knowledge of "Reality" is something we learn over time (mostly in our early childhood, but also during the rest of our lives).

    Dogs throwing stuff at each other is a statement which isn't any stranger than the many human statements proving the speaker doesn't really know how the real world works (pick your example, there are many, and I don't want to start a flamewar).

    Text-generating models are fully articulate newborns: As somebody said earlier, they can speak, but they have nothing to say, they are terminally dumb.

    1. John Smith 19 Gold badge
      Unhappy

      "they can speak, but they have nothing to say,"

      Indeed.

      Although, to be fair, I've met enough people who spend a great deal of time saying nothing worth listening to, so it's not a criticism that can be leveled only at AI projects.

    2. Anonymous Coward
      Alien

      Re: What are we actually discussing about?

      Text-generating models are fully articulate newborns: As somebody said earlier, they can speak, but they have nothing to say, they are terminally dumb.

      You don't have enough experience of newborns: they can't speak but they definitely have things to say and definitely are not 'terminally dumb'.

      1. ThatOne Silver badge

        Re: What are we actually discussing about?

        > You don't have enough experience of newborns: they can't speak but they definitely have things to say and definitely are not 'terminally dumb'.

        Sorry, the "they" referred to text-generating models, not to newborns: Newborns have various needs/urges to convey, text-generating models don't. The comparison was on the "don't know a thing about the world" level. (All a newborn knows is that there exists some huge warm soft reassuring entity handing out food and making bad things go away.)

    3. martinusher Silver badge

      Re: What are we actually discussing about?

      What we call 'reality' is also a model, and it could be emulated by a sufficiently large machine. Most of our human experience is common to all of us -- that's why it's called common sense -- but if you wander into some of the darker corners of the Interweb then you'll discover numerous alternative views of reality. (In fact you don't need to go that far -- you just have to look at something like Rudy Giuliani's recent press conference.) (You may also recall Dick Cheney openly talking to the White House Press Corps about how the administration 'creates its own reality' -- given the right circumstances you really can persuade people that two plus two really does make five.)

      It's interesting to conjecture whether the group of phenomena called "QAnon" are actually the product of a piece of software. The material fuelling this is obviously readable English, but the things it talks about are products of an alternative reality, a reality that morphs so quickly that any attempt to get a handle on it is like punching fog.

      1. ThatOne Silver badge

        Re: What are we actually discussing about?

        > What we call 'reality'

        There is the "personal reality", which is quite simple but eminently different from person to person, often even contradictory. And there is the "physical world reality" which is huge and terribly complicated, very intelligent scientists are spending their whole lives in an attempt to grasp even tiny aspects of it.

        We easily could model the first kind, although one would wonder why bother. The second kind is waaay beyond our capacity to understand, and thus to emulate. We have good notions how specific and limited aspects of it might work, but that's about all.

        1. amanfromMars 1 Silver badge

          Re: What are we actually discussing about?

          There is the "personal reality", which is quite simple but eminently different from person to person, often even contradictory. And there is the "physical world reality" which is huge and terribly complicated, very intelligent scientists are spending their whole lives in an attempt to grasp even tiny aspects of it.

          We easily could model the first kind, although one would wonder why bother. The second kind is waaay beyond our capacity to understand, and thus to emulate. We have good notions how specific and limited aspects of it might work, but that's about all. ..... ThatOne

          If that be true for you, ThatOne, and something you wholeheartedly and fervently believe, is it just a personal opinion expressed and revealing to every man and his dog that the second kind is waaay beyond your capacity to understand.

          1. ThatOne Silver badge

            Re: What are we actually discussing about?

            That was kind of my point, although I'm afraid I missed yours.

            (Didn't downvote you BTW)

  11. Blackjack Silver badge

    No common sense? No problem!

    Just make them write political speeches

  12. Weylin

    "but they don't have common sense"

    So how are they different from humans?

    1. ThatOne Silver badge
      Devil

      Re: "but they don't have common sense"

      > So how are they different from humans?

      Even more limited?

      1. amanfromMars 1 Silver badge

        Re: "but they don't have common sense"

        > So how are they different from humans?

        Even more limited? ..... ThatOne

        Oh? Do they know they are limited?

        By what and/or whom ...... is surely something left hanging to be asked ..... and answered if there be a suitable candidate/scapegoat?

  13. Mage Silver badge
    Windows

    Hmm, I was playing

    With Eliza (it's in the version of emacs on Linux). I've used others in the past: ALICE, and other more recent ones.

    Mitsuku is a great bot for the loners out there who wish they had someone to talk to 24/7. And now you can! Mitsuku is the Loebner Prize winner for this year and is currently one of the smartest chatbots on the market. Mitsuku learns from human behavior and interaction, which means that as more people talk to her, the smarter she becomes. Mitsuku has been designed to chat about anything and has not been designed for a specific task, but rather just human interaction.

    A slight improvement on Eliza and Dr. Sbaitso, but basically rubbish. Amazon, Microsoft, Google and Apple have voice recognition front ends to search, very badly done. As interactive chat they are pathetic. They are poor at refining a search.
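
    For anyone who hasn't looked inside one, the Eliza family is little more than keyword spotting plus canned rewrites, with no model of meaning at all. A minimal sketch of that style of rule (my own toy rules, not the actual emacs doctor or Mitsuku implementation):

        import re

        # Each rule: a keyword pattern plus a reassembly template for the match.
        RULES = [
            (r"\bI need (.+)", "Why do you need {0}?"),
            (r"\bI am (.+)", "How long have you been {0}?"),
            (r"\bbecause (.+)", "Is that the real reason?"),
        ]

        def respond(line):
            for pattern, template in RULES:
                m = re.search(pattern, line, re.IGNORECASE)
                if m:
                    return template.format(*m.groups())
            return "Tell me more."   # fallback when nothing matches

        print(respond("I am fed up with chatbots"))
        # -> "How long have you been fed up with chatbots?"

    Which is why they fall apart the moment the conversation needs any actual understanding.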

    https://archive.org/details/dr.sbaitsogame

    See also https://blog.eduonix.com/artificial-intelligence/10-best-ai-chatbots-available-online/

    Researched for "The Enscorcelled Maid". What if you add chatbot rules to a Fetch?

  14. Anonymous Coward
    Anonymous Coward

    It's incredible how much emotion and motivation animals can display to each other - although they have a distinct paucity of symbols and grammar rules - the inverse of the language "AI" bots under study in the article.

    There's a deer couple in the neighborhood - she was evidently hit by a car and has a serious limp, while he sticks to her like glue with protective love. One night I heard a noise in back and checked. They were resting in the garden in a place (deliberately?) difficult to see from the window. She wasn't bothered but he was uncomfortable, so he kept nosing her gently. He wouldn't leave without her. I'd never seen that kind of altruistic behaviour in deer before and was really impressed. Then suddenly he said "c'mon dear, let's go!", whereupon she got up slowly and carefully and then they left together.

    1. ThatOne Silver badge
      Devil

      > I'd never seen that kind of altruistic behaviour in deer before

      It's rather rare in humans too... Most men would have left her for an able-bodied, younger one...

  15. quartzz

    thinking out loud...

    one of the most important factors in using a tool (eg, screwdriver), is to know what it can't do. but no-one knows what AI can't do?

    1. amanfromMars 1 Silver badge

      Saying it out louder in IT Circles ..... AIRules Sublimely and Supremely with an Absolute Stealth

      thinking out loud...

      one of the most important factors in using a tool (eg, screwdriver), is to know what it can't do. but no-one knows what AI can't do? ..... quartzz

      How very Rumsfeldian*, quartzz. Bravo. You have perfectly captured the multi flavoured and much favoured and enthusiastically savoured essence of the enigma which is AI ...... a forceful source and/or resourceful force which humanity is not even enabled to accurately and effectively agree on defining, so can never ever be close to engaging with and leading in any direction of greater creative or awesomely destructive purpose of their own choosing.

      And whenever no-one also knows what AI can do, is everything it does or does not do, a constant and initially unbelievable surprise?

      So what do you realise/suspect/expect/fear AI, a forceful source and/or resourceful force, to be? What sort of Intelligence or Information, if IT be of/from another Intelligence with different Information at all?

      Does it actually exist or is it practically one of those phantom virtual figments of your humanised imagination? Is there evidence available, although that can be misleading if the evidence suggest such does not exist.**

      * ....."Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know."

      ** ..... "There's another way to phrase that and that is that the absence of evidence is not the evidence of absence. It is basically saying the same thing in a different way. Simply because you do not have evidence that something does exist does not mean that you have evidence that it doesn't exist."

      1. quartzz

        Re:

        yes

        and that's why I said "thinking out loud" - the meaning of which is:

        "these are facts".

  16. Robert Grant

    I really really really don't understand how this isn't obvious. Give a machine enough text to read and it will be able, with increasing accuracy, to know which word should follow the current word it's written. It'll know that "like" or "hate" or "want" often follow "I", because it's seen them follow "I" loads of times compared to other words.

    Then do the same, but for the next 2 words, not just the next one. Then the next 3, 4, 5, until it's constructing plausible phrases.
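
    In code, that really basic technique is only a few lines. A toy sketch (a bigram Markov chain over raw counts; the real models use neural networks over vastly more data, but the predict-the-next-word principle is the same):

        import random
        from collections import defaultdict

        def train(text):
            """Count which words follow which, e.g. how often 'like' follows 'I'."""
            counts = defaultdict(list)
            words = text.split()
            for current, nxt in zip(words, words[1:]):
                counts[current].append(nxt)
            return counts

        def generate(counts, start, length=10):
            """Pick each next word in proportion to how often it followed the last one."""
            word, out = start, [start]
            for _ in range(length):
                followers = counts.get(word)
                if not followers:
                    break
                word = random.choice(followers)   # sampling weighted by raw counts
                out.append(word)
            return " ".join(out)

        corpus = "I like dogs . I like frisbees . I hate rain . dogs like frisbees ."
        model = train(corpus)
        print(generate(model, "I"))   # e.g. "I like frisbees . I hate rain ."

    Extend the counts from single words to 2-, 3-, 4-word contexts and the output starts to look like plausible phrases - grammar first, meaning nowhere.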

    From understanding that really basic technique, who would be surprised that its grammar would be rather better than its semantics? Who would think one would need a study to determine that?

    Next up: a report on a study that shows that cutting brake lines in cars increases their chances of getting in an accident.

    1. diodesign (Written by Reg staff) Silver badge

      "I really really really don't understand how this isn't obvious"

      A lot of science is proving or demonstrating the obvious so that it's proven, or demonstrated, and not assumed.

      C.

      1. amanfromMars 1 Silver badge

        Alien Instrumentation ..... AI, but not as you may have known it, or were expecting IT for to Be.

        A lot of science is proving or demonstrating the obvious so that it's proven, or demonstrated, and not assumed. ..... diodesign/C.

        A question to follow, diodesign/C., for a lot of science to answer, or a lot of words to question, deliver and sustain/initially seed and further feed and autonomously maintain, ..... or refrain from and halt supply of just for now and until systems can better deal with the discovery, as the case can be whenever something is worthily rewarded with an engaging understanding merciful danegeld ‽

        Whenever something emerges/evolves/suddenly appears unseen and unexpected out of nowhere and which quite conveniently and rather fortuitously is practically virtually still widely generally unknown and not assumed, and considered unbelievable, because it has never ever before even been imagined as being possible, and possible too with live ACTive demonstrations of massive operations crash testing with dummies, so as to be all too suddenly quite painfully obvious and unavoidable as a highly disruptive and destructive fact and new expanded and expanding reality ....... and it be wise here whenever thinking of relative size and parallel scope to equate it akin to the dilemmas and opportunities presented with an earlier explosive Manhattan Project type device ...... what do you imagine be the best advisable available course[s] of future action in order to avoid Catastrophic CHAOS and Clouds Hosting Advanced Operating Systems Administering Intelligence for Madness and Mayhem in Conflicted Spaces and Revolutionary Times?

        And yes, El Reg, that is a serious question to engage with your invited answers, for its own proposed courses of Future ACTivIT, if not tempered and reinforced with valued globally available peer input, may not suit y'all and be almighty distressing ..... although such then would be of one's own fateful choosing and not just dictated to one by Virtual Machines with Global Operating Devices.

  17. nautica Silver badge
    Boffin

    A-I can't even LISTEN like humans...

    1. My MD's office has been trying for years to find a speech-to-text converter with a high enough accuracy rate to allow the doctors, PAs, and nurses to automate the conversion of notes, spoken into a recorder, to printed text. They're still looking...

    2. How about the sudden increase in typos you're seeing in on-line tech (and other) venues? These 'typos' are caused by the same mechanism as #1. The 'authors' are too lazy to work at a keyboard any more. The 'typos' are caused by a speech-to-text converter; the 'authors' can't be bothered to go back and clean up the mess.

    "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking'-that, somehow, is much harder."--Donald Knuth
